There has recently been increased interest, some of it critical, in the methods we use to conduct election polls in Hawaii. Our methods are straightforward and based on industry best practices. Our goal is to accurately capture the opinions and intentions of Hawaii’s voters.

For every poll we conduct, we follow a standardized, step-by-step procedure. Even though we may sometimes be surprised by what we find, we are always guided by our data. Because of the nature of political polling, we do have to make some judgment calls along the way. But we fully appreciate that the most important experts on public opinion in Hawaii are you, the public.

[Photo: Voters go to the polls on General Election Day 2012. Anita Hofschneider/Civil Beat]

Bearing that in mind, here is a brief, step-by-step description of our methods:

Step 1: Create an Unbiased Survey

Our most important goal is to understand public opinion in Hawaii without influencing it. We carefully vet every question in each of our surveys for potential sources of bias. And in election surveys, we take the additional step of creating multiple versions of candidate match-up questions, so that different survey takers hear the candidates’ names in different orders.

For instance, half of the respondents to the current poll answered questions about a Senate race between Brian Schatz and Colleen Hanabusa, while the other half answered questions about a race between Colleen Hanabusa and Brian Schatz. This randomization exceeds industry standards for automated polling (that is to say, surveys where the questions are pre-recorded and responses are indicated by pressing keys on your phone).
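As a rough illustration of how this kind of rotation can work, here is a minimal sketch. It is not the firm's actual system, and the question wording and function name are invented; it simply assigns each respondent a random ordering of the names:

```python
import random

def build_matchup_question(candidates):
    """Return a match-up question with the candidate names in random order.

    Giving each respondent a random ordering means that, across a large
    sample, every ordering is heard roughly equally often.
    """
    order = candidates[:]     # copy, so the master list is untouched
    random.shuffle(order)     # uniform random ordering for this respondent
    return "In the primary, would you vote for {} or {}?".format(*order)

# Roughly half of respondents hear Schatz first, half hear Hanabusa first.
print(build_matchup_question(["Brian Schatz", "Colleen Hanabusa"]))
```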

We conduct automated surveys because automation reduces bias. Everyone who responds to a survey hears the exact same recordings of each question. This builds in a high degree of quality control that is not possible in a poll conducted by live callers. There is no chance that people’s responses will be influenced by the tone of an interviewer’s voice, the unintentional mispronunciation of a candidate’s name, or any other factor that might vary from call to call in a live-caller survey. Some research also suggests that people are more willing to express unpopular opinions when they have the sense of privacy that automated polling provides than when they are talking to a live interviewer.

Step 2: Interview the Right People

There are ongoing debates within the polling profession about whether election polls should aim to interview a representative sample of a state’s entire population, or just the subset of the population who are likely to vote. For election polls, we believe the latter makes far more sense and produces much more accurate results.

To that end, we select our samples randomly using publicly available lists of registered voters in Hawaii. We also incorporate public information about how often individuals vote, so that we interview people with a wide range of voting likelihood while still focusing on likely voters.
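A minimal sketch of what likelihood-aware sampling from a voter file can look like is below. The field names and the weighting rule are hypothetical, not the actual procedure; the idea is simply that frequent voters are more likely to be drawn, without infrequent voters being excluded:

```python
import random

# Hypothetical voter-file records; a real file carries many more fields.
voter_file = [
    {"phone": "808-555-0101", "primaries_voted_of_last_3": 3},
    {"phone": "808-555-0102", "primaries_voted_of_last_3": 1},
    {"phone": "808-555-0103", "primaries_voted_of_last_3": 0},
    # ... many thousands of additional records ...
]

def draw_sample(voters, n):
    """Random draw that leans toward frequent primary voters without
    excluding infrequent ones: the selection weight grows with the
    number of recent primaries a person has voted in."""
    weights = [1 + v["primaries_voted_of_last_3"] for v in voters]
    return random.choices(voters, weights=weights, k=n)  # draws with replacement
```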

It is especially important to focus on likely voters when conducting polls for elections with lower voter turnout rates, such as primaries. The best expert judgments about what will happen in any election come from the people who will actually be voting in it!

One major disadvantage of automated polls is that we’re prohibited by law from sending our recorded surveys directly to cell phones. Contacting cell phones is becoming increasingly important for conducting accurate polling, because so many people no longer have landline phones. This is particularly true of younger people.


To remedy this, for the past two years, we have been including cell phones in our surveys. We employ live operators who make initial contact with people on their cell phones (thus complying with the law). They ask each individual whether they would be willing to take our survey. If they say “yes” (and confirm that they’re not driving or doing something else that demands their undivided attention), the operator patches them through to our recorded survey.

This addition of cell phones matters — it allows us to speak to younger voters, and it affects the results of our polls. We do not currently know of any other automated polling firm that supplements its polls by calling cell phones. It adds expense, but it makes us more accurate, and is another way in which we exceed industry standards for automated polling.

Step 3: Determine Who Will Vote in the Primary and Weight Accordingly

In any election, there are clear demographic trends — members of certain groups tend to prefer one candidate over the other. For instance, in the past two presidential elections, younger voters and voters of color were more likely to vote for Barack Obama, while older voters and Caucasians were more likely to support John McCain in 2008 and Mitt Romney in 2012.

In any poll, it is crucial to balance the size of various demographic groups correctly. If your sample is not balanced properly on its own, you can use mathematical “weighting” techniques to balance it after the fact. For example, some polls overestimated the turnout of Caucasians relative to people of color in the 2012 presidential election and consequently predicted that Mitt Romney would win. If those polls had weighted their samples to increase the predicted turnout of voters of color, they would have been more accurate.
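In its simplest one-variable form, a post-stratification weight is just a group’s known share of the electorate divided by its share of the raw sample. A small sketch with invented numbers:

```python
def cell_weights(sample_share, target_share):
    """Weight for each demographic cell: known share of the electorate
    divided by the cell's share of the raw, unweighted sample."""
    return {g: target_share[g] / sample_share[g] for g in target_share}

# Invented numbers: suppose 58 percent of raw respondents are Caucasian,
# but turnout records say Caucasians are 43 percent of the electorate.
sample = {"Caucasian": 0.58, "non-Caucasian": 0.42}
target = {"Caucasian": 0.43, "non-Caucasian": 0.57}
print(cell_weights(sample, target))
# {'Caucasian': 0.74..., 'non-Caucasian': 1.35...}
# Each Caucasian interview then counts as about 0.74 of a response,
# and each non-Caucasian interview as about 1.36.
```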

Making sure we have the right balance is where we must exercise our own judgment more than anywhere else in the process. But fortunately, we can use publicly available data as a guide, so that we make as few subjective decisions as possible.

Based on public records, we know the vital demographics — gender, age, ethnicity, county of residence, etc. — of the people who have actually voted in recent primaries in Hawaii. We also know that these numbers change only slightly every two years with each subsequent primary. So, we have a very good estimate, for example, of what percentage of the primary electorate will be female, older than 50, Caucasian, residents of Oahu, etc.

With this information, we can easily and accurately adjust our results based on past voting patterns to have the right mix of gender, age, ethnicity, etc. What we do not know, however, is which of these voters voted in the Democratic primary, the Republican primary, or another primary — that information is part of every voter’s secret ballot.
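Balancing several demographics at once (gender, age, ethnicity, county) is commonly done with iterative proportional fitting, also called “raking.” The sketch below shows that general technique, not necessarily the exact procedure used for these polls, and the data layout is assumed for illustration:

```python
def rake(respondents, targets, n_iter=25):
    """Iterative proportional fitting ("raking"): repeatedly rescale each
    respondent's weight so the weighted share of every demographic category
    matches its known target share.

    respondents: list of dicts, e.g. {"gender": "F", "county": "Oahu", "weight": 1.0}
    targets: e.g. {"gender": {"F": 0.55, "M": 0.45},
                   "county": {"Oahu": 0.70, "Neighbor Islands": 0.30}}
    """
    for _ in range(n_iter):
        for variable, shares in targets.items():
            total = sum(r["weight"] for r in respondents)
            for category, share in shares.items():
                cell_total = sum(r["weight"] for r in respondents
                                 if r[variable] == category)
                if cell_total > 0:
                    factor = share * total / cell_total
                    for r in respondents:
                        if r[variable] == category:
                            r["weight"] *= factor
    return respondents
```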

Step 4: Determine Who Will Vote in the Democratic Primary

Hawaii, along with many other states, holds “open” primaries in which any registered voter can vote in any party’s primary. For instance, a voter in Hawaii may have been certain in June that he or she would vote in the Democratic primary, but by July or August that voter may have decided to vote in the Republican primary instead.

This presents a challenge to us as pollsters. And there are essentially two ways we can approach this challenge.

The first would be to apply our subjective judgment to create a turnout model that is specific to the Democratic primary. In other words, we know a lot about who will turn out for all of the primaries combined (see Step 3), so we could then apply our knowledge of Hawaii politics to estimate what the demographics of the Democratic primary turnout will be.

We know, for instance, that Republican voters are somewhat more likely to be male and Caucasian, so we can “do the math” and assume that the Democratic primary electorate will be more female and less Caucasian. This is a perfectly valid method, and one that other pollsters may use.


But we use a different method that we believe is preferable and more accurate.

The second option, and the one we use, is to trust the collective wisdom of the voters who are responding to the poll — in other words, we simply ask “will you most likely vote in the Democratic primary, the Republican primary, or neither of those primaries?”

It’s not perfect, but we believe it is better, simpler, and, most importantly, more objective than the alternative described above. Since we trust voters to truthfully and accurately tell us which candidates they will support (and it is the rare respondent indeed who would spend 10 minutes on a call giving false information), we also trust them to tell us which primary they will vote in.

Put into practice in the current poll, we determined that 1,240 of the people we interviewed were likely primary voters (Democratic, Republican, or other). We then weighted the demographics of that sample of 1,240 based on our objective knowledge of turnout in Hawaii’s 2010 and 2012 primaries, with some minor adjustments for the state’s shifting demographic landscape. At this point in the process, the demographics of the sample were fully adjusted.

Then, of those 1,240 who were voting in one of the primaries, 895 (72 percent) told us they were planning to vote in the Democratic primary. We did not further adjust the demographics of those 895. In other words, we applied any and all adjustments to the full set of 1,240 primary voters, rather than to the subset of 895 Democratic primary voters.
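In code terms, the order of operations is the key point: weights are fixed on the full set of likely primary voters, and the Democratic subset simply inherits them. A minimal sketch, with hypothetical field names:

```python
def democratic_results(primary_voters, candidates):
    """Weighted candidate shares among respondents who say they will vote
    in the Democratic primary. The weights were set on the FULL primary
    sample in the previous step and are NOT recomputed for this subset."""
    dems = [r for r in primary_voters if r["primary"] == "Democratic"]
    total = sum(r["weight"] for r in dems)
    return {c: sum(r["weight"] for r in dems if r["choice"] == c) / total
            for c in candidates}
```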

Is There a Potential Downside?

Occasionally, the method we use produces big changes in the demographics of the likely Democratic electorate from one poll to the next. This doesn’t happen often, but it is what happened between the poll we released in May and the polls we released in June and now: in May, a much smaller proportion of the projected Democratic primary electorate was Caucasian (28 percent) than in June (41 percent) or currently (43 percent).

The reason for this is simply that in May, a smaller percentage of likely Caucasian primary voters said they were planning to vote in the Democratic primary. Since June, Caucasians have shifted their stated intent toward the Democratic primary.

We take these numbers at face value: we believe that in May, more Caucasians intended to vote as Republicans, and that as the political season has heated up and the major races in the Democratic Party have taken shape, Caucasian interest has shifted somewhat toward the Democratic primary.

On the other hand, it is also possible that this large jump from May to June and July reflects some of the random nature of the polling enterprise: the May poll may simply have been somewhat off in its ethnic composition, even though its overall results were largely the same. But ultimately, this shift in Caucasian interest in the Democratic primary is not particularly surprising, although it might be a little bigger than we would have predicted.

We appreciate that these numbers, the demographics of the Democratic primary electorate, matter a great deal to the outcomes of our polls. We have been polling in Hawaii in close collaboration with our colleagues at Civil Beat for more than four years. We understand very clearly that ethnicity plays an important role in determining whom some residents of Hawaii vote for, and we understand just as clearly that a mainland approach to ethnicity is not adequate or appropriate for polling in Hawaii. For example, we have never lumped all “Asians” together into one ethnic category, as one might find in a mainland poll. We know, for instance, that Americans of Japanese Ancestry (AJAs) and Americans of Chinese ancestry tend to have very different voting patterns in Hawaii.


Ethnicity is certainly not the only determinant in an election in Hawaii, and usually not even the main one. But it does have an influence. Because of this, the higher the percentage of Caucasians or AJAs in a poll, the better a Caucasian or AJA candidate, respectively, is likely to fare in the poll’s results. That is why we believe it is best to tinker with the results as little as possible. Rather than relying heavily on our own subjective judgment, we prefer to let the voters be the experts, through both their responses to our poll questions and records of past voting history.

So What Happened in 2012?

As longtime members of the Civil Beat community will recall, our 2012 polling consistently underestimated Mazie Hirono’s vote totals in her Democratic primary matchup against Ed Case. But in those same polls, we accurately detected and reported the meteoric rise of Tulsi Gabbard against Mufi Hannemann and others in the Democratic primary in the 2nd Congressional District.

Looking closely at those polls, it quickly became apparent that the results had systematically underestimated Senator Hirono’s standing among AJA voters. Our polls frequently found her receiving only about half of the AJA vote, which we concluded was at odds with our knowledge of Hawaii’s political landscape, not to mention common sense. And given that the Senate primary pitted an AJA candidate against a Caucasian, this mattered a great deal to the polling outcomes (in contrast to the CD 2 race, where voter ethnicity played very little role).

To figure out how this happened, and guard against it happening again, we dedicated a portion of each of our Civil Beat polls in 2013 and early 2014 to testing a number of hypotheses. In fact, if you have responded to any of our polls during the past two years, you may have been asked to answer a few questions about how you voted in 2012. Those questions were included to help us resolve the issues we encountered in 2012.

One theory we tested was that some residents of Hawaii might be less likely to respond to a poll or endorse an AJA candidate because our voiceover specialist has a recognizably mainland, Caucasian accent. To test this, we conducted simultaneous polls that were identical, except that in one, the questions were read by our mainland voiceover specialist, and in the other, the questions were read by a local-sounding resident of Asian ancestry.


We tried this test twice, and found that the accent in which the questions were read made no systematic difference in the results. Indeed, we were surprised to discover that the only difference was that people were 25 percent more likely to take the survey, rather than hang up, when the questions were delivered by our “mainland voice” than by our “local voice.”
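One standard way to judge whether a gap like that 25 percent difference is systematic rather than chance is a two-proportion z-test. This sketch uses invented counts, not the poll’s actual figures:

```python
from math import sqrt

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z-statistic for the difference between two response rates; values
    beyond roughly +/-1.96 are unlikely to arise from chance alone."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented counts: 25% completion for the "mainland voice" version
# versus 20% for the "local voice" version of the same questionnaire.
print(round(two_proportion_z(500, 2000, 400, 2000), 2))  # about 3.79
```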

But another hypothesis we tested seems to have provided our answer. The simplest explanation for why we were getting unusual results, particularly among AJA respondents, was that we simply were not calling a representative sample of AJA voters in the state (see Step 2).

To test this, we again ran simultaneous polls. But this time, we tested the public voter list (that is to say, the list of voter phone numbers) that we had used in 2012 against a voter list we obtained from a national company that specializes in providing voter information to campaigns and pollsters.

The difference in the results jumped off the page. When we used our customary list from 2012 and asked respondents whom they had voted for in 2012, the results were right in line with our 2012 polls: they continued to underestimate Mazie Hirono’s vote totals.

But the same poll conducted using the new list of voters produced results that were very close to the actual 2012 results — it reproduced very accurately the results of both Hirono’s primary victory over Ed Case and her General Election victory over Linda Lingle. This does not necessarily mean that our original list of voters was inaccurate. But it did indicate that the new list performed better for us, using our particular methods.
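One simple way to score such a list test is to compare each list’s recalled vote shares against the certified results. The figures below are invented for illustration only, not the actual poll or election numbers:

```python
def mean_absolute_error(recalled, certified):
    """Average absolute gap between recalled vote shares and the certified
    results; a smaller value means the list reproduces the known election
    more faithfully."""
    return sum(abs(recalled[c] - certified[c]) for c in certified) / len(certified)

# Invented shares for illustration only.
certified = {"Hirono": 0.58, "Case": 0.40}
old_list  = {"Hirono": 0.50, "Case": 0.48}
new_list  = {"Hirono": 0.57, "Case": 0.41}
print(mean_absolute_error(old_list, certified))  # 0.08 -> old list misses badly
print(mean_absolute_error(new_list, certified))  # 0.01 -> new list is close
```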

It’s always important to take the answers to retrospective voting questions with a grain of salt. But these results using the new list were so accurate that, after running the test three times just to make sure, we decided to switch permanently to the new list. The new list is updated regularly, with a fresh sample drawn for each poll, and should make Step 2 of our process (and all the steps that follow) as accurate as possible in 2014.

It is important to remember that any poll is just a snapshot in time of people’s opinion. In any election, and particularly in a low-turnout, open primary, what voters think they will do days or weeks before primary day may not match what happens when that day arrives. But we are confident in our methodology, and we’ll all find out one way or the other on Saturday night.

About the Authors