
Question & Answer: Richard Schuldt - A Primer on Polling

Richard Schuldt has been director of the Survey Research Office in the Center for State Policy and Leadership at the University of Illinois at Springfield for almost 20 years.

The office specializes in surveys, both mail-out and telephone, for state and local government agencies, nonprofit organizations and the center. While the substantive focus for most projects relates to public policy, the office at times asks candidate preference questions as part of its periodic “omnibus” statewide surveys. Among Schuldt’s recent projects was a statewide survey on perceptions of political ethics in Illinois.

He has taught at Knox College in Galesburg, Illinois College in Jacksonville and the University of Illinois at Springfield.

This is an edited version of a conversation with Executive Editor Peggy Boyer Long. 

Q. We’re going to see lots of political polls in the coming months. How can we weigh their credibility?

Many kinds of pollsters can do credible polls. There are academic institutions. There are private consultants. And, of course, their clients are the candidates or parties. The media also initiate and sponsor polls. One thing to think about is who did the poll and why are they doing it. Is it for the use of the candidate or party? Is it to come up with information about public opinion, whether it be on an issue or on which candidates are preferred? The source of the poll, the intentions behind it, can determine how you analyze it.

Q. Should we automatically dismiss the candidates’ polls?

I don’t think you automatically do. But I think one characteristic to keep in mind is that there’s probably a reason they get released. Most candidate-sponsored polls are for the use of the campaigns. If they find information to identify supporters or issues, a lot of times they aren’t going to give that information away. And they’re not going to release negative results. 

Q. You could compare polls for candidates.

Exactly. I think most pollsters, while the methods they use can be different, are credible. Pollsters can be creative, though. Part of politics is defining the issues. It’s a totally legitimate way of using polls, to see how different dimensions of issues can influence distribution of public opinion. That is a good way to look at public opinion polls. 

And, don’t just look at an isolated poll that’s taken at a single point in time, but look at the variety of polls that are taken over different periods of time by different kinds of organizations.

Q. How is the media doing in terms of polling?

Overall, election-related horse-race polls have done a good job. Now, I think the higher the level of the office — the more voters know about a particular race — the better the poll does, because opinions are more formed, easier to measure and less likely to change. We still have some complications in estimating how the undecideds are going to break and estimating turnout. And you still have to keep polling until the end because there are classic instances where pollsters quit too early and missed last-minute changes. But, overall, I think they’ve done a good job.

Q. Are there professional standards pollsters should follow?

Yes, yes, yes. When I talk about polling, I’m talking about what’s called scientific polling with valid methods. The first essential is identifying the relevant research population. If we’re talking about an election poll, it’s the voting public. To approximate this, you have your sampling frame. More and more, you can get lists of registered voters, and even how they voted in the previous primary. We can also ask people if they are registered to vote, but we know we get overreports of registration. We know we get overreports on whether people voted in the last election. But, one way or another, you come up with a sampling frame. Then you have to do some kind of probability method for selecting who’s in the sample. Any credible survey organization is going to do that. There’s different ways of doing it, but, in common parlance, you have to do some kind of random sample. 
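A minimal sketch of that last step in Python, assuming a made-up list of registered voters as the sampling frame; every entry has an equal chance of being drawn:

```python
import random

# Hypothetical sampling frame: a list of registered voters.
# Real frames come from registration lists or phone directories.
frame = [f"voter_{i}" for i in range(100_000)]

# Simple random sample without replacement: every voter in the
# frame has an equal probability of ending up in the sample.
sample = random.sample(frame, k=600)

print(len(sample), sample[:3])
```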

Q. What is that, the probability sample?

In the easiest case, we would say everybody has an equal probability of ending up in the sample. There are reasons you might want to depart from that, but let’s just take the easiest case. The main purpose of taking a sample is to make conclusions about that relevant population: all the people who are going to vote. 

Q. So you’ve got the frame and you select a proportion that will represent the whole?

A given number to represent the whole. Herbert Asher, in Polling and the Public: What Every Citizen Should Know, uses two good analogies. When you test your blood, you don’t take all of your blood and test it because you would die. You take a sample. You test that particular part because all of the blood has the same mixture. The other analogy is when you are making soup and you taste it to see if it tastes good. You don’t need all of the soup. You mix it up really well so that any given part of it has a representative taste. In essence, that’s what we’re doing when we’re doing sampling.

You know you aren’t going to get a 100 percent response rate. So you have to choose more than you need. Then you’re going to have to decide how you’re going to get information from these people. The most common method is telephone polls. 

What lists do we go by? One is a telephone directory. Two is a list of registered voters, and that’s excellent if you can get it, except you better have phone numbers or you’re going to have to look them up, which adds cost. And another method is random digit dialing. This gets at people who have unlisted numbers. Overall in Illinois, 70 percent of the households have listed numbers. But that also means 30 percent don’t. It wouldn’t be a problem if the 30 percent was the same as the 70 percent in terms of characteristics. But we know that isn’t the case. There’s a bias in who’s listed and who isn’t. The more you get to urban areas, the greater percent of unlisted numbers you find. In the city of Chicago, it’s estimated that about 50 percent of the households do not have listed numbers. In the suburbs more are listed. And when you get to rural areas even more are listed. 
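Here is a minimal sketch of random digit dialing, assuming a few invented area-code and exchange prefixes; appending random digits reaches listed and unlisted numbers alike:

```python
import random

# Hypothetical area-code/exchange prefixes known to be in service.
known_prefixes = ["217-555", "312-555", "815-555"]

def random_digit_number():
    """Append four random digits to a working prefix, so listed
    and unlisted numbers are equally likely to be generated."""
    prefix = random.choice(known_prefixes)
    return f"{prefix}-{random.randint(0, 9999):04d}"

dial_list = [random_digit_number() for _ in range(10)]
print(dial_list)
```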

It’s also true with response rates in general. The more you get into urban areas, the tougher it is to get people to respond.

Q. You’ve got to allow for that?

That’s right. You’ve got to choose more numbers, or you’ve got to have more callbacks.

Once you get hold of the household, by some random method or some, let’s say, nonbiased method, you have to determine who to talk to in the household because there are biases in who answers the phone. In most households where you have mixed genders, women are still more likely to answer the phone. There’s another reason we get more women than men: there are more all-female households than there are all-male households. But, anyway, we have to watch that. What we do when we call is ask for the person with the next birthday. There are other ways to do it.
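The next-birthday rule is easy to sketch; the household members and birthdays below are invented for the example:

```python
from datetime import date

# Hypothetical household: name -> (birth month, birth day).
household = {"Ana": (3, 14), "Ben": (11, 2), "Cleo": (6, 30)}

def days_until_birthday(month, day, today=None):
    """Days from today until the person's next birthday."""
    today = today or date.today()
    birthday = date(today.year, month, day)
    if birthday < today:
        birthday = date(today.year + 1, month, day)
    return (birthday - today).days

# Interview whoever has the soonest upcoming birthday.
respondent = min(household, key=lambda name: days_until_birthday(*household[name]))
print(respondent)
```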

Then you have to determine for election polls whether that person is registered and how likely they are to vote. There’s different methods, usually proprietary, that pollsters use to determine who’s going to vote and who isn’t. There’s some judgment that comes into play.

Q. How many interviews make the sample credible?

I think a lot of the skepticism comes from the question of how can you make valid conclusions when you only talk to 400, 600, 800 people and you’re trying to make inferences to a voting population of 4 million, let’s say. This actually is the least valid of the criticisms because, if you do the random sampling that we just talked about, if you do that in a valid fashion, the theory of sampling is such that you can talk to 400 voters and, if you validly measure their intentions, you will be accurate within plus or minus 5 percent of the actual results 95 percent of the time.
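That figure can be checked against the standard margin-of-error formula for a proportion. A minimal sketch, using the conventional worst case of a 50-50 split:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a proportion: z * sqrt(p(1-p)/n)."""
    return z * math.sqrt(p * (1 - p) / n)

# 400 respondents gives roughly plus or minus 5 percent.
print(f"{margin_of_error(400):.3f}")  # 0.049
```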

Q. I’m going to ask what the heck that means?

Here’s how the sampling error gets reported: plus or minus 3 percent, plus or minus 4, plus or minus 5. Or they’ll say the poll is accurate within plus or minus 3 percent.

Q. So, if you see something that’s, say, 6 percent, 8 percent accurate?

That’s OK as long as they report it. But you just have to know how to interpret that then. Usually it is 3 to 5 percent because, if it gets beyond that, it has to be a big difference between two candidates to matter. A lot of times you’ve just got to say it’s too close to call. 

Sometimes I see the media emphasizing a small difference when the real story is that there is little difference or none between the candidates at that point in time. I think over the past 20 years the media has become more sophisticated in their interpretation of that. But you still see it going on now and then.

Let’s talk about specific numbers though. All this plus or minus business is called sampling error. To be plus or minus 3 percent, you basically need to talk to about 1,100 voters. To be plus or minus 4 percent, 600 voters. Plus or minus 5 percent, just under 400 voters.
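Inverting the same formula recovers roughly the sample sizes Schuldt cites. A sketch, again assuming the worst-case 50-50 split and a 95 percent confidence level:

```python
import math

def required_n(margin, p=0.5, z=1.96):
    """Sample size needed for a target margin of error:
    n = z^2 * p * (1 - p) / margin^2, rounded up."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

for m in (0.03, 0.04, 0.05):
    print(f"+/-{m:.0%} needs about {required_n(m)} respondents")
# +/-3% needs about 1068, +/-4% about 601, +/-5% about 385
```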

Q. It sounds like it’s what you can afford to sample.

That’s right. To get it down to plus or minus 2 percent, you’ve got to jump up to about 2,200. There is a point of diminishing returns. On the other hand, you don’t want to go too far below 400, plus or minus 5 percent, because that means if 60 percent of your respondents said, “I’m for candidate A,” it could mean, in reality, as low as 55 or as high as 65. And a lot of times we’ll see the survey finds it’s 48 to 45. Well, the 48 could be as low as 43. It could be as much as 53. And, of course, the 45 could be as low as 40, but it could be as high as 50. There’s an overlap there.

The other thing is subsets, groups within the whole. 

Q. Women vs. men?

Exactly. If you’re dealing with 1,100, plus or minus 3 percent overall, and you start talking women vs. men, each is about half. What you actually have for each of those groups is 550.

Q. A different sampling error within the subsets?

There are ways you can test for statistically significant differences. So reports will say there are significant differences.
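One common such test is the two-proportion z-test. A minimal sketch, with the 52 percent vs. 46 percent split invented for illustration and 550 respondents in each subgroup:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two sample proportions,
    using the pooled estimate for the standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 52 percent of 550 women vs. 46 percent of 550 men.
z = two_proportion_z(0.52, 550, 0.46, 550)
print(f"z = {z:.2f}")  # about 1.99; beyond 1.96 is significant at 95 percent
```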

Let me mention one other concept. It’s called “confidence level.” We’re talking about sampling error, but it doesn’t mean that 100 percent of the time we do a poll this way it’s going to be within plus or minus 3 percent. It means that if we did this same poll 100 times, 95 times out of a hundred the result is going to be within plus or minus 3 percent of the true value. Another way of putting it is that over a number of polls, there’s a likelihood that 5 percent of them are going to be outside that range.

If it’s not reported, you can generally assume it’s 95 percent because that is the conventional confidence level. One could decide to do a poll at 90 percent, but you should report that. Now, I should mention that all this about confidence level applies to a single poll. If you have a number of polls taken at the same time, and all — or nearly all — point to the same conclusion, you can have even greater confidence in the results.
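The 95 percent confidence level can also be seen by simulation. A sketch that runs the “same poll” many times against an invented true level of support and counts how often the plus-or-minus-3-percent interval captures it:

```python
import random

TRUE_SUPPORT = 0.52   # invented "actual" share for candidate A
N, MARGIN, TRIALS = 1100, 0.03, 1000

covered = 0
for _ in range(TRIALS):
    # One simulated poll: N independent respondents.
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(N))
    estimate = hits / N
    if abs(estimate - TRUE_SUPPORT) <= MARGIN:
        covered += 1

# Should land near 95 percent, the conventional confidence level.
print(f"{covered / TRIALS:.0%} of polls fell within the margin")
```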

Q. Is it getting more difficult to get people to respond to polls? 

Yes. For probably the first half of the 19 years I’ve been in this business, we had far more completions than refusals. Then all of a sudden our refusals started inching up to our completions. I talked to other people in the business. It’s happened to everybody. And most of it is attributed to all of the telemarketing that has made people sick of answering the phone, mostly for sales. But also some people don’t like pollsters.

The other thing is the proliferation of phone numbers because of fax machines, because of Internet modem connections. There’s just a lot more phone numbers. That means it’s much less efficient to do a telephone poll than it was before. You have to dial a lot more numbers to get the same number of completions.

There’s a debate among pollsters, the academic community and commentators about the meaning of lower response rates. Can you still do a valid poll even though the response rates are much less? The real problem isn’t the lower response rate per se. It’s whether we have a biased rate: are we getting systematic differences in whom we talk to vs. whom we don’t talk to?

Q. That’s under study?

The Pew Research Center several years ago did a typical telephone survey where they got a 42 percent response rate. Then they followed up with the people who didn’t answer the phone, and they got the response rate up to 70 percent. They looked at the difference in the two. There was little substantive difference. Now that made some of us feel better about using our typical six-to-10 call-back method. 

Q. People worry questions could be leading. Is there also concern about the order of question categories?

For the voter preference question, I think most pollsters ask, “If the election were held today ... .” It’s more concrete. Any time you ask people to project you can get into trouble. You probably want to avoid hypotheticals if at all possible.

But then there’s the order of the response alternatives, particularly in races with a long list of candidates. The order is probably most important: Which do you list first? Which do you list second? Which do you list third? One way of doing it is to put it on a computer and randomize the list. I like, particularly in shorter races, to simulate what voters will see on the ballot.
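Once the questionnaire is on a computer, randomizing the list per respondent is simple. A sketch with invented candidate names:

```python
import random

candidates = ["Candidate A", "Candidate B", "Candidate C", "Candidate D"]

def randomized_order():
    """A fresh random reading order for each respondent, so no
    candidate systematically benefits from being read first."""
    order = candidates[:]   # copy; the master list stays intact
    random.shuffle(order)
    return order

print(randomized_order())
```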

Q. How do we factor undecideds?

If you’re using the poll to predict the election, the question is how you allocate the undecideds. Do you ignore them? Well, some of those people are going to vote. Do you assume they’re going to split the same way as the decideds? Some people say in many races you can. But I think the fact that they’re undecided means they’re different. And a lot of the undecideds will break against the incumbent. 
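Two simple allocation rules, sketched side by side; the poll numbers are invented, and the 70-30 anti-incumbent split is only one plausible assumption:

```python
poll = {"incumbent": 48.0, "challenger": 45.0, "undecided": 7.0}
decided = poll["incumbent"] + poll["challenger"]

# Rule 1: split undecideds proportionally to the decideds.
proportional = {c: poll[c] + poll["undecided"] * poll[c] / decided
                for c in ("incumbent", "challenger")}

# Rule 2: break undecideds mostly against the incumbent (70-30 here).
against_incumbent = {"incumbent": poll["incumbent"] + 0.3 * poll["undecided"],
                     "challenger": poll["challenger"] + 0.7 * poll["undecided"]}

print(proportional)       # incumbent ~51.6, challenger ~48.4
print(against_incumbent)  # incumbent 50.1, challenger 49.9
```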

In 1992, the national polls, Gallup among them, were a bit off in the Bill Clinton-George Bush-Ross Perot presidential race because of how they allocated the undecideds at the end. They didn’t give enough to Perot and, as I remember, they gave too many to Clinton. Perot got 18 percent of the vote.

Q. What challenges are pollsters likely to face in the future?

For telephone surveying, here are a couple. One is the increased use of cell phones vs. hard-wired phones. Until recently, the samples we have purchased, the frame that they come from to do random digit dialing, have been noncell-phone exchanges. The more people depend upon cell phones rather than hard-wired phones — if there’s a bias there, we’re missing those people. But there’s a recent FCC ruling that changes that. As I understand it, not only can you keep the same cell phone number when you move from cell phone provider to cell phone provider, but if you are a hard-wired phone user and you want to become a cell phone user there’s an opportunity to keep the same number. That may help us out. But if people have to pay when they answer the phone, that can make them even more irritated with us.

And people are using Internet services for telephone service. So what are those exchanges, and can you reach people the same way? That is a big challenge. And what about Internet surveying, surveying people through e-mail? Internet usage now by percentage of population is probably in the mid-60s. There’s a systematic bias in who uses it and who doesn’t. Particularly, older people and the less educated don’t. That’s a controversial area.

Technology and how people use technology to communicate is going to change surveying in the next 10 years. 

 


Illinois Issues, February 2004
