By Shannon Sigafoos

Trent Gaugler, associate professor of mathematics, teaches a Politics & Polling FYS course

How are polls conducted? Do polls measure public opinion, or do they influence it? Are some groups of people “more important” to survey? These are all questions Trent Gaugler, associate professor of mathematics, is covering in his First-Year Seminar (FYS) course, “Politics and Polling.” Here, he also covers why the 2020 polls are different from 2016—and why Nate Silver is the statistician to watch.

Q: Nate Silver (founder and editor in chief of FiveThirtyEight and author of The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t) says the problem with the polls, in both 2016 and 2020, is not the polls themselves but how the press interprets them. What do you make of that?

A: To my mind, some things in the polls were not done well. It’s always true that if your data isn’t representative, then when you try to extrapolate from it, you’re going to get the wrong idea. If I want to know the average height of a Lafayette student and I go down to the basketball court during practice, and I say that Lafayette guys are about 6 feet, 5 inches on average, that’s incorrect, right? That’s what we did in the polls. By and large, we didn’t look at the right group. We looked at a bunch of people who were supporting Hillary Clinton, extrapolated from that, and said, ‘She’s going to win in a landslide.’ We were excluding a big chunk of the population. When you miss a big chunk of the population and they’re not in your sample, then your sample is not going to tell you the whole story.
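
The height analogy maps directly onto sampling bias, and a few lines of Python make it concrete. This is a minimal sketch with invented numbers (the 69- and 77-inch averages and the group sizes are assumptions for illustration, not Lafayette data): polling only at the gym badly overestimates the campus average, while a random sample lands near the truth.

```python
# A minimal sketch of the height example (all numbers are invented):
# estimating an average from a non-representative sample goes wrong.
import random

random.seed(0)

# Hypothetical campus: ~2,000 students averaging 69 in., 15 players averaging 77 in.
students = [random.gauss(69, 3) for _ in range(2000)]
players = [random.gauss(77, 2) for _ in range(15)]
campus = students + players

def average(xs):
    return sum(xs) / len(xs)

print(f"true campus average:  {average(campus):.1f} in.")
print(f"gym-only 'poll':      {average(random.sample(players, 10)):.1f} in.")
print(f"random-sample 'poll': {average(random.sample(campus, 10)):.1f} in.")
```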

There is also something to be said for the overstatement of confidence in some polls. I’ve long told students that we humans, as a rule, don’t do well with probabilistic thinking. When we’re trying to interpret probabilities, chance, or risk, we’re not good at it by default. Near the end of 2016, most of the polls were putting Hillary at a 99 percent chance to win. And Silver, at one point, had Trump around 25 or 30 percent—so he was much more optimistic on Trump than anyone else. After that election, I asked people in class, ‘If I flipped a coin twice, and it came up heads twice in a row, would you be shocked?’ Nobody said they would be. If Silver gave Trump a 25 percent chance of winning, we shouldn’t have been shocked when it happened. It wasn’t a totally ‘out of the blue’ outcome, even though so many pollsters had led us to believe it would be.
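
The coin-flip comparison rests on simple arithmetic: two independent fair flips both land heads with probability 0.5 × 0.5 = 0.25, the same ballpark as Silver’s chance for Trump. A quick simulation (a sketch only, unrelated to Silver’s actual model) shows how routinely a 25 percent event occurs:

```python
# Two independent fair flips come up heads-heads about a quarter of the time.
import random

random.seed(0)
trials = 100_000
two_heads = sum(
    random.random() < 0.5 and random.random() < 0.5 for _ in range(trials)
)
print(f"two heads in a row: {two_heads / trials:.3f}")  # prints roughly 0.250
```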

Q: How does a person get included in a polling group, and what makes a good sample size?

A: When you think about the probability of inclusion, think of it this way: most polls are based on about 1,000 people. If there are about 1,000 people in a poll and there are millions of registered voters, the chance of you being included, if pollsters really are just randomly sampling from that group, is nearly zero. So, most people will probably never be polled. There’s a lot more that goes into it. If you read polls at all, you’ll probably notice that the vast majority report a margin of error right around 3 percent. A quick way to think about this margin of error is one divided by the square root of the sample size. The square root of 1,000 is about 30, and inverting 30 gives us about 0.03—or 3 percent. People usually aim for a sample size of 1,000 because they know they’re going to get right around a 3 percent margin of error.
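
That back-of-the-envelope rule is easy to check in code. Here is a minimal sketch of the 1/√n shortcut (for context, the exact 95 percent formula for a proportion is 1.96·√(p(1−p)/n), which at p = 0.5 reduces to roughly 0.98/√n, hence the shortcut):

```python
# Gaugler's shortcut: margin of error is roughly 1 / sqrt(sample size).
import math

def margin_of_error(n: int) -> float:
    return 1 / math.sqrt(n)

for n in (100, 400, 1000, 2500):
    print(f"n = {n:>4}: about {margin_of_error(n):.1%}")
# n = 1000 gives about 3.2% -- the familiar "plus or minus 3 points"
```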

Video: Ask the Expert. Have you ever wondered how one candidate can seemingly have an election ‘in the bag,’ according to the polls, and still lose? Prof. Trent Gaugler explains.

Q: Have any of your students asked if we can trust the polls this year? 

A: We’ve talked about it. We read fivethirtyeight.com semi-regularly, and I think Silver has made the point that the 3 percent margin of error only captures the error that comes from taking a random sample. If I ask 100 people right now, ‘Who are you going to vote for?’ and 48 say Biden, I could ask another set of 100 the same question and might get 52, or 47, or 53 who say Biden. The fact that different sets of 100 people move around gives us that wiggle room, but that assumes those 100 people are representative. There are other types of error that aren’t just sampling error: biases that happen because we didn’t talk to the right people, or because we asked the wrong question. There’s a lot out there on how raw data is combined into an actual polling estimate, but Silver suggested that with all of those sources of error combined, the margin of error is probably about double. So, when you see a poll that says plus or minus three, it’s probably plus or minus six. This year, with COVID and mail-in voting and all the chaos that’s going on with that, it’s probably more.
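
That sampling “wiggle room” is easy to reproduce. In this minimal sketch (assuming a population where support is exactly 50 percent), ten polls of 100 people each return ten different counts, even though nothing about the population changed between polls:

```python
# The "wiggle room" from random sampling alone: poll 100 people at a time
# from a population where support is exactly 50%, and the count bounces.
import random

random.seed(0)
true_support = 0.50

counts = [
    sum(random.random() < true_support for _ in range(100))
    for _ in range(10)
]
print(counts)  # ten polls of 100 people each, e.g. values like 48, 53, 47, ...
```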

Q: A recent major news headline read, ‘Poll shows Biden landslide and narrow Trump win are both possible.’ Because of the outcome in 2016, is the media covering all of its bases?

A: There’s so much uncertainty in this process. One of the things that we’ve talked about a lot in this class and that I’ve really tried to get students to think more about is how the media and politicians deal with uncertainty. I said earlier that humans are very bad at assessing probability and risk. Part of that is, of course, uncertainty. What are the likelihoods of these different possible outcomes? We don’t communicate uncertainty nearly enough. Far too often we get headlines that say, ‘This is what’s going to happen,’ or a candidate in the debate will say, ‘My economic plan will create 4 million jobs.’ These types of statements simply cannot be true—there is no way we can know things like that with so much precision. It could be 4 million plus or minus 2 million jobs. 

They’re sort of incentivized not to communicate uncertainty, because they want to sound confident. In class, we’ve talked a lot about the appropriate level of honesty and how to communicate uncertainty. A headline like that is just being honest, considering the real margin of error is at least 6 percent. I think sometimes we don’t want to hear that honesty.

Q: In a recent interview, Nate Silver admitted he’s been struggling to figure out how to present information in 2020 in a way the American public can understand. What do you make of that?

A: In our current media climate, we are a ‘140 characters or less’ type of people. We want the headline, and we want to know the story. I think most people, if they wanted to take the time, could really understand the whole story. But I understand we’re all busy. And if you’re not an expert in it, you don’t want to take the time. You want the headline, you want the tidbit, you want that visual on his website that just tells you the story quickly.

Communicating uncertainty also fights the way we consume information. In this particular case, where there’s so much more uncertainty that we need to communicate—and we’re so bad at it already—then yes, we wrestle with, ‘How much do we try to put out there and be transparent?’ That’s an ideal we should all be striving for. That’s what allows us to check each other and make sure people aren’t putting out false or misleading narratives.

Q: Silver has kind of become the ‘go-to guy’ since the last election because he’s much more a statistician—much more of a numbers guy—than a CNN pollster. Is that why you focused on him for your class?

A: I’ve looked up to him and his methodology for quite some time. In a nutshell, he averages polls. He doesn’t just take a poll and report it; he understands that any one poll can be an outlier. Silver says, ‘Look, there are thousands of polls out there, and they’re all trying to measure the same thing. So let’s leverage them and harness their collective power.’ The way he weights and averages them is distinctive. I think that’s a smart approach, because it doesn’t put too much weight on any one result.
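
The averaging idea can be shown in miniature. This is a toy sketch, not Silver’s actual model: every number and weight below is invented, and FiveThirtyEight’s real weighting accounts for factors like recency, sample size, and pollster rating. The point is simply that down-weighting a suspect poll keeps it from dominating the average.

```python
# A toy weighted poll average in the spirit of what is described above.
polls = [
    # (candidate share in %, weight: larger = more trusted)
    (52.0, 1.0),
    (48.5, 0.8),
    (51.0, 1.2),
    (55.0, 0.3),  # a likely outlier gets a small weight, so it can't dominate
]

weighted_avg = sum(share * w for share, w in polls) / sum(w for _, w in polls)
print(f"weighted polling average: {weighted_avg:.1f}%")
```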

Q: In 2016, we had a large crop of people who had never voted before. I’d be interested in seeing the data on the reliability of those people coming back a second time.

A: That’s actually an issue I have brought up in class repeatedly. Part of it was, I think, that Trump energized a voting base that had been dormant for ages. He was not the norm. People said, ‘There’s this guy who doesn’t look like all the other politicians I’ve ever seen. Maybe I’ll do something this time.’ Maybe that energized them to the point where it changed their behavior, and we weren’t able to see that change in behavior coming. There’s a statistics quote I love: ‘Doing predictions is like driving by looking in the rearview mirror.’ As soon as that bend in the road comes and you’re still looking backwards, you’re going to crash. And that’s exactly what happened in 2016. Trump was a giant bend in the road, and our polls crashed hard. The data would never have suggested those folks were going to vote. That’s one of the reasons polling is hard.

Q: We tend to see polls change right after debates. Does 90 minutes of TV time really change the way people have been forming their opinions over the last 10 months or so of the campaign season?

A: It’s bizarre, because we already know where both candidates stand on the issues. How is what they tell you in an hour something you didn’t already know from having read it in the news 100 times over? That brings us back to the way people consume information. Some people pay attention and read the news constantly, and may just be more informed than other voters. Others get a lot of their information from the debate itself, which is a shame, because these issues are much more nuanced and take more than an hour to explain.

With that being said, with every presidential election, someone’s leading big in the summer [polls], and it always narrows. By and large, it’s going to be close to a 50/50 popular vote every election cycle. Whatever lead a candidate has in the summer is going to evaporate. Don’t look at the polls in August. Start looking at them now.

 

