While visiting a farm in Loudoun County, Virginia, on Saturday, I met several gentlemen wearing Trump gear and eager to discuss the election with strangers. They explained that while the polling looks bad for President Donald Trump, his numbers were basically the same at this point in 2016 (not true), that Trump voters are less likely to answer phone surveys, and that Trump is therefore underrated in the polls.
These gentlemen were nice enough, but they weren’t exactly right. They aren’t alone in their beliefs, though, which has fueled a discourse around whether so-called “shy Trump voters” are distorting the polling. And that discourse gets confused in part because the premise of the shy-Trump-voter thesis contains a measure of truth.
There are demographic attributes that correlate both with Trump voting and with non-response to polls, and if pollsters are careless with their work, this could lead them to underrate Trump.
That being said, there are also plenty of careless mistakes that would lead pollsters to underrate Democratic presidential nominee Joe Biden. And “shyness” is almost certainly not the attribute of interest here. Trump fans are rather infamously vocal about their support for the president, and he boasts bigger rallies, more garish hats, more boat parades, and other visible signs of support than Biden does.
Are there potential distortions in survey response? Of course.
But the real issue is: Do pollsters adjust for these distortions? And even more to the point: Is there reason to believe their methodological failures are systematically biased in one direction?
The answer to that latter question is basically no. Polling errors happen, and while a polling error of the scale needed to generate a Trump win would be unusual, it’s not out of the question. But a similarly sized polling error could happen in the other direction, too. Pollsters make mistakes, but they’re not incompetent.
The most basic way to conduct a telephone poll of American public opinion would be to generate a few thousand random phone numbers, call everyone on the list, and ask the people who answer who they are planning to vote for.
Election polling of the past really did, more or less, work like that, which is one reason the margin of error has traditionally loomed so large in reporting on polls, and in polling itself. As you may dimly remember from a high school statistics class, a relatively small random sample of a much larger group of people can give you a fairly accurate estimate of what the larger group is like. And there’s a formula that relates the size of your sample to the reliability of your estimate, letting you generate a margin of error and the 95 percent confidence interval it defines around your estimate.
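For the curious, here’s what that formula looks like in practice: a minimal Python sketch, assuming a hypothetical 1,000-person sample and the worst-case 50/50 split.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a simple random sample of size n.

    p is the assumed proportion (0.5 is the worst case) and z = 1.96
    is the standard critical value for a 95 percent confidence interval.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 1,000: +/- {margin_of_error(1000):.1%}")  # about +/- 3.1 points
```

The catch is that this math assumes a genuinely random sample, an assumption that modern phone polling badly violates.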
But in the modern world, the problem you will have if you try to conduct a poll by calling random people has nothing to do with sampling error and everything to do with the fact that few people will answer the phone. Poll response rates have plummeted in recent years to the point where you need to dial over 15,000 phone numbers to get 950 responses, a response rate of about 6 percent.
Worse for pollsters, the group of people who do answer the phone is going to be a non-random slice of the population. Young people, people of color, people who do not speak English as their first language, and people with lower levels of education are all much less likely to answer surveys.
The only slight saving grace of a random phone poll is that, in partisan terms, these biases somewhat offset each other. But if you think in terms of local geographies, you’ll see that such a poll is going to be way off. In an overwhelmingly white, more educated state like Maine, you’ll be massively biased toward Democrats because of the educational skew. In a less educated but ethnically diverse state like Nevada, you’ll be biased toward Republicans because of the racial skew. If you happened to get the national numbers right, that would be a lucky coincidence.
So this is not actually how modern pollsters do things.
A more sophisticated approach to the problem starts with a model of what the electorate should look like, and then “weights” the poll responses to fit the model. For example, suppose you have good reason to believe (from the census, say) that the electorate should be 12 percent Black but only 8 percent of your survey respondents are Black. Then you can “increase the weight” of the Black respondents so that each of them counts more than a non-Black respondent would. You could fiddle like this to weight up Black, Latino, and young respondents, and to weight down older white people.
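Here’s a minimal sketch of that adjustment in Python, using the illustrative 12 percent target and 8 percent sample share from above; real pollsters weight on many variables at once, but the core arithmetic is just a ratio of target share to sample share.

```python
# Weighting a sample to match a target electorate on one variable.
# The 12% target and 8% sample share are the illustrative numbers above.

targets = {"Black": 0.12, "non-Black": 0.88}  # what the electorate should look like
sample = {"Black": 80, "non-Black": 920}      # hypothetical respondents (n = 1,000)

n = sum(sample.values())
weights = {group: targets[group] / (sample[group] / n) for group in targets}

print(weights)
# Black respondents get weight 1.5; non-Black respondents get about 0.96.
# Weighted, the sample now behaves as if it were 12 percent Black.
```

In practice, pollsters typically “rake” across several variables at once (age, race, education, region), iterating until the weighted sample matches all the targets simultaneously.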
This is where things went awry in 2016, although the polls were still not so far off by historical standards. If you weight by race and by age but not by educational attainment, you end up correcting two pro-Republican biases while leaving the anti-Republican bias of educational attainment uncorrected.
Many pollsters did this in their 2016 state polls, and as a result ended up underestimating Trump. That’s not because Trump voters are “shy” — the vast majority of people of all types don’t answer polls these days. It’s because pollsters deal with non-response by doing a lot of weighting and modeling, so poll accuracy is increasingly a function of how well you design your model.
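To see how skipping an education weight plays out, here’s a toy simulation with entirely invented numbers: non-college voters favor Trump, college voters favor his opponent, and non-college voters answer the phone half as often.

```python
# Toy illustration of the 2016-style miss. All numbers are invented.
# True electorate: 60% non-college (Trump +10), 40% college (Trump -20).
true_share = {"non_college": 0.60, "college": 0.40}
trump_margin = {"non_college": 10.0, "college": -20.0}

true_result = sum(true_share[g] * trump_margin[g] for g in true_share)
print(f"True margin: Trump {true_result:+.1f}")  # Trump -2.0

# Non-college voters answer the phone half as often, so the raw sample
# skews toward college graduates.
response_rate = {"non_college": 0.5, "college": 1.0}
raw = {g: true_share[g] * response_rate[g] for g in true_share}
total = sum(raw.values())
sample_share = {g: raw[g] / total for g in raw}

polled = sum(sample_share[g] * trump_margin[g] for g in sample_share)
print(f"Unweighted poll: Trump {polled:+.1f}")  # about Trump -7.1, a 5-point miss
```

Weighting the respondents back to the true 60/40 education split would recover the right answer; that is all education weighting does.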
It’s easy to say, “Well, they should have weighted for education” (and indeed they should have), but it’s worth reflecting on the fact that these weighting decisions are difficult. Some pollsters, for example, weight based on party identification, which could be a useful master control on your sample. On the other hand, you might think that doing that, in effect, rules out large changes in public opinion — you won’t detect a sharp swing against the Republican Party in Texas if you insist, as a matter of definition, that most of the Texas electorate has to be Republicans.
Should you weight for religious observance? For denomination? We know that a white person who goes to church weekly is far more likely to be a Republican than one who never attends services. But the relationship goes the other way for a Black person who affiliates with a Black church (but not another kind of church). We also know that subgroups matter. Cuban Americans, for example, lean much more Republican than other Latino voters, so if half the Latinos in your sample happen to be Cuban, that will bias your result, and all the more so if you end up weighting them up because the overall number of Latinos is small.
Critically, these problems are more intense when you’re talking about state-level polling than national polling, because the little state idiosyncrasies matter more.
In a broad national sample, the fact that, say, Mormons or Jews vote differently from other white people more or less comes out in the wash. But in the handful of states where LDS adherents are a large share of the population (like Arizona), it can matter a great deal. When you try to poll states that don’t get polled much (Alaska, for example), you are faced with the problem that there isn’t much of a track record for different weighting concepts; you’re just sort of guessing how to weight.
The most advanced phone polls try to get a leg up on the competition by starting not with a set of random digits but with a purchased list of registered voters. That lets you weight based on party registration, an objective variable that’s obviously politically relevant, but it costs money, and it’s not possible in every state. These days, more and more pollsters aren’t doing phone polls at all. Instead, they run surveys online or via text message with respondents who’ve agreed to be surveyed in exchange for money. This is a non-random group, but since phone polling is so model-intensive to begin with, a well-designed online poll may now outperform a phone poll.
All of this means that crafting accurate state-level polls is difficult, and nobody should be shocked that it goes awry sometimes. But “the polls might be wrong” is a different claim from “there is a specific reason to believe the polls are underrating Trump.” These days, for example, most well-regarded pollsters do weight by education.
We can say with some confidence that pollsters will not make literally the exact same methodological mistake they made in 2016. But some people familiar with the education weighting issue have been too quick to assume that the polls have been “fixed” since Trump’s initial victory.
When Nate Cohn, of the New York Times, surveyed state-level poll accuracy in the wake of the 2018 midterms, he found that on average the polls had become more accurate.
But in states where the polls overestimated Clinton in 2016, they also tended to overestimate Democrats in 2018, and vice versa. National polling, in both years, was more accurate. The 2018 race was largely focused on the House of Representatives, so the state-level polling errors didn’t seem like a huge deal psychologically. Democrats underperformed here and there and disappointed themselves, but made up for it by overperforming massively in California and winning some surprise seats out west. In the Electoral College, of course, underperforming the polls in Pennsylvania and Florida (as Democrats did in both 2016 and 2018) and making it up in California would not be so benign.
All that said, just because the polls underestimated Republicans in Pennsylvania two elections in a row doesn’t mean they’ll do it a third time. The size of the polling error there shrank between 2016 and 2018, and pollsters may have successfully shrunk it to zero. Or they may end up overcorrecting and producing an error in the other direction, underestimating Democrats. Either way, there is no specific empirical or theoretical basis for believing the polls are skewed in favor of Biden. They certainly might be skewed, but professional pollsters are aware of the potential biases and mostly try to correct for them.
Perhaps even more important, Biden’s polling lead is just really large at this point. The polls could be off badly and he might win anyway.
Right now, Biden is up by about 5 percentage points in Pennsylvania polling averages.
That means that if the Pennsylvania polls are exactly as inaccurate as they were in 2016, when they missed by about 4 points, he’d still win Pennsylvania by 1 percentage point. He’d also carry Michigan and Wisconsin by a few points, and Arizona by 2 points.
And because the 2016 poll errors in Georgia and Florida were much more modest, Biden’s narrow 2-point edges in those state poll averages would simply translate into narrow 1-point wins. If the election ended up playing out that way, we’d likely end up remembering it as a solid Biden win and forgetting all the fuss about polling errors. Nonetheless, particularly in the key Great Lakes states, these would actually be big polling errors.
The reason Hillary Clinton didn’t bother to campaign in Wisconsin is that the polls showed her safely ahead there; they turned out to be off by 6 percentage points. If Biden ends up winning there by 4 percentage points rather than by 10, that will still be a larger-than-usual error; it’s just that nobody will care.
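The back-of-the-envelope arithmetic in the last few paragraphs is easy to write down. Here is a minimal Python sketch; the leads and repeat-miss figures are the rough numbers cited above, not a forecast.

```python
# Subtract a repeat of the (approximate) 2016 polling miss from Biden's
# current lead in each state. All figures are the rough ones cited above.

biden_lead = {"Pennsylvania": 5, "Wisconsin": 10, "Georgia": 2, "Florida": 2}
miss_2016 = {"Pennsylvania": 4, "Wisconsin": 6, "Georgia": 1, "Florida": 1}

for state, lead in biden_lead.items():
    adjusted = lead - miss_2016[state]
    winner = "Biden" if adjusted > 0 else "Trump"
    print(f"{state}: {lead:+d} lead, {miss_2016[state]}-point repeat miss "
          f"-> {winner} by {abs(adjusted)}")
```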
To win the election at this point, Trump doesn’t just need the polling error to be in his favor again — he needs it to be giant, since Biden’s lead is bigger than Clinton’s was. Alternatively, he needs to narrowly eke out a win in Florida (which would only require a small poll error) and try to get the courts to invalidate mail votes in the Great Lakes.
"behind" - Google News
November 02, 2020 at 04:30AM
https://ift.tt/3efJEix
Trump isn’t behind in polls just because of “shy voters” - Vox.com
"behind" - Google News
https://ift.tt/2YqUhZP
https://ift.tt/2yko4c8
Bagikan Berita Ini
0 Response to "Trump isn’t behind in polls just because of “shy voters” - Vox.com"
Post a Comment