7.11.24

Shreveport pollster goes big, goes home big

If you think Democrats with Vice Pres. Kamala Harris at the top of the ticket just suffered a disastrous showing, wait until you hear about the launch of the Shreveport-based polling firm that went all in, and then some, predicting the opposite for her campaign.

Vantage Data House operates on a subscription-based model. Rather than offering a one-shot picture in time of a particular race or a few, it gives subscribers access to an entire database from which they can pick many contests nationally. It claims a proprietary method that relies upon information about a respondent’s residence, party registration, race, age, and gender. The firm appears to have been running in the background for perhaps a year or more, apparently gathering through the web a national panel of voters of up to 40,000 per state. It claims it correctly called 29 of 30 Louisiana contests last year in a test run, and so recently rolled out the entire operation focusing on national contests this year.

It announced itself about 10 days prior to the election with a lengthy web document predicting not only that Harris would defeat Republican former Pres. Donald Trump in the so-called “swing states” minus Wisconsin plus Florida, but that nationally it would be a “blowout” in her favor. It went to some length justifying its conclusions, along the way stating “Many [independent polls] prefer to be wrong with the crowd rather than risk standing as outliers, so they adjust their numbers and reinforce the faulty averages,” “Republicans are in serious trouble, though few are willing to acknowledge it,” and “A significant widening of the gender gap and Harris’ growing support among independents … is propelling her toward a potential 300+ electoral college victory.”

On election eve, it released final numbers in these states for the presidential contest and for Senate races in most of them. It showed Harris winning all of these states and all the Democrats winning Senate seats, with an error margin of 3 points either way in 95 percent of instances (a common polling technique is to draw sample sizes sufficiently large that in 19 of 20 cases the sample should reflect the population within a stated margin of error, which depends upon an estimate of the population variance). Only under the best-case scenarios for Trump – that is, each poll being fully 3 points wrong in his direction – would he win enough of these states to take the presidency under the firm’s predictions. As for the five Senate races, even under their best-case scenarios none of the Republicans were forecast to receive at least 50 percent of the vote.
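
For readers wondering where that 3-points-either-way, 19-of-20 figure comes from, here is a minimal sketch of the textbook arithmetic, assuming a simple random sample and worst-case variance (p = 0.5); the function names are mine, and the firm’s proprietary method is of course not public.

```python
# Textbook margin-of-error arithmetic for a proportion, assuming a simple
# random sample and worst-case variance (p = 0.5). This is the standard
# calculation, not Vantage Data House's proprietary method.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95 percent confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

def sample_size_needed(moe, p=0.5, z=1.96):
    """Smallest simple random sample whose 95 percent interval is +/- moe."""
    return math.ceil(z**2 * p * (1 - p) / moe**2)

print(sample_size_needed(0.03))   # about 1,068 respondents for +/- 3 points
print(margin_of_error(40000))     # roughly +/- 0.5 points at 40,000 per state
```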

Then came election day, which turned out to be an absolute disaster for the firm. In fact, that’s an understatement. The average miss by state (as of the end of the business day Nov. 7) on the Trump vote across the seven presidential contests was just about -3.7 points, and on the GOP Senate candidates around -4.3. It’s a really bad miss when the result doesn’t even land inside your confidence interval, especially in the wrong direction – which turned out to be the case in the majority of the 12 races. Only two of the 12 picked the correct winner – and both of those are as yet unconfirmed.
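
To be clear about how such an “average miss” is scored, here is a hedged sketch using entirely made-up placeholder margins, not the firm’s published predictions or the certified results: the signed error is predicted vote share minus actual vote share, and a race counts against the confidence interval when that error exceeds the stated 3 points.

```python
# Sketch of scoring poll error; the numbers below are placeholders only,
# not the firm's predictions or the actual returns.
def signed_miss(predicted_share, actual_share):
    """Predicted minus actual vote share, in percentage points."""
    return predicted_share - actual_share

def outside_interval(miss, moe=3.0):
    """True when the miss exceeds the stated +/- 3-point error margin."""
    return abs(miss) > moe

# Illustrative only: (predicted Trump share, actual Trump share) per race.
placeholder_races = [(47.0, 51.0), (48.0, 52.5), (46.5, 50.0)]
misses = [signed_miss(p, a) for p, a in placeholder_races]
print(sum(misses) / len(misses))                  # negative = underprediction
print(sum(outside_interval(m) for m in misses))   # races outside the interval
```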

You could argue, as in the case of the bad Iowa polling miss, that it was a bad sample, that one time out of 20 when you draw an unrepresentative sample. The problem is that if that were the case with these guys, they did it again and again, which points more to a flaw in their methodology than to extended bad luck.
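
To put a rough number on why “again and again” matters, here is a sketch of the standard binomial argument, assuming (generously) that each of the 12 races was an independent draw with only a 5 percent chance of falling outside its stated interval; “seven or more” stands in for the majority of the 12 races noted above.

```python
# Binomial back-of-the-envelope: if each of 12 races independently had only
# a 5% chance of landing outside its 95% interval, one miss is plausible,
# but a majority of misses by pure bad luck is vanishingly unlikely.
from math import comb

def prob_at_least(k, n=12, p=0.05):
    """P(at least k of n independent races fall outside the interval)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(prob_at_least(1))   # ~0.46 -- one miss alone is quite plausible
print(prob_at_least(7))   # ~5e-7 -- seven or more points to the method
```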

Because the model is proprietary (for whatever that now appears to be worth), it is difficult to know exactly where they went wrong, but a guess can be made. They appear to be drawing what are called probabilistic samples, using for a constituency an actual or estimated breakdown of party registration, race, age, and gender (sex). If so, one problem may be that in many cases they have to estimate some or all of these criteria. Louisiana is unusual in that it makes public all of this information in the aggregate for registrations in its jurisdictions, but many states hardly collect or make public much of it.
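
As a rough illustration of what such an approach involves, here is a generic post-stratification sketch (not the firm’s actual method, and with made-up shares): each respondent is weighted by the population share of their demographic cell divided by that cell’s share of the sample. If the population shares must themselves be estimated, as in states that publish little registration data, every weight inherits that error.

```python
# Generic post-stratification weighting sketch, not Vantage Data House's
# actual method: weight = population_share / sample_share per cell.
from collections import Counter

def poststratify(respondents, population_shares):
    """Assign each respondent a weight of pop_share / sample_share."""
    counts = Counter(r["cell"] for r in respondents)
    total = len(respondents)
    weights = {cell: population_shares[cell] / (counts[cell] / total)
               for cell in counts}
    return [weights[r["cell"]] for r in respondents]

# Illustrative two-cell example with made-up registration shares.
sample = [{"cell": "R"}] * 30 + [{"cell": "D"}] * 70
weights = poststratify(sample, {"R": 0.45, "D": 0.55})
print(weights[0], weights[-1])   # Republicans weighted up, Democrats down
```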

As well, apparently at least some of the sample construction comes from web surfers voluntarily asking to be part of it all. This raises the dreaded sampling threat to validity of volunteerism, as there is a long and rich history in social science research showing that unsolicited volunteers have different characteristics than the general public, even when a pollster tries to weight a sample in a probabilistic fashion. Initial indications this cycle are that Democrats disproportionately comprise such volunteers.

Whatever the problem, the real world left plenty of egg on these guys’ faces. In fact, of the dozens of polling operations out there offering forecasts on the presidency – and while a number underpredicted Trump’s proportions in the swing states, most did have him ahead in most of them – it probably did the worst of all. And with betting markets proving far more accurate in picking up a looming Trump win (although the longtime Iowa Electronic Markets, doing this since 1996, completely blew it), why bother with polls?

With this record, who would fork over hundreds of dollars a month to get such lousy information? It will be interesting to see whether the firm’s principals publicly address their misfire at a future date, if not whether the firm even survives this colossal blunder.
