12.4.22

One poll does better in assessing LA officials

I’m not sure what to make of a survey by my alma mater concerning Louisiana’s major statewide elected officials. It stood in stark contrast with another recent poll of the same figures, and given the characteristics of the effort, it may not tell us much meaningfully at all.

Recently, JMC Analytics completed a poll of 600 statewide likely voters over Mar. 21-23, apparently by random interactive voice response with about three-quarters of interviews coming from cell phones; no response rate was listed. A week later, from Mar. 28-Apr. 1, the University of New Orleans Survey Research Center did the same with 325 statewide registered voters through live interviewing that asked for a particular respondent, with an unknown proportion of cell phones involved, producing a response rate of seven percent. Both adjusted sample composition to reflect race, sex, and geography, while JMC also balanced its sample by partisanship and UNO its by age.
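
The usual mechanism behind that kind of sample balancing is rim weighting, or raking, which iteratively adjusts respondent weights until the weighted sample matches known population shares on each variable. Below is a minimal sketch in Python; the target shares and toy respondents are purely hypothetical, since neither poll publishes its weighting targets here:

```python
# Minimal sketch of rim weighting (raking): adjust weights until the
# weighted shares on each variable match target population shares.
# Targets and sample below are hypothetical, not from either poll.
import pandas as pd

def rake(df, targets, max_iter=50, tol=1e-6):
    """Iteratively scale weights so each variable's weighted shares hit its targets."""
    df = df.assign(weight=1.0)
    for _ in range(max_iter):
        max_shift = 0.0
        for var, shares in targets.items():
            observed = df.groupby(var)["weight"].sum() / df["weight"].sum()
            factor = df[var].map(lambda v: shares[v] / observed[v])
            df["weight"] *= factor
            max_shift = max(max_shift, (factor - 1).abs().max())
        if max_shift < tol:  # stop once all margins line up
            break
    return df

# Toy four-respondent sample; target shares are illustrative only.
sample = pd.DataFrame({"sex": ["F", "F", "M", "F"],
                       "race": ["white", "black", "white", "white"]})
targets = {"sex": {"F": 0.52, "M": 0.48},
           "race": {"white": 0.62, "black": 0.38}}
print(rake(sample, targets)[["sex", "race", "weight"]])
```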

The JMC effort asked many more questions, and differently, using a five-category Likert ordering that created “very” and “somewhat” vessels for approval and disapproval to go along with “no opinion.” UNO asked about only the three figures in question, using just the response categories “approve,” “disapprove,” and “don’t know.” (Survey research literature generally considers “don’t know” and “no opinion” interchangeable ways of heading off neutral answers that may disguise a respondent not caring, or at least not caring enough about the question to spend the cognitive effort to form and relate an opinion.)

Both polls asked the same three approval questions, about Democrat Gov. John Bel Edwards and Republican Sens. Bill Cassidy and John Kennedy. On one figure they registered results not too dissimilar; on another the dissimilarity broadened; and on the remaining one it almost reached unexplainable incompatibility.

For Edwards, collapsing the JMC results to a scale of three answers, his approval/disapproval numbers were 48/44, and for UNO 38/34. For Cassidy, the JMC numbers were 38/49 and UNO’s 31/34. For Kennedy, JMC had him at 53/39 and UNO at 36/31.
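
Deriving comparable numbers just means folding JMC’s “very” and “somewhat” categories into single approve/disapprove buckets. A sketch using the Edwards question, where the five-category split shown is hypothetical, chosen only to be consistent with the collapsed totals cited here:

```python
# Collapsing a five-category Likert result into the three categories UNO used.
# The "very"/"somewhat" split is hypothetical; only the collapsed totals
# (48 approve / 44 disapprove / 8 no opinion for Edwards) are cited above.
jmc_edwards = {"very approve": 20, "somewhat approve": 28,
               "somewhat disapprove": 24, "very disapprove": 20,
               "no opinion": 8}

collapsed = {
    "approve": jmc_edwards["very approve"] + jmc_edwards["somewhat approve"],
    "disapprove": jmc_edwards["somewhat disapprove"] + jmc_edwards["very disapprove"],
    "no opinion": jmc_edwards["no opinion"],
}
print(collapsed)  # {'approve': 48, 'disapprove': 44, 'no opinion': 8}
```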

While the Edwards numbers didn’t differ a whole lot between the two, Cassidy was seen more positively in UNO’s poll and Kennedy much more negatively. Keep in mind these two polls were compiled within days of each other, so such dramatic differences can’t be explained by sudden salient events. Therefore, polling construction and execution must explain much of the divergence.

We can begin to explain the differences, and to sort out which results seem closer to reality, first by noting the extraordinarily high proportion of “don’t know” responses for UNO – Edwards 28 percent, Cassidy 35 percent, and Kennedy 33 percent – contrasted with the JMC “no opinion” figures of 8, 13, and 8 percent, respectively. Four aspects of the samples collected undoubtedly affected these wildly variant numbers.

First, JMC explicitly went for likely voters, while UNO took from the entire registered voter pool. Likely voters are much more likely to form these kinds of opinions than registrants who don’t admit interest in an upcoming election. UNO did attempt to collect a proxy for this by establishing a voter frequency variable and analyzing results with it. But even among its “chronic voters” (defined as voting in seven of the past ten elections, an approach that loses much explanatory power with the newest registrants, which also can contaminate results), the “don’t know” answers for Cassidy and Kennedy (this wasn’t reported for Edwards) still registered 16 and 23 percent respectively – close to the JMC figure for Cassidy, but about three times it for Kennedy.

UNO doesn’t report the sample by voting frequency, but judging from the “don’t know” distributions in the entire sample and sub-samples, a relatively large majority of respondents appear not to have been chronic voters, which would inflate that answer’s frequency. Intriguingly, survey research literature suggests live interviewing produces samples more likely to give a non-null opinion, and to complete surveys, than interactive voice. Yet the seven percent response rate is rather low, especially for such a short survey, and the lower such a rate, the greater the danger of sample deficiencies, particularly self-selection bias.
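
For a sense of what a seven percent rate implies, here is the rough arithmetic behind the self-selection worry, assuming the rate applies uniformly across eligible contacts:

```python
# Rough arithmetic behind the self-selection concern: at a 7% response
# rate, 325 completes imply a far larger pool of eligible contacts, and
# respondents are whoever chose to answer, not a random draw from it.
completes, rate = 325, 0.07
attempted = completes / rate
print(f"~{attempted:.0f} eligible contacts yielded {completes} completes")
# ~4643 eligible contacts yielded 325 completes
```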

Response rates potentially also could explain the differences, although JMC didn’t report one. It used interactive voice, which typically has a lower rate, although it doesn’t take much to exceed seven percent. However, JMC apparently collected at least some data through texting, which broadens the scope of contact hours (most voice telephone polling occurs after business hours on weekdays and in afternoons on weekends), and interactive voice can push out many more calls, even if JMC’s protocol was four times the length of UNO’s. (The published version of the JMC instrument doesn’t include a likely-voter query, but it had to have used something to isolate those voters, presumably a methodology somewhat similar to UNO’s that draws on those who voted more frequently in the recent past plus some of the newest registrants.)

This leads to the consideration of sample size and related issues. The 600 respondents collected by JMC is minimally serviceable for a statewide effort, but the 325 by UNO is very low and raises questions about sample representativeness. Further, while live interviewing inherently has an advantage over interactive voice, the quality of live interviewing can be suspect (having been one of those student callers 36 years ago for what was then called the UNO Poll, and 35 years ago a graduate assistant supervising the student calling ranks, drawn from classes in research methods, I can attest some do a really good job and others not so much).
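
For scale, the conventional 95 percent margin of error at maximum variance (p = 0.5) makes the gap between the two sample sizes concrete; this is a back-of-envelope calculation that ignores the design effect weighting adds, which would widen both intervals:

```python
# 95% margin of error at p = 0.5 for each poll's sample size,
# ignoring design effects from weighting (which widen both intervals).
from math import sqrt

for label, n in [("JMC", 600), ("UNO", 325)]:
    moe = 1.96 * sqrt(0.25 / n)
    print(f"{label}: n = {n}, MoE = ±{100 * moe:.1f} points")
# JMC: ±4.0 points; UNO: ±5.4 points
```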

Also of note, the UNO survey didn’t appear to weight the sample by partisanship on file with registration. Trawling through the breakdown of support for the senators suggests a bit of overweighting of nonpartisans, which again would be expected if the sample contained a significant number of unlikely voters, and it is these registrants who are most likely not to give an opinion.

Finally, polls often contain an initial stimulative question designed not only to entice the respondent to take up and fully complete the survey but also to prime them to think about the questions and not give subsequent null answers. The JMC poll did have such a question, leading off with a view of Democrat Pres. Joe Biden (the other kind of question, besides presidential approval, often used to fulfill this function runs along the lines of “what is the biggest problem facing …”), but UNO’s opener, which asked about Edwards, was much weaker. This may have knocked down the response rate and encouraged more “don’t know” answers over the subsequent three questions.

Put all of this together and it probably explains much of why UNO’s non-opinionated answers were three to four times higher than JMC’s. It probably also had something to do with some of the head-scratching distributions UNO catalogued. Perhaps the most curious was greater female than male approval of Kennedy, with more men actually disapproving of Kennedy than approving. Not only does this run counter to the literature finding that in recent years Republican elected officials typically gain net positive approval from men, and more of it than from women, but also according to JMC men approved of Kennedy by a net 15 points while he held only a small approval edge among women, who were less approving than men by 12 points.

In short, if I had to make a judgment on the current approval of the three politicians and the political futures of the two senators, I ruefully would admit placing greater reliance on the professional pollster’s results than on those of the poll with which I was formerly associated.
