The President That Wasn’t: The Case for Higher Response Rates


In 1936, the United States was entering the eighth year of the Great Depression, and President Franklin Delano Roosevelt was still pushing parts of the New Deal through Congress and the courts. He had been renominated for president by the Democratic Party without opposition. FDR would face off against Republican Alf Landon, Governor of Kansas and a staunch opponent of the president’s expansive fiscal policy.

Three days before the election, The Literary Digest, a weekly American magazine, announced the results of its straw poll of a phenomenal 2.5 million respondents. It predicted that Governor Landon would win the election by a landslide, taking 56% of the popular vote and securing 370 of 531 electoral votes.

Why have we never heard of President Landon? Because FDR won the election easily, winning 61% of the popular vote and carrying all but two states — Maine and Vermont. FDR’s 98.5% share of the electoral vote is the highest ever in a two-party election in the US.

What went wrong – the conventional explanation

So, what happened? How could The Literary Digest get it so wrong? Prior to this mishap, the magazine had correctly predicted all five elections since it began polling in 1916. Both The Literary Digest and the new kid on the block, Gallup (founded in 1935), argued that, despite the two and a half million responses, the population surveyed was not economically representative of the voting public.

To conduct its poll, The Literary Digest mailed out 10 million ballots asking four questions about the recipient’s planned vote, voting history, and state of residence. Two and a half million responses aside, the conventional what-went-wrong explanation revolves around the bias inherent in how those 10 million voters were selected in the first place.

The Literary Digest’s survey population was drawn from records of registered automobile owners and telephone subscribers – both supposed hallmarks of greater wealth, especially in the depths of the Great Depression.

Survey respondents, the argument goes, were well-to-do and thus more likely to vote Republican. Representation bias was the culprit: the sample was not representative of the voting population.

But this conventional explanation is incorrect, or at least incomplete.

Testing the conventional explanation

To test the representation bias explanation, The Literary Digest ran follow-up surveys that included a representative sample of lower-income voters. The results did not differ from those of the original survey.

What’s more, a 1937 Gallup survey looked at four different groups: those who owned cars, those who owned phones, those who owned both, and those who owned neither. It showed that while the third group (owners of both) was more likely to vote for Landon, all other groups favored FDR by anywhere from 22 to 48 percentage points. Landon’s lead in the car-and-phone-owning group was only six percentage points.

The Literary Digest was so discredited by this failure to produce accurate results that it folded within two years. Meanwhile, the results of this second survey were not analyzed in detail until much later.

The other explanation – nonresponse bias

In 1976, Maurice Bryson, a professor of statistics at Colorado State University, published his paper, “The ‘Literary Digest’ Poll: Making of a Statistical Myth.”

Bryson asserted that The Literary Digest’s epic miss resulted from the differing likelihood of each candidate’s supporters to respond to the poll. Backing the underdog, Landon’s voters were simply more motivated to share their preference and thus more likely to mail in a response.

This would be no more than an interesting hypothesis if Gallup hadn’t already gathered the evidence in 1937. Gallup asked its respondents about their participation in The Literary Digest poll: Did they receive the ballot? If so, did they mail in the card? Whom did they vote for? The results confirm the nonresponse bias hypothesis – 33% of Landon voters mailed in their Literary Digest ballots, while only 19% of Roosevelt voters did the same.
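To see how much damage this alone can do, here is a back-of-the-envelope sketch in Python. It assumes, for illustration, that the Digest’s ballot recipients split like the actual popular vote, roughly 61% for FDR and 39% for Landon, and then applies the response rates Gallup measured. Sample bias is deliberately left out.

```python
# Illustrative calculation: the effect of differential response rates alone.
# Assumption: ballot recipients mirror the actual popular vote
# (61% FDR, 39% Landon); response rates are from Gallup's 1937 survey.

fdr_share, landon_share = 0.61, 0.39   # assumed true preferences of recipients
fdr_rr, landon_rr = 0.19, 0.33         # measured response rates

fdr_ballots = fdr_share * fdr_rr            # fraction who back FDR AND reply
landon_ballots = landon_share * landon_rr   # fraction who back Landon AND reply
landon_poll = landon_ballots / (fdr_ballots + landon_ballots)

print(f"Landon's share of returned ballots: {landon_poll:.1%}")  # -> 52.6%
```

Even with a perfectly representative mailing list, the uneven willingness to reply turns a 61–39 FDR landslide into an apparent Landon lead of about 53%, covering most of the gap between the election result and the Digest’s 56% prediction.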

Based on these findings, Dominic Lusinchi of Far West Research concluded in 2012 that of the 20-point difference between The Literary Digest poll and the election results, 14 points came from nonresponse bias and only six from sample bias. In short, nonresponse bias was more than twice as large as sample bias.

The case for higher response rates

Now that we know how misleading and impactful nonresponse bias can be, how do we control for it in our B2B surveys? In The Literary Digest case, Gallup identified the bias by running an additional survey that asked its respondents about their participation in the original one. In a business context, this would be prohibitively expensive, lengthy, and complex.

So, what can we do?

First, get the response-rate data from your quant house or panel provider. The lower the response rate, the higher the risk that nonresponse bias is at play. In B2B surveys, it is not rare to achieve only a 10% response rate, which is very low. Try to soft launch surveys to a portion of the population and track how many of the people invited to participate end up responding, as in the sketch below. If the numbers are too low, take note and adjust.
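As a concrete illustration of that soft-launch tracking, here is a minimal sketch; the segment names, counts, and the 15% threshold are all assumptions for the example, not a standard from this article. The per-segment breakdown matters because a large gap in response rates between segments is exactly the differential-response pattern that sank The Literary Digest.

```python
# Minimal soft-launch monitor: compute response rates overall and per
# segment, and flag segments that fall below a chosen threshold.
# All names, counts, and the threshold are illustrative assumptions.

invited = {"IT decision makers": 400, "Finance leads": 350, "Ops managers": 250}
responded = {"IT decision makers": 80, "Finance leads": 21, "Ops managers": 30}

MIN_RATE = 0.15  # assumed acceptable floor for this example

overall = sum(responded.values()) / sum(invited.values())
print(f"Overall response rate: {overall:.1%}")  # 13.1% looks merely "low"...

for segment, n_invited in invited.items():
    rate = responded[segment] / n_invited
    flag = "  <- low, nonresponse bias risk" if rate < MIN_RATE else ""
    print(f"{segment}: {rate:.1%}{flag}")
# ...but the breakdown shows Finance leads responding at only 6%, so their
# views will be badly underweighted in the full launch unless addressed.
```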

Second, supplement your quantitative research with qualitative research. Quantitative work can show you the direction; qualitative work can tell you the why and confirm the what. This is extremely valuable. If possible, run qualitative interviews with a subset of survey respondents. You’ll be able to gauge how confident respondents are in the opinions they share and probe the rationale behind that confidence.

Lastly, take measured steps to increase your response rates. Companies like GLG, which hold extensive information on potential respondents, can drive up qualification rates, resulting in a greater willingness to participate.

To drive this advantage further, you should also look for other ways to engage your potential respondents. Consider doing qualitative work with members of your target respondent pool. This enables you to hone the screening criteria further. At GLG, we also conduct activities with our experts, such as topic-oriented roundtables, learning opportunities, and opportunities to share their own expertise. This helps our target respondents feel more engaged.

B2B research results are only as good as their inputs, and panel quality matters. Mistakes still happen, but it’s important to study these errors in order to avoid them in the future. Stricter control over one’s polling population is the best way to avoid the kind of bias that skews results.


About Elad Goldman

Elad Goldman is VP of Account Management at GLG, the world’s knowledge marketplace. Before GLG, Elad was the Business Operations Lead of Services at Amdocs, an IT provider to telecom and media companies, where he led strategy-formulation exercises driving aggressive investment in both M&A and internal IP development. He started his career as a software engineer at Intel, developing the baseband processor for mobile phone chipsets, a business later acquired from Intel by Marvell Semiconductors. Elad later shifted his focus from engineering to business and completed a full-time MBA at INSEAD, graduating in July 2010. He also holds an Electrical Engineering degree (BSc, cum laude) from Tel Aviv University.
