Post developed by Catherine Allen-West, Megan Bayagich, and Ted Brader
The initial release of the 2016 American National Election Studies (ANES) Time Series dataset is approaching. Since 1948, the ANES, a collaborative project between the University of Michigan and Stanford University, has conducted benchmark election surveys on voting, public opinion, and political participation. This year’s polarizing election yielded especially interesting responses. Shanto Iyengar, one of the project’s principal investigators and a Stanford professor of political science, noted, “The data will tell us the extent to which Trump and Clinton voters inhabit distinct psychological worlds.”
To learn more about the study, we asked Ted Brader (University of Michigan professor of political science and one of the project’s principal investigators) a few questions about this year’s anticipated release.
When was the data collected?
The study interviewed respondents in a pre-election survey between September 7 and November 7, 2016. Election Day was November 8. The study re-interviewed as many of the same respondents as possible in a post-election survey between November 9, 2016, and January 8, 2017.
The ANES conducted face-to-face and internet interviews again for 2016. How are these samples different from 2012? What are the sample sizes and the response rates?
The study has two independently drawn probability samples that describe approximately the same population. The target population for the face-to-face mode was 222.6 million U.S. citizens age 18 or older living in the 48 contiguous states and the District of Columbia, and the target population for the Internet mode was 224.1 million U.S. citizens age 18 or older living in the 50 U.S. states or the District of Columbia. In both modes, the sampling frame was lists of residential addresses where mail is delivered, and to be eligible to participate, a respondent had to reside at the sampled address and be a U.S. citizen age 18 or older at the time of recruitment.
The response rate for the pre-election interview, using the American Association for Public Opinion Research (AAPOR) formula for the minimum response rate, was 50 percent for the face-to-face component and 44 percent for the Internet component. The face-to-face response rate is weighted to account for subsampling during data collection; because of that subsampling, an unweighted response rate would not be meaningful for the face-to-face mode.
The re-interview rate on the post-election survey was 90 percent for the face-to-face component and 84 percent for the Internet component.
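For readers unfamiliar with the AAPOR “minimum response rate” mentioned above, here is a minimal sketch of how that rate is computed. The disposition counts are hypothetical, chosen only to illustrate the arithmetic; they are not ANES case counts, and the sketch does not show the weighting applied to the face-to-face rate to account for subsampling.

```python
# Minimal sketch of the AAPOR "minimum response rate" (RR1).
# All disposition counts below are hypothetical, for illustration only;
# they are NOT the actual ANES 2016 case counts.

def aapor_rr1(interviews, partials, refusals, noncontacts, other,
              unknown_household, unknown_other):
    """Complete interviews divided by interviews, non-interviews,
    and all cases of unknown eligibility."""
    eligible_or_unknown = (interviews + partials
                           + refusals + noncontacts + other
                           + unknown_household + unknown_other)
    return interviews / eligible_or_unknown

# Hypothetical dispositions for a single survey mode.
rate = aapor_rr1(interviews=1180, partials=20, refusals=600,
                 noncontacts=400, other=50,
                 unknown_household=100, unknown_other=10)
print(f"Minimum response rate: {rate:.1%}")  # prints "50.0%"
```

Because every case of unknown eligibility is counted in the denominator, this formula gives the most conservative of the AAPOR response-rate definitions.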
Are there any other aspects of the design that you think are particularly important?
I’d emphasize the effort to collect high-quality samples via both in-person and online interviews for the whole survey; that is obviously the most important design aspect of the 2016 study. It will help us learn more about the trade-offs between survey modes and the potential benefits of mixed-mode data collection.
Are there any new questions that you think users will be particularly interested in?
Along with many previous questions that allow researchers to look at short- and long-term trends, we have lots of new items related to trade, outsourcing, immigration, policing, political correctness, LGBT issues, gender issues, social mobility, economic inequality, campaign finance, and international affairs.
What do you think some of the biggest challenges were for the 2016 data collection?
With increasing levels of polarization and a highly negative campaign, some Americans were much more resistant to participating in the survey. Many seemed to feel alienated, distrustful, and sick of the election. Under these circumstances, we worked hard with our partners at Westat to overcome this reluctance and are pleased to have recruited such a high-quality sample by Election Day.
What are you most excited about when you think of the 2016 ANES?
The 2016 contest was in many ways a particularly fascinating election, even for those of us who usually find elections interesting! The election ultimately centered on two highly polarizing candidates, and people of many different backgrounds felt a lot was at stake in the outcome. Thus, not surprisingly, there was energetic speculation throughout the year about what voters were thinking and why they supported Clinton or Trump. The 2016 ANES survey provides an incredibly rich and unparalleled set of data for examining and testing these speculations. I expect it will take some time to arrive at definitive answers, but I’m excited to release this wealth of evidence so the search for the truth can begin in earnest.
Is there anything else you’d like to share?
I would note that future releases will include redacted open-ended comments by respondents, numerical codings of some of the open-ended answers, and administrative data (e.g., interviewer observations, timing, etc.).
For more information about ANES, please visit electionstudies.org and follow ANES on Twitter @electionstudies.