
What We Know About Race and the Gender Gap in the 2016 US Election

This post was created by Catherine Allen-West.

As of October, the latest national polls predicted that the 2016 Election results would reflect the largest gender gap in vote choice in modern U.S. history. Today, according to NPR, “An average of three recent national polls shows that women prefer Clinton by roughly 13 points, while men prefer Trump by 12, totaling a 25-point gap.” If these polls prove true, the 2016 results would indicate a much larger gender gap than the one observed in 2012, when women overwhelmingly supported Barack Obama over Mitt Romney.


2012 vote by gender based on national exit poll conducted by Edison Media Research.

University of Texas at Austin Professor Tasha Philpot argues that what may really be driving this gap to even greater depths is race. For instance, here’s the same data from the 2012 Election, broken down by gender and race.


2012 vote by gender and race based on national exit poll conducted by Edison Media Research.

Often overlooked in discussions of the gender gap, race figures prominently in many Americans’ political identities.

2016 Gender Gap in Party Identification

2016 Gender Gap in Party Identification.


Philpot recently participated in the panel “What We Know So Far About the 2016 Elections” at the University of Michigan’s Center for Political Studies. In her talk, “Race and the Gender Gap in the 2016 Election,” Philpot outlined the potential sources for the gender gap and emphasized the role that race is playing in widening the gap.

Using data from the ANES 2016 Pilot Study, Philpot compared opinions from white and black men and women on several issues such as government spending, inequality and discrimination, and evaluations of the economy. While there were noticeable differences strictly between men and women, the real story became clear when Philpot sorted the results by gender and race. Small gender gaps exist among both whites and blacks, but the most remarkable difference of opinions on all issues is between black women and white men.

SPENDING ON HEALTH CARE AND DEFENSE

2016 Gender Gap in Spending on Healthcare and Defense.


Perceived Gender Discrimination

2016 Gender Gap in Perceived Discrimination Based on Gender.

Evaluations of the Economy

2016 Gender Gap in Economic Evaluations.


On most issues, black women and white men fall on opposite sides of the political spectrum. Philpot concludes that it’s an oversimplification to consider the gender gap as merely a gap between men and women, when, in reality, the observed gender gap is largest between white men and black women.

Watch Tasha Philpot’s full presentation here: 

 


Related Links:

Tasha Philpot on NPR: Reports of Lower Early Voting Turnout Among African-Americans, The Diane Rehm Show (November 4, 2016)

The panel “What We Know So Far About the 2016 Elections” was held on October 5, 2016, at the Center for Political Studies, University of Michigan. It also included the following talks:

Stuart Soroka: Read, Seen or Heard: A Text-Analytic Approach to Campaign Dynamics
Nicholas Valentino: The Underappreciated Role of Sexism in the 2016 Presidential Race
Michael Traugott: Pre-Election Polls in the 2016 Campaign

All videos from the event can be found here: https://www.youtube.com/playlist?list=PLAvEYYDf9x8XFzBWadaPcV6kFjZkBFuHP

 

 

Tracking the Dynamics of the 2016 Election

This post was developed by Catherine Allen-West, Stuart Soroka and Michael Traugott

It’s an election year in America, and with that comes an endless string of media coverage of the political campaigns. If you are like 70 to 80 percent of Americans over the past 12 weeks, you’ve read, seen or heard some information about the two leading presidential candidates, Hillary Clinton and Donald Trump, on any given day.

These are the findings from an ongoing research collaboration between Gallup, the University of Michigan, and Georgetown University. Since July 11, 2016, Gallup has asked 500 respondents per night what they have read, seen or heard about Clinton or Trump that day. The resulting data include open-ended responses from over 30,000 Americans thus far.

Content analyses of these open-ended responses offer a unique picture of campaign dynamics. The responses capture whatever respondents remember hearing about the candidates over the previous few days from traditional media, social media, or friends and family. As Gallup points out, results from this project are noteworthy because while most survey research tracks Americans’ opinions on candidates leading up to an election, this study looks directly at the information the public absorbs on a daily basis.


For up to date results from this project visit: www.electiondynamics.org


Tracking the ‘Tone’ of What Americans Have Read, Seen or Heard

In this blog post, we offer some supplementary analysis, focusing on the tone of responses to the “read, seen or heard” question. Positive and negative tone (or sentiment) are captured using the Lexicoder Sentiment Dictionary, run in Lexicoder. The dictionary includes roughly 6,000 positive or negative words. We count the frequency of each and produce a measure of tone equal to the percentage of positive words minus the percentage of negative words, for every response from every respondent.
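As a rough illustration (not the actual Lexicoder implementation), the tone measure can be sketched as follows. The word lists below are tiny hypothetical stand-ins for the roughly 6,000-entry dictionary:

```python
# Sketch of the tone measure: percent positive words minus percent negative
# words in a single open-ended response. The word sets are illustrative
# placeholders, not the Lexicoder Sentiment Dictionary itself.

POSITIVE = {"good", "strong", "honest", "win"}
NEGATIVE = {"bad", "scandal", "lie", "weak"}

def tone(response: str) -> float:
    """Return tone of one response: % positive words - % negative words."""
    words = response.lower().split()
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 100.0 * (pos - neg) / len(words)
```

A response mentioning only positive words scores +100, only negative words −100, and a response with no dictionary words scores 0.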

Taking the average tone of responses daily provides insight into the content that American citizens are receiving (and remembering) during the campaign.  In this analysis, we focus on measures of “candidate advantage,” where “Clinton advantage” is the gap between the tone of responses to the “read, seen or heard” question about Clinton, and the tone of responses to the “read, seen or heard” question about Trump.  Positive values reflect a systematic advantage for Clinton; that is, a tendency for recalled information about Clinton to be more positive than recalled information about Trump.  Negative values reflect the opposite.
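Under these definitions, the “Clinton advantage” for a given day can be sketched as a simple difference of means (a hypothetical helper, not the project’s code):

```python
# "Clinton advantage" for one day: mean tone of responses about Clinton
# minus mean tone of responses about Trump. Positive values mean recalled
# information about Clinton was more positive than about Trump.

def clinton_advantage(clinton_tones: list[float], trump_tones: list[float]) -> float:
    mean = lambda xs: sum(xs) / len(xs)
    return mean(clinton_tones) - mean(trump_tones)
```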

As would be expected, partisanship matters: Republicans show a net positive assessment for Trump, particularly in the first weeks of September, and Democrats show a similar tendency toward Clinton. That said, the first few weeks of September show, at best, a very weak advantage for Clinton among Democrats: their recalled news was not markedly more positive for Clinton than for Trump. ‘Read, seen or heard’ comments from Democrats even turned to Trump’s advantage in the period from September 16th to 18th, before trending more positive toward Clinton again. This shift among Democrats followed concerns about Clinton’s health, and it also (and relatedly) coincided with reduced mentions of emails. The trend continued after the recent bombings in New York and New Jersey became prominent, and then came her performance in the debate. All of this coverage led to a steady increase in Clinton’s advantage among Democrats.

Figure: Daily tone of recalled information about Clinton (through September 29).

For Republicans, the picture is nearly the opposite.  The gap between recalled information about Trump and recalled information about Clinton was striking through the first few weeks of September.  While Democrats did not recall information favorable to Clinton, Republicans clearly recalled information favorable to Trump.  But responses started to shift in the middle of the month and the ‘Trump Advantage’ in the tone of recalled information from Republicans has continued to fall since the first debate.

Figure: Daily tone of recalled information about Trump (through September 29).

What do these findings suggest about the presidential campaign thus far? While these results do not capture vote intentions, nor are they direct assessments of the candidates, these data do give us a unique sense of the information that voters remember. Whether shifts in ‘read, seen or heard’ mentions are predictive of attitudes toward the candidates remains to be seen. Exploring this possibility is one objective of the ongoing project.

The Gallup, Michigan, Georgetown Working Group consists of: Frank Newport, Lisa Singh, Stuart Soroka, Michael Traugott, and Andrew Dugan.

Related Article: After the Debate, Trump is still dominating news coverage. But Clinton is getting the good press. The Washington Post.

Identifying the Sources of Scientific Illiteracy

Post developed by Catherine Allen-West in coordination with Josh Pasek

ICYMI (In Case You Missed It), the following work was presented at the 2016 Annual Meeting of the American Political Science Association (APSA).  The presentation, titled “Motivated Reasoning and the Sources of Scientific Illiteracy” was a part of the session “Knowledge and Ideology in Environmental Politics” on Friday, September 2, 2016.

At APSA 2016, Josh Pasek, Assistant Professor of Communication Studies and Faculty Associate at the Center For Political Studies presented work that delves into the reasons that people do not believe in prevailing scientific consensus.

He argues that widespread scientific illiteracy in the general population is not simply a function of ignorance. In fact, there are several reasons why an individual may answer a question about science or a scientific topic incorrectly:

  1. They are ignorant of the correct answer
  2. They have misperceptions about the science
  3. They know what scientists say and disagree (rejectionism)
  4. They are trying to express some identity that they hold in their response

The typical approach to measuring knowledge involves asking individuals multiple-choice questions where they are presumed to know something when they answer the questions correctly and to lack information when they either answer the questions incorrectly or say that they don’t know.

Pasek Slide 2

Pasek suggests that this current model for measuring scientific knowledge is flawed, because individuals who have misperceptions can appear less knowledgeable than those who are ignorant. So he and his co-author Sedona Chinn, also from the University of Michigan, set out with a new approach to disentangle these cognitive states (knowledge, misperception, rejectionism and ignorance) and then determine which sorts of individuals fall into each of the camps.

Instead of posing multiple-choice questions, the researchers asked participants what most scientists would say about a certain scientific topic (like climate change or evolution) and then examined how those answers compared to the respondents’ personal beliefs.

Pasek Slide 4

Across two waves of data collection, respondents’ answers about scientific consensus could fall into four patterns: consistently correct, correct then incorrect, incorrect then correct, or consistently incorrect.

Pasek Slide 5

This set of cognitive states lends itself to a set of equations producing each pattern of responses:

Consistently Correct = Knowledge + 0.5 × Learning + 0.25 × Ignorance
Correct → Incorrect = 0.25 × Ignorance
Incorrect → Correct = 0.5 × Learning + 0.25 × Ignorance
Consistently Incorrect = Misperception + 0.25 × Ignorance

The researchers then reverse-engineered this estimation strategy for a survey aimed at measuring knowledge on various scientific topics. This yielded the following sort of translations:

Pasek Slide 6
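As a hypothetical sketch (not the authors’ actual code), the four equations above can be inverted to recover estimates of each cognitive state from the observed shares of each response pattern:

```python
# Invert the four response-pattern equations to estimate the share of
# respondents in each cognitive state. Inputs are the observed proportions
# of each pattern across the two waves; the algebra follows directly from
# the equations in the text.

def estimate_states(cc: float, ci: float, ic: float, cinc: float) -> dict:
    """cc: consistently correct, ci: correct -> incorrect,
    ic: incorrect -> correct, cinc: consistently incorrect."""
    ignorance = 4 * ci                        # Correct -> Incorrect = 0.25 * Ignorance
    learning = 2 * (ic - 0.25 * ignorance)    # Incorrect -> Correct = 0.5*Learning + 0.25*Ignorance
    knowledge = cc - 0.5 * learning - 0.25 * ignorance
    misperception = cinc - 0.25 * ignorance
    return {"knowledge": knowledge, "learning": learning,
            "ignorance": ignorance, "misperception": misperception}
```

For example, with illustrative pattern shares of 0.55, 0.05, 0.15, and 0.25, the estimated states sum back to 1, as they should.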

In addition to classifying respondents as knowledgeable, ignorant, or misinformed, Pasek was especially interested in identifying a fourth category: rejectionist. These are individuals who assert that they know the scientific consensus but fail to hold corresponding personal beliefs. Significant rejectionism was apparent for most of the scientific knowledge items, but was particularly prevalent for questions about the big bang, whether humans evolved, and climate change.

Pasek Slide 3

Rejectionism surrounding these controversial scientific topics is closely linked to religious and political motivations. Pasek’s novel strategy of parsing out rejectionism from ignorance and knowledge provides evidence that religious individuals are not simply ignorant about the scientific consensus on evolution or that partisans are unaware of climate change research. Instead, respondents appear to have either systematically wrong beliefs about the state of the science or seem liberated to diverge in their views from a known scientific consensus.

Pasek’s results show a much more nuanced, yet at times predictable, relationship between scientific knowledge and belief in scientific consensus.

 

Motivated Reasoning in the Perceived Credibility of Public Opinion Polls

Post developed by Catherine Allen-West and Ozan Kuru.

ICYMI (In Case You Missed It) the following work was presented at the 2016 Annual Meeting of the American Political Science Association (APSA). The presentation, titled “Motivated Reasoning in the Perceived Credibility of Public Opinion Polls,” was part of the session “Surprises: A Magical Mystery Tour of Public Opinion and Political Psychology” on Saturday, September 3, 2016.

Polls have been an integral part of American democracy, political rhetoric, and news coverage since the 1930s. Today, new polls are reported constantly, showing public opinion on a range of issues from the President’s approval rating to the direction of the country. Polls remain relevant because numbers and statistical evidence have long been regarded as sound support for one’s beliefs and affiliations; similarly, polls are supposed to provide relatively objective information in politics.

However, despite their importance and ever-increasing prevalence, polls are often heavily criticized by both the public and politicians, especially when they fail to predict election outcomes. Such criticism and discounting of poll credibility are important, because people’s perceptions of polls matter. In such an environment, the perceived credibility of polls becomes an important issue for the public’s reception of poll findings, which in turn determines the likelihood that their results have any meaningful impact.

Continue reading

New research contest announced to study the 2016 election

Post developed by Catherine Allen-West and Arthur Lupia

ICYMI (In Case You Missed It) this post details the Election Research Preacceptance Competition, organized by Arthur Lupia and Brendan Nyhan. Lupia discussed this initiative at the “Roundtable on the CPS Special Issue on Transparency in the Social Sciences” at APSA 2016 on Friday, September 2, 2016.

How can scholars study politics most effectively? The Election Research Preacceptance Competition (http://www.erpc2016.com) is an innovative initiative that will test a new approach to conducting and publishing political science research during the 2016 election.

Entrants in the competition will preregister a research design intended to study an important aspect of the 2016 general election using data collected by the American National Election Studies (ANES). A condition of entry is that entrants must complete and register a design before the ANES data are released. Many leading academic journals have agreed to review scholarly articles that include these research plans before the data are available or results are known.

Continue reading

Income and Preferences for Centralization of Authority

Post developed by Catherine Allen-West in coordination with Diogo Ferrari

DiogoFerrari

Diogo Ferrari, PhD Candidate, University of Michigan, Ann Arbor

ICYMI (In Case You Missed It), the following work was presented at the 2016 Annual Meeting of the American Political Science Association (APSA).  The presentation, titled “The Indirect Effect of Income on Preferences for Centralization of Authority,” was a part of the session “Devolution, Fragmented Power, and Electoral Accountability” on Thursday September 1, 2016.

One of the primary activities of any elected government is deciding how to allocate public funds for policies like health care and education. In countries with a federal system – like the United States, Canada, Australia, Germany, and others – the central government usually has policies that promote the distribution of fiscal resources among different jurisdictions, such as states or cities. Take Australia, for example: the federal government collects taxes that are funneled to local governments in accordance with their needs, which diminishes the inequality between Australian sub-national governments in their capacity to invest and provide public services. Brazil is another example: it has a huge federal program that transfers resources from rich to poor states, with the goal of reducing regional inequality. These federal governments can only continue to operate in this way – that is, promoting interregional redistribution – if the power to control fiscal resources is centralized. Therefore, there is a connection between interregional redistribution and centralization of authority.

Now, voters have different preferences about how the government should spend fiscal resources. They have different opinions, for instance, about the degree to which taxes collected in one region should be invested in another. Do voters who support interregional redistribution also prefer that fiscal authority be concentrated in the hands of the federal government rather than sub-national ones? Which characteristics determine voters’ preferences regarding interregional redistribution and centralization of authority? How are those preferences connected?

Continue reading

Support for the Islamic State in the Arab World

Post developed by Catherine Allen-West in coordination with Michael Robbins.

ICYMI (In Case You Missed It), the following work was presented at the 2016 Annual Meeting of the American Political Science Association (APSA).  The presentation, titled “Passive Support for the Islamic State: Evidence from a Survey Experiment” was a part of the session “Survey and Laboratory Experiments in the Middle East and North Africa” on Thursday, September 1, 2016.

On Thursday morning at APSA 2016, Michael Robbins,  Amaney Jamal and Mark Tessler presented work which explores levels of support for the Islamic State among Arabs, using new data from the Arab Barometer. The slide set used in their presentation can be viewed here: slides from Robbins/Jamal/Tessler presentation

Their results show that among the five Arab countries studied (Jordan, Morocco, Tunisia, Palestine and Algeria), there is very little support for the tactics used by the Islamic State.

Figure: Support for Islamic State tactics across the five countries studied.

Furthermore, even among Islamic State’s key demographic –  younger, less-educated males – support remains low.

Figure: Support for the Islamic State among younger, less-educated males.

For a more elaborate discussion of this work and the above figures, please see their recent post on the Washington Post’s Monkey Cage blog, “What do ordinary citizens in the Arab world really think about the Islamic State?”

Mark Tessler is the Samuel J. Eldersveld Collegiate Professor of Political Science at the University of Michigan. Michael Robbins is the director of the Arab Barometer. Amaney A. Jamal is the Edwards S. Sanford Professor of Politics at Princeton University and director of the Mamdouha S. Bobst Center for Peace and Justice.