Rising inequality isn’t driving mass public support for redistribution: Charlotte Cavaillé’s ‘Fair Enough?’ explains why not.

In the past, excessive economic inequality has ended… badly. As Charlotte Cavaillé points out in her new book that studies the public’s reaction to rising inequality, “only mass warfare, a state collapse, or catastrophic plagues have significantly altered the distribution of income and wealth.” Will this time be different?

Through income redistribution, democratic political institutions today have a clear mechanism to peacefully address income inequality if voters demand it. Still, as Cavaillé highlights in Fair Enough? Support for Redistribution in the Age of Inequality (Cambridge University Press), growing wealth and income inequality is not leading to greater demand for an egalitarian policy response, as many would expect.

Cavaillé reports there is little evidence of rising support for redistribution, especially among the worse off. Consider public opinion in the two Western countries with the sharpest increase in income inequality: In Great Britain, public support for redistribution is decreasing, and in the United States, the gap between the attitudes of low-income and high-income voters is narrowing. What, asks Cavaillé, can we conclude about public opinion’s role as a countervailing force to rising inequality?

Based on Cavaillé’s doctoral work, Fair Enough? introduces a framework for studying mass attitudes toward redistributive social policies. Cavaillé shows that these attitudes are shaped by at least two motives: material self-interest and fairness concerns. On the one hand, people support policies that would increase their own expected income. On the other, they also support policies that, if implemented, “would move the status quo closer to what is prescribed by shared norms of fairness.” Material interest comes most into play when policies have large material consequences, according to Cavaillé, but in a world of high uncertainty and low personal stakes, considerations of fairness trump considerations about one’s personal pocketbook.

How fair is it for some to make a lot more money than others? How fair is it for some to receive more benefits than they pay in taxes? Cavaillé emphasizes two norms of fairness that come into play when we think about such questions: proportionality, where rewards are proportional to effort and merit, and reciprocity, where groups provide basic security to members who contribute cooperatively. Policy disagreement arises because people hold different empirical beliefs about how well the status quo aligns with what these norms of fairness prescribe.

With fairness reasoning in the picture, Cavaillé writes, “baseline expectations are turned on their heads: Countries that are more likely to experience an increase in income inequality are also those least likely to interpret this growth as unfair.”

Should we expect growing support for redistribution to be a driving force behind policy change in the future? A change in aggregate fairness beliefs, Cavaillé argues, will require a perfect storm: on the one hand, a discursive shock that repeatedly exposes people to critiques of the status quo as unfair, and on the other, a large subset of individuals whose own experiences predispose them to accept these claims as true. Policy changes in postindustrial democracies are possible, Cavaillé concludes, but they are unlikely to come in response to a pro-redistribution shift in public opinion.

Charlotte Cavaillé is an assistant professor of public policy at the University of Michigan’s Gerald R. Ford School of Public Policy and an affiliate of the Center for Political Studies at the Institute for Social Research. Her dissertation, on which Fair Enough? is based, received the 2016 Mancur Olson Best Dissertation Award.

Tevah Platt and Charlotte Cavaillé contributed to the development of this post.

How Voter Loyalties Change

This post was developed by Ken Kollman and Tevah Platt, based on the talk, “When People Change Their Partisanship, is it Bottom-Up or Top-Down?” that Ken Kollman presented for the Research Center for Group Dynamics Winter Seminar Series on Political Polarization (2023) at the University of Michigan Institute for Social Research. Ken Kollman is the Director of the Center for Political Studies.

Dynamic Partisanship: How and Why Voter Loyalties Change

Partisanship is sticky. People tend to vote like their parents and to maintain their partisan leanings over time. But to understand partisanship, we need a model that can explain why people change party loyalties when they do. This is what Ken Kollman and John E. Jackson of the University of Michigan Center for Political Studies (CPS) provide in Dynamic Partisanship: How and Why Voter Loyalties Change. The following summarizes their overarching argument.

What is partisanship?

Partisanship is a group-based, shared identity. A classic work from 1960, The American Voter, also out of ISR, describes partisan identity as a long-term, affective, psychological attachment to a political party. According to this famous “Michigan model,” the socially-informed attitudes and values we form early in life durably influence the way we identify with political parties and how we vote.

Kollman and Jackson argue that partisanship has similarities to brand loyalty. It’s relatively stable and habitual, but it’s also evaluative and cognitive. Parties compete for votes and, importantly, for voter loyalty among “consumers” who are considering and comparing candidates and party ideas. Voters “experience” parties in office and in campaigns, and evaluate parties the way consumers evaluate products. Yet voting for the same party over time can also become habitual, until voters grow dissatisfied with their choice.

What drives partisanship change?

Ronald Reagan often said that he didn’t leave the Democratic party, but the Democratic party left him. The quip encapsulates what Kollman and Jackson find to be the primary answer to the question of what moves partisanship. Two other processes also influence partisan dynamics – changes in people’s political attitudes and their evaluations of the performance of politicians in office – but it is the behavior of parties that they find contributes most to changing partisanship.

  • At the micro-level, partisanship is driven by evaluations of parties and politicians who are themselves changing for strategic reasons to try to win office.
  • At the macro-level, party polarization is a consequence of elite-level competition for voters, mostly at a national scale– for example, in response to national policies and movements.

In the broader debates about polarization, Kollman and Jackson stake out the position that polarization is driven by elite-level competition for power, not by ordinary people changing their minds about their ideologies or issue positions. It’s top-down, driven by what politicians and their parties do.

How parties compete

A canonical model of party competition came out of the mid-century work of Anthony Downs, who developed a theory of party competition in ideological space. This theory drew a picture of the Democratic and Republican parties converging on the “median voter” the way ice cream trucks converge at the middle of a beach to attract the most customers. More complex models recognize that political ideology and conflict take place in multiple dimensions; on the ground, for example, a candidate or party moving right on social issues could be moving left on economic policy, perhaps testing out the impact on voters.

A case in point: the language of industrial protectionism (saving factories) was an economically leftward move by the Trump-guided GOP that effectively turned Ohio from purple to red by attracting white voters in northeastern Ohio to the Republicans. Dynamic Partisanship tracks such patterns across the US, the UK, Canada, and Australia over more than half a century, and the overarching finding is that parties are the moving gear in dynamic partisanship. Voters themselves don’t need to move; partisanship changes because voters react to parties that move – and that’s the underlying dynamic.

Partisan trends in the US

[Figure: partisanship of northern whites, southern whites, and Black voters, 1956 to the present]

This figure, from Dynamic Partisanship, plots partisanship among three groups of the U.S. electorate – northern whites, southern whites, and African Americans – from 1956 to 2016, with Democratic partisanship increasing on the y-axis. There are three distinct patterns:

  • Northern white partisanship is the most stable, coming closest to the traditional view of party identification as an unchanging personal attribute;
  • The 1964 election, on the heels of the passage of the Civil Rights Act, is a critical turning point in African American partisanship, making a full-point leap and remaining consistently high on the Democratic scale from that time;
  • Southern white partisanship shows a strong, gradual trend shifting from moderately Democratic to weakly Republican over 61 years.

The twin phenomena of southern Black voters becoming more Democratic since the 1960s and southern whites becoming slowly more Republican over time represent two of the major tectonic shifts in American society and politics that have occurred in the last half century.

The innovation is that the model used in Dynamic Partisanship can accommodate these divergent patterns – relative stasis, abrupt changes, and gradual changes. For the details and the myriad examples, check out the book.

Latinos for Trump in 2020

Post developed by Francy Luna Diaz and Tevah Platt, based on the work Luna Diaz presented at the 2022 Annual Meeting of the American Political Science Association (APSA), “Latinos for Trump in 2020: A Story of Heterogeneous Information Environments and Social Media.”


Latinos continue to surprise with their sustained and increased support for former President Trump. After numerous episodes during his campaign for president and time in office espousing what was seen as anti-Latino and anti-immigrant rhetoric and policies, it is puzzling to many that almost one in three Latinos voted for him during the 2020 presidential election. 

A pro-Donald Trump rally outside a CNN building on Sunset Blvd in Hollywood, Los Angeles, October 22, 2016.

What is his appeal to many Latinos? Scholars have found that many Latinos agree with his policies, and more Latinos say they are Republicans than before. Political science PhD student Francy Luna Diaz, however, wants to probe deeper. She analyzes why some Latinos are attracted to Trump and others are not. She proposes that Latinos’ information environments are crucial to understanding the wide variation in the group’s political attitudes.

Latinos are embedded in information environments that differ from those of other Americans because of distinct social media use and social networks. Information environments refer to the various sources of information that people have around them, mainly social media platforms. Latinos, in fact, are more likely than other ethnic groups to rely on social media and messaging applications to share information with, and obtain information from, close friends and relatives.

Additionally, Latinos often maintain ties to Latin America and are exposed to political information about and emerging from Latin America.

These factors combined—higher use of social media and more diverse information networks—increased Latinos’ vulnerability to disinformation and misinformation in 2020 and may have influenced some Latinos to distance themselves from the Democratic party.  

Alex Otaola: “The regime plans to create a Cuban migration crisis at the US border before the elections”

An example of social media misinformation: Cuban-American influencer Alex Otaola falsely claimed Democrats were going to send a caravan of Cuban immigrants to the US border to disrupt the election.

Luna Diaz analyzed ANES data from 2020 and 2016 to explore whether traditionally recognized factors such as party identification, age, education, income, trust, generation in the U.S., language, and place of birth, among others, correlated with respondents’ decision to vote for Trump. She found that in 2020, Latinos who used Facebook more frequently were significantly more likely to vote for Trump, while the same pattern was not present for non-Latinos. 
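To make the setup concrete, here is a minimal sketch of what such an analysis might look like. It is an illustration under stated assumptions, not Luna Diaz’s actual code: the file name and the variable names (voted_trump, facebook_use, latino, and the controls) are hypothetical stand-ins for the relevant ANES items.

```python
# Minimal sketch (not the author's code): logistic regression of Trump vote
# on Facebook use, with an interaction allowing the association to differ
# for Latino and non-Latino respondents. All names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

anes = pd.read_csv("anes_2020_extract.csv")  # hypothetical extract of the ANES 2020 Time Series

model = smf.logit(
    "voted_trump ~ facebook_use * latino + age + education + income + party_id",
    data=anes.dropna(subset=["voted_trump", "facebook_use", "latino"]),
).fit()

print(model.summary())
```

In a specification like this, a positive and significant facebook_use:latino interaction term would correspond to the pattern Luna Diaz reports: more frequent Facebook use is associated with a Trump vote among Latinos but not among non-Latinos.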

Luna Diaz also looked at answers to open-ended questions in the ANES, summarized in the table below, and found that while Latinos offered similar considerations when discussing why they liked each candidate or party, differences emerged when they discussed why they disliked the Democratic candidate and the Democratic party in 2020. Interestingly, the reasons offered may point to (sometimes false or misleading) news spreading online claiming that the Democratic party is leading the U.S. toward socialism and that President Biden behaves inappropriately with children.

Summary of open-ended responses to the ANES 2020 Time Series

Understanding Latinos’ political behavior is crucial to evaluating the present and future of American electoral politics. Latinos’ share of the population is steadily increasing along with their political influence in close elections. On a larger scale, it is important to uncover whether the spread of potential disinformation via social media impacts the political participation of different groups. Understanding the effect of online disinformation and misinformation will only increase in importance as democracy remains under threat in the United States.

Online survey respondents reveal different personality traits compared to face-to-face respondents

Post developed by Nicholas Valentino and Katherine Pearson

Survey research is an ever-evolving field. Technology has increased the number of ways to reach respondents, while simultaneously reducing response rates by freeing people from the constraints of one land-line telephone per household. Surveys remain an essential tool for making inferences about societal and political trends, so many survey researchers offer incentives to survey respondents in order to ensure a large and representative sample. Financial incentives to complete surveys, in turn, entice some people to respond to a large number of online surveys on a regular basis, essentially becoming professional survey respondents. 

Survey methodologists have carefully considered the ways that survey modes may impact the way people answer questions. Talking to a real person is different from answering questions online. But less is known about how individual factors bias participation in surveys in the first place. For example, might personality traits shape whether you agree to answer a survey online versus from an interviewer who comes to your door? New work from researchers at the University of Michigan and Duke University suggests that this is in fact the case.

In a new paper published in Public Opinion Quarterly, Nicholas A. Valentino, Kirill Zhirkov, D. Sunshine Hillygus, and Brian Guay find that citizens who are most open to new experiences may be underrepresented in online surveys. Furthermore, “Since openness to experience in particular is associated with liberal policy positions, differences in this trait may bias estimates of public opinion derived from professionalized online panels.”

In order to examine the personality traits of survey respondents, the research team used data from the 2012 and 2016 American National Election Studies (ANES). In both years, the ANES ran parallel online and face-to-face surveys and included the 10-item personality inventory (TIPI), which consists of pairs of items asking respondents to assess their own traits. Based on the responses, researchers construct a profile of each respondent’s “Big Five” personality traits: openness to experience, conscientiousness, extraversion, agreeableness, and emotional stability.

Big Five traits with corresponding TIPI qualities (items marked “Original” are scored as given; items marked “Reversed” are reverse-coded):

  • Openness to experience: “Open to new experiences, complex” (Original); “Conventional, uncreative” (Reversed)
  • Conscientiousness: “Dependable, self-disciplined” (Original); “Disorganized, careless” (Reversed)
  • Extraversion: “Extraverted, enthusiastic” (Original); “Reserved, quiet” (Reversed)
  • Agreeableness: “Critical, quarrelsome” (Reversed); “Sympathetic, warm” (Original)
  • Emotional stability: “Anxious, easily upset” (Reversed); “Calm, emotionally stable” (Original)
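To make the coding concrete, here is a minimal sketch of how Big Five scores are typically built from the ten TIPI items, following the Original/Reversed pattern listed above; the column names are hypothetical stand-ins, not the actual ANES variable names.

```python
# Minimal sketch: build Big Five scores from TIPI item pairs.
# Each trait averages one item kept as-is ("Original") and one reverse-coded
# item ("Reversed"). On the usual 1-7 TIPI scale, reversing means 8 - response.
# Column names are hypothetical, not actual ANES variable names.
import pandas as pd

def reverse(item: pd.Series, scale_max: int = 7) -> pd.Series:
    """Reverse-code a TIPI item measured on a 1..scale_max scale."""
    return (scale_max + 1) - item

def score_tipi(df: pd.DataFrame) -> pd.DataFrame:
    """Return the five trait scores, one column per trait."""
    return pd.DataFrame({
        "openness": (df["open_complex"] + reverse(df["conventional_uncreative"])) / 2,
        "conscientiousness": (df["dependable_disciplined"] + reverse(df["disorganized_careless"])) / 2,
        "extraversion": (df["extraverted_enthusiastic"] + reverse(df["reserved_quiet"])) / 2,
        "agreeableness": (reverse(df["critical_quarrelsome"]) + df["sympathetic_warm"]) / 2,
        "emotional_stability": (reverse(df["anxious_upset"]) + df["calm_stable"]) / 2,
    })
```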

Researchers were able to compare responses to the TIPI with measures of political predispositions and policy preferences, based on responses to questions on the ANES. These include partisanship, liberal–conservative ideology, issue self-placements, and other measures of political orientation. 

Based on these data, the authors found that respondents in the online samples were, on average, less open to experience and more politically conservative on a variety of issues compared to those responding to face-to-face surveys. They also found that the more surveys a respondent completed, the lower they scored on measures of openness. Given that professionalized survey respondents comprise the majority of online survey samples, these results suggest caution for those who would like to generalize results to the population at large. It is not enough to balance samples on simple demographics; attitudinal and personality-based differences might also lead online sample estimates to diverge from the truth.

It is difficult to say whether online survey respondents or face-to-face respondents are more representative of personality traits in the general population. If personality is a factor in whether someone will participate in a survey, that might bias both types of samples. However, the authors note that the data suggest that professional online samples are the outlier. They find “that samples based on fresh cross-sections, both face-to-face and online, yield better population estimates for personality and political attitudes compared to professionalized panels.” While it may be possible to mitigate the potential sampling bias of personality traits, it is important that survey researchers understand the role that personality traits play in professional online samples.

Update on the ANES 2020 Time Series Study

Post developed by Ted Brader, Lauren Guggenheim, and Katherine Pearson

In every U.S. presidential election since 1948, the American National Election Studies (ANES) has conducted pre- and post-election surveys of a large representative sample of American voters. ANES participant interviews looked different in 2020 than they did in the past; the COVID-19 pandemic made traditional face-to-face interviews impractical and risky. The study team began planning for the extraordinary circumstances in March, without any idea what the conditions would be when interviews began in August. The team pivoted nimbly to redesign the study even as the onset of data collection approached.

The majority of interviews in 2020 were completed as web surveys, some following an online format similar to one used in 2016, and others using an innovative mixed-mode design. Respondents to the mixed-mode surveys were randomly assigned either to complete the questionnaire by themselves online, or to take the survey with a live interviewer via a Zoom video link. Few surveys conduct live video interviews, but the ANES study team felt that it was critical to explore the use of this technology as a potential means of balancing issues of cost, continuity, and data quality. 

To answer online surveys, respondents must have reliable access to the Internet and comfort using computers. Under normal circumstances, people without access to computers or the Internet in their homes can gain access in public settings like libraries or at their workplace. With many of these places closed due to the pandemic, online access became a bigger challenge. In mixed-mode cases where it was difficult to complete a web or video interview, interviewers contacted the respondents to secure a phone interview. Providing phone interviews helped the team strengthen sample quality by reaching respondents without access to the Internet as well as those who are less comfortable using computers. 

Data collection for the 2020 surveys, out of necessity, departed significantly from the practices of the past 70 years of the ANES. The study team will continue to monitor and address the implications of these changes. In the end, the team was pleased to field a very high quality survey with relatively high response rates, thoroughly vetted questions, and the largest sample in the history of ANES. 

Pre-election surveys

Pre-election interviews began in August 2020. The pre-election questionnaire is available on the ANES website. The questionnaire includes time series questions dating back to the earliest days of the ANES survey, as well as new questions that reflect more recent developments in the study of American politics. The ANES team must always be prepared to add a few questions late in the design process to capture substantial developments in the presidential campaign or American society. In 2020 the survey added questions about election integrity, urban unrest, and COVID-19, among other topics. 

The investigators, ANES staff, and their survey operations partners at Westat monitored the data collection closely, in case further adjustments in procedures or sample were required. The final pre-election sample consists of over 8,200 complete or sufficient-partial interviews. This includes a reinterview panel with the respondents from the ANES 2016 Time Series. Over 2,800 respondents from the 2016 study were reinterviewed, more than three quarters of the original group. 

Post-election surveys

Post-election interviews began on November 8, 2020, and will be completed on January 4, 2021. This post-election effort includes additional respondents who took part in the 2020 General Social Survey (GSS). Due to the pandemic-altered timing of the GSS data collection, it was not possible to interview these individuals prior to the election. However, these respondents completed nearly all of the ANES post-election interview, plus almost ten minutes of critical questions that appeared on the ANES pre-election interview, and several additional questions suggested by the GSS team.

ANES staff will continue to review and clean the data into the new year, including checks of respondent eligibility that may alter the final sample in modest ways. Pending this review, the team expects response rates to come in slightly below the 2016 web response rates.

Overall, despite the challenges of this past year, the ANES study team was able to gather robust data from a large probability sample of Americans, extending the longest-running, most in-depth, and highest quality survey of US public opinion and voting behavior, at a critical juncture for American society and democracy. The team will continue to share updates, here and on the ANES website, as data from this survey become available. 

The American National Election Study (ANES): History and Insights from Recent Surveys

This year the American National Election Study (ANES) will conduct its 19th time series study of a presidential election. In every U.S. presidential election since 1948, the ANES has conducted pre- and post-election surveys of a large representative sample of American voters. 

On August 12, 2020, Vincent Hutchings gave a talk outlining the history of the study, and why it is the “gold standard” of political surveys. You can view a recording of his talk below, and view tweets about the talk here.

 

The history and significance of the ANES

The ANES was originally launched at the Institute for Social Research at the University of Michigan. Since 2005, the study has been a collaboration between the University of Michigan and the Institute for Research in the Social Sciences at Stanford University.

Since 1977, the ANES has been funded by the National Science Foundation. It is used by scholars as well as high-school students, college students, and journalists. The data are made publicly available online for free as soon as they are processed after the election; principal investigators of the study do not receive privileged access to the survey data.

The ANES aims to answer two fundamental questions: How do citizens select the candidate they vote for? And why do some citizens participate in politics (e.g., vote, work on campaigns, etc.) while others do not? These questions are answered with nationally representative survey data.

The value of the ANES comes not only from the care and precision brought to designing questions, but also from the way the study balances continuity and innovation. In order to achieve this balance, the ANES asks identical questions over time about vote choice, turnout, party identification, ideology, political information, and attitudes about candidates. But even as questions are preserved over time, new questions are added about issues as they arise. The investigators and board members solicit public input on new questions and determine which ones will add value. 

Recent data trends

Professor Hutchings outlined findings from some of the questions that were recently added to the ANES, including questions about the Black Lives Matter movement and police misconduct. 

Respondents to the 2016 ANES were asked to rate the Black Lives Matter movement on a 0-100 “feeling thermometer” scale. Ratings above 50 degrees signal favorable feelings toward the group; ratings below 50 degrees signify unfavorable feelings. Respondents rate the group at the 50-degree mark if they feel neither particularly warm nor cold toward it.
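As a small illustration of that coding (a sketch, not the ANES’s own processing; the variable name blm_therm is hypothetical), the 0-100 ratings could be grouped like this:

```python
# Minimal sketch: classify 0-100 feeling-thermometer ratings as cold (below 50),
# neutral (exactly 50), or warm (above 50). 'blm_therm' is a hypothetical name
# for the Black Lives Matter thermometer item.
import pandas as pd

def classify_thermometer(score: pd.Series) -> pd.Series:
    """Map integer 0-100 thermometer ratings to cold / neutral / warm."""
    return pd.cut(
        score,
        bins=[-0.1, 49.9, 50.0, 100.0],
        labels=["cold", "neutral", "warm"],
    )
```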

Graphic showing feelings about the Black Lives Matter movement by party and race.

Hutchings points out that there are important partisan and racial divides in the results shown above. For example, in 2016, Black Republicans had warmer feelings toward the Black Lives Matter movement than white Democrats. This question will be repeated in the 2020 study, giving researchers a way to track changes in perceptions of the movement over time.

Attitudes toward the Black Lives Matter movement were a very strong predictor of the candidate a respondent would vote for in 2016. As Hutchings showed using the graphic below, voters who supported the Black Lives Matter movement were much more likely to support Hillary Clinton for president. 

Graphic showing the relationship between support for the Black Lives Matter movement and probability of voting for Hillary Clinton in 2016.

Similarly, perceptions of police violence were correlated with voter preference. Respondents who believed that whites were treated better by the police were much more likely to support Hillary Clinton than respondents who believed the police were unbiased.

Graphic showing the effect of perceptions of anti-Black police bias on support for Hillary Clinton in 2016.

The value of the ANES

Professor Hutchings concluded his talk by reflecting on the value of the ANES. “It allows us an opportunity to assess the health of our democracy,” he said. “We can assess levels of trust in government, levels of perceived corruption in government, levels of racial animus, levels of religious and gender intolerance. We can assess how things have changed – or how things have not changed – over time. And we can only do this as a consequence of this study.”