Category Archives: Innovative Methodology

Computer simulations reveal partisan gerrymandering 

Post developed by Katherine Pearson 

How much does partisanship explain how legislative districts are drawn? Legislators commonly agree on neutral criteria for drawing district lines, but the extent to which partisan considerations overshadow these neutral criteria is often the subject of intense controversy.

Jowei Chen developed a new way to analyze legislative districts and determine whether they have been unfairly gerrymandered for partisan reasons. Chen, an Associate Professor of Political Science and a Research Associate at the Center for Political Studies, used computer simulations to produce thousands of non-partisan districting plans that follow traditional districting criteria. 

[Figure: Simulated North Carolina district map]

These simulated district maps formed the basis of Chen’s recent expert court testimony in Common Cause v. Lewis, a case in which plaintiffs argued that North Carolina state legislative district maps drawn in 2017 were unconstitutionally gerrymandered. By comparing the non-partisan simulated maps to the existing districts, Chen was able to show that the 2017 districts “cannot be explained by North Carolina’s political geography.” 

The simulated maps ignored all partisan and racial considerations. North Carolina’s General Assembly adopted several traditional districting criteria for drawing districts, and Chen’s simulations followed only these neutral criteria, including: equalizing population, maximizing geographic compactness, and preserving political subdivisions such as county, municipal, and precinct boundaries. By holding constant all of these traditional redistricting criteria, Chen determined that the 2017 district maps could not be explained by factors other than the intentional pursuit of partisan advantage. 

Specifically, when compared to the simulated maps, Chen found that the 2017 districts split far more precincts and municipalities than was reasonably necessary, and were significantly less geographically compact than the simulations. 

By disregarding these traditional standards, the 2017 House Plan was able to create 78 Republican-leaning districts out of 120 total; the Senate Plan created 32 Republican-leaning districts out of 50. 

Using data from 10 recent elections in North Carolina, Chen compared the partisan leanings of the simulated districts to the actual ones. Every one of the simulated maps based on traditional criteria created fewer Republican-leaning districts. In fact, the 2017 House and Senate plans were extreme statistical outliers, demonstrating that partisanship predominated over the traditional criteria in those plans. 
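
To make that test concrete, here is a minimal sketch in Python of the outlier comparison. It is not Chen's actual code, and the seat counts are placeholder values; the point is simply to ask where the enacted plan falls in the distribution of Republican-leaning seat counts produced by the nonpartisan simulations.

```python
import numpy as np

# Placeholder data: the number of Republican-leaning districts produced by
# each of 1,000 nonpartisan simulated plans (illustrative values only).
rng = np.random.default_rng(0)
simulated_gop_seats = rng.integers(60, 72, size=1000)

enacted_gop_seats = 78  # the 2017 House Plan's Republican-leaning districts

# Share of nonpartisan simulations at least as Republican-favoring as the
# enacted plan; a share at or near zero marks an extreme statistical outlier.
share_as_extreme = np.mean(simulated_gop_seats >= enacted_gop_seats)
print(f"Simulations with >= {enacted_gop_seats} GOP-leaning districts: {share_as_extreme:.1%}")
```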

The judges agreed with Chen’s analysis that the 2017 maps displayed Republican bias, compared to the maps he generated by computer that left out partisan and racial considerations. On September 3, 2019, the state court struck down the maps as unconstitutional and enjoined their use in future elections. 

The North Carolina General Assembly rushed to adopt new district maps by the court’s deadline of September 19, 2019. To simplify the process, legislators agreed to use Chen’s computer-simulated maps as a starting point for the new districts. The legislature even selected randomly from among Chen’s simulated maps in an effort to avoid possible accusations of political bias in its new redistricting process.

Determining whether legislative maps are fair will be an ongoing process involving courts and voters across different states. But in recent years, the simulation techniques developed by Chen have been repeatedly cited and relied upon by state and federal courts in Pennsylvania, Michigan, and elsewhere as a more scientific method for measuring how much districting maps are gerrymandered for partisan gain. 

Accuracy in Reporting on Public Policy

Post developed by Katherine Pearson and Stuart Soroka

ICYMI (In Case You Missed It), the following work was presented at the 2019 Annual Meeting of the American Political Science Association (APSA). The presentation, titled “Media (In)accuracy on Public Policy, 1980-2018,” was part of the session “Truth and/or Consequences” on Sunday, September 1, 2019.

Citizens can be well-informed about public policy only if the media accurately present information on the issues. Concerns about misinformation and biased reporting in today’s media environment are valid, but inaccurate reporting is nothing new. In their latest paper, Stuart Soroka and Christopher Wlezien analyze historical data on media coverage of defense spending to measure how accurately that coverage tracked actual spending.

In order to measure reporting on defense spending, Soroka and Wlezien compiled the text of media reports between 1980 and 2018 from three corpora: newspapers, television transcripts, and public affairs-focused Facebook posts. Using the Lexis-Nexis Web Services Kit, they developed a database of sentences focused on defense spending from the 17 newspapers with the highest circulation in the United States. Similar data were compiled from transcripts of the three major television broadcasters (ABC, CBS, NBC) and cable news networks (CNN, MSNBC, and Fox). Although more difficult to gather, data from the 500 top public affairs-oriented public pages on Facebook were compiled for the years 2010 through 2017.

Soroka and Wlezien estimated the policy signal conveyed by the media sources by measuring the extent to which the text suggests that defense spending has increased, decreased, or stayed the same. Comparing this directly to actual defense spending over the same time period reveals the accuracy of year-to-year changes in the media coverage. For example, if media coverage were perfectly accurate, the signal would be exactly the same as actual changes in spending. 
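
A toy sketch of that comparison might look like the following. The yearly counts, the net-direction formula, and the correlation check are illustrative assumptions, not the authors' exact measurement.

```python
import pandas as pd

# Invented yearly data: counts of defense-spending sentences coded as
# signaling an increase or a decrease, plus actual spending levels.
df = pd.DataFrame({
    "year": [2014, 2015, 2016, 2017, 2018],
    "up_sentences": [320, 280, 410, 520, 480],
    "down_sentences": [410, 390, 300, 210, 260],
    "spending": [596, 586, 600, 606, 623],  # placeholder $billions
})

# One simple media "signal": the net direction of coverage each year.
df["media_signal"] = (df["up_sentences"] - df["down_sentences"]) / (
    df["up_sentences"] + df["down_sentences"]
)
df["spending_change"] = df["spending"].diff()  # actual year-to-year change

# Accuracy, on this toy measure, is how closely the signal tracks real change.
print(df["media_signal"].corr(df["spending_change"]))
```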

As the figure below shows, the signal is not perfect. While there are some years when the media coverage tracks very closely to actual spending, there are other years when there is a large gap between the signal that news reports send and the defense budget. The gap may not entirely represent misinformation, however. In some of these cases, the media may be reporting on anticipated future changes in spending. 

[Figure: The media policy signal compared to actual defense spending]

For most years, the gap representing misinformation is fairly small. Soroka and Wlezien note that this “serves as a warning against taking too seriously arguments focused entirely on the failure of mass media.” This analysis shows evidence that media coverage can inform citizens about policy change. 

The authors conclude that there are both optimistic and pessimistic interpretations of the results of this study. On one hand, for all of the contemporary concerns about fake news, it is still possible to get an accurate sense of changes in defense spending from the media, which is good news for democratic citizenship. However, they observed a wide variation in accuracy among individual news outlets, which is a cause for concern. Since long before the rise of social media, citizens have been at risk of consuming misinformation based on the sources they select. 

Using Text and Images to Examine 2016 Election Tweets

Post developed by Dory Knight-Ingram 

ICYMI (In Case You Missed It), the following work was presented at the 2019 Annual Meeting of the American Political Science Association (APSA). The presentation, titled “Using Neural Networks to Classify Based on Combined Text and Image Content: An Application to Election Incident Observation,” was part of the session “Deep Learning in Political Science” on Friday, August 30, 2019.

A new election forensics process developed by Walter Mebane and Alejandro Pineda uses machine learning to examine not just the text but also the images of Twitter posts considered to be reports of “incidents” from the 2016 US Presidential Election.

Mebane and Pineda show how to combine text and images into a single supervised learner for prediction in US politics using a multi-layer perceptron. The paper notes that in election forensics, polls are useful, but social media data may offer more extensive and granular coverage. 

The research team gathered individual observation data from Twitter in the months leading up to the 2016 US Presidential Election. Between Oct. 1-Nov. 8, 2016, the team used Twitter APIs to collect millions of tweets, arriving at more than 315,180 tweets that apparently reported one or more election “incidents” – an individual’s report of their personal experience with some aspect of the election process. 

At first, the research team used only the text associated with tweets. But the researchers note that sometimes the images in a tweet are informative while the text is not: the text alone may not make a tweet a report of an election incident, even though the image shows one.

To solve this problem, the research team implemented some “deep neural network classifier methods that use both text and images associated with tweets. The network is constructed such that its text-focused parts learn from the image inputs, and its image-focused parts learn from the text inputs. Using such a dual-mode classifier ought to improve performance. In principle our architecture should improve performance classifying tweets that do not include images as well as tweets that do,” they wrote.

“Automating analysis for digital content proves difficult because the form of data takes so many different shapes. This paper offers a solution: a method for the automated classification of multi-modal content.” The research team’s model “takes image and text as input and outputs a single classification decision for each tweet – two inputs, one output.” 
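
As a rough illustration of that architecture (a sketch, not the authors' implementation), a two-input, one-output classifier could be wired up as follows in PyTorch, assuming the text and image of each tweet have already been encoded into fixed-length vectors by pretrained models:

```python
import torch
import torch.nn as nn

class DualModeClassifier(nn.Module):
    """Sketch of a two-input, one-output tweet classifier (not the authors' code)."""

    def __init__(self, text_dim=768, image_dim=2048, hidden=256):
        super().__init__()
        # Each modality gets its own branch...
        self.text_branch = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.image_branch = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        # ...and a fused multi-layer perceptron makes the single decision.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # one output: incident report or not
        )

    def forward(self, text_emb, image_emb):
        fused = torch.cat([self.text_branch(text_emb), self.image_branch(image_emb)], dim=-1)
        return self.head(fused)  # raw logit; apply a sigmoid for a probability

model = DualModeClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 2048))  # a batch of 4 tweets
```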

The paper describes in detail how the research team processed and analyzed tweet images, which included loading image files in batches, restricting image types to .jpeg or .png, and using small image sizes for better data processing results.
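
A hedged sketch of that preprocessing stage, with an illustrative 64x64 target size and the file-type restriction the paper describes:

```python
from pathlib import Path
from PIL import Image
import numpy as np

ALLOWED = {".jpeg", ".jpg", ".png"}  # restrict image types, as the paper describes
TARGET_SIZE = (64, 64)               # small images keep processing tractable (size is illustrative)

def load_image_batch(paths, size=TARGET_SIZE):
    """Load a batch of tweet images as one array, skipping other file types."""
    batch = []
    for p in map(Path, paths):
        if p.suffix.lower() not in ALLOWED:
            continue
        img = Image.open(p).convert("RGB").resize(size)
        batch.append(np.asarray(img, dtype=np.float32) / 255.0)  # scale to [0, 1]
    return np.stack(batch) if batch else np.empty((0, *size, 3))
```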

The results were mixed.

The researchers trained two models using a sample of 1,278 tweets. One model combined text and images; the other used only text. In the text-only model, accuracy steadily increases, topping out at 99%. “Such high performance is testimony to the power of transfer learning,” the authors wrote.

However, the team was surprised that including the images substantially worsened performance. “Our proof-of-concept combined classifier works. But the model structure and hyperparameter details need to be adjusted to enhance performance. And it’s time to mobilize hardware superior to what we’ve used for this paper. New issues will arise as we do that.” 

Angela Ocampo Examines the Importance of Belonging

Post developed by Katherine Pearson and Angela Ocampo

Feelings of belonging are powerfully important. A sense of inclusion in a group or society can motivate new attitudes and actions. The idea of belonging, or attaining inclusion, is the centerpiece of Angela Ocampo’s research. Her dissertation exploring the effect of inclusion on political participation among Latinos will receive the best dissertation award from the American Political Science Association’s (APSA) Race and Ethnic Politics Section at the Fall 2019 APSA meeting.

Dissertation and Book Project

Dr. Ocampo’s dissertation grounds the theory of belonging and political participation within the literature. This research, which she is expanding into a book, finds that feelings of belonging in American society strongly predict higher levels of political engagement among Latinos. The project sits at the intersection of political science and political psychology. Dr. Ocampo draws on psychological research showing that belonging is a human need: people need to feel that they are part of a group in order to succeed and to have positive individual and group outcomes. She builds on these concepts to develop a theory of social belonging in the national community and of how such belonging shapes people’s perceived relationship to the polity.

The book will explore the social inclusion of racial and ethnic minorities and how it shapes the way they participate in politics. Dr. Ocampo argues that perceiving that you belong, and the extent to which others accept you, influences your political engagement and your opinions of policies. For the most part, Dr. Ocampo looks at Latinos in the US, but the framework is applicable to other racial and ethnic groups. She is also collecting data among Asian Americans, African Americans, and American Muslims to look at perceived belonging.

Methodological Expertise

Before she began this research, there were no measures to capture data on belonging in existing surveys. Dr. Ocampo validated new measures and tested and replicated them in the 2016 Collaborative Multiracial Post-election Survey.

While observational data is useful for finding correlations, it can’t identify causality. For this reason, experiments also inform Dr. Ocampo’s research. In one experiment, she randomly assigned people to a number of different conditions. Subjects assigned to the negative condition showed a significant decrease in their perceptions of belonging. However, among those assigned to the positive condition, there was no corresponding positive effect. In both the observational data and the experiments, Dr. Ocampo finds that experiences of discrimination are a strong determinant of feelings of belonging: the more discrimination you have experienced in the past, the less likely you are to feel that you belong.

Doing qualitative research has taught Dr. Ocampo the importance of speaking with her research subjects. “It’s not until you get out and talk to people running for office and making things happen that you understand how politics works for everyday people. That’s why the qualitative data and survey work are really important,” she says. By leveraging both qualitative and quantitative methodologies, Dr. Ocampo is able to arrive at more robust conclusions. 

A Sense of Belonging in the Academic Community

Starting in the Fall of 2020, Dr. Ocampo will be an Assistant Professor of Political Science at the University of Michigan and a Faculty Associate of the Center for Political Studies. She says that the fact that her work is deeply personal to her is what keeps her engaged. As an immigrant herself, Dr. Ocampo says, “I’m doing this for my family. I’m in this for other young women and women of color, other first-generation scholars. When they see me give a class or a lecture, they know they can do it, too.” 

Dr. Ocampo is known as a supportive member of her academic community. She says it’s an important part of her work: “The reason it’s important is that I wouldn’t be here if it wouldn’t have been for others who opened doors, were supportive, were willing to believe in me. They were willing to amplify my voice in spaces where I couldn’t be, or where I wasn’t, or where I didn’t even know they were there.” She notes that in order to improve the profession and make it a more diverse and welcoming place where scholars thrive, academics have to take it upon themselves to be inclusive. 

Redrawing the Map: How Jowei Chen is Measuring Partisan Gerrymandering

post written by Solmaz Spence

“Gerrymandering”— when legislative maps are drawn to the advantage of one party over the other during redistricting—received its name in 1812, when Massachusetts Governor Elbridge Gerry signed off on a misshapen district that was said to resemble a salamander, which a newspaper dubbed a “gerrymander.”

But although the idea of gerrymandering has been around for a while, proving that a state’s legislature has deliberately skewed district lines to benefit one political party remains challenging.

The problem is that the mere presence of partisan bias in a district map tells us very little about the intentions of those drawing the districts. Factors such as racial segregation, housing and labor markets, and transportation infrastructure can lead to areas where one party’s supporters are more geographically clustered than those of the other party. When this happens, the party with a more concentrated support base achieves a smaller seat share because it racks up large numbers of “surplus” votes in the districts it wins, while falling just short of the winning threshold in many of the districts it loses.

Further, there are many benign reasons that legislatures may seek to redistrict voters—for example, to keep communities of interest together and facilitate the representation of minorities—that may have the unintended consequence of adding a partisan spin to the map.

The research of political scientists Jowei Chen and Jonathan Rodden is helping to differentiate cases of deliberate partisan gerrymandering from other redistricting efforts. Chen, Faculty Associate at the University of Michigan’s Center for Political Studies, and Rodden, Professor of Political Science at Stanford University, have devised a computer algorithm that ignores all partisan and racial considerations when drawing districts, and instead creates thousands of alternative district maps based on traditional districting goals, such as equalizing population, maximizing geographic compactness, and preserving county and municipal boundaries. These simulated maps are then compared against the district map that has been called into question to assess whether partisan goals motivated the legislature to deviate from traditional districting criteria.

We first wrote about Chen and Rodden’s work back in December 2016, detailing a 2015 paper in the Election Law Journal, which used the controversial 2012 Florida Congressional map to show how their approach can demonstrate an unconstitutional partisan gerrymander. Now, this work is back in the spotlight: Chen’s latest research has been cited in several cases of alleged gerrymandering that are currently working through the courts in Pennsylvania, North Carolina, Wisconsin, and Maryland.

In January, Chen’s testimony as an expert witness was cited when the Pennsylvania Supreme Court threw out the state’s U.S. House of Representatives district map. In its opinion, the court said the Pennsylvania map unconstitutionally put partisan interests above other line-drawing criteria, such as eliminating municipal and county divisions.

The Pennsylvania districts in question were drawn by the Republican-controlled General Assembly in 2011. Immediately, the shape of the districts was an indicator that at least one traditional criterion of districting—compactness—had been overlooked.

Though few states define exactly what compactness means, it is generally taken to mean that all the voters within a district should live near one another, and that the boundaries of the district should form a regular shape, rather than the sprawling polygons with donut holes or tentacles that characterized the Pennsylvania district map.
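
The source does not say which compactness measure the experts relied on, but one widely used option is the Polsby-Popper score, 4πA/P², which equals 1 for a circle and approaches 0 for sprawling shapes. A minimal sketch:

```python
import math

def polsby_popper(area, perimeter):
    """One common compactness measure (not necessarily the one used in the case):
    4*pi*A / P**2, which is 1.0 for a circle and near 0 for sprawling shapes."""
    return 4 * math.pi * area / perimeter ** 2

# A compact, roughly square district vs. a long, tentacled one of equal area.
print(polsby_popper(100.0, 40.0))   # ~0.785
print(polsby_popper(100.0, 400.0))  # ~0.008
```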

In particular, District 7—said to resemble Goofy kicking Donald Duck—had been called into question. “It is difficult to imagine how a district as Rorschachian and sprawling, which is contiguous in two locations only by virtue of a medical facility and a seafood/steakhouse, respectively, might plausibly be referred to as compact,” the court wrote.

Although there are more registered Democrats than Republicans in Pennsylvania, Democrats hold only five of the state’s 18 congressional districts. In the 2016 election, Democrats won each of their five House seats with an average of 75 percent of the vote, while Republicans won theirs with an average of 62 percent across their 13 districts. This is an indicator of “packing,” a gerrymandering practice that concentrates like-minded voters into as few districts as possible to deny them representation across districts.

Chen’s expert report assessed the district map and carried out simulations to generate alternative districting plans that strictly followed non-partisan, traditional districting criteria, and then measured the extent to which the current district map deviates from these simulated plans.

To measure the partisanship of the computer-simulated plans, Chen overlaid actual Pennsylvania election results from the past ten years onto the simulated districts, and calculated the number of districts that would have been won by Democrats and Republicans under each plan (see Figure 1).
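
In sketch form, that overlay is an aggregation of precinct-level votes up to each simulated district. The precinct table below is hypothetical, not the actual Pennsylvania returns:

```python
import pandas as pd

# Hypothetical precinct-level table: a simulated plan assigns every precinct
# to a district, and past election results are overlaid on that assignment.
precincts = pd.DataFrame({
    "district": [1, 1, 2, 2, 3, 3],   # assignment under one simulated plan
    "dem_votes": [900, 650, 400, 300, 700, 800],
    "rep_votes": [500, 700, 900, 800, 600, 500],
})

# Aggregate precinct returns up to the simulated districts...
totals = precincts.groupby("district")[["dem_votes", "rep_votes"]].sum()
# ...and count how many districts each party would have won under this plan.
rep_seats = (totals["rep_votes"] > totals["dem_votes"]).sum()
print(f"Republican districts under this simulated plan: {rep_seats}")
```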

The districting simulation process used precisely the same Census geographies and population data that the General Assembly used in creating congressional districts. In this way, the simulations were able to account for any geographical clustering of voters; if the population patterns of Pennsylvania voters naturally favor one party over the other, the simulated plans would capture that inherent bias.

Generally, the simulations created seven to ten Republican districts; not one of the 500 simulated districting plans created 13 Republican districts, as exists under the Republican-drawn district map. Thus, the map represented an extreme statistical outlier, a strong indication that the enacted plan was drawn with an overriding partisan intent to favor that political party. This led Chen to conclude “with overwhelmingly high statistical certainty that the enacted plan created a pro-Republican partisan outcome that would never have been possible under a districting process adhering to non-partisan traditional criteria.”

[Figure: A map showing a redistricting simulation in Pennsylvania]

[Table: The simulated plans compared to the 2011 Pennsylvania district map on these districting criteria]

Following its ruling, on February 20 the Pennsylvania Supreme Court released a new congressional district map that has been described in a Washington Post analysis as “much more compact”. In response, the state’s Republican leadership announced plans to challenge the new map in court.


Using Twitter to Observe Election Incidents in the United States

Post developed by Catherine Allen-West

Election forensics is the field devoted to using statistical methods to determine whether the results of an election accurately reflect the intentions of the electors. Problems in elections that are not due to fraud may stem from legal or administrative decisions. Some examples of concerns that may distort turnout or vote choice data are long wait times, crowded polling places, bad ballot design, and the location of polling stations relative to the population.

A key component of democratic elections is the actual, and perceived, legitimacy of the process. Individuals’ observations about how elections proceed can provide valuable, on-the-ground insight into any flaws in the administration of the election. In some countries there are robust systems for recording citizen complaints, but not in the United States. So, a team* of University of Michigan researchers led by Walter Mebane used Twitter to extract observations of election incidents by individuals across the United States throughout the 2016 election, including primaries, caucuses and the general election. Through their observations, the team shows how reported phenomena like waiting in long lines or having difficulties actually casting a vote are associated with state-level election procedures and demographic variables.

The information gathered is the beginning of what Mebane is calling the “Twitter Election Observatory.” The researchers collected tweets falling within a ten-day window around each primary/caucus election day and collected tweets continually during the October 1 to November 9, 2016 lead-up to the general election.

Mebane and his team then coded all of the tweets to extract the “incident observations” — tweets that mentioned an issue or complaint that an individual may have experienced when casting their vote. From the Twitter data, the researchers found that incidents occurred in every state during the general election period. Among the tweets with recorded location information, the highest counts of incident observations occurred in California, Texas, Florida, and New York, and the fewest in Wyoming, North Dakota, South Dakota, and Montana.

Additionally, the researchers calculated the rate of incidents relative to the population of each state. On a per capita basis, the District of Columbia stands out with the highest rate of incident observations, followed by Nevada and North Carolina, with Wyoming the lowest.
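
The per capita normalization itself is simple; a small sketch with placeholder counts:

```python
import pandas as pd

# Hypothetical counts of geolocated incident tweets and state populations.
data = pd.DataFrame({
    "state": ["DC", "NV", "NC", "WY"],
    "incidents": [210, 540, 1900, 12],   # placeholder counts
    "population": [680_000, 2_900_000, 10_100_000, 580_000],
})

# Incidents per 100,000 residents, then rank states by that rate.
data["per_100k"] = data["incidents"] / data["population"] * 100_000
print(data.sort_values("per_100k", ascending=False))
```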

Every indication is that Twitter can be used to develop data about individuals’ observations of how American elections are conducted, data that cover the entire country with extensive and intensive local detail. Mebane notes that the frequency, and likely the diversity, of observations may vary depending on how many people care about and want to participate in, observe, and comment on an election. Ultimately, Mebane would like to dig further into the geolocation information of these tweets to try to pinpoint incidents at exact polling locations.

*University of Michigan team includes: Walter R. Mebane, Jr., Alejandro Pineda, Logan Woods, Joseph Klaver, Patrick Wu and Blake Miller.

Link to the full paper presented at the 2017 meeting of the American Political Science Association.

Top 10 Most Viewed CPS Blog Posts in 2016

Post written by Catherine Allen-West.

Since its establishment in 2013, a total of 123 posts have appeared on the Center for Political Studies (CPS) Blog. As we approach the new year, we look back at which of these 123 posts were most viewed in 2016.


01. Tracking the Themes of the 2016 Election by Lisa Singh, Stuart Soroka, Michael Traugott and Frank Newport (from the Election Dynamics blog)

“The results highlight a central aspect of the 2016 campaign: information about Trump has varied in theme, almost weekly, over the campaign – from Russia, to taxes, to women’s issues, etc; information about Clinton has in contrast been focused almost entirely on a single theme, email.”


02. Another Reason Clinton Lost Michigan: Trump Was Listed First on the Ballot by Josh Pasek

“If Rick Snyder weren’t the Governor of Michigan, Donald Trump would probably have 16 fewer electoral votes. I say this not because I think Governor Snyder did anything improper, but because Michigan law provides a small electoral benefit to the Governor’s party in all statewide elections; candidates from that party are listed first on the ballot.”


03. Motivated Reasoning in the Perceived Credibility of Public Opinion Polls by Catherine Allen-West and Ozan Kuru

“Our results showed that people frequently discredit polls that they disagree with. Moreover, in line with motivated reasoning theories, those who are more politically sophisticated actually discredit the polls more. That is, as political knowledge increases, the credibility drops substantially for those who disagree with the poll result.”


04. Why do Black Americans overwhelmingly vote Democrat? by Vincent Hutchings, Hakeem Jefferson, and Katie Brown, published in 2014.

“Democratic candidates typically receive 85-95% of the Black vote in the United States. Why the near unanimity among Black voters?”


05. Measuring Political Polarization by Katie Brown and Shanto Iyengar, published in 2014.

“Both parties moving toward ideological poles has resulted in policy gridlock (see: government shutdown, debt ceiling negotiations). But does this polarization extend to the public in general?”


06. What makes a political issue a moral issue? by Katie Brown and Timothy Ryan, published in 2014.

“There are political issues and then there are moral political issues. Often cited examples of the latter include abortion and same sex marriage. But what makes a political issue moral?”


07. Moral Conviction Stymies Political Compromise, by Katie Brown and Timothy Ryan, published in 2014.

Ryan’s overarching hypothesis boils non-compromise down to morals: a moral mindset orients citizens to oppose political compromises and punish compromising politicians. There are all kinds of issues for which some citizens seem resistant to compromises: tax reform, same-sex marriage, collective bargaining, etc. But who is resistant? Ryan shows that part of the answer has to do with who sees these issues through a moral lens.


08. Exploring the Effects of Skin Tone on Policy Preferences Among African Americans by Lauren Guggenheim and Vincent Hutchings, published in 2014.

In the United States, African Americans with darker skin tones have worse health outcomes, lower income, and face higher levels of discrimination in the workplace and criminal justice system than lighter skinned Blacks. Could darker and lighter skinned African Americans in turn have different policy preferences that reflect their socioeconomic status-based outcomes and experiences?


09. What We Know About Race and the Gender Gap in the 2016 U.S. Election by Catherine Allen-West

As of October, the latest national polls predicted that the 2016 Election results would reflect the largest gender gap in vote choice in modern U.S. history. Had these polls proven true, the 2016 results would have indicated a much larger gender gap than was observed in 2012, when women overwhelmingly supported Barack Obama over Mitt Romney. University of Texas at Austin Professor Tasha Philpot argues that what really may be driving this gap to even greater depths is race.


10. How do the American people feel about gun control? by Katie Brown and Darrell Donakowski, published in 2014.

As we can see, the proportion of the public supporting tougher regulation is shrinking over the time period, while satisfaction with current regulations increased. Yet, support for tougher gun laws is the most popular choice in all included years. It is important to note that these data were collected before Aurora, Newtown, and the Navy Yard shootings. The 2016 ANES study will no doubt add more insight into this contentious, important issue.


Helping the Courts Detect Partisan Gerrymanders

Post written by Lauren Guggenheim and Catherine Allen-West.

In November, a federal court ruled that the Wisconsin Legislature’s 2011 redrawing of State Assembly districts unfairly favored Republicans, deeming it an unconstitutional partisan gerrymander. This ruling is the first successful constitutional challenge to partisan gerrymandering since 1986. The case will now head to the U.S. Supreme Court—which has yet to come up with a legal standard for distinguishing between acceptable redistricting efforts and unconstitutional gerrymandering.

While there have been successful challenges to gerrymandering on racial grounds, most recently last week in North Carolina, proving partisan gerrymandering—where the plaintiffs must show that district lines were drawn with the intent to favor one political party over another—is more difficult. One reason is that research shows that even non-partisan commissions can produce unintentionally gerrymandered redistricting plans solely on the basis of the geography of a party’s supporters. Also complicating matters are legislatures’ lawful efforts to keep communities of interest together and facilitate the representation of minorities. Because traditional efforts can produce results that appear biased, showing partisan asymmetries—the main form of evidence in previous trials—is not sufficient to challenge partisan gerrymandering in the courts.

However, in recent years, scientists have devised several standards that could be used to effectively measure partisan gerrymandering. In last month’s Wisconsin ruling, the court applied one such mathematical standard, called the “efficiency gap,” a method that looks at statewide election results and calculates “wasted votes.” Using this method, the court found that Republicans had manipulated districts by packing Democrats into a small number of districts or spreading them thinly across many districts, which ultimately led to Republican victories across the rest of the state’s districts.
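
A minimal sketch of the efficiency gap computation, using made-up vote totals rather than the Wisconsin figures (the formula follows the academic definition of the measure, not necessarily the court's exact computation):

```python
def efficiency_gap(districts):
    """Sketch of the 'efficiency gap' (Stephanopoulos and McGhee): the difference
    in the two parties' statewide wasted votes, divided by all votes cast.

    `districts` is a list of (dem_votes, rep_votes) tuples, one per district.
    """
    wasted_dem = wasted_rep = total = 0
    for dem, rep in districts:
        votes = dem + rep
        threshold = votes // 2 + 1          # votes needed to win the district
        if dem > rep:
            wasted_dem += dem - threshold   # winner's surplus votes are wasted
            wasted_rep += rep               # all of the loser's votes are wasted
        else:
            wasted_rep += rep - threshold
            wasted_dem += dem
        total += votes
    return (wasted_dem - wasted_rep) / total  # positive favors Republicans here

# Packing example: Democrats crammed into one lopsided district.
print(efficiency_gap([(9000, 1000), (4000, 6000), (4000, 6000)]))  # ~0.30
```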

Another method to determine partisan gerrymandering, developed by political scientists Jowei Chen and Jonathan Rodden, uses a straightforward redistricting algorithm to generate a benchmark against which to contrast a plan that has been called into constitutional question, thus laying bare any partisan advantage that cannot be attributed to legitimate legislative objectives. In a paper published last year in the Election Law Journal, Chen, a Faculty Associate at the University of Michigan’s Center for Political Studies, and Rodden, Professor of Political Science at Stanford University, used the controversial 2012 Florida Congressional map to show how their approach can demonstrate an unconstitutional partisan gerrymander.

First, the algorithm simulates hundreds of valid districting plans, applying criteria traditionally used in redistricting decisions—compactness, geographic contiguity, population equality, the preservation of political communities, and the protection of voting rights for minorities—while disregarding partisanship. Then, the existing plan can be compared to the partisan distribution of the simulated plans to see where in the distribution it falls. If the partisanship of the existing plan lies in the extreme tail (or outside of the distribution) that was created by the simulations, it suggests the plan is likely to have been created with partisan intent. In other words, the asymmetry is less likely to be due to natural geography or a state’s interest in protecting minorities or keeping cohesive jurisdictions together (which is accounted for by the simulations). In this way, their approach distinguishes between unintentional and intentional asymmetries in partisanship.
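
The toy sketch below conveys the flavor of such a simulation, though it is far simpler than Chen and Rodden's actual algorithm: it greedily grows equally sized districts from random seed precincts using a nearest-distance rule, on invented coordinates rather than real Census geography, and ignores the voting-rights and jurisdiction-preserving constraints a real plan must satisfy.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy precincts: random locations, each assumed to hold equal population.
coords = rng.random((240, 2))
n_districts = 4

def grow_districts(coords, k):
    """Greedy nearest-distance assignment: seed k districts, then repeatedly
    give the district with the fewest precincts its nearest unassigned one."""
    unassigned = set(range(len(coords)))
    seeds = rng.choice(len(coords), size=k, replace=False)
    districts = [[s] for s in seeds]
    unassigned -= set(seeds.tolist())
    while unassigned:
        d = min(districts, key=len)            # keeps district populations equal
        centroid = coords[d].mean(axis=0)
        rest = np.array(sorted(unassigned))
        nearest = rest[np.argmin(np.linalg.norm(coords[rest] - centroid, axis=1))]
        d.append(int(nearest))
        unassigned.remove(int(nearest))
    return districts

# Each call produces a different nonpartisan plan; real studies generate many.
plans = [grow_districts(coords, n_districts) for _ in range(5)]
```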

Using data from the Florida case, Chen and Rodden simulated the results of 24 districts in 1,000 simulated plans. They kept three African-American districts intact because of Voting Rights Act protections. They also kept 46 counties and 384 cities together, giving the benefit of the doubt to the legislature that compelling reasons exist to keep these entities within the same simulated district. The algorithm uses a nearest distance criterion to keep districts geographically contiguous and highly compact, and it iteratively reassigns precincts to different districts until equally populated districts are achieved. The figure below shows how this looks in one of the 1,000 valid plans.

[Figure: One of the 1,000 simulated Florida districting plans]

Next, to measure partisanship, Chen and Rodden needed both the most recent data possible and precinct-level election results, which they found in the 2008 presidential election results. For both the existing plan and the simulated plans, they aggregated from the precinct to the district and calculated the number of districts where McCain voters outnumbered Obama voters. The figure below shows the partisan distribution of all of the plans. A majority of the plans created 14 Republican seats, and less than half of one percent of the plans produced 16 Republican seats. However, none of the simulations produced the 17 seats that were in the Florida Legislature’s plan, showing that the pro-Republican bias in the Legislature’s plan is an extreme outlier relative to the simulations.

[Figure: The partisan distribution of the 1,000 simulated plans]

Because the simulations they created were a conservative test of redistricting (e.g., giving the benefit of the doubt to the Legislature by protecting three African-American districts), Chen and Rodden also reran the simulations while progressively dropping some of the districts they had previously kept intact. The results suggested the Legislature’s plan was even more atypical: the new simulations showed even less pro-Republican bias than those with the protected districts.

Chen and Rodden note that once a plaintiff can show that the partisanship of a redistricting plan is an extreme outlier, the burden of proof should shift to the state. Ultimately in Florida, eight districts were found invalid, and in December 2015, new maps were approved by the court and put into use for the 2016 Election.

Identifying the Sources of Scientific Illiteracy

Post developed by Catherine Allen-West in coordination with Josh Pasek

ICYMI (In Case You Missed It), the following work was presented at the 2016 Annual Meeting of the American Political Science Association (APSA). The presentation, titled “Motivated Reasoning and the Sources of Scientific Illiteracy,” was part of the session “Knowledge and Ideology in Environmental Politics” on Friday, September 2, 2016.

At APSA 2016, Josh Pasek, Assistant Professor of Communication Studies and Faculty Associate at the Center for Political Studies, presented work that delves into the reasons people do not believe the prevailing scientific consensus.

He argues that widespread scientific illiteracy in the general population is not simply a function of ignorance. In fact, there are several reasons why an individual may answer a question about science or a scientific topic incorrectly.

  1. They are ignorant of the correct answer
  2. They have misperceptions about the science
  3. They know what scientists say and disagree (rejectionism)
  4. They are trying to express some identity that they hold in their response

The typical approach to measuring knowledge involves asking individuals multiple-choice questions where they are presumed to know something when they answer the questions correctly and to lack information when they either answer the questions incorrectly or say that they don’t know.


Pasek suggests that this current model for measuring scientific knowledge is flawed, because individuals who have misperceptions can appear less knowledgeable than those who are ignorant. So he and his co-author Sedona Chinn, also from the University of Michigan, set out with a new approach to disentangle these cognitive states (knowledge, misperception, rejectionism and ignorance) and then determine which sorts of individuals fall into each of the camps.

Instead of posing multiple-choice questions, the researchers asked participants what most scientists would say about a certain scientific topic (like climate change or evolution) and then examined how those answers compared to the respondents’ personal beliefs.


Across two waves of data collection, respondents’ answers about scientific consensus could fall into four patterns. They could be consistently correct, change from correct to incorrect, change from incorrect to correct, or be consistently incorrect.


This set of cognitive states lends itself to a set of equations producing each pattern of responses:

Consistently Correct = Knowledge + 0.5 × Learning + 0.25 × Ignorance
Correct then Incorrect = 0.25 × Ignorance
Incorrect then Correct = 0.5 × Learning + 0.25 × Ignorance
Consistently Incorrect = Misperception + 0.25 × Ignorance
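
These four equations can be inverted to recover the share of respondents in each cognitive state from the observed response patterns. A minimal sketch, with hypothetical observed proportions:

```python
import numpy as np

# Columns: Knowledge, Learning, Ignorance, Misperception.
# Rows: the four response patterns, per the equations above.
A = np.array([
    [1.0, 0.5, 0.25, 0.0],   # consistently correct
    [0.0, 0.0, 0.25, 0.0],   # correct then incorrect
    [0.0, 0.5, 0.25, 0.0],   # incorrect then correct
    [0.0, 0.0, 0.25, 1.0],   # consistently incorrect
])

# Hypothetical observed shares of respondents in each pattern (they sum to 1).
observed = np.array([0.55, 0.05, 0.15, 0.25])

knowledge, learning, ignorance, misperception = np.linalg.solve(A, observed)
print(knowledge, learning, ignorance, misperception)  # -> 0.4, 0.2, 0.2, 0.2
```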

The researchers then reverse-engineered this estimation strategy for a survey aimed at measuring knowledge on various scientific topics. This yielded the following sort of translations:

[Slide: Example translations from survey response patterns to cognitive states]

In addition to classifying respondents as knowledgeable, ignorant, or misinformed, Pasek was especially interested in identifying a fourth category: rejectionist. These are individuals who assert that they know the scientific consensus but fail to hold corresponding personal beliefs. Significant rejectionism was apparent for most of the scientific knowledge items, but was particularly prevalent for questions about the big bang, whether humans evolved, and climate change.


Rejectionism surrounding these controversial scientific topics is closely linked to religious and political motivations. Pasek’s novel strategy of parsing out rejectionism from ignorance and knowledge provides evidence that religious individuals are not simply ignorant about the scientific consensus on evolution or that partisans are unaware of climate change research. Instead, respondents appear to have either systematically wrong beliefs about the state of the science or seem liberated to diverge in their views from a known scientific consensus.

Pasek’s results show a much more nuanced, yet at times predictable, relationship between scientific knowledge and belief in scientific consensus.


Income and Preferences for Centralization of Authority

Post developed by Catherine Allen-West in coordination with Diogo Ferrari

[Photo: Diogo Ferrari, PhD Candidate, University of Michigan, Ann Arbor]

ICYMI (In Case You Missed It), the following work was presented at the 2016 Annual Meeting of the American Political Science Association (APSA). The presentation, titled “The Indirect Effect of Income on Preferences for Centralization of Authority,” was part of the session “Devolution, Fragmented Power, and Electoral Accountability” on Thursday, September 1, 2016.

One of the primary activities of any elected government is deciding how to allocate public funds for policies like health care and education. In countries that have adopted a federal system – like the United States, Canada, Australia, Germany, and others – the central government usually has policies that promote the distribution of fiscal resources among different jurisdictions, such as states or cities. Take Australia, for example: the federal government collects taxes that are funneled to local governments in accordance with their needs, diminishing the inequality between Australian sub-national governments in their capacity to invest and provide public services. Brazil is another example: it has a huge federal program that transfers resources from rich to poor states with the goal of reducing regional inequality. These federal governments can only continue to operate in this way, promoting interregional redistribution, if the power to control fiscal resources is centralized. There is, therefore, a connection between interregional redistribution and centralization of authority.

Voters, however, have different preferences about how the government should spend fiscal resources. They have different opinions, for instance, about the degree to which taxes collected in one region should be invested in another. Do voters who support interregional redistribution also prefer that fiscal authority be concentrated in the hands of the federal government rather than sub-national ones? Which characteristics determine voters’ preferences regarding interregional redistribution and centralization of authority? How are those preferences connected?
