CLEA Provides Student Opportunities for Impact and Growth

Brooke Booska: “CLEA offered me the opportunity to expand my role at U-M from being a student to being a part of something serving the greater public.”

Housed in the Center for Political Studies at the Institute for Social Research, the Constituency-Level Elections Archive (CLEA) is a repository of detailed results from lower and upper house elections around the world. The project involves students at all stages of the data collection process, offering valuable training experience.

Brooke Booska, a sophomore studying economics and philosophy, joined the CLEA team as a research assistant in September 2022.

“I was hoping to gain research experience and especially to work on something that feels more impactful than an academic assignment,” she said. “CLEA offered me the opportunity to expand my role at U-M from being a student to being a part of something serving the greater public.”

At CLEA, Booska uses a source and a template to code election data, entering results into a CLEA-formatted spreadsheet that contains a rich set of information about candidates, parties, vote shares awarded, and seats won. Booska said coding an election for CLEA can take anywhere from a week to several months, depending on the size and complexity of the country’s electoral system.
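For readers curious what this kind of record looks like in practice, the sketch below builds a tiny table in the spirit of a constituency-level results file. The column names and values are purely illustrative assumptions, not CLEA’s official variable names or data.

```python
# Illustrative only: hypothetical column names and made-up values,
# not CLEA's official coding template.
import pandas as pd

results = pd.DataFrame([
    {"country": "Exampleland", "year": 2021, "constituency": "District 1",
     "party": "Party A", "candidate": "Candidate X",
     "votes": 12500, "vote_share": 0.52, "seats_won": 1},
    {"country": "Exampleland", "year": 2021, "constituency": "District 1",
     "party": "Party B", "candidate": "Candidate Y",
     "votes": 11500, "vote_share": 0.48, "seats_won": 0},
])

# One row per candidate (or party) per constituency, which is what makes
# large countries with many districts slow to code.
print(results)
```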

Booska shared three benefits of her work on CLEA:

  • Learning about different electoral systems in different countries
  • Getting to know students and faculty outside of her regular circle, with a fun and welcoming team
  • Gaining transferable skills in Excel and project and task management

“From working on CLEA, I have learned how to tackle large projects. When you are entering data for an enormous country like Canada, it can seem daunting and never-ending,” she said. “Yet, if you can learn to break up the task into smaller, more achievable ones, the overwhelming project becomes much more manageable. This method for working on large tasks has trickled into my school work, especially essay writing. Instead of avoiding intimidating assignments, I have learned to bend them in such a way that works for me. This has made me much more comfortable tackling larger projects and goals such as studying abroad.”

More than 80 students have participated in CLEA, many of them through the Undergraduate Research Opportunity Program (UROP). Participation has shaped many of their research interests and career paths.

Learn more about CLEA and its alumni.

CoderSpaces provide data science support and hands-on learning opportunities for faculty, staff, and students

Post by Jule Krüger, Program Manager for big data/data science. Jule developed CoderSpaces, weekly programming sessions at the University of Michigan in support of cutting-edge research and scientific advancement, and has hosted them since 2019.

Photo: The Winter 2021 CoderSpaces Host Team

For the past two years, a team of data science experts has been experimenting with offering expert office hours to facilitate the adoption of new methods and technologies across the Institute for Social Research (ISR). These CoderSpaces provide immediate research support and offer hands-on learning opportunities for participants who wish to grow their coding and data science skills. The aim is to foster a casual learning and consulting environment that welcomes everyone, regardless of skill level.

CoderSpaces are one way to help researchers thrive in an environment that is becoming increasingly complex. With the ongoing digitization of our daily lives, scholars are gaining access to new types of data streams that have not traditionally been available in their disciplines. For example, social scientists at the ISR at the University of Michigan have started to explore the ways in which virtual interactions on social media platforms can inform the scientific inquiry of socio-behavioral phenomena spanning many aspects of our lives, including election forensics, political communication, and parenting, as well as insights gained from survey research.

Processing and analyzing novel types and ever larger quantities of data requires that faculty, research staff, and students incorporate new research technologies and methodologies into their scientific toolkits. For example, researchers may need to move computationally intensive analyses to a high-performance computing cluster, which requires familiarity with batch processing, a command-line interface, and advanced data storage solutions. Or they may need to understand and implement natural language processing and machine learning to systematically retrieve information from large amounts of unstructured text.

Researchers who embark on the journey of exploring a new technology or methodology often cannot fall back on curricula and training opportunities provided by their disciplinary peers. The relevant learning resources still need to be developed, potentially by those researchers themselves one day. To bridge training gaps, scholars look to example applications in other disciplines, engage in interdisciplinary research collaborations to access necessary expertise, and solicit help from available support units on campus to make methodological and technological innovations possible.

CoderSpaces provide just this kind of support. The sessions are hosted by faculty, research staff, and students who are willing to share their methodological and programming expertise with others. Initially, CoderSpaces were limited to the ISR community. Currently, anyone at the University of Michigan is welcome to join, which has allowed us to diversify and broaden the available expertise and research applications. The weekly sessions were originally organized as in-person gatherings at the ISR, with the intent to venture out to other campus locations. In March 2020, CoderSpaces moved to a virtual format facilitated by Zoom video-conferencing and a Slack communication space. Going virtual turned out to be a blessing in disguise, as it enabled anyone at the university to participate regardless of their physical location, helping us broaden our reach across U-M departments and disciplines.

We have continuously increased the number of CoderSpaces hosts over time. The current Winter 2021 team is our largest and most diverse yet, with 16 hosts representing nine campus departments spanning the social and medical sciences as well as technical and statistical fields. The expertise we are able to provide includes high-performance and parallel computing, cloud analytics, performance analysis, statistical modeling and machine learning, survey methods, natural language processing, research design, reproducible workflows, data management, programming in a variety of languages (bash, C, C++, C#, CMake/GNU Make, Fortran, Java, JavaScript, Julia, LaTeX, Matlab, Markdown, Perl, Python, R, Rcpp, SAS, shell, Slurm, SQL, Stata), version control with Git, mobile app development, web scraping, and more. Typically, we are able to assist participants with their issues immediately during the virtual meeting. If a solution is not readily available, our hosts draw on their respective expertise and networks to identify additional resources and offer support.

Participants join an ongoing Zoom meeting at the scheduled weekly times. The hosts on the call field questions and may use the breakout room feature to assist multiple participants simultaneously. For example, Bryan Kinzer, a PhD student in Mechanical Engineering, attended CoderSpaces a few times to set up and run a Singularity container. He says of his experience: “The hosts were helpful and patient. My issue was not a super easy quick fix, but they were able to point me in the right direction eventually getting the issue resolved. When I came back the following week they remembered my case and were able to pick right back up where I left off.”

Paul Schulz, a senior consulting statistician and data scientist for ISR’s Population Dynamics and Health Program (PDHP), has been serving as a host since CoderSpaces launched. He describes the weekly CoderSpaces as “an enriching experience that has allowed me and the other PDHP staff members to socialize and broaden our network among other people on campus who work in the data and technical space. By sharing our technical skills and knowledge with attendees, we are providing a service. But we have also been able to improve our own skills and expertise in these areas by being exposed to what others across campus are doing. By fostering these types of informal collaborations and shared experiences, I think that the CoderSpaces have been a win-win for both attendees and hosts alike.”

Learn more about CoderSpaces here.

Not which ones, but how many?

Perspective on research from Guoer Liu, a doctoral student in Political Science and recipient of the 2019 Roy Pierce Award

Guoer Liu

“Not which ones, but how many” is a phrase used in list experiment instructions, where researchers tell participants, “After I read all four (five) statements, just tell me how many of them upset you. I don’t want to know which ones, just how many.” In retrospect, I was surprised to see that this phrase encapsulates not only the key research idea but also my fieldwork adventure: not which plans could go awry, but how many. The fieldwork experience could be frustrating at times, but it led me to uncharted terrain and brought insights into the research context. This valuable exposure would not have been possible without support from the Roy Pierce Award and guidance from Professor Yuki Shiraito.

Research that I conducted with Yuki Shiraito explores the effect of behavior on political attitudes in authoritarian contexts to answer the question: does voting for autocracy reinforce individual regime support? To answer this question, two conditions need to hold. First, people need to honestly report their level of support before and after voting in authoritarian elections. Second, voting behavior needs to be random. Neither situation is probable in illiberal autocracies. Our project addresses these methodological challenges by conducting a field experiment in China that combines a list experiment with a randomized encouragement design.

In this study, list experiments are used instead of direct questions to measure respondents’ attitudes toward the regime in the pre- and post-election surveys. The list experiment is a survey technique that mitigates preference falsification by respondents. Although the true preference of each individual respondent remains hidden, the technique allows us to identify the average level of support for the regime within a group of respondents. In addition, we employ a randomized encouragement design in which get-out-the-vote messages are randomly assigned, which helps us estimate the average causal effect of the treatment. For the effect moderated by prior support for the regime, we estimate the probability of prior support using individual characteristics and then estimate the effect for prior supporters via a latent variable model.
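To make the list experiment logic concrete, here is a minimal sketch of the standard difference-in-means estimator on simulated data. It is illustrative only and is not the authors’ estimation code, which additionally incorporates the encouragement design and a latent variable model for prior support.

```python
# Simulated list experiment: control respondents see J = 4 baseline items;
# treatment respondents see the same items plus the sensitive item.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Each respondent reports only HOW MANY items upset them, not which ones.
control_counts = rng.binomial(4, 0.4, size=n)
treatment_counts = rng.binomial(4, 0.4, size=n) + rng.binomial(1, 0.3, size=n)

# The difference in mean counts estimates the share upset by the sensitive
# item for the group as a whole, while each individual's answer stays hidden.
estimate = treatment_counts.mean() - control_counts.mean()
print(f"Estimated prevalence of the sensitive attitude: {estimate:.3f}")
```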

While the theoretical part of the project went smoothly and the simulation results were promising, the complications of fieldwork exceeded my expectations. For the list experiment survey, the usually reticent respondents started asking questions about the list questions immediately after the questionnaires were distributed. Their queries took the form of “I am upset by options 1, 2, and 4, so what number should I write down here?” This was not supposed to happen. List experiments were developed to conceal individual respondents’ answers from researchers. By replacing the question of “which ones” with the question of “how many,” respondents’ true preferences are not directly observable, which makes it easier for them to answer sensitive questions honestly. Respondents’ eagerness to tell me their options directly defeated the purpose of this design. Later I learned from other researchers that the problem I encountered is common in list experiment implementations, regardless of research context or type of respondent.

The rationale behind respondents’ desire to share their individual options, despite being given the chance to hide them, is thought-provoking. Is it because of the cognitive burden of answering a list question, which is not a familiar type of question for respondents? Or is it because the sensitive items, despite careful construction, raise alarm? Respondents were eager to specify their stance on each option and identify themselves as regime supporters: they did not leave any room for misinterpretation. To ease the potential cognitive burden, we will try a new way of implementing the list experiment in a similar project on preference falsification in Japan. We look forward to seeing whether it improves respondents’ comprehension of the list question setup. The second explanation is more concerning, however. It suggests a scope condition on list experiments as a valid tool for eliciting truthful answers from respondents. Other, more implicit tools, such as endorsement experiments, may be more appropriate in those contexts for gauging respondents’ preferences.

Besides the intricacies of the list experiment, carrying out the encouragement design on the ground was challenging. We had to modify the behavioral intervention to accommodate the needs of our local collaborators, and the realized sample size was only a fraction of the size initially negotiated. Even with these compromises, the implementation was imbued with uncertainty: meetings were postponed or rescheduled at the last minute, and instructions from local partners were sometimes inconsistent or contradictory. The frustration was certainly real. But the pain made me cognizant of the judgment calls researchers have to make behind the scenes. The amount of effort required to produce reliable data is admirable. And as a consumer of data, I should always interpret data with great caution.

While the pilot study did not lead directly to a significant finding, the research experience and the methods we developed have informed the design of a larger project that we are currently conducting in Japan.

I always thought of doing research as establishing a series of logical steps between a question and an answer. Before I departed for the pilot study, I made a detailed timeline for the project with color-coded tasks and flourish-shaped arrows pointing at milestones of the upcoming fieldwork. When I presented this plan to Professor Shiraito, he smiled and told me that “when doing research, it is generally helpful to think of the world in two ways: the ideal world and the real world. You should be prepared for both.” Wise words. Because of this, I am grateful for the Roy Pierce Award for offering the opportunity to catch a glimpse of the real world. And I am indebted to Professor Shiraito for helping me see the potential of attaining the ideal world with intelligence and appropriate tools.

Top 10 Most-Viewed CPS Blog Posts in 2017

Post developed by Catherine Allen-West

Since its establishment in 2013, a total of 137 posts have appeared on the Center for Political Studies (CPS) Blog. As we approach the new year, we look back at 2017’s most-viewed posts. Listed below are the posts that you, our dear readers, found most interesting on the blog this year. 


What makes a political issue a moral issue? by Katie Brown and Timothy Ryan (2014)

There are political issues and then there are moral political issues. Often cited examples of the latter include abortion and same-sex marriage. But what makes a political issue moral? An extensive literature already asserts a moral vs. non-moral issue distinction. Yet there is no consensus on how to distinguish between moral and non-moral political issues. Further, trying to sort issues into these categories proves challenging.

The Spread of Mass Surveillance, 1995 to Present by Nadiya Kostyuk and Muzammil M. Hussain (2017)

By closely investigating all known cases of state-backed cross-sector surveillance collaborations, our findings demonstrate that the deployment of mass surveillance systems by states has been globally increasing throughout the last twenty years. More importantly, from 2006-2010 to present, states have uniformly doubled their surveillance investments compared with the previous decade. 

Why do Black Americans overwhelmingly vote Democrat? by Vincent Hutchings, Hakeem Jefferson and Katie Brown (2014)

In 2012, Barack Obama received 93% of the African American vote but just 39% of the White vote. This 55% disparity is bigger than vote gaps by education level (4%), gender (10%), age (16%), income (16%), and religion (28%). And this wasn’t about just the 2012 or 2008 elections, notable for the first appearance of a major ticket African American candidate, Barack Obama. Democratic candidates typically receive 85-95% of the Black vote in the United States. Why the near unanimity among Black voters?

Measuring Political Polarization by Katie Brown and Shanto Iyengar (2014)

Both parties moving toward ideological poles has resulted in policy gridlock (see: government shutdown, debt ceiling negotiations). But does this polarization extend to the public in general? To answer this question, Iyengar measured individual resentment with both explicit and implicit measures.

Is policy driven by the rich, or does government respond to all? by Catherine Allen-West (2016)

The enthusiasm for both Trump and Sanders’ messages about the influence of money in politics brings up an important question: Is policy driven by the rich, or does government respond to all? Political scientists have long been interested in identifying to what degree wealth drives policy, but not all agree on its impact.

Exploring the Tone of the 2016 Election by U-M undergraduate students Megan Bayagich, Laura Cohen, Lauren Farfel, Andrew Krowitz, Emily Kuchman, Sarah Lindenberg, Natalie Sochacki, and Hannah Suh, and their professor Stuart Soroka (2017)

The 2016 election campaign seems to many to have been one of the most negative campaigns in recent history. Our exploration of negativity in the campaign – focused on debate transcripts and Facebook-distributed news content – begins with the following observations.

Crime in Sweden: What the Data Tell Us by Christopher Fariss and Kristine Eck (2017)

In a recent piece in the Washington Post, we addressed some common misconceptions about what the Swedish crime data can and cannot tell us. However, questions about the data persist. These questions are varied but are related to two core issues: (1) what kind of data policy makers need to inform their decisions and (2) what claims can be supported by the existing data.

Moral conviction stymies political compromise by Katie Brown and Timothy Ryan (2014)

Ryan’s overarching hypothesis boils non-compromise down to morals: a moral mindset orients citizens to oppose political compromises and punish compromising politicians. There are all kinds of issues for which some citizens seem resistant to compromises: tax reform, same-sex marriage, collective bargaining, etc. But who is resistant? Ryan shows that part of the answer has to do with who sees these issues through a moral lens.

Does the order of names on a ballot affect vote choice? by Katie Brown and Josh Pasek (2013)

Ballots list all candidates officially running for a given office so that voters can easily choose between them. But could the ordering of candidate names on a ballot change some voters’ choices? 

Inside the American Electorate: The 2016 ANES Time Series Study by Catherine Allen-West, Megan Bayagich and Ted Brader (2017)

Since 1948, the ANES, a collaborative project between the University of Michigan and Stanford University, has conducted benchmark election surveys on voting, public opinion, and political participation. This year’s polarizing election elicited especially interesting responses.

The Politics of Latinidad

Post developed by Mara Ostfeld and Catherine Allen-West

The effectiveness of America’s system of democratic representation, in practice, turns on broad participation. Yet only about 60 percent of voting eligible Americans cast their vote in presidential elections. This number is nearly cut in half in off-year elections (about 36 percent), and participation in local elections is even lower. This lack of electoral engagement does not fall equally across racial and ethnic subgroups. Latinos, for one, are particularly underrepresented at polling booths across the country. In 2016, eligible Latino voters were about 20 percentage points less likely to vote than their White counterparts, and about 13 percentage points less likely to vote than their Black counterparts.

This fall, a group of 24 University of Michigan undergraduate students sought to explore this disparity and pinpoint what, if anything, works to increase Latino political participation. In the class, entitled The Politics of Latinidad, CPS Faculty Associate and U-M Political Science Professor Mara Ostfeld taught her students how to measure public opinion and challenged them to analyze the factors that affect Latino political participation.

Today, more than 50,000 Latinos live in Detroit, and a majority of them reside in City Council District 6 in Southwest Detroit, which is precisely where this course focused. The students began by studying the history of Latinos in Southeast Michigan and exploring how Latinos played critical roles in the city’s development dating back to before World War I. They analyzed broad trends in Latino public opinion, and considered how and why these patterns might be similar or different in Detroit. Students then designed their own pre-election polls to take into the field.

In order to understand what affects voter turnout, students surveyed over 300 residents of Southwest Detroit to measure the issues that were most important to them.

Photo of U-M students

Students pictured here: Storm Boehlke, Mohamad Zawahra, Alex Tabet, Hannel So, Sion Lee.

The results illustrate some powerful patterns. Among the issues that the residents found most important, immigration and crime stood out. Forty-nine and 45 percent of Latinos listed immigration and crime, respectively, as issues of particular concern, with only 31 percent of residents saying that they felt safe in their own home.

Latinos in Southwest Detroit feel extremely high levels of discrimination.  Seventy percent of Latinos surveyed said they felt Latinos face “a great deal” of discrimination. This significantly exceeds the roughly half of Latinos nationwide who say they have experienced discrimination.

Student Alex Garcia visits residents in Detroit.

Local issues were also at the forefront of residents’ minds. Latinos had mixed views on the city’s use of blight tickets to combat housing code violations, with one third of respondents supporting them and one third opposing them.

As local organizations, like Michigan United, continue trying to get a paid sick leave initiative on the ballot in 2018, they can expect strong support among Latinos in Southwest Detroit. About two out of every three Latinos in the area indicated they would be more likely to support a candidate who supports the paid sick leave requirement.

The students then followed up with the residents a month later to see if they planned to vote in the upcoming city council election. At this point, the students implemented some interventions that have been used to increase political participation, such as evoking emotions that have been shown to have a mobilizing effect, framing voting as an important social norm, and speaking with voters immediately before an election. With the election now over, students are back in the classroom analyzing the effectiveness of these interventions and will use their first-hand experience to better understand public opinion and political participation.

Exploring the Tone of the 2016 Campaign

By undergraduate students Megan Bayagich, Laura Cohen, Lauren Farfel, Andrew Krowitz, Emily Kuchman, Sarah Lindenberg, Natalie Sochacki, and Hannah Suh, and their professor Stuart Soroka, all from the University of Michigan.


The 2016 election campaign seems to many to have been one of the most negative campaigns in recent history. Our exploration of negativity in the campaign – focused on debate transcripts and Facebook-distributed news content – begins with the following observations.

Since the first radio-broadcast debate in 1948, debates have become a staple of the presidential campaign process. They are an opportunity for voters to see candidates debate policies and respond to attacks in real time. Just as importantly, candidates use their time to establish a public persona to which viewers can feel attracted and connected.

Research has accordingly explored the effects of debates on voter preferences and behavior. Issue knowledge, as well as knowledge of candidates’ policy preferences, has been found to increase with debate viewership. Debates also have an agenda-setting effect, as the issues discussed in debates tend to be considered more important by viewers. Additionally, there is tentative support for debate influence on voter preferences, particularly for independent and nonpartisan viewers. While debate content might not alter the preferences of strong partisans, it may affect the significant percentage of the population that is unsure of its voting decision. (For a review of the literature on debate effects, see Benoit, Hansen, & Verser, 2003.)

Of course, the impact of debates comes not just from watching them but also from the news that follows. The media’s power to determine the content that is seen and how it is presented can have significant consequences. The literatures on agenda setting, priming, and framing make clear the way in which media shape our political reality. And studies have found that media’s coverage of debates can alter the public’s perception of debate content and their attitudes toward candidates. (See, for instance, Hwang, Gotlieb, Nah & McLeod 2006, Fridkin, Kenney, Gershon & Woodall 2008.)

This is true not just for traditional media, but for social media as well. As noted by the Pew Research Center, “…44% of U.S. adults reported having learned about the 2016 presidential election in the past week from social media, outpacing both local and national print newspapers.” Social media has become a valuable tool for the public to gather news throughout election cycles, with 61% of millennials getting political news from Facebook in a given week versus 37% who receive it from local TV. The significance of news disseminated through Facebook continues to increase.

It is in this context that we explore the nature of the content and coverage of the presidential debates of 2016. Over the course of a term-long seminar exploring media coverage surrounding the 2016 presidential election, we became interested in measuring fluctuations in negativity across the last 40 years of presidential debates, with a specific emphasis on the 2016 debates. We were simultaneously interested in the tone of media coverage over the election cycle, examined through media outlets’ Facebook posts.

To explore these questions, we compiled and coded debate transcripts from presidential debates between 1976 and 2016. We estimated “tone” using computer-automated analyses. Using the Lexicoder Sentiment Dictionary (LSD), we counted the number of positive and negative words across all debates. We then ran the same analysis over news articles posted on Facebook during the election cycle, taken from the news feeds of major media outlets including ABC, CBS, CNN, NBC, and FOX. (Facebook data are drawn from Martinchek 2016.)

We begin with a simple measure of the volume of tone, or “sentiment,” in debates. Figure 1 shows the total amount of sentiment – the total number of positive and negative words combined, as a percentage of all words – in all statements made by each candidate across all debates. In contrast with what some may expect, the 2016 debates were not particularly emotion-laden when compared to past cycles. From 1976 through 2016, roughly 6.9% of the words spoken during debates are included in our sentiment dictionary. Hillary Clinton’s and Donald Trump’s speeches were essentially on par with this average; neither reached the peak of 8% (as in 2004) nor fell to the low of 6% (as in 2012).

Figure 1: Total Sentiment in Debates, 1976-2016

Figure 2 shows the percent of all sentiment words that were negative (to be clear: negative words as a percent of all sentiment words), and here we see some interesting differences.  Negativity from Democratic candidates has not fluctuated very much over time. The average percent of negative sentiment words for Democrats is 33.6%.  Even so, Hillary Clinton’s debate speeches showed relatively high levels of negativity, at 40.2%. Indeed, Clinton was the only Democratic candidate other than Mondale to express sentiment that is more than 40% negative.

Figure 2: Negativity in Debate Speeches, By Political Party, 1976-2016

Clinton’s negativity pales in comparison with Trump’s, however. Figure 2 makes clear the large jump in negativity for Donald Trump in comparison with past candidates. For the first time in 32 years, the sentiment-laden words used by a candidate are nearly 50% negative – a level similar to Reagan in 1980 and 1984. Indeed, when we look at negative words as a proportion of all words, not just words in the sentiment dictionary, nearly one in every ten words uttered by Trump during the debates was negative.
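As a rough illustration of how such dictionary-based measures can be computed, the sketch below counts positive and negative words in a text and returns both the total-sentiment share used in Figure 1 and the negativity share used in Figure 2. The two short word lists are hypothetical stand-ins for the Lexicoder Sentiment Dictionary, which is far larger, so this is a sketch of the general approach rather than our actual analysis code.

```python
# Illustrative dictionary-based tone scoring; the word lists are tiny
# placeholders, not the actual Lexicoder Sentiment Dictionary.
import re

POSITIVE = {"good", "great", "strong", "hope", "support", "win"}
NEGATIVE = {"bad", "terrible", "weak", "fear", "attack", "lose"}

def tone_measures(text):
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    sentiment = pos + neg
    return {
        # Figure 1-style measure: sentiment words as a share of all words
        "sentiment_share": sentiment / len(words) if words else 0.0,
        # Figure 2-style measure: negative words as a share of sentiment words
        "negativity": neg / sentiment if sentiment else 0.0,
    }

print(tone_measures("We will win this fight, but their terrible plan will lose."))
```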

The 2016 debates thus appear to be markedly more negative than most past debates. To what extent is the tone of debate content reflected in news coverage? Does negative speech in debates produce news coverage reflecting similar degrees of negativity?  Figure 3 explores this question, illustrating negativity (again, negative words as a proportion of all sentiment words) in the text of all Facebook posts concerning either Trump or Clinton, as distributed by five major news networks.

What stands out most in Figure 3 are the differences across networks: ABC, CNN, and NBC show higher negativity for Trump-related posts, while Fox shows higher negativity for Clinton-related posts.  CBS posts reflect a more neutral position.

Figure 3: Negativity in Facebook News Postings by Major Broadcasters, By Candidate, 2016

Clearly, political news content varies greatly across news sources. Trump’s expressed negativity in debates (and perhaps in campaign communications more generally) does not necessarily translate into more negative news content, at least by these measures. For instance, even as Trump expresses more negative sentiment than Clinton, coverage on Fox is more positive toward Trump. Of course, news coverage isn’t (and shouldn’t be) just a reflection of what candidates say. But these results make clear that the tone of coverage of a candidate need not be in line with the sentiment expressed by that candidate. Expressing negative sentiment can produce negative coverage, positive coverage, or (as Figure 3 suggests) both.

This much is clear: in line with our expectations, the 2016 presidential debates were among the most negative of all US presidential debates. The same seems true of the campaigns, or at least the candidates’ stump speeches, more generally. Although there was a good deal of negativity during the debates, however, the tone of news coverage varied across sources. Depending on citizens’ news source, this may or may not have been a fundamentally negative campaign cycle, even as the candidates themselves focused on negative themes. For those interested in the “tone” of political debate, our results highlight the importance of considering both politicians’ rhetoric and the mass-mediated political debate that reaches citizens.

This article was co-authored by the U-M capstone Communication Studies 463 class of 2016, which took place during the fall election campaign. Class readings and discussion focused on the campaign, and the class found themselves asking questions about the “tone” of the 2016 debates and the campaign more generally. Using their professor, Stuart Soroka, as a data manager/research assistant, students looked for answers to some of their questions about the degree of negativity in the 2016 campaign.