The Center for Political Studies (CPS) is a non-partisan research center. Posts are not endorsements.
ICYMI (In Case You Missed It), the following work was presented at the 2020 Annual Meeting of the American Political Science Association (APSA). The presentation, titled “Electoral Volatility in Competitive Authoritarian Regimes,” was part of the session “Elections Under Autocracy” on Sunday, September 13, 2020.
Until recently, there has been little need to measure electoral volatility (changes in vote shares among parties) in authoritarian regimes, because most conventional authoritarian regimes were either one-party or no-party systems. In general, high levels of volatility are considered a sign of instability in the party system and show that the existing parties are unable to build connections with their constituencies.
As a greater number of authoritarian regimes have permitted electoral competition and greater party autonomy, electoral volatility has become more salient. Multiparty elections in competitive authoritarian regimes are different from those in democracies, in that competition is more constrained and incumbents have the ability to manipulate the outcomes.
Electoral volatility can provide clues about the level of institutionalization of the ruling and opposition parties, as well as the level of support for the authoritarian incumbent. Low volatility suggests a high level of stability and institutionalization in the ruling party; high volatility is associated with weak party organizations, weak societal roots, and low levels of cohesion.
The authors tested the relationship between electoral volatility, which is the most commonly used measure of party system institutionalization, and the survival of competitive authoritarian regimes. To do this, they used a dataset of post-WWII authoritarian regimes that held minimally competitive multiparty elections with basic suffrage, identified using indicators from the Varieties of Democracy (V-Dem) project.
Specifically, the authors measure two types of electoral volatility in competitive authoritarian regimes: type-A volatility and type-B volatility. Type-A volatility measures the entry and exit of parties from the system. Type-B volatility measures the reallocation of votes or seats from one party to a competitor.
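The two components can be illustrated with the standard decomposition of the Pedersen volatility index into an entry/exit part and a reallocation part. The sketch below uses invented party names and vote shares; it illustrates the general measure, not the authors' exact code or data.

```python
# Illustrative sketch (invented data, not the authors' code): decomposing
# Pedersen-index volatility into type-A (party entry/exit) and type-B
# (vote shifts among continuing parties) components.

def volatility(prev: dict, curr: dict) -> tuple[float, float]:
    """Return (type_a, type_b) volatility between two elections.

    prev and curr map party name -> vote share; shares in each
    election sum to 1.
    """
    continuing = prev.keys() & curr.keys()
    exited = prev.keys() - curr.keys()
    entered = curr.keys() - prev.keys()

    # Type-A: share lost by exiting parties plus share gained by new
    # parties, halved as in the Pedersen index.
    type_a = (sum(prev[p] for p in exited) + sum(curr[p] for p in entered)) / 2
    # Type-B: absolute change among parties competing in both elections, halved.
    type_b = sum(abs(curr[p] - prev[p]) for p in continuing) / 2
    return type_a, type_b

election_t0 = {"Ruling": 0.60, "Opposition A": 0.30, "Opposition B": 0.10}
election_t1 = {"Ruling": 0.55, "Opposition A": 0.35, "New Party": 0.10}

a, b = volatility(election_t0, election_t1)
print(round(a, 3), round(b, 3))
```

Here the exit of "Opposition B" and entry of "New Party" drive the type-A component, while the modest shift between the ruling party and "Opposition A" drives type-B.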
Electoral authoritarian regimes are more stable when they tightly control the party system and the opposition is disorganized. The authors conclude that type-B volatility promotes authoritarian replacement, while type-A volatility is associated with a greater likelihood of a democratic transition. In addition to considering measures of party system institutionalization in authoritarian regimes, future case studies may shed more light on the link between electoral dynamics and outcomes.
ICYMI (In Case You Missed It), the following work was presented at the 2020 Annual Meeting of the American Political Science Association (APSA). The presentation, titled “Local Economic Malaise and the Rise of Anti-Everything Extremism,” was part of the session “Extreme Parties and Positions” on Saturday, September 12, 2020. Post developed by Hayden Jackson and Katherine Pearson.
Is it economic downturns or threats to cultural identity that lead some individuals to respond to populist and extreme-nationalist appeals? These explanations complement, rather than compete with one another, according to new research by Diogo Ferrari, Rob Franzese, Hayden Jackson, ByungKoo Kim, Wooseok Kim, and Patrick Wu.
Some people experiencing a decline in standard of living may react by supporting populist movements, including those that place blame for economic and social deterioration on out-groups. While economic downturns can spur support for nationalism, these factors are also deeply entwined with feelings of being left behind in a social and cultural context. This is especially true in hard-hit rural communities that feel neglected and misunderstood by policy makers and elites.
Whereas previous literature presents economic malaise and cultural or status threat as competing explanations for the rise of populist attitudes, the authors of this paper argue that these effects are not competing, but complementary. When the community experiences economic decline, some individuals will feel that their identity is under threat, and that they are looked down upon by elites. The feeling that their way of life is under attack leaves some individuals susceptible to extremist appeals. However, these appeals do not work on all members of the community equally; important differences may be explained by life experiences, education, personal income, and demographics, especially race.
One’s views and behaviors develop as a result of complex economic and cultural experiences. Some people will have experiences or personalities that predispose them to respond differently to economic and social shocks. For some, economic decline may trigger xenophobic, anti-elite reactions that will not be experienced by all members of the community.
To test the relationship between economic malaise and the perception of social threat, the authors conducted two empirical explorations. The first study reanalyzed data from Mutz 2018 to identify the effects of features like neighborhood decline or individual characteristics in subgroups with different responses to economic decline.
A second study focuses on structural differences that appear in Twitter data collected before and after automotive plant shutdowns in southeast Michigan and northeast Ohio. The data suggest that neighborhood economic shocks, like the closing of a factory, triggered rising extremist expression in at least some contexts. The increase in extremist-engaging Tweet activity was largest in the community around Lordstown, Ohio, which is predominantly white and rural/exurban. By comparison, the data showed slightly negative trends in extremist Tweets in the predominantly Black, urban community around Hamtramck, Michigan, which was also hit by a plant closing.
The authors hypothesize that the response to an economic shock, such as a plant closing, is likely to depend on the size of the closure “shock,” or how much impact it has on the community, as well as the social and demographic characteristics of the local workers, particularly the community’s urban or rural nature and its racial makeup. In the analysis, these factors were most relevant in determining an extremist-engagement response. The bigger the economic shock to the community, and the more white and rural the community, the more likely it is to see an extreme response.
ICYMI (In Case You Missed It), the following work was presented at the 2020 Annual Meeting of the American Political Science Association (APSA). The presentation, titled “Joint Image-Text Classification Using an Attention-Based LSTM Architecture,” was part of the session “Image Processing for Political Research” on Thursday, September 10, 2020. Post developed by Patrick Wu and Katherine Pearson.
Political science has been enriched by the use of social media data. However, automated text-based classification systems often do not capture image content. Since images provide rich context and information in many tweets, these classifiers do not capture the full meaning of the tweet. In a new paper presented at the 2020 Annual Meeting of the American Political Science Association (APSA), Patrick Wu, Alejandro Pineda, and Walter Mebane propose a new approach for analyzing Twitter data using a joint image-text classifier.
Human coders of social media data are able to observe both the text of a tweet and an attached image to determine the full meaning of an election incident being described. For example, the authors show the image and tweet below.
If only the text is considered, “Early voting lines in Palm Beach County, Florida #iReport #vote #Florida @CNN”, a reader would not be able to tell that the line was long. Conversely, if the image is considered separately from the text, the viewer would not know that it pictured a polling place. It’s only when the text and image are combined that the message becomes clear.
A new framework called Multimodal Representations Using Modality Translation (MARMOT) is designed to improve data labeling for research on social media content. MARMOT uses modality translation to generate captions of the images in the data, then uses a model to learn the patterns between the text features, the image caption features, and the image features. This is an important methodological contribution because modality translation replaces more resource-intensive processes and allows the model to learn directly from the data, rather than on a separate dataset. MARMOT is also able to process observations that are missing either images or text.
MARMOT was applied to two datasets. The first dataset contained tweets reporting election incidents during the 2016 U.S. general election, originally published in “Observing Election Incidents in the United States via Twitter: Does Who Observes Matter?” The tweets in this dataset report some kind of election incident. All of the tweets contain text, and about a third of them contain images. MARMOT performed better at classifying the tweets than the text-only classifier used in the original study.
In order to test MARMOT against a dataset containing images for every observation, the authors used the Hateful Memes dataset released by Facebook to assess whether a meme is hateful or not. In this case, a multimodal model is useful because it is possible for neither the text nor the image to be hateful, but the combination of the two may create a hateful message. In this application, MARMOT outperformed other multimodal classifiers in terms of accuracy.
As more and more political scientists use data from social media in their research, classifiers will have to become more sophisticated to capture all of the nuance and meaning that can be packed into small parcels of text and images. The authors plan to continue refining MARMOT, and expand the models to accommodate additional elements such as video, geographical information, and time of posting.
This year the American National Election Study (ANES) will conduct its 19th time series study of a presidential election. In every U.S. presidential election since 1948, the ANES has conducted pre- and post-election surveys of a large representative sample of American voters.
On August 12, 2020, Vincent Hutchings gave a talk outlining the history of the study, and why it is the “gold standard” of political surveys. You can view a recording of his talk below, and view tweets about the talk here.
The history and significance of the ANES
The ANES was originally launched at the Institute for Social Research at the University of Michigan. Since 2005 the study has been a collaboration between the University of Michigan and the Institute for Research in the Social Sciences at Stanford University.
Since 1977, the ANES has been funded by the National Science Foundation. It is used by scholars as well as high-school students, college students, and journalists. The data are made publicly available online for free as soon as they are processed after the election; principal investigators of the study do not receive privileged access to the survey data.
The ANES aims to answer two fundamental questions: How do citizens select the candidate they vote for? And why do some citizens participate in politics (e.g., vote, work on campaigns, etc.) while others do not? These questions are answered with nationally representative survey data.
The value of the ANES comes not only from the care and precision brought to designing questions, but also from the way the study balances continuity and innovation. In order to achieve this balance, the ANES asks identical questions over time about vote choice, turnout, party identification, ideology, political information, and attitudes about candidates. But even as questions are preserved over time, new questions are added about issues as they arise. The investigators and board members solicit public input on new questions and determine which ones will add value.
Recent data trends
Professor Hutchings outlined findings from some of the questions that were recently added to the ANES, including questions about the Black Lives Matter movement and police misconduct.
Respondents to the 2016 ANES were asked to rate the Black Lives Matter movement on a 0-100 “feeling thermometer” scale. Ratings above 50 degrees signal favorable feelings toward the group; ratings below 50 degrees signify unfavorable feelings. Respondents rate the group at the 50-degree mark if they don’t feel particularly warm or cold toward it.
Hutchings points out that there are important partisan and racial divides in the results shown above. For example, Black Republicans have warmer feelings toward the Black Lives Matter movement than white Democrats in 2016. This question will be repeated in the 2020 study, giving researchers a way to track changes in perceptions of the movement over time.
Attitudes toward the Black Lives Matter movement were a very strong predictor of the candidate a respondent would vote for in 2016. As Hutchings showed using the graphic below, voters who supported the Black Lives Matter movement were much more likely to support Hillary Clinton for president.
Similarly, perceptions of police violence were correlated with voter preference. Those respondents who believed that whites were treated better by the police were much more likely to support Hillary Clinton than respondents who believed that police are unbiased.
The value of the ANES
Professor Hutchings concluded his talk by reflecting on the value of the ANES. “It allows us an opportunity to assess the health of our democracy,” he said. “We can assess levels of trust in government, levels of perceived corruption in government, levels of racial animus, levels of religious and gender intolerance. We can assess how things have changed – or how things have not changed – over time. And we can only do this as a consequence of this study.”
Michael Traugott, research professor at the Center for Political Studies, was featured on the Michigan Minds podcast. In the recording and transcript below, Professor Traugott discusses the timing of the presidential election and concerns about fraud in mail-in voting, after President Trump tweeted about both topics on Thursday, July 30, 2020.
A transcript of Michael Traugott’s remarks follows.
There’s been quite a bit of research about voting by mail. I actually participated in a research project in Oregon in 1995, the first all-mail election, and there is no indication that mail-in voting produces any kind of fraud. For that matter, we have almost no fraud in American elections.
Having a vote-by-mail election is a complicated enterprise. Any election is an audit process in which the security of the ballots has to be maintained. Vote-by-mail elections actually cost more than a machine-based election because it requires more staff, the votes come in over a longer period of time, they have to be secured, and then counted. So it’s just as safe and secure, with proper preparation and with sufficient funding, as any other machine election.
One thing that might be going on is that the President is trying to run out the clock, in the sense that in order to have a secure vote-by-mail election, we probably have to have the funding in place and the local election administrators have to be organized by September. So there’s really only four or five weeks left in order to prepare for our mail election or to have a large number of absentee ballots printed and available.
It’s actually a kind of a fable or a myth that we have national elections in the United States. We really have a series of state and local elections held on the same day. But all of the rules about how you register, how you can get an absentee ballot, how many precincts there are, all of this is regulated by local officials. So while each local official is responsible for the election in their own jurisdiction, it takes a lot of coordination to get the votes counted, for example, at the state level.
Congress passed a law in 1845 as a way of regularizing the Electoral College procedures, and they said that federal elections will be held on the Tuesday after the first Monday in November in even-numbered years, and that has set the calendar for all of our elections. Federal elections have never been altered or postponed. Sometimes under unusual circumstances a local election has been postponed, for example after a storm or hurricane or something like that. But there is no way that the president of the United States can change the date of an election. It requires an act of Congress.
I think the tweets are strategic. Donald Trump uses these tweets to distract journalists, for example, from covering other important elements of the news of the day. They also have purpose in appealing to his particular base but they don’t serve any useful function for the general public. And in fact, I would be concerned that tweets about the quality of voting in the United States or the need to postpone election day would increase distrust in the public about how our government functions. That’s clearly a bad thing.
I think that the Trump administration is trying to question the validity of the election in November, the accuracy of the vote count and other related factors. It’s all of a kind of debilitating message to American democracy.
Experts from the Center for Political Studies are available to discuss current topics related to elections, politics, international affairs, and more. Click here to find experts.
Our research finds that the label used to describe an act of violence can change perceptions of it.
By Kiela Crabtree and Corina Simonelli
With the fifth anniversary of the Mother Emanuel A.M.E. Massacre in Charleston, South Carolina, the nation still grapples with how to understand and remember the nine people killed in their house of worship on June 17, 2015.
The perpetrator of those murders has been sentenced to death, after being convicted on federal hate crime charges. But, in the aftermath of the killings, there was public uncertainty about how to describe what occurred. The murders certainly met legal definitions of what constitutes a hate crime, but there seemed to be a need for stronger language to describe the massacre.
President Barack Obama, in his eulogy for Reverend Clementa Pinckney, remarked that the massacre at Mother Emanuel A.M.E., “was an act that drew on a long history of bombs and arson and shots fired at churches, not random but as a means of control, a way to terrorize and oppress…”
In a previous study, we find evidence to suggest that violence against black people is more likely to be classified by the public as a “hate crime,” but that such incidents are also perceived as being isolated, less destructive, and less impactful on society at large than an act of terrorism. This suggests that the label of “hate crime” might minimize the seriousness of racial violence and imply that those incidents do not stem from the kinds of widespread networks and ideologies associated with terrorism.
Does the label used to describe acts of violence such as these influence perceptions of the event? Here’s what our research suggests.
Labels shift emotional responses to racial violence
Our January 2020 survey experiment asked 1,012 subjects to read a brief breaking news story about a fictional shooting with several casualties. In the experiment, we alternated whether we described the incident as a “hate crime,” a “terrorist attack,” or a “mass shooting.” We also alternated the race of the perpetrator and the victims, describing them as either white or black. Subjects read a tweet about the fictional incident and then answered questions about their emotional reactions, their own perceived likelihood of victimization, and what punishments they believed were warranted by the attack.
We find that subjects reported higher levels of anger after reading about an incident labeled as a “hate crime,” especially when a white male perpetrator targeted a black university. We find no distinct differences in anger when comparing “terrorism” and “mass shooting,” nor under those labels do the race of the victim or perpetrator influence levels of anger.
While perceived likelihood of personal victimization is slightly higher among those who saw the hate crime condition with a white perpetrator, this variable is not strongly influenced by the treatments.
We also find that support for the death penalty to punish the shooting is significantly lower among subjects who read about a hate crime perpetrated by a black person.
The interaction of race and label matter as well
But, do all people perceive violence the same way? We cannot take for granted that violence, and racial violence at that, is viewed the same way by members of different racial groups, especially when long legacies of violence are in play.
Therefore, we also look at how these labels might elicit distinctive responses among white and non-white participants. Stratifying our sample this way, we find that there are distinct responses among members of different racial groups. Non-white respondents indicated greater support for the death penalty to punish the crime in all conditions that had a white perpetrator, regardless of the label. However, we see little difference across conditions among white subjects.
Racial group attachment moderates these effects in a way that we might expect: the lowest support for the death penalty is among white subjects with high racial group attachment who read about a hate crime committed by a white perpetrator. Among non-white subjects we see that higher racial attachment is associated with greater support for the death penalty in all conditions with a white perpetrator. Support is consistent in conditions with a black perpetrator.
Additionally, non-white subjects who read about an act of terrorism committed by a white actor reported a higher likelihood of victimization than white respondents in the same condition.
We also find that anger is stable for both white and non-white subjects who saw a terrorism condition, regardless of whether the perpetrator is white or black.
Anger increases slightly among non-white people who saw a mass shooting targeting black people. But, anger is significantly higher among non-white people who read about a hate crime targeting black people, when compared to those who read about a hate crime targeting white people. We see no significant changes among white subjects across these conditions.
Labels can send a powerful message to the public
While the label “terrorism” has come to be associated with acts of violence committed by Islamists, the term has long been used by black people to describe white violence against them. Regardless of legal parameters, we wondered if using the term “hate crime” to instead describe these acts minimizes public perceptions about them. Our research suggests that calling an act of violence a “hate crime” has little effect on perceptions of violence for white Americans. For non-white Americans, however, we find that this label is associated with greater anger in reaction to the incident.
The boundaries of the law determined the charges levied against the perpetrator of the Mother Emanuel A.M.E. killings, but the press, politicians, and the public grappled for language to describe them. Our research suggests that while the term “terrorism” seems more rhetorically evocative of a long history of violence against black people, it does not necessarily evoke greater anger than use of the term “hate crime” or “mass shooting.” In fact, among non-white respondents, “hate crime” elicits the greatest anger.
Emotions hold powerful political potential; anger in particular has been shown to incite political participation. The words used to describe violence do matter, for the images and narratives they conjure, as well as the emotions they evoke.
Kiela Crabtree (@kielacrabtree) is a PhD candidate in Political Science at the University of Michigan.
Corina Simonelli (@CorinaSimonelli) is a PhD candidate in Political Science and the Ford School of Public Policy at the University of Michigan.
What do voters really learn from the media about presidential candidates? A new book by experts from the University of Michigan, Georgetown University, and Gallup, Inc., Words That Matter: How the News Media Environment Allowed Trump to Win the Presidency, offers in-depth analysis and conclusions about the information that mattered most in the 2016 presidential election.
Words That Matter is the collaborative work of eight authors: Leticia Bode, Ceren Budak, Jonathan M. Ladd, Frank Newport, Josh Pasek, Lisa O. Singh, Stuart N. Soroka, and Michael W. Traugott. The authors have expertise in a range of disciplines including public opinion, communications, public policy, and computer science, and they take different approaches to the study of campaign media. As a result, the book is nuanced in its handling of news content, social media posts, and survey responses.
There are a number of reasons that the 2016 presidential campaign was exceptional. The media landscape has changed dramatically in recent years, with many people accessing and sharing news through social media. The authors find that news coverage during the 2016 campaign “was more negative than in recent previous presidential campaigns, consistent with these candidates being the most personally unpopular nominees in polling history.”
Words That Matter guides readers through the media’s process of producing information, how that information gets to voters, and what information voters actually absorb. The authors argue that advances in media technology call for new ways to measure the information environment. They address this challenge through innovative surveys and content-analytic research techniques.
A key finding of the work is that the largely negative campaign played out differently for the two major party candidates: Donald Trump was confronted with a shifting but largely uninfluential series of scandals, whereas Hillary Clinton faced a single, stable, and influential scandal involving her use of a private email server. The authors show that the long-standing nature of the email scandal made it especially sticky in the public mind. They write “Even when there was other news about Hillary Clinton, the public thought about ‘her emails’—for months and months—indeed, starting before the election campaign was even underway.”
Some scholars are skeptical that the media have the power to influence votes, whereas others believe that campaign messaging can have a large effect. The authors show that not all voters are equally open to influence. The most politically-engaged voters are steadfast, while the least engaged are difficult to reach at all. “The fact that middle- and low-engagement voters are the most susceptible to influence,” write the authors, “also helps us understand why the topics given heavy attention in the media environment can be consequential.”
News stories that are repeated over a long period of time are the most likely to be noticed by people who are not highly engaged with politics. The authors also find that telling people how to vote is less effective than simply changing the subject. Voters who don’t follow the news carefully may not remember the details of various scandals, but they do tend to notice if one specific issue garners sustained coverage. Those sustained scandals stand out as more important when voters make their choice.
The authors conclude that media content can indeed shift voter behavior for some voters, and that in a close election like the 2016 presidential election, these effects can be of real consequence.
“Not which ones, but how many” is a phrase used in list experiment instructions, where researchers tell participants, “After I read all four (five) statements, just tell me how many of them upset you. I don’t want to know which ones, just how many.” In retrospect, I was surprised to see that this phrase encapsulates not only the key research idea, but also my fieldwork adventure: not which plans could go awry, but how many. The fieldwork experience could be frustrating at times, but it led me to uncharted terrain and brought insights into the research contexts. This valuable exposure would not have been possible without support from the Roy Pierce Award and guidance from Professor Yuki Shiraito.
Research that I conducted with Yuki Shiraito explores the effect of behavior on political attitudes in authoritarian contexts to answer the question: does voting for autocracy reinforce individual regime support? To answer this question, two conditions need to be true. First, people need to honestly report their level of support before and after voting in authoritarian elections. Second, voting behavior needs to be random. Neither situation is probable in illiberal autocracies. Our project addresses these methodological challenges by conducting a field experiment that combines a list experiment and a randomized encouragement design in China.
In this study, list experiments are used instead of direct questions to measure the respondents’ attitudes toward the regime in the pre- and post-election surveys. The list experiment is a survey technique to mitigate preference falsification by respondents. Although the true preference of individual respondents will be hidden, the technique allows us to identify the average level of support for the regime within a group of respondents. In addition, we employ a randomized encouragement design in which get-out-the-vote messages are randomly assigned, which helps us estimate the average causal effect of the treatment. For the effect moderated by prior support for the regime, we estimate the probability of prior support using individual characteristics and then estimate the effect for prior supporters via a latent variable model.
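The basic logic of a list experiment can be sketched with the standard difference-in-means estimator: a control group counts how many of J baseline items apply to them, a treatment group counts across the same items plus the sensitive one, and the gap between the group means estimates the prevalence of the sensitive attitude. The counts below are invented for illustration and do not come from this study.

```python
# Illustrative sketch with invented data: the difference-in-means
# estimator for a list experiment. Control respondents report how many
# of J = 4 baseline items upset them; treatment respondents see the
# same list plus the sensitive item (J + 1 = 5 items). The difference
# in mean counts estimates the share of respondents for whom the
# sensitive item applies.
from statistics import mean

control_counts = [1, 2, 0, 2, 1, 3, 2, 1]    # counts out of 4 items
treatment_counts = [2, 3, 1, 2, 2, 4, 3, 2]  # counts out of 5 items

estimate = mean(treatment_counts) - mean(control_counts)
print(estimate)
```

Because only the total count is reported, no individual answer to the sensitive item is observable, yet the group-level estimate is still identified.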
While the theoretical part of the project went smoothly and the simulation results were promising, the complications of fieldwork exceeded my expectations. For the list experiment survey, the usually reticent respondents started asking questions about the list questions immediately after the questionnaires were distributed. Their queries took the form of “I am upset by options 1, 2, and 4, so what number should I write down here?” This was not supposed to happen. List experiments were developed to conceal individual respondents’ answers from researchers. By replacing the question of “which ones” with the question of “how many,” respondents’ true preferences are not directly observable, which makes it easier for them to answer sensitive questions honestly. Respondents’ eagerness to tell me their options directly defeats the purpose of this design. Later I learned from other researchers that the problem I encountered was common in list experiment implementations, regardless of research contexts and types of respondents.
The rationale behind respondents’ desire to share their individual options despite being given a chance to hide them is thought-provoking. Is it because of the cognitive burden of answering a list question, which is not a familiar type of question for respondents? Or is it because the sensitive items, despite careful construction, raise the alarm? Respondents are eager to specify their stance on each option and identify themselves as regime supporters: they do not leave any room for misinterpretation. To ease the potential cognitive burden, we will try a new way of implementing the list experiment in a similar project on preference falsification in Japan. We look forward to seeing whether it improves respondents’ comprehension of the list question setup. The second explanation is more concerning, however. It suggests limits on the scope conditions under which list experiments are a valid tool for eliciting truthful answers from respondents. Other, more implicit tools, such as endorsement experiments, may be appropriate in those contexts for gauging respondents’ preferences.
Besides the intricacies of the list experiment, carrying out an encouragement design on the ground was challenging. We had to modify the behavioral intervention to accommodate the needs of our local collaborators, and the realized sample size was only a fraction of the initially negotiated size. Even with these compromises, the implementation was imbued with uncertainty: meetings were postponed or rescheduled at the last minute, and instructions from local partners were sometimes inconsistent or contradictory. The frustration was certainly real. But the pain made me cognizant of the judgment calls researchers have to make behind the scenes. The amount of effort required to produce reliable data is admirable. And as a consumer of data, I should always interpret it with great caution.
While the pilot study did not lead directly to a significant finding, the research experience and the methods we developed have informed the design of a larger project that we are currently conducting in Japan.
I always thought of doing research as establishing a series of logical steps between a question and an answer. Before I departed for the pilot study, I made a detailed timeline for the project with color-coded tasks and flourish-shaped arrows pointing at milestones of the upcoming fieldwork. When I presented this plan to Professor Shiraito, he smiled and told me, “When doing research, it is generally helpful to think of the world in two ways: the ideal world and the real world. You should be prepared for both.” Wise words. Because of this, I am grateful for the Roy Pierce Award for offering the opportunity to catch a glimpse of the real world. And I am indebted to Professor Shiraito for helping me see the potential of attaining the ideal world with intelligence and appropriate tools.
Post developed by Katherine Pearson and Mai Hassan.
States can exert powerful social control over citizens. In her newly-published book, Regime Threats and State Solutions, Mai Hassan demonstrates how leaders use their authority to manage bureaucrats to advance their policy and political goals.
By controlling which bureaucrats are hired, where they’re posted, how long they stay in a post, and who gets fired or promoted, leaders can induce the bureaucratic behaviors that will help keep them in power.
Focusing on Kenya since independence, Hassan uses qualitative and quantitative data gleaned from archival records and interviews to show how the country’s different leaders have strategically managed the public sector. The data show that the strategic management of bureaucrats existed under the one-party authoritarian regime beginning with Kenya’s independence in 1963, and continued after Kenya’s transition to an electoral regime in 1991. Under both regime types, leaders were able to co-opt the societal groups needed for support and coerce the groups most likely to challenge the regime.
Hassan examines how leaders rely on bureaucrats to manage popular threats against the leader, such as protests and strikes. First, she argues that leaders assign bureaucrats with deep social bonds to those areas where the leader needs to co-opt the local population. These deep social bonds compel bureaucrats to work on behalf of the area. But in areas that need more coercion, the leader tends to prevent the posting of bureaucrats with deep local roots, because those who have deep roots will be unwilling to coerce locals.
Second, she finds that the parts of the country that are most strategically important for the leader — and thus, the areas of the country where bureaucratic compliance is needed most — are staffed by the most loyal bureaucrats, those who are most willing to help keep the leader in office. Leaders can also neutralize the risks of disloyal bureaucrats by carefully managing where potentially disloyal officers are posted and how long they stay in their posts.
Why would a leader hire or promote disloyal bureaucrats in the first place? Hassan addresses this question by showing that most state bureaucracies are not actually packed with the leader’s in-group members, who tend to be the most loyal. Elite threats, such as coups, tend to be more pressing than popular ones. Leaders can appease rival elites by hiring and promoting bureaucrats who are loyal to elites other than the leader. Strategically posting and shuffling bureaucrats allows the leader to recruit potentially disloyal bureaucrats in order to temper elite threats, while still relying on loyal bureaucrats to prevent popular threats where they are most likely to emerge.
Overall, Hassan’s analysis shows how even states categorized as weak have proven capable of helping their leader stay in power. Her work demonstrates how the strategic management of bureaucrats solves both elite and popular threats, and in doing so, highlights why bureaucrats must be taken seriously. States may assert power, but states do not act: bureaucrats do.
Post developed by Kelly Askew and Katherine Pearson
Maasai Remix, a documentary directed by the award-winning team of filmmaker Ron Mulvihill and anthropologist Kelly Askew, follows three Maasai individuals who confront challenges to their community by drawing strength from local traditions, modifying them when necessary, and melding them with new resources.
The three subjects of this documentary live in different settings. Adam Mwarabu advocates for Maasai pastoralists’ rights to land in international political spheres. Evalyne Leng’arwa pursues a college education in the U.S., having convinced her father to return 12 cows to a man contracted to marry her. Frank Kaipai, the village chairman, faces opposition as he promotes secondary school education and tries to save the village forest. Sharing a goal of Maasai self-determination in an ever-changing world, Adam, Evalyne, and Frank innovate while maintaining an abiding respect and love for their culture.
A companion film produced by Kelly Askew, entitled The Chairman and the Lions, focused on the many challenges faced by the Parakuyo Maasai, including marauding lions, land grabbers, illegal loggers, male youth out-migration, and lack of education. By contrast, the message of Maasai Remix is one of hope and innovation, and of connected yet individual initiatives in addressing communal challenges. It champions the use of tradition as a mode of community development, and as such offers a rebuttal to the widespread view that culture is always and only an obstacle to development initiatives. Quite the contrary: Adam, Evalyne, and Frank illustrate through word and deed how traditions can be deployed as tools of empowerment. By integrating their culture with modernist goals in a manner not unlike the remixes of hip-hop DJs, Maasai Remix celebrates the achievements of these individuals and the lifeways of their community.