By Taylor Black
Theatre Journal’s timely special issue on Post-Fact Performance, published in December 2018, brings together a number of fast-paced, volatile subjects, particularly electoral politics and digital-era communication. It is therefore no surprise that even the quickest of academic timelines requires updates. In my essay for the print journal, titled “The Numbers Don’t Lie: Performing Facts and Futures in FiveThirtyEight’s Probabilistic Forecasting,” I examined the US poll aggregator and political forecast news site FiveThirtyEight’s coverage of the 2016 US presidential election through a performance studies lens, focusing particularly on how the public experienced data-driven forecasts as predictions of the future, inflected with theatrical tropes of soothsaying. One notable contributor to the sense of data journalists as fortune tellers in 2016, evidenced in FiveThirtyEight’s own reflections as well as in external analysis, was partisan-motivated over-reading of the percent probability offered by the election model. These concerns are echoed in postmortem conversations among political and social scientists and other forecasting projects. Editor-in-chief Nate Silver and his team had difficult questions to answer going into the also-contentious 2018 midterms; perhaps most urgent among them were how to dampen expectations and whether it is possible to shift readers’ desires to favor uncertainty.
In their 2018 midterms coverage, particularly of the race for control of the House of Representatives and later the Senate, Silver and his team altered their election forecasting model to directly address some of the coverage questions raised in 2016. Some of these changes reflect a difference in kind, as tracking a large number of congressional races is notably different from covering a presidential race. Other changes, however, are intended to affect reader perceptions of the forecast, and these suggest a possible reopening of the question of how data journalism performs future prediction.
The principles and overall methodology of the statistical analysis, Silver says in the methodology of the 2018 House forecast, are familiar from previous models. What has changed in 2018 is how readers are encouraged to approach the model, and what kinds of interpretations are made more or less accessible. Importantly, the election forecast remains probabilistic and continues to speculate on future possibilities, so the potential remains for reading uncertain gestures about future events as a vision of the future. The range-based uncertainty of forecasting, as Silver states, is crucial to FiveThirtyEight’s effort “to develop probabilistic estimates that hold up well under real-world conditions.” However, the way it gestures toward future outcomes has changed substantially.
One of the most misread features of the 2016 model was the “Now-cast,” which projected “who would win if the election happened tomorrow” from any given day’s data. The Now-cast thus performed the most overt act of fortune telling: it offered a percentage outcome based strictly on available data, generating a confusion much lamented by Silver on Twitter during 2016, as readers took the Now-cast for a forecast of November. In 2018, the Now-cast has been cut entirely, and the forecast’s aesthetic has shifted notably. Rather than foregrounding a percentage, the primary visual indicator is a probability bar graph. This visualization underscores the scientific nature of data analysis rather than offering the desirable, but more interpretation-prone, glimpse into the future that the Now-cast’s percentage provided. The approach also speaks to ongoing questions in data visualization, where the aesthetics of “beautiful” data intersect with the problem of conveying information accurately, a challenge of form familiar from artistic debates in general. The 2018 model is offered in three degrees of complexity, what Silver calls “the cheeseburger menu.” Its three variations (“lite,” “classic,” and “deluxe”) vary the amount of non-polling peripheral data used (for instance, candidates’ fundraising figures), further diversifying a method introduced in 2016. Perhaps most importantly, the 2018 model does away with percentage forecasts altogether, speaking instead in terms of odds (7 in 9, 3 in 4, and so on). Fractional odds, while less precise, offer a more stable form by which to transmute trends into possible outcomes. The stated goal of this move addresses one of Silver’s concerns expressed throughout the post-2016 period: that readers, and particularly the non-data news media, misrepresent the uncertainty inherent in a percentage, or in a forecast in general. Simply put, readers are not interested in uncertainty and thus ignore it.
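For readers curious about the arithmetic behind the shift from percentages to fractional odds, a minimal sketch in Python illustrates how a probability can be coarsened into a “k in n” phrase. The function name, rounding strategy, and denominator cap here are my own illustrative choices, not FiveThirtyEight’s actual code:

```python
from fractions import Fraction

def to_odds(probability: float, max_denominator: int = 10) -> str:
    """Approximate a probability as coarse 'k in n' odds.

    Coarse fractions deliberately trade precision for stability:
    small day-to-day swings in the model rarely change the phrase,
    which is part of what makes odds feel less like a prophecy.
    """
    frac = Fraction(probability).limit_denominator(max_denominator)
    return f"{frac.numerator} in {frac.denominator}"

print(to_odds(0.75))  # 3 in 4
print(to_odds(0.30))  # 3 in 10
```

The coarseness is the point: where a percentage invites readers to track every decimal, a fraction such as “3 in 4” foregrounds the one-in-four universe in which the favorite loses.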
The reframing, he argues in the model methodology, places uncertainty at the forefront of the reader’s evaluation. Rather than performing one future (and then relativizing its certainty), this approach performs multiple futures simultaneously.
These decisions and their potential impacts are chronicled in a special edition of FiveThirtyEight’s Politics Podcast, called “Model Talk.” In these episodes, Silver and producer Galen Druke took listener questions and discussed the minutiae of the model as it unfolded in the 2018 race. Throughout these conversations, the uncertainty was not only built into the model, but actively performed. Silver, for instance, spoke directly to the audience in the August 16th inaugural edition, explaining how a three-in-four probability (at the time, the Democrats’ probability of gaining control of the House) demonstrates that “if you play out this universe . . . four times, they [Republicans] win once on average . . . that’s in the Hillary Clinton zone” of the odds in 2016. The expression of uncertainty, he argued, was attainable through “commonsensical activities, things that are only a 75 percent chance are things we would confirm, or be careful about, and be fairly unsurprised if they failed to occur.” Uncertainty and the consequences of prior forecasts are repeated themes. In the August 30th edition, Silver mentioned that “it’s easier to explain the uncertainty when you say . . . ‘three in ten chance,’—by the way, do you know who else had a three in ten chance of becoming [president]?” The Model Talk episodes, in addition to demonstrating forecasting’s constant, complex relationship with uncertainty, offered a playful way to dissect the consequences of forecast reporting. Silver even expressed sentiments shared by the 2016 commenters: “I suppose I wish that every forecast could be 100–0, but that’s not the way the world works . . . and it probably shouldn’t be . . . things can change, people have to vote.” The Model Talk conversations not only analyzed the impact of forecasting in the world, but performed an active, relatable engagement with both the audience and the futurity of forecasting.
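Silver’s “play out this universe four times” framing can be sketched as a simple Monte Carlo simulation. The code below is an illustration of the underlying statistical idea, not a reconstruction of FiveThirtyEight’s model; the function name and seed are my own choices. It shows that even a 3-in-4 favorite loses in roughly a quarter of simulated universes:

```python
import random

def underdog_share(p_favorite: float, n_universes: int, seed: int = 538) -> float:
    """Simulate n independent 'universes' in which the favorite wins
    with probability p_favorite; return the share won by the underdog."""
    rng = random.Random(seed)  # seeded for reproducibility
    underdog_wins = sum(rng.random() >= p_favorite for _ in range(n_universes))
    return underdog_wins / n_universes

# With a 3-in-4 favorite, the underdog wins roughly one universe in four.
print(f"underdog share: {underdog_share(0.75, 100_000):.3f}")
```

The rhetorical force of the podcast lies in exactly this exercise: an outcome that sounds nearly certain as “75 percent” becomes, replayed across many universes, an event whose failure should be “fairly unsurpris[ing].”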
The 2018 model’s new aesthetic also gestures toward accessibility while leaning into the science of data journalism, particularly through the “cheeseburger” language and range of options. Overall, FiveThirtyEight seems to be (implicitly) rising to the performance challenges instantiated by the confusion in 2016.
While there are a number of potentially relevant factors at play, notably that midterm elections generally receive less discussion in US politics, the conversation around the FiveThirtyEight forecast and future prediction seems to have notably dampened. Silver himself noted in the Model Talk on September 20th that “there aren’t as many models anymore, and I think the people who are looking at the data tend to agree with one another to a greater extent than they did in 2016. . . . There just isn’t as much controversy as there was in the past,” before qualifying, “but I like conflict, so. . . .” The kinds of tragic, grand assertions of outcomes that were hallmarks of Twitter conversation in 2016 are replaced by calmer, more data-focused, and notably less popular conversations (with far fewer likes and retweets thrown around). The lack of coverage perhaps suggests a willingness to chasten the desire to see the future in favor of strong civic practices.
It remains unclear whether these reframings will have an impact on electoral outcomes, voter participation, or media coverage following the results of the November election (and, again, this writing will already be out of date by the time of its publication), but the emergent changes suggest a shift in the perspective of readers and writers alike when considering the role of prediction speech in influencing election outcomes. Perhaps Silver’s message that while, yes, polls do affect voter behavior, “shitty media coverage affects voter behavior, so our job is to provide non-shitty media coverage” is its own, quieter form of promising a (slightly) better future.