
Decolonizing Climate Code / Decolonizing Climate (2022 Code Critique)

Any critique of the code that generates the predictive climate data used by the IPCC must be done with care.  

As Mark Marino has explored in chapter four of Critical Code Studies (MIT Press, 2020), there are some rather nasty media assemblages out there eager to take snippets of climate code out of context. This can make a person a bit nervous about any attempt to offer even constructive critique of this code. (This is why I'm posting this so late into the working group... I've spent the last four weeks fretting over this, worried that even the most minor poetic flourish on my part might be taken out of context.)

The "aura of anxiety" surrounding this code is very much the result of over four decades of disinformation campaigns directed against climate science (Oreskes and Conway 2010) by those who wish to reap temporary benefits from fossil fuel capitalism (Storm 2009). These "merchants of doubt" are quite ready to use any excuse they can find to claim the climate scientists are "overreacting."

That is why I feel it is important to say, right out of the gate, that from what I learned while exploring this code, the scientists are actually underreacting.

If anything, the scientists have been, perhaps, too timid with their presentation of the severity of what these data indicate.


Any discussion of climate struggle should also be prefaced with a critique of "crisis epistemology," as has been offered by Indigenous philosopher Kyle Powys Whyte (Potawatomi) in "Against Crisis Epistemology" (2021). Whyte blends Indigenous histories with critical theory to argue against the use of "crisis" to frame the climate struggle.

This is not because the struggle to stabilize the climate isn't dire (It is.), but rather, because the rhetoric of "crisis" has been used for centuries to rob Indigenous groups of land rights. This issue should be of particular concern to environmentalists, since research continues to show that giving Indigenous people their land back may be one of our best tactics to mitigate climate change and other forms of ecological peril (Etchart 2017, ICCA 2021).

This critique of "climate crisis epistemology" was brought into sharp relief in settler-occupied Washington State in 2021, when colonial magistrate Jay Inslee evoked the rhetoric of "climate crisis" in a broad move that robbed Indigenous groups of decision-making power over their ancestral lands. This move happened against the backdrop of the colonial government's practice of regularly allowing corporations to remove ecosystems on public lands, as exemplified by the Washington State DNR regularly permitting corporations to clearcut publicly-owned "legacy forests," which contain mature trees over 120 years old with diameters wider than four feet (Seattle Times 2021, C4RF.org), making them invaluable carbon sinks (Whitehead 2011).

With one face, the colonial government evokes "climate crisis" as its reason to remove Indigenous decision-making powers over their ancestral lands, while, with its other face, it gives logging firms free rein to clearcut public forests. As Kyle Whyte has emphasized, the "climate crisis" rhetoric has simply become a new way to justify a "state of exception" (Agamben 2005) in which Indigenous sovereignty is suspended, while those who actively harm the climate and ecology are given free rein.

Again, this shouldn't undermine the seriousness of climate struggle—it simply means lending more care to how this struggle is communicated, while working to better center Indigenous voices in climate discourse.


Any conversation about climate in these times must also be prefaced with a critique of the concept of the "population bomb," as has been laid out by Emily Klancher Merchant in her book Building the Population Bomb (Oxford University Press, 2021).

The myth of “the population bomb,” or the belief that population in and of itself drives ecological destruction, remains pervasive among climate activists, scientists, and even some thinkers in the humanities. There is little evidence, however, that more people inherently generate more emissions, or that reducing the number of people on the planet would reduce emissions.

As Merchant shows, the concept of the "population bomb" was invented by eugenicists in the middle of the twentieth century, and then was promoted by American businessmen as a means of stalling environmental regulation. Reductive equations that link population and emissions distract us from publicly addressing the activities that directly fuel emissions, while foreclosing upon successful tactics for reducing emissions that simply aren't visible when taking such a hyperopic view.


File: zchunk_L251.en_ssp_nonco2.R
Programming Language: R
Developed: 2017
Authors: Pralit Patel, pkyle, kvcalvin, mbins
Source File: https://github.com/JGCRI/gcam-core/blob/master/input/gcamdata/R/zchunk_L251.en_ssp_nonco2.R
Interoperating Files: GCAM (https://github.com/JGCRI/gcam-core/releases), The SSP database V.2 (https://tntcat.iiasa.ac.at/SspDb)
Note: In the parlance of climate modellers, this is a run of the GCAM5 under SSP1/5, SSP2, and SSP3/4.

# Copyright 2019 Battelle Memorial Institute; see the LICENSE file.

#' module_emissions_L251.en_ssp_nonco2
#'
#' Produce regional non-CO2 emissions coefficient data for SSPs 1/5, 2, and 3/4 as well as a GDP control.
#'
#' @param command API command to execute
#' @param ... other optional parameters, depending on command
#' @return Depends on \code{command}: either a vector of required inputs,
#' a vector of output names, or (if \code{command} is "MAKE") all
#' the generated outputs: \code{L251.ctrl.delete}, \code{L251.ssp15_ef}, \code{L251.ssp2_ef}, \code{L251.ssp34_ef}, \code{L251.ssp15_ef_vin}, \code{L251.ssp2_ef_vin}, \code{L251.ssp34_ef_vin}. The corresponding file in the
#' original data system was \code{L251.en_ssp_nonco2.R} (emissions level2).
#' @details This section takes in the non-CO2 emissions factors for SSP 1/5, 2, and 3/4 across sectors.
#' First, create data that spans the years 2010-2100 in five year increments by interpolation of input data.
#' Next, add emissions controls for future years of vintaged technologies for SSP emission factors.
#' Then, add columns that have regional SO2 emission species.
#' A GDP control of regional non-CO2 emissions in all regions is also created.
#' @importFrom assertthat assert_that
#' @importFrom dplyr filter group_by left_join mutate select semi_join
#' @author CDL May 2017
module_emissions_L251.en_ssp_nonco2 <- function(command, ...) {
  UCD_tech_map_name <- if_else(energy.TRAN_UCD_MODE == 'rev.mode', "energy/mappings/UCD_techs_revised", "energy/mappings/UCD_techs")
  if(command == driver.DECLARE_INPUTS) {
    return(c(FILE = "emissions/A_regions",
             # the following files to be able to map in the input.name to
             # use for the input-driver
             FILE = "energy/calibrated_techs",
             FILE = "energy/calibrated_techs_bld_det",
             FILE = UCD_tech_map_name,
             "L161.SSP2_EF",
             "L161.SSP15_EF",
             "L161.SSP34_EF",
             "L201.nonghg_steepness",
             "L223.GlobalTechEff_elec"))

  } else if(command == driver.DECLARE_OUTPUTS) {
    return(c("L251.ctrl.delete",
             "L251.ssp15_ef",
             "L251.ssp2_ef",
             "L251.ssp34_ef",
             "L251.ssp15_ef_elec",
             "L251.ssp2_ef_elec",
             "L251.ssp34_ef_elec",
             "L251.ssp15_ef_vin",
             "L251.ssp2_ef_vin",
             "L251.ssp34_ef_vin"))
  } else if(command == driver.MAKE) {

    year <- value <- GCAM_region_ID <- Non.CO2 <- supplysector <- subsector <-
      stub.technology <- agg_sector <- MAC_region <- bio_N2O_coef <- future.emiss.coeff.year <-
      SO2_name <- GAINS_region <- emiss.coeff <- technology <- minicam.energy.input <-
      tranSubsector <- tranTechnology <- input.name <- efficiency <-
      future.emiss.coeff.year <- NULL # silence package check.

    all_data <- list(...)[[1]]

    # Load required inputs
    get_data(all_data, "emissions/A_regions") ->
      A_regions
    get_data(all_data, "L161.SSP2_EF") ->
      L161.SSP2_EF
    get_data(all_data, "L161.SSP15_EF") ->
      L161.SSP15_EF
    get_data(all_data, "L161.SSP34_EF") ->
      L161.SSP34_EF
    get_data(all_data, "L201.nonghg_steepness") -> L201.nonghg_steepness
    L223.GlobalTechEff_elec <- get_data(all_data, "L223.GlobalTechEff_elec")

    # make a complete mapping to be able to look up with sector + subsector + tech the
    # input name to use for an input-driver
    bind_rows(
      get_data(all_data, "energy/calibrated_techs") %>% select(supplysector, subsector, technology, minicam.energy.input),
      get_data(all_data, "energy/calibrated_techs_bld_det") %>% select(supplysector, subsector, technology, minicam.energy.input),
      get_data(all_data, UCD_tech_map_name) %>% select(supplysector, subsector = tranSubsector, technology = tranTechnology, minicam.energy.input)
    ) %>%
      rename(stub.technology = technology,
             input.name = minicam.energy.input) %>%
      distinct() ->
      EnTechInputNameMap

There are 309 more lines. If you want to see the whole thing, go to: https://github.com/JGCRI/gcam-core/blob/master/input/gcamdata/R/zchunk_L251.en_ssp_nonco2.R


The thing I find interesting about this code, along with the larger assemblages it is part of, isn't what's there, but what isn't there.

First, I should explain that when you run this code and plot the data, you're going to wind up with a graph that looks somewhat like this:

You've probably seen this graph. It has made the rounds in the media, and it has been part of the most recent meetings of the IPCC and COP. Indeed, we are looking at the cat's pajamas of climate code! This is the big stuff—the CMIP-approved stuff! This code, and its assemblages, generate the predictive climate data that inform leaders at the highest levels of government, along with the media and the public, as we attempt to collectively make decisions about what happens next.
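
For readers who want to tinker before going further, here is a minimal sketch of the kind of multi-line scenario plot I am describing. The numbers are purely illustrative placeholders (not GCAM output), and the plotting code is mine, not the gcamdata team's:

# A minimal sketch with made-up numbers: one emissions trajectory per scenario.
# Nothing here comes from GCAM; it only mimics the shape of the familiar graph.
library(ggplot2)

scenario_paths <- data.frame(
  scenario  = rep(c("SSP1", "SSP2", "SSP5"), each = 3),
  year      = rep(c(2020, 2060, 2100), times = 3),
  emissions = c(40, 20, 5,      # SSP1: steep decline (illustrative)
                40, 38, 30,     # SSP2: middle of the road (illustrative)
                40, 70, 120)    # SSP5: fossil-fueled growth (illustrative)
)

ggplot(scenario_paths, aes(year, emissions, colour = scenario)) +
  geom_line() +
  labs(y = "Global CO2 emissions (GtCO2/yr, illustrative)",
       title = "One line per scenario")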

The process of developing, reviewing, and running the scenario-models that produced this graph took roughly seven years to complete, and the parameters for the models and the data have gone through many rounds of professional examination and peer review before receiving their CMIP endorsement. CMIP stands for the Coupled Model Intercomparison Project, and the CMIP may be thought of as an ensemble of over 100 endorsed models from over 50 modeling institutions that work somewhat like an orchestra to model different aspects of the climate (ECMWF 2021; WCRP).

Each major run or "phase" of the CMIP corresponds with a new IPCC climate assessment cycle. Each five-year cycle centers the development and presentation of an Assessment Report (AR) that draws upon the predictive climate data generated by the CMIP. We are currently at the end of the 6th IPCC assessment cycle, with the CMIP6 having been run in roughly 2017-18 and the AR6 having been offered to policymakers and the public in 2021. Presently, models and parameters are being prepared, adjusted (to fit the latest research), and reviewed for the 7th phase of the CMIP, which, once completed, will inform the AR7 cycle running from 2023 to 2028.


Here at the CCSWG, we are readers--readers of code, and sometimes other things.

Looking at the graph above, the reader might find themselves in a bit of a quandary, asking, "How should I interpret this graph? How should I read it?"

The way many of us have been taught to interpret the above graph (and even some scientists read it this way) is to point to the top two lines and say, “Those are the worst-case scenarios.” And then we are supposed to gesture towards the colorful lines at the bottom and say, “That’s the good outcome.” We then often look towards the middle-lower lines and assume that we can sort of average everything together and say, "Those lines down there, the middle ones, that must be the path we are on."

The trouble is... that is not how to read this graph.

To get a sense of how this graph should actually be read, you need to get a bit more familiar with the code and data that produced it.


If you look up at the above code snippet, in Lines 30-32, we can see a spot in which the SSP scenario data is being plugged into the GCAM planetary model:

30.              "L161.SSP2_EF",
31.              "L161.SSP15_EF",
32.              "L161.SSP34_EF",
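
A bit of orientation on where those lines live: in the gcamdata package, each data "chunk" is a single function that the data-system driver calls with different commands, and lines 30-32 sit inside the branch that declares the chunk's inputs. Here is a stripped-down skeleton of that pattern, based on the snippet above (illustrative only, not the actual module):

# Skeleton of the gcamdata chunk pattern (illustrative). The same function answers
# three different questions depending on the command the driver passes in.
module_emissions_EXAMPLE <- function(command, ...) {
  if(command == driver.DECLARE_INPUTS) {
    return(c("L161.SSP2_EF"))            # data this chunk needs (e.g., SSP emission factors)
  } else if(command == driver.DECLARE_OUTPUTS) {
    return(c("L251.ssp2_ef"))            # data this chunk promises to produce
  } else if(command == driver.MAKE) {
    all_data <- list(...)[[1]]
    L161.SSP2_EF <- get_data(all_data, "L161.SSP2_EF")
    # ...transform the inputs into the declared outputs here...
  }
}

In other words, the SSP emission-factor tables named on lines 30-32 are the chunk's declared dependencies; the transformation itself happens further down, in the MAKE branch.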

The GCAM, or Global Change Analysis Model, is an open source model of the planet, and it may be downloaded here on GitHub. The GCAM has been developed in a collaborative effort between multiple labs, and an article documenting the design of the latest version by Katherine Calvin and her co-authors may be found in issue 12 of the journal Geoscientific Model Development. (Note: the above code snippet was co-authored by one of the co-authors of this paper, so we can assume this code snippet was at least partially composed by someone involved with the creation of the model itself!)

The GCAM is one of six models of Earth that the SSP data is designed to run on (Riahi et al. 2017). The other five planetary models (in case you are curious) are the AIM/CGE, the IMAGE, the MESSAGE-GLOBIOM, the REMIND-MAgPIE, and the WITCH. (The WITCH has a rather fun online tool, MAGICC, that includes an online modeling interface you can use to run the SSP data or input a social scenario of your own—this is a very good tool for students who might be new to coding, and you can even use it to run the SSPs without needing any knowledge of coding 🧙‍♀️).


The SSPs are a set of ready-made data sets, each representing a different scenario, designed to be run through these six CMIP-approved models of Earth. There are five scenarios, SSP1 through SSP5, and what happens in each depends upon what the humans in the models do. Each "scenario" is treated as a possible path we might take in the time we have left before catastrophic levels of increased global temperature are locked in.

You can download the SSP scenario data straight from its source, the IIASA SSP Database (you have to create an account to access the data, which takes a couple of minutes). If you're in a hurry, you can glance over the SSP1-5 data here on GitHub. (Each of the five files contains all of the data for one of the SSP scenarios.) Scenario 1 (or SSP1) is the most optimistic, while the fifth scenario, the SSP5, is the least.
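
If you do pull the data down and want a first look at it in R, something like the following is enough to get oriented. The file name here is hypothetical (use whatever CSV you export from the database or grab from the GitHub mirror), and the column names assume the usual IAMC-style layout (MODEL, SCENARIO, REGION, VARIABLE, UNIT, plus year columns), so check your own export:

# A first-look sketch at an SSP database export (file name and columns assumed).
library(readr)
library(dplyr)

ssp <- read_csv("SspDb_country_data.csv")   # hypothetical export name

ssp %>%
  filter(grepl("SSP1|SSP5", SCENARIO)) %>%  # the most and least optimistic scenarios
  count(SCENARIO, VARIABLE) %>%             # how many rows report each variable
  arrange(SCENARIO)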


The emergence of the SSP scenarios is briefly explored by Brian O’Neill and his co-authors in issue 9 of Geoscientific Model Development. In this article, we learn that it was not until the most recent phase of the CMIP, the CMIP6, that the scenarios (which were previously called the RCPs) were removed from the core experiment and became their own MIP, the ScenarioMIP (and the scenarios themselves were renamed the SSPs). Parsing the scenarios from the core experiment in this way has opened up space for more time and attention to be spent refining the scenarios.

The scenarios, it should be emphasized, are the only part of the CMIP's predictive climate data that factors in the role of social activities upon future emissions.

Since the last CMIP, the modeling community that creates the Scenarios has been working to bring more voices into the process of developing the Scenarios. In 2019, these communities hosted the first Scenarios Forum, an event bringing together "a diverse set of communities" to engage with the Scenario models to "exchange experiences, ideas...and identify knowledge gaps for future research." These scenario modelers have continued to acknowledge a need for feedback, including a "particular need for social sciences to inform scenarios on societal dynamics and tipping points" (O’Neill et al. 2020).

My hope, I suppose, with this critique, is to show how voices from the humanities may also be valuable to the efforts to develop, refine, and share these scenario-models. Likewise, I do think that the humanities would benefit from lending their artful tools to the scenario-models of the CMIP and other efforts to model the planet--as Katherine Buse has emphasized in her compelling work that treats climate models as a medium in their own right.


Returning to my reading of the above graph, alongside its attendant code and data assemblages, I perceive this graph to be telling me a type of story, or multiple stories, about the future; the actual future, or what Genette has called "nonfiction diegeses" (1980). A diegesis might be thought of as a storyworld, and we usually think of these in fiction settings, such as the diegesis of the MCU or of the Star Trek Universe. There are nonfiction diegeses as well, and sometimes it can be bumpy to bring everyone into the same nonfiction diegesis, but this is required for any form of collective action.

The active process of collectively working your way into a shared narrative about reality that facilitates action has been called "frame alignment" by cultural theorist Robert Carley (Culture and Tactics, SUNY Press, 2018; cf. Gramsci 2011).

So, each line on the graph represents a different story about the future, a story that is also represented by the SSP scenario data set that was used to generate that line on the graph. You cannot average these lines together any more than you could average together the works of Shakespeare. Each of these sets of scenario data is its own independent story, its own nonfiction narrative about the fate of the world.
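
To make that point concrete, here is a tiny illustration with made-up numbers: the arithmetic mean of two scenario trajectories is trivial to compute, but it is not itself a scenario, because no internally consistent set of social assumptions produced it.

# Illustrative numbers only (GtCO2/yr): averaging SSP1 and SSP5 yields a tidy-looking
# "middle path" that no modelling team ever ran and no narrative supports.
ssp1 <- c(`2020` = 40, `2060` = 20, `2100` = 5)
ssp5 <- c(`2020` = 40, `2060` = 70, `2100` = 120)
(ssp1 + ssp5) / 2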

Perhaps we are now beginning to suspect that we should be suspicious of those who tell us to read the graph as if the lower-middle lines represent the path we are on...


What exactly are these scenarios though? What are the stories that each set of SSP data purports to tell?

There are five "official" SSP Narratives that are used to explain why the numbers are set the way they are in each of the SSP data sets (Riahi et al. 2017).

Here are the first two narratives, which are used to explain the two most "optimistic" emissions scenarios, the SSP1 and the SSP2:

SSP1 Sustainability – Taking the Green Road (Low challenges to mitigation and adaptation)
The world shifts gradually, but pervasively, toward a more sustainable path, emphasizing more inclusive development that respects perceived environmental boundaries. Management of the global commons slowly improves, educational and health investments accelerate the demographic transition, and the emphasis on economic growth shifts toward a broader emphasis on human well-being. Driven by an increasing commitment to achieving development goals, inequality is reduced both across and within countries. Consumption is oriented toward low material growth and lower resource and energy intensity.

SSP2 Middle of the Road (Medium challenges to mitigation and adaptation)
The world follows a path in which social, economic, and technological trends do not shift markedly from historical patterns. Development and income growth proceeds unevenly, with some countries making relatively good progress while others fall short of expectations. Global and national institutions work toward but make slow progress in achieving sustainable development goals. Environmental systems experience degradation, although there are some improvements and overall the intensity of resource and energy use declines. Global population growth is moderate and levels off in the second half of the century. Income inequality persists or improves only slowly and challenges to reducing vulnerability to societal and environmental changes remain.

The other three narratives can be found in this paper by Keywan Riahi and his co-authors (2017).

It is worth noting that the present SSP data also include a rather problematic relationship between "population," "GDP," "investment," and "policy." I will save that critique for another day, as it falls more into the realm of social science.

Needless to say though, as a reader, I am having quite a bit of trouble suspending my disbelief for the "optimistic" scenarios--both because I do not find the narratives compelling enough to convince me that they will actually happen, and because, as I pore through the data, I can find nothing to suggest that these scenarios rest on the kinds of evidence-based changes in social organization that would be needed to steer us away from the course we are on.

This isn't to say that I don't have a level of optimism. I just want to be convinced.

I am a very picky reader, and it is going to take an extra amount of work to convince me to suspend my disbelief.


Currently, according to IEA estimates, we are on a path to create a 6°C rise in global temperature by 2100. Likewise, the tiny "dip" in emissions that occurred at the start of the pandemic quickly vanished as companies turned towards cheaper forms of fossil fuel, including coal, to deal with economic woes (Tollefson 2020, IEA 2021).

That is the path we are currently on:

A 6°C rise locked in less than 80 years.

It is a path so bad, the scientists didn't think to include it in the SSP scenarios.

The Y-axis on the above graph doesn't even go that high.


This continued acceleration in emissions comes despite survey data showing that 64% of people in 50 countries representing half the world's population believe climate change is a global emergency (UNDP-Oxford 2019). It also comes despite research showing that 62% of Americans say that climate change has affected them personally (Pew 2021).

This is not a matter of political will: the majority of people already want to change the emissions course we are on. There is something else at play...


This isn't to say that there is no data in the SSPs to back up the emissions drops.

There is, in fact, plenty of data in the scenarios about emissions going down--but it all centers specific, practical actions: more solar, less coal, etc.

There is nothing, however, in the data that explains the types of social intervention that would need to happen to allow these changes to occur.

I want there to be a compelling reason for any emissions dip in the stories these scenarios tell; otherwise, as an audience member, I am going to roll my eyes and yawn. I am going to feel like I’m stuck watching a B-rated movie in which the actions of the protagonists simply aren't even feasible, my engagement waning as the theatre around me starts to burn.


For any drop in emissions in these scenarios/narratives, I want that dip to be linked to an evidence-based change in the institutions and power structures that mediate our lives.

By "evidence-based," I mean that I want evidence showing that these changes to the social structure would reduce emissions, or at least have a good chance of doing so.


Here's a bit of evidence that gives me hope:

  • One study looking at 72 countries over a 30-year period found that a one-unit increase in a country’s score on the women’s political empowerment index was associated with an 11.51% decrease in the country’s carbon emissions (Lv and Deng 2019); a quick back-of-the-envelope version of this arithmetic appears just after this list.
  • Reducing racism has been associated with reductions in emissions (Stephens 2019, Johnson 2020).
  • Restoring Indigenous peoples’ decision-making power over their ancestral lands is an emissions-reducing tactic—and this is backed up by geospatial satellite data (ICCA 2021).
  • Transitioning from investor-owned to cooperatively-owned business structures has likewise been explored as a promising tactic to reduce emissions (ICA-EU 2021).
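
To make the first of those findings concrete, here is the back-of-the-envelope arithmetic. This is my simplification, not Lv and Deng's model: the baseline figure is arbitrary, and treating the 11.51% as a simple multiplicative effect is an assumption for illustration only.

# Illustrative arithmetic for the Lv and Deng (2019) association cited above.
baseline_emissions <- 100          # arbitrary units of national carbon emissions
decrease_per_unit  <- 0.1151       # 11.51% decrease per one-unit index increase
index_increase     <- 1            # a one-unit gain on the empowerment index

baseline_emissions * (1 - decrease_per_unit)^index_increase   # roughly 88.5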

Why not put together some Scenarios that link emission drops to factors like these?

Beyond lending plausibility, adding these evidence-backed factors to the equation would give me, as a reader of the code, some protagonists to root for.

Comments

  • Hi @Hayley_Steele! Thank you for this thread, and for breaking down the distinctions between the code, the visualization, and the narrative each line in the graph represents. I agree that it isn't obvious from the graph that each line represents a unique story, and with it a unique set of social assumptions that are still very rooted in Western, capitalist ways. I think this demonstrates a critical point about ideology and models by showing how embedded the two are in each other.

    If I were asked how I could improve that graphic, I would try to incorporate some sort of interaction. Each line contains policies, infrastructure, and a general quality of life / texture that differ wildly. Interaction helps users break away from reading the graph as a single narrative (which it is not) and find that quality of life in the data. It also helps break down the temporal structure of a story by letting users go back to the beginning and take another path. Interaction that lets users zoom in on each line's nuance and specific scenario in detail could help users see the difference. Creating a panel that shows each line on its own graph could also be helpful.
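
    A minimal sketch of that small-panels idea, with made-up numbers rather than the actual IPCC data (ggplot2's facet_wrap does the splitting):

    # Illustrative only: one small panel per scenario instead of one crowded plot.
    library(ggplot2)
    demo <- data.frame(
      scenario  = rep(c("SSP1", "SSP2", "SSP5"), each = 3),
      year      = rep(c(2020, 2060, 2100), times = 3),
      emissions = c(40, 20, 5, 40, 38, 30, 40, 70, 120)
    )
    ggplot(demo, aes(year, emissions)) +
      geom_line() +
      facet_wrap(~ scenario) +
      labs(y = "Emissions (illustrative units)")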

    To the evidence that gives you hope - I'd also like to see this worked into the models / scenarios. This reminds me of Lauren Klein and Catherine D'Ignazio's Data Feminism. In particular, I think this falls into the category of data that is not seen. However, in this case the data has been collected, but the alternative narrative is left out of the major discussion.

  • edited February 2022

    Hi @samwalkow! Thank you so much for this thoughtful feedback!

    You bring up a very good point about the need for interactive tools that engage these data! Such things are unfortunately few and far between...

    Carbon Brief does have a page that breaks down the SSPs, but it's not very WYSIWYG and it asks a lot of the reader, I think. It would be nice if there were interactive media that did even more of the work for the reader, in helping visualize what these data imply.

    Also, there is the Climate Pathways app, which lets you use your finger to draw a better emissions path on your phone. But this app is pretty pared down and has some usability issues. I'm also pretty critical of any interactive tool that allows users to "make emissions go down" without connecting those emission drops to evidence-based social practice. Like, we aren't going to get out of this with a wave of our hands, or a swipe of our fingers. We can't swipe left on climate change...

    There was an effort to represent the social processes that could be behind future emissions drops in an interactive game created by EarthGames Lab at the University of Washington. In the game, "Deal: A Green New Election" (2017), electoral politics are used to attempt to put humanity on track for one of the "optimistic" emission scenarios.

    I think it's a good thing that Deal connects emissions drops to social practice, but it doesn't fully take into account the historical role of social movements in prompting policy change. I think it is easy for scholars in the humanities and social sciences to take for granted that sweeping policy changes like the New Deal have been pushed through because of widespread social movements, not because of having the "right person" in office. Due to academic siloing, though, folks in STEM tend to lack access to these historical understandings, and this gets reflected in the way climate data is presented and even constructed. (Like, in the SSP data itself, policy change is treated as having a role in reducing emissions, but that policy change isn't connected to the social movements that historically prompt policy change. I think this has led to an over-focus on electoral politics and a lack of focus on the coalition-building necessary for successful social movement organizing. Likewise, as Emily Merchant has pointed out in her critique, the data itself currently contains some inaccurate myths about eugenics, leading to types of rhetoric and sub-cultural norms that ultimately dissolve coalitions before they can even form.)

    EarthGames Lab does have a new project they are working on that they call "Life Reimagined," a climate-science-infused riff on the board game Life that attempts to factor in the needs of other social movements beyond the climate movement in reimagining a zero-emissions society. This also seems like a step in the right direction. It is still under development; it will be interesting to see where they take it.


    For anyone interested in helping create interactive tools or games that engage climate data, EarthGames at UW might be good to reach out to. I imagine they have done the work already to translate the CMIP-approved predictive climate data into coding languages that are easier to use to make apps, games, and interactive tools.


    I'm glad you brought up Klein and D'Ignazio's Data Feminism! Their work was definitely an influence for this post. ...I've been dabbling in critical data studies (cf. Kitchin and Lauriault 2014) over the last two years and I guess it shows. 😅 It's really interesting thinking those methods—which seem to lean more towards a focus on power structures and assemblages—alongside critical code studies (Marino 2006)—which puts more of a focus on reading practices and the materiality of code...

    I fear my post leans a bit far into the CDS realm, perhaps getting so wrapped up in power dynamics that I'm not reading the code as closely as I could. I suppose, like many millennials, I find it hard to become immersed in anything that I find power-structural weaknesses in. 👷‍♀️ Eventually, I'd love to do a line-by-line close reading and annotation of this code (or code like it), but I really needed to express how all of these things hinder my own immersion, as a reader, before I could even begin such a project.

    Anyway, thanks so much for this feedback!


    I would be curious to hear what other people make of this code and its data and assemblages...but also, this is tricky stuff to critique 😅

  • edited February 2022

    Gosh, I realize this all must be a bit depressing to folks who are new to this stuff. 😅

    I do really want to encourage folks to play around with MAGICC. It is supported by one of the six CMIP-approved models of the planet, the WITCH. This model is the one that's based in Italy, where it seems they have retained a sense of humor in the face of everything, and (perhaps relatedly) their model is by far the most user-friendly.

    So, you can build your own scenario (as a set of data), and use MAGICC to run it through the model of the planet. If you're feeling down about the climate, this can be a weirdly uplifting thing to do as a group. 🌱
