
2024 Participants: Hannah Ackermans * Sara Alsherif * Leonardo Aranda * Brian Arechiga * Jonathan Armoza * Stephanie E. August * Martin Bartelmus * Patsy Baudoin * Liat Berdugo * David Berry * Jason Boyd * Kevin Brock * Evan Buswell * Claire Carroll * John Cayley * Slavica Ceperkovic * Edmond Chang * Sarah Ciston * Lyr Colin * Daniel Cox * Christina Cuneo * Orla Delaney * Pierre Depaz * Ranjodh Singh Dhaliwal * Koundinya Dhulipalla * Samuel DiBella * Craig Dietrich * Quinn Dombrowski * Kevin Driscoll * Lai-Tze Fan * Max Feinstein * Meredith Finkelstein * Leonardo Flores * Cyril Focht * Gwen Foo * Federica Frabetti * Jordan Freitas * Erika FülöP * Sam Goree * Gulsen Guler * Anthony Hay * SHAWNÉ MICHAELAIN HOLLOWAY * Brendan Howell * Minh Hua * Amira Jarmakani * Dennis Jerz * Joey Jones * Ted Kafala * Titaÿna Kauffmann-Will * Darius Kazemi * andrea kim * Joey King * Ryan Leach * cynthia li * Judy Malloy * Zachary Mann * Marian Mazzone * Chris McGuinness * Yasemin Melek * Pablo Miranda Carranza * Jarah Moesch * Matt Nish-Lapidus * Yoehan Oh * Steven Oscherwitz * Stefano Penge * Marta Pérez-Campos * Jan-Christian Petersen * gripp prime * Rita Raley * Nicholas Raphael * Arpita Rathod * Amit Ray * Thorsten Ries * Abby Rinaldi * Mark Sample * Valérie Schafer * Carly Schnitzler * Arthur Schwarz * Lyle Skains * Rory Solomon * Winnie Soon * Harlin/Hayley Steele * Marylyn Tan * Daniel Temkin * Murielle Sandra Tiako Djomatchoua * Anna Tito * Introna Tommie * Fereshteh Toosi * Paige Treebridge * Lee Tusman * Joris J.van Zundert * Annette Vee * Dan Verständig * Yohanna Waliya * Shu Wan * Peggy WEIL * Jacque Wernimont * Katherine Yang * Zach Whalen * Elea Zhong * TengChao Zhou
CCSWG 2024 is coordinated by Lyr Colin (USC), Andrea Kim (USC), Elea Zhong (USC), Zachary Mann (USC), Jeremy Douglass (UCSB), and Mark C. Marino (USC). Sponsored by the Humanities and Critical Code Studies Lab (USC) and the Digital Arts and Humanities Commons (UCSB).

Week 1: Decolonizing Code -- Discussion Starter

Discussion prompt framed in collaboration with @xinxin

Welcome to the 2022 Critical Code Studies Working Group! In this first week, as we take up the goal of thinking collectively about decolonizing code, we would like to ground this discussion in a potentially incommensurable pair of goals. On the one hand, we would like to confront the lineages that software carries with it: the histories and financial investments of the military and of business in computation, and the use of computers as instruments of ongoing discriminatory practices in the wider context of settler colonialism. On the other hand, we would like to build alternatives to these lineages, imagining and developing code that is ethical, queer, feminist, anti-racist, anti-colonial, and anti-settler-colonialist. At the outset, we would like to make space for asking whether this is even possible. Does decolonizing code fall under an "ethics of incommensurability," as Tuck and Yang invite us to consider for other calls to decolonize? In other words, how can this discussion about decolonizing code avoid using decolonization as a metaphor?

We have picked three main themes as starting points for our discussion.

Narratives of code / Narratives tied to code

  • Problem of Default: Oftentimes, the push for creating a default rests on the argument that there is a natural, progressive, or common design of code that would be applicable and beneficial to all. At the same time, it often requires an enormous amount of financial funding and cultural capital for an idea to become established broadly as a default. As a result, the narratives that inform the default are often complacent and entangled with the oppressive and extractive agendas of the military, corporations, and industry. Whom this default centers, and whom it leaves out, is a potential starting point for thinking about who a particular piece of code is for. We are interested in learning about your examples where the default creates roadblocks that obstruct, delay, or overpower decisions that might otherwise be made at a local level.
  • Funding: Continuing our first point, we also find that efforts to make tools "for all" shy away from a critical discussion of funding sources and funding models. Tracing where the money comes from, who the beneficiaries are, and how a project is sustainably supported is crucial to understanding its impact and limitations. Broadly speaking, we are interested in asking the question: "We want to decolonize, but who is going to fund it?"
  • Monopolies in Context
    We find it difficult to consider the ongoing colonial/capitalist violence without studying the context of the software.

  • Example: Taiwan’s COVID app, prototyped with Google Maps. The programmer used Google Maps to prototype a mask-tracking app that benefited the residents of Taiwan but left him in enormous credit card debt. Too often we are dependent on Big Tech products to prototype responses to emergencies like this one. How do we set a long-term goal of moving beyond this paradigm?

  • OpenStreetMap and Native-Land
    Crowd-sourced projects, such as OpenStreetMap and Native-Land, can shape their work according to the value systems their respective communities expect of them. This allows a very different evaluative system than a corporate entity's. What implications does this have for who keeps developers accountable? How do we evaluate systems beyond whether they work, how efficient they are, and whether they are written elegantly? In addition, we can return to the question of funding: if evaluation depends on profitability, then what strategies can be leveraged to fund independent projects that can set different evaluative priorities?

  • Finally, open-source software, as discussed in previous CCSWGs, does have the possibility of being shaped and maintained by its communities. However, much of it also tends to mimic proprietary software. How can we escape that binary, so as not to end up with what is seen as a lesser version of mainstream software?

  • Native Land (’s API:

Language Justice

  • Programming language: It seems like a settled question that a non-English speaker may not be able to practice code proficiently without using English. Code educators interested in addressing language justice might often find themselves referencing inspiring esoteric programming languages written in non-anglophone languages, such as Yorlang (Yoruba), Wenyan (Chinese), and قلب (Arabic), but they ultimately need to return to reading and coding in English when researching, troubleshooting, and incorporating modules and dependencies. As inspiring as the projects mentioned above are, we are interested in asking: what strategies are possible not just for pointing to the lack of language justice in the landscape, but for living with it?
  • Documentation: The current mindset around inviting contributions or feedback often requires one to learn how to code (in English!). This sets up a false programmer-consumer binary and excludes non-coders from contributing to tools that greatly impact their everyday lives. What are ways to reimagine how documentation is written in order to remove the barriers of tech jargon, which is more often than not inaccessible, obfuscating, alienating, and even off-putting?


  • Modularity: we are intrigued by the idea of modularity. I am thinking of John Brown Childs' description of the Haudenosaunee practice in his book Transcommunality: From the Politics of Conversion to the Ethics of Respect.

    "As I understand the work of scholars with Haudenosaunee roots and involvement, such as John Mohawk, Jake Swamp, Oren Lyons, Gerald Taiaiake Alfred, and Lynne Williamson, this word for their culture means, in essence, "People of the Longhouse." The Haudenosaunee dwellings were longhouses containing several families. Each family was connected to the family next to it. But each had its own space, its own identity, and its own autonomy. The Haudenosaunee took this longhouse image of togetherness and autonomy and applied it to the geographic area that is their historical territory. This land ran from the Seneca people of the west, near what is now Buffalo, New York, through the land of the Cayuga, the Onondaga, the Oneida, and the Mohawk, farthest east near what is now Albany, New York. West to east constituted the "Longhouse" of these nations, each linked to the others while maintaining its own space and autonomy." (pp. 47-48)

    In this chapter, titled "Learning from the Haudenosaunee," Childs goes on to describe how each family remained distinct and retained its individuality and differences while staying connected to the others. The differences are respected. Is there a way to develop software from modular pieces that can be exchanged, with a standard for communication across these differences, in a way similar to the longhouses? This opens up questions of standardization, but also perhaps of templates:

  • Templates / Toolkit?
    Continuing our thought on modularity, we are interested in opening up the question of whether it is possible to create an ethical template. Would it be possible to design a template that is programmed to support differences? And when that template isn’t responding to lived experiences, how can that template be challenged? How can it change/evolve? Can a template follow the intention of a toolkit, programmed in a way to give programmers the utmost agency to modify?
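    To make the longhouse analogy concrete in code terms, here is one loose, hypothetical sketch (in Python): modules that keep their own internals and identities but agree on a minimal shared way of exchanging messages. All names here are invented for illustration, not drawn from any existing project.

```python
from typing import Protocol

class Module(Protocol):
    """Minimal shared contract: each module keeps its own internals
    and identity, but agrees on one way of exchanging messages."""
    def describe(self) -> str: ...
    def exchange(self, message: dict) -> dict: ...

class CommunityArchive:
    """One 'family' in the longhouse: autonomous, self-describing."""
    def describe(self) -> str:
        return "an archive maintained under its own community's rules"
    def exchange(self, message: dict) -> dict:
        # Internal logic stays private; only the message format is shared.
        return {"reply": f"archive received: {message.get('text', '')}"}

class StoryMap:
    """Another 'family', with entirely different internals."""
    def describe(self) -> str:
        return "a map that narrates place on its own terms"
    def exchange(self, message: dict) -> dict:
        return {"reply": f"map received: {message.get('text', '')}"}

def longhouse(modules: list[Module], message: dict) -> list[dict]:
    """Pass one message across connected-but-autonomous modules."""
    return [m.exchange(message) for m in modules]

print(longhouse([CommunityArchive(), StoryMap()], {"text": "hello"}))
```

    The point of the sketch is only that the standard lives in the thin `exchange` contract, not in any one module's way of working: differences remain, connection is still possible.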

Please feel free to respond to any of these themes/questions in the form of comments below.


  • Tuck, Eve, and K. Wayne Yang. "Decolonization Is Not a Metaphor." Decolonization: Indigeneity, Education & Society, vol. 1, no. 1, 2012, pp. 1-40.
  • Childs, John Brown. Transcommunality: From the Politics of Conversion to the Ethics of Respect. Philadelphia: Temple University Press, 2003.


  • I think Vernelle Noel's work on "Situated Computations" is excellent and relevant here. Giving the example of a shape grammar for the wire-bending craft of the Trinidad and Tobago Carnival, she argues that computational design tools should:

    • Be built on an ethnographic study
    • Be situated
    • Build on existing skills and knowledges
    • Facilitate sensory, perceptual interaction, and physical manipulation of materials and tools
    • Build communities, not individual isolation
    • Cater to experts and non-experts in CAD and Computer Programming
    • Not require large amounts of computational power and infrastructure
    • Narrative - to tell the stories, histories, and innovations of marginalized and disenfranchised groups and their settings

    Paper: "Situated Computations: Bridging Craft and Computation in the Trinidad and Tobago Carnival"

    Video presentation: "Mathematics of wirebending"

  • Thank you for Dr. Noel's lecture, Alex. Very interesting! Dr. Noel named her shape grammar after two of the wire benders she interviewed while developing her theory: the Bailey-Derek Grammar. Her decision to do so models D'Ignazio and Klein's directive to "Show Your Work" (see Chapter 7 of their open-access book Data Feminism) and makes labor visible. Rather than use decolonizing as a metaphor, we could consider the labor issues of coding practices and projects, especially if we consider the term labor broadly (e.g., emotional labor).

  • Thanks for the link to the Data Feminism book @KatieA which I really need to read properly.

    These Polynesian stick charts are interesting in comparison to the Bailey-Derek grammar:

    As an aside, I'm coming to the conclusion that programming languages are really at a developmental dead end, stuck in a backwater for some decades now. Programming language designers have built a wall around their culture and strongly resist any challenges to it, while at the same time seeing their conception of computation as universal and all-encompassing. There's an anti-political mindset which means white supremacy and misogyny are tolerated, including within the "future of coding" community.

    I think our ideas of what code is, and what it is for, need to be rebuilt on foundations that would seem fresh in comparison, but are grounded in craft, culture and heritage practices, rather than the usual software engineering reference points of white male genius and 'Manhattan projects'.

    So I don't know if it's possible to decolonise code as we currently think about it (although admit I have a lot to learn about decolonisation), but do think it is possible to find alternatives.

  • edited January 2022

    Yes! I am excited by your idea that we "ground" code in "craft, culture and heritage practices." The Polynesian navigators created their stick charts only after they placed themselves close to their canoes and felt "every motion of the vessel." Dr. Noel interviewed (and, it seems to me, connected with) wire benders to inform her theory... and then grieved their deaths. The openness required to create grounded code (to blend your elegant words, @yaxu) must have motivations other than efficiency, profit, or standardization in order to withstand the vulnerability that is part and parcel of that openness. Also, our definition of success must shift. Is code only successful if large numbers of people use it? Is it only successful if it endures? If it is efficient or "elegant"? Like Dr. Noel's desire to preserve the art of wire bending, what if our guiding question is not what we will gain by creating or using code, but what we will lose if we don't?

  • Thank you, @fabiolahanna and @xinxin for getting us off on such strong footing. I'm looking forward to how this conversation develops.

    I followed your suggestion and headed over to the site to look at the documentation. (We should start a code-critique thread for this example.)

    One note struck me from that web page:

    Our map is NOT a legal resource, and is not meant to be an academic-level representation of indigenous lands. Our goal is, most of all, to represent Indigenous territories according to Indigenous nations they represent. We are not primarily after extremely precise geographic fidelity, textbook accuracy, or sources that correspond with government data.

    I find this quote useful in considering the way this software resists the pull of the corporate and governmental software used to map, delimit, and demarcate these territories, software that normalizes the boundaries of colonization with its claims of precision and accuracy. This API seems to resist the tendency to "mimic" at least the truth claims of "proprietary software," which again strikes me as a certain kind of colonial and nationalistic logic with long lines of origin in the history of map-making. It makes me wonder how this resistance to the normalizing narrative of accuracy is evident in the code.

    On your point about the language of programming, it would be useful for us to keep in mind a previous CCSWG discussion of Jon Corbett's Cree# and the Indigenous Programming thread.

  • Documentation: The current mindset around inviting contribution or feedback often requires one to learn how to code (in English!). This sets up a false programmer - consumer binary

    It's wonderful that you mention Wenyan-lang in the intro; I think it serves as a valuable example both of how to do a non-English programming language very, very well and of the difficulty of getting away completely from English. For those unfamiliar, Wenyan programs read as Classical Chinese poetry. It has attracted a wide following (17k stars on GitHub, etc.), many contributors, and a professional IDE, and the majority of feedback and contributions are made in Chinese, not in English.

    Yet it also transpiles to other languages (including JavaScript), and here we see the problems with integrating a non-English programming language into existing tech stacks. Fiddling with one of the example scripts in the Wenyan IDE, I get an error in English, generated by JavaScript:

    SyntaxError: unlabeled break must be inside loop or switch

    As Ramsey Nasser asked in his "A Personal Computer for Children of All Cultures": "What is the Pashto equivalent of 'AbstractSingletonProxyFactoryBean'?" If you want the advantages of the latest optimizations and interoperability with other systems, at some point English slips through -- and reworking all of it into a non-English-reliant system would be daunting.
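    The same leak shows up even in languages that permit non-English identifiers. A quick illustration in Python (which allows Unicode names): the source can avoid English, but the runtime's own vocabulary and error messages stay in English.

```python
# Python 3 allows non-ASCII identifiers, so source code can avoid English...
数目 = "三"            # a variable named in Chinese, holding the string "three"
try:
    结果 = 数目 + 1     # type mismatch: string plus integer
except TypeError as err:
    # ...but the interpreter still answers in English:
    # can only concatenate str (not "int") to str
    print(err)
```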

  • edited January 2022

    Thank you for the thread. I am picking up a few areas for further comment and reference:

    1) Problem of default: Beyond the common design of code, I think there is also a problem of default in how coding is taught and learned. It is commonly taught in a way that feeds into (capitalist) corporations and the tech industry. Of course the CCS, creative and critical coding, and art communities try to break this tradition, but there is still a lot of resistance around what should be taught in a curriculum, in what way, and what should be prioritized. I agree with @yaxu that code is normatively grounded in software engineering and rarely in cultural practices; we might instead address indeterminacy to reflect the precarity of lived conditions, opening up ways of thinking that are multidirectional, across times and scales, more open-ended and indeterminate.

    2) Example of Taiwan's COVID app: I remember this case; the programmer used the Google Maps API, but Google offers only a very limited number of free requests per day, regardless of the nature of the organization (e.g. education, non-profit). Eric and I have written something on this matter before, in particular on the asymmetric exchange between corporations and users. In terms of setting up a long-term goal or finding other ways, perhaps we can learn from citizen sense, civic tech (I actually think Taiwan is a really good case for learning - e.g. ), and grassroots communities and the free and open source movement, in terms of building localized tools by and for a specific community. But of course, as you mentioned, much open source software mimics proprietary software that maximizes traffic and efficiency.

    3) In relation to language justice: I totally agree that many current discussions of tech/software exclude non-coders, cultural practitioners, and activists, and overlook the expertise and voices of other areas. I want to point to the intervention (re the question of documentation) made by The Institute for Technology in the Public Interest on GitHub two years ago regarding contact-tracing apps. What I found interesting is how the statement was posted on GitHub under "issues," as if it were a software error, in order to discuss the wider cultural, political, and socio-technical issues related to the subject matter.

  • edited January 2022

    We are interested in learning about your examples where the default creates roadblocks that obstruct, delay, and overpower decisions that may otherwise be made on a local level.

    I work in the game space. Recently I was working on an interactive narrative in which I wanted the player to be able to choose their gender. Initially the goal was to have non-binary, feminine, and masculine options. When I looked around at tools and systems that supported this, very few to none supported non-binary options. Tools like those provided for Dragon Age: Origins or Neverwinter Nights tended to use a <He/She> token-replacement system. This assumption of a gender binary, and often of player masculinity, had flow-on effects on how the systems were designed and coded and how the narrative was written. These assumptions are an issue for a whole host of reasons, but the area I found interesting was that, in building responsive narrative elements in English, the assumption of a binary or masculine gender limited the system's capacity to include non-binary options.

    Given this, I decided to start off by picking the default that was most grammatically complex, in this case non-binary pronouns, and to develop the systems, code, and text around the assumption of a non-binary default. Version 1 worked to this standard. Then, after consultation with the community, I expanded this approach to separate visual representations of gender from pronouns, allowing the player to set their own pronouns (Version 2). Because the system had been coded and built to a gender-neutral default, it was fairly easy to expand. What this made me realise was that building to a different default, in this case a non-binary one, meant that a number of development issues, e.g. the difficulty of supporting multiple genders, were not actually issues; the issues were caused by the assumptive default of a binary or masculine player gender.

    If you are interested you can have a look at Version one and two on my website
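    A minimal sketch of that kind of pronouns-as-data design, assuming a simple token-replacement grammar. The tokens, names, and pronoun table below are invented for illustration and are not the actual Version 1/2 code; the point is that singular "they" is the baseline row and binary (or player-defined) sets are just additional data.

```python
# Hypothetical sketch: pronoun sets are data; singular "they" is the
# default, so binary options become additions rather than the baseline.
PRONOUNS = {
    "they": {"subj": "they", "obj": "them", "poss": "their", "verb_s": ""},
    "she":  {"subj": "she",  "obj": "her",  "poss": "her",   "verb_s": "s"},
    "he":   {"subj": "he",   "obj": "him",  "poss": "his",   "verb_s": "s"},
}

def render(template: str, pronoun_key: str = "they") -> str:
    """Fill <subj>/<obj>/<poss>/<verb_s> tokens from the chosen set.
    A player-supplied set could be registered in PRONOUNS the same way."""
    p = PRONOUNS[pronoun_key]
    for token, value in p.items():
        template = template.replace(f"<{token}>", value)
    return template.capitalize()

line = "<subj> open<verb_s> the door and <poss> lantern flickers."
print(render(line))          # They open the door and their lantern flickers.
print(render(line, "she"))   # She opens the door and her lantern flickers.
```

    Because verb agreement is carried by its own token (`<verb_s>`), grammar that differs between "they open" and "she opens" lives in the data table too, which is exactly the kind of complexity a binary default hides.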

  • Lots of interesting stuff so far, thanks! I'm thinking about the problem of the Englishness of programming languages and wondering if we can get any traction by relating it to other highly technical languages.

    In the case of mathematics, it has its own language, but this is sort of Latin (the generally Latin characters for variables, and "+" comes from a ligature of "et," similar things for some other symbols). Mathematics is "universal," but this is because of the total historical dominance of a certain way of doing mathematics. That being the case, at this moment in time mathematical innovation is coming from all over the world. So this is one model: allow the self-development of the technical language to the point at which it sort of forgets its origins in a particular national context, add the total global dominance of that language, and, well, the result will be hard for the culture that spawned it to control.

    Philosophical language offers us a different model. While a large part is translated, certain translation rules develop ("Vernunft" and "Verstand" become "reason" and "understanding," for example) and other terms develop that cut across languages, like German's "Dasein," Latin's "cogito," or Sanskrit's "maya." This maybe answers the question of "What is the Pashto equivalent of 'AbstractSingletonProxyFactoryBean'?" There is likely already a rule for translating Abstract, and possibly Proxy and Factory, but Singleton and Bean would best work transliterated. Other ways are possible, but there's precedent for this problem, anyway. Interestingly, philosophy is both a monolingual and polylingual enterprise; the terms develop within a definite linguistic context, but also exist in a context with other languages' terms. I could imagine a similar future for programming languages, in which we would have "for/while" in English, but some other construct borrowed from some other language in which it developed, etc. Actually, it's maybe a productive question as to why this has not yet happened; historically, technical terms are somewhat polyglot, even in a colonial setting.

    Both of these models also have their own needs for decolonization, of course.

  • Thank you all for this, especially the reflections on the pervasiveness of English in programming. I think that looking at how this dominance affects programmers on a daily basis might also be interesting. In the case of Ruby programmers, for instance, there is a lot of talk about Ruby being "like English" or "very similar to English" - which is always thought of in a positive way. In Ruby, you can write code like this:

    publish_this_post unless ccs_working_group.starts == in_the_future

    I've tried to think about this purported similarity between Ruby and English as a semiotic process of iconization, in which a sign is interpreted as an icon; e.g., Ruby is an icon of English. This ties in with what anthropologists have called "language ideologies," i.e., how a specific way of conceptualizing what language is informs our beliefs about language. There are many ways in which Ruby programmers are constantly "reminded" of this iconicity, from libraries to coding styles, linters, and discussions of new features. In a sense, I think it's important to highlight how English is always already becoming pervasive not only through community-level discussions but also through the way the language is implemented and evolves.

    Hope that makes sense!

  • edited January 2022

    In the context of algorithmic research and big data, faces are isolated from bodies through alignment and cropping, and further distilled into a set of "landmarks." The landmarks include the following parameters: around chin, line on top of nose, left eyebrow, right eyebrow, bottom part of the nose, line from the nose to the bottom part above, left eye, right eye, lips outer part, lips inner part. To show what portraits reduced to data look like, I took the first photograph listed as data and am including the image and the landmark data as reference below (Figure 1). This portrait of a sleeping baby is classified as age: 1, gender: male, race: Asian. His/their closed eyes, button nose, and smirky little smile are listed through semantics understandable by an algorithm.
    I want to signal the ways in which code anchors difference in anthropometric systems that have historically been linked to policing and eugenics.

  • edited January 2022

    The idea of grounding code in craft, culture and heritage practices is compelling. At the same time, we also need to attend to how this framing can itself generate problems around labour conditions and around how people and cultures are (differently) valued and/or exploited. Nakamura (2014) is a very interesting read on this topic. The article discusses a Fairchild semiconductor factory on a Navajo reservation in Shiprock, New Mexico. Here "immigrant women of color were hailed as the ideal workforce because they were mobile, cheap, and above all, flexible; they could be laid off at any time and could not move to look for alternative forms of work, while their employers could close plants and reopen them in locales with the most favorable conditions." Technology is built on precarious and exploitative labour, and manufacture moves around the globe for these reasons.

    "Depicting electronics manufacture as a high-tech version of blanket weaving performed by willing and skillful indigenous women served two goals: it permitted the incursion of factories into Indian reservations to be seen as a continuation of rather than a break from “traditional” Indian activities, and it pioneered the blurring of the line between wage labor and creative-cultural labor; one seamlessly became the other. Indeed, one may have replaced the other: the new eight-hour workday altered many aspects of family life for the Navajo people who worked at Fairchild."

    I think this is the really important bit and something we can particularly engage with: how we frame different kinds of labour matters, and shapes who performs this work and how it is valued and compensated. Companies don't idealise indigenous crafts because they value them, but because they can co-opt them to generate capital.

  • I've been working with a number of different people and projects on the issue of better support for non-English languages in computational text analysis, though I've never thought about it from a decolonizing perspective. The problem of the default is huge, particularly in text analysis pedagogy: you can learn how to use X or Y method (e.g. anything involving or building on word counts), but usually only the students working on English are then in a position to pick up what they've learned and use it on their own materials. It turns out to be more complicated for many other languages, thanks to inflection (think Romance verb conjugation -- it gets worse with Slavic and Finno-Ugric languages, where nouns also vary in form depending on their syntax, and there's no shortage of other examples) or orthography (think Chinese, which doesn't write spaces between words). But no one ever brings this up in classes, which are presented as "text analysis," not "text analysis for English." In linguistics, there's a movement gaining traction called the Bender Rule (after Prof. Emily Bender), which basically amounts to "name the language you're working on" -- which is something everyone already does, unless they work on English. 🤨
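    The hidden assumption in that default can be seen with nothing more than whitespace tokenization, the usual first step in an English-oriented lesson. The example sentences below are my own illustrations:

```python
# Whitespace tokenization -- the unstated "default" of many text-analysis
# lessons -- quietly assumes an English-like orthography.
english = "the cat sat on the mat"
chinese = "貓坐在墊子上"              # "the cat sat on the mat", no spaces
finnish = "talossa talosta taloon"   # three case forms of talo, "house"

print(len(english.split()))          # 6 tokens, as the lesson expects
print(len(chinese.split()))          # 1 "token": the whole sentence

# And naive counting treats each inflected Finnish form as a separate
# word type, even though all three belong to the same lexeme:
print(len(set(finnish.split())))     # 3 distinct "words"
```

    Getting from here to something usable requires language-specific machinery (segmenters for Chinese, lemmatizers for Finnish), which is exactly the infrastructure most languages lack.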

    Within digital humanities, there are more projects and materials starting to pay attention to this. I collaborated with Melanie Walsh on an addition to her Intro to Cultural Analytics that reworked the text analysis sections for a handful of other languages. And Programming Historian not only has versions in other languages, it also has guidelines about drawing on examples from languages other than English.

    But this is where my reluctance to framing any of this as "decolonizing" comes in: on one hand, it's a huge leap to go from English-only support to English plus any (let alone multiple) other language. But the languages we inevitably choose when we do make that leap are almost always... other colonial languages: Spanish, French, Russian, Chinese. There's a certain amount of technical and data infrastructure you need to be able to build the tools that scaffold English-oriented text analysis (e.g. lemmatizers, stemmers, segmenters). There are definitely amazing, dedicated communities of minority-language NLP developers, like the Ixa group for Basque, but most minority languages don't have that; to be honest, the performance of NLP models even for a lot of major colonial languages is not necessarily good. (Portuguese continues to be a particular thorn in my side in terms of effectively supporting people at work.) There are projects like Princeton's New Languages for NLP that include some languages where developing NLP models could be a decolonizing act (e.g. Quechua, Efik), but honestly, even there, most of the languages being worked on are historical, even historical-imperial (e.g. Ottoman Turkish).

    Maybe another direction this could go in would be to reimagine text analysis in a way that doesn't start from assumptions of English-like grammar or orthography. It could make for, at a minimum, an interesting thought experiment. But I'll confess to leaning heavily towards the practical over the theoretical (or creative) in my own inclinations, and much as programming languages that aren't based on English have a small user base (even among people who speak those languages) compared to Python, the prospect of cutting off other-language text analysis communities from the vast pool of pedagogical materials, support forums, etc. that has sprung up around English-oriented text analysis is worth weighing seriously before trying to implement this. User support is a huge weight; how do you reach the necessary critical mass so it doesn't crush the founders?

  • This is an incredible thread... there are two things I wanted to add to the earlier conversation around craft brought up by @yaxu and @DanielTemkin and others...

    1) String games (like cat's cradle) as narrative... I came across this years ago in the work of Harry Smith; posting a PDF from Cabinet magazine if people want to learn more.

    This makes me think of different representations for change or computation

    2) I do find the bootstrapification of the web deflating - I am referring to the popular JavaScript library introduced by Twitter. I think scaffolding has become super popular, maybe since the origin of ORMs with Django and Ruby. Maybe related to the stackification of everything (loose reference to Bratton).

    I am curious what unscaffolding looks like - what is the craft architecture of software?
    What is the craft version of the stack?

  • edited January 2022

    What a wonderful thread! I'm learning so much, especially about the language-of-code issue and its persistent favoring of colonizer languages. I notice that we haven't turned much to the discussion prompts on funding yet, so maybe this is a good opportunity!

    Paraphrasing a bit, I'd like to respond to this part of @fabiolahanna 's initial post:
    on "the question of funding: if the evaluation [of a digital resource] is dependent on profitability, then what strategies can be leveraged to fund independent projects who can set different evaluative priorities?"

    I'm thinking about this matter of profitability as connected to creating the ideal "universal" tool for everyone to do things the same way. In my own area, I work on creating digital scholarly editions and archives of texts, and I do this in two ways, each a little uncomfortably at odds with the profitability model. I'm going to reflect on each a little bit here, wondering if others have had similar experiences:

    • as teaching exercises with students: small-scale curation projects in which students get to organize, design, and develop a project for the web on their own GitHub repos. These rely on certain common technologies, like learning how git and GitHub work and learning how to code in markup languages and how to process data from texts. Within the "universal" syntactical frameworks of XML well-formedness and command line prompts, there is also considerable freedom in design when students learn how to write their own XSLT, HTML, CSS, etc. They can design something themselves in their own idioms.

      • I have learned over about a decade of teaching that students ultimately appreciate the freedom of learning their own way with web technologies, and that there's a (sometimes uncomfortable!) freedom to designing for themselves "outside the box" of a canned framework. And yet quite often that is how they were introduced to web development, through a canned set of templates that is supposed to appeal to "everyone" somehow: Wix, WordPress, etc. I don't think it benefits our students to learn to code only based on someone else's packaging of code.
      • But I'm also aware that my university believes the building of open educational resources (OER) proceeds based on canned templates, an imposed structure from a centralized authority that is heralded as liberating us from profit-based publishing of educational resources. While the design of OER is a laudable effort, we should be considering how it's gated and organized, and how a centralized publishing framework undercuts or even shuts down the notion that scholars can be designers of their own resources. I believe that one way to press for the decolonization of resources is simply to push back against normalizing templated solutions. This doesn't resolve the issue of coding in a colonial language, but it does mean we can try to introduce code to our students as an idiom that they design with. Perhaps we develop our own in-house localized scripts rather than rely on the easy templated infrastructure we're handed because that's what our universities have purchased for our so-called "benefit".
    • as a scholarly activity: As a digital humanities scholar, I work on developing digital scholarly editions and curating + analyzing digital data about literary/historical texts (e.g. a collection of digitized letters from the 19th century). In my area, I'm aware that colleagues coming from Humanities fields experience a vexed relationship with the same canned templating structures that I've been complaining of with teaching. There are a variety of ways to differentiate "humanities research" from coding and development work, and for some of my colleagues, the independence of coding and design work is too much. But it comes at a cost of having one's project channeled into the software that has become mainstream because it is grant-funded as a "universal" tool for digital humanities support. I see arguments made within my community that if you're not working in a shared publishing framework, perhaps your work will not be "interoperable" or "interchangeable" with others, even though we work with the international guidelines of the Text Encoding Initiative.

      • In my world, the pressure to normalize one's digital work according to streamlined publishing frameworks can narrow a project, pressuring us to mark only that which is already programmed for the publishing technology. While it's convenient to think this way, it also tends to reduce digital scholarly editions to the expression of the designers of a framework. They risk becoming cookie-cutter expressions of a developer group. Of course, humanities scholars learning to code and program their own publishing outputs can make for a longer cycle of "publication." The messiness of learning this is often perceived as too risky, not rewarded by our grant-funding arcs that want results quickly.

    How do we push back? One way that I think I'm pushing back is simply by teaching students to code in a way that recognizes structures but doesn't impose them: (e.g. Study poetry and learn block elements. Don't learn block elements from web forms alone). Students can learn inside a university semester how to work with markup languages and web technologies according to their own designs. If they can do it, so can their professors. We can learn to critique templated decisions and build our own structures, our own code idioms that support our research.

    If we could change the way we think of design perhaps we could make different decisions that encourage diversity and imaginative design in the codebase.
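    To make the teaching exercise described above a little more concrete, here is a minimal sketch in Python (the markup idiom, element names, and CSS classes are hypothetical, invented for illustration rather than drawn from any particular course) of how a student-designed poem encoding might be checked for well-formedness and transformed into HTML block elements by hand, instead of through a canned template:

    ```python
    # A student invents a small markup idiom for a poem, then writes the
    # transform to HTML block elements themselves. Parsing with the standard
    # library also checks well-formedness: ET.fromstring raises ParseError
    # if the markup is malformed.
    import xml.etree.ElementTree as ET

    poem = """<poem>
      <stanza>
        <line>Because I could not stop for Death,</line>
        <line>He kindly stopped for me;</line>
      </stanza>
    </poem>"""

    def poem_to_html(xml_text: str) -> str:
        """Transform the custom <poem> idiom into HTML block elements."""
        root = ET.fromstring(xml_text)  # well-formedness check happens here
        parts = ["<article class='poem'>"]
        for stanza in root.findall("stanza"):
            parts.append("  <section class='stanza'>")
            for line in stanza.findall("line"):
                parts.append(f"    <p class='line'>{line.text}</p>")
            parts.append("  </section>")
        parts.append("</article>")
        return "\n".join(parts)

    print(poem_to_html(poem))
    ```

    A student who writes this transform owns every design decision in it - the element names, the classes, the output structure - which is the kind of idiom-building described above, as opposed to inheriting those decisions from a template.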

  • This is such a fascinating thread!

    Adding on to a vein similar to @code_anth

    It would also be interesting to explore how people "think" and how code is written in the esoteric programming languages mentioned. And how these possibly different ways of thinking influence how we think about and solve technical challenges. Or even how we relay information and feeling with code.

  • If we want to ground programming languages in craft, culture and heritage practices, we might look to helpful historical precedents. In my book The Software Arts I make the point that programming languages come from a long history of arts and crafts writings about how to make things. Cookbooks with recipes are an example of this, but historians of science, like Pamela Long, point out that the tradition of "how to" manuals from the manual arts starts, at least, in ancient Rome. Most programming languages today start by addressing the needs of very specific communities, even if they are later justified as "universal." Instead, we can start with an ethnography, as @yaxu recommends above. Ron Eglash's work starting with his ethnography of craft practice in western Africa -- see The fractals at the heart of African designs -- has yielded computational tools for African-American kids learning programming by, for example, allowing them to build African patterns of hair weaving.
    One might start here on his website

  • @joanne that's a really important point. Franklin's "real world of technology" talks about similar processes where technologies of craftwork, like the sewing machine, apparently invented to save women time, are very quickly twisted into technologies of control, like the sweatshop. I think at the point at which craft skills are put to work on a semiconductor production line, that ceases to be craft, at least how Franklin talks about it. Craft happens when the worker has control over their own work and is able to respond to their materials in order to produce something unique.

    So I think it still works to idealise craft to some extent, as long as we recognise how easily it is undermined.

    @warrensack It's good to think of manuals and ethnographies, but one thing I struggle with is, what about algorithms that aren't written down? Are they even algorithms then? I'd argue that they are, but when there is no notation, and no author as we know it, how do we deal with them in an academic world obsessed with writing and notation? I expect this is familiar territory for ethnographers, but I think heritage algorithms are probably a special challenge.

  • @yaxu: Yes, I think it is a good question for ethnographers. I have worked with ethnographers on various projects, but am not an ethnographer. Ron Eglash's ethnographic work addresses this issue of writing down unwritten algorithms, but it would be great to hear from other ethnographers. Anyone here an ethnographer?

  • edited January 2022
    Problem of default: Beyond the common design of code, I guess there is also the problem of default in terms of learning how to code. It is commonly taught/learnt in a way that feeds into (capitalistic) corporations and the tech industry. Of course the CCS, creative and critical coding, and art communities try to break this tradition, but there is still a lot of resistance in terms of what should be taught in a curriculum, in what way, and what should be prioritized.

    I learned programming in both contexts: exploratory programming in an art school on a project basis, and a formal computer science course offered by a state school CS department (CU). I have benefited from both contexts and experienced the shortcomings of both.

    While I attribute the respective benefits and shortcomings of the two contexts to reasons such as the demarcation of academic disciplines in higher education, the funding model of the academic institution, and other contingent factors (students' expressed intent to acquire programming as a means to an end and/or as an exploratory medium, instructors' critical understanding of programming), I felt that these attributions might also seem detached when it comes to finding strategies to reconcile the tension between the two contexts.

    I can only speak broadly from what I've experienced. When I learned programming in the art school I received fairly minimal instruction, and the curriculum toolkit mainly consisted of software developed for the needs of artists and designers. The exception was XPUB (look it up :)), whose curriculum exposed students to a more diverse range of tools and also cultivated a critical view towards software (Computer Lib!). I wouldn't call it a downside - conventional code practice is not the objective of an art school - but because the technical instruction was fairly basic, I patched things together mainly on my own. Classes in CS patched things up for me, in aspects like basic theory of computation, good coding practice, and extensive debugging.

    Personally, I haven't found a way to reconcile the two perfectly - still trying.

  • I'm watching a LASER talk by Amelia Winger-Bearskin right now. Very apropos of this thread. This might be a good resource:

  • I've learned so much from this discussion! I also have a lot more reading to do :).

    One thing I'd like to reflect on here is the idea of modularity and templates as brought up at the introduction to this thread:

    Continuing our thought on modularity, we are interested in opening up the question of whether it is possible to create an ethical template. Would it be possible to design a template that is programmed to support differences? And when that template isn’t responding to lived experiences, how can that template be challenged? How can it change/evolve? Can a template follow the intention of a toolkit, programmed in a way to give programmers the utmost agency to modify?

    I also enjoyed @epyllia's thread on the drawbacks of using templates to learn and publish. I agree those are continuing issues.

    My work is on open source software development and I think a lot about modularity in the technical and abstract sense (in the spirit of this thread I should mention I work in Python and I'm a native English speaker). What continues to become important for me is making a conscious effort to think about who I'm designing for and in what context this design will be used. How do we design for beginners? How about for experts? What are the differences in how they think and approach computation? This requires being conscious of the default you decide to work with, as others have pointed out in this thread.

    As a designer I find the idea of different modules that have common threads appealing, because this allows for local context to be applied with some universal themes. This comes with its own tradeoffs though, and I like Sasha Costanza-Chock's Design Justice as a way to think about this.
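    A minimal sketch of what "modules with common threads" might look like in code (all names here are hypothetical, invented for illustration): a function that works out of the box for beginners via a sensible default, while letting experts swap in behavior suited to their own local context - for instance, a tokenizer that doesn't assume whitespace-delimited words:

    ```python
    # The "common thread" is the counting logic; the "local module" is the
    # tokenizer. Beginners never see the hook; experts replace it without
    # editing this code.
    from typing import Callable, Iterable

    def default_tokenize(text: str) -> list[str]:
        """Beginner-friendly default: split on whitespace."""
        return text.split()

    def word_counts(texts: Iterable[str],
                    tokenize: Callable[[str], list[str]] = default_tokenize
                    ) -> dict[str, int]:
        """Count tokens across texts; the tokenizer is the swappable part."""
        counts: dict[str, int] = {}
        for text in texts:
            for token in tokenize(text):
                counts[token] = counts.get(token, 0) + 1
        return counts

    # Default path for beginners:
    print(word_counts(["to be or not to be"]))
    # Expert path: a per-character tokenizer, e.g. for a script
    # without word-delimiting spaces.
    print(word_counts(["你好"], tokenize=list))
    ```

    The tradeoff the post mentions shows up even at this tiny scale: the default quietly encodes an assumption (spaces mark word boundaries) that holds for English but not for every language, which is exactly why the hook has to exist.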
