<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
    <title>2024 Week 3: DHQ Special Issues — CCS Working Group</title>
    <link>https://wg.criticalcodestudies.com/index.php?p=/</link>
    <pubDate>Sun, 12 Apr 2026 07:18:07 +0000</pubDate>
    <description>2024 Week 3: DHQ Special Issues — CCS Working Group</description>
    <language>en</language>
    <atom:link href="https://wg.criticalcodestudies.com/index.php?p=/categories/2024-week-3/feed.rss" rel="self" type="application/rss+xml"/>
    <item>
        <title>How to Do Things with Deep Learning Code</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/174/how-to-do-things-with-deep-learning-code</link>
        <pubDate>Wed, 21 Feb 2024 03:01:42 +0000</pubDate>
        <category>2024 Week 3: DHQ Special Issues</category>
        <dc:creator>minhhua12345</dc:creator>
        <guid isPermaLink="false">174@/index.php?p=/discussions</guid>
        <description><![CDATA[<p><strong>Abstract:</strong> The premise of this article is that a basic understanding of the composition and functioning of large language models is critically urgent. To that end, we extract a representational map of OpenAI's GPT-2 with what we articulate as two classes of deep learning code, that which pertains to the model and that which underwrites applications built around the model. We then verify this map through case studies of two popular GPT-2 applications: the text adventure game, <em>AI Dungeon</em>, and the language art project, <em>This Word Does Not Exist</em>. Such an exercise allows us to test the potential of Critical Code Studies when the object of study is deep learning code and to demonstrate the validity of code as an analytical focus for researchers in the subfields of Critical Artificial Intelligence and Critical Machine Learning Studies. More broadly, however, our work draws attention to the means by which ordinary users might interact with, and even direct, the behavior of deep learning systems, and by extension works toward demystifying some of the auratic mystery of "AI." What is at stake is the possibility of achieving an informed sociotechnical consensus about the responsible applications of large language models, as well as a more expansive sense of their creative capabilities; indeed, understanding how and where engagement occurs allows all of us to become more active participants in the development of machine learning systems.</p>

<p><strong>Question:</strong> Our article advocates for the close reading of ancillary deep learning code. Although this code does not do any deep learning work per se, it still contributes to the language generation process (e.g. decoding the model's raw output, content moderation, data preparation). We did give some thought to how to do the work of extending a CCS analysis to the weights and the model and arrived at this conclusion: "Even more important to the running of the original GPT2 model are the weights, which must be downloaded via “download_model.py” in order for the model to execute. Thus, one could envision a future CCS scholar, or even ordinary user, attempting to read GPT-2 without the weights and finding the analytical exercise to be radically limited, akin to excavating a machine without an energy source from the graveyard of 'dead media.'" Nearly three years on from the composition of our article, in the context of the API-ification of LLMs, we can build upon Evan’s earlier <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/170/the-epistemology-of-code-in-the-age-of-machine-learning-evan-buswell">question</a> and ask not only <em>should we/could we</em>, but also <em>how</em> can we read what we termed “core” deep learning code?</p>
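<p>To make the idea of ancillary code concrete, here is a minimal sketch (ours, for illustration only; not code from GPT-2 or the article) of one decoding step: turning a model's raw logits into a token via temperature scaling and top-k sampling. It does no deep learning work itself, yet it shapes what the model "says".</p>

```python
import math
import random

def sample_next_token(logits, temperature=0.7, top_k=3, rng=None):
    """Pick a token id from raw logits: scale by temperature, keep only
    the top_k candidates, softmax, then sample. Purely illustrative."""
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]
    # Indices of the top_k largest logits; everything else gets zero weight.
    top = sorted(range(len(scaled)), key=lambda i: scaled[i])[-top_k:]
    m = max(scaled[i] for i in top)
    weights = [math.exp(s - m) if i in top else 0.0
               for i, s in enumerate(scaled)]
    return rng.choices(range(len(scaled)), weights=weights, k=1)[0]

# A toy "vocabulary" and raw model output (both hypothetical):
vocab = ["sword", "map", "banner", "lantern", "cloak"]
logits = [4.0, 1.0, 2.5, 0.5, 3.0]
print(vocab[sample_next_token(logits)])  # one of the three likeliest words
```

<p>Everything of interest to a close reader happens outside the model: the temperature and top-k choices are authorial decisions encoded in ancillary code.</p>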

<p><strong>Note:</strong> This article was written in 2021.</p>
]]>
        </description>
    </item>
    <item>
        <title>DHQ article: Lai-Tze Fan -- "Reverse Engineering the Gendered Design of Amazon’s Alexa"</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/171/dhq-article-lai-tze-fan-reverse-engineering-the-gendered-design-of-amazon-s-alexa</link>
        <pubDate>Mon, 19 Feb 2024 20:41:25 +0000</pubDate>
        <category>2024 Week 3: DHQ Special Issues</category>
        <dc:creator>Lai-Tze Fan</dc:creator>
        <guid isPermaLink="false">171@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>Hi all! I'll be around all week to chat about any thoughts, ideas, and questions you may have concerning my 2022 DHQ article on Amazon's Alexa. Please also engage with each other.</p>

<p>Here's the original abstract, with key words and thoughts italicized.</p>

<p>"Reverse Engineering the Gendered Design of Amazon’s Alexa: Methods in Testing Closed-Source Code in Grey and Black Box Systems"</p>

<p>This article examines the <em><strong>gendered design</strong></em> of Amazon Alexa’s voice-driven capabilities, or, “skills,” in order to better understand how Alexa, as an AI assistant, mirrors traditionally feminized labour and sociocultural expectations. While Alexa’s code is <em><strong>closed source — meaning that the code is not available to be viewed, copied, or edited</strong></em> — certain features of the code architecture may be identified through <em><strong>methods akin to reverse engineering and black box testing</strong></em>. This article will examine what is available of Alexa’s code — the official software developer console through the Alexa Skills Kit, code samples and snippets of official Amazon-developed skills on Github, and the code of an unofficial, third-party user-developed skill on Github — in order to demonstrate that Alexa is designed to be female-presenting, and that, as a consequence, expectations of gendered labour and behaviour have been built into the code and user experiences of various Alexa skills. In doing so, this article offers <em><strong>methods in critical code studies toward analyzing code to which we do not have access</strong></em>. It also provides a better understanding of the inherently gendered design of AI that is designated for care, assistance, and menial labour, outlining ways in which these design choices may affect and influence user behaviours.</p>
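<p>The black-box method above can be caricatured in a few lines: since closed-source code can only be called, not read, the evidence consists entirely of input/output pairs. The sketch below is an editorial illustration, not Alexa code; <code>opaque_skill</code> is entirely hypothetical.</p>

```python
# A toy illustration of black-box probing: we can only call the system,
# never read its source, so we characterize it by observed behavior.
# `opaque_skill` stands in for a closed-source system (hypothetical).
def opaque_skill(utterance):
    if "thank" in utterance.lower():
        return "No problem!"
    return "Sorry, I don't know that."

def probe(system, utterances):
    """Record (input, output) pairs -- the only evidence a black-box method has."""
    return {u: system(u) for u in utterances}

observations = probe(opaque_skill, ["Thank you", "thanks!", "What time is it?"])
```

<p>Patterns in such observation tables (here, a deferential response to gratitude) are the raw material a hermeneutic reverse engineering can then interpret.</p>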

<p>*</p>

<p>Here are some discussion questions to get us started:</p>

<ol>
<li><p>How do humanities- and social sciences-based theories and methods augment CCS? For instance, in this article, I describe Anne Balsamo's method of "hermeneutic reverse engineering," through which I used methods of close reading and discourse analysis to analyze Alexa's responses, the Alexa skills kit console, and some of the code used to build Alexa skills.</p></li>
<li><p>What other examples of closed-source and proprietary code and software do you think would benefit from a mixed method approach to reverse engineering their design?</p></li>
<li><p>What does CCS lend to analyses of gendered, racialized, and class-based biases in code and software?</p></li>
<li><p>Comparisons are sometimes made between language-based AI such as Alexa and ChatGPT, but in addition to these AI systems stemming from different types of NLP design, they also create different user expectations about what they should be used for and how users are expected to interact with each interface. Discuss.</p></li>
</ol>
]]>
        </description>
    </item>
    <item>
        <title>Defactoring Code as a Critical Methodology</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/175/defactoring-code-as-a-critical-methodology</link>
        <pubDate>Fri, 23 Feb 2024 12:55:41 +0000</pubDate>
        <category>2024 Week 3: DHQ Special Issues</category>
        <dc:creator>jorisvanzundert</dc:creator>
        <guid isPermaLink="false">175@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>Hi all, this thread focuses on the article that <a rel="nofollow" href="https://www.sci.pitt.edu/people/matthew-burton" title="Matthew Burton">Matthew Burton</a> (University of Pittsburgh) and <a rel="nofollow" href="https://www.huygens.knaw.nl/en/medewerkers/joris-van-zundert-2/" title="I">I</a> wrote for the DHQ special issue that is still to be published: “Defactoring ‘Pace of Change’”.</p>

<h2>Abstract</h2>

<p>Our article highlights the increasing importance of code in computational literary analysis and the lack of recognition and visibility that code, as scientific output, currently receives in scholarly publishing. We argue that bespoke code (i.e. purpose-built analytical code intended for single or infrequent use) should be considered a fundamental part of scholarly output. As such, a more overt inclusion of code in the scholarly discourse is warranted. An unanswered question is what a proper methodology for this would be.</p>

<p>As an experimental contribution to developing such a methodology we introduce the concept of “defactoring”. We propose defactoring as a technique for critically reading and evaluating code used in humanities research. Defactoring involves closely reading, thoroughly commenting, and potentially reorganizing source code to create a narrative around its function. This technique is used to critically engage with code, peer review code, understand computational processes, and teach computational methods.</p>
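<p>As a toy illustration of the idea (our own, not the article's case study), compare a terse bespoke function with a "defactored" version of the same logic, reorganized and commented so the code itself narrates the method:</p>

```python
# Hypothetical "bespoke" analysis code, as it might appear in a repository:
def ratio(xs, ys):
    return sum(1 for x, y in zip(xs, ys) if x > y) / len(xs)

# A defactored version: identical logic, but reorganized and commented so a
# reader can follow the method (the names and framing are illustrative).
def share_of_pairs_where_first_group_scores_higher(scores_a, scores_b):
    """For each paired observation, ask whether group A scored higher than
    group B, then report the proportion of pairs where it did."""
    pairs = list(zip(scores_a, scores_b))
    a_higher = [a > b for a, b in pairs]
    # The headline statistic is just the proportion of True values.
    return sum(a_higher) / len(pairs)
```

<p>Nothing computational changes; what changes is that the code can now be read, and peer reviewed, as an argument.</p>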

<p>We describe a case study in which we applied this technique to the code associated with a publication in literary studies by Ted Underwood and Jordan Sellers (“The <em>Longue Durée</em> of Literary Prestige”). Based on our case study experience, we question the separation between scholarly publications and code, and we advocate breaking down these boundaries to create a more robust scholarly dialogue. Linking code to theoretical exposition can enhance scholarly discourse and invites further exploration of the relationship between literary interpretation and computational methodology.</p>

<p>Finally, we also reflect on the challenges we met in publishing an article that combines theoretical discussion with defactored code, and we highlight the gap between scholarly argument and case-study material that is enforced by current academic publishing platforms. We suggest that there is a need for academic genre conventions for publishing bespoke code, and we propose the idea of a notebook-centric scholarly publication that integrates code and interpretation seamlessly.</p>

<h2>Questions</h2>

<p>I have been wondering again of late (but this is a years-old unease, tbh) why scholars and scientists seem so indifferent to the quality of the code and algorithms they routinely apply in research. To me it seems that the technical and methodological quality of code applied in any analytical fashion should be interrogated quite rigorously. Any tiny error in that code may completely invalidate any finding. However, all our quality control processes (peer review, metrics, academic crediting, institutional evaluation – flawed as they may be by themselves) are almost exclusively aimed at the final outcome of research: the publication. One could argue that a thorough discussion of the code applied should be part of any research paper and should therefore have been scrutinized during peer review. But this is almost never the case. Only a facile methodological abstraction is presented on paper, and there is the assumption that this expresses what the code actually does. Arguably, in many cases tailor-made algorithms will not do exactly what the author would have us believe. (To substantiate this at least a little bit: a colleague of mine who is a research software engineer almost completely refuses to use any Python libraries for statistical analysis, on account of her finding almost all of them flawed, to a lesser or greater extent, in mathematical precision.) Many reasons have been put forward for why we as researchers do not engage with code in some peer review fashion: lack of technical skills and knowledge, the impossibility of adding yet another infeasible task to the academic process, lack of resources, misplaced trust in the perceived impartiality and mathematical correctness of code, and the assumption that quality control of code is fully covered by RSEs (Research Software Engineers). All of these explanations are part of the problem.</p>

<p>However, I cannot escape the impression that we mostly hesitate to take intellectual responsibility because we acknowledge the sheer insurmountable effort involved in organizing and executing code peer review, all the while being rather poignantly aware that we are methodologically falling short. Will code peer review be in a nascent state perpetually, because it presents us with too many inconvenient truths?</p>
]]>
        </description>
    </item>
    <item>
        <title>The Epistemology of Code in the Age of Machine Learning (Evan Buswell)</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/170/the-epistemology-of-code-in-the-age-of-machine-learning-evan-buswell</link>
        <pubDate>Mon, 19 Feb 2024 18:52:35 +0000</pubDate>
        <category>2024 Week 3: DHQ Special Issues</category>
        <dc:creator>ebuswell</dc:creator>
        <guid isPermaLink="false">170@/index.php?p=/discussions</guid>
        <description><![CDATA[<p><strong>Abstract</strong>: Code is an epistemic system predicated on the repression of state, but with the rise of global optimization and machine learning algorithms, code functions just as much to obscure knowledge as to reveal it. Code is constructed in response to two characteristics of the twentieth century episteme. First, knowledge is represented as a process. Second, this representation must be sufficient, such that its meaning is constituted by the representational form itself. In attempting to meet these requirements, process is separated into an essential part, code, and an inessential part, state. Although code has a relationship with state, in order to construct code as an epistemic object, state is limited and suppressed. This construction begins with the first formation of code in the 1940s and reaches its modern form in the structured programming movement of the later 1960s. But now, with the growing prominence of global optimization and machine learning algorithms, it is becoming apparent that state is vitally important, yet our tools for understanding it are inadequate. This inadequacy nevertheless serves certain interests, which make use of this unclarity to act irresponsibly while obscuring their ability to determine the behavior of their software.</p>

<p><strong>Question</strong>: The bulk of the article is about the evolution of code and the way this evolution was motivated by the suppression of state, in order to make code a better epistemic object. It ends on a bit of a dismal note: with machine learning algorithms, we get a peculiar epistemic inversion, where the <em>state</em> of the machine (all the learned weights on the neural network, for example) holds pretty much all the interesting parts, but <em>state</em> is precisely what has been strategically ignored in order to develop the very powerful epistemic tool that is <em>code</em>. The big, million-dollar question: now that we've spent 70 years developing our epistemic tools in the exact opposite direction, can we make heads or tails of all of these weights? If we construe critical code studies very narrowly, this is not a critical code studies question—weights are not code, they're not written by humans, and they're not even really read by humans (yet!). But on the other hand, this thing that we do—look at a technical symbolic language and extract a more complete meaning than a compiler does, than many coders are aware of, etc.—this thing looks kinda similar to the problem of making sense of a trained system. Further, trying to read weights looks a lot like Mark Marino's original proposal for CCS: software criticism is great, but we can't ignore the source code -&gt; LLM criticism or training data criticism is great, but we can't ignore the weights.</p>

<p>So my big question for the group is: what now? When the weights and other state data might be more important than the code, what is the role of reading code? Should we/could we expand code studies methods to look at the state of machine learning algorithms, the weights or other data that make a given algorithm work?</p>
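<p>One way to see the inversion in miniature (a toy of our own, not from the article): the code below is a few transparent lines, yet its behavior lives entirely in its state, the weights.</p>

```python
# The code of this one-neuron "model" is trivially readable; everything
# interesting is in the parameters. (Toy values, purely illustrative.)
def forward(weights, bias, features):
    """Weighted sum plus bias, thresholded at zero."""
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if activation > 0 else 0

# Identical code, opposite behaviors, depending only on state:
positive = forward([2.0, -1.0], -0.5, [1.0, 0.2])   # weights favor feature 0
negative = forward([-2.0, 1.0], -0.5, [1.0, 0.2])   # the same weights, flipped
```

<p>Reading <code>forward</code> tells us almost nothing about what the system will do; reading the weights tells us almost everything, and that is precisely the reading practice we lack.</p>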
]]>
        </description>
    </item>
    <item>
        <title>Tracing “Toxicity” Through Code: Towards a Method of Explainability and Interpretability in Software</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/172/tracing-toxicity-through-code-towards-a-method-of-explainability-and-interpretability-in-software</link>
        <pubDate>Mon, 19 Feb 2024 20:44:19 +0000</pubDate>
        <category>2024 Week 3: DHQ Special Issues</category>
        <dc:creator>davidmberry</dc:creator>
        <guid isPermaLink="false">172@/index.php?p=/discussions</guid>
        <description><![CDATA[<p><strong>Abstract:</strong> The ubiquity of digital technologies in citizens’ lives marks a major qualitative shift in which automated decisions taken by algorithms deeply affect the lived experience of ordinary people. But this is not just an action-oriented change, as computational systems can also introduce epistemological transformations in the constitution of concepts and ideas. However, a lack of public understanding of how algorithms work also makes them a source of distrust, especially concerning the way in which they can be used to create frames or channels for social and individual behaviour. This public concern has been magnified by election hacking, social media disinformation, data extractivism, and a sense that Silicon Valley companies are out of control. The wide adoption of algorithms into so many aspects of people’s lives, often without public debate, has meant that algorithms are increasingly seen as mysterious and opaque, when they are not seen as inequitable or biased. Until recently it has been difficult to challenge algorithms or to question their functioning, especially given the wide acceptance that software’s inner workings were incomprehensible, proprietary or secret (cf. open source). Asking why an algorithm did what it did was often not thought particularly interesting outside of a strictly programming context. This has meant a widening explanatory gap in relation to understanding algorithms and their effect on people’s lived experiences. This paper argues that Critical Code Studies offers a novel field for developing theoretical and code-epistemological practices to reflect on the explanatory deficit that modern societies face from a reliance on information technologies. The challenge of new forms of social obscurity arising from the implementation of technical systems is heightened by the example of machine learning systems that have emerged in the past decade.</p>

<p>A key methodological contribution of this paper is to show how concept formation, in this case of the notion of “toxicity,” can be traced through key categories and classifications deployed in code structures (e.g. modularity and the layering of software), but also how these classifications can appear more stable than they actually are, owing to the tendency of software layers to obscure even as they reveal. How a concept such as “toxicity” can be constituted through code and discourse and then used unproblematically is revealing both in relation to its technical deployment and for a possible computational sociology of knowledge. By developing a broadened notion of explainability, this paper argues that critical code studies can make important theoretical, code-epistemological and methodological contributions to digital humanities, computer science and related disciplines.</p>
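<p>A schematic sketch of that layering (editorial illustration only; none of this is the paper's actual case-study code) shows how a contingent, continuous judgment hardens into a settled binary fact as it rises through software layers:</p>

```python
# How layering can stabilize a concept: a continuous model score is
# thresholded in a lower layer, and by the time "toxicity" reaches the
# caller it looks like a settled category. (All values hypothetical.)

def model_score(text):
    # Stand-in for a trained classifier's raw output in [0, 1].
    return 0.82 if "idiot" in text else 0.12

def moderation_layer(text, threshold=0.5):
    # The contingent threshold choice disappears from view at this layer.
    return "TOXIC" if model_score(text) >= threshold else "OK"

def application_layer(text):
    # The application consumes a clean label, not the judgment behind it.
    return {"text": text, "label": moderation_layer(text)}
```

<p>Tracing the concept means reading back down through these layers to recover the choices each one conceals.</p>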

<p><a rel="nofollow" href="http://www.digitalhumanities.org/dhq/vol/17/2/000706/000706.html" title="Read the paper here">Read the paper here</a></p>

<p><strong>Questions:</strong></p>

<ol>
<li>This paper sets out a new research programme in digital humanities and critical code studies: exploring concept formation and how extra-discursive, prescriptive elements from software (and para-code), rather than purely discursive ones, can be mobilised to stabilise it. What other examples of similar approaches are seen in the literature or in technical work?</li>
<li>Tracing as a method in critical code studies seems to me to have great potential for traversing the code domain and the social (the socio-code) in order to understand the importance of their interrelation. What other tools can we bring to examining the socio-code for this type of code/concept analysis that moves between social and algorithmic levels of description? (Here I am thinking of Entity Relationship Diagrams, UML, and similar techniques.)</li>
<li>One of the problems with this method is that it requires the management of multiple levels of discourse and code and their interdependence. As this was a relatively tightly scoped analysis, juggling the levels was manageable, but it would be good to be able to use some form of digital asset management, code management tool, IDE or other technique to assist with the process. Perhaps something like NVivo might assist with this? Do people have experience of managing multiple, inter-layered and inter-textual document analysis that might help with this?</li>
<li>The Concept Lab <a rel="nofollow" href="https://concept-lab.lib.cam.ac.uk" title="https://concept-lab.lib.cam.ac.uk">https://concept-lab.lib.cam.ac.uk</a> (University of Cambridge) studied the architectures of conceptual forms in discourse. Are people familiar with this approach to concept mapping, and how might approaches such as these be incorporated into critical code studies?</li>
<li>This approach of code/concept critique is clearly much more easily undertaken in open-source work, but what methods can we use for proprietary systems?</li>
<li>Does this approach result in a tendency towards a pragmatic mode of analysis within a specific problem situation, one which may just reflect an instrumental approach to software rather than genuine insight? How does one connect these case studies to more generalisable situations, perhaps to questions of common-sense assumptions and/or embedded values and norms?</li>
<li>There are obvious links to questions raised by the notions of explainability and interpretability in software, AI and automated decision systems. Does critical code studies have implications for the debates over explainability vs interpretability and what might the assumptions built into explainability mean for critical code studies?</li>
</ol>
]]>
        </description>
    </item>
    <item>
        <title>Witnesses and Witness Marks in Vintage BASIC Code</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/173/witnesses-and-witness-marks-in-vintage-basic-code</link>
        <pubDate>Tue, 20 Feb 2024 03:16:36 +0000</pubDate>
        <category>2024 Week 3: DHQ Special Issues</category>
        <dc:creator>annettevee</dc:creator>
        <guid isPermaLink="false">173@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>Hi! This thread focuses on the article I wrote for the DHQ Special Issue on CCS, <a rel="nofollow" href="https://www.digitalhumanities.org/dhq/vol/17/2/000696/000696.html" title="BASIC FTBALL*** and Programming for All">BASIC FTBALL*** and Programming for All</a>. Here’s the official abstract:</p>

<blockquote><div>
  <p>In late fall 1965, John Kemeny wrote a 239-line BASIC program called FTBALL***. Along with his colleague Thomas Kurtz and a few work-study students at Dartmouth College, Kemeny had developed the BASIC programming language and Dartmouth Time-Sharing System (DTSS). BASIC and DTSS represented perhaps the earliest successful attempt at “computer programming for all,” combining English-language vocabulary, simple yet robust instructions, and near-realtime access to a mainframe computer. This article takes a closer look at FTBALL as a crucial program in the history of “programming for all” while gesturing to the tension between a conception of “all” and FTBALL’s context in an elite, all-male college in the mid-1960s. I put FTBALL in a historical, cultural, gendered context of “programming for all” as well as the historical context of programming language development, timesharing technology, and the hardware and financial arrangements necessary to support this kind of playful, interactive program in 1965. I begin with a short history of BASIC’s early development, compare FTBALL with other early games and sports games, then move into the hardware and technical details that enabled the code before finally reading FTBALL’s code in detail. Using methods from critical code studies (Marino 2020), I point to specific innovations of BASIC at the time and outline the program flow of FTBALL. This history and code reading of BASIC FTBALL provides something of interest to computing historians, critical code studies practitioners, and games scholars and aficionados.</p>
</div></blockquote>

<p>The <a rel="nofollow" href="https://math.dartmouth.edu/~doyle/docs/ftball/ftball.txt" title="FTBALL*** code">FTBALL*** code</a> is in an appendix to the article, and also available on Dartmouth emeritus professor <a rel="nofollow" href="https://math.dartmouth.edu/~doyle/docs/ftball/ftball.txt" title="Peter Doyle’s website">Peter Doyle’s website</a> and the <a rel="nofollow" href="http://www.vintage-basic.net/bcg/ftball.bas" title="Vintage Basic website">Vintage Basic website</a>.</p>

<p>Here, I want to focus on the idea of witness marks in code. I’ll explain what they are and give a long preamble to ask: <br />
•   What witness marks do you see in the code that you study? <br />
•   Who are the witnesses to the construction of your code, and how might you connect with them to learn more about the processes of the code’s composition?</p>

<h1>Witness marks</h1>

<p>Horologists restoring ancient clocks note the little marks and dents in the clock’s workings to see the path of previous restorers, undo the damage they might have done, and replace the missing screws or tiny gears that make the clock function. Those little bits of human evidence in the machine are called witness marks. (I learned about these from the <a rel="nofollow" href="https://stownpodcast.org/" title="S-Town podcast">S-Town podcast</a>, which features a horologist and opens up with a description of witness marks.)</p>

<p>One feature of early BASIC’s line numbers is that by looking at them you can often glean the process of composition from the code itself. BASIC convention calls for numbering lines by 10s, like so:</p>

<pre><code>10 PRINT &quot;Hello World!&quot;
20 GOTO 10
</code></pre>

<p>But if I decide later that I don’t want an infinite loop in my code, or if I want to add something else to the message, I might revise my code like so:</p>

<pre><code>5 LET N = 0
10 PRINT &quot;Hello World!&quot;
12 PRINT &quot;This is Annette.&quot;
14 LET N = N + 1
20 IF N &lt; 2 THEN GOTO 10
25 END 
</code></pre>

<p>That way, my code will terminate after printing my introductory message twice.</p>

<p>A code sleuth analyzing this program later, without me around, would be able to surmise from the in-between-10s line numbers that I added those lines in later versions of the code. The line numbers in early BASIC mattered a lot because GOTOs would point to them. Not such a big deal for my little Hello World, but for a program of any length, it would have been a giant pain to renumber the lines. Moreover, the interface for code revision didn’t give random access to the code—you couldn’t just see the whole program and line-edit it like I’m doing now with my text in my word processor. Instead, you used the I/O console to add lines to the code in the computer’s very limited memory. You could overwrite previous lines of code or add new ones, but it was too difficult to renumber lines. So these in-between line numbers serve as witness marks for the construction of the code, and as clues to those of us who come afterward, trying to understand how it works.</p>
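<p>The heuristic is simple enough to mechanize. Here is a small sketch (mine, purely illustrative) that scans a BASIC listing and flags line numbers falling between the conventional multiples of 10, i.e. the likely later insertions:</p>

```python
def witness_marks(listing):
    """Flag BASIC line numbers that fall between multiples of 10 --
    likely later insertions, i.e. 'witness marks' of revision."""
    marks = []
    for line in listing.strip().splitlines():
        number = int(line.split()[0])
        if number % 10 != 0:
            marks.append(number)
    return marks

program = """\
5 LET N = 0
10 PRINT "Hello World!"
12 PRINT "This is Annette."
14 LET N = N + 1
20 IF N < 2 THEN GOTO 10
25 END"""

print(witness_marks(program))  # the lines added in revision
```

<p>Run on the Hello World revision above, it flags lines 5, 12, 14, and 25, exactly the lines a code sleuth would suspect were added later.</p>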

<p>In reading the code for FTBALL***, written one Sunday by John Kemeny—one of the co-inventors of BASIC—I wanted to see how it worked, but also how it was composed. I can guess, for instance, that the PLACE KICK was added later in the session because of the line numbers:</p>

<pre><code>5001 PRINT &quot;PLACE KICK&quot;
5005 LET F = -1
5006 IF R &gt; .5 THEN 5010
5007 PRINT &quot;KICK IS BLOCKED***&quot;
5008 LET Y = -5
5009 GOTO 1480
</code></pre>

<p>So I can see a bit about how Kemeny’s mind worked in composing this program, which is pretty cool.</p>

<h1>Witnesses</h1>

<p>John Kemeny passed away in 1992, long before I did any research on BASIC, but years after I’d dabbled in BASIC myself, sitting at my Commodore 64 as a kid in the mid-80s. BASIC was everywhere in the mid-80s, as the CCS classic <em>10 PRINT</em> notes. My research on BASIC was, in part, a way to connect with my own early history with computing. But it also connected me with Kemeny, who was an early advocate of coding literacy, a former assistant to Einstein, and by all accounts a genius and a great guy.</p>

<p>It also connected me to Thomas Kurtz, the other inventor of BASIC, and an amazing person as well. Kurtz let me interview him in 2017, wrote me memos on how BASIC was put together, and then, at age 95, once this DHQ article was published, approved my take on it. He was a witness to both the construction of BASIC and my research on it 60 years later. I also emailed with Stephen Garland and John McGeachie, who were part of the Dartmouth undergraduate team working on ALGOL, BASIC and DTSS in the mid-1960s, for some details and clarifications about BASIC. So, to augment the witness marks I saw in the code, I also reached out to witnesses of it so I could understand the environment in which Kemeny wrote the code. Both the witnesses and the witness marks were key to my reading of BASIC FTBALL***.</p>

<h1>Questions</h1>

<p>This is a long preamble to ask: <br />
•   What witness marks do you see in the code that you study? <br />
•   Who are the witnesses to the construction of your code, and how might you connect with them to learn more about the processes of the code’s composition?</p>
]]>
        </description>
    </item>
    <item>
        <title>The Less Humble Programmer</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/179/the-less-humble-programmer</link>
        <pubDate>Mon, 26 Feb 2024 16:08:22 +0000</pubDate>
        <category>2024 Week 3: DHQ Special Issues</category>
        <dc:creator>Temkin</dc:creator>
        <guid isPermaLink="false">179@/index.php?p=/discussions</guid>
        <description><![CDATA[<p><strong>Summary</strong></p>

<p><a rel="nofollow" href="http://digitalhumanities.org/dhq/vol/17/2/000698/000698.html">The Less Humble Programmer</a> investigates the aesthetics of esolangs (esoteric programming languages) as drawing from / challenging / reacting to two competing aesthetic impulses in mainstream computing and computer science:</p>

<ul>
<li><p>The first is, borrowing from Dijkstra, Humbleness. This is the neutral voice of code that downplays personal style. It values clarity above all else, including, for the most part, efficiency. This is code that is friendly to corporate coding, easily understood by others. It is how we, for the most part, are taught to code, even creative coders.</p></li>
<li><p>The second is conciseness and cleverness. This is the view of "elegance" drawn from mathematics, valuing of the shortest expression that says or does the most. It prizes conciseness and (sometimes) efficiency over clarity. It is familiar from other hacker arts (e.g. demoscene) and was commonplace in early computing when extreme efficiency was necessary.</p></li>
</ul>

<p>In "Sorcery and Source Codes," Wendy Chun discusses the move in mainstream coding from the second aesthetic to the first, as programming became professionalized and corporatized. But the impulse to show technical wizardry through clever and unreadable code never went away; it simply found other avenues for expression.</p>

<p><strong>An Esolang Example</strong></p>

<p>Consider brainfuck, the language that infamously reduces programming to a Turing-machine-like minimum, offering only eight commands to the programmer, each represented by a punctuation mark. The <code>+</code> adds one to a memory cell and <code>-</code> subtracts one. <code>&lt;</code> and <code>&gt;</code> let us navigate through memory cells. The square brackets loop until the memory cell pointed to is at zero.</p>

<p>Brainfuck has an absolute refusal of Humbleness; there is no neutral way to express anything in the language and there are no standards to adhere to. One must make a choice even for the most trivial of code.</p>

<p>If we want to print 46 to the screen, we could have 46 <code>+</code>s followed by <code>.</code> (the command to print):</p>

<p><code>++++++++++++++++++++++++++++++++++++++++++++++.</code></p>

<p>This is not especially readable, as it is too many <code>+</code>s to recognize at a glance.</p>

<p>Then there is the multiplication pattern: one cell starts with a value and counts down, each time adding a second value to the neighboring memory cell, effectively multiplying the first by the second via a loop of addition. It multiplies 5 by 9 and then adds 1:</p>

<p><code>+++++[&gt;+++++++++&lt;-]&gt;+.</code></p>

<p>And here is the shortest (known) brainfuck algorithm to print 46:</p>

<p><code>-[+&gt;+[+&lt;]&gt;+]&gt;.</code></p>

<p>It is extremely short and has an appealing rhythm of <code>+</code>s and brackets, but it is mostly unreadable without playing out a complex algorithm in one's head. It is a loop within a loop and gets to 46 by walking back and forth across six memory cells. It wraps past zero (in brainfuck, zero minus 1 is 255) many times. It is the most efficient in number of characters, but the slowest of the three in execution.</p>

<p>These alternate solutions show how brainfuck, though it may appear chaotic, prizes the re-assertion of order within a seemingly chaotic space. It is chaos to be tamed. The most celebrated approach will be the shortest or the most efficient.</p>

<p><strong>Questions</strong></p>

<p>Esolangs have matured as a medium since 1993, when brainfuck appeared, and deal with a wider set of concerns. I've been thinking lately about how current esolangs build on these foundational aesthetics. The language Unnecessary (2005), for instance, refuses any form of computation: its only valid program is the one that doesn't exist. This is an example of the kind of idea art that allows no space for technical wizardry, by taking formal play with language design to its most extreme, yet feels indebted to the early esolangs in that choice. I wonder how other current esolangs engage (or do not engage) with these aesthetics of early esolangs.</p>

<p>Also, how is this thinking reflected in code poetry? Code poetry is one of the other code forms (along with esolangs and demos) where one has the freedom to break from clarity. At least some code poets (such as those behind the recent book <a rel="nofollow" href="https://hyperallergic.com/835123/what-is-code-poetry-daniel-holden-chris-kerr/">./code –– poetry</a>) poke fun at the mathematical idealism of code, in their case via the (much hated) golden spiral.</p>
]]>
        </description>
    </item>
    <item>
        <title>Poetry as Code as Interactive Fiction (Jason Boyd)</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/167/poetry-as-code-as-interactive-fiction-jason-boyd</link>
        <pubDate>Sun, 18 Feb 2024 23:21:53 +0000</pubDate>
        <category>2024 Week 3: DHQ Special Issues</category>
        <dc:creator>JasonBoyd</dc:creator>
        <guid isPermaLink="false">167@/index.php?p=/discussions</guid>
        <description><![CDATA[<p><strong>Abstract</strong>: In Prismatik’s <em>Scarlet Portrait Parlor</em> (2020) poetry and code uncannily appear one and the same. This results in a work that is both familiar and strange, and this, along with <em>Scarlet Portrait Parlor</em>’s brevity, simplicity of construction, and immediate recognizability as a work of literature (a sonnet) that is also executable source code producing a work of electronic literature, has the potential to intrigue students and textual scholars unfamiliar with and perhaps resistant to Critical Code Studies (CCS). A study of Prismatik’s work also has the potential to refine some simplistic judgements in CCS scholarship about the efficacy of code that emulates natural, human language. This case study aims to elaborate the value of <em>Scarlet Portrait Parlor</em> as a rich example of how poetry, programming, and interactive fiction can be intertwined if not blurred in a single text and to act as a catalyst for generative discussions about the overlapping and intertwining of natural languages, programming languages, creative writing, and coding.</p>

<p><strong>Question</strong>: In the article, I suggest that Mark C. Marino's conception of <em>code legibility</em> requires interrogation, because it is informed by a "dubious premise that programming languages, including natural language programming languages, should strive for (or can only function using) a one-to-one equivalence between named operations and methods and what those operations and methods do" (para. 21). What are people's thoughts about these different perspectives on code legibility?</p>

<p><strong>Question</strong>: The article suggests that, in the cause of a general code literacy, CCS should "study, explicate the potential of, and advocate for programming languages that strive to emulate 'natural languages' so that code literacy does not require a highly specialized literacy far removed from the common literacy that most people possess" (para. 26). What are the potential pros and cons of such a suggested approach?</p>
]]>
        </description>
    </item>
    <item>
        <title>"To Refuse Erasure by Algorithm" (Lillian-Yvonne Bertram's Travesty Generator)</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/168/to-refuse-erasure-by-algorithm-lillian-yvonne-bertrams-travesty-generator</link>
        <pubDate>Mon, 19 Feb 2024 12:42:08 +0000</pubDate>
        <category>2024 Week 3: DHQ Special Issues</category>
        <dc:creator>zachwhalen</dc:creator>
        <guid isPermaLink="false">168@/index.php?p=/discussions</guid>
        <description><![CDATA[<p><a rel="nofollow" href="http://digitalhumanities.org/dhq/vol/17/2/000707/000707.html">"Any Means Necessary to Refuse Erasure by Algorithm": Lillian-Yvonne Bertram's Travesty Generator</a></p>

<p><strong>Abstract</strong>: Lillian-Yvonne Bertram's 2019 book of poetry is titled <em>Travesty Generator</em> in reference to Hugh Kenner and Joseph O'Rourke's Pascal program to “fabricate pseudo-text” by producing text such that each n-length string of characters in the output occurs at the same frequency as in the source text. Whereas for Kenner and O'Rourke, labeling their work a “travesty” is a hyperbolic tease or a literary burlesque, for Bertram, the travesty is the political reality of racism in America. For each of the works in <em>Travesty Generator</em>, Bertram uses the generators of computer poetry to critique, resist, and replace narratives of oppression and to make explicit and specific what is elsewhere algorithmically insidious and ambivalent. In “Counternarratives”, Bertram presents sentences, fragments, and ellipses that begin ambiguously but gradually resolve to point clearly to the moment of Trayvon Martin's killing. The poem that opens the book, “three_last_words”, is at a functional level a near-echo of the program in Nick Montfort's “I AM THAT I AM”, which is itself a version or adaptation of Brion Gysin's permutation poem of the same title. But Bertram’s poem has one important functional difference: Bertram's version retains and concatenates the entire working result. With this modification, the memory required to produce all permutations of the phrase “I can’t breathe” is greater than the storage available on most computers, so the poem will end in a crashed runtime or a frozen computer--metaphorically reenacting and memorializing Eric Garner’s death. Lillian-Yvonne Bertram's <em>Travesty Generator</em> is a challenging, haunting, and important achievement of computational literature, and in this essay, I expand my reading of this book to dig more broadly and deeply into how specific poems work, to better appreciate the collection's contribution to the field of digital poetry.</p>

<p><strong>Question</strong>:</p>

<p>Looking back at this essay that first began as a CCSWG post four years ago, I am interested in how Bertram's work (including <em>Travesty Generator</em> and more recent projects) continues to explore computational creativity at the edges of LLMs. In a recent <a rel="nofollow" href="https://ifthen.cargo.site/"><em>If, Then</em></a> talk about their new chapbook, <em>A Black Story May Contain Sensitive Content</em>, Lillian-Yvonne spoke about the value of "small" and "bespoke" language models and used the phrase "creative research" to frame their project working with a corpus of writing by Gwendolyn Brooks. All this has me thinking about the value of (let's say) artisanal computational poetics as an alternative to LLMs. I have some ideas, but I'll pose this as a question: How and why do tiny language models, "code as text" poetics, and new work built on these legacies critique or resist the operations of large language models?</p>
]]>
        </description>
    </item>
    <item>
        <title>Critical Code Studies in Translingual Contexts - DHQ article on work by Daniel C. Howe.</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/166/critical-code-studies-in-translingual-contexts-dhq-article-on-work-by-daniel-c-howe</link>
        <pubDate>Sun, 18 Feb 2024 15:05:22 +0000</pubDate>
        <category>2024 Week 3: DHQ Special Issues</category>
        <dc:creator>shadoof</dc:creator>
        <guid isPermaLink="false">166@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>Hello all, I'm a bit busy until Tuesday but will then be 'around'. Here's a link to my article in the DHQ issue:<br />
<a rel="nofollow" href="https://www.digitalhumanities.org/dhq/vol/17/2/000705/000705.html">https://www.digitalhumanities.org/dhq/vol/17/2/000705/000705.html</a><br />
... and also the abstract. I used 'possibility space' in the article just where I should have used 'vector space', since Howe's work does explore a vector space, even though it is one that is relatively simple, heuristic, and more or less arbitrarily applied for conceptual+aesthetic reasons; nonetheless, this is in contrast to the vast, multidimensional and more or less hermetic spaces which now threaten to dominate our practices. Note how the topography of the 'same kind' of space for one language is entirely different for another, despite 'the same code'. Who accounts for/interprets this, and how?</p>

<h3>Abstract:</h3>

<p>A code-critical close reading of two related works by Daniel C. Howe. The artist's <em>Automatype</em> is an installation that visualizes and sonifies minimal-distance paths between English words and thus explores a possibility space that is relatively familiar to western readers, not only readers of English but also readers of any language which uses Latin letters to compose the orthographic word-level elements of its writing system [Howe 2012-16a]. In <em>Radical of the Vertical Heart 忄 (RotVH)</em> Howe engages with commensurate explorations in certain possibility spaces of the Chinese writing system and of the language’s lexicon. Translinguistically these spaces and, as it were, orthographic architectures, are structured in radically different ways. A comparative close reading of the two works will bring us into productive discursive relationship not only with distinct and code-critically significant programming strategies, but also with under-appreciated comparative linguistic concepts having implications for the theory of writing systems, of text, and of language as such. Throughout, questions concerning the aestheticization of this kind of computational exploration and visualization may also be addressed. His website is <a rel="nofollow" href="https://programmatology.com">programmatology.com</a>.</p>
]]>
        </description>
    </item>
    <item>
        <title>DHQ Special Issues (Main Thread)</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/169/dhq-special-issues-main-thread</link>
        <pubDate>Mon, 19 Feb 2024 17:59:37 +0000</pubDate>
        <category>2024 Week 3: DHQ Special Issues</category>
        <dc:creator>markcmarino</dc:creator>
        <guid isPermaLink="false">169@/index.php?p=/discussions</guid>
        <description><![CDATA[<h1><strong>Co-hosts:</strong></h1>

<p><strong>David Berry, Jason Boyd, Kevin Brock, Evan Buswell, John Cayley, Lai-Tze Fan, Zach Mann, Daniel Temkin, Annette Vee,  Zach Whalen, Joris Van Zundert</strong></p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/zf/zrgeqvgtsw0g.jpg" alt="" title="" /><br />
<img src="https://wg.criticalcodestudies.com/uploads/editor/il/l0ufbdwncat5.jpg" alt="" title="" /></p>

<p>The two special issues of Digital Humanities Quarterly focusing on Critical Code Studies, one published in 2023 and the other forthcoming, represent a watershed moment. For while we have published essays in electronic book review and other venues, these collections mark the first set of scholarly explorations of code gathered in issues dedicated to that topic. Most of the authors have published in more than one working group, and some of the content has been developed from discussion threads in working groups. We are grateful to DHQ and all the authors, editors, and peer respondents who have made these two issues possible.</p>

<p>For this week, we have assembled a group of authors from these two issues to discuss their reflections on the articles and what they mean for CCS. We have also set up discussion threads for the articles (see the list below). You will notice that the diversity of the code objects and approaches reflects the wide-ranging applications of Critical Code Studies.</p>

<p>Here is a link to <a rel="nofollow" href="http://digitalhumanities.org/dhq/vol/17/2/index.html" title="the first of the special issues">the first of the special issues</a>.</p>

<h2><strong>Discussion Questions</strong></h2>

<p>For the authors:<br />
What did we authors learn from each other’s articles? <br />
What connections do we see between our articles in methodology, discoveries, or theoretical approaches?</p>

<p><strong>From all of our participants:</strong><br />
What’s one article that catches your interest and why?</p>

<p>And our cohosts have further questions!</p>

<p>Join also the individual threads by <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/172/tracing-toxicity-through-code-towards-a-method-of-explainability-and-interpretability-in-software#latest" title="David Berry">David Berry</a>, <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/167/poetry-as-code-as-interactive-fiction-jason-boyd#latest" title="Jason Boyd">Jason Boyd</a>, Kevin Brock, <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/170/the-epistemology-of-code-in-the-age-of-machine-learning-evan-buswell#latest" title="Evan Buswell">Evan Buswell</a>, <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/166/critical-code-studies-in-translingual-contexts-dhq-article-on-work-by-daniel-c-howe#latest" title="John Cayley">John Cayley</a>, <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/171/dhq-article-lai-tze-fan-reverse-engineering-the-gendered-design-of-amazon-s-alexa#latest" title="Lai-Tze Fan">Lai-Tze Fan</a>, Zach Mann, Daniel Temkin, <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/173/witnesses-and-witness-marks-in-vintage-basic-code#latest" title="Annette Vee">Annette Vee</a>,  <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/174/how-to-do-things-with-deep-learning-code#latest" title="Rita Raley and Minh Hua">Rita Raley and Minh Hua</a>, <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/168/to-refuse-erasure-by-algorithm-lillian-yvonne-bertrams-travesty-generator#latest" title="Zach Whalen">Zach Whalen</a>, Joris Van Zundert</p>
]]>
        </description>
    </item>
   </channel>
</rss>
