<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
      <title>2026 Week 2: AI and Critical Code Studies — CCS Working Group</title>
      <link>https://wg.criticalcodestudies.com/index.php?p=/</link>
      <pubDate>Sun, 12 Apr 2026 07:17:09 +0000</pubDate>
          <description>2026 Week 2: AI and Critical Code Studies — CCS Working Group</description>
    <language>en</language>
    <atom:link href="https://wg.criticalcodestudies.com/index.php?p=/categories/2026-week-2/feed.rss" rel="self" type="application/rss+xml"/>
    <item>
        <title>Book: Intellivision by Tom Boellstorff and Braxton Soderman</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/206/book-intellivision-by-tom-boellstorff-and-braxton-soderman</link>
        <pubDate>Mon, 19 Jan 2026 21:55:03 +0000</pubDate>
        <category>2026 Week 2: AI and Critical Code Studies</category>
        <dc:creator>jeremydouglass</dc:creator>
        <guid isPermaLink="false">206@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>Tom Boellstorff and Braxton Soderman's <em><a rel="nofollow" href="https://mitpress.mit.edu/9780262549509/intellivision/">Intellivision: How a Videogame System Battled Atari and Almost Bankrupted Barbie®</a></em> is our featured book discussion for this week. The book is available from the MIT Press in paperback / ebook, and also <a rel="nofollow" href="https://direct.mit.edu/books/oa-monograph/5869/IntellivisionHow-a-Videogame-System-Battled-Atari">open access PDF</a>.</p>

<p><br /><br />
<img src="https://wg.criticalcodestudies.com/uploads/editor/f1/w1n893lhtw5e.png" alt="" title="Left: cover of the Intellivision (2024) book. Right side: black and white photo of the Intellivision videogame system." /><br />
<em>Left: cover of the</em> Intellivision <em>(2024) book. Right: black and white photo of the Intellivision videogame system.</em></p>

<p><br /><br />
From the MIT Press:</p>

<blockquote>The engaging story of Intellivision, an overlooked videogame system from the late 1970s and early 1980s whose fate was shaped by Mattel, Atari, and countless others who invented the gaming industry.

Astrosmash, Snafu, Star Strike, Utopia—do these names sound familiar to you? No? Maybe? They were all videogames created for the Intellivision videogame system, sold by Mattel Electronics between 1979 and 1984. This system was Atari's main rival during a key period when videogames were moving from the arcades into the home. In Intellivision, Tom Boellstorff and Braxton Soderman tell the fascinating inside story of this overlooked gaming system. Along the way, they also analyze Intellivision's chips and code, games, marketing and business strategies, organizational and social history, and the cultural and economic context of the early US games industry from the mid-1970s to the great videogame industry crash of 1983.

While many remember Atari, Intellivision has largely been forgotten. As such, Intellivision fills a crucial gap in videogame scholarship, telling the story of a console that sold millions and competed aggressively against Atari. Drawing on a wealth of data from both institutional and personal archives and over 150 interviews with programmers, engineers, executives, marketers, and designers, Boellstorff and Soderman examine the relationship between videogames and toys—an under-analyzed aspect of videogame history—and discuss the impact of home computing on the rise of videogames, the gendered implications of play and videogame design at Mattel, and the blurring of work and play in the early games industry.</blockquote>

<p><br /><br />
<em>Intellivision</em> is a part of the Platform Studies series at the MIT Press edited by Nick Montfort and Ian Bogost, who write in their series foreword that "there is also much to be learned from the sustained, intensive, humanistic study of digital media. We believe it is time for humanists to seriously consider the lowest level of computing systems and their relationship to culture and creativity." They describe books of the series as sharing:</p>

<ul>
<li>a focus on a single platform or a closely related family of platforms</li>
<li>technical rigor and in-depth investigation of how computing technologies work</li>
<li>an awareness of and a discussion of how computing platforms exist in a context of culture and society, being developed on the basis of cultural concepts and then contributing to culture in a variety of ways— for instance, by affecting how people perceive computing.</li>
</ul>

<hr />

<p><br /></p>

<p>For our book discussion of <em>Intellivision</em> in this Critical Code Studies Working Group, we will focus on its engagement with code and code work. In the book, these passages are situated within a broad study of the Intellivision platform and its many historical and cultural contexts. In the opening passages of their introduction ("Introduction: Intelligent Visions": "Blue Skies"), the authors frame this situated approach:</p>

<blockquote><div>
  <p>We dig into the dirt, so to speak, revealing multifaceted technical and social practices that shaped the platform. We investigate Intellivision’s origins, computational properties, and videogames. We examine its design, advertising, and marketing. We introduce the companies who collaborated to produce it and the people who worked to develop it. We also look outward, reflecting on what Intellivision teaches us about videogames, platform studies, and the social history of technology.</p>
  
  <p>Intellivision’s fascinating story is about a major toy company entering the nascent videogame industry and the wider market for consumer electronics and home computing. It is a story about competing visions for the future of videogames, about the exhilarating and risky experience of exploring uncharted markets, about the intoxicating boom of success and agonizing plummet of failure. It is a story about a videogame system that battled Atari and almost bankrupted Barbie—and along the way, it changed the history of videogames.</p>
</div></blockquote>

<p>Our starting focus for discussion is drawn from Chapters 4-6, and in particular Chapter 5.</p>

<blockquote><div>
  <ul>
  <li>4 <a rel="nofollow" href="https://doi.org/10.7551/mitpress/14266.003.0010">Mattel’s Marketing Magic</a> 107<br />
  <strong>PART II: Practices</strong></li>
  <li>5 <a rel="nofollow" href="https://doi.org/10.7551/mitpress/14266.003.0012">Ladders of Game Production</a> 145</li>
  <li>6 <a rel="nofollow" href="https://doi.org/10.7551/mitpress/14266.003.0013">Becoming a Videogame Programmer at Mattel</a> 167</li>
  </ul>
</div></blockquote>

<p><em>Linked open access Chapters 4-6, from the Table of Contents.</em></p>

<p>In this key passage from Chapter 5 (pp. 150-151), the authors quote APh programmer David Rolfe on the cartridge and EXEC operating system before working through a short code routine from the Intellivision game <em>Major League Baseball</em>, written in CP1610 assembly code. The quote and their worked example explore the relationship between platform OS and software as a relationship between 'body' and 'soul', in a metaphor reminiscent of Cartesian mind-body <a rel="nofollow" href="https://plato.stanford.edu/entries/dualism/">dualism</a>: "entwined," and yet also "throwing control back and forth":</p>

<blockquote><div>
  <p>The EXEC evolved into more than a space-saving device, and game programs on cartridges became deeply entwined with it. “Strictly speaking, the EXEC is like a body without a soul,” Rolfe said. In this metaphor, the cartridge is the soul bringing the EXEC to life, while the EXEC is the body of processes that carry out the soul’s desires. This dance between the EXEC and cartridge code is clear when swinging one’s bat in Major League Baseball. The EXEC handles much of the work, first by calling the BATSWING routine contained on the cartridge ROM:</p>

<pre><code>CMP     .UP,R1          ;IS THIS COMMAND FROM UP PLAYER?
BNZ     RETINS          ;IGNORE IT IF NOT
MOV     #NOKEYDSP,R0    ;PREVENT FURTHER SWINGING
MOV     R0,.KEYDSP      ;BY NOT LISTENING TO KEYPAD ANYMORE
CALL    S.SWING         ;BAT SWING SOUND
MOV     .BATRUP,R0      ;GET OBJECT NUMBER OF BATTER
CALL    TOOBJ           ;GET DATA BASE
ADD     #.OBJSEQ,R1     ;WANT TO START SEQUENCING
</code></pre>
  
  <p>This is assembly language code from the Major League Baseball cartridge, with CPU instructions on the left. The text after the semicolons are non-executable comments left by Rolfe to describe what the code is doing. Since this is a two-player game, the BATSWING routine checks to see if the button press came from the player at bat, branching away from the routine if not (BNZ means “branch away if the result of the compare, CMP, is not zero”). The code changes the EXEC’s .KEYDSP variable (the key dispatch) to switch off the keypress interaction on the hand controllers. This prevents you from swinging twice. Then the code triggers the bat swing sound, a process that also uses the EXEC. BATSWING uses another EXEC routine called TOOBJ to retrieve the moving object of the batter and manipulate its animation sequence (using .OBJSEQ, another variable used by the EXEC). This starts the batter’s swinging animation, which is also handled by the EXEC. Then, if the videogame code on the cartridge determines that you hit the ball, the EXEC handles its movement. The EXEC code and the videogame cartridge code are thus constantly throwing control back and forth, like two kids playing catch.</p>
</div></blockquote>

<p>Notice that the play logic of the prototypical 20th-century American sport of baseball is here aligned with the logic of the <em>Major League Baseball</em> cartridge for Intellivision. That logic is constituted by, for example, the BATSWING routine and its code snippet, and the 'soul' of the <em>Baseball</em> player's desire (to swing the bat) is embodied by the Intellivision's EXEC operating system, with <em>Baseball</em> serving as a kind of ur-program or prototypical program against which the constituting logic of the EXEC developed. For the authors, this co-constitution is actualized during code execution in a way that aligns, once again, with part of the cultural logic of baseball: "throwing control back and forth, like two kids playing catch."</p>

<p>This suggests many potential entry points into the Intellivision and its cartridge, OS, and development code, whether from broad contexts such as business culture or the history and philosophy of sport (and the foundations of game studies), or from a single line of CP1610 assembly.</p>

<p>-- Jeremy Douglass, CCSWG 2026</p>

<p><strong>For Discussion:</strong></p>

<ul>
<li>What is the relationship of Intellivision code to the role of code in culture?</li>
<li>How do the EXEC OS and the <em>Baseball</em> cartridge relate to the body/soul concept?</li>
<li>What is interesting about these code examples and their platform today through the lens of software engineering and computer science, the history of computation, or cultural studies?</li>
<li>How do the interrelationships of code work across various levels of abstraction (routine, library, operating system, CPU instruction set, hardware...)?</li>
<li>What are the methodological interrelationships of "platform studies" to "code studies" in these examples?</li>
</ul>
]]>
        </description>
    </item>
    <item>
        <title>Week 2: AI, Vibe Coding and Critical Code Studies</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/204/week-2-ai-vibe-coding-and-critical-code-studies</link>
        <pubDate>Sun, 18 Jan 2026 15:58:43 +0000</pubDate>
        <category>2026 Week 2: AI and Critical Code Studies</category>
        <dc:creator>davidmberry</dc:creator>
        <guid isPermaLink="false">204@/index.php?p=/discussions</guid>
        <description><![CDATA[<p><span data-youtube="youtube-Bzz_cSAvSns?autoplay=1"><a rel="nofollow" href="https://www.youtube.com/watch?v=Bzz_cSAvSns"><img src="https://img.youtube.com/vi/Bzz_cSAvSns/0.jpg" width="640" height="385" border="0" alt="image" /></a></span><br />
<em>Vibed using <a rel="nofollow" href="https://www.capcut.com" title="CapCut">CapCut</a></em></p>

<p><span data-youtube="youtube-wGpFEo1Dtw8?autoplay=1"><a rel="nofollow" href="https://www.youtube.com/watch?v=wGpFEo1Dtw8"><img src="https://img.youtube.com/vi/wGpFEo1Dtw8/0.jpg" width="640" height="385" border="0" alt="image" /></a></span><br />
<em>Vibed using <a rel="nofollow" href="https://cloud.google.com/blog/products/ai-machine-learning/veo-3-available-for-everyone-in-public-preview-on-vertex-ai" title="Veo 3">Veo 3</a></em></p>

<h1>"I'm ready to help you plan, study, bring ideas to life and more..." (Gemini 2026)</h1>

<p>"Vibe coding" emerged in early 2025 as Andrej Karpathy's term for a new mode of software development. He wrote about <em>fully giving in to the vibes, embracing exponentials, and forgetting that the code even exists</em>. The phrase captures something novel about writing software through conversation with large language models (LLMs) as a practice where natural language prompts replace direct code authorship, and where working programs emerge through iterative dialogue rather than deliberate deterministic construction. For me, vibe coding lies somewhere between augmentation and automation of software writing, but I think it also raises new questions for Critical Code Studies (CCS).</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/kx/ydksy3yj4qxm.png" alt="" title="" /></p>

<p>For CCS, vibe coding poses real opportunities for innovation and creativity in new methods for code reading <em>and</em> writing. Over the past year, experimenting with vibe coding, such as building tools and experimenting with failures, has convinced me that in terms of how we might undertake sophisticated code readings, vibe research (vibe CCS?) has something interesting to offer us. I have <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/205/critical-code-studies-workbench/p1?new=1" title="vibe coded a critical code studies workbench">vibe coded a critical code studies workbench</a> that you can download and use to undertake different types (modes) of CCS. Full <a rel="nofollow" href="https://github.com/dmberry/CCS-WB/tree/main" title="instructions are here">instructions are here</a>.</p>

<p>This thread draws on a case study I've already documented, which involved <a rel="nofollow" href="https://stunlaw.blogspot.com/2025/10/co-writing-with-llm-critical-code.html" title="building an Oxford TSA practice application">building an Oxford TSA practice application</a> with Google Gemini, and subsequently attempting to develop an <a rel="nofollow" href="https://stunlaw.blogspot.com/2025/11/ai-sprints.html" title="&quot;AI sprint&quot; method">"AI sprint" method</a> that adapts <a rel="nofollow" href="https://sussex.figshare.com/articles/book/On_book_sprints/23408009?file=41131784" title="book sprint">book sprint</a> and data sprint approaches for LLM-augmented research. Rather than approaching LLMs primarily as tools for reading code (i.e. the hermeneutic direction) or generating code (i.e. the software engineering direction), this group's discussion will aim to focus on what happens in the space between, that is, where prompts become interpretive acts, where code becomes a mirror reflecting gaps in our own thinking, and where the boundaries between human and machine contribution become productively blurred.</p>

<h1>Theorising Working With AIs</h1>

<p>Working through extended vibe coding sessions, I've found it useful to distinguish three different ways of engaging with AIs.</p>

<p><strong>Cognitive delegation</strong> occurs when we uncritically offload the work to systems that lack understanding. In my TSA project, I spent considerable time pursuing PDF text extraction approaches that looked plausible but were unworkable. The LLM generated increasingly sophisticated regular expressions, each appearing to address the previous failure, while the core problem, that semi-structured documents resist automated parsing, was ignored. The system's willingness to produce solutions obscured the fact that <em>the solutions the LLM offered were broken, failed, or misunderstood the problems</em>.</p>
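<p>The failure mode can be sketched with a toy example in Python (the layouts and the pattern here are hypothetical illustrations, not the actual TSA documents or the LLM's regexes): a pattern tuned to one question layout silently drops a variant, so the extraction "works" while quietly losing data.</p>

<pre><code>import re

# Two hypothetical question layouts. The first matches the layout the
# pattern was tuned on; in the second the question wraps onto a new
# line, and the pattern silently fails to capture it.
page = """Q1. What is 2 + 2? Answer: B
Q2. Which shape
has three sides? Answer: C
"""

# A plausible-looking pattern: question number, question text, answer.
pattern = re.compile(r"Q(\d+)\.\s+(.*?)\s+Answer:\s+([A-E])")

matches = pattern.findall(page)
print(matches)  # only Q1 is captured; Q2 is lost without any error
</code></pre>

<p>Nothing here raises an exception, which is precisely the problem: the gap only becomes visible if you check the output against the source document rather than trusting that running code means working code.</p>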

<p><strong>Productive augmentation</strong> describes the sweet spot in working with an LLM: human curation combined with the LLM's speed and efficiency at producing code. Once I abandoned text extraction and instead remediated the PDF within an interactive interface, progress on my project accelerated dramatically; in other words, it was up to me to rearticulate the project and realise where I was going wrong. Previously, by contrast, the LLM would cheerfully tell me that my approach was correct and that we could fix it in just a few more prompts. By actively taking over coordination and curating the design and structure of the research questions and design decisions, I let the LLM handle coding I would have struggled to produce myself (and certainly at the speed an AI could do it!).</p>

<p><strong>Cognitive overhead</strong> is what I call the "scaling limits" of vibe coding. Managing LLM context, preventing feature regressions (a common problem), and maintaining version control are irritating for the human to have to manage, as it seems like that should be the job of the computer. However, for a range of reasons (e.g. misspecified or mistaken design, context window size, context collapse, bad prompting), a project soon reaches a complexity threshold where the mental labour of scope management becomes impossible for the LLM to handle and exhausting for the user. I think this means that vibe coding perhaps works best for bounded tasks rather than extended development (or at least with present generations of LLMs, as this is clearly a growing problem in the deployment of these systems).</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/hh/ysdzkv4jvegc.png" alt="" title="" /><br />
<em>Vibed using <a rel="nofollow" href="https://gemini.google/overview/image-generation/">Nano Banana Pro</a></em>

<h1>Day 1: Questions for discussion</h1>

<ul>
<li><p>When you try vibe coding (and you should!), do you recognise which mode you occupy?</p></li>
<li><p>What signals the transition from productive augmentation to cognitive delegation? Is it an affective change or a cognitive one?</p></li>
</ul>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/hb/n12vcurdiny5.png" alt="" title="" /><br />
<em>Vibed using <a rel="nofollow" href="https://gemini.google/overview/image-generation/">Nano Banana Pro</a></em>

<h2>The Competence Effect</h2>

<p>The 2024 AI and CCS discussion returned repeatedly to Weizenbaum's ELIZA effect, which is the tendency to attribute understanding to pattern-matching systems. I want to suggest that vibe coding produces something qualitatively different, which I'm calling the <em>competence effect</em>.</p>

<p>Where ELIZA users had to actively ignore (bracket out?) the system's obvious repetitiveness, vibe coders receive constant reinforcement that the LLM "gets it". The AI always tries to be positive and action your suggestions and prompts. Bugs seem to get fixed through iterative prompting, and the system appears to learn from corrections. Quasi-functioning code provides evidence that seems to show the LLM is helping you, even as it has no real understanding of your intentions.</p>

<p>The danger lies in a subtle mistake. A projection of competence and positivity from the LLM actually obscures the distinction between pattern-matching and comprehension. <em>The LLM's apparent responsiveness creates a false confidence in its flawed approaches.</em> We persist with unworkable strategies because the system generates plausible-looking code without signalling architectural problems (or perhaps even being able to do so). It tells us our idea is great, and that it can build the code – and if we don't keep a critical eye on the development, we may waste hours and days on an unworkable solution.</p>

<h1>Day 2: Questions for discussion</h1>

<ul>
<li><p>Can we develop critical literacy for recognising the competence effect?</p></li>
<li><p>What textual or structural markers distinguish real LLM capability from surface plausibility?</p></li>
</ul>

<h2>Intermediate Objects as Hermeneutic Sites</h2>

<p>A key principle that emerged from my AI sprint method was that "intermediate objects" were hugely helpful in keeping track of how successful the approach was. These include tables, summary texts, JSON files, and extracted datasets. These make algorithmic processing visible, and contestable, at steps along the way whilst vibe coding. These "materialised abstractions" serve as checkpoints where you can verify that computational operations align with your interpretive intentions.</p>
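<p>As a minimal sketch of this practice (the file names and data below are hypothetical, not from my actual sprint), each step of a pipeline can materialise its intermediate result as a JSON file that the researcher inspects, and can contest, before the next step consumes it:</p>

<pre><code>import json
from pathlib import Path

def checkpoint(name, obj, outdir=Path("intermediate")):
    """Materialise an intermediate object as JSON for human inspection."""
    outdir.mkdir(exist_ok=True)
    path = outdir / f"{name}.json"
    path.write_text(json.dumps(obj, indent=2, ensure_ascii=False))
    return path

# Hypothetical step: questions extracted from a document, written to
# disk before any downstream analysis runs on them.
extracted = [{"id": 1, "text": "What is 2 + 2?", "answer": "B"}]
saved = checkpoint("extracted_questions", extracted)

# The checkpoint can be re-read and verified against expectations.
assert json.loads(saved.read_text()) == extracted
</code></pre>

<p>The point is less the code than the artefact: the JSON file on disk is a readable, versionable object at which human and machine interpretation can meet.</p>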

<p>CCS tends to examine code as a singular cultural object. But interestingly, vibe coding produces <em>chains of intermediate objects that themselves potentially become sites for critical reading</em>. For example, the prompt history reveals the distribution of agency, and versioned code (you must tell it to use versions!) shows where architectural decisions were made at different moments. Indeed, even the debugging conversations can help examine assumptions about what the system can and cannot do.</p>

<h1>Day 3: Questions for discussion</h1>

<ul>
<li><p>Can we perform CCS on code that exists in a kind of "perpetual draft form", continually revised through human-LLM dialogue?</p></li>
<li><p>Should we be reading the code, the prompts, or the entire conversation as the primary text? What is the research object?</p></li>
</ul>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/sr/1wxqjxefem5j.jpg" alt="" title="" /><br />
<em>Vibed using <a rel="nofollow" href="https://gemini.google/overview/image-generation/">Nano Banana Pro</a></em>

<h2>The Hermeneutic-Computational Loop</h2>

<p>Hermeneutics can be said to involve dialogue between interpreter and text. In contrast, vibe coding creates a <em>three-way exchange</em> between (1) human intention expressed through natural language prompts, (2) machine interpretation and code generation based on statistical patterns, and (3) executable code requiring further interpretation and testing.</p>

<p>This triadic structure challenges Gadamer's dyadic model of hermeneutics. Instead, understanding emerges through iterative cycles (i.e. loops) where each prompt is both an interpretive act and a request for computational response. Code becomes a kind of mirror, reflecting intentions back while revealing gaps in your initial thinking. The 2024 discussion, <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/157/ai-and-critical-code-studies-main-thread" title="AI and Critical Code Studies">AI and Critical Code Studies</a>, touched on this, noting how "the AI prompt becomes part of the chain of meaning of code".</p>

<h1>Day 4: Questions for discussion</h1>

<ul>
<li><p>Does vibe coding represent a new hermeneutic approach for CCS? Or a supplement to it?</p></li>
<li><p>Is there a difference between collaborative coding with an LLM as against collaborative coding with a human partner, or, indeed, from working in sophisticated <a rel="nofollow" href="https://en.wikipedia.org/wiki/Integrated_development_environment" title="Integrated development environments">Integrated development environments</a> (IDEs) with autocomplete?</p></li>
</ul>

<h1>Provocations</h1>

<ul>
<li><p>A team of us has been <a rel="nofollow" href="http://inventingeliza.com/" title="working on ELIZA for several years now">working on ELIZA for several years now</a>, and the comparison with contemporary LLMs keeps returning. But I think the <em>competence effect</em> marks a qualitative break. ELIZA was obviously limited, and its power lay in users' willingness to project meaning onto its mechanical responses. In contrast, LLMs produce outputs that pass functional tests: code that runs, prose that (sometimes) persuades, analysis that (might) appear sound. I think the critical question shifts from "why do we anthropomorphise simple systems?" to "how do we maintain critical distance from systems that perform competence so convincingly?"</p></li>
<li><p>If an LLM can generate bespoke analytical tools from natural language prompts (and it clearly can), what becomes of the <a rel="nofollow" href="https://eadh.org/methodologies" title="methodological commons">methodological commons</a> that has been used to justify digital humanities or digital methods as fields? My AI sprint method attempts one response by integrating LLM capabilities within traditions that emphasise interpretation, critique, and reflexivity rather than abandoning methods completely. But I'm uncertain whether this represents a sustainable position or a rearguard action (it also often feels somewhat asocial?).</p></li>
</ul>

<p>My TSA project involved approximately 4 hours of vibe coding, apparently (according to the LLM) consuming 0.047 GPU hours at a cost of roughly $0.20 (!). This apparent frictionlessness conceals questions that CCS is well-positioned to raise: whose labour did the model appropriate in training? Under what conditions was that knowledge produced? What material resources make such co-creation possible? The collaborative interface "naturalises" what should surely remain contested.</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/7f/nk5xnl1rqu5o.png" alt="" title="" /><br />
<em>Vibed using <a rel="nofollow" href="https://gemini.google/overview/image-generation/">Nano Banana Pro</a></em>

<h1>The Discussion</h1>

<p>Some questions to consider:</p>

<ol>
<li><p>Have you tried vibe coding? What modes of cognitive augmentation did you encounter? Where did delegation shade into "productive collaboration" or "cognitive overhead"?</p></li>
<li><p>Are these three modes I've identified similar to your experience, or do they need refinement? Are there other modes I'm missing?</p></li>
<li><p>Can you share examples of your LLM-generated code that appeared ok but failed? What made the failure difficult to recognise?</p></li>
<li><p>Is vibe coding a good method, <em>or object</em>, for CCS? What would CCS methodology look like if its primary objects were vibe coding sessions rather than code artefacts? What would we read, and how?</p></li>
<li><p>Can we vibe code tools and methods for analysing code? Or to what extent can LLMs help us analyse vibe coded projects?</p></li>
<li><p>Is this the <a rel="nofollow" href="https://stunlaw.blogspot.com/2025/12/provenance-anxiety-death-of-author-in.html?m=1" title="death of the author">death of the author</a> (redux)?</p></li>
</ol>

<p>I'll be posting some code critique threads with specific examples from my projects, and proposing a mini AI sprint exercise for those who want to try vibe coding during the week and reflect on the experience together.</p>

<h1>Resources</h1>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/y8/jia9a270c5uk.png" alt="" title="" /><br />
<em>Vibed using <a rel="nofollow" href="https://gemini.google/overview/image-generation/">Nano Banana Pro</a></em>

<p><strong>For those new to vibe coding</strong></p>

<p>Karpathy's "Software Is Changing (Again)" talk: <a rel="nofollow" href="https://www.youtube.com/watch?v=LCEmiRjPEtQ" title="https://www.youtube.com/watch?v=LCEmiRjPEtQ">https://www.youtube.com/watch?v=LCEmiRjPEtQ</a></p>

<p>My "Co-Writing with an LLM" case study: <a rel="nofollow" href="https://stunlaw.blogspot.com/2025/10/co-writing-with-llm-critical-code.html" title="https://stunlaw.blogspot.com/2025/10/co-writing-with-llm-critical-code.html">https://stunlaw.blogspot.com/2025/10/co-writing-with-llm-critical-code.html</a></p>

<p>My "AI Sprints" methodology post: <a rel="nofollow" href="https://stunlaw.blogspot.com/2025/11/ai-sprints.html" title="https://stunlaw.blogspot.com/2025/11/ai-sprints.html">https://stunlaw.blogspot.com/2025/11/ai-sprints.html</a></p>

<p>The Oxford TSA Questionmaster code: <a rel="nofollow" href="https://github.com/dmberry/Oxford_TSA_Question_Master" title="https://github.com/dmberry/Oxford_TSA_Question_Master">https://github.com/dmberry/Oxford_TSA_Question_Master</a></p>

<p>Berry and Marino (2024) "Reading ELIZA": <a rel="nofollow" href="https://electronicbookreview.com/essay/reading-eliza-critical-code-studies-in-action/" title="https://electronicbookreview.com/essay/reading-eliza-critical-code-studies-in-action/">https://electronicbookreview.com/essay/reading-eliza-critical-code-studies-in-action/</a></p>

<p><strong>For the previous CCSWG2024 AI and CCS discussions</strong></p>

<p>AI and Critical Code Studies: <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/157/ai-and-critical-code-studies-main-thread" title="https://wg.criticalcodestudies.com/index.php?p=/discussion/157/ai-and-critical-code-studies-main-thread">https://wg.criticalcodestudies.com/index.php?p=/discussion/157/ai-and-critical-code-studies-main-thread</a></p>

<p>ELIZA code critique: <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/161/code-critique-what-does-the-original-eliza-code-offer-us" title="https://wg.criticalcodestudies.com/index.php?p=/discussion/161/code-critique-what-does-the-original-eliza-code-offer-us">https://wg.criticalcodestudies.com/index.php?p=/discussion/161/code-critique-what-does-the-original-eliza-code-offer-us</a></p>

<p>LLM reads DOCTOR script: <a href="https://wg.criticalcodestudies.com/index.php?p=/discussion/164/code-critique-llm-reads-joseph-weizenbaums-doctor-script" rel="nofollow">https://wg.criticalcodestudies.com/index.php?p=/discussion/164/code-critique-llm-reads-joseph-weizenbaums-doctor-script</a></p>
]]>
        </description>
    </item>
    <item>
        <title>Critical Code Studies Workbench</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/205/critical-code-studies-workbench</link>
        <pubDate>Mon, 19 Jan 2026 15:45:17 +0000</pubDate>
        <category>2026 Week 2: AI and Critical Code Studies</category>
        <dc:creator>davidmberry</dc:creator>
        <guid isPermaLink="false">205@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>Inspired by the theme of vibe coding this week I decided to try to <a rel="nofollow" href="https://github.com/dmberry/CCS-WB/blob/main/README.md" title="actualise a workbench for working in critical code studies">actualise a workbench for working in critical code studies</a> projects. I used Claude Code to do the heavy lifting and I now have a usable version that can be downloaded and run on your computer. This is a version 1.0 so things may not always work correctly but I think that it offers a potential for CCS work that democratises access to the methods and approaches of CCS and makes for a (potentially) powerful teaching tool.</p>

<p>It uses a local LLM via <a rel="nofollow" href="https://ollama.com" title="Ollama">Ollama</a>, which you will need to download and install, but that's pretty easy (it should (!) also be able to use an API key to talk to a more powerful LLM, if you want).</p>

<p><strong>UPDATE: <a rel="nofollow" href="https://ccs-wb.vercel.app" title="WEB VERSION NOW AVAILABLE TO TRY OUT">WEB VERSION NOW AVAILABLE TO TRY OUT</a></strong></p>

<p>This is how the main page looks:</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/kn/1nyb6r7vv49y.png" alt="" title="" /></p>

<p>The Critical Code Studies Workbench facilitates rigorous interpretation of code through the lens of critical code studies methodology. It supports:</p>

<ul>
<li><strong>Code critique</strong> - Close reading, annotation, and interpretation in the Marino tradition</li>
<li><strong>Hermeneutic analysis</strong> - Navigating the triadic structure of human intention, computational generation, and executable code</li>
<li><strong>Code archaeology</strong> - Analysing historical software in its original context</li>
<li><strong>Vibe coding</strong> - Creating code to understand algorithms through building</li>
</ul>

<p>Software deserves close reading, and here is a tool to help us. The Workbench helps scholars engage with code as meaningful text.</p>

<p>Note that the CCS Workbench has a built-in LLM facility. This means that you can chat with the LLM while annotating code: ask it to help with suggestions, to give interpretations of the code, and so on as you work. The other modes (archaeology/interpretation/create) are more conversational and allow a more fluid way of working with code and ideas. There is quite a sophisticated reference search available while you are chatting in these three modes, so you can connect to the CCS literature (and wider) from within the tool.</p>

<p>The Workbench saves project files based on the mode you are in. You can open them from the main page, and the Workbench will put you back in the session where you left off. You can also export the session in JSON, text, or PDF for writing up in an academic paper, etc.</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/pb/rk13t854kska.png" alt="" title="" /></p>

<h2>Features</h2>

<h3>Entry Modes</h3>

<ul>
<li><strong>I have code to critique</strong>: IDE-style three-panel layout for close reading with inline annotations</li>
<li><strong>I'm doing code archaeology</strong>: Exploring historical software with attention to context</li>
<li><strong>I want to interpret code</strong>: Developing hermeneutic frameworks and approaches</li>
<li><strong>I want to create code</strong>: Explore algorithms by building them (vibe coding)</li>
</ul>

<h3>Experience Levels</h3>

<p>The assistant adapts its engagement style based on your experience:</p>

<ul>
<li><strong>Learning</strong>: Explains CCS concepts, offers scaffolding, suggests readings</li>
<li><strong>Practitioner</strong>: Uses vocabulary freely, focuses on analysis</li>
<li><strong>Research</strong>: Engages as peer, challenges interpretations, technical depth</li>
</ul>

<h3>IDE-Style Critique Layout</h3>

<p>The critique mode features a three-panel layout for focused code analysis:</p>

<ol>
<li><p><strong>Left panel</strong>: File tree with colour-coded filenames by type</p>

<ul>
<li>Blue: Code files (Python, JavaScript, etc.)</li>
<li>Orange: Web files (HTML, CSS, JSX)</li>
<li>Green: Data files (JSON, YAML, XML)</li>
<li>Amber: Shell scripts</li>
<li>Grey: Text and other files</li>
</ul></li>
<li><p><strong>Centre panel</strong>: Code editor with line numbers</p>

<ul>
<li>Toggle between Edit and Annotate modes</li>
<li>Click any line to add an annotation</li>
<li>Six annotation types: Observation, Question, Metaphor, Pattern, Context, Critique</li>
<li>Annotations display inline as <code>// An:Type: content</code></li>
<li>Download annotated code with annotations preserved</li>
<li>Customisable font size and display settings</li>
</ul></li>
<li><p><strong>Right panel</strong>: Chat interface with guided prompts</p>

<ul>
<li>Context preview shows what the LLM sees</li>
<li>Phase-appropriate questions guide analysis</li>
<li>"Help Annotate" asks the LLM to suggest annotations</li>
<li>Resizable panel divider (drag to resize)</li>
<li>Customisable chat font size</li>
</ul></li>
</ol>
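To make the inline annotation convention concrete, here is a toy snippet showing how downloaded code might look once annotated in the <code>// An:Type: content</code> format described above. The annotation types are taken from the post; the function itself is a hypothetical ELIZA-style example invented for illustration, not code from the Workbench:

```javascript
// An:Context: keyword-matching chatbots are a classic CCS object of study.
function respond(input) {
  // An:Observation: the keyword list hard-codes a therapist persona.
  const keywords = ["mother", "dream", "always"];
  for (const k of keywords) {
    // An:Pattern: first-match-wins ordering quietly privileges earlier keywords.
    if (input.toLowerCase().includes(k)) {
      // An:Question: in what sense does substring search count as "understanding"?
      return "Tell me more about " + k + ".";
    }
  }
  // An:Critique: the fallback response simulates attention without comprehension.
  return "Please go on.";
}

console.log(respond("I keep thinking about my mother"));
```

Because the annotations live in ordinary comments, the annotated file remains runnable, which is presumably what makes the "download annotated code" feature practical.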

<h3>Project Management</h3>

<ul>
<li><strong>Save/Load projects</strong> as <code>.ccs</code> files (JSON internally)</li>
<li><strong>Load Project</strong> button on landing page auto-detects mode</li>
<li><strong>Export session logs</strong> in JSON, Text, or PDF format for research documentation</li>
<li>Session logs include metadata, annotated code, full conversation, and statistics</li>
<li>Click filename in header to rename project</li>
</ul>

<p>Download the software from <a rel="nofollow" href="https://github.com/dmberry/CCS-WB/blob/main/README.md" title="https://github.com/dmberry/CCS-WB/blob/main/README.md">https://github.com/dmberry/CCS-WB/blob/main/README.md</a></p>

<p>Here are some screenshots:</p>

<p>CCS Archaeology Mode</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/93/1mmcums6qryi.png" alt="" title="" /></p>

<p>CCS Interpretation Mode (i.e. Hermeneutics)</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/76/go4k5rketx3t.png" alt="" title="" /></p>

<p>CCS Create Code Mode (Vibe Coding)</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/09/f2g3fb1t5w8i.png" alt="" title="" /></p>

<p>See the help icon for more information (or the detailed instructions in the project <a rel="nofollow" href="https://github.com/dmberry/CCS-WB/blob/main/README.md" title="README.md file">README.md file</a>):</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/dh/mb320dbr4xxr.png" alt="" title="" /></p>
]]>
        </description>
    </item>
    <item>
        <title>The First AIs: IPL-V and Simon&amp;Newell's 1950s (!!) Cognitive Models</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/198/the-first-ais-ipl-v-and-simon-newells-1950s-cognitive-models</link>
        <pubDate>Wed, 14 Jan 2026 05:50:37 +0000</pubDate>
        <category>2026 Week 2: AI and Critical Code Studies</category>
        <dc:creator>jshrager</dc:creator>
        <guid isPermaLink="false">198@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>For almost exactly a year now, since Team ELIZA got the original ELIZA working, and the book became the team's primary focus, I've been working on reanimating the very first AIs, specifically those written in IPL-V (Information Processing Language Five*) in the 1950s at RAND and Carnegie Tech (now CMU). Mark (Marino) suggested that I open a discussion about some of these earliest IPL-V AIs in this CCS session, and I'm looking for either encouragement or discouragement.</p>

<p>The problem is that, although IPL-V is an incredibly important language (it is the language in which Simon and Newell implemented the world's first AIs**), it's also a very difficult one, being essentially the assembly language for a Lisp machine (although Lisp itself wasn't invented until a decade after the earliest IPL work!). I've made a little video introduction to IPL-V: <br />
   <span data-youtube="youtube-Q6e8XQEdOFY?autoplay=1"><a rel="nofollow" href="https://www.youtube.com/watch?v=Q6e8XQEdOFY"><img src="https://img.youtube.com/vi/Q6e8XQEdOFY/0.jpg" width="640" height="385" border="0" alt="image" /></a></span></p>

<p>If you aren't bored by it, or scared by (or scarred by) all this, let me know, either herebelow, or in DM, and I'll consider making the promised next video introducing LT (The Logic Theory machine, aka. The Logic Theorist) and maybe GPS (The General Problem Solver).</p>

<p>Cheers,<br />
'Jeff</p>

<p>ps. Here's a video that I made about 6 months ago that summarizes my progress at that time on reanimating LT:<br />
    <span data-youtube="youtube-qmE5o2ezqBg?autoplay=1"><a rel="nofollow" href="https://www.youtube.com/watch?v=qmE5o2ezqBg"><img src="https://img.youtube.com/vi/qmE5o2ezqBg/0.jpg" width="640" height="385" border="0" alt="image" /></a></span> <br />
Since then my efforts have regressed! :-) Why that has happened is a long, perhaps semi-interesting story.</p>

<p>(* The fifth IPL is the only one that was fully implemented and commonly available. Most of the earlier versions were internal to RAND/CIT. There were plans for an IPL-VI, but it was overtaken by Lisp, which implements all the same concepts in a much more elegant language.)</p>

<p>(** What Simon and Newell were working on at RAND in IPL-V wasn't primarily AI, but rather cognitive models; their goal was to build computer programs that thought in the same way that humans thought. To the extent that they succeeded at this, they would have incidentally built an AI by definition, but Simon and Newell preferred terms like "complex information processing" or "cognitive simulation" over "artificial intelligence," which was coined by McCarthy for the 1956 Dartmouth conference. This reflected a genuine philosophical difference: McCarthy pursued intelligence by any effective means, while Simon and Newell insisted their programs should mirror actual human cognitive processes.)</p>
]]>
        </description>
    </item>
   </channel>
</rss>
