<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
      <title>2026 Week 4: New Work and New Directions — CCS Working Group</title>
      <link>https://wg.criticalcodestudies.com/index.php?p=/</link>
      <pubDate>Sun, 12 Apr 2026 07:17:19 +0000</pubDate>
          <description>2026 Week 4: New Work and New Directions — CCS Working Group</description>
    <language>en</language>
    <atom:link href="https://wg.criticalcodestudies.com/index.php?p=/categories/2026-week-4/feed.rss" rel="self" type="application/rss+xml"/>
    <item>
        <title>Week 4: New Work and New Directions</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/223/week-4-new-work-and-new-directions</link>
        <pubDate>Mon, 02 Feb 2026 19:45:13 +0000</pubDate>
        <category>2026 Week 4: New Work and New Directions</category>
        <dc:creator>Lyr</dc:creator>
        <guid isPermaLink="false">223@/index.php?p=/discussions</guid>
        <description><![CDATA[<p><span data-youtube="youtube-8ZgACA3mccE?autoplay=1"><a rel="nofollow" href="https://www.youtube.com/watch?v=8ZgACA3mccE"><img src="https://img.youtube.com/vi/8ZgACA3mccE/0.jpg" width="640" height="385" border="0" alt="image" /></a></span></p>

<p>This week, Carly Schnitzler (<a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/profile/cschnitz">@cschnitz</a>) and I got together for a chat around the week's topic, <strong>New Work and New Directions</strong>.</p>

<p>Join us with your thoughts and reactions!</p>

<p><strong>References and Links</strong></p>

<ul>
<li>Alan Blackwell, <em><a rel="nofollow" href="https://mitpress.mit.edu/9780262548717/moral-codes/" title="Moral Codes">Moral Codes</a></em></li>
<li><a rel="nofollow" href="https://theconversation.com/the-dead-internet-theory-makes-eerie-claims-about-an-ai-run-web-the-truth-is-more-sinister-229609#:~:text=The%20dead%20internet%20theory%20is%20the%20idea,clear%20agenda%20and%20no%20longer%20involves%20humans" title="“The ‘dead internet theory’ makes eerie claims about an AI-run web. The truth is more sinister”">“The ‘dead internet theory’ makes eerie claims about an AI-run web. The truth is more sinister”</a></li>
<li><a rel="nofollow" href="http://" title="Google trends show when social media tanked zine interest and when the Dead Internet brought it back">Google trends show when social media tanked zine interest and when the Dead Internet brought it back</a></li>
<li>Ursula Franklin, <em><a rel="nofollow" href="https://monoskop.org/images/5/58/Franklin_Ursula_The_Real_World_of_Technology_1990.pdf" title="The Real World of Technology">The Real World of Technology</a></em></li>
<li><a rel="nofollow" href="https://digitalscholarship.library.jhu.edu/s/aivoices/page/welcome" title="Preserving AI Voices">Preserving AI Voices</a></li>
<li>kathy wu, <a rel="nofollow" href="https://kaaathy.com/#uncertain-weathers" title="“Into Uncertain Weathers”">“Into Uncertain Weathers”</a></li>
<li><a rel="nofollow" href="https://www.ensemblepark.com/" title="Ensemble Park">Ensemble Park</a> (eds. Kyle Booten and Katy Ilonka Gero)</li>
<li><a rel="nofollow" href="https://shop.nothing-to-say.org/products/the-anxiety-of-conception" title="Anxiety of Conception">Anxiety of Conception</a> by Katy Ilonka Gero</li>
<li>Friedrich Kittler, <em><a rel="nofollow" href="https://www.sup.org/books/media-studies/truth-technological-world" title="The Truth of the Technological World">The Truth of the Technological World</a></em></li>
<li>Jessica Pressman, <em><a rel="nofollow" href="https://cup.columbia.edu/book/bookishness/9780231195133/" title="Bookishness">Bookishness</a></em></li>
</ul>

<p><strong>EDIT:</strong> I had not added a transcript; one is now <a rel="nofollow" href="https://docs.google.com/document/d/1EgOnHQYtmpPXl1XEUnqUiFpZCE5MLXf9/edit?usp=sharing&amp;ouid=109462096985752734084&amp;rtpof=true&amp;sd=true" title="here">here</a>! It is still very rough, but I tried to do the fastest of cleanups.</p>
]]>
        </description>
    </item>
    <item>
        <title>Book: Moral Codes - Designing Alternatives to AI, by Alan Blackwell</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/217/book-moral-codes-designing-alternatives-to-ai-by-alan-blackwell</link>
        <pubDate>Sat, 31 Jan 2026 08:46:08 +0000</pubDate>
        <category>2026 Week 4: New Work and New Directions</category>
        <dc:creator>AlanBlackwell</dc:creator>
        <guid isPermaLink="false">217@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>Thank you very much to Mark, Jeremy and Lyr for the invitation to discuss my book with you. Moral Codes is subtitled “Designing alternatives to AI”, but people from critical code studies will certainly have noticed the double meaning of the word “Codes” in the title!</p>

<p>The agenda of the book, as I occasionally observe to colleagues in computer science, is to argue that the world probably needs less AI, and better programming languages. I wrote the book to try and persuade a wider audience (including policy makers) of this, observing that many of the most intractable problems of AI - explainability, alignment, controllability - are precisely the established priorities of programming language designers.</p>

<p>I draw on my own long experience of designing end-user programming languages, especially those that extend the spreadsheet paradigm or support creative improvisation. These are often diagrammatic, or data-centric, or use direct manipulation, meaning that the “code” they allow is not the kind that easily invites close reading. Which is not to say that close reading would be pointless - I suspect many spreadsheets are crying out for it!</p>

<p>A deeper, but probably subtler, theme of the book reflects on the attention economy of AI, of social media, surveillance capitalism, enshittification and all that. I argue that the investment of attention is the fundamental unit of human consciousness, and that the drive to make machines conscious reflects a systematic devaluing of our own consciousness, as a consequence of the technofeudal attention economy (similar to Pasquinelli’s labour theory of AI). From this perspective, programming is fundamental to the exertion of individual personhood. As Geoff Cox and Winnie Soon said, “program, or be programmed”.</p>

<p>The last part of the book speculates on where vibe coding may take us, through an explanation of basic craft principles in software engineering. But the book was originally written before the launch of ChatGPT, and published before Karpathy invented “vibe”, so those chapters are quite a hostage to fortune. There is an extended historical centre, in which I discuss the evolution of the GUI out of programming innovations of Alan Kay, Ivan Sutherland, and others who all saw their work as a kind of programming. For me, critical attention to all of these technologies benefits from the ability to view them as notational systems, each having their own kinds of code-like properties.</p>

<p>So overall, I would not advocate Moral Codes as a text in critical code studies, because it is not really doing the same thing as CCS. Nevertheless, the arguments are likely to be familiar to students of CCS, and I hope they offer some value to the field by reminding us of the ways in which code may continue to be important, even if we become obliged to access it at second-hand via a chat dialog.</p>
]]>
        </description>
    </item>
    <item>
        <title>What Can Critical Code Studies Read When Programmers Stop Writing?</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/231/what-can-critical-code-studies-read-when-programmers-stop-writing</link>
        <pubDate>Thu, 12 Feb 2026 13:04:24 +0000</pubDate>
        <category>2026 Week 4: New Work and New Directions</category>
        <dc:creator>davidmberry</dc:creator>
        <guid isPermaLink="false">231@/index.php?p=/discussions</guid>
<description><![CDATA[<p>Something seems to have happened to the language of programming in the past decade. During the CCSWG 2026 I spent time <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/205/critical-code-studies-workbench" title="experimenting with vibe coding">experimenting with vibe coding</a>, <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/204/week-2-ai-vibe-coding-and-critical-code-studies#latest" title="coding in conversation with an LLM">coding in conversation with an LLM</a> rather than writing code directly, and what struck me was not the capacity of these systems but the strange new vocabulary surrounding them. Programmers now wire up hooks, bind components, plug into event streams, subscribe to state changes, mount and unmount lifecycle methods and so on. The everyday language of software development feels to me like it has become the language of plumbing and electrical engineering. It seems to describe connecting pre-existing conduits and wires rather than writing software.</p>

<p>I don't think this is merely jargon though (in Adorno's sense). The older vocabulary of programming, as I am used to it, was mathematical and procedural. Terms like function, call, return, execute, implement and compile carried within them an image of the programmer as an engineer of software, someone who reasoned through a problem and translated that reasoning into source code to make the machine actualise something. This newer vocabulary of hook, wire, bind, plug in, pipeline, signal, emit, seems to describe a different labour process. It seems to be configuration rather than construction, attachment to infrastructure rather than programming from basic software building blocks.</p>

<p>For CCS, I wonder if this shift poses a new challenge. <a rel="nofollow" href="https://electronicbookreview.com/publications/critical-code-studies/">Critical code studies</a> has generally treated code as a cultural object that can be read, a hermeneutic text connecting it to, for example, ideology, and social context. But the vocabulary change I'm seeing seems to indicate a transformation in what that "text" is. What we might call "framework-era" code is less like a text and more like a wiring diagram. The <a rel="nofollow" href="https://react.dev/reference/react/hooks">React hook</a> that seems to be the basic unit of front-end development today does not express thought, instead it declares a connection. It seems to me that when the coder writes <code>useState</code>, <code>useEffect</code>, <code>useCallback</code>, etc. they are not reasoning, they are attaching themselves to flows of "re-rendering" and "state propagation" that the framework manages itself. These hooks (the word itself is so suggestive!) even seem to come with <a rel="nofollow" href="https://react.dev/reference/rules/rules-of-hooks">disciplinary imperatives</a>, for example they must be called at the top level, never inside conditionals, because the framework tracks them by "call order". The new infrastructure dictates the discourse but also the coding mental model.</p>
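<p>The "call order" discipline mentioned above can be made concrete with a toy sketch. The following is emphatically <em>not</em> React's actual implementation, just a minimal JavaScript analogy: the "framework" keeps hook state in an array indexed by the order in which hooks are called during a render, which is why calling a hook inside a conditional would corrupt the indexing.</p>

```javascript
// Toy sketch (not React's code) of why hooks depend on call order:
// framework-side state lives in an array, indexed by call position.
let hookStates = [];
let hookIndex = 0;

function useState(initial) {
  const i = hookIndex++;               // this hook's slot = its call position
  if (hookStates[i] === undefined) hookStates[i] = initial;
  const setState = (value) => { hookStates[i] = value; };
  return [hookStates[i], setState];
}

function render(component) {
  hookIndex = 0;                       // reset the cursor before each render
  return component();
}

// A "component" that wires itself to two slots of framework-managed state.
function Counter() {
  const [count, setCount] = useState(0);
  const [label] = useState("clicks");
  return { text: `${label}: ${count}`, increment: () => setCount(count + 1) };
}

let ui = render(Counter);   // "clicks: 0"
ui.increment();
ui = render(Counter);       // "clicks: 1"
```

<p>If <code>Counter</code> called one of its hooks inside an <code>if</code>, the slots would shift between renders and <code>count</code> could silently become <code>"clicks"</code>: the infrastructure's bookkeeping, not the programmer's reasoning, dictates the shape of the code.</p>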

<p><a rel="nofollow" href="https://en.wikipedia.org/wiki/Vibe_coding">Vibe coding</a> seems to change the paradigm again. The coder is no longer writing functions or wiring hooks. Rather, the user is prompting, guiding, and shepherding. When the LLM produces the code the user can allow or disallow, accept or reject. Does this mean the coder is a manager or does the LLM proletarianise the coder? What does this say about the code that is produced and what happens to CCS and its methods?</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/mc/5zu9krsnftya.png" alt="" title="" /></p>

<p>So here is my question - if CCS reads code as cultural text, what happens when the text has been progressively emptied of the traces that made it readable? Good old-fashioned programming (GOFP) left traces of programmers' individuality and social context in the code. The new "framework" programming leaves configuration choices and permission traces (allow/accept) but rarely do we see the negative (cancel/reject) as this is seldom written, stored, or documented. The vibe coder leaves only markdown conversations, and the code that results bears little individual trace. Its social context is algorithmically mediated rather than socially situated. Is this vocabulary shift significant for CCS? It seems to me one could argue that the move from mathematical to electrical to conversational marks a withdrawal of the interpretative from the text.</p>

<p>Does CCS need to follow this withdrawal (follow the actants as it were)? Should we be reading/critiquing prompts, conversations, intermediate objects (as I suggested in <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/204/week-2-ai-vibe-coding-and-critical-code-studies">this thread</a>) rather than the (generated) code? Or is there something still legible in the statistical patterns of vibe-coded output, something that a close reading of the probabilistic source code might reveal about the collective labour it distills?</p>

<p>Perhaps CCS should be attending to the transitions between human and machine labour, not just the outputs. I think the vocabulary shift here can be read diagnostically. When did programmers actually stop saying "implement" and start saying "wire up," and will they move to saying "vibe up"? Are we following these shifts and paying attention to new directions as the <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/223/week-4-new-work-and-new-directions#latest" title="New Work and New Directions discussion">New Work and New Directions discussion</a> asks, or is Critical Code Studies becoming conservative and overly obsessed with good old-fashioned programming (GOFP)?</p>
]]>
        </description>
    </item>
    <item>
        <title>Markdown: A Lightweight Markup Language (2004)</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/226/markdown-a-lightweight-markup-language-2004</link>
        <pubDate>Thu, 05 Feb 2026 13:46:01 +0000</pubDate>
        <category>2026 Week 4: New Work and New Directions</category>
        <dc:creator>davidmberry</dc:creator>
        <guid isPermaLink="false">226@/index.php?p=/discussions</guid>
        <description><![CDATA[<p><strong>Author:</strong> John Gruber<br />
<strong>Language:</strong> <a rel="nofollow" href="https://daringfireball.net/projects/markdown/" title="Markdown">Markdown</a> syntax specification; original implementation in Perl<br />
<strong>Year:</strong> 2004<br />
<strong>Source:</strong> Daring Fireball, <a rel="nofollow" href="https://daringfireball.net/projects/markdown/" title="https://daringfireball.net/projects/markdown/">https://daringfireball.net/projects/markdown/</a></p>

<p><strong>Software/Hardware Requirements</strong></p>

<p><a rel="nofollow" href="https://daringfireball.net/projects/markdown/" title="Markdown">Markdown</a> is a plain text formatting syntax and a text-to-HTML conversion tool. The original implementation was a Perl script (<code>Markdown.pl</code>) that processed <code>.md</code> or <code>.markdown</code> files into HTML. Unlike Scribe, which required a PDP-10 and BLISS compiler, Markdown runs anywhere Perl runs, which by 2004 meant essentially any Unix-like system, including Mac OS X and Linux. The format itself requires no special software to write, only a text editor, and remains human-readable without processing.</p>

<p><strong>Context</strong></p>

<p>This code critique accompanies the <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/discussion/197/scribe-a-document-specification-language-1980" title="Scribe code critique in Week 2">Scribe code critique in Week 2</a>. Where Scribe (1980) represents the emergence of structured document markup in academic computing, Markdown (2004) represents something like a return of the repressed, a deliberate simplification that prioritises human readability over formal rigour. Together, they bookend the "word processing parenthesis," the period of WYSIWYG dominance – and Markdown might be a signal that it is closing.</p>

<p>Markdown matters for three reasons. (1) Its syntax decisions have become infrastructural, shaping how millions of people write documentation, notes, and web content – it is also the (current) format crucial for powering the AI moment we are having in 2026. (2) Its licensing (or lack thereof) contrasts sharply with Scribe's commercialisation, representing a different political economy of software. (3) Its subsequent fragmentation into competing dialects (CommonMark, GitHub-Flavored Markdown, MultiMarkdown) raises questions about standardisation, power, and whose conventions become normalised.</p>

<p><strong>Code</strong></p>

<p><em>The Markdown Syntax</em></p>

<p>Markdown uses ASCII punctuation characters to indicate structure. Unlike Scribe's @ commands or HTML's angle brackets, Markdown syntax was designed to be "publishable as-is, as plain text, without looking like it's been marked up with tags or formatting instructions" (Gruber 2004).</p>

<p>Headers use hash marks:</p>

<pre><code># Heading 1
## Heading 2
### Heading 3
</code></pre>

<p>Emphasis uses asterisks or underscores:</p>

<pre><code>*italic* or _italic_
**bold** or __bold__
</code></pre>

<p>Lists use dashes, asterisks, or numbers:</p>

<pre><code>- Unordered item
- Another item

1. Ordered item
2. Another item
</code></pre>

<p>Links and images use brackets and parentheses:</p>

<pre><code>[Link text](https://example.com)
![Alt text](image.png)
</code></pre>

<p>Block quotations use the email convention of angle brackets:</p>

<pre><code>&gt; This is a quotation
&gt; spanning multiple lines
</code></pre>

<p>Code is indicated by backticks (inline) or indentation (blocks):</p>

<pre><code>Inline `code` here

    Four-space indented code block
</code></pre>

<p><em>Design Philosophy</em></p>

<p>Gruber's specification emphasises readability over "parseability":</p>

<blockquote><div>
  <p>The overriding design goal for Markdown's formatting syntax is to make it as readable as possible. The idea is that a Markdown-formatted document should be publishable as-is, as plain text, without looking like it's been marked up with tags or formatting instructions.</p>
</div></blockquote>

<p>This inverts the usual priority in markup language design. SGML, XML, and even Scribe prioritised unambiguous machine parsing. Markdown prioritises the human reader of the source file, accepting some parsing ambiguity as the cost.</p>

<p><em>The Perl Implementation</em></p>

<p>The original <code>Markdown.pl</code> is approximately 1,400 lines of Perl. It processes text through a series of regular expression substitutions, transforming Markdown syntax into HTML. The code is procedural rather than structured around a formal grammar, reflecting Markdown's origin as a practical tool rather than a formally specified language.</p>

<p>A representative excerpt shows the pattern:</p>

<pre><code>sub _DoHeaders {
    my $text = shift;

    # Setext-style headers:
    #     Header 1
    #     ========
    #  
    #     Header 2
    #     --------
    #
    $text =~ s{ ^(.+)[ \t]*\n=+[ \t]*\n+ }{
        &quot;&lt;h1&gt;&quot;  .  _RunSpanGamut($1)  .  &quot;&lt;/h1&gt;\n\n&quot;;
    }egmx;

    $text =~ s{ ^(.+)[ \t]*\n-+[ \t]*\n+ }{
        &quot;&lt;h2&gt;&quot;  .  _RunSpanGamut($1)  .  &quot;&lt;/h2&gt;\n\n&quot;;
    }egmx;


    # atx-style headers:
    #   # Header 1
    #   ## Header 2
    #   ## Header 2 with closing hashes ##
    #   ...
    #   ###### Header 6
    #
    $text =~ s{
            ^(\#{1,6})  # $1 = string of #'s
            [ \t]*
            (.+?)       # $2 = Header text
            [ \t]*
            \#*         # optional closing #'s (not counted)
            \n+
        }{
            my $h_level = length($1);
            &quot;&lt;h$h_level&gt;&quot;  .  _RunSpanGamut($2)  .  &quot;&lt;/h$h_level&gt;\n\n&quot;;
        }egmx;

    return $text;
}
</code></pre>

<p>This code reveals several things. The use of Perl's extended regular expression syntax (<code>/x</code> modifier) allows readable formatting of complex patterns. The dual support for "Setext-style" (underlined) and "atx-style" (hash-prefixed) headers shows Markdown inheriting conventions from earlier plain text traditions. The regex-based approach, rather than a formal parser, explains both Markdown's flexibility and its parsing edge cases.</p>
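<p>Those parsing edge cases are easy to reproduce. Here is a minimal JavaScript sketch (not Gruber's Perl, just an analogous single-pass substitution) of the atx-header transformation, showing both how compact the regex approach is and how it can over-match text a human would not read as a header:</p>

```javascript
// Minimal sketch of Markdown.pl's regex-substitution approach:
// atx-style headers converted in one global, multiline replace.
function doHeaders(text) {
  return text.replace(
    /^(#{1,6})[ \t]*(.+?)[ \t]*#*$/gm,
    (match, hashes, body) => `<h${hashes.length}>${body}</h${hashes.length}>`
  );
}

console.log(doHeaders("## Heading 2"));       // <h2>Heading 2</h2>

// Edge case: an unindented line that merely *starts* with '#'
// also matches, which is why regex passes need careful ordering
// (real Markdown escapes code blocks before running the header pass).
console.log(doHeaders("#include <stdio.h>")); // becomes an <h1>
```

<p>The substitution works beautifully on well-behaved input, but the grammar exists only implicitly in the ordering of such passes - exactly the ambiguity CommonMark later set out to pin down.</p>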

<p><strong>Provocations</strong></p>

<p><em>On the politics of simplicity.</em> Markdown's design prioritises ease of writing over formal specification. This has democratic implications, anyone can write Markdown without learning a complex syntax, but also creates problems. The original specification left many edge cases undefined, leading to the fragmentation problem that CommonMark later attempted to address. Is "simplicity" a neutral design value, or does it encode particular assumptions about users and use cases?</p>

<p><em>On plain text as ideology.</em> The preference for plain text has deep roots in Unix culture and hacker ethics. But "plain" text is never simply plain. UTF-8 encoding, line ending conventions (LF vs CRLF), and character set assumptions are all contested terrains. The apparent simplicity of <code>.md</code> files conceals layers of standardisation and historical compromise. What would it mean to read plain text ideologically?</p>

<p><em>On licensing and the gift economy.</em> Gruber released Markdown under a BSD-style license, essentially giving it away. Aaron Swartz, who contributed to the specification as a teenager, later became famous for his information-freedom activism and died in 2013 while facing federal prosecution for downloading academic articles. The contrast with Reid's sale of Scribe and insertion of time bombs could not be sharper. What do these different political economies of software reveal about the conditions under which technical infrastructure emerges?</p>

<p><em>On fragmentation and standardisation.</em> Markdown's success created its own problems. GitHub-Flavored Markdown added tables, task lists, and syntax highlighting. MultiMarkdown added footnotes, citations, and metadata. CommonMark attempted to create an unambiguous specification. The format that solved HTML's complexity problem has reproduced complexity at another level. Who gets to decide what "Markdown" means?</p>

<p><em>On LLMs and markup.</em> Large language models are trained on vast quantities of Markdown-formatted text from GitHub, documentation sites, and technical blogs. When we prompt an LLM to write, it <em>typically produces Markdown</em>. Does this training data bias encode particular assumptions about document structure? Whose conventions are being reproduced and naturalised through AI-mediated writing?</p>

<p><strong>Resources</strong></p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/1l/c84htnbbv2ei.png" alt="" title="" /></p>

<p>Markdown in the CCS workbench as a sample: <a rel="nofollow" href="https://ccs-wb.vercel.app/" title="https://ccs-wb.vercel.app/">https://ccs-wb.vercel.app/</a></p>

<p>Gruber, J. (2004) "Markdown." Daring Fireball. <a rel="nofollow" href="https://daringfireball.net/projects/markdown/" title="https://daringfireball.net/projects/markdown/">https://daringfireball.net/projects/markdown/</a></p>

<p>Gruber, J. (2004) "Markdown: Syntax." <a rel="nofollow" href="https://daringfireball.net/projects/markdown/syntax" title="https://daringfireball.net/projects/markdown/syntax">https://daringfireball.net/projects/markdown/syntax</a></p>

<p>Original Perl implementation: <a rel="nofollow" href="https://daringfireball.net/projects/downloads/Markdown_1.0.1.zip" title="https://daringfireball.net/projects/downloads/Markdown_1.0.1.zip">https://daringfireball.net/projects/downloads/Markdown_1.0.1.zip</a></p>

<p>CommonMark specification: <a href="https://spec.commonmark.org/" rel="nofollow">https://spec.commonmark.org/</a></p>

<p>MacFarlane, J. (2017) "Beyond Markdown." <a rel="nofollow" href="https://johnmacfarlane.net/beyond-markdown.html" title="https://johnmacfarlane.net/beyond-markdown.html">https://johnmacfarlane.net/beyond-markdown.html</a></p>

<p>Dash, A. (2026) "How Markdown took over the world." <a rel="nofollow" href="https://anildash.com/2026/01/09/how-markdown-took-over-the-world/" title="https://anildash.com/2026/01/09/how-markdown-took-over-the-world/">https://anildash.com/2026/01/09/how-markdown-took-over-the-world/</a></p>

<p>Wikipedia entry on Markdown: <a rel="nofollow" href="https://en.wikipedia.org/wiki/Markdown" title="https://en.wikipedia.org/wiki/Markdown">https://en.wikipedia.org/wiki/Markdown</a></p>

<p><strong>The Source Code</strong></p>

<p>The original <code>Markdown.pl</code> (version 1.0.1, 2004) is available from Daring Fireball:<br />
<a rel="nofollow" href="https://daringfireball.net/projects/downloads/Markdown_1.0.1.zip" title="https://daringfireball.net/projects/downloads/Markdown_1.0.1.zip">https://daringfireball.net/projects/downloads/Markdown_1.0.1.zip</a></p>

<p>Later implementations in other languages are numerous. Notable examples include:</p>

<ul>
<li>Python-Markdown: <a rel="nofollow" href="https://github.com/Python-Markdown/markdown" title="https://github.com/Python-Markdown/markdown">https://github.com/Python-Markdown/markdown</a></li>
<li>marked (JavaScript): <a rel="nofollow" href="https://github.com/markedjs/marked" title="https://github.com/markedjs/marked">https://github.com/markedjs/marked</a></li>
<li>commonmark.js (JavaScript reference implementation): <a rel="nofollow" href="https://github.com/commonmark/commonmark.js" title="https://github.com/commonmark/commonmark.js">https://github.com/commonmark/commonmark.js</a></li>
<li>Pandoc (Haskell, converts between many formats): <a rel="nofollow" href="https://pandoc.org/" title="https://pandoc.org/">https://pandoc.org/</a></li>
</ul>

<p><strong>Questions About the Code</strong></p>

<ol>
<li><p>How does Markdown's syntax encode assumptions about document structure? The format handles paragraphs, headers, lists, links, emphasis, and code, but struggles with tables, footnotes, and metadata. What model of "documents" does this imply? What kinds of writing does Markdown make easy or difficult?</p></li>
<li><p>The original implementation uses regular expressions rather than a formal grammar. What are the consequences of this design choice? How does it relate to the parsing ambiguities that later motivated CommonMark?</p></li>
<li><p>Gruber explicitly borrowed conventions from email (blockquotes with <code>&gt;</code>), Usenet (emphasis with <code>*</code>), and earlier plain text formats (Setext headers). What does this genealogy reveal about the communities whose practices became infrastructural?</p></li>
<li><p>Markdown was designed for web writers producing HTML. But it has spread far beyond that context, into note-taking, documentation, academic writing, and AI training data and AI output format. How do tools and formats exceed their original design intentions? What happens when a format becomes infrastructural?</p></li>
<li><p>The contrast between Markdown (given away, BSD license) and Scribe (sold, time-bombed) represents different political economies of software. What conditions enabled Gruber to give Markdown away? What does the gift economy of open source depend on that we might not see?</p></li>
</ol>

<p>Take a <a rel="nofollow" href="https://ccs-wb.vercel.app/" title="look in the CCS workbench now">look in the CCS workbench now</a></p>
]]>
        </description>
    </item>
    <item>
        <title>[Code Critique] getCrimeaStatusCookie, Yandex and very large codebases</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/221/code-critique-getcrimeastatuscookie-yandex-and-very-large-codebases</link>
        <pubDate>Mon, 02 Feb 2026 12:43:06 +0000</pubDate>
        <category>2026 Week 4: New Work and New Directions</category>
        <dc:creator>période</dc:creator>
        <guid isPermaLink="false">221@/index.php?p=/discussions</guid>
        <description><![CDATA[<h1>Yandex Code Critique</h1>

<hr />

<p>Title: Yandex Maps<br />
Author/s: Yandex Corporation<br />
Language/s: TypeScript<br />
Year/s of development: 2021<br />
Software/hardware requirements (if applicable): Web</p>

<hr />

<h2>Code</h2>

<pre><code>/**
 * Возвращает куку, отвечающую за статус Крыма.
 * [Returns the cookie responsible for the status of Crimea.]
 *
 * @see https://st.yandex-team.ru/MAPSUI-720
 */
function getCrimeaStatusCookie(cookies: Record&lt;string, string&gt;): string | undefined {
    if (!cookies.yp) {
        return;
    }
    const values = yandexYCookie.parseYpCookie(cookies.yp);
    return values.cr &amp;&amp; values.cr.value;
}
</code></pre>

<hr />

<h2>Context</h2>

<p>In 2023, the source code of Yandex, the equivalent of Google in the Russophone internet, was leaked. Given the ties of the Yandex engineers with their western counterparts, and the ties of the Yandex management with the government of the Russian Federation, this is quite a unique corpus, as it inscribes both corporate and governmental power. It is also an incredible challenge to make sense of it.</p>

<p>I attempted to do that in a paper that was recently published <a rel="nofollow" href="https://doi.org/10.1007/s00146-025-02819-4">createPoliticsResponse: the political computation of state borders in Yandex maps</a> (edited by <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/profile/orladelaney9">@orladelaney9</a> and <a rel="nofollow" href="https://wg.criticalcodestudies.com/index.php?p=/profile/davidmberry">@davidmberry</a>). Most of the article is focused on tracing how Yandex Maps decides which borders to show, to whom, and under which conditions. In this sense, it is a material testimony of what is already assumed, but hard to prove at the interface level, and in this case it really shows the specific contribution of CCS to platform studies.</p>

<p>One code snippet that I looked at in the paper was the function above, from the frontend part of Yandex.Maps, which seems to extract a value about Crimea's status from a client cookie. So on one side, it is quite obvious that a Kremlin-linked Yandex wants to treat one of the most contested geopolitical areas of Europe as an edge case. But on the other side I have found it particularly hard to show <em>how</em> exactly this is treated, and as <em>which kind</em> of edge case. This is a big limitation of critically studying this function: it only allows us to study the reading of a value, and not its writing, hence telling only half of the story.</p>
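<p>To make the read side concrete, here is a hypothetical JavaScript sketch of what <code>yandexYCookie.parseYpCookie</code> might do, assuming the packed <code>expiry.name.value</code> segments separated by <code>#</code> that Yandex's y-cookies appear to use. The field names and values below are placeholders for illustration, not taken from the leak:</p>

```javascript
// Hypothetical sketch (not Yandex's code) of parsing a packed "yp" cookie,
// assuming "#"-separated segments of the form expiry.name.value.
// The "cr" name and "1" value below are placeholders, not real leaked data.
function parseYpCookie(yp) {
  const values = {};
  for (const segment of yp.split("#")) {
    const [expiry, name, value] = segment.split(".");
    if (name !== undefined && value !== undefined) {
      values[name] = { expiry, value };
    }
  }
  return values;
}

const values = parseYpCookie("1700000000.cr.1#1700000000.other.x");
console.log(values.cr && values.cr.value); // "1"
```

<p>Even under this assumption, the sketch only shows the mechanics of extraction; the semantics of the <code>cr</code> field, and wherever its value gets written, remain exactly the missing half of the story.</p>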

<p>One reason for this is that the Yandex codebase is orders of magnitude larger than the usual snippets that constitute most of the corpus of CCS: the whole leaked codebase clocks in at upwards of 44 GB, and the maps module at more than 4 GB; both are mostly composed of plaintext files (see the resources section for a repo containing the maps section of Yandex's source code). This shift in quantity seems to me to be a shift in quality, and asks new questions of the methods of CCS, some of which I've sketched out below.</p>

<hr />

<h2>Questions</h2>

<ul>
<li><p>Quite an uncritical start, but where is the value of the <code>crimeaStatus</code> cookie field set? How do we handle variable names changing as they get passed as arguments/references/assignments?</p></li>
<li><p>How does one go about reading 44 GB of code? The default means of search in textual software (matching patterns of characters) is heavily biased towards a syntactic approach, rather than a semantic approach. Could tools that focus on the structure of code (e.g. class relationships, function definitions and references, data structuring, argument passing) rather than on the surface of the code help here? If we do CCS on large corpora with only tools that enable such lexical analysis, rather than tools that do static structural analysis, what are we missing?</p></li>
<li><p>Is it enough to focus on the name of a function (e.g. <code>getCrimeaStatusCookie</code>) as an argument to critique the relationship between a private corporation and the (imperial) policy of a nation-state, without knowing exactly what the function does? In other words, what is the relationship between lexical choices and semantic choices as epistemic building blocks in a critical code study? Is there a critique of the data structuring that is independent of how data structures are called?</p></li>
<li><p>Thinking of structure, how much can/should CCS draw on existing CS entities and denominations? I'm thinking here of design patterns, best practices, testing strategies, application architectures and language features. Specifically, what kinds of parts of a CCS grammar could something like middlewares or localizations be (<code>getCrimeaStatusCookie</code> being both of these)?</p></li>
<li><p>The nature of leaked code seems to always imply a <em>lack</em>. In this case, documentation and specification are missing, as are all of Yandex's ML components. So how does one investigate incomplete code? How does one account for the part that is lacking, and how can one make extrapolations about it? What kind of forensics is this?</p></li>
</ul>
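<p>To make the lexical/structural distinction above concrete, here is a small sketch (in Python rather than TypeScript, and with an invented identifier, <code>get_status</code>) contrasting a character-level search with an AST-based one:</p>

<pre><code class="language-python">
# Illustration of the lexical/structural gap. A regex finds the
# *characters* "get_status" wherever they occur; an AST walk finds
# actual call sites, even through an alias, and ignores string
# literals that merely look like the name.
import ast
import re

SOURCE = """
check = get_status
value = check(request)   # call via an alias: invisible to grep
log("get_status")        # string literal: a false positive for grep
"""

# Lexical: every occurrence of the characters, regardless of meaning.
lexical_hits = len(re.findall(r"get_status", SOURCE))

# Structural: follow the assignment, then count calls through any alias.
tree = ast.parse(SOURCE)
aliases = {"get_status"}
for node in ast.walk(tree):
    if isinstance(node, ast.Assign) and isinstance(node.value, ast.Name):
        if node.value.id in aliases:
            aliases.update(t.id for t in node.targets if isinstance(t, ast.Name))
call_sites = sum(
    1
    for node in ast.walk(tree)
    if isinstance(node, ast.Call)
    and isinstance(node.func, ast.Name)
    and node.func.id in aliases
)
# lexical_hits is 2 (one assignment, one string literal);
# call_sites is 1 (the aliased call) -- the two searches disagree.
</code></pre>

<p>Scaled up to a 44 GB corpus, these disagreements are exactly what a purely lexical tool would silently miss or misreport.</p>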

<hr />

<h2>Resources</h2>

<ul>
<li><a rel="nofollow" href="https://gitlab.com/periode/yandex-maps">Maps module of the Yandex leaks</a>, with the code snippet being in <a rel="nofollow" href="https://gitlab.com/periode/yandex-maps/-/blob/main/maps/front/services/maps/src/server/middlewares/localization-middleware.ts?ref_type=heads#L153">localization-middleware.ts:153</a></li>
<li><a rel="nofollow" href="https://www.css-lab.rwth-aachen.de/tools/overview">GICAT and ICE</a>, tools developed at the Computational Social Science research lab at RWTH, enabling code study of software used in social science research.</li>
<li><a rel="nofollow" href="https://github.com/sourcegraph/zoekt">zoekt</a>, a fast code search engine, used here as the main lexical tool for exploring the source code</li>
</ul>
]]>
        </description>
    </item>
    <item>
        <title>Launch: LLMbench - a tool to undertake comparative annotated analysis of LLM output</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/230/launch-llmbench-a-tool-to-undertake-comparative-annotated-analysis-of-llm-output</link>
        <pubDate>Sat, 07 Feb 2026 18:26:38 +0000</pubDate>
        <category>2026 Week 4: New Work and New Directions</category>
        <dc:creator>davidmberry</dc:creator>
        <guid isPermaLink="false">230@/index.php?p=/discussions</guid>
<description><![CDATA[<p>After the success of the <a rel="nofollow" href="https://ccs-wb.vercel.app" title="CCS workbench">CCS workbench</a>, I have used the lessons learned from that tool to vibe code a new tool I call LLMbench. It allows you to send the same prompt to two LLMs simultaneously and then analyse their replies with annotations. These can then be exported to JSON/text or PDF for later use.</p>

<p>In the examples below you can see I have compared <a rel="nofollow" href="https://ollama.com/library/llama3.2" title="Llama 3.2">Llama 3.2</a> and <a rel="nofollow" href="https://modelcards.withgoogle.com/assets/documents/gemini-2.5-pro.pdf" title="Gemini 2.5 Pro">Gemini 2.5 Pro</a>, but you could also compare two models from the same family (e.g. Pro vs Flash), or even the same model against itself (Pro vs Pro).</p>

<p>Source code (fully open source, and built on open source software) is available here: <a rel="nofollow" href="https://github.com/dmberry/LLMbench" title="https://github.com/dmberry/LLMbench">https://github.com/dmberry/LLMbench</a></p>

<p>It's still in early development so I don't have a deployment to share, but you can install it yourself by <a rel="nofollow" href="https://github.com/dmberry/LLMbench" title="following the instructions here">following the instructions here</a>.</p>

<p>UPDATE: It is now deployed and can be used from this address (currently only working with non-local LLM models): <a rel="nofollow" href="https://llm-bench-mu.vercel.app/" title="https://llm-bench-mu.vercel.app/">https://llm-bench-mu.vercel.app/</a></p>

<h1>To connect up a free API model</h1>

<p>Go to <a rel="nofollow" href="https://openrouter.ai" title="https://openrouter.ai">https://openrouter.ai</a> and sign up for an account</p>

<p>Click <strong>Get an API key</strong></p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/rj/j8wkz0tylack.png" alt="" title="" /></p>

<p>Write this API key down somewhere, as you will need it later.</p>

<p>Then click <strong>Explore Models</strong></p>

<p>I use the model: "google/gemma-3n-e2b-it:free" (which is free) but there are many others to choose from.</p>

<h1>In the LLMbench app</h1>

<p>Now paste this information into the <strong>LLMbench settings window (click the top-right gear)</strong>:</p>

<p><strong>Provider</strong>: Open-AI-Compatible API (or choose the one you have if you pay for it)<br />
<strong>Model</strong>: Custom Model<br />
<strong>Custom Model ID</strong>: google/gemma-3n-e2b-it:free<br />
<strong>API Key</strong>: your API key from above<br />
<strong>Base URL</strong>: <a href="https://openrouter.ai/api/v1" rel="nofollow">https://openrouter.ai/api/v1</a></p>

<p>Do this for both panels - as a test you can send to the same model. Later you can choose another free model from OpenRouter such as:</p>

<ul>
<li>nvidia/nemotron-nano-9b-v2:free</li>
<li>openai/gpt-oss-20b:free</li>
<li>google/gemma-3n-e2b-it:free</li>
<li>deepseek/deepseek-r1-0528:free</li>
<li>meta-llama/llama-3.3-70b-instruct:free (supports multilingual dialogue)</li>
</ul>

<p>Then click X to close and it should work...</p>
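<p>Under the hood, the settings above amount to a standard OpenAI-compatible chat completion request aimed at OpenRouter. As a sketch (Python; the <code>/chat/completions</code> path is the standard OpenAI-compatible endpoint, assumed here rather than taken from LLMbench's source), this only builds the request, since actually sending it needs a real key:</p>

<pre><code class="language-python">
# Sketch of the request the settings describe: an OpenAI-compatible
# chat completion aimed at OpenRouter. Builds (url, headers, body)
# without sending anything; a real key from openrouter.ai is needed
# to actually call the API.
import json

BASE_URL = "https://openrouter.ai/api/v1"

def build_chat_request(api_key, model, prompt):
    """Return (url, headers, body) for an OpenAI-compatible chat call."""
    url = BASE_URL + "/chat/completions"
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "YOUR-API-KEY", "google/gemma-3n-e2b-it:free", "Hello")
</code></pre>

<p>LLMbench fills in these same fields from the Provider, Model, API Key and Base URL boxes, once per panel.</p>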

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/y7/15y49eknk4kx.png" alt="" title="" /></p>

<p>This is what it currently looks like (it already has a dark mode!).</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/bx/fw8bu0xceji6.png" alt="" title="" /></p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/r7/ka2b0f559m94.png" alt="" title="" /></p>

<p>And here it is with annotation added:</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/xh/9m2ef8hvg98a.png" alt="" title="" /></p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/su/sji1ym7eqw1d.png" alt="" title="" /></p>

<p>Here are some examples using the same Gemini model family. In the first (Pro vs Pro) you can see the effects of probabilistic generation in the slightly different outputs produced when the same prompt is given to two instantiations of the model simultaneously.</p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/i3/9x1pirnxecb2.png" alt="" title="" /></p>

<p><img src="https://wg.criticalcodestudies.com/uploads/editor/bg/jr2n5puft053.png" alt="" title="" /></p>

<p>The one above is Gemini Flash vs Pro.</p>
]]>
        </description>
    </item>
    <item>
        <title>New project: Indigenous and decolonial foundations for programming</title>
        <link>https://wg.criticalcodestudies.com/index.php?p=/discussion/218/new-project-indigenous-and-decolonial-foundations-for-programming</link>
        <pubDate>Sat, 31 Jan 2026 08:49:18 +0000</pubDate>
        <category>2026 Week 4: New Work and New Directions</category>
        <dc:creator>AlanBlackwell</dc:creator>
        <guid isPermaLink="false">218@/index.php?p=/discussions</guid>
        <description><![CDATA[<p>Mark has invited me to share some early thoughts about a nascent project. This is practice-based research, meaning that I can't share results, or even fully coherent motivation, until the practical work commences. The following describes a little background, in the hope that you may find this an interesting basis for discussion.</p>

<p>I try to justify the privilege of academic sabbatical leave by undertaking research investigations that would not be supported by conventional public or private funding, applying radically new methods to longstanding problems. As an engineer / critic, I take Agre’s critical technical practice to mandate not only properly informed theoretical perspectives on technology, but also subjecting any speculative concepts to Pickering's mangle of practice, learning what I don’t know by making different things in different contexts.</p>

<p>Next year will be the last time in my career that I’m entitled to take a year of research leave. I’ve done it twice before. In my first sabbatical, I spent 7 months living in a New Zealand forest, creating the purely visual art programming language Palimpsest, through a daily process of creative exploration that avoided specification or explicit functional goals (perhaps “vibe coding,” but working directly in Java). My second sabbatical asked what AI would look like if it were invented in Africa instead of the USA, a question that I explored ethnographically, working with computer scientists and local communities in Ethiopia and Namibia.</p>

<p>The massive surge of interest in AI since then has inspired many to reconsider the foundations of computing, and the longstanding tension between AI and programming as explored in my book Moral Codes. For this last sabbatical, I plan to undertake a practical investigation of alternative foundations. Informed by critical histories such as Wendy Hui Kyong Chun’s “Programmed Visions”, and Matteo Pasquinelli’s "Eye of the Master,” I bring together my ethnographic investigations of AI alternatives with some decades of personal experience inventing domain-specific and end-user programming languages. I am making plans to work directly with creative and scholarly communities in two countries - in Lagos, Nigeria inspired both by Helen Verran’s “Science and an African Logic” and by Fela Kuti’s Afrobeat, and in my home country of Aotearoa New Zealand, where mātauranga Māori does not separate people and their environment in the same way as colonial metaphysics.</p>

<p>In 2018, when starting my investigation of AI in Africa, computer science colleagues asked how the fundamental principles would be any different, simply by working in a different place. Yet observing the work of my graduating students as they join DeepMind, Facebook AI, Spotify and other corporations, I wondered how AI on another continent could possibly be the same. Those tensions seem even greater, in turning my attention to programming languages. Aren't the basic principles of computation mathematical laws, within which today’s programming languages represent a natural compromise of engineering practice and architectural principles? Yet Chun, Pasquinelli and others attest to the contingency of so many supposedly fundamental principles. I really don’t know what, or whether, new things might be discovered by this somewhat Quixotic project. But I hope that a spirit of playful humility and moral engagement may uncover some alternative paths into the forest.</p>
]]>
        </description>
    </item>
   </channel>
</rss>
