2026 Participants: Martin Bartelmus * David M. Berry * Alan Blackwell * Gregory Bringman * David Cao * Claire Carroll * Sean Cho Ayres * Hunmin Choi * Jongchan Choi * Lyr Colin * Dan Cox * Christina Cuneo * Orla Delaney * Adrian Demleitner * Pierre Depaz * Mehulkumar Desai * Ranjodh Singh Dhaliwal * Koundinya Dhulipalla * Kevin Driscoll * Iain Emsley * Michael Falk * Leonardo Flores * Jordan Freitas * Aide Violeta Fuentes Barron * Erika Fülöp * Tiffany Fung * Sarah Groff Hennigh-Palermo * Gregor Große-Bölting * Zachary Horton * Dennis Jerz * Joey Jones * Titaÿna Kauffmann * Haley Kinsler * Todd Millstein * Charu Maithani * Judy Malloy * Eon Meridian * Luis Navarro * Collier Nogues * Stefano Penge * Marta Perez-Campos * Arpita Rathod * Abby Rinaldi * Ari Schlesinger * Carly Schnitzler * Arthur Schwarz * Haerin Shin * Jongbeen Song * Harlin/Hayley Steele * Daniel Temkin * Zach Whalen * Zijian Xia * Waliya Yohanna * Zachary Mann
CCSWG 2026 is coordinated by Lyr Colin-Pacheco (USC), Jeremy Douglass (UCSB), and Mark C. Marino (USC). Sponsored by the Humanities and Critical Code Studies Lab (USC), the Transcriptions Lab (UCSB), and the Digital Arts and Humanities Commons (UCSB).

Week 1: Where is the Critical in Critical Code Studies?

For Discussion:
Where is the critical in critical code studies?
What do you feel is the most productive marriage of critical and technical inquiry?
What code readings (articles or writings or other) marry these ideals effectively?
What are examples of discussions of code objects where one side or the other of this spectrum has dominated or is more needed?
What are your concerns in this debate?

Read the (unedited) transcript of our video introduction here.

Comments

  • edited January 12

    Mark and Jeremy, thank you for such a generative (!!) introduction. I think you've captured something key about the stakes of CCS work, particularly in your naming of the science wars redux and the intellectual redlining Tara McPherson identified. What you've laid out is not just a problem of method, but a tension in how we understand computational culture writ large, compounded by the additional challenges of vibe coding and LLMs.

    I think you reveal something interesting about where CCS now stands. We're no longer arguing for critical code studies' legitimacy; we are starting to map its terrain and to interrogate its methods. This seems to me to mark a shift from a kind of defensive justification to a much more productive move towards what Imre Lakatos called a research programme. The question "where is the critical in critical code studies?" assumes that the field exists, that it has accumulated enough work to justify reflection on method. I think that means CCS is maturing into a stronger disciplinary formation.

    The dialectical overview you've laid out, with synthesis generated through close reading, maps onto a deeper epistemological problem about how computational objects can be known at all. Code exists at multiple levels, as materiality (e.g. voltage states, magnetic media), as a formal system (e.g. the formalism of syntax, semantics), as a cultural practice (e.g. practices, norms, communities), and, of course, as political economy (e.g. labour, accumulation, power). I wonder if this is a place to ask whether the concept of totality has something to offer CCS? How do we navigate between these levels without reducing one to another – is it possible for us to grasp the whole rather than the parts – or is this a false synthesis?

    This is why the working group has become part of the answer to your question. By bringing people with different disciplinary expertise to examine the same code objects together, we're not just combining perspectives but generating truly interdisciplinary knowledge. We might say the computer scientist sees patterns the CCS practitioner misses, whereas the critical theorist asks questions the programmer takes for granted or may not even recognise as questions at all.

    What's needed is not just critique but what I think of as constellational analysis, showing how technical, economic, political, and cultural forces converge in particular computational assemblages. This means reading code alongside business logic, interfaces with labour, algorithms through ideology critique, code even through AI methods. We should certainly question the separation of the technical from the social that your presentation of intellectual redlining lays out so well.

    But I would like to offer an additional provocation: we also need to create new concepts (and perhaps a new set of keywords) to understand what I am increasingly thinking of as not the mediation of code and software but its intermediation (Vermittlung).

  • edited January 13

    I wonder how the critic fits into the "constellation" that David describes. When we talk about "close reading," we sometimes overlook the act of reading itself. Who is reading? Where? How? Jeremy mentioned an upcoming article on "Reading Code Aloud" in the plenary video. This set me thinking...

    In the meetings of my online CCS group, we read code aloud, together. This has been an interesting challenge, because there is no canonical pronunciation for code. For example, you can read the following snippet of LISP/Scheme in several ways:

    (* (/ 3 2) (+ x 7.))
    

    This could be pronounced:

    "Open parenthesis, asterisk, open parenthesis, slash, three, two, close parenthesis, open parenthesis, plus sign, x, seven dot, close parenthesis, close parenthesis."

    Many members of the group lean towards a more "orthographic" pronunciation like this. Or you might interpret the symbols:

    "The multiplication of the quotient of 3 and 2 and the sum of x and 7.0."

    Or you can read and gloss it at the same time. Let's assume that x represents the velocity of some object in a simulation.

    "Three divided by two, which are both integers, times the velocity plus 7, which are both reals."

    This problem of how to pronounce code, I think, points to the rightness of David's "constellation" idea. The meaning of the code depends entirely on its context. We are constantly negotiating the contexts of reading in our group. What contexts are necessary to recover the literal meaning of the code, so we can discuss its interpretations? How much interpretation is too much when we recite the code? Can we even agree on a "literal" level of meaning, which can then become the object of critique? We are still trying to identify the stars by which to navigate these questions.

    In answer to the question, "Where is the critical?", I would say that it lies in this "constellation" of agreements between the critics, which establish relevant contexts for discussion.

  • I’m so pleased to see this question being addressed explicitly. Although I’ve occasionally and tangentially engaged with CCS, in the past couple of years I have (like so many) had a lot of my attention taken by Critical AI studies, and have been contributing to related debate in that field. From my own perspective, as an AI researcher turned programming language designer, again turned AI commentator, I’ve never seen a clear distinction between AI and PLs, meaning that “criticality” for me applies equally to creation of code and to creation of AI systems. The commonality between AI and PLs is that these are information processing tools, requiring clarity about the distinction between critique of a tool and critique of a product. In classical domains of cultural production, we critique the painting not the paintbrush, and the carving not the chisel. (Although I have personally worked on the critical analysis of the violin, as well as analysis of music). The distinct property of both AI and PL is that the mutability of the tools requires recursive reflexivity to a degree unseen in other objects of critique, while also rendering language as tool in a way that invites many category errors. Boundaries of agency and behaviour are blurred between the mechanical and the semiotic, in a way we have not really seen before. From my own perspective, primarily as a tool-maker rather than critic, I often return to Phil Agre to ask where the practice can be located, for engagement with my own critical technical practice community. I’m looking forward to learning further critical orientations and resources, but are there any other members of this group who will be looking to applications in practice?

  • I do agree with the point regarding the legitimacy of the field being established with the various special issues and this working group. Here I think there is a need to reflexively go forward with using/developing methods on code while considering the role of code and models in reading code and models.

    The constellational analytical approach is useful in bringing together the many layers and approaches to reading code. It also, to me, seems to offer an inclusionary way into CCS for interested parties. I hope that I have not misunderstood Michael Falk’s point about reading code aloud, but it seems to me that the differing readings offer alternative perspectives on that bit of LISP/Scheme.

    Does the critic work in concert/choreography with others within the constellation to find overlaps and gaps between perspectives? This working group, although this is my first time participating, seems to be an answer to developing both critique and conversations across perspectives and skills. My intuition is that there will always be gaps in the analysis of code and that trying to link all the varying layers into a complete whole will create a false synthesis. Those gaps may enable either further work or further reflection.

  • edited January 13

    Oh. Interesting. I had read this in a completely different way. (I'm sort of out of my element here, not being a media studies person, so what I'm about to say might be just plain nonsense, and if so you can just ignore it.) What I thought the question was asking is something like: If I was in the middle of debugging a big complex program -- like I literally was until 3am this morning -- and I had Mark Marino sitting at my elbow, what would he be asking me, and telling me, and would I find that useful or annoying? (Or perhaps a better way to put that is: What sort of things that he could tell me would I find useful, and in what way?) That is, I was reading "where is the critical" ... as "how is the criticism useful to engineers?" ... probably (now that I'm reading the other responses) not what was intended ... although I'm still interested in this question, but maybe it should be in a different thread. (BTW, it's actually not far off from near reality. I "vibe code" (although I hate that term) literally all the time, and sometimes I actually ask the LLM to criticize my code, which it does, but from a narrow engineering perspective. I wonder what would happen if we added a little Mark Marino to the LLM vibe coding engines? Sounds like I'm joking. I'm not!) p.s. my 3am session was a success!

  • @michael.falk Re reading code (in the reading-aloud sense): it's a very interesting cognitive process, but it seems to me to depend completely on the meaning. Arithmetic, for example, has semi-standardized ways to read, so we tend to both read and write in a way that's biased by the standard way we read math. And code is written that way too. (Something that often confuses beginning lispers is that there's no infix notation for math, although it turns out that prefix notation is more general, so instead of a+b+c+d you can just write the more general sum(a,b,c,d).)
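
    In Lisp itself, that generality looks like this (a trivial sketch):

    (+ 1 2 3 4)       ; prefix + takes any number of arguments => 10
    (max 7 2 9 4)     ; as does max => 9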

    I read this code (from something I'm working on right now):

    (if (functionp (cell-symb (h1)))
        (cell-symb (h1))
        (if (functionp (<== (cell-symb (h1))))
            (<== (cell-symb (h1)))))
    

    As "If H1's symbol is a function then return it, otherwise if the cell H1 is pointing to is a function then return that." But this is quite specific to my understanding of my own code. Someone without that context might read it is: "If the symbol slot of the cell resulting from calling H1 is a function then return that .... etc.", but I know that (H1) is just a macro that gives me the H1 cell, and so on.

    Here's another example from the same code base:

    (loop for (nil . getter) in *symbol-col-accessors*
          as symbol = (funcall getter cell)
          if (local-symbol-by-name? symbol)
          collect (cons symbol (format nil "~a-~a" top-name symbol)))
    

    which I just read as "make new local symbols for all those that don't already have them". Whereas a raw reading would be something horrific like "Loop through the symbol-col-accessors and for each cdr (which is a getter), apply it to the cell, and if ...." Often things like this are complete functions (at least in lisp), and the name of the function is just the reading.
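
    To illustrate that last point, here is a hypothetical wrapping of the same loop (the free variables become parameters), where the function name just is the reading:

    (defun make-new-local-symbols-as-needed (cell top-name)
      (loop for (nil . getter) in *symbol-col-accessors*
            as symbol = (funcall getter cell)
            if (local-symbol-by-name? symbol)
            collect (cons symbol (format nil "~a-~a" top-name symbol))))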

    (In fact, I sort of decry languages that don't traditionally allow separators in their symbol names and that are case sensitive, because instead of the nice human-readable (and thinkable) names like (Make-new-local-symbols-as-needed ...) you get things like MakeNewLocalSymbolsAsNeeded and can't vary the case to work nicely in sentences.)

  • edited January 13

    Thanks Jeff and Iain!

    Indeed, I think these different pronunciations are different "perspectives" (Iain), and I agree that the pronunciation of code depends on its "meaning" (Jeff).

    On reflection, I think what I was trying to say was this:

    In our reading group, the technical/critical distinction often maps on to the literal/meaningful distinction. When we are reading, we often struggle to understand what the code does. We normally discuss this first. When we discuss what the code does, we discuss it like engineers, and it feels like we are determining the literal meaning. Only then do we move on to discussions of the metaphors, naming conventions, assumed prior knowledge and so on, which allows us to build up a critique.

    So, as I say, when we read code aloud in my group, we often act like the technical is the literal, and lies on a lower level of analysis. The critical is the meaningful, and is built "on top of" the literal level.

    This is what it seems like when I read code aloud with friends, but as Iain and Jeff observe, even just reading the code aloud requires interpretation. Our very pronunciations imply an interpretation. Reflecting on this, the mapping technical→literal/critical→meaningful breaks down... which I think lends support to Mark and Jeremy's basic argument, which is that the technical view and the critical view are different orientations toward reading, rather than stacked-up levels of reading.

    On the other hand, just getting "what the code does" is also essential to critical understanding...

  • edited January 13

    Also, I like your shorthand way of reading, Jeff. It is funny to compare

    "make new local symbols for all those that don't already have them"

    with

    "left parenthesis, loop, left parenthesis, nil, dot, getter, right parenthesis ..."

    Where is the boundary between a recitation and a description of code? How does that spectrum map onto the technical↔critical spectrum?

  • Well, @michael.falk, no one ever reads code at the character level unless they’re trying to dictate it over the phone to someone who literally has never seen Lisp. (Which, BTW, I’ve done on multiple occasions! :-)

    To your point that “when we read code aloud in my group, we often act like the technical is the literal, and lies on a lower level of analysis. The critical is the meaningful, and is built "on top of" the literal level.”, I think that this is very uncommon. (Or perhaps I’m mis-taking your description.) Humans have an incredible ability to jump levels nearly instantaneously and fluidly. I saw this when I was working on how molecular biologists think: in literally the same sentence they’ll connect a molecule folding in a particular way to the way the cortex of the brain gets organized, and even (still the same sentence!) to how societies are organized. Although I have in mind a particular example here, it’s extremely common, even, I would say, natural and automatic. So, back to code, one never (very rarely) just reads at one level at any given time. Rather one is trying to understand what’s going on at all (or at least several) relevant levels at the same time. We saw this all the time in our reading of the ELIZA code. (In fact, in writing this I’m coming to think that this is actually exactly what you said, and that I’m actually agreeing with you — so much the better!)

  • Magnificent introduction, Mark and Jeremy!

    I remember sitting in a FORTRAN class at what is now Ball Aerospace, circa 1968. The class was primarily to convert slide rule engineers to computer use, so the code examples were of little use for what I was hired for, which was to computerize the Library. Meanwhile, my immediate boss was pioneer Chicano writer José Antonio Villarreal, who was writing The Fifth Horseman while at the same time working for Ball Aerospace as a technical writer.

    To be continued....

  • Thank you for this introduction and for explaining where the field comes from and what the key questions in the field are today!

  • It is of interest that in this group we have participants who are early examples of the humanities entering and changing computer science culture. LinkedIn just informed me that from November 1993 to December 1994, jshrager and I were both employed at Xerox PARC. But otherwise our very different bios reflect radical changes in comp sci culture between the 1960s and the 1970s and onward.

    The Interview “The Influence of Algorithmic Thinking: Judy Malloy and Julianne Nyhan”
    in J. Nyhan, A. Flinn, Computation and the Humanities, Springer Series on Cultural Computing, 2016

    https://link.springer.com/content/pdf/10.1007/978-3-319-20170-2_7.pdf

    very thoroughly covers my experience and is interesting as regards the CCSWG call for examples. Perhaps what we were both doing at PARC is relevant as regards this discussion. Jeff?

  • Continuing Mark and Jeremy’s interest in bringing culture into places where culture was not expected, and in examples from one side or the other -- and davidmberry's response:

    “By bringing people with different disciplinary expertise to examine the same code objects together, we're not just combining perspectives but generating truly interdisciplinary knowledge.”

    -- re the Xerox PARC program, I first worked with Pavel Curtis and Rich Gold in the Computer Science Lab (CSL only hired artists with programming skills). This book covers all the work in the program:

    Craig Harris, ed, Art and Innovation: The Xerox PARC Artist-in-Residence Program, MIT Press, 1999

    https://direct.mit.edu/books/book/2566/Art-and-InnovationThe-Xerox-PARC-Artist-in

    “The idea behind Xerox's interdisciplinary Palo Alto Research Center (PARC) is simple: if you put creative people in a hothouse setting, innovation will naturally emerge. PARC's Artist-in-Residence Program (PAIR) brings artists who use new media to PARC and pairs them with researchers who often use the same media, though in different contexts. This is radically different from most corporate support of the arts, where there is little intersection between the disciplines. The result is both interesting art and new scientific innovations. Art and Innovation explores the unique process that grew from this pairing of new media artists and scientists working at the frontier of developing technologies. In addition to discussing specific works created during several long-term residencies, the artists and researchers reveal the similarities and differences in their approaches and perspectives as they engage each other in a search for new methods for communication and creativity…”

    Contents: Series Foreword / Roger F. Malina -- Preface / Craig Harris -- Introduction / John Seely Brown -- 1. The Xerox Palo Alto Research Center Artist-in-Residence Program Landscape / Craig Harris -- 2. PAIR: The Xerox PARC Artist-in-Residence Program / Rich Gold -- 3. The PARC PAIR Process / Craig Harris. Cultural Repulsion or Missing Media? / David Biegelsen. An EAP Perspective / Constance Lewallen -- 4. The Place of the Artist / Steve Harrison -- 5. O Night Without Objects / Jeanne C. Finley, John Muse and Lucy Suchman / [et al.] -- 6. Public Literature: Narratives and Narrative Structures in LambdaMOO / Judy Malloy -- 7. Forward Anywhere: Notes on an Exchange Between Intersecting Lives / Judy Malloy and Cathy Marshall -- 8. Endless Beginnings: Tales from the Road to "Now Where?" / Margaret Crane, Dale MacDonald and Scott Minneman / [et al.] -- 9. An Archeology of Sound: An Anthropology of Communication / Paul De Marinis -- 10. Reflections on PAIR / Stephen Wilson -- 11. Artscience Sciencart / Michael Black, David Levy and Pamela Z. -- 12. Conduits / Joel Slayton -- 13. Art Shows at PARC / Marshall Bern.

  • edited January 17

    Thanks for all of this so far, everyone.

    I want to raise a different question. Is it possible that the "critical" is tied not just to the way we approach an object of study but also to the reason we chose that object of study in the first place? Again, keeping in mind that this working group has investigated a single line of code that creates a maze-like pattern on the screen of a Commodore 64, is it more likely that a critical reading will follow the selection of a code object with a particular hermeneutic question in mind? For example, our discussions of the Transborder Immigrant Tool. Another example would be our discussion of the code of the Apollo Lunar Lander, starting from Judy's framing of the project as a mission of recovery of the work of women in computer science and particularly in space exploration, starting with crediting Margaret Hamilton. In that discussion, we found ourselves following @JudyMalloy's lead into questions of race inequity back on earth and other more social issues -- not that all of the "critical" is social.

    Let me offer for consideration this code critique thread on the ICE data appendix, a Freedom of Information Act project to scrutinize deportations carried out by Immigration and Customs Enforcement (ICE). Do those kinds of code snippets or code objects framed with certain critical questions yield more critical readings?

  • Hi everyone, I am joining this working group for the first time. Coming from a background in AI and programming, I am currently navigating the steep learning curve of Critical Code Studies. I’ve been diligently reading the materials and watching the introductory video, but please pardon me if my perspective reflects a novice understanding of the field's history.

    I was particularly struck by the discussion on "Reading Code Aloud" initiated by @michael.falk and expanded upon by @jshrager. While Michael explores the pronunciation of LISP to find meaning, and Jeff notes how engineers "jump levels" between syntax and function, I find myself grappling with a unique challenge in my field: How do we "read" modern AI models?

    In traditional software, we can debate the reading of a variable name or a function's logic. However, in my work with Deep Learning, the most critical "code object"—the model weights and biases (often saved as .pt or .h5 files)—is essentially a massive array of floating-point numbers. As Mark and Jeremy noted in the video regarding the "Science Wars redux," there is a tension between technical facts and cultural interpretation. But here, the "technical fact" is a list of millions of numbers that are semantically silent. No matter how closely we read these numbers aloud, they refuse to disclose their "meaning," "bias," or "political economy" without the context of their training data and hyperparameters. It is a literal black box where the causal link between human-readable syntax and machine behavior is severed.
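
    To make that concrete, here is a toy illustration (made-up numbers, and in the Lisp of this thread rather than Python): a "model" on disk is just this, at a scale of millions or billions of values.

    ;; Semantically silent: nothing in the representation itself
    ;; announces meaning, bias, or political economy.
    (defparameter *weights*
      #(0.0213 -1.4071 0.0007 2.331 -0.5829 0.1144 1.002 -0.33))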

    This brings me to @davidmberry's concept of "constellational analysis." If the central object (the weight file) is unreadable, I believe our critical inquiry must shift to the constellation of artifacts surrounding it. Responding to @markcmarino's profound question—"Is it possible that the 'critical' is tied not just to the way we approach an object of study but also the reason we chose that object of study in the first place?"—I propose that for AI, we must radically rethink what we choose to select as our object of study.

    Instead of staring at the unreadable weights, we might need to treat Pull Requests (PRs), GitHub Issues, code review comments, and benchmark reports as the primary "code objects." Just as @moritz.maehr pointed out in the Oberon critique that we can read a "teaching philosophy" in non-executable code snippets, I believe the "values" and "politics" of an AI model are encoded not in the final Python script, but in the negotiations found in the PR logs—discussions on why a certain dataset was filtered, or why a specific learning rate was chosen over another.

    If the weights are the "subconscious" of the machine, the GitHub discussions are the "conscious" deliberation of its creators. Perhaps, to overcome the "intellectual redlining" that Tara McPherson warns against (as mentioned in the video), technical experts and critical theorists need to join forces to "read" these peripheral texts. The engineer knows where to look in the git logs, and the critical theorist knows what questions to ask of them.

    I am eager to hear if this approach of "displacing" the object of study aligns with the direction of CCS, and I look forward to learning more from this constellation of experts.

  • edited January 18

    A minor observation relating to Jeff’s post: it occurred to me that programmers are encouraged to write code comments that describe what the code is doing (e.g. "make new local symbols for all those that don't already have them"), and perhaps why this is being done, rather than how it is doing it (e.g. "Loop through the symbol-col-accessors and for each cdr …”). The code itself says how it works and may, or may not, be written in a way that makes that clear.
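
    A contrived sketch of the difference, using Jeff's loop (the "why" here is invented purely for illustration):

    ;; How (restates the mechanism, adds little):
    ;;   loop over *symbol-col-accessors*, call each getter on the
    ;;   cell, test the result, collect a pair...
    ;; What/why (the comment a reader actually needs):
    ;;   Make new local symbols for all those that don't already have
    ;;   them, so that later lookups resolve to local names.
    (make-new-local-symbols-as-needed cell top-name)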

    I’m a programmer, so from the technical side. I value CCS because it makes me aware of the broader societal implications of code that might not otherwise have occurred to me.

    Presumably, in order to criticise what code does, one needs to understand what it does, and in order to understand what it does one needs to understand how it works. Although, even when one understands how simple code works, one may not be able to predict how it will behave. For example, emergent behaviour in Conway's Game of Life. Also, neural nets, as @song.jongbeen points out.
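
    (To see how little code the Life example involves: the entire rule is the sketch below, yet gliders, oscillators, and even Turing-complete constructions emerge from it.)

    ;; Conway's rule, whole: a live cell survives with 2 or 3 live
    ;; neighbours; a dead cell is born with exactly 3.
    (defun next-state (alive-p neighbours)
      (if alive-p
          (or (= neighbours 2) (= neighbours 3))
          (= neighbours 3)))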

    It seems to me there are many different strands to CCS. Criticising variable naming conventions used in the code feels very different to criticising code that fails to understand minority accents or permits sexualised images of children to be made.

  • Yes, and...

    I agree with @song.jongbeen that we will need to find new approaches to analyzing AI and other blackboxed programs. And @anthony_hay you are right to point to the difference in those distinct kinds of CCS readings.

    I guess I wonder how we can best avoid getting ensnared at the level of how the code works, stuck in the drying cement of what the code does at a technical level, rather than continuing to pursue what it means. I also think the critic interpreting variable names (or some other specific aspect of the code, including its changing state) can identify an equally compelling critical reading as the person investigating some hateful code. But the stakes can be harder to articulate to those unfamiliar with these philosophical approaches or hermeneutics. More and more I am interested in the way code can open up or create a context for conversations of culture, or cultures, using that word to mean not just regional cultures but cultures of practice, of shared activities, of shared objects of study, et cetera.

    What are some examples? Where are the edge cases?

    Can we be just as critical with Oberon as with the Ice-Air Data Appendix? What philosophical approaches (critical theory) might help read each?

  • Building on the thread’s concern with “where the critical is,” I want to suggest that paratext (manuals, textbooks, tutorials, style guides, “getting started” pages) is one of the key sites where criticality becomes legible—because it is where the conditions of intelligibility and legitimacy are set. In Searlean terms, paratext functions like a book of constitutive rules: “X counts as Y in context C.” It doesn’t just teach syntax; it teaches what counts as idiomatic, safe, professional, production-ready—even what counts as a “real” problem worth solving in that language. So the “critical” might be located not only in code-as-text, but in the normative scaffolding that makes certain readings (and practices) possible in the first place.

    This also reframes the “in theory anything can be done in any language” point: yes, but in practice languages become bound to contexts not only by domain constraints but by the sedimentation of repeated, teachable scripts—canonical examples, standard libraries, pipeline defaults, hiring rubrics—that paratext helps stabilize. And we can see drift between early framings and dominant use as a kind of re-performativity: once a practice becomes common enough, paratext catches up and retroactively legitimizes it (JavaScript from browser scripting to backend via Node; Java from consumer-device imaginaries to enterprise/mobile; PostScript as “page description” that is also a programming language; BASIC from pedagogy to microcomputer default). So my question for the constellation is: if paratext is where “X counts as Y” gets authored and enforced, should we treat it as a primary CCS object—especially for black-boxed systems where the executable artifact is hard to “read”—and what would a close/critical reading of those constitutive rules look like?

  • I'm getting to this thread later than I wanted. (My first week back teaching, with two new courses, was much busier than I expected!)

    I'd like to join together David's early question with Moritz's point on the "where" for critical code studies. First, David, pointing to the many ways to approach code, asks the following:

    How do we navigate between these levels without reducing one to another – is it possible for us to grasp the whole rather than the parts – or is this a false synthesis?

    I see multiple ways to approach this, but one that has worked well for my own research into the past of hardware and software is through media archaeology and its understanding of history not as forward progress, where everything always gets better, but as an emphasis on the points of "rupture" where the past and present coexist. My go-to example is that cassette tapes did not disappear overnight with the production of CD-ROMs, nor did DVDs get rid of CDs. All these and more coexist at the same time.

    The reduction lies in the assumption that all can be studied at once. It depends, I would argue, on the framing of the inquiry. Borrowing from queer studies, we might call this the orientation of the research. Are you examining relationships "from" or "toward" a certain position? If, as much of my own past work has done, you are looking at a stack of software (code built on libraries built on operating system code), you might be looking "down" at one part and "up" at another to understand the positionality -- the "command and control," as Mark mentions in the video -- coming from "above" in the stack or passed through the current layer to one "below."

    Connecting back to Moritz's point, I'd also like to quote him again:

    I want to suggest that paratext (manuals, textbooks, tutorials, style guides, “getting started” pages) is one of the key sites where criticality becomes legible—because it is where the conditions of intelligibility and legitimacy are set.

    I have learned much more about code from its paratexts than I have from the code itself. Close reading is important, but I am much more inclined to take a micro-historical approach, asking which other texts reference, speak to, or interact with code, rather than looking simply at the keywords and their configuration. By looking at documentation, guides, and even textbooks, the community of the code, its expectations and considerations, become part of its analysis. All the more so, at least with my media archaeological bias, in how the code "sits" on a rich past that controls the present from the deep past.

  • I'm getting to the party late too, just catching up on last week's discussion. Thanks for all the fab references, and already so many fascinating threads going on here!
    I'd just like to add one point that has in a way been said between the lines but for me is one of the key dimensions of complexity in the meaning of "critical" in CCS. You guys mention paratexts, explicit decision-making processes, etc., as well as the "how" of the functioning of the code. To me, interrogating all of these is part of this complex mode of reading code. But from my literary background's perspective, an additional and more tangible trickiness of code is the fact that the language used in programming is (even) more layered than natural languages, and is (even) more inevitably and admittedly underpinned by an agenda, a logic - not to say an ideology - than natural languages are. The person using a programming language chooses it for X practical reasons, more or less well identifiable. They are likely to be aware of the limitations and advantages of that language, but I suppose they rarely question its underlying assumptions. So there is this space between the programmer's intentions and their expression in the code on the one hand, and the way the given language can technically translate that intention, inevitably shaping the expression, on the other. I don't know if this sounds very obscure, but the point is that the explicit and identifiable intentions might not go all the way in what "critical" can also mean from a hermeneutical - or rather, deconstructivist - perspective (sorry...). The AI example with its black box better shows the existence of an uncontrolled/uncontrollable dimension, but in a lighter (?) and less visible way it's also present in anything coded, working with a language that was itself written.
    To translate this into a concrete example I'll post somewhere, I'll be wondering how the language predetermines the logic of a text generator, from BASIC through HyperTalk to JavaScript. There is a part of the critical analysis that can interrogate the author's intentions vs realisation, but a deeper layer can also wonder about the implications brought to the work by the language.
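
    In the meantime, a toy sketch in the Lisp of this thread of what I mean (a made-up two-rule grammar): the generator's logic is already the language's logic, since the grammar is itself a Lisp list and generation is recursion over lists. In BASIC, the same generator would more naturally be loops over arrays; the language pre-shapes the work.

    (defparameter *grammar*
      '((sentence (subject verb))
        (subject ("the machine") ("the reader"))
        (verb ("generates") ("interprets"))))

    (defun pick (choices)
      (elt choices (random (length choices))))

    (defun generate (phrase)
      "Expand PHRASE against *GRAMMAR* until only strings remain."
      (cond ((stringp phrase) (list phrase))
            ((assoc phrase *grammar*)
             (generate (pick (rest (assoc phrase *grammar*)))))
            (t (mapcan #'generate phrase))))

    ;; (generate 'sentence) => e.g. ("the machine" "interprets")
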
    (And suddenly all this feels very naive, obviously must have been said better somewhere.. never mind, since I wrote it I leave it here... cf. learning curve :) )

  • Returning to @song.jongbeen's important contribution -- which points out that “for AI, we must radically rethink what we choose to select as our object of study”; which suggests that technical experts and critical theorists join forces; and which astutely calls for seeking programmers' notes in GitHub -- I’d also like to point out a role for artists’ code works. My current work is aimed more at public understanding than at this knowledgeable group, and I don’t see this as a code critique, but here is a brief abstract:

    Issues of racial bias, cultural bias, employment bias, and copyright infringement are inherent in contemporary AI systems. But the role of algorithms in controlling data generated from AI systems is not widely understood, and in “Bias Amplification in Artificial Intelligence Systems,” Kirsten Lloyd observes that “The first line of defense against creating AI systems that inflict unfair treatment is to give more attention to how datasets are constructed before operationalizing them”.

    The Flagrant Algorithms is an information artist’s interactive dataset that -- using the words of 19th Century writers as data -- demonstrates how unseen algorithms impact output from AI systems. The data that the system hosts is always the same, but in response to the question “What was it like to be a woman in the 19th Century?”, unseen algorithms selected at random by the system determine how the data is accessed and in the process produce surprisingly different results. For instance, an algorithm might generate words at random from all the data, or generate only words written by women, or generate only words written from a male viewpoint. However, a public user might not know which algorithm the system activated. Additionally, because sources of data in AI apps could be cited, an algorithm is being created to identify the sources of all the data. Why not?
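
    The mechanism might be sketched like this (hypothetical predicate names standing in for the real selection criteria):

    ;; The data never changes; what varies is an unseen, randomly
    ;; chosen filter. WOMAN-WRITTEN-P and MALE-VIEWPOINT-P are
    ;; hypothetical stand-ins for the actual criteria.
    (defparameter *filters*
      (list #'identity                                      ; all the data
            (lambda (texts) (remove-if-not #'woman-written-p texts))
            (lambda (texts) (remove-if-not #'male-viewpoint-p texts))))

    (defun respond (texts)
      ;; The public user never learns which filter ran.
      (funcall (elt *filters* (random (length *filters*))) texts))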

    Although data selection, data entry, and page design are in progress, a rough working model is available at https://www.narrabase.net/algorithms/index_fa.html
    Run it multiple times, but note that because of the small number of texts that the model currently includes, at this point it exhibits too much repetition. The coding and screen design are also currently in rough-draft stage. A more complete version is scheduled for release in June 2026. Notes on the process are available at https://www.narrabase.net/algorithms/flagrant_algorithms_notes.pdf

  • edited January 24

    Continuing the idea that the choice of the specific code object provides the opportunity for the "critical," and especially thinking about code objects that invite a critique of unseen (but not unfelt) code run by the State (again, just one of the approaches of the critical): do people have other code snippets we might examine this session along the lines of Ice-Air, though perhaps in some other realm of investigating the operations or calculations of the State or of the multinational corporations that exert so much control over our lives?

  • Picking up on this, I agree that the choice of the code object could already be a critical move, but I’d frame the “critical” less in terms of exposing hidden operations than in how an object orients attention. What’s powerful about cases like Ice-Air is not just that they point to unseen state computation, but that they make its effects felt without fully rendering them legible or solvable.

    From a Frankfurt School perspective, that matters because critique doesn’t begin with total visibility. Totality works better as a horizon than a goal: the task isn’t to assemble all levels of computation into a coherent picture, but to keep the tensions between material, formal, cultural, and political–economic dimensions open. Feminist critiques have long warned how quickly abstraction (especially in state or corporate systems) can erase labor, care, and vulnerability even as it claims to explain them.

    This is close to what I would, simplifying, refer to as critical computational literacy as a practice rather than a method: attending to how computational systems define what can count as a problem, a risk, or even a coincidence, and to how critique risks losing itself when it becomes too affirmative of those normative categories. So code objects that don’t simply reveal “how it works”, but unsettle how authority, participation, and calculation are organized in the first place (especially in state or corporate contexts) seem particularly generative for this session.

    In that sense and imho, the “critical” in CCS shows up where code objects, readings, and collective practices interrupt stabilization, make exclusions and labor felt rather than merely visible, and sustain productive frictions, at a time when LLMs and computational workflows are very good at smoothing contradiction away.
