Thank you very much to Mark, Jeremy and Lyr for the invitation to discuss my book with you. Moral Codes is subtitled “Designing alternatives to AI”, but people from critical code studies will certainly have noticed the double meaning of the word “Codes” in the title!
The agenda of the book, as I occasionally observe to colleagues in computer science, is to argue that the world probably needs less AI, and better programming languages. I wrote the book to try and persuade a wider audience (including policy makers) of this, observing that many of the most intractable problems of AI - explainability, alignment, controllability - are precisely the established priorities of programming language designers.
I draw on my own long experience of designing end-user programming languages, especially those that extend the spreadsheet paradigm or support creative improvisation. These are often diagrammatic, or data-centric, or use direct manipulation, meaning that the “code” they allow is not the kind that easily invites close reading. Which is not to say that close reading would be pointless - I suspect many spreadsheets are crying out for it!
A deeper, but probably subtler, theme of the book reflects on the attention economy of AI, of social media, surveillance capitalism, enshittification and all that. I argue that the investment of attention is the fundamental unit of human consciousness, and that the drive to make machines conscious reflects a systematic devaluing of our own consciousness, as a consequence of the technofeudal attention economy (similar to Pasquinelli’s labour theory of AI). From this perspective, programming is fundamental to the exertion of individual personhood. As Geoff Cox and Winnie Soon said, “program, or be programmed”.
The last part of the book speculates on where vibe coding may take us, through an explanation of basic craft principles in software engineering. But the book was originally written before the launch of ChatGPT, and published before Karpathy coined “vibe coding”, so those chapters are quite a hostage to fortune. There is an extended historical centre, in which I discuss the evolution of the GUI out of programming innovations of Alan Kay, Ivan Sutherland, and others who all saw their work as a kind of programming. For me, critical attention to all of these technologies benefits from the ability to view them as notational systems, each having their own kinds of code-like properties.
So overall, I would not advocate Moral Codes as a text in critical code studies, because it is not really doing the same thing as CCS. Nevertheless, the arguments are likely to be familiar to students of CCS, and I hope they offer some value to the field, by reminding us of the ways in which code may continue to be important, even if we become obliged to access it at second hand via a chat dialog.
Comments
Thank you, @AlanBlackwell, for this post. Although you may not categorize your book as such, I want to point out some Critical Code Studies content in Moral Codes, starting with Chapter 13.
Chapter 13 talks of making code less WEIRD: Western, Educated, Industrialized, Rich, and Democratic. That enterprise, tied to your continued research in Indigenous and non-Western programming, fits quite well with CCS's critical attention to postcolonial and Indigenous languages.
We discussed that topic in 2020 in a thread led by Jon Corbett, Outi Laiti, Jason Edward Lewis, and Daniel Temkin. Jon Corbett, whom you mention in that chapter, is the creator of Cree#, a programming language based in Cree and Métis language and culture.
I will be interested to see where these questions you bring take you. I am glad you have started a related thread about your research in that area for our working group.
I was very happy to read your passage:
I am interested not only in this idea of new languages that embody world views and practices beyond the WEIRD but also how those world views make us look at WEIRD code in new ways. I am interested to hear more on what you have found so far.
But I do want to also take up your central claim that "source code... becomes moral when it articulates questions of good or bad." I think that the "critical" of Critical Code Studies can, through interpretation and philosophical hermeneutics, suggest ways that questions of good and bad, or perhaps more broadly issues of equity, human rights and dignity, environmental sustainability (to name a few), abound in code far beyond the projects that explicitly or overtly operate in moral zones. Though that is just one of the realms of critical code studies, because hermeneutics can lead to all sorts of interpretations, it is, I believe, excuse the pun, a critical one.
@AlanBlackwell Your book, from what I have read so far, frames a moral imperative around teaching and learning programming over a reliance on AI. To what extent does that apply to using AI for assistance in understanding and interpreting code?
Yes, AI tools for assistance in understanding and interpreting code are an important and fast-advancing area of research.
Rapid development in AI-assisted coding tools is probably the aspect of the book that is in most danger of being superseded by technical developments. However, I did anticipate this in the book, for example in my observation that programmers usually ensure that any new software capability is applied first to improving their own tools and quality of life, and only secondarily to the concerns of other people!
There is a huge amount of current work in AI-assisted coding. Established research in the human factors of programming has long been clear that reading and understanding code, rather than writing it, is the largest cost component of software development (the phrase 'technical debt' alludes to this).
Despite the thrill and excitement of vibe coding (an experiential element that I would directly relate to the live coding movement), use of this approach in real software engineering contexts would be a nightmare of technical debt and unreadable code bases. So is it a good idea to use AI to generate unreadable code, then use more AI to read it? Seems unlikely, from an engineering perspective. When I argue this out with my CS students, who suggest that we'll only need natural language rather than special notations in future, I ask whether they think it would be better to do mathematics with natural language than with algebra.
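The algebra-versus-natural-language contrast can be made concrete by stating one and the same fact in both notations (the example is mine, not from the book):

```latex
% In natural language:
%   "The square on the hypotenuse of a right triangle is equal
%    to the sum of the squares on the other two sides."
% In algebraic notation:
\[ c^2 = a^2 + b^2 \]
% The notation is not just shorter: it is unambiguous and manipulable.
% Solving for the hypotenuse is a single mechanical rewriting step:
\[ c = \sqrt{a^2 + b^2} \]
```

Doing that rewriting step on the natural-language version would require re-deriving the meaning of the sentence each time, which is exactly the burden that special notations exist to remove.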
Hello, pleasure to meet you. My name is Brian Arechiga. I’m currently a 5th year in USC’s PhD program in English. The research I do is closely related to CCS, and I am really interested in your take on A.I. In my own research, I have found the development of A.I. to be tied to the logic of late capitalism. There’s a particular Ernest Mandel quote that I believe anticipates the current drive for tech companies to develop A.I.:
Writing in 1975, Mandel was not thinking of A.I., of course; however, I feel like he points to a trend that has led us to the current LLMs that are commercially available today. Although A.I. is not totally ‘useless,’ the way that many people interact with it is, more so when it comes to its application in creative pursuits (I’m thinking of the endless A.I. slop on TikTok/Youtube). Of course, the ‘harm’ of A.I. is always on people’s minds: its ability to manipulate photos and videos, the damage to the environment, the loss of work, the sci-fi fantasy of an A.I. apocalypse. There are many ways in which A.I. could ‘liberate’ mankind from repetitive work; however, the logic of capitalism will never allow that to occur since it needs a labor force to exploit. Therefore, we see A.I. becoming prominent in creative labor. Reading your first chapter, you echo this sentiment:
In your book, you mention that you spent time as an AI engineer and researcher. I am curious if these concerns were ever discussed in corporate (and/or) academic A.I. research? Also, I admire your argument that we need better programming languages. Even though computer programming is the domain of “immature men” as you humorously point to in Chapter 13, I also wonder if you have any insight into how corporate interests have shaped the languages that we have today - the ones that fall short?
Hi! Thank you for this post. Your book spoke to me a lot as a creative writer--I’ve been rethinking what I thought I knew about writing as a craft, especially as I’m dipping my toes into collaborating with LLMs in my writing.
Your argument in Chapter 12 on machine creativity as but “a measure of how much we are surprised by what the machine does” is intriguing, and it reminds me of Hugh Kenner and Joseph O’Rourke’s Pascal program “Travesty” in 1984, specifically the article published in Byte Magazine where they probe into the nature of creativity and an author’s signature style of writing through the implications of language statistics:
(Zach Whalen contributed an insightful code critique in the 2020 working group, which I discovered while researching Lillian-Yvonne Bertram’s Travesty Generator, which takes its title from the same program.)
“Travesty”’s output is often surprising in how closely it can mimic the unique voice and texture of a particular writer, which Kenner and O’Rourke attribute not to any inherent creative ability of the algorithm, but simply to a set of the author’s unconscious writing patterns that are governed by statistical probability. The output from “Travesty”, from a distance, greatly resembles the original text in its form and style, but a closer inspection reveals that it really is not saying much after all, an observation that I also identify in AI workslop--the professional, cautious, and balanced-looking generated work that is really just word rot.
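The statistical mechanism behind “Travesty” (order-n character frequencies) is simple enough to sketch in a few lines. The following Python is my own minimal reconstruction of the idea, not Kenner and O’Rourke’s Pascal code:

```python
import random
from collections import defaultdict

def build_model(text, n=3):
    """Map each length-n character context to the characters that follow it
    in the source text, preserving their observed frequencies."""
    model = defaultdict(list)
    for i in range(len(text) - n):
        model[text[i:i + n]].append(text[i + n])
    return model

def travesty(text, n=3, length=60, seed=0):
    """Generate text that reproduces the source's order-n character
    statistics: locally plausible style, no meaning behind it."""
    rng = random.Random(seed)
    model = build_model(text, n)
    context = text[:n]
    out = list(context)
    for _ in range(length):
        followers = model.get(context)
        if not followers:                # dead end: restart from the opening
            context = text[:n]
            continue
        ch = rng.choice(followers)       # sample in proportion to frequency
        out.append(ch)
        context = context[1:] + ch       # slide the window forward
    return "".join(out)
```

Run on a paragraph of any author, the output echoes their spelling habits, word lengths, and cadence, while saying nothing at all; raising n makes the mimicry eerier, which is the effect Kenner and O’Rourke explored.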
Of course, Kenner and O’Rourke are aware of the nonsensical nature of “Travesty”’s output (hence its name), but with how good LLMs are getting at doing what “Travesty” cannot, I can’t help but wonder what the point of creative labor is at all, if ChatGPT can write me a rather decent poem at just the click of a button. In the chapter, you distinguish creative surprise from random noise based on the two factors of “somebody who wants to tell us something worthwhile” and “our expectation of what we are going to hear”, and I’m interested in this tension between authorial intent and audience expectation. I’m not a computer programmer by any means and am still very much a beginner in coding, so allow me to bring in examples from more familiar territory: I’m reminded of M. NourbeSe Philip’s Zong! which was written using words only from the Gregson v. Gilbert legal document, and in which many entries in the book are almost nonsensical in how impossible it is to read them due to how Philip has dismembered the original text. It actively resists telling on the author’s behalf, and meaning-making on the reader’s. This is what Philip insists upon regarding the nature of her poem:
Clearly, in the case of Zong!, Philip wants “to tell us something worthwhile”, and the reader does have a certain “expectation of what [they] are going to hear”, but without Philip’s explanation of her writing and the paratexts, the text’s randomness can seem like white noise with no meaningful message to be anticipated. I doubt anyone would argue against the literary and creative value behind Philip’s work, but the point of this example is that it complicates the aforementioned factors of creativity--say, if a machine is able to generate a piece of writing on a similar premise as Zong!, how, then, might we evaluate its creativity? My initial thoughts are that humans undergo a process of creative labor that sits at the heart of what we deem “creative” in human endeavors. This “creative labor” consists of a devotion to the trial and error and incessant stumbling that is involved in all creative pursuits, including of course coding creatively. But I wonder, beyond the craft of coding, if creativity can be fully accessible to machines if we take the “surprise” of Zong! as a mark of creativity. Again, these are just initial thoughts! I’d be curious to hear if anyone has more to add. Thank you again to @AlanBlackwell for sharing your work with us.
I’ve only had a chance to dip into the book and I look forward to reading it in full. The themes are very interesting to me. It brought to mind Tim Berners-Lee’s idea that people should own their data. I can relate to the chapter The Craft of Coding, though for me the hardest part was not naming things (I think you resisted mentioning the old saw about there being two hard things in computer science) but the sheer complexity of the world of which my code was to be a part. The hard part was not creating my own code, but understanding enough about everyone else’s. Also, I can still remember discovering that I could use the computer as a tool to automatically test some aspects of my code and how useful and important that was.
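The discovery described here, letting the computer check its own code, can start as small as a few assertions run automatically. A minimal Python sketch (the `slugify` function is purely illustrative, not from the book or this comment):

```python
def slugify(title):
    """Turn a post title into a URL-friendly slug."""
    return "-".join(title.lower().split())

# The "discovery": a few lines that make the machine verify the code for us,
# every time we run them, instead of us eyeballing the output by hand.
def test_slugify():
    assert slugify("Moral Codes") == "moral-codes"
    assert slugify("  Designing   Alternatives to AI ") == "designing-alternatives-to-ai"
    assert slugify("") == ""

test_slugify()  # raises AssertionError if any check fails
```

The same habit scales up from this toy to full test suites, and it is one of the craft practices that makes everyone else’s code, the hard part mentioned above, safer to build on.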
As an aside, although such things as good names are important, for me whitespace is probably the most important thing that makes programs understandable. (I work mostly with C++.)
If I understood correctly, you say that AI cannot be creative because, unlike humans, it lacks intent. I’d love to really understand why intent is uniquely human (or perhaps unique to living things). I assumed intent was a consequence of having a body that has needs, and I don’t understand why something analogous cannot also apply to a machine. My intention in writing this is presumably buried somewhere in my stone age brain anatomy/chemistry and if I had to try to rationalise it I would use notions like play, curiosity, the feeling of satisfaction that comes with finding out and understanding, or even just desire for status or belonging or something like that. It's sort of magical that DNA has assembled molecules into structures that have intentions. One could say that humans do only what their DNA programs them to do, but that would be like saying computers do only what they are programmed to do, i.e. both true and not true.
Finally, dipping into your book prompted these thoughts: I consider myself a stochastic parrot with the benefit of four billion years of evolution and a quadrillion synapses (assuming I have a full complement) and a life lived in the current era. I think art is putting existing things together. If there was anything truly original it would be incomprehensible to us unless we could relate it to something we already knew. I often inadvertently talk rubbish, believing I understand or remember something. I often write code to find out what I’m thinking. I think play is good for adults as well as children. /slop