

Week 4: Code Critique - Recurrent Imaginaries

+++++
Adapted from the last two chapters of the book Aesthetic Programming: A Handbook of Software Studies. - Winnie Soon & Geoff Cox
+++++

We offer the bonus chapter of the book Aesthetic Programming: A Handbook of Software Studies in the form of an "Afterword: Recurrent Imaginaries," a machine-generated chapter based on the contents of the book, on what has been learnt, and what might be unlearnt. Chapter 10, "Machine Unlearning," contains the sample code, adapted from ml5.js's CharRNN, ported by Cristóbal Valenzuela with additional contributions from Memo Akten in 2018. Instead of using the pre-trained model provided by ml5.js, which was trained on the literary works of Virginia Woolf, we offer another pre-trained model based on all the chapters of this book. In this way our example learns from the previous chapters and generates a new text in the generalized style of the others. Of course there is a process of reduction here that exemplifies some of the political issues with regard to knowledge production.

The training process uses a "Recurrent Neural Network" (RNN) with "Long Short-Term Memory" (LSTM) units, which analyze and model sequential data character by character. Both are useful for character-by-character training because the order and the context of the text matter if the generated sentences are to make sense to human readers (this relates to the field of "natural language processing"). This recurrent type of neural network can capture long-term dependencies in a corpus and so make sense of its text patterns over many iterations of the training process, using the markdown of each chapter, its characters and symbols, as raw data. What we end up with more or less makes sense, in its processing of prose as well as source code, image links, captions, and so on. Most importantly, the machine-generated text of the bonus chapter provides an insight into how a machine learns from our book, in contrast to what the readers of the book might have learnt. Here we return to one of the main objectives of the book: exploring some of the similarities and differences between human and machine reading and writing, what we refer to as aesthetic programming.
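
The training itself happens outside the browser (the model loaded in the sketch below is pre-trained), but the basic preparation step is easy to sketch. The following is a minimal illustration rather than the actual training code, with a single sentence standing in for the concatenated markdown of all chapters:

// Minimal sketch of character-level training data preparation (illustrative only).
// Every position in the corpus becomes a fixed-length input sequence plus the
// single character the network learns to predict next.
const corpus = "As you read a book word by word and page by page, you participate in its creation";
const seqLength = 20;  // how many characters of context the RNN sees at once

const pairs = [];
for (let i = 0; i + seqLength < corpus.length; i++) {
  pairs.push({
    input: corpus.slice(i, i + seqLength),  // e.g. "As you read a book w"
    target: corpus.charAt(i + seqLength)    // the next character, here "o"
  });
}
// Training then means adjusting the network so that each input sequence
// makes its target character the most likely next prediction.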


We are interested in how this book might open up recurrent imaginaries for aesthetic programming, in the form of further iterations, and additions to chapters by others. We are inspired by Ursula K. Le Guin and would like to delve into the imaginaries of reading, writing, coding and thinking: "As you read a book word by word and page by page, you participate in its creation, just as a cellist playing a Bach suite participates, note by note, in the creation, the coming-to-be, the existence, of the music. And, as you read and re-read, the book of course participates in the creation of you, your thoughts and feelings, […] the ongoing work, the present act of creation, is a collaboration by the words that stand on the page and the eyes that read them" (1977-1978).

let charRNN;
let textInput;
let lengthSlider;
let tempSlider;
let button;
let runningInference = false;

function setup() {
  noCanvas();
  // Create the LSTM Generator passing it the model directory
  charRNN = ml5.charRNN('models/AP_book/', modelReady);

  // Grab the DOM elements
  textInput = select('#textInput');
  lengthSlider = select('#lenSlider');
  tempSlider = select('#tempSlider');
  button = select('#generate');

  // DOM element events
  button.mousePressed(generate);
  lengthSlider.input(updateSliders);
  tempSlider.input(updateSliders);
}

// Update the slider values
function updateSliders() {
  select('#length').html(lengthSlider.value());
  select('#temperature').html(tempSlider.value());
}

function modelReady() {
  select('#status').html('Model Loaded');
}

// Generate new text
function generate() {
  // prevent starting inference if we've already started another instance
  if (!runningInference) {
    runningInference = true;

    // Update the status log
    select('#status').html('Generating...');

    // Grab the original text
    let txt = textInput.value();
    // Check if there's something to send
    if (txt.length > 0) {
      // This is what the LSTM generator needs
      // Seed text, temperature, length to outputs
      let data = {
        seed: txt,
        temperature: tempSlider.value(),
        length: lengthSlider.value()
      };

      // Generate text with the charRNN
      charRNN.generate(data, gotData);

      // When it's done
      function gotData(err, result) {
        if (err) {
          console.log("error: " + err);
          // Reset the flag so generate() can be tried again after an error
          runningInference = false;
        } else {
          select('#status').html('Ready!');
          select('#result').html(txt + result.sample);
          runningInference = false;
        }
      }
    } else {
      // Nothing to send; release the flag so the button works again
      runningInference = false;
    }
  }
}

Code:
* run the code: https://aesthetic-programming.gitlab.io/book/p5_SampleCode/ch10_MachineUnlearning/
* source code: https://gitlab.com/aesthetic-programming/book/-/blob/master/public/p5_SampleCode/ch10_MachineUnlearning/
* Afterword: Recurrent imaginaries: https://aesthetic-programming.net/pages/afterword-recurrent-imaginaries.html
* Machine Unlearning: https://aesthetic-programming.net/pages/10-machine-unlearning.html

Here is one of the more interesting sections of the generated bonus chapter:

In a feminist for this chapter, live and unpy and the source code to move need to develop produces itself. Chapter 8, “Que(e)ry data,” trans. Face) has a class, a “smart” that is returns a closer loops are developed a modifying code with the code or solve an auga was partly rendering to the technical intelligence as a form of software and new emoji stairs are requires itself — to further identify able to mered model in the dataset by noats in order to train how cultural and powerful writes injustices that look with archificational logics,19 declared the network for political and changing that our deaden, that coding it operative file, you learn to Chinister and syntax continue to show the credentials, and distributed mobil purposes, and are contingencies which properties and behaviors, made server libraries and adding the Universals us too and happens entries that we do not just automaticulas on focusing of this sense of hiding up the present in generator form from software and originally deeply encourage the execution, and commerciculusing of learning to develop because the function draw(), the program and ellipses is a new syntax with other syntaxes from the curated by specifying compring try was the web cam tracker practices and conceptual thinking to use.

Discussion Questions:

  1. Can a machine respond convincingly to an input with an output similar to a human — or more precisely — can it mimic rational thinking?
  2. How might combinations of free and open source ethics, and intersectional feminist/queer politics open up ways of learning and unlearning?
  3. When it comes to the book/source as a whole, which alternative knowledge and aesthetic practices emerge as a consequence?

Comments

  • Thank you to @siusoon and @geoffcox for this glimpse into the text generating process! I have been curious in particular about the process for training the model on the entire book in order to generate the new chapter. I shyly reached out to Winnie offline, and they agreed we could share more details here...

    from Winnie:

    The pre-trained model is included in the source code link. Yes, I can share the ml5 resource regarding the training (it is in the book too) and other references to RNNs. Happy to discuss that too, as from the beginning we were thinking about what training is, what learning is, and what the result of training and learning is. I guess one of the most interesting things to think about is whether the machine is training on code or training on text? Of course it is a combination of both (if we consider markdown as a form of code), but there is actually just a thin line between them, and that mix is reproduced in the generated text, which makes the text difficult to understand in a natural language sense... You need another way of seeing/reading to make sense of it. Another interesting aspect would be the difference between word and character training, and how that may open another perspective on looking into code/text.

    from me:

    Thanks for your comment. The resources for training, and a discussion around it both technically and theoretically, would be fantastic to share too! I see (line 11) where you bring your model in to be run, and I see the model in your files, but I didn't see the code that trained the model. [...] I'm curious personally because I've only gotten as far as using pre-trained models but have never successfully trained my own (I always get a lot of "eeee eeee e eeeee"s). I find there's so much interest around this kind of work, but even when I'm doing it, it can feel shrouded in mystery, so I really appreciate these discussions.

    from Winnie:

    [...] "We have used the free and open source program Text Predictor developed by Greg Surma in Python [...] See https://github.com/gsurma/text_predictor."  With this one, you can do many modification with the code in terms of iterations, and show the sample predicted text for each iteration starting from 0. Then you can really see how the text transforms from garbage to something (perhaps) readable.

    You definitely need to try the link(s) above. Some other resources that I have documented in the past: http://softwarestudies.projects.cavi.au.dk/index.php/Machine_Learning_Experiments (there is a section with a technical explanation of ML)

    I found that training with your own data gives more insight into the process. I think the reason is that I understand the context better, and the blind spots of the data source, and this helps in thinking about the relations. Also, each iteration of the result in the terminal gives you a sense of how much training is needed (but there is also a design decision here, or we may call it bias if you wish). I find the negotiation between readable and not readable very intriguing.

    Thank you @siusoon for this additional context on the delicate process of training a model for text generation from scratch. I'm curious whether others have worked with model training or transfer learning in their experiments? And what does looking at their code with an eye to Critical Code Studies help us consider about how text is produced and consumed? For a concrete sense of the word-versus-character difference Winnie describes, a small sketch follows below.
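
    The same line of the sample code, tokenized by character and then by word (an illustration only, not code from the book):

    // Character-level and word-level views of one line of the sketch.
    const line = "let charRNN;";
    const charTokens = line.split("");     // ["l", "e", "t", " ", "c", "h", "a", "r", "R", "N", "N", ";"]
    const wordTokens = line.split(/\s+/);  // ["let", "charRNN;"]
    console.log(charTokens.length, wordTokens.length);  // 12 2
    // A character-level model sees (and can reproduce) marks such as ";",
    // one reason the generated afterword mixes prose with code-like fragments;
    // a word-level model would treat "charRNN;" as a single opaque token.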

  • edited February 2022

    @SarahCiston thanks so much for reaching out. I have seen your chapter "talking back," where you included building your own chatbot example, in which you used ListTrainer. I wonder how you found the process?

  • That is surprisingly functional, both in the code and in the comments. I like that seed needs temperature to grow. I see that temperature ranges from 0 to 1.0. Maybe it could max out at 1.5 instead? If you pay a little bit of carbon quota (in crypto, of course) you could have 2?

    What's the interaction between the seed and temperature? I can imagine (from programming knowledge), but what would be the other interpretations? In a kind of hermeneutical "wrong answers only" mode.

    An association about AI generation of books, of seeds, and (rising) temperatures: I somehow remember there was a problem, or rather "a problem" if you are nihilistically/surrealistically bent, that Amazon¹ was spammed with robo-generated publications when ebooks and the Kindle were all the hype? A strange new kind of spamming.

    Sincerely I never link to sources like this but it's justified this time:

    ¹ the megacorp, not the beautiful forest

  • After a little bit of JavaScript UI hacking, I got the following result with temperature at 1.5 degrees:
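
    One way to do this kind of UI hack (a minimal sketch, not necessarily what was done here; the #tempSlider id is taken from the sketch above, so the live page's markup may differ):

    // Raise the ceiling of the temperature slider so values above 1.0 can be requested;
    // the sketch simply reads whatever value the slider holds when "generate" is pressed.
    const slider = document.getElementById('tempSlider');
    slider.max = 1.5;
    slider.step = 0.01;
    slider.value = 1.5;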

  • edited February 2022

    I don't dare to try 2 or 3 degrees ... who knows what would happen to Amazon. Curiosity killed the cat, even the mighty Amazon jaguar...

  • Thanks for asking about the ListTrainer process, @siusoon
    It was interesting working with a premade tool like chatterbot which tries to provide a more immediate conversation, after building my own (not quite) chatbot to interact with the reddit community more asynchronously. Folks can try out the chatterbot example here if you like

    I think the most striking part of the experience with tools like chatterbot for me is how much sample input text is required, how the gaps are revealed in the illusion of a smooth experience. That it really is just a list of pre(human)made phrases matching up to a bunch of best-guesses. Takes the mystery and magic out in a big way, but also helps imagine what other modes might be possible, what other kinds of exchanges too.

    As I consider version 2.0 of ladymouth, I'm definitely trying to consider quite how much and which kinds of machine learning should (and shouldn't) be involved, and how much it shapes the design vs how much the bot I imagine might shape the choices of ML I could use. In these senses, I really appreciate the careful attention which you've given to laying bare your processes with machine learning and text generation here.

  • edited February 2022

    @SarahCiston  Thanks for your thoughts on the training aspect of the book. More conceptually, we also tried to highlight how training/learning operates across pedagogical and technical registers. Machine learning is of course founded on its comparison with human learning (especially in child development), but what other kinds of learning might be imagined (beyond what Freire would call the banking model of education)? We say something along these lines in the book too. So with the playful final chapter - aside from getting it to resemble meaningful prose - we try to point to the power relations between the learner and the source of external authority (the teacher, the book, etc.). We thought it important to try to understand the methods by which computers are trained, and the kinds of knowledge the computer acquires, in contrast to what human learners acquire through reading a book such as ours.

  • edited February 2022

    @Mace.Ojala Yay, I think having the temp at 1.5 or above may help to make apparent the notion of learning in relation to control, randomness and (un)predictability (pedagogically speaking :smile: )
