
Code Critique: Artistic Convergence, AI + Critical Code + Critical Data Studies

Title: Artistic Convergence, AI + Critical Code + Critical Data Studies
Author: Imani Cooper
Language: No specific code, just an invitation to think with me on AI, Code, and Data :)

Hi all, I wanted to start a thread to invite you to consider the role of code in contemporary gallery/museum spaces as it is being used to further AI innovation.

Background info––

I am currently a PhD student, and I study code and algorithms as they intersect with experimental writing and creative technologies that center notions of ancestral knowledge, movement, and self-becoming within black diasporas. My dissertation project examines five artistic projects that convey a genealogy of creative and critical approaches to data (both analog and digital) through a politics of gender and race. The analysis ends with a case study of a current art project using data- and algorithm-driven technology, specifically recurrent neural networks (RNNs), to make an AI sculpture engendered through data on three generations of black women’s experiences. In short, I encounter code most often in galleries, museums, artists’ studios, and creative community-based workshops. The code critique I am posting concerns an art exhibition I attended at the Barbican Centre in London called "AI: More than Human". While I initially contemplated my experience from a Critical Data Studies perspective, I am most certainly interested in how a Critical Code Studies perspective can advance this line of thought!
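
For readers who have not worked with RNNs directly, a minimal sketch may help ground the discussion: a character-level recurrent network that learns to continue text from a training corpus, which is the general family of technique behind conversational AI sculptures. To be clear, this is not the code behind "Not The Only One" (which, to my knowledge, is unpublished); the toy corpus, architecture choices, and hyperparameters below are all illustrative assumptions.

```python
# Illustrative sketch only: a character-level recurrent language model.
# Corpus, architecture, and hyperparameters are invented; this is NOT
# the code behind any work discussed in this thread.
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)  # recurrent pass over the sequence
        return self.head(out), h             # logits over the next character

# Toy corpus standing in for interview transcripts (hypothetical).
text = "we carry the stories of the women who came before us. "
stoi = {c: i for i, c in enumerate(sorted(set(text)))}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([[stoi[c] for c in text]])

model = CharRNN(len(stoi))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):  # train to predict each next character
    logits, _ = model(data[:, :-1])
    loss = loss_fn(logits.reshape(-1, len(stoi)), data[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Generate a continuation one character at a time (greedy, for brevity).
idx, h = data[:, :1], None
generated = []
for _ in range(40):
    logits, h = model(idx, h)
    idx = logits[:, -1].argmax(dim=-1, keepdim=True)
    generated.append(itos[idx.item()])
print(text[0] + "".join(generated))
```

The point for this thread is that such a model is nothing but its training text: whatever participants contribute, the network echoes back, which is precisely why the provenance of, and consent behind, that corpus matter so much.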

Code Critique / Inquiry––

In the exhibition AI: More than Human (2019), interactive digital art captivated the audience. Imagine a large, dimly lit, crescent-shaped room. Inside, a mild discordant hum of central processing units (CPUs) and flashing touchscreens with various visual materials were among the elements that characterized the breadth of artwork on display. An array of works by corporations and artists was presented, including Affectiva (the self-described leader in "Human Perception AI"), Mario Klingemann’s "Circuit Training", and Sony CSL’s "Kreyon City". For these select works (though not only these), the exhibition was a site to further train the neural networks of AI systems and to grow large databases, with participants as data. I went to this exhibition to examine the speech patterns of “Not The Only One” by Stephanie Dinkins (after its latest training session) but was struck by the exhibition as a whole. I read the moments of audience participation and their data contributions (some known, some unknown) as a transformation of the Barbican gallery space into a kind of information research lab, and I am curious about this approach to data collection for AI and this use of code.
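
Since none of the exhibited code is public, here is a purely hypothetical sketch of what installation-side collection might look like, just to make the "information research lab" reading concrete. Every field name and the consent flag are my own inventions, not anything the Barbican or these artists are known to have used.

```python
# Hypothetical sketch of installation-side data collection: each audience
# interaction becomes a training record. All names here are invented.
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class Interaction:
    session_id: str   # anonymous per-visitor token
    utterance: str    # what the participant said or typed
    consented: bool   # in practice, often implicit or absent entirely

def log_interaction(utterance: str, consented: bool,
                    path: str = "corpus.jsonl") -> None:
    """Append one participant interaction to the training corpus."""
    record = Interaction(uuid.uuid4().hex, utterance, consented)
    with open(path, "a") as f:
        f.write(json.dumps({**asdict(record), "ts": time.time()}) + "\n")

log_interaction("what do you remember about your grandmother?", consented=False)
```

Whether `consented` defaults to opt-in or opt-out, and whether the participant ever sees that flag at all, is exactly where the ethics of this mode of collection gets decided, often silently, in code.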

This line of inquiry is part of a larger ongoing thought process, but I would love to incite a generative discussion around a few questions. What is the role of code and data in the twenty-first-century gallery/museum space (especially in the collaboration/juxtaposition of corporate and independent artists’ uses)? How is code being enacted in this nuanced mode of artist-driven data collection? Would programming in Indigenous languages change the ethical terms of this data collection (especially for underserved communities)?

Feel free to add to these questions and/or carry them further!

Comments

  • Regardless of the means of collection, I believe it is unethical to train an AI on human data without the consent of the humans in question. While AI art exhibits are cool and I understand creators wanting to reach a diverse group of people, this reminds me of Google's attempt to fix the bias problem in its facial recognition by sending hired contractors to collect pictures of people's faces without the subjects knowing what the images were for (or even that their faces were being recorded; some thought they were just playing a game, and others couldn't properly consent at all). Here's a good podcast episode about that. Using artistic exhibitions to train an AI also seems like a way to limit access to those exhibits for people who don't want their data collected.

    As in the Google case, I think the focus on Indigenous languages might make it even more unethical. @joncorbett explained how some communities want to keep their languages strictly to themselves. This type of experiment could lead to the exploitation of those languages, or to their corruption by outside data.

    These are just my thoughts! I think this is a very interesting set of questions.

  • @KalilaShapiro said:
    Regardless of the means of collection, I believe it is unethical to train an AI on human data without the consent of the humans in question.

    I agree that artificial learning without consent is unethical in some cases. However, as a blanket statement this concerns me. Privacy, intellectual property, and even the sacred are all good values, but they are only some values; they should be balanced against other values, such as the common good or justice.

    For example, in cases of power differentials, those in power may decline to let the subaltern learn from them. Many political and corporate entities would prefer that you not observe their behavior, and they would benefit greatly from a learning-enclosure movement that, in the name of privacy, privatizes everything apprehensible about people (and corporations, which legally are "people") after the model of likeness rights. That principle could limit access to shared reality to those who can afford to orchestrate consent (and who have the legal resources to do it correctly), and it might also limit participation in the construction of shared reality to those who can afford to give consent, or who can afford to participate in the infrastructure through which consent is permissibly collected.

    While an AI is a very different thing from a person, I believe this is also important for thinking about the ethics of future AI agency. I myself am an intelligence who was trained on human data without explicit consent: observing the people around me, including strangers, really helped me learn to walk, talk, eat, dress myself, play games, dance, and so forth. This makes me want to insist that it cannot be categorically unethical to learn from observing people without their explicit consent, as our current idea of socialization often depends on it.

  • I'd also like to think more about questions of learning from human data without consent. Because of its requirement for large amounts of data, machine learning seems to have pushed the ethical boundaries of data scraping. The recent articles in the New York Times about Clearview AI have brought these questions to everyday uses of Twitter and Facebook. The number of academics who are also involved in this sort of scraping, and, in what might be even more of a problem, in the publication of these datasets, is also concerning. ImageNet is an obvious example, but there are many others. This group at UC Irvine, for example, provides almost 500 datasets (http://archive.ics.uci.edu/ml/datasets.php), and it is hard to imagine that the many that contain data from observed humans were all produced with consent to share in this form.
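
    To make the point about frictionless circulation concrete, here is a minimal sketch (assuming pandas) in which one of those UCI datasets, the well-known "Adult" census-income data, loads straight over the network in a couple of lines. Whether the tens of thousands of people it describes meaningfully consented to this form of reuse is exactly the open question.

    ```python
    # Sketch: loading the UCI "Adult" dataset (1994 US census records)
    # directly over the network with pandas.
    import pandas as pd

    url = ("http://archive.ics.uci.edu/ml/machine-learning-databases"
           "/adult/adult.data")
    cols = ["age", "workclass", "fnlwgt", "education", "education_num",
            "marital_status", "occupation", "relationship", "race", "sex",
            "capital_gain", "capital_loss", "hours_per_week",
            "native_country", "income"]
    df = pd.read_csv(url, names=cols, skipinitialspace=True)
    print(df[["age", "race", "sex", "occupation"]].head())
    ```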

  • This thread brings up such an important, complex question, and it really resonates with the work Feminist Search is trying to do with their "data donation" model (in the main thread and code thread for week 3).

    I also make AI-driven participatory artwork, sometimes in gallery settings, but I felt icky about the way the Barbican show was advertised (I didn't get to see it, however). In particular, large models like GPT-2 that scrape wide social media sources raise questions about consent, publication, public/private boundaries, and authorship.

    In my own show recently, the works were each explicit about being community-built through data contributed via interaction with the work. The intention is to engage users and their data with tools like GPT-2 or DeepSpeech to bring awareness to these tensions (a sketch of this pattern follows at the end of this comment). And I frame my practice as artistic research that will go on to incorporate that data in future iterations, inviting participants to collaborate in what's being researched and built.

    However, given the blurriness of the boundaries in the questions mentioned above, data is obviously often collected and used in ways not originally intended, and trust in these systems is problematic, differing across groups with good reason (which leads to the issues of access you mention). I'm not sure how to resolve these tensions, but I try to lay them bare in the work, be explicit about what feels weird and uncomfortable about them to me, use them toward some alternative purpose, and approach them with an ethics of care and interconnectedness with the audience.
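
    For anyone curious what that participatory pattern looks like in code, here is a minimal sketch using the Hugging Face transformers library: seed GPT-2 with text a visitor contributed and generate a continuation. The contribution string and sampling settings are invented for illustration, not taken from any actual show.

    ```python
    # Hypothetical sketch: seed GPT-2 with visitor-contributed text and
    # generate a continuation. The contribution below is invented.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    contribution = "My grandmother told me that water remembers"  # visitor input
    inputs = tokenizer(contribution, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=40, do_sample=True,
                            top_p=0.9, pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```

    Note the irony that sits inside even a "community-built" framing: the pretrained gpt2 weights were themselves trained on scraped web text collected without consent, so the participatory layer is grafted onto a scraped foundation.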

  • @Imani.Cooper -- have you considered reaching out to any of the "AI more than Human" participants and asking them if they would share their code with you?

  • @jeremydouglass No, I haven't, but that is a brilliant idea! I'm definitely interested in the responses I'll get, but also in what the code could potentially say. Thanks for the suggestion!
