
Code Critique: Andrew Sorensen "A Study In Keith"

My apologies for launching this code critique so late in the working group's schedule. It took all of my "free" time just to keep up with the rest of the discussion. I'm a bit unsure what happens now, at the end of the third week, but wanted to contribute something anyway, before it ends. The hope had been to offer a detailed interpretation of a documented live coding performance by Andrew Sorensen, "A Study in Keith", created "9 years ago" (if the Vimeo metadata is to be trusted). What I will give here is not yet that detailed interpretation, but hopefully there will be some interest, at least, in an indication of the tack I would (have) like(d) to (have) take(n).

The performance is viewable here:

My interest in this particular performance is anchored in a sense that it quietly points away from a dominant interpretive frame (or frames) frequently applied to, or conjured up by, live coding. I don't want to deduce general characteristics of live coding (or of broader "creative coding") from this artifact; rather, I want to figure out what is different and unique about it, and hopefully make it (at least marginally) more difficult for such generalizations to operate.

In my experience, there is a pervasive discourse around live coding that, in various ways, figures a heroic individual subject who, augmented by the power of programming languages, boldly and bravely discovers the promised land of "music that has not yet been heard". Such discourse echoes a common narrative the "older" field of computer music tells about itself, and both echo nakedly colonial ideologies. But I see various ways in which this performance by Andrew Sorensen goes against that grain:

  1. To begin with, there is the explicit connection of the performance to Keith Jarrett. "Not quite Keith, but inspired by Keith" says the brief textual tag. Such explicit re-creation or homage or "being [so closely] inspired by"... the specific work of another artist is strikingly infrequent among live coding performances. The piece announces its own aim to recreate rather than to "innovate". (The artist points away from themself.)

  2. The act of appearing to re-create the work of another by algorithmic models points to a historical tradition of musical Turing tests, such as the relatively well-known work of David Cope. I think, for some listeners and under many circumstances, it could be possible to deceive people into believing this was a recording of a Keith Jarrett performance. But unlike the aforementioned work by Cope, there is no obvious relay with artificial intelligence or particularly deep models of musical creativity or construction. Quite the opposite: the models used to produce this particular musical Turing test are relatively straightforward. (The performance not only points away from itself, but also away from any imagined surrogate agency of the computer.)

An eventual detailed interpretation might linger for some time on drawing out these models which, while I have characterized them as relatively straightforward, are certainly not readily "accessible" in this video documentation. The typing in this particular video moves quickly, and the resolution, at least in this format, is such that the characters are all legible, but not precisely "easily" legible. Producing ways to archive and navigate live coding performances other than video is a desideratum that comes up from time to time in live coding research narratives, with few examples yet of it being done in any generalized way, i.e., with performers beyond the researchers proposing such systems.
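To make "relatively straightforward" a bit more concrete, here is a minimal, purely illustrative sketch of the kind of simple pattern logic I have in mind: a bounded random walk over a scale, emitting note and duration pairs. It is written in Python for readability only; Sorensen performs in Impromptu/Extempore (a Scheme dialect), and the scale, interval choices, and rhythmic weights below are my assumptions, not transcriptions from the video.

    # Illustrative sketch only: a deliberately simple generative model,
    # not Sorensen's code (his performance is live-coded in a Scheme dialect).
    import random

    # D Dorian spelled as MIDI note numbers: an assumed modal palette,
    # not taken from the performance.
    SCALE = [62, 64, 65, 67, 69, 71, 72, 74]

    def random_walk_phrase(length=8, start_index=3):
        """Return (midi_note, duration_in_beats) pairs from a bounded random walk."""
        idx = start_index
        phrase = []
        for _ in range(length):
            step = random.choice([-2, -1, 1, 2])            # small melodic steps
            idx = max(0, min(len(SCALE) - 1, idx + step))   # stay inside the scale
            dur = random.choice([0.25, 0.5, 0.5, 1.0])      # weighted toward eighth notes
            phrase.append((SCALE[idx], dur))
        return phrase

    if __name__ == "__main__":
        for note, dur in random_walk_phrase():
            print(f"note {note}, {dur} beats")

Even a toy like this, looped and layered over a left-hand vamp, suggests how little machinery "not quite Keith" might require; nothing in it depends on deep models of musical creativity.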

  1. The "instrument" that Sorensen uses to realize the individual tones in the musical patterns controlled by the code is a "generic" piece of commercial software, not a laboriously developed original synthesizer or sample collection built on top of free and open source DSP (as is otherwise quite common with live coding artists, albeit not exclusively). And then that commercial software synthesizer is used to simulate an extremely "naturalistic" grand piano. I don't know how exactly this particular piano sound fits into that particular commercial synthesizer, but a (simulation of) an acoustic grand piano is the first sound in the general MIDI standard, the most default instrument from a particularly default set of instruments.

Those are some starting points, whether for now or later. I hope everyone has a good weekend!

Yours truly,
David

Comments

  • @d0kt0r0, I hope we can dive in over the next few days, and I'm glad you posted this.

    I wonder if you've seen our discussion from the first CCSWG on Sorensen's work. Steve Ramsay ran a terrific conversation, leading with his amazing video Algorithms are Thoughts, Chainsaws are Tools. Now, since that time, he seems to have taken down the video, but the conversation remains.

    Take a look at his original post here, minus the video.
    And here's the edited conversation that followed with intro by David Shepard.

    Of course, I'm hoping we'll build on the conversation during the remainder of this CCSWG!

  • Thanks @d0kt0r0 for sharing this code sample and interpretation! It sounds like a larger media critique is at play here about hmmm... what I might call performances of centralized authorship? Like, if you were to apply this critique to a text in a different medium, I wonder what that would look like?
