
2024 Participants: Hannah Ackermans * Sara Alsherif * Leonardo Aranda * Brian Arechiga * Jonathan Armoza * Stephanie E. August * Martin Bartelmus * Patsy Baudoin * Liat Berdugo * David Berry * Jason Boyd * Kevin Brock * Evan Buswell * Claire Carroll * John Cayley * Slavica Ceperkovic * Edmond Chang * Sarah Ciston * Lyr Colin * Daniel Cox * Christina Cuneo * Orla Delaney * Pierre Depaz * Ranjodh Singh Dhaliwal * Koundinya Dhulipalla * Samuel DiBella * Craig Dietrich * Quinn Dombrowski * Kevin Driscoll * Lai-Tze Fan * Max Feinstein * Meredith Finkelstein * Leonardo Flores * Cyril Focht * Gwen Foo * Federica Frabetti * Jordan Freitas * Erika Fülöp * Sam Goree * Gulsen Guler * Anthony Hay * SHAWNÉ MICHAELAIN HOLLOWAY * Brendan Howell * Minh Hua * Amira Jarmakani * Dennis Jerz * Joey Jones * Ted Kafala * Titaÿna Kauffmann-Will * Darius Kazemi * andrea kim * Joey King * Ryan Leach * cynthia li * Judy Malloy * Zachary Mann * Marian Mazzone * Chris McGuinness * Yasemin Melek * Pablo Miranda Carranza * Jarah Moesch * Matt Nish-Lapidus * Yoehan Oh * Steven Oscherwitz * Stefano Penge * Marta Pérez-Campos * Jan-Christian Petersen * gripp prime * Rita Raley * Nicholas Raphael * Arpita Rathod * Amit Ray * Thorsten Ries * Abby Rinaldi * Mark Sample * Valérie Schafer * Carly Schnitzler * Arthur Schwarz * Lyle Skains * Rory Solomon * Winnie Soon * Harlin/Hayley Steele * Marylyn Tan * Daniel Temkin * Murielle Sandra Tiako Djomatchoua * Anna Tito * Introna Tommie * Fereshteh Toosi * Paige Treebridge * Lee Tusman * Joris J. van Zundert * Annette Vee * Dan Verständig * Yohanna Waliya * Shu Wan * Peggy WEIL * Jacque Wernimont * Katherine Yang * Zach Whalen * Elea Zhong * TengChao Zhou
CCSWG 2024 is coordinated by Lyr Colin (USC), Andrea Kim (USC), Elea Zhong (USC), Zachary Mann (USC), Jeremy Douglass (UCSB), and Mark C. Marino (USC). Sponsored by the Humanities and Critical Code Studies Lab (USC) and the Digital Arts and Humanities Commons (UCSB).

Week 3: Code Critique - describeElement()

In the essay Making p5.js Accessible, Luis Morales-Navarro and Mathura Govindarajan wrote:

Trying to make the p5.js web editor and p5.js sketches accessible came with its own set of challenges, the main one being that the canvas is probably the single, most inaccessible element in HTML. While it’s possible for a screen reader to recognize the element, it cannot identify all of the elements on it. The reason for this is simple: the canvas behaves just like a physical canvas, once you put paint on it, it covers up what’s behind it. It’s easy to see the color of each pixel but not understand how the pixels comprise shapes and elements. This poses a problem when you are trying to describe the elements on the canvas through code.

There has been an ongoing discussion about next steps for web accessibility on the p5.js GitHub. In 2020, describe() and describeElement() were added to p5.js. The describeElement() function creates a screen-reader-accessible description for elements (shapes or groups of shapes that create meaning together) in the canvas. The first parameter should be the name of the element. The second parameter should be a string with a description of the element. The third parameter is optional; if specified, it determines how the element description is displayed. Below are the reference code snippet for describeElement() and an example of how to use it.

p5.js describeElement() Reference


describe('Heart and yellow circle over pink background');
noStroke();
background('pink');
describeElement('Circle', 'Yellow circle in the top left corner');
fill('yellow');
ellipse(25, 25, 40, 40);
describeElement('Heart', 'red heart in the bottom right corner');
fill('red');
ellipse(66.6, 66.6, 20, 20);
ellipse(83.2, 66.6, 20, 20);
triangle(91.2, 72.6, 75, 95, 58.6, 72.6);

p5.js describeElement() Example

Author: Luis Morales-Navarro

function setup() {
  createCanvas(300, 200);
  describe('a red wheelbarrow beside the white chickens inspired by "The Red Wheelbarrow" by William Carlos Williams', LABEL);
}

function draw() {
  // red wheelbarrow
  describeElement("wheelbarrow", "A red wheelbarrow with a gray wheel rests on the brown ground.", LABEL);
  quad(30, 75, 60, 130, 140, 105, 140, 76);
  // white chickens, each drawn from three arcs
  let cC = [[200, 140], [250, 100], [100, 150]];
  for (let i = 0; i < cC.length; i++) {
    let x = cC[i][0];
    let y = cC[i][1];
    arc(x - 12, y - 8, 18, 20, radians(180), radians(0), PIE);
    arc(x, y, 40, 40, 0, PI + QUARTER_PI, PIE);
    arc(x - 15, y, 20, 30, radians(180), radians(0), PIE);
  }
  describeElement("chicken 1", "A white chicken in front of the wheelbarrow.", LABEL);
  describeElement("chicken 2", "A white chicken to the right of the wheelbarrow.", LABEL);
  describeElement("chicken 3", "A white chicken standing between chicken 1 and chicken 2.", LABEL);
}

Below is the poem mentioned in the code:

The Red Wheelbarrow
so much depends
upon

a red wheel
barrow

glazed with rain
water

beside the white
chickens

Below is a screenshot of the code running in the p5.js web editor.

There has also been an ongoing discussion on the p5.js GitHub, initiated by Lauren Lee McCarthy, about potentially requiring a describe() line in order for a p5.js sketch to run. Below is a screenshot of the GitHub issue.

Discussion Questions:

Please feel free to add your question in the thread.

  • How can the community form around the creation of functions like ‘describe()’ and ‘describeElement()’?
  • What should we do about web accessibility in languages other than English? The functions describe() and describeElement() will support any language, but library-generated descriptions with textOutput() and gridOutput() are, as of now, limited to English. We believe these features should be accessible in the other languages supported by p5.js.
  • How should we expand library generated descriptions?


  • I think this is excellent work, y'all. I imagine that there's a way, akin to the Art + Feminism organization's global Wikipedia Edit-a-thons, for individuals to get engaged as a community around adding 'describe()' functions. I would be down to help.

  • A couple of things:

    Could you put the chicken describeElement calls into the loop? If not, this would be a barrier to me as a developer, as I tend to do things in a more procedurally generated way, and so the descriptions would need to be able to handle that to a degree, e.g. (forgive the rough pseudocode, I am thinking as I go):

    for (let i = 0; i < draggableObjects.length; i++) {
        let randomDraggable = CreateRandomDraggable();
        describeElement("draggable_" + i, "A " + randomDraggable.Description + " is ready to be dragged to the correct container.", LABEL);
    }

    In terms of localisation: if we can use this system with a keyed string file per language (i.e. each file is tagged by language, and each one contains the same keys for its strings), then when you get the string RandomDraggableDescription from the file you might get something like:

    ENG: "A ${randomDraggable.LocDescript} is ready to be dragged to the correct container."
    DEU: "Eine ${randomDraggable.LocDescript} ist bereit, in den richtigen Container geschleppt zu werden."

    The problem you tend to hit with this approach is that in gendered languages the structure can change depending on the inserted item's gender, so it gets extra complicated. I imagine that most developers, even if they use describeElement, would struggle with this localisation layer, as it would be an extra layer of obfuscation. To do things like the above you would need a binding system that could get the localised version of names, etc., e.g. (again, forgive the rough pseudocode):

    for (let i = 0; i < draggableObjects.length; i++) {
        let randomDraggable = CreateRandomDraggable();
        describeElement("draggable_" + i, getLocalisedDescription("RandomDraggableDescription", randomDraggable), LABEL);
    }

    function getLocalisedDescription(key, item) {
        let keyString = localisationManager.getString(key);
        const regex = /\$\{\w+\.\w+\}/g; // the binding key, using the ${item.bind} format
        if (regex.test(keyString)) { // checks if the string needs to bind data from the item
            keyString = localisationManager.BindLocalisedData(keyString, item);
        }
        return keyString;
    }

    You could build in a localisation manager to p5.js but it would require a bunch of supporting stuff to make it work (like the binding above). I do a lot of this sort of stuff in games so I am pretty familiar with the process :smile:
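    The keyed-string-plus-binding idea above can be sketched concretely. Everything here (localisationManager, getString, BindLocalisedData, the string keys) is a hypothetical name taken from the pseudocode in this comment, not part of p5.js or any existing library:

```javascript
// Hypothetical localisation manager sketched from the comment above.
// None of these names exist in p5.js; this only illustrates the
// keyed-string plus data-binding idea.
const localisationManager = {
  lang: "ENG",
  strings: {
    // Ordinary (non-template) quotes, so ${...} stays literal until binding time.
    ENG: { RandomDraggableDescription: "A ${item.LocDescript} is ready to be dragged to the correct container." },
    DEU: { RandomDraggableDescription: "Eine ${item.LocDescript} ist bereit, in den richtigen Container geschleppt zu werden." }
  },
  getString(key) {
    return this.strings[this.lang][key];
  },
  // Replace each ${item.prop} placeholder with the matching property of `item`.
  BindLocalisedData(template, item) {
    return template.replace(/\$\{\w+\.(\w+)\}/g, (_, prop) => item[prop]);
  }
};

// Usage: resolve a keyed, language-tagged string against one object.
const desc = localisationManager.BindLocalisedData(
  localisationManager.getString("RandomDraggableDescription"),
  { LocDescript: "red ball" }
);
// desc is now "A red ball is ready to be dragged to the correct container."
```

    As the comment notes, plain placeholder substitution breaks down in gendered languages, where articles and word order depend on the bound item; a real system would need per-language grammar rules, not just string keys.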

  • I love this code example because it really gets my head spinning about the purpose of language and the purpose of access. I'm sure many folks are theorizing this in several overlapping disciplines and I'm a n00b (how great that code can be an entry point to, say, disability studies for someone!). But what does it mean to "describe"? It depends on who the audience is and what their goals are, and as an author of code or other language we won't always/ever know that in advance; we can only imagine and try to account for these possibilities. As a reader of poetry, describe means something very different than as a user of accessibility tools (we all may be both at times, accidentally or intentionally). I'm so curious about the opportunities to use these affordances in their offshoots and overlaps.

  • Thank you @QianqianYe for leading us in with this thread.

    What I love about the Red Wheelbarrow use of describeElement is that it helps to identify a few assumptions that seem to circulate with that tag.

    In most uses of describe, the image is considered the primary object and the description is secondary, often a lesser version of the object. It is a passable substitute, but never an equal, and certainly not its superior.

    In the case of the Williams poem used to describe the drawn image, the description is arguably significantly more engaging and well rendered than the rudimentary chickens and wheelbarrow. Furthermore, while the drawing is rather plain, the poetry is artful and evocative. This example serves to demonstrate the assumptions of priority we bring (I bring) to elements such as describeElement designed to be accommodations with an underlying presumption that the substitute could not surpass the main form of display, the visual. Here I am also thinking of perfunctory use of the "alt" tag in HTML, even though there are readers (human and machine) for whom the "alt" is the primary form of access.

    Like a provocative title for an art piece, such as Duchamp's "Fountain," or even the paratexts for a video game, the describeElement can create a productive space in tension with the visual. But considering this case, what would a world look like if the description were prioritized above the content it was replacing?

    The example asks us: how might we use a tool like describeElement to make something richer? And what if the audience for the description were seen as the priority, rather than as the ones being offered a poor substitute? If nothing else, the example demonstrates the inequities and hierarchies that can attend encoded solutions to problems of access.

  • Thank you for this code critique, @QianqianYe , it is such a great case!

    I agree with @markcmarino that the prominent position of the description is key to the work, both as an accessibility tool and as a key element of the work. Often alt texts are invisible to users not using a screen reader*, which makes it easy for people to forget to add them or not know how to create good alt texts (using a browsing extension like Alt or Not on Twitter can help with that, but it would be nicer if the platform provided this tool as a default). This is also why I like the idea of adding describe() and describeElement() to the template, as is suggested in the GitHub issue. Perhaps it is an idea to fill it in with a default describe('no description provided yet', LABEL) so that the coder is reminded of needing to add the description at the end, without receiving a message about it in the console while they're still working on their canvas?

    As @SarahCiston points out, the usefulness of descriptions of images is often dependent on the target audience and the role of the image. As experts in multimedia we know that an image never speaks for itself. The same figure used in two separate contexts, for example, can require two different alt texts. And electronic literature and digital art in particular have visuals that are not only depicting something, but also aim to have a literary/artistic effect on the reader. This might make it initially more difficult to write useful descriptions, but I believe that the deliberate practice of writing alt texts also builds a broader digital literacy on visual content (same thing goes for writing good captions for audio content, especially if it's nonverbal sounds).

    *A bit off-topic, but the invisibility of, for example, default descriptions of GIFs can also have disturbing effects, as in this case where someone had put QAnon conspiracy theories in alt texts. So if you do not edit the alt text, there is not only a risk of sending something without alt text or with a lacking one, but also that you might be "passing along propaganda unknowingly".

  • I wonder how this will work for programmatic objects which might be appearing, vanishing, and morphing thousands of times per second, and which are in any case (often) nondeterministic, reactive, and interactive. Laudable effort nevertheless.

  • I'm admittedly strictly a hobbyist when it comes to p5.js, and I've never really shared my sketches except as a quick link to friends or after turning them into gifs/static images, so this isn't something I've thought closely about because I've never really had an audience. I'm now really thinking about how I would describe different sketches I've made over the past couple of years.

    I feel like with the chicken example it's a pretty straightforward visual to describe, and I also agree that it adds to the experience overall. But I don't know how I would begin to distinguish some of my sketches so many of which are just "blobs but a little different than the last time" or " falling". I take a lot of enjoyment in making sketches that are different every single time, really make liberal use of that random() and noise(), but I don't know how I would write the descriptions so that it had a similar experience of seeing subtle-but-different changes each time. Especially because sometimes the differences can be really unexpected and hard to predict. Also what @Mace.Ojala pointed out with animated objects that can't be described at the same rate they animate.

  • Writing generative descriptions feels like an interesting challenge. I wonder how the documentation could encourage users to write poetic (but accurate) descriptions of a sketch given randomized values or parameters in the code itself.

    I've also wondered how to introduce the describe() and describeElement() functions in class when the students have usually moved on to moving images or interactive features by their second assignment. One thing that comes to mind is using p5.js's frameCount variable to ensure that the description isn't updated too often:

    if (frameCount % 60 === 0) { // once a second at the default 60 fps
      // update describe()/describeElement() here
    }

    This still has the challenges that @christina describes: what if it's hard to describe the image because so much of it is driven by randomness? I'm wondering if some additional functions can help: for example, describeColor(), a convenience function for converting r, g, b or color object values into a description. I think something like this is already embedded in the code for textOutput() and gridOutput() but I'm not sure if it's easily accessible/documented elsewhere. I can imagine some creative uses for this like writing a function that describes the mood of the work based on the most frequently appearing hue. What other info/functions could the library provide to support generative descriptions? (I wish I had posted this sooner!)
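    To make the describeColor() idea concrete: p5.js does not currently provide such a function, but a rough sketch of one, using nearest-match lookup over an arbitrary hand-picked palette (both the function and the palette are assumptions, not existing API), could look like this:

```javascript
// Hypothetical describeColor() helper; p5.js has no such function.
// Maps an (r, g, b) triple to the nearest name in a small, hand-picked palette.
const NAMED_COLORS = [
  { name: "red",    rgb: [255, 0, 0] },
  { name: "yellow", rgb: [255, 255, 0] },
  { name: "green",  rgb: [0, 128, 0] },
  { name: "blue",   rgb: [0, 0, 255] },
  { name: "pink",   rgb: [255, 192, 203] },
  { name: "brown",  rgb: [139, 69, 19] },
  { name: "gray",   rgb: [128, 128, 128] },
  { name: "white",  rgb: [255, 255, 255] },
  { name: "black",  rgb: [0, 0, 0] }
];

function describeColor(r, g, b) {
  let best = NAMED_COLORS[0];
  let bestDist = Infinity;
  for (const c of NAMED_COLORS) {
    // Squared Euclidean distance in RGB space: crude, but serviceable here.
    const d = (r - c.rgb[0]) ** 2 + (g - c.rgb[1]) ** 2 + (b - c.rgb[2]) ** 2;
    if (d < bestDist) { bestDist = d; best = c; }
  }
  return best.name;
}

// Possible usage inside a sketch, turning raw fill values into words:
// describeElement("wheel", "A " + describeColor(128, 128, 128) + " wheel.", LABEL);
```

    A fuller version would work in a perceptual color space (such as CIELAB) rather than raw RGB, since Euclidean distance in RGB does not always match how similar two colors look.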

  • Sorry to come to this so late but this thread has been super inspiring for me. I've been working towards making alternative interfaces for creative coding and this has stopped me in my tracks. Making weirdo visual interfaces somehow feels wrong when it limits who can use it. Somehow I feel designing for disability first can end up with better interfaces for everyone.

    The example here is interesting in a different way, though. Looking at the Wikipedia page, The Red Wheelbarrow has an interesting backstory.

    "This poem was meant to be appreciated only by a chosen literary elite, only by those who were educated, those who had learned the back story (Williams was a doctor, and he wrote the poem one morning after having treated a child who was near death. The red wheelbarrow was her toy.)"

    So there's this idea that you have to be in the elite 'in crowd' to understand it. But then:

    "["The Red Wheelbarrow"] sprang from affection for an old Negro named Marshall. He had been a fisherman, caught porgies off Gloucester. He used to tell me how he had to work in the cold in freezing weather, standing ankle deep in cracked ice packing down the fish. He said he didn’t feel cold. He never felt cold in his life until just recently. I liked that man, and his son Milton almost as much. In his back yard I saw the red wheelbarrow surrounded by the white chickens. I suppose my affection for the old man somehow got into the writing."

    So the wheelbarrow belonged to a street vendor, Thaddeus Lloyd Marshall Sr., who kept chickens in his backyard and was the inspiration for the poem. So perhaps now the poem turns from literary elites congratulating themselves for understanding a poem about infant mortality, to a white man seeing poetry in a black man's hardship.

    What does this mean for describeElement?

    I've also been reading Stiny's work on shape grammars. This is all about how we calculate with shapes in perception, and are able to see very different things in the same image, each time we look. It's then hard to see how it's even possible to describe a shape using words, when shapes and words are in such different domains. How do we carry that ambiguity of a shape across to its description?
