
Week 3: Feminist AI (Main Thread)

by Christine Meinders, Jana Thompson, Sarah Ciston, Catherine Griffiths

Feminist Legacy and Theory
As long as there has been code, there have been feminist approaches to coding and pattern finding: consider the first complex program, written by Ada Lovelace in the 1840s, or the use of knitting to encode secrets during World War II, hidden in plain sight and overlooked because of its perceived lack of value.

As artificial intelligence developed in the latter half of the twentieth century, its theorists and developers were overwhelmingly white and male. The roots of Feminist AI can be traced to contemporary theorists such as the British academics Alison Adam and Lucy Suchman. In Artificial Knowing, Adam critiques both the symbolic and the connectionist traditions of AI, each of which fails to address embodiment in the production of knowledge. Referencing Tong (1994) and Wajcman (1991), Adam argues that technology is inherently social, political, and cultural in its usage and production, and that AI research can and should be informed by feminist theories such as liberal feminism, eco-feminism, postmodern feminism, and standpoint theory. Additional feminist theoretical approaches include participatory design (Bardzell), embodiment (Blumenthal), implementation into practice (McPherson), and design as research (Burdick). These approaches examine the mutable relationships of form, making, theory, and community.

Feminist Practices and Projects
Building on earlier work, new critical approaches and projects have emerged in recent years, including volumes on racism, data, and feminism such as Safiya Umoja Noble's Algorithms of Oppression, Cathy O'Neil's Weapons of Math Destruction, and Catherine D'Ignazio and Lauren F. Klein's Data Feminism, as well as Joy Buolamwini's Algorithmic Justice League. Additional feminist projects that explore the role of the body in knowledge production, critical prototyping, and critiques of science and technology include Ursula Damm's generative video project Membrane, Wekinator by Rebecca Fiebrink, design approaches from Feminist Internet.com and Feminist Internet.org, LAUREN by Lauren McCarthy, Anne Burdick's digital humanities design fiction Trina, the Gender and Tech resources project by Tactical Tech, ladymouth by Sarah Ciston, Catherine Griffiths' Visualizing Algorithms, and Caroline Sinders' work on the Feminist Data Collection, which details a path for building data collections and ontologies within a feminist framework.

The organization Feminist.AI (to be distinguished from the conceptual approach Feminist AI) is a collective based across three cities (LA, SF, and Holdenville, OK) that strives to redefine AI, moving it from its current development in private companies and academic settings toward community- and socially-driven futures. Feminist.AI is developing a project called Feminist Search that explores many of the issues and approaches of Feminist AI practices through community-driven research and prototyping. Feminist Search actively highlights the work of feminist theorist Dr. Safiya Umoja Noble and her book Algorithms of Oppression.

Feminist.AI Project: Feminist Search (Searching for Ourselves) 
Feminist.AI emphasizes and employs critical prototyping, participatory-focused approaches (Bardzell), acknowledging creators, de-centering the human (Adam, Hayles, Braidotti), and viewing embodiment as beyond the enfleshed, as a body-self that is living and indefinite (Blumenthal). Feminist.AI is an explicitly feminist practice, and this value appears in our projects, including the recent offering Feminist Search, which involved co-creating a Feminist Search Engine. Sarah Ciston has written elsewhere in more detail about bringing intersectional methodologies to artificial intelligence.

Feminist Search takes this multi-faceted approach to search, offering visual entry points as a starting place. It requires participatory engagement and critical prototyping. Feminist Search also engages the challenges of working within a binary, asking how that construct impacts the weighting and utilization of specific models, as well as how interface design can highlight data bias or community contributors.

Search is one of the most commonly used algorithms in the world, and today it is dominated by a handful of large private corporations, most notably Google. Google's search algorithm is driven by a combination of usage and ad revenue. Google's first search algorithm, PageRank, measured and prioritized websites according to a number of factors; a more recent explanation is detailed in the video "How Google Search Works (in 5 minutes)." To summarize, a search engine comprises a database (which can include items like images, webpages, videos, and PDFs) and algorithms that interact with information on the web (such as text, metatags, and links). When searching, software is used to enter a text-based query, and results are returned via text, voice, or images. Search engines use web crawlers (spiders, spiderbots) to collect information about pages. These crawlers summarize links, tags, and metatags and share that information with the search engine. The "spiders" crawl across pages to find new content, index information, and rank results. There is no human intervention; these algorithms are deployed in real time to gather information.
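
To make these moving parts concrete, here is a minimal sketch of the core data structure behind such pipelines: an inverted index with simple term-frequency ranking. The pages and query are invented for illustration; no real engine is anywhere near this simple.

    # Toy inverted index: maps each term to the pages containing it, then
    # ranks results by term frequency. The "pages" are invented stand-ins
    # for crawled web content; real engines weigh hundreds of signals.
    from collections import defaultdict

    pages = {
        "page1": "feminist approaches to artificial intelligence and search",
        "page2": "search engines rank pages by links keywords and metadata",
        "page3": "community driven approaches to search and classification",
    }

    index = defaultdict(dict)  # term -> {page_id: occurrence count}
    for page_id, text in pages.items():
        for term in text.split():
            index[term][page_id] = index[term].get(page_id, 0) + 1

    def search(query):
        """Score each page by how often the query's terms appear in it."""
        scores = defaultdict(int)
        for term in query.lower().split():
            for page_id, count in index[term].items():
                scores[page_id] += count
        return sorted(scores, key=scores.get, reverse=True)

    print(search("search approaches"))  # page ids ordered by term frequency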

While these explanations clarify the process somewhat for the non-expert, they do not address problematic features such as those covered by Safiya Umoja Noble, beginning with her 2012 article Missed Connections: What Search Engines Say About Women. In both this article and her 2018 book Algorithms of Oppression, Noble discusses the troubling intersection of search results and Black bodies, including searches for the term "black girls" returning a page of primarily pornographic results, and searches for professional hairstyles returning pictures primarily of white women. Additional problems with web search involve indexing algorithms and filter bubbles. Specifically, information is linked, indexed, and personalized using factors and decisions that are not transparent. Noble highlights the lack of a review process for determining what is hyperlinked (Noble 41). Unwanted bias can also arise from filter bubbles, which serve up narrowed information in personalized search.
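
None of these personalization factors are public, but the filter-bubble effect itself is easy to illustrate. The sketch below is purely hypothetical: a trivial re-ranker that boosts results resembling a user's click history, so two users issuing the same query drift toward different result sets.

    # Hypothetical filter-bubble sketch: results sharing words with a user's
    # click history float upward, so identical queries produce increasingly
    # divergent views for different users.
    def personalize(results, click_history):
        def boost(result):
            words = set(result.lower().split())
            return sum(len(words & set(past.lower().split()))
                       for past in click_history)
        return sorted(results, key=boost, reverse=True)

    results = ["professional hairstyles gallery", "natural black hairstyles",
               "corporate office dress codes", "braided hairstyles history"]
    user_a = ["corporate style tips", "office etiquette dress codes"]
    user_b = ["natural hair care", "braided styles history"]

    print(personalize(results, user_a))  # user A's narrowed ordering
    print(personalize(results, user_b))  # user B's narrowed ordering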

Starting with a visual search prototype that uses community-sourced images, Feminist.AI will build a larger visual search engine powered by community definitions and informed by library science and critical theory. With a belief in users not only as contributors but also as owners of their experience and information, an editable, co-created search tool will be developed in which users have control over access to their information and how it is used for development. Using this as inspiration, the Feminist.AI community proposes to create an alternative to private search engines with the working title Feminist Search.

Feminist Search seeks to add transparency to the search process and promote education about factors that determine how search results are created. The project creates multiple entry points for community members to learn about how information is classified, trained and accessed, making search more understandable and accessible. In our next post, we will go further into details of the Feminist Search project.

Critical Questions:
1 - How does engaging feminist principles and practices around embodiment, community, and critical prototyping shift how code might be read or written? How does it shift how you understand or engage AI more broadly?

2 - Historically, the contributions of female-identified persons have often been overlooked in the main narratives of events, as seen in the stories of one of humanity's most famous scientific achievements - the Moon landing in July 1969 - or in the algorithmic processes of weaving or knitting. How can we reframe and re-evaluate traditionally "female" practices in light of today's emphasis on STEM education and create feminist spaces for both learning and developing coding practices?

3 - Can we imagine different sensory approaches for querying information? Currently, we use computers, with typing, voice search, and touch on screens. Could we simply improve our screen interfaces to incorporate new visual and interactive models of knowledge? How might we synthesize human and natural environments through search? How can we incorporate culturally specific ways of exploring knowledge through artifacts?

Contribute to this research!
We are currently seeking data donations for our Feminist Search prototype. You may donate your data here: https://aidesigntool.com/feminist-search

Thank you, CCSWG 2020 community: Mark Marino, Jeremy Douglass, Zach Mann.

Comments

  • Thank you for collecting all these resources and exploding my browser tabs!

    Regarding the second question, I am very interested in the stories that the computer industry (defined broadly) likes to tell about itself. To cite a timely and imperfect example, you can see it at the Academy Awards, from The Imitation Game to The Social Network: which stories get the main stage, get parodied, get meme'd, etc. I was told recently that (at least in the past) computer companies hiring new coders valued men with poor social skills above other candidates, regardless of credentials, perhaps because they bought into the image of the coder propagated in popular media as awkward, white, and male. The Feminist Search project sounds fantastic, and I think it also addresses this issue. In addition to Safiya Noble's work, I have seen Ruha Benjamin ("The New Jim Code") point out image searches for "criminals" and "friends" returning very raced results. I just now Google-image searched for "coder" and the results were not surprising at all: white men with beards and/or glasses. Disassociating that particular kind of body from the label "coder" would go a long way.

  • @Zach_Mann I would argue that these films do not represent what the computer industry thinks about itself, but what non-technical people (i.e., end users) think about the computer industry.

    However, I am very much engaged with these questions (I teach a class on computers, robots, and film about this very topic), and am interested in what the narrative/culture would be if the computer industry (or technicians) were crafting the story.

    Now I'll do a shameless plug for an event I am co-organizing - The Art of Python - at this year's PyCon in Pittsburgh (http://artofpython.herokuapp.com/). It addresses this in particular. I would love to see submissions from folks here - please message me if you have any questions!

  • Given the prototyping of Feminist.AI and the open questions about querying information, I'm curious about what "visual search engine" will mean in this context. Currently it may be commonly defined as a search engine that:

    1. returns images or video
    2. or accepts images or video as input
    3. or both

    Not that long ago, type 2 was called "reverse image search" and was primarily the territory of specialty search providers like TinEye (2008), before Google Images (2011) and Bing (2014) integrated similar interfaces, becoming providers of both.

    So, for example, Google Images search is both, and in it these modes are connected in complex ways. Regular text search returns a subset of matching images alongside webpages and may switch into an images-only result set. Or you can search with text for images only. Or you can search using an image, or an image plus text, which returns a mixed set of images and webpages that may be switched to images-only.

    To think through some of what currently exists, I decided to walk through a couple of "round-trip" examples similar to Noble's example referenced above. One can type in "black girl" and get an image result set:

    [Image: Google Image search for string "black girl"]

    ...or use an image. Here I randomly selected one of the top 3-4 images appearing in my first result set, which turned out to be from the blacknerdgirls.com post "Hey Black Girl, You Deserve To Be Seen". The text is republished from a 2016 Sorella magazine article, "Words From A Black Girl Who Used To Wish She Were White", where the uncaptioned and uncredited image does not appear.

    I dragged that image from its homepage into a new image search tab to get a "reverse lookup" of proposed text labels and results. In this case, Google proposed the topic heading "black vs brown persons" and linked to the Wikipedia page for "Brown (racial classification)".

    [Image: Google Image search using image found in "black girl" search]

    Further focusing on related images returns one exact match, in the 2019 Forbes article "Overcoming The Angry Black Woman Stereotype", with the same uncaptioned and uncredited image. It isn't clear whether this is stock photography or whether the image belongs in some sense to the Sorella author or the Forbes author (or whether the authors are the same).

    A similar round-trip with "white girl" as the search term produces a different set of image results. Again, I randomly selected one of the top 3-4 images, which turned out to be from a post on The Race Card Project titled "She’s so basic, typical white girl.". The image appears to be a self-portrait uploaded by Marryn Hilliker, whose piece reflects on being called both "basic" and white, and how the two relate or do not.

    [Image: Google Image search for string "white girl"]

    In a reverse lookup of her image, Google proposes the topic heading "typical white girl" and returns the Urban Dictionary page for "typical white girl", which is not linked from Hilliker's text but which exactly matches the phrase she uses in her title.

    [Image: Google Image search using image found in "white girl" search]

    This highlights one of the (many) ways that Google is cheating, so to speak, at visual search with image input. In addition to recognizing or classifying the image itself, they are often also reading whatever pages they know that the image appears on, and those texts may create heavily weighted contexts for recommending image topics and related sites.
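
    We can only speculate about the internals, but the mechanism described here can be sketched as a weighted blend of two signals: classifier labels for the image itself and terms from pages the image is known to appear on. Everything below (weights, labels, page text) is hypothetical, not Google's actual method.

        # Hypothetical sketch: topic suggestions for an image query blend
        # classifier labels with text from pages where the image appears.
        # The heavier context weight lets surrounding discourse dominate.
        def suggest_topics(classifier_labels, page_texts,
                           image_weight=1.0, context_weight=3.0):
            scores = {}
            for label, confidence in classifier_labels.items():
                scores[label] = scores.get(label, 0.0) + image_weight * confidence
            for text in page_texts:
                for term in text.lower().split():
                    scores[term] = scores.get(term, 0.0) + context_weight
            return sorted(scores, key=scores.get, reverse=True)[:3]

        labels = {"person": 0.9, "portrait": 0.7}  # invented classifier output
        contexts = ["typical white girl", "she's so basic typical white girl"]
        print(suggest_topics(labels, contexts))  # page text outweighs image labels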

    It also highlights something about how discourse is processed by search. The Race Card Project invites candid participants to:

    Think about the word Race. How would you distill your thoughts, experiences or observations about race into one sentence that only has six words?

    When Hilliker chose that six-word title in order to distill her response, three of those title words (for better or worse) then associated her image with the very concept she was processing and distancing herself from in her body-text essay.

  • @jeremydouglass said:

    This highlights one of the (many) ways that Google is cheating, so to speak, at visual search with image input. In addition to recognizing or classifying the image itself, they are often also reading whatever pages they know that the image appears on, and those texts may create heavily weighted contexts for recommending image topics and related sites.

    It also highlights something about how discourse is processed by search.

    @jeremydouglass This is a fascinating exploration, and I think it points to the ways we are forced into speculative reverse-engineering of processes that remain obscured by proprietary algorithms. One of the things it seems to me Feminist Search, and Feminist AI, looks to do is both to reveal how existing systems make these moves (often more simplistically and frustratingly than we expect, as you show) and also to imagine how we might create more transparent, nuanced procedures for drawing conclusions like search results via algorithms.

    I'm taking from your searches the idea that the resulting images or text may very well be just as culturally encoded; it's just that users are unaware they are inadvertently "tagging" their data as they participate in these digital systems. Additionally, they cannot see the models and processes underneath, which prioritize particular interpretations and push them to the top of the results based on commercial interests and majority rule.

  • "How can we reframe and revaluate traditionally "female" practices in light of today's emphasis on STEM education and create feminist spaces for both learning and developing coding practices?"
    As a K-12 educator, I have developed, with the support of colleagues and students, a series of experimental small-scale code-based digital projects inside a primary school in Brazil, as described in papers on ResearchGate (Using Virtual Reality And Web-based Technologies For Improving Individuals' Education, presented at the 11th International Conference on Experiential Learning, Sydney, Australia, 2008, https://www.researchgate.net/publication/266228908_Using_Virtual_Reality_And_Web-based_Technologies_For_Improving_Individuals'_Education; and Developing Interactive Educational Environments to Stimulate Lifelong Learning, IADIS International Conference ICT, Society and Human Beings 2007, https://www.researchgate.net/publication/260386601_DEVELOPING_INTERACTIVE_EDUCATIONAL_ENVIRONMENTS_TO_STIMULATE_LIFELONG_LEARNING).

    I have found that one effective way of inspiring girls and boys to understand why they should engage in code-based learning activities addressing STEAM has been to propose and sustain an integration of technological and scientific knowledge with the processes of teaching and learning. Using Web3D scripting languages and information-visualization resources such as HTML, VRML, X3D, and the X3DOM framework (https://www.x3dom.org/), we have explored ways of deepening citizens' reading and writing skills through (re)using and interpreting small blocks of code.

    In this way, even students with little reading skill in their mother language (in this case Portuguese) or in a foreign language (English) can be inspired to learn and practice a third language: a language used for coding. We have used the X3D programming language because it can support interactive reflection about the relevance of reading and writing with accuracy. This language can also be embedded in a blog through an HTML editor, allowing individuals to study and create 3D interactive virtual reality content anytime and anywhere.

  • @SarahCiston said:

    it's just that users are unaware they are inadvertently "tagging" their data as they participate in these digital systems.

    Yes! This points to the way these systems are built from the unconscious labour of the public. We are only just coming to terms with the idea that users of platform technologies perform labour and build capital for others, through the work of Trebor Scholz, for example. We have hardly thought about the nuances of how that labour structures and defines others' identities. If this specific issue were more transparent, users might very well tag and title content differently.

  • @jeremydouglass said:
    Given the prototyping of Feminist.AI and the open questions about querying information, I'm curious about what "visual search engine" will mean in this context.

    This analysis is fascinating and so revealing, but when I think about visual search or any visual interaction with these systems, I also think about how we can have visual access to the decision-making within them: to see how certain data is weighted and leads to certain connections that lead to certain outcomes. The Feminist.AI team, @Christine.Meinders and Jana Thompson, refer to 'tracing data visually' and developing 'visual and interactive models of knowledge'. I'm curious whether they see the development of these practices along the lines of the visual search engine exploration that @jeremydouglass links to, or as what I would describe as a more reflexive engine that offers us some kind of access to and legibility of its model. I'm also picking up on this because the Feminist Search project refers to being an educational tool to learn about how search results are created.

    My own research seeks to explore these possibilities, of how the visual supports the legible which supports access.

  • @Zach_Mann said:

    Disassociating that particular kind of body from the label "coder" would go a long way.

    Considering the prominent attempts by computer science departments and tech companies to diversify their workforce/student body, it seems very outdated that search results for "coder" would still come up that way. That the Feminist.AI team chose the search term "safe" for their prototype reveals traditional search engines to be reductionist projects. On searching for 'safe', one has to scroll a long way through images of lockable boxes before getting to glimpses of other concerns: safe spaces, safe sex, environmental safety for wildlife, safety in relation to freedom of expression. Feminist Search opens up this reductionist problem, where traditional engines appear to strive for the singular, extracted from social context, and disinclined toward complexity.

  • I am really excited about the work around Feminist.AI and thank you so much for your generous contributions to this thread.

    One thing that struck me is the discussion of the human. I wonder if the focus should be on how a Feminist AI can decenter not the human per se but rather liberal conceptions of the autonomous and individual Human subject, and the human exceptionalism that hinders more meaningful understandings and practices of human and more-than-human worlds. This seems to be an important distinction for a Feminist AI to make, given that so many historically marginalized groups continue to struggle to be recognized as fully human, and to exceed a liberal recognition towards more meaningful futurities.

    I am conducting an early-stage, collaborative research project on Gendered and Racialized Narratives of AI across Africa. We find that a majority of media sources directed by and/or for audiences across the continent of Africa articulate AI technologies through language that remains embedded within colonial logics of modernity and progress that are deeply racialized and gendered.

    How are these narratives of AI shaping and being shaped by how code is written? How can engaging feminist principles and practices around the body and the human shift these narratives, and thus provide alternative frameworks for writing code? These questions assume that cultural narratives and source code are co-constitutive.

    For example, in our study of 270 media sources we find that they promise that AI will be fast, efficient, productive, adaptable, certain, and precise. These promises are made in relation to the human. They assert that AI technologies are designed to imitate human intelligence, but be faster and more efficient and precise than humans.

    This language could be understood as simultaneously devaluing the human as the “Other” to AI technologies, while valuing the latter as imbued with more intellectual capacity than humans. Such understandings bolster a rhetoric of anxiety and fear that AI technologies will ultimately replace the human and render the human less valuable, which unsettles the foundations of human exceptionalism that undergird colonial and settler colonial modernity. Scholars of queer, feminist, decolonial, and Indigenous STS, however, remind us that such rhetoric and anxieties imply a default understanding of the Human coded as heteronormative, white, and male. Colonial and settler colonial societies have a long history of designing and deploying technology to reinforce, replace, and/or exploit certain humans as less-than according to race, ethnicity, gender, class, sexuality, and citizenship.

    Alternatively, in our study we find that this language of modernity and progress continues to reinforce human exceptionalism and its liberal human subject of Man. Among the media sources, some emphasize how AI technologies will replace human workers, but many more express how AI technologies should be understood as helping, assisting, augmenting, and complementing the human. Neda Atanasoski and Kalindi Vora (2019) make similar arguments in their study, demonstrating how robot and AI technology futures are being discursively and materially designed as surrogates for humans rather than substitutes, and how this replicates and reinforces gendered and racialized colonial logics.

    What might be interesting to think more about is how these AI narratives reinforce social hierarchies and inequalities precisely through this discursive construction of AI technologies as simultaneously devaluing the human as Other to the machine and continuing to value the human as exceptional to technology.

    As these media sources articulate and promise that AI will be fast, efficient, productive, adaptable, certain, and precise, they reinforce and value these qualities and their historical attachment to the liberal human subject shaped by and central to histories of imperial capitalism. In doing so, they promise a narrow vision of AI futures of “disruption” that does not disrupt the status quo of power and social hierarchies that have historically valued certain peoples, bodies, and ways of knowing and doing more than others.

    While I consider how our research contributes to understandings of how AI narratives are shaping and being shaped by the making of source code that drive AI technologies, I also want to consider what a process of unmaking might look like.

    What frameworks might be informative for a Feminist AI and practices of coding? Is it possible to imagine a more inclusive and meaningful AI future with source code that enables slower movement, sideways thinking (Puar), queer use (Ahmed), situated knowledges (Haraway), queer failure (Halberstam)? I am not at all sure of what this would look like, but I am interested in thinking more on this.

  • @Christine.Meinders @SarahCiston @CatherineGriffiths and Jana Thompson

    Such an important conversation. Developing methodologies to pursue these questions is key to the project of CCS.

    Could you explain a bit in this thread about the code of the Feminist Search example you have posted and the ways specific aspects of the code represent feminist interventions?

    Also, could we perform a feminist code reading of this code? And how might that identify additional aspects?

  • @lfoster
    Thank you for these insights that open the discussion to intersectional issues, especially the way in which current computational technologies can be considered to continue a colonialist project. A lot of the ideas being discussed are not exclusive to feminism, but are strategies shared with other struggles and discourses, such as post-colonialism and code.

    I find it really productive that you put the terms ‘decentering’ and ‘disruption’ into juxtaposition. Disruption is a deeply conservative trope propagated by the tech industry, one that really serves to mask and maintain neoliberal power narratives. It reminds me of the argument from Meredith Whittaker of the AI Now Institute, who points out how damaging Facebook’s ethos of ‘move fast and break things’ can seem when put into the context of political misinformation campaigns: we don’t want our democracy breaking. It raises the opposite idea, of how to move slowly in building new technologies in order to get them right, and in turn, how to slow computation down so that we can trace and test and communicate the possible paths it might take. For me, thinking about tactics for slow computation can be a decentering strategy, considering that computation takes place at a beyond-human scale of perception.

    @markcmarino @lfoster
    Perhaps another way to reframe the development of AI in the context of feminism and post-colonialism is through the work of Yanni Loukissias. He asks us to look beyond the dataset as an autonomous, flexible, self-contained object and rather to consider ‘data settings’ as a way to place data back into a larger knowledge system, to remember that all data belongs to and generates meaning only in relation to the context in which it was produced. A dataset that forgets, that loses provenance, that is ‘clean’, is really just detached from the inherent messiness and complexity of its background, and continues the problem of abstraction and expropriation. The dirt and noise in data is a problem of this expropriation. We could also think of this through the lens of Donna Haraway’s ‘tentacular thinking’, where the tentacles that tie data to its origins are imbricated with other beings. When working with datasets/settings, AI needs to reconsider these entanglements with provenance and question the ideal of ‘frictionless’ technologies and thinking that we have become accustomed to: data that cannot evidence its cultural origins and development. I would like to think through the development of AI less as an autonomous forecasting mechanism and more as a cultural amplifier tethered to and recursively entangled in its own means of production.

  • @markcmarino
    The code for Feminist Search is a prototype. It starts a conversation about classification and clustering, suggesting new ways to think about search. In this example, the images provided are already classified. So the current code for this search is reflexive, as it serves information that has already been populated by individuals donating the data for search.

    The information gathered from the uploaded images, as well as keywords further describing the image and category of image, will provide more context when it comes to search and help explain how information is retrieved. The goal is to move beyond classifying an image as safe or dangerous, offering a place for data donors to explain why an image is safe as well as how they are viewing safety (emotional, physical, and so on).
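
    As a sketch of what such a donation record might hold (the field names here are our illustration, not the project's actual schema), each donated image could carry the donor's label together with the context that explains it:

        # Hypothetical donation record; field names are illustrative only.
        # The point is that a donated image carries the donor's reasoning,
        # not just a bare safe/dangerous label.
        from dataclasses import dataclass, field

        @dataclass
        class DataDonation:
            image_path: str
            label: str                 # e.g. "safe" or "dangerous"
            safety_dimension: str      # e.g. "emotional", "physical"
            explanation: str           # the donor's own words on why
            keywords: list = field(default_factory=list)

        donation = DataDonation(
            image_path="images/park.jpg",
            label="safe",
            safety_dimension="physical",
            explanation="A well-lit public space where I feel comfortable alone.",
            keywords=["public", "daylight", "open"],
        )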

    The clustering algorithm (in this case SVM, but it could also be k-means) will be used on both the classified images and the donated context for each image. This will allow us to potentially explore unknown patterns in different contexts of safety or danger within community-contributed categories. In the current code, the most compelling idea is the number of clusters. One of the initial challenges with clustering is figuring out how many clusters, and exactly what level of detail, are needed to find meaningful insights. The number of clusters is an indicator of the complexity of the problem you are trying to solve. In the code below, the number of clusters has real social implications for the complexity and analysis of the problem.

    import glob

    class ClusteredImages:
        def __init__(self, positive_images_path, negative_images_path,
                     image_suffix, number_of_clusters):
            # Collect donated images already classified "safe" (positive) and "dangerous" (negative)
            self.positive_images = set(glob.glob(positive_images_path + '/' + image_suffix))
            self.negative_images = set(glob.glob(negative_images_path + '/' + image_suffix))
            # The cluster count has real social implications (see above)
            self.no_of_clusters = number_of_clusters
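
    As one possible reading of the clustering step (a minimal sketch using the k-means alternative named above, via scikit-learn, with random vectors standing in for features extracted from donated images), the choice of cluster count directly determines how many categories the donations are sorted into:

        # Minimal, hypothetical k-means sketch; the random vectors stand in
        # for features extracted from donated images and their context labels.
        import numpy as np
        from sklearn.cluster import KMeans

        features = np.random.default_rng(0).random((20, 8))  # 20 images, 8 features
        kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
        print(kmeans.labels_)  # which cluster each donated image landed in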
    

    Personal data donation and the use of binaries in data collection and model creation are also essential ideas to consider. @lfoster comments that the deployment of technologies to reinforce or exploit certain humans is found in both past and current approaches to search.

    Using data donation approaches, our goal is to co-create technology with the people who contribute to search. This allows for increased relevance while also exploring the social implications of search from a community perspective. The data donation approach in Feminist Search allows for more transparency in system design. It may also have the unintended effect of being misread, misunderstood, or used out of context. Feminist Search seeks transparency about search contributors and data donors, and eventually full transparency of the delivered results.

    However, is data donation even a good idea? Some Feminist.AI community members say yes, while other community members are concerned about the misuse or misapplication of search. This project is currently presented as the shared data of a community, but there will be challenges if and when this idea is generalized to a larger population.

    The most interesting part of the code is that it draws attention to these binaries and to potentially problematic development spaces from a social perspective; it also highlights that our current approaches to search are still very screen-based and prioritize visual approaches to sending and receiving information (text and images). As indicated earlier, these visual approaches are very important. As is evident in Catherine Griffiths' work on Visualizing Algorithms, and in her comments in this post, the act of visual representation and search has the ability to make the process more legible and therefore more accessible.

    An additional topic to consider is new approaches to image classification. During a conversation with AI expert Don Brittain, I learned about "label mixup", which blends two images so that the class is no longer binary. This benefits class predictions during inference.
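
    A minimal sketch of that blending idea (the shapes and mixing coefficient below are illustrative, following the widely used mixup formulation rather than any code from the project): two images and their one-hot labels are combined with the same coefficient, so the training target becomes a soft, non-binary class.

        # Mixup sketch: blend two images and their one-hot labels with the
        # same coefficient, so the target is no longer a hard binary class.
        import numpy as np

        def mixup(img_a, label_a, img_b, label_b, alpha=0.4):
            lam = np.random.beta(alpha, alpha)           # blending coefficient
            image = lam * img_a + (1 - lam) * img_b      # pixel-wise blend
            label = lam * label_a + (1 - lam) * label_b  # soft label
            return image, label

        safe = np.array([1.0, 0.0])        # one-hot: [safe, dangerous]
        dangerous = np.array([0.0, 1.0])
        img_a, img_b = np.zeros((64, 64)), np.ones((64, 64))
        blended, soft_label = mixup(img_a, safe, img_b, dangerous)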

    re: Also, could we perform a feminist code reading of this code? And how might that identify additional aspects?

    The idea of a reading is interesting as it takes the code off the screen and involves an embodied performance - it reminds me of Sarah Ciston’s LADYMOUTH.WTF.

  • @CatherineGriffiths

    Thank you for connecting disruption and decentering more for me in your post. I have been focusing so much on disruption as a colonial trope, and had not actually made the methodological link yet to decentering.

    I like the connection you make to Haraway's "tentacular thinking", which also helps us to think through various timescapes. This extends your framing of AI as "less an autonomous forecasting mechanism and more a cultural amplifier tethered to and recursively entangled in its own means of production" to consider how datasets/settings in AI are also entangled within related histories of colonialism, capitalism, settler colonialism, and postapartheid.

    I want to think more about your understanding of AI as "cultural amplifier" though. Does this enable us to consider how AI increases the voltage and power of social hierarchies and inequality?

  • @lfoster @CatherineGriffiths @markcmarino

    @lfoster Thank you for this post! re: "the human exceptionalism that hinders more meaningful understandings and practices of human and more-than-human worlds." Human exceptionalism is a key problem that Feminist AI should address.

    I share @CatherineGriffiths' perspective: "A lot of the ideas being discussed are not exclusive to feminism, but are strategies shared with other struggles and discourses, such as post-colonialism and code."

    Here are some approaches, central to the work of the Feminist.AI community, that may inform a Feminist AI:

    • Includes participatory and new approaches to science and technology (Bardzell, Wajcman)
    • Takes contextual approaches that recognize there is no one-size-fits-all; each project may require a different approach and may incorporate, as you suggested, sideways thinking, queer use, queer failure, and situated knowledges. These approaches to AI are created based on the project and the bodies making the project, and how they decide to frame the work
    • Understands that AI and AL contribute to knowledge production (Adam, Hayles, Dyson)
    • Engages in embodied approaches (Blumenthal, Ahmed) to the design of AI and its form (AI/XR design, knowledge creation, etc.)
    • Defines the posthuman (i.e., beyond-human, augmented human, after-human) (Ahmed, Braidotti, Hayles, Malabou)
    • Thinks through making (Burdick, McPherson)
    • Considers the co-created Feminist.AI philosophy (Feminist.AI community)

    Exploring AI Narratives and Code 
    When considering the relationship between AI narratives and code, it is clear that human-centered approaches to AI are often product-focused and do not really work for all bodies, human or non-human. Alison Adam writes that this 'monolithic' view of AI is often equated with the building of an artificial mind or person, which is problematic for philosophy and social science research (Adam 12). Human-centered approaches greatly shape how current narratives envision the future and, as you suggest, "replicate and reinforce gendered and racialized colonial logics". Lauren Williams writes about these challenges in her article The Co-Constitutive Nature of Neoliberalism, Design, and Racism.

    Sources and Narratives
    Thinking about AI along a spectrum will change the way we write source code. Before this can happen, the focus should be on communities exploring or creating their own language, science, and technology. This is a way to counter existing approaches and come up with new ideas that can be applied to existing products or stand as alternatives to them, and it will produce more interesting and thoughtful approaches to knowledge that move beyond intelligence. Adam writes about feminist suggestions that people develop their own science based on their own values, and offers ecofeminism as an example (Adam 25). Additional examples of these approaches have been explored in Láadan and, more recently, in #cplusequality from the Feminist Software Foundation.

    Beyond Human
    One way to frame understandings and practices of human and "more-than-human worlds" is by reframing the definition and role of embodiment. By doing this, we can start to bridge the artificial and natural worlds, creating new ideas along a spectrum of artificial and natural systems while pulling from AI and Artificial Life (AL).

    Dara Blumenthal's re-imagined corpus infinitum expands thinking about the human body as part of the environment rather than separate from it, and incorporates a living-sensory approach to embodiment (Blumenthal 47). It therefore offers a new way to understand or think about embodiment and starts to frame embodiment as beyond human.

    We can then extend these approaches across environments and realities. Moving beyond connecting through sensors, actuators, and XR (connecting the digital and the biological), these human-to-more-than-human worlds explore not only the lived, cultural, and social experiences of embodiment but also how to make across spectrums within digital and biological worlds.

    Posthuman and living-sensory embodied approaches to AI have been used in prior Feminist.AI projects. With each Feminist.AI project from 2016 on, the community defines its making approaches as it makes. Engaging in co-creation is also a form of unmaking. Taking these theoretical approaches, which aren't new, and applying them to the design of AI has tremendous possibility. Creating space for the act of thinking and focusing on longer-term research (for example, the most recent Feminist.AI project, Contextual Normalcy, is proposed as a 20-year AI/AL/XR research project) is critical. Having a central place to make, and to physically understand with our bodies what is made, is important. All good making includes these approaches: queer, slow, contextual.

    Themes around the digital, biological, intelligence, life, and knowledge continue to emerge, and most projects directly connect the digital with the natural. Feminist Search can incorporate these approaches in exciting ways. For example, could search be triggered by human movement, or by a leaf blowing in the wind? While the current focus is on community-created approaches to the design of search, what might a collaborative search between humans and nature look like?

  • @Christine.Meinders @SarahCiston @CatherineGriffiths, thank you for this rich discussion of Feminist AI. I found especially compelling the way you frame the use of Feminist Search as an opportunity for communities to discuss, to reflect, to question the deployment of rapidly advancing technologies of artificial intelligence, following the ethos of the larger Feminist AI praxis you have modeled. It engages communal discussion around encoded processes that directly affect them or, perhaps better, us.

    When I said

    Also, could we perform a feminist code reading of this code?

    I meant what we are doing now, reading in the interpretive sense.

    So, can I ask a follow up on the point you make:

    The clustering algorithm (in this case svm, but it could also be k-means) will be used on both the classified images and the donated context for the image. This will allow us to potentially explore unknown patterns about different contexts of safety or danger in community contributed categories. In this current code, the most compelling idea is the number of clusters.

    Can you elaborate a bit on what makes this the most compelling idea and how it relates to the larger project of Feminist Search?

  • @markcmarino
    What’s particularly compelling about defining the number of clusters is that the simple act of changing it can change the outcome of something like Feminist Search. The code for this local version is a little limited, as it trains on a small dataset comprising only a few images. In this version, the submissions from the data donation portion of search are not present; however, if the data donation pieces were included (labels for type of safety or danger, images, and new classifier suggestions), they could affect how an image is categorized. Clustering according to these new features could generate new categories in different dimensions, which may offer different ways to frame, or new perspectives on, "safe" and "dangerous", as well as community-created categories.
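
    To make that concrete, here is a small illustrative sketch (invented features again, not the project's data): running the same k-means step with different cluster counts redraws the category boundaries, and with them the perspectives the search can surface.

        # Illustrative only: the same invented data partitioned under
        # different cluster counts yields different category structures.
        import numpy as np
        from sklearn.cluster import KMeans

        features = np.random.default_rng(1).random((20, 8))
        for k in (2, 3, 5):
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
            print(k, "clusters:", labels)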

    In this example we are talking about code, but the benefit of these long-term critical research projects is to look at different ways of searching, as highlighted in Algorithms of Oppression. One of the goals of Feminist Search is to engage in parallel research within and outside of tech, so investigating search through the act of “querying” a friend or community is just as important as understanding how prejudice is hard-coded into search.
