
Code Critique: Possibility and Injustice/Bias in AI Transfer Learning

Author/s: TensorFlow
Language/s: Python, NumPy, TensorFlow, Keras, TensorFlow Hub
Year/s of development: Current Learning Materials at tensorflow.org
Location of code: https://www.tensorflow.org/tutorials/images/transfer_learning_with_hub

Overview:

In machine learning with neural networks, the practice of transfer learning reuses a network trained on one task for another, related but different task, without fully retraining the original network. This approach works well in practice, and it seems to mirror how humans draw on previous experience to learn new tasks. The learning of the new task in ML could then be thought of as “emergent” behavior, as the network classifies new input data under new categories.

There are many possibilities for philosophical reflection on this emergent behavior, but transfer learning can also demonstrate especially clearly how machine learning can be biased in potentially dangerous or unjust ways. In fact, some of the early papers on multi-task and transfer learning report that learning outcomes improve when ML learns with “bias”, a fact wholly accepted by the authors (see, for example, R. Caruana, “Multitask Learning”, Machine Learning, vol. 28, 1997, p. 44).

That some bias is necessary in learning or knowledge production is an insight philosophers have also arrived at through science studies and much contemporary thought. But this does not remove the potential danger or injustice. Consider, as an example of transfer learning’s possibility, multilingual language learning; similarly, consider, as an example of its injustice, facial recognition. These possibilities and injustices are present in neural networks in general. The code I would like to consider, however, dramatizes this bias more fully.

The code to consider comes from an ML tutorial on the TensorFlow.org website. TensorFlow is a higher-level programming framework for neural network-based ML. Interestingly, this tutorial uses TensorFlow Hub, a repository of reusable, pre-trained models. In some ways, this repository turns the central gesture of transfer learning into a software platform of its own.

To demonstrate the disconnect and potential bias between the classifying task of the originally trained network and the transfer network applied to a new, related task, consider first of all that the pre-trained model from this repository is loaded and configured with just three lines of code:

import tensorflow as tf
import tensorflow_hub as hub

# Pre-trained MobileNetV2 image classifier from TensorFlow Hub
classifier_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/2" #@param {type:"string"}

IMAGE_SHAPE = (224, 224)  # input resolution the model expects

classifier = tf.keras.Sequential([
    # wrap the Hub model as a single Keras layer; (3,) adds the RGB channels
    hub.KerasLayer(classifier_url, input_shape=IMAGE_SHAPE+(3,))
])

Secondly, near the beginning of the tutorial, after the transfer-learned network is established, a photograph of Grace Hopper, the famous woman in technology, is fed in as test data. The bias of the original network shows in the fact that the new network classifies the image as “military uniform” (which Dr. Hopper is wearing) rather than “Grace Hopper” or “important early computer scientist”, etc.
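For reference, the classification step looks roughly like the following. This is a sketch reconstructed from the tutorial; the image and label-file URLs are those the published tutorial uses, and the exact preprocessing may differ slightly by version:

import numpy as np
import PIL.Image as Image

# Download and preprocess the Grace Hopper photograph to the model's input size
grace_hopper = tf.keras.utils.get_file(
    'image.jpg',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
grace_hopper = Image.open(grace_hopper).resize(IMAGE_SHAPE)
grace_hopper = np.array(grace_hopper) / 255.0

# Run the pre-trained classifier and take the highest-scoring ImageNet class
result = classifier.predict(grace_hopper[np.newaxis, ...])
predicted_class = np.argmax(result[0], axis=-1)

# Look up the class name in the ImageNet label file
labels_path = tf.keras.utils.get_file(
    'ImageNetLabels.txt',
    'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
print(imagenet_labels[predicted_class])  # prints "military uniform", not "Grace Hopper"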

Following through the tutorial, a pre-trained network is loaded for a second example and its network weights are explicitly set not to be trained further (the whole point of transfer learning is to not have to retrain them):

feature_extractor_layer.trainable = False  # freeze the pre-trained feature extractor's weights
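For context, this feature_extractor_layer is itself built from a Hub feature-vector model, along the following lines. This is a sketch based on the tutorial's MobileNetV2 example; the exact module URL is an assumption and may differ by tutorial version:

feature_extractor_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/2"

# Unlike the full classifier above, this module outputs a feature vector:
# the original ImageNet classification head has already been removed
feature_extractor_layer = hub.KerasLayer(feature_extractor_url,
                                         input_shape=(224, 224, 3))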

The original network’s final layer is removed, and a classifying layer (a “classification head”) for recognizing flower species is configured as the new final set of nodes:

from tensorflow.keras import layers

model = tf.keras.Sequential([
  feature_extractor_layer,                                     # frozen pre-trained feature extractor
  layers.Dense(image_data.num_classes, activation='softmax')   # new classification head for flower species
])

Until the network is modified further in the tutorial by adding and training this classification head, the flower detection is not as “accurate”. So in this case, the neural network is potentially unjust until it is accurate. That the recognition is at first inaccurate shows the limitations of transfer learning. But after the classification head is added and trained, the network identifies most of the flowers accurately. Herein lies transfer learning’s philosophical possibility, though this possibility has its own limitations.
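For completeness, training the new head proceeds roughly as follows. This is a sketch of the tutorial's general pattern; the optimizer, loss, and epoch count may differ in detail, and image_data is the tutorial's flower image generator:

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss='categorical_crossentropy',
    metrics=['acc'])

# Only the new Dense head is trainable; the frozen feature extractor's
# weights are left untouched while fitting on the flower images.
history = model.fit(image_data, epochs=2)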

Questions

How does the TensorFlow Hub model repository both demonstrate an AI of possibility as well as injustice?

Does transfer learning effect a truer “reuse”, much more powerful than traditional reusable software components and libraries?

If ML/AI can be unjust, despite the role of bias in all learning, is this because it is tied to software and legal policy that impinges upon democracy (by excluding some members of a commonwealth)?

Are there limitations to looking at transfer learning as emergent, given that a learning network, while not being a simulacrum of a human brain, could nevertheless be a mirror of “Reason” in a way that would be problematic for 20th-century critiques of Enlightenment (e.g., Adorno, Horkheimer, et al.)?

Comments

  • Thank you for sharing this very interesting example. Being able to walk through the tutorials and experiment with it hands-on really adds something to the example.

    Could you articulate a bit more what "just" vs "unjust" behavior looks like for you in the context of these particular models / algorithms? For context, one well-known example of algorithmic bias in popular culture is racist face recognition -- the face recognition camera tracks a white face, but not a black face, so the model leads the application to offer features (unlocking, autofocus, et cetera) unequally, offering services to some kinds of bodies rather than others. A laptop can unlock when it sees its owner, but this works frequently for white bodies and seldom for black bodies; this is inequitable and unjust.

    In your initial example, is Dr. Hopper not recognized while Alan Turing is recognized? Or is Hopper recognized as a military uniform, while men in military uniforms are recognized in additional ways that women are not? Or is the prioritization of some recognition categories over others (for example, systematically identifying object categories such as military uniforms rather than personal identities) being done in an even-handed way, yet still evidence of a cultural bias about what counts as worth recognizing?



    A secondary location for the code -- not executable, but with some attribution / blame and version history -- is here:

    https://github.com/tensorflow/docs/blob/66d51334e055b08affd272bcbb204c368fc57be7/site/en/tutorials/images/transfer_learning_with_hub.ipynb

    I'm not sure that this gets to the bottom of who the tutorial author(s) are -- from a quick glance it seems like much of the material was perhaps imported at some point from a previous source? -- but apart from being a product of the organization, it might have been predominantly written by only a few specific people. Documentation often is.

  • @jeremydouglass , thank you for the comments and the Github link.

    With this particular model, the “unjust” behavior stems from priorities of what counts to recognize. For example, I don’t think this particular pre-trained model would recognize Alan Turing either, as it likely wasn’t trained on a facial recognition dataset (let’s hope not!).

    My point in contrasting the label of the uniform versus the identity was to suggest that the algorithm decides in a seemingly arbitrary fashion, because the model/network classifies input images while acting as a black box, where the original training images are unknown to the programmer.

    I could see this network labeling Dr. Turing as “mathematician” or “professor” instead of his style of clothing. This would point to gender bias in the original training data, in which men had more often been assigned these labels by the researchers who created the reusable model/network.

    As regards facial recognition harboring bias, it is not out of the question that transfer learning (though not necessarily the network of the tutorial) could be implicated here too. If the number of faces entered into a model were enormous, and the faces were labeled with proper nouns/identities, the technical construct of transfer learning with pre-trained data would work as an algorithmic approach. It would in this case be possible to reuse the network for facial recognition in a variety of human communities, albeit problematically as surveillance, etc.

    Note that this is a different facial recognition problem from tracking a face to unlock a device (unlocking the device is a binary classification, for example).

    As far as “just” goes, I wonder if the tutorial only approaches this when it turns the model into a classifier for flowers, which is where the “possibility” comes in (since it largely identifies them). That is, it does less violence once the researcher has actually decided to do something specific. Maybe machine learning applications need to heed the cautions of science studies and stick to particulars and local knowledge. This imperative certainly helps reduce injustice/bias/violence in other types of research production.
