
2024 Participants: Hannah Ackermans * Sara Alsherif * Leonardo Aranda * Brian Arechiga * Jonathan Armoza * Stephanie E. August * Martin Bartelmus * Patsy Baudoin * Liat Berdugo * David Berry * Jason Boyd * Kevin Brock * Evan Buswell * Claire Carroll * John Cayley * Slavica Ceperkovic * Edmond Chang * Sarah Ciston * Lyr Colin * Daniel Cox * Christina Cuneo * Orla Delaney * Pierre Depaz * Ranjodh Singh Dhaliwal * Koundinya Dhulipalla * Samuel DiBella * Craig Dietrich * Quinn Dombrowski * Kevin Driscoll * Lai-Tze Fan * Max Feinstein * Meredith Finkelstein * Leonardo Flores * Cyril Focht * Gwen Foo * Federica Frabetti * Jordan Freitas * Erika Fülöp * Sam Goree * Gulsen Guler * Anthony Hay * SHAWNÉ MICHAELAIN HOLLOWAY * Brendan Howell * Minh Hua * Amira Jarmakani * Dennis Jerz * Joey Jones * Ted Kafala * Titaÿna Kauffmann-Will * Darius Kazemi * andrea kim * Joey King * Ryan Leach * cynthia li * Judy Malloy * Zachary Mann * Marian Mazzone * Chris McGuinness * Yasemin Melek * Pablo Miranda Carranza * Jarah Moesch * Matt Nish-Lapidus * Yoehan Oh * Steven Oscherwitz * Stefano Penge * Marta Pérez-Campos * Jan-Christian Petersen * gripp prime * Rita Raley * Nicholas Raphael * Arpita Rathod * Amit Ray * Thorsten Ries * Abby Rinaldi * Mark Sample * Valérie Schafer * Carly Schnitzler * Arthur Schwarz * Lyle Skains * Rory Solomon * Winnie Soon * Harlin/Hayley Steele * Marylyn Tan * Daniel Temkin * Murielle Sandra Tiako Djomatchoua * Anna Tito * Introna Tommie * Fereshteh Toosi * Paige Treebridge * Lee Tusman * Joris J. van Zundert * Annette Vee * Dan Verständig * Yohanna Waliya * Shu Wan * Peggy WEIL * Jacque Wernimont * Katherine Yang * Zach Whalen * Elea Zhong * TengChao Zhou
CCSWG 2024 is coordinated by Lyr Colin (USC), Andrea Kim (USC), Elea Zhong (USC), Zachary Mann (USC), Jeremy Douglass (UCSB), and Mark C. Marino (USC). Sponsored by the Humanities and Critical Code Studies Lab (USC), and the Digital Arts and Humanities Commons (UCSB).

The Original ELIZA in MAD-SLIP (2022 Code Critique)



  • edited February 2022

    Timothy Snyder has quite an interesting description of the Turing test in his article, And we dream as electric sheep: On humanity, sexuality and digitality:

    Turing’s imitation game, as he set it out in 1950, has two stages. In the first, we measure how well humans can distinguish between a woman and a man who is impersonating a woman. Then we see whether humans are better or worse at telling the difference between a woman and a computer imitating a woman.

    As Turing described it, three people would take part in the first stage of the game. In one room is the interrogator (C), a human being whose task is to adjudge the sex of two people in a second room. He knows that one is a man (A) and one is a woman (B), but not which is which. An opening between the two rooms allows for the passing of notes but not for sensory contact. The interrogator (C) poses written questions to the two other people in turn, and they respond.

    The interrogator (C) wins the imitation game by ascertaining which of the two is a woman. The man (A) wins if he persuades the interrogator that he is the woman. The woman (B) does not seem to be able to win.

    In Turing’s example of how this first stage of the game might proceed, the man (A) answers a question about the length of his hair by lying. The woman (B) proceeds, Turing imagined, by answering truthfully. She must do so while sharing space with a man who is pretending to be a woman (likely doing so by describing her body) and in the uncertainty that she is making her case, since she cannot see the interrogator.

    Now, asked Turing, ‘What will happen when a machine takes the part of A in this game?’ In the second stage, the man in the second room is replaced by a computer program.

    The imitation game recommences with a modified set of players: no longer three people, but two people and a computer. The interrogator in the first room remains a human being. In the second room are now a computer (A) and the same woman (B). In 1950, Turing anticipated that, for decades to come, human interrogators would more easily distinguish computers from women than they would men from women. At some point, he thought, a computer would imitate a woman as convincingly as a man could.

    In Turing’s article about the imitation game, to be male means to be creative and to be replaced by a computer; to be female means to be authentic and to be defeated by one. The woman (B) figures as the permanent loser. In the first stage, she plays defense while the male struts his creative stuff; in the second, when a computer has replaced the man, she must define humanity as such, and will eventually fail. The gender roles, however, could be reversed; the science fiction that grew up around Turing’s question has done so.

    But what does it mean to be human? Can we assess whether machines think without determining what it means for humans to do so? Turing proposed the interrogator, C, as an ideal human thinker, but did not tell us enough about C for us to regard C as human. Unlike B and A, who talk about theirs, C does not seem to have a body. Because Turing does not remind us that C has a corporeal existence, we do not think to ask about C’s interests. Cut off from A and B, an isolated C with a body might start thinking of what works best for C personally. Analytic skills alienated from fellow creatures have a way of serving creature comforts. Perhaps there is a lie that suits C’s body better than the truth?

    Without a body, C has no gender. It is precisely because we know the gender of A and B that we follow the conversation and the deception Turing recounted. The person playing C also would have a gender, and this would matter. Could a male A, in the first stage, ever fool a female C interrogator if he had to answer questions about the female body? Might a female C drop hints that a woman would catch but a man would not? Would that not be her very first move? In the second stage of the game, would a female C try to distinguish a computer from a woman the same way a man would?

    It is quite different to ask questions about what you are, as opposed to what you think you know. Might a male C be more likely to lose to a computer A than a female C, because male expectations of femininity are more easily modeled than actual femininity? In general, would not a computer program playing A try to ascertain the gender of C? Given that the Internet reacts to women’s menstrual cycles, this last seems plausible.

    To be sure, presenting C as pure mind is tempting. It appeals to reassuring presuppositions about who we are when we think. We need not worry about self–serving if we have no self, nor worry about weaknesses when we have no vulnerable flesh. With no body, C seems impartial and invulnerable. It would never occur to us that Turing’s version of C would use the computer for corporeal purposes not envisioned by the game, nor that the computer might take aim at anything other than C’s cerebrum. Turing granted that the ‘best strategy for the machine’ might be something other than imitating a human, but dismissed this as ‘unlikely.’ Here, perhaps, the great man was mistaken.

  • Snyder goes on to say (and this is the link to ELIZA/DOCTOR),

    As early as the 1960s, people were speaking of such a reductive Turing Test with two players, a single stage, and zero reflection. Tellingly, the first program said to have passed the Turing Test in this form, a decade or so after the mathematician’s death, was a fake psychoanalyst. Rather than answering the questions posed by the human interrogator, the program ELIZA reformulated them as curiosity about the interrogator’s own experiences and feelings. When ELIZA worked as intended, humans forgot the task at hand, then rationalized their thoughtlessness by the belief that the computer must have been a human thinker. And so arose the magic circle of emotional targeting and cognitive dissonance that would later structure human–digital interaction on the Internet.

    The programmer’s idea was that people on the psychoanalyst’s couch tend to believe that there is some reason why they are there. They project meaning onto a psychoanalyst’s question because they wish to think that the expert has reasons behind the inquiries. But it could be, as with ELIZA, that a therapist does not think with a human purpose but only mindlessly engineers emotions — that it has no why, only a how.

  • edited February 2022

    @warrensack (and also responding to Timothy Snyder as quoted by @davidmberry), it can of course be interesting to discuss gendered variants of the standard "Turing Test", but the evidence that Turing himself had something like the standard [computer imitating human rather than specifically woman] setup in mind is extremely strong.

    The evidence on the other side is more or less confined to the fact that Turing starts off with a man/woman imitation game, and then says "What will happen when a machine takes the part of A [the man] in this game?" If the article had stopped at that point, we would indeed have to conclude that his idea was to have the machine pretending to be a woman. But even so this setup is not well-defined - for we are not told whether or not the interrogator is informed of the substitution (and this makes a huge difference to the appropriate strategy).

    You claim that this first sentence is "the most important one" in the paper for defining Turing's game/test, but this claim seems to me to be gratuitous. You suggest that "To argue that Turing didn't state the actual problem he wanted to work on at the start of the paper is simply to argue that Turing was a bad writer", but I disagree. First, it is entirely common for philosophers who generally write extremely well - David Hume would be my paradigm - sometimes to express things in a confusing (and probably confused) way. Secondly, a fair amount of carelessness is evident in Turing's paper, and Robin Gandy told us that he wrote it "quickly and with enjoyment", much less carefully than his mathematical papers. Thirdly, there is no reason for privileging the start of the paper - when Turing is introducing his game/test by analogy with the gendered imitation game - as the place where accuracy is most likely to be found. On the contrary, that is the point where confusion (between the two different setups) is most likely, because he is attempting to draw an analogy between them. And it would speak worse of Turing if he was misdescribing his intended setup later in the paper, at the points when he is focusing purely on that setup, and is explicitly trying to clarify it more precisely. He signposts that clarification twice ...

    (a) At the beginning of §3 he says that the question "will not be quite definite" until the term "machine" has been specified. Just before this, he has said "it will be assumed that the best strategy [in programming the computer] is to try to provide answers that would naturally be given by a man". Note - "a man", not "a woman".

    (b) Then at the end of §5 he specifies it more precisely, ending by asking whether a computer C "can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man". Note - "a man", not "a woman".

    So we already have a significant weight of evidence against the claim that the first introduction of the game/test conforms to Turing's settled intention. And in my earlier contribution I mentioned a number of other points which you did not allude to.

    (1) Turing's first hint of an imitation-style test in 1948 involved chess, not gender.

    (2) In the 1950 paper, the only gender-related question arises in the context of his illustrative "imitation game". Once he introduces the computer, the questions concern skill at poetry, arithmetic, and chess, with no hint of gender relevance (see especially §2, p. 442; §6.4, p. 452; and §6.5, p. 454 - page numbers from "The Essential Turing").

    (3) In §2, entitled “Critique of the New Problem” and starting immediately after he has introduced the computer as a participant, Turing six times talks explicitly of a “man” – even implying that the machine’s obvious strategy is to imitate a man – and makes no mention whatever of women or the gender issue (pp. 442-3).

    (4) In the remainder of the paper, women are mentioned only in the context of an imagined “theological objection” (§6.1 p. 449), while the words “man” or “men” occur a further 30 times (probably intended gender-neutrally).

    You object strongly to interpreters treating the game/test as a "test" - i.e. something that can be passed or failed - but that's exactly what Turing did. For example, at p. 443 he talks of "our criterion for 'thinking'", and at p. 448 he embarks on a section describing "contrary views" to his positive answer to "our question, 'Can machines think?'". Here he is clearly dealing with those who aim to answer negatively, saying that no machine could pass the test (which is not to deny, of course, that they could play the game, albeit relatively poorly - but the criterion/test is passed by playing it well or "satisfactorily"). Several other examples could be given from the paper. By contrast, there is no other place in the paper, after the first introduction of the game/test, when he treats it as a three-player game (with interplay between all three), and the examples in the 1950 paper almost all take the form of viva-voce questioning, a pattern which - as I remarked before - continues in 1951 and 1952.

    Indeed in the 1952 description of his setup, which seems to me to be the most developed, Turing says: "I would like to suggest a particular kind of test that one might apply to a machine. You might call it a test to see whether the machine thinks, but it would be better to avoid begging the question, and say that the machines that pass are (let's say) 'Grade A' machines. The idea of the test is that the machine has to try and pretend to be a man, ..." A bit later he says "the machine would be permitted all sorts of tricks so as to appear more man-like, ... Well, that's my test." (p. 495).

    This is a considerable body of textual evidence, all in the same direction. To insist that it is outweighed by a single short paragraph (or even one sentence) in Turing's first section seems to me to be deeply implausible.

    We might agree more on the significance of the "Turing Test", whose influence on the development of AI was I think rather unfortunate. Chatbots are fun, and one can learn from them, but the idea that they become closer to genuine AI the longer they manage to fool an "average interrogator" is simply false. Turing might have been under this misapprehension when he wrote the paper in 1950, but if so, and he had lived to see Weizenbaum's 1966 paper, I think he'd have quickly changed his mind.

  • @warrensack thank you for replying. I won’t keep asking questions, but I’ll share my view:

    I agree that “...the role of the woman is still intrinsically a part of Weizenbaum's project,” but only in the sense that JW was crafting a program that could converse using natural language and could be “taught.” If there is a significance in the gender of that role in relation to JW’s program, I don’t know what it is.

    If the gender of the Eliza character was instead male I don’t think it would require any part of the ELIZA code, DOCTOR script or 1966 CACM paper to change.

    JW worked on ELIZA from 1964 to 1966. My Fair Lady was in the cinemas in 1964. The role of Eliza in that story is in some respects analogous to JW’s work. It seems to me likely that the name ELIZA was chosen on a whim.

    JW says of ELIZA:

    “Its name was chosen to emphasize that it may be incrementally improved by its users, since its language abilities may be continually improved by a "teacher". Like the Eliza of Pygmalion fame, it can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright.”[1966 CACM paper]

  • I've received permission to make two papers by Edwin F. Taylor public. These deal with uses of ELIZA in education, and (as we have discussed above) relate to a later version of ELIZA that seems to be able to interpretively execute snippets of MAD-SLIP code in addition to the usual ELIZA script. See:

  • It seems that the ELIZA discussion has run its course, at least for the moment. We didn't really get into much of the MAD-SLIP code itself. In part this may be because of the obscurity of MAD-SLIP.

    From a purely software point of view, I thought that the back-and-forth between Anthony Hay and Arthur Schwarz regarding the details of SLIP, and Arthur's on-the-spot partial reimplementations of the missing SLIP functions, were particularly interesting.

    It also became apparent, part-way through the discussion, that there was a subsequent version of ELIZA -- the one used in Taylor's publications on using ELIZA in a tutorial setting -- and only a few days ago I received permission from the copyright holders to make those documents public. (See my Feb 4 post for links.)

    One of these, the earlier one, from 1967, goes into some line-by-line detail on the (apparently) interpreted MAD-like code used in these tutorial settings, and this is clearly a significant extension beyond the original ELIZA, permitting what appears to be fully interpreted MAD! It would be interesting to see how that worked, but someone would need to go back into the archives to find out. (I do have some unpublished images of other parts of the archive, near the ELIZA code itself, and although some of this tutorial I/O was there, I don't remember seeing anything likely to constitute a full-blown MAD interpreter!)

    Interestingly, Taylor only cites JW's 1966 CACM paper, but this interpreter functionality is not described there. Nor, as it turns out, is the ability to teach ELIZA by adding new rules at run-time, except tangentially. This also came up in conversation as a direct result of reading it straight out of the code!
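    The run-time teaching read out of the code is only tangentially described in the 1966 paper, but the underlying mechanism can be sketched. Here is a minimal Python paraphrase (the names and rule format are my own invention for illustration, not Weizenbaum's MAD-SLIP): a script is essentially a keyword-indexed rule table, so "teaching" amounts to appending to that table while the program runs.

    ```python
    # Hypothetical sketch (illustrative names, not Weizenbaum's code):
    # the script is a keyword-indexed table of response rules, and
    # "teaching" appends new rules to it at run-time.

    script = {
        "MOTHER": ["TELL ME MORE ABOUT YOUR FAMILY"],
    }

    def teach(keyword: str, response: str) -> None:
        """Add a response rule for a keyword while the program is running."""
        script.setdefault(keyword.upper(), []).append(response)

    # A user (or "teacher") extends the script mid-session:
    teach("dream", "WHAT DOES THAT DREAM SUGGEST TO YOU")
    ```

    Everything else about the rule format is elided here; the point is only that nothing prevents the table from growing during a conversation.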

    There is another publication that I'm trying to get from MIT that is apparently a very detailed ELIZA programmer's guide. When I get that, I'll share it here, and maybe that will rekindle some discussion.

  • @jshrager, is the source of the HASH function one of the things you will look for when you go back to JW's archive?

    Going back to low level stuff, I noticed this at the bottom of the ELIZA source:

                   R* * * * * * * * * * SCRIPT ERROR EXIT                           001980
    NOMATCH(1)      PRINT COMMENT $PLEASE CONTINUE $                                002200
                    T'O START                                                       002210
    NOMATCH(2)      PRINT COMMENT $HMMM $                                           002220
                    T'O START                                                       002230
    NOMATCH(3)      PRINT COMMENT $GO ON , PLEASE $                                 002240
                    T'O START                                                       002250
    NOMATCH(4)      PRINT COMMENT $I SEE $                                          002260
                    T'O START                                                       002270

    There are two places in the code where these NOMATCH labels are jumped to via T'O NOMATCH(LIMIT). (T'O is an abbreviation of TRANSFER TO.)

    Under normal circumstances, if no keyword is found in the user's input and no memory is available, or it's not time for the recall of a memory, ELIZA selects one of the messages in the NONE list in the script. But there are circumstances where a keyword has been identified, yet none of the patterns associated with that keyword match the user's input. This shouldn't happen if the script is correctly designed, but if it does, one of these 'hard-coded' messages is displayed. Which one is displayed depends on the value of the variable LIMIT at the time of the error.

    Rather than display a message such as 'script error', JW chose to hide the problem from the user, although I'm sure he would have recognised a 'HMMM' for what it really meant.
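    For readers who don't follow the MAD listing above, the error exit can be paraphrased in Python (this is my paraphrase, not the original logic): a keyword matched but none of its patterns applied, so one of four canned replies is selected by the current value of LIMIT.

    ```python
    # Python paraphrase (not the original MAD-SLIP) of the NOMATCH error
    # exit: a keyword matched but no pattern applied, so a canned reply
    # is chosen by the current value of LIMIT.

    NOMATCH_MESSAGES = {
        1: "PLEASE CONTINUE",
        2: "HMMM",
        3: "GO ON , PLEASE",
        4: "I SEE",
    }

    def nomatch(limit: int) -> str:
        """Mirror of T'O NOMATCH(LIMIT): return the hard-coded reply."""
        return NOMATCH_MESSAGES[limit]
    ```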

  • @jang apologies if you've already answered this (I'm making my way through this HUGE thread). You wrote:

    Not all the logic is captured in this DSL: there's some explicit (still rudimentary) parsing and transformation machinery in the mad-slip code that looks explicitly written to support DOCTOR.

    Do you have any examples? Does that suggest that Weizenbaum was already thinking about DOCTOR when writing ELIZA, or that these may have been added elsewhere?

    Also, are all my [ ]s actually ( )s?

  • @anthony_hay @aschwarz Does anyone have any ideas regarding how the MAD interpreter used in the scripts in the Taylor documents might work? The only thing I can think is that they wrote an entire MAD-SLIP (or maybe just MAD) interpreter in MAD-SLIP. I'm specifically thinking of this document: Edwin F. Taylor (1967) The ELIZA Program: Conversational Tutorial. IEEE International Convention Record, Vol. 5, Part 10, which, from page 5 onward, describes the programming language to a large extent but doesn't say how it is implemented.

  • @markcmarino Second things first: yes, as far as I can tell, all the []s should be ()s.

    In terms of the parsing: I'm referring to the explicit clause location by separation on ',', '.' and 'but'. This seems to me to be aimed directly at making eliza/doctor's responses to fairly standard questions/statements more reasonable: increasing the chance that it can produce a sensible-seeming response to a multi-clause sentence, given that we tend to front-load the important things in multi-clause statements. Much like the use of the ymatch function, it strikes me that this is something useful for "doctor" that was built at a different layer because that was the simplest place to put it.

    I don't know how much such clause splitting would work for an arbitrary user - although the stories about late-night unwitting conversations with the program highlight that it might be more effective than it has any right to be! I wonder how much of the effectiveness of this parsing scheme is down to users being primed by their own expectations of communication over the same technology - either by seeing other eliza dialogues as examples, or simply because a teletype is a very clunky machine for use as a communication medium. Low baud rates and mechanical keyboards, perhaps, prompt simple sentences and assertions.
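    One plausible reading of that clause handling can be sketched in Python (my reconstruction for illustration, not a transcription of the MAD-SLIP; the keyword set is supplied by the caller): scan word by word, and at each delimiter either discard the text so far (if no keyword has appeared yet) or discard the remainder (if one has), so that the surviving clause is the one containing a keyword.

    ```python
    import re

    def select_clause(text: str, keywords: set) -> str:
        """Sketch of clause splitting on ',', '.' and 'but' (one plausible
        reading, not the original code): keep only the clause in which a
        keyword first appears."""
        kept = []
        seen_keyword = False
        for token in re.findall(r"[A-Za-z']+|[,.]", text.upper()):
            if token in {",", "."} or token == "BUT":
                if seen_keyword:
                    break      # keep the clause that contains the keyword
                kept = []      # no keyword yet: drop the leading clause
            else:
                kept.append(token)
                if token in keywords:
                    seen_keyword = True
        return " ".join(kept)
    ```

    For example, `select_clause("It is raining, but I am sad today.", {"SAD"})` keeps only `"I AM SAD TODAY"`, dropping the front-loaded but keyword-free clause.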

    (Some of the awkwardness in spoken user interfaces comes from the fact that a lot of the things we say have drastically different structure to written communication; if eliza were plumbed into voice recognition, the results might seem far less natural*.)

    (* fwiw, I knocked together an Alexa "skill" that plugged a Z-machine interpreter into the Amazon APIs, in order to try some of the classic games through that medium. The results were hilarious, but far from natural or usable.)

  • Thank you @jang

    I want to return our attention to the question of gender with respect to ELIZA, for it is something I have been thinking about for a long time, at least since my doctoral studies. And this is a golden opportunity to discuss that question on the level of the code.

    My initial reaction to the question about whether we would see any hints of gender in the ELIZA code was that it would be highly unlikely, as the code would primarily contain operations for recognizing and prioritizing keywords, decomposing and reassembling phrases, matching keywords and delivering responses, and other functions of the software, such as the EDIT function that JW describes, which I find fascinating.

    And yet... My mind keeps returning to the question of gender in ELIZA/DOCTOR as one worthy of discussion. And while, with @warrensack, I do believe that gendered content is a high-level aspect of the program, a kind of epiphenomenon (as is gender itself, at least in Butler's formulation), I do think it's worth asking where gender emerges out of the code, since ELIZA/DOCTOR (the two systems operating together) seem so fraught with gendered implications.

    So, the first thing I think it is worth mentioning is that the hunt for GENDER does not have to be a hunt for GENDER bias. JW does not have to be a misogynist to have implemented some gendered patterns into this program. And, I wouldn't even begin with the presumption that we are looking for gender bias. I am only looking for gender itself, a construct that is in a current process of disintegration at least in North American culture. So, if we move this conversation out of the boxing ring and into the parlour, where it belongs, perhaps we can see some signs of gender in the code...

    First, what is gender? In the first part of the 20th century, gender was a binary division used to indicate two categories of behavior. Those categories of humans are separate from biological sex and so they are largely symbolic, the accumulation of iterated acts read by other people (again, I'm drawing on Butler here). So, where is gender in conversation?

    Throughout the Gutenberg age, or parenthesis, authors (both men and women) have written books of comportment or etiquette, instructing how proper men and, more often, women (or young ladies) should behave. These often contain clear instructions on conversation.

    While Rogerian psychotherapy offers one model by which we can read ELIZA/DOCTOR, certainly these guides for gendered speech offer another, not because Weizenbaum was trying to follow them when constructing his program but because they offer a sense of how conversational patterns are instructed and read with regard to gender.

  • edited February 2022

    Let me continue that conversation with just one example:

    Here is an excerpt from one conversational guide. If we really wanted to be careful, we would perhaps have to find an example from a German text around the time Weizenbaum was growing up, though he is publishing in the age of Emily Post.

    Here is an example from The Ladies' Book of Etiquette and Manual of Politeness by Florence Hartley from 1890. (Yes, I realize this is an outdated guide, but it offers a useful example.)

    If your companion relates an incident or tells a story, be very careful not to interrupt her by questions, even if you do not clearly understand her; wait until she has finished her relation, and then ask any questions you may desire. There is nothing more annoying than to be so interrupted. I have heard a story told to an impertinent listener, which ran in this way:—

    "I saw a fearful sight——"


    "I was about to tell you; last Monday, on the train——"

    "What train?"

    "The train from B——. We were near the bridge——"

    "What bridge?"

    "I will tell you all about it, if you will only let me speak. I was coming from B——"

    "Last Monday, did you say?"

    and so on. The story was interrupted at every sentence, and the relator condemned as a most tedious story-teller, when, had he been permitted to go forward, he would have made the incident interesting and short.

    Never interrupt any one who is speaking. It is very ill-bred. If you see that a person to whom you wish to speak is being addressed by another person, never speak until she has heard and replied; until her conversation with that person is finished. No truly polite lady ever breaks in upon a conversation or interrupts another speaker.

    Now from this example, we could argue that ELIZA is very ladylike because it lets the conversant type in their entire statement. On the other hand, as we have noted, since (one version of) ELIZA only takes the statement up until the comma, period, or but, perhaps it is not so ladylike.

    I offer this example to say that when we look for something like gender, or perhaps the intersection of race, class, and gender that such books of comportment instruct (which is itself a performative process), we can think about the way patterns of interaction have been socially encoded, rather than fearing that someone is going to cancel Weizenbaum merely because he chose the name of his program from such a fraught literary work. To me, that choice, along with his writings, actually suggests he was more cognizant of the many symbolic realms at play.

  • edited February 2022

    @markcmarino we have been trying to get hold of:

    Hayward, P. R. ELIZA Scriptwriter's Manual. Education Research Center, Massachusetts Institute of Technology: Cambridge, Mass. March 1968.

    If we think about the performative in relation to ELIZA scripts, this may open interesting discussions about how the ELIZA group thought one should construct scripts in particular ways to perform the scripted conversations.

  • @davidmberry I hope we get it.

    Though, I should mention that by "performance," I was referring to iterated acts from Butler's concept of performativity, which itself is drawn from Austin's performatives.

    Along these lines, thinking about the ways that gendered acts (and again, I'd add acts that also signal socio-economic status and even race and ethnicity intersectionally) accrue and accumulate to create a sense of the identity of the person.

    I'd also like to emphasize another point (one I also argue in my dissertation): the conversational model reminds us that the process of performativity is dialogic. It is created by the production of symbols that are read and interpreted by an observer who applies whatever social norms they hold true. Conversants with ELIZA as DOCTOR who gender it as a man demonstrate their role in this process -- which again speaks to the point that the software's perceived gender is not merely a function of the system or script but of its connection to, and completion by, those who converse with it.

  • I wonder if all Rogerian therapists of the time were men. (Likely all therapists of the time were men!) I don’t have any idea how realistic DOCTOR is to actual Rogerian therapy, although I was struck that the dramatization of therapy of that era, as depicted in Mad Men, seemed to me incredibly ELIZA-like. Probably both the writers of Mad Men, and JW were working off the same caricature of Rogerian therapy.

  • @markcmarino: Mark, your suggestion that we concentrate on the performance of gender rather than gender bias is inspiring. Thank you!

  • @peter.millican and @anthony_hay: Thank you for responding so thoroughly to my comments. I see, however, that my comments moved the conversation in a direction that I did not intend to pursue. @peter.millican, your textual analysis of Turing's papers provides ample evidence of Turing's intentions regarding gender, and yours, @anthony_hay, provides the same for Weizenbaum. But, a propos of @markcmarino's comments, I think if we are to pursue questions of gender we will need to widen our search beyond just the texts (code and prose) authored by Weizenbaum and Turing (Mark's citation of The Ladies' Book of Etiquette is exemplary). We should also consider cultural formations (historical and contemporary) that influence an author in ways that may not be intentional but may be, instead, linguistic or, to speak like a sociologist, "social facts" (e.g., that "man" was intended as a gender-neutral term even if we can see now, after years of feminist work, that it is not at all neutral; or that Eliza Doolittle was a well-known character), or that are unconscious expressions rather than the author's intentions. Without getting Freudian about this, one can see empirical evidence of unconscious expression in a stylistic analysis of a text, as has been done for decades in the digital humanities by looking at, for instance, the number of times an author uses specific pronouns or other vocabulary terms compared to their usage in a larger corpus of written materials. In other words, I think if we are to make any progress on an analysis of gender we need to pursue not just a given author's intentions as expressed in a small set of texts, but a larger corpus of evidence too.
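    The stylometric comparison described here can be sketched minimally in Python (the function name and pronoun list are illustrative, not a standard tool): compute each pronoun's rate per thousand words in a text, which can then be set against the same rates in a larger reference corpus.

    ```python
    import re
    from collections import Counter

    def pronoun_rates(text, pronouns=("he", "she", "his", "her", "him")):
        """Rate per 1,000 words of each pronoun in a text; comparing
        these rates against a reference corpus is the kind of
        stylometric evidence discussed above."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        total = len(words) or 1  # avoid division by zero on empty text
        return {p: 1000 * counts[p] / total for p in pronouns}
    ```

    The same function applied to an author's text and to a comparison corpus yields rates that can be compared directly, which is the basic move behind the pronoun-counting studies mentioned.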

  • @jshrager and @davidmberry: The materials you are finding in the MIT archives are incredible! I know this is supposed to be just a discussion thread, but you are opening what could be an entirely new long-lasting line of research on these topics with your finds. Thank you!

  • Another way of expressing this idea, @warrensack , is I suppose that we should do a symptomatic reading of ELIZA/DOCTOR rather than a purely or mainly intentional one? That seems a productive way to carry out readings of many sort of texts, and of code & systems as well.

  • @nickm: Yes! Maybe we should pursue both intentional and symptomatic readings.

  • edited February 2022

    Among the sorts of reading -- Close, Intentional, and Symptomatic -- how would one treat the fact of the author's evolving intentions? Weizenbaum's relationship to ELIZA changed over time. (And I'm sure many other authors have experienced this as well -- I've experienced it with nearly every one of my own published works, perhaps not so violently as JW.) Anyway, how do you understand (if at all) this evolution in an intentional reading?

  • edited February 2022

    @warrensack: Thank you for your response, which raises interesting issues. I too favour stylometric analysis as a way to shed light on texts (having long taught it as a way of interesting humanities students in computing). But many scholars like exploring ideas in a way that is less firmly tied to objective data, leading to multiple possible "interpretations" that need not necessarily be seen as in competition. That enables literary scholars to proceed without having to pretend that their analyses have a firm scientific basis, so for example they can pursue Freudian or Marxist interpretations without feeling the need to take heed of more empirically rigorous modern work (e.g. in evolutionary or cognitive science) that might conflict with these older, more speculative theories. Scholars can legitimately differ in their preferences here. My own strong preference, when interpreting Hume, Turing, or Anselm, is to be as "objective" as possible, sticking to textual and historical data with relatively little attempt to mould an interpretation or narrative. But the scholarly world is richer for having a variety of approaches.

  • @jshrager: As @peter.millican describes, it is possible to analyze a text using a variety of methodologies, some of which are more empirically driven than others. One could, of course, not grapple with intentions at all and, instead, for instance, situate a text and its repetitions (e.g., reimplementations of ELIZA) in a wide network of texts: Which other texts are cited by the text to be analyzed? Which other texts that followed the analyzed text cite the analyzed text (e.g., who comments on Weizenbaum's book)? On the other hand, if one wants to analyze a text in light of the author's intentions one generally needs to do a lot of archival work -- to read not only the author's other works, but those of the author's collaborators, peers, and students (like what you have been putting together here!) -- and conduct a set of interviews, ideally with the author but if that is not possible, as is the case here, then with their collaborators, friends, etc. to find empirical evidence to infer what they were thinking at the time and how that thinking changed over time.
