In some sort of relationship to my dissertation, which explores the ideology behind codic language, I started to code. What I haven’t figured out quite yet, and what I want to explore, is what this code is. Is it software? Almost definitely not… Is it art? Well, maybe… Is it criticism? Again, maybe… Is it experimentation? Yes, well, maybe…
I’ll talk more about my particular situation below. But to zoom out, these sorts of questions apply very broadly to the humanities. The humanities of course are more than one thing, but if I might hazard a relatively common idea of what brings us together, it would be the hermeneutic method. The 2010s have seen the proliferation of various humanities “labs,” as well as software- and media-making projects amongst humanists, so much so that many have reacted with some degree of alarm, as funding is supposedly drained from other humanities-oriented activities. To what degree those other research activities were always going to be defunded is unknowable, but it is incontrovertible that humanities funding for a particular computer-oriented niche of creative activity has gone way up, while other research activities have not seen a proportional rise.
Doubtless several people on this forum had their feathers ruffled by a 2016 article in the LARB on “A Political History of Digital Humanities” (https://lareviewofbooks.org/article/neoliberal-tools-archives-political-history-digital-humanities/). In that article, the authors speak about the material basis of this explosion of lab-type activity, and how that basis tends to undercut precisely the critical, hermeneutic method which ought to define the humanities:
“Those who wish to acquire a sizeable grant, and who do not have site-based research needs, must develop a compelling rationale to employ graduate students. One of the simplest ways to justify the need for graduate students is to set up a named lab—a lab that requires not just funding but continual funding, and whose students can work on an evolving list of projects. In turn, applicants must explain how graduate students’ research enhances their employability. This makes Digital Humanities labs especially attractive, and makes researchers feel as if they cannot win large grants without doing Digital Humanities.”
I am one of those graduate students. And one answer to “what is all this code I’ve been writing?” is: it is the meaningless precipitate of a system that funds my otherwise prose-based research.
But if the materialist study of ideology has taught me anything, it is that meaningless precipitates don’t stay meaningless for very long. And the circumstances in which a thing is generated are rarely the determinants of the ideology which that thing ultimately expresses. So here we are. We have tons of code projects. I think it’s safe to say that, at least in our little subdiscipline of critical code studies, we have no intention of giving up on the sort of critical hermeneutics that has always characterized the humanities. If we are going to make anything of these projects, we are going to have to somehow draw our codework into a broader hermeneutic methodology. To focus all this in a single question: how can the creation of media, especially of code, be hermeneutic?
The Noneleatic Languages
The rest of this post will be in relation to a collection of programming languages I’m developing. They’re on github here: https://github.com/ebuswell/noneleatic
If you’d like to compile them, there are instructions in the README. You need some sort of *nix development environment. macOS works fine; Cygwin should, too.
The background: Having sifted through the minutiae of the early creation of programming languages, I settled on the creation of the conditional branch as the ur-moment of a tendency in computer science to almost religiously separate code and state. In making this separation, programmers tried to make code capture the computation without recourse to the inherent unfolding, development, dissemination, etc. which constitute the actual steps taken in the calculation. Various programming languages have historically proliferated, but in some ways they have all been recapitulations of the philosophy of programming developed with the first insistence on a conditional branch statement in the machine language of the EDVAC.
The Noneleatic languages are a series of languages that take the other path. There is no conditional branch statement, and the programs can often and easily modify themselves in their execution. Although in certain ways this makes the code more obscure, in some ways the running code is more transparent than in conventional languages. I’ve called these languages “The Noneleatic Languages” in a playful, somewhat unhistorical contrast with the Eleatic pre-Socratic philosophers—Parmenides, Zeno, and others—who believed that in the true world change was impossible.
What I have so far is a virtual machine and its assembler. In the doc/ folder, you’ll find a complete specification of both. In the interest of space, I’ll go over just a few details of the machine and assembly language here. There is a flat memory layout and no registers. The instruction pointer (the address of the operation currently executing) is always mapped to location 0, and the 80x25 character screen is mapped to 0xF000. Each instruction begins with a symbol, e.g. “+” for addition or “>” for shift right, followed by up to three letters indicating how the operands are to be interpreted: signed/unsigned, floating point/integer, etc. Uppercase indicates that the operand is directly a value; lowercase, that it is the address of a value. The destination always comes first, but otherwise the arguments are in mathematical order, e.g. “-uuu a b c” means “set a to b - c.”
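For readers who want a feel for this without building anything, here is a toy Python sketch of the idea of a memory-mapped instruction pointer. This is not the nevm instruction set (the opcodes, the 4-cell instruction width, and the memory layout below are all invented for illustration); it only shows how “location 0 holds the IP” makes jumping and self-modification the same kind of act as ordinary arithmetic:

```python
# A toy model (NOT the real nevm): memory is a flat list of ints,
# and cell 0 holds the address of the currently executing
# instruction. Writing to cell 0 is therefore a jump, and writing
# into the instruction stream rewrites the program as it runs.

HALT, ADD = 0, 1  # each instruction: (opcode, dst, a, b)

def run(mem, max_steps=100):
    while max_steps > 0:
        ip = mem[0]
        op, dst, a, b = mem[ip:ip + 4]
        if op == HALT:
            break
        if op == ADD:             # mem[dst] = mem[a] + mem[b]
            mem[dst] = mem[a] + mem[b]
        if mem[0] == ip:          # advance only if we didn't jump
            mem[0] = ip + 4
        max_steps -= 1
    return mem

mem = [4, 0, 0, 0,            # cell 0: the instruction pointer
       ADD, 11, 19, 20,       # patch cell 11 -- an operand of the NEXT instruction
       ADD, 16, 17, 0,        # after the patch: mem[16] = mem[17] + mem[18]
       HALT, 0, 0, 0,
       0, 2, 1, 18, 0]        # cells 16..20: result, 2, 1, 18, 0
final = run(mem)              # final[16] == 2 + 1 == 3
```

Watching mem change between steps is the point: the second instruction does not exist in its final form until the first one has run.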
In the examples/ folder, there are a number of example programs I’ve written. But to really understand what’s going on, you need to see the code develop itself over time, i.e. you need to run it.
Here is a program which simulates a conditional branch, just in case anyone thinks the lack of a conditional branch means the language isn’t Turing complete. I’ll give you the complete code listing, sans comments, then go over the execution line by line:
It begins at “start”, which is 0x10.
And here we are at the end, where the program has correctly printed out that 123 is less than 456.
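The trick generalizes: a conditional branch can always be synthesized by computing the next instruction address arithmetically from the comparison itself, and then writing that address into the instruction-pointer cell. A schematic Python sketch of the arithmetic (the function name and addresses are invented for illustration; this is not the neasm code):

```python
# Branchless "if": instead of a conditional-branch instruction,
# the jump target is computed arithmetically from the comparison
# and stored into the instruction-pointer cell (cell 0 in nevm's
# memory layout).

def branchless_select(a, b, target_if_less, target_otherwise):
    flag = int(a < b)  # 1 or 0: the comparison as plain data
    return flag * target_if_less + (1 - flag) * target_otherwise

mem = {0: 0x10}  # cell 0 standing in for the instruction pointer
mem[0] = branchless_select(123, 456, 0x40, 0x80)
# 123 < 456, so execution would continue at 0x40
```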
One more quick example, one that will be familiar to many, and then I’ll talk about this a little:
And let’s see tenprint.s in action:
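For anyone who doesn’t recognize it, tenprint is, as the name suggests, a take on the classic Commodore 64 one-liner 10 PRINT CHR$(205.5+RND(1)); : GOTO 10, which fills the screen with a random maze of diagonals. A rough Python equivalent of the visual effect (not of the neasm code):

```python
# The Commodore 64 one-liner that tenprint reworks:
#   10 PRINT CHR$(205.5+RND(1)); : GOTO 10
# It prints a randomly chosen diagonal forever, weaving a maze.
# Here / and \ stand in for the PETSCII diagonal characters.

import random

def tenprint_line(width=80, rng=random.random):
    return "".join("/" if rng() < 0.5 else "\\" for _ in range(width))

# To draw the maze: while True: print(tenprint_line())
```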
In many ways, I already had the findings on which this piece is based far before the piece was created. I looked at history and decided that the separation of code and state, especially through the conditional branch statement, was historically contingent. But was it really possible to have happened otherwise? What the noneleatic languages tell us is: yes. So here is the interpretation of the languages in terms of experiment.
But now the languages exist, and they seem, to me at least, to say more than “yes/no.” There is a remainder. The problem with the experimental interpretation, and the problem with the application of scientific models to humanities study more generally, is that there is more remainder than there is simple result. The remainder, the programs, the languages, the examples, now confront us as yet more objects to be interpreted. Have we gotten anywhere?
I’d like to think that we have, that maybe the dilation of the hermeneutic question into an answer deserves a subsequent explosion into another hermeneutic question. But if we are to find a way in which the creation of the not-yet-synthesized is a real objective of the humanities, without also undermining concrete political commitments in favor of disseminated arbitrariness, then we have a lot of work to do.
This is a fascinating project, and I appreciate the clear explanation, although I'm still working through the second example.
Some OISC languages have a similar approach, for instance BitBitJump, whose only instruction is "copy the bit at address A to address B, and then jump execution to address C." Since the array of memory it acts upon is also the source code, it's self-modifying: copying individual bits changes the code, including bits that may be used for future jump locations. There are generalized approaches to creating conditional branches in the language, even though it doesn't natively support them.
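For anyone who wants to poke at it, a minimal BitBitJump-style interpreter fits in a few lines of Python. The 8-bit operand width and the jump-to-self-as-halt convention below are illustrative choices for this sketch, not a fixed part of the language:

```python
# A minimal BitBitJump-style interpreter: memory is one flat bit
# array; the single instruction "copy bit A to bit B, then jump
# to C" reads its three operands from that same array, so copying
# bits can rewrite the program, including future jump targets.

W = 8  # bits per operand word in this toy layout

def word(mem, addr):
    """Read a W-bit big-endian word starting at bit address addr."""
    return int("".join(str(bit) for bit in mem[addr:addr + W]), 2)

def run_bbj(mem, ip=0, max_steps=100):
    for _ in range(max_steps):
        a, b, c = (word(mem, ip + i * W) for i in range(3))
        mem[b] = mem[a]      # the entire instruction set
        if c == ip:          # jump-to-self: treated as halt here
            break
        ip = c
    return mem

# One instruction at bit 0: copy bit 24 to bit 25, then halt.
prog = ([0, 0, 0, 1, 1, 0, 0, 0]    # A = 24
      + [0, 0, 0, 1, 1, 0, 0, 1]    # B = 25
      + [0, 0, 0, 0, 0, 0, 0, 0]    # C = 0 (jump to self -> halt)
      + [1, 0])                     # data: bit 24 = 1, bit 25 = 0
final = run_bbj(prog)               # afterwards final[25] == 1
```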
For me that raises the question: if we only had eleatic architectures available, would we have higher level eleatic languages (and what would they look like?), or noneleatic languages that compile down to eleatic machine code?
I'm thinking here also of a very strange project, DawnOS, an operating system written in SUBLEQ. I interviewed the creator for my esoteric.codes blog. While SUBLEQ does have conditional branching, it's an OISC with self-modifying code, and awkward to use. The approach DawnOS's designer took was to first write a C compiler for SUBLEQ and then build the OS in C.
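SUBLEQ is compact enough to sketch in full. The interpreter below follows the usual semantics (subtract, and branch if the result is <= 0; a negative jump target halts, a common convention), and the little program under it builds addition out of three subtractions, which gives a taste of why writing a C compiler for it is such a feat:

```python
# A minimal SUBLEQ interpreter: the single instruction "a b c"
# means "mem[b] -= mem[a]; if the result is <= 0, jump to c".
# Because instructions and data share one memory, programs
# routinely modify themselves.

def run_subleq(mem, ip=0, max_steps=10_000):
    while 0 <= ip and max_steps > 0:
        a, b, c = mem[ip:ip + 3]
        mem[b] -= mem[a]
        ip = c if mem[b] <= 0 else ip + 3
        max_steps -= 1
    return mem

# Addition from subtraction: computes B += A via a scratch cell Z.
prog = [9, 11, 3,     # Z -= A  (Z becomes -A; <= 0, so "jump" to 3)
        11, 10, 6,    # B -= Z  (B becomes B + A; > 0, fall through)
        11, 11, -1,   # Z -= Z  (clear Z; 0 <= 0, jump to -1 = halt)
        2, 5, 0]      # cell 9: A = 2, cell 10: B = 5, cell 11: Z = 0
final = run_subleq(prog)   # final[10] == 5 + 2 == 7
```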
A quick how-to, from start to finish, for those with little or no command line experience who want to run the programs:
Preparing your computer
This might be incomplete, as I only have macOS, and of course it's already prepared. PM me if something doesn't work.
Linux / Other
Downloading and building the noneleatic languages
You should now be in a terminal with all the tools necessary to make this happen. We hope. Note that "nevm" allocates screen space automatically, so make your terminal window big. However, it will work just fine at the default 80x25, if you're limited to that for some strange reason.
branch: Type "./nevm -g -d 10 branch". This will pause for 10 seconds before each instruction, to give you a chance to figure out what's going on. Instead of "-d 10", use "-d 5" for 5 seconds, "-d 0.001" for 1 millisecond, etc. That goes for all of these examples. When the program is finished, press any key to exit. Press Ctrl-C to stop the program before it is finished.
helloworld: Type "./nevm -g -d 0.1 helloworld". When the program is finished, press any key to exit. Press Ctrl-C to stop the program before it is finished.
tenprint: Type "./nevm -g -d 0 tenprint". This program will repeat forever. Press Ctrl-C to stop it.
fibonacci: Type "./nevm -g -d 0 fibonacci". This program slows down with each number it calculates in the series, so after a bit you'll want to quit with Ctrl-C.
This materialist critique of digital humanities "labs" just blew my mind. Thank you for sharing that - I'm excited to witness the (rebellious, un-predetermined by ideology, non-eleatic perhaps?) development of hermeneutic approaches to code.
I enjoyed your explanation of non-eleatic code, as related to pre-Socratic thought - to Heraclitus and notions of constant change, and to separating code from state. Can you tell me what precedent there is for "non-eleatic code"? I am only barely familiar with the Eleatics as a school of thought in philosophy, and have never heard the term eleatic applied to computer science.
This statement is full of so much dense meaning. Qualitative research, whether in the social sciences or humanities or sciences, always faces the problem of the not-yet-codified, where classification generates more questions on classification: what doesn't fit into pre-existing categories becomes the birthplace of new categories, or whole new ways of seeing and classifying. I wonder if you could clarify what you mean in your statement about "political commitments."
Also, I'm getting the following error message when I attempt to run the makefile in my CLI:
Is anyone else getting this error, or knows how to resolve it? I'm fruitless via Google and StackExchange, and may need a simpler explanation if the config file needs to be changed.
Yeah, I remember reading about OISC almost 15 years ago, before the RISC dream was completely dead. Unsurprised and glad to see it has an afterlife. By "eleatic" do you mean "noneleatic"? Or "conventional"? Anyway, I'll answer both versions of the question.
So long as we accept Church's and Turing's definition of an algorithm, and the equivalence between them, the actual construction of the machine is not so relevant: any higher-level code can be compiled for any lower-level machine or virtual machine. To some extent, anybody who has done actual programming knows that this isn't quite true, because there are potentially vast differences in performance. One of C's genius moves was to make a relationship between easily executed and easily expressed. This is part of why it leaves so much undefined—so that what we pass over in silence runs quickly. But it is also not true, even for C, that the language hews to what is efficient on the machine. I remember one of the Unix creators had a story that they grossly distorted the cost of a function call, in order to convince programmers to make clean functions instead of spaghetti code.
So really, I think the issue is less about higher-level languages being a superstructure that appears on top of and dependent on a machine code base, and more about there being an "eleatic" philosophy that infuses programming languages and machine code alike. So the same anxiety that led to the conditional branch (the desire for a rigorous, but ultimately impossible, separation of code and state) also led to structured programming, functional programming, object oriented programming and their associated languages (and maybe to so-called scripting languages, although there might be other things going on there).
So it's an open question for me whether or not, for example, there is a "structured programming" or anything equivalent to it, when we don't have this state-code anxiety. That being said, I use "languages" intentionally, even though right now there's basically one, maybe two if you stretch it. The idea was to just start here, and see what I notice. So far, I've noticed two things.
Doubtless there may be more.
Many interesting questions/observations, which I'll get to later today!
In the meantime, this error is happening because I left out a step. Sorry, my bad!
I updated the post above, so it should work for people now. If the config file does end up needing to be changed for anyone, this can be pretty technical, so best to post here so I can figure out what's going on and/or others can benefit as well (or at least see that it's not just them). Likely I'll just fix it and we'll end up with a config.linux.mk, config.cygwin.mk, etc. Like I said, I haven't run this outside of one Linux distro and macOS, so this is unknown territory.
For an easier starting point, perhaps consider the 'Hello, World' of Critical Code Studies: 'Hello, World' itself, in a noneleatic language:
All of these works are incredibly fascinating as indeed all the work of the Esolang community is. Thank you all for sharing them.
To me, one of the most interesting features emerging from these kinds of works is that the subject of the hermeneutic analysis is not solely code but also technology as a whole, understood as a human endeavour.
I quote a passage from the interview shared by @DanileTempkin (thanks!) hereafter:
“….current x86 is the result of approx 30+ years of work of 1 million hardware developer, all added his own poop into it to have a cpu. some idiot waked up at morning, and decided to add a MOV with his very own shitty prefixes and various different encodings. another idiot waked up at morning, and decided to …”
Thus, the pair eleatic vs non-eleatic architecture, by presenting two ways of thinking about architecture, may also be read in terms of crystallised vs experimental methodologies or, if you wish, applied vs. basic research. The former is predominantly problem-solving oriented, relatively easy to add to (or to poop onto, as Geri would say), and prone to standardisation. The latter is instead dictated by speculative impulses which, if predominantly humanistic rather than exclusively scientific, may help us momentarily stop the problem-solving race to ask more important whys.
In this way, and to address @ebuswell's question, interpreting code becomes a hermeneutic of technology’s epistemology.
If indeed as Geri says (and I agree) “...no real development on windows or linux has been made in the past 2 decade..” this code methodology may be one way for injecting “fresh air”. Maybe rethinking hardware altogether too.
Art comes to the rescue of science.
But also art as a practice that speculates on technology rather than simply using it.
I am not sure I understand what exactly you are saying here. First, by "separate code and state," do you mean that program descriptions use variables and parameters to refer to the input they are working on? If so, there is a good reason for this:
An algorithm is a method for solving problems, and a program that encodes an algorithm is generally expected to be run many times and with varying inputs. Otherwise, it would not make sense to save or share the program; you could simply save or share the computed result. Therefore, programs use parameters to abstract from concrete computations and represent a whole bunch of different computations.
Of course you can write programs that produce only one result, but they are not very useful as a means to solve problems.
On the other hand, if you consider functional languages, there is actually no separation between code and state (if by state you mean the representation of the data the code works on). In a functional language, a program is expressed as an expression that is simplified in a number of transformations to a so-called "normal form," which represents the result of the computation. There is no separate state.
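As a concrete illustration of that reduction picture, here is a toy sketch (not any particular functional language) in which the program is an expression tree, and evaluation is repeated local rewriting until only a value remains:

```python
# "Reduction to normal form": evaluation is repeated rewriting of
# the expression itself until no rule applies. There is no
# separate mutable state; the expression is the entire computation.

def step(e):
    """Perform one leftmost rewrite, e.g. ('add', 1, 2) -> 3."""
    if isinstance(e, int):
        return e                                  # already normal
    op, l, r = e
    if isinstance(l, int) and isinstance(r, int):
        return l + r if op == 'add' else l * r    # op is 'mul'
    if not isinstance(l, int):
        return (op, step(l), r)
    return (op, l, step(r))

def normalize(e):
    while not isinstance(e, int):
        e = step(e)
    return e

# ('add', ('mul', 2, 3), 4)  ->  ('add', 6, 4)  ->  10
```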
There isn't precedent for the application of the term to computer science, as far as I know. More broadly, I'm not sure I've heard the prefix "non" added to "Eleatic" at all, although as you point out, that might include Heraclitus; also maybe his later influence on Hegel, and in theory on Marx (though probably not in practice), etc. The term is not meant to be taken seriously, unless it is. The development of code happens in the 20th century and I don't honestly think we should blame Greek philosophy. On the other hand, of course the Eleatic idea of an unchanging world is preserved in Plato's world of Forms, and modern mathematics self-consciously includes Greek philosophy in its genealogy.
Sure. I have something pretty specific in mind, actually. There's a story about our recent history as humanists that goes something like this, or at least this is how I heard it: Derrida and the rest of the post-structuralists came around and put the subject back into our theory of interpretation. Interpretation was a subjective act, they claimed, which inevitably changed the meaning of the text it interpreted, and that was good. The conservation of meaning as a property of the text itself, on the other hand (the New Critical position), is a politically problematic act. What we want is the proliferation of new meanings. When you take this theoretical approach to questions like "what is our sexuality?" you get some pretty politically useful answers: it's what each one of us decides, and stop trying to make it otherwise. On the other hand, if you apply it to a question like "is Capitalism reformable?", then you end up equivocating between what are very different sides with very different consequences and politics. So post-2008, when a lot of people lost homes, savings, and jobs, post-structuralism started receding in popularity. In its place, there has been a resurgence of a more historically and economically rooted mode of interpretation.
Now, there are problems with this story—like the equivocation between subjective and arbitrary, the fact that of course post-structuralist methods are still quite alive, that historical and economic modes of interpretation have a much longer history, etc.—but it captures something nevertheless. Returning finally to coding, or more broadly to making, I think that a post-structuralist interpretation is pretty easy to do. It goes something like: all acts of interpretation create something, so what if that something is code instead of a paper? But I think we can see how that's problematic. We're, at least in part, often doing political work when we interpret code. Writing code may also be political, but it's not the same thing. It's not arbitrary which one you choose to do. So given that, what is the relationship?
I hope not! Here at the xpMethods group we encourage the use of Git(hub) and/or similar version control systems precisely to make the labor visible and therefore available to valuation. "How can the creation of media, especially of code, be hermeneutic?" Hidden labor is not available to hermeneutics in the first place. Doing right by the power dynamics of labor in our communities is a step towards an answer.
A little catch-up:
Thank you for looking and commenting!
I really like the notion here of "speculating on technology." It reminds me of Colin Milburn's argument, here, that science fiction and "actual science" are often not as separate as we might think.
But then, I'm not sure that I or my work deserve these distinctions from other acts of creating code. I mean, Dijkstra's push for structured programming is one of the most influential things to ever happen for codic languages, and that was by a man who, as has been mentioned elsewhere in this forum, abhorred the practical everyday coder and her problems. Similarly, at least according to lore, Unix and C were invented by Ken Thompson at least in part in order to continue playing a video game he'd written for Multics. C is, I will say, more playful than Algol or Fortran, but it certainly partakes in the anxious separation of code and state. So I'm not sure that this history, to which mine would be a sort of speculative counterhistory, doesn't also include plenty of moments of "speculating on technology," "stopping momentarily the problem-solving race," etc. I don't think that negates your point—all the better for these histories and languages—but I do think it helps clarify something about what I'm doing / trying to do.
In contrast to DawnOS, the noneleatic languages are absolutely not a utopian project. For me, ideology is not something I think of in contrast to a nonideological view. All views and intellectual products participate in an ideology; it's simply a question of which one. That's not to say there aren't better or worse ideologies. In this case, however, the only goal is to create something different, to bring out the distinction, and to maybe in some way, exist as a critique of the current ideology of code.
On a related note, the noneleatic languages are also not exactly meant to be esoteric. I think they may have, in fact, ended up that way, but that's, as far as I'm concerned, a (possibly productive) failure of the project. However, that being said, there is actually another sort of clarity to them. You can imagine, maybe, an alternate universe in which the noneleatic languages were just what computers were, in which a whole discipline of coding had grown up around them over 70 years. What would someone from that universe think when confronted with C, for example? I think they'd feel very confused as to how you watched it run. They'd feel like it was strange that changes to the state of the program weren't being reflected in the code. Like: "How do you know what the program is going to do if the code doesn't change to reflect that? Seriously, you look at a completely different memory segment with variables but no code? What's the point of this whiplash programming??" Then we'd tell them that we rarely even look at the memory while code is executing, and they'd think we were strange wizards.
I'm sorry, this is the subject of a yet-unwritten dissertation chapter, and I've yet to really come up with a good, concise way of explaining myself. Basically, in such things as the continual injunction against self-modifying code, "goto considered harmful," etc., I see a lot of anxiety. There's a Dark Side to code, and you aren't supposed to go there. I would say that every time this Dark Side rears its head, it's because the separation of code and state is threatened, and therefore the idea of code as timeless representation is threatened. Code is supposed to display what it does before the fact, rather than in its actual execution. When that display, the rhetoric of code, starts to seem like a property of executable state rather than code, then everybody starts getting anxious.
I take your point about parameters; I don't think that's wrong. But I do think it might be a little like a medieval person being extremely confused as to how they would mill their grain without a manor. Yes, you need a social order to tie together the miller and the grower, but this social order doesn't have to be the manor. What if, for example, you have a machine running some particular kind of code, and it processes stack after stack of punched cards. In one end and out the other. But instead of purely data, those cards each contain both code and data on them. For example, maybe you have a record of debits and credits. But instead of a data tuple "this is a credit" "to this account" "for this amount," maybe you have "add" "this amount" "to this account" which is executed directly as code as soon as it is read in. Maybe, then, your "program," that which doesn't change as your data changes and is stored on the machine until you say otherwise, is just actually a list of accounts and their balances.
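A sketch of that hypothetical card machine in Python, with every name invented for illustration: each card is executable the moment it is read, and the only thing that persists between cards is the ledger itself:

```python
# Hypothetical card machine: a card IS an instruction, executed
# as soon as it is read in -- "add this amount to this account" --
# and the persistent "program" is just the list of balances.

def run_cards(cards, accounts):
    for op, amount, account in cards:
        if op == "add":
            accounts[account] = accounts.get(account, 0) + amount
        elif op == "sub":
            accounts[account] = accounts.get(account, 0) - amount
    return accounts

ledger = run_cards(
    [("add", 100, "sales"), ("sub", 40, "sales"), ("add", 7, "petty")],
    accounts={},
)
# ledger == {"sales": 60, "petty": 7}
```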
Can you say a little more about the relationship between visibility of labor, presumably programming labor, and hermeneutics?
Here's a playful variation, along the lines of @gtorre's "AI Prison", effing the ineffable, a noneleatic meditation:
Separating input from an algorithm doesn't mean the input has to be "data". Functional languages (and some imperative and OO languages as well) offer so-called "higher-order functions," which are functions that take other functions as arguments (and may also return functions as a result). A simple example is the "map" function, which takes a function f and a list of values l and applies f to each element of l. By passing different functions as arguments to map, you can modify its behavior.
Thus the separation of algorithms and input does not limit programming by precluding behavioral adjustments in different program executions. However, the separation makes these effects more systematic and structured, which is a good thing, since it facilitates the reasoning about programs.
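For concreteness, the map example in a few lines of Python (standing in for a functional language):

```python
# "map" as a higher-order function: the behavior of my_map varies
# with the function passed in, while my_map itself stays fixed.

def my_map(f, xs):
    return [f(x) for x in xs]

doubled = my_map(lambda x: 2 * x, [1, 2, 3])   # [2, 4, 6]
squared = my_map(lambda x: x * x, [1, 2, 3])   # [1, 4, 9]
```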
Yes, this is one way that input is separated from algorithm, without restricting input to data. But in my hypothetical example, the input is not separated from the algorithm. In fact, the input is the entirety of the algorithm. The "program" is just data.
Yes, the anxiety about separating code and state does not limit what behavior we can and can't program, AFAICT. But it certainly affects how we express that behavior.
Not to pick on you, but did you notice how you first said: "An algorithm is a method for solving problems, and a program that encodes an algorithm is generally expected to be run many times and with varying inputs." And then you thought, well, no, what about functional languages? And now you're saying "the separation makes these effects more systematic and structured, which is a good thing, since it facilitates the reasoning about programs." There's a slipperiness between first one justification, then the next. My guess is that you have a sense that the separation of code and state is right, and that my language is confusing, and then (after that initial sense) you're trying to articulate why that is. I have that sense, too! There's certainly virtually no chance that I'll be writing a program in these languages that I need to, you know, actually work. But just because something is somehow sensible doesn't make it not historically contingent.
I think your final justification for the code-state binary, "facilitates reasoning," is at the core of this project. Exactly what kind of reasoning does the code-state binary facilitate? It facilitates reasoning about what a particular line or section of code does, plucked out of its running context as much as possible; it facilitates reasoning about what code does without having to run it. The reasoning that the code-state binary does not facilitate is figuring out what the state of a particular algorithm is from a snapshot of memory. The noneleatic languages, I would argue, do that a little better. Also, sometimes the relationship between a piece of data and a piece of code is obscured by the rigorous separation of code and state.
Now, the noneleatic languages are not real historical languages. In designing them, I took the code-state separation and asked myself, what if they were maximally unseparated? So really the only thing that defines the noneleatic languages is their difference from conventional programming languages. If history had actually occurred differently, rather than a simple negation we would have had a different thing, a different principle that emerged as the center of our programming languages, rather than just the negation of the current principle.
I don't think so. I was making these two remarks because I didn't know what exactly you meant by "separation." If you mean the separation between algorithm and input, I'd say that's a good thing (in any language, imperative, OO, functional). If you mean by "separate" that the data is kept separate from the code that is manipulating it, I'm just saying that this is not a universal fact of programming languages, functional languages being a counter example.
So my two statements don't contradict each other at all, they are complementary.
Yes! And that is a good thing, since it provides you general statements (such as partial correctness guarantees in the case of type systems) about the code.
Do you have a simple, concrete example for this?
Oh, OK. That makes sense. Sorry, like I said, I'm making a lot of gestures here, but the full argument is probably not coming through very well in this online-forum format. Thanks for pushing me here.
I actually don't think that functional languages are a counter example, since what I'm talking about is the way code is packaged as a before-the-fact thing, and not allowed to be swept up into a running program. That's not on the level of function, where of course it's not really possible to distinguish between code and state, but on the level of meaning.
I think nearly everything is an example of this, actually. To start with the contrast, in neasm, you might do an addition like this:
After you run it, this turns into:
So there it is. The code is intimately connected to its result, and you can read that connection right there. In contrast, in C you'd have something like:
but what is "abc"? where is it? what is its connection with the code?
After writing a few things in neasm, I found myself adopting one of two idioms: either the output is immediate and the input comes from the previous output, or the output goes directly to the immediate argument of the next input. I think in hypothetical version 2.0 I would find a way to collapse this choice, which is strangely arbitrary. Idk.
I don't want to dwell too much on the potential deficiencies of our current code paradigm. I think it's pretty clear that, one way or another, we've figured out how to get stuff done in this paradigm. What I do want to emphasize is simply its particularity. This is one way of doing things. There are others.
Have you looked at spreadsheets? There you have the results of the computations (i.e. formulas) directly visible, in the same spot as the formula.
There are many important languages for which this is not true. Lambda calculus, for example, has only three constructs, namely abstraction, variable reference, and application. There is no conditional. Lambda calculus is Turing-complete, i.e. any program can be written in lambda calculus, and all functional languages are based on it. Of course, you can encode conditionals in lambda calculus, but I don't think it would be correct to say that lambda calculus is essentially based on the concept of a conditional.
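That encoding of conditionals can even be written in Python's lambda fragment, since it needs nothing but abstraction, variables, and application (Church booleans; note that Python evaluates both arms eagerly, unlike a lazy reduction, but the selection logic is the pure lambda-calculus encoding):

```python
# Church booleans: a conditional built from nothing but
# abstraction, variables, and application -- no built-in "if".

TRUE  = lambda t: lambda f: t    # \t.\f.t -- select first argument
FALSE = lambda t: lambda f: f    # \t.\f.f -- select second argument
IF    = lambda c: lambda t: lambda f: c(t)(f)

chosen = IF(TRUE)("then")("else")   # "then"
```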
As another, maybe more widely known example, consider spreadsheets. These are the most widely used programming systems in the world, and their basic functionality does not depend in any way on conditionals.
Thank you for these great examples!
Lambda calculus—assuming you mean something like Church's version—stays in math and doesn't quite make it to code, although the influence is huge. I'd place it as the last piece of a mode of thinking that dates from at least Hilbert's program, maybe earlier. I'd say with code we're seeing something different. In fact, one of the motivations of my project is the fact that Von Neumann's famous first draft article has no conditional branch—instead it has a conditional substitution. I think he's modeling his language on Church. But then something happened, and when any of the EDVAC-type computers were actually built, we see this conditional substitution gone, and the conditional branch replacing it.
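The distinction can be sketched in Python (my gloss, not von Neumann's notation): a conditional substitution selects a value while control flows straight ahead, whereas a conditional branch redirects control itself.

```python
x, y = 10, 20
cond = True

# Conditional substitution (roughly the first-draft EDVAC style): state
# selects which value is substituted; execution never leaves the line.
# Note both candidates are evaluated; only the selection depends on state.
selected = (y, x)[int(cond)]  # picks x when cond is true, y otherwise

# Conditional branch (what the built EDVAC-type machines did instead):
# state redirects execution to a different piece of code.
if cond:
    branched = x
else:
    branched = y
```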
As for spreadsheets, I was actually just thinking this. I'm not sure, but I think you may be right; they hardly separate code and state at all. Maybe this is partly why spreadsheet programming is in some ways not seen as real programming?
Like @erwig, I was curious about the relationship b/t functional programming and your assertion about the split between code and state.
Regarding your specific point about the Lambda Calculus not making it to code, my favorite programming blog just started a new series about the Lambda Calculus and its relationship to current trends in web development toward functional programming. It may be an interesting site for exploring how some of these FP ideas that missed wide industrial uptake have percolated in non-industry settings and are becoming more mainstream.
Just a quick note that I compiled successfully using the Windows Subsystem for Linux included with recent versions of Windows 10. Thanks for sharing this fascinating project, @ebuswell! I am looking forward to digging in!
I also want to come back, as many have, to this point. I also produce a lot of code in the process of working through my prose-based work. I've never really had an explanation for it, but I found one in reading about the media philosopher Vilém Flusser and his practices of self-translation.
Flusser, who spoke and wrote several languages, would often translate his own writing back and forth between different languages multiple times, because he thought the different affordances of different languages revealed different facets of the problem he was approaching. He thought of this as nomadism. In Finger et al.'s Vilém Flusser: An Introduction, they explain:
This idea of a self-critical ballet comes closest to why I think I move between code and writing in my research, even though the code is often uncounted as labor. I find that the problems I'm interested in, and the way my brain works in either code or written English, are different between the two problem spaces.
I'm still trying to think through this idea, but it strikes me that maybe this is related to your thoughts on programming and/as hermeneutic.
Apologies in advance for re-asking the same question, but I am still unsure whether I properly understand this difference between code and state.
Is it the difference between abstraction (code) and hardware (physical memory)?
Is it something other than what pointers or even arrays are for?
Pointers describe an address in memory (code) containing something (a state/data?), and that can be changed, right?
Re-connecting to your ( @ebuswell ) "abc" example, I could write something similar using arrays:
to answer your question: abc is an array.
It is of course less elegant than the solution you suggest, and yet I do not see the distinction between code and state, because we changed the state at abc.
I am sure I am missing something. Again, apologies.
@markcmarino interesting take on the AI Prison code
What are the "chains" in this case that forbid us from encountering a null character? The same as in AI Prison (i.e. MMU, VM, OS, etc.)?
Somehow I see these chains "shaking a tiny bit" with the DawnOS/Subleq mentioned by @DanielTemkin as well as other esolangs ...
Yes, got that backwards.
Yes, this is the question that interests me. Is the noneleatic approach one that requires us to work close to the machine (at the assembly level) or can it be abstracted in a way that maintains its .... noneleaticness. BitBitJump shows how quickly people recreate the conventional if statement on top of a language that doesn't natively support conditional branching, perhaps simply because of our habits thinking in eleatic style. Creating a somewhat higher level (perhaps C-like as you suggest) language that maintains state-code anxiety could be an interesting challenge, although it's hard for me to imagine what it would look like.
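As a sketch of that habit, here is a toy machine in Python whose only control transfer is an unconditional jump to a computed address; the conventional if is rebuilt on top of it by arithmetic on the jump target (my illustration of the idea, not BitBitJump itself).

```python
# A toy machine with no conditional branch: every instruction returns the
# address of its successor, and instruction 0 *computes* that address from
# state instead of testing it.

def then_arm(state):
    state["out"] = "then"
    return None  # halt

def else_arm(state):
    state["out"] = "else"
    return None  # halt

program = {
    0: lambda state: 10 + 10 * state["flag"],  # computed goto: 0 -> 10 or 20
    10: else_arm,
    20: then_arm,
}

def run(flag):
    state = {"flag": flag, "out": None}
    pc = 0
    while pc is not None:
        pc = program[pc](state)  # execute, then jump unconditionally
    return state["out"]
```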
Also, I think of Dijkstra's "The Humble Programmer." These both put clarity of code above all other concerns, even in cases where it causes the code to run with less efficiency. Coders are taught to value clarity of intent above all else: nicely formatted, readable code that expresses what we expect it to do, avoiding the "clever tricks" Dijkstra warned us about. I think of obfuscated code, esolangs, code golf, etc., as Less Humble practices, reversing those values, allowing a release from the compulsive orderliness of code. I think that's all part of this anxiety (or Dark Side) that, to me, is about lack of control. Not to get too far off the subject, but given the prevalence (and ultimate inevitability) of bugs, clearly written, orderly code is as much about the feeling of control it provides as about actually exposing such errors.
Thank you for this!
This reminds me of how important it has become to insist that gamers are not a monolithic group of people with a shared identity and shared values. 'Games nativism' comes with an insistence that there are only certain authentic ways of game making and game playing, and involves an aggressive gatekeeping of what counts as a game or who counts as a gamer: excluding queer / feminist games is part of an explicit program that is against Videogames for Humans AND videogames for (all) humans.
So, by analogy, Less Humble coders may be just one example of what it looks like to define coders expansively. Engineering for correctness, efficiency, and maintainability is one version of what coding best practices should be, and it might be dominant in many industries, but it isn't innate to coding. Other values might define different visions of what coding should be at its best: expressive, surprising, educational, accessible, obscure, private, self-documenting, generous, and many more. While none of these values are mutually exclusive, there is no one authoritative, authentic, constitutive set of coding values, and perhaps there should be no normative "coder."
The way operators precede all operands in nevm is reminiscent of prefix notation in languages like Lisp. I’m interested in how prefix notation directly changes the information architecture of code syntax or the visual appearance of code. In both Lisp and your project, the code we write just looks more like data, a grid, field or sea of symbols. I wonder if in companionship to the break from eleatic notions to make the case for change or changing code, there is another intervention towards the non-representational, the non-narrative. One gesture of operators in eleatic code is the vector, but in non-eleatic, it is an undifferentiated field with no one point of emphasis. For an alternative history of programming, we return to the undifferentiated field of assembly / binary ...
So many good replies. I've barely begun digging into the other thread:
@driscoll: I think "-lm" is probably generally useful. I do include instructions from math.h, so it's just a matter of what is a macro and what is in the library on any particular system. I've updated config.def.mk so this change should be unnecessary with the latest version.
Yes! This discussion of thought and its relationship to language choice—code being among the things one can write—reminds me of Alva Noë's insistence that our process of thinking takes place just as much externally in our writing as it does internally in our pre-writing. So yes, the process of writing this code has definitely forced me to clarify what I mean. And this is definitely one way that writing code, and writing in general, contribute to hermeneutics. The notion that we interpret first and write about the interpretation as a different act is a false one.
However, that being said, I don't want to be too enthusiastic, because this is still in the context of discovery. Writing this code has helped me think. But what happens to that in the context of justification? If we quote code in our prose, or publish it alongside our prose, how is that methodologically important?
You're re-asking them because I've so far answered not quite adequately, so thank you for pushing me. So, OK, code and state. This is going to be a bit longer and a bit technical, but please everyone keep asking questions about things that aren't clear.
When your computer goes to sleep, it makes sure all of its state is stored in memory, and then powers off pretty much all the systems except for the memory. When it hibernates, it instead stores this state on the disk, and restores it when it powers on the next time. In each case, "state" is a sort of substantial and complete image of something as it exists at a given time, but without any of the substance which does not change. That is, in order to save the state of your computer, you shouldn't need to store the structure of the processor or the voltage of the power supply. Those are unaffected by state. Defined operationally, state is whatever must be stored such that restoring or replicating it returns you to exactly the situation in which it was first stored or copied. So there is the concept of state.
We can note, already, that the concept of state is abstract, and state as it is stored is never quite state as such. For example, when you put your computer to sleep while moving the mouse or holding down a key, sometimes when it comes back on there are keyboard and mouse problems. There's no real correct way for the operating system to deal with this. Either it artificially produces a keyup event, in which case, if the key is still held down when the computer wakes, it records a second keyup and keydown that didn't happen; or it produces no keyup at all, in which case a key released between sleep and wake never fires its keyup event. The former is probably less bad, but it is not, strictly speaking, correct. The problem is that the state inside the computer is actually only correctly restored when we also restore the entire situation outside of the computer. Otherwise, the two states, going to sleep with a key held down and waking up with the key raised, aren't really the same state.
But let's imagine, like we do in math, that our reason is trapped in the world of our premises, and that world is closed and complete. We have a calculation. That calculation has steps. Let's then imagine that we have a "state restorer" algorithm that, given some input, restores the state of any calculation, and then proceeds with the calculation. The broadest version of this state restorer algorithm might be a sort of meta Turing machine. It takes, as input, the state register of a Turing machine, the machine's rules of operation of the finite state machine, the infinite tape, and finally the position of the tape reader. That clearly includes enough information to restore the state. But is there anything here which we do not need? Is there a better version of the state restorer algorithm?
In general, I think the answer is no. All of this data is state. In fact, the "state restorer" algorithm is just a particular version of Turing's universal Turing machine, which can emulate every particular Turing machine. But note that, actually, an infinite number of universal Turing machines exist. If instead of a general state restorer algorithm, we implement a particular state restorer algorithm, where we just restore the state of calculations on a single kind of Turing machine, then we can use fewer things as the input to this state restorer. Specifically, we will still need the state register, the position of the tape, and the content of the tape, but we won't need the rules of operation of the finite state machine. Since we have limited the domain of this state restorer algorithm to a single kind of Turing machine, those rules can be implied by the state restorer algorithm.
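A minimal sketch of the particular state restorer in Python (the machine is my invented example): for a single fixed Turing machine, the rules live in the restorer itself, so restoring a calculation needs only the state register, head position, and tape.

```python
# A fixed Turing machine that writes 1s moving right until it reads a 1.
# Because its rules are baked into the restorer, a saved calculation is
# fully specified by (state register, head position, tape contents).
RULES = {
    # (state, symbol read) -> (new state, symbol to write, head movement)
    ("scan", 0): ("scan", 1, +1),
    ("scan", 1): ("halt", 1, 0),
}

def restore_and_run(state, head, tape):
    """Resume the fixed machine from a saved (state, head, tape) snapshot."""
    tape = dict(tape)  # sparse tape: position -> symbol (default 0)
    while state != "halt":
        symbol = tape.get(head, 0)
        state, write, move = RULES[(state, symbol)]
        tape[head] = write
        head += move
    return state, head, tape

full = restore_and_run("scan", 0, {3: 1})                  # from the start
resumed = restore_and_run("scan", 2, {0: 1, 1: 1, 3: 1})   # from a snapshot
# Both runs end identically: the snapshot alone suffices to restore.
```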
If we, then, adopt a truly universal definition of state, there is no room left for anything else. The code is part of state. The data is part of state. Everything is part of state. State changes. State can be anything. And whatever state is right now will just pass away at some point. Either Nietzsche is right about the eternal return, or we never step into the same state twice. This is the position taken by the Noneleatic languages. But if, instead, we adopt a more limited version of state, all of the sudden we have something to conserve. We have created a space for this other sort of thing, "code," which exists forever unchanging in the Platonic world of forms, the true world, the real world in which change is impossible. This position is the position taken by conventional programming languages.
However, the reason that I describe this as an "anxiety" rather than a simple choice, is because computer science actually wants both things, and it alternates between which one it chooses in any given situation. It wants universal Turing machines, which can not only calculate anything it is possible to calculate, but which also can calculate it in any way in which it is possible to calculate it. But at the same time it wants code to be something which we can set somewhere over there in the unchanging world—something we can rely on when we can't rely on data—and state to be something which merely washes over that code like a wave on a rock.
First, we have conditional branches. But that isn't enough. Sure, the conditional branch is a stolid piece of code unaffected by the wave of state, but here's this other end of the branch, this label floating here, saying nothing, being silent, relying on the history of the state of execution to speak for it what it does not speak on its own. Unacceptable! We need blocks, then, larger semantic structures, bulwarks that protect code from the wave of state which threatens to become part of it. Etc., etc.
Interesting. Yeah, the prefix notation here is actually generally the way assembly is written, so I didn't think too much about it. I believe this convention dates from the design of the EDVAC, and I think it likely dates from two material parts of the design: (1) There was a bit that said "this is data" or "this is an instruction," so an instruction had to begin with the operation itself. Although I don't recall whether the arguments of the instruction were stored immediately in that word, or in a subsequent word. My guess is the former. (2) The machine code was designed to be translated from a specially designed punch, so you'd hit a "Branch if Equal" key, then "1" "0", and it would mechanically encode this to the instruction, stored on the tape.
That being said, I do think prefix vs. infix notation is significant. Assembly language in general doesn't have any Chomskian recursion; that is, there are no expressions with "values," which might be made up of subexpressions, with subvalues, etc. The same cannot be said of Lisp, which uses prefix notation in part to make the linguistic recursion in the language more clear to the reader. I think it's significant that the compilation of neasm does not make use of yacc, which generates parsers for recursive grammars, but instead only requires lex, a tool that breaks a language down into lexical elements, where the only "grammar" is the order in which the elements appear.
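The difference can be sketched quickly (my illustration, not neasm's actual lexer): tokenizing an assembly-style line needs only lexical rules plus positional order, with no tree to recover.

```python
import re

# Lexing an assembly-style line: the only "grammar" is the order in which
# the tokens appear, so lexical rules alone suffice.
TOKEN = re.compile(r"[A-Za-z_][A-Za-z0-9_]*|\d+|,")

def lex(line):
    return TOKEN.findall(line)

tokens = lex("add r1, r2, 42")
# A flat sequence of lexical elements; contrast an infix expression like
# "a + (b * c)", whose nesting only a recursive grammar can recover.
```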
Thanks @ebuswell for the lengthy response.
OK, I think I am starting to see a bit of light at the end of the tunnel and, as they say, hopefully it is not a train.
At large, your distinction between code and state appears to me like the distinction between software and hardware. The separation implied, or engineered, between the two is described as eleatic in that it does not move in any direction and always conserves the possibility of reinstating its original state.
Separate from the hardware layer (or sitting on top of it in a sort of Platonic world of ideas), the code appears as an immovable block.
You then suggest an approach in which this Platonic world could come "down to earth" and merge with "state." This is what you call a non-eleatic approach: code that changes state that in turn changes code.
If so, I am inclined to think of the old analogue and mechanical computers, for which code and state are united and movable. The non-eleatic paradigm seems to live in them.
A "digital" example could maybe be a calculator in which the "C/AC" key (the one that allows resetting) and the M+/M- keys do not exist or have been removed. In this calculator, each calculation step looks in one direction only: forward, in both code and memory.
Following this logic, I have ended up thinking that a non-eleatic computer would need to be designed to accomplish one task only, for the completion of which it grows. In fact, the idea of this separation between state and code originates, in my opinion, from the will to make the computer a multipurpose machine.
I hope to be somewhat on target. (?)
But then, what about my previous "abc" code snippet; isn't it an example of non-eleatic code too?
How do meta-programming approaches fit in here? I'm mostly thinking of closure-like techniques available in various modern programming languages like Lisp and Python. I often use "decorators" in Python, which are essentially functions that take other functions as input. The code is thereby modifying itself at run-time. It's a powerful kind of abstraction but it can get very messy to debug if one gets too carried away.
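For concreteness, a minimal decorator sketch: the decorator runs when the def statement is executed, wrapping the function before any call, so the rewriting is ordinary code operating on code-as-data.

```python
import functools

def counted(fn):
    """Decorator: wrap fn so that each call is tallied on the wrapper."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return fn(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

@counted
def square(x):
    return x * x

# The name `square` is now bound to `wrapper`; the rebinding happened at
# definition time, before the first call.
```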
Do you think programming might have turned out very differently today if the von Neumann architecture had won out over the Harvard arch?
@gtorre Sorry, I think I confused the issue by giving only examples of machines or concepts with the word "machine" in it ("Turing Machines" are not necessarily models of actual machines, though that is what they suggest). Let me explain without reference to hardware. Maybe you already understood, but this might be helpful to others anyway.
Let's imagine the following two elements: (1) an algorithm, A, and (2) the state of that algorithm at a certain stage of execution, S. A and S are abstract concepts. In order for A to run on a computer, or be calculated at all in the strict mathematical sense, it must have a representation, R(A), the "code." The same is true for S. In order for there to be an algorithm that restores A to S, S must have a representation, R(S). (Getting a little ahead of ourselves, note that there is not something equivalent to "code" for R(S). "Data" comes close, but data does not necessarily encompass the complete state, and "data" does not indicate the representation but the thing represented.) R(A) and R(S) are strings of symbols, potentially infinitely long. Imagine that R(A) is executed by an interpreter I that knows how to follow the representation, R(A), and execute A. Since we have assumed that the state of execution of A is representable by R(S), as we go from one step in executing the algorithm to another, this must be somehow representable by some sets of symbols in R(S) that are changed, deleted, or created. Since R(A) plus R(S) is the full and complete representation of the algorithm A at stage S of its execution, then we know that it can't be necessary for I to store any state additional to S.
Now, when we ask about what state, S, actually is, there are several possible answers. The first answer is that the algorithm A doesn't have state. S belongs to the interpreter I. In this answer, R(A), code, gets to keep its eternal unchanging nature, we can focus on it, and R(S) can be viewed as merely a byproduct of I as it goes through and executes R(A). The second answer is that S belongs to A—I is stateless. This answer still allows A its unchanging nature, but it says that state is not just about the interpretation of the algorithm, but is fundamental to the algorithm itself.
We can, however, go even further in that direction. Because two countable series of numbers can be mapped to one countable series of numbers, we know that we should be able to completely merge R(A) and R(S) into a single representation. In this interpretation, R(A) becomes R(S) after a single step of execution of A. Code and state are one and the same thing.
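A tiny sketch of that merger in Python (my toy construction): a program represented as a mutable list that an interpreter rewrites in place, so that after each step the representation is at once the remaining algorithm and the current state.

```python
# Code and state in one representation: the program is a list of cells the
# interpreter rewrites as it goes, so R(A) at step n *is* R(S) at step n.
def step(image):
    """One step: consume the next instruction, fold its effect back in."""
    op, arg = image.pop(0)           # consuming code...
    if op == "push":
        image.append(("val", arg))   # ...produces state in the same structure
    elif op == "add":
        _, a = image.pop()
        _, b = image.pop()
        image.append(("val", a + b))
    return image

image = [("push", 1), ("push", 2), ("add", None)]
while image and image[0][0] != "val":
    step(image)
# The final image is the final state, in the very representation the
# program started in.
```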
We know that all three ways of understanding the relationship between I, S, and A are equivalent. That is, all three ways allow A to execute. But these three ways of understanding this relationship lead to different systems of representation.
Now, all existing computer languages have a way to specify data that changes throughout the execution of the algorithm, as well as code that normally does not change. So in a sort of mathematical sense, all languages end up at the last interpretation of I, S, and A. Together, in a single set of code files, are specified the algorithm, the initial state, as well as the locations of future states. To this extent, all of our computers and languages are a little bit noneleatic. I think that's what's confusing everyone about my exposition (my bad!); the separation of code and state is a tendency that takes place in the meaning of code, in its semiotics, not a clear proscription that takes place in the mathematical grammar of codic languages.
Although grammatically existing code portrays R(S) and R(A) as belonging together, semiotically it attempts to run back towards one of the first two interpretations, separating code and state. Code is always trying, and failing, to push state off elsewhere and alone like an embarrassing friend, and to mention it as little as possible when there's code around. But the embarrassment is of course in the code, and not the friend.
This is why I'm looking at the conditional branch statement: it's where code and state most directly meet. It's where programmers cause the behavior of the computer to be dependent on the state of execution. But at the same time as that dependency is assured, it is rejected. A rewrite of the code would simply change the behavior by changing the code directly. In some ways, this should be the most straightforward approach. You want the computer to do something differently? Well then we should change what we're telling the computer to do. A conditional branch completely alters that relationship. The code is preserved as-is, the state merely gestures towards different pieces of that code. However, again, this was not enough. This dependency was given more and more buffering structure, and code was drawn more and more into itself. Far from being an example of the acceptance of the togetherness of code and state, functional programming languages express their anxiety through the continual attempt to recast data as code and to make code dependent on other code, rather than dependent on some amorphous state. They are largely successful, but this relationship starts to break down as soon as the state outside the computer starts affecting or being affected by the state inside the computer.
It's an interesting question, definitely. Decorators modify the code at compile time, or at least at pre-run time, no? It's been a little while since I wrote Python. Ruby has an idiom that is maybe even more radical, where everything absolutely all the time can be rewritten, such that whole new domain-specific languages can be written with Ruby, and they end up also being valid Ruby code. There's certainly some kind of code-modifying code behavior here, but I don't think it's terribly radical, and I don't think it really moves that core anxiety.
Well, the interesting thing is that the Von Neumann architecture did win, but everybody acts as if it was the Harvard architecture that won. So we have a single flat memory space that is artificially partitioned into code and data segments through features of the processor.
That being said, I think maybe a different project from the Noneleatic languages could be the Truly Eleatic languages, which rigorously reject the existence of state in code. State would never ever be referenced (no addresses, no variables, only literal numbers). Code would depend on its own past execution directly in its semiotics, rather than through the medium of state. This could maybe be a theoretical development of the Harvard architecture. Note that I'm not actually sure this language would or could be Turing complete. The Mark 1 was not.
As for what might have occurred, I think we should be skeptical of the Great Invention theory of the history of technology. What I will actually end up saying in my dissertation is that we ended up writing and thinking about code this way because of the rise of deposit banking, and the epistemology that ledger money implied as it took over from paper money. Btw, in the video about the Apollo 11 computer, an accountant and his ledger are the key metaphor.
@ebuswell Can you give us a brief overview of the ledger banking argument?
Sure. Very briefly. I'm working on that chapter right now. Or rather, I would be working on that chapter if I wasn't writing on this forum.
A little background first. This project is trying to develop a materialist notion of ideology, without falling back on reflectionism. Instead, I'm trying to look at places in society where ideas or representations always-already mediate the material: contracts, some kinds of accounting, money, etc. In these places we see an otherwise ideological object (writing on paper, bits on a machine) take on material characteristics. A paper $10 bill, for example, is different from a $5 bill only through the design printed on it. Its effect in the world is not really dependent on anyone's interpretation of the printed design, but that design nevertheless has an ideological effect. By existing always-already in relation to both material and ideal, these objects always implicitly advance an argument about epistemology. So as the material circuits in which these objects are situated change, the objects themselves change in both material and ideological ways. As a consequence, the epistemic ideology they present also shifts.
In the US prior to about 1855, and at pretty similar dates in the rest of the world, the best way for banks to loan out money was by printing notes. After about 1855, mutatis mutandis, bank accounts became the preferred way to loan money (and checks the preferred means of transferring in the US and a few other places, other kinds of giro schemes and proxies in the rest of the world). Thus, your money became a line on a ledger. That has an ideological effect that I won't get into here. Because of the difficulties of scaling a ledger system (incidentally the same difficulties that led to the widespread adoption of NoSQL databases ten years ago), material changes in what a ledger was and how it was handled were necessary. This technical change took a while to eclipse the single, bound book method, but by the late 1930s, even if there were still physical bound ledger books in many places, bank accounts looked less like a series of books, and more like a complex system of slips, binders, punched cards, etc. It is in constant awareness of that systematicity that code develops. Looking just a little bit earlier, however, the chief anxiety in the adoption of all these new devices and methods is alterability. A book is a record. A system is not. The question therefore became: how do we make system and record into the same thing. Or rather, that is the resulting ideological anxiety. Causally speaking, by record-keeping acquiring a systemic character, systems became records. Interpretation of this fact followed.
Code is not, actually, a system which is also a record. Much to the chagrin of Critical Code Studies, code is continually experienced as opaque, and judged by the organization and character of its human developers, who get to stand in for the whole system. But it is influenced by that anxiety between system and record. In code, that anxiety is recapitulated as the anxiety between code and state.
Okay. I don't get it. Why preclude the conditional branch with this thing you are doing? I mean, the conditional branch--at least in law--is what keeps the code in a more reflexive relationship with material reality. Like, in my efforts to trace the historical appearance of what I call the “diegetic commodity” I have directed my attention towards the erosion of the primacy of the conditional statement in the legal codes of the West.
Like, conditional statements in law are codic, but they aren't diegetic, which is to say, they aren't one thing that is read as something else, that something else being the evocation of a diegesis, or fictive world. Like, a conditional law says "Someone who stubs their toe must sing a silly song" which is a very different animal than a diegetic law which might say something like "Someone who stubs their toe has committed a stubbery, and their status will thusly be changed to that of stubber until they engage in an interactive narrative through which they clear their name of stubfoolery."
The shift towards this type of legal code that aims to contain the things of culture within it in this way didn't appear in the West until the early modern period of Europe, a time that you noted in your master's thesis as being marked by the emergence of the practice of the buying and selling of debt (Buswell 2011), and also, with the rise of early modern contract law (Kahn 2004), with social relations increasingly coming to be constituted within and by the deployment of legal code during this time.
So, with this shift in early modern Europe, the law is tasked with something resembling what we do when we make diegetic commodities in larp: congealing the social apparatuses required to make a system of code deployable into a type of veridicality.
So I guess the thing I'm trying to wrap my head around here, Evan B., is why ditch the conditional branch? Conditional code, at least in law, is fine. ...Well, I mean, maybe it's not fine (like, "an eye for an eye?" are you kidding me?), but at least it doesn't lay the framework needed to create universally deployable diegetic artifacts that--at least according to the social apparatuses that will make your life miserable if you behave otherwise--are veridical. So that's diegetic law, which tends to say "doing that thing makes you into [x]" rather than conditional law that says "if you do this then [y] happens in response." Like, the diegetic type is the bad type. Why do you want to get rid of the conditional, the less bad type?
In ditching the conditional branch, are you making a point about code? ...That code doesn't need to behave like ancient law? That it can modern itself up?
Oh right, okay. You do the kind of code that interacts with objects rather than people. Like, the conditional statement in computing serves a vastly different role (or lack thereof) than the conditional statement in legal code... Insofar as the conditional statement in legal code is a way of keeping that code reflexive with but separate from culture (i.e. it keeps the code pretty directly engaged in the material that it is attempting to manage, and does so without extra shenanigans), but with computer code, the conditional statement has nothing to do with the materiality of the thing it manages: the machine and the actual steps taken in calculation. Cuz machines don't just roam about acting autonomously of code most of the time (i.e. the human situation), but rather the machine has no autonomy at all and its behavior arises from code itself...
So, from my very left-field larp coder perspective here, it looks to me like perhaps you're cutting off a vestigial tail that found its way into computer language via the influence of earlier types of code?
Obviously, there cannot be diegetic code for computers (computers don't experience subjective diegeses), but the way we've come to structure diegetic code in pervasive society seems to transform into this really amazing other thing when directed towards machines...
But this playful attack you're lobbing against the division between code and state...are you sure it's the Eleatics you have beef with here, and not William of Ockham? (I'm thinking of his work in parsing institutions from each other and from the things of ordinary life). But I don't know. I kind of wish you'd help us out by pointing to some passages. :-) But I'm just being lazy here... it seems you've created a compelling reason to read up on the works of the Eleatic School.
Also, are you sure the separation of code and state is causally linked to the appearance of the conditional statement in code? I mean, I don't doubt that you've found this pattern in your exhaustive research into the early crafting of computer code, but what makes you so sure that this relationship isn't just correlative?
(Admittedly, I'm a little nervous about treating this code too seriously at all. Could this be a practical joke? Some kind of aesthetically interesting yet necessarily useless thing you've built? Are you basically just trolling us here with art?!)
(Also, my apologies for posting this so late in the game! If you don't have time to respond before the boards close on Sunday, no worries, and my thanks for your efforts always to provoke!)
Sorry for the late and last-minute response! These are actually some good things for me to clear up.
Only because the conditional branch is the ur-moment, and included in basically all other code, and a part of the thinking of basically all other code, and for no other reason. I don't think it's better to get rid of it. I'm also not sure it's worse. But by entering into a language without the conditional branch, we can start to think differently, and then better see the way that we generally use code. This is part of why this is a humanities project; as an object of software engineering, this project is probably completely useless. It's also why it raises such difficult questions about what code can be or should be to the humanities. There's a difference between creating a software engineering product for/from the humanities and writing code/software as a humanist product.
Not at all. Vestigial implies that we're done with it. But the conditional branch is actively productive! This is more like imagining alien biology and biological evolution in order to better understand the contingency of our own biology and evolution. Except in this case, I can make it actually run.
In general, it's important to note that the genesis of a thing is not the thing. I do think that the conditional branch can be traced—eventually—to the evolution of credit structures. But that doesn't make the conditional branch reducible to those structures. It's a historically contingent mathematical object, but it functions as a mathematical object and so also has that limited universality that all other mathematical objects possess.
Actually, I'm pretty sure it isn't the Eleatics' fault. Locating the cause of this in a philosophical movement that began almost 2500 years ago seems like pretty terrible history. So I'm not serious. Except that maybe there is this something that haunts all of our specific little historical moments, and maybe we can catch that something being made as a new thing with the Eleatics, and maybe that something is haunting the conditional branch too.
Anyway, "Eleatic" is (only?) a name and not at all at the core of this project. If anyone is nevertheless curious about the Eleatics, a historical text would probably be more useful than the original sources—many of the fragments we possess come to us embedded in Greek histories anyway. Copleston's History of Philosophy is probably the best comprehensive introduction.
In general, evidence can only give you correlation; for causation, you must have theory. This forum is not the place to give a proper rendition of why I'm putting the code–state anxiety before the conditional branch, but briefly, (1) I can see the code–state anxiety forming in the information processing of finance, before the EDVAC-type computer, (2) the code–state anxiety expresses itself over and over again throughout the rest of the history of computing, especially in structured programming and object orientation, and (3) there is a rejection of an earlier alternative, the conditional substitution, that would have technically performed just as well, and code–state anxiety provides an explanation for that rejection.
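To make point (3) a little more concrete, here's a toy contrast, in Python for legibility, between a conditional branch and a value-selecting substitution that computes the same result without forking control flow. (The arithmetic-select trick here is just an illustrative stand-in for the historical mechanism, and the function names are my own.)

```python
# Two ways to pick the larger of a and b.

def with_branch(a, b):
    # Conditional branch: control flow splits; which instruction
    # runs next depends on the outcome of the test.
    if a > b:
        return a
    return b

def with_substitution(a, b):
    # Conditional substitution (illustrative stand-in): the test yields
    # a value that arithmetically selects the result, so there is one
    # straight line of execution and no fork in control flow.
    c = int(a > b)              # 1 if a > b, else 0
    return c * a + (1 - c) * b

print(with_branch(3, 7), with_substitution(3, 7))  # both print 7
```

Both are "technically adequate" in the sense I mean; the difference is whether the condition governs the flow of the program or merely the value of an expression.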
Trolling you with art since 1980... but really I'm not comfortable calling this art. It's some kind of critical humanities work. I may be doing it, but I'm just as much seeking to understand what sort of thing it is as anyone else.
Thank you for addressing my questions, Evan B. This statement in particular filled me with unbridled mirth:
I also very much dig your read of the conditional branch as being haunted. Now there's a whole unexplored dimension to the hermeneutics of suspicion right there, my exorcist friend!
I still beg to differ with your insistence that this thing you've made can't be called art. I mean: (1) It's useless in the best of ways. (2) You made it without fully knowing what it's supposed to do/mean to others. (3) With its bare existence it unsettles the status quo at the deepest, most "Ur-moment"-esque of levels. If that's not art, what then is? Unless...this thing you've made...it hasn't gazed back at you with a hideous yellow eye?
...but in all seriousness, if what you say about there being a "remainder" has any inkling of materiality to it...
...well, let's just say you're not the only one who's noticed binary/boolean seems to have a very interestingly placed exhaust port designed into it, and now that you've just come out and put that fact in everyone's faces, well, what are we supposed to do but ready the proton torpedoes? (RIP, Bothan spies) (Also, hi Coda Wie! Here I am in conversation with my favorite Marxist! This is me! Masks off, sister!)
But wow, I mean, seriously though. Imagine stumbling artfully into a veritable pharmakon for an inarguably not unnecessary complete hardware-up code jubilee? Like, I really do think that if a fundamental flaw is found in the way code--at its most basic level--interacts with the machine (or doesn't), what else can we do but radically rebuild everything from the hardware up? This is making me think of that guy from my grandfather's fan club that we encountered that one time in Portland who was talking about how there are code bases beyond binary that haven't yet seen the light of day... and how all present computing is a mess because of its reliance upon a single, limited system of interacting with the machine--at least that's what he believed after reading my grandfather's papers, which it seems he still has hidden in LA somewhere. What I assume he must have been talking about was simply how other numeric systems might be used rather than base-2, like "trinary" or perhaps base-60, or maybe that Aztec system with all those interlocking 7s and 12s? (Not that we can ever know what he actually saw in those papers anyway, not until they are made publicly accessible...) But I mean, it seems like you approached this problem of talking about there being a "remainder" from a very different angle.
Perhaps because of my perspective as a left-field larp coder, I'm finding myself scratching my head here: How did you find yourself with a "remainder" at the level of binary (yes/no) while approaching it from what looks to me to be a very front-end angle? Or maybe you're not talking about binary at all, and this is a feature of your code? Or maybe this is a metaphor for the humanities you've come up with here, and you haven't found a remainder in a scientific way but rather the remainder you've found is limited to the realm of interpretation? Again, this is the left-field larp coder perspective, and if this questioning seems too elementary to spend time on, I totally understand. Thank you again for sharing your code and analysis! I'm always a fan of your work.
Oh and if we're making silly statements about our birth years:
I was born in 1984 & I don't want to live there anymore