2019 September: PhD Begins

The Hospital of Happiness. Sound work with autogenerated image of non-existent person. 2019. Listen here.

07/09/19 Exploring Evolutionary Algorithms

In Chapter 3 of The Blind Watchmaker, Richard Dawkins refutes the idea that, given enough time, a monkey typing randomly on a typewriter could come up with the complete works of Shakespeare, or even a single line from Hamlet, and then presents the outline of an algorithm that could reach this goal automatically. There are far too many ways to arrange the English alphabet across the 28 characters of the target phrase “Methinks it is like a weasel” for dumb luck to succeed within the lifetime of our universe, but the phrase can still be found through cumulative, rather than single-step, selection. Here is the algorithm:

  • Create a random string of 28 characters (or a population of such strings).
  • Copy that string many times (e.g. 100), allowing a low chance of mutation (e.g. 0.1 per character).
  • Of those 100 copies, or children, select the one closest to the target phrase (“methinks it is like a weasel”), i.e., the one with the most characters in the right place relative to the target.
  • Repeat the process with the selected child (or population of children), which now become parents (see the sketch below).
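A minimal Python sketch of this cumulative-selection loop might look as follows. The child count and mutation rate are the example values from the list above; the alphabet and everything else are illustrative assumptions rather than the code actually used:

```python
import random
import string

TARGET = "methinks it is like a weasel"
ALPHABET = string.ascii_lowercase + " "
CHILDREN = 100          # copies per generation
MUTATION_RATE = 0.1     # per-character chance of mutation

def score(candidate):
    # number of characters in the right place relative to the target
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent):
    # copy the parent, giving each character a small chance of mutating
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in parent
    )

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    children = [mutate(parent) for _ in range(CHILDREN)]
    parent = max(children, key=score)   # cumulative selection of the closest child
    generation += 1
print(f"Found the target in {generation} generations")
```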

Eventually, after a number of generations, the algorithm finds the phrase. But there is no such “target” in nature, no prescribed goal. In this first evolutionary algorithm, the likeness to the target constitutes the selective pressure on the population. In natural evolution, there is no target organism prescribed in advance, and the selective pressures are rather those myriad events or circumstances that might pose a threat to an individual’s chance of survival or ability to reproduce itself.

There is, however, already an automatic, non-conscious system for sorting through a staggering number of gibberish possibilities and landing upon a phrase from Hamlet. Several parameters may affect its efficiency, such as the population count for parents, the number of children or copies in each generation, the probability of mutation, or even the precise way in which likeness to the target, or rather the distance function, is defined. Here the algorithm selects those copies that happen to have the most characters in the right place with respect to the target phrase.

Englishness

In this algorithm the aim was to create a system that automatically generates English sentences. Like the previous algorithm, this one relies on a reference text – however, the distance function must now measure the generated string’s distance to Englishness, rather than to a single, predetermined phrase, so that the resulting string is not known in advance. That implies having to define Englishness by some means, and in this case that definition was based on the probabilities of chancing upon combinations of characters of a certain length within the reference text. These combinations of characters are called ngrams. When analysing ngrams of length 2, the algorithm parses the reference text 2 characters at a time. There are 841 ways to fill two character spaces with 29 characters (the alphabet, space, and some punctuation), and the most frequent ngram of length 2 in the reference text (which was my novel, Anomaline) was “e “, because so many words in the English language end with “e”, such as “the”.

The probabilities of all ngrams of lengths 2, 4, 8 and 16 in the text were calculated by the algorithm, capturing at 4 different levels the structure of English prose (according to its use in my novel) based on the likelihood of different characters appearing together. This statistical image of the English language could then be used as the baseline against which the distance function was expressed. The distance function scored the initial, randomly generated strings and their slightly mutated offspring, constituting a selective pressure on the population: preference was given to those strings most closely emulating the ngram probabilities of the reference text. Apart from using the statistical data as a target rather than a single phrase, the algorithm looked similar to the previous one:

  • Create a population of strings of random characters (e.g. population = 10 and string length = 20 characters).
  • Make copies of each string (e.g. 100 copies), with a chance of mutation (e.g. 0.1 per character).
  • From each family of copies, select a number of children (e.g. 1): those strings whose internal ngram frequencies (of lengths 2, 4, 8 and 16) most closely resemble the probability distribution derived from the reference text. (In an ‘elitist’ system, a parent string may be selected if it scores higher than its children according to the distance function. This is another parameter worth considering.)
  • Repeat the process with the selected copies.
  • Print the highest-scoring string in each generation, to visualise the evolution (see the sketch below).
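A minimal sketch of both steps, gathering the ngram statistics from a reference text and running the evolutionary loop against them, might look like the following. The population size, mutation rate, alphabet and helper names (ngram_distributions, englishness) are assumptions for illustration, not the code actually used:

```python
import random
import string
from collections import Counter

ALPHABET = string.ascii_lowercase + " .,"   # 29 characters: alphabet, space, some punctuation
NGRAM_LENGTHS = (2, 4, 8, 16)
STRING_LENGTH = 20
POPULATION = 10
CHILDREN = 100
MUTATION_RATE = 0.1

def ngram_distributions(text, lengths=NGRAM_LENGTHS):
    """Relative frequency of every ngram of each length in the reference text."""
    distributions = {}
    for n in lengths:
        counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
        total = sum(counts.values())
        distributions[n] = {gram: c / total for gram, c in counts.items()}
    return distributions

def englishness(candidate, stats):
    """Score a string by how probable its own ngrams are in the reference text."""
    total = 0.0
    for n, dist in stats.items():
        for i in range(len(candidate) - n + 1):
            total += dist.get(candidate[i:i + n], 0.0)
    return total

def mutate(s):
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in s
    )

def evolve(stats, generations=500, elitist=True):
    parents = ["".join(random.choice(ALPHABET) for _ in range(STRING_LENGTH))
               for _ in range(POPULATION)]
    for _ in range(generations):
        next_parents = []
        for parent in parents:
            pool = [mutate(parent) for _ in range(CHILDREN)]
            if elitist:
                pool.append(parent)   # a parent survives if it outscores all its children
            next_parents.append(max(pool, key=lambda s: englishness(s, stats)))
        parents = next_parents
        print(max(parents, key=lambda s: englishness(s, stats)))   # best string this generation
    return parents

# stats = ngram_distributions(open("reference.txt").read().lower())  # "reference.txt" is hypothetical
# evolve(stats)
```

With elitist=False, a parent is always replaced by its best child, which corresponds to the non-elitist variant mentioned in the list above.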

English words did start appearing after a few generations, but they were mostly syntactically awkward combinations of the most common short words in the reference text (e.g. “the she said that the”). To encourage a more diverse sampling of patterns in English, we needed to push the algorithm towards the expensive feat of chancing upon far less likely 8- or 16-character ngrams.

By including several ngram lengths, we built a smoother gradient into the algorithm: the rewards for scoring moderately on short ngrams like “e “ give the string far greater chances of happening upon a longer ngram, with these smaller feats acting as a bridge towards longer ones. Thus “e “ may bridge the gap to “she “ and then to “ washer “, and so on. That already sounds like a learning process.

However, despite including a gradient of ngram statistics, the algorithm still gravitated towards shorter, more common words. To remedy this, we scored longer ngrams exponentially higher than shorter ones. The algorithm then started to evolve strings that read like intelligible English, including long and short words, without having any conception of words itself. This intervention also eased the problem of falling into local optima: situations in which the chance of stumbling upon higher-scoring character combinations is forgone in favour of moderately scoring ones, because the steady rewards of the low-hanging fruit that are short ngrams make any backward step, taken at the cost of the current score, undesirable. The much larger scores attributed to longer ngrams made this temporary regression feasible and worth the initial cost.
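One way such exponential weighting could look, sketched against the englishness function above (the base value is an assumption; the actual weighting used may well differ):

```python
def englishness_weighted(candidate, stats, base=10.0):
    """Like englishness(), but an ngram of length n is worth base**n times its probability."""
    total = 0.0
    for n, dist in stats.items():
        weight = base ** n   # e.g. a matched 16-gram is rewarded vastly more than a 2-gram
        for i in range(len(candidate) - n + 1):
            total += weight * dist.get(candidate[i:i + n], 0.0)
    return total
```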

The “problem” that then arose was that, while the algorithm sometimes showed genuine creativity, most of the time it returned already-existing phrases from the reference text. We realised we would prefer the algorithm to come up with English-sounding sentences that do not already exist in the text: an algorithm that composes English sentences rather than evolving into existing ones.

This was a problem of scale with regard to the reference text. If the algorithm succeeded in evolving a 16-character ngram, it was because such a precedent exists at least once in the reference text. But these lengthier ngrams can capture lengthier, and therefore rarer, words, which limits the possibilities the ngram has to increase its score further through mutations at its extremities. If the word “window” appears only three times in the reference text, then even though the algorithm has no conception of words themselves, it is still limited by the relative uniqueness of any ngram that includes “window”, e.g. the 16-character ngram “t of the window “. The reference text may provide so little contextual data about what is likely to sit next to an ngram containing “window” that evolution winds up replicating sentences verbatim from the reference text. There is not enough data, and the algorithm becomes so biased towards the use of English in the reference text that it ends up almost copying aspects of it, albeit “blindly” in Dawkins’ sense.

If scale really is the issue, then a scaled-up data set should lead the algorithm to exhibit creativity by composing sentences less easily traced to a single pre-existing reference. Exposed to many more examples, it would have more combinations to chance upon in conjunction with an ngram that includes “window”.

At that point we would question whether this truly counts as composition in the sense in which humans compose sentences, or whether the machine only has the appearance of composing something. Yet even when humans compose sentences, they are heavily restricted by linguistic precedents, grammatical rules and syntax, all of which have a statistical blueprint. Even though I do have a conception of discrete words, I cannot come up with endless ones to place directly in front of “window”, and this fact is a result of all the preceding usages of a language which I have learned to use by example.

So the next steps may involve experimenting with the existing parameters, of which there are many: the length of the generated string, natality, mutation probability and elitism, as before; but also how exactly to weight the scoring of different ngram lengths, how to define the subtleties of the distance function, whether to scale up or otherwise curate the reference data, and whether to algorithmically define the concept of words.

Another issue is the persistence of getting trapped in local optima, albeit ones made up of longer ngrams, which puts pressure on the extremities of the generated strings. While these short sequences of characters may look slightly more English than random, they remain for the most part gibberish.

Creating a Larger Sample Text

I want to expand the sample text to present the algorithm with as diverse an array of ngrams of various lengths as possible, reflecting valid use of the English language.

My first thought is whether I should include a number of texts that are very different to each other semantically. Perhaps a single writer repeats themselves too much to offer enough diversity in the use of certain words. Perhaps certain words and topics are omitted altogether given the theme or genre of the text.

My next thought, in response to the issue of limited vocabulary, is whether to include a dictionary as one of the texts. That wouldn’t work very well on its own, because the whole point of using ngram statistics is capturing language sequentially, whereas the sequence of words in a dictionary is arbitrary (i.e. alphabetical). If the dictionary were made up of example phrases in which each word in the vocabulary is contextualised, that might be useful. The only problem then would be that these phrases, although longer sequences of actual English, would still be listed arbitrarily, and the algorithm might exhibit some of that arbitrariness in nonsensical compositions of its own (e.g. the words “abolish” and “abortion” sit next to each other in the dictionary; even if we only extracted contextual phrases as definitions of these words, the algorithm would capture the transition between the two listed phrases as a seamless one).

I am now thinking that individual phrases are not long enough, not embedded enough in linguistic context, to offer enough data for the Englishness algorithm. What if there were a whole book or article out there in which the word “abolish” features many times? Perhaps that single text could offer enough uses of the term to guide the algorithm in using it correctly. I could generate a massive reference text which lists texts one after another, each chosen for its heavy use of one word in the English vocabulary. So a book in which the term “abolish” is used many times would be followed by a book in which the term “abortion” is used many times, and so on through the whole dictionary. That would be a very large sample text indeed: there are circa 170,000 words in the Oxford English Dictionary, and most books are 50,000–90,000 words, meaning the sample text would contain somewhere in the region of 10 billion words. I don’t really know if that’s computationally feasible.

As an alternative, I made a larger sample text including all of the works of Mary Shelley (apart from a drama piece she wrote – I wasn’t sure what effect the recurring names of characters would have on the resultant text). Now I am waiting to try the algorithm on that to see if it captures Mary Shelley’s style, without recreating whole sentences of hers.
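For the record, assembling such a corpus can be as simple as concatenating plain-text files into one reference text; the folder and file names below are hypothetical:

```python
from pathlib import Path

SOURCE_DIR = Path("mary_shelley")          # hypothetical folder of plain-text works
OUTPUT = Path("shelley_reference.txt")     # hypothetical combined reference text

# Read each work, lower-case it, and join everything into a single sample text
texts = [p.read_text(encoding="utf-8").lower() for p in sorted(SOURCE_DIR.glob("*.txt"))]
OUTPUT.write_text("\n\n".join(texts), encoding="utf-8")
```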

*Files related to these experiments can be found in Dropbox>Katarina>Evolution

19/09/19 Evolution by Aesthetic Selection

As an alternative to a selection process that involves an ‘end goal’, as in the case of the ‘Methinks it is like a weasel’ algorithm, Dawkins proposes another. This time the algorithm is written to mimic further properties of biological evolution, namely embryonic development and genes. There is no target against which progeny are compared and selected, but the environmental pressures of the kind biological organisms are exposed to in a lifetime would be too complex to simulate. Instead, selection is made according to the aesthetic preference of the human running the program.

The ‘EVOLUTION’ algorithm contains two sub-functions – ‘DEVELOPMENT’ and ‘REPRODUCTION’. DEVELOPMENT is in this case an analogue of the development of a biological organism. The fact that a relatively complex organism can mature from a single ‘seed’ or ‘egg’ suggests that there is a pattern of branching, or splitting, which it follows in order to expand: “But this large-scale form emerges because of lots of little local cellular effects all over the developing body, and these local effects consist primarily of two-way branchings, in the form of two-way cell splittings. It is by influencing these events that genes ultimately exert influence on the adult body.” (pg. 53). The DEVELOPMENT part of the program is therefore the chance for whatever genes the virtual organism (‘biomorph’) inherits to intervene in its ultimate form – in other words, the genes tell the biomorph how to grow. Given that in nature this growth appears to have the characteristic of doubling, Dawkins models his biomorphs on tree-like drawings made by the computer, whereby at each successive stage of doubling every branch of the tree grows a pair of new branches. Exactly how the tree branches out is determined by 9 ‘genes’. Each gene represents a certain dimension in the growth of the tree – the angle of branching, the curvature or straightness of branching, or simply the number of branchings (the ‘depth of recursion’).
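As a toy illustration of the DEVELOPMENT idea (not Dawkins’ actual gene-to-drawing mapping), the sketch below grows a branching figure from a list of 9 integer ‘genes’; which gene controls what – angles, shrinkage, recursion depth – is entirely an assumption here:

```python
import math

def develop(x, y, length, angle, genes, depth):
    """Recursively grow a biomorph as a list of line segments (x1, y1, x2, y2)."""
    if depth == 0:
        return []
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments = [(x, y, x2, y2)]
    # Two-way branching at every step, echoing two-way cell splitting:
    # genes[0] and genes[1] set the left/right branching angles,
    # genes[2] sets how much each branch shrinks; the remaining genes are unused in this toy.
    left = math.radians(10 * genes[0])
    right = math.radians(10 * genes[1])
    shrink = 0.75 + 0.05 * genes[2]
    segments += develop(x2, y2, length * shrink, angle - left, genes, depth - 1)
    segments += develop(x2, y2, length * shrink, angle + right, genes, depth - 1)
    return segments

genes = [3, 2, 1, 0, -1, 2, 0, 1, 6]                      # the 9th gene as the 'depth of recursion'
figure = develop(0.0, 0.0, 10.0, math.pi / 2, genes, depth=genes[-1])
```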

In the REPRODUCTION part of the algorithm, a selected biomorph in a given generation is copied several times over. In each copy there is again a chance of mutation – in this case, the chance that one gene’s value changes by -1 or +1 degree, such that it ends up differing from the parent. Only selected copies get to reproduce.

Finally, in the EVOLUTION part of the algorithm, this reproductive and development process is repeated – however only one child from each generation is selected to reproduce – the other copies’ hereditary lines end there. The selection is made by a human, based on aesthetic preference in each generation. In nature, natural selection, or as Dawkins sometimes calls it, ‘non-random death’, performs this role.
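A sketch of how REPRODUCTION and aesthetic selection might be wired together (litter size and mutation chance are assumptions; a real version would draw each biomorph, e.g. with the develop() sketch above, rather than print its genes):

```python
import random

def reproduce(parent_genes, litter=12, mutation_chance=0.5):
    """Copy the parent several times; each copy may have one gene nudged by -1 or +1."""
    children = []
    for _ in range(litter):
        child = list(parent_genes)
        if random.random() < mutation_chance:
            i = random.randrange(len(child))
            child[i] += random.choice((-1, +1))
        children.append(child)
    return children

def evolve(genes, generations=20):
    for _ in range(generations):
        children = reproduce(genes)
        for i, child in enumerate(children):
            print(i, child)                      # stand-in for drawing each biomorph
        choice = int(input("Select the child you find most pleasing: "))
        genes = children[choice]                 # only the chosen child gets to reproduce
    return genes
```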

With this algorithm, Dawkins quickly manages to evolve interesting looking biomorphs that truly look biological and insect-like. He further proposes an interesting idea for an experiment: to take the computer out into the garden and allow real insects to ‘select’ progeny by seeing which biomorph they prefer to land on – this physical touch on the screen would be recorded by the computer and interpreted as a selection. His hypothesis was that the biomorphs would evolve to look like other insects or flowers – although qualifying this entertaining notion with the knowledge that insects see differently to us and may not register images on a computer screen at all.

He then explains that the evolutionary paths taken to arrive at his biomorphs constituted a single ‘walk’ through their genetic space – that all the possible creatures that could evolve from these 9 genes, or tree-growth dimensions, could be visualised as existing within a 9-dimensional space in which organisms are arranged in order of their relatedness to one another. This space would be infinite even for only 9 genes, because every dimension can be increased or decreased by 1 degree without limit. As such, every biomorph in ‘biomorphland’ can be described entirely by its genetic formula, that is, a set of values denoting the extent of each dimension in its development, and each biomorph sits right next to 18 others, since one step positive or negative along one dimension leads to its most intimate relatives – a parent, child or sibling. The further out in genetic space you wander, the more distantly related the biomorph you arrive at.

Dawkins talks about the evolution of the biomorphs as a ‘creative process’ (pg. 65). The difference, he reflects, lies in the vastness of the space of possibilities navigated. The ‘Methinks it is like a weasel’ algorithm did not feel creative, because the end result was prescribed in advance. Because the probability of arriving at any particular biomorph is vanishingly small, the selection process begins to look more creative: it must dynamically navigate a space of possibilities so vast that one cannot find a creature in it (without already knowing all its genetic information) – only stumble upon one. This idea reminds me of my video work Pseudo, in which a fictional character muses over the world of potential persons from which she comes: “It’s not fair. But most of the people that could be people are not beings, they don’t exist”. She imagines herself to hail from a Hadean world of ghostly beings drifting in a soup of non-existence – her conjuring into life constitutes the fact that I found her genetic formula through repeated, evolving performances that modulated a few, isolated steps at a time – a walk through a space of possibilities. It is interesting to think that her existence is a creative one by virtue of the vast number of possible fictional characters that are not her, and that creativity is a kind of searching, a kind of searching that happens in a place so large it is impossible to know exactly what one is looking for. This reminds me again of the Englishness algorithm above, and the question of whether the algorithm is ‘actually’ being creative when we increase the size of the sample text. If we could make the algorithm’s reference text as large as language itself, would it not be creative?

Finally, to reiterate: the sheer vastness of this mathematical space leads to the conclusion that great genetic distances can only be crossed through small, cumulative steps. The 9-gened biomorphs each had 18 neighbours in their 9-dimensional genetic space; those 18 neighbours each had 18 neighbours of their own. The number of possible routes grows exponentially with each remove in kinship – on the order of 324 at two steps, 5,832 at three, 104,976 at four and 1,889,568 at five – so any single leap from one biomorph to another particular, distant one becomes unlikelier at an exponential rate. In biological evolution, a great many more genes are involved, and genetic space constitutes a much more intricate space of possibilities. Big leaps are as good as impossible, let alone the chances of the mutation catching on – what is the likelihood that a vastly mutated organism would survive long in its alternative condition under natural selection?

The Hospital of Happiness & a Podcast Series

This sound work, paired with a portrait image generated by a generative adversarial network (GAN) via the website created by Karras et al. and Nvidia (www.thispersondoesnotexist.com), is part of a basic attempt to experiment with character creation and AI-generated faces. It felt like a completely different approach from the study of evolutionary algorithms I’ve been doing lately. In fact, I think I was distinctly influenced by a film I rewatched – Secretary (Steven Shainberg, 2002), which has nothing to do with my research in itself. One detail of that film entered my sound improvisation session: at the beginning of the film, Lee (Maggie Gyllenhaal) is released from a psychiatric institution of some kind, and is reluctant to leave because life in the institution was ‘simple’. She seemed happy there. I began to unconsciously run with this idea; as usual, I only realised the reference after I finished performing and relistened to the recording.

The result is again a familiar and plausible person, telling a story that veers off on the absurd, offering some strange wisdom at the end as if the whole story were a parable.

***

A week or so later, I had the idea of making a podcast series. The GAN faces would appear as little profile thumbnails when listening to the tracks, their artificial imperfections imperceptible at that scale. Could be something to pitch.

Empathy Practice as a Weekly Brain Dump

I have been thinking of using this strand of my practice in terms of what it is best at – being a ‘brain dump’. I am imagining having weekly sessions, whether I happen to be in some new, exotic place, or just in my flat/in a studio at Goldsmiths, to just clear my mind and process my week through the prism of the practice, no strings attached. Whatever I have done, learnt or seen that week could enter into the method, and get processed as a character with a story.

15/09/19 Induction Week at Goldsmiths

It’s been a pretty full-on week, packed with information and advice which I will probably forget. I think I’ve gotten to know Goldsmiths a bit better and become a bit more familiar with the facilities. Of particular interest are the little study areas dotted here and there that are dedicated to postgraduate students, and also the fact that they have a dedicated ‘podcasting room’, just as I’ve recently had the idea of making a podcast. It was all a bit of a whirlwind and I am not sure how to sum it up.

I’ve got to come back to myself. Back to my research. I’ve been swamped with offerings and options and diversity of thought and facilities, but I know in the end I am not going to use them all. Or subscribe to all the impressive sounding thinkers in my department and others (with all due respect of course). My work is different to what I am seeing others doing. I am recoiling, inevitably, at some of the titles of my colleagues’ projects. Which is extremely prejudiced of me, I know. Something about the language art researchers are using is incredibly, incredibly vague, and I am not sure whether it’s useful. I am not sure what they are saying half the time. A big challenge in my exchanges there will be communication, or having any clue what anyone’s talking about. Is my work equally incomprehensible to others? I like the idea of trying to say things as simply as possible.

The art presentations are two days from now, and very disagreeably the senior staff decided to change the format significantly and announce it only yesterday. This means the presentation I have prepared will not be usable, as it is too long and I am no longer allowed to use any slides, meaning, I cannot show my work. Anyway, I think I will take my father’s advice and rewrite my now 5 minute presentation from scratch, focusing on just one aspect of my work. I think what interests me most at the moment is the reading-writing duality apparently present in all autonomous systems like persons, so I’ll talk about that and just rely on the audience trusting me that the work does what I say it does.

I’m sort of looking forward to getting this awkward ice-breaking week over and done with, and going back to my own pace with things. I need to remember that I don’t have to win anybody over but myself. I’ll say something during the presentation that will be of use to me to have said aloud. I’ll have met some people and will be open to them, but also I will not compare myself to them. The way I do things is good, and I must let this methodology breathe so it can give me as much feedback as possible on the questions that I am most interested in. So I don’t want to blow this presentation thing out of proportion. It is not a big deal, but I will also make use of it.

Planning is something I’ve been thinking about. Milestones. That worked pretty well for me throughout this month – at the beginning of September, I’d set myself some goals to achieve before induction week, and I ended up working quite actively, and with a sense of pleasure and satisfaction every day. I had some things I wanted to achieve within reading/learning, making art, and making output (like a presentation), and I largely did. I did in particular a lot of reading and surprised myself with some artwork.

Now that I’ve met so many characters and places and got a sense of Goldsmiths, I wish to do that again, maybe set milestones ahead of Christmas, or maybe ahead of the first intensive. I will give some of the training and discussion opportunities at Goldsmiths a go, but must remember to take it all in critically and not succumb to being overly modest in the face of authority – next week I have meetings with K and M, and I’d like to prepare what to talk about at both of these.

On top of my own research I’ll have to keep a keen eye on what is going on with teaching – I am officially a member of staff now and want to be well prepared for my first seminar, do the readings and some background research on the topic beforehand.

Accounting

Made 2 evolutionary algorithms with dad (“Methinks it’s a Weasel” and “Englishness”)
Tutored a MA art student for the first time
Tutored another MA art student through recommendation
Began reading “The Blind Watchmaker” by Richard Dawkins on evolution
Made ‘Hospital of Happiness’ piece
Had the idea to create a podcast series for GAN faces
Presented a snapshot of my research at the Art Research Presentations
