On Tuesday, a musician and AI illustrator who tweets under the handle @Supercomposite wrote a thread about a strange discovery they’d made while playing around with an AI image generator: A woman whose weathered, inflamed face and sorrowful, stern expression emerged unintentionally in pictures generated from their prompts, and whose image persisted across multiple image prompts and recombinations. Stranger still, the AI seemed to strongly associate her with gore, horror, and unsettling imagery. Supercomposite called her Loab.
“Loab” was initially produced using a technique called “negative weighting.” Usually, when you use an AI image generator, you’re asking the model to produce the image it believes most closely resembles your text prompt. Negative weighting allows you to do the opposite: instruct the AI to produce the image it believes is most different from the prompt.
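If you want a more concrete sense of the mechanics, here’s a minimal sketch in Python -- not Supercomposite’s actual tool (which they haven’t named), just a toy. In a real generator the weight is applied to guidance inside a diffusion loop; every name and number below is a stand-in, and the sketch only shows which direction a weighted update moves:

```python
import numpy as np

# Toy sketch of prompt weighting. A real image generator applies the
# weight to guidance inside a diffusion loop; this only shows which
# way a single update moves. All names and numbers are stand-ins.
rng = np.random.default_rng(0)

# Stand-in for the model's embedding of a text prompt (a unit vector).
prompt_direction = rng.normal(size=8)
prompt_direction /= np.linalg.norm(prompt_direction)

latent = rng.normal(size=8)  # the image state being refined

def guided_step(latent, direction, weight, step_size=0.5):
    """Nudge the latent along the prompt direction.

    weight > 0 pulls the image toward the prompt; weight < 0 -- the
    "negative weighting" trick -- pushes it away instead.
    """
    return latent + step_size * weight * direction

for weight in (+1.0, -1.0):
    stepped = guided_step(latent, prompt_direction, weight)
    print(f"weight {weight:+.0f}: alignment with prompt went "
          f"{latent @ prompt_direction:+.3f} -> {stepped @ prompt_direction:+.3f}")
# weight +1 raises alignment with the prompt; weight -1 lowers it.
```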
Supercomposite asked their AI to produce an image as far away from the prompt “Brando” as possible. The AI produced this:
So far, so good: that image does appear to be quite far from “Brando.” But when Supercomposite tried to reverse the process -- negatively weighting the text from that first image, in hopes of arriving back at Marlon Brando -- they found Loab instead:

These initial images of Loab are mysterious -- why is this face the point “farthest” from the gibberish in the prompt? -- but, at most, only slightly creepy, in the way AI representations of people sometimes can be.
The “horror” part of the story really starts when Supercomposite prompts the AI to combine a Loab image with other images. These two images (the one on the right is the product of the prompt “[...] hyper compressed glass tunnel surrounded by angels [...] in the style of Wes Anderson”), fed into the AI for combination with no additional text prompt…
…resulted in these images:



Loab’s unsettling face, combined with innocuous celestial imagery, produces extremely gruesome results. This suggests, as Supercomposite explains, that the AI strongly associates Loab’s face with “extremely gory and macabre imagery.” When the AI creates new images from prompts that include Loab, grotesque flourishes and concepts are dragged along with her.
Spookier still, as they kept combining Loab images, Supercomposite found that “almost all descendent images contain a recognizable Loab.”
[T]hese are produced with other images as inputs, and no text. They are the result of “cross-breeding” images of Loab with images of other things. […] The images that result from crossing Loab with other images can in turn be crossbred with other images. The AI can latch onto the idea of Loab so well that she can persist through generations of this type of crossbreeding, without using the original image […] She haunts the images, persists through generations, and overpowers other bits of the prompt because the AI so easily optimizes toward her face.
Yikes!
The Loab thread is viral; people are hunting for Loab in AI programs; there are Loab jokes. You imagine that, across a number of offices in Los Angeles, people are already pitching Loab movies to bored agents and executives. Who doesn’t love the idea of a demon summoned by prompt, a ghostly face haunting a neural net, bearing mute witness to unspeakable horror somewhere deep within the machine?
Like any cutting-edge technology, AI has a tendency to reflect our anxieties back at us. We can probably chalk at least some portion of the thread’s creepiness up to motivated pattern-matching and wishful thinking on our parts. The images in this email are curated for maximal Loabness; if you look through Supercomposite’s thread (especially if you’re feeling like a spoilsport), you’ll find lots of images that are extremely unsettling -- but in which the figures are not particularly Loab-y. Elements of Loab (especially her hair) might persist through some transformations, but I’m not sure it should be entirely surprising that if you keep asking an AI to combine an image of a creepy-looking woman with other images, bits and pieces of the creepy-looking woman will persist, even across multiple generations.
Still, even in spoilsport mode, questions persist: Who is Loab? Why were this face and this set of characteristics produced by negative weighting? Why does the AI associate her with the gore and horror that appear in the images generated above?
The best answer, at least to the last question, likely has to do with what’s called “the latent space” -- essentially a geometric representation of data. You can imagine the latent space as a kind of conceptual map of the AI’s “knowledge,” on which concepts the AI understands as similar are grouped closely together. The idea holds in reverse as well: the more different the AI believes two concepts are, the greater the distance between them in the latent space. Negative weighting (the technique that initially produced Loab) is a way of instructing the AI: “find me the point farthest away from this prompt.”
If you imagine that the AI is being optimized by its designers and users to produce beautiful, or at least pleasant, images, it will begin to move data associated with unpleasant concepts -- blood, gore, horror, etc. -- far away from the main cluster of “normal,” “pleasant,” “beautiful” concepts in the latent space. In that case, negative-weighting prompts are more likely to send the AI to clusters of data associated with unpleasant imagery -- they are, after all, “farthest away” from the majority of images the AI is trying to produce. (The right word for Loab is probably “uncanny,” which is, as Schelling says, “everything that ought to have remained ... hidden and secret and has become visible.”)
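Here’s that argument as a toy in Python. The concept names and coordinates are hand-placed inventions, and two dimensions stand in for the hundreds a real model learns, but the geometry works the same way:

```python
import numpy as np

# A hand-drawn cartoon of a "latent space": pleasant concepts clustered
# together, gore and horror exiled to a far corner. These coordinates
# are pure invention; real latent spaces are learned, not hand-placed.
concepts = {
    "portrait": np.array([0.0, 0.0]),
    "smile":    np.array([0.5, 0.3]),
    "sunset":   np.array([1.0, -0.4]),
    "gore":     np.array([9.0, 8.5]),
    "horror":   np.array([8.6, 9.1]),
}

def nearest_and_farthest(name):
    """Return the concept closest to `name` and the one farthest from it."""
    q = concepts[name]
    dist = {k: np.linalg.norm(v - q) for k, v in concepts.items() if k != name}
    return min(dist, key=dist.get), max(dist, key=dist.get)

# Negatively weighting anything in the pleasant cluster lands you in the
# horror corner -- that's simply what "farthest away" means on this map.
print(nearest_and_farthest("smile"))   # ('portrait', 'horror')
print(nearest_and_farthest("sunset"))  # ('smile', 'horror')
```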
It’s probably easier to visualize this than it is to explain, so thank you to James Teo for this:


This is not only an unsurprising outcome, it’s a mathematically predictable one. Here’s a longer explanation from Matthew Skala; the bottom line is that, geometrically speaking, there are relatively few points that can be “furthest away” from the many possible starting places in a given space.
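You can check that bottom line numerically. Here’s a sketch in Python on synthetic data -- two dimensions for clarity, though the same logic carries into a high-dimensional latent space:

```python
import numpy as np

# Skala's geometric point, checked numerically: in a bounded cloud of
# points, "the farthest point from here" is almost always one of a
# handful of extreme points, no matter where "here" is.
rng = np.random.default_rng(42)

points = rng.normal(size=(5000, 2))   # a cloud of candidate "images"
queries = rng.normal(size=(500, 2))   # 500 different starting prompts

# Squared distance from every query to every point (squaring preserves
# the ordering, so it's fine for argmax).
sq_dists = ((queries**2).sum(axis=1)[:, None]
            + (points**2).sum(axis=1)[None, :]
            - 2 * queries @ points.T)
winners = sq_dists.argmax(axis=1)

print(f"{len(np.unique(winners))} distinct farthest points "
      f"serve all {len(queries)} queries")
# Prints a number on the order of ten: thousands of candidates, but only
# the extreme points on the cloud's fringe ever "win."
```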
That, at least, explains why Loab would be so strongly associated with unsettling concepts and imagery. But it doesn’t answer all of our questions: As Skala writes, “none of this explains why ‘Loab’ in particular would be the endpoint of the convergence, nor why she's so unpleasant, except that if she or something like her exists, she/it has to look like *something*.”
I’ve seen some suggestions out there that the AI might be sorting concepts associated with, say, “older women” or “skin conditions” into or near our hypothesized “horror” cluster because it’s been led to understand those concepts as “unpleasant” in the same way. But it’s very hard to tell if that’s actually what’s happening.
Indeed, it’s hard to say much of anything about Loab -- what she is, where she comes from, why the AI has sorted her in the way it has -- beyond educated guesswork. This is part of what makes her such an attractive AI horror story; even the mundane, rational-technical explanations require a level of specialist knowledge and come across as a kind of demonology: “Loab’s not a ghost -- it’s an embedding somewhere in the multi-dimensional latent space, strongly associated with gore, darkness, and dread.”
AI is a mysterious (if not mystical) technology, one whose outputs can be confusing or inexplicable even to experts. It’s also now, after the crypto crash, sitting alone at the nexus of tech-industry excitement and news-media paranoia: a bleeding-edge technology, the uses and ramifications of which are still unknown. It’s natural that something in there would unsettle us. I don’t know why it’s Loab, except AI anxiety has to look like *something*.
People who don’t have a good grounding in how neural networks work find phenomena like this more meaningful than they are. Along with language and math literacy, we need to be teaching people machine-learning literacy, and getting the word “intelligence” out of the vernacular when describing machine learning and neural networks.
ML/NNs: a powerful technology we need to be cautious about? YES, same as nuclear power. Somehow deeply meaningful, tapping into something more than an absorption of what humans have generated and posted to the internet? NO.
I’ve studied computing at degree level and have had a career in digital art and photography for 30-plus years. In that time I haven’t seen the advent of any kind of sentient AI (it’s all a myth to sell new tech to the consumer). A form of interesting yet basic AI exists in the gaming world (any major breakthroughs in AI development will be in this area). 🤔