On Tuesday, a musician and AI illustrator who tweets under the handle @Supercomposite wrote a thread about a strange discovery they’d made while playing around with an AI image generator: A woman whose weathered, inflamed face and sorrowful, stern expression emerged unintentionally in pictures generated from their prompts, and whose image persisted across multiple image prompts and recombinations. Stranger still, the AI seemed to strongly associate her with gore, horror, and unsettling imagery. Supercomposite called her Loab.
“Loab” was initially produced using a technique called “negative weighting.” Usually, when you use an AI image generator, you’re asking the model to produce the image it believes most closely resembles a verbal prompt. Negative weighting lets you do the opposite: instruct the AI to produce whatever image it believes is most *different* from the prompt.
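For the mechanically minded, here’s a minimal sketch of the idea -- not any particular generator’s API, just a toy model in which prompts become vectors and a negative weight flips the direction the image is steered in. The embed function is a made-up stand-in for a real text encoder:

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Toy stand-in for a text encoder: maps a prompt to a deterministic
    pseudo-random unit vector. (A real generator uses a learned encoder.)"""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)

def prompt_direction(weighted_prompts):
    """Combine (prompt, weight) pairs into one steering direction.
    A weight of +1 pulls the image toward a concept; a negative weight --
    the "negative weighting" trick -- pushes it away from that concept."""
    direction = sum(w * embed(p) for p, w in weighted_prompts)
    return direction / np.linalg.norm(direction)

# "As far from 'Brando' as possible": give the prompt a weight of -1.
anti_brando = prompt_direction([("Brando", -1.0)])
print(np.dot(anti_brando, embed("Brando")))  # ~ -1.0: pointing directly away
```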
Supercomposite asked their AI to produce an image as far away from the prompt “Brando” as possible. The AI produced this:
So far, so good: that image does appear to be quite far from “Brando.” But when Supercomposite attempted to reverse engineer the prompt, instead of finding Marlon Brando, they found Loab:
These initial images of Loab are slightly creepy, in the way AI representations of people sometimes can be. But the horror really starts when Supercomposite prompts the AI to combine a Loab image with other images. These two images (the one on the right is the product of the prompt "[...] hyper compressed glass tunnel surrounded by angels [...] in the style of Wes Anderson”), fed into the AI for combination with no additional text prompt…
…resulted in these images:
Wherever “Loab” exists in the latent space, these images suggest, she is somewhere close to “extremely gory and macabre imagery.” More spookily, Supercomposite kept combining Loab images and found that “almost all descendent images contain a recognizable Loab”:
[T]hese are produced with other images as inputs, and no text. They are the result of “cross-breeding” images of Loab with images of other things. […] The images that result from crossing Loab with other images can in turn be crossbred with other images. The AI can latch onto the idea of Loab so well that she can persist through generations of this type of crossbreeding, without using the original image […] She haunts the images, persists through generations, and overpowers other bits of the prompt because the AI so easily optimizes toward her face.
More Loabs:
Yikes!
What is going on here? We can probably chalk at least some portion of the thread’s creepiness up to motivated pattern-matching and wishful thinking on our parts. While the images in this email are curated for maximal Loabness, if you look through Supercomposite’s thread (especially if you’re feeling like a spoilsport), you’ll find lots of images that are extremely unsettling -- but in which the figures are not particularly Loab-y. Bits and pieces of Loab (especially her hair) might persist through some transformations, but I’m not sure it should be entirely surprising that if you keep asking an AI to combine an image of a creepy-looking woman with another image, bits and pieces of the creepy-looking woman will persist, even across multiple prompts.
Still, Supercomposite is on to something here: Who is this woman? Why is she produced by negative weighting, and why does the AI associate her with the gore and horror that appear in the images generated above?
The answer likely has to do with what’s called “the latent space” -- essentially a geometric representation of the AI’s data. You can imagine the latent space as a kind of conceptual map, on which concepts the AI believes are similar are grouped more closely together. The idea holds true in reverse as well: the more different the AI believes two concepts are, the greater the distance between them in the latent space. Negative weighting (the technique that initially produced Loab) is a way to instruct the AI: “find me the point farthest away from this prompt.”
But if you imagine the AI is optimized to produce beautiful, or at least pleasant, images, then negative-weighting prompts are more likely to send the AI to clusters of data associated with unpleasant imagery -- they are, after all, the “farthest away” from the majority of images the AI is trying to produce.
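To make that concrete, here’s a toy sketch -- invented numbers standing in for a real model’s latent space, with the “pleasant”/“grim” labels purely illustrative. If most of the data clusters around ordinary, pleasant images, the point farthest from a typical prompt tends to land in the sparse, grim tail:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D "latent space" (a real model's has hundreds of dimensions).
# Most points -- the pleasant, ordinary images the model is tuned to
# produce -- sit in one dense cluster; a small, grim tail sits far away.
pleasant = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(5000, 2))
grim_tail = rng.normal(loc=[9.0, 9.0], scale=0.5, size=(50, 2))
latent_points = np.vstack([pleasant, grim_tail])

def farthest_from(query):
    """Negative weighting, geometrically: the point least like the prompt."""
    dists = np.linalg.norm(latent_points - query, axis=1)
    return latent_points[dists.argmax()]

ordinary_prompt = np.array([0.5, -0.5])   # sits inside the pleasant cluster
print(farthest_from(ordinary_prompt))     # lands in the sparse, grim tail
```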
It’s almost easier to visualize this than it is to explain, so thank you to James Teo for this:
Here’s a longer explanation from Matthew Skala; the bottom line is that, geometrically speaking, many different starting points in a given space share relatively few “farthest away” points. And yet, as Skala writes, “none of this explains why ‘Loab’ in particular would be the endpoint of the convergence, nor why she's so unpleasant, except that if she or something like her exists, she/it has to look like *something*.”
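Skala’s geometric point is easy to check numerically. In the toy sketch below (again, made-up points, not a real model), hundreds of different queries over the same cloud nearly all get their “farthest away” answer from the same tiny handful of extreme points on the rim:

```python
import numpy as np

rng = np.random.default_rng(1)

# A cloud of 1,000 points standing in for the latent space, and 500
# different "prompts" (query points) drawn from the same region.
space = rng.normal(size=(1000, 2))
queries = rng.normal(size=(500, 2))

# For every query, find which point in the cloud is farthest from it.
dists = np.linalg.norm(space[None, :, :] - queries[:, None, :], axis=2)
farthest_idx = dists.argmax(axis=1)

# 500 different starting points, but only a handful of distinct answers:
# the "farthest away" point is almost always one of a few extremes on the
# rim of the cloud.
print(len(np.unique(farthest_idx)), "distinct farthest points for 500 queries")
```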
People who don’t have a good grounding in how neural networks work find phenomena like this more meaningful than they are. Along with language and math literacy, we need to be teaching people machine-learning literacy and getting the word “intelligence” out of the vernacular when describing machine learning/neural networks.
ML/NN’s: Powerful technology we need to be cautious about? YES, same as nuclear power. Somehow deeply meaningful, tapping into something more than an absorption of what humans have generated and posted to the internet? NO.
I’ve studied Computing at degree level and have had a career in digital art and photography for 30-plus years. In that period of time I haven’t seen the advent of any kind of sentient AI (it’s all a myth to sell new tech to the consumer). A form of interesting yet basic AI exists in the gaming world (any major breakthroughs in AI development will be in this area) 🤔