Familiar Strangeness
I typed five words into ChatGPT’s new version 2 image generator. “Make an extremely strange image.”
What came back wasn’t strange. Not really. It was lush, ornate, beautifully rendered nonsense. (You can see it in the image above this article.) A giraffe with binoculars on a Victorian tower. A teacup holding a stormy sea with tall ships. Origami whales drifting past a patched moon hung with brass keys. A gramophone-headed gentleman in a frock coat. An eye tree. A jellyfish-cathedral. Snails carrying glowing snow globes across a checkered plain.
I sat with it for a while. Then the question I keep returning to surfaced again. What does this reveal about us?
The image is strange but still readable
My prompt gave the system almost nothing. So the output became a map of defaults.
It could have made pure abstraction. A broken photograph. A scientific anomaly. Some quiet everyday absurdity. A blank white frame. The word “no” in Helvetica.
Instead, it gave me a surreal cabinet of curiosities. Every object stayed legible. The wrongness came from new jobs assigned to familiar things. Animals became scholars. A building became a creature. The ocean became a collectible. The moon became a face. Books became a road.
We need recognition so the weirdness has a body to live in.
A very old human habit
Medieval manuscript margins did this constantly. Scribes filled the edges of serious religious texts with hybrid beasts, playful disruptions, and small monsters. The blank space outside the sacred text was where the imagination got to misbehave.
The AI image carries that same margin-spirit. Reality treated as a formal page, then the borders start growing monsters.
Surrealism is the obvious modern layer. The image has a dream’s confidence. Nothing in it is possible, yet every surface gets rendered as if it had always existed. That paradox is the whole trick of surrealist painting. Precision in service of nonsense. Realistic images pulled out of normal contexts and reassembled into something paradoxical or shocking.
There’s also a cabinet of curiosities feeling, what European collectors once called a Kunstkammer. A room stuffed with rare, bizarre, exotic, and mysterious objects, organized as if the strangeness itself were a kind of knowledge. This image has that collector’s hunger. Eyes, globes, keys, ships, animals, instruments, glass domes, miniature worlds. The human gaze turned into a landscape.
The cathedral or castle in the center is an important piece. The glowing structure borrows Gothic sacred feeling, then mutates into jellyfish, glass, and nervous tissue. Gothic architecture pulled the eye upward toward heaven, often intensified by stained glass. The AI keeps that upward pull, then makes it biological. Spiritual architecture becomes a dream-creature.
A quiet vanitas pulse runs underneath the spectacle. Books, candles, keys, old ships, the patched moon, the dusk-lit city, the tiny human figures wandering through something too large to master. The old vanitas painters used these objects to whisper about mortality and transience. Here the feeling reads as a nocturnal ache rather than a moral lecture. Knowledge piles up. Candles burn down. The moon holds keys no one can reach.
The lineage hiding in plain sight
Every element traces back to somewhere.
The giraffe with binoculars echoes Dalí. The teacup holding ships pulls from Magritte’s scale games. The moon with a face traces through Méliès to medieval illuminated manuscripts. The eyeball tree borrows from Bosch and Odilon Redon. The gramophone-headed figure is pure Magritte by way of surrealist collage. The floating whales-as-airships come from 1980s fantasy illustration and Studio Ghibli. The nautilus snails carrying glowing orbs feel pulled from Roger Dean album covers.
Ask a model trained on millions of human images to make something strange. You get a greatest-hits compilation of what humans have already labeled as strange. Pattern recognition on our collective weirdness archive.
This is the part that interests me most. The model isn’t generating. It’s surfacing. And what it surfaces is us.
Real strangeness is quieter
The kind that genuinely disorients tends to be specific and small.
A single shoe in an empty parking lot. A child speaking in your dead grandmother’s cadence. A photograph that’s almost a normal family photo, but something in the wallpaper is wrong and you can’t say what.
The model can’t reach that. Because that kind of strange isn’t tagged “strange” in its training data. It’s tagged “photograph” or “memory” or nothing at all.
What we call strange in art is almost always recombination. Bosch painted demons assembled from animals people already knew. The Surrealists put everyday objects into impossible relationships. Strangeness lives in the syntax, not the vocabulary.
The model knows that grammar fluently. It knows “strange” means ornate Victorian object plus unexpected scale plus dreamlike sky plus mythological creature plus technological anachronism. That’s a formula. And formulas are the opposite of strange.
The cultural fingerprint
The machine’s answer to “extremely strange” is ornate, antique, European-coded, cinematic, and safe enough to admire. It reaches for cathedrals, tall ships, porcelain, brass instruments, old books, suited gentlemen, candles, and moonlight.
That tells us something about the archive behind the image. The training set leans heavily on a particular kind of cultural memory. Strange has come to mean museum-grade wonder. The uncanny has been framed, polished, lit, and made suitable for display.
There’s nothing in the image from the actual textures of strangeness in most of the world. No Yoruba masquerade. No Aboriginal dot painting. No Inuit transformation art. No Mesoamerican feathered serpents. No Persian miniatures. No Mughal hybrids. The model’s idea of weird wears a frock coat.
This is what I keep coming back to in my book What AI Reveals. The machine doesn’t invent its biases. It inherits them from us, then hands them back at scale. What I’m looking at isn’t an AI’s imagination. It’s the shape of which imaginations got archived, valued, scanned, captioned, and fed into the system. The exclusions were ours first.
We like disorientation with handrails
The image suggests something honest about us. We want whales in the sky, but we still want stairs, eyes, keys, doors, ships, and little figures so we can enter the dream. We want mystery, but we often clothe it in memory.
This is the deepest point hiding in the picture. Art history isn’t behind AI image generation. It’s inside it. Compressed, weighted, stirred into new scenes. When I gave the system a bare adjective, it reached into the long human record of wonder, fear, collecting, worship, mortality, and play.
The future answered in antiques.
For 500 years we’ve been training ourselves, and now our machines, on what “unconventional” looks like. The result is that genuine novelty has become structurally hard to generate.
The image works as a mirror. That’s the whole argument of my latest book, What AI Reveals, in one picture. AI doesn’t create our limits. It shows them to us with uncomfortable clarity. It shows us our own catalog of permitted weirdness, and the boundary at the edge of it.
You’re looking at the boundary itself, not anything beyond it.
And maybe that’s the most useful thing an image like this can do. Not produce strangeness, but show us where ours stops.