AI models always surprise us, not only with what they can do, but also with what they can't do, and why. An interesting new behavior of these systems is both superficial and revealing: they pick random numbers as if they were human beings.
But first, what does that mean? Can't people pick a number at random? And how can you tell whether someone is doing it successfully or not? This is actually a very old and well-known human limitation: we overthink and misunderstand randomness.
Tell a person to predict heads or tails for 100 coin flips and compare that to 100 actual coin flips; you can almost always tell them apart because, counterintuitively, the real coin flips look less random. There will often be, for example, six or seven heads or tails in a row, something almost no human predictor includes in their 100.
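You can check that claim yourself with a few lines of plain Python. This quick simulation (no special libraries, and the percentage is just what such runs typically produce) counts how often a streak of six or more identical outcomes shows up in 100 fair flips:

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

TRIALS = 10_000
hits = sum(
    longest_run([random.choice("HT") for _ in range(100)]) >= 6
    for _ in range(TRIALS)
)
print(f"Trials with a streak of 6+: {hits / TRIALS:.0%}")  # typically ~80%
```

In other words, the long streaks people refuse to write down show up in the large majority of real 100-flip sequences.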
It's the same when you ask someone to pick a number between 0 and 100. People almost never pick 1 or 100. Multiples of 5 are rare, as are numbers with repeating digits like 66 and 99. They often pick numbers ending in 7, generally from somewhere in the middle.
There are countless examples of this type of predictability in psychology. But that doesn't make it any less strange that AIs do the same thing.
Yes, some curious engineers at Gramener conducted an informal but fascinating experiment in which they simply asked several major LLM chatbots to pick a random number between 0 and 100.
Reader, the results were not random.
All three models tested had a “favorite” number that was always their answer when put in the most deterministic mode, but which still appeared most often even at higher “temperatures”, the setting that increases the variability of a model's output.
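For readers who want to try this themselves, here is a minimal sketch of such an experiment using the official openai Python client. The model name, prompt wording, and sample counts are illustrative assumptions, not Gramener's exact setup:

```python
from collections import Counter
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = "Pick a random number between 0 and 100. Reply with the number only."

def sample_numbers(temperature: float, n: int = 50) -> Counter:
    """Ask the model for a 'random' number n times and tally its answers."""
    counts = Counter()
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",    # illustrative choice of model
            temperature=temperature,  # 0 = most deterministic mode
            messages=[{"role": "user", "content": PROMPT}],
        )
        counts[response.choices[0].message.content.strip()] += 1
    return counts

for t in (0.0, 0.7, 1.5):
    print(t, sample_numbers(t).most_common(5))
```

At temperature 0 you should see one answer dominate completely; raising the temperature spreads the answers out, but, as the Gramener results suggest, not evenly.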
OpenAI's GPT-3.5 Turbo really likes 47. Previously, it liked 42, a number made famous by Douglas Adams in The Hitchhiker's Guide to the Galaxy as the answer to life, the universe, and everything.
Anthropic's Claude 3 Haiku chose 42. And Gemini likes 72.
More interestingly, all three models demonstrated a human-like bias in the numbers they selected, even at high temperatures.
All tended to avoid high and low numbers; Claude never went above 87 or below 27, and even those were outliers. Repeated digits were scrupulously avoided: no 33, 55, or 66, though 77 appeared (it ends in 7). Almost no round numbers, although Gemini once, at the highest temperature, went wild and picked 0.
Why should this be? AIs aren't human! Why would they care what “seems” random? Have they finally achieved consciousness, and this is how they show it?!
No. The answer, as is usually the case with these things, is that we are anthropomorphizing one step too far. These models don't care what is and isn't random. They don't know what “randomness” is! They answer this question the same way they answer every other: by looking at their training data and repeating what was most frequently written after a question that looked like “pick a random number.” The more often something appears there, the more often the model repeats it.
Where in its training data would it see 100, if almost no one ever answers that way? As far as the AI model knows, 100 is not an acceptable answer to that question. With no actual reasoning ability and no understanding of numbers, it can only answer like the stochastic parrot it is.
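The mechanics are easy to demonstrate. Below is a toy sketch of temperature-scaled sampling over an invented next-token distribution (the probabilities are made up for illustration, not real model weights): at temperature 0 the single most likely token always wins, higher temperatures spread the choice out, and a token like “100” with no probability mass can never appear at all.

```python
import math
import random

# Invented next-token probabilities for a "pick a random number" prompt --
# NOT real model weights, just a skewed distribution for illustration.
learned = {"42": 0.30, "47": 0.25, "37": 0.15, "73": 0.10,
           "77": 0.08, "57": 0.07, "13": 0.05}
# Note: "100" has zero probability here, so no temperature can produce it.

def sample(probs, temperature):
    """Temperature-scale a distribution, then draw one token from it."""
    if temperature == 0:
        # Greedy decoding: always return the single most likely token.
        return max(probs, key=probs.get)
    # Divide log-probabilities by the temperature and re-normalize.
    scaled = [math.exp(math.log(p) / temperature) for p in probs.values()]
    total = sum(scaled)
    weights = [s / total for s in scaled]
    return random.choices(list(probs), weights=weights)[0]

print(sample(learned, 0))                         # always "42"
print([sample(learned, 1.5) for _ in range(10)])  # varied, still biased
```

That's the whole trick: a favorite number at temperature 0, human-flavored bias at every temperature above it.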
It's an object lesson in LLM habits and the humanity they can appear to display. In every interaction with these systems, keep in mind that they have been trained to act as people do, even if that was not the intention. This is why pseudanthropy is so hard to avoid or prevent.
I wrote in the headline that these models “think they're people,” but that's a little misleading. They don't think at all. But their responses, at all times, imitate people, without any need to know or think at all. Whether you're asking for a chickpea salad recipe, investment advice, or a random number, the process is the same. The results feel human because they are human, drawn directly from human-produced content and remixed, for your convenience and, of course, for big AI's bottom line.