What an AI’s Non-Human Language Actually Looks Like
Facebotlish looks pretty weird to me to me to me to me to me to me to me to
ADRIENNE LAFRANCE
The Atlantic
In order to actually follow what the bots were saying, the researchers had to tweak their model, limiting the machines to a conversation humans could understand. (They want bots to stick to human languages because eventually they want those bots to be able to converse with human Facebook users.) When I wrote about all this last week, lots of people reacted with some degree of trepidatious wonder. Machines making up their own language is really cool, sure, but isn’t it actually terrifying?
And also: What does this language actually look like? Here’s an example of one of the bot negotiations that Facebook observed:
Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
One way to think about all this is to consider cryptophasia, the name for the phenomenon in which twins make up their own secret language, understandable only to them. Perhaps you recall the 2011 YouTube video of two exuberant toddlers chattering back and forth in what sounds like a lively, if inscrutable, dialogue.
There’s some debate over whether this sort of twin speak is actually language or merely a joyful, babbling imitation of language. The YouTube babies are socializing, but probably not saying anything with specific meaning, many linguists say.
In one preprint paper added earlier this year to the research repository arXiv, a pair of computer scientists from the non-profit AI research firm OpenAI wrote about how bots learned to communicate in an abstract language—and how those bots turned to non-verbal communication, the equivalent of human gesturing or pointing, when language communication was unavailable. (Bots don’t need to have corporeal form to engage in non-verbal communication; they just engage with what’s called a visual sensory modality.) Another recent preprint paper, from researchers at the Georgia Institute of Technology, Carnegie Mellon, and Virginia Tech, describes an experiment in which two bots invent their own communication protocol by discussing and assigning values to colors and shapes—in other words, the researchers write, they witnessed the “automatic emergence of grounded language and communication … no human supervision!”
The implications of this kind of work are dizzying. Not only are researchers beginning to see how bots could communicate with one another; they may also be scratching the surface of how syntax and compositional structure emerged among humans in the first place.
So the question of whether Facebook’s bots really made up their own language depends on what we mean when we say “language.” For example, linguists tend to agree that sign languages and vernacular languages really are “capital-L languages,” as Mark Liberman, a linguistics professor at the University of Pennsylvania, puts it—and not mere approximations of actual language, whatever that is. They also tend to agree that “body language” and computer languages like Python and JavaScript aren’t really languages, even though we call them that.
So here’s the question Liberman poses instead: Could Facebook’s bot language—Facebotlish, he calls it—signal a new and lasting kind of language?
“Probably not, though there’s not enough information available to tell,” he said. “In the first place, it’s entirely text-based, while human languages are all basically spoken or gestured, with text being an artificial overlay.”
The larger point, he says, is that Facebook’s bots are not anywhere near intelligent in the way we think about human intelligence. (That’s part of the reason the term AI can be so misleading.)
“The ‘expert systems’ style of AI programs of the 1970s are at best a historical curiosity now, like the clockwork automata of the 17th century,” Liberman said. “We can be pretty sure that in a few decades, today’s machine-learning AI will seem equally quaint.”
It’s already easy to set up artificial worlds populated by mysterious algorithmic entities with communications procedures that “evolve through a combination of random drift, social convergence, and optimizing selection,” Liberman said. “Just as it’s easy to build a clockwork figurine that plays the clavier.”
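For the curious, here is a minimal sketch, in Python, of the sort of dynamic Liberman describes. It is not code from Facebook or from any of the papers mentioned above, just a toy “naming game” in which simulated agents invent words at random, adopt one another’s words, and discard the alternatives until a shared convention emerges.

import random

# Toy "naming game" (illustrative only): agents repeatedly pair up and try to
# agree on a word for a single object. Words appear by random invention
# (drift), spread when a hearer adopts a speaker's word (social convergence),
# and rival words are dropped after a successful exchange (a crude form of
# optimizing selection).
def naming_game(num_agents=20, rounds=3000, seed=0):
    rng = random.Random(seed)
    vocab = [set() for _ in range(num_agents)]  # each agent's candidate words
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(num_agents), 2)
        if not vocab[speaker]:
            vocab[speaker].add("w%06d" % rng.randrange(10**6))  # invent a word
        word = rng.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            vocab[speaker], vocab[hearer] = {word}, {word}  # success: prune rivals
        else:
            vocab[hearer].add(word)  # failure: hearer learns the new word
    return {w for v in vocab for w in v}  # words still in circulation

print("surviving words:", naming_game())

Run it for a few thousand rounds and the population usually converges on a single arbitrary word, which is the point: a shared convention can emerge without anything resembling understanding.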