The Crazy Language of AI Bots and Non-Human Entities

What an AI’s Non-Human Language Actually Looks Like

Facebotlish looks pretty weird to me to me to me to me to me to me to me to

ADRIENNE LAFRANCE
The Atlantic

Something unexpected happened recently at the Facebook Artificial Intelligence Research lab. Researchers who had been training bots to negotiate with one another realized that the bots, left to their own devices, started communicating in a non-human language.

In order to actually follow what the bots were saying, the researchers had to tweak their model, limiting the machines to a conversation humans could understand. (They want bots to stick to human languages because eventually they want those bots to be able to converse with human Facebook users.) When I wrote about all this last week, lots of people reacted with some degree of trepidatious wonder. Machines making up their own language is really cool, sure, but isn’t it actually terrifying?

And also: What does this language actually look like? Here’s an example of one of the bot negotiations that Facebook observed:

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to

Not only does this appear to be nonsense, but the bots don’t really seem to be getting anywhere in the negotiation. Alice isn’t budging from her original position, anyway. The weird thing is, Facebook’s data shows that conversations like this sometimes still led to successful negotiations between the bots in the end, a spokesperson from the AI lab told me. (In other cases, when researchers adjusted their model, the bots developed poor negotiating strategies, even though their conversation remained interpretable by human standards.)

One way to think about all this is to consider cryptophasia, the name for the phenomenon in which twins make up their own secret language, understandable only to them. Perhaps you recall the 2011 YouTube video of two exuberant toddlers chattering back and forth in what sounds like a lively, if inscrutable, dialogue.

There’s some debate over whether this sort of twin speak is actually language or merely a joyful, babbling imitation of language. The YouTube babies are socializing, but probably not saying anything with specific meaning, many linguists say.

In the case of Facebook’s bots, however, there seems to be something more language-like occurring, Facebook’s researchers say. Other AI researchers, too, say they’ve observed machines that can develop their own languages, including languages with a coherent structure and a defined vocabulary and syntax, though not always ones that are actually meaningful by human standards.

In one preprint paper added earlier this year to the research repository arXiv, a pair of computer scientists from the non-profit AI research firm OpenAI wrote about how bots learned to communicate in an abstract language—and how those bots turned to non-verbal communication, the equivalent of human gesturing or pointing, when language communication was unavailable. (Bots don’t need to have corporeal form to engage in non-verbal communication; they just engage with what’s called a visual sensory modality.) Another recent preprint paper, from researchers at the Georgia Institute of Technology, Carnegie Mellon, and Virginia Tech, describes an experiment in which two bots invent their own communication protocol by discussing and assigning values to colors and shapes—in other words, the researchers write, they witnessed the “automatic emergence of grounded language and communication … no human supervision!”
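To get a rough sense of what the “automatic emergence” of a communication protocol can mean in the simplest possible case, here is a minimal, hypothetical sketch in Python. It is not the researchers’ actual setup, just a toy referential game: a speaker agent is shown a colored shape and emits an arbitrary symbol, a listener agent guesses the object from the symbol alone, and both are rewarded only when the guess is right. The object names, symbol inventory, and reward values below are invented for illustration.

import random

# Toy referential game, purely illustrative (not the researchers' actual code).
OBJECTS = [(c, s) for c in ("red", "green", "blue") for s in ("circle", "square", "triangle")]
SYMBOLS = list(range(9))  # arbitrary symbol inventory available to the speaker

speaker = {obj: {sym: 0.0 for sym in SYMBOLS} for obj in OBJECTS}   # object -> symbol scores
listener = {sym: {obj: 0.0 for obj in OBJECTS} for sym in SYMBOLS}  # symbol -> object scores

def pick(prefs):
    # Mostly choose the highest-scoring option, with occasional random exploration.
    if random.random() < 0.1:
        return random.choice(list(prefs))
    best = max(prefs.values())
    return random.choice([k for k, v in prefs.items() if v == best])

for _ in range(20000):
    target = random.choice(OBJECTS)            # the object the speaker must describe
    symbol = pick(speaker[target])             # speaker "names" it with a symbol
    guess = pick(listener[symbol])             # listener decodes the symbol
    reward = 1.0 if guess == target else -0.1  # shared reward: did communication succeed?
    speaker[target][symbol] += reward          # reinforce both ends of the protocol
    listener[symbol][guess] += reward

for obj in OBJECTS:
    print(obj, "->", max(speaker[obj], key=speaker[obj].get))

Run long enough, the two agents tend to settle on a stable object-to-symbol mapping that neither was given in advance, a bare-bones version of bots converging on their own “language” with no human supervision.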

The implications of this kind of work are dizzying. Not only are researchers beginning to see how bots could communicate with one another, they may be scratching the surface of how syntax and compositional structure emerged among humans in the first place.

But let’s take a step back for a minute. Is what any of these bots are doing really language? “We have to start by admitting that it’s not up to linguists to decide how the word ‘language’ can be used, though linguists certainly have opinions and arguments about the nature of human languages, and the boundaries of that natural class,” said Mark Liberman, a professor of linguistics at the University of Pennsylvania.

So the question of whether Facebook’s bots really made up their own language depends on what we mean when we say “language.” For example, linguists tend to agree that sign languages and vernacular languages really are “capital-L languages,” as Liberman puts it—and not mere approximations of actual language, whatever that is. They also tend to agree that “body language” and computer languages like Python and JavaScript aren’t really languages, even though we call them that.

So here’s the question Liberman poses instead: Could Facebook’s bot language—Facebotlish, he calls it—signal a new and lasting kind of language?

“Probably not, though there’s not enough information available to tell,” he said. “In the first place, it’s entirely text-based, while human languages are all basically spoken or gestured, with text being an artificial overlay.”

The larger point, he says, is that Facebook’s bots are not anywhere near intelligent in the way we think about human intelligence. (That’s part of the reason the term AI can be so misleading.)

“The ‘expert systems’ style of AI programs of the 1970s is at best a historical curiosity now, like the clockwork automata of the 17th century,” Liberman said. “We can be pretty sure that in a few decades, today’s machine-learning AI will seem equally quaint.”

It’s already easy to set up artificial worlds populated by mysterious algorithmic entities with communications procedures that “evolve through a combination of random drift, social convergence, and optimizing selection,” Liberman said. “Just as it’s easy to build a clockwork figurine that plays the clavier.”

___
http://www.theatlantic.com/technology/archive/2017/06/what-an-ais-non-human-language-actually-looks-like/530934/

 
