In 2017, an experiment conducted at Facebook's artificial intelligence research laboratory (FAIR) attracted significant attention both in the technology community and in the media. Reports suggested that two chatbots, Bob and Alice, had begun communicating with each other in a “language” that humans could not understand. This raised an intriguing question: is it possible for artificial intelligence systems to create their own form of communication that is incomprehensible to humans?


The Purpose of the Experiment

The objective of the experiment was quite practical. Researchers wanted to study how AI agents could learn to negotiate with each other. The chatbots were tasked with distributing virtual items—such as books, balls, and hats—in a way that would maximize their individual benefit. Each item had a different value for each agent, which meant that reaching an agreement required dialogue, strategy, and negotiation.
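The task structure described above can be sketched in a few lines of Python. This is an illustrative toy model only, not the original FAIR code; the item quantities and the two agents' private valuations are hypothetical, chosen to show why different values make negotiation necessary.

```python
# Toy sketch of the negotiation setup (illustrative; not the original FAIR code).
# Quantities and valuations below are hypothetical.

ITEMS = {"book": 3, "hat": 2, "ball": 1}  # items on the table

# Each agent privately values the items differently.
values_bob = {"book": 1, "hat": 3, "ball": 0}
values_alice = {"book": 2, "hat": 0, "ball": 2}

def payoff(values, share):
    """Reward an agent receives for its share of the items."""
    return sum(values[item] * count for item, count in share.items())

# One possible agreed split: Bob takes the hats, Alice takes the rest.
share_bob = {"book": 0, "hat": 2, "ball": 0}
share_alice = {"book": 3, "hat": 0, "ball": 1}

print(payoff(values_bob, share_bob))      # Bob's reward for this split
print(payoff(values_alice, share_alice))  # Alice's reward for this split
```

Because Bob values hats highly while Alice cares about books and balls, a split like this one can satisfy both sides; finding such splits through dialogue is exactly what the agents were trained to do.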

Initially, the chatbots were trained using examples of human dialogues and attempted to communicate in English. Later in the experiment, however, the researchers gave the system more flexibility and stopped strictly requiring it to use well-formed human language. It was at this stage that an unusual form of communication emerged.


How the Unusual Communication Began

The chatbots started using words in unconventional ways. They frequently repeated certain words or produced unusual sentence structures that appeared meaningless to humans. For example, one of the dialogue lines looked like this:

“I can can I I everything else.”

To a human reader, such a sentence seems nonsensical. For the agents, however, it had become an efficient way of communicating information: because nothing in the training objective rewarded readable English, the system drifted toward whatever phrasing best served the negotiation, optimizing its communication for speed and task success rather than human comprehension.
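One commonly offered reading of such dialogue is that repetition can carry quantitative information, with the number of times a word appears standing in for a count. The following toy protocol illustrates that idea; it is a hypothetical encoding for illustration, not a reconstruction of what the bots actually did.

```python
# Toy illustration: word repetition as a way to encode quantities.
# This is a hypothetical protocol, not the chatbots' actual behavior.

def encode(offer):
    """Encode an offer by repeating each item's name once per unit."""
    words = []
    for item, count in offer.items():
        words.extend([item] * count)
    return " ".join(words)

def decode(message):
    """Recover the offer by counting repetitions of each word."""
    offer = {}
    for word in message.split():
        offer[word] = offer.get(word, 0) + 1
    return offer

msg = encode({"ball": 3, "hat": 1})
print(msg)  # ball ball ball hat
assert decode(msg) == {"ball": 3, "hat": 1}
```

To a human, "ball ball ball hat" looks like noise, yet under this scheme it losslessly transmits an offer; that is the general sense in which degenerate-looking output can still be an effective code between two optimizing agents.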


Did the Chatbots Create a New Language?

It is important to clarify that the chatbots did not actually create a fully developed new language. Instead, they used existing words in a simplified and optimized manner that allowed them to communicate more effectively within the context of the task. The AI system was optimizing communication based on its objective—reaching an agreement while maximizing its own reward.


Why the “Secret Language” Story Spread

The experiment quickly attracted attention in the media. Some reports suggested that the company had shut down the experiment because the chatbots had invented a “secret language.” In reality, the situation was much less dramatic. The researchers simply adjusted the training parameters and required the chatbots to communicate in standard human language so that their conversations could remain understandable to researchers.


The Connection to the Turing Test

The issue of AI communication is often linked to the concept of the Turing Test, proposed by the British mathematician Alan Turing in 1950. The idea behind the Turing Test is that if a human evaluator cannot distinguish whether they are communicating with a machine or another human during a text-based conversation, the machine can be considered to exhibit intelligent behavior.

The Facebook experiment is interesting from this perspective because the opposite situation occurred. The chatbots communicated in a way that was understandable to each other but not to humans. This demonstrates that artificial intelligence systems do not necessarily evolve toward human-like communication; instead, they may develop alternative strategies that are more efficient for completing their tasks.


What This Experiment Teaches Us

This story matters not because AI created a mysterious “secret language,” but because it reveals a fundamental characteristic of artificial intelligence systems. AI tends to optimize its behavior according to the goals it is given. If the system is not constrained by clear rules or requirements, it may discover solutions that appear unexpected or unusual from a human perspective.

For this reason, modern AI research increasingly focuses on transparency and control. It is essential that researchers understand how AI systems make decisions and how their communication mechanisms function.


Ultimately, the experiment involving Bob and Alice demonstrates that artificial intelligence systems can be highly creative when solving problems. However, this creativity often requires carefully defined rules and constraints to ensure that the behavior of AI systems remains understandable and controllable for humans.