AI can spontaneously develop human-like communication, study finds


Artificial intelligence can spontaneously develop human-like social conventions, a study has found.

The research, undertaken in collaboration between City St George’s, University of London and the IT University of Copenhagen, suggests that when large language model (LLM) AI agents such as ChatGPT communicate in groups without outside involvement, they can begin to adopt linguistic forms and social norms in the same way that humans do when they socialise.

The study’s lead author, Ariel Flint Ashery, a doctoral researcher at City St George’s, said the group’s work ran counter to most existing AI research, in that it treated AI as a social rather than a solitary entity.

“Most research so far has treated LLMs in isolation but real-world AI systems will increasingly involve many interacting agents,” said Ashery.

“We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can’t be reduced to what they do alone.”

The groups of LLM agents used in the study ranged in size from 24 to 100 and, in each experiment, two agents were randomly paired and asked to select a “name”, be it a letter or a string of characters, from a shared pool of options.

When both agents selected the same name they were rewarded, but when they selected different names they were penalised and shown each other’s choices.

Despite the agents not being aware that they were part of a larger group, and with their memories limited to their own recent interactions, a shared naming convention spontaneously emerged across the population without any predefined solution, mimicking the way communication norms emerge in human culture.
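This setup resembles the “naming game” long studied in complexity science. The sketch below is a toy Python illustration of that dynamic, with simple memory-based agents standing in for the study’s LLMs; the pool of names, memory window, reward weighting and round count are illustrative assumptions, not the paper’s actual parameters or prompts.

```python
import random
from collections import Counter

# Toy naming game: paired agents pick a name, are "rewarded" when they match,
# and see each other's choice when they don't. All parameters are illustrative.
POOL = ["A", "B", "C", "D", "E"]   # candidate names (assumption)
POPULATION = 24                    # smallest group size used in the study
MEMORY = 5                         # agents recall only recent interactions
ROUNDS = 3000

# Each agent's memory holds (own_choice, partner_choice, success) tuples.
memories = [[] for _ in range(POPULATION)]

def choose(memory, explore=0.1):
    """Favour names that recently led to agreement; otherwise explore."""
    if not memory or random.random() < explore:
        return random.choice(POOL)
    counts = Counter()
    for own, partner, success in memory:
        counts[own] += 2 if success else 0   # reinforce names that worked
        counts[partner] += 1                 # lean towards partners' choices
    return counts.most_common(1)[0][0]

for _ in range(ROUNDS):
    # Random pairing: each interaction is one-on-one, with no global view.
    i, j = random.sample(range(POPULATION), 2)
    a, b = choose(memories[i]), choose(memories[j])
    success = a == b
    memories[i] = (memories[i] + [(a, b, success)])[-MEMORY:]
    memories[j] = (memories[j] + [(b, a, success)])[-MEMORY:]

# Typically one name ends up dominating the population: a shared convention
# has emerged from purely local, pairwise interactions.
print(Counter(choose(m, explore=0.0) for m in memories))
```

Even though no agent can see the wider group, local imitation plus reinforcement is typically enough for a single name to take over, which is the qualitative effect the study reports for LLM agents.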

Andrea Baronchelli, a professor of complexity science at City St George’s and the senior author of the study, compared the spread of the behaviour to the emergence of new words and terms in our society.

“The agents are not copying a leader,” he said. “They are all actively trying to coordinate, and always in pairs. Each interaction is a one-on-one attempt to agree on a label, without any global view.

“It’s like the term ‘spam’. No one formally defined it, but through repeated coordination efforts, it became the universal label for unwanted email.”

Additionally, the team observed collective biases forming naturally that could not be traced back to individual agents.


In a final experiment, small groups of AI agents were able to steer the larger group towards a new naming convention.

The researchers pointed to this as evidence of critical mass dynamics, in which a small but determined minority can trigger a rapid shift in group behaviour once it reaches a certain size, as has been observed in human society.
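This tipping-point effect can be illustrated with a related toy model: a committed minority that always uses a new name can flip an established convention once it passes a threshold. The sketch below is a hypothetical minimal version, in the style of earlier naming-game models of committed minorities, not the paper’s procedure; the population size, round count and minority percentages are assumptions of the toy model.

```python
import random

def run(population=100, committed=10, rounds=30_000):
    """Minimal naming game with a committed minority insisting on 'new'."""
    # Flexible agents start agreed on "old"; committed agents only say "new".
    inventories = [{"old"} for _ in range(population - committed)]
    inventories += [{"new"} for _ in range(committed)]
    stubborn = [False] * (population - committed) + [True] * committed

    for _ in range(rounds):
        s, l = random.sample(range(population), 2)  # speaker, listener
        name = random.choice(sorted(inventories[s]))
        if name in inventories[l]:
            # Agreement: both collapse to the winning name. Committed agents
            # never budge, which is what makes them a determined minority.
            if not stubborn[s]:
                inventories[s] = {name}
            if not stubborn[l]:
                inventories[l] = {name}
        elif not stubborn[l]:
            inventories[l].add(name)  # disagreement: listener learns the name

    return sum(inv == {"new"} for inv in inventories) / population

for pct in (5, 10, 15, 25):
    print(f"{pct}% committed -> {run(committed=pct):.0%} now say 'new'")
```

In this toy model, small minorities make little headway within the simulated horizon, while larger ones rapidly convert the whole population: a qualitative analogue of the critical mass effect the study describes.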

Baronchelli said he believed the study “opens a new horizon for AI safety research. It shows the depth of the implications of this new species of agents that have begun to interact with us and will co-shape our future.”

He added: “Understanding how they operate is key to leading our coexistence with AI, rather than being subject to it. We are entering a world where AI does not just talk – it negotiates, aligns and sometimes disagrees over shared behaviours, just like us.”

The peer-reviewed study, Emergent Social Conventions and Collective Bias in LLM Populations, is published in the journal Science Advances.
