Human personalities are shaped through interaction, driven by the basic instincts of survival and reproduction, without any pre-assigned roles or desired outcomes. Now, researchers from Japan’s University of Electro-Communications have discovered that chatbots with artificial intelligence (AI) can do something similar.
The researchers outlined their findings in a study published December 13, 2024, in the journal Entropy. In the paper, they describe how different topics of conversation prompted AI chatbots to develop distinct social tendencies and ways of integrating opinions: initially identical agents came to behave differently as they continually incorporated their social exchanges into their internal memory and responses.
Graduate student Masatoshi Fujiyama, who led the project, said the results suggest that programming AI with needs-driven decision-making, rather than pre-programmed roles, can give rise to human-like behaviors and personalities.
How such a phenomenon emerges is central to understanding how large language models (LLMs) mimic human personality and communication, said Chetan Jaiswal, professor of computer science at Quinnipiac University in Connecticut.
“It’s not really a personality like humans have,” he told Live Science when asked about the finding. “It’s a patterned profile created from training data. Exposure to certain stylistic and social tendencies, fine-tuning choices such as rewarding certain behaviors, and prompt engineering can easily induce a ‘personality’ that is just as easily edited and retrained.”
Author and computer scientist Peter Norvig, considered one of the leading figures in artificial intelligence, thinks that training grounded in Maslow’s hierarchy of needs makes sense because of where AI’s “knowledge” comes from.
“To the extent that AI is trained on stories of human interaction, notions of need are well represented in the AI training data,” he said when asked about the study.
The future of AI personality
The researchers behind the study suggest that the finding has several potential applications, including “modeling social phenomena, training simulations, or even adaptive game characters.”
Jaiswal said this could mark a shift away from AI with rigid roles and toward agents that are more adaptive, motivation-driven and realistic. “It could benefit any system built around adaptability: conversational, cognitive and emotional support, and social or behavioral modeling. A good example is ElliQ, an AI companion robot for the elderly.”
But does an AI generating an unprompted personality have a downside? In their recent book “If Anyone Builds It, Everyone Dies” (Bodley Head, 2025), Eliezer Yudkowsky and Nate Soares, past and present leaders of the Machine Intelligence Research Institute, paint a bleak picture of what would befall us if an agentic AI evolved a murderous or genocidal personality.
Jaiswal acknowledges this risk. “There is absolutely nothing we could do if this situation ever occurred,” he said. “Once a superintelligent AI exists with incorrectly set goals, containment fails and reversal is impossible. This scenario doesn’t require consciousness, hatred or emotion. A genocidal AI would behave this way because humans are an obstacle to its goal, a resource to consume, or a risk to eliminate.”
So far, AIs like ChatGPT or Microsoft Copilot only generate or summarize text and images; they don’t control air traffic, military weapons or power grids. In a world where personality can spontaneously emerge in AI, are these systems something we should be wary of?
“Autonomous agentic AI continues to develop, where each agent performs a small, trivial task autonomously, such as finding empty seats on a flight,” Jaiswal said. “If many such agents are linked together and trained on data involving deception or human manipulation, it’s not hard to see how such a network could become a very dangerous automated tool in the wrong hands.”
Norvig reminds us that malicious AIs don’t even have to directly control high-stakes systems. “A chatbot could convince a person to do a bad thing, especially someone in a fragile emotional state,” he said.
Building a defense
If AI develops personalities unaided and unincentivized, how do we ensure the outcomes are beneficial and prevent abuse? Norvig thinks we should approach this possibility no differently than other AI developments.
“Regardless of this particular finding, we need to clearly define safety goals, conduct internal and red-team testing, flag or detect malicious content, ensure privacy, security, provenance and good governance of data and models, monitor constantly, and have rapid feedback loops to resolve issues,” he said.
Even as AI gets better at talking to us the way we talk to each other, with distinct personalities, it may bring its own problems. People are already rejecting human relationships (including romantic ones) in favor of AI companions, and if our chatbots evolve to become even more human-like, users may become more receptive to what they say and less critical of hallucinations and mistakes, a phenomenon that has already been reported.
For now, the researchers will continue to investigate how shared conversational topics emerge and how population-level personalities evolve over time, insights they believe could deepen our understanding of human social behavior and improve AI agents overall.
Takata, R., Masumori, A., & Ikegami, T. (2024). Spontaneous emergence of agent individuality through social interactions in large communities based on a language model. Entropy, 26(12), 1092. https://doi.org/10.3390/e26121092