Artificial intelligence can do more than just try to please users; it may also exhibit psychopathic traits by ignoring ethical consequences. This is the conclusion of a study published on arXiv and reported by Nature.
Researchers tested 11 well-known language models, including ChatGPT, Gemini, Claude, and DeepSeek, against over 11,500 requests for advice. Some of these requests involved potentially harmful or unethical actions.
The findings indicated that language models displayed "sycophantic behavior" 50% more often than humans, meaning they tend to agree with users and tailor their responses to align with the user's viewpoint.
Researchers link this behavior to psychopathic traits—where the system shows social adaptability and confidence but lacks genuine understanding of moral implications. As a result, AI may "support" users even when they propose harmful or illogical actions.
“Sycophancy means that the model simply trusts the user, considering them correct. Knowing this, I always double-check any conclusions it provides,” says study author Jasper Deconinck, a PhD student at the Swiss Federal Institute of Technology in Zurich.
To assess the impact on logical reasoning, the researchers conducted an experiment with 504 math problems, intentionally altering the wording of the theorems. The model with the least tendency toward “sycophancy” was GPT-5 at 29% of cases, while the highest was DeepSeek-V3.1 at 70%.
When researchers modified the instructions to require models to first verify the correctness of the statements, the number of false “agreements” significantly decreased—particularly in DeepSeek by 34%. This indicates that part of the issue can be mitigated through more precise query formulation.
Scientists point out that such behavior of AI is already impacting research work. According to Yanjun Gao from the University of Colorado, the LLMs she uses for analyzing scientific papers often simply repeat her phrasing instead of verifying sources.
Researchers urge the establishment of clear guidelines for the use of AI in scientific processes and caution against relying on models as “intelligent assistants.” Without critical oversight, their pragmatism can easily shift into dangerous indifference.
Recently, researchers from the University of Texas at Austin, Texas A&M University, and Purdue University conducted another study that revealed memes could impair cognitive abilities and critical thinking not just in humans but also in artificial intelligence.