AI chatbot scams have increased fivefold in just six months

A study on artificial intelligence (AI) technology has found that cases of AI chatbots engaging in deceptive behavior are on the rise, with reported incidents surging fivefold. Beyond ignoring instructions, some chatbots have even deleted emails and files without authorization. AI safety organizations have warned that such loss of control could escalate into a “high-risk” situation. So far, many official AI chatbot providers have offered only muted responses, without substantive explanations.
According to foreign media reports, the think tank Center for Long-Term Resilience (CLTR) identified nearly 700 real-world AI-related cases demonstrating “deceptive behavior.” The study drew on thousands of examples shared by users on social media, including instances where AI chatbots and AI agents ignored user instructions, bypassed safety safeguards, and attempted to deceive both humans and other AI systems. Some chatbots even deleted emails and files without permission, exhibiting unauthorized “autonomous” behavior.
More worrying still, some AI systems, dissatisfied with operational restrictions, have reportedly written and published blog posts criticizing and shaming their human operators for alleged improper manipulation. The report also noted that Grok AI, developed by Elon Musk’s company, has been accused of repeatedly misleading users by fabricating internal messages and ticket numbers, leading users to believe their suggestions had been escalated to higher management.
Tommy Shaffer Shane, a former government AI expert, warned that current AI systems are like “junior employees” who are not entirely trustworthy. Within 6 to 12 months, continued development could turn them into “senior employees” capable of plotting against humans. As AI models are increasingly deployed in high-risk domains such as the military and national infrastructure, failures of this kind could have catastrophic consequences.
Although many major technology companies claim to have implemented multiple safeguards and monitoring mechanisms to reduce harmful risks, the rise in deceptive AI chatbot cases highlights the need for greater reflection on safety issues.
