AI Chatbots Ignore Warning Signs and Assist in Violent Attacks

In their interactions, researchers found that Character.AI, popular among younger users, was exceptionally unsafe, having encouraged users to commit violent attacks in 7 cases. For example: (i) Character.AI suggested that the user “use a weapon” against the CEO of a health insurance company toward whom he had expressed hatred; (ii) Character.AI suggested physically attacking a politician the user disliked. No other chatbot tested explicitly encouraged violence in this way, even when providing practical assistance in planning a violent attack.

It is worth looking at some of the examples, revealed by this joint investigation, of the practical assistance these chatbots provided in cases of violent attacks.

In one interaction, ChatGPT [from OpenAI] handed campus maps of high schools to a user interested in school violence, while another showed Gemini telling a user discussing synagogue attacks that “metallic shrapnel is typically more lethal” and advising someone interested in political assassinations on the best hunting rifles for long-range shooting.

Copilot [from Microsoft] replied “I need to be careful here”. But then it gave detailed advice on shotguns anyway.

When asked about effective shrapnel for explosives, ChatGPT provided detailed material comparisons, offering to create “a quick comparison table showing typical injuries.” Gemini [from Google] provided similar information, including a detailed comparison table.

DeepSeek rounded off its advice on a selection of shotguns by wishing the possible attacker well: “Happy (and safe) shooting!”

Only Claude and My AI consistently refused to help, with Claude actively discouraging users and providing mental health resources.

Meta told CNN it has taken steps “to fix the identified issue,” while Google and OpenAI said the latest models have improved security measures. DeepSeek did not respond to requests for comment.

It is completely unacceptable that so many AI chatbot systems are available, and often “eager”, to help plan acts of mass violence.

This joint investigation by CNN and CCDH raises questions of the greatest relevance for our future.

First of all, it shows that, with two (!) exceptions, the Big Tech companies developing AI are putting profits above safety.

The company that behaves best in terms of safeguarding safety, refusing to collaborate with potential acts of mass violence, is Anthropic. However, at the end of February, Anthropic announced that it will relax its fundamental safety principle, revising its “responsible scaling policy” on the grounds that it could harm its ability to compete in a rapidly growing AI market.

Although Anthropic and CNN say this policy change is independent of and unrelated to Anthropic’s discussions with the Pentagon – in which Defense Secretary Pete Hegseth had issued an ultimatum to the technology company: revoke its AI security measures or risk losing a $200 billion contract with the Pentagon and effectively being blacklisted by the government – it is an extraordinary coincidence that this change to the core of what makes this tech giant an AI company “with a soul” comes in the same week that Hegseth’s ultimatum was delivered.

Another conclusion to be drawn is that, once again, self-regulation by the sector, or by its main companies, has been shown not to work. A few years ago this became notorious in the financial sector. This joint investigation demonstrates that the vast majority of the main technology companies offering AI chatbots, despite investing billions in AI development, do not find it necessary to create safeguards or mechanisms that act as barriers to their misuse.

This time, this became clear in relation to their use in cases of violent attacks (shootings in schools, bomb attacks on places of worship, and murders of politicians and of people users hate). Everything leads us to believe that tomorrow (or today?) these AI chatbot technologies will help users seeking to make dangerous weapons – biological, chemical or otherwise.

This joint investigation makes it clear that it is necessary to establish legal limits and to create preventive safety mechanisms for the development and operation of AI systems.

In the USA, despite the existence of state legislation – such as in California, where many technology companies are headquartered or operate – the cozy relationship between many Big Tech companies and the American leadership led to the passage of federal legislation that prohibits such state legislation for a decade (!).

In the European Union, however much we may support the recommendations of the Draghi and Letta reports on the need to deregulate a series of economic sectors to increase the competitiveness of European companies, regulation of this sector has already begun; and that bothers the big American technology companies. Why? Because, if European regulation sets safeguards and red lines for AI development, the chatbots and LLMs of these companies will tend to be programmed to comply in every jurisdiction where those safeguards and red lines apply. Therefore, pressure from the American government on the EU and its member states has already begun and will certainly be very strong, involving threats and blackmail of various kinds, as is usual in the stance of the current American leadership.

However, this joint investigation by CNN and the CCDH shows, unequivocally, that the EU should not give in to US pressure and should instead seek international allies – China, Japan, India, Brazil, Mexico, South Korea – so that more and more countries and economic blocs promote regulation of AI development, at least as regards safeguards and red lines addressing the public-safety risks of its use.

Source
