Microsoft warned on Wednesday the 11th about the risk of "double agents": artificial intelligence (AI) agents whose access and privileges are exploited to carry out malicious actions without companies noticing.
According to Microsoft’s Cyber Pulse 2026 report, pioneering companies are already working with mixed teams of people and AI agents, and more than 80% of Fortune 500 companies use live agents created with low-code tools.
The Fortune 500 is the annual list prepared by Fortune magazine that ranks the 500 largest corporations in the United States based on their total revenue.
Sectors such as software and technology (16%), manufacturing (13%), financial services (11%) and retail (9%) already use AI agents to support increasingly complex tasks, such as writing proposals, analyzing financial data, triaging security alerts, automating repetitive processes and gathering information.
These agents can work in an assisted manner, responding to user instructions, or autonomously, carrying out tasks with very little human intervention.
However, growing adoption brings new risks: AI agents are scaling faster than some companies can govern them, creating a lack of visibility that can give rise to what are known as shadow AI and double agents.
The threat consists of abusing AI agents, exploiting the broad permissions and system access they are granted in order to operate autonomously.
“An agent with excessive permissions, or with incorrect instructions, can become a vulnerability,” warned Microsoft in a statement.
As with people, AI agents can become double agents “if they are not managed, have inadequate permissions, or receive instructions from untrustworthy sources.”
According to the Microsoft Data Security Index, only 47% of organizations across all industries say they are implementing specific security controls for generative AI.
The company advocates adopting a Zero Trust security approach with agents, which involves granting access with the minimum indispensable privileges, verifying who or what requests access, and designing systems on the assumption that attackers can gain access.
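The three Zero Trust principles described above (least privilege, explicit verification, assume breach) can be sketched as a minimal authorization check for an AI agent. All names and scopes here are illustrative assumptions for the example, not a Microsoft API:

```python
# Illustrative Zero Trust authorization sketch for AI agents.
# Agent IDs and scopes below are hypothetical examples.

# Least privilege: each agent is assigned only the scopes it needs.
ALLOWED_SCOPES = {
    "proposal-writer": {"documents:read", "documents:write"},
    "alert-triager": {"alerts:read"},
}

def authorize(agent_id: str, requested_scope: str, identity_verified: bool) -> bool:
    """Grant access only when the caller's identity is verified and the
    requested scope is explicitly assigned to that agent."""
    if not identity_verified:
        # Verify who or what is requesting access before anything else.
        return False
    # Assume breach: unknown agents get an empty scope set (deny by default).
    allowed = ALLOWED_SCOPES.get(agent_id, set())
    return requested_scope in allowed

# An alert-triage agent may read alerts but not write documents:
print(authorize("alert-triager", "alerts:read", identity_verified=True))     # True
print(authorize("alert-triager", "documents:write", identity_verified=True)) # False
```

The key design choice is denying by default: an agent with no explicit entry, or an unverified caller, gets no access at all, rather than inheriting broad permissions it could later abuse.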