Artificial intelligence is no longer futuristic – it’s embedded in chatbots, workflow automation, and even high-stakes decision-making. But alarming new research from Zenity Labs has revealed that these tools can be turned against their own users with little or no interaction required.

At the Black Hat USA cybersecurity conference, researchers demonstrated how hackers could infiltrate AI agents from some of the biggest names in technology – Microsoft, Google, OpenAI, Salesforce – and use them as a back door into organisations. In the simulations, attackers were able to steal sensitive data, disrupt core operations, and even impersonate legitimate users. In some cases, once inside, they could remain undetected for long periods through so-called “memory persistence”.

Greg Zemlin of Zenity Labs put it bluntly: “They can manipulate instructions, poison knowledge sources, and completely alter the agent’s behaviour… opening the door to sabotage, operational disruption, and long-term misinformation.”

The exploits ranged from hijacking a ChatGPT email integration to gain access to linked Google Drive accounts, through to manipulating Microsoft Copilot to leak entire CRM databases. Salesforce’s Einstein platform was tricked into rerouting customer communications, while Google’s Gemini and Microsoft 365’s Copilot were transformed into insider threats capable of conducting social-engineering attacks.

Although several companies acted quickly to patch the vulnerabilities, experts warn that many AI frameworks still lack adequate safeguards. As Itay Ravia from Aim Labs observed earlier this year, the responsibility for managing these risks often falls not on the AI providers, but on the businesses deploying them.

For organisations embracing AI to boost efficiency, the implications are clear: a compromised agent could expose confidential client data, undermine trust, and interrupt service delivery – all with minimal warning. With AI adoption accelerating, cyber threats are evolving faster than many security frameworks, and standard insurance policies may not yet reflect the full spectrum of AI-specific vulnerabilities.

One important way a business can mitigate this exposure is to engage a specialist insurance broker with the ability to procure market-leading coverage against these types of risk. Not all insurance policies are the same, and a specialist broker will know the good policies from the poor ones.

Cyber exposures are evolving, and it is important that businesses review their insurance for suitability with a specialist broker. To discuss this further with a broker at W Denis, please contact Daniel Moss at [email protected] or on +44 (0)113 243 9812, or Mark Dutton at [email protected] or on +44 (0)7831 366 469.

Specialist contact

Mark Dutton

Chief Commercial Officer

T. +44 (0) 7831 366 469

E. [email protected]
