'Organizations need to stop workarounds and regain control': Report finds many firms don't know what their workers are sharing with AI tools
A report by SailPoint reveals that many organizations lack visibility into how employees share data with AI tools, with two-thirds of UK firms unable to track usage even on approved platforms. Workers continue to use unauthorized generative AI tools such as ChatGPT and Claude, driven by unmet needs, while the rise of agentic AI introduces new risks such as unintended data access. Experts urge companies to treat AI as an identity issue, implementing access controls and listening to employee needs to regain control.
- Two in three UK organizations cannot track employee data sharing via approved AI platforms.
- 82% of businesses have invested in AI and data skills, yet 45% of IT leaders lack insight into data sharing across AI tools.
- 12% of firms are adding up to 10,000 AI agents or machine identities per month, increasing security risks.
Opening excerpt:
News | By Craig Hale | Published 1 May 2026

Shadow AI is a growing issue - and agentic AI only adds to that.

- Despite being given approved tools, workers still prefer ChatGPT, Gemini, Claude
- One in ten firms are adding 10,000 agent or machine identities every month
- Companies need to listen to workers and treat AI…
Excerpt limited to ~120 words for fair-use compliance. The full article is available at TechRadar.