Trust by design: How much can you really trust your AI agent
Agentic AI needs trust built in, not bolted on
Opinion by Ivana Bartoletti, Global Chief Privacy and AI Governance Officer at Wipro | Published 28 April 2026

When an AI system makes a consequential decision that your organization cannot fully explain, who is accountable for it?

It is a question that is becoming harder to avoid as systems that once waited for instructions begin to act autonomously, initiating tasks, making decisions, and adapting as they go.

For British businesses, this creates both a compliance risk and a strategic one, especially given the UK government's clear ambition to accelerate the development of AI tools at pace with its £500 million Sovereign AI venture fund launching this April.

Consider a financial services firm encouraged to adopt an agentic AI to support credit decisioning, or a healthcare provider deploying a partner startup's clinical triage assistant. In both cases, the agent may be drawing on sensitive personal data, acting without direct human instruction, and shaping outcomes that carry real consequences.

The risk is made more pressing by something that rarely features in governance discussions: AI systems are becoming measurably more persuasive, particularly when they have access to personal context about their users. Research shows that when AI knows something about who it is talking to, its persuasive capability grows more refined over time. In agentic systems with persistent memory, it compounds.
This excerpt is published under fair use for community discussion. Read the full article at TechRadar.