In the first of a series of guest posts, DIF Ambassador Misha Deville sets the stage for an underappreciated business problem that sits one layer deeper than traditional user-experience analyses.
The Trust Ceiling
The paradox is that we want AI systems that understand us, but only on our own terms. We’re uneasy when AI appears to use personal data without permission, yet we’re frustrated when it doesn’t ‘get us’ on the first try, even though getting it right would require exactly that personal data for context. This might seem like a philosophical problem, but I am pointing to an underappreciated technical one. Unless trust negotiation is embedded into the system design itself, through features such as transparent data flows and consent mechanisms, these systems cannot negotiate subtext and context; and without that capability, we will fail to see AI adoption (and utility) scale as promised.
Friction in trust, and therefore in adoption, stems from a lack of transparency and control, not from ‘poor model intelligence’. 66% of U.S. shoppers, for example, say they would not allow AI to make purchases for them, even if it meant securing better deals, because “consumers suspect AI is working for the retailer, not them. Until that trust gap closes, AI will remain a product discovery tool”[1].
Public patience with AI product rollouts is already wearing thin. Global brands like Duolingo are facing significant backlash after announcing ‘AI-first’ strategies[2], and the perception gap between those building AI systems and the addressable markets expected to adopt them keeps widening. 51% of adults in a recent Pew Research Center study said they were more concerned than excited about AI, which contrasts sharply with the mere 15% of ‘AI experts’ who held this view[3].
The generative-AI race to market made AI more powerful and more personal, but it also made systems more opaque, leaving users in the dark about how and why decisions are made. In the absence of transparency, even well-intentioned systems lose public trust. The WEF frames this as a missed opportunity to build new markets on a healthy footing: “Without transparency, AI systems might be used that are not value-aligned at a level that is acceptable to users, or users might distrust AI systems that actually are sufficiently value-aligned because they have no way of knowing that.”[4]
To embed trust into the system itself, people need to be able to verify:
- who the AI is working for (context),
- what it’s allowed to do (consent), and
- whether the output can be trusted (credibility).
The solution therefore isn’t more intelligent AI models; it’s a complementary, verifiable identity layer. An identity layer doesn’t just enable trust at the level of individual users trusting individual interfaces; it also supports a healthier marketplace overall by making AI systems traceable, comparable to one another, and accountable to the users and services they interact with. In aggregate, it helps users trust AI more.
Verifiable credentials, built on a backbone of decentralised identifiers (DIDs), enable cryptographic proofs of user and object attributes. Context, consent, and credibility become programmable, and the user experience transforms from coercive to empowering.
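As a concrete sketch, assuming a TypeScript setting, the snippet below shows what a delegation credential of this kind might look like. The top-level field names follow the W3C Verifiable Credentials Data Model (v1.1), but the `AgentDelegation` type, the example DIDs, and the permission fields are hypothetical illustrations, not a published schema:

```typescript
// A minimal, illustrative delegation credential. Top-level field names
// follow the W3C Verifiable Credentials Data Model (v1.1); the
// "AgentDelegation" type, DIDs, and permission fields are hypothetical.

interface Proof {
  type: string;                // signature suite, e.g. Ed25519Signature2020
  created: string;
  verificationMethod: string;  // key identifier under the issuer's DID
  proofValue: string;          // the signature itself
}

interface AgentDelegationCredential {
  "@context": string[];
  type: string[];
  issuer: string;              // context: who the agent is working for
  issuanceDate: string;
  expirationDate: string;
  credentialSubject: {
    id: string;                  // the AI agent's own DID
    permittedActions: string[];  // consent: what it is allowed to do
    spendingLimitUSD: number;
  };
  proof: Proof;                // credibility: verifiable against issuer keys
}

const delegation: AgentDelegationCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "AgentDelegation"],
  issuer: "did:example:alice",
  issuanceDate: "2025-06-01T09:00:00Z",
  expirationDate: "2025-06-08T09:00:00Z",
  credentialSubject: {
    id: "did:example:shopping-agent",
    permittedActions: ["compare-prices", "purchase"],
    spendingLimitUSD: 100,
  },
  proof: {
    type: "Ed25519Signature2020",
    created: "2025-06-01T09:00:00Z",
    verificationMethod: "did:example:alice#key-1",
    proofValue: "zExampleSignatureValue", // placeholder for a real signature
  },
};
```

Because the credential is signed, a relying party can check the issuer (context), the delegated scope (consent), and the proof (credibility) independently, which is what makes these properties programmable rather than a matter of policy documents.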
The Market Opportunity
Digital identity and AI are fundamentally interdependent, but the current investment landscape and dominant business strategies do not reflect this. Today, AI is seen as a growth engine, while identity infrastructure is seen as compliance overhead. This mental model is not just outdated; it’s economically limiting.
In 2024, global VC investment into AI-related companies exceeded $100 billion, marking an 80% increase from 2023[5]. Meanwhile, investment in digital identity declined. In the UK, one of the world’s leading digital identity markets, the sector saw only $58 million in VC funding in 2024, a 69% decline from the year before[6]. This stark investment gap reveals a misunderstanding of the technology stack required for trustworthy, scalable AI.
The convergence of these technologies is both ethically necessary and commercially advantageous. An identity layer that’s fit for this new era will enable AI breakthroughs to scale with direction, grounding, and accountability. If AI is the rocket, then digital identity is the navigation system. It doesn’t slow the rocket down; it ensures it lands where we need it to.
The companies that align AI with verifiable digital identity will capture disproportionate market share where others hit trust ceilings. Strategies that capitalise on both technologies will unlock the promised value in use cases such as:
- In fintech, verified delegation allows AI agents to execute trades securely, with cryptographic proof of authority and clear audit trails (sketched in the example after this list).
- In healthcare, patient-controlled access to verified medical records enables truly personalised care without compromising consent or privacy.
- In global supply chains, AI systems can confirm the authenticity of every product and actor, preventing counterfeits, improving traceability, and automating trust at scale.
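To make the fintech case concrete, here is a minimal sketch of the authorisation check a trading venue might run before honouring an agent’s request. It reuses the `AgentDelegationCredential` shape from the earlier sketch; `verifyCredential` stands in for a real DID/VC verification library and, like the field names, is an assumption for illustration:

```typescript
// Hypothetical guard a trading venue might run before honouring a trade
// requested by an AI agent. Reuses AgentDelegationCredential from the
// previous sketch; verifyCredential stands in for a real VC library.

interface TradeRequest {
  agentDid: string;            // DID the agent authenticated with
  action: string;              // e.g. "purchase"
  amountUSD: number;
  credential: AgentDelegationCredential;
}

// Assumed helper: checks the proof against the issuer's DID document.
declare function verifyCredential(
  vc: AgentDelegationCredential
): Promise<boolean>;

async function authoriseTrade(req: TradeRequest): Promise<boolean> {
  const vc = req.credential;

  // Credibility: the signature must verify against the issuer's keys.
  if (!(await verifyCredential(vc))) return false;

  // Context: the credential must be about this specific agent.
  if (vc.credentialSubject.id !== req.agentDid) return false;

  // Consent: the action and amount must be within the delegated scope.
  if (!vc.credentialSubject.permittedActions.includes(req.action)) return false;
  if (req.amountUSD > vc.credentialSubject.spendingLimitUSD) return false;

  // Freshness: reject expired delegations.
  if (new Date(vc.expirationDate).getTime() < Date.now()) return false;

  // Each check above can be logged, giving the clear audit trail
  // the use case calls for.
  return true;
}
```

The point is not these specific checks, but that authority is proven cryptographically at the moment of action, rather than assumed from a session or an API key.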
Digital identity is not a constraint on AI. It’s the infrastructure that allows AI to scale responsibly and profitably. Standards like the W3C Verifiable Credentials Data Model provide a vital foundation for AI systems to verify context, consent, and credibility without compromising privacy. The companies that embrace this interdependence will define the next wave of digital infrastructure. Those that don’t will risk building impressive technology that nobody trusts enough to use.
In the next article, we’ll explore how decentralised identity unlocks the real-world value of AI, starting with three core functions behind its promises: personalisation, delegation, and decision-making.
If you’d like to stay updated on the launch of DIF’s AI-focused working group, reach out to contact@identity.foundation.
[1] Omnisend (2025). “Two-Thirds of Shoppers Say ‘No’ to AI Shopping Assistants – Trust Issues Could Slow Retail’s AI Revolution”.
[2] Braun, S. (2025). “Duolingo’s CEO outlined his plan to become an ‘AI-first’ company. He didn’t expect the human backlash that followed.” Fortune.
[3] McClain et al. (2025). “How the U.S. Public and AI Experts View Artificial Intelligence”. Pew Research Center.
[4] Dignum et al. (2024). “AI Value Alignment: Guiding Artificial Intelligence Towards Shared Human Goals”. World Economic Forum.
[5] Fairview Capital (2024). “Preparing for the Agentic Era in Venture Capital”.
[6] Oliver Wyman (2025). “Digital Identity Sectoral Analysis 2025”. GOV.UK.