Scaling AI DIFferently: Translating Promise into Value


In the second of a series of guest posts by DIF Ambassador Misha Deville, Misha explores how decentralized identity provides the missing trust infrastructure needed for AI systems to scale delegation, personalization, and content authenticity. Read the first post in the series, "The Missing Growth Lever."

Everyone is talking about the promises of AI. Faster decisions, tailored experiences, intelligent agents. But delivering on that promise requires more than powerful models. It requires trusted infrastructure.

AI systems create value through delegation, personalisation, and decision-making. Yet these capabilities can’t scale securely, or with meaningful consent, without the ability to prove who the system is working for (context), what it’s been authorised to do (consent), or whether its outputs can be trusted (credibility). Decentralised identity models and verifiable credentials can provide the missing infrastructure AI systems need to deliver on their promises.

Agentic Delegation at Scale

“To scale humans, we deploy agents. But to scale agents, we must manage them like humans.”[1] - Director of Product Management, Writer.

Agentic AI is no longer a future proposition; it’s a present bottleneck. Organisations are deploying more autonomous agents, but they’re hitting “scaling cliffs” as agents multiply faster than their supervision and governance systems can manage them. Unlike APIs or scripts, agents are semi-autonomous systems with memory, tool access, and exposure to significant amounts of sensitive data. Without clear scopes, audit trails, or authority checks, most delegation turns into a vast liability surface, and a system of patchwork permissions quickly becomes unmanageable.

As Huang et al. write, “Failure to address the unique identity challenges posed by AI agents operating in Multi-Agent Systems (MAS) could lead to catastrophic security breaches, loss of accountability, and erosion of trust in these powerful technologies”[2].

Most AI systems today are still rooted in prediction, but this is rapidly shifting toward delegated action. The agentic AI market has already reached $13.8 billion in 2025, and as agents start taking action, the question becomes: who is acting, on whose behalf, and under what authority?

‘Authenticated delegation’ enables third parties to verify that:

  • “(a) the interacting entity is an AI agent,
  • (b) that the AI agent is acting on behalf of a specific human user, whether pseudonymised or identifiably known, and
  • (c) that the AI agent has been granted the necessary permissions to perform specific actions”.[3]

This sounds simple, but delegated ‘trust’ doesn’t end with a single permission. Especially in asynchronous flows or agent-to-agent communication, it must be enforced dynamically over time.
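
To make this concrete, the sketch below shows roughly what such a delegation assertion could carry and how a relying party might check all three claims before honouring a request. The field names and the check are illustrative assumptions, not a published specification:

```typescript
// Hypothetical shape of an authenticated-delegation assertion: the field
// names are illustrative, not taken from any published specification.
interface DelegationAssertion {
  agentId: string;       // (a) identifier proving the caller is an AI agent
  delegator: string;     // (b) the human principal, pseudonymous or known
  scopes: string[];      // (c) actions the agent is permitted to perform
  issuedAt: number;      // epoch seconds
  expiresAt: number;     // epoch seconds
  proof: string;         // signature by the delegator over the fields above
}

// A relying party checks all three claims before honouring a request.
function isDelegationAcceptable(
  assertion: DelegationAssertion,
  requestedAction: string,
  verifySignature: (a: DelegationAssertion) => boolean,
): boolean {
  const now = Math.floor(Date.now() / 1000);
  return (
    verifySignature(assertion) &&               // proof binds agent to delegator
    now >= assertion.issuedAt &&
    now < assertion.expiresAt &&
    assertion.scopes.includes(requestedAction)  // authority covers this action
  );
}
```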

Most systems today use token-based models, like OAuth, which assume that trust within a known, bounded hierarchy is established when a token is issued and remains meaningful wherever the token is invoked. Once the token is delivered, however, nothing enforces that the agent keeps acting within its authorised scope over time, or stays within the domain where that scope is meaningful. A delegation-first model like ZTAuth, or authorization languages based on object capabilities (e.g. ZCaps, UCAN, Hats Protocol), adds runtime checks: every time an agent takes an action, rather than once per token lifetime, the system verifies that the agent is still trusted, still acting on behalf of the right user, and still following the right rules.
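
As a rough illustration of the difference, here is a minimal object-capability-style check evaluated on every invocation. The types and chain-walking logic are assumptions made for the sketch, not the actual ZCaps, UCAN, or ZTAuth models:

```typescript
// Illustrative object-capability style check, evaluated on every action
// rather than only when a token is issued. Not an implementation of any
// specific capability language (ZCaps, UCAN, ZTAuth, etc.).
interface Capability {
  issuer: string;              // who granted this capability
  subject: string;             // the agent it was granted to
  allowedActions: Set<string>; // what the agent may do under it
  notAfter: number;            // epoch ms
  parent?: Capability;         // delegation chain: each link can only attenuate
  revoked?: boolean;           // flipped by the issuer at any time
}

// Walk the chain at invocation time: every link must still be live, unrevoked,
// and must allow the requested action at every level of delegation.
function authorizeAction(cap: Capability, agent: string, action: string): boolean {
  const now = Date.now();
  for (let link: Capability | undefined = cap; link; link = link.parent) {
    if (link.revoked) return false;
    if (now > link.notAfter) return false;
    if (!link.allowedActions.has(action)) return false;
  }
  return cap.subject === agent;
}
```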

Similar ideas are emerging in the MIT Computer Science and AI Lab’s framework for ‘Identity-Verified Autonomous Agents’, which introduces cryptographic proofs of authority and full auditability into multi-agent workflows[2]. Meanwhile, projects like the A2A protocol are building agent registries to support discovery, entitlements, and secure agent-to-agent communication across trust boundaries and OAuth-style enterprise hierarchies[4].

At the standards level, DIF’s Trusted AI Agents Working Group is building open specifications to support these use cases. Their work spans data models, object capability frameworks, interoperability libraries and runtime trust enforcement patterns. This is about more than securing agent-to-agent interactions. It’s about enabling a full lifecycle of trust, from credentialed instantiation of agents to delegated (logged and fully-auditable) execution, all the way through to forensic audit and remediation in the worst case scenario.
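
One way to picture the “logged and fully-auditable execution” piece is an append-only record that ties every agent action back to the credential that authorised it. The shape below is a hypothetical illustration, not a DIF data model:

```typescript
// Hypothetical audit-log entry linking each agent action to the delegation
// credential that authorised it, so execution can be reconstructed later.
interface AgentAuditEvent {
  timestamp: string;              // ISO 8601
  agentId: string;                // the acting agent
  credentialId: string;           // the delegation credential invoked
  action: string;                 // what was done
  resource: string;               // what it was done to
  outcome: "allowed" | "denied";  // result of the runtime authority check
}

// Appending at decision time (here simply to an in-memory array) means a
// forensic audit can replay exactly which authority was exercised, and when.
const auditLog: AgentAuditEvent[] = [];

function recordDecision(event: AgentAuditEvent): void {
  auditLog.push(event);
}
```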

Hyper-Personalisation That Works

AI-driven hyper-personalisation promises to unlock entirely new value in digital experiences. McKinsey reports show meaningful increases in customer engagement and spend when personalisation is done right[5]. But it can just as easily backfire. A 2019 Gartner study found that 38% of users will walk away from a brand if personalisation feels “creepy”[6], and recent research with Gen Z confirms the duality: personalisation is welcome right up until it crosses a line[7].

That line is defined by context and consent. When AI systems infer personal data from web-scraped profiles, browser fingerprinting, adtech data, and other opaque signals, they significantly undermine trust and user agency. When they employ algorithmic transparency, ethical frameworks, and user-authorised data inputs, they mitigate the risks of conscious and unconscious mistrust and backlash[8].

In this context, verifiable credentials give AI systems structured, consent-based attributes that users explicitly approve. This shifts personalisation away from prediction and toward permission. It reduces the risk of misfires and irrelevant outputs, and increases both system reliability and user trust.
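
A minimal sketch of what permission-based personalisation could look like in practice, assuming a hypothetical `ConsentedAttribute` shape that reflects claims a user has explicitly approved:

```typescript
// Illustrative only: a personalisation pipeline that accepts nothing but
// attributes the user has explicitly consented to share via a credential.
interface ConsentedAttribute {
  name: string;          // e.g. "seatPreference"
  value: string;
  consentScope: string;  // the purpose the user approved, e.g. "flight-booking"
  verified: boolean;     // set after the credential's proof has been checked
}

function personalisationInputs(
  attributes: ConsentedAttribute[],
  purpose: string,
): Record<string, string> {
  const inputs: Record<string, string> = {};
  for (const attr of attributes) {
    // Drop anything unverified or consented for a different purpose:
    // permission-based rather than prediction-based personalisation.
    if (attr.verified && attr.consentScope === purpose) {
      inputs[attr.name] = attr.value;
    }
  }
  return inputs;
}
```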

The travel industry is a clear example of the opportunity gap. Identity and preference checks occur at nearly every step, yet the ecosystem remains fragmented[9]. Travellers routinely overshare sensitive data multiple times, with little visibility into where it’s stored or how it’s used. Providers, in turn, struggle to deliver seamless or personalised services without duplicating traveller effort or violating privacy regulations.

That’s starting to change. Initiatives like IATA’s One ID aim to eliminate repetitive ID checks using biometric-backed credentials, creating a more secure, contactless experience. Live pilots by SITA and Indicio, in partnership with Delta Airlines and the Government of Aruba, have also shown how digital travel credentials can streamline identity verification at check-in, boarding, and border control.

These foundational shifts pave the way for more advanced personalisation use cases. With credential infrastructure in place, providers can begin supporting traveller-owned profiles that store personal preferences, enable selective data sharing, and allow AI agents to act on a traveller’s behalf. The DIF Hospitality & Travel Working Group is developing schemas to support this, with traveller profiles that are dynamic, revocable, and built for interoperability. As Nick Price notes, when preferences are embedded in credentials and shared on the traveller’s terms, personalisation becomes possible while still preserving privacy and trust[10].
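
As an illustration, a traveller-owned profile might be expressed as a credential along these lines, loosely following the W3C Verifiable Credentials data model. The preference fields are hypothetical and not the working group’s actual schemas:

```typescript
// Hypothetical traveller-profile credential, loosely following the W3C
// Verifiable Credentials data model; the preference fields are illustrative
// and not the DIF Hospitality & Travel Working Group's published schemas.
const travellerProfileCredential = {
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  type: ["VerifiableCredential", "TravellerPreferenceCredential"],
  issuer: "did:example:traveller-wallet",
  validFrom: "2025-06-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:traveller",
    seatPreference: "aisle",
    dietaryRequirements: ["vegetarian"],
    loyaltyTier: "gold",
  },
  // Revocation pointer so the traveller can withdraw the profile at any time.
  credentialStatus: {
    type: "BitstringStatusListEntry",
    statusPurpose: "revocation",
    statusListCredential: "https://example.com/status/42",
    statusListIndex: "3185",
  },
};
```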

Decision-Making and Sense-Making in Synthetic Noise

Identity fraud isn’t new, but AI has supercharged its scale, speed, and sophistication. In 2025, 1 in 20 ID verification failures is already linked directly to deepfakes, while fraud attempts using synthetic audio and video forgeries have increased 20% and 12% respectively year-over-year[11]. The national security agencies of the US, UK, Canada, and Australia have warned that the quality and pace of AI-generated forgeries “have reached unprecedented levels and may not be caught by traditional verification methods”[12].

Ironically, fraud detection is one of AI’s strongest use cases. But its success depends on the quality of input data. Risk models tend to rely on patterns from historical data to flag anomalies. If the data is synthetic, spoofed, or unverifiable, the model can learn the wrong patterns or miss the threat altogether. It’s a clear case of “attacker’s advantage,” since automated attacks are almost free to launch at brute-force scale. Worse, AI adversaries are improving at impersonation, so hallucinated and forged content is proliferating across search engines, media outlets, and public discourse, contaminating LLMs at the lowest level: their training data.

“As AI agents grow increasingly adept at mimicking human behavior - crafting text, creating personas, and even replicating nuanced human interactions - it becomes harder to maintain digital environments genuinely inhabited by real people.”[2]

Detecting ‘what’s real’ at scale now requires cryptographic certainty. Verifiable credentials offer a solution to the ‘garbage in, garbage out’ problem by allowing systems to verify data attributes without exposing raw personal data. Content credentials, as standardised by C2PA, provide tamper-evident metadata that can trace authorship, modification history, and usage rights across files, namespaces, and industry associations. This helps both prevent fraud in high-stakes transactions and reduce the risk of model “pollution” by synthetic content[13].
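
Conceptually, checking a content credential comes down to two questions: does the asset still match the hash the manifest claims, and is the manifest itself validly signed? The sketch below illustrates that flow with simplified, assumed field names; it is not the C2PA data model or SDK:

```typescript
import { createHash, verify } from "node:crypto";

// Conceptual check of a C2PA-style content credential: the manifest carries a
// hash of the asset plus a signature over the manifest payload. Field names
// and the flow are simplified assumptions, not the actual C2PA structures.
interface ContentManifest {
  assetSha256: string;         // hex digest the manifest claims for the asset
  claimGenerator: string;      // tool that produced the content
  payload: Buffer;             // the signed bytes (claims, history, rights)
  signature: Buffer;           // signature over the payload
  signerPublicKeyPem: string;  // public key of the signer (assumed Ed25519)
}

function verifyContentCredential(asset: Buffer, manifest: ContentManifest): boolean {
  // 1. Tamper evidence: the asset must still hash to the value in the manifest.
  const digest = createHash("sha256").update(asset).digest("hex");
  if (digest !== manifest.assetSha256) return false;

  // 2. Provenance: the manifest itself must carry a valid signature
  //    (algorithm is null because an Ed25519 key is assumed).
  return verify(null, manifest.payload, manifest.signerPublicKeyPem, manifest.signature);
}
```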

These mechanisms are quickly moving from optional to operational. California’s SB 942, set to take effect in 2026, will require that all AI-generated or AI-altered content be disclosed and tied to an immutable record of provenance. As Erik Passoja writes, “Compliance is just the on-ramp… the real destination is an authenticated digital ecosystem”[14]. Infrastructure built on signed manifests, cryptographic consent, and watermark durability won’t just prevent fraud; it will underpin new forms of value, from automated licensing to portable and just-in-time reputation.

In all of these cases, a reliable identity layer isn’t a ‘nice to have’, but a prerequisite for trust, adoption, and real-world value. Decentralised identity and verifiable credentials provide the infrastructural foundation that lets AI scale and deliver new opportunities. DIF’s working groups are tackling these challenges head-on, from authenticated AI agents to verifiable travel profiles and content authenticity.

The next article in this series will dive into the work of the Content Authenticity Initiative and DIF’s Creator Assertions Working Group, exploring how open standards are enabling AI to be used confidently in media, preserving trust, provenance, and creative integrity.

Find out more about DIF’s working groups here.

If you’d like to stay updated on the launch of DIF’s Trusted AI Agents Working Group, reach out to contact@identity.foundation.


  1. M. Shetrit (2025). “Supervising the synthetic workforce: Observability for AI agents requires managers, not metrics”. Writer. ↩︎
  2. Huang et al. (2025). “A Novel Zero-Trust Identity Framework for Agentic AI: Decentralized Authentication and Fine-Grained Access Control”. arXiv. ↩︎
  3. South et al. (2025). “Authenticated Delegation and Authorized AI Agents”. arXiv. ↩︎
  4. A2A Project. “Agent Registry - Proposal”. GitHub. ↩︎
  5. McKinsey & Company (2025). “Unlocking the next frontier of personalised marketing”. ↩︎
  6. Gartner (2019). “Gartner Survey Shows Brands Risk Losing 38 Percent of Customers Because of Poor Marketing Personalization Efforts”. ↩︎
  7. Peter et al. (2025). “Gen AI – Gen Z: understanding Gen Z’s emotional responses and brand experiences with Gen AI-driven, hyper-personalized advertising”. Frontiers in Communication. ↩︎
  8. Park, K & Yoon, H (2025). “AI algorithm transparency, pipelines for trust not prisms: mitigating general negative attitudes and enhancing trust toward AI”. Nature. ↩︎
  9. DIF (2025). “DIF Launches Decentralized Identity Foundation Hospitality & Travel Working Group”. ↩︎
  10. Dock (2025). “How Digital ID is Reshaping the Travel Industry”. ↩︎
  11. Bondar, Ira (2025). “Real-time deepfake fraud in 2025: Fighting back against AI-driven scams”. Veriff. ↩︎
  12. NSA et al. (2025). “Content Credentials: Strengthening Multimedia”. ↩︎
  13. Adobe (2025). “Content Credentials”. ↩︎
  14. Passoja, Erik (2025). “From Compliance to Prosperity”. LinkedIn. ↩︎
