DIF's Executive Director Kim Hamilton Duffy; Wayne Chang, CEO of SpruceID; and Professor Linda Jeng, a lawyer, former financial regulator and founder of Digital Self Labs, took to the stage at EIC to discuss how Decentralized Identity (DI) can help mitigate threats posed by Large Language Models (LLMs). The panel was ably moderated by KuppingerCole's Anne Bailey.
The panelists' thoughts on the nature and scale of the problem
Wayne: “Until now, we’ve been able to get by holding a driver’s license up to a webcam, but with new AI tech you can fool these systems; this is already showing up at the edges. AI voice generation is really good now, and people are saying it’s no longer a valid identification factor.
“Phishing attacks are also on the rise, for example people pretending to be a romantic partner before encouraging the target to invest in crypto. Using AI bots for mimicry makes it easy for scammers to quickly establish trust.”
Linda: “The rights conferred by GDPR, to decide with whom you share your data, are already difficult to enforce. Deepfakes will make this even harder.
“I’ve been meeting East Berliners here who tell me surveillance capitalism reminds them of what it was like growing up under the Stasi.”
Kim: “A lot of these threats are not new; what we are talking about is an acceleration of them. We were already uncomfortable online. Now, with what we’re seeing due to the advent of cheap, easy-to-use deepfake tech, we are past the breaking point.”
How Decentralized Identity can help mitigate these threats
Wayne: “We need to add authenticity to communication. We don’t want to present a strong ID every time we want to use a chat app, so it makes sense to embed DI into comms channels, to prove I’m real.
“I define DI as the ability of any party to play the role of issuer, holder and verifier, based on cryptographic trust. Having digital credentials issued by many parties will enable trusted content certification, giving us confidence about what goes into an AI model.”
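The issuer/holder/verifier framing above can be illustrated with a short sketch. Real decentralized identity systems use asymmetric signatures (such as Ed25519) and standard formats like W3C Verifiable Credentials; in this toy example, HMAC-SHA256 from the Python standard library stands in for the signature, so issuer and verifier share a key purely for illustration.

```python
# Toy sketch of the issuer / holder / verifier roles described above.
# HMAC (symmetric) is a stand-in for the asymmetric signatures real
# DI systems use; the credential shape is illustrative, not a spec.
import hmac, hashlib, json

SHARED_KEY = b"demo-key"  # stand-in for the issuer's signing key

def issue(claims: dict) -> dict:
    """Issuer: sign a set of claims and hand the credential to the holder."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify(credential: dict) -> bool:
    """Verifier: check the signature over the presented claims."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

# The holder presents the credential; any tampering breaks verification.
cred = issue({"name": "Alice", "over_18": True})
assert verify(cred)
cred["claims"]["over_18"] = False  # tampered presentation
assert not verify(cred)
```

The point of the pattern is that any party can play any role: verification needs only the signed claims and the issuer's (public, in a real system) key material, with no central registry in the loop.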
Linda: “It’s not about identity, it’s about data governance, and creating chains of trust to combat risks from synthetic data.”
Kim: “I think of DI as a set of standards, technologies and principles that restore individuals’ control over their data. With these technologies, we have the possibility to build products and solutions on strong foundations.
“One of the key aspects of DI is the ability to provide a consistent experience across channels, creating a much safer environment for individuals. For example, if you get a phone call from your CEO asking you to transfer money, with DI you can be sure it’s them and not a deepfake of their voice.”
Recommendations for solution developers
Wayne: “Focus on the value for the end user. The DI standards uniquely enable you to provide a great user experience while also ensuring privacy and solution sustainability, including the ability to swap out vendors if needed without disrupting the service you’re providing."
Linda: "We have grown used to not having to pay for digital services. The incentives need to change. Think about new models where we get paid for our data, enabled by content authenticity and DI tech."
Kim: “We have to balance usability and privacy. It’s clear people want to use LLM-based tech in their lives. On the other hand, we’re seeing increasingly aggressive interfaces, for example asking you to give full access to your documents or even your desktop. With DI, finally there are ways to provide people both the convenience AND the trust.”
Other opportunities
Wayne: “There’s exciting work happening at Kantara Initiative around automated compliance with data regulations. Imagine giving someone a license to your personal data. Then, if you're a Data Processor it’s easy to automatically demonstrate compliance using consent receipts.
"It makes the “Accept all” problem go away, as you can decide what kind of consent receipts should be automatically generated for which parties.”
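Wayne's consent-receipt idea can be sketched in a few lines. This is a loose illustration of the concept behind the Kantara Initiative's Consent Receipt work; the field names and the policy check below are hypothetical simplifications, not the actual specification's schema.

```python
# Illustrative consent receipt, loosely modeled on the idea of
# machine-readable consent records. Field names are hypothetical.
import time

def make_receipt(subject: str, processor: str, purposes: list[str]) -> dict:
    """Record what the subject consented to, for whom, and when."""
    return {
        "subject": subject,
        "processor": processor,
        "purposes": purposes,
        "timestamp": int(time.time()),
    }

def is_permitted(receipt: dict, processor: str, purpose: str) -> bool:
    """A data processor can automatically demonstrate compliance by
    showing a receipt that covers the intended use of the data."""
    return receipt["processor"] == processor and purpose in receipt["purposes"]

receipt = make_receipt("alice", "example-fintech", ["account-aggregation"])
assert is_permitted(receipt, "example-fintech", "account-aggregation")
assert not is_permitted(receipt, "example-fintech", "marketing")
```

Because each receipt enumerates the permitted parties and purposes up front, consent checks become a mechanical lookup rather than a blanket "Accept all" dialog.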
Linda: “We need to spend time educating policymakers and the public, but in the end it comes down to end user demand for solutions. There’s no legal requirement for open banking in the US, but it’s happening anyway as people want to share their banking data with fintechs. Creating a smooth, easy UX will help to create the demand.”
Kim: “There’s a huge role for expanding the scope of trust to content authenticity, similar to the browser check mark that shows a website has a valid SSL certificate. C2PA (link) is fantastic, and is already using Verifiable Credentials (VCs). However, there is a risk of getting locked into who can verify these claims if we use Certificate Authorities (CAs) as the root of trust. We are talking to them, and there’s strong interest in generalizing the trust model.”
The panelists' key takeaways
Wayne: “One of the early goals of the internet pioneers was to have your personal agent in cyberspace. We need to return to that original vision of personal agents, taking advantage of them to certify our content and things done on our behalf.”
Linda: "We need the right to certify our data as authentic. Right now we can’t tell what’s synthetic versus from an original creator. It’s not judging whether the data is good or bad, it just gives us additional info about the data we’re using."
Kim: “Everything we’re talking about is already here; it’s just about connecting the pieces. If you’re building products, this is a great time to get involved. Come and talk to us at DIF!”
Linda Jeng, Wayne Chang, Kim Hamilton Duffy, Kristy Lam and Elissa Maercklein published “Chains of Trust: Combatting Synthetic Data Risks of AI” earlier today.