Guest blog: Karyl Fowler, CEO, Transmute


Transmute has been a member of DIF since 2018 and was a leading contributor to Sidetree, a protocol that enables scalable Decentralized Identifier (DID) networks that can run on top of any existing storage or ledger system. Transmute’s open standards-based digital identity and credential technologies simplify regulatory and business processes. The company is currently focused on enabling digitization of trade documentation and workflows for global imports and exports.

You’ve consistently highlighted the value of interoperability. Why is this so important to your business?

What really drew us to DIF was its agnosticism toward any specific technology stack and its focus on interoperability. For example, one of the things we’ve collaborated on is open source interoperability tests that support integration of DIDs and Verifiable Credentials (VCs) within existing infrastructure. You also see a lot of DIF community work on interoperability profiles.

A key moment for us was realizing we didn’t need a blockchain to use DIDs and VCs. We didn’t see demand for blockchain in the market, and you don’t need it to provide cryptographic proof that someone claimed something at a certain point in time.

As a result, we shifted to focus heavily on W3C and IETF standards and doubled down on ensuring the Decentralized Identifier specification was adopted as an official web standard last year (Transmute’s Verifiable Data Platform offers users the did:web and did:key methods out of the box).
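
For readers new to those two methods: a did:key identifier encodes a public key directly, while a did:web identifier is anchored to a domain the issuer controls and resolves to a DID document hosted at a well-known HTTPS URL. Here is a minimal sketch of that resolution rule as described in the did:web specification (the identifiers below are illustrative examples, not Transmute’s):

```typescript
// A did:key encodes the public key itself (multibase-encoded), so it
// needs no registry at all; a did:web is derived from a domain name.
const exampleDidKey = "did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK";
const exampleDidWeb = "did:web:example.com";

// Sketch of did:web resolution: colons after the domain become path
// segments; a bare domain falls back to the .well-known location.
function didWebToUrl(did: string): string {
  const parts = did.replace(/^did:web:/, "").split(":").map(decodeURIComponent);
  const path = parts.length > 1 ? parts.slice(1).join("/") : ".well-known";
  return `https://${parts[0]}/${path}/did.json`;
}

console.log(didWebToUrl("did:web:example.com"));
// https://example.com/.well-known/did.json
console.log(didWebToUrl("did:web:example.com:trade:issuer"));
// https://example.com/trade/issuer/did.json
```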

The standards are what enable data to cross silos and organizational boundaries. We can interoperate with anyone who implements standards-conformant DIDs and VCs. Alternatively, where necessary, we can integrate directly with other parties across the supply chain and with other claims sources, such as ERP systems (check out our VDP Adapter Marketplace).
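
To illustrate what “standards-conformant” buys you, here is a minimal Verifiable Credential shaped per the W3C VC Data Model. The issuer, type, and subject values are illustrative, but any conformant verifier could consume a credential like this regardless of the stack that produced it:

```typescript
// Minimal W3C Verifiable Credential (illustrative values). The shared
// @context is what lets independent parties interpret the terms the
// same way; the issuer is identified by a DID.
const credential = {
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://w3id.org/traceability/v1", // trade-specific vocabulary
  ],
  type: ["VerifiableCredential", "MillTestReportCertificate"],
  issuer: "did:web:steel-mill.example.com",
  issuanceDate: "2023-01-15T00:00:00Z",
  credentialSubject: {
    id: "https://steel-mill.example.com/products/heat-12345",
    // domain-specific claims about the product would go here
  },
};
```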

“Let’s solve the data problem by getting on the same [insert blockchain name or other infrastructure du jour]” has not worked. Everyone has made tremendous investments in some kind of system. The purpose of the standards is to insulate the industry from the tech stack.

The terminology used in trade documents was standardized many years ago. How does this affect your work at Transmute?

In addition to our work at DIF, we’re a leading contributor to the W3C supply chain traceability vocabulary (a repository of VC schemas for specific trade categories). To ensure the data inside the VCs can be interpreted by all parties, including machines, we’ve gone to the current sources of trust for each product class and mode of transit that touch our customers.

As well as authoring and editing several cornerstone specifications at the W3C, we’ve pioneered open standards-based semantic approaches with UN/CEFACT, the body that standardizes terms used in global trade, where Nis Jespersen, Transmute’s Solution Architect, leads the UN Linked Data Project. We’ve also collaborated with GS1, the kings of ubiquitous product identifiers and formats (they commercialized the literal barcode), to ensure we’re not reinventing definitions. And we’ve taken perspectives from many transport modes into account, from IATA to DCSA.

You’ve previously spoken about the power of Linked Data. Please can you unpack that for us a little?  

Transmute is focused on solving two big problems in global supply chains and cross-border trade. One is, where did the information flowing through our supply chain come from? The other is, what does it mean?

Supply chains still depend heavily on paper-based information flows. A typical international shipment has a minimum of five documents issued, distributed and verified. What’s more, none of these trade documents or product identifiers are typically issued all at once, or even by the same party in the “chain.” “Digitization” so far has meant scanning and transcribing the piece of paper and emailing PDFs.

Transmute’s Verifiable Data Platform (VDP) uses Decentralized Identifiers to digitally “sign” trade documents for automatic and traceable persistence. DIDs mean you always know where a piece of data came from, and who it came from, which solves the first problem.
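
As a rough sketch of what that signing looks like in practice: under a W3C Data Integrity suite such as Ed25519Signature2020, the proof attached to a credential points back into the issuer’s DID document, so anyone can verify both the signature and who produced it (all values below are placeholders, not VDP output):

```typescript
// Illustrative proof block on a signed trade document. The
// verificationMethod is a key inside the issuer's DID document,
// which is what makes the document's origin verifiable.
const proof = {
  type: "Ed25519Signature2020",
  created: "2023-01-15T09:30:00Z",
  verificationMethod: "did:web:steel-mill.example.com#key-1",
  proofPurpose: "assertionMethod",
  proofValue: "z3FXQjecWufY...", // truncated placeholder signature
};
```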

Our product also leverages Linked Data (JSON-LD) both to codify the linkages across data types and documentation, and to surface trends and relationships for interpretation. Making sure machines can read and understand your data is the first and most critical step towards unlocking actionable business insights that automatically tune with time.

This Linked Data aspect enables documents from multiple issuers to be automatically incorporated into a knowledge graph by matching URIs (Uniform Resource Identifiers) contained within the files.
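
A toy sketch of that mechanism (the data shapes are hypothetical, not Transmute’s actual model): because every subject carries a URI, documents from different issuers that mention the same URI collapse into a single node in the graph:

```typescript
// Nodes are keyed by URI; two documents referencing the same URI
// (e.g. the same product) enrich one shared node instead of duplicating it.
type Node = { id: string; properties: Record<string, unknown> };

function mergeIntoGraph(graph: Map<string, Node>, incoming: Node): void {
  const existing = graph.get(incoming.id);
  if (existing) {
    Object.assign(existing.properties, incoming.properties); // same URI: merge
  } else {
    graph.set(incoming.id, { ...incoming });
  }
}

const graph = new Map<string, Node>();
// A mill test report and a bill of lading both reference one product URI
mergeIntoGraph(graph, {
  id: "https://steel-mill.example.com/products/heat-12345",
  properties: { grade: "A36", countryOfOrigin: "DE" },
});
mergeIntoGraph(graph, {
  id: "https://steel-mill.example.com/products/heat-12345",
  properties: { carrier: "did:web:shipping-line.example.com" },
});
// graph now holds a single node combining claims from both documents
```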

Everyone in the ecosystem benefits. For example, knowledge graphs enable manufacturers to answer questions about costs and resilience: how many suppliers do we have in a certain country or region? What are the cost and delivery-time trends across certain types of suppliers? Where are the weakest “links” and strongest dependencies within our supply network? Customs authorities such as US Customs and Border Protection (CBP), on the other hand, want to clear imported goods as efficiently as possible so they can focus enforcement on problem areas. Import credentials built with Linked Data allow them to spot anomalies for further investigation more rapidly.
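
Continuing the toy graph above, such questions reduce to simple queries over the merged nodes; this sketch assumes a hypothetical convention where supplier nodes carry "type" and "country" properties:

```typescript
// Same Node shape as the previous sketch.
type Node = { id: string; properties: Record<string, unknown> };

// Count supplier nodes in a given country across the merged graph.
function supplierCountIn(graph: Map<string, Node>, country: string): number {
  return [...graph.values()].filter(
    (n) => n.properties["type"] === "Supplier" && n.properties["country"] === country
  ).length;
}
```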

One of the advantages of Verifiable Credentials in a commercial context is the ability to enable ecosystem collaboration while protecting participants’ confidentiality and intellectual property, for example via selective disclosure. How does this fit with the knowledge graph concept?

The kind of information that’s available on a product should depend on who you are, for example whether you’re CBP or the end buyer. Using Decentralized Identifiers and Verifiable Credentials means our customers can provision access more granularly to different stakeholders. For example, if one party possesses ten documents, their knowledge graph will include the data from all ten, but if they only need to submit six of these to another party, say a regulator or an end customer, that party’s knowledge graph will only include data from those six documents.
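
A small sketch of that idea (hypothetical shapes, not VDP’s API): each party’s graph is derived only from the credentials actually shared with them:

```typescript
type Credential = { id: string; subject: string };

// Ten credentials held by one party (illustrative data).
const holderDocs: Credential[] = Array.from({ length: 10 }, (_, i) => ({
  id: `urn:uuid:doc-${i + 1}`,
  subject: `https://example.com/shipments/${i + 1}`,
}));

// Only six are submitted to the regulator.
const submitted = holderDocs.slice(0, 6);

// The regulator's knowledge graph is built solely from what was shared.
const regulatorGraph = new Set(submitted.map((c) => c.subject));
console.log(regulatorGraph.size); // 6, not 10
```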

Data minimization is a compelling commercial proposition: customers take on liability only for the data they need to perform the task at hand.

We have many of the same values as the Self-Sovereign Identity community, but applied to businesses rather than individuals. What you get is a view of your information ecosystem that’s private to you.

How do verifiable data and Linked Data relate to AI and machine learning?

Everyone wants to use AI/ML on their data, but the quality of the output depends on the quality of the inputs. Having high-integrity data that is structured and connected to sufficient context for an AI, or any other machine, to interpret is game-changing; we’re still discovering new uses for this data.