Introducing DIF Grants

Lightweight micro-grants to further the DIF mission

DIF is kicking off a program to administer narrowly-scoped financial support for community initiatives, ranging in format from grants to more competitive implementation bounties, hackathon-style open collaborations, and security reviews. Although the complexity and number of stakeholders involved will naturally vary, all of these bounties will be structured around public “challenges” that directly reward efforts helpful to the community as a whole.

Each of these challenges will culminate in a donation roughly the size of a library, a repository, or a narrowly-scoped specification that closes interoperability gaps, generates usable documentation, or solves other tooling and data problems that move the decentralized identity needle. These could be signature suites, libraries, toolkits, or even detailed tutorials and implementation guides.

Prompts for whitepapers or non-normative implementation guides are harder to define in testable terms, but still welcome; one way of defining them is by listing a significant, finite set of input documents (such as specifications and regulatory profiles) and defining success as simple, useful documentation that incorporates all of their prescriptions and limitations.


To make the awarding of these grants as objective and non-controversial as possible, we will be administering them in the form of technically-specified “challenges” that break down into specific, testable criteria. Submissions will be defined as donations to DIF that meet some or all of the criteria of a given challenge; the Technical Steering Committee (TSC) will adjudicate by testing submissions against the challenge’s criteria.
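
For illustration, a testable criterion such as the “lossless, general-purpose translator between DID Document representations” example from the checklist below could be checked mechanically against published test vectors. The sketch below is a hypothetical illustration only: the module name, the translate() signature, and the representation labels are assumptions, not part of any published challenge.

```typescript
// Hypothetical adjudication check for a criterion like
// "translation between DID Document representations must be lossless".
// The submission module and translate() signature are assumptions.
import { strict as assert } from "node:assert";
import { translate } from "./submission"; // the donated library (hypothetical)

// A minimal DID Document serving as a published test vector.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:example:123",
  verificationMethod: [
    {
      id: "did:example:123#key-1",
      type: "JsonWebKey2020",
      controller: "did:example:123",
    },
  ],
};

// Criterion: JSON -> CBOR -> JSON must reproduce the original document exactly.
const roundTripped = translate(
  translate(didDocument, { from: "json", to: "cbor" }),
  { from: "cbor", to: "json" },
);
assert.deepEqual(roundTripped, didDocument, "round trip must be lossless");
console.log("Criterion passed: representation round trip is lossless");
```

Publishing checks in this style lets the TSC run the same tests on every submission, so the pass/fail decision stays objective.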

Grant money can be offered directly from DIF’s budget by decision of DIF’s governing board (the Steering Committee) and/or offered by a member company and approved by the SC. The donor of the grant funds can choose a specific WG or combination of WGs to specify and fine-tune the challenge. These WGs work with the Technical Steering Committee to ratify successful completion and authorize release of funds once a conforming donation has been received.

FAQ for DIF Grants:

Who can offer a Grant?
Any DIF member and/or the DIF Steering Committee can offer a grant (or matching funds) as long as the scope of the challenge benefits the entire DIF community. The DIF SC will review grant requests before the challenge is prepared and published.
Who writes the challenge?
Grant challenges are written by the chairs and/or the members of the WG chosen by the donors to administer the grant. The WG is responsible for writing a challenge acceptable to the SC and the grantor. WG chairs should submit the challenge at least one week before the deadline for grantor approval (aka the “go/no-go decision”), to allow for fine-tuning or negotiation of terms.
What makes a good challenge?
A good challenge is testable, so that the decision about its success is objective rather than subjective; it should include its own testable definition of success and, where appropriate, usable test fixtures.

A good challenge also serves the broader community equally (for instance, not just JWT-, LD-, or Aries-based systems). A challenge should not favor any given commercial product (or infrastructure such as specific blockchains) substantially over others, and care should be taken to think through what the “equivalences” or counterparties are if a challenge comes from a specific community, credential format, etc.
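
As a sketch of what ecosystem-neutral test fixtures might look like, a challenge could publish the same claims in both a JWT-style and an LD-style fixture, so success criteria can be evaluated against each major stack on equal footing. The shapes below are deliberately simplified illustrations, not normative credential formats.

```typescript
// Illustrative fixtures only: the same claims expressed for two
// subcommunities so that no single stack is favored by the challenge.
export const jwtStyleFixture = {
  // Payload a JWT-based implementation would sign and consume (simplified).
  iss: "did:example:issuer",
  sub: "did:example:holder",
  vc: {
    type: ["VerifiableCredential"],
    credentialSubject: { degree: "BSc" },
  },
};

export const ldStyleFixture = {
  // The same claims as an LD-style document (simplified).
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential"],
  issuer: "did:example:issuer",
  credentialSubject: { id: "did:example:holder", degree: "BSc" },
};
```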

Who can submit and who reviews it?
Participation is open to the broad community (no prior DIF membership is required), but submissions will only be considered upon donation to the challenge-owning DIF WG. The donation should be “clean IP,” i.e. composed entirely of IP generated by the donor(s), and the challenge owner may also impose some limitations on external dependencies.
Is there a minimum and maximum donation size that DIF will administer as a grant?
For now, DIF has decided not to use this structure for grants under US$1,000. DIF cannot manage grants of over US$50,000 at this time.
How is/are winner(s) selected?
Each challenge can specify the relationship between testable criteria and winner(s). The template for challenge authors specifies a few common patterns, i.e., pre-approved grantee(s), bounty-style competition, and sponsored lead/editor for an open work item.

Payment is handled through a contract signed with JDF.

Checklist for defining a good challenge:

  1. Define the solution to the challenge as simply as possible, ideally in one noun phrase, e.g.:
    • “A lossless, general-purpose translator between DID Document representations”,
    • “A sample implementation that reimplements DIDComm access via Bluetooth in a second major language,” or
    • “A specification for how EDVs could be accessed over NFC, written by someone with broad NFC and security experience”
  2. Consider redundancy of effort - Do people apply with an idea and get approved to execute (i.e., “grant”), working in reasonable confidence of post-success payment if they meet the deadline? Or do they compete with an unknown number of other teams, only applying to get paid after completion? It is recommended that challenge owners decide early among these three modes of operation:
    • “Mini-grant” mode: applicants get pre-approved before beginning work and work privately until donating
    • “Work item” mode: applicants get pre-approved to lead a public work item at DIF, which may accept volunteer contributions from other DIF members. Work should be donated even if the challenge is not completed or is completed sooner by another party.
    • “Open participation” mode, aka implementation bounty: anyone can work privately and successful donation(s) get(s) the award.
  3. A good challenge should not be met merely by addressing known or superficial bugs, repackaging prior art, or making other minor contributions.
  4. Where in the interoperability WG’s map of implementer decisions does the challenge “live” and which sections does it directly affect? Answering this question may help argue for direct or indirect benefits to the whole community.
  5. How familiar are the authors of the challenge with that specific problem space, in and outside of decentralized identity? Have experts been consulted from outside the WG, or better yet, outside of the decentralized identity community?
    • How evenly distributed would the benefits be? Please consider (and write out) how various subcommunities (e.g., Aries-based solutions, JWT-/JSON-based solutions, and LD-based solutions) would benefit from this solution. If the benefits would be unevenly distributed, consider adding an additional challenge criterion (perhaps even awardable separately) for extending extra benefits to the least-served community, such as an implementation guide, @context file, or vocabulary to aid translation.
  6. Define criteria for success, ideally testable ones, to minimize the subjective judgments to be made by the TSC (a sketch of machine-readable criteria follows this checklist).
    • If possible, include dummy data and other test vectors.
    • Securing formal review by an outsider with specified qualities or experience can be written in as a success criterion or payment trigger.
    • Vaguely-worded requirements of “originality” are hard to assess, but negative requirements like “MUST/SHOULD NOT re-implement existing solutions linked from the reference section of this spec” can help make requirements clear to all involved.
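
To keep item 6 concrete, a challenge could publish its success criteria in a small machine-readable form alongside the prose, marking which checks are covered by test vectors and which require TSC or outside review. The structure below is a hypothetical sketch, not an official DIF template.

```typescript
// Hypothetical shape for publishing a challenge's success criteria;
// not an official DIF template.
interface Criterion {
  id: string;
  level: "MUST" | "SHOULD"; // RFC 2119-style requirement level
  automated: boolean;       // true if a published test vector covers it
  description: string;
}

const challengeCriteria: Criterion[] = [
  {
    id: "round-trip",
    level: "MUST",
    automated: true,
    description:
      "Translation between representations is lossless against the published test vectors.",
  },
  {
    id: "no-reimplementation",
    level: "MUST",
    automated: false,
    description:
      "Does not re-implement existing solutions linked from the reference section of the spec.",
  },
  {
    id: "external-review",
    level: "SHOULD",
    automated: false,
    description:
      "Formal review secured from an outside expert with relevant security experience.",
  },
];

// A submission meets the challenge when every MUST criterion passes;
// SHOULD criteria can be tied to separate awards or payment triggers.
const mustCriteria = challengeCriteria.filter((c) => c.level === "MUST");
console.log(`${mustCriteria.length} MUST criteria to verify per submission`);
```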
