Introduction

The client SDK packages include an API-agnostic library for verifying and issuing credentials, as well as modules to simplify API interaction.

Currently, two blockchains are supported:

  • Dock is a blockchain built on Substrate. It supports Verifiable Credentials Data Model 1.0 compliant documents and the creation and management of W3C-compliant DIDs, among other things.
  • Cheqd is a blockchain built with the Cosmos SDK. It provides the trust and payment infrastructure necessary for the creation of Self-Sovereign Identity (SSI), eID, and digital credential ecosystems.

Overall, there are five packages located in the GitHub repository: @docknetwork/credential-sdk (the API-agnostic core), plus @docknetwork/dock-blockchain-modules and @docknetwork/dock-blockchain-api for Dock, and @docknetwork/cheqd-blockchain-modules and @docknetwork/cheqd-blockchain-api for Cheqd.

Dock

Installation

Installing the SDK is straightforward. We use NPM, and the source is also available on GitHub (links below). To install via NPM or Yarn, run:

npm install @docknetwork/credential-sdk @docknetwork/dock-blockchain-modules @docknetwork/dock-blockchain-api

or

yarn add @docknetwork/credential-sdk @docknetwork/dock-blockchain-modules @docknetwork/dock-blockchain-api

Once the package and dependencies are installed, you can import it as an ES6/CJS module. The complete source for the SDK can be found at GitHub along with tutorials.

Running a Node

The simplest way to run a node is to use the script provided by the GitHub repository. It requires Docker to be installed.

bash scripts/run_dock_node_in_docker

Importing

In this tutorial series, we will use Node.js with Babel for ES6 support. This code will also work in browsers once transpiled. To start, import the Dock SDK. You can import the DockAPI class and instantiate your object:

// Import the Dock SDK
import { DockAPI } from "@docknetwork/dock-blockchain-api";

const dock = new DockAPI();

We will also import shared constants across each tutorial, such as the node address and account secret:

// Import shared variables
import { address, secretUri } from "./shared-constants";

Create the shared-constants.js file with the following contents:

export const address = "ws://localhost:9944"; // WebSocket address of your Dock node
export const secretUri = "//Alice"; // Account secret in URI format, for local testing

Connecting to a Node

With the required packages and variables imported, we can connect to our node. If you don't have a local testnet running, go to Docker Substrate for setup instructions. You could also use the Dock testnet if you have an account with sufficient funds. Begin by creating the following method:

export async function connectToNode() {}

Initialize the SDK to connect to the node with the supplied address and create a keyring to manage accounts:

// Initialize the SDK and connect to the node
await dock.init({ address });

console.log("Connected to the node and ready to go!");

Creating an Account

To write to the chain, you need to set up an account. Read operations are possible without an account, but for our examples, you'll need one. Accounts can be generated using the dock.keyring object with methods such as URI, mnemonic phrase, and raw seed. For more details, see the Polkadot keyring documentation.

Use the URI secret //Alice for local testnet work. Add this code after dock.init:

// Create an Alice account for our local node using the dock keyring.
const account = dock.keyring.addFromUri(secretUri);

dock.setAccount(account);

// Ready to transact
console.log("Account set and ready to go!");

Basic Usage

To make the API object connect to the node, call the init method with the WebSocket RPC endpoint of the node:

await dock.init({ address });

Disconnect from the node with:

await dock.disconnect();

Set the account to send transactions and pay fees:

const account = dock.keyring.addFromUri(secretUri);
dock.setAccount(account);

Retrieve the account:

dock.getAccount();

Send a transaction using signAndSend:

const res = await dock.signAndSend(transaction);

Instantiate Dock modules with DockCoreModules:

import { DockCoreModules } from "@docknetwork/dock-blockchain-modules";
const dockModules = new DockCoreModules(dock);

For the DID module:

const didModule = dockModules.did;

For the accumulator module:

const accumulator = dockModules.accumulator;

Cheqd

Installation

As with Dock, the process is simple. Use NPM to install:

npm install @docknetwork/credential-sdk @docknetwork/cheqd-blockchain-modules @docknetwork/cheqd-blockchain-api

or Yarn:

yarn add @docknetwork/credential-sdk @docknetwork/cheqd-blockchain-modules @docknetwork/cheqd-blockchain-api

The complete source for the SDK is available at GitHub and tutorials at GitHub Tutorials.

Running a Node

Use the provided script, which requires Docker:

CHEQD_MNEMONIC="steak come surprise obvious remain black trouble measure design volume retreat float coach amused match album moment radio stuff crack orphan ranch dose endorse" bash scripts/run_cheqd_node_in_docker

Importing

Similarly, use Node.js with Babel and import the Cheqd SDK:

// Import the Cheqd SDK
import { CheqdAPI } from "@docknetwork/cheqd-blockchain-api";

const cheqd = new CheqdAPI();

Import shared constants:

import { url, mnemonic } from "./shared-constants";

Create shared-constants.js:

export const url = "http://localhost:26657"; // RPC URL of your Cheqd node
export const mnemonic =
  "steak come surprise obvious remain black trouble measure design volume retreat float coach amused match album moment radio stuff crack orphan ranch dose endorse"; // Mnemonic for testing

Connecting to a Node

Initialize and connect to the node using the SDK:

await cheqd.init({ url, mnemonic });

console.log("Connected to the node and ready to go!");

Disconnect from the node:

await cheqd.disconnect();

Send a transaction:

const res = await cheqd.signAndSend(transaction);

Instantiate Cheqd modules:

import { CheqdCoreModules } from "@docknetwork/cheqd-blockchain-modules";
const cheqdModules = new CheqdCoreModules(cheqd);

For interacting with the DID module:

const didModule = cheqdModules.did;

For the accumulator module:

const accumulator = cheqdModules.accumulator;

Concepts

  1. DID
  2. Verifiable credentials
  3. Blobs and Schemas
  4. Claim Deduction
  5. PoE Anchors
  6. Private Delegation
  7. Public Attestation
  8. Public Delegation

W3C DID

DID stands for Decentralized Identifier. DIDs are meant to be globally unique identifiers that allow their owner to prove cryptographic control over them. The owner(s) of the DID are called its controller(s). The identifiers are not just assignable to humans but to anything. Quoting the DID spec,

A DID identifies any subject (e.g., a person, organization, thing, data model, abstract entity, etc.) that the controller of the DID decides that it identifies.

DIDs differ from public keys in that DIDs are persistent, i.e. a public key has to be changed if the private key is stolen/lost or the cryptographic scheme of the public key is no longer considered safe. This is not the case with DIDs, they can remain unchanged even when the associated cryptographic material changes. Moreover, a DID can have multiple keys and any of its keys can be rotated. Additionally, depending on the scheme, public keys can be quite large (several hundred bytes in RSA) whereas a unique identifier can be much smaller.

Each DID is associated with a DID Document that specifies the subject, the public keys, the authentication mechanisms usable by the subject, authorizations the subject has given to others, service endpoints to communicate with the subject, etc. For all properties that can be put in the DID Document, refer to this section of the spec. DIDs and their associated DID Documents are stored on a DID registry, which is the term used for the centralized or decentralized database persisting the DID and its Document.

The process of discovering the DID Document for a DID is called DID resolution, and the tool (a library or a service) that performs it is called a DID resolver. To resolve a DID, the resolver first needs to check on which registry the DID is hosted and then decide whether it is capable of, and willing to, look up that registry. The registry is indicated by the DID method of that DID. In addition to the registry, the method also specifies other details of the DID, like the supported operations, cryptography, etc. Each DID method defines its own specification; Dock's DID method spec is here. In the case of Dock, the registry is the Dock blockchain and the method is dock.

We support 2 kinds of DIDs, on-chain and off-chain. With off-chain DIDs, only a reference to the DID Document is kept on chain, and this reference can be a CID (for IPFS), a URL, or any custom format. With on-chain DIDs, the keys, controllers and service endpoints of the DID are stored on chain. A DID key can have 1 or more verification relationships, which indicate what that key can be used for. Only a DID key with the verification relationship capabilityInvocation can update the DID document, i.e. add/remove keys, add/remove controllers, add/remove service endpoints, and remove the DID. Also, a DID can have 1 or more controllers, and these controllers can also update its DID document. A DID with a key having the capabilityInvocation verification relationship is its own controller.

An example on-chain Dock DID.

did:dock:5CEdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW

The above DID has the method dock, and its identifier is 5CEdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW. Dock DID identifiers are 32 bytes in size.
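The generic did:&lt;method&gt;:&lt;identifier&gt; syntax can be pulled apart with a short parser. This is an illustrative sketch only, not the SDK's own DID handling (the SDK ships proper DID types):

```javascript
// Illustrative only: split a DID string into its method and
// method-specific identifier, per the generic did:<method>:<id> syntax.
function parseDid(did) {
  const match = /^did:([a-z0-9]+):(.+)$/.exec(did);
  if (!match) throw new Error(`Not a valid DID: ${did}`);
  return { method: match[1], identifier: match[2] };
}

const { method, identifier } = parseDid(
  "did:dock:5CEdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW"
);
console.log(method); // "dock"
console.log(identifier); // the method-specific identifier
```

A resolver would use the parsed method to decide which registry (here, the Dock chain) to query for the DID Document.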

An example DID Document

{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
  "controller": ["did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn"],
  "verificationMethod": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
      "type": "Sr25519VerificationKey2020",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "7d3QsaW6kP7bGiJtRZBxdyZsbJqp6HXv1owwr8aYBjbg"
    },
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2",
      "type": "Ed25519VerificationKey2018",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "p6gb7WNh9SWC4hkye4VV5epo1LYpLXKH21ojfwJLayg"
    }
  ],
  "authentication": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2"
  ],
  "assertionMethod": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ],
  "capabilityInvocation": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ]
}

Dock DIDs support multiple keys, which are listed in the verificationMethod section. As per the above DID document, the DID did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn has 2 public keys and 1 controller, which is itself. Note how each public key is referred to by its id in the authentication, assertionMethod and capabilityInvocation sections. The above document states that the DID did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn can authenticate with the 2 public keys whose ids are specified under authentication. When it attests to some fact (becomes an issuer), it can only use 1 key, the one under assertionMethod. The keys specified under capabilityInvocation can be used to update the DID document, i.e. add/remove keys, etc.
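As a sketch of how a verifier might read these sections, the hypothetical helper below lists the verification method ids authorized for a given verification relationship (per the DID spec, entries may be bare id strings or embedded key objects):

```javascript
// Illustrative helper: which verification methods may be used for a
// given verification relationship (authentication, assertionMethod, ...)?
function keysFor(didDocument, relationship) {
  // Entries may be id strings or embedded verification method objects.
  return (didDocument[relationship] || []).map((entry) =>
    typeof entry === "string" ? entry : entry.id
  );
}

// Trimmed-down version of the example DID document above.
const doc = {
  id: "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
  authentication: [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2",
  ],
  assertionMethod: [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
  ],
};

console.log(keysFor(doc, "authentication").length); // 2
console.log(keysFor(doc, "assertionMethod").length); // 1
```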

{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
  "controller": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
    "did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz"
  ],
  "verificationMethod": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
      "type": "Sr25519VerificationKey2020",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "7d3QsaW6kP7bGiJtRZBxdyZsbJqp6HXv1owwr8aYBjbg"
    },
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2",
      "type": "Ed25519VerificationKey2018",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "p6gb7WNh9SWC4hkye4VV5epo1LYpLXKH21ojfwJLayg"
    }
  ],
  "authentication": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2"
  ],
  "assertionMethod": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ],
  "capabilityInvocation": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ]
}

In the above DID document, there are 2 controllers: one is the DID did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn itself, and the other is did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz. This means that DID did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz can also modify the above DID document, i.e. add/remove keys, add/remove controllers, etc.

{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
  "controller": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
    "did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz"
  ],
  "verificationMethod": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
      "type": "Sr25519VerificationKey2020",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "7d3QsaW6kP7bGiJtRZBxdyZsbJqp6HXv1owwr8aYBjbg"
    },
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2",
      "type": "Ed25519VerificationKey2018",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "p6gb7WNh9SWC4hkye4VV5epo1LYpLXKH21ojfwJLayg"
    }
  ],
  "authentication": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2"
  ],
  "assertionMethod": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ],
  "capabilityInvocation": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ],
  "service": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#linked-domain-1",
      "type": "LinkedDomains",
      "serviceEndpoint": ["https://foo.example.com"]
    }
  ]
}

In the above document, there is also a service endpoint for the DID.

DIDs can also be keyless, i.e. not have any keys of their own. In this case the DID is not self-controlled but controlled by one or more other DIDs, and those controller DIDs can add/remove keys, add/remove controllers, or remove the DID. An example keyless DID is shown below:

{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
  "controller": ["did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz"],
  "verificationMethod": [],
  "authentication": [],
  "assertionMethod": [],
  "capabilityInvocation": [],
  "service": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#linked-domain-1",
      "type": "LinkedDomains",
      "serviceEndpoint": ["https://bar.example.com"]
    }
  ]
}

In the above DID Doc, DID did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn is controlled by did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz. Now suppose did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz adds a key, say for authentication, to did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn; the DID Doc will then look like the following:

{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
  "controller": ["did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz"],
  "verificationMethod": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
      "type": "Ed25519VerificationKey2018",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "p6gb7WNh9SWC4hkye4VV5epo1LYpLXKH21ojfwJLayg"
    }
  ],
  "authentication": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ],
  "assertionMethod": [],
  "capabilityInvocation": [],
  "service": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#linked-domain-1",
      "type": "LinkedDomains",
      "serviceEndpoint": ["https://bar.example.com"]
    }
  ]
}

Another thing to keep in mind is that the keys associated with a Dock DID are independent of the keys used to send the transaction on chain and pay fees. E.g., Alice might not have any tokens to write anything on chain but can still create a DID and the corresponding key, and ask Bob, who has tokens, to register the DID on chain. Even though Bob wrote the DID on chain, he cannot update or remove it, since only Alice has the keys associated with that DID. Similarly, when Alice wants to update the DID, she can create the update, sign it, and send it to Carol this time to submit on chain. Similar to blockchain accounts, DIDs also have their own nonce, which increments by 1 on each action of the DID. On DID creation, its nonce is set to the block number at which it is created, and the DID is expected to send signed payloads, each with a nonce 1 greater than the previous one.
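The nonce rule can be illustrated with a small sketch; the class and numbers below are hypothetical, purely to show the expected sequencing:

```javascript
// Illustrative sketch of the DID nonce rule described above: the nonce
// starts at the block number at creation time, and every signed action
// must carry a nonce exactly one greater than the last accepted one.
class DidNonceTracker {
  constructor(creationBlockNumber) {
    this.nonce = creationBlockNumber;
  }
  accept(payloadNonce) {
    if (payloadNonce !== this.nonce + 1) {
      throw new Error(`Expected nonce ${this.nonce + 1}, got ${payloadNonce}`);
    }
    this.nonce = payloadNonce;
  }
}

const tracker = new DidNonceTracker(100); // DID created at block 100
tracker.accept(101); // first signed action
tracker.accept(102); // second signed action
// tracker.accept(102) again would throw: replays and gaps are rejected
```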

Verifiable Credentials

Credentials are a part of our daily lives: driver's licenses are used to assert that we are capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries.

These credentials provide benefits to us when used in the physical world, but their use on the Web continues to be elusive.

Currently it is difficult to express education qualifications, healthcare data, financial account details, and other sorts of third-party verified machine-readable personal information on the Web.

The difficulty of expressing digital credentials on the Web makes it challenging to receive the same benefits through the Web that physical credentials provide us in the physical world.

The Verifiable Credentials Data Model 1.0 (VCDM) specification provides a standard way to express credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable.

Participants and workflow

  • Credentials are issued by an entity called the issuer.
  • The issuer issues the credential about a subject by signing the credential with its key. If the credential is revocable, the issuer must specify how and from where the revocation status can be checked. Revocation need not be managed by the issuer; the issuer might designate a different authority for revocation.
  • The issuer gives the credential to the holder. The holder might be the same as the subject.
  • A service provider, or anyone willing to check whether the holder possesses certain credentials, requests a presentation of those credentials. This entity requesting the presentation is called the verifier. To protect against replay attacks (a verifier receiving the presentation and replaying it at some other verifier), a verifier must supply a challenge that must be embedded in the presentation.
  • The holder creates a presentation for the required credentials. The presentation must indicate which credentials it is about and must be signed by the holder of the credentials.
  • On receiving the presentation, the verifier checks the validity of each credential in it. This includes checking the correctness of the credential's data model, its authenticity by verifying the issuer's signature, and its revocation status if the credential is revocable. The verifier then checks that the presentation carries the holder's signature, which also covers the verifier's challenge.

Issuing

To issue a verifiable credential, the issuer needs to have a public key that is accessible by the holder and verifier to verify the signature (in proof) in the credential. Though the VCDM spec does not mandate it, an issuer in Dock must have a DID on chain. This DID is present in the credential in the issuer field. An example credential where both the issuer and holder have Dock DIDs:

{
    '@context': [
      'https://www.w3.org/2018/credentials/v1',
      'https://www.w3.org/2018/credentials/examples/v1'
    ],
    id: '0x9b561796d3450eb2673fed26dd9c07192390177ad93e0835bc7a5fbb705d52bc',
    type: [ 'VerifiableCredential', 'AlumniCredential' ],
    issuanceDate: '2020-03-18T19:23:24Z',
    credentialSubject: {
      id: 'did:dock:5GL3xbkr3vfs4qJ94YUHwpVVsPSSAyvJcafHz1wNb5zrSPGi',
      alumniOf: 'Example University'
    },
    issuer: 'did:dock:5GUBvwnV6UyRWZ7wjsBptSquiSHGr9dXAy8dZYUR9WdjmLUr',
    proof: {
      type: 'Ed25519Signature2018',
      created: '2020-04-22T07:50:13Z',
      jws: 'eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..GBqyaiTMhVt4R5P2bMGcLNJPWEUq7WmGHG7Wc6mKBo9k3vSo7v7sRKwqS8-m0og_ANKcb5m-_YdXC2KMnZwLBg',
      proofPurpose: 'assertionMethod',
      verificationMethod: 'did:dock:5GUBvwnV6UyRWZ7wjsBptSquiSHGr9dXAy8dZYUR9WdjmLUr#keys-1'
    }
}

Presentation

While creating the presentation, the holder signs it with their private key. For the verifier to verify the presentation, in addition to verifying the issuer's signature, the verifier must also verify this holder signature, and for that the verifier must know the holder's public key. One way to achieve this is for the holder to have a DID too, so that the verifier can look up the DID on chain and learn the public key. An example presentation signed by the holder:

{
    '@context': [ 'https://www.w3.org/2018/credentials/v1' ],
    type: [ 'VerifiablePresentation' ],
    verifiableCredential: [
      {
          '@context': [
            'https://www.w3.org/2018/credentials/v1',
            'https://www.w3.org/2018/credentials/examples/v1'
          ],
          id: 'A large credential id with size > 32 bytes',
          type: [ 'VerifiableCredential', 'AlumniCredential' ],
          issuanceDate: '2020-03-18T19:23:24Z',
          credentialSubject: {
            id: 'did:dock:5GnE6u2dt9nC7tgf5vSdKy4gYX3jwqthbrBnjiay2LWETdrV',
            alumniOf: 'Example University'
          },
          credentialStatus: {
            id: 'rev-reg:dock:0x0194db371bab472a9cc920b5dfb1447aad5a6db906c46ff378cf0fc337a0c8c0',
            type: 'CredentialStatusList2017'
          },
          issuer: 'did:dock:5CwAuM8cPetXWbZN2JhMFWtLjxZ6DokiDdHViGw2FfxC1Cya',
          proof: {
            type: 'Ed25519Signature2018',
            created: '2020-04-22T07:58:43Z',
            jws: 'eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..bENDgnK29BHRhP05ehbQkOPfqweppGyI7NeH02YT1hzSDEHseOzCDx-g9dS4lY-m_bElwbOptOlRnQ2g9MW7Ag',
            proofPurpose: 'assertionMethod',
            verificationMethod: 'did:dock:5CwAuM8cPetXWbZN2JhMFWtLjxZ6DokiDdHViGw2FfxC1Cya#keys-1'
          }
      }
    ],
    id: '0x4bd107aee17744dcec10208d7551620664dcba7e88ce11c2312c02df562754f1',
    proof: {
      type: 'Ed25519Signature2018',
      created: '2020-04-22T07:58:49Z',
      challenge: '0x6a5a5d58a99705c4d499fa7cdcdc62eeb2f742eb878456babf49b9a6669d0b76',
      domain: 'test domain',
      jws: 'eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..HW7bDjvsRETeM25a3BtMgER53FtzK6rUBX_46cFo-i6O1y7p_TM-ED2iSTrFBUrDc7vH8QqoeUTY8e5ir5RvCg',
      proofPurpose: 'authentication',
      verificationMethod: 'did:dock:5GnE6u2dt9nC7tgf5vSdKy4gYX3jwqthbrBnjiay2LWETdrV#keys-1'
    }
}

Revocation

If the credential is revocable, the issuer must specify how the revocation check is to be done in the credentialStatus field. On Dock, credential revocation is managed with a revocation registry. There can be multiple registries on chain, and each registry has a unique id. It is recommended that the revocation authority creates a new registry for each credential type. While issuing the credential, the issuer embeds the revocation registry's id in the credential's credentialStatus field. An example credential with a Dock revocation registry:

{
    '@context': [
      'https://www.w3.org/2018/credentials/v1',
      'https://www.w3.org/2018/credentials/examples/v1'
    ],
    id: 'A large credential id with size > 32 bytes',
    type: [ 'VerifiableCredential', 'AlumniCredential' ],
    issuanceDate: '2020-03-18T19:23:24Z',
    credentialSubject: {
      id: 'did:dock:5GnE6u2dt9nC7tgf5vSdKy4gYX3jwqthbrBnjiay2LWETdrV',
      alumniOf: 'Example University'
    },
    credentialStatus: {
      id: 'rev-reg:dock:0x0194db371bab472a9cc920b5dfb1447aad5a6db906c46ff378cf0fc337a0c8c0',
      type: 'CredentialStatusList2017'
    },
    issuer: 'did:dock:5CwAuM8cPetXWbZN2JhMFWtLjxZ6DokiDdHViGw2FfxC1Cya',
    proof: {
      type: 'Ed25519Signature2018',
      created: '2020-04-22T07:58:43Z',
      jws: 'eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..bENDgnK29BHRhP05ehbQkOPfqweppGyI7NeH02YT1hzSDEHseOzCDx-g9dS4lY-m_bElwbOptOlRnQ2g9MW7Ag',
      proofPurpose: 'assertionMethod',
      verificationMethod: 'did:dock:5CwAuM8cPetXWbZN2JhMFWtLjxZ6DokiDdHViGw2FfxC1Cya#keys-1'
    }
}

To revoke a credential, the revocation authority (which might be the same as the issuer) puts a hash of the credential id in the revocation registry. To check the revocation status of a credential, hash the credential id and query the registry id specified in the credential. The revocation of a credential can be undone if the revocation registry supports undoing. Moreover, currently, each registry is owned by a single DID, so that DID can revoke a credential or undo the revocation. In the future, Dock will support ownership of a registry by multiple DIDs and in different fashions, e.g. any one of the owner DIDs could revoke, or a threshold could be needed, etc. To learn more about revocation registries, refer to the revocation section of the documentation.

Schemas

Table of Contents

  1. Intro
  2. Blobs
  3. JSON Schemas
  4. Schemas in Verifiable Credentials

Intro to Schemas

Data schemas are useful when enforcing a specific structure on a collection of data like a Verifiable Credential. Data verification schemas, for example, are used to verify that the structure and contents of a Verifiable Credential conform to a published schema. Data encoding schemas, on the other hand, are used to map the contents of a Verifiable Credential to an alternative representation format, such as a binary format used in a zero-knowledge proof. Data schemas serve a different purpose than the @context property in a Verifiable Credential; the latter neither enforces data structure or data syntax, nor enables the definition of arbitrary encodings to alternate representation formats.

Blobs

Before diving further into schemas, it is important to understand how they are stored on the Dock chain. Schemas are stored on chain as a Blob in the Blob Storage module. They are identified and retrieved by their unique blob id, a 32 byte long hex string. They are authored by a DID and have a max size of 8192 bytes. The chain is agnostic to the contents of blobs and thus to schemas. Blobs may be used to store types of data other than schemas.
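These constraints can be sketched as an off-chain pre-check. The hex id and helper below are illustrative only; the chain itself enforces the limits on write:

```javascript
// Illustrative: the blob constraints described above — a 32-byte id and
// an 8192-byte maximum payload.
const MAX_BLOB_SIZE = 8192;

function checkBlob(idHex, payload) {
  const idBytes = Buffer.from(idHex, "hex");
  if (idBytes.length !== 32) throw new Error("blob id must be 32 bytes");
  const bytes = Buffer.from(JSON.stringify(payload), "utf8");
  if (bytes.length > MAX_BLOB_SIZE) throw new Error("blob too large");
  return bytes.length;
}

const schema = { description: "Alumni", type: "object" };
const size = checkBlob(
  "0194db371bab472a9cc920b5dfb1447aad5a6db906c46ff378cf0fc337a0c8c0",
  schema
);
console.log(size <= MAX_BLOB_SIZE); // true
```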

JSON Schemas

JSON Schema can be used to require that a given JSON document (an instance) satisfies a certain number of criteria. JSON Schema validation asserts constraints on the structure of instance data. An instance location that satisfies all asserted constraints is then annotated with any keywords that contain non-assertion information, such as descriptive metadata and usage hints. If all locations within the instance satisfy all asserted constraints, then the instance is said to be valid against the schema. Each schema object is independently evaluated against each instance location to which it applies. This greatly simplifies the implementation requirements for validators by ensuring that they do not need to maintain state across the document-wide validation process. More about JSON schemas can be found here and here.

Let's see an example JSON schema definition:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "description": "Alumni",
  "type": "object",
  "properties": {
    "emailAddress": {
      "type": "string",
      "format": "email"
    },
    "alumniOf": {
      "type": "string"
    }
  },
  "required": ["emailAddress", "alumniOf"],
  "additionalProperties": false
}

In our context, these schemas are stored on-chain as a blob, which means they have a Blob Id as id and a DID as author:

{
  "id": "blob:dock:1DFdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW",
  "author": "did:dock:5CEdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW",
  "schema": {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "description": "Alumni",
    "type": "object",
    "properties": {
      "emailAddress": {
        "type": "string",
        "format": "email"
      },
      "alumniOf": {
        "type": "string"
      }
    },
    "required": ["emailAddress", "alumniOf"],
    "additionalProperties": false
  }
}

Had we referenced this JSON schema from within a Verifiable Credential, validation would fail if the credentialSubject didn't contain an emailAddress field, or if that field weren't a string formatted as an email, or if the subject didn't contain a property alumniOf of type string. It would also fail if a subject contained properties not listed in the schema (except for the id property, which is removed before validation).
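The checks just described can be approximated with a minimal hand-rolled validator; a real JSON Schema implementation covers far more (the email format check, for instance, is omitted here):

```javascript
// Minimal, illustrative check of the constraints the Alumni schema
// asserts: required string fields and no additional properties.
function validateAlumni(subject) {
  const required = ["emailAddress", "alumniOf"];
  // `id` is tolerated here, mirroring how it is removed before validation.
  const allowed = new Set([...required, "id"]);
  for (const field of required) {
    if (typeof subject[field] !== "string") return false;
  }
  for (const key of Object.keys(subject)) {
    if (!allowed.has(key)) return false; // additionalProperties: false
  }
  return true;
}

console.log(
  validateAlumni({
    emailAddress: "john.smith@example.com",
    alumniOf: "Example University",
  })
); // true
console.log(validateAlumni({ alumniOf: "Example University" })); // false
```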

Schemas in Verifiable Credentials

In pursuit of extensibility, VCDM makes an Open World Assumption; a credential can state anything. Schemas allow issuers to "opt-out" of some of the freedom VCDM allows. Issuers can concretely limit what a given credential will claim. In a closed world, a verifier can rely on the structure of a credential to enable new types of credential processing e.g. generating a complete and human-friendly graphical representation of a credential.

The Verifiable Credentials Data Model specifies the models used for Verifiable Credentials and Verifiable Presentations, and explains the relationships between three parties: issuer, holder, and verifier. A critical piece of infrastructure out of the scope of those specifications is the Credential Schema. This specification provides a mechanism to express a Credential Schema and the protocols for evolving the schema.

Following our example above, we could use the current SDK to store the Email schema above as a Blob in the Dock chain. Assuming we did that and our schema was stored as blob:dock:1DFdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW, we can use it in a Verifiable Credential as follows:

"credentialSchema": {
  "id": "blob:dock:1DFdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW",
  "type": "JsonSchemaValidator2018"
}

The following is an example of a valid Verifiable Credential using the above schema:

{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "id": "uuid:0x9b561796d3450eb2673fed26dd9c07192390177ad93e0835bc7a5fbb705d52bc",
  "type": ["VerifiableCredential", "AlumniCredential"],
  "issuanceDate": "2020-03-18T19:23:24Z",
  "credentialSchema": {
    "id": "blob:dock:1DFdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW",
    "type": "JsonSchemaValidator2018"
  },
  "credentialSubject": {
    "id": "did:dock:5GL3xbkr3vfs4qJ94YUHwpVVsPSSAyvJcafHz1wNb5zrSPGi",
    "emailAddress": "john.smith@example.com",
    "alumniOf": "Example University"
  },
  "issuer": "did:dock:5GUBvwnV6UyRWZ7wjsBptSquiSHGr9dXAy8dZYUR9WdjmLUr",
  "proof": {
    "type": "Ed25519Signature2018",
    "created": "2020-04-22T07:50:13Z",
    "jws": "eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..GBqyaiTMhVt4R5P2bMGcLNJPWEUq7WmGHG7Wc6mKBo9k3vSo7v7sRKwqS8-m0og_ANKcb5m-_YdXC2KMnZwLBg",
    "proofPurpose": "assertionMethod",
    "verificationMethod": "did:dock:5GUBvwnV6UyRWZ7wjsBptSquiSHGr9dXAy8dZYUR9WdjmLUr#keys-1"
  }
}

In contrast, the following is an example of an invalid Verifiable Credential:

{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "id": "uuid:0x9b561796d3450eb2673fed26dd9c07192390177ad93e0835bc7a5fbb705d52bc",
  "type": ["VerifiableCredential", "AlumniCredential"],
  "issuanceDate": "2020-03-18T19:23:24Z",
  "credentialSchema": {
    "id": "blob:dock:1DFdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW",
    "type": "JsonSchemaValidator2018"
  },
  "credentialSubject": [
    {
      "id": "did:dock:5GL3xbkr3vfs4qJ94YUHwpVVsPSSAyvJcafHz1wNb5zrSPGi",
      "emailAddress": "john.smith@example.com",
      "alumniOf": "Example University"
    },
    {
      "id": "did:dock:6DF3xbkr3vfs4qJ94YUHwpVVsPSSAyvJcafHz1wNb5zrSPGi"
    }
  ],
  "issuer": "did:dock:5GUBvwnV6UyRWZ7wjsBptSquiSHGr9dXAy8dZYUR9WdjmLUr",
  "proof": {
    "type": "Ed25519Signature2018",
    "created": "2020-04-22T07:50:13Z",
    "jws": "eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..GBqyaiTMhVt4R5P2bMGcLNJPWEUq7WmGHG7Wc6mKBo9k3vSo7v7sRKwqS8-m0og_ANKcb5m-_YdXC2KMnZwLBg",
    "proofPurpose": "assertionMethod",
    "verificationMethod": "did:dock:5GUBvwnV6UyRWZ7wjsBptSquiSHGr9dXAy8dZYUR9WdjmLUr#keys-1"
  }
}

The reason this last Credential is invalid is that only the first subject properly follows the Schema; the second subject does not specify the fields emailAddress and alumniOf, which the Schema marks as required.
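The failing check can be sketched in plain JavaScript. This is a simplified stand-in for a real JSON Schema validator, assuming (as above) that the schema's required fields are emailAddress and alumniOf:

```javascript
// Minimal required-field check, mirroring what a JSON Schema validator
// enforces for the Email schema above (assumed required fields).
function checkRequired(subject, required) {
  return required.every((field) => field in subject);
}

// `credentialSubject` may be a single object or an array of subjects;
// every subject must satisfy the schema for the credential to validate.
function allSubjectsValid(credentialSubject, required) {
  const subjects = Array.isArray(credentialSubject)
    ? credentialSubject
    : [credentialSubject];
  return subjects.every((s) => checkRequired(s, required));
}

const required = ["emailAddress", "alumniOf"];

const valid = {
  id: "did:dock:5GL3xbkr3vfs4qJ94YUHwpVVsPSSAyvJcafHz1wNb5zrSPGi",
  emailAddress: "john.smith@example.com",
  alumniOf: "Example University",
};
// Same shape as the invalid credential above: the second subject
// omits both required fields.
const invalid = [valid, { id: "did:dock:6DF3xbkr3vfs4qJ94YUHwpVVsPSSAyvJcafHz1wNb5zrSPGi" }];

allSubjectsValid(valid, required);   // true
allSubjectsValid(invalid, required); // false
```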

Claim Deduction

The verifiable credentials data model is based on a machine comprehensible language called RDF. RDF represents arbitrary semantic knowledge as graphs. Computers can perform automatic deductive reasoning over RDF; given assumptions (represented as an RDF graph) and axioms (represented as logical rules), a computer can infer new conclusions and even prove them to other computers using deductive derivations (proofs).

Every VCDM credential is representable as an RDF graph. So computers can reason about them, deriving new conclusions that weren't explicitly stated by the issuer.

The Dock SDK exposes utilities for primitive deductive reasoning over verified credentials. The Verifier can either perform deduction themselves (expensive) or offload that responsibility to the Presenter of the credentials by accepting deductive proofs of composite claims.

In RDF, if graph A is true and graph B is true, then the union of those graphs is also true: A ∧ B -> A ∪ B.[1] Using this property we can combine multiple credentials and reason over their union.

Explicit Ethos

Imagine a signed credential issued by Alice claiming that Joe is a Member.

{
  ...
  "issuer": "Alice",
  "credentialSubject": {
    "id": "Joe",
    "@type": "Member"
  },
  "proof": ...,
  ...
}

The credential does not directly prove that Joe is a Member. Rather, it proves Alice Claims Joe to be a Member.

Not proven:

<Joe> <type> <Member> .

Proven:

<Joe> <type> <Member> <Alice> .

The fourth and final element of the proven quad is used here to indicate the source of the information, Alice. The final element of a quad is its graph name.

Signed credentials are ethos arguments, and a credential may be converted to a list of quads (a claimgraph). We call this representation "Explicit Ethos" form. If a credential is verified, then its Explicit Ethos form is true.
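A minimal sketch of that conversion, assuming a flat credentialSubject and ignoring JSON-LD expansion (toEthosQuads is a hypothetical helper, not the SDK API; the real conversion does considerably more):

```javascript
// Convert a credential's subject claims into quads whose graph name is
// the issuer, i.e. "Explicit Ethos" form: "issuer claims X".
function toEthosQuads(credential) {
  const quads = [];
  const subject = credential.credentialSubject;
  for (const [predicate, object] of Object.entries(subject)) {
    if (predicate === "id") continue; // `id` names the subject itself
    quads.push([subject.id, predicate, object, credential.issuer]);
  }
  return quads;
}

const cred = {
  issuer: "Alice",
  credentialSubject: { id: "Joe", "@type": "Member" },
};

toEthosQuads(cred);
// [["Joe", "@type", "Member", "Alice"]]
```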

Rule Format

To perform reasoning and to accept proofs, the Verifier must select the list of logical rules they wish to accept. Rules (or axioms, if you prefer) are modeled as if-then relationships.

const rules = [
  {
    if_all: [],
    then: [],
  },
];

During reasoning, when an if_all pattern is matched, its corresponding then pattern will be implied. In logic terms, each "rule" is the conditional premise of a modus ponens.

{ if_all: [A, B, C], then: [D, E] } means that if (A and B and C) then (D and E).

Rules can contain Bound or Unbound entities. Unbound entities are named variables. Each rule has its own unique scope, so Unbound entities introduced in the if_all pattern can be used in the then pattern.

{
  if_all: [
    [
      { Bound: alice },
      { Bound: likes },
      { Unbound: 'thing' },
      { Bound: defaultGraph },
    ],
  ],
  then: [
    [
      { Bound: bob },
      { Bound: likes },
      { Unbound: 'thing' },
      { Bound: defaultGraph },
    ],
  ],
}

means

For any ?thing:
  if [alice likes ?thing]
  then [bob likes ?thing]

in other words: ∀ thing: [alice likes thing] -> [bob likes thing]

If an unbound variable appears in the then pattern but does not appear in the if_all pattern the rule is considered invalid and will be rejected by the reasoner.
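To make the semantics concrete, here is a toy forward-chainer in plain JavaScript. It is not the SDK's reasoner: quads are 4-element arrays of strings, and Bound terms are plain strings rather than full RDF terms:

```javascript
// Match a single rule term against a quad element, extending `bindings`.
function matchTerm(term, value, bindings) {
  if ("Bound" in term) return term.Bound === value ? bindings : null;
  const name = term.Unbound;
  if (name in bindings) return bindings[name] === value ? bindings : null;
  return { ...bindings, [name]: value };
}

function matchPattern(pattern, quad, bindings) {
  let b = bindings;
  for (let i = 0; i < 4; i++) {
    b = matchTerm(pattern[i], quad[i], b);
    if (b === null) return null;
  }
  return b;
}

// Find every variable binding that satisfies all patterns in `if_all`.
function solve(patterns, quads, bindings = {}) {
  if (patterns.length === 0) return [bindings];
  const [first, ...rest] = patterns;
  const out = [];
  for (const quad of quads) {
    const b = matchPattern(first, quad, bindings);
    if (b !== null) out.push(...solve(rest, quads, b));
  }
  return out;
}

// Apply one rule: for each satisfying binding, instantiate `then`.
function applyRule(rule, quads) {
  const implied = [];
  for (const bindings of solve(rule.if_all, quads)) {
    for (const pattern of rule.then) {
      implied.push(pattern.map((t) => ("Bound" in t ? t.Bound : bindings[t.Unbound])));
    }
  }
  return implied;
}

// The "alice likes ?thing -> bob likes ?thing" rule from above.
const rule = {
  if_all: [[{ Bound: "alice" }, { Bound: "likes" }, { Unbound: "thing" }, { Bound: "defaultGraph" }]],
  then: [[{ Bound: "bob" }, { Bound: "likes" }, { Unbound: "thing" }, { Bound: "defaultGraph" }]],
};
const quads = [["alice", "likes", "chocolate", "defaultGraph"]];

applyRule(rule, quads);
// [["bob", "likes", "chocolate", "defaultGraph"]]
```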

Bound entities are constants of type RdfTerm. RDF nodes may be one of four things, an IRI, a blank node, a literal, or the default graph. For those familiar with algebraic datatypes:

enum RdfNode {
  Iri(Url),
  Blank(String),
  Literal {
    value: String,
    datatype: Url,
  },
  DefaultGraph,
}

The SDK represents RDF nodes like so:

const alice = { Iri: "did:sample:alice" };
const literal = {
  Literal: {
    value: "{}",
    datatype: "http://www.w3.org/1999/02/22-rdf-syntax-ns#JSON",
  },
};
// blank nodes are generally not useful in rule definitions
const blank = { Blank: "_:b0" };
const defaultGraph = { DefaultGraph: true };

Here is an example of a complete rule definition:

{
  if_all: [
    [
      { Unbound: 'food' },
      { Bound: { Iri: 'https://example.com/contains' } },
      { Bound: { Iri: 'https://example.com/butter' } },
      { Bound: { DefaultGraph: true } }
    ],
    [
      { Unbound: 'person' },
      { Bound: { Iri: 'http://xmlns.com/foaf/0.1/name' } },
      { Bound: { Literal: {
        value: 'Bob',
        datatype: 'http://www.w3.org/1999/02/22-rdf-syntax-ns#PlainLiteral',
      } } },
      { Bound: { DefaultGraph: true } }
    ],
  ],
  then: [
    [
      { Unbound: 'person' },
      { Bound: { Iri: 'https://example.com/likes' } },
      { Unbound: 'food' },
      { Bound: { DefaultGraph: true } }
    ]
  ],
}
// all things named "Bob" like all things containing butter

See the claim deduction tutorial for another example.

Limited Expressiveness

The astute among you may notice that the SDK's model for rules does not allow logical negation. This is by design. For one, it keeps the rule description language from being Turing-complete, so inference time is always bounded. Secondly, RDF adopts the Open World Assumption, so the absence of any particular statement in a credential/claimgraph is not meaningful within RDF semantics.

The rule language is expected to be expressive enough to implement OWL 2 EL, but not OWL 2 DL.

Terms

  • Verifier: The party that accepts and checks VCDM credential[s].
  • Issuer: The party that signed a VCDM credential.
  • VCDM: Verifiable Credentials Data Model
  • RDF: A model for representing general knowledge in a machine friendly way.
  • RDF triple: A single sentence consisting of subject, predicate and object. Each element of the triple is an RDF node.
  • RDF quad: A single sentence consisting of subject, predicate, object, graph. Each element of the quad is an RDF term.
  • RDF graph: A directed, labeled graph with RDF triples as edges.
  • RDF node: An element of a triple or quad: an IRI, a blank node, a literal, or (as a graph name) the default graph.
  • Composite Claim: An RDF triple which was inferred, rather than stated explicitly in a credential.
  • Explicit Ethos statement: A statement of the form "A claims X.", where X is also a statement. Explicit Ethos is encodable in natural human languages as well as in RDF.
[1] If you ever decide to implement your own algorithm to merge RDF graphs, remember that blank nodes exist and may need to be renamed depending on the type of graph representation in use.
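For illustration, here is a naive merge that renames blank nodes apart. The suffix scheme is arbitrary; any renaming that keeps the two graphs' blank nodes distinct works:

```javascript
// Blank node labels (those starting with "_:") are scoped to their own
// graph, so identically-named blanks in two graphs are DIFFERENT nodes
// and must be renamed apart before taking the union.
function renameBlanks(triples, suffix) {
  const rename = (t) => (t.startsWith("_:") ? `${t}${suffix}` : t);
  return triples.map((triple) => triple.map(rename));
}

function mergeClaimgraphs(a, b) {
  return [...renameBlanks(a, "_g0"), ...renameBlanks(b, "_g1")];
}

const g1 = [["_:b0", "type", "Member"]];
const g2 = [["_:b0", "type", "Admin"]]; // same label, different node!

mergeClaimgraphs(g1, g2);
// [["_:b0_g0", "type", "Member"], ["_:b0_g1", "type", "Admin"]]
```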

Anchors

The Dock blockchain includes a module explicitly intended for proof of existence. Aside from being explicitly supported by the on-chain runtime, it works the way you would expect: you post the hash of a document on-chain at a specific block, and later you can use that hash to prove the document existed at or before that block.

The PoE module accepts arbitrary bytes as an anchor, but in order to keep anchor size constant, the chain stores only the blake2b-256 hash of those bytes.

Developers are free to use the anchoring module however they want, tailoring their software to their own use case. An anchoring example can be found in the SDK examples directory. Dock provides a fully functioning reference client for anchoring. The client even implements batching anchors into a Merkle tree, so you can anchor multiple documents in a single transaction.

Private Delegation

Claim Deduction rules can express delegation of authority to issue credentials! It's expected to be a common enough use case that Dock has declared some RDF vocabulary and associated claim deduction rules to aid potential delegators.

An issuer may grant delegation authority to another issuer simply by issuing them a VCDM credential. Let's say did:ex:a wants to grant delegation authority to did:ex:b. did:ex:a simply issues a credential saying that did:ex:b may make any claim.

{
  "@context": [ "https://www.w3.org/2018/credentials/v1" ],
  "id": "urn:uuid:9b472d4e-492b-49f7-821c-d8c91e7fe767",
  "type": [ "VerifiableCredential" ],
  "issuer": "did:ex:a",
  "credentialSubject": {
    "id": "did:ex:b",
    "https://rdf.dock.io/alpha/2021#mayClaim": "https://rdf.dock.io/alpha/2021#ANYCLAIM"
  },
  "issuanceDate": "2021-03-18T19:23:24Z",
  "proof": { ... }
}

When did:ex:b wishes to issue a credential on behalf of did:ex:a, they should bundle it (e.g. in a presentation) with this "delegation" credential. A delegation credential constitutes a proof of delegation. A proof of delegation bundled with a credential issued by the delegate can prove that some statements were made on the authority of some root delegator.

In order to process delegated credentials a verifier accepts a bundle. The bundle includes both delegations and credentials issued by delegates. After verifying every credential within the bundle (including the delegations) the verifier uses Claim Deduction to determine which statements are proven by the delegated credential.
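A toy version of that determination, assuming verification has already produced a claimgraph of quads whose graph name records who made each statement. This is a sketch, not Dock's actual ruleset; the vocabulary IRIs are the ones introduced above:

```javascript
const MAY_CLAIM = "https://rdf.dock.io/alpha/2021#mayClaim";
const ANY_CLAIM = "https://rdf.dock.io/alpha/2021#ANYCLAIM";

// Return the statements proven on the authority of `root`: statements
// made by the root directly, or by anyone the root delegated to.
function provenByRoot(quads, root) {
  const delegates = new Set(
    quads
      .filter(([s, p, o, g]) => p === MAY_CLAIM && o === ANY_CLAIM && g === root)
      .map(([s]) => s)
  );
  return quads
    .filter(([, p]) => p !== MAY_CLAIM) // drop the delegation statements themselves
    .filter(([, , , g]) => g === root || delegates.has(g))
    .map(([s, p, o]) => [s, p, o]);
}

const quads = [
  ["did:ex:b", MAY_CLAIM, ANY_CLAIM, "did:ex:a"], // delegation by the root
  ["Joe", "type", "Member", "did:ex:b"],          // claim by the delegate
  ["Eve", "type", "Admin", "did:ex:eve"],         // claim by a stranger
];

provenByRoot(quads, "did:ex:a");
// [["Joe", "type", "Member"]]
```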

Dock's delegation ontology (i.e. RDF vocabulary) and ruleset are currently in alpha. See the Private Delegation tutorial for an example of their use.

Public Attestation

This feature should be considered Alpha.

RFC

VCDM Verifiable Credentials are a way to prove an attestation. Valid credentials prove statements of the form Issuer claims X, where X is itself a statement. One property of verifiable credentials is that the holder may keep them private simply by not sharing them with other parties. That property is sometimes useful, sometimes not. VCDM credentials are private and therefore not automatically discoverable, but Public Attestations give a decentralized identity the ability to post claims that are discoverable by any party. For Dock DIDs, attestations are linked on-chain, but Public Attestations are not specific to Dock. Other DID methods can implement public attestations by including them in DID documents.

Public Attestations are posted as RDF documents. Since RDF can represent, or link to, arbitrary types of data, Public Attestations can be used to publish arbitrary content.

Data Model

Public Attestations live in the DID document of their poster. A DID with a public attestation will have an extra property, "https://rdf.dock.io/alpha/2021#attestsDocumentContent". The value of that property is an IRI that is expected to point to an RDF document. Any statement contained in that document is considered to be a claim made by the DID.

If DID attestsDocumentContent DOC, then for every statement X in DOC, DID claims X.

Two IRI schemes are supported for pointing to attested documents: DIDs and IPFS links. DIDs are dereferenced and interpreted as JSON-LD. IPFS links are dereferenced and interpreted as Turtle documents. The SDK makes it easy to dereference DID and IPFS attestation documents, but the Public Attestation concept is extendable to other types of IRI, like hashlinks or data URIs.

For Dock DIDs, public attestations are made by setting the attestation for the DID on-chain. Changing the value of an attestation effectively revokes the previous attestation and issues a new one. A DID's attestation can also be set to None, which is equivalent to attesting an empty claimgraph. Dock DIDs have their attestation set to None by default. A Dock DID with attestation set to None will not contain the attestsDocumentContent key.
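The semantics can be sketched in plain JavaScript. Here fetchTriples is a hypothetical stand-in for dereferencing an IPFS or DID IRI, and the IRIs are illustrative:

```javascript
const ATTESTS = "https://rdf.dock.io/alpha/2021#attestsDocumentContent";

// Expand a DID's attestation into claims: every triple in the attested
// document becomes a quad whose graph name is the attesting DID.
function attestedClaims(did, didDocument, fetchTriples) {
  const target = didDocument[ATTESTS];
  if (!target) return []; // attestation set to None: empty claimgraph
  return fetchTriples(target["@id"]).map(([s, p, o]) => [s, p, o, did]);
}

// In-memory stand-in for IPFS dereferencing.
const docs = {
  "ipfs://Qmeg1": [["wd:Q25769", "wd:Property:P171", "wd:Q648422"]],
};

const didDoc = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:ex:ex",
  [ATTESTS]: { "@id": "ipfs://Qmeg1" },
};

attestedClaims("did:ex:ex", didDoc, (iri) => docs[iri] ?? []);
// [["wd:Q25769", "wd:Property:P171", "wd:Q648422", "did:ex:ex"]]
```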

Example of a DID attesting to a document in IPFS

did:ex:ex:

{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:ex:ex",
  "https://rdf.dock.io/alpha/2021#attestsDocumentContent": {
    "@id": "ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p"
  }
}

Content of ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p:

<https://www.wikidata.org/wiki/Q25769>
  <https://www.wikidata.org/wiki/Property:P171>
  <https://www.wikidata.org/wiki/Q648422> .

From these documents we can derive two facts. The first fact is encoded directly in the DID document.

Fact 1:

# `did:ex:ex` attests to the content of `ipfs://Qmeg1..`
<did:ex:ex> <https://rdf.dock.io/alpha/2021#attestsDocumentContent> <ipfs://Qmeg1..> .

The second fact is inferred. Since we know the content of ipfs://Qmeg1.., we know that ipfs://Qmeg1.. contains the statement wd:Q25769 wd:Property:P171 wd:Q648422 (the Short-eared Owl is in the genus "Asio"). did:ex:ex attests the document ipfs://Qmeg1.., and ipfs://Qmeg1.. states that the Short-eared Owl is in the genus "Asio", therefore:

Fact 2:

@prefix wd: <https://www.wikidata.org/wiki/> .
# `did:ex:ex` claims that the Short-eared Owl is in the genus "Asio".
wd:Q25769 wd:Property:P171 wd:Q648422 <did:ex:ex> .

Example of a DID attesting to multiple documents

While it is valid for DIDs to include multiple attested IRIs in a single DID document, Dock artificially limits the number of attestations to one per Dock DID. This is to encourage off-chain (IPFS) data storage. If a DID wishes to attest to multiple documents, there are two suggested options: 1) merge the two documents into a single document, or 2) attest to a single document which in turn notes an attestsDocumentContent for each of its children. The following is an example of option 2.

did:ex:ex:

{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:ex:ex",
  "https://rdf.dock.io/alpha/2021#attestsDocumentContent": {
    "@id": "ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p"
  }
}

ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p:

<did:ex:ex>
  <https://rdf.dock.io/alpha/2021#attestsDocumentContent>
  <ipfs://QmXoypizjW3WknFiJnLLwHCnL72vedxjQkDDP1mXWo6uco> . # document1
<did:ex:ex>
  <https://rdf.dock.io/alpha/2021#attestsDocumentContent>
  <ipfs://QmdycyxM3r882pHx3M63Xd8NUfsXoEmBnU8W6PgL9eY9cN> . # document2

Uses

Two properties of RDF have the potential to supercharge Public Attestations.

  1. It's a semantic knowledge representation, so it can be reasoned over.
  2. It's queryable in its native form.

Via these properties the SDK implements a "Curious Agent". The Curious Agent seeks out information. It starts with an initial kernel of knowledge (an RDF dataset) and follows a sense of curiosity, gradually building its knowledge graph by dereferencing IRIs and stopping when it finds nothing new to be curious about. As it crawls, it reasons over the information it has found, deducing new facts, which may in turn spark new curiosity. The Curious Agent accepts its curiosity as SPARQL queries. The logical rules it uses to reason are also configurable; axioms are provided to the Agent as conjunctive if-then statements (as in Claim Deduction). Within the SDK, the Curious Agent is simply called crawl().

The Curious Agent is sometimes referred to as "the crawler".

The use-case that drove implementation of the crawler is searching for publicly posted delegation information. As such, a bare minimum of functionality is implemented by crawl(). Want more? Consider contacting us.
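The crawling loop can be approximated in a few lines. An in-memory map stands in for real IRI dereferencing, and the reasoning and SPARQL-curiosity steps of the real crawl() are omitted:

```javascript
// Toy "curious agent": starting from a seed IRI, repeatedly dereference
// any IRI it has seen but not yet visited, accumulating triples until
// nothing new turns up.
function crawl(seed, lookup) {
  const visited = new Set();
  const triples = [];
  const frontier = [seed];
  while (frontier.length > 0) {
    const iri = frontier.pop();
    if (visited.has(iri)) continue;
    visited.add(iri);
    for (const triple of lookup(iri) ?? []) {
      triples.push(triple);
      // Be curious about every term we haven't dereferenced yet.
      for (const term of triple) {
        if (!visited.has(term)) frontier.push(term);
      }
    }
  }
  return triples;
}

// Hypothetical "web" of dereferenceable documents.
const web = {
  "did:ex:a": [["did:ex:a", "attests", "ipfs://doc1"]],
  "ipfs://doc1": [["did:ex:b", "mayClaim", "ANYCLAIM"]],
};

crawl("did:ex:a", (iri) => web[iri]);
// [["did:ex:a", "attests", "ipfs://doc1"], ["did:ex:b", "mayClaim", "ANYCLAIM"]]
```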

Public Delegation

This feature should be considered Alpha.

RFC

We combine Private Delegation and Public Attestation to get Public Delegation.

When a delegation is attested via a credential, we call that a Private Delegation. As discussed in the previous section, attestations can be made in other ways. When a delegation is attested publicly, we call it a Public Delegation.

Public Delegations remove the need for credential holders to manage and present delegation chains. With Public Delegations, credential verifiers may look up delegation information out-of-band.

Just like in Private Delegation, verified delegation information constitutes a knowledge graph that can be merged with the knowledge graph from a verified credential. The merged graphs are reasoned over to determine which facts are proven true.

Example

Let's say there is a trusted root issuer, did:ex:root. did:ex:root may delegate to others the authority to make claims on its behalf. To do so, did:ex:root would attest to a claimgraph like this one:

ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p:

@prefix dockalpha: <https://rdf.dock.io/alpha/2021#> .
<did:ex:delegate1> dockalpha:mayClaim dockalpha:ANYCLAIM .
<did:ex:delegate2> dockalpha:mayClaim dockalpha:ANYCLAIM .

When did:ex:root attests to the above triples, the following dataset is true.

@prefix dockalpha: <https://rdf.dock.io/alpha/2021#> .
<did:ex:delegate1> dockalpha:mayClaim dockalpha:ANYCLAIM <did:ex:root> .
<did:ex:delegate2> dockalpha:mayClaim dockalpha:ANYCLAIM <did:ex:root> .

did:ex:root attests to ipfs://Qmeg1Hq... by adding the IPFS link to its DID document.

{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:ex:root",
  "https://rdf.dock.io/alpha/2021#attestsDocumentContent": {
    "@id": "ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p"
  }
}

By modifying its DID document to include the IPFS link, did:ex:root attests to the delegation publicly.

Tutorials

  1. DID
  2. DID Resolver
  3. Verifiable Credentials
  4. Revocation
  5. Blobs and Schemas
  6. Claim Deduction
  7. PoE Anchors
  8. Private Delegation
  9. Public Delegation
  10. Anonymous Credentials

DID

If you are not familiar with DIDs, you can get a conceptual overview here.

Overview

DIDs in Dock are created by choosing a 32-byte identifier that is unique on the Dock chain, along with one or more public keys or controllers. Public keys can be added or removed by the DID's controller (which may be the DID itself) by signing with a key that has the capabilityInvocation verification relationship.

The DID can also be removed by providing a signature from the DID's controller.

The chain-state stores a few things for a DID: the active public keys, the controllers, service endpoints, and the current nonce of the DID. The nonce starts as the block number where the DID was created, and each subsequent action (adding/removing a key for itself or any DID it controls, adding a blob, etc.) should supply a nonce 1 higher than the previous one.

This is done for replay protection, but this detail is hidden in the API, so the caller should not have to worry about it.

DID creation

Create a new random DID.

import { DockDid } from "@docknetwork/credential-sdk/types";

const did = DockDid.random();

The DID is not yet registered on the chain. Before the DID can be registered, a public key needs to be created as well.

Keypair creation

We can create a random ed25519 keypair using Ed25519Keypair class.

import { Ed25519Keypair } from "@docknetwork/credential-sdk/keypairs";

const kp = Ed25519Keypair.random();

The resulting keypair can be used as follows:

const publicKey = kp.publicKey();
const privateKey = kp.privateKey();

const message = Uint8Array.from([1, 2, 3]);
const signature = kp.sign(message);

const verified = Ed25519Keypair.verify(message, signature, publicKey);

Registering a new DID on chain

Now that you have a DID and a public key, the DID can be registered on the Dock chain. Note that the public key associated with the DID is independent of the key used for sending the transaction and paying the fees.

Self-controlled DIDs

In most cases, a DID will have its own keys and will control itself, i.e. a self-controlled DID. Following is an example of DID creation in this scenario.

  1. First, create a DidKeypair object. The first argument is a DID reference and the second is the underlying keypair.

    import { DidKeypair } from "@docknetwork/credential-sdk/keypair";
    
    const didKeypair = new DidKeypair([did, 1], kp);
    
  2. Second, let's get a DID key with a verification relationship from the DID's keypair. The only argument is the verification relationship. A verification relationship can be one or more of: authentication, assertion, capabilityInvocation, or keyAgreement.

    const didKey = didKeypair.didKey();
    
  3. Now submit the transaction using a DockAPI object and the newly created DID did and didKey.

    // `DIDDocument` is assumed to be exported alongside `DockDid`
    import { DIDDocument } from "@docknetwork/credential-sdk/types";
    
    const document = DIDDocument.create(did, [didKey]);
    await dock.did.createDocument(document, didKeypair);
    

Keyless DIDs

A DID might not have any keys and thus be controlled by other DIDs. Assuming a DID did already exists, it can register a keyless DID did2 as follows:

const document = DIDDocument.create(did2);
await dock.did.createDocument(document, didKeypair);

Moreover, a DID can have keys for certain functions like authentication but still be controlled by other DID(s).

Fetching a DID from chain

To get a DID document, use getDocument

const result = await dock.did.getDocument(did);

Adding a key to an existing DID

A DID's controller can add a public key to an on-chain DID by preparing a signed payload. Each new key is given a numeric key index which is 1 greater than the last used index. Key indices start from 1.

  1. Create the new public key to be added to the DID:
    // the new keypair; it's an ed25519 pair in this example
    const newKp = Ed25519Keypair.random();
    
  2. Now send the signed payload to the chain in a transaction. In the arguments, [did, 2] specifies that a key is added at index 2 of DID did, while didKeypair specifies that DID did is signing the payload.
    const document = await dock.did.getDocument(did);
    document.addKey([did, 2], newKp.didKey());
    await dock.did.updateDocument(document, didKeypair);
    

Removing an existing DID from chain

A DID can be removed from the chain by sending the corresponding message signed with an appropriate key.

  1. Now send the message with the signature to the chain in a transaction:
    await dock.did.removeDocument(did, didKeypair);
    

For more details see example in examples/dock-did.js or the integration tests.

Note that the accounts used to send the transactions are independent of the keys associated with the DID, so the DID could have been created with one account, updated with a second, and removed with a third.

The accounts are not relevant in the data model and are not associated with the DID in chain-state.

DID resolver

The process of learning the DID Document of a DID is called DID resolution, and the tool that does the resolution is called a resolver.

Resolution involves looking at the DID method and then fetching the DID Document from the registry; the registry might be a centralized database or a blockchain.

The SDK supports resolving Dock DIDs natively. For other DIDs, resolving the DID through the Universal Resolver is supported.

Each resolver should extend the class DIDResolver and implement the resolve method that accepts a DID and returns the DID document.

There is another class, MultiResolver, that can accept several types of resolvers (objects of subclasses of DIDResolver). Once the MultiResolver is initialized with resolvers for different DID methods, it can resolve DIDs of those methods.
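The dispatch logic reduces to parsing the method out of did:&lt;method&gt;:&lt;identifier&gt; and routing to a resolver registered for that method, with a wildcard fallback. The names below are illustrative, not the SDK's internals:

```javascript
// Extract the method from `did:<method>:<identifier>`.
function didMethod(did) {
  const parts = did.split(":");
  if (parts[0] !== "did" || parts.length < 3) {
    throw new Error(`Invalid DID: ${did}`);
  }
  return parts[1];
}

// Build a resolve function that routes by method, falling back to a
// catch-all (e.g. a universal resolver) when no method matches.
function makeDispatcher(resolversByMethod, fallback) {
  return (did) => {
    const resolver = resolversByMethod[didMethod(did)] ?? fallback;
    return resolver(did);
  };
}

const resolve = makeDispatcher(
  { dock: (did) => `resolved ${did} via Dock node` },
  (did) => `resolved ${did} via universal resolver`
);

resolve("did:dock:5D...");  // "resolved did:dock:5D... via Dock node"
resolve("did:btcr:xk...");  // "resolved did:btcr:xk... via universal resolver"
```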

Dock resolver

The resolver for Dock DIDs, DockResolver, connects to the Dock blockchain to get the DID details.

The resolver is constructed by passing it a Dock API object so that it can connect to a Dock node. This is how you resolve a Dock DID:

import { DockResolver } from "@docknetwork/credential-sdk/resolver";

// Assuming the presence of Dock API object `dock`
const dockResolver = new DockResolver(dock);
// Say you had a DID `did:dock:5D.....`
const didDocument = await dockResolver.resolve("did:dock:5D.....");

Creating a resolver class for a different method

If you want to resolve DIDs other than Dock and do not have/want access to the universal resolver, you can extend the DIDResolver class to derive a custom resolver.

Following is an example to build a custom Ethereum resolver. It uses the library ethr-did-resolver and accepts a provider information as configuration. The example below uses Infura to get access to an Ethereum node and read the DID off Ethereum.

import { DIDResolver } from "@docknetwork/credential-sdk/resolver";
import ethr from "ethr-did-resolver";

// Infura's Ethereum provider for the main net.
const ethereumProviderConfig = {
  networks: [
    {
      name: "mainnet",
      rpcUrl: "https://mainnet.infura.io/v3/blahblahtoken",
    },
  ],
};

// Custom ethereum resolver class
class EtherResolver extends DIDResolver {
  static METHOD = "ethr";

  constructor(config) {
    super();
    this.ethres = ethr.getResolver(config).ethr;
  }

  async resolve(did) {
    const parsed = this.parseDid(did);
    try {
      return this.ethres(did, parsed);
    } catch (e) {
      throw new NoDIDError(did);
    }
  }
}

// Construct the resolver
const ethResolver = new EtherResolver(ethereumProviderConfig);

// Say you had a DID `did:ethr:0x6f....`
const didDocument = await ethResolver.resolve("did:ethr:0x6f....");

Universal resolver

To resolve DIDs using the Universal Resolver, use the UniversalResolver. It needs the URL of the universal resolver and assumes the universal resolver from this codebase is running at the URL.

import { UniversalResolver } from "@docknetwork/credential-sdk/resolver";

// Change the resolver URL to something else in case you cannot use the resolver at https://uniresolver.io
const universalResolverUrl = "https://uniresolver.io";
const universalResolver = new UniversalResolver(universalResolverUrl);

// Say you had a DID `did:btcr:xk....`
const didDocument = await universalResolver.resolve("did:btcr:xk....");

Resolving DIDs of several DID methods with a single resolver

In case you need to resolve DIDs from more than one method, a DIDResolver can be created by passing resolvers of various DID methods to the derived class constructor.

A derived DIDResolver without an overridden resolve accepts a list of resolvers, each of which is dispatched to according to its prefix and method configuration. The example below includes resolvers for the DID methods dock and ethr.

For resolving DID of any other method, UniversalResolver object will be used.

import {
  DockDIDResolver,
  DIDResolver,
  WILDCARD,
} from "@docknetwork/credential-sdk/resolver";

class MultiDIDResolver extends DIDResolver {
  static METHOD = WILDCARD;

  constructor(dock) {
    super([
      new DockDIDResolver(dock),
      new EtherResolver(ethereumProviderConfig),
      new UniversalResolver(universalResolverUrl),
    ]);
  }
}

const multiResolver = new MultiDIDResolver(dock);

// Say you had a DID `did:dock:5D....`, then the `DockDIDResolver` will be used, as there is a resolver for the dock method.
const didDocumentDock = await multiResolver.resolve("did:dock:5D....");

// Say you had a DID `did:btcr:xk....`, then the `UniversalResolver` will be used, as there is no resolver for the btcr method.
const didDocumentBtc = await multiResolver.resolve("did:btcr:xk....");

Verifiable Credentials and Verifiable Presentations: issuing, signing and verification


Incremental creation and verification of Verifiable Credentials

The client-sdk exposes a VerifiableCredential class that is useful for incrementally creating valid Verifiable Credentials of any type, signing them, and verifying them. Once the credential is initialized, you can sequentially call the different methods provided by the class to add contexts, types, issuance dates, and everything else.

Building a Verifiable Credential

The first step to building a Verifiable Credential is to initialize it. We can do that using the VerifiableCredential class constructor, which takes a credentialId as its sole argument:

let vc = new VerifiableCredential("http://example.edu/credentials/2803");

You now have an unsigned Verifiable Credential in the vc variable! This Credential isn't signed yet, since we only just initialized it. However, it brings some useful defaults to make your life easier.

>    vc.context
<-   ["https://www.w3.org/2018/credentials/v1"]
>    vc.issuanceDate
<-   "2020-04-14T14:48:48.486Z"
>    vc.type
<-   ["VerifiableCredential"]
>    vc.credentialSubject
<-   []

The default context is an array with "https://www.w3.org/2018/credentials/v1" as first element. This is required by the VCDMv1 specs so having it as default helps ensure your Verifiable Credentials will be valid in the end.

A similar approach was taken for the type property, where the default is an array already populated with "VerifiableCredential". This is also required by the specs. The subject property is required to exist, so it is initialized for you as well, although it is empty for now. Finally, the issuanceDate is set to the moment you initialized the VerifiableCredential object. You can change this later if desired, but it helps to have it in the right format from the get-go.

We could also have checked those defaults more easily by checking the Verifiable Credential's JSON representation.

This can be achieved by calling the toJSON() method on it:

>    vc.toJSON()
<-   {
       "@context": [ "https://www.w3.org/2018/credentials/v1" ],
       "credentialSubject": [],
       "id": "http://example.edu/credentials/2803",
       "type": [
         "VerifiableCredential"
       ],
       "issuanceDate": "2020-04-14T14:48:48.486Z"
     }

An interesting thing to note here is the transformation happening to some of the root level keys in the JSON representation of a VerifiableCredential object.

For example context gets transformed into @context and subject into credentialSubject.

This is to ensure compliance with the Verifiable Credential Data Model specs while at the same time providing you with a clean interface to the VerifiableCredential class in your code.
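The renaming itself is a simple key map. This is an illustrative sketch, not the SDK's implementation:

```javascript
// Clean property names on the class map to VCDM names in the
// serialized credential; everything else passes through unchanged.
const KEY_MAP = { context: "@context", subject: "credentialSubject" };

function toVcdmJson(fields) {
  const out = {};
  for (const [key, value] of Object.entries(fields)) {
    out[KEY_MAP[key] ?? key] = value;
  }
  return out;
}

toVcdmJson({ context: ["https://www.w3.org/2018/credentials/v1"], subject: [] });
// { "@context": ["https://www.w3.org/2018/credentials/v1"], credentialSubject: [] }
```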

Once your Verifiable Credential has been initialized, you can proceed to use the rest of the building functions to define it completely before finally signing it.

Adding a Context

A context can be added with the addContext method. It accepts a single argument context which can either be a string (in which case it needs to be a valid URI), or an object:

>   vc.addContext('https://www.w3.org/2018/credentials/examples/v1')
>   vc.context
<-  [
      'https://www.w3.org/2018/credentials/v1',
      'https://www.w3.org/2018/credentials/examples/v1'
    ]

Adding a Type

A type can be added with the addType function. It accepts a single argument type that needs to be a string:

>   vc.addType('AlumniCredential')
>   vc.type
<-  [
      'VerifiableCredential',
      'AlumniCredential'
    ]

Adding a Subject

A subject can be added with the addSubject function. It accepts a single argument subject that needs to be an object with an id property:

>   vc.addSubject({ id: 'did:dock:123qwe123qwe123qwe', alumniOf: 'Example University' })
>   vc.credentialSubject
<-  {id: 'did:dock:123qwe123qwe123qwe', alumniOf: 'Example University'}

Setting a Status

A status can be set with the setStatus function. It accepts a single argument status that needs to be an object with an id property:

>   vc.setStatus({ id: "https://example.edu/status/24", type: "CredentialStatusList2017" })
>   vc.status
<-  {
        "id": "https://example.edu/status/24",
        "type": "CredentialStatusList2017"
    }

Setting the Issuance Date

The issuance date is set by default to the datetime you first initialize your VerifiableCredential object.

This means that you don't necessarily need to call this method to achieve a valid Verifiable Credential (Verifiable Credentials are required to have an issuanceDate property).

However, if you need to change this date you can use the setIssuanceDate method. It takes a single argument issuanceDate that needs to be a string with a valid ISO formatted datetime:

>   vc.issuanceDate
<-  "2020-04-14T14:48:48.486Z"
>   vc.setIssuanceDate("2019-01-01T14:48:48.486Z")
>   vc.issuanceDate
<-  "2019-01-01T14:48:48.486Z"

Setting an Expiration Date

An expiration date is not set by default as it isn't required by the specs. If you wish to set one, you can use the setExpirationDate method.

It takes a single argument expirationDate that needs to be a string with a valid ISO formatted datetime:

>   vc.setExpirationDate("2029-01-01T14:48:48.486Z")
>   vc.expirationDate
<-  "2029-01-01T14:48:48.486Z"

Signing a Verifiable Credential

Once you've crafted your Verifiable Credential it is time to sign it. This can be achieved with the sign method.

It requires a keyDoc parameter (an object with the params and keys you'll use for signing) and it also accepts a boolean compactProof that determines whether you want to compact the JSON-LD or not:

>   await vc.sign(keyDoc)

Please note that signing is an async process. Once done, your vc object will have a new proof field:

>   vc.proof
<-  {
        type: "EcdsaSecp256k1Signature2019",
        created: "2020-04-14T14:48:48.486Z",
        jws: "eyJhbGciOiJFUzI1NksiLCJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdfQ..MEQCIAS8ZNVYIni3oShb0TFz4SMAybJcz3HkQPaTdz9OSszoAiA01w9ZkS4Zx5HEZk45QzxbqOr8eRlgMdhgFsFs1FnyMQ",
        proofPurpose: "assertionMethod",
        verificationMethod: "https://gist.githubusercontent.com/faustow/13f43164c571cf839044b60661173935/raw"
    }

Verifying a Verifiable Credential

Once your Verifiable Credential has been signed you can proceed to verify it with the verify method. The verify method takes an optional object of arguments.

If you've used DIDs you need to pass a resolver for them. You can also use the boolean compactProof (to compact the JSON-LD).

If your credential uses the credentialStatus field, it will be checked for revocation unless you pass the skipRevocationCheck flag.

>   const result = await vc.verify({ ... })
>   result
<-  {
      verified: true,
      results: [
        {
          proof: [
            {
                '@context': 'https://w3id.org/security/v2',
                type: "EcdsaSecp256k1Signature2019",
                created: "2020-04-14T14:48:48.486Z",
                jws: "eyJhbGciOiJFUzI1NksiLCJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdfQ..MEQCIAS8ZNVYIni3oShb0TFz4SMAybJcz3HkQPaTdz9OSszoAiA01w9ZkS4Zx5HEZk45QzxbqOr8eRlgMdhgFsFs1FnyMQ",
                proofPurpose: "assertionMethod",
                verificationMethod: "https://gist.githubusercontent.com/faustow/13f43164c571cf839044b60661173935/raw"
            }
          ],
          verified: true
        }
      ]
    }

Please note that the verification is an async process that returns an object when the promise resolves. A boolean value for the entire verification process can be checked at the root level verified property.


Incremental creation and verification of Verifiable Presentations

The client-sdk exposes a VerifiablePresentation class that is useful to incrementally create valid Verifiable Presentations of any type, sign them and verify them.

Once the presentation is initialized, you can sequentially call the different methods provided by the class to add contexts, types, holders and credentials.

Building a Verifiable Presentation

The first step to build a Verifiable Presentation is to initialize it. We can do that using the VerifiablePresentation class constructor, which takes an id as its sole argument:

let vp = new VerifiablePresentation("http://example.edu/credentials/1986");

You now have an unsigned Verifiable Presentation in the vp variable!

This Presentation isn't signed since we only just initialized it. However, it brings some useful defaults to make your life easier.

>    vp.context
<-   ["https://www.w3.org/2018/credentials/v1"]
>    vp.type
<-   ["VerifiablePresentation"]
>    vp.credentials
<-   []

The default context is an array with "https://www.w3.org/2018/credentials/v1" as first element. This is required by the VCDMv1 specs so having it as default helps ensure your Verifiable Presentations will be valid in the end.

A similar approach was taken on the type property, where the default is an array with "VerifiablePresentation" already populated. This is also required by the specs.

The credentials property is required to exist, so this is already initialized for you as well although it is empty for now.

We could also have checked those defaults more easily by checking the Verifiable Presentation's JSON representation.

This can be achieved by calling the toJSON() method on it:

>    vp.toJSON()
<-   {
       "@context": [ "https://www.w3.org/2018/credentials/v1" ],
       "id": "http://example.edu/credentials/1986",
       "type": [
         "VerifiablePresentation"
       ],
       "verifiableCredential": [],
     }

An interesting thing to note here is the transformation happening to some of the root level keys in the JSON representation of a VerifiablePresentation object.

For example context gets transformed into @context and credentials into verifiableCredential. This is to ensure compliance with the Verifiable Credentials Data Model specs while at the same time providing you with a clean interface to the VerifiablePresentation class in your code.

Once your Verifiable Presentation has been initialized, you can proceed to use the rest of the building functions to define it completely before finally signing it.

Adding a Context

A context can be added with the addContext method. It accepts a single argument context which can either be a string (in which case it needs to be a valid URI), or an object

>   vp.addContext('https://www.w3.org/2018/credentials/examples/v1')
>   vp.context
<-  [
      'https://www.w3.org/2018/credentials/v1',
      'https://www.w3.org/2018/credentials/examples/v1'
    ]

Adding a Type

A type can be added with the addType function. It accepts a single argument type that needs to be a string:

>   vp.addType('CredentialManagerPresentation')
>   vp.type
<-  [
      'VerifiablePresentation',
      'CredentialManagerPresentation'
    ]

Setting a Holder

Setting a Holder is optional and can be achieved using the setHolder method. It accepts a single argument holder that needs to be a string (a URI for the entity that is generating the presentation):

>   vp.setHolder('https://example.com/credentials/1234567890');
>   vp.holder
<-  'https://example.com/credentials/1234567890'

Adding a Verifiable Credential

Your Verifiable Presentations can contain one or more Verifiable Credentials inside.

Adding a Verifiable Credential can be achieved using the addCredential method. It accepts a single argument credential that needs to be an object (a valid, signed Verifiable Credential):

>   vp.addCredential(vc);
>   vp.credentials
<-  [
      {...}
    ]

Please note that the example was truncated to enhance readability.

Signing a Verifiable Presentation

Once you've crafted your Verifiable Presentation and added your Verifiable Credentials to it, it is time to sign it.

This can be achieved with the sign method. It requires a keyDoc parameter (an object with the params and keys you'll use for signing), and a challenge string for the proof.

It also accepts a domain string for the proof, a resolver in case you're using DIDs and a boolean compactProof that determines whether you want to compact the JSON-LD or not:

>   await vp.sign(
          keyDoc,
          'some_challenge',
          'some_domain',
        );

Please note that signing is an async process. Once done, your vp object will have a new proof field:

>   vp.proof
<-  {
      "type": "EcdsaSecp256k1Signature2019",
      "created": "2020-04-14T20:57:01Z",
      "challenge": "some_challenge",
      "domain": "some_domain",
      "jws": "eyJhbGciOiJFUzI1NksiLCJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdfQ..MEUCIQCTTpivdcTKFDNdmzqe3l0nV6UjXgv0XvzCge--CTAV6wIgWfLqn_62U8jHkNSujrHFRmJ_ULj19b5rsNtjum09vbg",
      "proofPurpose": "authentication",
      "verificationMethod": "https://gist.githubusercontent.com/faustow/13f43164c571cf839044b60661173935/raw"
    }

Verifying a Verifiable Presentation

Once your Verifiable Presentation has been signed you can proceed to verify it with the verify method.

If you've used DIDs you need to pass a resolver for them. You can also use the boolean compactProof (to compact the JSON-LD).

If your credential uses the credentialStatus field, it will be checked for revocation unless you pass skipRevocationCheck. For the simplest cases you only need a challenge string and possibly a domain string:

>   const results = await vp.verify({ challenge: 'some_challenge', domain: 'some_domain' });
>   results
<-  {
      "presentationResult": {
        "verified": true,
        "results": [
          {
            "proof": {
              "@context": "https://w3id.org/security/v2",
              "type": "EcdsaSecp256k1Signature2019",
              "created": "2020-04-14T20:57:01Z",
              "challenge": "some_challenge",
              "domain": "some_domain",
              "jws": "eyJhbGciOiJFUzI1NksiLCJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdfQ..MEUCIQCTTpivdcTKFDNdmzqe3l0nV6UjXgv0XvzCge--CTAV6wIgWfLqn_62U8jHkNSujrHFRmJ_ULj19b5rsNtjum09vbg",
              "proofPurpose": "authentication",
              "verificationMethod": "https://gist.githubusercontent.com/faustow/13f43164c571cf839044b60661173935/raw"
            },
            "verified": true
          }
        ]
      },
      "verified": true,
      "credentialResults": [
        {
          "verified": true,
          "results": [
            {
              "proof": {
                "@context": "https://w3id.org/security/v2",
                "type": "EcdsaSecp256k1Signature2019",
                "created": "2020-04-14T20:49:00Z",
                "jws": "eyJhbGciOiJFUzI1NksiLCJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdfQ..MEUCIQCCCRuJbSUPePpOfkxsMJeQAqpydOFYWsA4cGiQRAR_QQIgehRZh8XE24hV0TPl5bMS6sNeKtC5rwZGfmflfY0eS-Y",
                "proofPurpose": "assertionMethod",
                "verificationMethod": "https://gist.githubusercontent.com/faustow/13f43164c571cf839044b60661173935/raw"
              },
              "verified": true
            }
          ]
        }
      ]
    }

Please note that the verification is an async process that returns an object when the promise resolves.

This object contains separate results for the verification processes of the included Verifiable Credentials and the overall Verifiable Presentation.

A boolean value for the entire verification process can be checked at the root level verified property.
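Given that result shape, a small convenience check (a standalone helper, not part of the SDK) can confirm that both the presentation proof and every enclosed credential verified:

```javascript
// Standalone helper over the documented result shape: true only if the
// presentation itself and every credential result verified.
function allVerified(results) {
  return Boolean(
    results.verified
      && results.presentationResult
      && results.presentationResult.verified
      && (results.credentialResults || []).every((c) => c.verified),
  );
}
```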

Using DIDs

The examples shown above use different kinds of URIs as id property of different sections. It is worth mentioning that the use of DIDs is not only supported but also encouraged.

Their usage is very simple: create as many DIDs as you need and then use them instead of the URIs shown above.

For example, when adding a subject to a Verifiable Credential, we use a DID instead of a regular URI in the id property of the object: vc.addSubject({ id: 'did:dock:123qwe123qwe123qwe', alumniOf: 'Example University' }).

If you don't know how to create a DID there's a specific tutorial on DIDs you can read.

Bear in mind that you will need to provide a resolver method if you decide to use DIDs in your Verifiable Credentials or Verifiable Presentations. More on resolvers can be found in the tutorial on Resolvers.

Here's an example of issuing a Verifiable Credential using DIDs, provided that you've created a DID and stored it in issuerDID:

const issuerKey = getKeyDoc(
  issuerDID,
  dock.keyring.addFromUri(issuerSeed, null, "ed25519"),
  "Ed25519VerificationKey2018"
);
await vc.sign(issuerKey);
const verificationResult = await signedCredential.verify({
  resolver,
  compactProof: true,
});
console.log(verificationResult.verified); // Should print `true`

Creating a keyDoc

As the above examples show, signing credentials and presentations requires keypairs to be formatted into a keyDoc object.

There is a helper function for this formatting called getKeyDoc, located in the vc helpers.

Its usage is simple: it accepts a did string (a DID in fully qualified form), a keypair object (generated either with polkadot-js's keyring for Sr25519 and Ed25519, or with generateEcdsaSecp256k1Keypair for curve secp256k1), and a type string containing the type of the provided key (one of the supported 'Sr25519VerificationKey2020', 'Ed25519VerificationKey2018', or 'EcdsaSecp256k1VerificationKey2019'):

const keyDoc = getKeyDoc(did, keypair, type);

Please check the example in the previous section or refer to the presentation integration tests for a live example.

Revocation

This guide provides instructions for managing credential revocation using StatusList2021Credential.

Prerequisites

  • Ensure you have access to Dock's Credential SDK and Blockchain API.
  • The Dock API is initialized and connected to the blockchain.
  • You have a valid issuer DID registered on the Dock network.

Steps to Manage Revocation

Create a Status List Credential

  1. Generate a Random Status List ID: Create a unique identifier for tracking revocation status.

    import { DockStatusListCredentialId } from '@docknetwork/credential-sdk/types';
    
    const statusListCredentialId = DockStatusListCredentialId.random();
    
  2. Create Status List Credential: Use the issuer's key to create a new status list credential with a specified purpose (e.g., "suspension").

    import { StatusList2021Credential } from '@docknetwork/credential-sdk/types';
    
    const issuerKey = /* Obtain issuer key document */;
    const statusListCred = await StatusList2021Credential.create(
      issuerKey,
      statusListCredentialId,
      { statusPurpose: "suspension" },
    );
    
    await modules.statusListCredential.createStatusListCredential(
      statusListCredentialId,
      statusListCred,
      issuerDID,
      issuerKeyPair,
    );
    

Issue a Credential with Revocation Data

  1. Add Revocation Entry: Include a status list entry in the credential for potential revocation.

    import { addStatusList21EntryToCredential } from '@docknetwork/credential-sdk/vc';
    
    let unsignedCred = /* Obtain unsigned credential */;
    unsignedCred = addStatusList21EntryToCredential(
      unsignedCred,
      statusListCredentialId,
      statusListCredentialIndex, // Unique index for the credential
      "suspension", // Purpose matching the status list credential
    );
    
  2. Issue Credential: Sign and issue the credential with the added status list entry.

    import { issueCredential, defaultDocumentLoader } from '@docknetwork/credential-sdk/vc';
    
    const credential = await issueCredential(
      issuerKey,
      unsignedCred,
      void 0,
      defaultDocumentLoader(resolver),
    );
    

Revoke the Credential

  1. Fetch and Update Status List: Retrieve the existing status list credential and update it to revoke the issued credential by its index.

    const fetchedCred = await modules.statusListCredential.getStatusListCredential(statusListCredentialId);
    await fetchedCred.update(issuerKey, {
      revokeIndices: [statusListCredentialIndex], // Index of the credential to revoke
    });
    
    await modules.statusListCredential.updateStatusListCredential(
      statusListCredentialId,
      fetchedCred,
      issuerDID,
      issuerKeyPair,
    );
    

Verify the Revoked Credential

  1. Verify Credential Status: Check the validity of the credential post-revocation. Verification should indicate the credential is no longer valid.

    const result = await verifyCredential(credential, {
      resolver,
      compactProof: true,
    });
    if (!result.verified) {
      console.error("Credential is revoked or suspended.");
    }
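Conceptually, a StatusList2021 credential encodes revocation as a bitstring indexed by each credential's statusListCredentialIndex. The toy sketch below illustrates the indexing only; it is not the SDK's implementation (the real encoded list is also compressed and base64-encoded):

```javascript
// Toy illustration of status-list indexing: one bit per credential
// index. Not the SDK's implementation.
class StatusBits {
  constructor(size) {
    this.bytes = new Uint8Array(Math.ceil(size / 8));
  }

  // Set the bit for the given credential index.
  revoke(index) {
    this.bytes[index >> 3] |= 1 << (index & 7);
  }

  // Check whether the bit for the given credential index is set.
  isRevoked(index) {
    return (this.bytes[index >> 3] & (1 << (index & 7))) !== 0;
  }
}
```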
    

Schemas

Table of contents

  1. Intro
  2. Blobs
    1. Writing a Blob
    2. Reading a Blob
  3. Schemas
    1. Creating a Schema
    2. Writing a Schema
    3. Reading a Schema
    4. Schemas in Verifiable Credentials
    5. Schemas in Verifiable Presentations

Intro

Data Schemas are a useful way of enforcing a specific structure on a collection of data, like a Verifiable Credential. Data schemas serve a different purpose than the @context property in a Verifiable Credential: the latter neither enforces data structure nor data syntax, nor enables the definition of arbitrary encodings to alternate representation formats.

Blobs

Schemas are stored on chain as a Blob in the Blob Storage module of the Dock chain, so understanding blobs is important before diving into Schemas.

Writing a Blob

A new Blob can be registered on the Dock Chain by using the method writeToChain in the BlobModule class. It accepts a blob object with the struct to store on chain (its contents can be either a hex string or a byte array) and a keyPair to sign the payload with. You'll get a signed extrinsic that you can send to the Dock chain:

const blobId = DockBlobId.random(); // 32-bytes long hex string to use as the blob's id
const blobStruct = {
  id: blobId,
  blob: blobHexOrArray, // Contents of your blob as a hex string or byte array
};
const result = await dock.blob.new(blobStruct, ownerDid, didKeypair);

If everything worked properly result will indicate a successful transaction. We'll see how to retrieve the blob next.

Reading a Blob

A Blob can be retrieved by using the method get in the BlobModule class. It accepts a blobId string param which can either be a fully-qualified blob id like blob:dock:0x... or just its hex identifier. In response you will receive a two-element array:

const chainBlob = await dock.blob.get(blobId);

chainBlob's first element will be the blob's author (a DID). Its second element will be the contents of your blob (blobHexOrArray in our previous example).

Schemas

Since Schemas are stored on chain as a Blob in the Blob Storage module, the Schema class uses the BlobModule class internally. Schemas are identified and retrieved by their unique blobId, a 32 byte long hex string. As mentioned, the chain is agnostic to the contents of blobs and thus to schemas.

Creating a Schema

The first step to creating a Schema is to initialize it. We can do that using the Schema class constructor, which accepts an optional id string as its sole argument:

const myNewSchema = new Schema();

When an id isn't passed, a random blobId will be assigned as the schema's id.

> myNewSchema.id
<- "blob:dock:5Ek98pDX61Dwo4EDmsogUkYMBqfFHtiS5hVS7xHuVvMByh3N"

Also worth noting is the JSON representation of the schema as it is right now, which can be obtained by calling the toJSON method on your new schema:

>  myNewSchema.toJSON()
<- {"id":"0x768c21de02890dad5dbf6f108b6822b865e4ea495bb7f43f8947714e90fcc060"}

where you can see that the schema's id gets modified with getHexIdentifierFromBlobID.

Setting a JSON Schema

A JSON schema can be added with the setJSONSchema method. It accepts a single argument json (an object that is checked to be a valid JSON schema before being added):

>   const someNewJSONSchema = {
         $schema: 'http://json-schema.org/draft-07/schema#',
         description: 'Dock Schema Example',
         type: 'object',
         properties: {
           id: {
             type: 'string',
           },
           emailAddress: {
             type: 'string',
             format: 'email',
           },
           alumniOf: {
             type: 'string',
           },
         },
         required: ['emailAddress', 'alumniOf'],
         additionalProperties: false,
       }
>   myNewSchema.setJSONSchema(someNewJSONSchema)
>   myNewSchema.schema === someNewJSONSchema
<-  true

Formatting for storage

Your new schema is now ready to be written to the Dock chain. The last step is to format it properly for the BlobModule to be able to use it. That's where the toBlob method comes in handy:

>   myNewSchema.toBlob()
<-  {
      id: ...,
      blob: ...,
    }

Writing a Schema to the Dock chain

Writing a Schema to the Dock chain is similar to writing any other Blob:

>  const formattedBlob = myNewSchema.toBlob(dockDID);
>  await myNewSchema.writeToChain(modules.blob, dockDID, keypair);

Reading a Schema from the Dock chain

Reading a Schema from the Dock chain can easily be achieved by using the get method from the Schema class. It accepts a string id param (a fully-qualified blob id like "blob:dock:0x..." or just its hex identifier) and a blob module instance:

>  const result = await Schema.get(blob.id, modules.blob);

result[0] will be the author of the Schema, and result[1] will be the contents of the schema itself.

Schemas in Verifiable Credentials

The VCDM spec specifies how the credentialSchema property should be used when present. Basically, once you've created and stored your Schema on chain, you can reference it by its blobId when issuing a Verifiable Credential. Let's see an example:

>    const dockApi = new DockAPI();
>    const dockResolver = new DockResolver(dockApi);
>    let validCredential = new VerifiableCredential('https://example.com/credentials/123');
>    validCredential.addContext('https://www.w3.org/2018/credentials/examples/v1');
>    const ctx1 = {
      '@context': {
        emailAddress: 'https://schema.org/email',
      },
    };
>    validCredential.addContext(ctx1);
>    validCredential.addType('AlumniCredential');
>    validCredential.addSubject({
      id: dockDID,
      alumniOf: 'Example University',
      emailAddress: 'john@gmail.com',
    });
>    validCredential.setSchema(blobHexIdToQualified(blobId), 'JsonSchemaValidator2018');
>    await validCredential.sign(keyDoc);
>    await validCredential.verify({
       resolver: dockResolver,
       compactProof: true,
     });

Assuming that the blobId points to a schema from the previous examples, the verification above would fail if the credentialSubject in the Verifiable Credential were missing either the alumniOf or emailAddress property.
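The constraints that cause that failure can be illustrated with a toy check of the two schema features the example relies on, required and additionalProperties. This is a sketch only; the SDK itself runs a full JSON Schema validator:

```javascript
// Toy check, not the SDK's validator: enforces only `required` and
// `additionalProperties: false` from the example schema above.
function subjectMatchesSchema(schema, subject) {
  // Every required property must be present.
  for (const name of schema.required || []) {
    if (!(name in subject)) return false;
  }
  // With additionalProperties: false, no unknown keys are allowed.
  if (schema.additionalProperties === false) {
    for (const key of Object.keys(subject)) {
      if (!(key in (schema.properties || {}))) return false;
    }
  }
  return true;
}
```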

Schemas in Verifiable Presentations

The current implementation does not specify a way to specify a schema for a Verifiable Presentation itself. However, a Verifiable Presentation may contain any number of Verifiable Credentials, each of which may or may not use a Schema themselves. The verify method for Verifiable Presentations will enforce a schema validation in each of the Verifiable Credentials contained in a presentation that are using the credentialSchema and credentialSubject properties simultaneously. This means that the verification of an otherwise valid Verifiable Presentation will fail if one of the Verifiable Credentials contained within it uses a Schema and fails to pass schema validation.

Claim Deduction

Specifying Axioms

A Verifier has complete and low level control over the logical rules they deem valid. Rules may vary from use-case to use-case and from verifier to verifier.

A common first step when writing a ruleset is to unwrap Explicit Ethos statements.

Simple Unwrapping of Explicit Ethos

This ruleset names a specific issuer and states that any claims made by that issuer are true.

const rules = [
  {
    if_all: [
      [
        { Unbound: "subject" },
        { Unbound: "predicate" },
        { Unbound: "object" },
        { Bound: { Iri: "did:example:issuer" } },
      ],
    ],
    then: [
      [
        { Unbound: "subject" },
        { Unbound: "predicate" },
        { Unbound: "object" },
        { Bound: { DefaultGraph: true } },
      ],
    ],
  },
];

That single rule is enough for some use-cases but it's not scalable. What if we want to allow more than one issuer? Instead of copying the same rule for each issuer we trust, let's define "trustworthiness".

Unwrapping Explicit Ethos by Defining Trustworthiness

const trustworthy = {
  Bound: { Iri: "https://www.dock.io/rdf2020#Trustworthy" },
};
const type = {
  Bound: { Iri: "http://www.w3.org/1999/02/22-rdf-syntax-ns#type" },
};
const defaultGraph = { Bound: { DefaultGraph: true } };

const rules = [
  {
    if_all: [
      [{ Unbound: "issuer" }, type, trustworthy, defaultGraph],
      [
        { Unbound: "s" },
        { Unbound: "p" },
        { Unbound: "o" },
        { Unbound: "issuer" },
      ],
    ],
    then: [
      [{ Unbound: "s" }, { Unbound: "p" }, { Unbound: "o" }, defaultGraph],
    ],
  },
  {
    if_all: [],
    then: [
      [
        { Bound: { Iri: "did:example:issuer" } },
        type,
        trustworthy,
        defaultGraph,
      ],
    ],
  },
];

You may ask "So what's the difference? There is still only one issuer."

By the primitive definition of "trustworthiness" written above, any claim made by a trustworthy issuer is true. did:example:issuer can claim whatever they want by issuing verifiable credentials. They can even claim that some other issuer is trustworthy. Together, the two rules defined in the above example implement a system analogous to TLS certificate chains with did:example:issuer as the single root authority.
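To make the rule semantics concrete, here is a toy single-step rule application (a sketch only, not the SDK's inference engine): Bound terms must match a quad exactly, Unbound names bind consistently across all if_all patterns, and each complete binding instantiates the then templates:

```javascript
// Toy single-step rule application; not the SDK's engine. A quad is a
// 4-element array of terms like { Iri: '...' } or { DefaultGraph: true }.
const sameTerm = (a, b) => JSON.stringify(a) === JSON.stringify(b);

// Try to match one pattern against one quad, extending `bindings`.
function matchPattern(pattern, quad, bindings) {
  const next = { ...bindings };
  for (let i = 0; i < 4; i += 1) {
    const term = pattern[i];
    if ('Bound' in term) {
      if (!sameTerm(term.Bound, quad[i])) return null;
    } else if (term.Unbound in next) {
      if (!sameTerm(next[term.Unbound], quad[i])) return null;
    } else {
      next[term.Unbound] = quad[i];
    }
  }
  return next;
}

// Find all bindings satisfying every if_all pattern, then instantiate
// the then templates for each binding.
function applyRule(rule, quads) {
  let solutions = [{}];
  for (const pattern of rule.if_all) {
    const narrowed = [];
    for (const solution of solutions) {
      for (const quad of quads) {
        const bound = matchPattern(pattern, quad, solution);
        if (bound) narrowed.push(bound);
      }
    }
    solutions = narrowed;
  }
  return solutions.flatMap((solution) => rule.then.map(
    (template) => template.map((t) => ('Bound' in t ? t.Bound : solution[t.Unbound])),
  ));
}
```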

Proving Composite Claims

As a Holder of verifiable credentials, you'll want to prove specific claims to a Verifier. If those claims are composite, you'll sometimes need to bundle a deductive proof in your verifiable credentials presentation. This should be done after the presentation has been assembled. If the presentation is going to be signed, sign it after including the deductive proof.

import { proveCompositeClaims } from '@docknetwork/credential-sdk/rdf-and-cd';
import jsonld from 'jsonld';

// Check out the Issuance, Presentation, Verification tutorial for info on creating
// VCDM presentations.
const presentation = { ... };

// the claim we wish to prove
const compositeClaim = [
  { Iri: 'uuid:19e91192-210b-4b03-8e9c-8ded0a48d5bf' },
  { Iri: 'http://dbpedia.org/ontology/owner' },
  { Iri: 'did:example:bob' },
  { DefaultGraph: true },
];

// SDK reasoning utilities take presentations in expanded form
// https://www.w3.org/TR/json-ld/#expanded-document-form
const expPres = await jsonld.expand(presentation);

let proof;
try {
  proof = await proveCompositeClaims(expPres, [compositeClaim], rules);
} catch (e) {
  console.error('couldn\'t prove bob is an owner');
  throw e;
}

// this is the standard property name of a Dock deductive proof in a VCDM presentation
const logic = 'https://www.dock.io/rdf2020#logicV1';

presentation[logic] = proof;

// Now JSON.stringify(presentation) is ready to send to a verifier.

Verifying Composite Claims

import { acceptCompositeClaims } from '@docknetwork/credential-sdk/rdf-and-cd';
import jsonld from 'jsonld';
import deepEqual from 'deep-equal';

/// received from the presenter
const presentation = ...;

// Check out the Issuance, Presentation, Verification tutorial for info on verifying
// VCDM presentations.
let ver = await verify(presentation);
if (!ver.verified) {
  throw ver;
}

const expPres = await jsonld.expand(presentation);

// acceptCompositeClaims will verify and take into account any deductive proof provided
// via the logic property
const claims = await acceptCompositeClaims(expPres, rules);

if (claims.some(claim => deepEqual(claim, compositeClaim))) {
  console.log('the composite claim was shown to be true');
} else {
  console.error('veracity of the composite claim is unknown');
}

Verifier-Side Reasoning

Some use-cases may require the verifier to perform inference in place of the presenter.

import { proveCompositeClaims } from '@docknetwork/credential-sdk/rdf-and-cd';
import jsonld from 'jsonld';

/// received from the presenter
const presentation = ...;

// Check out the Issuance, Presentation, Verification tutorial for info on verifying
// VCDM presentations.
let ver = await verify(presentation);
if (!ver.verified) {
  throw ver;
}

const expPres = await jsonld.expand(presentation);

try {
  await proveCompositeClaims(expPres, [compositeClaim], rules);
  console.log('the composite claim was shown to be true');
} catch (e) {
  console.error('veracity of the composite claim is unknown');
}

We Need to Go Deeper

The SDK claim deduction module exposes lower level functionality for those who need it. getImplications, proveh and validateh, for example, operate on raw claimgraphs represented as adjacency lists. For even lower level access, check out our inference engine, which is written in Rust and exposed to JavaScript via WASM.

Graphical Anchoring Utility

You can also anchor without touching any code. Visit https://fe.dock.io/#/anchor/batch for creation of anchors and https://fe.dock.io/#/anchor/check for anchor verification.

To Batch, or not to Batch

Batching (combining multiple anchors into one) can be used to save on transaction costs by anchoring multiple documents in a single transaction as a merkle tree root.

Batching does have a drawback. In order to verify a document that was anchored as part of a batch, you must provide the merkle proof that was generated when batching said file. Merkle proofs are expressed as .proof.json files and can be downloaded before posting the anchor. No merkle proof is required for batches containing only one document.

Programmatic Usage

The on-chain anchoring module gives developers the flexibility to tailor anchors to their own use-case, but the SDK does provide a reference example for batching and anchoring documents.

The anchoring module is hashing algorithm and hash length agnostic. You can post a multihash, or even use the identity hash; the chain doesn't care.

One thing to note is that rather than storing your anchor directly, the anchoring module will store the blake2b256 hash of the anchor. This means as a developer you'll need to perform an additional hashing step when looking up anchors:

// pseudocode

function postAnchor(file) {
  anchor = myHash(file)
  deploy(anchor)
}

function checkAnchor(file) {
  anchor = myHash(file)
  anchorblake = blake2b256(anchor)
  return lookup(anchorblake)
}

See example/anchor.js in the SDK repository for more info.

Private Delegation

This tutorial follows the lifecycle of a delegated credential. It builds on the previous tutorials Issuance, Presentation, Verification and Claim Deduction.

Create a Delegation

Let's assume some root authority, did:ex:a, wants to grant did:ex:b full authority to make claims on behalf of did:ex:a. To do this, did:ex:a will issue a delegation credential to did:ex:b.

Boilerplate

const { v4: uuidv4 } = require('uuid');

function uuid() {
  return `uuid:${uuidv4()}`;
}

// Check out the Issuance, Presentation, Verification tutorial for info on signing
// credentials.
function signCredential(cred, issuer_secret) { ... }

// Check out the Issuance, Presentation, Verification tutorial for info on verifying
// VCDM presentations.
async function verifyPresentation(presentation) { ... }
const delegation = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  id: uuid(),
  type: ["VerifiableCredential"],
  issuer: "did:ex:a",
  credentialSubject: {
    id: "did:ex:b",
    "https://rdf.dock.io/alpha/2021#mayClaim":
      "https://rdf.dock.io/alpha/2021#ANYCLAIM",
  },
  issuanceDate: new Date().toISOString(),
};
const signed_delegation = signCredential(delegation, dida_secret);

Next did:ex:a sends the signed credential to did:ex:b.

Issue a Credential as a Delegate

did:ex:b accepts the delegation credential from did:ex:a. Now did:ex:b can use the delegation to make arbitrary attestations on behalf of did:ex:a.

const newcred = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  id: uuid(),
  type: ["VerifiableCredential"],
  issuer: "did:ex:b",
  credentialSubject: {
    id: "did:ex:c",
    "https://example.com/score": 100,
  },
  issuanceDate: new Date().toISOString(),
};
const signed_newcred = signCredential(newcred, didb_secret);

So far we have two credentials, signed_delegation and signed_newcred. signed_delegation proves that any claim made by did:ex:b is effectively a claim made by did:ex:a. signed_newcred proves that did:ex:b claims that did:ex:c has a score of 100. By applying one of the logical rules provided by the sdk, we can infer that did:ex:a claims did:ex:c has a score of 100. The logical rule named MAYCLAIM_DEF_1 will work for this use-case and will be used by the verifier.
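The inference performed by a rule like MAYCLAIM_DEF_1 can be sketched in plain JavaScript. This is a simplified stand-in for the sdk's reasoner, not its actual API: claims are modeled as [subject, predicate, object, claimMaker] quads, and a single pass attributes the delegate's claims to the delegator.

```javascript
const MAYCLAIM = 'https://rdf.dock.io/alpha/2021#mayClaim';
const ANYCLAIM = 'https://rdf.dock.io/alpha/2021#ANYCLAIM';

const facts = [
  // From signed_delegation: did:ex:a claims did:ex:b may make any claim.
  ['did:ex:b', MAYCLAIM, ANYCLAIM, 'did:ex:a'],
  // From signed_newcred: did:ex:b claims did:ex:c has a score of 100.
  ['did:ex:c', 'https://example.com/score', '100', 'did:ex:b'],
];

// If `delegator` says `delegate` mayClaim ANYCLAIM, then every claim made
// by `delegate` is also attributed to `delegator`.
function inferDelegatedClaims(quads) {
  const inferred = [];
  for (const [delegate, p, o, delegator] of quads) {
    if (p === MAYCLAIM && o === ANYCLAIM) {
      for (const [s2, p2, o2, maker] of quads) {
        if (maker === delegate) inferred.push([s2, p2, o2, delegator]);
      }
    }
  }
  return quads.concat(inferred);
}

console.log(inferDelegatedClaims(facts).pop());
// ['did:ex:c', 'https://example.com/score', '100', 'did:ex:a']
```

The real reasoner runs rules like this to a fixpoint, so chains of delegations (did:ex:a delegates to did:ex:b, who delegates to did:ex:b2, ...) are also handled.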

Now did:ex:b has both signed credentials. did:ex:b may now pass both credentials to the holder. In this case the holder is did:ex:c. did:ex:c also happens to be the subject of one of the credentials.

Present a Delegated Credential

did:ex:c now holds two credentials, signed_delegation and signed_newcred. Together they prove that did:ex:a indirectly claims did:ex:c to have a score of 100. did:ex:c wants to prove this statement to another party, a verifier. did:ex:c must bundle the two credentials into a VCDM presentation.

let presentation = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiablePresentation"],
  id: uuid(),
  holder: `did:ex:c`,
  verifiableCredential: [signed_delegation, signed_newcred],
};

presentation is sent to the verifier.

Accept a Delegated Credential

The verifier receives presentation, verifies the enclosed credentials, then reasons over the union of all the credentials in the bundle using the rule MAYCLAIM_DEF_1. The process is the one outlined in Verifier-Side Reasoning but using a different composite claim and a different rule list.

import {
  MAYCLAIM_DEF_1,
  proveCompositeClaims,
} from '@docknetwork/credential-sdk/rdf-and-cd';
import jsonld from 'jsonld';

const compositeClaim = [
  { Iri: 'did:ex:c' },
  { Iri: 'https://example.com/score' },
  { Literal: { datatype: 'http://www.w3.org/2001/XMLSchema#integer', value: '100' } },
  { Iri: 'did:ex:a' },
];

let ver = await verifyPresentation(presentation);
if (!ver.verified) {
  throw ver;
}

const expPres = await jsonld.expand(presentation);

try {
  await proveCompositeClaims(expPres, [compositeClaim], MAYCLAIM_DEF_1);
  console.log('the composite claim was shown to be true');
} catch (e) {
  console.error('veracity of the composite claim is unknown');
}

Public Delegation

This feature should be considered Alpha.

Public Delegations use the same data model as Private Delegations. A delegator attests to some delegation; the verifier somehow obtains and verifies that attestation, then reasons over it in conjunction with some credential. The difference is that while Private Delegations are passed around as credentials, Public Delegations are linked from the DID document of the delegator.

Create a Delegation

It's assumed that the delegator already controls a DID. See the tutorial on DIDs for instructions on creating your own on-chain DID.

Like in the Private Delegation tutorial, let's assume a root authority, did:ex:a, wants to grant did:ex:b full authority to make claims on behalf of did:ex:a. did:ex:a will post an attestation delegating to did:ex:b.

Boilerplate
import { graphResolver } from '@docknetwork/credential-sdk/rdf-and-cd';
const { v4: uuidv4 } = require('uuid');

// A running ipfs node is required for crawling.
const ipfsUrl = 'http://localhost:5001';

function uuid() {
  return `uuid:${uuidv4()}`;
}

// Check out the Issuance, Presentation, Verification tutorial for info on signing
// credentials.
function signCredential(cred, issuer_secret) { ... }

// Check out the Issuance, Presentation, Verification tutorial for info on verifying
// VCDM presentations.
async function verifyPresentation(presentation) { ... }

// This function can be implemented using setClaim().
// An example of setClaim() usage can be found here:
//  https://github.com/docknetwork/sdk/blob/master/tests/integration/did-basic.test.js
async function setAttestation(did, didKey, iri) { ... }

// See the DID resolver tutorial for information about implementing a documentLoader.
const documentLoader = ...;

const { createHelia } = await import('helia');
const { strings } = await import('@helia/strings');
const ipfsClient = strings(await createHelia(ipfsUrl));
const resolveGraph = graphResolver(ipfsClient, documentLoader);

Instead of a credential, the delegation will be expressed as a Turtle document posted on IPFS.

@prefix dockalpha: <https://rdf.dock.io/alpha/2021#> .
<did:ex:b> dockalpha:mayClaim dockalpha:ANYCLAIM .

A link to this IPFS document is then added to the delegator's DID document. For a Dock DID, this is done by submitting an on-chain transaction.

await setAttestation(
  delegatorDid,
  delegatorSk,
  "ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p"
);

Issue a Credential as a Delegate

With Public Delegation, the delegate doesn't need to worry about passing delegation credentials on to the holder. The delegations are already posted where the verifier can find them.

Present a Delegated Credential

With Public Delegation, the holder does not need to include a delegation chain when presenting their credential. From the holder's perspective, the process of presenting a publicly delegated credential is exactly the same as the process for presenting a normal credential.

Accept a Delegated Credential

The verifier accepts publicly delegated credentials by merging the credential's claimgraph representation with publicly posted delegation information, then reasoning over the result. The delegation information is found by crawling the public attestation supergraph; once found, it is itself a claimgraph. Crawling is potentially slow, so when verification speed is important it should be done early on, e.g. at program startup. Delegation information can be reused across multiple credential verifications.

As with any Public Attestations, delegation information is revocable by removing the delegation attestation from the delegator's DID doc. As such, it is possible for cached delegation information to become out of date. Long-running verifier processes should devise a mechanism for invalidating out-of-date delegation information, such as re-crawling whenever a change is detected to the DID doc of a delegator (or sub-delegator). This tutorial does not cover invalidation of out-of-date delegations.
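One possible invalidation strategy is to cache crawled delegation facts keyed by the delegator's current attestation IRI, re-crawling only when that IRI changes. In this hypothetical sketch, `crawlDelegations` and `currentAttestationIri` stand in for the sdk's crawl() and for a lookup of the attestation in the delegator's DID doc; neither name comes from the sdk.

```javascript
const cache = new Map(); // delegatorDid -> { attestationIri, facts }

async function delegationFacts(delegatorDid, currentAttestationIri, crawlDelegations) {
  const iri = await currentAttestationIri(delegatorDid);
  const hit = cache.get(delegatorDid);
  if (hit && hit.attestationIri === iri) return hit.facts; // cache still fresh
  const facts = await crawlDelegations(delegatorDid); // slow path: re-crawl
  cache.set(delegatorDid, { attestationIri: iri, facts });
  return facts;
}

// Demo with mocks: the second call is served from cache, so only one crawl runs.
let crawls = 0;
const mockCrawl = async () => { crawls += 1; return [`facts#${crawls}`]; };
const mockIri = async () => 'ipfs://QmMockCid';

(async () => {
  await delegationFacts('did:ex:a', mockIri, mockCrawl);
  await delegationFacts('did:ex:a', mockIri, mockCrawl);
  console.log(crawls); // 1
})();
```

Note this only detects revocations that change the delegator's own attestation; changes deeper in the delegation chain would need per-sub-delegator tracking.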

The following example shows how a verifier might accept a publicly delegated credential.

import {
  ANYCLAIM,
  ATTESTS,
  MAYCLAIM,
  MAYCLAIM_DEF_1,
  crawl,
  proveCompositeClaims,
  presentationToEEClaimGraph,
  inferh,
  merge,
} from "@docknetwork/credential-sdk/rdf-and-cd";
import jsonld from "jsonld";

// These logical rules will be used for reasoning both while crawling and while
// verifying credentials.
const RULES = [
  // Imports the definition of dockalpha:mayClaim from sdk
  ...MAYCLAIM_DEF_1,
  // Adds a custom rule stating that by attesting to a document the attester grants the
  // document full delegation authority.
  {
    if_all: [
      [
        { Unbound: "a" },
        { Bound: { Iri: ATTESTS } },
        { Unbound: "doc" },
        { Unbound: "a" },
      ],
    ],
    then: [
      [
        { Unbound: "doc" },
        { Bound: { Iri: MAYCLAIM } },
        { Bound: { Iri: ANYCLAIM } },
        { Unbound: "a" },
      ],
    ],
  },
];

// This query dictates what the crawler will be "curious" about. Any matches to
// `?lookupNext` will be dereferenced as IRIs. When an IRI is successfully dereferenced
// the resultant data is merged into the crawler's knowledge graph.
const CURIOSITY = `
  prefix dockalpha: <https://rdf.dock.io/alpha/2021#>

  # Any entity to which "did:ex:a" grants full delegation authority is interesting.
  select ?lookupNext where {
    graph <did:ex:a> {
      ?lookupNext dockalpha:mayClaim dockalpha:ANYCLAIM .
    }
  }
`;

// To spark the crawler's interest, we'll feed it some initial knowledge about did:ex:a .
const initialFacts = await resolveGraph({ Iri: "did:ex:a" });

// `allFacts` contains our delegation information; it will be merged with verified
// credentials in order to reason over delegations.
let allFacts = await crawl(initialFacts, RULES, CURIOSITY, resolveGraph);

// Now that we've obtained delegation information for `did:ex:a` we can verify
// credentials much like normal. The only difference is that we merge claimgraphs
// before reasoning over the verified credentials.
//
// `presentation` is assumed to be a VCDM presentation provided by a credential holder
let ver = await verifyPresentation(presentation);
if (!ver.verified) {
  throw ver;
}
const expPres = await jsonld.expand(presentation);
const presCg = await presentationToEEClaimGraph(expPres);
const cg = inferh(merge(presCg, allFacts), RULES);

// At this point all the RDF quads in `cg` are known to be true.
// doSomethingWithVerifiedData(cg);

More examples of crawl() usage can be found in the sdk repository.

Anonymous Credentials

Overview

This document describes building anonymous credentials using two main primitives: the BBS+ signature scheme, which the issuer uses to sign the credential, and accumulators, which provide the membership checks needed for revocation. The BBS+ implementation comes from this Typescript package, which uses this WASM wrapper, which itself uses our Rust crypto library.

For an overview of these primitives, see this.

Implementation

On chain, there are two modules, one for BBS+ and the other for accumulators. The modules store the BBS+ params, public keys, accumulator params, accumulator public keys, and some accumulator details like the current accumulated value, last update, etc. They are somewhat agnostic to the cryptographic details and treat the values as bytes with some size bounds.

  • BBS+ module

    • At path src/modules/bbs-plus.js in the repo.
    • Used to create and remove signature parameters and public keys.
    • A public key may optionally reference the signature params it was created against.
    • The params and public keys are owned by a DID and can be only removed by that DID.
    • See the tests at tests/integration/anoncreds/bbs-plus.test.js on how to create, query and remove these.
  • Accumulator module

    • At path src/modules/accumulator.js in the repo.
    • The parameters and public keys are managed in the same way as BBS+ signatures.
    • Accumulators are owned by a DID and can be only removed by that DID.
    • Accumulators are identified by a unique id and that id is used to send updates or remove it.
    • An accumulator update contains the additions, removals, and the witness update info. These are not stored in chain state but are present in the blocks, and the accumulated value corresponding to the update is logged in an event.
    • In the chain state, only the most recent accumulated value is stored (along with some metadata like creation time, last update, etc), which is sufficient to verify the witness or the proof of knowledge.
    • To update a witness, the updates and witness update info should be parsed from the blocks; the accumulator module provides functions to get the updates and the necessary events from a block.
    • See the tests at tests/integration/anoncreds/accumulator.test.js on how to create, query and remove params and keys as well as the accumulator.
  • Composite proofs

    • Proofs that use BBS+ signatures and accumulator
    • The SDK itself doesn't include the Typescript package containing the crypto as a dependency. But it can be used with the SDK to issue, prove, verify and revoke credentials as shown in tests mentioned below.
    • See the test tests/integration/anoncreds/demo.test.js for an example of how a BBS+ signature can be used with an accumulator for anonymous credentials. The accumulator is used to hold a user/credential id. Presence of the id in accumulator means the credential is valid and absence means invalid.
  • Verifiable encryption

    • Encrypt messages from BBS+ signatures for a 3rd party and prove that the encryption was done correctly.
    • See the test tests/integration/anoncreds/saver-and-bound-check.test.js
  • Bound check/Range proof

    • Prove that messages under a BBS+ signature satisfy some bounds.
    • See the test tests/integration/anoncreds/saver-and-bound-check.test.js
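The revocation semantics described above (presence in the accumulator means the credential is valid, absence means it is revoked) can be illustrated with a conceptual mock. A real positive accumulator, as provided by the crypto library, additionally lets the holder prove membership in zero knowledge via a witness; the plain Set below is only a stand-in to show the lifecycle, not the actual API.

```javascript
class MockAccumulator {
  constructor() { this.members = new Set(); }
  add(id) { this.members.add(id); }        // on issuance: add the credential id
  remove(id) { this.members.delete(id); }  // on revocation: remove the credential id
  verifyMembership(id) { return this.members.has(id); }
}

const acc = new MockAccumulator();
acc.add('credential-id-1');
console.log(acc.verifyMembership('credential-id-1')); // true: credential is valid
acc.remove('credential-id-1');
console.log(acc.verifyMembership('credential-id-1')); // false: credential is revoked
```

In the real scheme the verifier only sees the accumulated value and a membership proof, never the id itself, which is what keeps the credential anonymous.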