Intro

Dock is a blockchain built using Substrate to facilitate the use of Verifiable Credentials Data Model 1.0 compliant documents, creating/managing W3C spec compliant DIDs and more. The client SDK contains a library and tooling to interact with the Dock chain, as well as other features such as verifying and issuing credentials. View the video version of this tutorial here: https://www.youtube.com/watch?v=jvgn9oSXBDQ

Pre-requisites for these tutorials

For these tutorials we will be running our own local development node. Instructions to do this can be found at the dock substrate repository. Once you have followed the instructions and have your local node running, you can continue. Please note that you don't always need a node to use the Dock SDK, but certain features rely on it.

Installation

Installation of the SDK is pretty simple; we use NPM and our source is also available on GitHub (links below). To install via NPM or Yarn, run either npm install @docknetwork/sdk or yarn add @docknetwork/sdk respectively. Once the package and dependencies are installed, you can import it like any ES6/CJS module. You can find the complete source for the SDK at https://github.com/docknetwork/sdk and the tutorials at https://github.com/docknetwork/dock-tutorials.

Importing

In this tutorial series we will be using NodeJS with Babel for ES6 support; however, the same code should work in browsers too once it is transpiled. To begin, we should import the Dock SDK. Importing the default reference gives us a DockAPI instance, with which we will communicate with the blockchain. You can also import the DockAPI class and instantiate your own objects if you prefer. Simply do:

// Import the dock SDK
import dock from '@docknetwork/sdk';

We will add one more import here for some shared constants across each tutorial, just the node address and account secret:

// Import some shared variables
import { address, secretUri } from './shared-constants';

Let's also create this file, shared-constants.js, with the contents:

export const address = 'ws://localhost:9944'; // Websocket address of your Dock node
export const secretUri = '//Alice'; // Account secret in uri format, we will use Alice for local testing

Connecting to a node

With the required packages and variables imported, we can go ahead and connect to our node. If you don't have a local testnet running already, go to https://github.com/docknetwork/dock-substrate and follow the steps in the readme to start one. You could also use the Dock testnet, given a proper account with enough funds. First, create a method named connectToNode with an empty body for now:

export async function connectToNode() {

}

Before working with the SDK, we need to initialize it. Upon initialization the SDK will connect to the node with the supplied address and create a keyring to manage accounts. Simply call dock.init and wait for the promise to resolve to connect to your node:

// Initialize the SDK and connect to the node
await dock.init({ address });
console.log('Connected to the node and ready to go!');

Creating an account

In order to write to the chain we will need to set an account. We can perform read operations with no account set, but for our purposes we will need one. Accounts can be generated using the dock.keyring object through multiple methods such as URI, mnemonic phrase and raw seeds. See the polkadot keyring documentation (https://polkadot.js.org/api/start/keyring.html) for more information.
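
For illustration, here is a minimal sketch of those approaches with the polkadot-js keyring (the mnemonic and seed below are placeholder development values, not real secrets):

import { hexToU8a } from '@polkadot/util';

// From a URI (what we use in these tutorials)
const fromUri = dock.keyring.addFromUri('//Alice');

// From a mnemonic phrase (placeholder dev mnemonic)
const fromMnemonic = dock.keyring.addFromMnemonic(
  'seed sock milk update focus rotate barely fade car face mechanic mercy',
);

// From a raw 32-byte seed (placeholder value)
const fromSeed = dock.keyring.addFromSeed(
  hexToU8a('0x1122334455667788112233445566778811223344556677881122334455667788'),
);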

We will use our URI secret of //Alice which was imported from shared-constants.js to work with our local testnet. Add this code after dock.init:

// Create an Alice account for our local node
// using the dock keyring. You don't need this
// to perform some read operations.
const account = dock.keyring.addFromUri(secretUri);
dock.setAccount(account);

// We are now ready to transact!
console.log('Connected to the node and ready to go!');

If all has gone well, you should be able to run this script and see that you are connected to the node. If any errors occur, the promise will fail and they will be logged to the console.
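
For reference, here is the whole script assembled from the snippets above (assuming shared-constants.js exists as created earlier):

// Import the dock SDK and shared variables
import dock from '@docknetwork/sdk';
import { address, secretUri } from './shared-constants';

export async function connectToNode() {
  // Initialize the SDK and connect to the node
  await dock.init({ address });

  // Create an Alice account for our local node
  const account = dock.keyring.addFromUri(secretUri);
  dock.setAccount(account);

  // We are now ready to transact!
  console.log('Connected to the node and ready to go!');
}

connectToNode();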

Basic usage

To construct your own API object, once the SDK has been installed, import the DockAPI class as follows:

import { DockAPI } from '@docknetwork/sdk/api';
const dock = new DockAPI();

To connect the API object to the node, call the init method. This method requires the WebSocket RPC endpoint of the node; say you have it in address. It also accepts an optional polkadot-js keyring.

await dock.init({ address, keyring });

To disconnect from the node

await dock.disconnect();

To set the account used to send transactions and pay fees, call setAccount with the polkadot-js account

// the `account` object might have been generated as
const account = dock.keyring.addFromUri(secretUri);
// Set the account to pay fees for transactions
dock.setAccount(account);

To get the account, call getAccount

dock.getAccount();

To send a transaction, use the signAndSend method on the DockAPI object

const res = await dock.signAndSend(transaction);

For interacting with the DID module, i.e. creating, updating and removing DIDs, get the didModule with the did getter

const didModule = dock.did;

Similarly, for the revocation module, get the revocationModule with the revocation getter

const revocationModule = dock.revocation;
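
Putting these calls together, a minimal usage sketch (assuming address and secretUri as in the earlier sections, and some prepared transaction):

import { DockAPI } from '@docknetwork/sdk/api';

async function main() {
  const dock = new DockAPI();
  await dock.init({ address });

  // Set the account used to sign transactions and pay fees
  const account = dock.keyring.addFromUri(secretUri);
  dock.setAccount(account);

  // Grab the modules via their getters
  const didModule = dock.did;
  const revocationModule = dock.revocation;

  // Build a `transaction` using the modules, then:
  // const res = await dock.signAndSend(transaction);

  await dock.disconnect();
}

main();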

Concepts

  1. DID
  2. Verifiable credentials
  3. Blobs and Schemas

W3C DID

DID stands for Decentralized Identifier. DIDs are meant to be globally unique identifiers that allow their owner to prove cryptographic control over them. The owner(s) of the DID is called the controller. The identifiers are not just assignable to humans but to anything. Quoting the DID spec:

A DID identifies any subject (e.g., a person, organization, thing, data model, abstract entity, etc.) that the controller of the DID decides that it identifies.

DIDs differ from public keys in that DIDs are persistent, i.e. a public key has to be changed if the private key is stolen/lost or the cryptographic scheme of the public key is no longer considered safe. This is not the case with DIDs, they can remain unchanged even when the associated cryptographic material changes. Moreover, a DID can have multiple keys and any of its keys can be rotated. Additionally, depending on the scheme, public keys can be quite large (several hundred bytes in RSA) whereas a unique identifier can be much smaller.

Each DID is associated with a DID Document that specifies the subject, the public keys, the authentication mechanisms usable by the subject, authorizations the subject has given to others, service endpoints to communicate with the subject, etc. For all properties that can be put in the DID Document, refer to this section of the spec. DIDs and their associated DID Documents are stored on the DID registry, which is a term used for the centralized or decentralized database persisting the DID and its Document.

The process of discovering the DID Document for a DID is called DID resolution and the tool (a library or a service) is called a DID resolver. To resolve the DID, the resolver first needs to check on which registry the DID is hosted and then decide whether it is capable or willing to look up that registry. The registry is indicated by the DID method of that DID. In addition to the registry, the method also specifies other details of that DID like the supported operations, crypto, etc. Each DID method defines its own specification; Dock's DID method spec is here. In case of Dock, the registry is the Dock blockchain and the method is dock.

We support 2 kinds of DIDs, on-chain and off-chain. With off-chain DIDs, only a reference to the DID Document is kept on chain and this reference can be a CID (for IPFS), a URL or any custom format. With on-chain DIDs, the keys, controllers and service endpoints of the DID are stored on chain. A DID key can have 1 or more verification relationships which indicate what that key can be used for. Only a DID key with the verification relationship capabilityInvocation can update the DID document, i.e. add/remove keys, add/remove controllers, add/remove service endpoints and remove the DID. Also, a DID can have 1 or more controllers and these controllers can also update its DID document. A DID with a key with the capabilityInvocation verification relationship is its own controller.

An example on-chain Dock DID.

did:dock:5CEdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW

The above DID has the method dock and the identifier 5CEdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW. Dock DID identifiers are 32 bytes in size.

An example DID Document

{
  "@context": [
    "https://www.w3.org/ns/did/v1"
  ],
  "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
  "controller": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn"
  ],
  "publicKey": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
      "type": "Sr25519VerificationKey2020",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "7d3QsaW6kP7bGiJtRZBxdyZsbJqp6HXv1owwr8aYBjbg"
    },
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2",
      "type": "Ed25519VerificationKey2018",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "p6gb7WNh9SWC4hkye4VV5epo1LYpLXKH21ojfwJLayg"
    }
  ],
  "authentication": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2"
  ],
  "assertionMethod": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ],
  "capabilityInvocation": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ]
}

Dock DIDs support multiple keys. The keys are present in the publicKey section. As per the above DID document, the DID did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn has 2 public keys and 1 controller, which is itself. Note how each public key is referred to by its id in the authentication, assertionMethod and capabilityInvocation sections. The above document states that the DID did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn can authenticate with the 2 public keys whose ids are specified under authentication. When it attests to some fact (becomes an issuer), it can only use 1 key, the one under assertionMethod. The keys specified under capabilityInvocation can be used to update the DID document, i.e. add/remove keys, etc. Below is an example DID Document with 2 controllers:

{
  "@context": [
    "https://www.w3.org/ns/did/v1"
  ],
  "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
  "controller": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
    "did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz"
  ],
  "publicKey": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
      "type": "Sr25519VerificationKey2020",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "7d3QsaW6kP7bGiJtRZBxdyZsbJqp6HXv1owwr8aYBjbg"
    },
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2",
      "type": "Ed25519VerificationKey2018",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "p6gb7WNh9SWC4hkye4VV5epo1LYpLXKH21ojfwJLayg"
    }
  ],
  "authentication": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2"
  ],
  "assertionMethod": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ],
  "capabilityInvocation": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ]
}

In the above DID document, there are 2 controllers: one is the DID did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn itself and the other is did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz. This means that the DID did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz can also modify the above DID document, i.e. add/remove keys, add/remove controllers, etc. The next example also includes a service endpoint:

{
  "@context": [
    "https://www.w3.org/ns/did/v1"
  ],
  "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
  "controller": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
    "did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz"
  ],
  "publicKey": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
      "type": "Sr25519VerificationKey2020",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "7d3QsaW6kP7bGiJtRZBxdyZsbJqp6HXv1owwr8aYBjbg"
    },
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2",
      "type": "Ed25519VerificationKey2018",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "p6gb7WNh9SWC4hkye4VV5epo1LYpLXKH21ojfwJLayg"
    }
  ],
  "authentication": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-2"
  ],
  "assertionMethod": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ],
  "capabilityInvocation": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1"
  ],
  "service": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#linked-domain-1",
      "type": "LinkedDomains",
      "serviceEndpoint": [
        "https://foo.example.com"
      ]
    }
  ]
}

In the above document, there is also a service endpoint for the DID.

DIDs can also be keyless, i.e. not have any keys of their own. In this case the DID is not self-controlled but controlled by other DID(s), and the other DID(s) can add/remove keys, add/remove controllers or remove the DID. An example keyless DID is shown below:

{
  "@context": [
    "https://www.w3.org/ns/did/v1"
  ],
  "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
  "controller": [
    "did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz"
  ],
  "publicKey": [],
  "authentication": [],
  "assertionMethod": [],
  "capabilityInvocation": [],
  "service": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#linked-domain-1",
      "type": "LinkedDomains",
      "serviceEndpoint": [
        "https://bar.example.com"
      ]
    }
  ]
}

In the above DID Document, the DID did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn is controlled by did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz. Now did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz can add a key, say for authentication, to did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn, and the DID Document will then look like the one below:

{
  "@context": [
    "https://www.w3.org/ns/did/v1"
  ],
  "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
  "controller": [
    "did:dock:5Hc3RZyfJd98QbFENrDP57Lga8mSofDFwKQpodN2g2ZcYscz"
  ],
  "publicKey": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
      "type": "Ed25519VerificationKey2018",
      "controller": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn",
      "publicKeyBase58": "p6gb7WNh9SWC4hkye4VV5epo1LYpLXKH21ojfwJLayg"
    }
  ],
  "authentication": [
    "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#keys-1",
  ],
  "assertionMethod": [],
  "capabilityInvocation": [],
  "service": [
    {
      "id": "did:dock:5Hhnorjqd7vXPKdT7Y1ZpHksMBHsVRNewntZjMF2NHm3PoFn#linked-domain-1",
      "type": "LinkedDomains",
      "serviceEndpoint": [
        "https://bar.example.com"
      ]
    }
  ]
}

Another thing to keep in mind is that the keys associated with a Dock DID are independent of the keys used to send the transaction on chain and pay fees. E.g. Alice might not have any tokens to write anything on chain but can still create a DID and corresponding key and ask Bob, who has tokens, to register the DID on chain. Even though Bob wrote the DID on chain, he cannot update or remove it since only Alice has the keys associated with that DID. Similarly, when Alice wants to update the DID, she can create the update, sign it and send it to Carol this time to send the update on chain. Similar to blockchain accounts, DIDs also have their own nonce which increments by 1 on each action of a DID. On DID creation, its nonce is set to the block number at which it is created, and the DID is expected to send signed payloads, each with a nonce 1 more than the previous nonce.
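
A minimal sketch of this separation, using the DID APIs covered later in this document (here Bob's account pays the fees while only Alice's keypair controls the DID):

import { createNewDockDID } from '@docknetwork/sdk/utils/did';
import { getPublicKeyFromKeyringPair } from '@docknetwork/sdk/utils/misc';
import { DidKey, VerificationRelationship } from '@docknetwork/sdk/public-keys';

// Bob's account is set on the SDK and pays the transaction fees...
dock.setAccount(dock.keyring.addFromUri('//Bob'));

// ...but the new DID is controlled by Alice's keypair
const aliceKeyPair = dock.keyring.addFromUri('//Alice', null, 'ed25519');
const did = createNewDockDID();
const didKey = new DidKey(
  getPublicKeyFromKeyringPair(aliceKeyPair),
  new VerificationRelationship(),
);

// Bob registers the DID, but only Alice's key can update or remove it
await dock.did.new(did, [didKey], []);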

Verifiable Credentials

Credentials are a part of our daily lives: driver's licenses are used to assert that we are capable of operating a motor vehicle, university degrees can be used to assert our level of education, and government-issued passports enable us to travel between countries.

These credentials provide benefits to us when used in the physical world, but their use on the Web continues to be elusive.

Currently it is difficult to express education qualifications, healthcare data, financial account details, and other sorts of third-party verified machine-readable personal information on the Web.

The difficulty of expressing digital credentials on the Web makes it challenging to receive the same benefits through the Web that physical credentials provide us in the physical world.

The Verifiable Credentials Data Model 1.0 (VCDM) specification provides a standard way to express credentials on the Web in a way that is cryptographically secure, privacy respecting, and machine-verifiable.

Participants and workflow

  • Credentials are issued by an entity called the issuer.
  • The issuer issues the credential about a subject by signing the credential with his key. If the credential is revocable, the issuer must specify how and from where the revocation status must be checked. It is not necessary that revocation is managed by the issuer; the issuer might designate a different authority for revocation.
  • The issuer gives the credential to the holder. The holder might be the same as the subject.
  • A service provider, or anyone willing to check if the holder possesses certain credentials, requests a presentation of those credentials. This entity requesting the presentation is called the verifier. To protect against replay attacks (a verifier receiving the presentation and replaying it at some other verifier), a verifier must supply a challenge that must be embedded in the presentation.
  • The holder creates a presentation for the required credentials. The presentation must indicate which credentials it is about and must be signed by the holder of the credentials.
  • On receiving the presentation, the verifier verifies the validity of each credential in the presentation. This includes checking the correctness of the data model of the credential, its authenticity by verifying the issuer's signature, and the revocation status if the credential is revocable. It then checks whether the presentation contains the holder's signature, which also covers the verifier's given challenge.

Issuing

To issue a verifiable credential, the issuer needs to have a public key that is accessible by the holder and verifier to verify the signature (in the proof) of the credential. Though the VCDM spec does not mandate it, an issuer in Dock must have a DID on chain. This DID is present in the credential in the issuer field. Below is an example credential where both the issuer and holder have Dock DIDs:

{
    '@context': [
      'https://www.w3.org/2018/credentials/v1',
      'https://www.w3.org/2018/credentials/examples/v1'
    ],
    id: '0x9b561796d3450eb2673fed26dd9c07192390177ad93e0835bc7a5fbb705d52bc',
    type: [ 'VerifiableCredential', 'AlumniCredential' ],
    issuanceDate: '2020-03-18T19:23:24Z',
    credentialSubject: {
      id: 'did:dock:5GL3xbkr3vfs4qJ94YUHwpVVsPSSAyvJcafHz1wNb5zrSPGi',
      alumniOf: 'Example University'
    },
    issuer: 'did:dock:5GUBvwnV6UyRWZ7wjsBptSquiSHGr9dXAy8dZYUR9WdjmLUr',
    proof: {
      type: 'Ed25519Signature2018',
      created: '2020-04-22T07:50:13Z',
      jws: 'eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..GBqyaiTMhVt4R5P2bMGcLNJPWEUq7WmGHG7Wc6mKBo9k3vSo7v7sRKwqS8-m0og_ANKcb5m-_YdXC2KMnZwLBg',
      proofPurpose: 'assertionMethod',
      verificationMethod: 'did:dock:5GUBvwnV6UyRWZ7wjsBptSquiSHGr9dXAy8dZYUR9WdjmLUr#keys-1'
    }
}
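
As a sketch of how such a credential might be produced with the SDK's VC utilities (the import paths and signatures of issueCredential and getKeyDoc below are assumptions; check the SDK source for the exact API):

// Hypothetical issuance sketch; `issuerDid`, `issuerKeyPair` and
// `unsignedCredential` are assumed to already exist.
import { issueCredential } from '@docknetwork/sdk/utils/vc';
import { getKeyDoc } from '@docknetwork/sdk/utils/vc/helpers';

// A key document ties the issuer's DID to its signing keypair
const keyDoc = getKeyDoc(issuerDid, issuerKeyPair, 'Ed25519VerificationKey2018');

// Signing produces the `proof` section shown in the example above
const signedCredential = await issueCredential(keyDoc, unsignedCredential);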

Presentation

While creating the presentation, the holder signs it with his private key. For the verifier to verify the presentation, in addition to verifying the issuer's signature, he needs to verify this signature as well, and for that he must know the holder's public key. One way to achieve this is for the holder to have a DID too, so that the verifier can look up the DID on chain and learn the public key. Below is an example presentation signed by the holder:

{
    '@context': [ 'https://www.w3.org/2018/credentials/v1' ],
    type: [ 'VerifiablePresentation' ],
    verifiableCredential: [
      {
          '@context': [
            'https://www.w3.org/2018/credentials/v1',
            'https://www.w3.org/2018/credentials/examples/v1'
          ],
          id: 'A large credential id with size > 32 bytes',
          type: [ 'VerifiableCredential', 'AlumniCredential' ],
          issuanceDate: '2020-03-18T19:23:24Z',
          credentialSubject: {
            id: 'did:dock:5GnE6u2dt9nC7tgf5vSdKy4gYX3jwqthbrBnjiay2LWETdrV',
            alumniOf: 'Example University'
          },
          credentialStatus: {
            id: 'rev-reg:dock:0x0194db371bab472a9cc920b5dfb1447aad5a6db906c46ff378cf0fc337a0c8c0',
            type: 'CredentialStatusList2017'
          },
          issuer: 'did:dock:5CwAuM8cPetXWbZN2JhMFWtLjxZ6DokiDdHViGw2FfxC1Cya',
          proof: {
            type: 'Ed25519Signature2018',
            created: '2020-04-22T07:58:43Z',
            jws: 'eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..bENDgnK29BHRhP05ehbQkOPfqweppGyI7NeH02YT1hzSDEHseOzCDx-g9dS4lY-m_bElwbOptOlRnQ2g9MW7Ag',
            proofPurpose: 'assertionMethod',
            verificationMethod: 'did:dock:5CwAuM8cPetXWbZN2JhMFWtLjxZ6DokiDdHViGw2FfxC1Cya#keys-1'
          }
      }
    ],
    id: '0x4bd107aee17744dcec10208d7551620664dcba7e88ce11c2312c02df562754f1',
    proof: {
      type: 'Ed25519Signature2018',
      created: '2020-04-22T07:58:49Z',
      challenge: '0x6a5a5d58a99705c4d499fa7cdcdc62eeb2f742eb878456babf49b9a6669d0b76',
      domain: 'test domain',
      jws: 'eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..HW7bDjvsRETeM25a3BtMgER53FtzK6rUBX_46cFo-i6O1y7p_TM-ED2iSTrFBUrDc7vH8QqoeUTY8e5ir5RvCg',
      proofPurpose: 'authentication',
      verificationMethod: 'did:dock:5GnE6u2dt9nC7tgf5vSdKy4gYX3jwqthbrBnjiay2LWETdrV#keys-1'
    }
}

Revocation

If the credential is revocable, the issuer must specify how the revocation check must be done in the credentialStatus field. On Dock, credential revocation is managed with a revocation registry. There can be multiple registries on chain and each registry has a unique id. It is recommended that the revocation authority creates a new registry for each credential type. While issuing the credential, the issuer embeds the revocation registry's id in the credential in the credentialStatus field. Below is an example credential with a Dock revocation registry:

{
    '@context': [
      'https://www.w3.org/2018/credentials/v1',
      'https://www.w3.org/2018/credentials/examples/v1'
    ],
    id: 'A large credential id with size > 32 bytes',
    type: [ 'VerifiableCredential', 'AlumniCredential' ],
    issuanceDate: '2020-03-18T19:23:24Z',
    credentialSubject: {
      id: 'did:dock:5GnE6u2dt9nC7tgf5vSdKy4gYX3jwqthbrBnjiay2LWETdrV',
      alumniOf: 'Example University'
    },
    credentialStatus: {
      id: 'rev-reg:dock:0x0194db371bab472a9cc920b5dfb1447aad5a6db906c46ff378cf0fc337a0c8c0',
      type: 'CredentialStatusList2017'
    },
    issuer: 'did:dock:5CwAuM8cPetXWbZN2JhMFWtLjxZ6DokiDdHViGw2FfxC1Cya',
    proof: {
      type: 'Ed25519Signature2018',
      created: '2020-04-22T07:58:43Z',
      jws: 'eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..bENDgnK29BHRhP05ehbQkOPfqweppGyI7NeH02YT1hzSDEHseOzCDx-g9dS4lY-m_bElwbOptOlRnQ2g9MW7Ag',
      proofPurpose: 'assertionMethod',
      verificationMethod: 'did:dock:5CwAuM8cPetXWbZN2JhMFWtLjxZ6DokiDdHViGw2FfxC1Cya#keys-1'
    }
}

To revoke a credential, the revocation authority (which might be the same as the issuer) puts a hash of the credential id in the revocation registry. To check the revocation status of a credential, hash the credential id and query the registry id specified in the credential. The revocation of a credential can be undone if the revocation registry supports undoing. Moreover, currently each registry is owned by a single DID, so only that DID can revoke a credential or undo the revocation. In the future, Dock will support ownership of a registry by multiple DIDs and in different fashions, e.g. any one of the owner DIDs could revoke, or a threshold is needed, etc. To learn more about revocation registries, refer to the revocation section of the documentation.
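
A sketch of that status check (the blake2b-256 hashing below uses the blakejs package and is an assumption about the hash used; getIsRevoked is an assumed method name for the registry query, so consult the revocation module for the actual API):

import { blake2bHex } from 'blakejs';

// Registry id taken from the credential's credentialStatus.id
const registryId = '0x0194db371bab472a9cc920b5dfb1447aad5a6db906c46ff378cf0fc337a0c8c0';

// Hash the credential id to get the revoke id (assumed to be blake2b-256)
const revokeId = `0x${blake2bHex('A large credential id with size > 32 bytes', null, 32)}`;

// Query the registry; `getIsRevoked` is an assumed method name
const revoked = await dock.revocation.getIsRevoked(registryId, revokeId);
console.log('Credential revoked?', revoked);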

Schemas

Table of Contents

  1. Intro
  2. Blobs
  3. JSON Schemas
  4. Schemas in Verifiable Credentials

Intro to Schemas

Data schemas are useful when enforcing a specific structure on a collection of data, like a Verifiable Credential. Data Verification schemas, for example, are used to verify that the structure and contents of a Verifiable Credential conform to a published schema. Data Encoding schemas, on the other hand, are used to map the contents of a Verifiable Credential to an alternative representation format, such as a binary format used in a zero-knowledge proof. Data schemas serve a different purpose than that of the @context property in a Verifiable Credential; the latter neither enforces data structure nor data syntax, nor does it enable the definition of arbitrary encodings to alternate representation formats.

Blobs

Before diving further into Schemas, it is important to understand the way these are stored on the Dock chain. Schemas are stored on chain as a Blob in the Blob Storage module. They are identified and retrieved by their unique blob id, a 32 byte long hex string. They are authored by a DID and have a max size of 8192 bytes. The chain is agnostic to the contents of blobs and thus to schemas. Blobs may be used to store types of data other than schemas.

JSON Schemas

JSON Schema can be used to require that a given JSON document (an instance) satisfies a certain number of criteria. JSON Schema validation asserts constraints on the structure of instance data. An instance location that satisfies all asserted constraints is then annotated with any keywords that contain non-assertion information, such as descriptive metadata and usage hints. If all locations within the instance satisfy all asserted constraints, then the instance is said to be valid against the schema. Each schema object is independently evaluated against each instance location to which it applies. This greatly simplifies the implementation requirements for validators by ensuring that they do not need to maintain state across the document-wide validation process. More about JSON schemas can be found here and here.

Let's see an example JSON schema definition:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "description": "Alumni",
  "type": "object",
  "properties": {
    "emailAddress": {
      "type": "string",
      "format": "email"
    },
    "alumniOf": {
      "type": "string"
    }
  },
  "required": ["emailAddress", "alumniOf"],
  "additionalProperties": false
}

In our context, these schemas are stored on-chain as a blob, which means they have a Blob Id as id and a DID as author:

{
   "id": "blob:dock:1DFdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW",
   "author": "did:dock:5CEdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW",
   "schema": {
      "$schema": "http://json-schema.org/draft-07/schema#",
      "description": "Alumni",
      "type": "object",
      "properties": {
        "emailAddress": {
          "type": "string",
          "format": "email"
        },
        "alumniOf": {
          "type": "string"
        }
      },
      "required": ["emailAddress", "alumniOf"],
      "additionalProperties": false
    }
}

Had we referenced this JSON schema from within a Verifiable Credential, validation would fail if the credentialSubject didn't contain an emailAddress field, or if it weren't a string formatted as an email, or if it didn't contain a property alumniOf with type string. It would also fail if a subject contained other properties not listed here (except for the id property, which is popped out before validation).
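
To make the validation behaviour concrete, here is a minimal sketch using the generic ajv JSON Schema validator (an illustration only, not how the SDK does it internally; alumniSchema is assumed to hold the schema object shown above):

import Ajv from 'ajv';
import addFormats from 'ajv-formats';

const ajv = new Ajv();
addFormats(ajv); // needed for "format": "email" in Ajv v8+

const validate = ajv.compile(alumniSchema);

// Passes: both required fields are present and well-formed
console.log(validate({ emailAddress: 'john.smith@example.com', alumniOf: 'Example University' }));

// Fails: emailAddress is not formatted as an email
console.log(validate({ emailAddress: 'not-an-email', alumniOf: 'Example University' }));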

Schemas in Verifiable Credentials

In pursuit of extensibility, VCDM makes an Open World Assumption; a credential can state anything. Schemas allow issuers to "opt-out" of some of the freedom VCDM allows. Issuers can concretely limit what a given credential will claim. In a closed world, a verifier can rely on the structure of a credential to enable new types of credential processing e.g. generating a complete and human-friendly graphical representation of a credential.

The Verifiable Credentials Data Model specifies the models used for Verifiable Credentials and Verifiable Presentations, and explains the relationships between three parties: issuer, holder, and verifier. A critical piece of infrastructure out of the scope of those specifications is the Credential Schema. This specification provides a mechanism to express a Credential Schema and the protocols for evolving the schema.

Following our example above, we could use the current SDK to store the Email schema above as a Blob in the Dock chain. Assuming we did that and our schema was stored as blob:dock:1DFdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW, we can use it in a Verifiable Credential as follows:

"credentialSchema": {
  "id": "blob:dock:1DFdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW",
  "type": "JsonSchemaValidator2018"
}

The following is an example of a valid Verifiable Credential using the above schema:

{
   "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
   ],
   "id": "uuid:0x9b561796d3450eb2673fed26dd9c07192390177ad93e0835bc7a5fbb705d52bc",
   "type": [
      "VerifiableCredential",
      "AlumniCredential"
   ],
   "issuanceDate": "2020-03-18T19:23:24Z",
   "credentialSchema": {
      "id": "blob:dock:1DFdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW",
      "type": "JsonSchemaValidator2018"
   },
   "credentialSubject": {
      "id": "did:dock:5GL3xbkr3vfs4qJ94YUHwpVVsPSSAyvJcafHz1wNb5zrSPGi",
      "emailAddress": "john.smith@example.com",
      "alumniOf": "Example University"
   },
   "issuer": "did:dock:5GUBvwnV6UyRWZ7wjsBptSquiSHGr9dXAy8dZYUR9WdjmLUr",
   "proof": {
      "type": "Ed25519Signature2018",
      "created": "2020-04-22T07:50:13Z",
      "jws": "eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..GBqyaiTMhVt4R5P2bMGcLNJPWEUq7WmGHG7Wc6mKBo9k3vSo7v7sRKwqS8-m0og_ANKcb5m-_YdXC2KMnZwLBg",
      "proofPurpose": "assertionMethod",
      "verificationMethod": "did:dock:5GUBvwnV6UyRWZ7wjsBptSquiSHGr9dXAy8dZYUR9WdjmLUr#keys-1"
   }
}

In contrast, the following is an example of an invalid Verifiable Credential:

{
   "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
   ],
   "id": "uuid:0x9b561796d3450eb2673fed26dd9c07192390177ad93e0835bc7a5fbb705d52bc",
   "type": [
      "VerifiableCredential",
      "AlumniCredential"
   ],
   "issuanceDate": "2020-03-18T19:23:24Z",
   "credentialSchema": {
      "id": "blob:dock:1DFdyZkZnALDdCAp7crTRiaCq6KViprTM6kHUQCD8X6VqGPW",
      "type": "JsonSchemaValidator2018"
   },
   "credentialSubject": [
      {
        "id": "did:dock:5GL3xbkr3vfs4qJ94YUHwpVVsPSSAyvJcafHz1wNb5zrSPGi",
        "emailAddress": "john.smith@example.com",
        "alumniOf": "Example University"
      },
      {
        "id": "did:dock:6DF3xbkr3vfs4qJ94YUHwpVVsPSSAyvJcafHz1wNb5zrSPGi",
      }

   ],
   "issuer": "did:dock:5GUBvwnV6UyRWZ7wjsBptSquiSHGr9dXAy8dZYUR9WdjmLUr",
   "proof": {
      "type": "Ed25519Signature2018",
      "created": "2020-04-22T07:50:13Z",
      "jws": "eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..GBqyaiTMhVt4R5P2bMGcLNJPWEUq7WmGHG7Wc6mKBo9k3vSo7v7sRKwqS8-m0og_ANKcb5m-_YdXC2KMnZwLBg",
      "proofPurpose": "assertionMethod",
      "verificationMethod": "did:dock:5GUBvwnV6UyRWZ7wjsBptSquiSHGr9dXAy8dZYUR9WdjmLUr#keys-1"
   }
}

The reason this last credential is invalid is that only one of the subjects properly follows the schema; the second subject does not specify the fields emailAddress and alumniOf, which are required.

Claim Deduction

The verifiable credentials data model is based on a machine comprehensible language called RDF. RDF represents arbitrary semantic knowledge as graphs. Computers can perform automatic deductive reasoning over RDF; given assumptions (represented as an RDF graph) and axioms (represented as logical rules), a computer can infer new conclusions and even prove them to other computers using deductive derivations (proofs).

Every VCDM credential is representable as an RDF graph. So computers can reason about them, deriving new conclusions that weren't explicitly stated by the issuer.

The Dock SDK exposes utilities for primitive deductive reasoning over verified credentials. The verifier has a choice: perform the deduction themselves (expensive), or offload that responsibility to the presenter of the credential[s] by accepting deductive proofs of composite claims.

In RDF, if graph A is true and graph B is true, then the union of those graphs is also true: A∧B -> A∪B [1]. Using this property we can combine multiple credentials and reason over their union.

Explicit Ethos

Imagine a signed credential issued by Alice claiming that Joe is a Member.

{
  ...
  "issuer": "Alice",
  "credentialSubject": {
    "id": "Joe",
    "@type": "Member"
  },
  "proof": ...,
  ...
}

The credential does not directly prove that Joe is a Member. Rather, it proves that Alice claims Joe to be a Member.

Not proven:

<Joe> <type> <Member> .

Proven:

<Joe> <type> <Member> <Alice> .

The fourth and final element of the proven quad is used here to indicate the source of the information, Alice. The final element of a quad is its graph name.

Signed credentials are ethos arguments, and a credential may be converted to a list of quads (a claimgraph). We call this representation "Explicit Ethos" form. If a credential is verified, then its Explicit Ethos form is true.

Rule Format

To perform reasoning and to accept proofs, the verifier must select the list of logical rules they wish to accept. Rules (or axioms if you prefer) are modeled as if-then relationships.

const rules = [
  {
    if_all: [],
    then: [],
  },
];

During reasoning, when an if_all pattern is matched, its corresponding then pattern will be implied. In logic terms, each "rule" is the conditional premise of a modus ponens.

{ if_all: [A, B, C], then: [D, E] } means that if (A and B and C) then (D and E).

Rules can contain Bound or Unbound entities. Unbound entities are named variables. Each rule has its own unique scope, so Unbound entities introduced in the if_all pattern can be used in the then pattern.

{
  if_all: [
    [
      { Bound: alice },
      { Bound: likes },
      { Unbound: 'thing' },
      { Bound: defaultGraph },
    ],
  ],
  then: [
    [
      { Bound: bob },
      { Bound: likes },
      { Unbound: 'thing' },
      { Bound: defaultGraph },
    ],
  ],
}

means

For any ?thing:
  if [alice likes ?thing]
  then [bob likes ?thing]

in other words: ∀ thing: [alice likes thing] -> [bob likes thing]

If an unbound variable appears in the then pattern but does not appear in the if_all pattern, the rule is considered invalid and will be rejected by the reasoner.
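
For example, the following rule would be rejected because 'thing' appears in the then pattern but is never introduced in if_all:

// Invalid: 'thing' is unbound in `then` but never bound by `if_all`
{
  if_all: [
    [
      { Bound: alice },
      { Bound: likes },
      { Bound: bob },
      { Bound: defaultGraph },
    ],
  ],
  then: [
    [
      { Bound: bob },
      { Bound: likes },
      { Unbound: 'thing' },
      { Bound: defaultGraph },
    ],
  ],
}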

Bound entities are constants of type RdfTerm. RDF nodes may be one of four things: an IRI, a blank node, a literal, or the default graph. For those familiar with algebraic datatypes:

enum RdfNode {
  Iri(Url),
  Blank(String),
  Literal {
    value: String,
    datatype: Url,
  },
  DefaultGraph,
}

The SDK represents RDF nodes like so:

const alice = { Iri: 'did:sample:alice' };
const literal = {
  Literal: {
    value: '{}',
    datatype: 'http://www.w3.org/1999/02/22-rdf-syntax-ns#JSON',
  }
};
// blank nodes are generally not useful in rule definitions
const blank = { Blank: '_:b0' };
const defaultGraph = { DefaultGraph: true };

Here is an example of a complete rule definition:

{
  if_all: [
    [
      { Unbound: 'food' },
      { Bound: { Iri: 'https://example.com/contains' } },
      { Bound: { Iri: 'https://example.com/butter' } },
      { Bound: { DefaultGraph: true } }
    ],
    [
      { Unbound: 'person' },
      { Bound: { Iri: 'http://xmlns.com/foaf/0.1/name' } },
      { Bound: { Literal: {
        value: 'Bob',
        datatype: 'http://www.w3.org/1999/02/22-rdf-syntax-ns#PlainLiteral',
      } } },
      { Bound: { DefaultGraph: true } }
    ],
  ],
  then: [
    [
      { Unbound: 'person' },
      { Bound: { Iri: 'https://example.com/likes' } },
      { Unbound: 'food' },
      { Bound: { DefaultGraph: true } }
    ]
  ],
}
// all things named "Bob" like all things containing butter

See the claim deduction tutorial for another example.

Limited Expressiveness

The astute among you may notice the SDK's model for rules does not allow logical negation. This is by design. For one, it keeps the rule description language from being Turing complete so inference time is always bounded. Secondly, RDF chooses the Open World Assumption, so the absence of any particular statement in a credential/claimgraph is not meaningful within RDF semantics.

The rule language is expected to be expressive enough to implement OWL 2 EL but not OWL 1 DL.

Terms

  • Verifier: The party that accepts and checks VCDM credential[s].
  • Issuer: The party that signed a VCDM credential.
  • VCDM: Verifiable Credentials Data Model
  • RDF: A model for representing general knowledge in a machine friendly way.
  • RDF triple: A single sentence consisting of subject, predicate and object. Each element of the triple is an RDF node.
  • RDF quad: A single sentence consisting of subject, predicate, object, graph. Each element of the quad is an RDF term.
  • RDF graph: A directed, labeled graph with RDF triples as edges.
  • RDF node: an IRI, a blank node, a literal, or the default graph.
  • Composite Claim: An RDF triple which was inferred, rather than stated explicitly in a credential.
  • Explicit Ethos statement: A statement of the form "A claims X." where X is also a statement. Explicit Ethos is encodable in natural human languages as well as in RDF.
[1] If you ever decide to implement your own algorithm to merge RDF graphs, remember that blank nodes exist and may need to be renamed depending on the type of graph representation in use.

Anchoring

The Dock blockchain includes a module explicitly intended for proof of existence (PoE). Aside from being explicitly supported by the on-chain runtime, it works the same way you would expect: you post the hash of a document on-chain at a specific block, and later you can use that hash to prove the document existed at or before that block.

The PoE module accepts arbitrary bytes as an anchor, but in order to keep the anchor size constant the chain stores only the blake2b256 hash of those bytes.
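
For instance, the digest the chain would store for a given document can be computed like this (a sketch using the blakejs package; the document bytes are a placeholder):

import { blake2b } from 'blakejs';

// Compute the blake2b-256 digest the chain stores for an anchor
const documentBytes = new TextEncoder().encode('my important document');
const anchor = blake2b(documentBytes, null, 32); // 32-byte (256-bit) digest
console.log(Buffer.from(anchor).toString('hex'));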

Developers are free to use the anchoring module however they want, tailoring their software to their own use case. An anchoring example can be found in the sdk examples directory. Dock provides a fully functioning reference client for anchoring. The client even implements batching anchors into a Merkle tree so you can anchor multiple documents in a single transaction.

Private Delegation

Claim Deduction rules can express delegation of authority to issue credentials! It's expected to be a common enough use case that Dock has declared some RDF vocabulary and associated claim deduction rules to aid potential delegators.

An issuer may grant delegation authority to another issuer simply by issuing them a VCDM credential. Let's say did:ex:a wants to grant delegation authority to did:ex:b. did:ex:a simply issues a credential saying that did:ex:b may make any claim.

{
  "@context": [ "https://www.w3.org/2018/credentials/v1" ],
  "id": "urn:uuid:9b472d4e-492b-49f7-821c-d8c91e7fe767",
  "type": [ "VerifiableCredential" ],
  "issuer": "did:dock:a",
  "credentialSubject": {
    "id": "did:dock:b",
    "https://rdf.dock.io/alpha/2021#mayClaim": "https://rdf.dock.io/alpha/2021#ANYCLAIM"
  },
  "issuanceDate": "2021-03-18T19:23:24Z",
  "proof": { ... }
}

When did:ex:b wishes to issue a credential on behalf of did:ex:a, they should bundle it (e.g. in a presentation) with this "delegation" credential. A delegation credential constitutes a proof of delegation. A proof of delegation bundled with a credential issued by the delegate can prove that some statement[s] were made by the authority of some root delegator.

In order to process delegated credentials a verifier accepts a bundle. The bundle includes both delegations and credentials issued by delegates. After verifying every credential within the bundle (including the delegations) the verifier uses Claim Deduction to determine which statements are proven by the delegated credential.

Dock's delegation ontology (i.e. rdf vocabulary) and ruleset are currently in alpha. See Private Delegation for an example of their use.

Public Attestation

This feature should be considered Alpha.

RFC

VCDM verifiable credentials are a way to prove an attestation. Valid credentials prove statements of the form Issuer claims X, where X is itself a statement. One property of verifiable credentials is that the holder may keep them private simply by not sharing them with other parties. That property will be sometimes useful, sometimes not. VCDM credentials are private and therefore not automatically discoverable, but Public Attestations give a decentralized identity the ability to post claims that are discoverable by any party. For Dock DIDs, attestations are linked on-chain, but Public Attestations are not specific to Dock. Other DID methods can implement public attestations by including them in DID documents.

Public Attestations are posted as RDF documents. Since RDF can represent, or link to, arbitrary types of data, Public Attestations can be used to publish arbitrary content.

Data Model

Public Attestations live in the DID document of their poster. A DID with a public attestation will have an extra property, "https://rdf.dock.io/alpha/2021#attestsDocumentContent". The value of that property is an IRI that is expected to point to an RDF document. Any statement contained in that document is considered to be a claim made by the DID.

If DID attestsDocumentContent DOC then for every statement X in DOC DID claims X.

Two IRI schemes are supported for pointing to attested documents: DIDs and IPFS links. DIDs are dereferenced and interpreted as json-ld. IPFS links are dereferenced and interpreted as turtle documents. The SDK makes it easy to dereference DID and IPFS attestation documents, but the Public Attestation concept is extendable to other types of IRI, like hashlinks or data URIs.

For Dock DIDs, public attestations are made by setting the attestation for the DID on-chain. Changing the value of an attestation effectively revokes the previous attestation and issues a new one. A DID's attestation can also be set to None, which is equivalent to attesting an empty claimgraph. Dock DIDs have their attestation set to None by default. A Dock DID with attestation set to None will not contain the attestsDocumentContents key.

Example of a DID attesting to a document in IPFS

did:ex:ex:

{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:ex:ex",
  "https://rdf.dock.io/alpha/2021#attestsDocumentContent": {
    "@id": "ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p"
  }
}

Content of ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p:

<https://www.wikidata.org/wiki/Q25769>
  <https://www.wikidata.org/wiki/Property:P171>
  <https://www.wikidata.org/wiki/Q648422> .

From these documents we can derive two facts. The first fact is encoded directly in the DID document.

Fact 1:

# `did:ex:ex` attests to the content of `ipfs://Qmeg1..`
<did:ex:ex> <https://rdf.dock.io/alpha/2021#attestsDocumentContent> <ipfs://Qmeg1..> .

The second fact is inferred. Since we know the content of ipfs://Qmeg1.., we know that ipfs://Qmeg1.. contains the statement wd:Q25769 wd:Property:P171 wd:Q648422 (the Short-eared Owl is in the genus "Asio"). did:ex:ex attests to the document ipfs://Qmeg1.. and ipfs://Qmeg1.. states that the Short-eared Owl is in the genus "Asio", therefore:

Fact 2:

@prefix wd: <https://www.wikidata.org/wiki/> .
# `did:ex:ex` claims that the Short-eared Owl is in the genus "Asio".
wd:Q25769 wd:Property:P171 wd:Q648422 <did:ex:ex> .

Example of a DID attesting to multiple documents

While it is valid for DIDs to include multiple attested IRIs in a single DID document, Dock artificially limits the number of attestations to one per Dock DID. This is to encourage off-chain (IPFS) data storage. If a DID wishes to attest to multiple documents, there are two suggested options: 1) merge the two documents into a single document or 2) attest to a single document which in turn notes an attestsDocumentContents for each of its children. The following is an example of option 2.

did:ex:ex:

{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:ex:ex",
  "https://rdf.dock.io/alpha/2021#attestsDocumentContent": {
    "@id": "ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p"
  }
}

ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p:

<did:ex:ex>
  <https://rdf.dock.io/alpha/2021#attestsDocumentContent>
  <ipfs://QmXoypizjW3WknFiJnLLwHCnL72vedxjQkDDP1mXWo6uco> . # document1
<did:ex:ex>
  <https://rdf.dock.io/alpha/2021#attestsDocumentContent>
  <ipfs://QmdycyxM3r882pHx3M63Xd8NUfsXoEmBnU8W6PgL9eY9cN> . # document2

Uses

Two properties of RDF have the potential to supercharge Public Attestations.

  1. It's a semantic knowledge representation; it can be reasoned over.
  2. It's queryable in its native form.

Via these properties the SDK implements a "Curious Agent". The Curious Agent seeks out information. It starts with an initial kernel of knowledge (an RDF dataset) and follows a sense of curiosity, gradually building its knowledge graph by dereferencing IRIs, stopping when it finds nothing new to be curious about. As it crawls, it reasons over the information it has found, deducing new facts, which may in turn spark new curiosity. The Curious Agent accepts its curiosity as SPARQL queries. The logical rules it uses to reason are also configurable; axioms are provided to the Agent as conjunctive if-then statements (like in Claim Deduction). Within the SDK, the Curious Agent is simply called crawl().

The Curious Agent is sometimes referred to as "the crawler".

The use-case that drove implementation of the crawler is the search for publicly posted delegation information. As such, a bare minimum of functionality is implemented by crawl(). Want more? Consider contacting us.

Public Delegation

This feature should be considered Alpha.

RFC

We combine Private Delegation and Public Attestation to get Public Delegation.

When a delegation is attested via a credential, we call that a Private Delegation. As discussed in the previous section, attestations can be made in other ways. When a delegation is attested publicly, we call it a Public Delegation.

Public Delegations remove the need for credential holders to manage and present delegation chains. With Public Delegations, credential verifiers may look up delegation information out-of-band.

Just like in Private Delegation, verified delegation information constitutes a knowledge graph that can be merged with the knowledge graph from a verified credential. The merged graphs are reasoned over to determine which facts are proven true.

Example

Let's say there is a trusted root issuer, did:ex:root. did:ex:root may delegate to others the authority to make claims on its behalf. To do so, did:ex:root would attest to a claimgraph like this one:

ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p:

@prefix dockalpha: <https://rdf.dock.io/alpha/2021#> .
<did:ex:delegate1> dockalpha:mayClaim dockalpha:ANYCLAIM .
<did:ex:delegate2> dockalpha:mayClaim dockalpha:ANYCLAIM .

When did:ex:root attests to the above triples, the following dataset is true.

@prefix dockalpha: <https://rdf.dock.io/alpha/2021#> .
<did:ex:delegate1> dockalpha:mayClaim dockalpha:ANYCLAIM <did:ex:root> .
<did:ex:delegate2> dockalpha:mayClaim dockalpha:ANYCLAIM <did:ex:root> .

did:ex:root may attest to ipfs://Qmeg1Hq... by adding the ipfs link to its DID document.

{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did:ex:root",
  "https://rdf.dock.io/alpha/2021#attestsDocumentContent": {
    "@id": "ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p"
  }
}

By modifying its DID document to include the ipfs link, did:ex:root attests to the delegation publicly.

Tutorials

  1. DID
  2. Revocation
  3. Verifiable credentials
  4. Blobs and Schemas
  5. EVM integration

DID

If you are not familiar with DIDs, you can get a conceptual overview here.

Overview

DIDs in Dock are created by choosing a 32 byte identifier that is unique on the Dock chain, along with 1 or more public keys or controllers. Public keys can be added or removed with a signature from the DID's controller (which may be the DID itself), using a key with the capabilityInvocation verification relationship.

The DID can also be removed by providing a signature from the DID's controller.

The chain-state stores a few things for a DID: the active public keys, the controllers, the service endpoints and the current nonce of the DID. The nonce starts as the block number at which the DID was created, and each subsequent action, like adding/removing a key for itself or any DID it controls, adding a blob, etc, should supply a nonce 1 higher than the previous one.

This is done for replay protection; this detail is however hidden in the API, so the caller should not have to worry about it.

DID creation

Create a new random DID.

import {createNewDockDID} from '@docknetwork/sdk/utils/did';

const did = createNewDockDID();

The DID is not yet registered on the chain. Before the DID can be registered, a public key needs to be created as well.

Public key creation

Dock supports 3 kinds of public keys, Sr25519, Ed25519 and EcdsaSecp256k1. These public keys are supported through 3 classes, PublicKeySr25519, PublicKeyEd25519 and PublicKeySecp256k1 respectively.

These 3 classes extend from the same class called PublicKey. They can be instantiated directly by passing the public key as hex-encoded bytes.

import {PublicKeySr25519, PublicKeyEd25519, PublicKeySecp256k1} from '@docknetwork/sdk/api';

const pk1 = new PublicKeySr25519(bytesAsHex);
const pk2 = new PublicKeyEd25519(bytesAsHex);
const pk3 = new PublicKeySecp256k1(bytesAsHex);

Or they can be created by first creating a keyring

import {PublicKeySr25519, PublicKeyEd25519} from '@docknetwork/sdk/api';

// Assuming you had a keyring, you can create keypairs or use already created keypairs
const pair1 = keyring.addFromUri(secretUri, someMetadata, 'ed25519');
const pk1 = PublicKeyEd25519.fromKeyringPair(pair1);

const pair2 = keyring.addFromUri(secretUri2, someMetadata, 'sr25519');
const pk2 = PublicKeySr25519.fromKeyringPair(pair2);

The polkadot-js keyring does not support ECDSA with secp256k1, so there is a function generateEcdsaSecp256k1Keypair that takes some entropy and generates a keypair.

import { generateEcdsaSecp256k1Keypair } from '@docknetwork/sdk/utils/misc';
import {PublicKeySecp256k1} from '@docknetwork/sdk/api';
// The pers and entropy are optional but must be used when keys need to be deterministic
const pair3 = generateEcdsaSecp256k1Keypair(pers, entropy);
const pk3 = PublicKeySecp256k1.fromKeyringPair(pair3);

Or you can directly pass any of the above keypairs to the function getPublicKeyFromKeyringPair and it will return an object of the proper child class of PublicKey:

import { getPublicKeyFromKeyringPair } from '@docknetwork/sdk/utils/misc';
const publicKey = getPublicKeyFromKeyringPair(pair);

Registering a new DID on chain

Now that you have a DID and a public key, the DID can be registered on the Dock chain. Note that the public key associated with the DID is independent of the key used for sending the transaction and paying the fees.

Self-controlled DIDs

In most cases, a DID will have its own keys and will control itself, i.e. a self-controlled DID. Following is an example of DID creation in this scenario.

  1. First create a DidKey object. The first argument of this constructor is a PublicKey and the second argument is the verification relationship. A verification relationship can be 1 or more of these: authentication, assertion, capabilityInvocation or keyAgreement.

    import { DidKey, VerificationRelationship } from '@docknetwork/sdk/public-keys';
    const didKey = new DidKey(publicKey, new VerificationRelationship());
    
  2. Now submit the transaction using a DockAPI object and the newly created DID did and didKey.

    await dock.did.new(did, [didKey], []);
    
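Putting the pieces together, a minimal end-to-end sketch of registering a self-controlled DID (assuming a connected, account-initialized dock instance as shown in earlier sections):

import { createNewDockDID } from '@docknetwork/sdk/utils/did';
import { getPublicKeyFromKeyringPair } from '@docknetwork/sdk/utils/misc';
import { DidKey, VerificationRelationship } from '@docknetwork/sdk/public-keys';

// Generate a new random DID and a keypair to control it
const did = createNewDockDID();
const pair = dock.keyring.addFromUri('//Alice//did', null, 'ed25519');
const publicKey = getPublicKeyFromKeyringPair(pair);

// Wrap the public key and its verification relationship, as in step 1
const didKey = new DidKey(publicKey, new VerificationRelationship());

// Register the DID on chain as self-controlled (no extra controllers)
await dock.did.new(did, [didKey], []);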

Keyless DIDs

A DID might not have any keys and thus be controlled by other DIDs. Assuming a DID did1 already exists, it can register a keyless DID did2 as:

await dock.did.new(did2, [], [did1]);

Moreover, a DID can have keys for certain functions like authentication but still be controlled by other DID(s).

Fetching a DID from chain

To get a DID document, use getDocument:

const result = await dock.did.getDocument(did);

Adding a key to an existing DID

A DID's controller can add a public key to an on-chain DID by preparing a signed payload. Each new key is given a numeric key index which is 1 greater than the last used index. Key indices start from 1.

  1. Create a new public key and use the current keypair to sign the message
    // the current pair; it's an sr25519 in this example
    const currentPair = dock.keyring.addFromUri(secretUri, null, 'sr25519');
    const newPk = ...; // a new public key, created using any of the above methods
    
  2. The caller might directly create a signed key update
    const vr = new VerificationRelationship();
    // This new key can only be used for issuance.
    vr.setAssertion();
    const newDidKey = new DidKey(newPk, vr);
    
  3. Now send the signed payload to the chain in a transaction. In the arguments, the first did specifies that a key must be added to DID did and the second did specifies that DID did is signing the payload. The 1 below is the key index of the signing key.
    dock.did.addKeys([newDidKey], did, did, currentPair, 1, undefined, false);
    

Removing an existing DID from chain

A DID can be removed from the chain by sending the corresponding message signed with an appropriate key.

  1. Fetch the current keypair to sign the DID removal message
    // the current pair; it's an sr25519 in this example
    const currentPair = dock.keyring.addFromUri(secretUri, null, 'sr25519');
    
  2. Now send the message with the signature to the chain in a transaction
    dock.did.remove(did, did, currentPair);
    

For more details, see the example in examples/dock-did.js or the integration tests.

Note that the accounts used to send the transactions are independent of the keys associated with the DID.

So the DID could have been created with one account, updated with another account and removed with yet another account.

The accounts are not relevant in the data model and are not associated with the DID in the chain-state.

DID resolver

The process of learning the DID Document of a DID is called DID resolution, and the tool that does the resolution is called a resolver.

Resolution involves looking at the DID method and then fetching the DID Document from the registry; the registry might be a centralized database or a blockchain.

The SDK supports resolving Dock DIDs natively. For other DIDs, resolving the DID through the Universal Resolver is supported.

Each resolver should extend the class DIDResolver and implement the resolve method that accepts a DID and returns the DID document.

There is another class called MultiResolver that can accept several types of resolvers (objects of subclasses of DIDResolver) and once the MultiResolver is initialized with the resolvers of different DID methods, it can resolve DIDs of those methods.

Dock resolver

The resolver for Dock DIDs, DockResolver, connects to the Dock blockchain to get the DID details.

The resolver is constructed by passing it a Dock API object so that it can connect to a Dock node. This is how you resolve a Dock DID:

import { DockResolver } from "@docknetwork/sdk/resolver";

// Assuming the presence of Dock API object `dock`
const dockResolver = new DockResolver(dock);
// Say you had a DID `did:dock:5D.....`
const didDocument = await dockResolver.resolve("did:dock:5D.....");

Creating a resolver class for a different method

If you want to resolve DIDs other than Dock and do not have/want access to the universal resolver, you can extend the DIDResolver class to derive a custom resolver.

Following is an example to build a custom Ethereum resolver. It uses the library ethr-did-resolver and accepts a provider information as configuration. The example below uses Infura to get access to an Ethereum node and read the DID off Ethereum.

import { DIDResolver } from "@docknetwork/sdk/resolver";
import { NoDIDError } from "@docknetwork/sdk/utils/did"; // Thrown when a DID cannot be resolved
import ethr from "ethr-did-resolver";

// Infura's Ethereum provider for the main net.
const ethereumProviderConfig = {
  networks: [
    {
      name: "mainnet",
      rpcUrl: "https://mainnet.infura.io/v3/blahblahtoken",
    },
  ],
};

// Custom ethereum resolver class
class EtherResolver extends DIDResolver {
  static METHOD = "ethr";

  constructor(config) {
    super();
    this.ethres = ethr.getResolver(config).ethr;
  }

  async resolve(did) {
    const parsed = this.parseDid(did);
    try {
      // Await here so a rejected promise is caught by the catch below
      return await this.ethres(did, parsed);
    } catch (e) {
      throw new NoDIDError(did);
    }
  }
}

// Construct the resolver
const ethResolver = new EtherResolver(ethereumProviderConfig);

// Say you had a DID `did:ethr:0x6f....`
const didDocument = await ethResolver.resolve("did:ethr:0x6f....");

Universal resolver

To resolve DIDs using the Universal Resolver, use the UniversalResolver class. It needs the URL of a running instance of the universal resolver codebase (https://github.com/decentralized-identity/universal-resolver).

import { UniversalResolver } from "@docknetwork/sdk/resolver";

// Change the resolver URL to something else in case you cannot use the resolver at https://uniresolver.io
const universalResolverUrl = "https://uniresolver.io";
const universalResolver = new UniversalResolver(universalResolverUrl);

// Say you had a DID `did:btcr:xk....`
const didDocument = await universalResolver.resolve("did:btcr:xk....");

Resolving DIDs of several DID methods with a single resolver

In case you need to resolve DIDs of more than one method, a DIDResolver can be created by passing resolvers for the various DID methods to the derived class constructor.

A derived DIDResolver without an overridden resolve accepts a list of resolvers, each of which will be dispatched to according to its prefix and method configuration. The resolvers array below has resolvers for the DID methods dock and ethr.

For DIDs of any other method, the UniversalResolver object will be used.

import { DockResolver, DIDResolver, UniversalResolver, WILDCARD } from "@docknetwork/sdk/resolver";

class MultiDIDResolver extends DIDResolver {
  static METHOD = WILDCARD;

  constructor(dock) {
    super([
      new DockResolver(dock),
      new EtherResolver(ethereumProviderConfig),
      new UniversalResolver(universalResolverUrl)
    ]);
  }
}

const multiResolver = new MultiDIDResolver(dock);

// Say you had a DID `did:dock:5D....`, then the `DockResolver` will be used as there is a resolver for Dock DIDs.
const didDocumentDock = await multiResolver.resolve("did:dock:5D....");

// Say you had a DID `did:btcr:xk....`, then the `UniversalResolver` will be used as there is no resolver for BTC DIDs.
const didDocumentBtc = await multiResolver.resolve("did:btcr:xk....");

Verifiable Credentials and Verifiable Presentations: issuing, signing and verification

Incremental creation and verification of Verifiable Credentials

The client-sdk exposes a VerifiableCredential class that is useful to incrementally create valid Verifiable Credentials of any type, sign them and verify them. Once the credential is initialized, you can sequentially call the different methods provided by the class to add contexts, types, issuance dates and everything else.

Building a Verifiable Credential

The first step to build a Verifiable Credential is to initialize it; we can do that using the VerifiableCredential class constructor, which takes a credentialId as its sole argument:

let vc = new VerifiableCredential('http://example.edu/credentials/2803');

You now have an unsigned Verifiable Credential in the vc variable! This Credential isn't signed since we only just initialized it. It brings however some useful defaults to make your life easier.

>    vc.context
<-   ["https://www.w3.org/2018/credentials/v1"]
>    vc.issuanceDate
<-   "2020-04-14T14:48:48.486Z"
>    vc.type
<-   ["VerifiableCredential"]
>    vc.credentialSubject
<-   []

The default context is an array with "https://www.w3.org/2018/credentials/v1" as first element. This is required by the VCDMv1 specs so having it as default helps ensure your Verifiable Credentials will be valid in the end.

A similar approach was taken on the type property, where the default is an array with "VerifiableCredential" already populated. This is also required by the specs. The subject property is required to exist, so this is already initialized for you as well, although it is empty for now. Finally, the issuanceDate is set to the moment you initialized the VerifiableCredential object. You can change this later if desired, but it helps to have it in the right format from the get-go.

We could also have checked those defaults more easily by checking the Verifiable Credential's JSON representation.

This can be achieved by calling the toJSON() method on it:

>    vc.toJSON()
<-   {
       "@context": [ "https://www.w3.org/2018/credentials/v1" ],
       "credentialSubject": [],
       "id": "http://example.edu/credentials/2803",
       "type": [
         "VerifiableCredential"
       ],
       "issuanceDate": "2020-04-14T14:48:48.486Z"
     }

An interesting thing to note here is the transformation happening to some of the root level keys in the JSON representation of a VerifiableCredential object.

For example context gets transformed into @context and subject into credentialSubject.

This is to ensure compliance with the Verifiable Credential Data Model specs while at the same time providing you with a clean interface to the VerifiableCredential class in your code.

Once your Verifiable Credential has been initialized, you can proceed to use the rest of the building functions to define it completely before finally signing it.

Adding a Context

A context can be added with the addContext method. It accepts a single argument context which can either be a string (in which case it needs to be a valid URI), or an object:

>   vc.addContext('https://www.w3.org/2018/credentials/examples/v1')
>   vc.context
<-  [
      'https://www.w3.org/2018/credentials/v1',
      'https://www.w3.org/2018/credentials/examples/v1'
    ]

Adding a Type

A type can be added with the addType function. It accepts a single argument type that needs to be a string:

>   vc.addType('AlumniCredential')
>   vc.type
<-  [
      'VerifiableCredential',
      'AlumniCredential'
    ]

Adding a Subject

A subject can be added with the addSubject function. It accepts a single argument subject that needs to be an object with an id property:

>   vc.addSubject({ id: 'did:dock:123qwe123qwe123qwe', alumniOf: 'Example University' })
>   vc.credentialSubject
<-  {id: 'did:dock:123qwe123qwe123qwe', alumniOf: 'Example University'}

Setting a Status

A status can be set with the setStatus function. It accepts a single argument status that needs to be an object with an id property:

>   vc.setStatus({ id: "https://example.edu/status/24", type: "CredentialStatusList2017" })
>   vc.status
<-  {
        "id": "https://example.edu/status/24",
        "type": "CredentialStatusList2017"
    }

Setting the Issuance Date

The issuance date is set by default to the datetime you first initialize your VerifiableCredential object.

This means that you don't necessarily need to call this method to achieve a valid Verifiable Credential (which are required to have an issuanceDate property).

However, if you need to change this date you can use the setIssuanceDate method. It takes a single argument issuanceDate that needs to be a string with a valid ISO formatted datetime:

>   vc.issuanceDate
<-  "2020-04-14T14:48:48.486Z"
>   vc.setIssuanceDate("2019-01-01T14:48:48.486Z")
>   vc.issuanceDate
<-  "2019-01-01T14:48:48.486Z"

Setting an Expiration Date

An expiration date is not set by default as it isn't required by the specs. If you wish to set one, you can use the setExpirationDate method.

It takes a single argument expirationDate that needs to be a string with a valid ISO formatted datetime:

>   vc.setExpirationDate("2029-01-01T14:48:48.486Z")
>   vc.expirationDate
<-  "2029-01-01T14:48:48.486Z"

Signing a Verifiable Credential

Once you've crafted your Verifiable Credential it is time to sign it. This can be achieved with the sign method.

It requires a keyDoc parameter (an object with the params and keys you'll use for signing) and it also accepts a boolean compactProof that determines whether you want to compact the JSON-LD or not:

>   await vc.sign(keyDoc)

Please note that signing is an async process. Once done, your vc object will have a new proof field:

>   vc.proof
<-  {
        type: "EcdsaSecp256k1Signature2019",
        created: "2020-04-14T14:48:48.486Z",
        jws: "eyJhbGciOiJFUzI1NksiLCJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdfQ..MEQCIAS8ZNVYIni3oShb0TFz4SMAybJcz3HkQPaTdz9OSszoAiA01w9ZkS4Zx5HEZk45QzxbqOr8eRlgMdhgFsFs1FnyMQ",
        proofPurpose: "assertionMethod",
        verificationMethod: "https://gist.githubusercontent.com/faustow/13f43164c571cf839044b60661173935/raw"
    }

Verifying a Verifiable Credential

Once your Verifiable Credential has been signed you can proceed to verify it with the verify method. The verify method takes a single, optional object of arguments.

If you've used DIDs you need to pass a resolver for them. You can also use the boolean compactProof (to compact the JSON-LD).

If your credential uses the credentialStatus field, the credential will be checked not to be revoked unless you pass the skipRevocationCheck flag.
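
For example, a verification call for a credential that uses DIDs might look like this (a sketch; the resolver is constructed as shown in the resolver tutorial):

const result = await vc.verify({ resolver, compactProof: true });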

>   const result = await vc.verify({ ... })
>   result
<-  {
      verified: true,
      results: [
        {
          proof: [
            {
                '@context': 'https://w3id.org/security/v2',
                type: "EcdsaSecp256k1Signature2019",
                created: "2020-04-14T14:48:48.486Z",
                jws: "eyJhbGciOiJFUzI1NksiLCJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdfQ..MEQCIAS8ZNVYIni3oShb0TFz4SMAybJcz3HkQPaTdz9OSszoAiA01w9ZkS4Zx5HEZk45QzxbqOr8eRlgMdhgFsFs1FnyMQ",
                proofPurpose: "assertionMethod",
                verificationMethod: "https://gist.githubusercontent.com/faustow/13f43164c571cf839044b60661173935/raw"
            }
          ],
          verified: true
        }
      ]
    }

Please note that the verification is an async process that returns an object when the promise resolves. A boolean value for the entire verification process can be checked at the root level verified property.


Incremental creation and verification of Verifiable Presentations

The client-sdk exposes a VerifiablePresentation class that is useful to incrementally create valid Verifiable Presentations of any type, sign them and verify them.

Once the presentation is initialized, you can sequentially call the different methods provided by the class to add contexts, types, holders and credentials.

Building a Verifiable Presentation

The first step to build a Verifiable Presentation is to initialize it; we can do that using the VerifiablePresentation class constructor, which takes an id as its sole argument:

let vp = new VerifiablePresentation('http://example.edu/credentials/1986');

You now have an unsigned Verifiable Presentation in the vp variable!

This Presentation isn't signed since we only just initialized it. It brings however some useful defaults to make your life easier.

>    vp.context
<-   ["https://www.w3.org/2018/credentials/v1"]
>    vp.type
<-   ["VerifiablePresentation"]
>    vp.credentials
<-   []

The default context is an array with "https://www.w3.org/2018/credentials/v1" as first element. This is required by the VCDMv1 specs so having it as default helps ensure your Verifiable Presentations will be valid in the end.

A similar approach was taken on the type property, where the default is an array with "VerifiablePresentation" already populated. This is also required by the specs.

The credentials property is required to exist, so this is already initialized for you as well although it is empty for now.

We could also have checked those defaults more easily by checking the Verifiable Presentation's JSON representation.

This can be achieved by calling the toJSON() method on it:

>    vp.toJSON()
<-   {
       "@context": [ "https://www.w3.org/2018/credentials/v1" ],
       "id": "http://example.edu/credentials/1986",
       "type": [
         "VerifiablePresentation"
       ],
       "verifiableCredential": [],
     }

An interesting thing to note here is the transformation happening to some of the root level keys in the JSON representation of a VerifiablePresentation object.

For example context gets transformed into @context and credentials into verifiableCredential. This is to ensure compliance with the Verifiable Credentials Data Model specs while at the same time providing you with a clean interface to the VerifiablePresentation class in your code.

Once your Verifiable Presentation has been initialized, you can proceed to use the rest of the building functions to define it completely before finally signing it.

Adding a Context

A context can be added with the addContext method. It accepts a single argument context which can either be a string (in which case it needs to be a valid URI), or an object:

>   vp.addContext('https://www.w3.org/2018/credentials/examples/v1')
>   vp.context
<-  [
      'https://www.w3.org/2018/credentials/v1',
      'https://www.w3.org/2018/credentials/examples/v1'
    ]

Adding a Type

A type can be added with the addType function. It accepts a single argument type that needs to be a string:

>   vp.addType('CredentialManagerPresentation')
>   vp.type
<-  [
      'VerifiablePresentation',
      'CredentialManagerPresentation'
    ]

Setting a Holder

Setting a Holder is optional and can be achieved using the setHolder method. It accepts a single argument holder that needs to be a string (a URI for the entity that is generating the presentation):

>   vp.setHolder('https://example.com/credentials/1234567890');
>   vp.holder
<-  'https://example.com/credentials/1234567890'

Adding a Verifiable Credential

Your Verifiable Presentations can contain one or more Verifiable Credentials inside.

Adding a Verifiable Credential can be achieved using the addCredential method. It accepts a single argument credential that needs to be an object (a valid, signed Verifiable Credential):

>   vp.addCredential(vc);
>   vp.credentials
<-  [
      {...}
    ]

Please note that the example was truncated to enhance readability.

Signing a Verifiable Presentation

Once you've crafted your Verifiable Presentation and added your Verifiable Credentials to it, it is time to sign it.

This can be achieved with the sign method. It requires a keyDoc parameter (an object with the params and keys you'll use for signing), and a challenge string for the proof.

It also accepts a domain string for the proof, a resolver in case you're using DIDs and a boolean compactProof that determines whether you want to compact the JSON-LD or not:

>   await vp.sign(
          keyDoc,
          'some_challenge',
          'some_domain',
        );

Please note that signing is an async process. Once done, your vp object will have a new proof field:

>   vp.proof
<-  {
      "type": "EcdsaSecp256k1Signature2019",
      "created": "2020-04-14T20:57:01Z",
      "challenge": "some_challenge",
      "domain": "some_domain",
      "jws": "eyJhbGciOiJFUzI1NksiLCJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdfQ..MEUCIQCTTpivdcTKFDNdmzqe3l0nV6UjXgv0XvzCge--CTAV6wIgWfLqn_62U8jHkNSujrHFRmJ_ULj19b5rsNtjum09vbg",
      "proofPurpose": "authentication",
      "verificationMethod": "https://gist.githubusercontent.com/faustow/13f43164c571cf839044b60661173935/raw"
    }

Verifying a Verifiable Presentation

Once your Verifiable Presentation has been signed you can proceed to verify it with the verify method.

If you've used DIDs you need to pass a resolver for them. You can also use the boolean compactProof (to compact the JSON-LD).

If your credential uses the credentialStatus field, the credential will be checked not to be revoked unless you pass skipRevocationCheck. For the simplest cases you only need a challenge string and possibly a domain string:

>   const results = await vp.verify({ challenge: 'some_challenge', domain: 'some_domain' });
>   results
<-  {
      "presentationResult": {
        "verified": true,
        "results": [
          {
            "proof": {
              "@context": "https://w3id.org/security/v2",
              "type": "EcdsaSecp256k1Signature2019",
              "created": "2020-04-14T20:57:01Z",
              "challenge": "some_challenge",
              "domain": "some_domain",
              "jws": "eyJhbGciOiJFUzI1NksiLCJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdfQ..MEUCIQCTTpivdcTKFDNdmzqe3l0nV6UjXgv0XvzCge--CTAV6wIgWfLqn_62U8jHkNSujrHFRmJ_ULj19b5rsNtjum09vbg",
              "proofPurpose": "authentication",
              "verificationMethod": "https://gist.githubusercontent.com/faustow/13f43164c571cf839044b60661173935/raw"
            },
            "verified": true
          }
        ]
      },
      "verified": true,
      "credentialResults": [
        {
          "verified": true,
          "results": [
            {
              "proof": {
                "@context": "https://w3id.org/security/v2",
                "type": "EcdsaSecp256k1Signature2019",
                "created": "2020-04-14T20:49:00Z",
                "jws": "eyJhbGciOiJFUzI1NksiLCJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdfQ..MEUCIQCCCRuJbSUPePpOfkxsMJeQAqpydOFYWsA4cGiQRAR_QQIgehRZh8XE24hV0TPl5bMS6sNeKtC5rwZGfmflfY0eS-Y",
                "proofPurpose": "assertionMethod",
                "verificationMethod": "https://gist.githubusercontent.com/faustow/13f43164c571cf839044b60661173935/raw"
              },
              "verified": true
            }
          ]
        }
      ]
    }

Please note that the verification is an async process that returns an object when the promise resolves.

This object contains separate results for the verification processes of the included Verifiable Credentials and the overall Verifiable Presentation.

A boolean value for the entire verification process can be checked at the root level verified property.

Using DIDs

The examples shown above use different kinds of URIs as id property of different sections. It is worth mentioning that the use of DIDs is not only supported but also encouraged.

Their usage is very simple: create as many DIDs as you need and then use them instead of the URIs shown above.

For example, when adding a subject to a Verifiable Credential here, we're using a DID instead of a regular URI in the id property of the object: vc.addSubject({ id: 'did:dock:123qwe123qwe123qwe', alumniOf: 'Example University' }).

If you don't know how to create a DID there's a specific tutorial on DIDs you can read.

Bear in mind that you will need to provide a resolver method if you decide to use DIDs in your Verifiable Credentials or Verifiable Presentations. More on resolvers can be found in the tutorial on Resolvers.

Here's an example of issuing a Verifiable Credential using DIDs, provided that you've created a DID and stored it in issuerDID:

const issuerKey = getKeyDoc(issuerDID, dock.keyring.addFromUri(issuerSeed, null, 'ed25519'), 'Ed25519VerificationKey2018');
await vc.sign(issuerKey);
const verificationResult = await vc.verify({ resolver, compactProof: true });
console.log(verificationResult.verified); // Should print `true`

Creating a keyDoc

It can be seen from the above examples that signing credentials and presentations requires keypairs to be formatted into a keyDoc object.

There is a helper function for this formatting called getKeyDoc, located in the vc helpers.
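
Assuming the current layout of the SDK, it can be imported like so:

import { getKeyDoc } from '@docknetwork/sdk/utils/vc/helpers';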

Its usage is simple: it accepts a did string (a DID in fully qualified form), a keypair object (generated either with polkadot-js's keyring for Sr25519 and Ed25519, or with generateEcdsaSecp256k1Keypair for the secp256k1 curve) and a type string containing the type of the provided key (one of the supported 'Sr25519VerificationKey2020', 'Ed25519VerificationKey2018' or 'EcdsaSecp256k1VerificationKey2019'):

  const keyDoc = getKeyDoc(did, keypair, type)

Please check the example in the previous section or refer to the presentation integration tests for a live example.

Revocation

Overview

Credential revocation is managed with on-chain revocation registries. To revoke a credential, its id (or the hash of its id) must be added to the registry. It is advised to have one revocation registry per credential type. Each registry has a unique id and an associated policy; the policy determines who can update the revocation registry. The registry also has an "add-only" flag specifying whether an id, once added to the registry, can be removed (undoing the revocation) or not. Similar to the replay protection mechanism for DIDs, the last modified block number is kept for each registry and updated each time a credential is revoked or unrevoked. For now, only one policy is supported: each registry is owned by a single DID. Also, neither the policy nor the "add-only" flag can currently be updated after the creation of the registry.

Registry creation

To create a registry, first a Policy object needs to be created, for which a DID is needed. It is advised that the DID is registered on chain first (else someone can look at the registry, register the DID themselves and thus control the registry).

import { OneOfPolicy } from '@docknetwork/sdk/utils/revocation';

const policy = new OneOfPolicy();
policy.addOwner(ownerDID);

// Or, equivalently, in a single step:
// const policy = new OneOfPolicy([ownerDID]);

Now create a random registry id. The registry id is supposed to be unique among all registries on chain.

import {createRandomRegistryId} from '@docknetwork/sdk/utils/revocation';
const registryId = createRandomRegistryId();

Now send the transaction to create a registry on-chain using dock.revocation.newRegistry. This method accepts the registry id, the policy object and a boolean that specifies whether the registry is add-only. If true, the registry is add-only and undoing revocations is not allowed; if false, undoing is allowed.

// Setting the last argument to false to allow unrevoking the credential (undoing revocation)
await dock.revocation.newRegistry(registryId, policy, false);

Revoking a credential

Revoking a credential requires a signature from the owner of the registry. Take the registry id, registryId, and the revocation id (the hash of the credential id), revokeId, and send the transaction on chain. Revoking an already revoked credential has no effect.
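
As an illustrative sketch, the revocation id can be derived by hashing the credential id with blake2b-256 (check the SDK helpers for the exact scheme your version uses):

import { blake2AsHex } from '@polkadot/util-crypto';

// Hash of the credential id, used as the revocation id
const revokeId = blake2AsHex(vc.id, 256);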

await dock.revocation.revokeCredentialWithOneOfPolicy(registryId, revokeId, ownerDID, ownerKeypair, {didModule: dock.did});

Revoking multiple ids in a single transaction is possible with the lower level method dock.revocation.revoke. See the tests for its usage.

Undoing a revocation

Similar to revocation, undoing the revocation also requires a signature from the owner of the registry.

Take the registry id, registryId, and the revocation id to undo, revokeId, and send the transaction on chain. Unrevoking an unrevoked credential has no effect.

await dock.revocation.unrevokeCredentialWithOneOfPolicy(registryId, revokeId, ownerDID, ownerKeypair, {didModule: dock.did});

Undoing revocation for multiple ids in a single transaction is possible with the lower level method dock.revocation.unrevoke. See the tests for its usage.

Checking the revocation status

To check whether an id is revoked, call dock.revocation.getIsRevoked with the registry id and revocation id. It returns true if revoked, else false.

const isRevoked = await dock.revocation.getIsRevoked(registryId, revokeId);

Fetching the registry details

To get the details of a registry, like its policy, add-only status and the block number when it was last updated, use dock.revocation.getRegistryDetail.
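
A minimal sketch (the exact shape of the returned details may vary by SDK version):

const detail = await dock.revocation.getRegistryDetail(registryId);
console.log(detail); // Policy, add-only flag and last-updated block number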

Removing the registry

A registry can be deleted, leading to all the corresponding revocation ids being deleted as well. Like other updates, this requires a signature from the owner. Use the dock.revocation.removeRegistryWithOneOfPolicy method to remove a registry.

await dock.revocation.removeRegistryWithOneOfPolicy(registryId, ownerDID, ownerKeypair, {didModule: dock.did}, false);

Schemas

Table of contents

  1. Intro
  2. Blobs
    1. Writing a Blob
    2. Reading a Blob
  3. Schemas
    1. Creating a Schema
    2. Writing a Schema
    3. Reading a Schema
    4. Schemas in Verifiable Credentials
    5. Schemas in Verifiable Presentations

Intro

Data Schemas are a useful way of enforcing a specific structure on a collection of data, like a Verifiable Credential. Data schemas serve a different purpose than the @context property in a Verifiable Credential; the latter neither enforces data structure nor data syntax, nor does it enable the definition of arbitrary encodings to alternate representation formats.

Blobs

Schemas are stored on chain as a Blob in the Blob Storage module of the Dock chain, so understanding blobs is important before diving into Schemas.

Writing a Blob

A new Blob can be registered on the Dock Chain using the new method in the BlobModule class (exposed as dock.blob). It accepts a blob object with the content to store on chain (either a hex string or a byte array), the DID of the signer and a keyPair to sign the payload with, and it signs and sends the transaction to the Dock chain:

// randomAsHex comes from polkadot's util-crypto; DockBlobIdByteSize is exported by the SDK's blob module (the exact path may vary by SDK version)
import { randomAsHex } from '@polkadot/util-crypto';
import { DockBlobIdByteSize } from '@docknetwork/sdk/modules/blob';

const blobId = randomAsHex(DockBlobIdByteSize); // 32-byte hex string to use as the blob's id
const blobStruct = {
  id: blobId,
  blob: blobHexOrArray, // Contents of your blob as a hex string or byte array
};
const result = await dock.blob.new(blobStruct, signerDid, keypair, { didModule: dock.didModule });

If everything worked properly result will indicate a successful transaction. We'll see how to retrieve the blob next.

Reading a Blob

A Blob can be retrieved by using the method get in the BlobModule class. It accepts a blobId string param which can either be a fully-qualified blob id like blob:dock:0x... or just its hex identifier. In response you will receive a two-element array:

const chainBlob = await dock.blob.get(blobId);

chainBlob's first element will be the blob's author (a DID). Its second element will be the contents of your blob (blobHexOrArray in our previous example).
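
In other words, the result can be destructured directly:

const [author, blobContents] = chainBlob;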

Schemas

Since Schemas are stored on chain as a Blob in the Blob Storage module, the Schema class uses the BlobModule class internally. Schemas are identified and retrieved by their unique blobId, a 32 byte long hex string. As mentioned, the chain is agnostic to the contents of blobs and thus to schemas.

Creating a Schema

The first step to creating a Schema is to initialize it; we can do that using the Schema class constructor, which accepts an optional id string as its sole argument:

const myNewSchema = new Schema();

When an id isn't passed, a random blobId will be assigned as the schema's id.

> myNewSchema.id
<- "blob:dock:5Ek98pDX61Dwo4EDmsogUkYMBqfFHtiS5hVS7xHuVvMByh3N"

Also worth noticing is the JSON representation of the schema as it stands right now, which can be obtained by calling the toJSON method on your new schema:

>  myNewSchema.toJSON()
<- {"id":"0x768c21de02890dad5dbf6f108b6822b865e4ea495bb7f43f8947714e90fcc060"}

where you can see that the schema's id has been converted to its hex identifier with getHexIdentifierFromBlobID.

Setting a JSON Schema

A JSON schema can be added with the setJSONSchema method. It accepts a single argument json (an object that is checked to be a valid JSON schema before being added):

>   const someNewJSONSchema = {
         $schema: 'http://json-schema.org/draft-07/schema#',
         description: 'Dock Schema Example',
         type: 'object',
         properties: {
           id: {
             type: 'string',
           },
           emailAddress: {
             type: 'string',
             format: 'email',
           },
           alumniOf: {
             type: 'string',
           },
         },
         required: ['emailAddress', 'alumniOf'],
         additionalProperties: false,
       }
>   myNewSchema.setJSONSchema(someNewJSONSchema)
>   myNewSchema.schema === someNewJSONSchema
<-  true

Formatting for storage

Your new schema is now ready to be written to the Dock chain; the last step is to format it properly for the BlobModule to be able to use it. That's where the toBlob method comes in handy:

>   myNewSchema.toBlob()
<-  {
      id: ...,
      blob: ...,
    }

Writing a Schema to the Dock chain

Writing a Schema to the Dock chain is similar to writing any other Blob:

>  const formattedBlob = myNewSchema.toBlob();
>  await myNewSchema.writeToChain(dock, dockDID, keypair);

Reading a Schema from the Dock chain

Reading a Schema from the Dock chain can easily be achieved by using the get method from the Schema class. It accepts a string id param (a fully-qualified blob id like "blob:dock:0x..." or just its hex identifier) and a dockAPI instance:

>  const result = await Schema.get(blob.id, dock);

result[0] will be the author of the Schema, and result[1] will be the contents of the schema itself.

Schemas in Verifiable Credentials

The VCDM spec specifies how the credentialSchema property should be used when present. Basically, once you've created and stored your Schema on chain, you can reference it by its blobId when issuing a Verifiable Credential. Let's see an example:

>    const dockApi = new DockAPI();
>    const dockResolver = new DockResolver(dockApi);
>    let validCredential = new VerifiableCredential('https://example.com/credentials/123');
>    validCredential.addContext('https://www.w3.org/2018/credentials/examples/v1');
>    const ctx1 = {
      '@context': {
        emailAddress: 'https://schema.org/email',
      },
    };
>    validCredential.addContext(ctx1);
>    validCredential.addType('AlumniCredential');
>    validCredential.addSubject({
      id: dockDID,
      alumniOf: 'Example University',
      emailAddress: 'john@gmail.com',
    });
>    validCredential.setSchema(blobHexIdToQualified(blobId), 'JsonSchemaValidator2018');
>    await validCredential.sign(keyDoc);
>    await validCredential.verify({
       resolver: dockResolver,
       compactProof: true,
     });

Assuming that the blobId points to the schema from the previous examples, the verification above would fail if the credentialSubject in the Verifiable Credential were missing either the alumniOf or the emailAddress property.

Schemas in Verifiable Presentations

The current implementation does not provide a way to specify a schema for a Verifiable Presentation itself. However, a Verifiable Presentation may contain any number of Verifiable Credentials, each of which may or may not use a Schema themselves. The verify method for Verifiable Presentations will enforce schema validation on each of the Verifiable Credentials contained in a presentation that use the credentialSchema and credentialSubject properties simultaneously. This means that the verification of an otherwise valid Verifiable Presentation will fail if one of the Verifiable Credentials contained within it uses a Schema and fails to pass schema validation.

Claim Deduction

Specifying Axioms

A Verifier has complete and low level control over the logical rules they deem valid. Rules may vary from use-case to use-case and from verifier to verifier.

A common first step when writing a ruleset is to unwrap Explicit Ethos statements.

Simple Unwrapping of Explicit Ethos

This ruleset names a specific issuer and states that any claims made by that issuer are true.

const rules = [
  {
    if_all: [
      [
        { Unbound: 'subject' },
        { Unbound: 'predicate' },
        { Unbound: 'object' },
        { Bound: { Iri: 'did:example:issuer' } },
      ],
    ],
    then: [
      [
        { Unbound: 'subject' },
        { Unbound: 'predicate' },
        { Unbound: 'object' },
        { Bound: { DefaultGraph: true } },
      ],
    ],
  }
];

That single rule is enough for some use-cases but it's not scalable. What if we want to allow more than one issuer? Instead of copying the same rule for each issuer we trust, let's define "trustworthiness".

Unwrapping Explicit Ethos by Defining Trustworthiness

const trustworthy = { Bound: { Iri: 'https://www.dock.io/rdf2020#Trustworthy' } };
const type = { Bound: { Iri: 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type' } };
const defaultGraph = { Bound: { DefaultGraph: true } };

const rules = [
  {
    if_all: [
      [{ Unbound: 'issuer' }, type, trustworthy, defaultGraph],
      [{ Unbound: 's' }, { Unbound: 'p' }, { Unbound: 'o' }, { Unbound: 'issuer' }],
    ],
    then: [
      [{ Unbound: 's' }, { Unbound: 'p' }, { Unbound: 'o' }, defaultGraph],
    ],
  },
  {
    if_all: [],
    then: [
      [{ Bound: { Iri: 'did:example:issuer' } }, type, trustworthy, defaultGraph]
    ],
  }
];

You may ask "So what's the difference? There is still only one issuer."

By the primitive definition of "trustworthiness" written above, any claim made by a trustworthy issuer is true. did:example:issuer can claim whatever they want by issuing verifiable credentials. They can even claim that some other issuer is trustworthy. Together, the two rules defined in the above example implement a system analogous to TLS certificate chains with did:example:issuer as the single root authority.

Proving Composite Claims

As a Holder of verifiable credentials, you'll want to prove specific claims to a Verifier. If those claims are composite, you'll sometimes need to bundle a deductive proof in your verifiable credentials presentation. This should be done after the presentation has been assembled. If the presentation is going to be signed, sign it after including the deductive proof.

import { proveCompositeClaims } from '@docknetwork/sdk/utils/cd';
import jsonld from 'jsonld';

// Check out the Issuance, Presentation, Verification tutorial for info on creating
// VCDM presentations.
const presentation = { ... };

// the claim we wish to prove
const compositeClaim = [
  { Iri: 'uuid:19e91192-210b-4b03-8e9c-8ded0a48d5bf' },
  { Iri: 'http://dbpedia.org/ontology/owner' },
  { Iri: 'did:example:bob' },
  { DefaultGraph: true },
];

// SDK reasoning utilities take presentations in expanded form
// https://www.w3.org/TR/json-ld/#expanded-document-form
const expPres = await jsonld.expand(presentation);

let proof;
try {
  proof = await proveCompositeClaims(expPres, [compositeClaim], rules);
} catch (e) {
  console.error('couldn\'t prove bob is an owner');
  throw e;
}

// this is the standard property name of a Dock deductive proof in a VCDM presentation
const logic = 'https://www.dock.io/rdf2020#logicV1';

presentation[logic] = proof;

// Now JSON.stringify(presentation) is ready to send to a verifier.

Verifying Composite Claims

import { acceptCompositeClaims } from '@docknetwork/sdk/utils/cd';
import jsonld from 'jsonld';
import deepEqual from 'deep-equal';

/// received from the presenter
const presentation = ...;

// Check out the Issuance, Presentation, Verification tutorial for info on verifying
// VCDM presentations.
let ver = await verify(presentation);
if (!ver.verified) {
  throw ver;
}

const expPres = await jsonld.expand(presentation);

// acceptCompositeClaims will verify and take into account any deductive proof provided
// via the logic property
const claims = await acceptCompositeClaims(expPres, rules);

if (claims.some(claim => deepEqual(claim, compositeClaim))) {
  console.log('the composite claim was shown to be true');
} else {
  console.error('veracity of the composite claim is unknown');
}

Verifier-Side Reasoning

Some use-cases may require the verifier to perform inference in place of the presenter.

import { proveCompositeClaims } from '@docknetwork/sdk/utils/cd';
import jsonld from 'jsonld';

/// received from the presenter
const presentation = ...;

// Check out the Issuance, Presentation, Verification tutorial for info on verifying
// VCDM presentations.
let ver = await verify(presentation);
if (!ver.verified) {
  throw ver;
}

const expPres = await jsonld.expand(presentation);

try {
  await proveCompositeClaims(expPres, [compositeClaim], rules);
  console.log('the composite claim was shown to be true');
} catch (e) {
  console.error('veracity of the composite claim is unknown');
}

We Need to Go Deeper

The SDK claim deduction module exposes lower level functionality for those who need it. getImplications, proveh and validateh, for example, operate on raw claimgraphs represented as adjacency lists. For even lower level access, check out our inference engine which is written in Rust and exposed to javascript via wasm.

Graphical Anchoring Utility

You can also anchor without touching any code. Visit https://fe.dock.io/#/anchor/batch for creation of anchors and https://fe.dock.io/#/anchor/check for anchor verification.

To Batch, or not to Batch

Batching (combining multiple anchors into one) can be used to save on transaction costs by anchoring multiple documents in a single transaction as a merkle tree root.

Batching does have a drawback. In order to verify a document that was anchored as part of a batch, you must provide the merkle proof that was generated when batching said file. Merkle proofs are expressed as .proof.json files and can be downloaded before posting the anchor. No merkle proof is required for batches containing only one document.

Programatic Usage

The on-chain anchoring module gives developers the flexibility to tailor anchors to their own use-case, but the sdk does provide a reference example for batching and anchoring documents.

The anchoring module is hashing algorithm and hash length agnostic. You can post a multihash, or even use the identity hash; the chain doesn't care.

One thing to note is that rather than storing your anchor directly, the anchoring module will store the blake2b256 hash of the anchor. This means as a developer you'll need to perform an additional hashing step when looking up anchors:

// pseudocode

function postAnchor(file) {
  anchor = myHash(file)
  deploy(anchor)
}

function checkAnchor(file) {
  anchor = myHash(file)
  anchorblake = blake2b256(anchor)
  return lookup(anchorblake)
}

See example/anchor.js in the sdk repository for more info.

Private Delegation

This tutorial follows the lifecycle of a delegated credential. It builds on the previous tutorials Issuance, Presentation, Verification and Claim Deduction.

Create a Delegation

Let's assume some root authority, did:ex:a, wants to grant did:ex:b full authority to make claims on behalf of did:ex:a. To do this, did:ex:a will issue a delegation credential to did:ex:b.

Boilerplate
const { v4: uuidv4 } = require('uuid');

function uuid() {
  return `uuid:${uuidv4()}`;
}

// Check out the Issuance, Presentation, Verification tutorial for info on signing
// credentials.
function signCredential(cred, issuer_secret) { ... }

// Check out the Issuance, Presentation, Verification tutorial for info on verifying
// VCDM presentations.
async function verifyPresentation(presentation) { ... }

const delegation = {
  '@context': [ 'https://www.w3.org/2018/credentials/v1' ],
  id: uuid(),
  type: [ 'VerifiableCredential' ],
  issuer: 'did:ex:a',
  credentialSubject: {
    id: 'did:ex:b',
    'https://rdf.dock.io/alpha/2021#mayClaim':
      'https://rdf.dock.io/alpha/2021#ANYCLAIM'
  },
  issuanceDate: new Date().toISOString(),
};
const signed_delegation = signCredential(delegation, dida_secret);

Next did:ex:a sends the signed credential to did:ex:b.

Issue a Credential as a Delegate

did:ex:b accepts the delegation credential from did:ex:a. Now did:ex:b can use the delegation to make arbitrary attestations on behalf of did:ex:a.

const newcred = {
  '@context': [ 'https://www.w3.org/2018/credentials/v1' ],
  id: uuid(),
  type: [ 'VerifiableCredential' ],
  issuer: 'did:ex:b',
  credentialSubject: {
    id: 'did:ex:c',
    'https://example.com/score': 100,
  },
  issuanceDate: new Date().toISOString(),
};
const signed_newcred = signCredential(newcred, didb_secret);

So far we have two credentials, signed_delegation and signed_newcred. signed_delegation proves that any claim made by did:ex:b is effectively a claim made by did:ex:a. signed_newcred proves that did:ex:b claims did:ex:c has a score of 100. By applying one of the logical rules provided by the sdk, we can infer that did:ex:a claims did:ex:c has a score of 100. The logical rule named MAYCLAIM_DEF_1 will work for this use-case and will be used by the verifier.

Now did:ex:b has both signed credentials. did:ex:b may now pass both credentials to the holder. In this case the holder is did:ex:c. did:ex:c also happens to be the subject of one of the credentials.

Present a Delegated Credential

did:ex:c now holds two credentials, signed_delegation and signed_newcred. Together they prove that did:ex:a indirectly claims did:ex:c to have a score of 100. did:ex:c wants to prove this statement to another party, a verifier. did:ex:c must bundle the two credentials into a VCDM presentation.

let presentation = {
  '@context': [ 'https://www.w3.org/2018/credentials/v1' ],
  type: [ 'VerifiablePresentation' ],
  id: uuid(),
  holder: `did:ex:c`,
  verifiableCredential: [ signed_delegation, signed_newcred ],
};

presentation is sent to the verifier.

Accept a Delegated Credential

The verifier receives presentation, verifies the enclosed credentials, then reasons over the union of all the credentials in the bundle using the rule MAYCLAIM_DEF_1. The process is the one outlined in Verifier-Side Reasoning but using a different composite claim and a different rule list.

import { MAYCLAIM_DEF_1 } from '@docknetwork/sdk/rdf-defs';
import { proveCompositeClaims } from '@docknetwork/sdk/utils/cd';
import jsonld from 'jsonld';

const compositeClaim = [
  { Iri: 'did:ex:c' },
  { Iri: 'https://example.com/score' },
  { Literal: { datatype: 'http://www.w3.org/2001/XMLSchema#integer', value: '100' } },
  { Iri: 'did:ex:a' },
];

let ver = await verifyPresentation(presentation);
if (!ver.verified) {
  throw ver;
}

const expPres = await jsonld.expand(presentation);

try {
  await proveCompositeClaims(expPres, [compositeClaim], MAYCLAIM_DEF_1);
  console.log('the composite claim was shown to be true');
} catch (e) {
  console.error('veracity of the composite claim is unknown');
}

Public Delegation

This feature should be considered Alpha.

Public Delegations use the same data model as Private Delegations. A delegator attests to some delegation; the verifier somehow gets and verifies that attestation, then reasons over it in conjunction with some credential. The difference is that while Private Delegations are passed around as credentials, Public Delegations are linked from the DID document of the delegator.

Create a Delegation

It's assumed that the delegator already controls a DID. See the tutorial on DIDs for instructions on creating your own on-chain DID.

Like in the Private Delegation tutorial, let's assume a root authority, did:ex:a, wants to grant did:ex:b full authority to make claims on behalf of did:ex:a. did:ex:a will post an attestation delegating to did:ex:b.

Boilerplate
import createClient from 'ipfs-http-client';
import { graphResolver } from '@docknetwork/sdk/crawl.js';
const { v4: uuidv4 } = require('uuid');

// A running ipfs node is required for crawling.
const ipfsUrl = 'http://localhost:5001';

function uuid() {
  return `uuid:${uuidv4()}`;
}

// Check out the Issuance, Presentation, Verification tutorial for info on signing
// credentials.
function signCredential(cred, issuer_secret) { ... }

// Check out the Issuance, Presentation, Verification tutorial for info on verifying
// VCDM presentations.
async function verifyPresentation(presentation) { ... }

// This function can be implemented using setClaim().
// An example of setClaim() usage can be found here:
//  https://github.com/docknetwork/sdk/blob/master/tests/integration/did-basic.test.js
async function setAttestation(did, didKey, iri) { ... }

// See the DID resolver tutorial for information about implementing a documentLoader.
const documentLoader = ...;

const ipfsClient = createClient(ipfsUrl);
const resolveGraph = graphResolver(ipfsClient, documentLoader);

Instead of a credential, the delegation will be expressed as a turtle document, posted on ipfs.

@prefix dockalpha: <https://rdf.dock.io/alpha/2021#> .
<did:ex:b> dockalpha:mayClaim dockalpha:ANYCLAIM .
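
For instance, the turtle document could be posted using the ipfsClient created in the boilerplate above (a sketch; the returned CID is what gets attested):

const delegationDoc = `
@prefix dockalpha: <https://rdf.dock.io/alpha/2021#> .
<did:ex:b> dockalpha:mayClaim dockalpha:ANYCLAIM .
`;
// `add` returns the added entry, including its CID (the API may vary by ipfs-http-client version)
const { cid } = await ipfsClient.add(delegationDoc);
console.log(`ipfs://${cid}`);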

A link to this ipfs document is then added to the delegator's DID document. For a Dock DID, this is done by submitting a transaction on-chain.

await setAttestation(
  delegatorDid,
  delegatorSk,
  'ipfs://Qmeg1Hqu2Dxf35TxDg19b7StQTMwjCqhWigm8ANgm8wA3p'
);

Issue a Credential as a Delegate

With Public Delegation, the delegate doesn't need to worry about passing delegation credentials on to the holder. The delegations are already posted where the verifier can find them.

Present a Delegated Credential

With Public Delegation the holder does not need to include a delegation chain when presenting their credential. From the holder's perspective, the process of presenting a publicly delegated credential is exactly the same as the process for presenting a normal credential.

Accept a Delegated Credential

The verifier accepts publicly delegated credentials by merging the credential's claimgraph representation with the publicly posted delegation information, then reasoning over the result. The delegation information is found by crawling the public attestation supergraph; once found, it is itself a claimgraph. Crawling is potentially slow, so when verification speed is important it should be done early on, like at program startup. Delegation information can be re-used across multiple credential verifications.

As with any Public Attestations, delegation information is revocable by removing the delegation attestation from the delegator's DID doc. As such, it is possible for cached delegation information to become out of date. Long running validator processes should devise a mechanism for invalidating out-of-date delegation information, such as re-crawling whenever a change is detected to the DID doc of a delegator (or sub-delegator). This tutorial does not cover invalidation of out-of-date delegations.

The following example shows how a verifier might crawl for delegation information and then use it while verifying a presentation.

import { ANYCLAIM, MAYCLAIM, MAYCLAIM_DEF_1, ATTESTS } from '@docknetwork/sdk/rdf-defs';
import { crawl } from '@docknetwork/sdk/crawl.js';
import {
  proveCompositeClaims,
  presentationToEEClaimGraph,
  inferh,
} from '@docknetwork/sdk/utils/cd';
import { merge } from '@docknetwork/sdk/utils/claimgraph';
import jsonld from 'jsonld';

// These logical rules will be used for reasoning during both crawling and verifying
// credentials.
const RULES = [
  // Imports the definition of dockalpha:mayClaim from sdk
  ...MAYCLAIM_DEF_1,
  // Adds a custom rule stating that by attesting to a document the attester grants the
  // document full delegation authority.
  {
    if_all: [
      [
        { Unbound: 'a' },
        { Bound: { Iri: ATTESTS } },
        { Unbound: 'doc' },
        { Unbound: 'a' },
      ],
    ],
    then: [
      [
        { Unbound: 'doc' },
        { Bound: { Iri: MAYCLAIM } },
        { Bound: { Iri: ANYCLAIM } },
        { Unbound: 'a' },
      ],
    ],
  },
];

// This query dictates what the crawler will be "curious" about. Any matches to
// `?lookupNext` will be dereferenced as IRIs. When an IRI is successfully dereferenced,
// the resultant data is merged into the crawler's knowledge graph.
const CURIOSITY = `
  prefix dockalpha: <https://rdf.dock.io/alpha/2021#>

  # Any entity to which "did:ex:a" grants full delegation authority is interesting.
  select ?lookupNext where {
    graph <did:ex:a> {
      ?lookupNext dockalpha:mayClaim dockalpha:ANYCLAIM .
    }
  }
`;

// To spark the crawler's interest, we'll feed it some initial knowledge about did:ex:a .
const initialFacts = await resolveGraph({ Iri: 'did:ex:a' });

// `allFacts` contains our delegation information; it will be merged with verified
// credentials in order to reason over delegations
let allFacts = await crawl(initialFacts, RULES, CURIOSITY, resolveGraph);

// Now that we've obtained delegation information for `did:ex:a` we can verify
// credentials much like normal. The only difference is that we merge claimgraphs
// before reasoning over the verified credentials.
//
// `presentation` is assumed to be a VCDM presentation provided by a credential holder
let ver = await verifyPresentation(presentation);
if (!ver.verified) {
  throw ver;
}
const expPres = await jsonld.expand(presentation);
const presCg = await presentationToEEClaimGraph(expPres);
const cg = inferh(merge(presCg, allFacts), RULES);

// At this point all the RDF quads in `cg` are known to be true.
// doSomethingWithVerifiedData(cg);

More examples of crawl() usage can be found in the sdk's tests and examples.

Ethereum integration

Table of contents

  1. Intro
  2. Dock and EVM accounts
  3. Deploying a DAO
  4. Chainlink integration

This document assumes hands-on experience with Ethereum and interaction with smart contracts using libraries like web3 or ethers-js.

Intro

The chain allows you to deploy and interact with EVM smart contracts using popular Ethereum client libraries like web3 or ethers-js. You can also send a contract's bytecode directly if you don't want to use these libraries. This is possible because the chain integrates 2 modules, pallet-evm and pallet-ethereum, from Parity's frontier project. pallet-evm allows the chain to execute EVM bytecode and persist state like contract storage, but does not understand how Ethereum transactions, blocks, etc. are created or how they should be parsed. Handling that is the job of pallet-ethereum, which uses pallet-evm for executing the bytecode. More detailed docs of these pallets are available here.

The motivation for this integration was to support Chainlink for providing price feed of the DOCK/USD pair which can then be used by the chain to charge transactions at a USD price.

This document will however show how to deploy and use a different set of contracts: a DAO that replicates Aragon's voting app. The app lets token holders vote in proportion to the tokens they hold, and the winning vote executes an action by calling a method on another contract.

Most of the examples use web3, but there is a test using ethers-js as well.

Dock and EVM accounts

Accounts in Dock are 32 bytes (excluding network identifier and checksum) but EVM accounts are 20 bytes (the last 20 bytes of the public key). As there is no direct conversion possible between the two, and we don't support binding them together in an on-chain mapping, a separate Ethereum address has to be created and funded with tokens to send Ethereum-style transactions. pallet-evm derives a Dock address from this Ethereum address and expects that Dock address to have tokens. The test native-balance-to-eth.test shows an Ethereum account carol, created using web3, being given some tokens using the function endowEVMAddress.

  // An API object which will connect to node and send non-EVM transactions like balance transfer
  const dock = new DockAPI();
  await dock.init({
    address: FullNodeEndpoint,
  });

  // Jacob has Dock tokens and will send tokens to Carol.
  const jacob = dock.keyring.addFromUri(EndowedSecretURI);
  dock.setAccount(jacob);

  // ...
  // ....

  const carol = "<Account created from web3>";
  await endowEVMAddress(dock, carol.address);

The substrate address can also be generated by function evmAddrToSubstrateAddr. Its balance can be queried either using web3 or polkadot-js.

  // Every EVM address has a mapping to Substrate address whose balance is deducted for fee when the EVM address does a transaction.
  const carolSubsAddr = evmAddrToSubstrateAddr(carol.address);
  console.log(`Querying balance of Carol's address using web3 ${(await web3.eth.getBalance(carol.address))}`);
  console.log(`Querying balance of Carol's address using polkadot-js ${(await getBalance(dock.api, carolSubsAddr, false))}`);

endowEVMAddress uses evmAddrToSubstrateAddr to convert the passed EVM address to a Substrate address and does a transfer, as shown below

// Give `amount` of Dock tokens to an EVM address. `amount` defaults to the number of tokens required to pay for the maximum gas
export function endowEVMAddress(dock, evmAddr, amount) {
  //  Convert EVM address to a Substrate address
  const substrateAddr = evmAddrToSubstrateAddr(evmAddr);

  // Selecting the amount such that it can pay fees for up to the maximum gas allowed, plus some extra
  const amt = amount !== undefined ? amount : bnToBn(MinGasPrice).mul(bnToBn(MaxGas)).muln(2);

  // Transfer to the Substrate address created above
  const transfer = dock.api.tx.balances.transfer(substrateAddr, amt);
  return dock.signAndSend(transfer, false);
}

To send arbitrary EVM transactions and deploy contracts using web3, look at the functions sendEVMTxn and deployContract respectively in scripts/eth/helpers.js.

Withdrawing tokens back from an EVM address to a Substrate address is a 3-step process.

  1. Derive an intermediate EVM address from the receiving Substrate address.
  2. Send tokens using web3 to this intermediate EVM address.
  3. Send tokens using polkadot-js from intermediate address to target address.
  // Withdraw some tokens from EVM address, i.e. Carol to Jacob.

  // Jacob's account is set as signer in the API object `dock`

  // Step-1
  // Create an intermediate EVM address
  const intermediateAddress = substrateAddrToEVMAddr(jacob.address);

  // Step-2
  // Carol sends 1000 tokens to the intermediate EVM address. `sendTokensToEVMAddress` uses web3 to send an Ethereum style
  // transfer transaction, i.e. the `data` field is set to 0 and the `value` field specifies the transfer amount.
  await sendTokensToEVMAddress(web3, carol, intermediateAddress, 1000);

  // Step-3
  // Withdraw from the intermediate address to the Substrate address sending this transaction, i.e. Jacob
  const withdraw = dock.api.tx.evm.withdraw(intermediateAddress, 1000);
  await dock.signAndSend(withdraw, false);

The second step above, sending tokens within the EVM, requires specifying a minimum gas price and maximum allowed gas. The function sendTokensToEVMAddress needs to know these values and accepts them as arguments. If not provided, it will check the environment variables MinGasPrice and MaxGas. This behavior is common to all script helpers.
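
For example, to pass them explicitly rather than relying on environment variables (a sketch; the trailing gas arguments are assumed from the helper's description):

// Hypothetical explicit gas settings; adjust to your node's configuration
await sendTokensToEVMAddress(web3, carol, intermediateAddress, 1000, MinGasPrice, MaxGas);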

DAO

This section shows how to deploy a voting DAO where token holders can vote to execute certain actions. This replicates Aragon's voting app. The complete script is in the sdk repository (see the scripts/eth directory); below is an explainer of the script.

  1. Create some accounts that will send Ethereum style transactions and fund them with Dock tokens. The accounts generated in the code are only for testing, so create your own accounts for real-world apps.

    const web3 = getWeb3();
    
    // Create some test accounts. Alice will be the manager of the DAO while Bob, Carol and Dave will be voters.
    const [alice, bob, carol, dave] = getTestEVMAccountsFromWeb3(web3);
    
    // Endow accounts with tokens so they can pay fees for transactions
    await endowEVMAddressWithDefault(alice.address);
    await endowEVMAddressWithDefault(bob.address);
    await endowEVMAddressWithDefault(carol.address);
    await endowEVMAddressWithDefault(dave.address);
    

    getTestEVMAccountsFromWeb3 uses web3 to create EVM accounts using some test private keys.

    // Returns some test EVM accounts
    export function getTestEVMAccountsFromWeb3(web3) {
      return getTestPrivKeysForEVMAccounts().map((k) => web3.eth.accounts.privateKeyToAccount(k));
    }
    
  2. Create a DAO factory contract which will then be used to initialize a new DAO instance. Also set up the access control list for the DAO and set the DAO manager (an admin role)

    // Create a contract factory to create new DAO instance.
    const [, , , daoFactContractAddr] = await createDaoFactory(web3, alice);
    
    // Create a new DAO instance
    const daoAddr = await createNewDao(web3, alice, alice.address, daoFactContractAddr);
    
    // Set access control and set Alice as DAO's manager
    const aclAddr = await setupAcl(web3, alice, alice.address, daoAddr);
    
  3. A DAO can install several apps, but here we will have only one: a voting app. Choose a unique app id.

    // Some unique app id
    const appId = '0x0000000000000000000000000000000000000000000000000000000000000100';
    
  4. Create a voting app (contract) with the above app id. Install the app in the DAO and allow any token holder to vote using the DAO's access control list (ACL).

    // Create a voting contract, install it as an app in the DAO and allow any token holder to vote
    const votingAppAddress = await setupVotingApp(web3, alice, alice.address, appId, daoAddr, aclAddr);
    const votingApp = new web3.eth.Contract(VotingDAOABI, votingAppAddress);
    
  5. Voting in this DAO requires voters to have tokens, and their vote carries weight proportional to their token balance. Deploy a token contract; this token contract is Aragon's MiniMeToken, which extends the ERC-20 interface. After deploying the token, the accounts bob, carol and dave are given 51, 29 and 20 tokens respectively. This makes the total supply of the MiniMeToken 100, with bob, carol and dave holding 51%, 29% and 20% of the supply respectively.

    // Deploy a token contract where Bob, Carol and Dave will have 51%, 29% and 20% of the tokens and thus proportional voting power.
    const tokenContractAddr = await deployToken(web3, alice, [[bob.address, 51], [carol.address, 29], [dave.address, 20]]);
    
  6. Now initialize the voting app by setting the token contract address and the thresholds for voting. The example script sets the winning percentage to 51%. As bob, carol and dave hold 51%, 29% and 20% of the token supply, they have 51%, 29% and 20% voting power respectively.

    // Initialize the voting by supplying the token contract and thresholds for voting.
    await initializeVotingApp(web3, alice, votingAppAddress, tokenContractAddr);
    
  7. For this example, we want a successful vote to increment a counter in a contract. This contract is for demo purposes only. counterAddr is the address of the demo contract and incrementScript is the encoded call to a function that increments the counter (see the sketch after this list for how such a call might be encoded).

    // A Counter contract as an example executor. In practice, the executor methods will only allow calls by the voting contract.
    const [counterAddr, incrementScript] = await setupVotingExecutor(web3, alice);
    
  8. As bob has 51% of the voting power, he can single-handedly create and pass a new vote by calling the contract method newVote.

    // Bob alone can increment the Counter as he has 51% tokens
    console.log(`Counter before increment from Bob ${(await getCounter(web3, counterAddr))}`);
    await sendEVMTxn(web3, bob, votingAppAddress, votingApp.methods.newVote(incrementScript, '').encodeABI());
    console.log(`Counter after increment from Bob ${(await getCounter(web3, counterAddr))}`);
    
  9. As carol and dave together hold less than 51%, they cannot increment the counter by voting. Here carol creates a new vote by calling the contract method newVote, which returns the vote id, and dave votes in approval by calling the contract method vote, passing the vote id and true. The second true indicates that the vote should trigger execution if it succeeds.

    // Carol creates a new vote
    const voteId = await createNewVote(web3, carol, votingAppAddress, incrementScript);
    // Dave seconds Carol's vote
    await sendEVMTxn(web3, dave, votingAppAddress, votingApp.methods.vote(voteId, true, true).encodeABI());
    console.log("Counter after attempted increment from Carol and Dave. Counter will not change as Bob and Carol don't have enough voting power");
    
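
For illustration, here is a rough sketch of how an encoded call like incrementScript from step 7 might be produced with web3. The CounterABI name and the increment() method are assumptions for this sketch; the real setupVotingExecutor helper may additionally wrap the calldata in Aragon's call-script format.

  // Hypothetical sketch: ABI-encode a call to an assumed increment() method.
  // CounterABI and increment() are placeholders, not the repo's actual names.
  const counter = new web3.eth.Contract(CounterABI, counterAddr);
  const callData = counter.methods.increment().encodeABI();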

Chainlink

The chain will have Chainlink contracts for price feeds, in addition to the Link token and others. The contracts and the scripts to interact with them are at scripts/eth/chainlink in the repo. The scripts contain comments explaining how they work.

  • To deploy the Link token, check the script link-token.js at path scripts/eth/chainlink/link-token.js in the repo.
  • To deploy the FluxAggregator contract, which oracles use to submit prices, check the script flux-aggegator.js at path scripts/eth/chainlink/flux-aggegator.js in the repo.
  • To deploy an aggregator with access control on reads and with a proxy, the AccessControlledAggregator and EACAggregatorProxy contracts are deployed. Check the script access-controlled-aggregator-proxy.js at path scripts/eth/chainlink/access-controlled-aggregator-proxy.js in the repo.
  • To deploy with a DeviationFlaggingValidator, which raises a flag when the price is off by a threshold in either direction, use the script deviation-flag-validator.js at path scripts/eth/chainlink/deviation-flag-validator.js, then set the validator address while deploying the AccessControlledAggregator.
  • To set up a contract for an Oracle (not needed for price feeds though), check the script oracle.js at path scripts/eth/chainlink/oracle.js in the repo.

Anonymous credentials

Overview

This document describes building anonymous credentials using mainly two primitives: the BBS+ signature scheme, which the issuer uses to sign the credential, and accumulators, used for the membership checks needed for revocation. The BBS+ implementation comes from this Typescript package, which uses this WASM wrapper, which in turn uses our Rust crypto library.

For an overview of these primitives, see this.

Implementation

On chain, there are two modules, one for BBS+ and the other for accumulators. The modules store the BBS+ params and public keys, the accumulator params and public keys, and some accumulator details like the current accumulated value, when it was last updated, etc. They are somewhat agnostic to the cryptographic details and treat these values as bytes with some size bounds.

  • BBS+ module

    • At path src/modules/bbs-plus.js in the repo.
    • Used to create and remove signature parameters and public keys.
    • When creating a public key, you can either reference existing signature params or omit the reference.
    • The params and public keys are owned by a DID and can only be removed by that DID.
    • See the tests at tests/integration/anoncreds/bbs-plus.test.js on how to create, query and remove these.
  • Accumulator module

    • At path src/modules/accumulator.js in the repo.
    • The params and public keys are managed in the same way as in the BBS+ module.
    • Accumulators are owned by a DID and can only be removed by that DID.
    • Each accumulator is identified by a unique id, and that id is used to send updates or remove it.
    • An accumulator update contains the additions, removals and witness update info. These are not stored in chain state but are present in the blocks, and the accumulated value corresponding to each update is logged in an event.
    • In the chain state, only the most recent accumulated value is stored (along with some metadata like creation time and last update), which is sufficient to verify a witness or a proof of knowledge.
    • To update a witness, the updates and witness update info must be parsed from the blocks; the accumulator module provides functions to get the updates and the necessary events from a block.
    • See the tests at tests/integration/anoncreds/accumulator.test.js on how to create, query and remove params and keys as well as the accumulator.
  • Composite proofs

    • Proofs that use both BBS+ signatures and accumulators.
    • The SDK itself doesn't include the Typescript crypto package as a dependency, but the package can be used with the SDK to issue, prove, verify and revoke credentials, as shown in the tests mentioned below.
    • See the test tests/integration/anoncreds/demo.test.js for an example of how a BBS+ signature can be used with an accumulator for anonymous credentials. The accumulator holds a user/credential id: presence of the id in the accumulator means the credential is valid, absence means it has been revoked.
  • Verifiable encryption

    • Encrypt messages signed under a BBS+ signature for a third party and prove that the encryption was done correctly.
    • See the test tests/integration/anoncreds/saver-and-bound-check.test.js
  • Bound check/Range proof

    • Prove that messages under a BBS+ signature satisfy some bounds.
    • See the test tests/integration/anoncreds/saver-and-bound-check.test.js
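
To make the issue/prove/verify/revoke flow above concrete, here is an illustrative sketch; every identifier in it is hypothetical and stands in for the real APIs exercised in the tests linked above.

  // Hypothetical pseudocode of the anonymous credential flow; see the linked
  // tests for the real APIs from the Typescript crypto package.
  const credential = issueBBSPlusCredential(issuerKeypair, attributes); // issuer signs with BBS+
  await addToAccumulator(dock, accumulatorId, credential.id); // presence in accumulator => valid
  const witness = await getMembershipWitness(accumulatorId, credential.id); // held by the holder
  const proof = createCompositeProof(credential, witness, revealedAttributes); // BBS+ + membership
  const ok = verifyCompositeProof(proof, issuerPublicKey, accumulatorPublicKey); // verifier checks both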