
JSON-LD Best Practice: Context Caching

An important aspect of systems design is understanding the trade-offs you are making in your system. These trade-offs are influenced by a variety of factors: latency, safety, expressiveness, throughput, correctness, redundancy, etc. Systems, even ones that do effectively the same thing, prioritize their trade-offs differently. There is rarely “one perfect solution” to any problem.

It has been asserted that Unconstrained JSON-LD Performance Is Bad for API Specs. In that article, Dr. Chuck Severance asserts that JSON-LD parsing is 2,000 times more costly in real time and 70 times more costly in CPU time than pure JSON processing. Sounds bad, right? So, let’s unpack these claims and see where the journey takes us.

TL;DR: Don’t ever put a system that uses JSON-LD into production without thinking about your JSON-LD context caching strategy. If you want to use JSON-LD, your strategy should probably be one of the following: cache common contexts and do JSON-LD processing, or skip JSON-LD processing but enforce an input format via something like JSON Schema on your clients so they’re still submitting valid JSON-LD.

The Performance Test Suite

After reading the article yesterday, Dave Longley (the co-inventor of JSON-LD and the creator of the jsonld.js and php-json-ld libraries) put together a test suite that effectively re-creates the test that Dr. Severance ran. It processes a JSON-LD Person object in a variety of ways. We chose to change the object because the one that Dr. Severance chose is not necessarily a common use of JSON-LD (due to the extensive use of CURIEs) and we wanted a more realistic example (one that uses terms). The suite first tests pure JSON processing, then JSON-LD processing with a cached context, and then JSON-LD processing with an uncached context. We ran the tests using the largely unoptimized JSON-LD processors written in PHP and Node against a fully optimized JSON processor written in C. The tests used PHP 5.6, PHP 7.0, and node.js 4.2.
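If you want a feel for the shape of the test without digging into the suite itself, the following Node sketch is roughly what it boils down to (this is not the actual test suite; the Person document, context URL, and iteration count are illustrative, and it assumes a recent promise-based jsonld.js): time a loop of plain JSON parsing against a loop of JSON-LD compaction.

const jsonld = require('jsonld');

// Illustrative input: a small schema.org-style Person document.
const person = {
  '@context': 'http://schema.org/',
  '@type': 'Person',
  'name': 'Jane Doe',
  'url': 'http://jane.example.com/'
};
const personJson = JSON.stringify(person);
const ITERATIONS = 1000;

async function main() {
  let start = Date.now();
  for(let i = 0; i < ITERATIONS; i++) {
    JSON.parse(personJson);
  }
  console.log('plain JSON:', Date.now() - start, 'ms');

  start = Date.now();
  for(let i = 0; i < ITERATIONS; i++) {
    // Without a custom document loader, the remote context may be fetched on
    // every iteration -- that is the "uncached" case discussed below.
    await jsonld.compact(person, 'http://schema.org/');
  }
  console.log('JSON-LD compact:', Date.now() - start, 'ms');
}

main().catch(console.error);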

So, with Dr. Severance’s data in hand and our shiny new test suite, let’s do some science!

The Results

The raw output of the test suite is available, but we’ve transformed the data into a few pretty pictures below.

The first graph shows the wall time performance hit using basic JSON processing as the baseline (shown as 1x). It then compares that against JSON-LD processing using a cached context and JSON-LD processing using an uncached context. Wall time in this case means the time measured by a stopwatch running from the start of the test to the end of the test. Take a look at the longest bars in this graph:

As we can see, there is a significant performance hit any way you look at it. Wall time spent on processing JSON-LD with an uncached context in PHP 5 is 7,551 times slower than plain JSON processing! That’s terrible! Why would anyone choose to use JSON-LD with such a massive performance hit!

Even when you take out the time spent just sitting around, the CPU cost (for running the network code) is still pretty terrible. Take a look at the longest bars in this graph:

CPU processing time for JSON vs. JSON-LD with an uncached context in PHP 5 is 260x slower. For PHP 7 it’s 239x slower. For Node 4.2, it’s 140x slower. Sounds pretty dismal, right? JSON-LD is a performance hog… but hold up, let’s examine why this is happening.

JSON-LD adds meaning to your data. The way it does this is by associating your data with something called a JSON-LD context that has to be downloaded by the JSON-LD processor and applied to the JSON data. The context allows a system to formally apply a set of rules to data to determine whether a remote system and your local system are “speaking the same language”. It removes ambiguity from your data so that you know that when the remote system says “homepage” and your system says “homepage”, that they mean the same thing. Downloading things across a network is an expensive process, orders of magnitude slower than having something loaded in a CPU’s cache and executed without ever having to leave home sweet silicon home.
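For example, a tiny JSON-LD document is just plain JSON plus a pointer to a context (the context URL below is a placeholder):

{
  "@context": "http://example.com/contexts/person.jsonld",
  "homepage": "http://jane.example.com/"
}

The context document living at that URL is what maps the short term to an unambiguous identifier; here it maps “homepage” to the FOAF vocabulary’s homepage property, purely as an illustration:

{
  "@context": {
    "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id"}
  }
}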

So, what happens when you tell the program to go out to the network and fetch a document from the Internet for every iteration of a 1,000-cycle for loop? Your program takes forever to execute because it spends most of its time in network code and waiting for I/O from the remote site. So, this is lesson number 1. JSON-LD is not magic. Things that are slow (because of physics) are still slow in JSON-LD.

Best Practice: JSON-LD Context Caching

Accessing things across a network of any kind is expensive, which is why there are caches. There are primary, secondary, and tertiary caches in our CPUs, there are caches in our memory controllers, there are caches on our storage devices, there are caches on our network cards, there are caches in our routers, and yes, there are even caches in our JSON-LD processors. Use those caches because they provide a huge performance gain. Let’s look at that graph again, and see how much of a performance gain we get by using the caches (look at the second longest bars in both graphs):

CPU processing time for JSON vs. JSON-LD with a cached context in PHP 5 is 67x slower. For PHP 7 it’s 35x slower. For Node 4.2, it’s 18x slower. To pick the worst case, 67x slower (using a cached JSON-LD Context) is way better than 7,551x slower. That said, 67x slower still sounds really scary. So, let’s dig a bit deeper and put some processing time numbers (in milliseconds) behind these figures:

These numbers are less scary. In the common worst case, where we’re using a cached context in the slowest programming language tested, it will take 2ms to do JSON-LD processing per CPU core. If you have 8 cores, you can process 8 JSON-LD API requests in 2ms. It’s true that 2ms is an order of magnitude slower than just pure JSON processing, but the question is: is it worth it to you? Is gaining all of the benefits of using JSON-LD for your industry and your application worth 2ms per request?

If the answer is no, and you really need to shave off 2ms from your API response times, but you still want to use JSON-LD – don’t do JSON-LD processing. You can always delay processing until later by just ensuring that your client is delivering valid JSON-LD; all you need to do that is apply a JSON Schema to the incoming data. This effectively pushes JSON-LD processing off to the API client, which has 2ms to spare. If you’re building any sort of serious API, you’re going to be validating incoming data anyway and you can’t get around that JSON Schema processing cost.
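As a concrete sketch of that approach, a Node API could use a JSON Schema validator such as Ajv to check that incoming documents use the exact @context and shape you expect, without ever invoking a JSON-LD processor on the hot path (the schema and context URL below are illustrative, not a complete production schema):

const Ajv = require('ajv');
const ajv = new Ajv();

// Illustrative schema: require a fixed @context and a Person-shaped payload.
const personSchema = {
  type: 'object',
  required: ['@context', '@type', 'name'],
  properties: {
    '@context': {const: 'http://example.com/contexts/person.jsonld'},
    '@type': {const: 'Person'},
    name: {type: 'string'},
    homepage: {type: 'string'}
  },
  additionalProperties: false
};
const validatePerson = ajv.compile(personSchema);

function handleRequest(body) {
  if(!validatePerson(body)) {
    // Reject the request; the client didn't send the JSON-LD shape we expect.
    throw new Error('invalid payload: ' + ajv.errorsText(validatePerson.errors));
  }
  // The data is valid JSON-LD in our fixed context, and we never ran a
  // JSON-LD processor to establish that.
  return body;
}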

I’ve never had a discussion with someone where 2 milliseconds was the deal breaker between JSON-LD processing and not doing it. There are many things in software systems that eat up more than 2 milliseconds, but JSON-LD still gives you the choice of doing the processing at the server, pushing that responsibility off to the client, or a number of other approaches that provide different trade-offs.
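And if you do decide to do the processing at the server, this is roughly what a cached-context setup looks like with jsonld.js (the context URL and context document are placeholders; the document loader hook follows the jsonld.js documentation and may differ slightly between versions):

const jsonld = require('jsonld');

// Contexts we ship with the application instead of fetching over the network.
const CONTEXTS = {
  'http://example.com/contexts/person.jsonld': {
    '@context': {
      'name': 'http://xmlns.com/foaf/0.1/name',
      'homepage': {'@id': 'http://xmlns.com/foaf/0.1/homepage', '@type': '@id'}
    }
  }
};

const nodeDocumentLoader = jsonld.documentLoaders.node();

// Serve known contexts from memory; fall back to the network for anything else.
jsonld.documentLoader = async (url, options) => {
  if(url in CONTEXTS) {
    return {
      contextUrl: null,          // not using a context supplied via link header
      document: CONTEXTS[url],   // the cached context document
      documentUrl: url           // the canonical URL for the context
    };
  }
  return nodeDocumentLoader(url);
};

// JSON-LD processing for the common contexts now never leaves the process:
// await jsonld.compact(doc, 'http://example.com/contexts/person.jsonld');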

But Dr. Severance said…

There are a few parting thoughts in Unconstrained JSON-LD Performance Is Bad for API Specs that I’d be remiss in not addressing.

JSON-LD evangelists will talk about “caching” – this of course is an irrelevant argument because virtually all of the shared hosting PHP servers do not allow caching so at least in PHP the “caching fixes this” is a useless argument. Any normal PHP application in real production environments will be forced to re-retrieve and re-parse the context documents on every request / response cycle.

phpFastCache exists, use it. If for some reason you can’t, and I know of no reason you couldn’t, cache the context by writing it to disk and retrieving it from disk. Most modern operating systems will optimize this down to a very fast read from memory. If you can’t write to disk in your shared PHP hosting environment, switch to a provider that allows it (most of them do).

even with cached pre-parsed [ed: JSON-LD Context] documents the additional order of magnitude is due to the need to loop through the structures over and over, to detect many levels of *potential* indirection between prefixes, contexts, and possible aliases for prefixes or aliases.

That is not how JSON-LD processing works. Rather than go into the details, here’s a link to the JSON-LD processing algorithms.

json_decode is written in C in PHP and jsonld_compact is written in PHP and if jsonld_compact were written in C and merged into the PHP core and all of the hosting providers around the world upgraded to PHP 12.0 – it means that perhaps the negative performance impact of JSON-LD would be somewhat lessened “when pigs fly”.

You can do JSON-LD processing in 2ms in PHP 5, 0.7ms in PHP 7, and 1ms in Node 4. You don’t need a C implementation unless you need to shave those times off of your API calls.

If the JSON-LD community actually wants its work to be used outside the “Semantic Web” backwaters – or in situations where hipsters make all the decisions and never run their code into production, the JSON-LD community should stand up and publish a best practice to use JSON-LD in a way that maintains compatibility with JSON – so that APIs and be interoperable and performant in all programming languages. This document should be titled “High Performance JSON-LD” and be featured front and center when talking about JSON-LD as a way to define APIs.

I agree that we need to write more about high performance JSON-LD API design, because we have identified a few things that seem best-practice-y. The problem is that we’ve been too busy drinking our “tripel” mocha lattes and riding our fixies to the latest butchershop-by-day-faux-vapor-bar-by-night flashmob experiences to get around to it. I mean, we are hipsters after all. Play-play balance is important to us and writing best practices sounds like a real drag. 🙂

In all seriousness, JSON-LD is now published by several million sites. We know that people want some best practices from the JSON-LD community. Count this blog post as one of them. If you use JSON-LD, and that includes the developers that created those several million sites that publish JSON-LD, you are a part of the JSON-LD community. We created JSON-LD to help our fellow developers. If you think you have settled on a best practice with JSON-LD, don’t wait for someone else to write about it; please pay it forward and blog about it. Your community of fellow JSON-LD developers will thank you.

Github adds JSON-LD support in core product

Github has just announced that they’ve added JSON-LD support. Now if you get a pull request via email, and your mail client supports it (like Gmail does), you’ll see an action button in the display without having to open the email, like so:

How does Github do this? With Actions, which use JSON-LD markup. When you get an email from Github, it will now include markup that looks like this:

<script type="application/ld+json">
{
  "@type": "EmailMessage",
  "description": "View this Pull Request on GitHub",
  "action": {
    "@type": "ViewAction",
    "name": "View Pull Request"
  }
}
</script>

Programs like Gmail then take that JSON-LD and translate it into a simple action that you can take without opening the email. Do you send emails to your customers? If so, Meldium has a great HOW-TO on integrating actions with your customer emails.

Hooray for Github and JSON-LD!

Identity Credentials and Web Login

In a previous blog post, I outlined the need for a better login solution for the Web and why Mozilla Persona, WebID+TLS, and OpenID Connect currently don’t address important use cases that we’re considering in the Web Payments Community Group. The blog post contained a proposal for a new login mechanism for the Web that was simultaneously more decentralized, more extensible, enabled a level playing field, and was more privacy-aware than the previously mentioned solutions.

In the private conversations we have had with companies large and small, the proposal was met with a healthy dose of skepticism and excitement. There was enough excitement generated to push us to build a proof-of-concept of the technology. We are releasing this proof-of-concept to the Web today so that other technologists can take a look at it. It’s by no means done; there are plenty of bugs and security issues that we plan to fix in the next several weeks, but the core of the idea is there and you can try it out.

TL;DR: There is now an open source demo of credential-based login for the Web. We think it’s better than Persona, WebID+TLS, and OpenID Connect. If we can build enough support for Identity Credentials over the next year, we’d like to standardize it via the W3C.

The Demo

The demonstration that we’re releasing today is a proof-of-concept asserting that we can have a unified, secure identity and login solution for the Web. The technology is capable of storing and transmitting your identity credentials (email address, payment processor, shipping address, driver’s license, passport, etc.) while also protecting your privacy from those that would want to track and sell your online browsing behavior. It is in the same realm of technology as Mozilla Persona, WebID+TLS, and OpenID Connect. Benefits of using this technology include:

  • Solving the NASCAR login problem in a way that greatly increases identity provider competition.
  • Removing the need for usernames and passwords when logging into 99.99% of the websites that you use today.
  • Auto-filling information that you have to repeat over and over again (shipping address, name, email, etc.).
  • Solving the NASCAR payments problem in a way that greatly increases payment processor competition.
  • Storage and transmission of credentials, such as email addresses, driver’s licenses, and digital passports, via the Web that cryptographically prove that you are who you say you are.

The demonstration is based on the Identity Credentials technology being developed by the Web Payments Community Group at the World Wide Web Consortium. It consists of an ecosystem of four example websites. The purpose of each website is explained below:

Identity Provider

The Identity Provider stores your identity document and any information about you including any credentials that other sites may issue to you. This site is used to accomplish several things during the demo:

  • Create an identity.
  • Register your identity with the Login Hub.
  • Generate a verified email credential and store it in your identity.

Login Hub

This site helps other websites discover your identity provider in a way that protects your privacy from both the website you’re logging into as well as your identity provider. Eventually the functionality of this website will be implemented directly in browsers, but until that happens, it is used to bootstrap identity provider discovery and the login/credential transmission process. This site is used to do the following things during the demo:

  • Register your identity, creating an association between your identity provider and the email address and passphrase you use on the login hub.
  • Login to a website.

Credential Issuer

This site is responsible for verifying information about you like your home address, driver’s license, and passport information. Once the site has verified some information about you, it can issue a credential to you. For the purposes of the demonstration, all verifications are simulated and you will immediately be given a credential when you ask for one. All credentials are digitally signed by the issuer which means their validity can be proven without the need to contact the issuer (or be online). This site is used to do the following things during the demo:

  • Login using an email credential.
  • Issue other credentials to yourself like a business address, proof of age, driver’s license, and digital passport.

Single Sign-On Demo

The single sign-on website, while not implemented yet, will be used to demonstrate the simplicity of credential-based login. The sign-on process requires you to click a login button, enter your email and passphrase on the Login Hub, and then verify that you would like to transmit the requested credential to the single sign-on website. This website will allow you to do the following in a future demo:

  • Present various credentials to log in.

How it Works

The demo is split into four distinct parts. Each part will be explained in detail in the rest of this post. Before you try the demo, it is important that you understand that this is a proof-of-concept. The demo is pretty easy to break because we haven’t spent any time polishing it. It’ll be useful for technologists that understand how the Web works. It has only been tested in Google Chrome, versions 31 – 35. There are glaring security issues with the demo that have solutions which have not been implemented yet due to time constraints. We wanted to publish our work as quickly as possible so others could critique it early rather than sitting on it until it was “done”. With those caveats clearly stated up front, let’s dive in to the demo.

Creating an Identity

The first part of the demo requires you to create an identity for yourself. Do so by clicking the link in the previous sentence. Your short name can be something simple like your first name or a handle you use online. The passphrase should be something long and memorable that is specific to you. When you click the Create button, you will be redirected to your new identity page.

Note the text displayed in the middle of the screen. This is your raw identity data in JSON-LD format. It is a machine-readable representation of your credentials. There are only three pieces of information in it in the beginning. The first is the JSON-LD @context value, which tells machines how to interpret the information in the document. The second is the id value, which is the location of this particular identity on the Web. The third is the sysPasswordHash, which is just a bcrypt hash of your login password to the identity website.
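A freshly created identity document therefore looks something like this (the context URL, id, and hash below are placeholders rather than the demo’s actual values):

{
  "@context": "https://w3id.org/identity/v1",
  "id": "https://idp.example.com/identities/jane",
  "sysPasswordHash": "$2a$10$example.bcrypt.hash.value.0123456789"
}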

Global Web Login Network

Now that you have an identity, you need to register it with the global Web login network. The purpose of this network is to help map your preferred email address to your identity provider. Keep in mind that in the future, the piece of software that will do this mapping will be your web browser. However, until this technology is built into the browser, we will need to bootstrap the email to identity document mapping in another way.

The way that both Mozilla Persona and OpenID do it is fairly similar. OpenID assumes that your email address maps to your identity provider, so an OpenID login tied to a particular email domain assumes that domain is your identity provider. Mozilla Persona went a step further by saying that if your email provider wouldn’t vouch for your email address, Mozilla would. So Persona would first check to see if your email provider spoke the Persona protocol, and if it didn’t, the burden of validating the email address would fall back to Mozilla. This approach put Mozilla in the unenviable position of running a lot of infrastructure to make sure this entire system stayed up and running.

The Identity Credentials solution goes a step further than Mozilla Persona and states that you are the one that decides which identity provider your email address maps to. So, even if your email address is hosted by one company, you can use a completely different organization as your identity provider. You can probably imagine that this makes the large identity providers nervous because it means that they’re now going to have to compete for your business. You have the choice of who is going to be your identity provider regardless of what your email address is.

So, let’s register your new identity on the global web login network. Click the text on the screen that says “Click here to register”. That will take you to the Login Hub website. This website serves two purposes. The first is to map your preferred email address to your identity provider. The second is to protect your privacy as information is sent from your identity provider to other websites on the Internet (more on this later).

You should be presented with a screen that asks you for three pieces of information: your preferred email address, a passphrase, and a verification of that passphrase. When you enter this information, it will be used to do a number of things. The first thing that will happen is that a public/private keypair will be generated for the device that you’re using (your web browser, for instance). This keypair will be used as a second factor of authentication in later steps in this process. The second thing that will happen is that your email address and passphrase will be used to generate a query token, which will be later used to query the decentralized Telehash-based identity network. The third thing that will happen is that the mapping from your query token to your identity document will be encrypted and placed onto the Telehash network.
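The query token step is the easiest one to picture in code. Here’s a rough sketch (the exact concatenation and encoding used by the demo aren’t spelled out in this post, so treat the details as illustrative):

const crypto = require('crypto');

// Derive a query token by hashing the preferred email address and passphrase.
// This token is what later gets looked up on the Telehash network.
function createQueryToken(email, passphrase) {
  return crypto.createHash('sha256')
    .update(email + passphrase, 'utf8')
    .digest('hex');
}

// e.g. createQueryToken('jane@example.com', 'a long and memorable passphrase');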

The Decentralized Database (Telehash)

We could spend an entire blog post itself on Telehash, but the important thing to understand about it is that it provides a mechanism to store data in a decentralized database and query that database at a later time for the data. By storing this query token and query response in the decentralized database, it allows us to find your identity provider mapping regardless of which device you’re using to access the Web and independent of who your email provider is.

In fact, notice that I said “preferred email address” above. It doesn’t need to be an email address; it could be a simple string like “Bob” and a unique passphrase. Even though there are many “Bob”s in the world, it’s unlikely that they’d use the same 20+ character passphrase, so one could use just a first name and a complex passphrase. That said, we’re suggesting that most non-technical people use a preferred email address because most people won’t understand the dangers of SHA-256 collisions on username+passphrase combinations like sha256(“Bob” + “password”). As an additional aside, the decentralized database solution doesn’t need to be Telehash. It could just as easily be a decentralized ledger like Namecoin or Ripple.

Once you have filled out your preferred email address and passphrase, click the Register button. You will be sent back to your identity provider and will see three new pieces of information. The first piece of information is sysIdpMapping, which is the decentralized database query token (query) and passphrase-encrypted mapping (queryResponse). The second piece of information is sysDeviceKeys, which is the public key associated with the device that you registered your identity through and which will be used as a second factor of authentication in later versions of the demo. The third piece of information is sysRegistered, which is an indicator that the identity has been registered with the decentralized database.

Acquiring an Email Credential

At this point, you can’t really do much with your identity since it doesn’t have any useful credential information associated with it. So, the next step is to put something useful into your identity. When you create an account on most websites, the first thing the website asks you for is an email address. It uses this email address to communicate with you. The website will typically verify that it can send and receive an email to that address before fully activating your account. You typically have to go through this process over and over again, once for each new site that you join. It would be nice if an identity solution designed for the Web would take care of this email registration process for you. For those of you familiar with Mozilla Persona, this approach should sound very familiar to you.

The Identity Credentials technology is a bit different from Mozilla Persona in that it enables a larger number of organizations to verify your email address than just your email provider or Mozilla. In fact, we see a future where there could be tens, if not hundreds, of organizations that could provide email verification. For the purposes of the demo, the Identity Provider will provide a “simulated verification” (aka fake) of your email address. To get this credential, click on the text that says “Click here to get one”.

You will be presented with a single input field for your email address. Any email address will do, but you may want to use the preferred one you entered earlier. Once you have entered your email address, click “Issue Email Credential”. You will be sent back to your identity page and you should see your first credential listed in your JSON-LD identity document beside the credential key. Let’s take a closer look at what constitutes a credential in the system.

The EmailCredential is a statement that a 3rd party has done an email verification on your account. Any credential that conforms to the Identity Credentials specification is composed of a set of claims and a signature value. The claims tie the information that the 3rd party is asserting, such as an email address, to the identity. The signature is composed of a number of fields that can be used to cryptographically prove that only the issuer of the credential was capable of issuing this specific credential. The details of how the signature is constructed can be found in the Secure Messaging specification.
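Put together, an email credential sitting in your identity document looks roughly like the following (the property names and values here are illustrative; the Identity Credentials and Secure Messaging specifications are the normative references):

{
  "@type": "EmailCredential",
  "claim": {
    "id": "https://idp.example.com/identities/jane",
    "email": "jane@example.com"
  },
  "signature": {
    "@type": "GraphSignature2012",
    "creator": "https://issuer.example.com/keys/1",
    "signatureValue": "Qk5PzF2Yw...base64-encoded-signature-value..."
  }
}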

Now that you have an email credential, you can use it to log into a website. The next demonstration will use the email credential to log into a credential issuer website.

Credential-based Login

Most websites will only require an email credential to log in. There are other sites, such as ecommerce sites or high-security websites, that will require a few more credentials to successfully log in or use their services. For example, an ecommerce site might require your payment processor and shipping address to send you the goods you purchased. A website that sells local wines might request a credential proving that you are above the required drinking age in your locality. A travel website might request your digital passport to ease your security clearing process if you are traveling internationally. There are many more types of specialty credentials that one may issue and use via the Identity Credentials technology. The next demo will entail issuing some of these credentials to yourself. However, before we do that, we have to log in to the credential issuer website using our newly acquired email credential.

Go to the credential issuer website and click on the “Login” button. This will immediately send you to the login hub website where you had previously registered your identity. The request sent to the login hub by the credential issuer will effectively be a request for your email credential. Once you’re on the login hub, enter your preferred email address and passphrase and then click “Login”.

While you were entering your email address and passphrase, the page connected to the Telehash network and readied itself to send a query. When you click “Login”, your email address and passphrase are SHA-256’d and sent as a query to the Telehash network. Your identity provider will receive the request and respond to the query with an encrypted message that will then be decrypted using your passphrase. The contents of that message will tell the login hub where your identity provider is holding your identity. The request for the email credential is then forwarded to your identity provider. Note that at this point your identity provider has no idea where the request for your email credential is coming from because it is masked by the login hub website. This masking process protects your privacy.

Once the request for your email credential is received by your identity provider, a response is packaged up and sent back to the login hub, which then relays that information back to the credential issuer. Once the credential issuer receives your email credential, it will log you into the website. Note that at this point you didn’t have to enter a single password on the website; all you needed was an email credential to log in. Now that you have logged in, you can start issuing additional credentials to yourself.

Issuing Additional Credentials

The previous section introduced the notion that you can issue many different types of credentials. Once you have logged into the website, you may now issue a few of these credentials to yourself. Since this is a demonstration, no attempt will be made to verify those credentials by a 3rd party. The credentials that you can issue to yourself include a business address, proof of age, payment processor, driver’s license, and passport. You may specify any information you’d like in the input fields to see how the credential would look if it held real data.

Once you have filled out the various fields, click the blue button to issue the credential. The credential will be digitally signed and sent to your identity provider, which will then show you the credential that was issued to you. You have a choice to accept or reject the credential. If you accept the credential, it is written to your identity.

You may repeat this process as many times as you would like. Note how the passport credential has an issued-on date as well as an expiration date, to demonstrate that credentials can have a time limit associated with them.

Known Issues

As mentioned throughout this post, this demonstration has a number of shortcomings and areas that need improvement, among them are:

  • Due to a lack of time, we didn’t set up our own HTTPS Telehash seed. Since we didn’t set up the HTTPS Telehash seed, we couldn’t run the demo sites secured by TLS due to security settings in most web browsers related to WebSocket connections. Not using TLS opens up a gigantic man-in-the-middle attack possibility. A future version will, of course, use both TLS and HSTS on the website.
  • The Telehash query/response database isn’t decentralized yet. There are a number of complexities associated with creating a decentralized storage/query network, and we haven’t decided on what the proper approach should be. There is no reason why the decentralized database couldn’t be NameCoin or Ripple-based, and it would probably be good if we had multiple backend databases that supported the same query/response protocol.
  • We don’t check digital signatures yet, but will soon. We were focused on the flow of data first and ensuring security parameters were correct second. Clearly, you would never want to run such a system in production, but we will improve it such that all digital signatures are verified.
  • We do not use the public/private keypair generated in the browser to limit the domain and validity length of credentials yet. When the system is productionized, implementing this will be a requirement and will protect you even if your credentials are stolen through a phishing attack on the login hub.
  • We expect there to be many more security vulnerabilities that we haven’t detected yet. That said, we do believe that there are no major design flaws in the system and are releasing the proof-of-concept, along with source code, to the general public for feedback.

Feedback and Future Work

If you have any questions or concerns about this particular demo, please leave them as comments on this blog post or send them as comments to the mailing list.

Just as you logged in to the website using your email credential, you may also use other credentials such as your driver’s license or passport to log in to websites. Future work on this demo will add functionality to demonstrate the use of other forms of credentials to perform logins while also addressing the security issues outlined in the previous section.

A Proposal for Credential-based Login

Mozilla Persona allows you to sign in to web sites using any of your existing email addresses without needing to create a new username and password on each website. It was a really promising solution for the password-based security nightmare that is login on the Web today.

Unfortunately, all the paid engineers for Mozilla Persona have been transitioned off of the project. While Mozilla is going to continue to support Persona into the foreseeable future, it isn’t going to directly put any more resources into improving Persona. Mozilla had very good reasons for doing this. That doesn’t mean that the recent events aren’t frustrating or sad. The Persona developers made a heroic effort. If you find yourself in the presence of Lloyd, Ben, Dan, Jed, Shane, Austin, or Jared (sorry if I missed someone!) be sure to thank them for their part in moving the Web forward.

If Not Persona, Then What?

At the moment, the Web’s future with respect to a better login experience is unclear. The current leader seems to be OpenID Connect which, while implemented across millions of sites, is still not seeing the sort of adoption that you’d need for a general Web-based login solution. It’s also really complex, so complex that the lead editor of the specification OpenID Connect is built on left the work a long time ago in frustration. It also doesn’t pro-actively protect your privacy, meaning that your identity provider can track where you go on the Web. OpenID also gives an undue amount of power to email providers, like Gmail and Yahoo, as the email provider would typically end up becoming the identity provider for most people as well.

WebID+TLS should also be mentioned as this proposal embraces and extends a number of good features from that specification. Concepts such as an identity document, which is a place where you store facts about yourself, and the ability to express that information as Linked Data are solid concepts. WebID+TLS, unfortunately, relies on the client-certificate technology built into most browsers, which is confusing to non-technologists and puts too much of a burden onto websites adopting the technology, such as requiring the use of an RDF TURTLE processor as well as the ability to hook into the TLS stream. WebID+TLS also doesn’t do much to protect against pervasive monitoring and tracking of your behavior online by companies that would like to sell that behavior to the highest bidder.

Somewhere else on the Internet, the Web Payments Community Group is working on technology to build payments into the core architecture of the Web. Login and identity are a big part of payments. We need a solution that allows someone to login to a website and transmit their payment preferences at the same time. A single authorized click by you would provide your email address, shipping address, and preferred payment provider. Another authorized click by you would buy an item and have it shipped to your preferred address. There will be no need to fill out credit card information, shipping, or billing addresses and no need to create an email address and password for every site to which you want to send money. Persona was going to be this login solution for us, but that doesn’t seem achievable at this point.

What Persona Got Right

The Persona after-action review that Mozilla put together is useful. If you care about identity and login, you should read it. Persona did four groundbreaking things:

  1. It was intended to be fully decentralized, being integrated into the browser eventually.
  2. It focused on privacy, ensuring that your identity provider couldn’t track the sites that you were logging in to.
  3. It used an email address as your login ID, which is a proven approach to login on the Web.
  4. It was simple.

It failed for at least three important reasons that were not specific to Mozilla:

  1. It required email providers to buy into the protocol.
  2. It had a temporary, centralized solution that required a costly engineering team to keep it up and running.
  3. If your identity provider goes down, you can’t login to any website.

Finally, the Persona solution did one thing well. It provided a verified email credential, but is that enough for the Web?

The Need for Verifiable Credentials

There is a growing need for digitally verifiable credentials on the Web. Being able to prove that you are who you say you are is important when paying or receiving payment. It’s also important when trying to prove that you are a citizen of a particular country, of a particular age, licensed to perform a specific task (like designing a house), or have achieved a particular goal (like completing a training course). In all of these cases, it requires the ability for you to collect digitally signed credentials from a third party, like a university, and store them somewhere on the Web in an interoperable way.

The Web Payments group is working on just such a technology. It’s called the Identity Credentials specification.

We had somewhat of an epiphany a few weeks ago when it became clear that Persona was in trouble. An email address is just another type of credential. The process for transmitting a verified email address to a website should be the same as transmitting address information or your payment provider preference. Could we apply this concept and solve the login on the web problem as well as the transmission of verified credentials problem? It turns out that the answer is: most likely, yes.

Verified Credential-based Web Login

The process for credential-based login on the Web would more or less work like this:

  1. You get an account with an identity provider, or run one yourself. Not everyone wants to run one themselves, but it’s the Web, you should be able to easily do so if you want to.
  2. You show up at a website, it asks you to login by typing in your email address. No password is requested.
  3. The website then kick-starts a login process via the login hub that will be driven by a Javascript polyfill in the beginning, but will be integrated into the browser in time.
  4. A dialog is presented to you (that the website has no control over or visibility into) that asks you to login to your identity provider. Your identity provider doesn’t have to be your email provider. This step is skipped if you’ve logged in previously and your session with your identity provider is still active.
  5. A digitally signed assertion that you control your email address is given by your identity provider to the browser, which is then relayed on to the website you’re logging in to.

Details of how this process works can be found in the section titled Credential-based Login in the Identity Credentials specification. The important thing to note about this approach is that it takes all the best parts of Persona while overcoming key things that caused its demise. Namely:

  • Using an innovative new technology called Telehash, it is fully decentralized from day one.
  • It doesn’t require browser buy-in, but is implemented in such a way that allows it to be integrated into the browser eventually.
  • It is focused on privacy, ensuring that your identity provider can’t track the sites that you are logging into.
  • It uses an email address as your login ID, which is a proven approach to login on the Web.
  • It is simple, requiring far fewer web developer gymnastics than OpenID to implement. It’s just one Javascript library and one call.
  • It doesn’t require email providers to buy into the protocol like Persona did. Any party that the relying party website trusts can digitally sign a verified email credential.
  • If your identity provider goes down, there is still hope that you can login by storing your email credentials in a password-protected decentralized hash table on the Internet.

Why Telehash?

There is a part of this protocol that requires the system to map your email address to an identity provider. The way Persona did it was to query your email provider to see if it was a Persona Identity Provider (decentralized), and if not, the system would fall back to Mozilla’s email-based verification system (centralized). Unfortunately, if Persona’s verification system was down, you couldn’t log into a website at all. This rarely happened, but that was more because Mozilla’s team was excellent at keeping the site up and there weren’t any serious attempts to attack the site. It was still a centralized solution.

The Identity Credentials specification takes a different approach to the problem. It allows any identity provider to claim an email address. This means that you no longer need buy-in from email providers. You just need buy-in from identity providers, and there are a ton of them out there that would be happy to claim and verify email addresses regardless of which email provider issued them. Unfortunately, this approach means that either you need browser support, or you need some sort of mapping mechanism that maps email addresses to identity providers. Enter Telehash.

Telehash is an Internet-wide distributed hashtable (DHT) based on the proven Kademlia protocol used by BitTorrent and Gnutella. All communication is fully encrypted. It allows you to store-and-replicate things like the following JSON document:

  "email": "",
  "identityService": ""

If you want to find out what someone’s identity provider is, you just query the Telehash network. The more astute readers among you see the obvious problem in this solution, though. There are massive trust, privacy, and distributed denial of service attack concerns here.

Attacks on the Distributed Mapping Protocol

There are four problems with the system described in the previous section.

The first is that you can find out which email addresses are associated with which identity providers; that leaks information. Learning which identity provider someone uses can itself be sensitive. For example, finding out that an email address is associated with a military identity provider outs its owner as military personnel, which turns a regular privacy problem into a national security issue.

The second is that anyone on the network can claim to be an identity provider for an email address, which means that there is a big phishing risk. A nefarious identity provider need only put an entry for a victim’s email address in the DHT pointing to their corrupt identity provider service and watch the personal data start pouring in.

The third is that a website wouldn’t know which digital signature on an email credential to trust. Which verified credential is trustworthy and which one isn’t?

The fourth is that you can easily harvest all of the email addresses on the network and spam them.

Attack Mitigation on the Distributed Mapping Protocol

There are ways to mitigate the problems raised in the previous section. For example, replacing the email field with a hash of the email address and passphrase would prevent attackers from both spamming an email address and figuring out how it maps to an identity provider. It would also lower the desire for attackers to put fake data into the DHT because only the proper email + passphrase would end up returning a useful result to a query. The identity service would also need to be encrypted with the passphrase to ensure that injecting bogus data into the network wouldn’t result in an entry collision.
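As a sketch of what that mitigation could look like in code (the key derivation and cipher choices here are illustrative, not necessarily what the demo actually uses):

const crypto = require('crypto');

// Encrypt the identity service URL with a key derived from the passphrase so
// that a bogus or harvested DHT entry can't be decrypted into anything useful.
function encryptIdentityService(identityServiceUrl, passphrase) {
  const salt = crypto.randomBytes(16);
  const key = crypto.scryptSync(passphrase, salt, 32);
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(identityServiceUrl, 'utf8'),
    cipher.final()
  ]);
  return {
    salt: salt.toString('hex'),
    iv: iv.toString('hex'),
    tag: cipher.getAuthTag().toString('hex'),
    ciphertext: ciphertext.toString('hex')
  };
}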

In addition to these mitigations, the network would employ a high CPU/memory proof-of-work to put a mapping into the DHT so the network couldn’t get flooded by bogus mappings. Keep in mind that the proof-of-work doesn’t stop bad data from getting into the DHT, it just slows its injection into the network.
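To give a feel for what that might look like, here is a hashcash-style sketch (the actual scheme and difficulty aren’t pinned down anywhere in this post, so this is purely illustrative): brute-force a nonce until the hash of the mapping entry plus the nonce has a required number of leading zeroes. Verifying the work only costs one hash.

const crypto = require('crypto');

// Hashcash-style proof-of-work: find a nonce so that the SHA-256 hash of the
// mapping entry plus the nonce starts with `difficulty` zero hex characters.
function proofOfWork(mappingEntry, difficulty) {
  const target = '0'.repeat(difficulty);
  let nonce = 0;
  for(;;) {
    const hash = crypto.createHash('sha256')
      .update(mappingEntry + nonce, 'utf8')
      .digest('hex');
    if(hash.startsWith(target)) {
      return {nonce, hash};
    }
    nonce += 1;
  }
}

// Higher difficulty means exponentially more hashing for whoever writes to
// the DHT, while readers can verify the submitted nonce with a single hash.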

Finally, figuring out which verified email credential is valid is tricky. One could easily anoint 10 non-profit email verification services that the network would trust, or something like the certificate-authority framework, but that could be argued as over-centralization. In the end, this is more of a policy decision because you would want to make sure email verification services are legally bound to do the proper steps to verify an email while ensuring that people aren’t gouged for the service. We don’t have a good solution to this problem yet, but we’re working on it.

With the modifications above, the actual data uploaded to the DHT will probably look more like this:

  "id": "c8e52c34a306fe1d487a0c15bc3f9bbd11776f30d6b60b10d452bcbe268d37b0",  <-- SHA256 hash of + >15 character passphrase
  "proofOfWork": "000000000000f7322e6add42",                                 <-- Proof of work for email to identity service mapping
  "identityService": "GZtJR2B5uyH79QXCJ...s8N2B5utJR2B54m0Lt"                <-- Passphrase-encrypted identity provider service URL

To query the network, the customer must provide both an email address and a passphrase which are hashed together. If the hash doesn't exist on the network, then nothing is returned by Telehash.

Also note that this entire Telehash-based mapping mechanism goes away once the technology is built into the browser. The telehash solution is merely a stop-gap measure until the identity credential solution is built into browsers.

The Far Future

In the far future, browsers would communicate with your identity providers to retrieve data that are requested by websites. When you attempt to login to a website, the website would request a set of credentials. Your browser would either provide the credentials directly if it has cached them, or it would fetch them from your identity provider. This system has all of the advantages of Persona and provides realistic solutions to a number of the scalability issues that Persona suffers from.

The greatest challenges ahead will entail getting a number of things right. Some of them include:

  • Mitigate the attack vectors for the Telehash + Javascript-based login solution. Even though the Telehash-based solution is temporary, it must be solid until browser implementations become the norm.
  • Ensure that there is buy-in from large companies wanting to provide credentials for people on the Web. We have a few major players in the pipeline at the moment, but we need more to achieve success.
  • Clearly communicate the benefits of this approach over OpenID and Persona.
  • Make sure that setting up your own credential-based identity provider is as simple as dropping a PHP file into your website.
  • Make it clear that this is intended to be a W3C standard by creating a specification that could be taken standards-track within a year.
  • Get buy-in from web developers and websites, which is going to be the hardest part.

JSON-LD and Why I Hate the Semantic Web

Full Disclosure: I am one of the primary creators of JSON-LD, lead editor on the JSON-LD 1.0 specification, and chair of the JSON-LD Community Group. This is an opinionated piece about JSON-LD. A number of people in this space don’t agree with my viewpoints. My statements should not be construed as official statements from the JSON-LD Community Group, W3C, or Digital Bazaar (my company) in any way, shape, or form. I’m pretty harsh about the technologies covered in this article and want to be very clear that I’m attacking the technologies, not the people that created them. I think most of the people that created and promote them are swell and I like them a lot, save for a few misguided souls, who are loveable and consistently wrong.

JSON-LD became an official Web Standard last week. This is after exactly 100 teleconferences typically lasting an hour and a half, fully transparent with text minutes and recorded audio for every call. There were 218+ issues addressed, 2,000+ source code commits, and 3,102+ emails that went through the JSON-LD Community Group. The journey was a fairly smooth one with only a few jarring bumps along the road. The specification is already deployed in production by companies like Google, the BBC, Yandex, Yahoo!, and Microsoft. There is a quickly growing list of other companies that are incorporating JSON-LD. We’re off to a good start.

In the previous blog post, I detailed the key people that brought JSON-LD to where it is today and gave a rough timeline of the creation of JSON-LD. In this post I’m going to outline the key decisions we made that made JSON-LD stand out from the rest of the technologies in this space.

I’ve heard many people say that JSON-LD is primarily about the Semantic Web, but I disagree, it’s not about that at all. JSON-LD was created for Web Developers that are working with data that is important to other people and must interoperate across the Web. The Semantic Web was near the bottom of my list of “things to care about” when working on JSON-LD, and anyone that tells you otherwise is wrong. 😛

TL;DR: The desire for better Web APIs is what motivated the creation of JSON-LD, not the Semantic Web. If you want to make the Semantic Web a reality, stop making the case for it and spend your time doing something more useful, like actually making machines smarter or helping people publish data in a way that’s useful to them.


If you don’t know what JSON-LD is and you want to find out why it is useful, check out this video on Linked Data and this one on an Introduction to JSON-LD. The rest of this post outlines the things that make JSON-LD different from the traditional Semantic Web / Linked Data stack of technologies and why we decided to design it the way that we did.

Decision 1: Decrypt the Cryptic

Many W3C specifications are so cryptic that they require the sacrifice of your sanity and a secret W3C decoder ring to read. I never understood why these documents were so difficult to read, and after years of study on the matter, I think I found the answer. It turns out that most specification editors are just crap at writing.

It’s not like many of the things that are in most W3C specifications are complicated, it’s just that the editor is bad at explaining them to non-implementers, who make up most of the web developers that end up reading these specification documents. This approach is often defended by raising the point that readability of the specification by non-implementers is viewed as secondary to its technical accuracy for implementers. The audience is the implementer, and you are expected to cater to them. To counter that point, though, we all know that technical accuracy is a bad excuse for crap writing. You can write something that is easy to understand and technically accurate, it just takes more effort to do that. Knowing your audience helps.

We tried our best to eliminate complex techno-babble from the JSON-LD specification. I made it a point to not mention RDF at all in the JSON-LD 1.0 specification because you didn’t need to go off and read about it to understand what was going on in JSON-LD. There was tremendous push back on this point, which I’ll go into later, but the point is that we wanted to communicate at a more conversational level than typical Internet and Web specifications because being pedantic too early in the spec sets the wrong tone.

It didn’t always work, but it certainly did set the tone we wanted for the community, which was that this Linked Data stuff didn’t have to seem so damn complicated. The JSON-LD 1.0 specification starts out by primarily using examples to introduce key concepts. It starts at basics, assuming that the audience is a web developer with modest training, and builds its way up slowly into more advanced topics. The first 70% of the specification contains barely any normative/conformance language, but after reading it, you know what JSON-LD can do. You can look at the section on the JSON-LD Context to get an idea of what this looks like in practice.

This approach wasn’t a wild success. Reading sections of the specification that have undergone feedback from more nitpicky readers still makes me cringe because ease of understanding has been sacrificed at the altar of pedantic technical accuracy. However, I don’t feel embarrassed to point web developers to a section of the specification when they ask for an introduction to a particular feature of JSON-LD. There are not many specifications where you can do that.

Decision 2: Radical Transparency

One of the things that has always bothered me about W3C Working Groups is that you have to either be an expert to participate, or you have to be a member of the W3C, which can cost a non-trivial amount of money. This results in your typical web developer being able to comment on a specification, but not really having the ability to influence a Working Group decision with a vote. It also hobbles the standards-making community because the barrier to entry is perceived as impossibly high. Don’t get me wrong, the W3C staff does as much as they can to drive inclusion and they do a damn good job at it, but that doesn’t stop some of their member companies from being total dicks behind closed door sessions.

The W3C is a consortium of mostly for-profit companies and they have things they care about like market share, quarterly profits, and drowning goats (kidding!)… and anyone can join as long as they pay the membership dues! My point is that because there is a lack of transparency at times, it makes even the best Working Group less responsive to the general public, and that harms the public good. These closed door rules are there so that large companies can say certain things without triggering a lawsuit, which is sometimes used for good but typically results in companies being jerks and nobody finding out about it.

So, in 2010 we kicked off the JSON-LD work by making it radically open and we fought for that openness every step of the way. Anyone can join the group, anyone can vote on decisions, anyone can join the teleconferences, there are no closed door sessions, and we record the audio of every meeting. We successfully kept the technical work on the specification this open from the beginning to the release of the JSON-LD 1.0 web standard a week ago. People came and went from the group over the years, but anyone could participate at any level and that was probably the thing I’m most proud of regarding the process that was used to create JSON-LD. Had we not been this open, Markus Lanthaler may have never gone from being a gifted student in Italy to editor of the JSON-LD API specification and now leader of the Hypermedia Driven Web APIs community. We also may never have had the community backing to do some of the things we did in JSON-LD, like kicking RDF in the nuts.

Decision 3: Kick RDF in the Nuts

RDF is a shitty data model. It doesn’t have native support for lists. LISTS for fuck’s sake! The key data structure that’s used by almost every programmer on this planet and RDF starts out by giving developers a big fat middle finger in that area. Blank nodes are an abomination that we need, but they are applied inconsistently in the RDF data model (you can use them in some places, but not others). When we started with JSON-LD, RDF didn’t have native graph support either. For all the “RDF data model is elegant” arguments we’ve seen over the past decade, there are just as many reasons to kick it to the curb. This is exactly what we did when we created JSON-LD, and that really pissed off a number of people that had been working on RDF for over a decade.

I personally wanted JSON-LD to be compatible with RDF, but that’s about it. You could convert JSON-LD to and from RDF and get something useful, but JSON-LD had a more sane data model where lists were a first-class construct, you had generalized graphs, and you could use JSON-LD using a simple library and standard JSON tooling. To put that in perspective, to work with RDF you typically needed a quad store, a SPARQL engine, and some hefty libraries. Your standard web developer has no interest in that toolchain because it adds more complexity to the solution than is necessary.
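To make “lists as a first-class construct” concrete: in JSON-LD you declare a term as an ordered list once in the context and then just use a plain JSON array, the way developers already write their data (the vocabulary URL below is only an example):

{
  "@context": {
    "ex": "http://example.com/vocab#",
    "favoriteColors": {"@id": "ex:favoriteColors", "@container": "@list"}
  },
  "favoriteColors": ["red", "green", "blue"]
}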

So screw it, we thought, let’s create a graph data model that looks and feels like JSON, RDF and the Semantic Web be damned. That’s exactly what we did and it was working out pretty well until…

Decision 4: Work with the RDF Working Group. Whut?!

Around mid-2012, the JSON-LD stuff was going pretty well and the newly chartered RDF Working Group was going to start work on RDF 1.1. One of the work items was a serialization of RDF for JSON. The lead solutions for RDF in JSON were things like the aptly named RDF/JSON and JTriples, both of which would look incredibly foreign to web developers and continue the narrative that the Semantic Web community creates esoteric solutions to non-problems. The biggest problem being that many of the participants in the RDF Working Group at the time didn’t understand JSON.

The JSON-LD group decided to weigh in on the topic by pointing the RDF WG to JSON-LD as an example of what was needed to convince people that this whole Linked Data thing could be useful to web developers. I remember the discussions getting very heated over multiple months, and at times, thinking that the worst thing we could do to JSON-LD was to hand it over to the RDF Working Group for standardization.

It is at that point that David Wood, one of the chairs of the RDF Working Group, phoned me up to try and convince me that it would be a good idea to standardize the work through the RDF WG. I was very skeptical because there were people in the RDF Working Group who drove some of the thinking that I had grown to see as toxic to the whole Linked Data / Semantic Web movement. I trusted Dave Wood, though. I had never seen him get religiously zealous about RDF like some of the others in the group and he seemed to be convinced that we could get JSON-LD through without ruining it. To Dave’s credit, he was mostly right. 🙂

Decision 5: Hate the Semantic Web

It’s not that the RDF Working Group was populated by people that are incompetent, or that I didn’t personally like. I’ve worked with many of them for years, and most of them are very intelligent, capable, gifted people. The problem with getting a room full of smart people together is that the group’s world view gets skewed. There are many reasons that a working group filled with experts doesn’t consistently produce great results. For example, many of the participants can be humble about their knowledge, so they tend to think that a good chunk of the people that will be using their technology will be just as enlightened. Bad feature ideas can be argued for months and rationalized because smart people, lacking any sort of compelling real world data, are great at debating and rationalizing bad decisions.

I don’t want people to get the impression that there was or is any sort of animosity in the Linked Data / Semantic Web community because, as far as I can tell, there isn’t. Everyone wants to see this stuff succeed and we all have our reasons and approaches.

That said, after 7+ years of being involved with Semantic Web / Linked Data, our company has never had a need for a quad store, RDF/XML, N3, NTriples, TURTLE, or SPARQL. When you chair standards groups that kick out “Semantic Web” standards, but even your company can’t stomach the technologies involved, something is wrong. That’s why my personal approach with JSON-LD just happened to be burning most of the Semantic Web technology stack (TURTLE/SPARQL/Quad Stores) to the ground and starting over. It’s not a strategy that works for everyone, but it’s the only one that worked for us, and the only way we could think of jarring the more traditional Semantic Web community out of its complacency.

I hate the narrative of the Semantic Web because the focus has been on the wrong set of things for a long time. That community, who I have been consciously distancing myself from for a few years now, is schizophrenic in its direction. Precious time is spent in groups discussing how we can query all this Big Data that is sure to be published via RDF instead of figuring out a way of making it easy to publish that data on the Web by leveraging common practices in use today. Too much time is spent assuming a future that’s not going to unfold in the way that we expect it to. That’s not to say that TURTLE, SPARQL, and Quad stores don’t have their place, but I always struggle to point to a typical startup that has decided to base their product line on that technology (versus ones that choose MongoDB and JSON on a regular basis).

I like JSON-LD because it’s based on technology that most web developers use today. It helps people solve interesting distributed problems without buying into any grand vision. It helps you get to the “adjacent possible” instead of having to wait for a mirage to solidify.

Decision 6: Believe in Consensus

All this said, you can’t hope to achieve anything by standing on idealism alone and I do admit that some of what I say above is idealistic. At some point you have to deal with reality, and that reality is that there are just as many things that the RDF and Semantic Web initiative got right as it got wrong. The RDF data model is shitty, but because of the gauntlet thrown down by JSON-LD and a number of like-minded proponents in the RDF Working Group, the RDF Data Model was extended in a way that made it compatible with JSON-LD. As a result, the gap between the RDF model and the JSON-LD model narrowed to the point that it became acceptable to more-or-less base JSON-LD off of the RDF model. It took months to do the alignment, but it was consensus at its best. Nobody was happy with the result, but we could all live with it.

To this day I assert that we could rip the data model section out of the JSON-LD specification and it wouldn’t really affect the people using JSON-LD in any significant way. That’s consensus for you. The section is in there because other people wanted it in there and because the people that didn’t want it in there could very well have turned out to be wrong. That’s really the beauty of the W3C and IETF process. It allows people that have seemingly opposite world views to create specifications that are capable of supporting both world views in awkward but acceptable ways.

JSON-LD is a product of consensus. Nobody agrees on everything in there, but it all sticks together pretty well. There being a consensus on consensus is what makes the W3C, IETF, and thus the Web and the Internet work. Through all of the fits and starts, permathreads, pedantry, idealism, and deadlock, the way it brings people together to build this thing we call the Web is a beautiful thing.


I’d like to thank the W3C staff that were involved in getting JSON-LD to official Web standard status (and the staff, in general, for being really fantastic people). Specifically, Ivan Herman for simultaneously pointing out all of the problems that lay in the road ahead while also providing ways to deal with each one as we came upon them. Sandro Hawke for pushing back against JSON-LD, but always offering suggestions about how we could move forward. I actually think he may have ended up liking JSON-LD in the end :). Doug Schepers and Ian Jacobs for fighting for W3C Community Groups, without which JSON-LD would not have been able to plead the case for Web developers. The systems team and publishing team, who are unknown to most of you, but work tirelessly to ensure that everything continues to operate, be published, and improve at W3C.

From the RDF Working group, the chairs (David Wood and Guus Schreiber), for giving JSON-LD a chance and ensuring that it got a fair shake. Richard Cyganiak for pushing us to get rid of microsyntaxes and working with us to try and align JSON-LD with RDF. Kingsley Idehen for being the first external implementer of JSON-LD after we had just finished scribbling the first design down on paper and tirelessly dogfooding what he preaches. Nobody does it better. The rest of the RDF Working Group members: had you not been involved, JSON-LD would have escaped unscathed from your influence, making my life a hell of a lot easier, but leaving JSON-LD and the people that use it in a worse situation.

The Origins of JSON-LD

Full Disclosure: I am one of the primary creators of JSON-LD, lead editor on the JSON-LD 1.0 specification, and chair of the JSON-LD Community Group. These are my personal opinions and not the opinions of the W3C, JSON-LD Community Group, or my company.

JSON-LD became an official Web Standard last week. This is after exactly 100 teleconferences typically lasting an hour and a half, fully transparent with text minutes and recorded audio for every call. There were 218+ issues addressed, 2,071+ source code commits, and 3,102+ emails that went through the JSON-LD Community Group. The journey was a fairly smooth one with only a few jarring bumps along the road. The specification is already deployed in production by companies like Google, the BBC, Yandex, Yahoo!, and Microsoft. There is a quickly growing list of other companies that are incorporating JSON-LD, but that's the future. This blog post is more about the past, namely where did JSON-LD come from? Who created it and why?

I love origin stories. When I was in my teens and early twenties, the only origin stories I liked to read about were of the comic and anime variety. Spiderman, great origin story. Superman, less so, but entertaining. Nausicaä, brilliant. Major Motoko Kusanagi, nuanced. Spawn, dark. Those connections with characters fade over time as you understand that this world has more interesting ones. Interesting because they touch the lives of billions of people, and since I’m a technologist, some of my favorite origin stories today consist of finding out the personal stories behind how a particular technology came to be. The Web has a particularly riveting origin story. These stories are hard to find because they’re rarely written about, so this is my attempt at documenting how JSON-LD came to be and the handful of people that got it to where it is today.

The Origins of JSON-LD

When you’re asked to draft the press pieces on the launch of new world standards, you have two lists of people in your head. The first is the “all inclusive list”, which is every person that uttered so much as a word that resulted in a change to the specification. That list is typically very long, so you end up saying something like “We’d like to thank all of the people that provided input to the JSON-LD specification, including the JSON-LD Community, RDF Working Group, and individuals who took the time to send in comments and improve the specification.” With that statement, you are sincere and cover all of your bases, but feel like you’re doing an injustice to the people without which the work would never have survived.

The all inclusive list is very important; they helped refine the technology to the point that everyone could achieve consensus on it being something that is world class. However, 90% of the back-breaking work to get the specification to the point that everyone else could comment on it is typically undertaken by 4-5 people. It's a thankless and largely unpaid job, and this is how the Web is built. It's those people that I'd like to thank while exploring the origins of JSON-LD.


JSON-LD started around late 2008 as the work on RDFa 1.0 was wrapping up. We were under pressure from Microformats and Microdata, which we were also heavily involved in, to come up with a good way of programming against RDFa data. At around the same time, my company was struggling with the representation of data for the Web Payments work. We had already made the switch to JSON a few years prior and were storing that data in MySQL, mostly because MongoDB didn't exist yet. We were having a hard time translating the RDFa we were ingesting (products for sale, pricing information, etc.) into something that worked well in JSON. Around then, Mark Birbeck, one of the creators of RDFa, and I were thinking about making something RDFa-like for JSON. Mark had proposed a syntax for something called RDFj, which I thought had legs, but Mark didn't necessarily have the time to pursue.

The Hard Grind

After exchanging a few emails with Mark about the topic over the course of 2009, and letting the idea stew for a while, I wrote up a quick proposal for a specification and passed it by Dave Longley, Digital Bazaar's CTO. We kicked the idea around a bit more and in May of 2010, published the first working draft of JSON-LD. While Mark was instrumental in injecting the first set of basis ideas into JSON-LD, Dave Longley would become the key technical mind behind how to make JSON-LD work for web programmers.

At that time, JSON-LD had a pretty big problem. You can represent data in JSON-LD in a myriad of different ways, making it hard to tell if two JSON-LD documents are the same or not. This was an important problem to Digital Bazaar because we were trying to figure out how to create product listings, digital receipts, and contracts using JSON-LD. We had to be able to tell if two product listings were the same, and we had to figure out a way to serialize the data so that products and their associated prices could be listed on the Web in a decentralized way. This meant digital signatures, and you have to be able to create a canonical/normalized form for your data if you want to be able to digitally sign it.

Dave Longley invented the JSON-LD API, JSON-LD Framing, and JSON-LD Graph Normalization to tackle these canonicalization/normalization issues and did the first four implementations of the specification in C++, JavaScript, PHP, and Python. The JSON-LD Graph Normalization problem itself took roughly 3 months of concentrated 70+ hour work weeks and dozens of iterations by Dave Longley to produce an algorithm that would work. To this day, I remain convinced that there are only a handful of people on this planet with a mind that is capable of solving those problems. He was the first and only one that cracked those problems. It requires a sort of raw intelligence, persistence, and ability to constantly re-evaluate the problem solving approach you’re undertaking in a way that is exceedingly rare.
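To make the canonicalization problem concrete, here is a small sketch using the pyld library (a Python JSON-LD processor) and the URDNA2015 algorithm that descended from that early normalization work. The vocabulary URLs and names below are illustrative choices on my part, not material from the original specifications. Two documents that express the same facts with different terms normalize to the same N-Quads, so hashing (and therefore signing) them gives the same result:

  import hashlib
  from pyld import jsonld

  # The same facts, expressed with different term names and key ordering.
  doc_a = {"@context": {"name": "http://schema.org/name",
                        "knows": {"@id": "http://schema.org/knows", "@type": "@id"}},
           "@id": "http://example.com/people/manu",
           "name": "Manu", "knows": "http://example.com/people/dave"}
  doc_b = {"@context": {"fullName": "http://schema.org/name",
                        "colleague": {"@id": "http://schema.org/knows", "@type": "@id"}},
           "@id": "http://example.com/people/manu",
           "colleague": "http://example.com/people/dave", "fullName": "Manu"}

  options = {"algorithm": "URDNA2015", "format": "application/n-quads"}
  assert jsonld.normalize(doc_a, options) == jsonld.normalize(doc_b, options)

  # A stable hash over the canonical form is what a digital signature can be built on.
  print(hashlib.sha256(jsonld.normalize(doc_a, options).encode("utf-8")).hexdigest())

The hard part, of course, was not calling a normalize() function but inventing an algorithm that produces a deterministic ordering for arbitrary graphs, including ones full of blank nodes.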

Dave and I continued to refine JSON-LD, with him working on the API and me working on the syntax for the better part of 2010 and early 2011. When MongoDB started really taking off in 2010, the final piece just clicked into place. We had the makings of a Linked Data technology stack that would work for web developers.

Toward Stability

Around April 2011, we launched the JSON-LD Community Group and started our public push to try and put the specification on a standards track at the World Wide Web Consortium (W3C). It is at this point that Gregg Kellogg joined us to help refine the rough edges of the specification and provide his input. For those of you that don’t know Gregg, I know of no other person that has done complete implementations of the entire stack of Semantic Web technologies. He has Ruby implementations of quad stores, TURTLE, N3, NQuads, SPARQL engines, RDFa, JSON-LD, etc. If it’s associated with the Semantic Web in any way, he’s probably implemented it. His depth of knowledge of RDF-based technologies is unmatched and he focused that knowledge on JSON-LD to help us hone it to what it is today. Gregg helped us with key concepts, specification editing, implementations, tests, and a variety of input that left its mark on JSON-LD.

Markus Lanthaler also joined us around the same time (2011) that Gregg did. The story of how Markus got involved with the work is probably my favorite way of explaining how the standards process should work. Markus started giving us input while a master's student at Technische Universität Graz. He didn't have a background in standards, he didn't know anything about the W3C process or specification editing, and he was as green as one can be with respect to standards creation. We all start where he did, but I don't know of many people that became as influential as quickly as Markus did.

Markus started by commenting on the specification on the mailing list, then quickly started joining calls. He’d raise issues and track them, he started on his PHP implementation, then started making minor edits to the specifications, then major edits until earning our trust to become lead specification editor for the JSON-LD API specification and one of the editors for the JSON-LD Syntax specification. There was no deliberate process we used to make him lead editor, it just sort of happened based on all the hard work he was putting in, which is the way it should be. He went through a growth curve that normally takes most people 5 years in about a year and a half, and it happened exactly how it should happen in a meritocracy. He earned it and impressed us all in the process.

The Final Stretch

Of special mention as well is Niklas Lindström, who joined us starting in 2012 on almost every JSON-LD teleconference and provided key input to the specifications. Aside from being incredibly smart and talented, Niklas is particularly gifted in his ability to find a balanced technical solution that moved the group forward when we found ourselves deadlocked on a particular decision. Paul Kuykendall joined us toward the very end of the JSON-LD work in early 2013 and provided fresh eyes on what we were working on. Aside from being very level-headed, Paul helped us understand what was important to web developers and what wasn’t toward the end of the process. It’s hard to find perspective as work wraps up on a standard, and luckily Paul joined us at exactly the right moment to provide that insight.

There were literally hundreds of people that provided input on the specification throughout the years, and I'm very appreciative of that input. However, without this core of 4-6 people, JSON-LD would have never had a chance. I will never be able to find the words to express how deeply appreciative I am to Dave, Markus, Gregg, Niklas and Paul, who did the work on a primarily volunteer basis. At this moment in time, the Web is at the core of the way humankind communicates, and the most ardent protectors of this public good create standards to ensure that the Web continues to serve all of us. It boils my blood to then know that they will go largely unrewarded by society for creating something that will benefit hundreds of millions of people, but that's another post for another time.

The next post in this series tells the story of how JSON-LD was nearly eliminated on several occasions by its critics and proponents while on its journey toward a web standard.

JSON-LD is the Bee’s Knees

Full disclosure: I’m one of the primary authors and editors of the JSON-LD specification. I am also the chair of the group that created JSON-LD and have been an active participant in a number of Linked Data initiatives: RDFa (chair, author, editor), JSON-LD (chair, co-creator), Microdata (primary opponent), and Microformats (member, haudio and hvideo microformat editor). I’m biased, but also well informed.

JSON-LD has been getting a great deal of good press lately. It was adopted by Google, Yahoo!, Yandex, and Microsoft for use in schema.org, and the PaySwarm universal payment protocol is based on it. It was also integrated with Google's Gmail service, and the open social networking folks have started integrating it into the Activity Streams 2.0 work.

That all of these positive adoption stories exist was precisely the reason why Shane Becker's post on why JSON-LD is an Unneeded Spec was so surprising. If you haven't read it yet, you may want to, as the rest of this post dissects the arguments he makes (it's a pretty quick 5 minute read). The post is a broad-brush opinion piece based on a number of factual errors and misinformed opinion. I'd like to clear up these errors in this blog post and underscore some of the reasons JSON-LD exists and how it has been developed.

A theatrical interpretation of the “JSON-LD is Unneeded” blog post

Shane starts with this claim:

Today I learned about a proposed spec called JSON-LD. The “LD” is for linked data (Linked Data™ in the Uppercase “S” Semantic Web sense).

When I started writing the original JSON-LD specification, one of the goals was to try and merge lessons learned in the Microformats community with lessons learned during the development of RDFa and Microdata. This meant figuring out a way to marry the lowercase semantic web with the uppercase Semantic Web in a way that was friendly to developers. For developers that didn’t care about the uppercase Semantic Web, JSON-LD would still provide a very useful data structure to program against. In fact, Microformats, which are the poster-child for the lowercase semantic web, were supported by JSON-LD from day one.

Shane’s article is misinformed with respect to the assertion that JSON-LD is solely for the uppercase Semantic Web. JSON-LD is mostly for the lowercase semantic web, the one that developers can use to make their applications exchange and merge data with other applications more easily. JSON-LD is also for the uppercase Semantic Web, the one that researchers and large enterprises are using to build systems like IBM’s Watson supercomputer, search crawlers, Gmail, and open social networking systems.

Linked data. Web sites. Standards. Machine readable.
Cool. All of those sound good to me. But they all sound familiar, like we’ve already done this before. In fact, we have.

We haven’t done something like JSON-LD before. I wish we had because we wouldn’t have had to spend all that time doing research and development to create the technology. When writing about technology, it is important to understand the basics of a technology stack before claiming that we’ve “done this before”. An astute reader will notice that at no point in Shane’s article is any text from the JSON-LD specification quoted, just the very basic introductory material on the landing page of the website. More on this below.

Linked data
That’s just the web, right? I mean, we’ve had the <a href> tag since literally the beginning of HTML / The Web. It’s for linking documents. Documents are a representation of data.

Speaking as someone that has been very involved in the Microformats and RDFa communities, yes, it's true that the document-based Web can be used to publish Linked Data. The problem is that the standard way of expressing a link to another piece of data that can be followed did not carry over to the data-based Web. That is, most JSON-based APIs don't have a standard way of encoding a hyperlink.

The other implied assertion with the statement above is that the document-based Web is all we need. If this were true, sending HTML documents to Web applications would be all we needed. Web developers know that this isn’t the case today for a number of obvious reasons. We send JSON data back and forth on the Web when we need to program against things like Facebook, Google, or Twitter’s services. JSON is a very useful data format for machine-to-machine data exchange. The problem is that JSON data has no standard way of doing a variety of things we do on the document-based Web, like expressing links, expressing the types of data (like times and dates), and a variety of other very useful features for the data-based Web. This is one of the problems that JSON-LD addresses.
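As a rough illustration (using the pyld Python processor and well-known FOAF and Dublin Core URLs of my choosing, not anything quoted from Shane's post), a JSON-LD context is what turns an ordinary JSON value into an unambiguous link or a typed date:

  from pyld import jsonld

  doc = {
      "@context": {
          # "homepage" is declared to be a link to another resource, not just a string...
          "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id"},
          # ...and "modified" carries an explicit datatype.
          "modified": {"@id": "http://purl.org/dc/terms/modified",
                       "@type": "http://www.w3.org/2001/XMLSchema#dateTime"}
      },
      "homepage": "http://manu.sporny.org/",
      "modified": "2013-08-04T17:39:53Z"
  }

  print(jsonld.expand(doc))
  # [{'http://purl.org/dc/terms/modified': [{'@type': 'http://www.w3.org/2001/XMLSchema#dateTime',
  #                                          '@value': '2013-08-04T17:39:53Z'}],
  #   'http://xmlns.com/foaf/0.1/homepage': [{'@id': 'http://manu.sporny.org/'}]}]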

Web sites
If it’s not wrapped in HTML and viewable in a browser, is it really a website? JSON isn’t very useful in the browser by itself. It’s not style-able. It’s not very human-readable. And worst of all, it’s not clickable.

Websites are composed of many parts. It’s a weak argument to say that if a site is mainly composed of data that isn’t in HTML, and isn’t viewable in a browser, that it’s not a real website. The vast majority of websites like Twitter and Facebook are composed of data and API calls with a relatively thin varnish of HTML on top. JSON is the primary way that applications interact with these and other data-driven websites. It’s almost guaranteed these days that any company that has a popular API uses JSON in their Web service protocol.

Shane’s argument here is pretty confused. It assumes that the primary use of JSON-LD is to express data in an HTML page. Sure, JSON-LD can do that, but focusing on that brush stroke is missing the big picture. The big picture is that JSON-LD allows applications that use it to share data and interoperate in a way that is not possible with regular JSON, and it’s especially useful when used in conjunction with a Web service or a document-based database like MongoDB or CouchDB.

Standards based
To their credit, JSON-LD did license their website content Creative Commons CC0 Public Domain. But, the spec itself isn’t. It’s using (what seems to be) a W3C boilerplate copyright / license. Copyright © 2010-2013 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.

Nope. The JSON-LD specification has been released under a Creative Commons Attribution 3.0 license multiple times in the past, and it will be released under a Creative Commons license again, most probably CC0. The JSON-LD specification was developed in a W3C Community Group using a Creative Commons license and then released to be published as a Web standard via W3C using their W3C Community Final Specification Agreement (FSA), which allows the community to fork the specification at any point in time and publish it under a different license.

When you publish a document through the W3C, they have their own copyright, license, and patent policy associated with the document being published. There is a legal process in place at W3C that asserts that companies can implement W3C published standards in a patent and royalty-free way. You don’t get that with CC0, in fact, you don’t get any such vetting of the technology or any level of patent and royalty protection.

What we have with JSON-LD is better than what is proposed in Shane’s blog post. You get all of the benefits of having W3C member companies vet the technology for technical and patent issues while also being able to fork the specification at any point in the future and publish it under a license of your choosing as long as you state where the spec came from.

Machine readable
Ah… “machine readable”. Every couple of years the current trend of what machine readable data should look like changes (XML/JSON, RSS/Atom, xml-rpc/SOAP, rest/WS-*). Every time, there are the same promises. This will solve our problems. It won’t change. It’ll be supported forever. Interoperability. And every time, they break their promises. Today’s empires, tomorrow’s ashes.

At no point has any core designer of JSON-LD claimed 1) that JSON-LD will “solve our problems” (or even your particular problem), 2) that it won’t change, and 3) that it will be supported forever. These are straw-man arguments. The current consensus of the group is that JSON-LD is best suited to a particular class of problems and that some developers will have no need for it. JSON-LD is guaranteed to change in the future to keep pace with what we learn in the field, and we will strive for backward compatibility for features that are widely used. Without modification, standardized technologies have a shelf life of around 10 years, 20-30 if they’re great. The designers of JSON-LD understand that, like the Web, JSON-LD is just another grand experiment. If it’s useful, it’ll stick around for a while, if it isn’t, it’ll fade into history. I know of no great software developer or systems designer that has ever made these three claims and been serious about it.

We do think that JSON-LD will help Web applications interoperate better than they do with plain ol' JSON. For an explanation of how, there is a nice video introducing JSON-LD.

With respect to the “Today’s empires, tomorrow’s ashes” cynicism, we’ve already seen a preview of the sort of advances that Web-based machine-readable data can unleash. Google, Yahoo!, Microsoft, Yandex, and Facebook all use a variety of machine-readable data technologies that have only recently been standardized. These technologies allow for faster, more accurate, and richer search results. They are also the driving technology for software systems like Watson. These systems exist because there are people plugging away at the hard problem of machine readable data in spite of cynicism directed at past failures. Those failures aren’t ashes, they’re the bedrock of tomorrow’s breakthroughs.

Instead of reinventing the everything (over and over again), let’s use what’s already there and what already works. In the case of linked data on the web, that’s html web pages with clickable links between them.

Microformats, Microdata, and RDFa do not work well for data-based Web services. Using Linked Data with data-based Web services is one of the primary reasons that JSON-LD was created.

For open standards, open license are a deal breaker. No license is more open than Creative Commons CC0 Public Domain + OWFa. (See also the Mozilla wiki about standards/license, for more.) There’s a growing list of standards that are already using CC0+OWFa.

I think there might be a typo here, but if not, I don't understand why open licenses are a deal breaker for open standards, especially things like the W3C FSA or the Creative Commons licenses we've published the JSON-LD spec under. Additionally, CC0 + OWFa might be neat. Shane's article was the first time that I had heard of OWFa, and I'd be a proponent for pushing it in the group if it granted more freedom to the people using and developing JSON-LD than the current set of agreements we have in place. After looking over the legal text of the OWFa, I can't see what CC0 + OWFa buys us over CC0 + W3C patent attribution. If someone would like to make these benefits clear, I could take a proposal to switch to CC0 + OWFa to the JSON-LD Community Group and see if there is interest in using that license in the future.

No process is more open than a publicly editable wiki.

A counter-point to publicly accessible forums

Publicly editable wikis are notorious for edit wars; they are not a panacea. Just because you have a wiki does not mean you have an open community. For example, the Microformats community was notorious for having a different class of unelected admins that would meet in San Francisco and make decisions about the operation of the community. This seemingly innocuous practice would creep its way into the culture and technical discussion on a regular basis, leading to community members being banned from time to time. Similarly, Wikipedia has had numerous issues with publicly editable wikis and the behavior of their admins.

Depending on how you define “open”, there are a number of processes that are far more open than a publicly editable wiki. For example, the JSON-LD specification development process is completely open to the public, based on meritocracy, and is consensus-driven. The mailing list is open. The bug tracker is open. We have weekly design teleconferences where all the audio is recorded and minuted. We have these teleconferences to this day and will continue to have them into the future because we make transparency a priority. JSON-LD, as far as I know, is the first such specification in the world developed where all the previously described operating guidelines are standard practice.

(Mailing lists are toxic.)

A community is as toxic as its organizational structure enables it to be. The JSON-LD community is based on meritocracy, consensus, and has operated in a very transparent manner since the beginning (open meetings, all calls are recorded and minuted, anyone can contribute to the spec, etc.). This has, unsurprisingly, resulted in a very pleasant and supportive community. That said, there is no perfect communication medium. They’re all lossy and they all have their benefits and drawbacks. Sometimes, when you combine multiple communication channels as a part of how your community operates, you get better outcomes.

Finally, for machine readable data, nothing has been more widely adopted by publishers and consumers than microformats. As of June 2012, microformats represents about 70% of all of the structured data on the web. And of that ~70%, the vast majority was h-card and xfn. (All RDFa is about 25% and microdata is a distant third.)

Microformats are good if all you need to do is publish your basic contact and social information on the Web. If you want to publish detailed product information, financial data, medical data, or address other more complex scenarios, Microformats won't help you. There have been no new Microformats released in the last 5 years, and the mailing list traffic has been almost non-existent for about as long. From what I can tell, most everyone has moved on to RDFa, Microdata, or JSON-LD.

There are a few people working on Microformats 2, but I haven't seen it provide anything that is not already provided by existing solutions, which have the added benefit of being W3C standards or being backed by major companies like Google, Facebook, Yahoo!, Microsoft, and Yandex.

Maybe it’s because of the ease of publishing microformats. Maybe it’s the open process for developing the standards. Maybe it’s because microformats don’t require any additions to HTML. (Both RDFa and microdata required the use of additional attributes or XML namespaces.) Whatever the reason, microformats has the most uptake. So, why do people keep trying to reinvent what microformats is already doing well?

People aren’t reinventing what Microformats are already doing well, they’re attempting to address problems that Microformats do not solve.

For example, one of the reasons that Google adopted JSON-LD is because markup was much easier in JSON-LD than it was in Microformats, as evidenced by the example below:

Back to JSON-LD. The “Simple Example” listed on the homepage is a person object representing John Lennon. His birthday and wife are also listed on the object.

          "@context": "",
          "@id": "",
          "name": "John Lennon",
          "born": "1940-10-09",
          "spouse": ""

I look at this and see what should have been HTML with microformats (h-card and xfn). This is actually a perfect use case for h-card and xfn: a person and their relationship to another person. Here’s how it could’ve been marked up instead.

        <div class="h-card">
          <a href="" class="u-url u-uid p-name">John Lennon</a>
          <time class="dt-bday" datetime="1940-10-09">October 9<sup>th</sup>, 1940</time>
          <a rel="spouse" href="">Cynthia Lennon</a>.

I’m willing to bet that most people familiar with JSON will find the JSON-LD markup far easier to understand and get right than the Microformats-based equivalent. In addition, sending the Microformats markup to a REST-based Web service would be very strange. Alternatively, sending the JSON-LD markup to a REST-based Web service would be far more natural for a modern day Web developer.

This HTML can be easily understood by machine parsers and humans parsers. Microformats 2 parsers already exists for: JavaScript (in the browser), Node.js, PHP and Ruby. HTML + microformats2 means that machines can read your linked data from your website and so can humans. It means that you don’t need an “API” that is something other than your website.

You have been able to do the same thing, and much more, using RDFa and Microdata for far longer (since 2006) than you have been able to do it in Microformats 2. Let’s be clear, there is no significant advantage to using Microformats 2 over RDFa or Microdata. In fact, there are a number of disadvantages for using Microformats 2 at this point, like little to no support from the search companies, very little software tooling, and an anemic community (of which I am a member) for starters. Additionally, HTML + Microformats 2 does not address the Web service API issue at all.

Please don’t waste time and energy reinventing all of the wheels. Instead, please use what already works and what works the webby way.

Do not miss the irony of this statement. RDFa has been doing what Microformats 2 does today since 2006, and it's a Web standard. Even if you don't like RDFa 1.0, RDFa 1.1, RDFa Lite 1.1, and Microdata all came before Microformats 2. To assert that wheels should not be reinvented while championing Microformats 2, which was created long after a number of well-established solutions already existed, is quite a strange position to take.


JSON-LD was created by people that have been directly involved in the Linked Data, lowercase semantic web, uppercase Semantic Web, Microformats, Microdata, and RDFa work. It has proven to be useful to them. There are a number of very large technology companies that have adopted JSON-LD, further underscoring its utility. Expect more big announcements in the next six months. The JSON-LD specifications have been developed in a radically open and transparent way, and the document copyright and licensing provisions are equally open. I hope that this blog post has helped clarify most of the misinformed opinion in Shane Becker's blog post.

Most importantly, cynicism will not solve the problems that we face on the Web today. Hard work will, and there are very few communities that I know of that work harder and more harmoniously than the excellent volunteers in the JSON-LD community.

If you would like to learn more about Linked Data, a good video introduction exists. If you want to learn more about JSON-LD, there is a good video introduction to that as well.

Linked Data Signatures vs. Javascript Object Signing and Encryption

The Web Payments Community Group at the World Wide Web Consortium (W3C) is currently performing a thorough analysis on the MozPay API. The first part of the analysis examined the contents of the payment messages. This is the second part of the analysis, which will focus on whether the use of the Javascript Object Signing and Encryption (JOSE) group's solutions to achieve message security is adequate, or if the Web Payments group's solutions should be used instead.

The Contenders

The IETF JOSE Working Group is actively standardizing the following specifications for the purposes of adding message security to JSON:

JSON Web Algorithms (JWA)
Details the cryptographic algorithms and identifiers that are meant to be used with the JSON Web Signature (JWS), JSON Web Encryption (JWE), JSON Web Token (JWT), and JSON Web Key (JWK) specifications. For example, when specifying an encryption algorithm, a JSON key/value pair that has alg as the key may have HS256 as the value, which means HMAC using the SHA-256 hash algorithm.
JSON Web Key (JWK)
Details a data structure that represents one or more cryptographic keys. If you need to express one of the many types of cryptographic keys in use today, this specification details how you do that in a standard way.
JSON Web Token (JWT)
Defines a way of representing claims such as “Bob was born on November 15th, 1984”. These claims are digitally signed and/or encrypted using either the JSON Web Signature (JWS) or JSON Web Encryption (JWE) specifications.
JSON Web Encryption (JWE)
Defines a way to express encrypted content using JSON-based data structures. Basically, if you want to encrypt JSON data so that only the intended receiver can read the data, this specification tells you how to do it in an interoperable way.
JSON Web Signature (JWS)
Defines a way to digitally sign JSON data structures. If your application needs to be able to verify the creator of a JSON data structure, you can use this specification to do so.
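To make those definitions concrete, here is a minimal sketch in standard-library Python (mine, not reference code from the specs) that builds an HS256-signed token: the claims set is a JWT, JWS supplies the signing structure, and the JWA identifier HS256 says "HMAC with SHA-256". The shared secret is a stand-in value.

  import base64, hashlib, hmac, json

  def b64url(data: bytes) -> str:
      """Base64url-encode without padding, as the JOSE specs require."""
      return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

  def sign_hs256(claims: dict, secret: bytes) -> str:
      header = {"alg": "HS256", "typ": "JWT"}          # JWA algorithm identifier
      signing_input = b64url(json.dumps(header).encode()) + "." + \
                      b64url(json.dumps(claims).encode())
      tag = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
      return signing_input + "." + b64url(tag)         # JWS compact serialization

  print(sign_hs256({"iss": "joe", "exp": 1300819380}, b"not-a-real-secret"))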

The W3C Web Payments group is actively standardizing a similar specification for the purpose of adding message security to JSON messages:

Linked Data Signatures (code named: HTTP Keys)
Describes a simple, decentralized security infrastructure for the Web based on JSON, Linked Data, and public key cryptography. This system enables Web applications to establish identities for agents on the Web, associate security credentials with those identities, and then use those security credentials to send and receive messages that are both encrypted and verifiable via digital signatures.

Both groups are relying on technology that has existed and been used for over a decade to achieve secure communications on the Internet (symmetric and asymmetric cryptography, public key infrastructure, X509 certificates, etc.). The key differences between the two have to do more with flexibility, implementation complexity, and how the data is published on the Web and used between systems.

Basic Differences

In general, the JOSE group is attempting to create a flexible/generalized way of expressing cryptography parameters in JSON. They are then using that information and encrypting or signing specific data (called claims in the specifications).

The Web Payments group’s specification achieves the same thing, but without trying to be as generalized as the JOSE group’s work. Flexibility and generalization tend to 1) make the ecosystem more complex than it needs to be for 95% of the use cases, 2) make implementations harder to security audit, and 3) make it more difficult to achieve interoperability between all implementations. The Linked Data Signatures specification attempts to outline a single best practice that will work for 95% of the applications out there. The 5% of Web applications that need to do more than the Linked Data Signatures spec can use the JOSE specifications. The Linked Data Signatures specification is also more Web-y. The more Web-y nature of the spec gives us a number of benefits, such as giving us a Web-scale public key infrastructure as a pleasant side-effect, that we will get into below.

JSON-LD Advantages over JSON

Fundamentally, the Linked Data Signatures specification relies on the Web and Linked Data to remove some of the complexity that exists in the JOSE specs while also achieving greater flexibility from a data model perspective. Specifically, the Linked Data Signatures specification utilizes Linked Data via a new standards-track technology called JSON-LD to allow anyone to build on top of the core protocol in a decentralized way. JSON-LD data is fundamentally more Web-y than JSON data. Here are the benefits of using JSON-LD over regular JSON:

  • A universal identifier mechanism for JSON objects via the use of URLs.
  • A way to disambiguate JSON keys shared among different JSON documents by mapping them to URLs via a context.
  • A standard mechanism by which a value in a JSON object may refer to a JSON object in a different document or site on the Web.
  • A way to associate datatypes with values such as dates and times.
  • The ability to annotate strings with their language. For example, the word ‘chat’ means something different in English and French and it helps to know which language was used when expressing the text.
  • A facility to express one or more directed graphs, such as a social network, in a single document. Graphs are the native data structure of the Web.
  • A standard way to map external JSON application data to your application data domain.
  • A deterministic way to generate a hash on JSON data, which is helpful when attempting to figure out if two data sources are expressing the same information.
  • A standard way to digitally sign JSON data.
  • A deterministic way to merge JSON data from multiple data sources.

Plain old JSON, while incredibly useful, does not allow you to do the things mentioned above in a standard way. There is a valid argument that applications may not need this amount of flexibility, and for those applications, JSON-LD does not require any of the features above to be used and does not require the JSON data to be modified in any way. So people that want to remain in the plain ol' JSON bucket can do so without the need to jump into the JSON-LD bucket with both feet.
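As one small, hypothetical example of the "map external JSON application data to your application data domain" point above, the pyld processor can compact somebody else's URL-keyed data into the terms your own application already uses (the FOAF URLs here are illustrative choices of mine):

  from pyld import jsonld

  # Data as another application might publish it, with full URLs as keys.
  external = {
      "http://xmlns.com/foaf/0.1/name": "Manu Sporny",
      "http://xmlns.com/foaf/0.1/homepage": {"@id": "http://manu.sporny.org/"}
  }

  # Your application's context maps those URLs onto the keys it already uses.
  my_context = {
      "name": "http://xmlns.com/foaf/0.1/name",
      "homepage": {"@id": "http://xmlns.com/foaf/0.1/homepage", "@type": "@id"}
  }

  print(jsonld.compact(external, my_context))
  # {'@context': {...}, 'name': 'Manu Sporny', 'homepage': 'http://manu.sporny.org/'}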

JSON Web Algorithms vs. Linked Data Signatures

The JSON Web Algorithms specification details the cryptographic algorithms and identifiers that are meant to be used with the JSON Web Signature (JWS), JSON Web Encryption (JWE), JSON Web Token (JWT), and JSON Web Key (JWK) specifications. For example, when specifying an encryption algorithm, a JSON key/value pair that has alg as the key may have HS256 as the value, which means HMAC using the SHA-256 hash algorithm. The specification is 70 pages long and is effectively just a collection of what values are allowed for each key used in JOSE-based JSON documents. The design approach taken for the JOSE specifications requires that such a document exists.

The Linked Data Signatures specification takes a different approach. Rather than declare all of the popular algorithms and cryptography schemes in use today, it defines just one digital signature scheme (RSA encryption with a SHA-256 hashing scheme), one encryption scheme (128-bit AES with cipher block chaining), and one way of expressing keys (as PEM-formatted data). If placed into a single specification, like the JWA spec, it would be just a few pages long (really, just 1 page of actual content).

The most common argument against the Linked Data Signatures spec, with respect to the JWA specification, is that it lacks the same amount of cryptographic algorithm agility that the JWA specification provides. While this may seem like a valid argument on the surface, keep in mind that the core algorithms used by the Linked Data Signatures specification can be changed at any point to any other set of algorithms. So, the specification achieves algorithm agility while greatly reducing the need for a large 70-page specification detailing the allowable values for the various cryptographic algorithms. The other benefit is that since the cryptography parameters are outlined in a Linked Data vocabulary, instead of a process-heavy specification, they can be added to at any point as long as there is community consensus. Note that while the vocabulary can be added to, thus providing algorithm agility if a particular cryptography scheme is weakened or broken, already defined cryptography schemes in the vocabulary must not be changed once they become widely used, to ensure that production deployments that use the older mechanism aren't broken.
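A sketch of what that looks like from an implementer's point of view (the lookup table itself is hypothetical, but the algorithm choices in it are the ones listed above): the message's @type names an entire suite, so an implementation can hard-code the handful of suites it supports, and new suites arrive as new vocabulary terms rather than as new per-message parameters.

  # Hypothetical suite table an implementation might hard-code; the @type on an
  # incoming message selects every algorithm choice at once.
  SUITES = {
      "GraphSignature2012":   {"signature": "RSA with SHA-256", "key_format": "PEM"},
      "EncryptedMessage2012": {"cipher": "AES-128-CBC", "key_transport": "RSA",
                               "key_format": "PEM"},
  }

  def suite_for(message: dict) -> dict:
      try:
          return SUITES[message["@type"]]
      except KeyError:
          raise ValueError("Unsupported cryptography suite: %r" % message.get("@type"))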

Providing just one way, the best practice at the time, to do digital signatures, encryption, and key publishing reduces implementation complexity. Reducing implementation complexity makes it easier to perform security audits on implementations. Reducing implementation complexity also helps ensure better interoperability and more software library implementations, as the barrier to creating a fully conforming implementation is greatly reduced.

The Web Payments group believes that digital signature and encryption schemes will have to be updated every 5-7 years. It is better to delay the decision to switch to another primary algorithm for as long as possible (and as long as it is safe to do so). Delaying the cryptographic algorithm decision ensures that the group will be able to make a more educated decision than attempting to predict which cryptographic algorithms may be the successors to currently deployed algorithms.

Bottom line: The Linked Data Signatures specification utilizes a much simpler approach than the JWA specification while supporting the same level of algorithm agility.

JSON Web Key vs. Linked Data Signatures

The JSON Web Key (JWK) specification details a data structure that is capable of representing one or more cryptographic keys. If you need to express one of the many types of cryptographic keys in use today, JWK details how you do that in a standard way. A typical RSA public key looks like the following using the JWK specification:

  "keys": [{
    "n": "0vx7agoe ... DKgw",

A similar RSA public key looks like the following using the Linked Data Signatures specification:

  "@context": "",
  "@id": "",
  "@type": "Key",
  "owner": "",
  "publicKeyPem": "-----BEGIN PUBLIC KEY-----\nMIIBG0BA...OClDQAB\n-----END PUBLIC KEY-----\n"

There are a number of differences between the two key formats. Specifically:

  1. The JWK format expresses key information by specifying the key parameters directly. The Linked Data Signatures format places all of the key parameters into a PEM-encoded blob. This approach was taken because it is easier for developers to use the PEM data without introducing errors. Most Web developers do not understand what variables like dq (the second factor Chinese Remainder Theorem exponent) or d (the Elliptic Curve private key parameter) are, so they are more likely to transport and publish that sort of data correctly when all of the parameters live in an opaque blob with a clear beginning and end (-----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY-----).
  2. In the general case, the Linked Data Signatures key format assigns URL identifiers to keys and publishes them on the Web as JSON-LD, and optionally as RDFa. Public key information is therefore discoverable and both human- and machine-readable by default, so all of the key parameters can be read from the Web (a sketch of this discovery step follows the list). The JWK mechanism does assign an ID to keys, but does not require that keys be published to the Web in order to be used in message exchanges. The JWK specification could be extended to enable this, but by default it doesn't provide this functionality.
  3. The Linked Data Signatures format is also capable of specifying an identity that owns the key, which allows a key to be tied to an identity and that identity to be used for things like access control to Web resources and REST APIs. The JWK format has no such mechanism outlined in the specification.
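A minimal sketch of that discovery step, assuming the key document is served as JSON-LD at its URL identifier; the requests and cryptography libraries and the Accept header are my choices, not requirements of the specification:

  import requests
  from cryptography.hazmat.primitives.serialization import load_pem_public_key

  def fetch_public_key(key_id: str):
      """Dereference a key's URL identifier and load its PEM-encoded public key."""
      key_doc = requests.get(key_id, headers={"Accept": "application/ld+json"}).json()
      # The PEM blob goes straight into an off-the-shelf crypto library; the
      # developer never touches raw RSA parameters like n, e, d, or dq.
      return load_pem_public_key(key_doc["publicKeyPem"].encode("ascii"))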

Bottom line: The Linked Data Signatures specification provides four major advantages over the JWK format: 1) the key information is expressed at a higher level, which makes it easier to work with for Web developers, 2) it allows key information to be discovered by dereferencing the key ID, 3) the key information can be published (and extended) in a variety of Linked Data formats, and 4) it provides the ability to assign ownership information to keys.

JSON Web Tokens vs. Linked Data Signatures

The JSON Web Tokens (JWT) specification defines a way of representing claims such as “Bob was born on November 15th, 1984”. These claims are digitally signed and/or encrypted using either the JSON Web Signature (JWS) or JSON Web Encryption (JWE) specifications. Here is an example of a JWT document:

  "iss": "joe",
  "exp": 1300819380,
  "": true

JWT documents contain keys that are public, such as iss and exp above, and keys that are private (which could conflict with keys from the JWT specification). The data format is fairly free-form, meaning that any data can be placed inside a JWT Claims Set like the one above.

Since the Linked Data Signatures specification utilizes JSON-LD for its data expression mechanism, it takes a fundamentally different approach. There are no headers or claims sets in the Linked Data Signatures specification, just data. For example, the data below is effectively a JWT claims set expressed in JSON-LD:

  "@context": "",
  "@type": "Person",
  "name": "Manu Sporny",
  "gender": "male",
  "homepage": ""

Note that there are no keywords specific to the Linked Data Signatures specification, just keys that are mapped to URLs (to prevent collisions) and data. In JSON-LD, these keys and data are machine-interpretable in a standards-compliant manner (unlike JWT data), and can be merged with other data sources without the danger of data being overwritten or colliding with other application data.

Bottom line: The Linked Data Signatures specification's use of a native Linked Data format removes the requirement for a specification like JWT. As far as the Linked Data Signatures specification is concerned, there is just data, which you can then digitally sign and encrypt. This makes the data easier to work with for Web developers, as they can continue to use their application data as-is instead of attempting to restructure it into a JWT.

JSON Web Encryption vs. Linked Data Signatures

The JSON Web Encryption (JWE) specification defines a way to express encrypted content using JSON-based data structures. Basically, if you want to encrypt JSON data so that only the intended receiver can read the data, this specification tells you how to do it in an interoperable way. A JWE-encrypted message looks like this:

  "protected": "eyJlbmMiOiJBMTI4Q0JDLUhTMjU2In0",
  "unprotected": {"jku": ""},
  "recipients": [{
    "header": {
      "encrypted_key": "UGhIOgu ... MR4gp_A"
  "iv": "AxY8DCtDaGlsbGljb3RoZQ",
  "ciphertext": "KDlTtXchhZTGufMYmOYGS4HffxPSUrfmqCHXaI9wOGY",
  "tag": "Mz-VPPyU4RlcuYv1IwIvzw"

To decrypt this information, an application would retrieve the private key identified by recipients[0].header and use it to decrypt the encrypted_key. It would then decode the protected header to discover which content encryption algorithm was used, and finally combine the decrypted encrypted_key, the iv, that algorithm, and the ciphertext to retrieve the original message.

For comparison purposes, a Linked Data Signatures encrypted message looks like this:

  "@context": "",
  "@type": "EncryptedMessage2012",
  "data": "VTJGc2RH ... Fb009Cg==",
  "encryptionKey": "uATte ... HExjXQE=",
  "iv": "vcDU1eWTy8vVGhNOszREhSblFVqVnGpBUm0zMTRmcWtMrRX==",
  "publicKey": ""

To decrypt this information, an application would use the private key associated with the publicKey to decrypt the encryptionKey and iv. It would then use the decrypted encryptionKey and iv to decrypt the value in data, retrieving the original message as a result.
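Here is a sketch of that flow in Python with the cryptography library. The RSA padding (PKCS#1 v1.5) and the PKCS#7 block padding are assumptions on my part rather than details quoted from the specification; the field names follow the example above.

  from base64 import b64decode
  from cryptography.hazmat.primitives import padding as sym_padding
  from cryptography.hazmat.primitives.asymmetric import padding as asym_padding
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
  from cryptography.hazmat.primitives.serialization import load_pem_private_key

  def decrypt_message(msg: dict, private_key_pem: bytes) -> bytes:
      private_key = load_pem_private_key(private_key_pem, password=None)
      # Unwrap the symmetric key and IV with the key pair named by "publicKey".
      key = private_key.decrypt(b64decode(msg["encryptionKey"]), asym_padding.PKCS1v15())
      iv = private_key.decrypt(b64decode(msg["iv"]), asym_padding.PKCS1v15())
      # AES-128-CBC decrypt the payload, then strip the block padding.
      decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
      padded = decryptor.update(b64decode(msg["data"])) + decryptor.finalize()
      unpadder = sym_padding.PKCS7(128).unpadder()
      return unpadder.update(padded) + unpadder.finalize()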

The Linked Data Signatures encryption protocol is simpler than the JWE protocol for three major reasons:

  1. The @type of the message, EncryptedMessage2012, encapsulates all of the cryptographic algorithm information in a machine-readable way (that can also be hard-coded in implementations). The JWE specification utilizes the protected field to express the same sort of information, which is allowed to get far more complicated than the Linked Data Signatures equivalent, leading to more complexity.
  2. Key information is expressed in one entry, the publicKey entry, which is a link to a machine-readable document that can express not only the public key information, but who owns the key, the name of the key, creation and revocation dates for the key, as well as a number of other Linked Data values that result in a full-fledged Web-based PKI system. Not only is Linked Data Signatures encryption simpler than JWE, but it also enables many more types of extensibility.
  3. The key data is expressed in a PEM-encoded format, a base-64 encoded blob of information with a clear beginning and end (-----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY-----). As noted earlier, this approach was taken because it is easier for developers to transport and publish the data without introducing errors than it is to correctly handle raw parameters like dq (the second factor Chinese Remainder Theorem exponent) or d (the Elliptic Curve private key parameter).

The rest of the entries in the JSON are typically required for the encryption method selected to secure the message. There is not a great deal of difference between the two specifications when it comes to the parameters that are needed for the encryption algorithm.

Bottom line: The major difference between the Linked Data Signatures and JWE specification has to do with how the encryption parameters are specified as well as how many of them there can be. The Linked Data Signatures specification expresses only one encryption mechanism and outlines the algorithms and keys external to the message, which leads to a reduction in complexity. The JWE specification allows many more types of encryption schemes to be used, at the expense of added complexity.

JSON Web Signatures vs. Linked Data Signatures

The JSON Web Signatures (JWS) specification defines a way to digitally sign JSON data structures. If your application needs to be able to verify the creator of a JSON data structure, you can use this specification to do so. A JWS digital signature looks like the following:

  "payload": "eyJpc ... VlfQ",
    "header": {
    "signature": "cC4hi ... 77Rw"

For the purposes of comparison, a Linked Data Signatures message and signature looks like the following:

  "@context": ["", ""]
  "@type": "Person",
  "name": "Manu Sporny",
  "homepage": "",
    "@type": "GraphSignature2012",
    "creator": "",
    "created": "2013-08-04T17:39:53Z",
    "signatureValue": "OGQzN ... IyZTk="

There are a number of stark differences between the two specifications when it comes to digital signatures:

  1. The Linked Data Signatures specification does not need to base-64 encode the payload being signed. This makes it easier for a developer to see (and work with) the data that was digitally signed. Debugging signed messages is also simplified as special tools to decode the payload are unnecessary.
  2. The Linked Data Signatures specification does not require any header parameters for the payload, which reduces the number of things that can go wrong when verifying digitally signed messages. One could argue that this also reduces flexibility. The counter-argument is that different signature schemes can always be switched in by just changing the @type of the signature.
  3. The signer’s public key is available via a URL. This means that, in general, all Linked Data Signatures signatures can be verified by dereferencing the creator URL and utilizing the published key data to verify the signature.
  4. The Linked Data Signatures specification depends on a normalization algorithm that is applied to the message. This algorithm is non-trivial, but is typically hidden behind a JSON-LD library's .normalize() method call (a sketch of the verification flow appears after this list). JWS does not require data normalization. The trade-off is simplicity at the expense of requiring your data to always be encapsulated in the message. For example, the Linked Data Signatures specification is capable of pointing to a digital signature expressed in RDFa on a website using a URL. An application can then dereference that URL, convert the data to JSON-LD, and verify the digital signature. This mechanism is useful, for example, when you want to publish items for sale along with their prices on a Web page in a machine-readable way. This sort of use case is not achievable with the JWS specification, where all data is required to be in the message. In other words, Linked Data Signatures performs a signature over information that could exist anywhere on the Web, whereas the JWS specification performs a signature over a string of text in a message.
  5. The JWS mechanism enables HMAC-based signatures while the Linked Data Signatures mechanism avoids the use of HMAC altogether, taking the position that shared secrets are typically a bad practice.
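
As a concrete illustration of point 1, here is a minimal Node.js sketch of the extra decoding step a developer has to perform just to look at a JWS payload. The payload string above is truncated, so this is illustrative only:

  // Inspecting a JWS payload: base64url-decode it, then parse the JSON.
  // (Illustrative only; the payload value shown above is truncated.)
  const payloadB64url = "eyJpc ... VlfQ"; // the "payload" member of the JWS
  const base64 = payloadB64url.replace(/-/g, "+").replace(/_/g, "/");
  const json = Buffer.from(base64, "base64").toString("utf8");
  console.log(JSON.parse(json)); // the claims that were actually signed

With Linked Data Signatures, the signed data is the JSON you are already looking at, so no equivalent step exists.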

Bottom line: The Linked Data Signatures specification does not need to encode its payloads, but does require a rather complex normalization algorithm. It supports discovery of signature key data so that signatures can be verified using standard Web protocols. The JWS specification is more flexible from an algorithmic standpoint and simpler from a signature verification standpoint. The downside is that the data being signed must come from the message itself and can’t come from an external Linked Data source, like an HTML+RDFa web page listing items for sale.
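
To give a feel for where that normalization complexity lives in practice, here is a hedged sketch of producing a Linked Data Signature with the jsonld.js .normalize() call. The key handling, option names, and signature suite details are assumptions made for illustration, not a recipe taken from the specification:

  // A minimal Linked Data Signature sketch (assumes a recent jsonld.js and
  // Node.js; the RSA key, creator URL, and options are illustrative).
  const jsonld = require('jsonld');
  const crypto = require('crypto');

  async function signDocument(doc, privateKeyPem, creatorUrl) {
    // Normalize (canonicalize) the document so that semantically identical
    // JSON-LD always produces the same string of N-Quads to sign.
    const canonical = await jsonld.normalize(doc, {
      algorithm: 'URDNA2015',
      format: 'application/n-quads'
    });
    const signer = crypto.createSign('RSA-SHA256');
    signer.update(canonical);
    return {
      ...doc,
      signature: {
        '@type': 'GraphSignature2012',
        creator: creatorUrl, // URL where the signer's public key is published
        created: new Date().toISOString(),
        signatureValue: signer.sign(privateKeyPem, 'base64')
      }
    };
  }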


The Linked Data Signatures and JOSE designs, while attempting to achieve the same basic goals, deviate in the approaches taken to accomplish those goals. The Linked Data Signatures specification leverages more of the Web with its use of a Linked Data format and URLs for identifying and verifying identity and keys. It also attempts to encapsulate a single best practice that will work for the vast majority of Web applications in use today. The JOSE specifications are more flexible in the type of cryptographic algorithms that can be used which results in more low-level primitives used in the protocol, increasing complexity for developers that must create interoperable JOSE-based applications.

From a specification size standpoint, the JOSE specs weigh in at 225 pages, while the Linked Data Signatures specification weighs in at around 20 pages. Page count is rarely a good way to compare specifications, and it doesn’t always result in an apples-to-apples comparison. It does, however, give a general idea of the amount of text required to explain the details of each approach, and thus a ballpark idea of the complexity associated with each specification. Like all specifications, picking one depends on the use cases that an application is attempting to support. The goal with the Linked Data Signatures specification is that it will be good enough for 95% of Web developers out there; for the remaining 5%, there is the JOSE stack.

[Editor’s Note: The original text of this blog post contained the phrase “Secure Messaging”, which has since been rebranded to “Linked Data Signatures“.]

Technical Analysis of 2012 MozPay API Message Format

The W3C Web Payments group is currently analyzing a new API for performing payments via web browsers and other devices connected to the web. This blog post is a technical analysis of the MozPay API with a specific focus on the payment protocol and its use of JOSE (JSON Object Signing and Encryption). The first part of the analysis takes the approach of examining the data structures used today in the MozPay API and compares them against what is possible via PaySwarm. The second part of the analysis examines the use of JOSE to achieve the use case and security requirements of the MozPay API and compares the solution to JSON-LD, which is the mechanism used to achieve the use case and security requirements of the PaySwarm specification.

Before we start, it’s useful to have an example of what the current MozPay payment initiation message looks like. This message is generated by a MozPay Payment Provider and given to the browser to initiate a native purchase process:

  "aud": "",
  "typ": "mozilla/payments/pay/v1",
  "iat": 1337357297,
  "exp": 1337360897,
  "request": {
    "id": "915c07fc-87df-46e5-9513-45cb6e504e39",
    "pricePoint": 1,
    "name": "Magical Unicorn",
    "description": "Adventure Game item",
    "icons": {
      "64": "",
      "128": ""
    "productData": "user_id=1234&my_session_id=XYZ",
    "postbackURL": "",
    "chargebackURL": ""

The message is effectively a JSON Web Token. I say effectively because it seems like it breaks the JWT spec in subtle ways, but it may be that I’m misreading the JWT spec.

There are a number of issues with the message that we’ve had to deal with when creating the set of PaySwarm specifications. It’s important that we call those issues out first to get an understanding of the basic concerns with the MozPay API as it stands today. The comments below use the JWT code above as a reference point.

Unnecessarily Cryptic JSON Keys

  "aud": "",
  "typ": "mozilla/payments/pay/v1",
  "iat": 1337357297,
  "exp": 1337360897,

This is more of an issue with the JOSE specs than it is with the MozPay API. I can’t think of a good argument for shortening things like ‘issuer’ to ‘iss’ and ‘type’ to ‘typ’ (seriously :), the ‘e’ was too much?). It comes off as 1980s protocol design, trying to save bits on the wire. Making code less readable by trying to save characters in a human-readable message format works against the notion that the format should be readable by a human. I had to look up what iss, aud, iat, and exp meant. The only reason that I could come up with for using such terse entries was that the JOSE designers were attempting to avoid conflicts with existing data in JWT claims objects. If this was the case, they should have used a prefix like “@” or “$”, or placed the data in a container value associated with a key like ‘claims’.

PaySwarm always attempts to use terminology that doesn’t require you to go and look at the specification to figure out basic things. For example, it uses creator for iss (issuer), validFrom for iat (issued at), and validUntil for exp (expire time).



The MozPay API specification does not require the APPLICATION_KEY to be a URL. Since it’s not a URL, it’s not discoverable. The application key is also specific to each Marketplace, which means that one Marketplace could use a UUID, another could use a URL, and so on. If the system is intended to be decentralized and interoperable, the APPLICATION_KEY should either be dereferenceable on the public Web without coordination with any particular entity, or a format for the key should be outlined in the specification.

All identities and keys used in digital signatures in PaySwarm use URLs for the identifiers that must contain key information in some sort of machine-readable format (RDFa and JSON-LD, for now). This means that 1) they’re Web-native, 2) they can be dereferenced, and 3) when they’re dereferenced, a client can extract useful data from the document retrieved.

Audience

  "aud": "",

It’s not clear what the aud parameter is used for in the MozPay API, other than to identify the marketplace.

Issued At and Expiration Time

  "iat": 1337357297,
  "exp": 1337360897,

The iat (issued at) and exp (expiration time) values are encoded in the number of seconds since January 1st, 1970. These are not very human readable and make debugging issues with purchases more difficult than they need to be.

PaySwarm uses the W3C Date/Time format, which is a human-readable string format that is also easy for machines to process. For example, November 5th, 2013 at 1:15:30 PM (Zulu / Universal Time) is encoded as: 2013-11-05T13:15:30Z.
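
As a quick illustration (Node.js assumed), converting one of the epoch values above into the W3C date/time form is trivial for a machine, but it is a step a human debugging a purchase shouldn’t have to do in their head:

  // Converting the JWT-style epoch seconds into a W3C date/time string.
  const iat = 1337357297;                                // seconds since 1970-01-01
  const validFrom = new Date(iat * 1000).toISOString();
  console.log(validFrom);                                // "2012-05-18T16:08:17.000Z"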

The Request

  "request": {
    "id": "915c07fc-87df-46e5-9513-45cb6e504e39",
    "pricePoint": 1,
    "name": "Magical Unicorn",

This object in the MozPay API is a description of the thing that is to be sold. Technically, it’s not really a request; the outer object is the request. There is a bit of a conflation of terminology here that should probably be fixed at some point.

In PaySwarm, the contents of the MozPay request value is called an Asset. An asset is a description of the thing that is to be sold.

Request ID

  "request": {
    "id": "915c07fc-87df-46e5-9513-45cb6e504e39",

The MozPay API encodes the request ID as a universally unique identifier (UUID). The major downside to this approach is that other applications can’t find the information on the Web to 1) discover more about the item being sold, 2) discuss the item being sold by referring to it by a universal ID, 3) feed it to a system that can read data published at the identifier address, and 4) index it for the purposes of searching.

The PaySwarm specifications use a URL as the identifier for assets and publish machine-readable data at the asset location so that other systems can discover more information about the item being sold, refer to it in discussions (like reviews of the item), start a purchase by referencing the URL, and index it for use in price-comparison and search engines.
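
A hedged sketch of what that buys you in practice; the asset URL is made up and the use of jsonld.js is an assumption, but the point is that any interested system can simply dereference the identifier:

  // Because the asset identifier is a URL, any client can fetch the
  // machine-readable JSON-LD published there and learn more about the item.
  const jsonld = require('jsonld');
  jsonld.expand('https://merchant.example/assets/magical-unicorn')
    .then(expanded => console.log(expanded));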

Price Point

  "request": {
    "pricePoint": 1,

The pricePoint for the item being sold is currently a whole number. This is problematic because prices are usually decimal numbers including a fraction and a currency.

PaySwarm publishes its pricing information in a currency-agnostic way that is compatible with all known monetary systems. Some of these systems include USD, EUR, JPY, RMB, Bitcoin, Brixton Pound, Bernal Bucks, Ven, and a variety of other alternative currencies. The amount is specified as a decimal with a fraction and a currency URL. A URL is used for the currency because PaySwarm allows arbitrary currencies to be created and managed outside of the PaySwarm system.
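
For illustration, a PaySwarm-style price might look like the following. The property names and the currency URL are assumptions used to show the shape of the data, not normative markup:

  // A decimal amount paired with a URL that identifies the currency, so that
  // community currencies can be defined outside of the PaySwarm system itself.
  const price = {
    "amount": "0.99",
    "currency": "https://currencies.example/bernal-bucks"
  };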

Icons

  "request": {
    "icons": {
      "64": "",
      "128": ""

Icon data is currently modeled in a developer-friendly way by indexing each icon by its square pixel size. This allows developers to access the data like so: icons.64 or icons.128. Values are image URLs, which is the right choice.

PaySwarm uses JSON-LD and can support this sort of data layout through a feature called data indexing. Another approach is to just have an array of objects for icons, which would allow us to include extended information about the icons. For example:

  "request": {
  "icon": [{size: 64, id: "", label: "Magical Unicorn"}, ...]

Product Data

  "request": {
    "productData": "user_id=1234&my_session_id=XYZ",

If the payment technology we’re working on is going to be useful to society at large, we have to allow richer descriptions of products. For example, model numbers, rich markup descriptions, pictures, ratings, colors, and licensing terms are all important parts of a product description. The value needs to be larger than a 256 byte string and needs to support decentralized extensibility. For example, Home Depot should be able to list UPC numbers and internal reference numbers in the asset description and the payment protocol should preserve that extra information, placing it into digital receipts.

PaySwarm uses JSON-LD and thus supports decentralized extensibility for product data. This means that any vendor may express information about the asset in JSON-LD and it will be preserved in all digital contracts and digital receipts. This allows the asset and digital receipt format to be used as a platform that can be built on top of by innovative retailers. It also increases data fidelity by allowing far more detailed markup of asset information than what is currently allowed via the MozPay API.
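
As a sketch of what that extensibility looks like (the vendor prefix, vocabulary URL, and PaySwarm context URL below are illustrative), a vendor can mix its own terms into an asset and a conforming JSON-LD processor will carry them through to the digital receipt:

  // Vendor-specific terms ride along with the standard asset description.
  const asset = {
    "@context": [
      "https://w3id.org/payswarm/v1",               // assumed PaySwarm context URL
      { "hd": "https://vocab.homedepot.example/" }  // made-up vendor vocabulary
    ],
    "@type": "Asset",
    "title": "Magical Unicorn",
    "hd:upc": "043875001122",
    "hd:internalSku": "HD-8675309"
  };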

Postback URL

  "request": {
    "postbackURL": "",

The postback URL is a pretty universal concept among Web-based payment systems. The payment processor needs a URL endpoint that the result of the purchase can be sent to. The postback URL serves this purpose.

PaySwarm has a similar concept, but just lists it in the request URL as ‘callback’.

Chargeback URL

  "request": {
    "chargebackURL": ""

The chargeback URL is a URL endpoint that is called whenever a refund is issued for a purchased item. It’s not clear if the vendor has a say in whether or not this should be allowed for a particular item. For example, what happens when a purchase is performed for a physical good? Should chargebacks be easy to do for those sorts of items?

PaySwarm does not build chargebacks into the core protocol. It lets the merchant request the digital receipt of the sale to figure out if the sale has been invalidated. It seems like a good idea to have a notification mechanism built into the core protocol. We’ll need more discussion on this to figure out how to correctly handle vendor-approved refunds and customer-requested chargebacks.


There are a number of improvements that could be made to the basic MozPay API that would enable more use cases to be supported in the future while keeping the level of complexity close to what it currently is. The second part of this analysis will examine the JSON Object Signing and Encryption (JOSE) technology stack and determine whether there is a simpler solution that could be leveraged to meet the digital signature requirements set forth by the MozPay API.

[UPDATE: The second part of this analysis is now available]

Google adds JSON-LD support to Search and Google Now

Full disclosure: I’m one of the primary designers of JSON-LD and the Chair of the JSON-LD group at the World Wide Web Consortium.

Last week, Google announced support for JSON-LD markup in Gmail. Putting JSON-LD in front of 425 million people is a big validation of the technology.

Hot on the heels of last week’s announcement, Google has just announced additional JSON-LD support for two more of their core products! The first is their flagship product, Google Search. The second is their new intelligent personal assistant service called Google Now.

The addition of JSON-LD support to Google Search now allows you to do incredibly accurate personalized searches. For example, here’s a search for “my flights”:

and here’s an example for “my hotel reservation for next week”:

Web developers that mark up certain sorts of information as JSON-LD in the e-mails that they send to you can now enable new functionality in these core Google services. For example, using JSON-LD will make it really easy for you to manage flights, hotel bookings, reservations at restaurants, and events like concerts and movies from within Google’s ecosystem. It also makes it easy for services like Google Now to push a notification to your phone when your flight has been delayed:

Or, show your boarding pass on your mobile phone when you’ve arrived at the airport:

Or, let you know when you need to leave to make your reservation for a restaurant:

Google Search and Google Now can make these recommendations to you because the information that you received about these flights, boarding passes, hotels, reservations, and other events was marked up in JSON-LD format when it hit your Gmail inbox. The most exciting thing about all of this is that it’s just the beginning of what Linked Data can do for all of us. Over the next decade, Linked Data will be at the center of getting computing and the monotonous details of our everyday grind out of the way so that we can focus more on enjoying our lives.

If you want to dive deeper into this technology, Google’s page on schemas is a good place to start.

Google adds JSON-LD support to Gmail

Google announced support for JSON-LD markup in Gmail at Google I/O 2013. The design team behind JSON-LD is delighted by this announcement and applauds the Google engineers that integrated JSON-LD with Gmail. This blog post examines what this announcement means for Gmail customers as well as providing some suggestions to the Google Gmail engineers on how they could improve their JSON-LD markup.

JSON-LD enables the representation of Linked Data in JSON by describing a common JSON representation format for expressing graphs of information (see Google’s Knowledge Graph). It allows you to mix regular JSON data with Linked Data in a single JSON document. The format has already been adopted by large companies such as Google in their Gmail product and is now available to over 425 million people via currently live software products around the world.

The syntax is designed to not disturb already deployed systems running on JSON, but to provide a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Linked Data Web services, and to store Linked Data in JSON-based storage engines.
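
As a small sketch of that upgrade path (the context terms are illustrative), a JSON document that a system already produces becomes JSON-LD by adding a @context and changing nothing else:

  // Plain JSON that a deployed system already emits today.
  const plain = { "name": "Manu Sporny", "homepage": "http://manu.sporny.org/" };

  // The same data as JSON-LD: only a @context is added to give the keys
  // globally unambiguous meanings.
  const linked = {
    "@context": {
      "name": "http://schema.org/name",
      "homepage": { "@id": "http://schema.org/url", "@type": "@id" }
    },
    ...plain
  };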

For Google’s Gmail customers, this means that Gmail will now be able to recognize people, places, events and a variety of other Linked Data objects. You can then take actions on the Linked Data objects embedded in an e-mail. For example, if someone sends you an invitation to a party, you can do a single-click response on whether or not you’ll attend a party right from your inbox. Doing so will also create a reminder for the party in your calendar. There are other actions that you can perform on Linked Data objects as well, like approving an expense report, reviewing a restaurant, saving a coupon for a free online movie, making a flight, hotel, or restaurant reservation, and many other really cool things that you couldn’t do before from the inside of your inbox.

What Google Got Right and Wrong

Google followed the JSON-LD standard pretty closely, so the vast majority of the markup looks really great. However, there are four issues that the Google engineers will probably want to fix before pushing the technology out to developers.

Invalid Context URL

The first issue is a fairly major one. Google isn’t using the JSON-LD @context parameter correctly in any of their markup examples. It’s supposed to be a URL, but they’re using a text string instead. This means that their JSON-LD documents are unreadable by all of the conforming JSON-LD processors today. For example, Google does the following when declaring a context in JSON-LD:

  "@context": ""

When they should be doing this:

  "@context": ""

It’s a fairly simple change; just add “http://” to the beginning of the “schema.org” value. If Google doesn’t make this change, it’ll mean that JSON-LD processors will have to include a special hack to translate “schema.org” to “http://schema.org” just for this use case. I hope that this was just a simple oversight by the Google engineers that implemented these features and not something that was intentional.

Context isn’t Online

The second issue has to do with the JSON-LD Context for schema.org. There doesn’t seem to be a downloadable context for schema.org at the moment. Not having a Web-accessible JSON-LD context is bad because the context is the heart and soul of a JSON-LD document. If you don’t publish a JSON-LD context on the Web somewhere, applications won’t be able to resolve any of the Linked Data objects in the document.

The Google engineers could fix this fairly easily by providing a JSON-LD Context document when a web client requests a document of type “application/ld+json” from the schema.org URL. The JSON-LD community would be happy to help the Google engineers create such a document.
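
Such a context document would not need to be complicated. A hedged sketch of what a minimal one could look like (the term definitions are illustrative, not the actual schema.org context):

  // A minimal JSON-LD context that could be served from the schema.org URL
  // when a client requests "application/ld+json".
  const schemaOrgContext = {
    "@context": {
      "@vocab": "http://schema.org/",
      "url":   { "@type": "@id" },
      "image": { "@type": "@id" }
    }
  };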

Keyword Aliasing, FTW

The third issue is a minor usability issue with the markup. The Google help pages on the JSON-LD functionality use the @type keyword in JSON-LD to express the type of Linked Data object that is being expressed. The Google engineers that wrote this feature may not have been aware of the Keyword Aliasing feature in JSON-LD. That is, they could have just aliased @type to type. Doing so would mean that the Gmail developer documentation wouldn’t have to mention the “specialness” of the @type keyword.
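
For reference, a small sketch of keyword aliasing (the schema.org terms are used only for illustration); with the alias in the context, markup authors write type and never see the “@”:

  // Aliasing the @type keyword to a plain "type" key via the context.
  const doc = {
    "@context": {
      "type": "@type",
      "name": "http://schema.org/name"
    },
    "type": "http://schema.org/Person",
    "name": "Manu Sporny"
  };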

Use RDFa Lite

The fourth issue concerns the use of Microdata. JSON-LD was designed to work seamlessly with RDFa Lite 1.1; you can easily and losslessly convert data between the two markup formats. JSON-LD is compatible with Microdata, but pairing the two is a sub-optimal design choice. When JSON-LD data is converted to Microdata, information is lost due to data fidelity issues in Microdata. For example, there is no mechanism to specify that a value is a URL in Microdata.

RDFa Lite 1.1 does not suffer from these issues and has been proven to be a drop-in replacement for Microdata without any of the downsides that Microdata has. The designers of JSON-LD are the same designers behind RDFa Lite 1.1 and have extensive experience with Microdata. We specifically did not choose to pair JSON-LD with Microdata because it was a bad design choice for a number of reasons. I hope that the Google engineers will seek out advice from the JSON-LD and RDFa communities before finalizing the decision to use Microdata, as there are numerous downsides associated with that decision.


All in all, the Google engineers did a good job of implementing JSON-LD in Gmail. With a few small fixes to the Gmail documentation and code examples, they will be fully compliant with the JSON-LD specifications. The JSON-LD community is excited about this development and looks forward to working with Google to improve the recent release of JSON-LD for Gmail.

Identifiers in JSON-LD and RDF

TL;DR: This blog post argues that the extension of blank node identifiers in JSON-LD and RDF for the purposes of identifying predicates and naming graphs is important. It is important because it simplifies the usage of both technologies for developers. The post also provides a less-optimal solution if the RDF Working Group does not allow blank node identifiers for predicates and graph names in RDF 1.1.

We need identifiers as humans to convey complex messages. Identifiers let us refer to a certain thing by naming it in a particular way. Not only do humans need identifiers, but our computers need identifiers to refer to data in order to perform computations. It is no exaggeration to say that our very civilization depends on identifiers to manage the complexity of our daily lives, so it is no surprise that people spend a great deal of time thinking about how to identify things. This is especially true when we talk about the people that are building the software infrastructure for the Web.

The Web has a very special identifier called the Uniform Resource Locator (URL). It is probably one of the best known identifiers in the world, mostly because everybody that has been on the Web has used one. URLs are great identifiers because they are very specific. When I give you a URL to put into your Web browser, such as the link to this blog post, I can be assured that when you put the URL into your browser you will see what I see. URLs are globally scoped; they’re supposed to always take you to the same place.

There is another class of identifier on the Web that is not globally scoped and is only used within a document on the Web. In English, these identifiers are used when we refer to something as “that thing”, or “this widget”. We can really only use this sort of identifier within a particular context where the people participating in the conversation understand the context. Linguists call this concept deixis. “Thing” doesn’t always refer to the same subject, but based on the proper context, we can usually understand what is being identified. Our consciousness tags the “thing” that is being talked about with a tag of sorts and then refers to that thing using this pseudo-identifier. Most of this happens unconsciously (notice how your mind unconsciously tied the use of ‘this’ in this sentence to the correct concept?).

The take-away is that there are globally-scoped identifiers like URLs, and there are also locally-scoped identifiers that require a context in order to understand what they refer to.


In JSON, developers typically express data like this:

  "name": "Joe"

Note how that JSON object doesn’t have an identifier associated with it. JSON-LD creates a straight-forward way of giving that object an identifier:

  "@context": ...,
  "@id": "",
  "name": "Joe"

Both you and I can refer to that object using that identifier and be sure that we’re talking about the same thing. There are times when assigning a global identifier to every piece of data that we create is not desired. For example, it doesn’t make much sense to assign an identifier to a transient message that is a request to get a sensor reading. This is especially true if there are millions of these types of requests and we never want to refer to the request once it has been transmitted. This is why JSON-LD doesn’t force developers to assign an identifier to the objects that they express. The people that created the technology understand that not everything needs a global identifier.

Computers are less forgiving; they need identifiers for almost everything, but a great deal of that complexity can be hidden from developers. When an identifier becomes necessary in order to perform computations upon the data, the computer can usually auto-generate an identifier for the data.

RDF, Graphs, and Blank Node Identifiers

The Resource Description Framework (RDF) primarily uses an identifier called the Internationalized Resource Identifier (IRI). Where URLs can typically only express links in Western languages, an IRI can express links in almost every language in use today including Japanese, Tamil, Russian and Mandarin. RDF also defines a special type of identifier called a blank node identifier. This identifier is auto-generated and is locally scoped to the document. It’s an advanced concept, but is one that is pretty useful when you start dealing with transient data, where creating a global identifier goes beyond the intended usage of the data. An RDF-compatible program will step in and create blank node identifiers on your behalf, but only when necessary.

Both JSON-LD and RDF have the concept of a Statement, Graph, and a Dataset. A Statement consists of a subject, predicate, and an object (for example: “Dave likes cookies”). A Graph is a collection of Statements (for example: Graph A contains all the things that Dave said and Graph B contains all the things that Mary said). A Dataset is a collection of Graphs (for example: Dataset Z contains all of the things Dave and Mary said yesterday).

In JSON-LD, at present, you can use a blank node identifier for subjects, predicates, objects, and graphs. In RDF, you can only use blank node identifiers for subjects and objects. There are people, such as myself, in the RDF WG that think this is a mistake. There are people that think it’s fine. There are people that think it’s the best compromise that can be made at the moment. There is a wide field of varying opinions strewn between the various extremes.

The end result is that the current state of affairs has put us into a position where we may have to remove blank node identifier support for predicates and graphs from JSON-LD, which comes across as a fairly arbitrary limitation to those not familiar with the inner guts of RDF. Don’t get me wrong, I feel it’s a fairly arbitrary limitation. There are those in the RDF WG that don’t think it is, and that may prevent JSON-LD from being able to use what I believe is a very useful construct.

Document-local Identifiers for Predicates

Why do we need blank node identifiers for predicates in JSON-LD? Let’s go back to the first example in JSON to see why:

  "name": "Joe"

The JSON above is expressing the following Statement: “There exists a thing whose name is Joe.”

The subject is “thing” (aka: a blank node) which is legal in both JSON-LD and RDF. The predicate is “name”, which doesn’t map to an IRI. This is fine as far as the JSON-LD data model is concerned because “name”, which is local to the document, can be mapped to a blank node. RDF cannot model “name” because it has no way of stating that the predicate is local to the document since it doesn’t support blank nodes for predicates. Since the predicate doesn’t map to an IRI, it can’t be modeled in RDF. Finally, “Joe” is a string used to express the object and that works in both JSON-LD and RDF.

JSON-LD supports the use of blank nodes for predicates because there are some predicates, like every key used in JSON, that are local to the document. RDF does not support the use of blank nodes for predicates and therefore cannot properly model JSON.

Document-local Identifiers for Graphs

Why do we need blank node identifiers for graphs in JSON-LD? Let’s go back again to the first example in JSON:

  "name": "Joe"

The container of this statement is a Graph. Another way of writing this in JSON-LD is this:

  "@context": ...,
  "@graph": {
    "name": "Joe"

However, what happens when you have two graphs in JSON-LD, and neither one of them is the RDF default graph?

  "@context": ...,
  "@graph": [
      "@graph": {
        "name": "Joe"
      "@graph": {
        "name": "Susan"

In JSON-LD, at present, it is assumed that a blank node identifier may be used to name each graph above. Unfortunately, in RDF, the only thing that can be used to name a graph is an IRI, and a blank node identifier is not an IRI. This puts JSON-LD in an awkward position, either JSON-LD can:

  1. Require that developers name every graph with an IRI, which seems like a strange demand because developers don’t have to name all subjects and objects with an IRI, or
  2. JSON-LD can auto-generate a regular IRI for each predicate and graph name, which seems strange because blank node identifiers exist for this very purpose (not to mention this solution won’t work in all cases, more below), or
  3. JSON-LD can auto-generate a special IRI for each predicate and graph name, which would basically re-invent blank node identifiers.

The Problem

The problem surfaces when you try to convert a JSON-LD document to RDF. If the RDF Working Group doesn’t allow blank node identifiers for predicates and graphs, then what do you use to identify predicates and graphs that have blank node identifiers associated with them in the JSON-LD data model? This is a feature we do want to support because there are a number of important use cases that it enables. The use cases include:

  1. Blank node predicates allow JSON to be mapped directly to the JSON-LD and RDF data models.
  2. Blank node graph names allow developers to use graphs without explicitly naming them.
  3. Blank node graph names make the RDF Dataset Normalization algorithm simpler.
  4. Blank node graph names prevent the creation of a parallel mechanism to generate and manage blank node-like identifiers.

It’s easy to see the problem exposed when performing RDF Dataset Normalization, which we need to do in order to digitally sign information expressed in JSON-LD and RDF. The rest of this post will focus on this area, as it exposes the problems with not supporting blank node identifiers for predicates and graph names. In JSON-LD, the two-graph document above could be normalized to this NQuads (subject, predicate, object, graph) representation:

_:bnode0 _:name "Joe" _:graph1 .
_:bnode1 _:name "Susan" _:graph2 .

This is illegal in RDF since you can’t have a blank node identifier in the predicate or graph position. Even if we were to use an IRI in the predicate position, the problem (of not being able to normalize “un-labeled” JSON-LD graphs like the ones in the previous section) remains.

The Solutions

This section will cover the proposed solutions to the problem in order from least desirable to most desirable.

Don’t allow blank node identifiers for predicates and graph names

Doing this in JSON-LD ignores the point of contention. The same line of argumentation can be applied to RDF. The point is that by forcing developers to name graphs using IRIs, we’re forcing them to do something that they don’t have to do with subjects and objects. There is no technical reason that has been presented where the use of a blank node identifier in the predicate or graph position is unworkable. Telling developers that they must name graphs using IRIs will be surprising to them, because there is no reason that the software couldn’t just handle that case for them. Requiring developers to do things that a computer can handle for them automatically is anti-developer and will harm adoption in the long run.

Generate fragment identifiers for graph names

One solution is to generate fragment identifiers for graph names. This, coupled with the base IRI, would allow the data to be expressed legally in NQuads:

_:bnode0 <> "Joe" <> .
_:bnode1 <> "Susan" <> .

The above is legal RDF. The approach is problematic when you don’t have a base IRI, such as when JSON-LD is used as a messaging protocol between two systems. In that use case, you end up with something like this:

_:bnode0 <#name> "Joe" <#graph1> .
_:bnode1 <#name> "Susan" <#graph2> .

RDF requires absolute IRIs and so the document above is illegal from an RDF perspective. The other down-side is that you have to keep track of all fragment identifiers in the output and make sure that you don’t pick fragment identifiers that are used elsewhere in the document. This is fairly easy to do, but now you’re in the position of tracking and renaming both blank node identifiers and fragment IDs. Even if this approach worked, you’d be re-inventing the blank node identifier. This approach is unworkable for systems like PaySwarm that use transient JSON-LD messages across a REST API; there is no base IRI in this use case.

Skolemize to create identifiers for graph names

Another approach is skolemization, which is just a fancy way of saying: generate a unique IRI for the blank node when expressing it as RDF. The output would look something like this:

_:bnode0 <> "Joe" <> .
_:bnode1 <> "Susan" <> .

This would be just fine if there were only one application reading and consuming the data. However, when we are talking about RDF Dataset Normalization, there are cases where two applications must read and independently verify the representation of a particular IRI. One scenario that illustrates the problem fairly nicely is the blind verification scenario. In this scenario, two applications de-reference an IRI to fetch a JSON-LD document. Each application must perform RDF Dataset Normalization and generate a hash of that normalization to see if they retrieved the same data. Based on a strict reading of the skolemization rules, Application A would generate this:

_:bnode0 <> "Joe" <> .
_:bnode1 <> "Susan" <> .

and Application B would generate this:

_:bnode0 <> "Joe" <> .
_:bnode1 <> "Susan" <> .

Note how the two graphs would never hash to the same value because the Skolem IRIs are completely different. The RDF Dataset Normalization algorithm would have no way of knowing which IRIs are blank node stand-ins and which ones are legitimate IRIs. You could say that publishers are required to assign the skolemized IRIs to the data they publish, but that ignores the point of contention, which is that you don’t want to force developers to create identifiers for things that they don’t care to identify. You could argue that the publishing system could generate these IRIs, but then you’re still creating a global identifier for something that is specifically meant to be a document-scoped identifier.
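
To make the blind verification scenario concrete, here is a hedged sketch (a recent jsonld.js and Node.js are assumed) of what each application does. With the Skolem IRIs above, the normalized N-Quads differ, so the two hashes can never match:

  // Each application fetches the document, normalizes it, and hashes the
  // resulting N-Quads; the data matches only if the hashes are equal.
  const jsonld = require('jsonld');
  const crypto = require('crypto');

  async function hashDocument(url) {
    const doc = await jsonld.expand(url); // dereference and expand the JSON-LD
    const nquads = await jsonld.normalize(doc, {
      algorithm: 'URDNA2015',
      format: 'application/n-quads'
    });
    return crypto.createHash('sha256').update(nquads).digest('hex');
  }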

A more lax reading of the Skolemization language might allow one to create a special type of Skolem IRI that could be detected by the RDF Dataset Normalization algorithm. For example, let’s say that since JSON-LD is the one that is creating these IRIs before they go out to the RDF Dataset Normalization Algorithm, we use the tag IRI scheme. The output would look like this for Application A:

_:bnode0 <,2013:dsid:345> "Joe" <,2013:dsid:254> .
_:bnode1 <,2013:dsid:345> "Susan" <,2013:dsid:363> .

and this for Application B:

_:bnode0 <,2013:dsid:a> "Joe" <,2013:dsid:b> .
_:bnode1 <,2013:dsid:a> "Susan" <,2013:dsid:c> .

The solution still doesn’t work, but we could add another step to the RDF Dataset Normalization algorithm that would allow it to rename any IRI starting with the special tag:…,2013: prefix. Keep in mind that this is exactly the same thing that we do with blank nodes, and it’s effectively duplicating that functionality. The algorithm would allow us to generate something like this for both applications doing a blind verification.

_:bnode0 <,2013:dsid:predicate-1> "Joe" <,2013:dsid:graph-1> .
_:bnode1 <,2013:dsid:predicate-1> "Susan" <,2013:dsid:graph-2> .

This solution does violate one strong suggestion in the Skolemization section:

Systems wishing to do this should mint a new, globally unique IRI (a Skolem IRI) for each blank node so replaced.

The IRI generated is definitely not globally unique, as there will be many tag:…,2013:dsid:graph-1 IRIs in the world, each associated with data that is completely different. This approach also goes against something else in Skolemization that states:

This transformation does not appreciably change the meaning of an RDF graph.

It’s true that using tag IRIs doesn’t change the meaning of the graph when you assume that the document will never find its way into a database. However, once you place the document in a database, it certainly creates the possibility of collisions in applications that are not aware of the special-ness of IRIs starting with tag:…,2013:dsid:. The data is fine taken by itself, but a disaster when merged with other data. We would have to put a warning in some specification for systems to make sure to rename the incoming tag:…,2013:dsid: IRIs to something that is unique to the storage subsystem. Keep in mind that this is exactly what is done when importing blank node identifiers into a storage subsystem. So, we’ve more-or-less re-invented blank node identifiers at this point.

Allow blank node identifiers for graph names

This leads us to the question of why not just extend RDF to allow blank node identifiers for predicates and graph names. Ideally, that’s what I would like to see happen in the future as it places the least burden on developers, and allows RDF to easily model JSON. The responses from the RDF WG are varied. These are all of the current arguments against it that I have heard:

There are other ways to solve the problem, like fragment identifiers and skolemization, than introducing blank nodes for predicates and graph names.

Fragment identifiers don’t work, as demonstrated above. There is really only one workable solution based on a very lax reading of skolemization, and as demonstrated above, even the best skolemization solution re-invents the concept of a blank node.

There are other use cases that are blocked by the introduction of blank node identifiers into the predicate and graph name position.

While this has been asserted, it is still unclear exactly what those use cases are.

Adding blank node identifiers for predicates and graph names will break legacy applications.

If blank nodes for predicates and graph names were illegal before, wouldn’t legacy applications reject that sort of input? The argument that there are bugs in legacy applications that make them not robust against this type of input is valid, but should that prevent the right solution from being adopted? There has been no technical reason put forward for why blank nodes for predicates or graph names cannot work, other than software bugs prevent it.

The PaySwarm work has chosen to model the data in a very strange way.

The people that have been working on RDFa, JSON-LD, and the Web Payments specifications for the past 5 years have spent a great deal of time attempting to model the data in the simplest way possible, and in a way that is accessible to developers that aren’t familiar with RDF. Whether or not it may seem strange is arguable since this response is usually levied by people not familiar with the Web Payments work. This blog post outlines a variety of use cases where the use of a blank node for predicates and graph naming is necessary. Stating that the use cases are invalid ignores the point of contention.

If we allow blank nodes to be used when naming graphs, then those blank nodes should denote the graph.

At present, RDF states that a graph named using an IRI may denote the graph or it may not denote the graph. This is a fancy way of saying that the IRI that is used for the graph name may be an identifier for something completely different (like a person), but de-referencing the IRI over the Web results in a graph about cars. I personally think that is a very dangerous concept to formalize in RDF, but there are others that have strong opinions to the contrary. The chances of this being changed in RDF 1.1 is next to none.

Others have argued that while that may be the case for IRIs, it doesn’t have to be the case for blank nodes that are used to name graphs. In this case, we can just state that the blank node denotes the graph because it couldn’t possibly be used for anything else since the identifier is local to the document. This makes a great deal of sense, but it is different from how an IRI is used to name a graph and that difference is concerning to a number of people in the RDF Working Group.

However, that is not an argument to disallow blank nodes from being used for predicates and graph names. The group could still allow blank nodes to be used for this purpose while stating that they may or may not be used to denote the graph.

The RDF Working Group does not have enough time left in its charter to make a change this big.

While this may be true, not making a decision on this is causing more work for the people working on JSON-LD and RDF Dataset Normalization. Having the tag:…,2013:dsid: identifier scheme is also going to make many RDF-based applications more complex in the long run, resulting in a great deal more work than just allowing blank nodes for predicates and graph names.


I have a feeling that the RDF Working Group is not going to do the right thing on this one due to the time pressure of completing the work that they’ve taken on. The group has already requested, and has been granted, a charter extension. Another extension is highly unlikely, so the group wants to get everything wrapped up. This discussion could take several weeks to settle. That said, the solution that will most likely be adopted (a special tag-based skolem IRI) will cause months of work for people living in the JSON-LD and RDF ecosystem. The best solution in the long run would be to solve this problem now.

If blank node identifiers for predicates and graphs are rejected, here is the proposal that I think will move us forward while causing an acceptable amount of damage down the road:

  1. JSON-LD continues to support blank node identifiers for use as predicates and graph names.
  2. When converting JSON-LD to RDF, a special, relabelable IRI prefix will be used for blank nodes in the predicate and graph name position, of the form tag:…,2013:dsid:

Thanks to Dave Longley for proofing this blog post and providing various corrections.

The Problem with RDF and Nuclear Power

Full disclosure: I am the chair of the RDFa Working Group, the JSON-LD Community Group, a member of the RDF Working Group, as well as other Semantic Web initiatives. I believe in this stuff, but am critical about the path we’ve been taking for a while now.

The Resource Description Framework (a model for publishing data on the Web) has this horrible public perception akin to how many people in the USA view nuclear power. The coal industry campaigned quite aggressively to implant the notion that nuclear power was not as safe as coal. Couple this public misinformation campaign with a few nuclear-power-related catastrophes and it is no surprise that the current public perception toward nuclear power can be summarized as: “Not in my back yard”. Nevermind that, per tera-watt, nuclear power generation has killed far fewer people since its inception than coal. Nevermind that it is one of the more viable power sources if we gaze hundreds of years into Earth’s future, especially with the recent renewed interest in Liquid Fluoride Thorium Reactors. When we look toward the future, the path is clear, but public perception is preventing us from proceeding down that path at the rate that we need to in order to prevent more damage to the Earth.

RDF shares a number of these similarities with nuclear power. RDF is one of the best data modeling mechanisms that humanity has created. Looking into the future, there is no equally-powerful, viable alternative. So, why has progress been slow on this very exciting technology? There was no public mis-information campaign, so where did this negative view of RDF come from?

In short, RDF/XML was the Semantic Web’s Three Mile Island incident. When it was released, developers confused RDF/XML (bad) with the RDF data model (good). There weren’t enough people and time to counteract the negative press that RDF was receiving as a result of RDF/XML and thus, we are where we are today because of this negative perception of RDF. Even Wikipedia’s page on the matter seems to imply that RDF/XML is RDF. Some purveyors of RDF think that the public perception problem isn’t that bad. I think that when developers hear RDF, they think: “Not in my back yard”.

The solution to this predicament: Stop mentioning RDF and the Semantic Web. Focus on tools for developers. Do more dogfooding.

To explain why we should adopt this strategy, we can look to Tesla for inspiration. Elon Musk, founder of PayPal and now the CEO of Tesla Motors, recently announced the Tesla Supercharger project. At a high-level, the project accomplishes the following jaw-dropping things:

  1. It creates a network of charging stations for electric cars that are capable of charging a Tesla in less than 30 minutes.
  2. The charging stations are solar powered and generate more electricity than the cars use, feeding the excess power into the local power grid.
  3. The charging stations are free to use for any person that owns a Tesla vehicle.
  4. The charging stations are operational and available today.

This means that, in 4-5 years, any owner of a Tesla vehicle will be able to drive anywhere in the USA, for free, powered by the sun. No person in their right mind (with the money) would pass up that offer. No fossil fuel-based company will ever be able to provide “free”, clean energy. This is the sort of proposition we, the RDF/Linked Data/Semantic Web community, need to make; I think we can re-position ourselves to do just that.

Here is what the RDF and Linked Data community can learn from Tesla:

  1. The message shouldn’t be about the technology. It should be about the problems we have today and a concrete solution on how to address those problems.
  2. Demonstrate real value. Stop talking about the beauty of RDF, theoretical value, or design. Deliver production-ready, open-source software tools.
  3. Build a network of believers by spending more of your time working with Web developers and open-source projects to convince them to publish Linked Data. Dogfood our work.

Here is how we’ve applied these lessons to the JSON-LD work:

  1. We don’t mention RDF in the specification, unless absolutely necessary, and in many cases it isn’t necessary. RDF is plumbing, it’s in the background, and developers don’t need to know about it to use JSON-LD.
  2. We purposefully built production-ready tools for JSON-LD from day one; a playground, multiple production-ready implementations, and a JavaScript implementation of the browser-based API.
  3. We are working with Wikidata, Wikimedia, Drupal, the Web Payments and Read Write Web groups at W3C, and a number of other private clients to ensure that we’re providing real value and dogfooding our work.

Ultimately, RDF and the Semantic Web are of no interest to Web developers. They also have a really negative public perception problem. We should stop talking about them. Let’s shift the focus to be on Linked Data, explaining the problems that Web developers face today, and concrete, demonstrable solutions to those problems.

Note: This post isn’t meant as a slight against any one person or group. I was just working on the JSON-LD spec, aggressively removing prose discussing RDF, and the analogy popped into my head. This blog post was an exercise in organizing my thoughts on the matter.

Linked JSON: RDF for the Masses

There are times when we can see ourselves doing things that will be successful, and then there are times when we can see ourselves screwing it all up. I’ve just witnessed the latter in the RDF Working Group at the World Wide Web Consortium and thought that it may help to do a post-mortem on what went wrong. This was a social failure, not a technical one. Unlike technical failures, social failures are so much more complicated – so, let’s see if we can find out what went wrong.


I spend a great deal of my time trying to convince technology leaders at large companies like Google, the New York Times, Facebook, Twitter, Sony, Universal and Warner Brothers to choose a common path forward that will help the Web flourish. Most of that time is spent at the World Wide Web Consortium (W3C), in standards working groups, trying to predict and build the future of the Web. I’m currently the Chair of the RDF Web Applications Working Group, formerly known as the RDFa Working Group. My participation covers many different working groups at the W3C; RDFa, HTML5, RDF, WebID, Web Apps, Social Web, Semantic Web Coordination, and a few others. The hope is that all of these groups are building technologies that will actually make all of our lives easier – especially for those that create and build the Web.

The Pull of Linked Data

There is a big push on the Web right now to publish data in an inter-operable way. RDFa is a good example of this new push to get as much Linked Data out there as possible. Our latest work in the RDF Working Group was to try and find a way to bring Linked Data to JSON. That is, we were given the task of figuring out a way to get companies like Google, Yahoo!, The New York Times, Facebook and Twitter to publish their data in a standards-compliant format that the rest of the world could use. We’ve already convinced some of these large companies to publish their data in RDFa. This was a huge win for the Web, but it was only a fraction of the interesting data out there. The rest of it is locked up in Web Services – in volumes of JSON data that are passed back and forth via JSON-REST APIs every day.

Wouldn’t it be great if we had something like RDFa for JSON? A way for a standard software stack to extract globally meaningful objects from Web Services? In fact, that is what JSON-LD was designed to do. There are also a number of other JSON formats that could be read not only as JSON, but as RDF. If we could get the world to start publishing their JSON data as Linked Data, we would have more transparency and more inter-operable systems. The rate at which we re-use data from other JSON-based systems would grow by leaps and bounds.

This was the charge of the RDF Working Group, and at the Face-to-Face meeting a little over a week ago, we failed miserably to deliver on that promise.

Failure Timeline

Here is a quick run-down of what happened:

  • March 2010: Work starts on JSON-LD – focusing on an easy-to-use, stripped down version of Linked Data for Web Developers. The work builds on previous work done by lots of smart people across the Web.
  • Summer 2010: A W3C RDF Workshop finds that there is a deep desire in the community for a JSON-based RDF format.
  • January 2011: The RDF Working Group starts up – starts to analyze 10 different RDF in JSON format proposals. There is general confusion in the group as to the exact community we’re attempting to address. Some think it’s people that are already using RDF/Graph Stores and SPARQL, others believe we are attempting to bring independent Web Developers into the world of Linked Data. I was of the latter mindset – we don’t need to convince people that are already using RDF to keep using RDF.
  • March 2011: Arguments continue about what features of JSON we’ll use and whether or not we are just creating another triple-based serialization for RDF, or if we are creating an easier to use form of Linked Data in JSON.
  • April 2011: At the RDF Face-to-Face, a show of hands decides to place the JSON work intended for independent Web Developers on the back burner for a year or more. The reason was that there was no consensus that we were solving a problem that needed to be solved.

Before I get into what went wrong, I don’t intend any of this to be bashing anyone in the RDF Working Group. They’re all good people that want to do good things for the Web. Many of them have put years of work into RDF – they want to see it succeed. They are also very smart people – they are the world’s leading experts in this stuff. There were no politics or back-room dealings that occurred. The criticism is more about the group dynamic – why we failed to deliver what some of us saw as our primary directive in the group.

What Went Wrong?

How did we go from knowing that people wanted to get Linked Data out of JSON to deciding to back-burner the work on providing just that to the people that build the Web? I pondered what went wrong for about a week and came up with the following list:

  • I failed to gather support and evidence that people wanted to get Linked Data out of JSON. I place most of the blame on myself for not educating the group before the decision needed to be made. I wouldn’t be saying this if the vote was close, but when it came time to show who supported the work – out of a group of 20-some-odd people, only two raised their hands. One of those people was me. I should have spent more time getting the larger companies to weigh in on the matter. I should have had more documentation and evidence ready on why the world needed to get Linked Data out of JSON. I should have had more one-on-one conversations with the people that I could see struggling with why we needed Linked Data for JSON. I assumed that it was obvious that the world needed this and that assumption came back to kick our collective asses.
  • A lack of Web App developers in the RDF Working Group helped compound the problem stated above. Most of the group didn’t understand why just serializing triples to JSON wasn’t good enough as most of them had APIs to make sense of the triples. They were also not convinced that we needed to bring Web App developers into the RDF community. RDF is already successful, right? Wrong. Every RDF serialization format is losing out to JSON when it comes to data interchange – not by a little, but by a staggering margin. The RDF community is so pathetically tiny compared to the Web App development community. The people around the world that use JSON as their primary data serialization format are easily 100 fold greater than those using RDF. I’m convinced that there is a problem. I don’t think that the majority of traditional RDF community thinks that there is a problem.
  • Lacking a common vision will kill a community. It has been said that standards groups should not innovate, but instead they should standardize solutions that are already out in the marketplace. There are days where I believe this – the TURTLE work has been easy to move forward in the RDF Working Group. There are also days where I know this is not true. Standards groups can be fantastic innovators – just look at the WHATWG, CSS, RDFa, Web Applications, and HTML5 Working Groups. At the heart of the matter is whether or not a group has a common vision. If you don’t have a common vision, you go nowhere. We didn’t have a common vision for the Linked Data in JSON work.
  • Only one company in the group was depending on the technology to be completed in order to ship a product. That company was Digital Bazaar, for the PaySwarm work. None of the other companies really have any skin in the game. Sure, some of them would like to see something developed, but they’re not dependent on it. One recipe for disaster is to get a group of people together to work on something with hardly any negative consequence for failure.
  • I pushed JSON-LD too hard when discussing the various possibilities. I pushed it because I thought it was the best solution, and still do. I think my sense of urgency came across as being too pushy and authoritarian. This strategy, if you could call it that, backfired. Rather than open up a debate on the proper Linked Data JSON format, it seemed as if some people refused to have any sort of debate on the formats and instead chose to debate which community we were attempting to address in order to slow down the decision process until they could catch up with the state of all of the serialization formats.
  • Old school RDF people compose the majority of the RDF Working Group. It’s hard to pinpoint, but I saw what I could only describe as an “old world” mentality in the RDF Working Group. Browser-based APIs and development weren’t that important to them. They had functioning graph storage engines, operational SPARQL query engines, and PhDs to solve all of the hard problems that they may find in their everyday usage of RDF. Independent Web developers rarely have all of these advantages – many of them have none of these advantages. Many Web developers only have a browser, JavaScript, some server side code and JSON.parse() for performing data serialization and deserialization. JSON coupled with REST is simple, fast, stable and works for most everything we do. To solve 80% of our problems, there is no need for the added complexity that the “old school” RDF crowd brings to the table.
  • The RDF Working Group didn’t do their homework. We are all busy, I get that. However, even after two months, it was painfully clear that many in the group had not taken the time to understand the proposals on the table in any amount of depth. In some cases, I’m convinced that some did not even look at the proposals before passing judgement on whether or not the solution was sound.
  • Experts tend to over-analyze and cripple themselves and their colleagues with all of the potential failure scenarios. There were assertions in the group at times that, while they had some basis in validity, were not constructive and came across as typical academic nay-saying. It is easier to find reasons why a particular direction will not succeed when you’re an expert. This nay-saying was very active in the RDF Working Group. We didn’t have a group that was saying “Yes, we can make this happen.” Instead, we had a minority that set the tone for the group by repeating “I don’t know if this’ll work, let’s not do it.”

I think the RDF Working Group has lost its way – we have forgotten the end-goal of enabling everyone on the Web to use Linked Data. We have chosen to deal with the easier problems instead of taking the biggest problem (adoption) seriously. There are many rational arguments to be made about why we’re not doing the work – none of those reasons are going to help spread Linked Data outside the modestly sized community that it enjoys at the moment. We need to get Web App developers using Linked Data – JSON is one way to accomplish that goal. It is a shame that we’re passing up this opportunity.

All is Not Lost

The RDF Working Group is only working on one interesting thing right now – and that’s how to represent multiple graphs in the various RDF serializations. Call them Named Graphs, or Graph Literals, or something else – but at least we’re taking that bull by the horns. As for the rest of the work that the RDF Working Group plans to do – it’s uninspired. I joined the group hoping to breathe some new life into RDF – make it exciting and useful to JavaScript developers. Instead, we ended up spending most of our time polishing things that are already working for most people. Don’t get me wrong, it’s good that some of these things are being polished – but it’s not going to impact RDF adoption rates in any significant way.

All is not lost. We decided to create a public Linked Data in JSON mailing list (not activated yet) where the people that would like to see something come of Linked Data in JSON could continue the work. We’re already revving JSON-LD and updating it to reflect issues that we discovered over the past several months. That’s where I’ll be spending most of my effort on Linked Data in JSON from now on – the RDF Working Group has demonstrated that we can’t accomplish the goal of growing the Linked Data community there.

JSON-LD: Cycle Breaking and Object Framing

We’ve been doing a bit of research at Digital Bazaar on how to best meld graph-based object models with what most developers are familiar with these days – JSON-based object programming (aka: associative-array based object models). We want to enable developers to use the same data models that they use in JavaScript today, but to work with arbitrary graph data.

This is an issue that we think is at the heart of why RDF has not caught on as a general data model – the data is very difficult to work with in programming languages. Graph data doesn’t map onto a native data structure that a developer can manipulate directly; a complex set of APIs is required just to read and write it.
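
As a rough, self-contained illustration of that difference, compare the two access styles below. The triple array and the match() helper are hypothetical stand-ins for illustration, not any particular RDF library:

// Triple-style access: query by subject and predicate, then unpack the
// result. The triple array and match() helper are made up for this sketch.
var triples = [
   {subject: "#book", predicate: "dc:title", object: "My Book"}
];
function match(subject, predicate) {
   return triples.filter(function(t) {
      return t.subject === subject && t.predicate === predicate;
   });
}
var titleFromGraph = match("#book", "dc:title")[0].object;

// Native JSON access: the shape of the data is the API.
var book = {"dc:title": "My Book"};
var titleFromObject = book["dc:title"];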

When a JavaScript author gets JSON-LD from a remote source, the graph that the JSON-LD expresses can take a number of different but valid forms. That is, the information expressed by the graph can be identical, but each graph can be structured differently.

Think of these two statements:

The Q library contains book X.
Book X is contained in the Q library.

The information that is expressed in both sentences is exactly the same, but the structure of each sentence is different. Structure is very important when programming. When you write code, you expect the structure of your data to not change.

However, when we program using graphs, the structure is almost always unknown, so a mechanism to impose a structure is required in order to help the programmer be more productive.

The way the graph is represented is entirely dependent on the algorithm used to normalize the graph and the algorithm used to break cycles in it. Consider the following example, which is a graph with three top-level objects – a library, a book and a chapter. Each of the items is related to the others, so the graph can be expressed in JSON-LD in a number of different ways:

      "dc": "",
      "ex": ""
         "@": "",
         "a": "ex:Library",
         "ex:contains":  ""
         "@": "",
         "a": "ex:Book",
         "dc:contributor": "Writer",
         "dc:title": "My Book",
         "ex:contains": ""
         "@": "",
         "a": "ex:Chapter",
         "dc:description": "Fun",
         "dc:title": "Chapter One"

The JSON-LD graph above could also be represented like so:

      "dc": "",
      "ex": ""
   "@": "",
   "a": "ex:Library",
      "@": "",
      "a": "ex:Book",
      "dc:contributor": "Writer",
      "dc:title": "My Book",
         "@": "",
         "a": "ex:Chapter",
         "dc:description": "Fun",
         "dc:title": "Chapter One"

Both of the examples above express the exact same information, but the graph structure is very different. If a developer can receive both of the objects from a remote source, how do they ensure that they only have to write one code path to deal with both examples?

That is, how can a developer reliably write the following code:

// print all of the books and their corresponding chapters
var library = jsonld.toObject(jsonLdText);
for(var bookIndex = 0; bookIndex < library["ex:contains"].length; bookIndex++)
{
   var book = library["ex:contains"][bookIndex];
   var bookTitle = book["dc:title"];
   for(var chapterIndex = 0; chapterIndex < book["ex:contains"].length; chapterIndex++)
   {
      var chapter = book["ex:contains"][chapterIndex];
      var chapterTitle = chapter["dc:title"];
      console.log("Book: " + bookTitle + " Chapter: " + chapterTitle);
   }
}

The answer boils down to ensuring that the data structure that is built for the developer from the JSON-LD is framed in a way that makes property access predictable. That is, the developer provides a structure that MUST be filled out by the JSON-LD API. The working title for this mechanism is "Cycle Breaking and Object Framing", since both operations must work together in order to solve this problem.

The developer would specify a Frame for their language-native object like the following:

   "#": {"ex": ""},
   "a": "ex:Library",
      "a": "ex:Book",
         "a": "ex:Chapter"

The object frame above asserts that the developer expects to get a library containing one or more books containing one or more chapters returned to them. This ensures that the data is structured in a way that
is predictable and only one code path is necessary to work with graphs that can take multiple forms. The API call that they would use would look something like this:

var library = jsonld.toObject(jsonLdText, objectFrame);
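
To make the idea a bit more concrete, here is a toy sketch of what such a call could do internally. It is an illustration only (single-valued properties, no context processing, and the frameNodes() name is made up), not the actual algorithm being proposed:

// Toy object framing. Nodes use the example syntax above: "@" is the node
// identifier, "a" is the type, and a property value that matches another
// node's "@" is treated as a reference to that node.
function frameNodes(nodes, frame) {
   // Index every node by its "@" identifier for quick lookup.
   var index = {};
   nodes.forEach(function(node) { index[node["@"]] = node; });

   function embed(node, subframe, visited) {
      if(visited[node["@"]]) {
         // Cycle breaking: a node that has already been embedded above us
         // is referenced by its identifier instead of being embedded again.
         return node["@"];
      }
      visited[node["@"]] = true;

      var out = {};
      Object.keys(node).forEach(function(key) {
         var value = node[key];
         if(subframe[key] && typeof subframe[key] === "object" && index[value]) {
            // The frame asks for this property to be embedded as a child
            // object, so recurse into the referenced node.
            out[key] = embed(index[value], subframe[key], visited);
         } else {
            out[key] = value;
         }
      });
      return out;
   }

   // The top-level object is the node whose type matches the frame's "a".
   var top = nodes.filter(function(node) { return node["a"] === frame["a"]; })[0];
   return top ? embed(top, frame, {}) : null;
}

Given the three flat objects from the first example (with their identifiers filled in) and the frame above, frameNodes() would return the nested library/book/chapter structure that the book-and-chapter loop earlier in this post expects.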

The discussion on this particular issue is continued on the JSON-LD mailing list.