
Identity Credentials and Web Login

In a previous blog post, I outlined the need for a better login solution for the Web and why Mozilla Persona, WebID+TLS, and OpenID Connect currently don’t address important use cases that we’re considering in the Web Payments Community Group. The blog post contained a proposal for a new login mechanism for the Web that was simultaneously more decentralized, more extensible, and more privacy-aware than the previously mentioned solutions, while also enabling a more level playing field.

In the private conversations we have had with companies large and small, the proposal was met with a healthy dose of both skepticism and excitement. There was enough excitement generated to push us to build a proof-of-concept of the technology. We are releasing this proof-of-concept to the Web today so that other technologists can take a look at it. It’s by no means done; there are plenty of bugs and security issues that we plan to fix over the next several weeks, but the core of the idea is there and you can try it out.

TL;DR: There is now an open source demo of credential-based login for the Web. We think it’s better than Persona, WebID+TLS, and OpenID Connect. If we can build enough support for Identity Credentials over the next year, we’d like to standardize it via the W3C.

The Demo

The demonstration that we’re releasing today is a proof-of-concept asserting that we can have a unified, secure identity and login solution for the Web. The technology is capable of storing and transmitting your identity credentials (email address, payment processor, shipping address, driver’s license, passport, etc.) while also protecting your privacy from those that would want to track and sell your online browsing behavior. It is in the same realm of technology as Mozilla Persona, WebID+TLS, and OpenID Connect. Benefits of using this technology include:

  • Solving the NASCAR login problem in a way that greatly increases identity provider competition.
  • Removing the need for usernames and passwords when logging into 99.99% of the websites that you use today.
  • Auto-filling information that you have to repeat over and over again (shipping address, name, email, etc.).
  • Solving the NASCAR payments problem in a way that greatly increases payment processor competition.
  • Storage and transmission of credentials, such as email addresses, driver’s licenses, and digital passports, via the Web that cryptographically prove that you are who you say you are.

The demonstration is based on the Identity Credentials technology being developed by the Web Payments Community Group at the World Wide Web Consortium. It consists of an ecosystem of four example websites. The purpose of each website is explained below:

Identity Provider (identus.org)

The Identity Provider stores your identity document and any information about you, including any credentials that other sites may issue to you. This site is used to accomplish several things during the demo:

  • Create an identity.
  • Register your identity with the Login Hub.
  • Generate a verified email credential and store it in your identity.

Login Hub (login-hub.com)

This site helps other websites discover your identity provider in a way that protects your privacy from both the website you’re logging into and your identity provider. Eventually the functionality of this website will be implemented directly in browsers, but until that happens, it is used to bootstrap identity provider discovery and the login/credential transmission process. This site is used to do the following things during the demo:

  • Register your identity, creating an association between your identity provider and the email address and passphrase you use on the login hub.
  • Login to a website.

Credential Issuer (credential.club)

This site is responsible for verifying information about you like your home address, driver’s license, and passport information. Once the site has verified some information about you, it can issue a credential to you. For the purposes of the demonstration, all verifications are simulated and you will immediately be given a credential when you ask for one. All credentials are digitally signed by the issuer which means their validity can be proven without the need to contact the issuer (or be online). This site is used to do the following things during the demo:

  • Login using an email credential.
  • Issue other credentials to yourself like a business address, proof of age, driver’s license, and digital passport.

Single Sign-On Demo

The single sign-on website, while not implemented yet, will be used to demonstrate the simplicity of credential-based login. The sign-on process requires you to click a login button, enter your email and passphrase on the Login Hub, and then verify that you would like to transmit the requested credential to the single sign-on website. This website will allow you to do the following in a future demo:

  • Present various credentials to log in.

How it Works

The demo is split into four distinct parts. Each part will be explained in detail in the rest of this post. Before you try the demo, it is important that you understand that this is a proof-of-concept. The demo is pretty easy to break because we haven’t spent any time polishing it. It’ll be most useful to technologists who understand how the Web works. It has only been tested in Google Chrome, versions 31 – 35. There are glaring security issues with the demo that have solutions which have not been implemented yet due to time constraints. We wanted to publish our work as quickly as possible so others could critique it early rather than sitting on it until it was “done”. With those caveats clearly stated up front, let’s dive into the demo.

Creating an Identity

The first part of the demo requires you to create an identity for yourself. Do so by clicking the link in the previous sentence. Your short name can be something simple like your first name or a handle you use online. The passphrase should be something long and memorable that is specific to you. When you click the Create button, you will be redirected to your new identity page.

Note the text displayed in the middle of the screen. This is your raw identity data in JSON-LD format. It is a machine-readable representation of your credentials. There are only three pieces of information in it in the beginning. The first is the JSON-LD @context value, https://w3id.org/identity/v1, which tells machines how to interpret the information in the document. The second is the id value, which is the location of this particular identity on the Web. The third is the sysPasswordHash, which is just a bcrypt hash of your login password to the identity website.
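
As a rough illustration (the @context value and the three field names come from the description above, while the identity URL and values are placeholders I made up), the initial identity document might look something like this:

{
  "@context": "https://w3id.org/identity/v1",
  "id": "https://identus.org/people/alice",
  "sysPasswordHash": "$2a$10$vQz0eW8sJq4yF1kQ9cQh7u..."
}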

Global Web Login Network

Now that you have an identity, you need to register it with the global Web login network. The purpose of this network is to help map your preferred email address to your identity provider. Keep in mind that in the future, the piece of software that will do this mapping will be your web browser. However, until this technology is built into the browser, we will need to bootstrap the email to identity document mapping in another way.

The way that both Mozilla Persona and OpenID handle this mapping is fairly similar. OpenID assumes that your email address maps to your identity provider, so an OpenID login name of joe@gmail.com assumes that gmail.com is your identity provider. Mozilla Persona went a step further by saying that if gmail.com wouldn’t vouch for your email address, Mozilla would. So Persona would first check to see if gmail.com spoke the Persona protocol, and if it didn’t, the burden of validating the email address would fall back to Mozilla. This approach put Mozilla in the unenviable position of running a lot of infrastructure to make sure the entire system stayed up and running.

The Identity Credentials solution goes a step further than Mozilla Persona and states that you are the one that decides which identity provider your email address maps to. So, if you have an email address like bob@gmail.com, you can use yahoo.com as your identity provider. You can probably imagine that this makes the large identity providers nervous because it means that they’re now going to have to compete for your business. You have the choice of who is going to be your identity provider regardless of what your email address is.

So, let’s register your new identity on the global web login network. Click the text on the screen that says “Click here to register”. That will take you to a site called login-hub.com. This website serves two purposes. The first is to map your preferred email address to your identity provider. The second is to protect your privacy as information is sent from your identity provider to other websites on the Internet (more on this later).

You should be presented with a screen that asks you for three pieces of information: your preferred email address, a passphrase, and a verification of that passphrase. When you enter this information, it will be used to do a number of things. The first thing that will happen is that a public/private keypair will be generated for the device that you’re using (your web browser, for instance). This keypair will be used as a second factor of authentication in later steps of this process. The second thing that will happen is that your email address and passphrase will be used to generate a query token, which will later be used to query the decentralized Telehash-based identity network. The third thing that will happen is that your query token to identity document mapping will be encrypted and placed onto the Telehash network.
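
A minimal browser-side sketch of those three steps, assuming the Web Crypto API is used for key generation and hashing (the demo’s actual code may differ), run inside an async function with the entered email and passphrase in hand:

// 1. Generate a device keypair to use as a second factor later.
const deviceKeyPair = await crypto.subtle.generateKey(
  { name: 'RSASSA-PKCS1-v1_5', modulusLength: 2048,
    publicExponent: new Uint8Array([1, 0, 1]), hash: 'SHA-256' },
  true, ['sign', 'verify']);

// 2. Derive the query token from the preferred email address and passphrase.
const digest = await crypto.subtle.digest(
  'SHA-256', new TextEncoder().encode(email + passphrase));
const queryToken = Array.from(new Uint8Array(digest))
  .map(b => b.toString(16).padStart(2, '0')).join('');

// 3. The identity document's location is then encrypted with the passphrase
//    and published to the Telehash network alongside the query token.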

The Decentralized Database (Telehash)

We could spend an entire blog post on Telehash alone, but the important thing to understand is that it provides a mechanism to store data in a decentralized database and query that database for the data at a later time. By storing this query token and query response in the decentralized database, we can find your identity provider mapping regardless of which device you’re using to access the Web and independent of who your email provider is.

In fact, note that I said “preferred email address” above. It doesn’t need to be an email address; it could be a simple string like “Bob” and a unique passphrase. Even though there are many “Bob”s in the world, it is unlikely that two of them would choose the same 20+ character passphrase, so one could use just a first name and a complex passphrase. That said, we’re suggesting that most non-technical people use a preferred email address because most people won’t understand the dangers of weak username+passphrase combinations, like sha256(“Bob” + “password”), colliding with someone else’s. As an aside, the decentralized database solution doesn’t need to be Telehash. It could just as easily be a decentralized ledger like Namecoin or Ripple.

Once you have filled out your preferred email address and passphrase, click the Register button. You will be sent back to your identity provider and will see three new pieces of information. The first piece of information is sysIdpMapping, which is the decentralized database query token (query) and passphrase-encrypted mapping (queryResponse). The second piece of information is sysDeviceKeys, which is the public key associated with the device that you registered your identity through and which will be used as a second factor of authentication in later versions of the demo. The third piece of information is sysRegistered, which is an indicator that the identity has been registered with the decentralized database.
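
Purely as an illustration of just those new entries (the field names follow the text above, but the exact structure and values are assumptions), the additions to the identity document might look along these lines:

{
  "sysIdpMapping": {
    "query": "c8e52c34a306fe1d487a0c15bc3f9bbd...",
    "queryResponse": "GZtJR2B5uyH79QXCJ...s8N2B5utJR2B54m0Lt"
  },
  "sysDeviceKeys": [{
    "publicKeyPem": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkq...\n-----END PUBLIC KEY-----\n"
  }],
  "sysRegistered": true
}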

Acquiring an Email Credential

At this point, you can’t really do much with your identity since it doesn’t have any useful credential information associated with it. So, the next step is to put something useful into your identity. When you create an account on most websites, the first thing the website asks you for is an email address. It uses this email address to communicate with you. The website will typically verify that it can send an email to that address before fully activating your account. You typically have to go through this process over and over again, once for each new site that you join. It would be nice if an identity solution designed for the Web took care of this email registration process for you. For those of you familiar with Mozilla Persona, this approach should sound very familiar.

The Identity Credentials technology is a bit different from Mozilla Persona in that it enables a larger number of organizations to verify your email address than just your email provider or Mozilla. In fact, we see a future where there could be tens, if not hundreds, of organizations that could provide email verification. For the purposes of the demo, the Identity Provider will provide a “simulated verification” (aka fake) of your email address. To get this credential, click on the text that says “Click here to get one”.

You will be presented with a single input field for your email address. Any email address will do, but you may want to use the preferred one you entered earlier. Once you have entered your email address, click “Issue Email Credential”. You will be sent back to your identity page and you should see your first credential listed in your JSON-LD identity document beside the credential key. Let’s take a closer look at what constitutes a credential in the system.

The EmailCredential is a statement that a 3rd party has done an email verification on your account. Any credential that conforms to the Identity Credentials specification is composed of a set of claims and a signature value. The claims tie the information that the 3rd party is asserting, such as an email address, to the identity. The signature is composed of a number of fields that can be used to cryptographically prove that only the issuer of the credential was capable of issuing this specific credential. The details of how the signature is constructed can be found in the Secure Messaging specification.
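
As a hedged sketch of that shape (the claim and signature property names are assumptions based on the description above, and every value is fabricated), an email credential might look roughly like this:

{
  "@type": "EmailCredential",
  "claim": {
    "id": "https://identus.org/people/alice",
    "email": "alice@example.com"
  },
  "signature": {
    "type": "GraphSignature2012",
    "creator": "https://identus.org/keys/1",
    "created": "2014-06-21T15:32:10Z",
    "signatureValue": "Qb1PnVsD8...=="
  }
}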

Now that you have an email credential, you can use it to log into a website. The next demonstration will use the email credential to log into a credential issuer website.

Credential-based Login

Most websites will only require an email credential to log in. There are other sites, such as ecommerce sites or high-security websites, that will require a few more credentials to successfully log in or use their services. For example, an ecommerce site might require your payment processor and shipping address to send you the goods you purchased. A website that sells local wines might request a credential proving that you are above the required drinking age in your locality. A travel website might request your digital passport to ease your security clearing process if you are traveling internationally. There are many more types of specialty credentials that one may issue and use via the Identity Credentials technology. The next demo will entail issuing some of these credentials to yourself. However, before we do that, we have to log in to the credential issuer website using our newly acquired email credential.

Go to the credential.club website and click on the “Login” button. This will immediately send you to the login hub website where you had previously registered your identity. The request sent to the login hub by credential.club will effectively be a request for your email credential. Once you’re on login-hub.com, enter your preferred email address and passphrase and then click “Login”.

While you were entering your email address and passphrase, the login-hub.com page connected to the Telehash network and readied itself to send a query. When you click “Login”, your email address and passphrase are hashed with SHA-256 and sent as a query to the Telehash network. Your identity provider will receive the request and respond to the query with an encrypted message that is then decrypted using your passphrase. The contents of that message tell the login hub where your identity provider is holding your identity. The request for the email credential is then forwarded to your identity provider. Note that at this point your identity provider has no idea where the request for your email credential is coming from because it is masked by the login hub website. This masking process protects your privacy.

Once the request for your email credential is received by your identity provider, a response is packaged up and sent back to login-hub.com, which then relays that information back to credential.club. Once credential.club receives your email credential, it will log you into the website. Note that at this point you didn’t have to enter a single password on the credential.club website; all you needed was an email credential to log in. Now that you have logged in, you can start issuing additional credentials to yourself.

Issuing Additional Credentials

The previous section introduced the notion that you can issue many different types of credentials. Once you have logged into the credential.club website, you may now issue a few of these credentials to yourself. Since this is a demonstration, no attempt will be made to verify those credentials with a 3rd party. The credentials that you can issue to yourself include a business address, proof of age, payment processor, driver’s license, and passport. You may enter whatever information you’d like in the input fields to see how the credential would look if it held real data.

Once you have filled out the various fields, click the blue button to issue the credential. The credential will be digitally signed and sent to your identity provider, which will then show you the credential that was issued to you. You have a choice to accept or reject the credential. If you accept the credential, it is written to your identity.

You may repeat this process as many times as you would like. Note how the passport credential has an issued-on date as well as an expiration date, demonstrating that credentials can have a time limit associated with them.
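
For instance (a sketch only; the claim and date field names are assumptions, not taken from the demo), a passport credential might carry its validity window like this:

{
  "@type": "PassportCredential",
  "claim": {
    "id": "https://identus.org/people/alice",
    "name": "Alice Example",
    "passportNumber": "X1234567"
  },
  "issued": "2014-06-21T00:00:00Z",
  "expires": "2019-06-21T00:00:00Z",
  "signature": {
    "type": "GraphSignature2012",
    "creator": "https://credential.club/keys/1",
    "signatureValue": "k9FzVt3m...=="
  }
}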

Known Issues

As mentioned throughout this post, this demonstration has a number of shortcomings and areas that need improvement, among them are:

  • Due to a lack of time, we didn’t set up our own HTTPS Telehash seed, which meant we couldn’t run login-hub.com over TLS due to security settings in most web browsers related to WebSocket connections. Not using TLS leaves a gigantic man-in-the-middle attack possibility. A future version will, of course, use both TLS and HSTS on the login-hub.com website.
  • The Telehash query/response database isn’t decentralized yet. There are a number of complexities associated with creating a decentralized storage/query network, and we haven’t decided on what the proper approach should be. There is no reason why the decentralized database couldn’t be Namecoin or Ripple-based, and it would probably be good if we had multiple backend databases that supported the same query/response protocol.
  • We don’t check digital signatures yet, but will soon. We were focused on the flow of data first and ensuring security parameters were correct second. Clearly, you would never want to run such a system in production, but we will improve it such that all digital signatures are verified.
  • We do not use the public/private keypair generated in the browser to limit the domain and validity length of credentials yet. When the system is productionized, implementing this will be a requirement and will protect you even if your credentials are stolen through a phishing attack on login-hub.com.
  • We expect there to be many more security vulnerabilities that we haven’t detected yet. That said, we do believe that there are no major design flaws in the system and are releasing the proof-of-concept, along with source code, to the general public for feedback.

Feedback and Future Work

If you have any questions or concerns about this particular demo, please leave them as comments on this blog post or send them as comments to the public-web-payments@w3.org mailing list.

Just as you logged in to the credential.club website using your email credential, you may also use other credentials such as your driver’s license or passport to log in to websites. Future work on this demo will add functionality to demonstrate the use of other forms of credentials to perform logins while also addressing the security issues outlined in the previous section.

The Marathonic Dawn of Web Payments

A little over six years ago, a group of doe-eyed Web developers, technologists, and economists decided that the way we send and receive money over the Web was fundamentally broken and needed to be fixed. The tiring dance of filling out your personal details on every website you visited seemed archaic. This was especially true when handing over your debit card number, which is basically a password into your bank account, to any fly-by-night operation that had something you wanted to buy. It took days to send money when an email would take milliseconds. Even with the advent of Bitcoin, not much has changed since 2007.

At the time, we naively thought that it wouldn’t take long for the technology industry to catch on to this problem and address it like they’ve addressed many of the other issues around publishing and communication over the Web. After all, getting paid and paying for services is something all of us do as a fundamental part of modern day living. Change didn’t come as fast as we had hoped. So we kept our heads down and worked for years gathering momentum to address this issue on the Web. I’m happy to say that we’ve just had a breakthrough.

The first ever W3C Web Payments Workshop happened two weeks ago. It was a success. Through it, we have taken significant steps toward a better future for the Web and those that make a living by using it. This is the story of how we got from there to here, what the near future looks like, and the broad implications this work has for the Web.

TL;DR: The W3C Web Payments Workshop was a success, we’re moving toward standardizing some technologies around the way we send and receive money on the Web; join the Web Payments Community Group if you want to find out more.

Primordial Web Payment Soup

In late 2007, our merry little band of collaborators started piecing together bits of the existing Web platform in an attempt to come up with something that could be standardized. After a while, it became painfully obvious that the Web platform was missing some fundamental markup and security technologies. For example, there was no standard machine-readable or automatable way of describing an item for sale on the Web. This meant that search engines couldn’t index all the things on the Web that were offered for sale. It also meant that all purchasing decisions had to be made by people. You couldn’t tell your Web browser something like “I trust the New York Times, let them charge me $0.05 per article up to $10 per month for access to their website”. Linked Data seemed like the right solution for machine-readable products, but the Linked Data technologies at the time seemed mired in complex, draconian solutions (SOAP, XML, XHTML, etc.): the bane of most Web developers.
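
To make the “machine-readable offer” idea concrete, here is how such an offer could be expressed today in JSON-LD with schema.org terms (an illustration, not part of the original 2007-era proposal):

{
  "@context": "http://schema.org",
  "@type": "Offer",
  "itemOffered": {
    "@type": "Article",
    "name": "An example news article"
  },
  "price": "0.05",
  "priceCurrency": "USD",
  "seller": "https://newspaper.example.com/"
}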

We became involved in the Microformats community and in the creation of technologies like RDFa in the hope that we could apply it to the Web Payments work. When it became apparent that RDFa was only going to solve part of the problem (and potentially produce a new set of problems), we created JSON-LD and started to standardize it through the W3C.

As these technologies started to grow out of the need to support payments on the Web, it became apparent that we needed to get more people from the general public, government, policy, traditional finance, and technology sectors involved.

Founding a Payment Incubator for the Web

We needed to build a movement around the Web Payments work and the founding of a community was the first step in that movement. In 2009, we founded the PaySwarm Community and worked on the technologies related to payments on the Web with a handful of individuals. In 2011, we transitioned the PaySwarm Community to the W3C and renamed the group to the Web Payments Community Group. To be clear, Community Groups at W3C are never officially sanctioned by W3C’s membership, but they are where most of the pre-standardization work happens. The purpose of the Web Payments Community Group was to incubate payment technologies and lobby W3C to start official standardization work related to how we exchange monetary value on the Web.

What started out as nine people spread across the world has grown into an active community of more than 150 people today. That community includes interesting organizations like Bloomberg, Mozilla, Stripe, Yandex, Ripple Labs, Citigroup, Opera, Joyent, and Telefónica. We have 14 technologies that are in the pre-standardization phase, ready to be placed into the standardization pipeline at W3C if we can get enough support from Web developers and the W3C member organizations.

Traction

In 2013, a number of us thought there was enough momentum to lobby W3C to hold the world’s first Web Payments Workshop. The purpose of the workshop would be to get major payment providers, government organizations, telecommunication providers, Web technologists, and policy makers into the same room to see if they thought that payments on the Web were broken and to see if people in the room thought that there was something that we could do about it.

In November of 2013, plans were hatched to hold the world’s first Web Payments Workshop. Over the next several months, the W3C, the Web Payments Workshop Program Committee, and the Web Payments Community Group worked to bring together as many major players as possible. The result was something better than we could have hoped for.

The Web Payments Workshop

In March 2014, the Web Payments Workshop was held in the beautiful, historic, and apropos Paris stock exchange, the Palais Brongniart. It was packed by an all-star list of financial and technology industry titans like the US Federal Reserve, Google, SWIFT, Yandex, Mozilla, Bloomberg, ISOC, Rabobank, and 103 other people and organizations that shape financial and Web standards. In true W3C form, every single session was minuted and is available to the public. The sessions focused on the following key areas related to payments and the Web. The entire contents of each session, all 14 hours of discussion, are linked to below:

  1. Introductions by W3C and European Commission
  2. Overview of Current and Future Payment Ecosystems
  3. Toward an Ideal Web Payments Experience
  4. Back End: Banks, Regulation, and Future Clearing
  5. Enhancing the Customer and Merchant Experience
  6. Front End: Wallets – Initiating Payment and Digital Receipts
  7. Identity, Security, and Privacy
  8. Wrap-up of Workshop and Next Steps

I’m not going to do any sort of deep dive into what happened during the workshop. W3C has released a workshop report that does justice to summarizing what went on during the event. The rest of this blog post will focus on what will most likely happen if we continue to move down the path we’ve started on with respect to Web Payments at W3C.

The Next Year in Web Payments

The next step of the W3C process is to convene an official group that will take all of the raw input from the Web Payments Workshop, the papers submitted to the event, input from various W3C Community Groups and from the industry at large, and reduce the scope of work down to something that is narrowly focused but will have a very large series of positive impacts on the Web.

This group will most likely operate for 6-12 months to make its initial set of recommendations for work that should start immediately in existing W3C Working Groups. It may also recommend that entirely new groups be formed at W3C to start standardization work. Once standardization work starts, it will be another 3-4 years before we see an official Web standard. While that sounds like a long time, keep in mind that large chunks of the work will happen in parallel, or have already happened. For example, the first iteration of the RDFa and JSON-LD bits of the Web Payments work are already done and standardized. The HTTP Signatures work is quite far along (from a technical standpoint, it still needs a thorough security review and consensus to move forward).

So, what kind of new work can we expect to get started at W3C? While nothing is certain, looking at the 14 pre-standards documents that the Web Payments Community Group is working on helps us understand where the future might take us. The payment problems of highest concern mentioned in the workshop papers also hint at the sorts of issues that need to be addressed for payments on the Web. Below are a few ideas of what may spin out of the work over the next year. Keep in mind that these predictions are mine and mine alone, they are in no way tied to any sort of official consensus either at the W3C or in the Web Payments Community Group.

Identity and Verified Credentials

One of the most fundamental problems that was raised at the workshop was the idea that identity on the Web is broken. That is, being able to prove who you are to a website, such as a bank or merchant, is incredibly difficult. Since it’s hard for us to prove who we are on the Web, fraud levels are much higher than they should be and peer-to-peer payments require a network of trusted intermediaries (which drive up the cost of the simplest transaction).

The Web Payments Community Group is currently working on technology called Identity Credentials that could be applied to this problem. It’s also closely related to the website login problem that Mozilla Persona was attempting to solve. Security and privacy concerns abound in this area, so we have to make sure to carefully design for those concerns. We need a privacy-conscious identity solution for the Web, and it’s possible that a new Working Group may need to be created to push forward initiatives like credential-based login for the Web. I personally think it would be unwise for W3C members to put off the creation of an Identity Working Group for much longer.

Wallets, Payment Initiation, and Digital Receipts

Another agreement that seemed to come out of the workshop was the belief that we need to create a level playing field for payments while also not attempting to standardize one payment solution for the Web. The desire was to standardize on the bare minimum necessary to make it so that websites only needed a few ways to initiate payments and receive confirmation for them. The ideal case was that your browser or wallet software would pick the best payment option for you based on your needs (best protection, fastest payment confirmation, lowest fees, etc.).

Digital wallets that hold different payment mechanisms, loyalty cards, personal data, and receipts were discussed. Unfortunately, the scope of a wallet’s functionality was not clear. Would a wallet consist of a browser-based API? Would it be cloud-based? Both? How would you sync data between wallets on different devices? What sort of functionality would be the bare minimum? These are questions that the upcoming W3C Payments Interest Group should answer. The desired outcome, however, seemed to be fairly concrete: provide a way for people to do a one-click purchase on any website without having to hand over all of their personal information. Make it easy for Web developers to integrate this functionality into websites using a standards-based approach.

Shifting to a Bitcoin-like protocol seemed to be a non-starter for almost everyone in the room; however, the idea that we could create Bitcoin/USD/Euro wallets that could initiate payment and provide a digital receipt proving that funds were moved seemed to be one possible implementation target. This would allow Visa, Mastercard, PayPal, Bitcoin, and banks to avoid reinventing their entire payment networks in order to support simple one-click purchases on the Web. The Web Payments Community Group does have a Web Commerce API specification and a Web Commerce protocol that cover this area, but they may need to be modified or expanded based on the outcome of the “What is a digital wallet and what does it do?” discussion.

Everything Else

The three major areas where it seemed like work could start at W3C revolved around verified identity, payment initiation, and digital receipts. In order to achieve those broad goals, we’re also going to have to work on some other primitives for the Web.

For example, JSON-LD was mentioned a number of times as the digital receipt format. If JSON-LD is going to be the digital receipt format, we’re going to have to have a way of digitally signing those receipts. JOSE is one approach, Secure Messaging is another, and there is currently a debate over which is best suited for digitally signing JSON-LD data.
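
To make the contrast concrete, the Secure Messaging approach embeds the signature in the JSON-LD document itself, roughly like the sketch below (the context, property names, and values here are illustrative assumptions), whereas JOSE would wrap the same payload in a separate JWS envelope:

{
  "@context": "https://w3id.org/payswarm/v1",
  "@type": "Receipt",
  "payee": "https://merchant.example.com/",
  "amount": "4.99",
  "currency": "USD",
  "signature": {
    "type": "GraphSignature2012",
    "creator": "https://merchant.example.com/keys/1",
    "signatureValue": "Xm4bR1s...=="
  }
}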

If we are going to have digital receipts, then what goes into those receipts? How are we going to express the goods and services that someone bought in an interoperable way? We need something like the product ontology to help us describe the supply and demand for products and services on the Web.

If JSON-LD is going to be utilized, some work needs to be put into Web vocabularies related to commerce, identity, and security. If mobile-based NFC payment is a part of the story, we need to figure out how that’s going to fit into the bigger picture, and so on.

Make a Difference, Join us

As you can see, even if the payments scope is very narrow, there is still a great deal of work that needs to be done. The good news is that the narrow scope above would focus on concrete goals and implementations. We can measure progress for each one of those initiatives, so it seems like what’s listed above is quite achievable over the next few years.

There also seems to be broad support to address many of the most fundamental problems with payments on the Web. That’s why I’m calling this a breakthrough. For the first time, we have some broad agreement that something needs to be done and that W3C can play a major role in this work. That’s not to say that if a W3C Payments Interest Group is formed that they won’t self destruct for one reason or another, but based on the sensible discussion at the Web Payments Workshop, I wouldn’t bet on that outcome.

If the Web Payments work at W3C is successful, it means a more privacy-conscious, secure, and semantically rich Web for everyone. It also means it will be easier for you to make a living through the Web because the proper primitives to do things like one-click payments on the Web will finally be there. That said, it’s going to take a community effort. If you are a Web developer, designer, or technical writer, we need your help to make that happen.

If you want to become involved, or just learn more about the march toward Web Payments, join the Web Payments Community Group.

If you are a part of an organization that would specifically like to provide input to the Web Payments Steering Group Charter at W3C, join here.

A Proposal for Credential-based Login

Mozilla Persona allows you to sign in to web sites using any of your existing email addresses without needing to create a new username and password on each website. It was a really promising solution for the password-based security nightmare that is login on the Web today.

Unfortunately, all the paid engineers for Mozilla Persona have been transitioned off of the project. While Mozilla is going to continue to support Persona into the foreseeable future, it isn’t going to directly put any more resources into improving Persona. Mozilla had very good reasons for doing this. That doesn’t mean that the recent events aren’t frustrating or sad. The Persona developers made a heroic effort. If you find yourself in the presence of Lloyd, Ben, Dan, Jed, Shane, Austin, or Jared (sorry if I missed someone!) be sure to thank them for their part in moving the Web forward.

If Not Persona, Then What?

At the moment, the Web’s future with respect to a better login experience is unclear. The current leader seems to be OpenID Connect, which, while implemented across millions of sites, is still not seeing the sort of adoption that you’d need for a general Web-based login solution. It’s also really complex, so complex that the lead editor of the foundation OpenID is built on left the work a long time ago in frustration. It also doesn’t proactively protect your privacy, meaning that your identity provider can track where you go on the Web. OpenID also gives an undue amount of power to email providers like Gmail and Yahoo, as your email provider would typically end up becoming your identity provider as well.

WebID+TLS should also be mentioned, as this proposal embraces and extends a number of good features from that specification. Concepts such as an identity document, which is a place where you store facts about yourself, and the ability to express that information as Linked Data are solid. WebID+TLS, unfortunately, relies on the client-certificate technology built into most browsers, which is confusing to non-technologists, and puts too much of a burden onto websites adopting the technology, such as requiring the use of an RDF TURTLE processor and the ability to hook into the TLS stream. WebID+TLS also doesn’t do much to protect against pervasive monitoring and tracking of your behavior online by companies that would like to sell that behavior to the highest bidder.

Somewhere else on the Internet, the Web Payments Community Group is working on technology to build payments into the core architecture of the Web. Login and identity are a big part of payments. We need a solution that allows someone to login to a website and transmit their payment preferences at the same time. A single authorized click by you would provide your email address, shipping address, and preferred payment provider. Another authorized click by you would buy an item and have it shipped to your preferred address. There will be no need to fill out credit card information, shipping, or billing addresses and no need to create an email address and password for every site to which you want to send money. Persona was going to be this login solution for us, but that doesn’t seem achievable at this point.

What Persona Got Right

The Persona after-action review that Mozilla put together is useful. If you care about identity and login, you should read it. Persona did four groundbreaking things:

  1. It was intended to be fully decentralized, being integrated into the browser eventually.
  2. It focused on privacy, ensuring that your identity provider couldn’t track the sites that you were logging in to.
  3. It used an email address as your login ID, which is a proven approach to login on the Web.
  4. It was simple.

It failed for at least three important reasons that were not specific to Mozilla:

  1. It required email providers to buy into the protocol.
  2. It had a temporary, centralized solution that required a costly engineering team to keep it up and running.
  3. If your identity provider goes down, you can’t login to any website.

Finally, the Persona solution did one thing well. It provided a verified email credential, but is that enough for the Web?

The Need for Verifiable Credentials

There is a growing need for digitally verifiable credentials on the Web. Being able to prove that you are who you say you are is important when paying or receiving payment. It’s also important when trying to prove that you are a citizen of a particular country, of a particular age, licensed to perform a specific task (like designing a house), or have achieved a particular goal (like completing a training course). In all of these cases, you need the ability to collect digitally signed credentials from a third party, like a university, and store them somewhere on the Web in an interoperable way.

The Web Payments group is working on just such a technology. It’s called the Identity Credentials specification.

We had somewhat of an epiphany a few weeks ago when it became clear that Persona was in trouble. An email address is just another type of credential. The process for transmitting a verified email address to a website should be the same as transmitting address information or your payment provider preference. Could we apply this concept and solve the login on the web problem as well as the transmission of verified credentials problem? It turns out that the answer is: most likely, yes.

Verified Credential-based Web Login

The process for credential-based login on the Web would more or less work like this:

  1. You get an account with an identity provider, or run one yourself. Not everyone wants to run one themselves, but it’s the Web, you should be able to easily do so if you want to.
  2. You show up at a website, it asks you to login by typing in your email address. No password is requested.
  3. The website then kick-starts a login process via navigator.id.login() that will be driven by a Javascript polyfill in the beginning, but will be integrated into the browser in time (a rough sketch of this call appears after this list).
  4. A dialog is presented to you (that the website has no control over or visibility into) that asks you to login to your identity provider. Your identity provider doesn’t have to be your email provider. This step is skipped if you’ve logged in previously and your session with your identity provider is still active.
  5. A digitally signed assertion that you control your email address is given by your identity provider to the browser, which is then relayed on to the website you’re logging in to.
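
A sketch of what a relying-party page might do to kick off step 3, assuming a login button with id “login”; the options object and callback signature for navigator.id.login() are assumptions, and only the call itself is named in the specification work:

// Assumes the (hypothetical) identity credentials polyfill script has
// already been loaded on the page via a normal script tag.
document.querySelector('#login').addEventListener('click', function() {
  // Ask the polyfill (and eventually the browser) for a verified email credential.
  navigator.id.login({ credential: 'EmailCredential' }, function(err, assertion) {
    if (err) { return console.error('Login failed', err); }
    // Relay the digitally signed assertion to the server for verification.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/login');
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(assertion));
  });
});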

Details of how this process works can be found in the section titled Credential-based Login in the Identity Credentials specification. The important thing to note about this approach is that it takes all the best parts of Persona while overcoming key things that caused its demise. Namely:

  • Using an innovative new technology called Telehash, it is fully decentralized from day one.
  • It doesn’t require browser buy-in, but is implemented in such a way that allows it to be integrated into the browser eventually.
  • It is focused on privacy, ensuring that your identity provider can’t track the sites that you are logging into.
  • It uses an email address as your login ID, which is a proven approach to login on the Web.
  • It is simple, requiring far fewer web developer gymnastics than OpenID to implement. It’s just one Javascript library and one navigator.id.login() call.
  • It doesn’t require email providers to buy into the protocol like Persona did. Any party that the relying party website trusts can digitally sign a verified email credential.
  • If your identity provider goes down, there is still hope that you can login by storing your email credentials in a password-protected decentralized hash table on the Internet.

Why Telehash?

There is a part of this protocol that requires the system to map your email address to an identity provider. The way Persona did it was to query to see if your email provider was a Persona Identity Provider (decentralized), and if not, the system would fall back to Mozilla’s email-based verification system (centralized). Unfortunately, if Persona’s verification system was down, you couldn’t log into a website at all. This rarely happened, but that was more because Mozilla’s team was excellent at keeping the site up and there weren’t any serious attempts to attack the site. It was still a centralized solution.

The Identity Credentials specification takes a different approach to the problem. It allows any identity provider to claim an email address. This means that you no longer need buy-in from email providers. You just need buy-in from identity providers, and there are a ton of them out there that would be happy to claim and verify addresses like john.doe@gmail.com, or alice.smith@ymail.com. Unfortunately, this approach means that either you need browser support, or you need some sort of mapping mechanism that maps email addresses to identity providers. Enter Telehash.

Telehash is an Internet-wide distributed hashtable (DHT) based on the proven Kademlia protocol used by BitTorrent and Gnutella. All communication is fully encrypted. It allows you to store-and-replicate things like the following JSON document:

{
  "email": "john.doe@gmail.com",
  "identityService": "https://identity.example.com/identities"
}

If you want to find out who john.doe@gmail.com’s identity provider is, you just query the Telehash network. The more astute readers among you see the obvious problem in this solution, though. There are massive trust, privacy, and distributed denial of service attack concerns here.

Attacks on the Distributed Mapping Protocol

There are four problems with the system described in the previous section.

The first is that you can find out which email addresses are associated with which identity providers; that leaks information. Finding out that john.doe@gmail.com is associated with the https://identity.example.com/ identity provider is a problem. Finding out that they’re also associated with the https://public.cyberwarfare.usairforce.mil/ identity provider outs them as military personnel, which turns a regular privacy problem into a national security issue.

The second is that anyone on the network can claim to be an identity provider for that email address, which means that there is a big phishing risk. A nefarious identity provider need only put an entry for john.doe@gmail.com in the DHT pointing to their corrupt identity provider service and watch the personal data start pouring in.

The third is that a website wouldn’t know which digital signature on an email to trust. Which verified credential is trustworthy and which one isn’t?

The fourth is that you can easily harvest all of the email addresses on the network and spam them.

Attack Mitigation on the Distributed Mapping Protocol

There are ways to mitigate the problems raised in the previous section. For example, replacing the email field with a hash of the email address and passphrase would prevent attackers from both spamming an email address and figuring out how it maps to an identity provider. It would also lower the desire for attackers to put fake data into the DHT because only the proper email + passphrase would end up returning a useful result to a query. The identity service would also need to be encrypted with the passphrase to ensure that injecting bogus data into the network wouldn’t result in an entry collision.
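
A rough Node.js-style sketch of how such a mapping entry could be built (the key-derivation and cipher choices here are assumptions for illustration, not necessarily what the demo uses):

const crypto = require('crypto');

function createMappingEntry(email, passphrase, identityService) {
  // Query token: hash of the email address and passphrase together.
  const id = crypto.createHash('sha256')
    .update(email + passphrase).digest('hex');

  // Encrypt the identity provider URL with a key derived from the passphrase,
  // so a bogus entry under the same id can't produce a believable result.
  const key = crypto.pbkdf2Sync(passphrase, id, 100000, 32, 'sha256');
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
  const encrypted = Buffer.concat(
    [cipher.update(identityService, 'utf8'), cipher.final()]);

  return {
    id: id,
    // proofOfWork would be computed separately before publishing to the DHT.
    identityService: iv.toString('base64') + ':' + encrypted.toString('base64')
  };
}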

In addition to these three mitigations, the network would employ a high CPU/memory proof-of-work to put a mapping into the DHT so the network couldn’t get flooded by bogus mappings. Keep in mind that the proof-of-work doesn’t stop bad data from getting into the DHT, it just slows its injection into the network.

Finally, figuring out which verified email credential is valid is tricky. One could easily anoint 10 non-profit email verification services that the network would trust, or something like the certificate-authority framework, but that could be argued as over-centralization. In the end, this is more of a policy decision because you would want to make sure email verification services are legally bound to do the proper steps to verify an email while ensuring that people aren’t gouged for the service. We don’t have a good solution to this problem yet, but we’re working on it.

With the modifications above, the actual data uploaded to the DHT will probably look more like this:

{
  "id": "c8e52c34a306fe1d487a0c15bc3f9bbd11776f30d6b60b10d452bcbe268d37b0",  <-- SHA256 hash of john.doe@gmail.com + >15 character passphrase
  "proofOfWork": "000000000000f7322e6add42",                                 <-- Proof of work for email to identity service mapping
  "identityService": "GZtJR2B5uyH79QXCJ...s8N2B5utJR2B54m0Lt"                <-- Passphrase-encrypted identity provider service URL
}

To query the network, the customer must provide both an email address and a passphrase which are hashed together. If the hash doesn't exist on the network, then nothing is returned by Telehash.

Also note that this entire Telehash-based mapping mechanism is merely a stop-gap measure; it goes away once the identity credential technology is built directly into browsers.

The Far Future

In the far future, browsers would communicate with your identity providers to retrieve data that are requested by websites. When you attempt to login to a website, the website would request a set of credentials. Your browser would either provide the credentials directly if it has cached them, or it would fetch them from your identity provider. This system has all of the advantages of Persona and provides realistic solutions to a number of the scalability issues that Persona suffers from.

The greatest challenges ahead will entail getting a number of things right. Some of them include:

  • Mitigate the attack vectors for the Telehash + Javascript-based login solution. Even though the Telehash-based solution is temporary, it must be solid until browser implementations become the norm.
  • Ensure that there is buy-in from large companies wanting to provide credentials for people on the Web. We have a few major players in the pipeline at the moment, but we need more to achieve success.
  • Clearly communicate the benefits of this approach over OpenID and Persona.
  • Make sure that setting up your own credential-based identity provider is as simple as dropping a PHP file into your website.
  • Make it clear that this is intended to be a W3C standard by creating a specification that could be taken standards-track within a year.
  • Get buy-in from web developers and websites, which is going to be the hardest part.

JSON-LD and Why I Hate the Semantic Web

Full Disclosure: I am one of the primary creators of JSON-LD, lead editor on the JSON-LD 1.0 specification, and chair of the JSON-LD Community Group. This is an opinionated piece about JSON-LD. A number of people in this space don’t agree with my viewpoints. My statements should not be construed as official statements from the JSON-LD Community Group, W3C, or Digital Bazaar (my company) in any way, shape, or form. I’m pretty harsh about the technologies covered in this article and want to be very clear that I’m attacking the technologies, not the people that created them. I think most of the people that created and promote them are swell and I like them a lot, save for a few misguided souls, who are loveable and consistently wrong.

JSON-LD became an official Web Standard last week. This is after exactly 100 teleconferences typically lasting an hour and a half, fully transparent with text minutes and recorded audio for every call. There were 218+ issues addressed, 2,000+ source code commits, and 3,102+ emails that went through the JSON-LD Community Group. The journey was a fairly smooth one with only a few jarring bumps along the road. The specification is already deployed in production by companies like Google, the BBC, HealthData.gov, Yandex, Yahoo!, and Microsoft. There is a quickly growing list of other companies that are incorporating JSON-LD. We’re off to a good start.

In the previous blog post, I detailed the key people that brought JSON-LD to where it is today and gave a rough timeline of the creation of JSON-LD. In this post I’m going to outline the key decisions we made that made JSON-LD stand out from the rest of the technologies in this space.

I’ve heard many people say that JSON-LD is primarily about the Semantic Web, but I disagree, it’s not about that at all. JSON-LD was created for Web Developers that are working with data that is important to other people and must interoperate across the Web. The Semantic Web was near the bottom of my list of “things to care about” when working on JSON-LD, and anyone that tells you otherwise is wrong. :P

TL;DR: The desire for better Web APIs is what motivated the creation of JSON-LD, not the Semantic Web. If you want to make the Semantic Web a reality, stop making the case for it and spend your time doing something more useful, like actually making machines smarter or helping people publish data in a way that’s useful to them.

Why JSON-LD?

If you don’t know what JSON-LD is and you want to find out why it is useful, check out this video on Linked Data and this one on an Introduction to JSON-LD. The rest of this post outlines the things that make JSON-LD different from the traditional Semantic Web / Linked Data stack of technologies and why we decided to design it the way that we did.

Decision 1: Decrypt the Cryptic

Many W3C specifications are so cryptic that they require the sacrifice of your sanity and a secret W3C decoder ring to read. I never understood why these documents were so difficult to read, and after years of study on the matter, I think I found the answer. It turns out that most specification editors are just crap at writing.

It’s not like many of the things that are in most W3C specifications are complicated; it’s just that the editor is bad at explaining them to non-implementers, who make up most of the web developers that end up reading these specification documents. This approach is often defended by raising the point that readability of the specification by non-implementers is viewed as secondary to its technical accuracy for implementers. The audience is the implementer, and you are expected to cater to them. To counter that point, though, we all know that technical accuracy is a bad excuse for crap writing. You can write something that is easy to understand and technically accurate, it just takes more effort to do that. Knowing your audience helps.

We tried our best to eliminate complex techno-babble from the JSON-LD specification. I made it a point to not mention RDF at all in the JSON-LD 1.0 specification because you didn’t need to go off and read about it to understand what was going on in JSON-LD. There was tremendous pushback on this point, which I’ll go into later, but the point is that we wanted to communicate at a more conversational level than typical Internet and Web specifications because being pedantic too early in the spec sets the wrong tone.

It didn’t always work, but it certainly did set the tone we wanted for the community, which was that this Linked Data stuff didn’t have to seem so damn complicated. The JSON-LD 1.0 specification starts out by primarily using examples to introduce key concepts. It starts at basics, assuming that the audience is a web developer with modest training, and builds its way up slowly into more advanced topics. The first 70% of the specification contains barely any normative/conformance language, but after reading it, you know what JSON-LD can do. You can look at the section on the JSON-LD Context to get an idea of what this looks like in practice.
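
For readers who haven’t seen one, a JSON-LD context is just a small JSON object that maps the keys developers already use to URLs, for example:

{
  "@context": {
    "name": "http://schema.org/name",
    "homepage": { "@id": "http://schema.org/url", "@type": "@id" }
  },
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/"
}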

This approach wasn’t a wild success. Reading sections of the specification that have undergone feedback from more nitpicky readers still makes me cringe because ease of understanding has been sacrificed at the altar of pedantic technical accuracy. However, I don’t feel embarrassed to point web developers to a section of the specification when they ask for an introduction to a particular feature of JSON-LD. There are not many specifications where you can do that.

Decision 2: Radical Transparency

One of the things that has always bothered me about W3C Working Groups is that you have to either be an expert to participate, or you have to be a member of the W3C, which can cost a non-trivial amount of money. This results in your typical web developer being able to comment on a specification, but not really having the ability to influence a Working Group decision with a vote. It also hobbles the standards-making community because the barrier to entry is perceived as impossibly high. Don’t get me wrong, the W3C staff does as much as they can to drive inclusion and they do a damn good job at it, but that doesn’t stop some of their member companies from being total dicks behind closed door sessions.

The W3C is a consortium of mostly for-profit companies and they have things they care about like market share, quarterly profits, and drowning goats (kidding!)… except for GoatCoats.com, anyone can join as long as you pay the membership dues! My point is that because there is a lack of transparency at times, it makes even the best Working Group less responsive to the general public, and that harms the public good. These closed door rules are there so that large companies can say certain things without triggering a lawsuit, which is sometimes used for good but typically results in companies being jerks and nobody finding out about it.

So, in 2010 we kicked off the JSON-LD work by making it radically open, and we fought for that openness every step of the way. Anyone can join the group, anyone can vote on decisions, anyone can join the teleconferences, there are no closed-door sessions, and we record the audio of every meeting. We successfully kept the technical work on the specification this open from the beginning through the release of the JSON-LD 1.0 Web standard a week ago. People came and went from the group over the years, but anyone could participate at any level, and that is probably the thing I’m most proud of regarding the process used to create JSON-LD. Had we not been this open, Markus Lanthaler may never have gone from being a gifted student in Italy to editor of the JSON-LD API specification and now leader of the Hypermedia Driven Web APIs community. We also may never have had the community backing to do some of the things we did in JSON-LD, like kicking RDF in the nuts.

Decision 3: Kick RDF in the Nuts

RDF is a shitty data model. It doesn’t have native support for lists. LISTS for fuck’s sake! The key data structure that’s used by almost every programmer on this planet and RDF starts out by giving developers a big fat middle finger in that area. Blank nodes are an abomination that we need, but they are applied inconsistently in the RDF data model (you can use them in some places, but not others). When we started with JSON-LD, RDF didn’t have native graph support either. For all the “RDF data model is elegant” arguments we’ve seen over the past decade, there are just as many reasons to kick it to the curb. This is exactly what we did when we created JSON-LD, and that really pissed off a number of people that had been working on RDF for over a decade.

I personally wanted JSON-LD to be compatible with RDF, but that’s about it. You could convert JSON-LD to and from RDF and get something useful, but JSON-LD had a more sane data model: lists were a first-class construct, you had generalized graphs, and you could work with JSON-LD using a simple library and standard JSON tooling. To put that in perspective, to work with RDF you typically needed a quad store, a SPARQL engine, and some hefty libraries. Your standard web developer has no interest in that toolchain because it adds more complexity to the solution than is necessary.
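
As a rough illustration of the difference (the vocabulary and album URLs below are made up), an ordered list in JSON-LD is just a JSON array whose term is declared as a list container in the context:

        {
          "@context": {
            "track": {"@id": "http://example.org/vocab#track", "@container": "@list"}
          },
          "@id": "http://example.org/albums/1",
          "track": ["Track 1", "Track 2", "Track 3"]
        }

The equivalent structure in raw RDF is a chain of rdf:first/rdf:rest blank nodes, which is exactly the sort of plumbing a web developer should never have to see.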

So screw it, we thought, let’s create a graph data model that looks and feels like JSON, RDF and the Semantic Web be damned. That’s exactly what we did and it was working out pretty well until…

Decision 4: Work with the RDF Working Group. Whut?!

Around mid-2012, the JSON-LD stuff was going pretty well and the newly chartered RDF Working Group was going to start work on RDF 1.1. One of the work items was a serialization of RDF for JSON. The lead solutions for RDF in JSON were things like the aptly named RDF/JSON and JTriples, both of which would look incredibly foreign to web developers and continue the narrative that the Semantic Web community creates esoteric solutions to non-problems. The biggest problem was that many of the participants in the RDF Working Group at the time didn’t understand JSON.

The JSON-LD group decided to weigh in on the topic by pointing the RDF WG to JSON-LD as an example of what was needed to convince people that this whole Linked Data thing could be useful to web developers. I remember the discussions getting very heated over multiple months, and at times, thinking that the worst thing we could do to JSON-LD was to hand it over to the RDF Working Group for standardization.

It was at that point that David Wood, one of the chairs of the RDF Working Group, phoned me up to try and convince me that it would be a good idea to standardize the work through the RDF WG. I was very skeptical because there were people in the RDF Working Group who drove some of the thinking that I had grown to see as toxic to the whole Linked Data / Semantic Web movement. I trusted Dave Wood, though. I had never seen him get religiously zealous about RDF like some of the others in the group, and he seemed convinced that we could get JSON-LD through without ruining it. To Dave’s credit, he was mostly right. :)

Decision 5: Hate the Semantic Web

It’s not that the RDF Working Group was populated by people who are incompetent, or whom I didn’t personally like. I’ve worked with many of them for years, and most of them are very intelligent, capable, gifted people. The problem with getting a room full of smart people together is that the group’s world view gets skewed. There are many reasons that a working group filled with experts doesn’t consistently produce great results. For example, many of the participants can be humble about their knowledge, so they tend to think that a good chunk of the people that will be using their technology will be just as enlightened. Bad feature ideas can be argued for months and rationalized because smart people, lacking any sort of compelling real-world data, are great at debating and rationalizing bad decisions.

I don’t want people to get the impression that there was or is any sort of animosity in the Linked Data / Semantic Web community because, as far as I can tell, there isn’t. Everyone wants to see this stuff succeed and we all have our reasons and approaches.

That said, after 7+ years of being involved with the Semantic Web / Linked Data, our company has never had a need for a quad store, RDF/XML, N3, NTriples, TURTLE, or SPARQL. When you chair standards groups that kick out “Semantic Web” standards, but even your own company can’t stomach the technologies involved, something is wrong. That’s why my personal approach with JSON-LD just happened to be burning most of the Semantic Web technology stack (TURTLE/SPARQL/Quad Stores) to the ground and starting over. It’s not a strategy that works for everyone, but it’s the only one that worked for us, and the only way we could think of to jar the more traditional Semantic Web community out of its complacency.

I hate the narrative of the Semantic Web because the focus has been on the wrong set of things for a long time. That community, who I have been consciously distancing myself from for a few years now, is schizophrenic in its direction. Precious time is spent in groups discussing how we can query all this Big Data that is sure to be published via RDF instead of figuring out a way of making it easy to publish that data on the Web by leveraging common practices in use today. Too much time is spent assuming a future that’s not going to unfold in the way that we expect it to. That’s not to say that TURTLE, SPARQL, and Quad stores don’t have their place, but I always struggle to point to a typical startup that has decided to base their product line on that technology (versus ones that choose MongoDB and JSON on a regular basis).

I like JSON-LD because it’s based on technology that most web developers use today. It helps people solve interesting distributed problems without buying into any grand vision. It helps you get to the “adjacent possible” instead of having to wait for a mirage to solidify.

Decision 6: Believe in Consensus

All this said, you can’t hope to achieve anything by standing on idealism alone and I do admit that some of what I say above is idealistic. At some point you have to deal with reality, and that reality is that there are just as many things that the RDF and Semantic Web initiative got right as it got wrong. The RDF data model is shitty, but because of the gauntlet thrown down by JSON-LD and a number of like-minded proponents in the RDF Working Group, the RDF Data Model was extended in a way that made it compatible with JSON-LD. As a result, the gap between the RDF model and the JSON-LD model narrowed to the point that it became acceptable to more-or-less base JSON-LD off of the RDF model. It took months to do the alignment, but it was consensus at its best. Nobody was happy with the result, but we could all live with it.

To this day I assert that we could rip the data model section out of the JSON-LD specification and it wouldn’t really affect the people using JSON-LD in any significant way. That’s consensus for you. The section is in there because other people wanted it in there and because the people that didn’t want it in there could very well have turned out to be wrong. That’s really the beauty of the W3C and IETF process. It allows people that have seemingly opposite world views to create specifications that are capable of supporting both world views in awkward but acceptable ways.

JSON-LD is a product of consensus. Nobody agrees on everything in there, but it all sticks together pretty well. The consensus on consensus is what makes the W3C, the IETF, and thus the Web and the Internet work. Through all of the fits and starts, permathreads, pedantry, idealism, and deadlock, the way the process brings people together to build this thing we call the Web is a beautiful thing.

Postscript

I’d like to thank the W3C staff that were involved in getting JSON-LD to official Web standard status (and the staff, in general, for being really fantastic people). Specifically, Ivan Herman for simultaneously pointing out all of the problems that lay in the road ahead while also providing ways to deal with each one as we came upon them. Sandro Hawke for pushing back against JSON-LD, but always offering suggestions about how we could move forward. I actually think he may have ended up liking JSON-LD in the end :) . Doug Schepers and Ian Jacobs for fighting for W3C Community Groups, without which JSON-LD would not have been able to plead the case for Web developers. The systems team and the publishing team, who are unknown to most of you, but work tirelessly to ensure that everything continues to operate, be published, and improve at W3C.

From the RDF Working Group, the chairs (David Wood and Guus Schreiber), for giving JSON-LD a chance and ensuring that it got a fair shake. Richard Cyganiak for pushing us to get rid of microsyntaxes and working with us to try and align JSON-LD with RDF. Kingsley Idehen for being the first external implementer of JSON-LD after we had just finished scribbling the first design down on paper, and for tirelessly dogfooding what he preaches. Nobody does it better. And to the rest of the RDF Working Group members: without you, JSON-LD would have escaped your influence unscathed, which would have made my life a hell of a lot easier, but would have left JSON-LD and the people that use it in a worse situation.

The Origins of JSON-LD

Full Disclosure: I am one of the primary creators of JSON-LD, lead editor on the JSON-LD 1.0 specification, and chair of the JSON-LD Community Group. These are my personal opinions and not the opinions of the W3C, JSON-LD Community Group, or my company.

JSON-LD became an official Web Standard last week. This is after exactly 100 teleconferences typically lasting an hour and a half, fully transparent with text minutes and recorded audio for every call. There were 218+ issues addressed, 2,071+ source code commits, and 3,102+ emails that went through the JSON-LD Community Group. The journey was a fairly smooth one with only a few jarring bumps along the road. The specification is already deployed in production by companies like Google, the BBC, HealthData.gov, Yandex, Yahoo!, and Microsoft. There is a quickly growing list of other companies that are incorporating JSON-LD, but that’s the future. This blog post is more about the past, namely where did JSON-LD come from? Who created it and why?

I love origin stories. When I was in my teens and early twenties, the only origin stories I liked to read about were of the comic and anime variety. Spiderman, great origin story. Superman, less so, but entertaining. Nausicaä, brilliant. Major Motoko Kusanagi, nuanced. Spawn, dark. Those connections with characters fade over time as you understand that this world has more interesting ones. Interesting because they touch the lives of billions of people, and since I’m a technologist, some of my favorite origin stories today consist of finding out the personal stories behind how a particular technology came to be. The Web has a particularly riveting origin story. These stories are hard to find because they’re rarely written about, so this is my attempt at documenting how JSON-LD came to be and the handful of people that got it to where it is today.

The Origins of JSON-LD

When you’re asked to draft the press pieces for the launch of a new world standard, you have two lists of people in your head. The first is the “all-inclusive list”, which is every person that uttered so much as a word that resulted in a change to the specification. That list is typically very long, so you end up saying something like “We’d like to thank all of the people that provided input to the JSON-LD specification, including the JSON-LD Community, the RDF Working Group, and the individuals who took the time to send in comments and improve the specification.” With that statement, you are sincere and cover all of your bases, but feel like you’re doing an injustice to the people without whom the work would never have survived.

The all-inclusive list is very important; the people on it helped refine the technology to the point that everyone could achieve consensus on it being something that is world class. However, 90% of the back-breaking work to get the specification to the point where everyone else could comment on it is typically undertaken by 4-5 people. It’s a thankless and largely unpaid job, and this is how the Web is built. It’s those people that I’d like to thank while exploring the origins of JSON-LD.

Inception

JSON-LD started around late 2008 as the work on RDFa 1.0 was wrapping up. We were under pressure from Microformats and Microdata, which we were also heavily involved in, to come up with a good way of programming against RDFa data. At around the same time, my company was struggling with the representation of data for the Web Payments work. We had already made the switch to JSON a few years earlier and were storing that data in MySQL, mostly because MongoDB didn’t exist yet. We were having a hard time translating the RDFa we were ingesting (products for sale, pricing information, etc.) into something that worked well in JSON. Around then, Mark Birbeck, one of the creators of RDFa, and I were thinking about making something RDFa-like for JSON. Mark had proposed a syntax for something called RDFj, which I thought had legs, but which Mark didn’t necessarily have the time to pursue.

The Hard Grind

After exchanging a few emails with Mark about the topic over the course of 2009, and letting the idea stew for a while, I wrote up a quick proposal for a specification and passed it by Dave Longley, Digital Bazaar’s CTO. We kicked the idea around a bit more and, in May of 2010, published the first working draft of JSON-LD. While Mark was instrumental in injecting the first set of basis ideas into JSON-LD, Dave Longley would become the key technical mind behind making JSON-LD work for web programmers.

At that time, JSON-LD had a pretty big problem. You can represent the same data in JSON-LD in a myriad of different ways, making it hard to tell whether two JSON-LD documents are the same or not. This was an important problem to Digital Bazaar because we were trying to figure out how to create product listings, digital receipts, and contracts using JSON-LD. We had to be able to tell if two product listings were the same, and we had to figure out a way to serialize the data so that products and their associated prices could be listed on the Web in a decentralized way. This meant digital signatures, and you have to be able to create a canonical/normalized form for your data if you want to be able to digitally sign it.
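
To make the problem concrete, here is a hedged sketch (the example.org vocabulary is made up) of two JSON-LD documents that express exactly the same information but are byte-for-byte different. A normalization algorithm has to reduce both to the same canonical form before a signature computed over one can be verified against the other:

        {
          "@context": {"title": "http://example.org/vocab#title", "price": "http://example.org/vocab#price"},
          "price": "4.99",
          "title": "Example Product"
        }

        {
          "http://example.org/vocab#title": "Example Product",
          "http://example.org/vocab#price": "4.99"
        }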

Dave Longley invented the JSON-LD API, JSON-LD Framing, and JSON-LD Graph Normalization to tackle these canonicalization/normalization issues and did the first four implementations of the specification in C++, JavaScript, PHP, and Python. The JSON-LD Graph Normalization problem itself took roughly 3 months of concentrated 70+ hour work weeks and dozens of iterations by Dave Longley to produce an algorithm that would work. To this day, I remain convinced that there are only a handful of people on this planet with a mind that is capable of solving those problems. He was the first and only one that cracked those problems. It requires a sort of raw intelligence, persistence, and ability to constantly re-evaluate the problem solving approach you’re undertaking in a way that is exceedingly rare.

Dave and I continued to refine JSON-LD, with him working on the API and me working on the syntax for the better part of 2010 and early 2011. When MongoDB started really taking off in 2010, the final piece just clicked into place. We had the makings of a Linked Data technology stack that would work for web developers.

Toward Stability

Around April 2011, we launched the JSON-LD Community Group and started our public push to try and put the specification on a standards track at the World Wide Web Consortium (W3C). It was at this point that Gregg Kellogg joined us to help refine the rough edges of the specification and provide his input. For those of you that don’t know Gregg: I know of no other person that has done complete implementations of the entire stack of Semantic Web technologies. He has Ruby implementations of quad stores, TURTLE, N3, NQuads, SPARQL engines, RDFa, JSON-LD, etc. If it’s associated with the Semantic Web in any way, he’s probably implemented it. His depth of knowledge of RDF-based technologies is unmatched, and he focused that knowledge on JSON-LD to help us hone it into what it is today. Gregg helped us with key concepts, specification editing, implementations, tests, and a variety of input that left its mark on JSON-LD.

Markus Lanthaler also joined us around the same time (2011) that Gregg did. The story of how Markus got involved with the work is probably my favorite way of explaining how the standards process should work. Markus started giving us input while he was a master’s student at Technische Universität Graz. He didn’t have a background in standards, he didn’t know anything about the W3C process or specification editing, and he was as green as one can be with respect to standards creation. We all start where he did, but I don’t know of many people that became as influential as quickly as Markus did.

Markus started by commenting on the specification on the mailing list, then quickly started joining calls. He’d raise issues and track them, he started on his PHP implementation, then he made minor edits to the specifications, then major edits, until he earned our trust and became lead specification editor for the JSON-LD API specification and one of the editors of the JSON-LD Syntax specification. There was no deliberate process we used to make him lead editor; it just sort of happened based on all the hard work he was putting in, which is the way it should be. He went through a growth curve that normally takes most people 5 years in about a year and a half, and it happened exactly how it should happen in a meritocracy. He earned it and impressed us all in the process.

The Final Stretch

Of special mention as well is Niklas Lindström, who joined us in 2012, was on almost every JSON-LD teleconference from then on, and provided key input to the specifications. Aside from being incredibly smart and talented, Niklas is particularly gifted in his ability to find a balanced technical solution that moves the group forward when it finds itself deadlocked on a particular decision. Paul Kuykendall joined us toward the very end of the JSON-LD work in early 2013 and provided fresh eyes on what we were working on. Aside from being very level-headed, Paul helped us understand what was important to web developers and what wasn’t as the process wound down. It’s hard to find perspective as work wraps up on a standard, and luckily Paul joined us at exactly the right moment to provide that insight.

There were literally hundreds of people that provided input on the specification throughout the years, and I’m very appreciative of that input. However, without this core of 4-6 people, JSON-LD would never have had a chance. I will never be able to find the words to express how deeply appreciative I am of Dave, Markus, Gregg, Niklas, and Paul, who did the work on a primarily volunteer basis. At this moment in time, the Web is at the core of the way humankind communicates, and the most ardent protectors of this public good create standards to ensure that the Web continues to serve all of us. It boils my blood to know that they will go largely unrewarded by society for creating something that will benefit hundreds of millions of people, but that’s another post for another time.

The next post in this series tells the story of how JSON-LD was nearly eliminated on several occasions by its critics and proponents while on its journey toward a web standard.

Web Payments and the World Banking Conference

The standardization group for all of the banks in the world (SWIFT) was kind enough to invite me to speak at the world’s premier banking conference about the Web Payments work at the W3C. The conference, called SIBOS, happened last week and brought together 7,000+ people from banks and financial institutions around the world. The event was held in Dubai this year. They wanted me to present on the new Web Payments work being done at the World Wide Web Consortium (W3C), including the work we’re doing with PaySwarm, Mozilla, the Bitcoin community, and Ripple Labs.

If you’ve never been to Dubai, I highly recommend visiting. It is a city of extremes. It contains the highest density of stunning, award-winning skyscrapers, while the largest expanses of desert loom just outside of the city. Man-made islands dot the coastline, willed into shapes like that of a multi-mile-wide palm tree or massive lumps of stone, sand, steel, and glass resembling all of the countries of the world. I saw the largest in-mall aquarium in the world and ice skated in 105-degree weather. Poverty lines the outskirts of Dubai while ATMs that vend gold can be found throughout the city. Lamborghinis, Ferraris, Maybachs, and Porsches roared down the densely packed highways while plants struggled to survive in the oppressive heat and humidity.

The extravagances nestle closely against the other extremes of Dubai: a history of indentured servitude, women’s rights issues, zero-tolerance drug possession laws, and political self-censorship of the media. In a way, it was the perfect location for the world’s premier banking conference. The capital it took to achieve everything that Dubai had to offer flowed, at some point in time, through the banks represented at the conference.

The Structure of the Conference

The conference was broken into two distinct areas. The more traditional banking side was on the conference floor and resembled what you’d expect of a well-established trade show. It was large, roughly the size of four football fields. Innotribe, the less traditional and much hipper innovation track, was outside of the conference hall and focused on cutting-edge thinking, design, and new technologies. The banks are late to the technology game, but that’s to be expected of any industry whose history can be measured in centuries. Innotribe is trying to fix the problem of innovation in banking.

“Customers”

One of the most surprising things that I learned during the conference was how many different classes of customers a bank has and which of those classes are most profitable to the banks. Many people are under the false impression that the most valuable customer a bank can have is the one that walks into one of its branches and opens an account. In general, the technology industry tends to treat the individual customer as the primary motivator for everything that it does. That impression, with respect to the banking industry, was shattered when I heard the head of an international bank utter the following about banking branches: “80% of our customers add nothing but sand to our bottom line.” The banker was alluding to the perception that the most significant thing most customers bring into a branch is the sand on the bottom of their shoes. The implication is that most customers are not very profitable to banks and are thus not a top priority. This summarizes the general tone of the conference with respect to customers, at least among the managers of these financial institutions.

Fundamentally, a bank’s motives are not aligned with most of its customers’ needs because that’s not where it makes the majority of its money. Most of a bank’s revenue comes from activities like short-term lending, utilizing leverage against deposits, float-based leveraging, high-frequency trading, derivatives trading, and other financial exercises that are far removed from what most people in the world think of when they picture the activities one does at a bank.

For example, it has been possible to do realtime payments over the current banking network for a while now. The standards and technology exist to do so within the vast majority of the bank systems in use today. In fact, enabling this has been put to a vote for the last five years in a row. Every time it has been up for a vote, the banks have voted against it. The banks make money on the day-to-day float against the transfers, so the longer it takes to complete a transfer, the more money the banks make.

I did hear a number of bankers state publicly that they cared about the customer experience and wanted to improve upon it. However, those statements rang pretty hollow when it came to the product focus on the show floor, which revolved around B2B software, high-frequency trading protocols, high net-value transactions, etc. There were a few customer-focused companies, but they were dwarfed by the size of the major banks and financial institutions in attendance at the conference.

The Standards Team

I was invited to the conference by two business units within SWIFT. The first was the innovation group inside of SWIFT, called Innotribe. The second was the core standards group at SWIFT. There are over 6,900 banks that participate in the SWIFT network. Their standards team is very big, many times larger than the W3C, and extremely well funded. The primary job of the standards team at SWIFT is to create standards that help its member companies exchange financial information with the minimum amount of friction. Their flagship product is a standard called ISO 20022, a 3,463-page document that outlines every sort of financial message that the SWIFT network supports today.

The SWIFT standards team is a very helpful group of people trying their hardest to pull their membership into the future. They fundamentally understand the work that we’re doing in the Web Payments group and are interested in participating more deeply. They know that technology is going to eventually disrupt their membership, and they want to make sure that there is a transition path for that membership, even if it would like to view these new technologies, like Bitcoin, PaySwarm, and Ripple, as interesting corner cases.

In general, the banks don’t view technical excellence as a fundamental part of their success. Most view personal relationships as the fundamental thing that keeps their industry ticking. Most bankers come from an accounting background of some kind and don’t think of technology as something that can replace the sort of work that they do. This means that standards and new technologies almost always take a back seat to more profitable endeavors, such as implementing proprietary high-frequency trading and derivatives trading platforms (as opposed to customer-facing systems like PaySwarm).

SWIFT’s primary customers are the banks, not the banks’ customers. Compare this with the primary customer of most Web-based organizations and the W3C, which is the individual. Since SWIFT is primarily designed to serve the banks, and banks make most of their money doing things like derivatives and high-frequency trading, there really is no champion for the customer inside the banking organizations. This is why using your bank is a fairly awful experience. Speaking from a purely capitalistic standpoint, individuals that have less than a million dollars in deposits are not a priority.

Hobbled by Complexity

I met with over 30 large banks while I was at SIBOS and had a number of low-level discussions with their technology teams. The banking industry seems to be crippled by the complexity of its current systems. Minor upgrades cost millions of dollars due to the requirement to maintain backwards compatibility. For example, at one point during the conference, it was explained that there was a proposal to make the last digit of an IBAN number a particular value if the organization was not a bank. The push-back on the proposal was so great that it was never implemented, since it would have cost thousands of banks several million dollars each to implement the feature. Many of the banks are still running systems as part of their core infrastructure that were created in the 1980s, written in COBOL or Fortran, and are well past their initially intended lifespans.

A bank’s legacy systems mean that they have a very hard time innovating on top of their current architecture, and it could be that launching a parallel financial systems architecture would be preferable to broadly interfacing with the banking systems in use today. Startups launching new core financial services are at a great advantage as long as they limit the number of places that they interface with these old technology infrastructures.

Commitment to Research and Development

The technology utilized in the banking industry is, from a technology industry point of view, archaic. For example, many of the high-frequency trading messages are short ASCII text strings that look like this:

8=FIX.4.1#9=112#35=0#49=BRKR#56=INVMGR#34=235#52=19980604-07:58:28#112=19980604-07:58:28#10=157#

Imagine anything like that being accepted as a core part of the Web. Messages are kept to very short sequences because they must be processed in less than 5 microseconds. There is no standard binary protocol, even for high-frequency trading. Many of the systems that are core to a bank’s infrastructure pre-date the Web, sometimes by more than a decade or two. At most major banking institutions, there is very little R&D investment into new models of value transfer like PaySwarm, Bitcoin, or Ripple. In a room of roughly 100 bank technology executives, when asked how many of them had an R&D or innovation team, only around 5% of the people in the room raised their hands.
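
For the curious, here is a rough decoding of the tag=value pairs in the message above, using the standard FIX tag names as I understand them; treat it as an illustration rather than an authoritative decoding:

        {
          "BeginString (8)": "FIX.4.1",
          "BodyLength (9)": "112",
          "MsgType (35)": "0 (Heartbeat)",
          "SenderCompID (49)": "BRKR",
          "TargetCompID (56)": "INVMGR",
          "MsgSeqNum (34)": "235",
          "SendingTime (52)": "19980604-07:58:28",
          "TestReqID (112)": "19980604-07:58:28",
          "CheckSum (10)": "157"
        }

Terse, fast to parse, and utterly opaque to anyone without a field reference at hand.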

Compare this with the technology industry, which devotes a significant portion of their revenue to R&D activities and tries to continually disrupt their industry through the creation of new technologies.

No Shared Infrastructure

The technology utilized in the banking industry is typically created and managed in-house. It is also highly fractured; the banks share the messaging data model, but that’s about it. The SWIFT data model is implemented over and over again by thousands of banks. There is no set of popular open source software that one can use to do banking, which means that almost every major bank writes their own software. There is a high degree of waste when it comes to technology re-use in the banking industry.

Compare this with how much of the technology industry shares in the development of core infrastructure like operating systems, Web servers, browsers, and open source software libraries. This sort of shared development model does not exist in the banking world and the negative effects of this lack of shared architecture are evident in almost every area of technology associated with the banking world.

Fear of Technology Companies

The banks are terrified of the thought of Google, Apple, or Amazon getting into the banking business. These technology companies have hundreds of millions of customers, deep brand trust, and have shown that they can build systems to handle complexity with relative ease. At one point it was said that if Apple, Google, or Amazon wanted to buy Visa, they could. Then in one fell swoop, one of these technology companies could remove one of the largest networks that banks rely on to move money in the retail space.

While all of the banks seemed to be terrified of being disrupted, there seemed to be very little interest in doing any sort of drastic change to their infrastructure. In many cases, the banks are just not equipped to deal with the Web. They tend to want to build everything internally and rarely acquire technology companies to improve their technology departments.

There was also a relative lack of executives at the banks I spoke with who were able to carry on a fairly high-level conversation about Web technology. It demonstrated that it is going to be some time yet before the financial industry can understand the sort of disruption that things like PaySwarm, Bitcoin, and Ripple could trigger. Many know that a large chunk of jobs are going to go away, but those same individuals either do not have the skill set to react to the change or are too busy with paying customers to focus on the coming disruption.

A Passing Interest in Disruptive Technologies

There was a tremendous amount of interest in Bitcoin, PaySwarm, and Ripple and how they could disrupt banking. However, much like the music industry before it, all but a few of the banks stopped short of seriously exploring how they could adopt or use the technology. Many of the conversations ended with a general malaise related to technological disruption, with no real motivation to dig deeper lest they find something truly frightening. Most executives would express how nervous they were about competition from technology companies, but were not willing to make any deep technological changes that would undermine their current revenue streams. I saw parallels between many of the bank executives I spoke with, the innovator’s dilemma, and the way music industry executives I had worked with in the early 2000s reacted to the rise of Napster, peer-to-peer file trading networks, and digital music.

Many higher-level executives were dismissive about the sorts of lasting changes Web technologies could have on their core business, often to the point of being condescending when they spoke about technologies like Bitcoin, PaySwarm, and Ripple. Most arguments boiled down to the customer needing to trust some financial institution to carry out the transaction, demonstrating that they did not fundamentally understand the direction in which technologies like Bitcoin and Ripple are headed.

Lessons Learned

We were able to get the message out about the sort of work that we’re doing at W3C when it comes to Web Payments and it was well received. I have already been asked to present at next year’s conference. There is a tremendous opportunity here for the technology sector to either help the banks move into the future, or to disrupt many of the services that have been seen as belonging to the more traditional financial institutions. There is also a big opportunity for the banks to seize the work that is being done in Web Payments, Bitcoin, and Ripple, and apply it to a number of the problems that they have today.

The trip was a big success in that the Web Payments group now has very deep ties into SWIFT, major banks, and other financial institutions, and many of those institutions expressed a strong desire to collaborate on future Web Payments work. The financial institutions we spoke with thought that many of these technologies were 10 years away from affecting them, so there was no real sense of urgency to integrate the technology. I’d put the timeline closer to 3-4 years than 10. That said, there was general agreement that these technologies mattered. The lines of communication are now more open than they used to be between the traditional financial industry and the Web Payments group at W3C. That’s a big step in the right direction.

Interested in becoming a part of the Web Payments work, or just peeking in from time to time? It’s open to the public. Join here.

The Downward Spiral of Microdata

Full disclosure: I’m the chair of the RDFa Working Group and have been heavily involved during the RDFa and Microdata standardization initiatives. I am biased, but also understand all of the nuanced decisions that were made during the creation of both specifications.

Support for the Microdata API has just been removed from WebKit (Apple Safari). Support for the Microdata API was also removed from Blink (Google Chrome) a few months ago. This means that Apple Safari and Google Chrome will no longer support the Microdata API. Removal of the feature from these browsers also points to a likely future for Microdata: less and less support.

In addition, this discussion on the Blink developer list demonstrates that there isn’t anyone to pick up the work of maintaining the Microdata implementation. Microdata has also been ripped out of the main HTML5 specification at the W3C, with the caveat that the Microdata specification will only continue “if editorial resources can be found”. Translation: if an editor doesn’t step up to edit the Microdata specification, Microdata is dead at W3C. It just takes someone to raise their hand to volunteer, so why is it that out of a group of hundreds of people, no one has stepped up to maintain, create a test suite for, and push the Microdata specification forward?

A number of observers have been surprised by these events, but for those that have been involved in the month-to-month conversation around Microdata, it makes complete sense. Microdata doesn’t have an active community supporting it. It never really did. For a Web specification to be successful, it needs an active community around it that is willing to do the hard work of building and maintaining the technology. RDFa has that in spades, Microdata does not.

Microdata was, primarily, a shot across the bow at RDFa. The warning worked: the RDFa community reacted by creating RDFa Lite, which matches Microdata feature-for-feature while also supporting things that Microdata is incapable of doing. The existence of RDFa Lite left the HTML Working Group in an awkward position. Publishing two specifications that do the exact same thing in almost the exact same way is a position that no standards organization wants to be in. At that point, it became a race to see which community could create the developer tools and support the web developers that were marking up pages.

Microdata, to this day, still doesn’t have a specification editor, an active community, a solid test suite, or any of the other things that are necessary to become a world class technology. To be clear, I’m not saying Microdata is dying (4 million out of 329 million domains use it), just that not having these basic things in place will be very problematic for the future of Microdata.

To put that in perspective, HTML5+RDFa 1.1 will become an official W3C Recommendation (world standard) next Thursday. There was overwhelming support from the W3C member companies to publish it as a world standard. There have been multiple specification editors for RDFa throughout the years, there are hundreds of active people in the community integrating RDFa into pages across the Web, there are 7 implementations of RDFa in a variety of programming languages, there is a mailing list, a website, and an IRC channel dedicated to answering questions for people learning RDFa, and there is a test suite with 800 tests covering RDFa in 6 markup languages (HTML4, HTML5, XML, SVG, XHTML1, and XHTML5). If you want to build a solution on a solid technology, with a solid community and solid implementations, RDFa is that solution.

JSON-LD is the Bee’s Knees

Full disclosure: I’m one of the primary authors and editors of the JSON-LD specification. I am also the chair of the group that created JSON-LD and have been an active participant in a number of Linked Data initiatives: RDFa (chair, author, editor), JSON-LD (chair, co-creator), Microdata (primary opponent), and Microformats (member, haudio and hvideo microformat editor). I’m biased, but also well informed.

JSON-LD has been getting a great deal of good press lately. It was adopted by Google, Yahoo, Yandex, and Microsoft for use in schema.org. The PaySwarm universal payment protocol is based on it. It was also integrated with Google’s Gmail service and the open social networking folks have also started integrating it into the Activity Streams 2.0 work.

That all of these positive adoption stories exist was precisely the reason Shane Becker’s post on why JSON-LD is an Unneeded Spec was so surprising. If you haven’t read it yet, you may want to, as the rest of this post will dissect the arguments he makes (it’s a pretty quick 5-minute read). The post is a broad-brush opinion piece based on a number of factual errors and misinformed opinions. I’d like to clear up those errors in this blog post and underscore some of the reasons JSON-LD exists and how it has been developed.

A theatrical interpretation of the “JSON-LD is Unneeded” blog post

Shane starts with this claim:

Today I learned about a proposed spec called JSON-LD. The “LD” is for linked data (Linked Data™ in the Uppercase “S” Semantic Web sense).

When I started writing the original JSON-LD specification, one of the goals was to try and merge lessons learned in the Microformats community with lessons learned during the development of RDFa and Microdata. This meant figuring out a way to marry the lowercase semantic web with the uppercase Semantic Web in a way that was friendly to developers. For developers that didn’t care about the uppercase Semantic Web, JSON-LD would still provide a very useful data structure to program against. In fact, Microformats, which are the poster-child for the lowercase semantic web, were supported by JSON-LD from day one.

Shane’s article is misinformed with respect to the assertion that JSON-LD is solely for the uppercase Semantic Web. JSON-LD is mostly for the lowercase semantic web, the one that developers can use to make their applications exchange and merge data with other applications more easily. JSON-LD is also for the uppercase Semantic Web, the one that researchers and large enterprises are using to build systems like IBM’s Watson supercomputer, search crawlers, Gmail, and open social networking systems.

Linked data. Web sites. Standards. Machine readable.
Cool. All of those sound good to me. But they all sound familiar, like we’ve already done this before. In fact, we have.


We haven’t done something like JSON-LD before. I wish we had because we wouldn’t have had to spend all that time doing research and development to create the technology. When writing about technology, it is important to understand the basics of a technology stack before claiming that we’ve “done this before”. An astute reader will notice that at no point in Shane’s article is any text from the JSON-LD specification quoted, just the very basic introductory material on the landing page of the website. More on this below.

Linked data
That’s just the web, right? I mean, we’ve had the <a href> tag since literally the beginning of HTML / The Web. It’s for linking documents. Documents are a representation of data.

Speaking as someone that has been very involved in the Microformats and RDFa communities: yes, it’s true that the document-based Web can be used to publish Linked Data. The problem is that the standard way of expressing a followable link to another piece of data did not carry over to the data-based Web. That is, most JSON-based APIs don’t have a standard way of encoding a hyperlink.

The other implied assertion in the statement above is that the document-based Web is all we need. If that were true, sending HTML documents to Web applications would be all we needed. Web developers know that this isn’t the case today for a number of obvious reasons. We send JSON data back and forth on the Web when we need to program against services like Facebook, Google, or Twitter. JSON is a very useful data format for machine-to-machine data exchange. The problem is that JSON has no standard way of doing a number of things we take for granted on the document-based Web, like expressing links or the types of values (such as dates and times), along with other features that are very useful on a data-based Web. This is one of the problems that JSON-LD addresses.
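
As a hedged sketch of what that looks like in practice (the schema.org terms are just one example vocabulary, and the URLs are placeholders), JSON-LD layers links and typed values on top of ordinary JSON via the context:

        {
          "@context": {
            "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
            "created": {"@id": "http://schema.org/dateCreated", "@type": "http://www.w3.org/2001/XMLSchema#dateTime"}
          },
          "homepage": "http://example.com/",
          "created": "2013-08-11T14:00:00Z"
        }

The values remain plain JSON strings; the context is what tells a consuming application that homepage is a followable link and that created is a dateTime rather than just text.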

Web sites
If it’s not wrapped in HTML and viewable in a browser, is it really a website? JSON isn’t very useful in the browser by itself. It’s not style-able. It’s not very human-readable. And worst of all, it’s not clickable.

Websites are composed of many parts. It’s a weak argument to say that a site isn’t a real website just because it is mainly composed of data that isn’t in HTML and isn’t viewable in a browser. The vast majority of websites like Twitter and Facebook are composed of data and API calls with a relatively thin varnish of HTML on top. JSON is the primary way that applications interact with these and other data-driven websites. It’s almost guaranteed these days that any company with a popular API uses JSON in its Web service protocol.

Shane’s argument here is pretty confused. It assumes that the primary use of JSON-LD is to express data in an HTML page. Sure, JSON-LD can do that, but focusing on that brush stroke is missing the big picture. The big picture is that JSON-LD allows applications that use it to share data and interoperate in a way that is not possible with regular JSON, and it’s especially useful when used in conjunction with a Web service or a document-based database like MongoDB or CouchDB.

Standards based
To their credit, JSON-LD did license their website content Creative Commons CC0 Public Domain. But, the spec itself isn’t. It’s using (what seems to be) a W3C boilerplate copyright / license. Copyright © 2010-2013 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.


Nope. The JSON-LD specification has been released under a Creative Commons Attribution 3.0 license multiple times in the past, and it will be released under a Creative Commons license again, most probably CC0. The JSON-LD specification was developed in a W3C Community Group using a Creative Commons license and then released to be published as a Web standard via W3C using their W3C Community Final Specification Agreement (FSA), which allows the community to fork the specification at any point in time and publish it under a different license.

When you publish a document through the W3C, they have their own copyright, license, and patent policy associated with the document being published. There is a legal process in place at W3C that asserts that companies can implement W3C published standards in a patent and royalty-free way. You don’t get that with CC0, in fact, you don’t get any such vetting of the technology or any level of patent and royalty protection.

What we have with JSON-LD is better than what is proposed in Shane’s blog post. You get all of the benefits of having W3C member companies vet the technology for technical and patent issues while also being able to fork the specification at any point in the future and publish it under a license of your choosing as long as you state where the spec came from.

Machine readable
Ah… “machine readable”. Every couple of years the current trend of what machine readable data should look like changes (XML/JSON, RSS/Atom, xml-rpc/SOAP, rest/WS-*). Every time, there are the same promises. This will solve our problems. It won’t change. It’ll be supported forever. Interoperability. And every time, they break their promises. Today’s empires, tomorrow’s ashes.


At no point has any core designer of JSON-LD claimed 1) that JSON-LD will “solve our problems” (or even your particular problem), 2) that it won’t change, and 3) that it will be supported forever. These are straw-man arguments. The current consensus of the group is that JSON-LD is best suited to a particular class of problems and that some developers will have no need for it. JSON-LD is guaranteed to change in the future to keep pace with what we learn in the field, and we will strive for backward compatibility for features that are widely used. Without modification, standardized technologies have a shelf life of around 10 years, 20-30 if they’re great. The designers of JSON-LD understand that, like the Web, JSON-LD is just another grand experiment. If it’s useful, it’ll stick around for a while, if it isn’t, it’ll fade into history. I know of no great software developer or systems designer that has ever made these three claims and been serious about it.

We do think that JSON-LD will help Web applications interoperate better than they do with plain ol’ JSON. For an explanation of how, there is a nice video introducing JSON-LD.

With respect to the “Today’s empires, tomorrow’s ashes” cynicism, we’ve already seen a preview of the sort of advances that Web-based machine-readable data can unleash. Google, Yahoo!, Microsoft, Yandex, and Facebook all use a variety of machine-readable data technologies that have only recently been standardized. These technologies allow for faster, more accurate, and richer search results. They are also the driving technology for software systems like Watson. These systems exist because there are people plugging away at the hard problem of machine readable data in spite of cynicism directed at past failures. Those failures aren’t ashes, they’re the bedrock of tomorrow’s breakthroughs.

Instead of reinventing everything (over and over again), let’s use what’s already there and what already works. In the case of linked data on the web, that’s html web pages with clickable links between them.

Microformats, Microdata, and RDFa do not work well for data-based Web services. Using Linked Data with data-based Web services is one of the primary reasons that JSON-LD was created.

For open standards, open license are a deal breaker. No license is more open than Creative Commons CC0 Public Domain + OWFa. (See also the Mozilla wiki about standards/license, for more.) There’s a growing list of standards that are already using CC0+OWFa.

I think there might be a typo here, but if not, I don’t understand why open licenses are a deal breaker for open standards, especially things like the W3C FSA or the Creative Commons licenses we’ve published the JSON-LD spec under. Additionally, CC0 + OWFa might be neat. Shane’s article was the first time that I had heard of OWFa, and I’d be a proponent of pushing it in the group if it granted more freedom to the people using and developing JSON-LD than the current set of agreements we have in place. After a quick read of the legal text of the OWFa, I can’t see what CC0 + OWFa buys us over CC0 + the W3C patent attribution. If someone would like to make the benefits clear, I could take a proposal to switch to CC0 + OWFa to the JSON-LD Community Group and see if there is interest in using that license in the future.

No process is more open than a publicly editable wiki.

A counter-point to publicly accessible forums

Publicly editable wikis are notorious for edit wars; they are not a panacea. Just because you have a wiki does not mean you have an open community. For example, the Microformats community was notorious for having a different class of unelected admins that would meet in San Francisco and make decisions about the operation of the community. This seemingly innocuous practice would creep its way into the culture and technical discussion on a regular basis, leading to community members being banned from time to time. Similarly, Wikipedia has had numerous issues with publicly editable wikis and the behavior of its admins.

Depending on how you define “open”, there are a number of processes that are far more open than a publicly editable wiki. For example, the JSON-LD specification development process is completely open to the public, based on meritocracy, and consensus-driven. The mailing list is open. The bug tracker is open. We have weekly design teleconferences where all of the audio is recorded and minuted. We have these teleconferences to this day and will continue to have them into the future because we make transparency a priority. JSON-LD is, as far as I know, the first specification in the world developed with all of the previously described operating guidelines as standard practice.

(Mailing lists are toxic.)

A community is as toxic as its organizational structure enables it to be. The JSON-LD community is based on meritocracy, consensus, and has operated in a very transparent manner since the beginning (open meetings, all calls are recorded and minuted, anyone can contribute to the spec, etc.). This has, unsurprisingly, resulted in a very pleasant and supportive community. That said, there is no perfect communication medium. They’re all lossy and they all have their benefits and drawbacks. Sometimes, when you combine multiple communication channels as a part of how your community operates, you get better outcomes.

Finally, for machine readable data, nothing has been more widely adopted by publishers and consumers than microformats. As of June 2012, microformats represents about 70% of all of the structured data on the web. And of that ~70%, the vast majority was h-card and xfn. (All RDFa is about 25% and microdata is a distant third.)

Microformats are good if all you need to do is publish your basic contact and social information on the Web. If you want to publish detailed product information, financial data, medical data, or address other more complex scenarios, Microformats won’t help you. There have been no new Microformats released in the last 5 years, and the mailing list traffic has been almost non-existent for about as long. From what I can tell, most everyone has moved on to RDFa, Microdata, or JSON-LD.

There are a few people working on Microformats 2, but I haven’t seen it provide anything that is not already provided by existing solutions, which have the added benefit of being W3C standards or being backed by major companies like Google, Facebook, Yahoo!, Microsoft, and Yandex.

Maybe it’s because of the ease of publishing microformats. Maybe it’s the open process for developing the standards. Maybe it’s because microformats don’t require any additions to HTML. (Both RDFa and microdata required the use of additional attributes or XML namespaces.) Whatever the reason, microformats has the most uptake. So, why do people keep trying to reinvent what microformats is already doing well?

People aren’t reinventing what Microformats are already doing well, they’re attempting to address problems that Microformats do not solve.

For example, one of the reasons that Google adopted JSON-LD is because markup was much easier in JSON-LD than it was in Microformats, as evidenced by the example below:

Back to JSON-LD. The “Simple Example” listed on the homepage is a person object representing John Lennon. His birthday and wife are also listed on the object.

        {
          "@context": "http://json-ld.org/contexts/person.jsonld",
          "@id": "http://dbpedia.org/resource/John_Lennon",
          "name": "John Lennon",
          "born": "1940-10-09",
          "spouse": "http://dbpedia.org/resource/Cynthia_Lennon"
        }

I look at this and see what should have been HTML with microformats (h-card and xfn). This is actually a perfect use case for h-card and xfn: a person and their relationship to another person. Here’s how it could’ve been marked up instead.

        <div class="h-card">
          <a href="http://dbpedia.org/resource/John_Lennon" class="u-url u-uid p-name">John Lennon</a>
          <time class="dt-bday" datetime="1940-10-09">October 9<sup>th</sup>, 1940</time>
          <a rel="spouse" href="http://dbpedia.org/resource/Cynthia_Lennon">Cynthia Lennon</a>.
        </div>

I’m willing to bet that most people familiar with JSON will find the JSON-LD markup far easier to understand and get right than the Microformats-based equivalent. In addition, sending the Microformats markup to a REST-based Web service would be very strange. Alternatively, sending the JSON-LD markup to a REST-based Web service would be far more natural for a modern day Web developer.

This HTML can be easily understood by machine parsers and human parsers. Microformats 2 parsers already exists for: JavaScript (in the browser), Node.js, PHP and Ruby. HTML + microformats2 means that machines can read your linked data from your website and so can humans. It means that you don’t need an “API” that is something other than your website.

You have been able to do the same thing, and much more, using RDFa and Microdata for far longer (since 2006) than you have been able to do it in Microformats 2. Let’s be clear: there is no significant advantage to using Microformats 2 over RDFa or Microdata. In fact, there are a number of disadvantages to using Microformats 2 at this point, like little to no support from the search companies, very little software tooling, and an anemic community (of which I am a member), for starters. Additionally, HTML + Microformats 2 does not address the Web service API issue at all.

Please don’t waste time and energy reinventing all of the wheels. Instead, please use what already works and what works the webby way.


Do not miss the irony of this statement. RDFa has been doing what Microformats 2 does today since 2006, and it’s a Web standard. Even if you don’t like RDFa 1.0, RDFa 1.1, RDFa Lite 1.1, and Microdata all came before Microformats 2. To assert that wheels should not be reinvented, and then promote Microformats 2, which was created long after a number of well-established solutions already existed, is quite a strange position to take.

Conclusion

JSON-LD was created by people that have been directly involved in the Linked Data, lowercase semantic web, uppercase Semantic Web, Microformats, Microdata, and RDFa work. It has proven to be useful to them. There are a number of very large technology companies that have adopted JSON-LD, further underscoring its utility. Expect more big announcements in the next six months. The JSON-LD specifications have been developed in a radically open and transparent way, and the document copyright and licensing provisions are equally open. I hope that this blog post has helped clarify most of the misinformed opinions in Shane Becker’s blog post.

Most importantly, cynicism will not solve the problems that we face on the Web today. Hard work will, and there are very few communities that I know of that work harder and more harmoniously than the excellent volunteers in the JSON-LD community.

If you would like to learn more about Linked Data, a good video introduction exists. If you want to learn more about JSON-LD, there is a good video introduction to that as well.

Secure Messaging vs. Javascript Object Signing and Encryption

The Web Payments group at the World Wide Web Consortium (W3C) is currently performing a thorough analysis on the MozPay API. The first part of the analysis examined the contents of the payment messages. This is the second part of the analysis, which focuses on whether the use of the JavaScript Object Signing and Encryption (JOSE) group’s solutions to achieve message security is adequate, or if the Web Payments group’s solutions should be used instead.

The Contenders

The IETF JOSE Working Group is actively standardizing the following specifications for the purposes of adding message security to JSON:

JSON Web Algorithms (JWA)
Details the cryptographic algorithms and identifiers that are meant to be used with the JSON Web Signature (JWS), JSON Web Encryption (JWE), JSON Web Token (JWT), and JSON Web Key (JWK) specifications. For example, when specifying an encryption algorithm, a JSON key/value pair that has alg as the key may have HS256 as the value, which means HMAC using the SHA-256 hash algorithm.
JSON Web Key (JWK)
Details a data structure that represents one or more cryptographic keys. If you need to express one of the many types of cryptographic key types in use today, this specification details how you do that in a standard way.
JSON Web Token (JWT)
Defines a way of representing claims such as “Bob was born on November 15th, 1984”. These claims are digitally signed and/or encrypted using either the JSON Web Signature (JWS) or JSON Web Encryption (JWE) specifications.
JSON Web Encryption (JWE)
Defines a way to express encrypted content using JSON-based data structures. Basically, if you want to encrypt JSON data so that only the intended receiver can read the data, this specification tells you how to do it in an interoperable way.
JSON Web Signature (JWS)
Defines a way to digitally sign JSON data structures. If your application needs to be able to verify the creator of a JSON data structure, you can use this specification to do so.

The W3C Web Payments group is actively standardizing a similar specification for the purpose of adding message security to JSON messages:

Secure Messaging (code named: HTTP Keys)
Describes a simple, decentralized security infrastructure for the Web based on JSON, Linked Data, and public key cryptography. This system enables Web applications to establish identities for agents on the Web, associate security credentials with those identities, and then use those security credentials to send and receive messages that are both encrypted and verifiable via digital signatures.

Both groups are relying on technology that has existed and been used for over a decade to achieve secure communications on the Internet (symmetric and asymmetric cryptography, public key infrastructure, X509 certificates, etc.). The key differences between the two have to do more with flexibility, implementation complexity, and how the data is published on the Web and used between systems.

Basic Differences

In general, the JOSE group is attempting to create a flexible/generalized way of expressing cryptography parameters in JSON. They are then using that information and encrypting or signing specific data (called claims in the specifications).

The Web Payments group’s specification achieves the same thing, but without trying to be as generalized as the JOSE approach. Flexibility and generalization tend to 1) make the ecosystem more complex than it needs to be for 95% of the use cases, 2) make implementations harder to security audit, and 3) make it more difficult to achieve interoperability between all implementations. The Secure Messaging specification attempts to outline a single best practice that will work for 95% of the applications out there. The 5% of Web applications that need to do more than the Secure Messaging spec can use the JOSE specifications. The Secure Messaging specification is also more Web-y. That Web-y nature gives us a number of benefits, such as a Web-scale public key infrastructure as a pleasant side effect, which we will get into below.

JSON-LD Advantages over JSON

Fundamentally, the Secure Messaging specification relies on the Web and Linked Data to remove some of the complexity that exists in the JOSE specs while also achieving greater flexibility from a data model perspective. Specifically, the Secure Messaging specification utilizes Linked Data via a new standards-track technology called JSON-LD to allow anyone to build on top of the core protocol in a decentralized way. JSON-LD data is fundamentally more Web-y than JSON data. Here are the benefits of using JSON-LD over regular JSON:

  • A universal identifier mechanism for JSON objects via the use of URLs.
  • A way to disambiguate JSON keys shared among different JSON documents by mapping them to URLs via a context.
  • A standard mechanism in which a value in a JSON object may refer to a JSON object on a different document or site on the Web.
  • A way to associate datatypes with values such as dates and times.
  • The ability to annotate strings with their language. For example, the word ‘chat’ means something different in English and French and it helps to know which language was used when expressing the text.
  • A facility to express one or more directed graphs, such as a social network, in a single document. Graphs are the native data structure of the Web.
  • A standard way to map external JSON application data to your application data domain.
  • A deterministic way to generate a hash on JSON data, which is helpful when attempting to figure out if two data sources are expressing the same information.
  • A standard way to digitally sign JSON data.
  • A deterministic way to merge JSON data from multiple data sources.

Plain old JSON, while incredibly useful, does not allow you to do the things mentioned above in a standard way. There is a valid argument that applications may not need this amount of flexibility. For those applications, JSON-LD does not require any of the features above to be used and does not require the JSON data to be modified in any way. So people that want to remain in the plain ol’ JSON bucket can do so without the need to jump into the JSON-LD bucket with both feet.
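
To make the first few bullet points concrete, here is a small sketch of how a context upgrades plain JSON into JSON-LD. The vocabulary URLs below (schema.org and Dublin Core terms) are just illustrative choices; a real application would reference whichever vocabulary or shared context fits its domain:

{
  "@context": {
    "name": "http://schema.org/name",
    "homepage": { "@id": "http://schema.org/url", "@type": "@id" },
    "created": { "@id": "http://purl.org/dc/terms/created",
                 "@type": "http://www.w3.org/2001/XMLSchema#dateTime" }
  },
  "@id": "http://example.com/people/jane",
  "name": "Jane Doe",
  "homepage": "http://example.com/jane/",
  "created": "2013-08-04T17:39:53Z"
}

The @id gives the object a universal identifier, the context maps the short keys to unambiguous URLs, the homepage value is marked as a reference to another resource, and the created value carries an explicit datatype.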

JSON Web Algorithms vs. Secure Messaging

The JSON Web Algorithms specification details the cryptographic algorithms and identifiers that are meant to be used with the JSON Web Signature (JWS), JSON Web Encryption (JWE), JSON Web Token (JWT), and JSON Web Key (JWK) specifications. For example, when specifying an encryption algorithm, a JSON key/value pair that has alg as the key may have HS256 as the value, which means HMAC using the SHA-256 hash algorithm. The specification is 70 pages long and is effectively just a collection of what values are allowed for each key used in JOSE-based JSON documents. The design approach taken for the JOSE specifications requires that such a document exists.

The Secure Messaging specification takes a different approach. Rather than declare all of the popular algorithms and cryptography schemes in use today, it defines just one digital signature scheme (an RSA signature with SHA-256 hashing), one encryption scheme (128-bit AES in cipher block chaining mode), and one way of expressing keys (as PEM-formatted data). If placed into a single specification, like the JWA spec, it would be just a few pages long (really, just one page of actual content).

The most common argument against the Secure Messaging spec, with respect to the JWA specification, is that it lacks the cryptographic algorithm agility that the JWA specification provides. While this may seem like a valid argument on the surface, keep in mind that the core algorithms used by the Secure Messaging specification can be changed at any point to any other set of algorithms. So, the specification achieves algorithm agility while greatly reducing the need for a large, 70-page specification detailing the allowable values for the various cryptographic algorithms. The other benefit is that since the cryptography parameters are outlined in a Linked Data vocabulary, instead of a process-heavy specification, they can be added to at any point as long as there is community consensus. Note that while the vocabulary can be added to, thus providing algorithm agility if a particular cryptography scheme is weakened or broken, cryptography schemes that are already defined in the vocabulary must not be changed once they become widely used, to ensure that production deployments that use the older mechanism aren’t broken.

Providing just one way, the best practice at the time, to do digital signatures, encryption, and key publishing reduces implementation complexity. Reducing implementation complexity makes it easier to perform security audits on implementations. Reducing implementation complexity also helps ensure better interoperability and more software library implementations, as the barrier to creating a fully conforming implementation is greatly reduced.

The Web Payments group believes that the primary digital signature and encryption schemes will need to be replaced every 5-7 years. It is better to delay the decision to switch to another primary algorithm as long as possible (and as long as it is safe to do so). Delaying the cryptographic algorithm decision ensures that the group will be able to make a more educated choice than attempting to predict which cryptographic algorithms may be the successors to currently deployed algorithms.

Bottom line: The Secure Messaging specification utilizes a much simpler approach than the JWA specification while supporting the same level of algorithm agility.
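
To illustrate how that plays out in code, an implementation can key its cryptographic routines off of the signature type defined in the vocabulary. The sketch below is hypothetical; the type names and Node.js algorithm identifiers are assumptions, not values taken from the specification:

// Hypothetical mapping from vocabulary signature types to concrete algorithms.
// Agility comes from adding new entries; existing entries are never redefined.
const SIGNATURE_SUITES = {
  'GraphSignature2012': { signAlgorithm: 'RSA-SHA256' }
  // , 'GraphSignature2019': { signAlgorithm: 'RSA-SHA512' }  // possible future addition
};

function suiteFor(signature) {
  const suite = SIGNATURE_SUITES[signature['@type']];
  if (!suite) {
    throw new Error('Unsupported signature type: ' + signature['@type']);
  }
  return suite;
}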

JSON Web Key vs. Secure Messaging

The JSON Web Key (JWK) specification details a data structure that is capable of representing one or more cryptographic keys. If you need to express one of the many types of cryptographic keys in use today, JWK details how you do that in a standard way. A typical RSA public key looks like the following using the JWK specification:

{
  "keys": [{
    "kty":"RSA",
    "n": "0vx7agoe ... DKgw",
    "e":"AQAB",
    "alg":"RS256",
    "kid":"2011-04-29"
  }]
}

A similar RSA public key looks like the following using the Secure Messaging specification:

{
  "@context": "https://w3id.org/security/v1",
  "@id": "https://example.com/i/bob/keys/1",
  "@type": "Key",
  "owner": "https://example.com/i/bob",
  "publicKeyPem": "-----BEGIN PUBLIC KEY-----\nMIIBG0BA...OClDQAB\n-----END PUBLIC KEY-----\n"
}

There are a number of differences between the two key formats. Specifically:

  1. The JWK format expresses key information by specifying the key parameters directly. The Secure Messaging format places all of the key parameters into a PEM-encoded blob. This approach was taken because it is easier for developers to use the PEM data without introducing errors. Since most Web developers do not understand what variables like dq (the second factor Chinese Remainder Theorem exponent) or d (the Elliptic Curve private key parameter) are, the likelihood of transporting and publishing that sort of data without error is lower when all of the parameters are kept inside an opaque blob of information that has a clear beginning and end (-----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY-----).
  2. In the general case, the Secure Messaging key format assigns URL identifiers to keys and publishes them on the Web as JSON-LD, and optionally as RDFa. This means that public key information is discoverable and human- and machine-readable by default, which means that all of the key parameters can be read from the Web. The JWK mechanism does assign a key ID to keys, but does not require that they be published to the Web if they are to be used in message exchanges. The JWK specification could be extended to enable this, but by default, it doesn’t provide this functionality.
  3. The Secure Messaging format is also capable of specifying an identity that owns the key, which allows a key to be tied to an identity and that identity to be used for things like access control to Web resources and REST APIs. The JWK format has no such mechanism outlined in the specification.

Bottom line: The Secure Messaging specification provides four major advantages over the JWK format: 1) the key information is expressed at a higher level, which makes it easier to work with for Web developers, 2) it allows key information to be discovered by dereferencing the key ID, 3) the key information can be published (and extended) in a variety of Linked Data formats, and 4) it provides the ability to assign ownership information to keys.
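
As a minimal sketch of advantage 2), a verifier can discover a key simply by dereferencing its identifier. This assumes a runtime with a global fetch (recent browsers or Node.js) and a key URL that serves JSON-LD shaped like the example above:

// Dereference a key's URL identifier to obtain its machine-readable description.
// The URL and the response shape follow the example above and are illustrative.
async function fetchPublicKey(keyId) {
  const response = await fetch(keyId, {
    headers: { accept: 'application/ld+json' }
  });
  const key = await response.json();
  // key.publicKeyPem holds the PEM blob; key.owner links to the controlling identity.
  return { pem: key.publicKeyPem, owner: key.owner };
}

// Usage: fetchPublicKey('https://example.com/i/bob/keys/1').then(console.log);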

JSON Web Tokens vs. Secure Messaging

The JSON Web Tokens (JWT) specification defines a way of representing claims such as “Bob was born on November 15th, 1984”. These claims are digitally signed and/or encrypted using either the JSON Web Signature (JWS) or JSON Web Encryption (JWE) specifications. Here is an example of a JWT document:

{
  "iss": "joe",
  "exp": 1300819380,
  "http://example.com/is_root": true
}

JWT documents contain claim names that are defined by the specification and meant to be public, such as iss and exp above, as well as application-specific (private) claim names, which could collide with claim names defined by the JWT specification or by other applications. The data format is fairly free-form, meaning that any data can be placed inside a JWT Claims Set like the one above.

Since the Secure Messaging specification utilizes JSON-LD for its data expression mechanism, it takes a fundamentally different approach. There are no headers or claims sets in the Secure Messaging specification, just data. For example, the data below is effectively a JWT claims set expressed in JSON-LD:

{
  "@context": "http://json-ld.org/contexts/person",
  "@type": "Person",
  "name": "Manu Sporny",
  "gender": "male",
  "homepage": "http://manu.sporny.org/"
}

Note that there are no keywords specific to the Secure Messaging specification, just keys that are mapped to URLs (to prevent collisions) and data. In JSON-LD, these keys and data are machine-interpretable in a standards-compliant manner (unlike JWT data), and can be merged with other data sources without the danger of data being overwritten or colliding with other application data.

Bottom line: The Secure Messaging specification’s use of a native Linked Data format removes the requirement for a specification like JWT. As far as the Secure Messaging specification is concerned, there is just data, which you can then digitally sign and encrypt. This makes the data easier to work with for Web developers, as they can continue to use their application data as-is instead of attempting to restructure it into a JWT.

JSON Web Encryption vs. Secure Messaging

The JSON Web Encryption (JWE) specification defines a way to express encrypted content using JSON-based data structures. Basically, if you want to encrypt JSON data so that only the intended receiver can read the data, this specification tells you how to do it in an interoperable way. A JWE-encrypted message looks like this:

{
  "protected": "eyJlbmMiOiJBMTI4Q0JDLUhTMjU2In0",
  "unprotected": {"jku": "https://server.example.com/keys.jwks"},
  "recipients": [{
    "header": {
      "alg": "RSA1_5",
      "kid": "2011-04-29",
      "enc": "A128CBC-HS256",
      "jku": "https://server.example.com/keys.jwks"
    },
    "encrypted_key": "UGhIOgu ... MR4gp_A"
  }],
  "iv": "AxY8DCtDaGlsbGljb3RoZQ",
  "ciphertext": "KDlTtXchhZTGufMYmOYGS4HffxPSUrfmqCHXaI9wOGY",
  "tag": "Mz-VPPyU4RlcuYv1IwIvzw"
}

To decrypt this information, an application would retrieve the private key that corresponds to the key referenced in recipients[0].header and use it to decrypt the encrypted_key, which yields the content encryption key. It would then use that decrypted key, the iv, the algorithm specified in the protected header, and the ciphertext to recover the original message.

For comparison purposes, a Secure Messaging encrypted message looks like this:

{
  "@context": "https://w3id.org/security/v1",
  "@type": "EncryptedMessage2012",
  "data": "VTJGc2RH ... Fb009Cg==",
  "encryptionKey": "uATte ... HExjXQE=",
  "iv": "vcDU1eWTy8vVGhNOszREhSblFVqVnGpBUm0zMTRmcWtMrRX==",
  "publicKey": "https://example.com/people/john/keys/23"
}   

To decrypt this information, an application would use the private key associated with the publicKey to decrypt the encryptionKey and iv. It would then use the decrypted encryptionKey and iv to decrypt the value in data, retrieving the original message as a result.
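
A rough sketch of that decryption flow in Node.js is shown below. The encodings, padding choices, and field handling are assumptions based on the description above; a real implementation should follow the Secure Messaging specification and its test vectors:

const crypto = require('crypto');

// message: the parsed EncryptedMessage2012 object shown above.
// privateKeyPem: the PEM-encoded RSA private key matching message.publicKey.
function decryptMessage(message, privateKeyPem) {
  // 1. Recover the symmetric key and IV using the recipient's private key.
  const key = crypto.privateDecrypt(privateKeyPem,
    Buffer.from(message.encryptionKey, 'base64'));
  const iv = crypto.privateDecrypt(privateKeyPem,
    Buffer.from(message.iv, 'base64'));

  // 2. Decrypt the payload with the recovered key and IV (AES-128-CBC assumed).
  const decipher = crypto.createDecipheriv('aes-128-cbc', key, iv);
  const plaintext = Buffer.concat([
    decipher.update(Buffer.from(message.data, 'base64')),
    decipher.final()
  ]);
  return JSON.parse(plaintext.toString('utf8'));
}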

The Secure Messaging encryption protocol is simpler than the JWE protocol for three major reasons:

  1. The @type of the message, EncryptedMessage2012, encapsulates all of the cryptographic algorithm information in a machine-readable way (that can also be hard-coded in implementations). The JWE specification utilizes the protected field to express the same sort of information, which is allowed to get far more complicated than the Secure Messaging equivalent, leading to more complexity.
  2. Key information is expressed in one entry, the publicKey entry, which is a link to a machine-readable document that can express not only the public key information, but who owns the key, the name of the key, creation and revocation dates for the key, as well as a number of other Linked Data values that result in a full-fledged Web-based PKI system. Not only is Secure Messaging encryption simpler than JWE, but it also enables many more types of extensibility.
  3. The key data is expressed in a PEM-encoded format, which is a base-64 encoded blob of information with a clear beginning and end (-----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY-----). This approach was taken because it is easier for developers to use the data without introducing errors. Since most Web developers do not understand what variables like dq (the second factor Chinese Remainder Theorem exponent) or d (the Elliptic Curve private key parameter) are, the likelihood of transporting and publishing that sort of data without error is lower when all of the parameters are kept inside an opaque blob.

The rest of the entries in the JSON are typically required for the encryption method selected to secure the message. There is not a great deal of difference between the two specifications when it comes to the parameters that are needed for the encryption algorithm.

Bottom line: The major difference between the Secure Messaging and JWE specification has to do with how the encryption parameters are specified as well as how many of them there can be. The Secure Messaging specification expresses only one encryption mechanism and outlines the algorithms and keys external to the message, which leads to a reduction in complexity. The JWE specification allows many more types of encryption schemes to be used, at the expense of added complexity.

JSON Web Signatures vs. Secure Messaging

The JSON Web Signatures (JWS) specification defines a way to digitally sign JSON data structures. If your application needs to be able to verify the creator of a JSON data structure, you can use this specification to do so. A JWS digital signature looks like the following:

{
  "payload": "eyJpc ... VlfQ",
  "signatures":[{
    "protected":"eyJhbGciOiJSUzI1NiJ9",
    "header": {
      "kid":"2010-12-29"
    },
    "signature": "cC4hi ... 77Rw"
  }]
}

For the purposes of comparison, a Secure Messaging message and signature looks like the following:

{
  "@context": ["https://w3id.org/security/v1", "http://json-ld.org/contexts/person"]
  "@type": "Person",
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/",
  "signature":
  {
    "@type": "GraphSignature2012",
    "creator": "http://example.org/manu/keys/5",
    "created": "2013-08-04T17:39:53Z",
    "signatureValue": "OGQzN ... IyZTk="
  }
}

There are a number of stark differences between the two specifications when it comes to digital signatures:

  1. The Secure Messaging specification does not need to base-64 encode the payload being signed. This makes it easier for a developer to see (and work with) the data that was digitally signed. Debugging signed messages is also simplified as special tools to decode the payload are unnecessary.
  2. The Secure Messaging specification does not require any header parameters for the payload, which reduces the number of things that can go wrong when verifying digitally signed messages. One could argue that this also reduces flexibility. The counter-argument is that different signature schemes can always be switched in by just changing the @type of the signature.
  3. The signer’s public key is available via a URL. This means that, in general, all Secure Messaging signatures can be verified by dereferencing the creator URL and utilizing the published key data to verify the signature.
  4. The Secure Messaging specification depends on a normalization algorithm that is applied to the message. This algorithm is non-trivial, but is typically implemented behind a JSON-LD library’s .normalize() method call. JWS does not require data normalization. For JWS, the trade-off is simplicity at the expense of requiring your data to always be encapsulated in the message. For example, the Secure Messaging specification is capable of pointing to a digital signature expressed in RDFa on a website using a URL. An application can then dereference that URL, convert the data to JSON-LD, and verify the digital signature. This mechanism is useful, for example, when you want to publish items for sale along with their prices on a Web page in a machine-readable way. This sort of use case is not achievable with the JWS specification, where all data is required to be in the message. In other words, Secure Messaging can sign information that lives elsewhere on the Web, whereas the JWS specification signs a string of text contained in the message.
  5. The JWS mechanism enables HMAC-based signatures while the Secure Messaging mechanism avoids the use of HMAC altogether, taking the position that shared secrets are typically a bad practice.

Bottom line: The Secure Messaging specification does not need to encode its payloads, but does require a rather complex normalization algorithm. It supports discovery of signature key data so that signatures can be verified using standard Web protocols. The JWS specification is more flexible from an algorithmic standpoint and simpler from a signature verification standpoint. The downside is that the only data input format must be from the message itself and can’t be from an external Linked Data source, like an HTML+RDFa web page listing items for sale.
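
Below is a minimal sketch of the verification flow for a GraphSignature2012-style signature. It assumes a JSON-LD library exposing a normalize() call (the jsonld.js API is shown, but the exact option names vary by library and version), that the creator URL dereferences to a key document like the one shown earlier, and it omits details such as folding the created timestamp into the signed data, which the specification may require:

const crypto = require('crypto');
const jsonld = require('jsonld');  // assumed library; check your version's API

async function verifySignedDocument(doc) {
  // 1. Separate the signature from the data that was signed.
  const { signature, ...unsigned } = doc;

  // 2. Normalize the remaining JSON-LD into a canonical string of quads.
  const normalized = await jsonld.normalize(unsigned,
    { format: 'application/n-quads' });  // options differ between library versions

  // 3. Dereference the creator URL to obtain the signer's PEM public key.
  const keyDoc = await (await fetch(signature.creator,
    { headers: { accept: 'application/ld+json' } })).json();

  // 4. Verify the RSA/SHA-256 signature over the normalized data.
  const verifier = crypto.createVerify('RSA-SHA256');
  verifier.update(normalized);
  return verifier.verify(keyDoc.publicKeyPem, signature.signatureValue, 'base64');
}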

Conclusion

The Secure Messaging and JOSE designs, while attempting to achieve the same basic goals, deviate in the approaches taken to accomplish those goals. The Secure Messaging specification leverages more of the Web with its use of a Linked Data format and URLs for identifying and verifying identity and keys. It also attempts to encapsulate a single best practice that will work for the vast majority of Web applications in use today. The JOSE specifications are more flexible in the type of cryptographic algorithms that can be used which results in more low-level primitives used in the protocol, increasing complexity for developers that must create interoperable JOSE-based applications.

From a specification size standpoint, the JOSE specs weigh in at 225 pages, while the Secure Messaging specification weighs in at around 20 pages. Page count is rarely a good way to compare specifications, and doesn’t always result in an apples-to-apples comparison. It does, however, give a general idea of the amount of text required to explain the details of each approach, and thus a ballpark idea of the complexity associated with each specification. Like all specifications, picking one depends on the use cases that an application is attempting to support. The goal with the Secure Messaging specification is that it will be good enough for 95% of Web developers out there; for the remaining 5%, there is the JOSE stack.

Technical Analysis of 2012 MozPay API Message Format

The W3C Web Payments group is currently analyzing a new API for performing payments via web browsers and other devices connected to the web. This blog post is a technical analysis of the MozPay API with a specific focus on the payment protocol and its use of JOSE (JSON Object Signing and Encryption). The first part of the analysis takes the approach of examining the data structures used today in the MozPay API and compares them against what is possible via PaySwarm. The second part of the analysis examines the use of JOSE to achieve the use case and security requirements of the MozPay API and compares the solution to JSON-LD, which is the mechanism used to achieve the use case and security requirements of the PaySwarm specification.

Before we start, it’s useful to have an example of what the current MozPay payment initiation message looks like. This message is generated by a MozPay Payment Provider and given to the browser to initiate a native purchase process:

jwt.encode({
  "iss": APPLICATION_KEY,
  "aud": "marketplace.firefox.com",
  "typ": "mozilla/payments/pay/v1",
  "iat": 1337357297,
  "exp": 1337360897,
  "request": {
    "id": "915c07fc-87df-46e5-9513-45cb6e504e39",
    "pricePoint": 1,
    "name": "Magical Unicorn",
    "description": "Adventure Game item",
    "icons": {
      "64": "https://yourapp.com/img/icon-64.png",
      "128": "https://yourapp.com/img/icon-128.png"
    },
    "productData": "user_id=1234&my_session_id=XYZ",
    "postbackURL": "https://yourapp.com/payments/postback",
    "chargebackURL": "https://yourapp.com/payments/chargeback"
  }
}, APPLICATION_SECRET)

The message is effectively a JSON Web Token. I say effectively because it seems like it breaks the JWT spec in subtle ways, but it may be that I’m misreading the JWT spec.

There are a number of issues with the message that we’ve had to deal with when creating the set of PaySwarm specifications. It’s important that we call those issues out first to get an understanding of the basic concerns with the MozPay API as it stands today. The comments below use the JWT code above as a reference point.

Unnecessarily Cryptic JSON Keys

...
  "iss": APPLICATION_KEY,
  "aud": "marketplace.firefox.com",
  "typ": "mozilla/payments/pay/v1",
  "iat": 1337357297,
  "exp": 1337360897,
...

This is more of an issue with the JOSE specs than it is the MozPay API. I can’t think of a good line of argumentation to shorten things like ‘issuer’ to ‘iss’ and ‘type’ to ‘typ’ (seriously :) , the ‘e’ was too much?). It comes off as 1980s protocol design, trying to save bits on the wire. Making code less readable by trying to save characters in a human-readable message format works against the notion that the format should be readable by a human. I had to look up what iss, aud, iat, and exp meant. The only reason that I could come up with for using such terse entries was that the JOSE designers were attempting to avoid conflicts with existing data in JWT claims objects. If this was the case, they should have used a prefix like “@” or “$”, or placed the data in a container value associated with a key like ‘claims’.

PaySwarm always attempts to use terminology that doesn’t require you to go and look at the specification to figure out basic things. For example, it uses creator for iss (issuer), validFrom for iat (issued at), and validUntil for exp (expire time).
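
As a purely illustrative sketch (using the property names just described; the authoritative vocabulary is in the PaySwarm specifications), the same envelope information reads much more naturally. The timestamps below correspond to the iat and exp values from the example above:

{
  "creator": "https://marketplace.example.com/i/application-1",
  "validFrom": "2012-05-18T16:08:17Z",
  "validUntil": "2012-05-18T17:08:17Z"
}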

iss and APPLICATION_KEY

...
  "iss": APPLICATION_KEY,
...

The MozPay API specification does not require the APPLICATION_KEY to be a URL. Since it’s not a URL, it’s not discoverable. The application key is also specific to each Marketplace, which means that one Marketplace could use a UUID, another could use a URL, and so on. If the system is intended to be decentralized and interoperable, the APPLICATION_KEY should either be dereferenceable on the public Web without coordination with any particular entity, or a format for the key should be outlined in the specification.

All identities and keys used in digital signatures in PaySwarm use URLs as identifiers, and dereferencing those URLs must yield key information in some sort of machine-readable format (RDFa and JSON-LD, for now). This means that 1) they’re Web-native, 2) they can be dereferenced, and 3) when they’re dereferenced, a client can extract useful data from the document retrieved.

Audience

...
  "aud": "marketplace.firefox.com",
...

It’s not clear what the aud parameter is used for in the MozPay API, other than to identify the marketplace.

Issued At and Expiration Time

...
  "iat": 1337357297,
  "exp": 1337360897,
...

The iat (issued at) and exp (expiration time) values are encoded as the number of seconds since January 1st, 1970. These are not very human-readable and make debugging issues with purchases more difficult than it needs to be.

PaySwarm uses the W3C date/time format, which consists of human-readable strings that are also easy for machines to process. For example, November 5th, 2013 at 1:15:30 PM (Zulu / Universal Time) is encoded as: 2013-11-05T13:15:30Z.
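
For example, converting the opaque iat and exp values from the example above into W3C date/time strings (shown in JavaScript purely for illustration):

// Seconds since the Unix epoch, multiplied by 1000 to get milliseconds.
new Date(1337357297 * 1000).toISOString();  // "2012-05-18T16:08:17.000Z" (iat)
new Date(1337360897 * 1000).toISOString();  // "2012-05-18T17:08:17.000Z" (exp)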

The Request

...
  "request": {
    "id": "915c07fc-87df-46e5-9513-45cb6e504e39",
    "pricePoint": 1,
    "name": "Magical Unicorn",
...

This object in the MozPay API is a description of the thing that is to be sold. Technically, it’s not really a request; the outer object is the request. There is a bit of a conflation of terminology here that should probably be fixed at some point.

In PaySwarm, the contents of the MozPay request value is called an Asset. An asset is a description of the thing that is to be sold.

Request ID

...
{
  "request": {
    "id": "915c07fc-87df-46e5-9513-45cb6e504e39",
...

The MozPay API encodes the request ID as a universally unique identifier (UUID). The major downside to this approach is that other applications can’t find the information on the Web to 1) discover more about the item being sold, 2) discuss the item being sold by referring to it by a universal ID, 3) feed it to a system that can read data published at the identifier address, and 4) index it for the purposes of searching.

The PaySwarm specifications use a URL as the identifier for assets and publish machine-readable data at the asset location so that other systems can discover more information about the item being sold, refer to the item in discussions (like reviews of the item), start a purchase by referencing the URL, and index the item so that it may be utilized in price-comparison and search engines.

Price Point

...
  "request": {
...
    "pricePoint": 1,
...

The pricePoint for the item being sold is currently a whole number. This is problematic because prices are usually decimal numbers that include a fraction and are denominated in a particular currency.

PaySwarm publishes its pricing information in a currency-agnostic way that is compatible with all known monetary systems. Some of these systems include USD, EUR, JPY, RMB, Bitcoin, Brixton Pound, Bernal Bucks, Ven, and a variety of other alternative currencies. The amount is specified as a decimal with a fraction, alongside a currency URL. A URL is utilized for the currency because PaySwarm allows arbitrary currencies to be created and managed external to the PaySwarm system.
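
A hypothetical sketch of what such a currency-agnostic price could look like follows; the property names and the currency URL are illustrative, not normative PaySwarm markup:

{
  "amount": "5.75",
  "currency": "https://example.com/currencies/bernal-bucks"
}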

Icons

...
  "request": {
...
    "icons": {
      "64": "https://yourapp.com/img/icon-64.png",
      "128": "https://yourapp.com/img/icon-128.png"
    },
...

Icon data is currently modeled in a way that is useful to developers by indexing the information as a square pixel size for the icon. This allows developers to access the data like so: icons.64 or icons.128. Values are image URLs, which is the right choice.

PaySwarm uses JSON-LD and can support this sort of data layout through a feature called data indexing. Another approach is to just have an array of objects for icons, which would allow us to include extended information about the icons. For example:

...
  "request": {
...
  "icon": [{size: 64, id: "https://yourapp.com/img/icon-64.png", label: "Magical Unicorn"}, ...]
...

Product Data

...
  "request": {
...
    "productData": "user_id=1234&my_session_id=XYZ",
...

If the payment technology we’re working on is going to be useful to society at large, we have to allow richer descriptions of products. For example, model numbers, rich markup descriptions, pictures, ratings, colors, and licensing terms are all important parts of a product description. The value needs to be larger than a 256-byte string and needs to support decentralized extensibility. For example, Home Depot should be able to list UPC numbers and internal reference numbers in the asset description, and the payment protocol should preserve that extra information, placing it into digital receipts.

PaySwarm uses JSON-LD and thus supports decentralized extensibility for product data. This means that any vendor may express information about the asset in JSON-LD and it will be preserved in all digital contracts and digital receipts. This allows the asset and digital receipt format to be used as a platform that can be built on top of by innovative retailers. It also increases data fidelity by allowing far more detailed markup of asset information than what is currently allowed via the MozPay API.

Postback URL

...
  "request": {
...
    "postbackURL": "https://yourapp.com/payments/postback",
...

The postback URL is a pretty universal concept among Web-based payment systems. The payment processor needs a URL endpoint that the result of the purchase can be sent to. The postback URL serves this purpose.

PaySwarm has a similar concept, but just lists it in the request URL as ‘callback’.

Chargeback URL

...
  "request": {
...
    "chargebackURL": "https://yourapp.com/payments/chargeback"
...

The chargeback URL is a URL endpoint that is called whenever a refund is issued for a purchased item. It’s not clear if the vendor has a say in whether or not this should be allowed for a particular item. For example, what happens when a purchase is performed for a physical good? Should chargebacks be easy to do for those sorts of items?

PaySwarm does not build chargebacks into the core protocol. It lets the merchant request the digital receipt of the sale to figure out if the sale has been invalidated. It seems like a good idea to have a notification mechanism built into the core protocol. We’ll need more discussion on this to figure out how to correctly handle vendor-approved refunds and customer-requested chargebacks.

Conclusion

There are a number of improvements that could be made to the basic MozPay API that would enable more use cases to be supported in the future while keeping the level of complexity close to what it currently is. The second part of this analysis will examine the JavaScript Object Signing and Encryption (JOSE) technology stack and determine if there is a simpler solution that could be leveraged to simplify the digital signature requirements set forth by the MozPay API.

[UPDATE: The second part of this analysis is now available]

Verifiable Messaging over HTTP

Problem: Figure out a simple way to enable a Web client or server to authenticate and authorize itself to do a REST API call. Do this in one HTTP round-trip.

There is a new specification that is making the rounds called HTTP Signatures. It enables a Web client or server to authenticate and authorize itself when doing a REST API call and only requires one HTTP round-trip to accomplish the feat. The meat of the spec is 5 pages long, and the technology is simple and awesome.

We’re working on this spec in the Web Payments group at the World Wide Web Consortium because it’s going to be a fundamental part of the payment architecture we’re building into the core of the Web. When you send money to or receive money from someone, you want to make sure that the transaction is secure. HTTP Signatures help to secure that financial transaction.

However, the really great thing about HTTP Signatures is that it can be applied anywhere password or OAuth-based authentication and authorization is used today. Passwords, and shared secrets in general, are increasingly becoming a problem on the Web. OAuth 2 sucks for a number of reasons. It’s time for something simpler and more powerful.

HTTP Signatures:

  1. Work over both HTTP and HTTPS. You don’t need to spend money on expensive SSL/TLS security certificates to use it.
  2. Protect messages sent over HTTP or HTTPS by digitally signing the contents, ensuring that the data cannot be tampered with in transit. In the case that HTTPS security is breached, it provides an additional layer of protection.
  3. Identify the signer and establish a certain level of authorization to perform actions over a REST API. It’s like OAuth, only way simpler.

When coupled with the Web Keys specification, HTTP Signatures:

  1. Provide a mechanism where the digital signature key does not need to be registered in advance with the server. The server can automatically discover the key from the message and determine what level of access the client should have.
  2. Enable a fully distributed Public Key Infrastructure for the Web. This opens up new ways to more securely communicate over the Web, which is timely considering the recent news concerning the PRISM surveillance program.
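
Here is a rough sketch of what signing a request can look like in Node.js. The header layout follows the HTTP Signatures draft as I understand it, so treat the parameter names and the signing-string construction as assumptions and consult the specification before implementing:

const crypto = require('crypto');

// Build an Authorization header by signing the Date header with an RSA key.
// keyId is a URL (a Web Key) that the server can dereference to find the public key.
function signRequest(headers, keyId, privateKeyPem) {
  const signingString = 'date: ' + headers['Date'];  // simplified signing string
  const signer = crypto.createSign('RSA-SHA256');
  signer.update(signingString);
  const signature = signer.sign(privateKeyPem, 'base64');

  headers['Authorization'] = 'Signature ' +
    'keyId="' + keyId + '",' +
    'algorithm="rsa-sha256",' +
    'headers="date",' +
    'signature="' + signature + '"';
  return headers;
}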

If you’re interested in learning more about HTTP Signatures, the meat of the spec is 5 pages long and is a pretty quick read. You can also read (or listen to) the meeting notes where we discussed the HTTP Signatures spec a week ago and again today. If you want to keep up with how the spec is progressing, join the Web Payments mailing list.

Google adds JSON-LD support to Search and Google Now

Full disclosure: I’m one of the primary designers of JSON-LD and the Chair of the JSON-LD group at the World Wide Web Consortium.

Last week, Google announced support for JSON-LD markup in Gmail. Putting JSON-LD in front of 425 million people is a big validation of the technology.

Hot on the heels of last week’s announcement, Google has just announced additional JSON-LD support for two more of their core products! The first is their flagship product, Google Search. The second is their new intelligent personal assistant service, Google Now.

The addition of JSON-LD support to Google Search now allows you to do incredibly accurate personalized searches. For example, here’s a search for “my flights”:

and here’s an example for “my hotel reservation for next week”:

Web developers that mark certain types of information up as JSON-LD in the e-mails that they send to you can now enable new functionality in these core Google services. For example, using JSON-LD will make it really easy for you to manage flights, hotel bookings, reservations at restaurants, and events like concerts and movies from within Google’s ecosystem. It also makes it easy for services like Google Now to push a notification to your phone when your flight has been delayed:

Or, show your boarding pass on your mobile phone when you’ve arrived at the airport:

Or, let you know when you need to leave to make your reservation for a restaurant:

Google Search and Google Now can make these recommendations to you because the information that you received about these flights, boarding passes, hotels, reservations, and other events was marked up in JSON-LD format when it hit your Gmail inbox. The most exciting thing about all of this is that it’s just the beginning of what Linked Data can do for all of us. Over the next decade, Linked Data will be at the center of getting computing, and the monotonous details of our everyday grind, out of the way so that we can focus more on enjoying our lives.

If you want to dive deeper into this technology, Google’s page on schemas is a good place to start.

Google adds JSON-LD support to Gmail

Google announced support for JSON-LD markup in Gmail at Google I/O 2013. The design team behind JSON-LD is delighted by this announcement and applaud the Google engineers that integrated JSON-LD with Gmail. This blog post examines what this announcement means for Gmail customers as well as providing some suggestions to the Google Gmail engineers on how they could improve their JSON-LD markup.

JSON-LD enables the representation of Linked Data in JSON by describing a common JSON representation format for expressing graphs of information (see Google’s Knowledge Graph). It allows you to mix regular JSON data with Linked Data in a single JSON document. The format has already been adopted by large companies such as Google in their Gmail product, and is now available to over 425 million people via software products that are live around the world today.

The syntax is designed to not disturb already deployed systems running on JSON, but provide a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build inter-operable Linked Data Web services, and to store Linked Data in JSON-based storage engines.

For Google’s Gmail customers, this means that Gmail will now be able to recognize people, places, events, and a variety of other Linked Data objects. You can then take actions on the Linked Data objects embedded in an e-mail. For example, if someone sends you an invitation to a party, you can respond with a single click on whether or not you’ll attend, right from your inbox. Doing so will also create a reminder for the party in your calendar. There are other actions that you can perform on Linked Data objects as well, like approving an expense report, reviewing a restaurant, saving a coupon for a free online movie, making a flight, hotel, or restaurant reservation, and many other really cool things that you couldn’t do before from inside your inbox.

What Google Got Right and Wrong

Google followed the JSON-LD standard pretty closely, so the vast majority of the markup looks really great. However, there are four issues that the Google engineers will probably want to fix before pushing the technology out to developers.

Invalid Context URL

The first issue is a fairly major one. Google isn’t using the JSON-LD @context parameter correctly in any of their markup examples. It’s supposed to be a URL, but they’re using a text string instead. This means that their JSON-LD documents are unreadable by all of the conforming JSON-LD processors today. For example, Google does the following when declaring a context in JSON-LD:

  "@context": "schema.org"

When they should be doing this:

  "@context": "http://schema.org/"

It’s a fairly simple change; just add “http://” to the beginning of the “schema.org” value. If Google doesn’t make this change, it’ll mean that JSON-LD processors will have to include a special hack to translate “schema.org” to “http://schema.org/” just for this use case. I hope that this was just a simple oversight by the Google engineers that implemented these features and not something that was intentional.

Context isn’t Online

The second issue has to do with the JSON-LD Context for schema.org. There doesn’t seem to be a downloadable context for schema.org at the moment. Not having a Web-accessible JSON-LD context is bad because the context is at the heart and soul of a JSON-LD document. If you don’t publish a JSON-LD context on the Web somewhere, applications won’t be able to resolve any of the Linked Data objects in the document.

The Google engineers could fix this fairly easily by providing a JSON-LD Context document when a web client requests a document of type “application/ld+json” from the http://schema.org/ URL. The JSON-LD community would be happy to help the Google engineers create such a document.

Keyword Aliasing, FTW

The third issue is a minor usability issue with the markup. The Google help pages on the JSON-LD functionality use the @type keyword in JSON-LD to express the type of Linked Data object that is being expressed. The Google engineers that wrote this feature may not have been aware of the Keyword Aliasing feature in JSON-LD. That is, they could have just aliased @type to type. Doing so would mean that the Gmail developer documentation wouldn’t have to mention the “specialness” of the @type keyword.
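
For example, a context can alias the keyword so that documents use a plain type key. The snippet below is a small sketch; it assumes that schema.org publishes a dereferenceable JSON-LD context as suggested above:

{
  "@context": [
    "http://schema.org/",
    { "type": "@type" }
  ],
  "type": "Event",
  "name": "Party at Joe's"
}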

Use RDFa Lite

The fourth issue concerns the use of Microdata. JSON-LD was designed to work seamlessly with RDFa Lite 1.1; you can easily and losslessly convert data between the two markup formats. JSON-LD is compatible with Microdata, but pairing the two is a sub-optimal design choice. When JSON-LD data is converted to Microdata, information is lost due to data fidelity issues in Microdata. For example, there is no mechanism to specify that a value is a URL in Microdata.

RDFa Lite 1.1 does not suffer from these issues and has been proven to be a drop-in replacement for Microdata without any of the downsides that Microdata has. The designers of JSON-LD are the same designers behind RDFa Lite 1.1 and have extensive experience with Microdata. We specifically did not choose to pair JSON-LD with Microdata because it was a bad design choice for a number of reasons. I hope that the Google engineers will seek out advice from the JSON-LD and RDFa communities before finalizing the decision to use Microdata, as there are numerous downsides associated with that decision.

Closing

All in all, the Google engineers did a good job of implementing JSON-LD in Gmail. With a few small fixes to the Gmail documentation and code examples, they will be fully compliant with the JSON-LD specifications. The JSON-LD community is excited about this development and looks forward to working with Google to improve the recent release of JSON-LD for Gmail.

Permanent Identifiers for the Web

Web applications that deal with data on the web often need to specify and use URLs that are very stable. They utilize services such as purl.org to ensure that applications using their URLs will always be re-directed to a working website. These “permanent URL” redirection services operate kind of like a switchboard, connecting requests for information with the true location of the information on the Web. These switchboards can be reconfigured to point to a new location if the old location stops working.

How Does it Work?

If the concept sounds a bit vague, perhaps an example will help. A web author could use the following link (https://w3id.org/payswarm/v1) to refer to an important document. That link is hosted on a permanent identifier service. When a Web browser attempts to retrieve that link, it will be re-directed to the true location of the document on the Web. Currently, that location is https://payswarm.com/contexts/payswarm-v1.jsonld. If the location of the payswarm-v1.jsonld document changes at any point in the future, the only thing that needs to be updated is the re-direction entry on w3id.org. That is, all Web applications that use the https://w3id.org/payswarm/v1 URL will be transparently re-directed to the new location of the document and will continue to “Just Work™”.
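
You can watch the switchboard at work programmatically as well. The sketch below assumes a runtime with a global fetch; the final destination it logs is simply the current location mentioned above and may change over time:

// fetch() follows redirects by default; response.url reports the final location.
fetch('https://w3id.org/payswarm/v1', { headers: { accept: 'application/ld+json' } })
  .then(response => console.log(response.url));
// Currently logs: https://payswarm.com/contexts/payswarm-v1.jsonld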

w3id.org Launches

Permanent identifiers on the Web are an important thing to support, but until today there was no organization that would back a service for the Web to keep these sorts of permanent identifiers operating over the course of multiple decades. A number of us saw that this is a real problem and so we launched w3id.org, which is a permanent identifier service for the Web. The purpose of w3id.org is to provide a secure, permanent URL re-direction service for Web applications. This service will be run and operated by the W3C Permanent Identifier Community Group.

Specifically, the following organizations have pledged responsibility to ensure the operation of this service for decades to come: Digital Bazaar, 3 Round Stones, OpenLink Software, Applied Testing and Technology, and Openspring. Many more organizations will join in time.

These organizations are responsible for all administrative tasks associated with operating the service. The social contract between these organizations gives each of them full access to all information required to maintain and operate the website. The agreement is set up such that a number of these companies could fail, lose interest, or become unavailable for long periods of time without negatively affecting the operation of the site.

Why not purl.org

While many web authors and data publishers currently use purl.org, there are a number of issues or concerns that we have about the website:

  1. The site was designed for the library community and was never intended to be used by the general Web.
  2. Requests for information or changes to the service frequently go unanswered.
  3. The site does not support HTTPS connections, which means it cannot be used to serve documents for security-sensitive industries such as medicine and finance. Requests to migrate the site to HTTPS have gone unanswered.
  4. There is no published backup or fail-over plan for the website.
  5. The site is run by a single organization, with a single part-time administrator, on a single machine. It suffers from multiple single points of failure.

w3id.org Features

The launch of the w3id.org website mitigates all of the issues outlined above with purl.org:

  1. The site is specifically designed for web developers, authors, and data publishers on the general Web. It is not tailored for any specific community.
  2. Requests for information can be sent to a public mailing list that contains multiple administrators that are accountable for answering questions publicly. All administrators have been actively involved in world standards for many years and know how to run a service at this scale.
  3. The site supports HTTPS security, which means it can be used to securely serve data for industries such as medicine and finance.
  4. Multiple organizations, with multiple administrators per organization have full access to administer all aspects of the site and recover it from any potential failure. All important site data is in version control and is mirrored across the world on a regular basis.
  5. The site is run by a consortium of organizations that have each pledged to maintain the site for as long as possible. If a member organization fails, a new one will be found to replace the failing organization while the rest of the members ensure the smooth operation of the site.

All identifiers associated with the w3id.org website are intended to be around for as long as the Web is around. This means decades, if not centuries. If the final destination for popular identifiers used by this service fail in such a way as to be a major inconvenience or danger to the Web, the community will mirror the information for the popular identifier and setup a working redirect to restore service to the rest of the Web.

Adding a Permanent Identifier

Anyone with a github account and knowledge of simple Apache redirect rules can add a permanent identifier to w3id.org by performing the following steps:

  1. Fork w3id.org on Github.
  2. Add a new redirect entry (see the example rule after this list) and commit your changes.
  3. Submit a pull request for your changes.
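
For step 2, a redirect entry is just a rule in the appropriate .htaccess file. The example below is illustrative only; the path, target, and redirect status are hypothetical:

# Illustrative only: redirect https://w3id.org/example/v1 to its current home.
RewriteEngine on
RewriteRule ^example/v1$ https://example.com/contexts/example-v1.jsonld [L,R=302]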

If you wish to engage the community in discussion about this service for your Web application, please send an e-mail to the public-perma-id@w3.org mailing list. If you are interested in helping to maintain this service for the Web, please join the W3C Permanent Identifier Community Group.


Note: The letters ‘w3’ in the w3id.org domain name stand for “World Wide Web”. Other than hosting the software for the Permanent Identifier Community Group, the “World Wide Web Consortium” (W3C) is not involved in the support or management of w3id.org in any way.

Browser Payments 1.0

Kumar McMillan (Mozilla/FirefoxOS) and I (PaySwarm/Web Payments) have just published the first draft of the Browser Payments 1.0 API. The purpose of the spec is to establish a way to initiate payments from within the browser. It is currently a direct port of the mozPay API framework that is integrated into Firefox OS. It enables Web content to initiate payment or issue a refund for a product or service. Once implemented in the browser, a Web author may call the navigator.payment() function to initiate a payment.

This is work that we intend to pursue in the Web Payments Community Group at W3C. The work will eventually be turned over to a Web Payments Working Group at W3C, which we’re trying to kick-start at some point this year.

The current Browser Payments 1.0 spec can be read here:

http://web-payments.github.io/browser-payments/

The github repository for the spec is here:

https://github.com/web-payments/browser-payments/

Keep in mind that this is a very early draft of the spec. There are lots of prose issues as well as bugs that need to be sorted out. There are also a number of things that we need to discuss about the spec and how it fits into the larger Web ecosystem. Things like how it integrates with Persona and PaySwarm are still details that we need to suss out. There is a bug and issue tracker for the spec here:

https://github.com/web-payments/browser-payments/issues

The Mozilla guys will be on next week’s Web Payments telecon (Wednesday, 11am EST) for a Q/A session about this specification. Join us if you’re interested in payments in the browser. The call is open to the public, details about joining and listening in can be found here:

https://payswarm.com/minutes/

Identifiers in JSON-LD and RDF

TL;DR: This blog post argues that the extension of blank node identifiers in JSON-LD and RDF for the purposes of identifying predicates and naming graphs is important. It is important because it simplifies the usage of both technologies for developers. The post also provides a less-optimal solution if the RDF Working Group does not allow blank node identifiers for predicates and graph names in RDF 1.1.

We need identifiers as humans to convey complex messages. Identifiers let us refer to a certain thing by naming it in a particular way. Not only do humans need identifiers, but our computers need identifiers to refer to data in order to perform computations. It is no exaggeration to say that our very civilization depends on identifiers to manage the complexity of our daily lives, so it is no surprise that people spend a great deal of time thinking about how to identify things. This is especially true when we talk about the people that are building the software infrastructure for the Web.

The Web has a very special identifier called the Uniform Resource Locator (URL). It is probably one of the best known identifiers in the world, mostly because everybody that has been on the Web has used one. URLs are great identifiers because they are very specific. When I give you a URL to put into your Web browser, such as the link to this blog post, I can be assured that when you put the URL into your browser that you will see what I see. URLs are globally scoped; they’re supposed to always take you to the same place.

There is another class of identifier on the Web that is not globally scoped and is only used within a document on the Web. In English, these identifiers are used when we refer to something as “that thing”, or “this widget”. We can really only use this sort of identifier within a particular context where the people participating in the conversation understand the context. Linguists call this concept deixis. “Thing” doesn’t always refer to the same subject, but based on the proper context, we can usually understand what is being identified. Our consciousness tags the “thing” that is being talked about with a tag of sorts and then refers to that thing using this pseudo-identifier. Most of this happens unconsciously (notice how your mind unconsciously tied the use of ‘this’ in this sentence to the correct concept?).

The take-away is that there are globally-scoped identifiers like URLs, and there are also locally-scoped identifiers that require a context in order to understand what they refer to.

JSON and JSON-LD

In JSON, developers typically express data like this:

{
  "name": "Joe"
}

Note how that JSON object doesn’t have an identifier associated with it. JSON-LD provides a straightforward way of giving that object an identifier:

{
  "@context": ...,
  "@id": "http://example.com/people/joe",
  "name": "Joe"
}
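
The @context value is elided in the example above; purely for illustration, one possible context that maps the name key to a schema.org IRI could look like this:

{
  "@context": {
    "name": "http://schema.org/name"
  },
  "@id": "http://example.com/people/joe",
  "name": "Joe"
}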

Both you and I can refer to that object using http://example.com/people/joe and be sure that we’re talking about the same thing. There are times when assigning a global identifier to every piece of data that we create is not desirable. For example, it doesn’t make much sense to assign an identifier to a transient message that is a request to get a sensor reading. This is especially true if there are millions of these types of requests and we never want to refer to the request once it has been transmitted. This is why JSON-LD doesn’t force developers to assign an identifier to the objects that they express. The people that created the technology understand that not everything needs a global identifier.

Computers are less forgiving; they need identifiers for almost everything, but a great deal of that complexity can be hidden from developers. When an identifier becomes necessary in order to perform computations upon the data, the computer can usually auto-generate an identifier for the data.

RDF, Graphs, and Blank Node Identifiers

The Resource Description Framework (RDF) primarily uses an identifier called the Internationalized Resource Identifier (IRI). Where URLs can typically only express links in Western languages, an IRI can express links in almost every language in use today including Japanese, Tamil, Russian and Mandarin. RDF also defines a special type of identifier called a blank node identifier. This identifier is auto-generated and is locally scoped to the document. It’s an advanced concept, but is one that is pretty useful when you start dealing with transient data, where creating a global identifier goes beyond the intended usage of the data. An RDF-compatible program will step in and create blank node identifiers on your behalf, but only when necessary.

Both JSON-LD and RDF have the concept of a Statement, Graph, and a Dataset. A Statement consists of a subject, predicate, and an object (for example: “Dave likes cookies”). A Graph is a collection of Statements (for example: Graph A contains all the things that Dave said and Graph B contains all the things that Mary said). A Dataset is a collection of Graphs (for example: Dataset Z contains all of the things Dave and Mary said yesterday).
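
To make those three terms concrete, here is an illustrative NQuads-style (subject, predicate, object, graph) sketch of a Dataset holding two Graphs, each containing a single Statement; all of the example.org IRIs and the second statement are made up:

<http://example.org/dave> <http://example.org/likes> "cookies" <http://example.org/graphA> .
<http://example.org/mary> <http://example.org/likes> "tea" <http://example.org/graphB> .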

In JSON-LD, at present, you can use a blank node identifier for subjects, predicates, objects, and graphs. In RDF, you can only use blank node identifiers for subjects and objects. There are people, such as myself, in the RDF WG that think this is a mistake. There are people that think it’s fine. There are people that think it’s the best compromise that can be made at the moment. There is a wide field of varying opinions strewn between the various extremes.

The end result is that the current state of affairs has put us into a position where we may have to remove blank node identifier support for predicates and graphs from JSON-LD, which comes across as a fairly arbitrary limitation to those not familiar with the inner guts of RDF. Don’t get me wrong, I feel it’s a fairly arbitrary limitation. There are those in the RDF WG that don’t think it is, and that may prevent JSON-LD from being able to use what I believe is a very useful construct.

Document-local Identifiers for Predicates

Why do we need blank node identifiers for predicates in JSON-LD? Let’s go back to the first example in JSON to see why:

{
  "name": "Joe"
}

The JSON above is expressing the following Statement: “There exists a thing whose name is Joe.”

The subject is “thing” (aka: a blank node) which is legal in both JSON-LD and RDF. The predicate is “name”, which doesn’t map to an IRI. This is fine as far as the JSON-LD data model is concerned because “name”, which is local to the document, can be mapped to a blank node. RDF cannot model “name” because it has no way of stating that the predicate is local to the document since it doesn’t support blank nodes for predicates. Since the predicate doesn’t map to an IRI, it can’t be modeled in RDF. Finally, “Joe” is a string used to express the object and that works in both JSON-LD and RDF.

JSON-LD supports the use of blank nodes for predicates because there are some predicates, like every key used in JSON, that are local to the document. RDF does not support the use of blank nodes for predicates and therefore cannot properly model JSON.

Document-local Identifiers for Graphs

Why do we need blank node identifiers for graphs in JSON-LD? Let’s go back again to the first example in JSON:

{
  "name": "Joe"
}

The container of this statement is a Graph. Another way of writing this in JSON-LD is this:

{
  "@context": ...,
  "@graph": {
    "name": "Joe"
  }
}

However, what happens when you have two graphs in JSON-LD, and neither one of them is the RDF default graph?

{
  "@context": ...,
  "@graph": [
    {
      "@graph": {
        "name": "Joe"
      }
    }, 
    {
      "@graph": {
        "name": "Susan"
      }
    }
  ]
}

In JSON-LD, at present, it is assumed that a blank node identifier may be used to name each graph above. Unfortunately, in RDF, the only thing that can be used to name a graph is an IRI, and a blank node identifier is not an IRI. This puts JSON-LD in an awkward position; it can either:

  1. Require that developers name every graph with an IRI, which seems like a strange demand because developers don’t have to name all subjects and objects with an IRI, or
  2. Auto-generate a regular IRI for each predicate and graph name, which seems strange because blank node identifiers exist for this very purpose (not to mention this solution won’t work in all cases, more below), or
  3. Auto-generate a special IRI for each predicate and graph name, which would basically re-invent blank node identifiers.

The Problem

The problem surfaces when you try to convert a JSON-LD document to RDF. If the RDF Working Group doesn’t allow blank node identifiers for predicates and graphs, then what do you use to identify predicates and graphs that have blank node identifiers associated with them in the JSON-LD data model? This is a feature we do want to support because there are a number of important use cases that it enables. The use cases include:

  1. Blank node predicates allow JSON to be mapped directly to the JSON-LD and RDF data models.
  2. Blank node graph names allow developers to use graphs without explicitly naming them.
  3. Blank node graph names make the RDF Dataset Normalization algorithm simpler.
  4. Blank node graph names prevent the creation of a parallel mechanism to generate and manage blank node-like identifiers.

It’s easy to see the problem exposed when performing RDF Dataset Normalization, which we need to do in order to digitally sign information expressed in JSON-LD and RDF. The rest of this post will focus on this area, as it exposes the problems with not supporting blank node identifiers for predicates and graph names. In JSON-LD, the two-graph document above could be normalized to this NQuads (subject, predicate, object, graph) representation:

_:bnode0 _:name "Joe" _:graph1 .
_:bnode1 _:name "Susan" _:graph2 .

This is illegal in RDF since you can’t have a blank node identifier in the predicate or graph position. Even if we were to use an IRI in the predicate position, the problem (of not being able to normalize “un-labeled” JSON-LD graphs like the ones in the previous section) remains.

The Solutions

This section will cover the proposed solutions to the problem, in order from least desirable to most desirable.

Don’t allow blank node identifiers for predicates and graph names

Doing this in JSON-LD ignores the point of contention. The same line of argumentation can be applied to RDF. The point is that by forcing developers to name graphs using IRIs, we’re forcing them to do something that they don’t have to do with subjects and objects. No technical reason has been presented for why the use of a blank node identifier in the predicate or graph position is unworkable. Telling developers that they must name graphs using IRIs will be surprising to them, because there is no reason that the software couldn’t just handle that case for them. Requiring developers to do things that a computer can handle for them automatically is anti-developer and will harm adoption in the long run.

Generate fragment identifiers for graph names

One solution is to generate fragment identifiers for graph names. This, coupled with the base IRI, would allow the data to be expressed legally in NQuads:

_:bnode0 <http://example.com/base#name> "Joe" <http://example.com/base#graph1> .
_:bnode1 <http://example.com/base#name> "Susan" <http://example.com/base#graph2> .

The above is legal RDF. The approach is problematic when you don’t have a base IRI, such as when JSON-LD is used as a messaging protocol between two systems. In that use case, you end up with something like this:

_:bnode0 <#name> "Joe" <#graph1> .
_:bnode1 <#name> "Susan" <#graph2> .

RDF requires absolute IRIs, so the document above is illegal from an RDF perspective. The other downside is that you have to keep track of all fragment identifiers in the output and make sure that you don’t pick fragment identifiers that are used elsewhere in the document. This is fairly easy to do, but now you’re in the position of tracking and renaming both blank node identifiers and fragment IDs. Even if this approach worked, you’d be re-inventing the blank node identifier. This approach is unworkable for systems like PaySwarm that use transient JSON-LD messages across a REST API; there is no base IRI in this use case.

Skolemize to create identifiers for graph names

Another approach is skolemization, which is just a fancy way of saying: generate a unique IRI for the blank node when expressing it as RDF. The output would look something like this:

_:bnode0 <http://blue.example.com/.well-known/genid/2938570348579834> "Joe" <http://blue.example.com/.well-known/genid/348570293572375> .
_:bnode1 <http://blue.example.com/.well-known/genid/2938570348579834> "Susan" <http://blue.example.com/.well-known/genid/49057394572309457> .

This would be just fine if there were only one application reading and consuming the data. However, when we are talking about RDF Dataset Normalization, there are cases where two applications must read and independently verify the representation of a particular IRI. One scenario that illustrates the problem fairly nicely is the blind verification scenario. In this scenario, two applications de-reference an IRI to fetch a JSON-LD document. Each application must perform RDF Dataset Normalization and generate a hash of that normalization to see if they retrieved the same data. Based on a strict reading of the skolemization rules, Application A would generate this:

_:bnode0 <http://blue.example.com/.well-known/genid/2938570348579834> "Joe" <http://blue.example.com/.well-known/genid/348570293572375> .
_:bnode1 <http://blue.example.com/.well-known/genid/2938570348579834> "Susan" <http://blue.example.com/.well-known/genid/49057394572309457> .

and Application B would generate this:

_:bnode0 <http://red.example.com/.well-known/genid/J8Sfei8f792Fd3> "Joe" <http://red.example.com/.well-known/genid/j28cY82Pa88> .
_:bnode1 <http://red.example.com/.well-known/genid/J8Sfei8f792Fd3> "Susan" <http://red.example.com/.well-known/genid/k83FyUuwo89DF> .

Note how the two graphs would never hash to the same value because the Skolem IRIs are completely different. The RDF Dataset Normalization algorithm would have no way of knowing which IRIs are blank node stand-ins and which ones are legitimate IRIs. You could say that publishers are required to assign the skolemized IRIs to the data they publish, but that ignores the point of contention, which is that you don’t want to force developers to create identifiers for things that they don’t care to identify. You could argue that the publishing system could generate these IRIs, but then you’re still creating a global identifier for something that is specifically meant to be a document-scoped identifier.

A more lax reading of the Skolemization language might allow one to create a special type of Skolem IRI that could be detected by the RDF Dataset Normalization algorithm. For example, since the JSON-LD processor is the one creating these IRIs before they are handed to the RDF Dataset Normalization algorithm, let’s say that we use the tag IRI scheme. The output would look like this for Application A:

_:bnode0 <tag:w3.org,2013:dsid:345> "Joe" <tag:w3.org,2013:dsid:254> .
_:bnode1 <tag:w3.org,2013:dsid:345> "Susan" <tag:w3.org,2013:dsid:363> .

and this for Application B:

_:bnode0 <tag:w3.org,2013:dsid:a> "Joe" <tag:w3.org,2013:dsid:b> .
_:bnode1 <tag:w3.org,2013:dsid:a> "Susan" <tag:w3.org,2013:dsid:c> .

The solution still doesn’t work as-is, but we could add another step to the RDF Dataset Normalization algorithm that would allow it to rename any IRI starting with tag:w3.org,2013:. Keep in mind that this is exactly the same thing that we do with blank nodes, and it’s effectively duplicating that functionality. The added step would allow us to generate something like this for both applications doing a blind verification:

_:bnode0 <tag:w3.org,2013:dsid:predicate-1> "Joe" <tag:w3.org,2013:dsid:graph-1> .
_:bnode1 <tag:w3.org,2013:dsid:predicate-1> "Susan" <tag:w3.org,2013:dsid:graph-2> .
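
A naive sketch of that extra renaming step might look something like the following. The quad representation (objects with subject/predicate/object/graph terms that each carry a value string) is hypothetical, and a real normalization algorithm would assign canonical labels deterministically rather than in encounter order:

// Hedged sketch: consistently relabel the special 'dsid' Skolem IRIs, mirroring
// what normalization already does for blank node identifiers. The quad objects
// and their { value: ... } terms are assumptions, not a real library's API.
var DSID_PREFIX = 'tag:w3.org,2013:dsid:';

function relabelDsidIris(quads) {
  var map = {};
  var counter = 0;
  function relabel(term) {
    if (term && term.value && term.value.indexOf(DSID_PREFIX) === 0) {
      if (!(term.value in map)) {
        map[term.value] = DSID_PREFIX + 'c14n-' + counter++;
      }
      return { value: map[term.value] };
    }
    return term;
  }
  return quads.map(function (quad) {
    return {
      subject: relabel(quad.subject),
      predicate: relabel(quad.predicate),
      object: relabel(quad.object),
      graph: relabel(quad.graph)
    };
  });
}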

This solution does violate one strong suggestion in the Skolemization section:

Systems wishing to do this should mint a new, globally unique IRI (a Skolem IRI) for each blank node so replaced.

The IRI generated is definitely not globally unique, as there will be many tag:w3.org,2013:dsid:graph-1s in the world, each associated with data that is completely different. This approach also goes against something else in Skolemization that states:

This transformation does not appreciably change the meaning of an RDF graph.

It’s true that using tag IRIs doesn’t change the meaning of the graph when you assume that the document will never find its way into a database. However, once you place the document in a database, it certainly creates the possibility of collisions in applications that are not aware of the special-ness of IRIs starting with tag:w3.org,2013:dsid:. The data is fine taken by itself, but a disaster when merged with other data. We would have to put a warning in some specification for systems to make sure to rename the incoming tag:w3.org,2013:dsid: IRIs to something that is unique to the storage subsystem. Keep in mind that this is exactly what is done when importing blank node identifiers into a storage subsystem. So, we’ve more-or-less re-invented blank node identifiers at this point.

Allow blank node identifiers for graph names

This leads us to the question of why not just extend RDF to allow blank node identifiers for predicates and graph names? Ideally, that’s what I would like to see happen in the future, as it places the least burden on developers and allows RDF to easily model JSON. The responses from the RDF WG are varied. These are all of the current arguments against it that I have heard:

There are other ways to solve the problem, like fragment identifiers and skolemization, than introducing blank nodes for predicates and graph names.

Fragment identifiers don’t work, as demonstrated above. There is really only one workable solution based on a very lax reading of skolemization, and as demonstrated above, even the best skolemization solution re-invents the concept of a blank node.

There are other use cases that are blocked by the introduction of blank node identifiers into the predicate and graph name position.

While this has been asserted, it is still unclear exactly what those use cases are.

Adding blank node identifiers for predicates and graph names will break legacy applications.

If blank nodes for predicates and graph names were illegal before, wouldn’t legacy applications reject that sort of input? The argument that there are bugs in legacy applications that make them not robust against this type of input is valid, but should that prevent the right solution from being adopted? There has been no technical reason put forward for why blank nodes for predicates or graph names cannot work, other than software bugs prevent it.

The PaySwarm work has chosen to model the data in a very strange way.

The people that have been working on RDFa, JSON-LD, and the Web Payments specifications for the past 5 years have spent a great deal of time attempting to model the data in the simplest way possible, and in a way that is accessible to developers that aren’t familiar with RDF. Whether it seems strange is arguable, since this response usually comes from people who are not familiar with the Web Payments work. This blog post outlines a variety of use cases where the use of a blank node for predicates and graph naming is necessary. Stating that the use cases are invalid ignores the point of contention.

If we allow blank nodes to be used when naming graphs, then those blank nodes should denote the graph.

At present, RDF states that a graph named using an IRI may denote the graph or it may not denote the graph. This is a fancy way of saying that the IRI that is used for the graph name may be an identifier for something completely different (like a person), while de-referencing the IRI over the Web results in a graph about cars. I personally think that is a very dangerous concept to formalize in RDF, but there are others that have strong opinions to the contrary. The chances of this being changed in RDF 1.1 are next to none.

Others have argued that while that may be the case for IRIs, it doesn’t have to be the case for blank nodes that are used to name graphs. In this case, we can just state that the blank node denotes the graph because it couldn’t possibly be used for anything else since the identifier is local to the document. This makes a great deal of sense, but it is different from how an IRI is used to name a graph and that difference is concerning to a number of people in the RDF Working Group.

However, that is not an argument to disallow blank nodes from being used for predicates and graph names. The group could still allow blank nodes to be used for this purpose while stating that they may or may not be used to denote the graph.

The RDF Working Group does not have enough time left in its charter to make a change this big.

While this may be true, not making a decision on this is causing more work for the people working on JSON-LD and RDF Dataset Normalization. Having the tag:w3.org,2013:dsid: identifier scheme is also going to make many RDF-based applications more complex in the long run, resulting in a great deal more work than just allowing blank nodes for predicates and graph names.

Conclusion

I have a feeling that the RDF Working Group is not going to do the right thing on this one due to the time pressure of completing the work that they’ve taken on. The group has already requested, and has been granted, a charter extension. Another extension is highly unlikely, so the group wants to get everything wrapped up. This discussion could take several weeks to settle. That said, the solution that will most likely be adopted (a special tag-based skolem IRI) will cause months of work for people living in the JSON-LD and RDF ecosystem. The best solution in the long run would be to solve this problem now.

If blank node identifiers for predicates and graphs are rejected, here is the proposal that I think will move us forward while causing an acceptable amount of damage down the road:

  1. JSON-LD continues to support blank node identifiers for use as predicates and graph names.
  2. When converting JSON-LD to RDF, a special, relabelable IRI prefix will be used for blank nodes in the predicate and graph name position of the form tag:w3.org,2013:dsid:

Thanks to Dave Longley for proofing this blog post and providing various corrections.

DRM in HTML5

A few days ago, a proposal was put forward in the HTML Working Group (HTML WG) by Microsoft, Netflix, and Google to take DRM in HTML5 to the next stage of standardization at W3C. This triggered another uproar about the morality and ethics behind DRM and building it into the Web. There are good arguments about morality/ethics on both sides of the debate but ultimately, the HTML WG will decide whether or not to pursue the specification based on technical merit. I (@manusporny) am a member of the HTML WG. I was also the founder of a start-up that focused on building a legal, peer-to-peer, content distribution network for music and movies. It employed DRM much like the current DRM in HTML5 proposal. During the course of 8 years of technical development, we had talks with many of the major record labels. I have first-hand knowledge of the problem and of building a technical solution to address it.

TL;DR: The Encrypted Media Extensions (DRM in HTML5) specification does not solve the problem the authors are attempting to solve, which is the protection of content from opportunistic or professional piracy. The HTML WG should not publish First Public Working Drafts that do not effectively address the primary goal of a specification.

The Problem

The fundamental problem that the Encrypted Media Extensions (EME) specification seems to be attempting to solve is to find a way to reduce piracy (since eliminating piracy on the Web is an impossible problem to solve). This is a noble goal as there are many content creators and publishers that are directly impacted by piracy. These are not faceless corporations, they are people with families that depend on the income from their creations. It is with this thought in mind that I reviewed the specification on a technical basis to determine if it would lead to a reduction in piracy.

Review Notes for Encrypted Media Extensions (EME)

Introduction

The EME specification does not specify a DRM scheme; rather, it explains the architecture for a DRM plug-in mechanism. This will lead to plug-in proliferation on the Web. Plugins are detrimental to inter-operability because it is inevitable that the DRM plugin vendors will not be able to support all platforms at all times. So, some people will be able to view content, and others will not.

A simple example of the problem is Silverlight by Microsoft. Take a look at the plugin details for Silverlight; specifically, click on the “System Requirements” tab. Silverlight is Microsoft’s creation. Microsoft is a HUGE corporation with very deep pockets. They can and have thrown a great deal of money at solving very hard problems. Even Microsoft does not support its flagship plugin on Internet Explorer 8 running on older versions of its own operating system, or on the latest version of Chrome on certain versions of Windows and Mac. If Microsoft can’t make its flagship Web plugin work across all major Operating Systems today, what chance does a much smaller DRM plugin company have?

The purpose of a standard is to increase inter-operability across all platforms. It has been demonstrated that plug-ins, on the whole, harm inter-operability in the long run and often create many security vulnerabilities. The one shining exception is Flash, but we should not mistake an exception for the rule. Also note that Flash is backed by Adobe, a gigantic multi-national corporation with very deep pockets.

1.1 Goals

The goals section does not state the actual purpose of the specification. It states meta-purposes like: “Support a range of content security models, including software and hardware-based models” and “Support a wide range of use cases.”. While those are sub-goals, the primary goal isn’t stated once in the Goals section. The only rational primary goal is to reduce the amount of opportunistic piracy on the Web. Links to piracy data collected over the last decade could help make the case that this is worth doing.

1.2.1. Content Decryption Module (CDM)

When we were working on our DRM system, we took almost exactly the same approach that the EME specification does. We had a plug-in system that allowed different DRM modules to be plugged into the system. We assumed that each DRM scheme had a shelf-life of about 2-3 months before it was defeated, so our system would rotate the DRM modules every 3 months. We had plans to create genetic algorithms that would encrypt and watermark data into the file stream and mutate the encryption mechanism every couple of months to keep the pirates busy. It was a very complicated system to keep working because one slip up in the DRM module meant that people couldn’t view the content they had purchased. We did get the system working in the end, but it was a nightmare to make sure that the DRM modules to decrypt the information were rotated often enough to be effective while ensuring that they worked across all platforms.

Having first-hand knowledge of how such a system works, it’s a pretty terrible idea for the Web because it takes a great deal of competence and coordination to pull something like this off. I would expect the larger Content Protection companies to not have an issue with this. The smaller Content Protection companies, however, will inevitably have issues with ensuring that their DRM modules work across all platforms.

The bulk of the specification

The bulk of the specification is what you would expect from a system like this, so I won’t go into the gory details. There were two major technical concerns I had while reading through the implementation notes.

The first is that key retrieval is handled by JavaScript code, which means that anybody using a browser could copy the key data. This means that if a key is sent in the clear, the likelihood that the DRM system could be compromised goes up considerably because the person that is pirating the content knows the details necessary to store and decrypt the content.

If the goal is to reduce opportunistic piracy, all keys should be encrypted so that snooping by the browser doesn’t result in the system being compromised. Otherwise, all you would need to do is install a plugin that shares all clear-text keys with something like Mega. Pirates could use those keys to then decrypt byte-streams that do not mutate between downloads. To my knowledge, most DRM’ed media delivery does not encrypt content on a per-download basis. So, the spec needs to make it very clear that opaque keys MUST be used when delivering media keys.
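
As a hedged illustration of why this matters, here is roughly what a clear-key style flow looks like from page script using the present-day EME API names (requestMediaKeySystemAccess and MediaKeySession, which differ from the 2013 draft). The point is only that the license exchange passes through page JavaScript, where any browser extension can observe it; the license server URL is made up and error handling is omitted:

// Hedged sketch using the modern EME API shape; not the exact 2013 draft API.
var video = document.querySelector('video');
navigator.requestMediaKeySystemAccess('org.w3.clearkey', [{
  initDataTypes: ['cenc'],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }]
}]).then(function (access) {
  return access.createMediaKeys();
}).then(function (mediaKeys) {
  return video.setMediaKeys(mediaKeys).then(function () {
    var session = mediaKeys.createSession();
    session.addEventListener('message', function (event) {
      // event.message is the license request. Page JavaScript forwards it to the
      // license server and hands the response (which, for clearkey, contains the
      // raw content keys) back to the CDM; both are visible to page script here.
      fetch('https://license.example.com/clearkey', { // hypothetical URL
        method: 'POST',
        body: event.message
      }).then(function (response) {
        return response.arrayBuffer();
      }).then(function (license) {
        return session.update(license);
      });
    });
    video.addEventListener('encrypted', function (event) {
      session.generateRequest(event.initDataType, event.initData);
    });
  });
});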

One of the DRM systems we built, which became the primary way we did things, would actually re-encrypt the byte stream for every download. So even if a key was compromised, you couldn’t use the key to decrypt any other downloads. This was massively computationally expensive, but since we were running a peer-to-peer network, the processing was pushed out to the people downloading stuff in the network and not our servers. Sharing of keys was not possible in our DRM system, so we could send the decryption keys in the clear. I doubt many of the Content Protection Networks will take this approach as it would massively spike the cost of delivering content.

6. Simple Decryption

The “org.w3.clearkey” Key System indicates a plain-text clear (unencrypted) key will be used to decrypt the source. No additional client-side content protection is required.

Wow, what a fantastically bad idea.

  1. This sends the decryption key in the clear. This key can be captured by any Web browser plugin. That plugin can then share the decryption key and the byte stream with the world.
  2. It duplicates the purpose of Transport Layer Security (TLS).
  3. It doesn’t protect anything while adding a very complex way of shipping an encrypted byte stream from a Web server to a Web browser.

So. Bad. Seriously, there is nothing secure about this mechanism. It should be removed from the specification.

9.1. Use Cases: “user is not an adversary”

This is not a technical issue, but I thought it would be important to point it out. This “user is not an adversary” text can be found in the first question about use cases. It insinuates that people that listen to radio and watch movies online are potential adversaries. As a business owner, I think that’s a terrible way to frame your customers.

Thinking of the people that are using the technology that you’re specifying as “adversaries” is also largely wrong. 99.999% of people using DRM-based systems to view content are doing it legally. The folks that are pirating content are not sitting down and viewing the DRM stream; they have acquired a non-DRM stream from somewhere else, like Mega or The Pirate Bay, and are watching that. This language is unnecessary and should be removed from the specification.

Conclusion

There are some fairly large security issues with the text of the current specification. Those can be fixed.

The real goal of this specification is to create a framework that will reduce content piracy. The specification has not put forward any mechanism that demonstrates that it would achieve this goal.

Here’s the problem with EME – it’s easy to defeat. In the very worst case, there exist piracy rigs that allow you to point an HD video camera at an HD television and record the video and audio without any sort of DRM. That’s the DRM-free copy that will end up on Mega or The Pirate Bay. In practice, no DRM system has survived for more than a couple of years.

Content creators, if your content is popular, EME will not protect your content against a content pirate. Content publishers, your popular intellectual property will be no safer wrapped in anything that this specification can provide.

The proposal does not achieve the goal of the specification; it is not ready for First Public Working Draft publication via the HTML Working Group.

Aaron Swartz, PaySwarm, and Academic Journals

For those of you that haven’t heard yet, Aaron Swartz took his own life two days ago. Larry Lessig has a follow-up on one of the reasons he thinks led to his suicide (the threat of 50 years in jail over the JSTOR case).

I didn’t know Aaron at all. A large number of people that I deeply respect did, and have written about his life with great admiration. I, like most of you that have read the news, have done so while brewing a cauldron of mixed emotions. Saddened that someone that had achieved so much good in their life is no longer in this world. Angry that Aaron chose this ending. Sickened that this is the second recent suicide, Ilya’s being the first, involving a young technologist trying to make the world a better place for all of us. Afraid that other technologists like Aaron and Ilya will choose this path over persisting in their noble causes. Helpless. Helpless because this moment will pass, just like Ilya’s did, with no great change in the way our society deals with mental illness. With no great change, in what Aaron was fighting for, having been realized.

Nobody likes feeling helpless. I can’t mourn Aaron because I didn’t know him. I can mourn the idea of Aaron, of the things he stood for. While reading about what he stood for, several disconnected ideas kept rattling around in the back of my head:

  1. We’ve hit a point of ridiculousness in our society where people at HSBC knowingly laundering money for drug cartels get away with it, while people like Aaron are labeled felons and face upwards of 50 years in jail for “stealing” academic articles. This, even after the publisher of said academic articles dropped the charges. MIT never dropped their charges.
  2. MIT should make it clear that he was not a felon or a criminal. MIT should posthumously pardon Aaron and commend him for his life’s work.
  3. The way we do peer-review and publish scientific research has to change.
  4. I want to stop reading about all of this, it’s heartbreaking. I want to do something about it – make something positive out of this mess.

Ideas, Floating

I was catching up on news this morning when the following floated past on Twitter:

clifflampe: It seems to me that the best way for we academics to honor Aaron Swartz’s memory is to frigging finally figure out open access publishing.

1Copenut: @clifflampe And finally implement a micropayment system like @manusporny’s #payswarm. I don’t want the paper-but I’ll pay for the stories.

1Copenut: @manusporny These new developments with #payswarm are a great advance. Is it workable with other backends like #Middleman or #Sinatra?

This was interesting because we have been talking about how PaySwarm could be applied to academic publishing for a while now. All the discussions to this point have been internal; we didn’t know if anybody would make the connection between the infrastructure that PaySwarm provides and how it could be applied to academic journals. This is up on our ideas board as a potential area where PaySwarm could be applied:

  • Payswarm for peer-reviewed, academic publishing
    • Use Payswarm identity mechanism to establish trusted reviewer and author identities for peer review
    • Use micropayment mechanism to fund research
    • Enable university-based group-accounts for purchasing articles, or refunding researcher purchases

Journals as Necessary Evils

For those in academia, journals are often viewed as a necessary evil. They cost a fortune to subscribe to, farm out most of their work to academics that do it for free, and maintain an iron grip on the scientific publication process. Most academics that I speak with would do away with journal organizations in a heartbeat if there were a viable alternative. Most of the problem is political, which is why we haven’t felt compelled to pursue fixing it. Political problems often need a groundswell of support and a number of champions that are working inside the community. I think the groundswell is almost here. I don’t know who the set of academic champions are that will push this forward. Additionally, if nobody takes the initiative to build such a system, things won’t change.

Here’s what we (Digital Bazaar) have been thinking. To fix the problem, you need at least the following core features:

  • Web-scale identity mechanisms – so that you can identify reviewers and authors for the peer-review process regardless of which site is publishing or reviewing a paper.
  • Decentralized solution – so that universities and researchers drive the process – not the publishers of journals.
  • Some form of remuneration system – you want to reward researchers with heavily cited papers, but in a way that makes it very hard to game the system.

Scientific Remuneration

PaySwarm could be used to implement each of these core features. At its core, PaySwarm is a decentralized payment mechanism for the Web. It also has a solid, decentralized identity mechanism that does not violate your privacy. There is a demo that shows how it can be applied to WordPress blogs: just an abstract is published, and if the reader wants to see more of the article, they can pay a small fee to read it. It doesn’t take a big stretch of the imagination to replace “blog article” with “research paper”. The hope is that researchers would set access prices on articles such that any purchase to access the research paper would go directly to funding their current research. This would empower universities and researchers with an additional revenue stream while reducing the grip that scientific publishers currently have on our higher-education institutions.

A Decentralized Peer-review Process

Remuneration is just one aspect of the problem. Arguably, it is the lesser of the problems in academic publishing. The biggest technical problem is how you do peer review on a global, distributed scale. Quite obviously, you need a solid identity system that can identify scientists over the long term. You need to understand a scientist’s body of work and how respected their research is in their field. You also need a review system that is capable of pairing scientists and papers in need of review. PaySwarm has a strong identity system in place using the Web as the identification mechanism. Here is the PaySwarm identity that I use for development: https://dev.payswarm.com/i/manu. Clearly, paper publishing systems wouldn’t expose that identity URL to people using the system, but I include it to show what a Web-scale identifier looks like.

Web-scale Identity

If you go to that identity URL, you will see two sets of information: my public financial accounts and my digital signature keys. A PaySwarm Authority can annotate this identity with even more information, like whether or not an e-mail address has been verified against the identity. Is there a verified cellphone on record for the identity? Is there a verified driver’s license on record for the identity? What about a Twitter handle? A Google+ handle? All of these pieces of information can be added and verified by the PaySwarm Authority in order to build an identity that others can trust on the Web.

What sorts of pieces of information need to be added to a PaySwarm identity to trust its use for academic publishing? Perhaps a list of articles published by the identity? Review comments for all other papers that have been reviewed by the identity? Areas of research that others have certified that the identity is an expert on? This is pretty basic Web-of-trust stuff, but it’s important to understand that PaySwarm has this sort of stuff baked into the core of the design.

The Process

Leveraging identity to make decentralized peer-review work is the goal, and here is how it would work from a researcher perspective:

  1. A researcher would get a PaySwarm identity from any PaySwarm Authority; there is no cost associated with getting such an identity. This sub-system is already implemented in PaySwarm.
  2. A researcher would publish an abstract of their paper in a Linked Data format such as RDFa. This abstract would identify the authors of the paper and some other basic information about the paper. It would also carry a digital signature on the information using the PaySwarm identity that was acquired in the previous step (see the sketch after this list). The researcher would set the cost to access the full article using any PaySwarm-compatible system. All of this is already implemented in PaySwarm.
  3. A paper publishing system would be used to request a review among academic peers. Those peers would review the paper and publish digital signatures on review comments, possibly with a notice that the paper is ready to be published. This sub-system is fairly trivial to implement and would mirror the current review process with the important distinction that it would not be centralized at journal publications.
  4. Once a pre-set limit on the number of positive reviews has been met, the paper publishing system would place its stamp of approval on the paper. Note that different paper publishing systems may have different metrics just as journals have different metrics today. One benefit to doing it this way is that you don’t need a paper publishing system to put its stamp of approval on a paper at all. If you really wanted to, you could write the software to calculate whether or not the paper has gotten the appropriate amount of review because all of the information is on the Web by default. This part of the system would be fairly trivial to write once the metrics were known. It may take a year or two to get the correct set of metrics in place, but it’s not rocket science and it doesn’t need to be perfect before systems such as this are used to publish papers.

From a reviewer perspective, it would work like so:

  1. You are asked to review papers by your peers once you have an acceptable body of published work. All of your work can be verified because it is tied to your PaySwarm identity. All review comments can be verified as they are tied to other PaySwarm identities. This part is fairly trivial to implement, most of the work is already done for PaySwarm.
  2. Once you review a paper, you digitally sign your comments on the paper. If it is a good paper, you also include a claim that it is ready for broad publication. Again, technically simple to implement.
  3. Your reputation builds as you review more papers. The way that reputation is calculated is outside of the scope of this blog post mainly because it would need a great deal of input from academics around the world. Reputation is something that can be calculated, but many will argue about the algorithm and I would expect this to oscillate throughout the years as the system grows. In the end, there will probably be multiple reputation algorithms, not just one. All that matters is that people trust the reputation algorithms.

Freedom to Research and Publish

The end-goal is to build a system that empowers researchers and research institutions, is far more transparent than the current peer-reviewed publishing system, and remunerates the people doing the work more directly. You will also note that at no point does a traditional journal enter the picture to give you a stamp of approval and charge you a fee for publishing your paper. Researchers are in control of the costs at all stages. As I’ve said above, the hard part isn’t the technical nature of the project, it’s the political nature of it. I don’t know if this is enough of a pain-point among academics to actually start doing something about it today. I know some are, but I don’t know if many would use such a system over the draw of publications like Nature, PLOS, Molecular Genetics and Genomics, and Planta. Quite obviously, what I’ve proposed above isn’t a complete road map. There are issues and details that would need to be hammered out. However, I don’t understand why a system like this doesn’t already exist, so I implore the academic community to explain why what I’ve laid out above hasn’t been done yet.

It’s obvious that a system like this would be good for the world. Building such a system might have reduced the possibility of us losing someone like Aaron in the way that we did. He was certainly fighting for something like it. Talking about it makes me feel a bit less helpless than I did yesterday. Maybe making something good out of this mess will help some of you out there as well. If others offer to help, we can start building it.

So how about it researchers of the world, would you publish all of your research through such a system?

Objection to Microdata Candidate Recommendation

Full disclosure: I’m the current chair of the standards group at the World Wide Web Consortium that created the newest version of RDFa, editor of the HTML5+RDFa 1.1 and RDFa Lite 1.1 specifications, and I’m also a member of the HTML Working Group.

Edit: 2012-12-01 – Updated the article to rephrase some things, and include rationale and counter-arguments at the bottom in preparation for the HTML WG poll on the matter.

The HTML Working Group at the W3C is currently trying to decide if they should transition the Microdata specification to the next stage in the standardization process. There has been a call for consensus to transition the spec to the Candidate Recommendation stage. The problem is that we already have a set of specifications that are official W3C recommendations that do what Microdata does and more. RDFa 1.1 became an official W3C Recommendation last summer. From a standards perspective, this is a mistake and sends a confused signal to Web developers. Officially supporting two specifications that do almost exactly the same thing in almost exactly the same way is, ultimately, a failure to standardize.

The fact that RDFa already does what Microdata does has been elaborated upon before:

Mythical Differences: RDFa Lite vs. Microdata
An Uber-comparison of RDFa, Microdata, and Microformats

Here’s the problem in a nutshell: The W3C is thinking of ratifying two completely different specifications that accomplish the same thing in basically the same way. The functionality of RDFa, which is already a W3C Recommendation, overlaps Microdata by a large margin. In fact, RDFa Lite 1.1 was developed as a plug-in replacement for Microdata. The full version of RDFa can also do a number of things that Microdata cannot, such as datatyping, associating more than one type per object, embed-ability in languages other than HTML, ability to easily publish and mix vocabularies, etc.

Microdata would have easily been dead in the water had it not been for two simple facts: 1) The editor of the specification works at Google, and 2) Google pushed Microdata as the markup language for schema.org before also accepting RDFa markup. The first enabled Google and the editor to work on schema.org without signalling to the public that it was creating a competitor to Facebook’s Open Graph Protocol. The second gave Microdata enough of a jump start to establish a foothold for schema.org markup. There have been a number of studies that show that Microdata’s sole use case (99% of Microdata markup) is for the markup of schema.org terms. Microdata is not widely used outside of that context, we now have data to back up what we had predicted would happen when schema.org made their initial announcement for Microdata-only support. Note that schema.org now supports both RDFa and Microdata.

It is typically a bad idea to have two formats published by the same organization that do the same thing. It leads to Web developer confusion surrounding which format to use. One of the goals of Web standards is to reduce, or preferably eliminate, the confusion surrounding the correct technology decision to make. The HTML Working Group and the W3C are failing miserably on this front. There is more confusion today about picking Microdata or RDFa because they accomplish the same thing in effectively the same way. The only reason both exist is political.

If we step back and look at the technical arguments, there is no compelling reason that Microdata should be a W3C Recommendation. There is no compelling reason to have two specifications that do the same thing in basically the same way. Therefore, as a member of the HTML Working Group (not as a chair or editor of RDFa) I object to the publication of Microdata as a Candidate Recommendation.

Note that this is not a W3C formal objection. This is an informal objection to publish Microdata along the Recommendation track. This objection will not become an official W3C formal objection if the HTML Working Group holds a poll to gather consensus around whether Microdata should proceed along the Recommendation publication track. I believe the publication of a W3C Note will continue to allow Google to support Microdata in schema.org, but will hopefully correct the confused message that the W3C has been sending to Web developers regarding RDFa and Microdata. We don’t need two specifications that do almost exactly the same thing.

The message sent by the W3C needs to be very clear: There is one recommendation for doing structured data markup in HTML. That recommendation is RDFa. It addresses all of the use cases that have been put forth by the general Web community, and it’s ready for broad adoption and implementation today.

If you agree with this blog post, make sure to let the HTML Working Group know that you do not think that the W3C should ratify two specifications that do almost exactly the same thing in almost exactly the same way. Now is the time to speak up!

Summary of Facts and Arguments

Below is a summary of arguments presented as a basis for publishing Microdata along the W3C Note track:

  1. RDFa 1.1 is already a ratified Web standard as of June 7th 2012 and absorbed almost every Microdata feature before it became official. If the majority of the differences between RDFa and Microdata boil down to different attribute names (property vs. itemprop), then the two solutions have effectively converged on syntax and W3C should not ratify two solutions that do effectively the same thing in almost exactly the same way.
  2. RDFa is supported by all of the major search crawlers, including Google (and schema.org), Microsoft, Yahoo!, Yandex, and Facebook. Microdata is not supported by Facebook.
  3. RDFa Lite 1.1 is feature-equivalent to Microdata. Over 99% of Microdata markup can be expressed easily in RDFa Lite 1.1. Converting from Microdata to RDFa Lite is as simple as a search and replace of the Microdata attributes with RDFa Lite attributes. Conversely, Microdata does not support a number of the more advanced RDFa features, like being able to tell the difference between feet and meters.
  4. You can mix vocabularies with RDFa Lite 1.1, supporting both schema.org and Facebook’s Open Graph Protocol (OGP) using a single markup language. You don’t have to learn Microdata for schema.org and RDFa for Facebook – just use RDFa for both.
  5. The creator of the Microdata specification doesn’t like Microdata. When people are not passionate about the solutions that they create, the desire to work on those solutions and continue to improve upon them is muted. The RDFa community is passionate about the technology that they have created together and has strived to make it better since the standardization of RDFa 1.0 back in 2008.
  6. RDFa Lite 1.1 is fully upward-compatible with RDFa 1.1, allowing you to seamlessly migrate to a more feature-rich language as your Linked Data needs grow. Microdata does not support any of the more advanced features provided by RDFa 1.1.
  7. RDFa deployment is broader than Microdata. RDFa deployment continues to grow at a rapid pace.
  8. The economic damage generated by publishing both RDFa and Microdata along the Recommendation track should not be underestimated. W3C should try to provide clear direction in an attempt to reduce the economic waste that a “let the market sort it out among two nearly identical solutions” strategy will generate. At some point, the market will figure out that both solutions are nearly identical, but only after publishing and building massive amounts of content and tooling for both.
  9. The W3C Technical Architecture Group (TAG), which is responsible for ensuring that the core architecture of the Web is sound, has raised their concern about the publication of both Microdata and RDFa as recommendations. After the W3C TAG raised their concerns, the RDFa Working Group created RDFa Lite 1.1 to be a near feature-equivalent replacement for Microdata that was also backwards-compatible with RDFa 1.0.
  10. Publishing a standard that does almost exactly the same thing as an existing standard in almost exactly the same way is a failure to standardize.

Counter-arguments and Rebuttals

[This is a] classic case of monopolistic anti-competitive protectionism.

No, this is an objection to publishing two specifications that do almost exactly the same thing in almost exactly the same way along the W3C Recommendation publication track. Protectionism would have asked that all work on Microdata be stopped and the work scuttled. The proposed resolution does not block anybody from using Microdata, nor does it try to stop or block the Microdata work from happening in the HTML WG. The objection asks that the W3C decide what the best path forward for Web developers is based on a fairly complicated set of predicted outcomes. This is not an easy decision. The objection is intended to ensure that the HTML Working Group has this discussion before we proceed to Candidate Recommendation with Microdata.

<manu1> I'd like the W3C to work as well, and I think publishing two specs that accomplish basically 
        the same thing in basically the same way shows breakage.
<annevk> Bit late for that. XDM vs DOM, XPath vs Selectors, XSL-FO vs CSS, XSLT vs XQuery, 
         XQuery vs XQueryX, RDF/XML vs Turtle, XForms vs Web Forms 2.0, 
         XHTML 1.0 vs HTML 4.01, XML 1.0 4th Edition vs XML 1.0 5th Edition, 
         XML 1.0 vs XML 1.1, etc.

[link to full conversation]

While W3C does have a history of publishing competing specifications, there have been features in each competing specification that were compelling enough to warrant the publication of both standards. For example, XHTML 1.0 provided a standard set of rules for validating documents that was aligned with XML and a decentralized extension mechanism that HTML4.01 did not. Those two major features were viewed as compelling enough to publish both specifications as Recommendations via W3C.

For authors, the differences between RDFa and Microdata are so small that, for 99% of documents in the wild, you can convert a Microdata document to an RDFa Lite 1.1 document with a simple search and replace of attribute names. That demonstrates that the syntaxes for both languages are different only in the names of the HTML attributes, and that does not seem like a very compelling reason to publish both specifications as Recommendations.
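
As a small illustration of that search-and-replace (the markup is invented for this example), the following Microdata snippet:

<div itemscope itemtype="http://schema.org/Person">
  <span itemprop="name">Manu Sporny</span>
</div>

becomes RDFa Lite 1.1 by swapping the Microdata attributes for their RDFa Lite counterparts (itemscope/itemtype become vocab/typeof, and itemprop becomes property):

<div vocab="http://schema.org/" typeof="Person">
  <span property="name">Manu Sporny</span>
</div>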

Microdata’s processing algorithm is vastly simpler, which makes the data extracted more reliable and, when something does go wrong, makes it easier for 1) users to debug their own data, and 2) easier for me to debug it if they can’t figure it out on their own.

Microdata’s processing algorithm is simpler for two major reasons:

The complexity of implementing a processor has little bearing on how easy it is for developers to author documents. For example, XHTML 1.0 had a simpler processing model, which made the data that was extracted more reliable and, when something went wrong, easier to debug. However, HTML5 supports more use cases and recovers from errors in cases where it can, which has made it more popular with Web developers in the long run.

Additionally, authors of Microdata and RDFa should be using tools like RDFa Play to debug their markup. This is true for any Web technology. We debug our HTML, JavaScript, and CSS by loading it into a browser and bringing up the debugging tools. This is no different for Microdata and RDFa. If you want to make sure your markup does what you want, make sure to verify it by using a tool and not by trying to memorize the processing rules and running them through your head.

For what it is worth, I personally think RDFa is generally a technically better solution. But as Marcos says, “so what”? Our job at W3C is to make standards for the technology the market decides to use.

If we think one of these technologies is a technically better solution than the other one, we should signal that realization at some level. The most basic thing we could do is to make one an official Recommendation, and the other a Note. I also agree that our job at W3C is to make standards that the technology market decides to use, but clearly this particular case isn’t that cut-and-dried. Schema.org’s only option in the beginning was to use Microdata, and since authors didn’t want to risk not showing up in the search engines, they used Microdata. This forced the market to go in one direction.

This discussion would be in a different place had Google kept the playing field level. That is not to say that Google didn’t have good reasons for making the decisions that they did at the time, but those reasons influenced the development of RDFa, and RDFa Lite 1.1 was the result. The differences between Microdata and RDFa have been removed and a new question is in front of us: given two almost identical technologies, should the W3C publish two specifications that do almost exactly the same thing in almost exactly the same way?

… the [HTML] Working Group explicitly decided not to pick a winner between HTML Microdata and HTML+RDFa

The question before the HTML WG at the time was whether or not to split Microdata out of the HTML5 specification. The HTML Working Group did not discuss whether the publishing track for the Microdata document should be the W3C Note track or the W3C Recommendation track. At the time the decision was made, RDFa Lite 1.1 did not exist, let alone as a W3C Recommendation, nor did RDFa and Microdata overlap in functionality as greatly as they do now. Additionally, the HTML WG decision at that time states the following under the “Revisiting the issue” section:

“If Microdata and RDFa converge in syntax…”

Microdata and RDFa have effectively converged in syntax. Since Microdata can be interpreted as RDFa through a simple search-and-replace of attributes, the two languages now differ in little more than the attribute names. The proposal is not to stop work on Microdata; let that work proceed in this group, but let it proceed on the W3C Note publication track.

Closing Statements

I felt uneasy raising this issue because it’s a touchy and painful subject for everyone involved. Even if the discussion is painful, it is a healthy one for a standardization body to have from time to time. What I wanted was for the HTML Working Group to have this discussion. If the upcoming poll finds that the consensus of the HTML Working Group is to continue with the Microdata specification along the Recommendation track, I will not pursue a W3C Formal Objection. I will respect whatever decision the HTML Working Group makes as I trust the Chairs of that group, the process that they’ve put in place, and the aggregate opinion of the members in that group. After all, that is how the standardization process is supposed to work and I’m thankful to be a part of it.

The Problem with RDF and Nuclear Power

Full disclosure: I am the chair of the RDFa Working Group and the JSON-LD Community Group, a member of the RDF Working Group, and a participant in other Semantic Web initiatives. I believe in this stuff, but I have been critical of the path we’ve been taking for a while now.

The Resource Description Framework (a model for publishing data on the Web) has a horrible public perception problem, akin to how many people in the USA view nuclear power. The coal industry campaigned quite aggressively to implant the notion that nuclear power was not as safe as coal. Couple this public misinformation campaign with a few nuclear-power-related catastrophes and it is no surprise that the current public perception toward nuclear power can be summarized as: “Not in my back yard”. Never mind that, per terawatt-hour generated, nuclear power has killed far fewer people since its inception than coal. Never mind that it is one of the more viable power sources if we gaze hundreds of years into Earth’s future, especially with the recent renewed interest in Liquid Fluoride Thorium Reactors. When we look toward the future, the path is clear, but public perception is preventing us from proceeding down that path at the rate we need to in order to prevent more damage to the Earth.

RDF shares a number of similarities with nuclear power. RDF is one of the best data modeling mechanisms that humanity has created, and looking into the future, there is no equally powerful, viable alternative. So why has progress on this very exciting technology been so slow? There was no public misinformation campaign, so where did this negative view of RDF come from?

In short, RDF/XML was the Semantic Web’s Three Mile Island incident. When it was released, developers confused RDF/XML (bad) with the RDF data model (good). There weren’t enough people or enough time to counteract the negative press that RDF was receiving as a result of RDF/XML, and thus we are where we are today because of this negative perception of RDF. Even Wikipedia’s page on the matter seems to imply that RDF/XML is RDF. Some purveyors of RDF think that the public perception problem isn’t that bad. I think that when developers hear RDF, they think: “Not in my back yard”.

The solution to this predicament: Stop mentioning RDF and the Semantic Web. Focus on tools for developers. Do more dogfooding.

To explain why we should adopt this strategy, we can look to Tesla for inspiration. Elon Musk, co-founder of PayPal and now the CEO of Tesla Motors, recently announced the Tesla Supercharger project. At a high level, the project accomplishes the following jaw-dropping things:

  1. It creates a network of charging stations for electric cars that are capable of charging a Tesla in less than 30 minutes.
  2. The charging stations are solar powered and generate more electricity than the cars use, feeding the excess power into the local power grid.
  3. The charging stations are free to use for any person that owns a Tesla vehicle.
  4. The charging stations are operational and available today.

This means that, in 4-5 years, any owner of a Tesla vehicle will be able to drive anywhere in the USA, for free, powered by the sun. No person in their right mind (with the money) would pass up that offer. No fossil fuel-based company will ever be able to provide “free”, clean energy. This is the sort of proposition that we, the RDF/Linked Data/Semantic Web community, need to make, and I think we can reposition ourselves to do just that.

Here is what the RDF and Linked Data community can learn from Tesla:

  1. The message shouldn’t be about the technology. It should be about the problems we have today and a concrete solution on how to address those problems.
  2. Demonstrate real value. Stop talking about the beauty of RDF, theoretical value, or design. Deliver production-ready, open-source software tools.
  3. Build a network of believers by spending more of our time working with Web developers and open-source projects to convince them to publish Linked Data. Dogfood our work.

Here is how we’ve applied these lessons to the JSON-LD work:

  1. We don’t mention RDF in the specification unless absolutely necessary, and in most cases it isn’t necessary. RDF is plumbing: it’s in the background, and developers don’t need to know about it to use JSON-LD (see the short sketch after this list).
  2. We purposefully built production-ready tools for JSON-LD from day one: a playground, multiple production-quality implementations, and a JavaScript implementation of the browser-based API.
  3. We are working with Wikidata, Wikimedia, Drupal, the Web Payments and Read Write Web groups at W3C, and a number of other private clients to ensure that we’re providing real value and dogfooding our work.
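
As a small illustration of the first point, here is a sketch of what “RDF as plumbing” looks like in practice. It uses the PyLD library as an example JSON-LD processor; the library choice, the sample document, and the pip install pyld step are assumptions made for the sketch:

    # A developer works with ordinary JSON; the JSON-LD processor handles
    # the data model underneath, and RDF never has to be mentioned.
    from pyld import jsonld

    doc = {
        "@context": {
            "name": "http://schema.org/name",
            "homepage": {"@id": "http://schema.org/url", "@type": "@id"}
        },
        "name": "Manu Sporny",
        "homepage": "http://manu.sporny.org/"
    }

    # Expansion rewrites the short terms as full IRIs; the result is still
    # plain JSON that can be handed to any other tool in the ecosystem.
    print(jsonld.expand(doc))

Nothing in the sketch requires a developer to learn about triples, graphs, or serializations; the context does the mapping and the processor does the rest.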

Ultimately, RDF and the Semantic Web are of no interest to Web developers, and they suffer from a seriously negative public perception on top of that. We should stop talking about them. Let’s shift the focus to Linked Data, to the problems that Web developers face today, and to concrete, demonstrable solutions to those problems.

Note: This post isn’t meant as a slight against any one person or group. I was just working on the JSON-LD spec, aggressively removing prose discussing RDF, and the analogy popped into my head. This blog post was an exercise in organizing my thoughts on the matter.

HTML5 and RDFa 1.1

Full disclosure: I’m the chair of the newly re-chartered RDFa Working Group at the W3C as well as a member of the HTML WG.

The newly re-chartered RDFa Working Group at the W3C published a First Public Working Draft of HTML5+RDFa 1.1 today. This might be confusing to those of you that have been following the RDFa specifications. Keep in mind that HTML5+RDFa 1.1 is different from XHTML+RDFa 1.1, RDFa Core 1.1, and RDFa Lite 1.1 (which are official specs at this point). This is specifically about HTML5 and RDFa 1.1. The HTML5+RDFa 1.1 spec reached Last Call (aka: almost done) status at W3C via the HTML Working Group last year. So, why are we doing this now and what does it mean for the future of RDFa in HTML5?

Here’s the issue: the document was being unnecessarily held up by the HTML5 specification. In the most favorable scenario, HTML5 is expected to become an official standard in 2014. RDFa Core 1.1 became an official standard in June 2012. Per the W3C process, HTML5+RDFa 1.1 would have had to wait until 2014 to become an official W3C specification, even though it will be ready to go a few months from now. W3C policy states that every specification your specification depends on must become an official standard before yours can. Since HTML5+RDFa 1.1 is a language profile for RDFa 1.1 that is layered on top of HTML5, it had no choice but to wait for HTML5 to become official. Boo.

Thankfully, the chairs of the HTML WG and the RDFa WG, along with W3C staff, found an alternate path forward for HTML5+RDFa 1.1. Since the specification doesn’t depend on any “at risk” features in HTML5, and since all of the features that RDFa 1.1 uses in HTML5 have been implemented in all of the Web browsers, there is very little chance that those features will be removed in the future. This means that HTML5+RDFa 1.1 could become an official W3C specification before HTML5 reaches that status. So, that’s what we’re going to try to do. Here’s the plan:

  1. Get approval from W3C member companies to re-charter the RDFa WG to take over publishing responsibility of HTML5+RDFa 1.1. [Done]
  2. Publish the HTML5+RDFa 1.1 specification under the newly re-chartered RDFa WG. [Done]
  3. Start the clock on a new patent exclusion period and resolve issues. Wait a minimum of 6 months to go to W3C Candidate Recommendation (feature freeze) status, due to patent policy requirements.
  4. Fast-track to an official W3C specification (the test suite is already done, and interoperable implementations already exist).

There are a few minor issues that still need to be ironed out, but the RDFa WG is on the job and those issues will get resolved in the next month or two. If everything goes according to plan, we should be able to publish HTML5+RDFa 1.1 as an official W3C standard in 7-9 months. That’s good for RDFa, good for Web developers, and good for the Web.

HTML5+RDFa 1.1 published – plan to become official spec in 7 months! http://t.co/oCx8YS7S #w3c #html5 #rdfa

A very moving Haka performed for fallen soldiers in New Zealand (video): http://t.co/wxHhs4Of #visceral #haka #nz #kiwi

If you didn’t see Bill Clinton’s speech at the DNC, it was fantastically precise: http://t.co/lNBy5rSG #dnc #math #greatspeech

@rouninmedia Thanks – glad you discovered RDFa and all the great work (and people) behind it. #w3c #rdfa

RT @rouninmedia: no need to learn Microdata for http://t.co/KJRNfw8o & RDFa for FB OpenGraph. RDFa suffices. Here comes the Semantic Web.

New RDFa WG publishes HTML5+RDFa 1.1, intends to go to REC in 8-9 months: http://t.co/MdnJ2RAu #w3c #rdfa #html5

o_O – Have you /seen/ Michelle Obama’s speech!? Totally blows the doors off of every Obama speech ever given: http://t.co/Vq1LAqZQ

Occupy Wall Street Tech Working Group drops by to chat with W3C Web Payments Working Group: http://t.co/vAbh8Wfu #ows #w3c #payswarm

JSON-LD group discusses NoSQL talk, RDF terminology, syntax intro, and future of .flatten()/.frame(): http://t.co/fonVXXDH #w3c #jsonld

Web Foundation releases global stats on the Web’s growth, utility and impact on people & nations: http://t.co/cpL4435S /via @timberners_lee

RT @ivan_herman: RDFa, microdata, turtle-in-HTML, and RDFLib http://t.co/yS9YRYgF

@venessamiemis Happy birthday! Hope your weekend will be filled with celebrating. :)

@benadida congrats on your new little one (and your AMAZING SAVINGS!) – hope each of you are doing well – all the best.

Tea-partier picks fight with Irish president (2010), does not go well: http://t.co/htzV7EVu /via Nadine Hack

"Let’s build a goddamn Te…

“Let’s build a goddamn Tesla Museum” raises $1M in 8 days via Matt Inmann (The Oatmeal) & Indiegogo: http://t.co/Sg6TutHJ #tesla

Foul mouthed grannies let Akin really know how they feel about his “legitimate rape” comments: http://t.co/XUostHah #nomeansno #akin

@agebhard blame the people with the opinions… besides, you should know better than to abet a religious war before getting on a plane. :)

@sideshowbarker … and browser manufacturers have stated very clearly that they’re not interested in an RDFa API.

@sideshowbarker JSON-LD API: http://t.co/IiegGQtN (the issue is: browser manufacturers don’t care yet…)

@sideshowbarker I was kidding… note the “:P *ducks*” in the o.p. <– This is why I’m not involved in governmental politics. /cc @danbri

Go see this! RT @gkellogg: Talking about publishing structured data from wikis today at 2:00pm. #jsonld #mongodb #nosqlnow

+1 RT @gkellogg: I agree that #microdata made #rdfa better. Now that’s done, its time to move on and get with RDFa.

My new hobby: Trolling @danbri on Twitter. :P /cc @agebhard @scorlosquet @gkellogg

@danbri @gkellogg @scorlosquet @agebhard RDFa is better than Microdata, that’s a fact. :P *ducks*

Great post on JSON-LD, MongoDB, & MediaWiki/Wikia: http://t.co/kE5S5JBW /by @gkellogg /via @ivan_herman #mongo #wiki #w3c #jsonld

@danbri @agebhard @scorlosquet Yes, absolutely! What Stephane said. (although, there were better ways of approaching that issue). :)

@danbri My point still stands – no good technical reason to use Microdata.

@danbri That said, we’ve seen very little interest in an in-browser API to extract metadata – that’s why we didn’t pursue that route.

@danbri Is there any large deployment of the Microdata API? RDFa API is going to be RDFa -> JSON-LD, and we’re working on it.

@agebhard I agree. That said, now that RDFa Lite 1.1 exists – there is no good technical reason for Microdata: http://t.co/suLnJ1MQ

RT @bergie: still unconvinced of the necessity for #Microdata in a #RDFa world, despite @linclark ‘s excellent #DrupalCon session

Earthworm-like robot oozes along ground, can survive sledgehammers and stomping from puny humans: http://t.co/KXp7ZuxP #mit #robotics

Autonomous robotic plane flies indoors, through parking garage at 10m/s: http://t.co/CgSEUI2O #mit #uav

@aymericbrisse it was, we took care of it. Shouldn’t happen again (hopefully)

JSON-LD group discusses Drupal 8 support, optional features, property generators, language maps: http://t.co/UNyPT9r1 #w3c #jsonld

Web Payments group discusses payment code example, PaySwarm Alpha 4 release, HTML5 WebApp store: http://t.co/OKkL3312 #w3c #payswarm

PaySwarm Alpha 4 released (support for HTML5 Web App stores, new release process, bug fixes): http://t.co/6ymrEvVR #w3c #payswarm

Brilliant talk by Nick Hanauer on the true job creators: http://t.co/bUILUBCx #ted #middle #class

Summary of all JSON-LD specification updates that have happened in the last month: http://t.co/VmDGaVyd #w3c #jsonld

RT @ptwobrussell: If true, this is unbelievably despicable: This is how Visa works: http://t.co/cXCmRlVk /via @rands

PaySwarm Alpha 4 released – support HTML5 app stores, new build system, bug fixes: http://t.co/OTEQ7Iw0 #w3c #payswarm

"Researching" HTML5 …

“Researching” HTML5 games at work… RAPT is awesome (as long as you have a friend you can play it with): http://t.co/uqcb6Qk1

RT @niklasl: I’m well on the way towards implementing a redesigned RDFa DOM API: http://t.co/9Fon3Ryz Live updates, not triple-centric.

RT @danbri: We’re close to being able to round-trip the http://t.co/KJRNfw8o site through RDFa 1.1 #html5 #google #seo #rdfa

FACT: All dogs in Ukraine are trained in Parkour from an early age: http://t.co/jtdVLEMa /via @bsletten #parkour #dogs #ukraine

Call Me Maybe + Chatroulette + Cross Dressing == http://t.co/T0t9dFRD #party

PaySwarm Alpha 3 released – W3C Web Payments reference implementation nears commercialization: http://t.co/eQTET8LU #payswarm #w3c

W3C RDFa Working Group plans to take HTML5+RDFa to official standard in the next six months: http://t.co/m3CmMF6I #w3c #html5 #rdfa

@cygri while not perfect, I think this is a solid step forward: http://t.co/cN54FmhT #rdf #vocab #docs

RT @thelal: @payswarm = Universal #Payment Standard for the #Web and the New Economy http://t.co/IBjk5VPs #futureofmoney

Web Payments group discusses decentralized HTML5 Web App stores, listing assets for sale: http://t.co/kXqI101m #w3c #html5 #payswarm

W3C JSON-LD group discusses pre-processing JSON, synchronous API, array-position-based properties: http://t.co/diGkWb9S #jsonld #w3c

If you missed the Curiosity touchdown on Mars – here’s a video of what happened: http://t.co/uvurdKnB #drama #ridiculous #awesome

Watch live as Curiosity lands on Mars in 75 minutes – 10:30pm PST, 1:30am EST – live stream here: http://t.co/CpviIApr #msl

Why men can’t have it all: http://t.co/7zfeAu1L /via @pemo #fatherhood #startups

Current corporate office status: Gangnam Style – http://t.co/26KZ0pbG #korea #horse #dancing #techno

RT @doriantaylor: Paywalls are awesome because they are super effective reminders that I have better things to do with my time.

@edithyeung Great chatting with you too – glad to hear about http://t.co/9UCLgdwv fighting for developers and the Web! :)

RT @edithyeung: @manusporny Great chatting with you! :) You guys are doing some exciting @w3c stuff for payment! http://t.co/5sfzbsJA

JSON-LD support for Wikidata / Drupal 8 REST APIs (internationalization support): http://t.co/TNrOzcle /cc @Dries #jsonld #w3c

JavaScript on V8 now firmly kicking PHP, Ruby, Python, and Perl’s keister: http://t.co/PMhtDBww /via @davegeist #programming

Bruce Schneier on the Aurora shootings and ‘security theatre’: http://t.co/cXd3gCS2 /via @davegeist #usa #guns #security

Favorite quote of the day: “By all measures, @scorlosquet is a semantic web bad ass.”: http://t.co/4Bl4y7Pl #rdfa #w3c #schema

Phase2 integrates RDFa, rNews & http://t.co/KJRNfw8o into publishing platform 4 news sites: http://t.co/XGA1Z6zJ #rdfa #rnews #w3c

@openpublish online publishing platform improves RDFa support in Drupal 7: http://t.co/IhxN5nsG #rdfa #drupal

"…a vast porno cluster …

“…a vast porno cluster can be seen between Brazil and Japan…”: http://t.co/D4ArWcjJ #ohinternetyousofunny

A Google maps-like map of the Web: http://t.co/ZyLVK6Ev /via @webr3 #web #science

The dark future of retinal displays and biomods: http://t.co/PkiRXDbo /via +Gregory Esau #film #hmm

"OAuth 2.0… the biggest…

“OAuth 2.0… the biggest professional disappointment of my career.” — Eran Hammer, resigns as lead of OAuth: http://t.co/2vos4hT2

@agebhard I’ll see what I can pull together for you… :P

"GNOME3 turned that stupi…

“GNOME3 turned that stupid up to eleven” — on how the Gnome project is dying: http://t.co/O3OxlZWI

@agebhard I could hire some clowns and juggle baby chicks while singing “Poker Face”… if that would help re-infuse some randomness?

pic of plane crash showing pilot/passenger getting stuff out of the plane: http://t.co/kG3sB8Te

whoa – plane just buzzed 100ft over the office, crashed on the other side of building – pilot/passenger OK – caught by the chain link fence.

Femto-photography – imaging at a trillion frames per second: http://t.co/B173sB98 #ted #takethatcanon

Zynga management dumps stock at 4x current stock price just before crash… booo: http://t.co/B5ELXlFO

Google Residential Fiber (holy crap this looks amazing / dammit it’s not offered in Blacksburg, VA): http://t.co/taX5ktVE

RT @cs_conferences: Congrats to AKSW’s Ali Khalili, who won Best Paper at @compsac 2012 for “The RFDa Content Editor”! http://t.co/PO2CKCTH

The bear ladder technique (video): http://t.co/IiQ3Kavs #rescue