All posts in Standards

The Marathonic Dawn of Web Payments

A little over six years ago, a group of doe-eyed Web developers, technologists, and economists decided that the way we send and receive money over the Web was fundamentally broken and needed to be fixed. The tiring dance of filling out your personal details on every website you visited seemed archaic. This was especially true when handing over your debit card number, which is essentially a password to your bank account, to any fly-by-night operation that had something you wanted to buy. It took days to send money that an email could have delivered in milliseconds. Even with the advent of Bitcoin, not much has changed since 2007.

At the time, we naively thought that it wouldn’t take long for the technology industry to catch on to this problem and address it like it has addressed many of the other issues around publishing and communication over the Web. After all, getting paid and paying for services is something all of us do as a fundamental part of modern-day living. Change didn’t come as fast as we had hoped. So we kept our heads down and worked for years, gathering momentum to address this issue on the Web. I’m happy to say that we’ve just had a breakthrough.

The first ever W3C Web Payments Workshop happened two weeks ago. It was a success. Through it, we have taken significant steps toward a better future for the Web and those that make a living by using it. This is the story of how we got from there to here, what the near future looks like, and the broad implications this work has for the Web.

TL;DR: The W3C Web Payments Workshop was a success, we’re moving toward standardizing some technologies around the way we send and receive money on the Web; join the Web Payments Community Group if you want to find out more.

Primordial Web Payment Soup

In late 2007, our merry little band of collaborators started piecing together bits of the existing Web platform in an attempt to come up with something that could be standardized. After a while, it became painfully obvious that the Web platform was missing some fundamental markup and security technologies. For example, there was no standard machine-readable, automatable way of describing an item for sale on the Web. This meant that search engines couldn’t index all the things on the Web that were offered for sale. It also meant that all purchasing decisions had to be made by people. You couldn’t tell your Web browser something like “I trust the New York Times, let them charge me $0.05 per article up to $10 per month for access to their website”. Linked Data seemed like the right solution for machine-readable products, but the Linked Data technologies at the time seemed mired in complex, draconian solutions (SOAP, XML, XHTML, etc.): the bane of most Web developers.

We became involved in the Microformats community and in the creation of technologies like RDFa in the hope that we could apply them to the Web Payments work. When it became apparent that RDFa was only going to solve part of the problem (and potentially produce a new set of problems), we created JSON-LD and started to standardize it through the W3C.
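
To give a sense of what that machine-readable future looks like now that JSON-LD and the schema.org vocabulary exist, here is a rough sketch of an item for sale expressed as plain JSON that a search engine or browser could act on. The document is illustrative (the URLs are placeholders), though the properties are real schema.org terms; the “charge me up to $10 per month” policy would live in your user agent, not in the markup:

{
  "@context": "http://schema.org",
  "@type": "Offer",
  "itemOffered": {
    "@type": "Article",
    "name": "An example news article"
  },
  "price": "0.05",
  "priceCurrency": "USD",
  "seller": "https://newspaper.example.com/"
}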

As these technologies started to grow out of the need to support payments on the Web, it became apparent that we needed to get more people from the general public, government, policy, traditional finance, and technology sectors involved.

Founding a Payment Incubator for the Web

We needed to build a movement around the Web Payments work and the founding of a community was the first step in that movement. In 2009, we founded the PaySwarm Community and worked on the technologies related to payments on the Web with a handful of individuals. In 2011, we transitioned the PaySwarm Community to the W3C and renamed the group to the Web Payments Community Group. To be clear, Community Groups at W3C are never officially sanctioned by W3C’s membership, but they are where most of the pre-standardization work happens. The purpose of the Web Payments Community Group was to incubate payment technologies and lobby W3C to start official standardization work related to how we exchange monetary value on the Web.

What started out as nine people spread across the world has grown into an active community of more than 150 people today. That community includes interesting organizations like Bloomberg, Mozilla, Stripe, Yandex, Ripple Labs, Citigroup, Opera, Joyent, and Telefónica. We have 14 technologies that are in the pre-standardization phase, ready to be placed into the standardization pipeline at W3C if we can get enough support from Web developers and the W3C member organizations.

Traction

In 2013, a number of us thought there was enough momentum to lobby W3C to hold the world’s first Web Payments Workshop. The purpose of the workshop would be to get major payment providers, government organizations, telecommunication providers, Web technologists, and policy makers into the same room to see if they agreed that payments on the Web were broken and whether there was something we could do about it.

In November of 2013, plans were hatched to hold the world’s first Web Payments Workshop. Over the next several months, the W3C, the Web Payments Workshop Program Committee, and the Web Payments Community Group worked to bring together as many major players as possible. The result was something better than we could have hoped for.

The Web Payments Workshop

In March 2014, the Web Payments Workshop was held in the beautiful, historic, and apropos Paris stock exchange, the Palais Brongniart. It was packed with an all-star list of financial and technology industry titans like the US Federal Reserve, Google, SWIFT, Yandex, Mozilla, Bloomberg, ISOC, Rabobank, and 103 other people and organizations that shape financial and Web standards. In true W3C form, every single session was minuted and is available to the public. The sessions focused on the following key areas related to payments and the Web. The entire contents of each session, all 14 hours of discussion, are linked below:

  1. Introductions by W3C and European Commission
  2. Overview of Current and Future Payment Ecosystems
  3. Toward an Ideal Web Payments Experience
  4. Back End: Banks, Regulation, and Future Clearing
  5. Enhancing the Customer and Merchant Experience
  6. Front End: Wallets – Initiating Payment and Digital Receipts
  7. Identity, Security, and Privacy
  8. Wrap-up of Workshop and Next Steps

I’m not going to do any sort of deep dive into what happened during the workshop. W3C will be releasing a workshop report in the next few weeks that will do justice to summarizing what went on during the event. The rest of this blog post will focus on what will most likely happen after that workshop report comes out.

The Next Year in Web Payments

The next step of the W3C process is to convene an official group that will take all of the raw input from the Web Payments Workshop, the papers submitted to the event, and input from various W3C Community Groups and the industry at large, and narrow the scope of work to something tightly focused that will still have a broad, positive impact on the Web.

This group will most likely operate for 6-12 months to make its initial set of recommendations for work that should start immediately in existing W3C Working Groups. It may also recommend that entirely new groups be formed at W3C to start standardization work. Once standardization work starts, it will be another 3-4 years before we see an official Web standard. While that sounds like a long time, keep in mind that large chunks of the work will happen in parallel, or have already happened. For example, the first iteration of the RDFa and JSON-LD bits of the Web Payments work are already done and standardized. The HTTP Signatures work is quite far along (from a technical standpoint, it still needs a thorough security review and consensus to move forward).

So, what kind of new work can we expect to get started at W3C? While nothing is certain, looking at the 14 pre-standards documents that the Web Payments Community Group is working on helps us understand where the future might take us. The payment problems of highest concern mentioned in the workshop papers also hint at the sorts of issues that need to be addressed for payments on the Web. Below are a few ideas of what may spin out of the work over the next year. Keep in mind that these predictions are mine and mine alone; they are in no way tied to any sort of official consensus either at the W3C or in the Web Payments Community Group.

Identity and Verified Credentials

One of the most fundamental problems that was raised at the workshop was the idea that identity on the Web is broken. That is, being able to prove who you are to a website, such as a bank or merchant, is incredibly difficult. Since it’s hard for us to prove who we are on the Web, fraud levels are much higher than they should be and peer-to-peer payments require a network of trusted intermediaries (which drive up the cost of the simplest transaction).

The Web Payments Community Group is currently working on a technology called Identity Credentials that could be applied to this problem. It’s also closely related to the website login problem that Mozilla Persona was attempting to solve. Security and privacy concerns abound in this area, so we have to design for them carefully. We need a privacy-conscious identity solution for the Web, and it’s possible that a new Working Group may need to be created to push forward initiatives like credential-based login for the Web. I personally think it would be unwise for W3C members to put off the creation of an Identity Working Group for much longer.

Wallets, Payment Initiation, and Digital Receipts

Another point of agreement that seemed to come out of the workshop was that we need to create a level playing field for payments while not attempting to standardize a single payment solution for the Web. The desire was to standardize on the bare minimum necessary so that websites only need a few ways to initiate payments and receive confirmation for them. The ideal case was that your browser or wallet software would pick the best payment option for you based on your needs (best protection, fastest payment confirmation, lowest fees, etc.).

Digital wallets that hold different payment mechanisms, loyalty cards, personal data, and receipts were discussed. Unfortunately, the scope of a wallet’s functionality was not clear. Would a wallet consist of a browser-based API? Would it be cloud-based? Both? How would you sync data between wallets on different devices? What sort of functionality would be the bare minimum? These are questions that the upcoming W3C Payments Interest Group should answer. The desired outcome, however, seemed fairly concrete: provide a way for people to do a one-click purchase on any website without having to hand over all of their personal information. Make it easy for Web developers to integrate this functionality into websites using a standards-based approach.

Shifting to use some Bitcoin-like protocol seemed to be a non-starter for most everyone in the room; however, the idea that we could create Bitcoin/USD/Euro wallets that could initiate payment and provide a digital receipt proving that funds were moved seemed to be one possible implementation target. This would allow Visa, Mastercard, PayPal, Bitcoin, and banks to avoid reinventing their entire payment networks in order to support simple one-click purchases on the Web. The Web Payments Community Group does have a Web Commerce API specification and a Web Commerce protocol that cover this area, but they may need to be modified or expanded based on the outcome of the “What is a digital wallet and what does it do?” discussion.

Everything Else

The three major areas where it seemed like work could start at W3C revolved around verified identity, payment initiation, and digital receipts. In order to achieve those broad goals, we’re also going to have to work on some other primitives for the Web.

For example, JSON-LD was mentioned a number of times as the digital receipt format. If JSON-LD is going to be the digital receipt format, we’re going to have to have a way of digitally signing those receipts. JOSE is one approach, Secure Messaging is another, and there is currently a debate over which is best suited for digitally signing JSON-LD data.
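
To make that debate concrete, a digitally signed JSON-LD receipt might look something like the sketch below. The property names and context URL are illustrative rather than taken from any finished specification; the embedded signature block reflects the Secure Messaging style of signing the data itself, whereas JOSE would instead wrap the whole document in a JWS envelope:

{
  "@context": "https://example.org/contexts/receipt.jsonld",    <-- illustrative context URL
  "@type": "Receipt",
  "payer": "https://payer.example.com/people/jane",
  "merchant": "https://merchant.example.com/",
  "itemPurchased": "https://merchant.example.com/articles/42",
  "amount": "0.05",
  "currency": "USD",
  "signature": {
    "@type": "GraphSignature2012",                              <-- illustrative signature suite name
    "creator": "https://merchant.example.com/keys/1",
    "signatureValue": "Qm9ndXMgc2lnbmF0dXJlIGZvciBpbGx1c3RyYXRpb24="
  }
}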

If we are going to have digital receipts, then what goes into those receipts? How are we going to express the goods and services that someone bought in an interoperable way? We need something like the product ontology to help us describe the supply and demand for products and services on the Web.

If JSON-LD is going to be utilized, some work needs to be put into Web vocabularies related to commerce, identity, and security. If mobile-based NFC payment is a part of the story, we need to figure out how that’s going to fit into the bigger picture, and so on.

Make a Difference, Join us

As you can see, even if the payments scope is very narrow, there is still a great deal of work that needs to be done. The good news is that the narrow scope above would focus on concrete goals and implementations. We can measure progress for each one of those initiatives, so it seems like what’s listed above is quite achievable over the next few years.

There also seems to be broad support to address many of the most fundamental problems with payments on the Web. That’s why I’m calling this a breakthrough. For the first time, we have some broad agreement that something needs to be done and that W3C can play a major role in this work. That’s not to say that a W3C Payments Interest Group, if formed, couldn’t self-destruct for one reason or another, but based on the sensible discussion at the Web Payments Workshop, I wouldn’t bet on that outcome.

If the Web Payments work at W3C is successful, it means a more privacy-conscious, secure, and semantically rich Web for everyone. It also means it will be easier for you to make a living through the Web because the proper primitives to do things like one-click payments on the Web will finally be there. That said, it’s going to take a community effort. If you are a Web developer, designer, or technical writer, we need your help to make that happen.

If you want to become involved, or just learn more about the march toward Web Payments, join the Web Payments Community Group.

A Proposal for Credential-based Login

Mozilla Persona allows you to sign in to web sites using any of your existing email addresses without needing to create a new username and password on each website. It was a really promising solution for the password-based security nightmare that is login on the Web today.

Unfortunately, all the paid engineers for Mozilla Persona have been transitioned off of the project. While Mozilla is going to continue to support Persona for the foreseeable future, it isn’t going to directly put any more resources into improving it. Mozilla had very good reasons for doing this. That doesn’t mean that the recent events aren’t frustrating or sad. The Persona developers made a heroic effort. If you find yourself in the presence of Lloyd, Ben, Dan, Jed, Shane, Austin, or Jared (sorry if I missed someone!), be sure to thank them for their part in moving the Web forward.

If Not Persona, Then What?

At the moment, the Web’s future with respect to a better login experience is unclear. The current option seems to be OpenID Connect, which, while implemented across millions of sites, is still not seeing the sort of adoption you’d need for general Web-based login. It’s also really complex; so complex that the lead editor of the specification that OpenID is built on left the work long ago in frustration.

Somewhere else on the Internet, the Web Payments Community Group is working on technology to build payments into the core architecture of the Web. Login and identity are a big part of payments. We need a solution that allows someone to log in to a website and transmit their payment preferences at the same time. A single authorized click by you would provide your email address, shipping address, and preferred payment provider. Another authorized click by you would buy an item and have it shipped to your preferred address. There would be no need to fill out credit card information or shipping and billing addresses, and no need to create a new username and password for every site to which you want to send money. Persona was going to be this login solution for us, but that doesn’t seem achievable at this point.

What Persona Got Right

The Persona after-action review that Mozilla put together is useful. If you care about identity and login, you should read it. Persona did four groundbreaking things:

  1. It was intended to be fully decentralized, with eventual integration into the browser.
  2. It focused on privacy, ensuring that your identity provider couldn’t track the sites that you were logging in to.
  3. It used an email address as your login ID, which is a proven approach to login on the Web.
  4. It was simple.

It failed for at least three important reasons that were not specific to Mozilla:

  1. It required email providers to buy into the protocol.
  2. It had a temporary, centralized solution that required a costly engineering team to keep it up and running.
  3. If your identity provider went down, you couldn’t log in to any website.

Finally, the Persona solution did one thing, and did it well: it provided a verified email credential. But is that enough for the Web?

The Need for Verifiable Credentials

There is a growing need for digitally verifiable credentials on the Web. Being able to prove that you are who you say you are is important when paying or receiving payment. It’s also important when trying to prove that you are a citizen of a particular country, of a particular age, licensed to perform a specific task (like designing a house), or have achieved a particular goal (like completing a training course). All of these cases require the ability to collect digitally signed credentials from a third party, like a university, and store them somewhere on the Web in an interoperable way.

The Web Payments group is working on just such a technology. It’s called the Identity Credentials specification.
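
As a rough illustration of the direction this is headed (the field names and context URL below are mine, for illustration only, not the specification’s), a digitally verifiable credential from a university might look something like this in JSON-LD:

{
  "@context": "https://example.org/contexts/credentials.jsonld",   <-- illustrative context URL
  "@type": "DegreeCredential",
  "claim": {
    "id": "https://example.com/people/jane",
    "degree": "Bachelor of Science in Architecture"
  },
  "issuer": "https://university.example.edu/",
  "issued": "2014-05-01T00:00:00Z",
  "signature": {
    "@type": "GraphSignature2012",                                 <-- illustrative signature suite name
    "creator": "https://university.example.edu/keys/1",
    "signatureValue": "NjVhZGYyMzQ...OWFkZg=="
  }
}

The website receiving such a credential only needs to check that the signature was made by an issuer it trusts; it never needs to contact the university directly.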

We had somewhat of an epiphany a few weeks ago when it became clear that Persona was in trouble. An email address is just another type of credential. The process for transmitting a verified email address to a website should be the same as transmitting address information or your payment provider preference. Could we apply this concept and solve the login on the web problem as well as the transmission of verified credentials problem? It turns out that the answer is: most likely, yes.

Verified Credential-based Web Login

The process for credential-based login on the Web would more or less work like this:

  1. You get an account with an identity provider, or run one yourself. Not everyone wants to run one themselves, but it’s the Web; you should be able to do so easily if you want to.
  2. You show up at a website and it asks you to log in by typing in your email address. No password is requested.
  3. The website then kick-starts a login process via navigator.id.login(), which will be driven by a Javascript polyfill in the beginning but will be integrated into the browser in time (a minimal sketch of the website-side code follows this list).
  4. A dialog is presented to you (that the website has no control over or visibility into) asking you to log in to your identity provider. Your identity provider doesn’t have to be your email provider. This step is skipped if you’ve logged in previously and your session with your identity provider is still active.
  5. A digitally signed assertion that you control your email address is handed by your identity provider to the browser, which relays it to the website you’re logging in to.
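
Here is a minimal sketch of what the website-side code might look like once the polyfill is loaded. navigator.id.login() is the call named above; the exact parameters, callback shape, and polyfill URL are assumptions for illustration, not the final API:

<script src="https://example.org/identity-credentials-polyfill.js"></script>
<script>
  document.querySelector('#login-button').addEventListener('click', function () {
    const email = document.querySelector('#email').value;

    // The polyfill (and eventually the browser itself) drives the dialog with
    // the identity provider; the website never sees that interaction.
    // NOTE: the argument and callback used here are illustrative assumptions.
    navigator.id.login({email: email}, function (err, assertion) {
      if (err) {
        console.error('Login failed:', err);
        return;
      }
      // Relay the digitally signed assertion to the server, which verifies it
      // against a trusted issuer before creating a session.
      fetch('/session', {
        method: 'POST',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify({assertion: assertion})
      });
    });
  });
</script>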

Details of how this process works can be found in the section titled Credential-based Login in the Identity Credentials specification. The important thing to note about this approach is that it takes all the best parts of Persona while overcoming key things that caused its demise. Namely:

  • Using an innovative new technology called Telehash, it is fully decentralized from day one.
  • It doesn’t require browser buy-in, but is implemented in such a way that allows it to be integrated into the browser eventually.
  • It is focused on privacy, ensuring that your identity provider can’t track the sites that you are logging into.
  • It uses an email address as your login ID, which is a proven approach to login on the Web.
  • It is simple, requiring far fewer web developer gymnastics than OpenID to implement. It’s just one Javascript library and one navigator.id.login() call.
  • It doesn’t require email providers to buy into the protocol like Persona did. Any party that the relying party website trusts can digitally sign a verified email credential.
  • If your identity provider goes down, there is still hope that you can log in by storing your email credentials in a password-protected decentralized hash table on the Internet.

Why Telehash?

There is a part of this protocol that requires the system to map your email address to an identity provider. The way Persona did it was to query to see if your email provider was a Persona Identity Provider (decentralized), and if not, the system would fall back to Mozilla’s email-based verification system (centralized). Unfortunately, if Persona’s verification system was down, you couldn’t log into a website at all. This rarely happened, but that was more because Mozilla’s team was excellent at keeping the site up and there weren’t any serious attempts to attack the site. It was still a centralized solution.

The Identity Credentials specification takes a different approach to the problem. It allows any identity provider to claim an email address. This means that you no longer need buy-in from email providers. You just need buy-in from identity providers, and there are a ton of them out there that would be happy to claim and verify addresses like john.doe@gmail.com, or alice.smith@ymail.com. Unfortunately, this approach means that either you need browser support, or you need some sort of mapping mechanism that maps email addresses to identity providers. Enter Telehash.

Telehash is an Internet-wide distributed hash table (DHT) based on the proven Kademlia protocol used by BitTorrent and Gnutella. All communication is fully encrypted. It allows you to store and replicate things like the following JSON document:

{
  "email": "john.doe@gmail.com",
  "identityService": "https://identity.example.com/identities"
}

If you want to find out who john.doe@gmail.com’s identity provider is, you just query the Telehash network. The more astute readers among you will see the obvious problems with this solution, though. There are massive trust, privacy, and distributed denial-of-service concerns here.

Attacks on the Distributed Mapping Protocol

There are four problems with the system described in the previous section.

The first is that you can find out which email addresses are associated with which identity providers; that leaks information. Finding out that john.doe@gmail.com is associated with the https://identity.example.com/ identity provider is a problem. Finding out that they’re also associated with the https://public.cyberwarfare.usairforce.mil/ identity provider outs them as military personnel, which turns a regular privacy problem into a national security issue.

The second is that anyone on the network can claim to be an identity provider for that email address, which means that there is a big phishing risk. A nefarious identity provider need only put an entry for john.doe@gmail.com in the DHT pointing to their corrupt identity provider service and watch the personal data start pouring in.

The third is that a website wouldn’t know which digital signature on an email to trust. Which verified credential is trustworthy and which one isn’t?

The fourth is that you can easily harvest all of the email addresses on the network and spam them.

Attack Mitigation on the Distributed Mapping Protocol

There are ways to mitigate the problems raised in the previous section. For example, replacing the email field with a hash of the email address and passphrase would prevent attackers from both spamming an email address and figuring out how it maps to an identity provider. It would also lower the desire for attackers to put fake data into the DHT because only the proper email + passphrase would end up returning a useful result to a query. The identity service would also need to be encrypted with the passphrase to ensure that injecting bogus data into the network wouldn’t result in an entry collision.

In addition to these three mitigations, the network would employ a high CPU/memory proof-of-work to put a mapping into the DHT so the network couldn’t get flooded by bogus mappings. Keep in mind that the proof-of-work doesn’t stop bad data from getting into the DHT, it just slows its injection into the network.

Finally, figuring out which verified email credential is valid is tricky. One could anoint ten non-profit email verification services that the network would trust, or adopt something like the certificate authority framework, but either could be criticized as over-centralization. In the end, this is more of a policy decision, because you would want to make sure email verification services are legally bound to follow the proper steps to verify an email address while ensuring that people aren’t gouged for the service. We don’t have a good solution to this problem yet, but we’re working on it.

With the modifications above, the actual data uploaded to the DHT will probably look more like this:

{
  "id": "c8e52c34a306fe1d487a0c15bc3f9bbd11776f30d6b60b10d452bcbe268d37b0",  <-- SHA256 hash of john.doe@gmail.com + >15 character passphrase
  "proofOfWork": "000000000000f7322e6add42",                                 <-- Proof of work for email to identity service mapping
  "identityService": "GZtJR2B5uyH79QXCJ...s8N2B5utJR2B54m0Lt"                <-- Passphrase-encrypted identity provider service URL
}

To query the network, the person logging in must provide both an email address and a passphrase, which are hashed together. If the hash doesn't exist on the network, Telehash returns nothing.
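
A rough sketch (in Node.js) of how a client might build and query such an entry follows. The field names match the example above, but the key derivation, encryption scheme, and proof-of-work parameters are assumptions for illustration, not part of any specification:

const crypto = require('crypto');

// The DHT only ever sees this hash, never the raw email address.
function mappingId(email, passphrase) {
  return crypto.createHash('sha256').update(email + passphrase).digest('hex');
}

// Derive a key from the passphrase and encrypt the identity provider URL so
// that bogus entries can't masquerade as the real one.
function encryptIdentityService(url, passphrase) {
  const key = crypto.pbkdf2Sync(passphrase, 'identity-mapping', 100000, 32, 'sha256');
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const encrypted = Buffer.concat([cipher.update(url, 'utf8'), cipher.final()]);
  return iv.toString('hex') + ':' + cipher.getAuthTag().toString('hex') + ':' +
    encrypted.toString('base64');
}

// Toy proof-of-work: find a hash over the id and a nonce that starts with a
// run of zeros. A real deployment would tune the difficulty carefully.
function proofOfWork(id, prefix) {
  let nonce = 0;
  while (true) {
    const digest = crypto.createHash('sha256').update(id + ':' + nonce).digest('hex');
    if (digest.indexOf(prefix) === 0) {
      return digest;
    }
    nonce++;
  }
}

// Publishing a mapping:
const passphrase = 'correct horse battery staple';
const id = mappingId('john.doe@gmail.com', passphrase);
const entry = {
  id: id,
  proofOfWork: proofOfWork(id, '0000'),
  identityService: encryptIdentityService('https://identity.example.com/identities', passphrase)
};

// The entry would then be stored in the Telehash DHT. To query, a client
// recomputes mappingId(email, passphrase), fetches the matching entry, and
// decrypts identityService using a key derived from the same passphrase.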

Also note that this entire Telehash-based mapping mechanism is merely a stop-gap measure; it goes away once the identity credential solution is built into browsers.

The Far Future

In the far future, browsers would communicate with your identity providers to retrieve data that are requested by websites. When you attempt to login to a website, the website would request a set of credentials. Your browser would either provide the credentials directly if it has cached them, or it would fetch them from your identity provider. This system has all of the advantages of Persona and provides realistic solutions to a number of the scalability issues that Persona suffers from.

The greatest challenges ahead will entail getting a number of things right. Some of them include:

  • Mitigate the attack vectors for the Telehash + Javascript-based login solution. Even though the Telehash-based solution is temporary, it must be solid until browser implementations become the norm.
  • Ensure that there is buy-in from large companies wanting to provide credentials for people on the Web. We have a few major players in the pipeline at the moment, but we need more to achieve success.
  • Clearly communicate the benefits of this approach over OpenID and Persona.
  • Make sure that setting up your own credential-based identity provider is as simple as dropping a PHP file into your website.
  • Make it clear that this is intended to be a W3C standard by creating a specification that could be taken standards-track within a year.
  • Get buy-in from web developers and websites, which is going to be the hardest part.

JSON-LD and Why I Hate the Semantic Web

Full Disclosure: I am one of the primary creators of JSON-LD, lead editor on the JSON-LD 1.0 specification, and chair of the JSON-LD Community Group. This is an opinionated piece about JSON-LD. A number of people in this space don’t agree with my viewpoints. My statements should not be construed as official statements from the JSON-LD Community Group, W3C, or Digital Bazaar (my company) in any way, shape, or form. I’m pretty harsh about the technologies covered in this article and want to be very clear that I’m attacking the technologies, not the people that created them. I think most of the people that created and promote them are swell and I like them a lot, save for a few misguided souls, who are loveable and consistently wrong.

JSON-LD became an official Web Standard last week. This is after exactly 100 teleconferences typically lasting an hour and a half, fully transparent with text minutes and recorded audio for every call. There were 218+ issues addressed, 2,000+ source code commits, and 3,102+ emails that went through the JSON-LD Community Group. The journey was a fairly smooth one with only a few jarring bumps along the road. The specification is already deployed in production by companies like Google, the BBC, HealthData.gov, Yandex, Yahoo!, and Microsoft. There is a quickly growing list of other companies that are incorporating JSON-LD. We’re off to a good start.

In the previous blog post, I detailed the key people that brought JSON-LD to where it is today and gave a rough timeline of the creation of JSON-LD. In this post I’m going to outline the key decisions we made that made JSON-LD stand out from the rest of the technologies in this space.

I’ve heard many people say that JSON-LD is primarily about the Semantic Web, but I disagree, it’s not about that at all. JSON-LD was created for Web Developers that are working with data that is important to other people and must interoperate across the Web. The Semantic Web was near the bottom of my list of “things to care about” when working on JSON-LD, and anyone that tells you otherwise is wrong. :P

TL;DR: The desire for better Web APIs is what motivated the creation of JSON-LD, not the Semantic Web. If you want to make the Semantic Web a reality, stop making the case for it and spend your time doing something more useful, like actually making machines smarter or helping people publish data in a way that’s useful to them.

Why JSON-LD?

If you don’t know what JSON-LD is and you want to find out why it is useful, check out this video on Linked Data and this one on an Introduction to JSON-LD. The rest of this post outlines the things that make JSON-LD different from the traditional Semantic Web / Linked Data stack of technologies and why we decided to design it the way that we did.
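
For the impatient, here is roughly the smallest useful JSON-LD document: a plain JSON object plus a @context that maps the keys to unambiguous URLs so the data means the same thing wherever it travels:

{
  "@context": {
    "name": "http://schema.org/name",
    "homepage": { "@id": "http://schema.org/url", "@type": "@id" }
  },
  "name": "Jane Doe",
  "homepage": "http://jane.example.com/"
}

To an ordinary JSON tool this is just JSON; to a JSON-LD processor, "name" and "homepage" are globally unambiguous terms, and "homepage" is known to be a link rather than a string.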

Decision 1: Decrypt the Cryptic

Many W3C specifications are so cryptic that they require the sacrifice of your sanity and a secret W3C decoder ring to read. I never understood why these documents were so difficult to read, and after years of study on the matter, I think I found the answer. It turns out that most specification editors are just crap at writing.

It’s not that the things in most W3C specifications are complicated; it’s that the editor is bad at explaining them to non-implementers, who make up most of the web developers that end up reading these specification documents. This approach is often defended by raising the point that readability of the specification by non-implementers is secondary to its technical accuracy for implementers. The audience is the implementer, and you are expected to cater to them. To counter that point, though, we all know that technical accuracy is a bad excuse for crap writing. You can write something that is easy to understand and technically accurate; it just takes more effort. Knowing your audience helps.

We tried our best to eliminate complex techno-babble from the JSON-LD specification. I made it a point to not mention RDF at all in the JSON-LD 1.0 specification because you didn’t need to go off and read about it to understand what was going on in JSON-LD. There was tremendous push back on this point, which I’ll go into later, but the point is that we wanted to communicate at a more conversational level than typical Internet and Web specifications because being pedantic too early in the spec sets the wrong tone.

It didn’t always work, but it certainly did set the tone we wanted for the community, which was that this Linked Data stuff didn’t have to seem so damn complicated. The JSON-LD 1.0 specification starts out by primarily using examples to introduce key concepts. It starts at basics, assuming that the audience is a web developer with modest training, and builds its way up slowly into more advanced topics. The first 70% of the specification contains barely any normative/conformance language, but after reading it, you know what JSON-LD can do. You can look at the section on the JSON-LD Context to get an idea of what this looks like in practice.

This approach wasn’t a wild success. Reading sections of the specification that have undergone feedback from more nitpicky readers still makes me cringe, because ease of understanding has been sacrificed at the altar of pedantic technical accuracy. However, I don’t feel embarrassed to point web developers to a section of the specification when they ask for an introduction to a particular feature of JSON-LD. There are not many specifications where you can do that.

Decision 2: Radical Transparency

One of the things that has always bothered me about W3C Working Groups is that you have to either be an expert to participate, or you have to be a member of the W3C, which can cost a non-trivial amount of money. This results in your typical web developer being able to comment on a specification, but not really having the ability to influence a Working Group decision with a vote. It also hobbles the standards-making community because the barrier to entry is perceived as impossibly high. Don’t get me wrong, the W3C staff does as much as they can to drive inclusion and they do a damn good job at it, but that doesn’t stop some of their member companies from being total dicks in closed-door sessions.

The W3C is a consortium of mostly for-profit companies and they have things they care about like market share, quarterly profits, and drowning goats (kidding!)… except for GoatCoats.com, anyone can join as long as you pay the membership dues! My point is that because there is a lack of transparency at times, it makes even the best Working Group less responsive to the general public, and that harms the public good. These closed-door rules are there so that large companies can say certain things without triggering a lawsuit, which is sometimes used for good but typically results in companies being jerks and nobody finding out about it.

So, in 2010 we kicked off the JSON-LD work by making it radically open and we fought for that openness every step of the way. Anyone can join the group, anyone can vote on decisions, anyone can join the teleconferences, there are no closed-door sessions, and we record the audio of every meeting. We successfully kept the technical work on the specification this open from the beginning to the release of the JSON-LD 1.0 web standard a week ago. People came and went from the group over the years, but anyone could participate at any level, and that is probably the thing I’m most proud of regarding the process that was used to create JSON-LD. Had we not been this open, Markus Lanthaler may have never gone from being a gifted student in Italy to editor of the JSON-LD API specification and now leader of the Hypermedia Driven Web APIs community. We also may never have had the community backing to do some of the things we did in JSON-LD, like kicking RDF in the nuts.

Decision 3: Kick RDF in the Nuts

RDF is a shitty data model. It doesn’t have native support for lists. LISTS for fuck’s sake! The key data structure that’s used by almost every programmer on this planet and RDF starts out by giving developers a big fat middle finger in that area. Blank nodes are an abomination that we need, but they are applied inconsistently in the RDF data model (you can use them in some places, but not others). When we started with JSON-LD, RDF didn’t have native graph support either. For all the “RDF data model is elegant” arguments we’ve seen over the past decade, there are just as many reasons to kick it to the curb. This is exactly what we did when we created JSON-LD, and that really pissed off a number of people that had been working on RDF for over a decade.

I personally wanted JSON-LD to be compatible with RDF, but that’s about it. You could convert JSON-LD to and from RDF and get something useful, but JSON-LD had a more sane data model where lists were a first-class construct, you had generalized graphs, and you could use JSON-LD using a simple library and standard JSON tooling. To put that in perspective, to work with RDF you typically needed a quad store, a SPARQL engine, and some hefty libraries. Your standard web developer has no interest in that toolchain because it adds more complexity to the solution than is necessary.
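
To make the contrast concrete, here is a sketch of how JSON-LD handles an ordered list natively with the @list container, something raw RDF can only approximate with a chain of rdf:first/rdf:rest blank nodes:

{
  "@context": {
    "authors": { "@id": "http://schema.org/author", "@container": "@list" }
  },
  "@id": "http://example.com/books/linked-data-book",
  "authors": [ "Alice", "Bob", "Carol" ]
}

A web developer just sees a JSON array; the ordering is a first-class part of the data model instead of something bolted on.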

So screw it, we thought, let’s create a graph data model that looks and feels like JSON, RDF and the Semantic Web be damned. That’s exactly what we did and it was working out pretty well until…

Decision 4: Work with the RDF Working Group. Whut?!

Around mid-2012, the JSON-LD stuff was going pretty well and the newly chartered RDF Working Group was going to start work on RDF 1.1. One of the work items was a serialization of RDF for JSON. The lead solutions for RDF in JSON were things like the aptly named RDF/JSON and JTriples, both of which would look incredibly foreign to web developers and continue the narrative that the Semantic Web community creates esoteric solutions to non-problems. The biggest problem was that many of the participants in the RDF Working Group at the time didn’t understand JSON.

The JSON-LD group decided to weigh in on the topic by pointing the RDF WG to JSON-LD as an example of what was needed to convince people that this whole Linked Data thing could be useful to web developers. I remember the discussions getting very heated over multiple months, and at times, thinking that the worst thing we could do to JSON-LD was to hand it over to the RDF Working Group for standardization.

It is at that point that David Wood, one of the chairs of the RDF Working Group, phoned me up to try and convince me that it would be a good idea to standardize the work through the RDF WG. I was very skeptical because there were people in the RDF Working Group who drove some of the thinking that I had grown to see as toxic to the whole Linked Data / Semantic Web movement. I trusted Dave Wood, though. I had never seen him get religiously zealous about RDF like some of the others in the group and he seemed to be convinced that we could get JSON-LD through without ruining it. To Dave’s credit, he was mostly right. :)

Decision 5: Hate the Semantic Web

It’s not that the RDF Working Group was populated by people that are incompetent, or that I didn’t personally like. I’ve worked with many of them for years, and most of them are very intelligent, capable, gifted people. The problem with getting a room full of smart people together is that the group’s world view gets skewed. There are many reasons that a working group filled with experts doesn’t consistently produce great results. For example, many of the participants can be humble about their knowledge, so they tend to think that a good chunk of the people that will be using their technology will be just as enlightened. Bad feature ideas can be argued for months and rationalized because smart people, lacking any sort of compelling real world data, are great at debating and rationalizing bad decisions.

I don’t want people to get the impression that there was or is any sort of animosity in the Linked Data / Semantic Web community because, as far as I can tell, there isn’t. Everyone wants to see this stuff succeed and we all have our reasons and approaches.

That said, after 7+ years of being involved with Semantic Web / Linked Data, our company has never had a need for a quad store, RDF/XML, N3, NTriples, TURTLE, or SPARQL. When you chair standards groups that kick out “Semantic Web” standards, but even your company can’t stomach the technologies involved, something is wrong. That’s why my personal approach with JSON-LD just happened to be burning most of the Semantic Web technology stack (TURTLE/SPARQL/Quad Stores) to the ground and starting over. It’s not a strategy that works for everyone, but it’s the only one that worked for us, and the only way we could think of jarring the more traditional Semantic Web community out of its complacency.

I hate the narrative of the Semantic Web because the focus has been on the wrong set of things for a long time. That community, who I have been consciously distancing myself from for a few years now, is schizophrenic in its direction. Precious time is spent in groups discussing how we can query all this Big Data that is sure to be published via RDF instead of figuring out a way of making it easy to publish that data on the Web by leveraging common practices in use today. Too much time is spent assuming a future that’s not going to unfold in the way that we expect it to. That’s not to say that TURTLE, SPARQL, and Quad stores don’t have their place, but I always struggle to point to a typical startup that has decided to base their product line on that technology (versus ones that choose MongoDB and JSON on a regular basis).

I like JSON-LD because it’s based on technology that most web developers use today. It helps people solve interesting distributed problems without buying into any grand vision. It helps you get to the “adjacent possible” instead of having to wait for a mirage to solidify.

Decision 6: Believe in Consensus

All this said, you can’t hope to achieve anything by standing on idealism alone and I do admit that some of what I say above is idealistic. At some point you have to deal with reality, and that reality is that there are just as many things that the RDF and Semantic Web initiative got right as it got wrong. The RDF data model is shitty, but because of the gauntlet thrown down by JSON-LD and a number of like-minded proponents in the RDF Working Group, the RDF Data Model was extended in a way that made it compatible with JSON-LD. As a result, the gap between the RDF model and the JSON-LD model narrowed to the point that it became acceptable to more-or-less base JSON-LD off of the RDF model. It took months to do the alignment, but it was consensus at its best. Nobody was happy with the result, but we could all live with it.

To this day I assert that we could rip the data model section out of the JSON-LD specification and it wouldn’t really affect the people using JSON-LD in any significant way. That’s consensus for you. The section is in there because other people wanted it in there and because the people that didn’t want it in there could very well have turned out to be wrong. That’s really the beauty of the W3C and IETF process. It allows people that have seemingly opposite world views to create specifications that are capable of supporting both world views in awkward but acceptable ways.

JSON-LD is a product of consensus. Nobody agrees on everything in there, but it all sticks together pretty well. There being a consensus on consensus is what makes the W3C, IETF, and thus the Web and the Internet work. Through all of the fits and starts, permathreads, pedantry, idealism, and deadlock, the way it brings people together to build this thing we call the Web is a beautiful thing.

Postscript

I’d like to thank the W3C staff that were involved in getting JSON-LD to official Web standard status (and the staff, in general, for being really fantastic people). Specifically, Ivan Herman for simultaneously pointing out all of the problems that lay in the road ahead while also providing ways to deal with each one as we came upon them. Sandro Hawke for pushing back against JSON-LD, but always offering suggestions about how we could move forward. I actually think he may have ended up liking JSON-LD in the end :) . Doug Schepers and Ian Jacobs for fighting for W3C Community Groups, without which JSON-LD would not have been able to plead the case for Web developers. The systems team and publishing team, who are unknown to most of you, but work tirelessly to ensure that everything continues to operate, be published, and improve at W3C.

From the RDF Working Group, the chairs (David Wood and Guus Schreiber), for giving JSON-LD a chance and ensuring that it got a fair shake. Richard Cyganiak for pushing us to get rid of microsyntaxes and working with us to try and align JSON-LD with RDF. Kingsley Idehen for being the first external implementer of JSON-LD after we had just finished scribbling the first design down on paper, and for tirelessly dogfooding what he preaches. Nobody does it better. And the rest of the RDF Working Group members: without you, JSON-LD would have escaped unscathed from your influence, which would have made my life a hell of a lot easier, but would have left JSON-LD and the people that use it in a worse situation.

The Origins of JSON-LD

Full Disclosure: I am one of the primary creators of JSON-LD, lead editor on the JSON-LD 1.0 specification, and chair of the JSON-LD Community Group. These are my personal opinions and not the opinions of the W3C, JSON-LD Community Group, or my company.

JSON-LD became an official Web Standard last week. This is after exactly 100 teleconferences typically lasting an hour and a half, fully transparent with text minutes and recorded audio for every call. There were 218+ issues addressed, 2,071+ source code commits, and 3,102+ emails that went through the JSON-LD Community Group. The journey was a fairly smooth one with only a few jarring bumps along the road. The specification is already deployed in production by companies like Google, the BBC, HealthData.gov, Yandex, Yahoo!, and Microsoft. There is a quickly growing list of other companies that are incorporating JSON-LD, but that’s the future. This blog post is more about the past, namely where did JSON-LD come from? Who created it and why?

I love origin stories. When I was in my teens and early twenties, the only origin stories I liked to read about were of the comic and anime variety. Spiderman, great origin story. Superman, less so, but entertaining. Nausicaä, brilliant. Major Motoko Kusanagi, nuanced. Spawn, dark. Those connections with characters fade over time as you understand that this world has more interesting ones. Interesting because they touch the lives of billions of people, and since I’m a technologist, some of my favorite origin stories today consist of finding out the personal stories behind how a particular technology came to be. The Web has a particularly riveting origin story. These stories are hard to find because they’re rarely written about, so this is my attempt at documenting how JSON-LD came to be and the handful of people that got it to where it is today.

The Origins of JSON-LD

When you’re asked to draft the press pieces on the launch of new world standards, you have two lists of people in your head. The first is the “all inclusive list”, which is every person that uttered so much as a word that resulted in a change to the specification. That list is typically very long, so you end up saying something like “We’d like to thank all of the people that provided input to the JSON-LD specification, including the JSON-LD Community, RDF Working Group, and individuals who took the time to send in comments and improve the specification.” With that statement, you are sincere and cover all of your bases, but feel like you’re doing an injustice to the people without whom the work would never have survived.

The all inclusive list is very important; they helped refine the technology to the point that everyone could achieve consensus on it being something that is world class. However, 90% of the back-breaking work to get the specification to the point that everyone else could comment on it is typically undertaken by 4-5 people. It’s a thankless and largely unpaid job, and this is how the Web is built. It’s those people that I’d like to thank while exploring the origins of JSON-LD.

Inception

JSON-LD started around late 2008 as the work on RDFa 1.0 was wrapping up. We were under pressure from Microformats and Microdata, which we were also heavily involved in, to come up with a good way of programming against RDFa data. At around the same time, my company was struggling with the representation of data for the Web Payments work. We had already made the switch to JSON a few years earlier and were storing that data in MySQL, mostly because MongoDB didn’t exist yet. We were having a hard time translating the RDFa we were ingesting (products for sale, pricing information, etc.) into something that worked well in JSON. It was then that Mark Birbeck, one of the creators of RDFa, and I started thinking about making something RDFa-like for JSON. Mark had proposed a syntax for something called RDFj, which I thought had legs but which Mark didn’t necessarily have the time to pursue.

The Hard Grind

After exchanging a few emails with Mark about the topic over the course of 2009, and letting the idea stew for a while, I wrote up a quick proposal for a specification and passed it by Dave Longley, Digital Bazaar’s CTO. We kicked the idea around a bit more and in May of 2010 published the first working draft of JSON-LD. While Mark was instrumental in injecting the first set of basic ideas into JSON-LD, Dave Longley would become the most important technical mind behind how to make JSON-LD work for web programmers.

At that time, JSON-LD had a pretty big problem. You can represent data in JSON-LD in a myriad of different ways, making it hard to tell if two JSON-LD documents are the same or not. This was an important problem to Digital Bazaar because we were trying to figure out how to create product listings, digital receipts, and contracts using JSON-LD. We had to be able to tell if two product listings were the same, and we had to figure out a way to serialize the data so that products and their associated prices could be listed on the Web in a decentralized way. This meant digital signatures, and you have to be able to create a canonical/normalized form for your data if you want to be able to digitally sign it.

Dave Longley invented the JSON-LD API, JSON-LD Framing, and JSON-LD Graph Normalization to tackle these canonicalization/normalization issues and did the first four implementations of the specification in C++, JavaScript, PHP, and Python. The JSON-LD Graph Normalization problem itself took roughly 3 months of concentrated 70+ hour work weeks and dozens of iterations by Dave Longley to produce an algorithm that would work. To this day, I remain convinced that there are only a handful of people on this planet with a mind that is capable of solving those problems. He was the first and only one that cracked those problems. It requires a sort of raw intelligence, persistence, and ability to constantly re-evaluate the problem solving approach you’re undertaking in a way that is exceedingly rare.
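
Today the result of that work is exposed by JSON-LD libraries. A minimal sketch of the idea, assuming the jsonld.js library’s canonize API (the function name and options reflect the current library, not the early drafts): produce a deterministic serialization of a document and hash it, so two documents that mean the same thing yield the same digest for signing.

const jsonld = require('jsonld');
const crypto = require('crypto');

async function signableHash(doc) {
  // Canonicalize into deterministic N-Quads; the URDNA2015 algorithm grew out
  // of the normalization work described above.
  const canonical = await jsonld.canonize(doc, {
    algorithm: 'URDNA2015',
    format: 'application/n-quads'
  });
  // Hash the canonical form; this is what a digital signature would cover.
  return crypto.createHash('sha256').update(canonical).digest('hex');
}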

Dave and I continued to refine JSON-LD, with him working on the API and me working on the syntax for the better part of 2010 and early 2011. When MongoDB started really taking off in 2010, the final piece just clicked into place. We had the makings of a Linked Data technology stack that would work for web developers.

Toward Stability

Around April 2011, we launched the JSON-LD Community Group and started our public push to try and put the specification on a standards track at the World Wide Web Consortium (W3C). It is at this point that Gregg Kellogg joined us to help refine the rough edges of the specification and provide his input. For those of you that don’t know Gregg, I know of no other person that has done complete implementations of the entire stack of Semantic Web technologies. He has Ruby implementations of quad stores, TURTLE, N3, NQuads, SPARQL engines, RDFa, JSON-LD, etc. If it’s associated with the Semantic Web in any way, he’s probably implemented it. His depth of knowledge of RDF-based technologies is unmatched and he focused that knowledge on JSON-LD to help us hone it to what it is today. Gregg helped us with key concepts, specification editing, implementations, tests, and a variety of input that left its mark on JSON-LD.

Markus Lanthaler also joined us around the same time (2011) that Gregg did. The story of how Markus got involved with the work is probably my favorite way of explaining how the standards process should work. Markus started giving us input while a masters student at Technische Universität Graz. He didn’t have a background in standards, he didn’t know anything about the W3C process or specification editing, he was as green as one can be with respect to standards creation. We all start where he did, but I don’t know of many people that became as influential as quickly as Markus did.

Markus started by commenting on the specification on the mailing list, then quickly started joining calls. He’d raise issues and track them, he started on his PHP implementation, then started making minor edits to the specifications, then major edits until earning our trust to become lead specification editor for the JSON-LD API specification and one of the editors for the JSON-LD Syntax specification. There was no deliberate process we used to make him lead editor, it just sort of happened based on all the hard work he was putting in, which is the way it should be. He went through a growth curve that normally takes most people 5 years in about a year and a half, and it happened exactly how it should happen in a meritocracy. He earned it and impressed us all in the process.

The Final Stretch

Of special mention as well is Niklas Lindström, who joined us starting in 2012 on almost every JSON-LD teleconference and provided key input to the specifications. Aside from being incredibly smart and talented, Niklas is particularly gifted in his ability to find a balanced technical solution that moved the group forward when we found ourselves deadlocked on a particular decision. Paul Kuykendall joined us toward the very end of the JSON-LD work in early 2013 and provided fresh eyes on what we were working on. Aside from being very level-headed, Paul helped us understand what was important to web developers and what wasn’t toward the end of the process. It’s hard to find perspective as work wraps up on a standard, and luckily Paul joined us at exactly the right moment to provide that insight.

There were literally hundreds of people that provided input on the specification throughout the years, and I’m very appreciative of that input. However, without this core of 4-6 people, JSON-LD would never have had a chance. I will never be able to find the words to express how deeply appreciative I am of Dave, Markus, Gregg, Niklas, and Paul, who did the work on a primarily volunteer basis. At this moment in time, the Web is at the core of the way humankind communicates, and the most ardent protectors of this public good create standards to ensure that the Web continues to serve all of us. It boils my blood to then know that they will go largely unrewarded by society for creating something that will benefit hundreds of millions of people, but that’s another post for another time.

The next post in this series tells the story of how JSON-LD was nearly eliminated on several occasions by its critics and proponents while on its journey toward a web standard.

Web Payments and the World Banking Conference

The standardization group for all of the banks in the world (SWIFT) was kind enough to invite me to speak about the Web Payments work at the W3C at the world’s premier banking conference. The conference, called SIBOS, happened last week and brought together 7,000+ people from banks and financial institutions around the world. The event was held in Dubai this year. They wanted me to present on the new Web Payments work being done at the World Wide Web Consortium (W3C), including the work we’re doing with PaySwarm, Mozilla, the Bitcoin community, and Ripple Labs.

If you’ve never been to Dubai, I highly recommend visiting. It is a city of extremes. It contains a stunning density of award-winning skyscrapers, while vast expanses of desert loom just outside of the city. Man-made islands dot the coastline, willed into shapes like that of a multi-mile-wide palm tree or massive lumps of stone, sand, steel, and glass resembling all of the countries of the world. I saw the largest in-mall aquarium in the world and ice skated in 105-degree weather. Poverty lines the outskirts of Dubai while ATMs that vend gold can be found throughout the city. Lamborghinis, Ferraris, Maybachs, and Porsches roared down the densely packed highways while plants struggled to survive in the oppressive heat and humidity.

The extravagances nestle closely to the other extremes of Dubai: a history of indentured servitude, women’s rights issues, zero-tolerance drug possession laws, and political self-censorship of the media. In a way, it was the perfect location for the world’s premier banking conference. The capital it took to achieve everything that Dubai had to offer flowed through the banks represented at the conference at some point in time.

The Structure of the Conference

The conference was broken into two distinct areas. The more traditional banking side was on the conference floor and resembled what you’d expect of a well-established trade show. It was large, roughly the size of four football fields. Innotribe, the less traditional and much hipper innovation track, was outside of the conference hall and focused on cutting-edge thinking, design, and new technologies. The banks are late to the technology game, but that’s to be expected in any industry whose history can be measured in centuries. Innotribe is trying to fix the problem of innovation in banking.

“Customers”

One of the most surprising things that I learned during the conference was the different classes of customers a bank has and which class of customers is most profitable to the banks. Many people are under the false impression that the most valuable customer a bank can have is the one that walks into one of its branches and opens an account. In general, the technology industry tends to treat the individual customer as the primary motivator for everything it does. This impression, with respect to the banking industry, was shattered when I heard the head of an international bank utter the following about banking branches: “80% of our customers add nothing but sand to our bottom line.” The banker was alluding to the perception that the most significant thing most customers bring into a branch is the sand on the bottom of their shoes. The implication is that most customers are not very profitable to banks and are thus not a top priority. This summarized the general tone of the conference with respect to customers among the managers of these financial institutions.

Fundamentally, a bank’s motives are not aligned with most of its customers’ needs because that’s not where it makes the majority of its money. Most of a bank’s revenue comes from activities like short-term lending, leveraging deposits, float-based leveraging, high-frequency trading, derivatives trading, and other financial exercises that are far removed from what most people in the world think of when they picture the type of activities one does at a bank.

For example, it has been possible to do realtime payments over the current banking network for a while now. The standards and technology exist to do so within the vast majority of the bank systems in use today. In fact, enabling this has been put to a vote for the last five years in a row. Every time it has come up for a vote, the banks have voted against it. The banks make money on the day-to-day float against the transfers, so the longer it takes to complete a transfer, the more money the banks make.

I did hear a number of bankers state publicly that they cared about the customer experience and wanted to improve upon it. However, those statements rang pretty hollow when it came to the product focus on the show floor, which revolved around B2B software, high-frequency trading protocols, high net-value transactions, etc. There were a few customer-focused companies, but they were dwarfed by the size of the major banks and financial institutions in attendance at the conference.

The Standards Team

I was invited to the conference by two business units within SWIFT. The first was the innovation group inside of SWIFT, called Innotribe. The second was the core standards group at SWIFT. There are over 6,900 banks that participate in the SWIFT network. Their standards team is very large, many times larger than the W3C, and extremely well funded. The primary job of the standards team at SWIFT is to create standards that help their member companies exchange financial information with the minimum amount of friction. Their flagship product is a standard called ISO 20022, a 3,463-page document that outlines every sort of financial message that the SWIFT network supports today.

The SWIFT standards team is a very helpful group of people trying their hardest to pull their membership into the future. They fundamentally understand the work that we’re doing in the Web Payments group and are interested in participating more deeply. They know that technology is eventually going to disrupt their membership and they want to make sure that there is a transition path for their members, even if those members would like to view these new technologies, like Bitcoin, PaySwarm, and Ripple, as interesting corner cases.

In general, the banks don’t view technical excellence as a fundamental part of their success. Most view personal relationships as the fundamental thing that keeps their industry ticking. Most bankers come from an accounting background of some kind and don’t think of technology as something that can replace the sort of work that they do. This means that standards and new technologies almost always take a back seat to other more profitable endeavors such as implementing proprietary high frequency trading and derivatives trading platforms (as opposed to customer-facing systems like PaySwarm).

SWIFT’s primary customers are the banks, not the bank’s customers. Compare this with the primary customer of most Web-based organizations and the W3C, which is the individual. Since SWIFT is primarily designed to serve the banks, and banks make most of their money doing things like derivatives and high-frequency trading, there really is no champion for the customer in the banking organizations. This is why using your bank is a fairly awful experience. Speaking from a purely capitalistic standpoint, individuals that have less than a million dollars in deposits are not a priority.

Hobbled by Complexity

I met with over 30 large banks while I was at SIBOS and had a number of low-level discussions with their technology teams. The banking industry seems to be crippled by the complexity of its current systems. Minor upgrades cost millions of dollars due to the requirement to keep backwards compatibility. For example, at one point during the conference, it was explained that there was a proposal to make the last digit of an IBAN a particular value if the organization was not a bank. The push-back on the proposal was so great that it was never implemented, since it would have cost thousands of banks several million dollars each to implement the feature. Many of the banks are still running systems as part of their core infrastructure that were created in the 1980s, written in COBOL or Fortran, and well past their originally intended lifespans.

A bank’s legacy systems mean that it has a very hard time innovating on top of its current architecture, and it could be that launching a parallel financial architecture would be preferable to broadly interfacing with the banking systems in use today. Startups launching new core financial services are at a great advantage as long as they limit the number of places where they interface with these old technology infrastructures.

Commitment to Research and Development

The technology utilized in the banking industry is, from a technology industry point of view, archaic. For example, many of the high-frequency trading messages are short ASCII text strings that look like this:

8=FIX.4.1#9=112#35=0#49=BRKR#56=INVMGR#34=235#52=19980604-07:58:28#112=19980604-07:58:28#10=157#

Imagine anything like that being accepted as a core part of the Web. Messages are kept to very short sequences because they must be processed in less than 5 microseconds. There is no standard binary protocol, even for high-frequency trading. Many of the systems that are core to a bank’s infrastructure pre-date the Web, sometimes by more than a decade or two. At most major banking institutions, there is very little R&D investment into new models of value transfer like PaySwarm, Bitcoin, or Ripple. In a room of roughly 100 bank technology executives, when asked how many of them had an R&D or innovation team, only around 5% of the people in the room raised their hands.

Compare this with the technology industry, which devotes a significant portion of their revenue to R&D activities and tries to continually disrupt their industry through the creation of new technologies.

No Shared Infrastructure

The technology utilized in the banking industry is typically created and managed in-house. It is also highly fractured; the banks share the messaging data model, but that’s about it. The SWIFT data model is implemented over and over again by thousands of banks. There is no set of popular open source software that one can use to do banking, which means that almost every major bank writes its own software. The result is a high degree of wasted effort and very little technology re-use in the banking industry.

Compare this with how much of the technology industry shares in the development of core infrastructure like operating systems, Web servers, browsers, and open source software libraries. This sort of shared development model does not exist in the banking world and the negative effects of this lack of shared architecture are evident in almost every area of technology associated with the banking world.

Fear of Technology Companies

The banks are terrified of the thought of Google, Apple, or Amazon getting into the banking business. These technology companies have hundreds of millions of customers, deep brand trust, and have shown that they can build systems to handle complexity with relative ease. At one point it was said that if Apple, Google, or Amazon wanted to buy Visa, they could. Then in one fell swoop, one of these technology companies could remove one of the largest networks that banks rely on to move money in the retail space.

While all of the banks seemed to be terrified of being disrupted, there seemed to be very little interest in doing any sort of drastic change to their infrastructure. In many cases, the banks are just not equipped to deal with the Web. They tend to want to build everything internally and rarely acquire technology companies to improve their technology departments.

Relatively few of the bank executives I spoke with were able to carry on a fairly high-level conversation about things like Web technology. That demonstrated that it is still going to be some time before the financial industry understands the sort of disruption that things like PaySwarm, Bitcoin, and Ripple could trigger. Many know that a large chunk of jobs is going to go away, but those same individuals either do not have the skill set to react to the change or are too busy with paying customers to focus on the coming disruption.

A Passing Interest in Disruptive Technologies

There was a tremendous amount of interest in Bitcoin, PaySwarm, and Ripple and how they could disrupt banking. However, much like the music industry before it, only a few of the banks seemed to want to learn how they could actually adopt or use the technology. Many of the conversations ended with a general malaise about technological disruption and no real motivation to dig deeper, lest they find something truly frightening. Most executives would express how nervous they were about competition from technology companies, but were not willing to make any deep technological changes that would undermine their current revenue streams. Many of the bank executives I spoke with reminded me of the innovator’s dilemma, and of how the music industry executives I worked with in the early 2000s reacted to the rise of Napster, peer-to-peer file trading networks, and digital music.

Many higher-level executives were dismissive about the sorts of lasting changes Web technologies could have on their core business, often to the point of being condescending when they spoke about technologies like Bitcoin, PaySwarm, and Ripple. Most arguments boiled down to the customer needing to trust some financial institution to carry out the transaction, demonstrating that they did not fundamentally understand the direction in which technologies like Bitcoin and Ripple are headed.

Lessons Learned

We were able to get the message out about the sort of work we’re doing on Web Payments at the W3C, and it was well received. I have already been asked to present at next year’s conference. There is a tremendous opportunity here for the technology sector to either help the banks move into the future or to disrupt many of the services that have traditionally been seen as belonging to financial institutions. There is also a big opportunity for the banks to seize the work that is being done in Web Payments, Bitcoin, and Ripple, and apply it to a number of the problems that they have today.

The trip was a big success in that the Web Payments group now has very deep ties into SWIFT, major banks, and other financial institutions. Many of the institutions expressed a strong desire to collaborate with the group on future Web Payments work. The financial institutions we spoke with thought that many of these technologies were 10 years away from affecting them, so there was no real sense of urgency to integrate them; I’d put the timeline closer to 3-4 years than 10. That said, there was general agreement that these technologies matter. The lines of communication are now more open than they used to be between the traditional financial industry and the Web Payments group at W3C. That’s a big step in the right direction.

Interested in becoming a part of the Web Payments work, or just peeking in from time to time? It’s open to the public. Join here.

The Downward Spiral of Microdata

Full disclosure: I’m the chair of the RDFa Working Group and have been heavily involved during the RDFa and Microdata standardization initiatives. I am biased, but also understand all of the nuanced decisions that were made during the creation of both specifications.

Support for the Microdata API has just been removed from WebKit (Apple Safari). Support for the Microdata API was also removed from Blink (Google Chrome) a few months ago. This means that Apple Safari and Google Chrome will no longer support the Microdata API. Removing the feature from two major browser engines also points to a likely future for Microdata: less and less support.

In addition, this discussion on the Blink developer list demonstrates that there isn’t anyone to pick up the work of maintaining the Microdata implementation. Microdata has also been ripped out of the main HTML5 specification at the W3C, with the caveat that the Microdata specification will only continue “if editorial resources can be found”. Translation: if an editor doesn’t step up to edit the Microdata specification, Microdata is dead at W3C. It just takes someone to raise their hand to volunteer, so why is it that out of a group of hundreds of people, no one has stepped up to maintain, create a test suite for, and push the Microdata specification forward?

A number of observers have been surprised by these events, but for those that have been involved in the month-to-month conversation around Microdata, it makes complete sense. Microdata doesn’t have an active community supporting it. It never really did. For a Web specification to be successful, it needs an active community around it that is willing to do the hard work of building and maintaining the technology. RDFa has that in spades, Microdata does not.

Microdata was, primarily, a shot across the bow at RDFa. The warning worked because the RDFa community reacted by creating RDFa Lite, which matches Microdata feature-for-feature while also supporting things that Microdata is incapable of doing. The existence of RDFa Lite left the HTML Working Group in an awkward position. Publishing two specifications that did the exact same thing in almost the exact same way is a position that no standards organization wants to be in. At that point, it became a race to see which community could create the developer tools and support the web developers who were marking up pages.

Microdata, to this day, still doesn’t have a specification editor, an active community, a solid test suite, or any of the other things that are necessary to become a world class technology. To be clear, I’m not saying Microdata is dying (4 million out of 329 million domains use it), just that not having these basic things in place will be very problematic for the future of Microdata.

To put that in perspective, HTML5+RDFa 1.1 will become an official W3C Recommendation (world standard) next Thursday. There was overwhelming support from the W3C member companies to publish it as a world standard. There have been multiple specification editors for RDFa throughout the years, there are hundreds of active people in the community integrating RDFa into pages across the Web, there are 7 implementations of RDFa in a variety of programming languages, there is a mailing list, website, and IRC channel dedicated to answering questions for people learning RDFa, and there is a test suite with 800 tests covering RDFa in 6 markup languages (HTML4, HTML5, XML, SVG, XHTML1, and XHTML5). If you want to build a solution on a solid technology, with a solid community and solid implementations, RDFa is that solution.

JSON-LD is the Bee’s Knees

Full disclosure: I’m one of the primary authors and editors of the JSON-LD specification. I am also the chair of the group that created JSON-LD and have been an active participant in a number of Linked Data initiatives: RDFa (chair, author, editor), JSON-LD (chair, co-creator), Microdata (primary opponent), and Microformats (member, haudio and hvideo microformat editor). I’m biased, but also well informed.

JSON-LD has been getting a great deal of good press lately. It was adopted by Google, Yahoo, Yandex, and Microsoft for use in schema.org. The PaySwarm universal payment protocol is based on it. It was also integrated with Google’s Gmail service and the open social networking folks have also started integrating it into the Activity Streams 2.0 work.

That all of these positive adoption stories exist was precisely why Shane Becker’s post arguing that JSON-LD is an Unneeded Spec was so surprising. If you haven’t read it yet, you may want to, as the rest of this post will dissect the arguments he makes (it’s a pretty quick 5-minute read). The post is a broad-brush opinion piece based on a number of factual errors and misinformed opinions. I’d like to clear up these errors in this blog post and underscore some of the reasons JSON-LD exists and how it has been developed.

A theatrical interpretation of the “JSON-LD is Unneeded” blog post

Shane starts with this claim:

Today I learned about a proposed spec called JSON-LD. The “LD” is for linked data (Linked Data™ in the Uppercase “S” Semantic Web sense).

When I started writing the original JSON-LD specification, one of the goals was to try and merge lessons learned in the Microformats community with lessons learned during the development of RDFa and Microdata. This meant figuring out a way to marry the lowercase semantic web with the uppercase Semantic Web in a way that was friendly to developers. For developers that didn’t care about the uppercase Semantic Web, JSON-LD would still provide a very useful data structure to program against. In fact, Microformats, which are the poster-child for the lowercase semantic web, were supported by JSON-LD from day one.

Shane’s article is misinformed with respect to the assertion that JSON-LD is solely for the uppercase Semantic Web. JSON-LD is mostly for the lowercase semantic web, the one that developers can use to make their applications exchange and merge data with other applications more easily. JSON-LD is also for the uppercase Semantic Web, the one that researchers and large enterprises are using to build systems like IBM’s Watson supercomputer, search crawlers, Gmail, and open social networking systems.

Linked data. Web sites. Standards. Machine readable.
Cool. All of those sound good to me. But they all sound familiar, like we’ve already done this before. In fact, we have.


We haven’t done something like JSON-LD before. I wish we had because we wouldn’t have had to spend all that time doing research and development to create the technology. When writing about technology, it is important to understand the basics of a technology stack before claiming that we’ve “done this before”. An astute reader will notice that at no point in Shane’s article is any text from the JSON-LD specification quoted, just the very basic introductory material on the landing page of the website. More on this below.

Linked data
That’s just the web, right? I mean, we’ve had the <a href> tag since literally the beginning of HTML / The Web. It’s for linking documents. Documents are a representation of data.

Speaking as someone who has been very involved in the Microformats and RDFa communities: yes, it’s true that the document-based Web can be used to publish Linked Data. The problem is that this standard way of expressing a followable link to another piece of data did not carry over to the data-based Web. That is, most JSON-based APIs don’t have a standard way of encoding a hyperlink.

The other implied assertion in the statement above is that the document-based Web is all we need. If this were true, sending HTML documents to Web applications would be all we needed. Web developers know that this isn’t the case today for a number of obvious reasons. We send JSON data back and forth on the Web when we need to program against things like Facebook, Google, or Twitter’s services. JSON is a very useful data format for machine-to-machine data exchange. The problem is that JSON data has no standard way of doing a variety of things we do on the document-based Web, like expressing links, expressing the types of data (like times and dates), and a variety of other very useful features for the data-based Web. This is one of the problems that JSON-LD addresses.
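To make the contrast concrete, here is a minimal sketch in TypeScript; the field names, context mappings, and URLs are invented purely for illustration. Only the JSON-LD version tells a consumer that author is a followable link and that published is a date.

// Plain JSON: "author" is just a string. A consumer has to guess that it is a
// URL that can be followed, and that "published" is a date rather than a label.
const plainJson = {
  title: "An Example Article",
  author: "https://example.com/people/alice",
  published: "2013-08-04"
};

// JSON-LD: the @context maps the short keys to URLs, declares that "author"
// is a link (@type: @id), and that "published" is an xsd:date. The instance
// URLs below are illustrative, not taken from any real site.
const jsonLd = {
  "@context": {
    "title": "http://schema.org/name",
    "author": { "@id": "http://schema.org/author", "@type": "@id" },
    "published": {
      "@id": "http://schema.org/datePublished",
      "@type": "http://www.w3.org/2001/XMLSchema#date"
    }
  },
  "@id": "https://example.com/articles/1",
  "title": "An Example Article",
  "author": "https://example.com/people/alice",
  "published": "2013-08-04"
};

console.log(JSON.stringify(jsonLd, null, 2));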

Web sites
If it’s not wrapped in HTML and viewable in a browser, is it really a website? JSON isn’t very useful in the browser by itself. It’s not style-able. It’s not very human-readable. And worst of all, it’s not clickable.

Websites are composed of many parts. It’s a weak argument to say that if a site is mainly composed of data that isn’t in HTML, and isn’t viewable in a browser, it’s not a real website. The vast majority of websites like Twitter and Facebook are composed of data and API calls with a relatively thin varnish of HTML on top. JSON is the primary way that applications interact with these and other data-driven websites. It’s almost guaranteed these days that any company with a popular API uses JSON in its Web service protocol.

Shane’s argument here is pretty confused. It assumes that the primary use of JSON-LD is to express data in an HTML page. Sure, JSON-LD can do that, but focusing on that brush stroke is missing the big picture. The big picture is that JSON-LD allows applications that use it to share data and interoperate in a way that is not possible with regular JSON, and it’s especially useful when used in conjunction with a Web service or a document-based database like MongoDB or CouchDB.
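Here is a small, hedged sketch of what that interoperability looks like in practice. The two services, their local key names, and their URLs are all hypothetical; the point is that both contexts map their local keys onto the same shared property URLs, so a JSON-LD aware consumer can recognize the two documents as statements about the same resource and merge them.

// Two hypothetical services expose the same person using different local keys.
const fromServiceA = {
  "@context": { "fullName": "http://schema.org/name" },
  "@id": "https://example.com/people/alice",
  "fullName": "Alice Example"
};

const fromServiceB = {
  "@context": {
    "nome": "http://schema.org/name",
    "homepage": { "@id": "http://schema.org/url", "@type": "@id" }
  },
  "@id": "https://example.com/people/alice",
  "nome": "Alice Example",
  "homepage": "https://alice.example.com/"
};

// Expanding both documents (for example, with the jsonld.js library's expand()
// call) yields the same absolute-URL form for the shared terms, so the data
// can be merged without guessing what "fullName" or "nome" mean.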

Standards based
To their credit, JSON-LD did license their website content Creative Commons CC0 Public Domain. But, the spec itself isn’t. It’s using (what seems to be) a W3C boilerplate copyright / license. Copyright © 2010-2013 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.


Nope. The JSON-LD specification has been released under a Creative Commons Attribution 3.0 license multiple times in the past, and it will be released under a Creative Commons license again, most probably CC0. The JSON-LD specification was developed in a W3C Community Group using a Creative Commons license and then released to be published as a Web standard via W3C using their W3C Community Final Specification Agreement (FSA), which allows the community to fork the specification at any point in time and publish it under a different license.

When you publish a document through the W3C, they have their own copyright, license, and patent policy associated with the document being published. There is a legal process in place at W3C that asserts that companies can implement W3C published standards in a patent and royalty-free way. You don’t get that with CC0, in fact, you don’t get any such vetting of the technology or any level of patent and royalty protection.

What we have with JSON-LD is better than what is proposed in Shane’s blog post. You get all of the benefits of having W3C member companies vet the technology for technical and patent issues while also being able to fork the specification at any point in the future and publish it under a license of your choosing as long as you state where the spec came from.

Machine readable
Ah… “machine readable”. Every couple of years the current trend of what machine readable data should look like changes (XML/JSON, RSS/Atom, xml-rpc/SOAP, rest/WS-*). Every time, there are the same promises. This will solve our problems. It won’t change. It’ll be supported forever. Interoperability. And every time, they break their promises. Today’s empires, tomorrow’s ashes.


At no point has any core designer of JSON-LD claimed 1) that JSON-LD will “solve our problems” (or even your particular problem), 2) that it won’t change, and 3) that it will be supported forever. These are straw-man arguments. The current consensus of the group is that JSON-LD is best suited to a particular class of problems and that some developers will have no need for it. JSON-LD is guaranteed to change in the future to keep pace with what we learn in the field, and we will strive for backward compatibility for features that are widely used. Without modification, standardized technologies have a shelf life of around 10 years, 20-30 if they’re great. The designers of JSON-LD understand that, like the Web, JSON-LD is just another grand experiment. If it’s useful, it’ll stick around for a while, if it isn’t, it’ll fade into history. I know of no great software developer or systems designer that has ever made these three claims and been serious about it.

We do think that JSON-LD will help Web applications interoperate better than they do with plain ol’ JSON. For an explanation of how, there is a nice video introducing JSON-LD.

With respect to the “Today’s empires, tomorrow’s ashes” cynicism, we’ve already seen a preview of the sort of advances that Web-based machine-readable data can unleash. Google, Yahoo!, Microsoft, Yandex, and Facebook all use a variety of machine-readable data technologies that have only recently been standardized. These technologies allow for faster, more accurate, and richer search results. They are also the driving technology for software systems like Watson. These systems exist because there are people plugging away at the hard problem of machine readable data in spite of cynicism directed at past failures. Those failures aren’t ashes, they’re the bedrock of tomorrow’s breakthroughs.

Instead of reinventing the everything (over and over again), let’s use what’s already there and what already works. In the case of linked data on the web, that’s html web pages with clickable links between them.

Microformats, Microdata, and RDFa do not work well for data-based Web services. Using Linked Data with data-based Web services is one of the primary reasons that JSON-LD was created.

For open standards, open license are a deal breaker. No license is more open than Creative Commons CC0 Public Domain + OWFa. (See also the Mozilla wiki about standards/license, for more.) There’s a growing list of standards that are already using CC0+OWFa.

I think there might be a typo here, but if not, I don’t understand why open licenses are a deal breaker for open standards, especially things like the W3C FSA or the Creative Commons licenses we’ve published the JSON-LD spec under. Additionally, CC0 + OWFa might be neat. Shane’s article was the first time that I had heard of OWFa, and I’d be a proponent for pushing it in the group if it granted more freedom to the people using and developing JSON-LD than the current set of agreements we have in place. After reading through the legal text of the OWFa, I can’t see what CC0 + OWFa buys us over CC0 + the W3C patent attribution. If someone would like to make the benefits clear, I could take a proposal to switch to CC0 + OWFa to the JSON-LD Community Group and see if there is interest in using that license in the future.

No process is more open than a publicly editable wiki.

A counter-point to publicly accessible forums

Publicly editable wikis are notorious for edit wars; they are not a panacea. Just because you have a wiki does not mean you have an open community. For example, the Microformats community was notorious for having a class of unelected admins who would meet in San Francisco and make decisions about the operation of the community. This seemingly innocuous practice would creep its way into the culture and technical discussion on a regular basis, leading to community members being banned from time to time. Similarly, Wikipedia has had numerous issues with publicly editable wikis and the behavior of its admins.

Depending on how you define “open”, there are a number of processes that are far more open than a publicly editable wiki. For example, the JSON-LD specification development process is completely open to the public, based on meritocracy, and consensus-driven. The mailing list is open. The bug tracker is open. We have weekly design teleconferences where all the audio is recorded and minuted. We hold these teleconferences to this day and will continue to hold them into the future because we make transparency a priority. JSON-LD is, as far as I know, the first specification in the world developed with all of the previously described operating guidelines as standard practice.

(Mailing lists are toxic.)

A community is as toxic as its organizational structure enables it to be. The JSON-LD community is based on meritocracy, consensus, and has operated in a very transparent manner since the beginning (open meetings, all calls are recorded and minuted, anyone can contribute to the spec, etc.). This has, unsurprisingly, resulted in a very pleasant and supportive community. That said, there is no perfect communication medium. They’re all lossy and they all have their benefits and drawbacks. Sometimes, when you combine multiple communication channels as a part of how your community operates, you get better outcomes.

Finally, for machine readable data, nothing has been more widely adopted by publishers and consumers than microformats. As of June 2012, microformats represents about 70% of all of the structured data on the web. And of that ~70%, the vast majority was h-card and xfn. (All RDFa is about 25% and microdata is a distant third.)

Microformats are good if all you need to do is publish your basic contact and social information on the Web. If you want to publish detailed product information, financial data, medical data, or address other more complex scenarios, Microformats won’t help you. There have been no new Microformats released in the last 5 years, and mailing list traffic has been almost non-existent for about as long. From what I can tell, most everyone has moved on to RDFa, Microdata, or JSON-LD.

There are a few people working on Microformats 2, but I haven’t seen it provide anything that isn’t already provided by existing solutions, which have the added benefit of being W3C standards or being backed by major companies like Google, Facebook, Yahoo!, Microsoft, and Yandex.

Maybe it’s because of the ease of publishing microformats. Maybe it’s the open process for developing the standards. Maybe it’s because microformats don’t require any additions to HTML. (Both RDFa and microdata required the use of additional attributes or XML namespaces.) Whatever the reason, microformats has the most uptake. So, why do people keep trying to reinvent what microformats is already doing well?

People aren’t reinventing what Microformats are already doing well, they’re attempting to address problems that Microformats do not solve.

For example, one of the reasons that Google adopted JSON-LD is because markup was much easier in JSON-LD than it was in Microformats, as evidenced by the example below:

Back to JSON-LD. The “Simple Example” listed on the homepage is a person object representing John Lennon. His birthday and wife are also listed on the object.

        {
          "@context": "http://json-ld.org/contexts/person.jsonld",
          "@id": "http://dbpedia.org/resource/John_Lennon",
          "name": "John Lennon",
          "born": "1940-10-09",
          "spouse": "http://dbpedia.org/resource/Cynthia_Lennon"
        }

I look at this and see what should have been HTML with microformats (h-card and xfn). This is actually a perfect use case for h-card and xfn: a person and their relationship to another person. Here’s how it could’ve been marked up instead.

        <div class="h-card">
          <a href="http://dbpedia.org/resource/John_Lennon" class="u-url u-uid p-name">John Lennon</a>
          <time class="dt-bday" datetime="1940-10-09">October 9<sup>th</sup>, 1940</time>
          <a rel="spouse" href="http://dbpedia.org/resource/Cynthia_Lennon">Cynthia Lennon</a>.
        </div>

I’m willing to bet that most people familiar with JSON will find the JSON-LD markup far easier to understand and get right than the Microformats-based equivalent. In addition, sending the Microformats markup to a REST-based Web service would be very strange, while sending the JSON-LD markup to one would feel perfectly natural to a modern-day Web developer.
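As a rough sketch of that last point, here is what posting the John Lennon document to a Web service could look like in TypeScript. The endpoint URL is invented for illustration; application/ld+json is the JSON-LD media type.

const person = {
  "@context": "http://json-ld.org/contexts/person.jsonld",
  "@id": "http://dbpedia.org/resource/John_Lennon",
  "name": "John Lennon",
  "born": "1940-10-09",
  "spouse": "http://dbpedia.org/resource/Cynthia_Lennon"
};

async function submitPerson(): Promise<void> {
  // POST the JSON-LD document as-is; no reshaping into HTML is necessary.
  const response = await fetch("https://api.example.com/people", {
    method: "POST",
    headers: { "Content-Type": "application/ld+json" },
    body: JSON.stringify(person)
  });
  console.log("Service responded with status", response.status);
}

submitPerson().catch(console.error);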

This HTML can be easily understood by machine parsers and humans parsers. Microformats 2 parsers already exists for: JavaScript (in the browser), Node.js, PHP and Ruby. HTML + microformats2 means that machines can read your linked data from your website and so can humans. It means that you don’t need an “API” that is something other than your website.

You have been able to do the same thing, and much more, using RDFa and Microdata for far longer (since 2006) than you have been able to do it in Microformats 2. Let’s be clear, there is no significant advantage to using Microformats 2 over RDFa or Microdata. In fact, there are a number of disadvantages for using Microformats 2 at this point, like little to no support from the search companies, very little software tooling, and an anemic community (of which I am a member) for starters. Additionally, HTML + Microformats 2 does not address the Web service API issue at all.

Please don’t waste time and energy reinventing all of the wheels. Instead, please use what already works and what works the webby way.


Do not miss the irony of this statement. RDFa has been doing what Microformats 2 does today since 2006, and it’s a Web standard. Even if you don’t like RDFa 1.0, RDFa 1.1, RDFa Lite 1.1, and Microdata all still came before Microformats 2. To assert that wheels should not be reinvented, and then to champion Microformats 2, which was created long after a number of well-established solutions already existed, is quite a strange position to take.

Conclusion

JSON-LD was created by people who have been directly involved in the Linked Data, lowercase semantic web, uppercase Semantic Web, Microformats, Microdata, and RDFa work. It has proven to be useful to them. A number of very large technology companies have adopted JSON-LD, further underscoring its utility. Expect more big announcements in the next six months. The JSON-LD specifications have been developed in a radically open and transparent way, and the document copyright and licensing provisions are equally open. I hope that this blog post has helped clarify most of the misinformed opinion in Shane Becker’s blog post.

Most importantly, cynicism will not solve the problems that we face on the Web today. Hard work will, and there are very few communities that I know of that work harder and more harmoniously than the excellent volunteers in the JSON-LD community.

If you would like to learn more about Linked Data, a good video introduction exists. If you want to learn more about JSON-LD, there is a good video introduction to that as well.

Secure Messaging vs. Javascript Object Signing and Encryption

The Web Payments group at the World Wide Web Consortium (W3C) is currently performing a thorough analysis of the MozPay API. The first part of the analysis examined the contents of the payment messages. This is the second part of the analysis, which focuses on whether the use of the Javascript Object Signing and Encryption (JOSE) group’s solutions to achieve message security is adequate, or whether the Web Payments group’s solutions should be used instead.

The Contenders

The IETF JOSE Working Group is actively standardizing the following specifications for the purposes of adding message security to JSON:

JSON Web Algorithms (JWA)
Details the cryptographic algorithms and identifiers that are meant to be used with the JSON Web Signature (JWS), JSON Web Encryption (JWE), JSON Web Token (JWT), and JSON Web Key (JWK) specifications. For example, when specifying a signing algorithm, a JSON key/value pair that has alg as the key may have HS256 as the value, which means HMAC using the SHA-256 hash function.
JSON Web Key (JWK)
Details a data structure that represents one or more cryptographic keys. If you need to express one of the many types of cryptographic keys in use today, this specification details how you do that in a standard way.
JSON Web Token (JWT)
Defines a way of representing claims such as “Bob was born on November 15th, 1984”. These claims are digitally signed and/or encrypted using either the JSON Web Signature (JWS) or JSON Web Encryption (JWE) specifications.
JSON Web Encryption (JWE)
Defines a way to express encrypted content using JSON-based data structures. Basically, if you want to encrypt JSON data so that only the intended receiver can read the data, this specification tells you how to do it in an interoperable way.
JSON Web Signature (JWS)
Defines a way to digitally sign JSON data structures. If your application needs to be able to verify the creator of a JSON data structure, you can use this specification to do so.

The W3C Web Payments group is actively standardizing a similar specification for the purpose of adding message security to JSON messages:

Secure Messaging (code named: HTTP Keys)
Describes a simple, decentralized security infrastructure for the Web based on JSON, Linked Data, and public key cryptography. This system enables Web applications to establish identities for agents on the Web, associate security credentials with those identities, and then use those security credentials to send and receive messages that are both encrypted and verifiable via digital signatures.

Both groups are relying on technology that has existed and been used for over a decade to achieve secure communications on the Internet (symmetric and asymmetric cryptography, public key infrastructure, X509 certificates, etc.). The key differences between the two have to do more with flexibility, implementation complexity, and how the data is published on the Web and used between systems.

Basic Differences

In general, the JOSE group is attempting to create a flexible, generalized way of expressing cryptography parameters in JSON. That information is then used to encrypt or sign specific data (called claims in the specifications).

The Web Payments group’s specification achieves the same thing without trying to be as generalized as the JOSE work. Flexibility and generalization tend to 1) make the ecosystem more complex than it needs to be for 95% of the use cases, 2) make implementations harder to security audit, and 3) make it more difficult to achieve interoperability between all implementations. The Secure Messaging specification attempts to outline a single best practice that will work for 95% of the applications out there. The 5% of Web applications that need to do more than the Secure Messaging spec provides can use the JOSE specifications. The Secure Messaging specification is also more Web-y, which gives us a number of benefits we will get into below, such as a Web-scale public key infrastructure as a pleasant side effect.

JSON-LD Advantages over JSON

Fundamentally, the Secure Messaging specification relies on the Web and Linked Data to remove some of the complexity that exists in the JOSE specs while also achieving greater flexibility from a data model perspective. Specifically, the Secure Messaging specification utilizes Linked Data via a new standards-track technology called JSON-LD to allow anyone to build on top of the core protocol in a decentralized way. JSON-LD data is fundamentally more Web-y than JSON data. Here are the benefits of using JSON-LD over regular JSON:

  • A universal identifier mechanism for JSON objects via the use of URLs.
  • A way to disambiguate JSON keys shared among different JSON documents by mapping them to URLs via a context.
  • A standard mechanism in which a value in a JSON object may refer to a JSON object on a different document or site on the Web.
  • A way to associate datatypes with values such as dates and times.
  • The ability to annotate strings with their language. For example, the word ‘chat’ means something different in English and French and it helps to know which language was used when expressing the text.
  • A facility to express one or more directed graphs, such as a social network, in a single document. Graphs are the native data structure of the Web.
  • A standard way to map external JSON application data to your application data domain.
  • A deterministic way to generate a hash on JSON data, which is helpful when attempting to figure out if two data sources are expressing the same information.
  • A standard way to digitally sign JSON data.
  • A deterministic way to merge JSON data from multiple data sources.

Plain old JSON, while incredibly useful, does not allow you to do the things mentioned above in a standard way. There is a valid argument that applications may not need this amount of flexibility, and for those applications, JSON-LD does not require any of the features above to be used and does not require the JSON data to be modified in any way. So people who want to remain in the plain ol’ JSON bucket can do so without needing to jump into the JSON-LD bucket with both feet.
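To ground the list above, here is a small, purely illustrative JSON-LD document (written as a TypeScript constant so it can carry comments) that exercises several of those features: a universal identifier, context-disambiguated keys, a followable link, a typed date, and a language-tagged string. The property URLs point at schema.org terms; the instance URLs and values are invented.

const product = {
  "@context": {
    "name": "http://schema.org/name",
    "description": "http://schema.org/description",
    "released": {
      "@id": "http://schema.org/releaseDate",
      "@type": "http://www.w3.org/2001/XMLSchema#date"
    },
    "manufacturer": { "@id": "http://schema.org/manufacturer", "@type": "@id" }
  },
  "@id": "https://store.example.com/products/42",       // universal identifier
  "name": "Example Widget",
  "description": { "@value": "Un widget d'exemple", "@language": "fr" },
  "released": "2013-08-04",                              // typed as xsd:date
  "manufacturer": "https://example.com/companies/acme"   // a followable link
};

console.log(JSON.stringify(product, null, 2));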

JSON Web Algorithms vs. Secure Messaging

The JSON Web Algorithms specification details the cryptographic algorithms and identifiers that are meant to be used with the JSON Web Signature (JWS), JSON Web Encryption (JWE), JSON Web Token (JWT), and JSON Web Key (JWK) specifications. For example, when specifying a signing algorithm, a JSON key/value pair that has alg as the key may have HS256 as the value, which means HMAC using the SHA-256 hash function. The specification is 70 pages long and is effectively just a catalog of the values that are allowed for each key used in JOSE-based JSON documents. The design approach taken for the JOSE specifications requires that such a document exist.

The Secure Messaging specification takes a different approach. Rather than declare all of the popular algorithms and cryptography schemes in use today, it defines just one digital signature scheme (RSA with SHA-256), one encryption scheme (128-bit AES in cipher block chaining mode), and one way of expressing keys (as PEM-formatted data). If placed into a single specification, like the JWA spec, it would be just a few pages long (really, just one page of actual content).

The most common argument against the Secure Messaging spec, with respect to the JWA specification, is that it lacks the cryptographic algorithm agility that the JWA specification provides. While this may seem like a valid argument on the surface, keep in mind that the core algorithms used by the Secure Messaging specification can be changed at any point to any other set of algorithms. So the specification achieves algorithm agility while greatly reducing the need for a large, 70-page specification detailing the allowable values for the various cryptographic algorithms. The other benefit is that, since the cryptography parameters are outlined in a Linked Data vocabulary instead of a process-heavy specification, they can be added to at any point as long as there is community consensus. Note that while the vocabulary can be added to, thus providing algorithm agility if a particular cryptography scheme is weakened or broken, cryptography schemes already defined in the vocabulary must not be changed once they become widely used, to ensure that production deployments that use the older mechanism aren’t broken.
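To illustrate what that kind of agility looks like, here is a sketch of two signature blocks. GraphSignature2012 is the vocabulary term used in the Secure Messaging examples later in this post; the second term, ExampleSignature2018, is invented here purely to show how a future suite could be swapped in by minting a new vocabulary term, and the signatureValue strings are truncated placeholders.

// The whole signature suite is named by a single @type vocabulary term rather
// than by a set of low-level algorithm parameters carried in every message.
const signedToday = {
  "@context": "https://w3id.org/security/v1",
  "signature": {
    "@type": "GraphSignature2012",
    "creator": "https://example.com/keys/1",
    "signatureValue": "OGQzN ... IyZTk="
  }
};

// A hypothetical future suite: only the @type term changes, the message shape
// stays the same. "ExampleSignature2018" is an invented name for illustration.
const signedWithFutureSuite = {
  "@context": "https://w3id.org/security/v1",
  "signature": {
    "@type": "ExampleSignature2018",
    "creator": "https://example.com/keys/1",
    "signatureValue": "QmFzZ ... 3VyZQ=="
  }
};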

Providing just one way, the best practice at the time, to do digital signatures, encryption, and key publishing reduces implementation complexity. Reducing implementation complexity makes it easier to perform security audits on implementations. Reducing implementation complexity also helps ensure better interoperability and more software library implementations, as the barrier to creating a fully conforming implementation is greatly reduced.

The Web Payments group believes that the primary digital signature and encryption schemes will have to be updated every 5-7 years. It is better to delay the decision to switch to another primary algorithm for as long as possible (and as long as it is safe to do so). Delaying the cryptographic algorithm decision ensures that the group will be able to make a more educated decision than attempting to predict which cryptographic algorithms may be the successors to currently deployed algorithms.

Bottom line: The Secure Messaging specification utilizes a much simpler approach than the JWA specification while supporting the same level of algorithm agility.

JSON Web Key vs. Secure Messaging

The JSON Web Key (JWK) specification details a data structure that is capable of representing one or more cryptographic keys. If you need to express one of the many types of cryptographic keys in use today, JWK details how to do that in a standard way. A typical RSA public key looks like the following using the JWK specification:

{
  "keys": [{
    "kty":"RSA",
    "n": "0vx7agoe ... DKgw",
    "e":"AQAB",
    "alg":"RS256",
    "kid":"2011-04-29"
  }]
}

A similar RSA public key looks like the following using the Secure Messaging specification:

{
  "@context": "https://w3id.org/security/v1",
  "@id": "https://example.com/i/bob/keys/1",
  "@type": "Key",
  "owner": "https://example.com/i/bob",
  "publicKeyPem": "-----BEGIN PUBLIC KEY-----\nMIIBG0BA...OClDQAB\n-----END PUBLIC KEY-----\n"
}

There are a number of differences between the two key formats. Specifically:

  1. The JWK format expresses key information by specifying the key parameters directly. The Secure Messaging format places all of the key parameters into a PEM-encoded blob. This approach was taken because it is easier for developers to use the PEM data without introducing errors. Since most Web developers do not understand what variables like dq (the second factor Chinese Remainder Theorem exponent parameter) or d (the Elliptic Curve private key parameter) are, the likelihood of transporting and publishing that sort of data without error is lower than when all of the parameters are kept in an opaque blob of information with a clear beginning and end (-----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY-----).
  2. In the general case, the Secure Messaging key format assigns URL identifiers to keys and publishes them on the Web as JSON-LD, and optionally as RDFa. This means that public key information is discoverable and human and machine-readable by default, which means that all of the key parameters can be read from the Web. The JWK mechanism does assign a key ID to keys, but does not require that they are published to the Web if they are to be used in message exchanges. The JWK specification could be extended to enable this, but by default, doesn’t provide this functionality.
  3. The Secure Messaging format is also capable of specifying an identity that owns the key, which allows a key to be tied to an identity and that identity to be used for things like access control to Web resources and REST APIs. The JWK format has no such mechanism outlined in the specification.

Bottom line: The Secure Messaging specification provides four major advantages over the JWK format: 1) the key information is expressed at a higher level, which makes it easier to work with for Web developers, 2) it allows key information to be discovered by dereferencing the key ID, 3) the key information can be published (and extended) in a variety of Linked Data formats, and 4) it provides the ability to assign ownership information to keys.
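As a rough sketch of what "discovery by dereferencing" means in practice, the snippet below fetches a key document at its URL and reads the PEM-encoded public key and its owner. It uses the key URL from the Secure Messaging example above and assumes that a document of that shape is actually published there; error handling is omitted for brevity.

interface SecureMessagingKey {
  "@context": string;
  "@id": string;
  "@type": string;
  owner: string;
  publicKeyPem: string;
}

async function fetchPublicKey(keyId: string): Promise<SecureMessagingKey> {
  // Dereference the key ID itself; the response is the key document.
  const response = await fetch(keyId, {
    headers: { Accept: "application/ld+json" }
  });
  return (await response.json()) as SecureMessagingKey;
}

fetchPublicKey("https://example.com/i/bob/keys/1").then((key) => {
  console.log("Key owned by:", key.owner);
  console.log("PEM data:", key.publicKeyPem);
});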

JSON Web Tokens vs. Secure Messaging

The JSON Web Tokens (JWT) specification defines a way of representing claims such as “Bob was born on November 15th, 1984”. These claims are digitally signed and/or encrypted using either the JSON Web Signature (JWS) or JSON Web Encryption (JWE) specifications. Here is an example of a JWT document:

{
  "iss": "joe",
  "exp": 1300819380,
  "http://example.com/is_root": true
}

JWT documents contain claim names that are registered and public, such as iss and exp above, as well as private claim names chosen by the application (which could collide with names used by other applications or by future revisions of the JWT specification). The data format is fairly free-form, meaning that any data can be placed inside a JWT Claims Set like the one above.

Since the Secure Messaging specification utilizes JSON-LD for its data expression mechanism, it takes a fundamentally different approach. There are no headers or claims sets in the Secure Messaging specification, just data. For example, the data below is effectively a JWT claims set expressed in JSON-LD:

{
  "@context": "http://json-ld.org/contexts/person",
  "@type": "Person",
  "name": "Manu Sporny",
  "gender": "male",
  "homepage": "http://manu.sporny.org/"
}

Note that there are no keywords specific to the Secure Messaging specification, just keys that are mapped to URLs (to prevent collisions) and data. In JSON-LD, these keys and data are machine-interpretable in a standards-compliant manner (unlike JWT data), and can be merged with other data sources without the danger of data being overwritten or colliding with other application data.

Bottom line: The Secure Messaging specification’s use of a native Linked Data format removes the requirement for a specification like JWT. As far as the Secure Messaging specification is concerned, there is just data, which you can then digitally sign and encrypt. This makes the data easier to work with for Web developers, as they can continue to use their application data as-is instead of restructuring it into a JWT.

JSON Web Encryption vs. Secure Messaging

The JSON Web Encryption (JWE) specification defines a way to express encrypted content using JSON-based data structures. Basically, if you want to encrypt JSON data so that only the intended receiver can read the data, this specification tells you how to do it in an interoperable way. A JWE-encrypted message looks like this:

{
  "protected": "eyJlbmMiOiJBMTI4Q0JDLUhTMjU2In0",
  "unprotected": {"jku": "https://server.example.com/keys.jwks"},
  "recipients": [{
    "header": {
      "alg": "RSA1_5",
      "kid": "2011-04-29",
      "enc": "A128CBC-HS256",
      "jku": "https://server.example.com/keys.jwks"
    },
    "encrypted_key": "UGhIOgu ... MR4gp_A"
  }],
  "iv": "AxY8DCtDaGlsbGljb3RoZQ",
  "ciphertext": "KDlTtXchhZTGufMYmOYGS4HffxPSUrfmqCHXaI9wOGY",
  "tag": "Mz-VPPyU4RlcuYv1IwIvzw"
}

To decrypt this information, an application would use the private key identified in recipients[0].header to decrypt the encrypted_key, yielding the content encryption key. It would then use that key, the iv, and the algorithm named in the base64url-decoded protected header to decrypt the ciphertext and retrieve the original message.

For comparison purposes, a Secure Messaging encrypted message looks like this:

{
  "@context": "https://w3id.org/security/v1",
  "@type": "EncryptedMessage2012",
  "data": "VTJGc2RH ... Fb009Cg==",
  "encryptionKey": "uATte ... HExjXQE=",
  "iv": "vcDU1eWTy8vVGhNOszREhSblFVqVnGpBUm0zMTRmcWtMrRX==",
  "publicKey": "https://example.com/people/john/keys/23"
}   

To decrypt this information, an application would use the private key associated with the publicKey to decrypt the encryptionKey and iv. It would then use the decrypted encryptionKey and iv to decrypt the value in data, retrieving the original message as a result.

The Secure Messaging encryption protocol is simpler than the JWE protocol for three major reasons:

  1. The @type of the message, EncryptedMessage2012, encapsulates all of the cryptographic algorithm information in a machine-readable way (that can also be hard-coded in implementations). The JWE specification utilizes the protected field to express the same sort of information, which is allowed to get far more complicated than the Secure Messaging equivalent, leading to more complexity.
  2. Key information is expressed in one entry, the publicKey entry, which is a link to a machine-readable document that can express not only the public key information, but who owns the key, the name of the key, creation and revocation dates for the key, as well as a number of other Linked Data values that result in a full-fledged Web-based PKI system. Not only is Secure Messaging encryption simpler than JWE, but it also enables many more types of extensibility.
  3. The key data is expressed in a PEM-encoded format, transported as a base-64 encoded blob of information. This approach was taken because it is easier for developers to use the data without introducing errors. Since most Web developers do not understand what variables like dq (the second factor Chinese Remainder Theorem exponent parameter) or d (the Elliptic Curve private key parameter) are, the likelihood of transporting and publishing that sort of data without error is lower than when all of the parameters are kept in an opaque blob of information with a clear beginning and end (-----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY-----).

The rest of the entries in the JSON are typically required for the encryption method selected to secure the message. There is not a great deal of difference between the two specifications when it comes to the parameters that are needed for the encryption algorithm.

Bottom line: The major difference between the Secure Messaging and JWE specification has to do with how the encryption parameters are specified as well as how many of them there can be. The Secure Messaging specification expresses only one encryption mechanism and outlines the algorithms and keys external to the message, which leads to a reduction in complexity. The JWE specification allows many more types of encryption schemes to be used, at the expense of added complexity.

JSON Web Signatures vs. Secure Messaging

The JSON Web Signatures (JWS) specification defines a way to digitally sign JSON data structures. If your application needs to be able to verify the creator of a JSON data structure, you can use this specification to do so. A JWS digital signature looks like the following:

{
  "payload": "eyJpc ... VlfQ",
  "signatures":[{
    "protected":"eyJhbGciOiJSUzI1NiJ9",
    "header": {
      "kid":"2010-12-29"
    },
    "signature": "cC4hi ... 77Rw"
  }]
}

For the purposes of comparison, a Secure Messaging message and signature looks like the following:

{
  "@context": ["https://w3id.org/security/v1", "http://json-ld.org/contexts/person"]
  "@type": "Person",
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/",
  "signature":
  {
    "@type": "GraphSignature2012",
    "creator": "http://example.org/manu/keys/5",
    "created": "2013-08-04T17:39:53Z",
    "signatureValue": "OGQzN ... IyZTk="
  }
}

There are a number of stark differences between the two specifications when it comes to digital signatures:

  1. The Secure Messaging specification does not need to base-64 encode the payload being signed. This makes it easier for a developer to see (and work with) the data that was digitally signed. Debugging signed messages is also simplified as special tools to decode the payload are unnecessary.
  2. The Secure Messaging specification does not require any header parameters for the payload, which reduces the number of things that can go wrong when verifying digitally signed messages. One could argue that this also reduces flexibility. The counter-argument is that different signature schemes can always be switched in by just changing the @type of the signature.
  3. The signer’s public key is available via a URL. This means that, in general, all Secure Messaging signatures can be verified by dereferencing the creator URL and utilizing the published key data to verify the signature.
  4. The Secure Messaging specification depends on a normalization algorithm that is applied to the message. This algorithm is non-trivial, but it is typically hidden behind a JSON-LD library’s .normalize() method call (see the sketch after this list). JWS does not require data normalization; the trade-off is simplicity at the expense of requiring your data to always be encapsulated in the message. For example, the Secure Messaging specification is capable of pointing to a digital signature expressed in RDFa on a website using a URL. An application can then dereference that URL, convert the data to JSON-LD, and verify the digital signature. This mechanism is useful, for example, when you want to publish items for sale along with their prices on a Web page in a machine-readable way. This sort of use case is not achievable with JWS, which requires all data to be in the message. In other words, Secure Messaging signs information that can live anywhere on the Web, where JWS signs a string of text carried in the message.
  5. The JWS mechanism enables HMAC-based signatures while the Secure Messaging mechanism avoids the use of HMAC altogether, taking the position that shared secrets are typically a bad practice.
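To make points 3 and 4 above more concrete, here is a rough Node.js sketch of the verification flow using the jsonld.js library’s normalize() call. The fetchJson() helper, the publicKeyPem property on the key document, the use of RSA-SHA256, and the exact string that gets signed (the created timestamp concatenated with the normalized data) are assumptions for illustration; the Secure Messaging specification defines the normative algorithm.

const crypto = require('crypto');
const jsonld = require('jsonld');

// Rough sketch: verify a GraphSignature2012-style signature (details assumed).
async function verifySignedMessage(doc, fetchJson /* hypothetical helper: URL -> parsed JSON */) {
  // 1. Dereference the creator URL to discover the signer's public key (PEM).
  const keyDoc = await fetchJson(doc.signature.creator);
  // 2. Remove the signature property and normalize the remaining graph.
  //    (Promise-style API shown; option names vary across jsonld.js versions.)
  const unsigned = Object.assign({}, doc);
  delete unsigned.signature;
  const normalized = await jsonld.normalize(unsigned, { format: 'application/n-quads' });
  // 3. Verify the signature over the created timestamp plus the normalized data.
  const verifier = crypto.createVerify('RSA-SHA256');
  verifier.update(doc.signature.created + normalized);
  return verifier.verify(keyDoc.publicKeyPem, doc.signature.signatureValue, 'base64');
}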

Bottom line: The Secure Messaging specification does not need to encode its payloads, but it does require a rather complex normalization algorithm. It supports discovery of signature key data so that signatures can be verified using standard Web protocols. The JWS specification is more flexible from an algorithmic standpoint and simpler from a signature verification standpoint. The downside is that the data being signed must come from the message itself and can’t come from an external Linked Data source, like an HTML+RDFa web page listing items for sale.

Conclusion

The Secure Messaging and JOSE designs, while attempting to achieve the same basic goals, deviate in the approaches taken to accomplish those goals. The Secure Messaging specification leverages more of the Web with its use of a Linked Data format and URLs for identifying and verifying identity and keys. It also attempts to encapsulate a single best practice that will work for the vast majority of Web applications in use today. The JOSE specifications are more flexible in the type of cryptographic algorithms that can be used which results in more low-level primitives used in the protocol, increasing complexity for developers that must create interoperable JOSE-based applications.

From a specification size standpoint, the JOSE specs weigh in at 225 pages, while the Secure Messaging specification weighs in at around 20 pages. Page count is rarely a good way to compare specifications and doesn’t always result in an apples-to-apples comparison. It does, however, give a general idea of the amount of text required to explain the details of each approach, and thus a ballpark idea of the complexity associated with each specification. Like all specifications, picking one depends on the use cases that an application is attempting to support. The goal with the Secure Messaging specification is that it will be good enough for 95% of Web developers out there; for the remaining 5%, there is the JOSE stack.

Technical Analysis of 2012 MozPay API Message Format

The W3C Web Payments group is currently analyzing a new API for performing payments via web browsers and other devices connected to the web. This blog post is a technical analysis of the MozPay API with a specific focus on the payment protocol and its use of JOSE (JSON Object Signing and Encryption). The first part of the analysis takes the approach of examining the data structures used today in the MozPay API and compares them against what is possible via PaySwarm. The second part of the analysis examines the use of JOSE to achieve the use case and security requirements of the MozPay API and compares the solution to JSON-LD, which is the mechanism used to achieve the use case and security requirements of the PaySwarm specification.

Before we start, it’s useful to have an example of what the current MozPay payment initiation message looks like. This message is generated by a MozPay Payment Provider and given to the browser to initiate a native purchase process:

jwt.encode({
  "iss": APPLICATION_KEY,
  "aud": "marketplace.firefox.com",
  "typ": "mozilla/payments/pay/v1",
  "iat": 1337357297,
  "exp": 1337360897,
  "request": {
    "id": "915c07fc-87df-46e5-9513-45cb6e504e39",
    "pricePoint": 1,
    "name": "Magical Unicorn",
    "description": "Adventure Game item",
    "icons": {
      "64": "https://yourapp.com/img/icon-64.png",
      "128": "https://yourapp.com/img/icon-128.png"
    },
    "productData": "user_id=1234&my_session_id=XYZ",
    "postbackURL": "https://yourapp.com/payments/postback",
    "chargebackURL": "https://yourapp.com/payments/chargeback"
  }
}, APPLICATION_SECRET)

The message is effectively a JSON Web Token. I say effectively because it seems like it breaks the JWT spec in subtle ways, but it may be that I’m misreading the JWT spec.

There are a number of issues with the message that we’ve had to deal with when creating the set of PaySwarm specifications. It’s important that we call those issues out first to get an understanding of the basic concerns with the MozPay API as it stands today. The comments below use the JWT code above as a reference point.

Unnecessarily Cryptic JSON Keys

...
  "iss": APPLICATION_KEY,
  "aud": "marketplace.firefox.com",
  "typ": "mozilla/payments/pay/v1",
  "iat": 1337357297,
  "exp": 1337360897,
...

This is more of an issue with the JOSE specs than it is with the MozPay API. I can’t think of a good argument for shortening things like ‘issuer’ to ‘iss’ and ‘type’ to ‘typ’ (seriously :) was the ‘e’ too much?). It comes off as 1980s protocol design, trying to save bits on the wire. Making a human-readable message format less readable in order to save a few characters works against the point of having a human-readable format in the first place. I had to look up what iss, aud, iat, and exp meant. The only reason I could come up with for such terse entries is that the JOSE designers were attempting to avoid conflicts with existing data in JWT claims objects. If that was the case, they should have used a prefix like “@” or “$”, or placed the data in a container value associated with a key like ‘claims’.

PaySwarm always attempts to use terminology that doesn’t require you to go and look at the specification to figure out basic things. For example, it uses creator for iss (issuer), validFrom for iat (issued at), and validUntil for exp (expiration time).

iss and APPLICATION_KEY

...
  "iss": APPLICATION_KEY,
...

The MozPay API specification does not require the APPLICATION_KEY to be a URL. Since it’s not a URL, it’s not discoverable. The application key is also specific to each Marketplace, which means that one Marketplace could use a UUID, another could use a URL, and so on. If the system is intended to be decentralized and interoperable, the APPLICATION_KEY should either be dereferenceable on the public Web without coordination with any particular entity, or a format for the key should be outlined in the specification.

All identities and keys used for digital signatures in PaySwarm are identified by URLs that dereference to key information in a machine-readable format (RDFa and JSON-LD, for now). This means that 1) they’re Web-native, 2) they can be dereferenced, and 3) when they’re dereferenced, a client can extract useful data from the document retrieved.

Audience

...
  "aud": "marketplace.firefox.com",
...

It’s not clear what the aud parameter is used for in the MozPay API, other than to identify the marketplace.

Issued At and Expiration Time

...
  "iat": 1337357297,
  "exp": 1337360897,
...

The iat (issued at) and exp (expiration time) values are encoded as the number of seconds since January 1st, 1970. These are not very human-readable and make debugging issues with purchases more difficult than they need to be.

PaySwarm uses the W3C Date/Time format, which is a human-readable string format that is also easy for machines to process. For example, November 5th, 2013 at 1:15:30 PM (Zulu / Universal Time) is encoded as: 2013-11-05T13:15:30Z.
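For example, converting the iat value from the example above into the W3C format is a one-liner in JavaScript:

// 1337357297 seconds since the epoch becomes a readable timestamp.
new Date(1337357297 * 1000).toISOString();  // "2012-05-18T16:08:17.000Z"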

The Request

...
  "request": {
    "id": "915c07fc-87df-46e5-9513-45cb6e504e39",
    "pricePoint": 1,
    "name": "Magical Unicorn",
...

This object in the MozPay API is a description of the thing that is to be sold. Technically, it’s not really a request; the outer object is the request. There is a bit of a conflation of terminology here that should probably be fixed at some point.

In PaySwarm, the contents of the MozPay request value is called an Asset. An asset is a description of the thing that is to be sold.

Request ID

...
{
  "request": {
    "id": "915c07fc-87df-46e5-9513-45cb6e504e39",
...

The MozPay API encodes the request ID as a universally unique identifier (UUID). The major downside to this approach is that other applications can’t find the information on the Web to 1) discover more about the item being sold, 2) discuss the item being sold by referring to it by a universal ID, 3) feed it to a system that can read data published at the identifier address, and 4) index it for the purposes of searching.

The PaySwarm specifications use a URL as the identifier for an asset and publish machine-readable data at that location so that other systems can discover more information about the item being sold, refer to it in discussions (like reviews of the item), start a purchase by referencing the URL, and index it for use in price-comparison and search engines.

Price Point

...
  "request": {
...
    "pricePoint": 1,
...

The pricePoint for the item being sold is currently a whole number. This is problematic because prices are usually decimal numbers including a fraction and a currency.

PaySwarm publishes its pricing information in a currency-agnostic way that is compatible with all known monetary systems. Some of these systems include USD, EUR, JPY, RMB, Bitcoin, Brixton Pound, Bernal Bucks, Ven, and a variety of other alternative currencies. The amount is specified as a decimal with a fraction, alongside a currency URL. A URL is used for the currency because PaySwarm allows arbitrary currencies to be created and managed outside of the PaySwarm system.
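As a rough illustration of what currency-agnostic pricing looks like, the snippet below uses made-up property names and a made-up currency URL; it is not quoted from the PaySwarm vocabulary:

{
  "amount": "4.99",
  "currency": "https://currencies.example/USD"
}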

Icons

...
  "request": {
...
    "icons": {
      "64": "https://yourapp.com/img/icon-64.png",
      "128": "https://yourapp.com/img/icon-128.png"
    },
...

Icon data is currently modeled in a way that is useful to developers by indexing each icon by its square pixel size. This allows developers to access the data like so: icons.64 or icons.128. Values are image URLs, which is the right choice.

PaySwarm uses JSON-LD and can support this sort of data layout through a feature called data indexing. Another approach is to just have an array of objects for icons, which would allow us to include extended information about the icons. For example:

...
  "request": {
...
  "icon": [{size: 64, id: "https://yourapp.com/img/icon-64.png", label: "Magical Unicorn"}, ...]
...

Product Data

...
  "request": {
...
    "productData": "user_id=1234&my_session_id=XYZ",
...

If the payment technology we’re working on is going to be useful to society at large, we have to allow richer descriptions of products. For example, model numbers, rich markup descriptions, pictures, ratings, colors, and licensing terms are all important parts of a product description. The value needs to hold more than a 256-byte string and needs to support decentralized extensibility. For example, Home Depot should be able to list UPC numbers and internal reference numbers in the asset description, and the payment protocol should preserve that extra information, placing it into digital receipts.

PaySwarm uses JSON-LD and thus supports decentralized extensibility for product data. This means that any vendor may express information about the asset in JSON-LD and it will be preserved in all digital contracts and digital receipts. This allows the asset and digital receipt format to be used as a platform that can be built on top of by innovative retailers. It also increases data fidelity by allowing far more detailed markup of asset information than what is currently allowed via the MozPay API.
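As a hypothetical sketch of decentralized extensibility (the homedepot.example vocabulary and the property names below are invented for illustration, not taken from any published vocabulary), a vendor could annotate an asset like this without coordinating with anyone:

{
  "@context": ["https://w3id.org/payswarm/v1", {"hd": "https://homedepot.example/vocab#"}],
  "@type": "Asset",
  "title": "Magical Unicorn",
  "hd:upc": "012345678905",
  "hd:internalSku": "HD-88123"
}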

Postback URL

...
  "request": {
...
    "postbackURL": "https://yourapp.com/payments/postback",
...

The postback URL is a pretty universal concept among Web-based payment systems. The payment processor needs a URL endpoint that the result of the purchase can be sent to. The postback URL serves this purpose.

PaySwarm has a similar concept, but just lists it in the request URL as ‘callback’.

Chargeback URL

...
  "request": {
...
    "chargebackURL": "https://yourapp.com/payments/chargeback"
...

The chargeback URL is a URL endpoint that is called whenever a refund is issued for a purchased item. It’s not clear if the vendor has a say in whether or not this should be allowed for a particular item. For example, what happens when a purchase is performed for a physical good? Should chargebacks be easy to do for those sorts of items?

PaySwarm does not build chargebacks into the core protocol. It lets the merchant request the digital receipt of the sale to figure out if the sale has been invalidated. It does seem like a good idea to have a notification mechanism built into the core protocol; we’ll need more discussion to figure out how to correctly handle vendor-approved refunds and customer-requested chargebacks.

Conclusion

There are a number of improvements that could be made to the basic MozPay API that would enable more use cases to be supported in the future while keeping the level of complexity close to what it currently is. The second part of this analysis will examine the JSON Object Signing and Encryption (JOSE) technology stack and determine if there is a simpler solution that could be leveraged to meet the digital signature requirements set forth by the MozPay API.

[UPDATE: The second part of this analysis is now available]

Verifiable Messaging over HTTP

Problem: Figure out a simple way to enable a Web client or server to authenticate and authorize itself to do a REST API call. Do this in one HTTP round-trip.

There is a new specification that is making the rounds called HTTP Signatures. It enables a Web client or server to authenticate and authorize itself when doing a REST API call and only requires one HTTP round-trip to accomplish the feat. The meat of the spec is 5 pages long, and the technology is simple and awesome.

We’re working on this spec in the Web Payments group at the World Wide Web Consortium because it’s going to be a fundamental part of the payment architecture we’re building into the core of the Web. When you send money to or receive money from someone, you want to make sure that the transaction is secure. HTTP Signatures help to secure that financial transaction.

However, the really great thing about HTTP Signatures is that it can be applied anywhere password or OAuth-based authentication and authorization is used today. Passwords, and shared secrets in general, are increasingly becoming a problem on the Web. OAuth 2 sucks for a number of reasons. It’s time for something simpler and more powerful.

HTTP Signatures:

  1. Work over both HTTP and HTTPS. You don’t need to spend money on expensive SSL/TLS security certificates to use it.
  2. Protect messages sent over HTTP or HTTPS by digitally signing the contents, ensuring that the data cannot be tampered with in transit. In the case that HTTPS security is breached, it provides an additional layer of protection.
  3. Identify the signer and establish a certain level of authorization to perform actions over a REST API. It’s like OAuth, only way simpler.

When coupled with the Web Keys specification, HTTP Signatures:

  1. Provide a mechanism where the digital signature key does not need to be registered in advance with the server. The server can automatically discover the key from the message and determine what level of access the client should have.
  2. Enable a fully distributed Public Key Infrastructure for the Web. This opens up new ways to more securely communicate over the Web, which is timely considering the recent news concerning the PRISM surveillance program.
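To make this concrete, here is a rough JavaScript sketch of what a client does to produce the signature carried in the Authorization header. The parameter names and the choice of signed headers follow my reading of the draft at the time and may differ from the normative text; consult the spec for the details.

// Sketch: produce the value of an Authorization: Signature header (details assumed).
const crypto = require('crypto');

function signRequest(dateHeader, keyId, privateKeyPem) {
  // Sign the selected headers (here, just the Date header) with the client's private key.
  const signature = crypto.createSign('RSA-SHA256')
    .update('date: ' + dateHeader)
    .sign(privateKeyPem, 'base64');
  // keyId is a URL; with Web Keys, the server dereferences it to fetch the public key.
  return 'Signature keyId="' + keyId + '",algorithm="rsa-sha256",' +
         'headers="date",signature="' + signature + '"';
}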

If you’re interested in learning more about HTTP Signatures, the spec is a pretty quick read. You can also read (or listen to) the meeting notes from a week ago, or from today, where we discuss the HTTP Signatures spec. If you want to keep up with how the spec is progressing, join the Web Payments mailing list.

Google adds JSON-LD support to Search and Google Now

Full disclosure: I’m one of the primary designers of JSON-LD and the Chair of the JSON-LD group at the World Wide Web Consortium.

Last week, Google announced support for JSON-LD markup in Gmail. Putting JSON-LD in front of 425 million people is a big validation of the technology.

Hot on the heels of last week’s announcement, Google has just announced additional JSON-LD support for two more of their core products! The first is their flagship product, Google Search. The second is their new intelligent personal assistant service, Google Now.

The addition of JSON-LD support to Google Search now allows you to do incredibly accurate personalized searches. For example, here’s a search for “my flights”:

and here’s an example for “my hotel reservation for next week”:

Web developers that mark up certain types of information as JSON-LD in the e-mails that they send to you can now enable new functionality in these core Google services. For example, using JSON-LD will make it really easy for you to manage flights, hotel bookings, reservations at restaurants, and events like concerts and movies from within Google’s ecosystem. It also makes it easy for services like Google Now to push a notification to your phone when your flight has been delayed:

Or, show your boarding pass on your mobile phone when you’ve arrived at the airport:

Or, let you know when you need to leave to make your reservation for a restaurant:

Google Search and Google Now can make these recommendations to you because the information that you received about these flights, boarding passes, hotels, reservations, and other events was marked up in JSON-LD format when it hit your Gmail inbox. The most exciting thing about all of this is that it’s just the beginning of what Linked Data can do for all of us. Over the next decade, Linked Data will be at the center of getting computing, and the monotonous details of our everyday grind, out of the way so that we can focus more on enjoying our lives.
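If you’re curious what this kind of markup looks like, here is a hypothetical sketch of the JSON-LD an airline might embed in a confirmation e-mail. The types and properties come from schema.org, but the specific values and the overall shape are illustrative, not copied from Google’s documentation:

{
  "@context": "http://schema.org/",
  "@type": "FlightReservation",
  "reservationNumber": "RXJ34P",
  "underName": { "@type": "Person", "name": "Manu Sporny" },
  "reservationFor": {
    "@type": "Flight",
    "flightNumber": "UA110",
    "departureAirport": { "@type": "Airport", "iataCode": "IAD" },
    "arrivalAirport": { "@type": "Airport", "iataCode": "SFO" },
    "departureTime": "2013-05-21T07:50:00-04:00"
  }
}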

If you want to dive deeper into this technology, Google’s page on schemas is a good place to start.

Google adds JSON-LD support to Gmail

Google announced support for JSON-LD markup in Gmail at Google I/O 2013. The design team behind JSON-LD is delighted by this announcement and applauds the Google engineers that integrated JSON-LD with Gmail. This blog post examines what this announcement means for Gmail customers and provides some suggestions to the Google Gmail engineers on how they could improve their JSON-LD markup.

JSON-LD enables the representation of Linked Data in JSON by describing a common JSON representation format for expressing graphs of information (see Google’s Knowledge Graph). It allows you to mix regular JSON data with Linked Data in a single JSON document. The format has already been adopted by large companies such as Google in their Gmail product and is now available to over 425 million people via currently live software products around the world.

The syntax is designed not to disturb already deployed systems running on JSON, but to provide a smooth upgrade path from JSON to JSON-LD. It is primarily intended to be a way to use Linked Data in Web-based programming environments, to build interoperable Linked Data Web services, and to store Linked Data in JSON-based storage engines.

For Google’s Gmail customers, this means that Gmail will now be able to recognize people, places, events, and a variety of other Linked Data objects. You can then take actions on the Linked Data objects embedded in an e-mail. For example, if someone sends you an invitation to a party, you can do a single-click response on whether or not you’ll attend the party right from your inbox. Doing so will also create a reminder for the party in your calendar. There are other actions that you can perform on Linked Data objects as well, like approving an expense report, reviewing a restaurant, saving a coupon for a free online movie, making a flight, hotel, or restaurant reservation, and many other really cool things that you couldn’t do before from the inside of your inbox.

What Google Got Right and Wrong

Google followed the JSON-LD standard pretty closely, so the vast majority of the markup looks really great. However, there are four issues that the Google engineers will probably want to fix before pushing the technology out to developers.

Invalid Context URL

The first issue is a fairly major one. Google isn’t using the JSON-LD @context parameter correctly in any of their markup examples. It’s supposed to be a URL, but they’re using a text string instead. This means that their JSON-LD documents are unreadable by all of the conforming JSON-LD processors today. For example, Google does the following when declaring a context in JSON-LD:

  "@context": "schema.org"

When they should be doing this:

  "@context": "http://schema.org/"

It’s a fairly simple change; just add “http://” to the beginning of the “schema.org” value. If Google doesn’t make this change, it’ll mean that JSON-LD processors will have to include a special hack to translate “schema.org” to “http://schema.org/” just for this use case. I hope that this was just a simple oversight by the Google engineers that implemented these features and not something that was intentional.

Context isn’t Online

The second issue has to do with the JSON-LD Context for schema.org. There doesn’t seem to be a downloadable context for schema.org at the moment. Not having a Web-accessible JSON-LD context is bad because the context is at the heart and soul of a JSON-LD document. If you don’t publish a JSON-LD context on the Web somewhere, applications won’t be able to resolve any of the Linked Data objects in the document.

The Google engineers could fix this fairly easily by providing a JSON-LD Context document when a web client requests a document of type “application/ld+json” from the http://schema.org/ URL. The JSON-LD community would be happy to help the Google engineers create such a document.
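To give a sense of scale, a minimal context document for schema.org could be as small as the following sketch. This is one possible shape, not the document the community would ultimately publish:

{
  "@context": {
    "@vocab": "http://schema.org/",
    "image": { "@type": "@id" },
    "url": { "@type": "@id" }
  }
}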

Keyword Aliasing, FTW

The third issue is a minor usability issue with the markup. The Google help pages on the JSON-LD functionality use the @type keyword in JSON-LD to state the type of Linked Data object being described. The Google engineers that wrote this feature may not have been aware of the Keyword Aliasing feature in JSON-LD; they could have simply aliased @type to type. Doing so would mean that the Gmail developer documentation wouldn’t have to mention the “specialness” of the @type keyword.
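For the curious, aliasing is a one-line addition to the context. A document could combine the schema.org context with a local alias like this (a sketch; the remote context URL assumes the fix discussed above):

{
  "@context": ["http://schema.org/", { "type": "@type" }],
  "type": "Person",
  "name": "Manu Sporny"
}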

Use RDFa Lite

The fourth issue concerns the use of Microdata. JSON-LD was designed to work seamlessly with RDFa Lite 1.1; you can easily and losslessly convert data between the two markup formats. JSON-LD is compatible with Microdata, but pairing the two is a sub-optimal design choice. When JSON-LD data is converted to Microdata, information is lost due to data fidelity issues in Microdata. For example, there is no mechanism to specify that a value is a URL in Microdata.

RDFa Lite 1.1 does not suffer from these issues and has been proven to be a drop-in replacement for Microdata without any of the downsides that Microdata has. The designers of JSON-LD are the same designers behind RDFa Lite 1.1 and have extensive experience with Microdata. We specifically did not choose to pair JSON-LD with Microdata because it was a bad design choice for a number of reasons. I hope that the Google engineers will seek out advice from the JSON-LD and RDFa communities before finalizing the decision to use Microdata, as there are numerous downsides associated with that decision.

Closing

All in all, the Google engineers did a good job of implementing JSON-LD in Gmail. With a few small fixes to the Gmail documentation and code examples, they will be fully compliant with the JSON-LD specifications. The JSON-LD community is excited about this development and looks forward to working with Google to improve the recent release of JSON-LD for Gmail.

Permanent Identifiers for the Web

Web applications that deal with data on the web often need to specify and use URLs that are very stable. They utilize services such as purl.org to ensure that applications using their URLs will always be re-directed to a working website. These “permanent URL” redirection services operate kind of like a switchboard, connecting requests for information with the true location of the information on the Web. These switchboards can be reconfigured to point to a new location if the old location stops working.

How Does it Work?

If the concept sounds a bit vague, perhaps an example will help. A web author could use the following link (https://w3id.org/payswarm/v1) to refer to an important document. That link is hosted on a permanent identifier service. When a Web browser attempts to retrieve that link, it will be re-directed to the true location of the document on the Web. Currently, that location is https://payswarm.com/contexts/payswarm-v1.jsonld. If the location of the payswarm-v1.jsonld document changes at any point in the future, the only thing that needs to be updated is the re-direction entry on w3id.org. That is, all Web applications that use the https://w3id.org/payswarm/v1 URL will be transparently re-directed to the new location of the document and will continue to “Just Work™”.
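You can watch the switchboard do its work with a few lines of JavaScript. Node’s https.get() does not follow redirects by default, so the response exposes the redirect status and the Location header (the exact status code used by w3id.org may vary):

// Ask w3id.org where the document currently lives.
const https = require('https');
https.get('https://w3id.org/payswarm/v1', (res) => {
  console.log(res.statusCode, res.headers.location);
  // e.g. 302 https://payswarm.com/contexts/payswarm-v1.jsonld
});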

w3id.org Launches

Permanent identifiers on the Web are an important thing to support, but until today there was no organization that would back a service for the Web to keep these sorts of permanent identifiers operating over the course of multiple decades. A number of us saw that this is a real problem and so we launched w3id.org, which is a permanent identifier service for the Web. The purpose of w3id.org is to provide a secure, permanent URL re-direction service for Web applications. This service will be run and operated by the W3C Permanent Identifier Community Group.

Specifically, the following organizations have pledged responsibility to ensure the operation of this service for the decades to come: Digital Bazaar, 3 Round Stones, OpenLink Software, Applied Testing and Technology, and Openspring. Many more organizations will join in time.

These organizations are responsible for all administrative tasks associated with operating the service. The social contract between these organizations gives each of them full access to all information required to maintain and operate the website. The agreement is set up such that a number of these companies could fail, lose interest, or become unavailable for long periods of time without negatively affecting the operation of the site.

Why not purl.org

While many web authors and data publishers currently use purl.org, there are a number of issues or concerns that we have about the website:

  1. The site was designed for the library community and was never intended to be used by the general Web.
  2. Requests for information or changes to the service frequently go unanswered.
  3. The site does not support HTTPS connections, which means it cannot be used to serve documents for security-sensitive industries such as medicine and finance. Requests to migrate the site to HTTPS have gone unanswered.
  4. There is no published backup or fail-over plan for the website.
  5. The site is run by a single organization, with a single part-time administrator, on a single machine. It suffers from multiple single points of failure.

w3id.org Features

The launch of the w3id.org website mitigates all of the issues outlined above with purl.org:

  1. The site is specifically designed for web developers, authors, and data publishers on the general Web. It is not tailored for any specific community.
  2. Requests for information can be sent to a public mailing list that contains multiple administrators that are accountable for answering questions publicly. All administrators have been actively involved in world standards for many years and know how to run a service at this scale.
  3. The site supports HTTPS security, which means it can be used to securely serve data for industries such as medicine and finance.
  4. Multiple organizations, with multiple administrators per organization, have full access to administer all aspects of the site and recover it from any potential failure. All important site data is in version control and is mirrored across the world on a regular basis.
  5. The site is run by a consortium of organizations that have each pledged to maintain the site for as long as possible. If a member organization fails, a new one will be found to replace the failing organization while the rest of the members ensure the smooth operation of the site.

All identifiers associated with the w3id.org website are intended to be around for as long as the Web is around. This means decades, if not centuries. If the final destination for popular identifiers used by this service fail in such a way as to be a major inconvenience or danger to the Web, the community will mirror the information for the popular identifier and setup a working redirect to restore service to the rest of the Web.

Adding a Permanent Identifier

Anyone with a github account and knowledge of simple Apache redirect rules can add a permanent identifier to w3id.org by performing the following steps:

  1. Fork w3id.org on Github.
  2. Add a new redirect entry and commit your changes (a sketch of what such an entry looks like follows this list).
  3. Submit a pull request for your changes.
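A redirect entry is nothing exotic. The sketch below shows what one might look like in an Apache .htaccess file; the exact file layout and directives used by the w3id.org repository may differ:

# Hypothetical entry: route /payswarm/v1 to the document's current home.
RedirectMatch permanent "^/payswarm/v1$" "https://payswarm.com/contexts/payswarm-v1.jsonld"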

If you wish to engage the community in discussion about this service for your Web application, please send an e-mail to the public-perma-id@w3.org mailing list. If you are interested in helping to maintain this service for the Web, please join the W3C Permanent Identifier Community Group.


Note: The letters ‘w3’ in the w3id.org domain name stand for “World Wide Web”. Other than hosting the software for the Permanent Identifier Community Group, the “World Wide Web Consortium” (W3C) is not involved in the support or management of w3id.org in any way.

Browser Payments 1.0

Kumar McMillan (Mozilla/Firefox OS) and I (PaySwarm/Web Payments) have just published the first draft of the Browser Payments 1.0 API. The purpose of the spec is to establish a way to initiate payments from within the browser. It is currently a direct port of the mozPay API framework that is integrated into Firefox OS. It enables Web content to initiate a payment or issue a refund for a product or service. Once implemented in the browser, a Web author may call the navigator.payment() function to initiate a payment.
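For example, kicking off a purchase from a page might look roughly like the following. This is a hypothetical sketch: the argument shape (an array of signed payment request JWTs) and the DOMRequest-style result follow the mozPay heritage of the draft, and the exact signature is one of the details still being worked out.

// Hypothetical sketch of initiating a payment from Web content.
var request = navigator.payment([signedPaymentRequestJWT]);
request.onsuccess = function () {
  // The payment flow completed; confirm the purchase with your server.
  console.log('payment succeeded');
};
request.onerror = function () {
  console.log('payment failed: ' + request.error.name);
};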

This is work that we intend to pursue in the Web Payments Community Group at W3C. The work will eventually be turned over to a Web Payments Working Group at W3C, which we’re trying to kick-start at some point this year.

The current Browser Payments 1.0 spec can be read here:

http://web-payments.github.io/browser-payments/

The github repository for the spec is here:

https://github.com/web-payments/browser-payments/

Keep in mind that this is a very early draft of the spec. There are lots of prose issues as well as bugs that need to be sorted out. There are also a number of things that we need to discuss about the spec and how it fits into the larger Web ecosystem. Things like how it integrates with Persona and PaySwarm are still details that we need to suss out. There is a bug and issue tracker for the spec here:

https://github.com/web-payments/browser-payments/issues

The Mozilla guys will be on next week’s Web Payments telecon (Wednesday, 11am EST) for a Q/A session about this specification. Join us if you’re interested in payments in the browser. The call is open to the public; details about joining and listening in can be found here:

https://payswarm.com/minutes/

Identifiers in JSON-LD and RDF

TL;DR: This blog post argues that the extension of blank node identifiers in JSON-LD and RDF for the purposes of identifying predicates and naming graphs is important. It is important because it simplifies the usage of both technologies for developers. The post also provides a less-optimal solution if the RDF Working Group does not allow blank node identifiers for predicates and graph names in RDF 1.1.

We need identifiers as humans to convey complex messages. Identifiers let us refer to a certain thing by naming it in a particular way. Not only do humans need identifiers, but our computers need identifiers to refer to data in order to perform computations. It is no exaggeration to say that our very civilization depends on identifiers to manage the complexity of our daily lives, so it is no surprise that people spend a great deal of time thinking about how to identify things. This is especially true when we talk about the people that are building the software infrastructure for the Web.

The Web has a very special identifier called the Uniform Resource Locator (URL). It is probably one of the best known identifiers in the world, mostly because everybody that has been on the Web has used one. URLs are great identifiers because they are very specific. When I give you a URL to put into your Web browser, such as the link to this blog post, I can be assured that when you put the URL into your browser, you will see what I see. URLs are globally scoped; they’re supposed to always take you to the same place.

There is another class of identifier on the Web that is not globally scoped and is only used within a document on the Web. In English, these identifiers are used when we refer to something as “that thing”, or “this widget”. We can really only use this sort of identifier within a particular context where the people participating in the conversation understand the context. Linguists call this concept deixis. “Thing” doesn’t always refer to the same subject, but based on the proper context, we can usually understand what is being identified. Our consciousness tags the “thing” that is being talked about with a tag of sorts and then refers to that thing using this pseudo-identifier. Most of this happens unconsciously (notice how your mind unconsciously tied the use of ‘this’ in this sentence to the correct concept?).

The take-away is that there are globally-scoped identifiers like URLs, and there are also locally-scoped identifiers, that require a context in order to understand what they refer to.

JSON and JSON-LD

In JSON, developers typically express data like this:

{
  "name": "Joe"
}

Note how that JSON object doesn’t have an identifier associated with it. JSON-LD creates a straight-forward way of giving that object an identifier:

{
  "@context": ...,
  "@id": "http://example.com/people/joe",
  "name": "Joe"
}

Both you and I can refer to that object using http://example.com/people/joe and be sure that we’re talking about the same thing. There are times when assigning a global identifier to every piece of data that we create is not desired. For example, it doesn’t make much sense to assign an identifier to a transient message that is a request to get a sensor reading. This is especially true if there are millions of these types of requests and we never want to refer to the request once it has been transmitted. This is why JSON-LD doesn’t force developers to assign an identifier to the objects that they express. The people that created the technology understand that not everything needs a global identifier.

Computers are less forgiving; they need identifiers for most everything, but a great deal of that complexity can be hidden from developers. When an identifier becomes necessary in order to perform computations upon the data, the computer can usually auto-generate an identifier for the data.

RDF, Graphs, and Blank Node Identifiers

The Resource Description Framework (RDF) primarily uses an identifier called the Internationalized Resource Identifier (IRI). Where URLs can typically only express links in Western languages, an IRI can express links in almost every language in use today including Japanese, Tamil, Russian and Mandarin. RDF also defines a special type of identifier called a blank node identifier. This identifier is auto-generated and is locally scoped to the document. It’s an advanced concept, but is one that is pretty useful when you start dealing with transient data, where creating a global identifier goes beyond the intended usage of the data. An RDF-compatible program will step in and create blank node identifiers on your behalf, but only when necessary.

Both JSON-LD and RDF have the concept of a Statement, Graph, and a Dataset. A Statement consists of a subject, predicate, and an object (for example: “Dave likes cookies”). A Graph is a collection of Statements (for example: Graph A contains all the things that Dave said and Graph B contains all the things that Mary said). A Dataset is a collection of Graphs (for example: Dataset Z contains all of the things Dave and Mary said yesterday).

In JSON-LD, at present, you can use a blank node identifier for subjects, predicates, objects, and graphs. In RDF, you can only use blank node identifiers for subjects and objects. There are people, such as myself, in the RDF WG that think this is a mistake. There are people that think it’s fine. There are people that think it’s the best compromise that can be made at the moment. There is a wide field of varying opinions strewn between the various extremes.

The end result is that the current state of affairs has put us into a position where we may have to remove blank node identifier support for predicates and graphs from JSON-LD, which comes across as a fairly arbitrary limitation to those not familiar with the inner guts of RDF. Don’t get me wrong, I feel it’s a fairly arbitrary limitation. There are those in the RDF WG that don’t think it is, and that may prevent JSON-LD from being able to use what I believe is a very useful construct.

Document-local Identifiers for Predicates

Why do we need blank node identifiers for predicates in JSON-LD? Let’s go back to the first example in JSON to see why:

{
  "name": "Joe"
}

The JSON above is expressing the following Statement: “There exists a thing whose name is Joe.”

The subject is “thing” (aka: a blank node) which is legal in both JSON-LD and RDF. The predicate is “name”, which doesn’t map to an IRI. This is fine as far as the JSON-LD data model is concerned because “name”, which is local to the document, can be mapped to a blank node. RDF cannot model “name” because it has no way of stating that the predicate is local to the document since it doesn’t support blank nodes for predicates. Since the predicate doesn’t map to an IRI, it can’t be modeled in RDF. Finally, “Joe” is a string used to express the object and that works in both JSON-LD and RDF.

JSON-LD supports the use of blank nodes for predicates because there are some predicates, like every key used in JSON, that are local to the document. RDF does not support the use of blank nodes for predicates and therefore cannot properly model JSON.

Document-local Identifiers for Graphs

Why do we need blank node identifiers for graphs in JSON-LD? Let’s go back again to the first example in JSON:

{
  "name": "Joe"
}

The container of this statement is a Graph. Another way of writing this in JSON-LD is this:

{
  "@context": ...,
  "@graph": {
    "name": "Joe"
  }
}

However, what happens when you have two graphs in JSON-LD, and neither one of them is the RDF default graph?

{
  "@context": ...,
  "@graph": [
    {
      "@graph": {
        "name": "Joe"
      }
    }, 
    {
      "@graph": {
        "name": "Susan"
      }
    }
  ]
}

In JSON-LD, at present, it is assumed that a blank node identifier may be used to name each graph above. Unfortunately, in RDF, the only thing that can be used to name a graph is an IRI, and a blank node identifier is not an IRI. This puts JSON-LD in an awkward position; either JSON-LD can:

  1. Require that developers name every graph with an IRI, which seems like a strange demand because developers don’t have to name all subjects and objects with an IRI, or
  2. JSON-LD can auto-generate a regular IRI for each predicate and graph name, which seems strange because blank node identifiers exist for this very purpose (not to mention this solution won’t work in all cases, more below), or
  3. JSON-LD can auto-generate a special IRI for each predicate and graph name, which would basically re-invent blank node identifiers.

The Problem

The problem surfaces when you try to convert a JSON-LD document to RDF. If the RDF Working Group doesn’t allow blank node identifiers for predicates and graphs, then what do you use to identify predicates and graphs that have blank node identifiers associated with them in the JSON-LD data model? This is a feature we do want to support because there are a number of important use cases that it enables. The use cases include:

  1. Blank node predicates allow JSON to be mapped directly to the JSON-LD and RDF data models.
  2. Blank node graph names allow developers to use graphs without explicitly naming them.
  3. Blank node graph names make the RDF Dataset Normalization algorithm simpler.
  4. Blank node graph names prevent the creation of a parallel mechanism to generate and manage blank node-like identifiers.

It’s easy to see the problem exposed when performing RDF Dataset Normalization, which we need to do in order to digitally sign information expressed in JSON-LD and RDF. The rest of this post will focus on this area, as it exposes the problems with not supporting blank node identifiers for predicates and graph names. In JSON-LD, the two-graph document above could be normalized to this NQuads (subject, predicate, object, graph) representation:

_:bnode0 _:name "Joe" _:graph1 .
_:bnode1 _:name "Susan" _:graph2 .

This is illegal in RDF since you can’t have a blank node identifier in the predicate or graph position. Even if we were to use an IRI in the predicate position, the problem (of not being able to normalize “un-labeled” JSON-LD graphs like the ones in the previous section) remains.

The Solutions

This section covers the proposed solutions to the problem, in order from least desirable to most desirable.

Don’t allow blank node identifiers for predicates and graph names

Doing this in JSON-LD ignores the point of contention. The same line of argumentation can be applied to RDF. The point is that by forcing developers to name graphs using IRIs, we’re forcing them to do something that they don’t have to do with subjects and objects. There is no technical reason that has been presented where the use of a blank node identifier in the predicate or graph position is unworkable. Telling developers that they must name graphs using IRIs will be surprising to them, because there is no reason that the software couldn’t just handle that case for them. Requiring developers to do things that a computer can handle for them automatically is anti-developer and will harm adoption in the long run.

Generate fragment identifiers for graph names

One solution is to generate fragment identifiers for graph names. This, coupled with the base IRI would allow the data to be expressed legally in NQuads:

_:bnode0 <http://example.com/base#name> "Joe" <http://example.com/base#graph1> .
_:bnode1 <http://example.com/base#name> "Susan" <http://example.com/base#graph2> .

The above is legal RDF. The approach is problematic when you don’t have a base IRI, such as when JSON-LD is used as a messaging protocol between two systems. In that use case, you end up with something like this:

_:bnode0 <#name> "Joe" <#graph1> .
_:bnode1 <#name> "Susan" <#graph2> .

RDF requires absolute IRIs and so the document above is illegal from an RDF perspective. The other down-side is that you have to keep track of all fragment identifiers in the output and make sure that you don’t pick fragment identifiers that are used elsewhere in the document. This is fairly easy to do, but now you’re in the position of tracking and renaming both blank node identifiers and fragment IDs. Even if this approach worked, you’d be re-inventing the blank node identifier. This approach is unworkable for systems like PaySwarm that use transient JSON-LD messages across a REST API; there is no base IRI in this use case.

Skolemize to create identifiers for graph names

Another approach is skolemization, which is just a fancy way of saying: generate a unique IRI for the blank node when expressing it as RDF. The output would look something like this:

_:bnode0 <http://blue.example.com/.well-known/genid/2938570348579834> "Joe" <http://blue.example.com/.well-known/genid/348570293572375> .
_:bnode1 <http://blue.example.com/.well-known/genid/2938570348579834> "Susan" <http://blue.example.com/.well-known/genid/49057394572309457> .

This would be just fine if there were only one application reading and consuming the data. However, when we are talking about RDF Dataset Normalization, there are cases where two applications must read and independently verify the representation of a particular IRI. One scenario that illustrates the problem fairly nicely is the blind verification scenario. In this scenario, two applications de-reference an IRI to fetch a JSON-LD document. Each application must perform RDF Dataset Normalization and generate a hash of that normalization to see if they retrieved the same data. Based on a strict reading of the skolemization rules, Application A would generate this:

_:bnode0 <http://blue.example.com/.well-known/genid/2938570348579834> "Joe" <http://blue.example.com/.well-known/genid/348570293572375> .
_:bnode1 <http://blue.example.com/.well-known/genid/2938570348579834> "Susan" <http://blue.example.com/.well-known/genid/49057394572309457> .

and Application B would generate this:

_:bnode0 <http://red.example.com/.well-known/genid/J8Sfei8f792Fd3> "Joe" <http://red.example.com/.well-known/genid/j28cY82Pa88> .
_:bnode1 <http://red.example.com/.well-known/genid/J8Sfei8f792Fd3> "Susan" <http://red.example.com/.well-known/genid/k83FyUuwo89DF> .

Note how the two graphs would never hash to the same value because the Skolem IRIs are completely different. The RDF Dataset Normalization algorithm would have no way of knowing which IRIs are blank node stand-ins and which ones are legitimate IRIs. You could say that publishers are required to assign the skolemized IRIs to the data they publish, but that ignores the point of contention, which is that you don’t want to force developers to create identifiers for things that they don’t care to identify. You could argue that the publishing system could generate these IRIs, but then you’re still creating a global identifier for something that is specifically meant to be a document-scoped identifier.

A more lax reading of the Skolemization language might allow one to create a special type of Skolem IRI that could be detected by the RDF Dataset Normalization algorithm. For example, let’s say that since JSON-LD is the one that is creating these IRIs before they go out to the RDF Dataset Normalization Algorithm, we use the tag IRI scheme. The output would look like this for Application A:

_:bnode0 <tag:w3.org,2013:dsid:345> "Joe" <tag:w3.org,2013:dsid:254> .
_:bnode1 <tag:w3.org,2013:dsid:345> "Susan" <tag:w3.org,2013:dsid:363> .

and this for Application B:

_:bnode0 <tag:w3.org,2013:dsid:a> "Joe" <tag:w3.org,2013:dsid:b> .
_:bnode1 <tag:w3.org,2013:dsid:a> "Susan" <tag:w3.org,2013:dsid:c> .

The solution still doesn’t work, but we could add another step to the RDF Dataset Normalization algorithm that would allow it to rename any IRI starting with tag:w3.org,2013:. Keep in mind that this is exactly the same thing that we do with blank nodes, and it’s effectively duplicating that functionality. The algorithm would allow us to generate something like this for both applications doing a blind verification.

_:bnode0 <tag:w3.org,2013:dsid:predicate-1> "Joe" <tag:w3.org,2013:dsid:graph-1> .
_:bnode1 <tag:w3.org,2013:dsid:predicate-1> "Susan" <tag:w3.org,2013:dsid:graph-2> .

This solution does violate one strong suggestion in the Skolemization section:

Systems wishing to do this should mint a new, globally unique IRI (a Skolem IRI) for each blank node so replaced.

The IRI generated is definitely not globally unique, as there will be many tag:w3.org,2013:dsid:graph-1s in the world, each associated with data that is completely different. This approach also goes against something else in Skolemization that states:

This transformation does not appreciably change the meaning of an RDF graph.

It’s true that using tag IRIs doesn’t change the meaning of the graph when you assume that the document will never find its way into a database. However, once you place the document in a database, it certainly creates the possibility of collisions in applications that are not aware of the special-ness of IRIs starting with tag:w3.org,2013:dsid:. The data is fine taken by itself, but a disaster when merged with other data. We would have to put a warning in some specification for systems to make sure to rename the incoming tag:w3.org,2013:dsid: IRIs to something that is unique to the storage subsystem. Keep in mind that this is exactly what is done when importing blank node identifiers into a storage subsystem. So, we’ve more-or-less re-invented blank node identifiers at this point.

Allow blank node identifiers for graph names

This leads us to the question of why not just extend RDF to allow blank node identifiers for predicates and graph names. Ideally, that’s what I would like to see happen in the future, as it places the least burden on developers and allows RDF to easily model JSON. The responses from the RDF WG are varied. These are all of the current arguments against this approach that I have heard:

There are other ways to solve the problem, like fragment identifiers and skolemization, than introducing blank nodes for predicates and graph names.

Fragment identifiers don’t work, as demonstrated above. There is really only one workable solution based on a very lax reading of skolemization, and as demonstrated above, even the best skolemization solution re-invents the concept of a blank node.

There are other use cases that are blocked by the introduction of blank node identifiers into the predicate and graph name position.

While this has been asserted, it is still unclear exactly what those use cases are.

Adding blank node identifiers for predicates and graph names will break legacy applications.

If blank nodes for predicates and graph names were illegal before, wouldn’t legacy applications reject that sort of input? The argument that there are bugs in legacy applications that make them not robust against this type of input is valid, but should that prevent the right solution from being adopted? There has been no technical reason put forward for why blank nodes for predicates or graph names cannot work, other than software bugs prevent it.

The PaySwarm work has chosen to model the data in a very strange way.

The people that have been working on RDFa, JSON-LD, and the Web Payments specifications for the past 5 years have spent a great deal of time attempting to model the data in the simplest way possible, and in a way that is accessible to developers that aren’t familiar with RDF. Whether or not it may seem strange is arguable since this response is usually levied by people not familiar with the Web Payments work. This blog post outlines a variety of use cases where the use of a blank node for predicates and graph naming is necessary. Stating that the use cases are invalid ignores the point of contention.

If we allow blank nodes to be used when naming graphs, then those blank nodes should denote the graph.

At present, RDF states that a graph named using an IRI may denote the graph or it may not denote the graph. This is a fancy way of saying that the IRI that is used for the graph name may be an identifier for something completely different (like a person), but de-referencing the IRI over the Web results in a graph about cars. I personally think that is a very dangerous concept to formalize in RDF, but there are others that have strong opinions to the contrary. The chances of this being changed in RDF 1.1 is next to none.

Others have argued that while that may be the case for IRIs, it doesn’t have to be the case for blank nodes that are used to name graphs. In this case, we can just state that the blank node denotes the graph because it couldn’t possibly be used for anything else since the identifier is local to the document. This makes a great deal of sense, but it is different from how an IRI is used to name a graph and that difference is concerning to a number of people in the RDF Working Group.

However, that is not an argument to disallow blank nodes from being used for predicates and graph names. The group could still allow blank nodes to be used for this purpose while stating that they may or may not be used to denote the graph.

The RDF Working Group does not have enough time left in its charter to make a change this big.

While this may be true, not making a decision on this is causing more work for the people working on JSON-LD and RDF Dataset Normalization. Having the tag:w3.org,2013:dsid: identifier scheme is also going to make many RDF-based applications more complex in the long run, resulting in a great deal more work than just allowing blank nodes for predicates and graph names.

Conclusion

I have a feeling that the RDF Working Group is not going to do the right thing on this one due to the time pressure of completing the work that they’ve taken on. The group has already requested, and has been granted, a charter extension. Another extension is highly unlikely, so the group wants to get everything wrapped up. This discussion could take several weeks to settle. That said, the solution that will most likely be adopted (a special tag-based skolem IRI) will cause months of work for people living in the JSON-LD and RDF ecosystem. The best solution in the long run would be to solve this problem now.

If blank node identifiers for predicates and graphs are rejected, here is the proposal that I think will move us forward while causing an acceptable amount of damage down the road:

  1. JSON-LD continues to support blank node identifiers for use as predicates and graph names.
  2. When converting JSON-LD to RDF, a special, relabelable IRI prefix of the form tag:w3.org,2013:dsid: will be used for blank nodes in the predicate and graph name positions.

Thanks to Dave Longley for proofing this blog post and providing various corrections.

DRM in HTML5

A few days ago, a proposal was put forward in the HTML Working Group (HTML WG) by Microsoft, Netflix, and Google to take DRM in HTML5 to the next stage of standardization at W3C. This triggered another uproar about the morality and ethics of building DRM into the Web. There are good arguments about morality and ethics on both sides of the debate, but ultimately the HTML WG will decide whether or not to pursue the specification based on technical merit. I (@manusporny) am a member of the HTML WG. I was also the founder of a start-up that focused on building a legal, peer-to-peer content distribution network for music and movies. It employed DRM much like the current DRM in HTML5 proposal. During the course of 8 years of technical development, we had talks with many of the major record labels. I have first-hand knowledge of the problem and of what it takes to build a technical solution to address it.

TL;DR: The Encrypted Media Extensions (DRM in HTML5) specification does not solve the problem the authors are attempting to solve, which is the protection of content from opportunistic or professional piracy. The HTML WG should not publish First Public Working Drafts that do not effectively address the primary goal of a specification.

The Problem

The fundamental problem that the Encrypted Media Extensions (EME) specification seems to be attempting to solve is to find a way to reduce piracy (since eliminating piracy on the Web is an impossible problem to solve). This is a noble goal as there are many content creators and publishers that are directly impacted by piracy. These are not faceless corporations, they are people with families that depend on the income from their creations. It is with this thought in mind that I reviewed the specification on a technical basis to determine if it would lead to a reduction in piracy.

Review Notes for Encrypted Media Extensions (EME)

Introduction

The EME specification does not define a DRM scheme; rather, it explains the architecture for a DRM plug-in mechanism. This will lead to plug-in proliferation on the Web. Plugins are detrimental to inter-operability because it is inevitable that the DRM plugin vendors will not be able to support all platforms at all times. So, some people will be able to view content and others will not.

A simple example of the problem is Silverlight by Microsoft. Take a look at the Plugin details for Silverlight, specifically, click on the “System Requirements” tab. Silverlight is Microsoft’s creation. Microsoft is a HUGE corporation with very deep pockets. They can and have thrown a great deal of money at solving very hard problems. Even Microsoft does not support their flagship plugin in Internet Explorer 8 on older versions of their own operating system, or in the latest version of Chrome on certain versions of Windows and Mac. If Microsoft can’t make their flagship Web plugin work across all major Operating Systems today, what chance does a much smaller DRM plugin company have?

The purpose of a standard is to increase inter-operability across all platforms. It has been demonstrated that plug-ins, on the whole, harm inter-operability in the long run and often create many security vulnerabilities. The one shining exception is Flash, but we should not mistake an exception for the rule. Also note that Flash is backed by Adobe, a gigantic multi-national corporation with very deep pockets.

1.1 Goals

The goals section does not state the actual purpose of the specification. It states meta-purposes like: “Support a range of content security models, including software and hardware-based models” and “Support a wide range of use cases.” While those are sub-goals, the primary goal isn’t stated once in the Goals section. The only rational primary goal is to reduce the amount of opportunistic piracy on the Web. Links to piracy data collected over the last decade could help make the case that this is worth doing.

1.2.1. Content Decryption Module (CDM)

When we were working on our DRM system, we took almost exactly the same approach that the EME specification does. We had a plug-in system that allowed different DRM modules to be plugged into the system. We assumed that each DRM scheme had a shelf-life of about 2-3 months before it was defeated, so our system would rotate the DRM modules every 3 months. We had plans to create genetic algorithms that would encrypt and watermark data into the file stream and mutate the encryption mechanism every couple of months to keep the pirates busy. It was a very complicated system to keep working because one slip up in the DRM module meant that people couldn’t view the content they had purchased. We did get the system working in the end, but it was a nightmare to make sure that the DRM modules to decrypt the information were rotated often enough to be effective while ensuring that they worked across all platforms.

Having first-hand knowledge of how such a system works, I think it’s a pretty terrible idea for the Web because it takes a great deal of competence and coordination to pull something like this off. I would expect the larger Content Protection companies to not have an issue with this. The smaller Content Protection companies, however, will inevitably have issues with ensuring that their DRM modules work across all platforms.

The bulk of the specification

The bulk of the specification is what you would expect from a system like this, so I won’t go into the gory details. There were two major technical concerns I had while reading through the implementation notes.

The first is that key retrieval is handled by JavaScript code, which means that anybody using a browser could copy the key data. This means that if a key is sent in the clear, the likelihood that the DRM system could be compromised goes up considerably because the person that is pirating the content knows the details necessary to store and decrypt the content.

If the goal is to reduce opportunistic piracy, all keys should be encrypted so that snooping by the browser doesn’t result in the system being compromised. Otherwise, all you would need to do is install a plugin that shares all clear-text keys with something like Mega. Pirates could use those keys to then decrypt byte-streams that do not mutate between downloads. To my knowledge, most DRM’ed media delivery does not encrypt content on a per-download basis. So, the spec needs to make it very clear that opaque keys MUST be used when delivering media keys.
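
To see why this matters, here is a rough sketch of the key-delivery flow as it appears to page script. The sketch uses the shape of the current EME interfaces rather than the exact draft text reviewed here, and the license server URL and function name are made up; the point it illustrates is unchanged – the license (and, with “org.w3.clearkey”, the raw content keys) passes through ordinary page JavaScript, where a plugin or injected script can copy it.

// A rough sketch of EME key delivery from the page's point of view.
// API shape follows the current EME interfaces (not the 2013 draft verbatim);
// the license server URL is hypothetical.
async function setupPlayback(video) {
  const access = await navigator.requestMediaKeySystemAccess('org.w3.clearkey', [{
    initDataTypes: ['cenc'],
    videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }]
  }]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  video.addEventListener('encrypted', (event) => {
    const session = mediaKeys.createSession();
    session.addEventListener('message', async (msg) => {
      // The page fetches the license itself. With clearkey, the response is a
      // JWK set containing the unencrypted content keys -- visible right here
      // in page JavaScript, and therefore to any plugin or injected script.
      const response = await fetch('https://license.example.com/request', {
        method: 'POST',
        body: msg.message
      });
      const license = await response.arrayBuffer();
      await session.update(license); // clear keys pass through script on their way to the CDM
    });
    session.generateRequest(event.initDataType, event.initData);
  });
}

setupPlayback(document.querySelector('video'));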

One of the DRM systems we built, which became the primary way we did things, would actually re-encrypt the byte stream for every download. So even if a key was compromised, you couldn’t use the key to decrypt any other downloads. This was massively computationally expensive, but since we were running a peer-to-peer network, the processing was pushed out to the people downloading stuff in the network and not our servers. Sharing of keys was not possible in our DRM system, so we could send the decryption keys in the clear. I doubt many of the Content Protection Networks will take this approach as it would massively spike the cost of delivering content.

6. Simple Decryption

The “org.w3.clearkey” Key System indicates a plain-text clear (unencrypted) key will be used to decrypt the source. No additional client-side content protection is required.

Wow, what a fantastically bad idea.

  1. This sends the decryption key in the clear. This key can be captured by any Web browser plugin. That plugin can then share the decryption key and the byte stream with the world.
  2. It duplicates the purpose of Transport Layer Security (TLS).
  3. It doesn’t protect anything while adding a very complex way of shipping an encrypted byte stream from a Web server to a Web browser.

So. Bad. Seriously, there is nothing secure about this mechanism. It should be removed from the specification.

9.1. Use Cases: “user is not an adversary”

This is not a technical issue, but I thought it would be important to point it out. This “user is not an adversary” text can be found in the first question about use cases. It insinuates that people that listen to radio and watch movies online are potential adversaries. As a business owner, I think that’s a terrible way to frame your customers.

Thinking of the people that are using the technology that you’re specifying as “adversaries” is also largely wrong. 99.999% of people using DRM-based systems to view content are doing it legally. The folks that are pirating content are not sitting down and viewing the DRM stream; they have acquired a non-DRM stream from somewhere else, like Mega or The Pirate Bay, and are watching that. This language is unnecessary and should be removed from the specification.

Conclusion

There are some fairly large security issues with the text of the current specification. Those can be fixed.

The real goal of this specification is to create a framework that will reduce content piracy. The specification has not put forward any mechanism that demonstrates that it would achieve this goal.

Here’s the problem with EME – it’s easy to defeat. In the very worst case, there exist piracy rigs that allow you to point an HD video camera at an HD television and record the video and audio without any sort of DRM. That’s the DRM-free copy that will end up on Mega or The Pirate Bay. In practice, no DRM system has survived for more than a couple of years.

Content creators, if your content is popular, EME will not protect your content against a content pirate. Content publishers, your popular intellectual property will be no safer wrapped in anything that this specification can provide.

The proposal does not achieve the goal of the specification; it is not ready for First Public Working Draft publication via the HTML Working Group.

Aaron Swartz, PaySwarm, and Academic Journals

For those of you that haven’t heard yet, Aaron Swartz took his own life two days ago. Larry Lessig has a follow-up on one of the reasons he thinks led to his suicide (the threat of 50 years in jail over the JSTOR case).

I didn’t know Aaron at all. A large number of people that I deeply respect did, and have written about his life with great admiration. I, like most of you that have read the news, have done so while brewing a cauldron of mixed emotions. Saddened that someone that had achieved so much good in their life is no longer in this world. Angry that Aaron chose this ending. Sickened that this is the second recent suicide, Ilya’s being the first, involving a young technologist trying to make the world a better place for all of us. Afraid that other technologists like Aaron and Ilya will choose this path over persisting in their noble causes. Helpless. Helpless because this moment will pass, just like Ilya’s did, with no great change in the way our society deals with mental illness. With no great change, in what Aaron was fighting for, having been realized.

Nobody likes feeling helpless. I can’t mourn Aaron because I didn’t know him. I can mourn the idea of Aaron, of the things he stood for. While reading about what he stood for, several disconnected ideas kept rattling around in the back of my head:

  1. We’ve hit a point of ridiculousness in our society where people at HSBC who knowingly laundered money for drug cartels get away with it, while people like Aaron are labeled felons and face upwards of 50 years in jail for “stealing” academic articles. This, even after the publisher of said academic articles dropped the charges. MIT never dropped their charges.
  2. MIT should make it clear that he was not a felon or a criminal. MIT should posthumously pardon Aaron and commend him for his life’s work.
  3. The way we do peer-review and publish scientific research has to change.
  4. I want to stop reading about all of this, it’s heartbreaking. I want to do something about it – make something positive out of this mess.

Ideas, Floating

I was catching up on news this morning when the following floated past on Twitter:

clifflampe: It seems to me that the best way for we academics to honor Aaron Swartz’s memory is to frigging finally figure out open access publishing.

1Copenut: @clifflampe And finally implement a micropayment system like @manusporny’s #payswarm. I don’t want the paper-but I’ll pay for the stories.

1Copenut: @manusporny These new developments with #payswarm are a great advance. Is it workable with other backends like #Middleman or #Sinatra?

This was interesting because we have been talking about how PaySwarm could be applied to academic publishing for a while now. All the discussions to this point have been internal; we didn’t know if anybody would make the connection between the infrastructure that PaySwarm provides and how it could be applied to academic journals. This is up on our ideas board as a potential area that PaySwarm could be applied to:

  • Payswarm for peer-reviewed, academic publishing
    • Use Payswarm identity mechanism to establish trusted reviewer and author identities for peer review
    • Use micropayment mechanism to fund research
    • Enable university-based group-accounts for purchasing articles, or refunding researcher purchases

Journals as Necessary Evils

For those in academia, journals are often viewed as a necessary evil. They cost a fortune to subscribe to, farm out most of their work to academics that do it for free, and keep an iron grip on the scientific publication process. Most academics that I speak with would do away with journal organizations in a heartbeat if there were a viable alternative. Most of the problem is political, which is why we haven’t felt compelled to pursue fixing it. Political problems often need a groundswell of support and a number of champions that are working inside the community. I think the groundswell is almost here. I don’t know who the set of academic champions are that will be the ones to push this forward. Additionally, if nobody takes the initiative to build such a system, things won’t change.

Here’s what we (Digital Bazaar) have been thinking. To fix the problem, you need at least the following core features:

  • Web-scale identity mechanisms – so that you can identify reviewers and authors for the peer-review process regardless of which site is publishing or reviewing a paper.
  • Decentralized solution – so that universities and researchers drive the process – not the publishers of journals.
  • Some form of remuneration system – you want to reward researchers with heavily cited papers, but in a way that makes it very hard to game the system.

Scientific Remuneration

PaySwarm could be used to implement each of these core features. At its core, PaySwarm is a decentralized payment mechanism for the Web. It also has a solid, decentralized identity mechanism that does not violate your privacy. There is a demo that shows how it can be applied to WordPress blogs: just an abstract is published, and if the reader wants to see more of the article, they can pay a small fee to read it. It doesn’t take a big stretch of the imagination to replace “blog article” with “research paper”. The hope is that researchers would set access prices on articles such that any purchase of access to a research paper would go directly toward funding their current research. This would empower universities and researchers with an additional revenue stream while reducing the grip that scientific publishers currently have on our higher-education institutions.

A Decentralized Peer-review Process

Remuneration is just one aspect of the problem. Arguably, it is the lesser of the problems in academic publishing. The biggest technical problem is how you do peer review on a global, distributed scale. Quite obviously, you need a solid identity system that can identify scientists over the long term. You need to understand a scientist’s body of work and how respected their research is in their field. You also need a review system that is capable of pairing scientists and papers in need of review. PaySwarm has a strong identity system in place, using the Web as the identification mechanism. Here is the PaySwarm identity that I use for development: https://dev.payswarm.com/i/manu. Clearly, paper publishing systems wouldn’t expose that identity URL to people using the system, but I include it to show what a Web-scale identifier looks like.

Web-scale Identity

If you go to that identity URL, you will see two sets of information: my public financial accounts and my digital signature keys. A PaySwarm Authority can annotate this identity with even more information, like whether or not an e-mail address has been verified against the identity. Is there a verified cellphone on record for the identity? Is there a verified driver’s license on record for the identity? What about a Twitter handle? A Google+ handle? All of these pieces of information can be added and verified by the PaySwarm Authority in order to build an identity that others can trust on the Web.

What sorts of pieces of information need to be added to a PaySwarm identity to trust its use for academic publishing? Perhaps a list of articles published by the identity? Review comments for all other papers that have been reviewed by the identity? Areas of research that others have certified the identity as an expert in? This is pretty basic Web-of-trust stuff, but it’s important to understand that PaySwarm has this sort of stuff baked into the core of the design.

The Process

Leveraging identity to make decentralized peer review work is the goal, and here is how it would work from a researcher’s perspective:

  1. A researcher would get a PaySwarm identity from any PaySwarm Authority; there is no cost associated with getting such an identity. This sub-system is already implemented in PaySwarm.
  2. A researcher would publish an abstract of their paper in a Linked Data format such as RDFa. This abstract would identify the authors of the paper and some other basic information about the paper. It would also carry a digital signature over that information, made using the PaySwarm identity acquired in the previous step. The researcher would set the cost to access the full article using any PaySwarm-compatible system. All of this is already implemented in PaySwarm (a rough sketch of this step follows the list).
  3. A paper publishing system would be used to request a review among academic peers. Those peers would review the paper and publish digital signatures on review comments, possibly with a notice that the paper is ready to be published. This sub-system is fairly trivial to implement and would mirror the current review process with the important distinction that it would not be centralized at journal publications.
  4. Once a pre-set limit on the number of positive reviews has been met, the paper publishing system would place its stamp of approval on the paper. Note that different paper publishing systems may have different metrics just as journals have different metrics today. One benefit to doing it this way is that you don’t need a paper publishing system to put its stamp of approval on a paper at all. If you really wanted to, you could write the software to calculate whether or not the paper has gotten the appropriate amount of review because all of the information is on the Web by default. This part of the system would be fairly trivial to write once the metrics were known. It may take a year or two to get the correct set of metrics in place, but it’s not rocket science and it doesn’t need to be perfect before systems such as this are used to publish papers.
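
As a rough illustration of step 2 – not the actual PaySwarm vocabulary or signature suite – the sketch below describes a paper in JSON-LD, canonicalizes the description with the current jsonld.js API, and signs the result with an RSA key using Node’s crypto module. The “ex:” property names, the key file path, and the price are placeholders; a real PaySwarm asset carries considerably more structure.

// A rough, hypothetical sketch of step 2: describe a paper, canonicalize the
// description, and digitally sign it. This is NOT the PaySwarm signature
// format; property names, file paths, and prices are placeholders.
const crypto = require('crypto');
const fs = require('fs');
const jsonld = require('jsonld'); // jsonld.js

async function signAbstract() {
  const paper = {
    '@context': { 'ex': 'http://example.com/vocab#' },     // placeholder vocabulary
    '@id': 'http://university.example.com/papers/42',
    'ex:title': 'Decentralized Peer Review on the Web',
    'ex:abstract': 'We describe a Web-scale identity and review process...',
    'ex:author': { '@id': 'https://dev.payswarm.com/i/manu' },
    'ex:accessPrice': '0.50'                                // set by the researcher
  };

  // Canonicalize so the signature does not depend on key order or whitespace.
  const canonical = await jsonld.canonize(paper, {
    algorithm: 'URDNA2015',
    format: 'application/n-quads'
  });

  // Sign the canonical form with the researcher's private key.
  const privateKeyPem = fs.readFileSync('researcher-private-key.pem', 'utf8');
  const signature = crypto.createSign('RSA-SHA256')
    .update(canonical)
    .sign(privateKeyPem, 'base64');

  return Object.assign({}, paper, { 'ex:signatureValue': signature });
}

signAbstract().then((signed) => console.log(JSON.stringify(signed, null, 2)));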

From a reviewer’s perspective, it would work like so:

  1. You are asked to review papers by your peers once you have an acceptable body of published work. All of your work can be verified because it is tied to your PaySwarm identity. All review comments can be verified as they are tied to other PaySwarm identities. This part is fairly trivial to implement, most of the work is already done for PaySwarm.
  2. Once you review a paper, you digitally sign your comments on the paper. If it is a good paper, you also include a claim that it is ready for broad publication. Again, technically simple to implement.
  3. Your reputation builds as you review more papers. The way that reputation is calculated is outside of the scope of this blog post mainly because it would need a great deal of input from academics around the world. Reputation is something that can be calculated, but many will argue about the algorithm and I would expect this to oscillate throughout the years as the system grows. In the end, there will probably be multiple reputation algorithms, not just one. All that matters is that people trust the reputation algorithms.

Freedom to Research and Publish

The end-goal is to build a system that empowers researchers and research institutions, is far more transparent than the current peer-reviewed publishing system, and remunerates the people doing the work more directly. You will also note that at no point does a traditional journal enter the picture to give you a stamp of approval and charge you a fee for publishing your paper. Researchers are in control of the costs at all stages. As I’ve said above, the hard part isn’t the technical nature of the project, it’s the political nature of it. I don’t know if this is enough of a pain-point among academics to actually start doing something about it today. I know some are, but I don’t know if many would use such a system over the draw of publications like Nature, PLOS, Molecular Genetics and Genomics, and Planta. Quite obviously, what I’ve proposed above isn’t a complete road map. There are issues and details that would need to be hammered out. However, I don’t understand why a system like this doesn’t already exist, so I implore the academic community to explain why what I’ve laid out above hasn’t been done yet.

It’s obvious that a system like this would be good for the world. Building such a system might have reduced the possibility of losing someone like Aaron in the way that we did. He was certainly fighting for something like it. Talking about it makes me feel a bit less helpless than I did yesterday. Maybe making something good out of this mess will help some of you out there as well. If others offer to help, we can start building it.

So how about it researchers of the world, would you publish all of your research through such a system?

Objection to Microdata Candidate Recommendation

Full disclosure: I’m the current chair of the standards group at the World Wide Web Consortium that created the newest version of RDFa, editor of the HTML5+RDFa 1.1 and RDFa Lite 1.1 specifications, and I’m also a member of the HTML Working Group.

Edit: 2012-12-01 – Updated the article to rephrase some things, and include rationale and counter-arguments at the bottom in preparation for the HTML WG poll on the matter.

The HTML Working Group at the W3C is currently trying to decide if they should transition the Microdata specification to the next stage in the standardization process. There has been a call for consensus to transition the spec to the Candidate Recommendation stage. The problem is that we already have a set of specifications – official W3C Recommendations – that do what Microdata does and more. RDFa 1.1 became an official W3C Recommendation last summer. From a standards perspective, this is a mistake and sends a confused signal to Web developers. Officially supporting two specifications that do almost exactly the same thing in almost exactly the same way is, ultimately, a failure to standardize.

The fact that RDFa already does what Microdata does has been elaborated upon before:

Mythical Differences: RDFa Lite vs. Microdata
An Uber-comparison of RDFa, Microdata, and Microformats

Here’s the problem in a nutshell: the W3C is thinking of ratifying two separately developed specifications that accomplish the same thing in basically the same way. The functionality of RDFa, which is already a W3C Recommendation, overlaps Microdata by a large margin. In fact, RDFa Lite 1.1 was developed as a drop-in replacement for Microdata. The full version of RDFa can also do a number of things that Microdata cannot, such as datatyping, associating more than one type per object, embeddability in languages other than HTML, and the ability to easily publish and mix vocabularies.

Microdata would have easily been dead in the water had it not been for two simple facts: 1) The editor of the specification works at Google, and 2) Google pushed Microdata as the markup language for schema.org before also accepting RDFa markup. The first enabled Google and the editor to work on schema.org without signalling to the public that it was creating a competitor to Facebook’s Open Graph Protocol. The second gave Microdata enough of a jump start to establish a foothold for schema.org markup. There have been a number of studies that show that Microdata’s sole use case (99% of Microdata markup) is for the markup of schema.org terms. Microdata is not widely used outside of that context; we now have data to back up what we had predicted would happen when schema.org made their initial announcement of Microdata-only support. Note that schema.org now supports both RDFa and Microdata.

It is typically a bad idea to have two formats published by the same organization that do the same thing. It leads to Web developer confusion about which format to use. One of the goals of Web standards is to reduce, or preferably eliminate, the confusion surrounding the correct technology decision to make. The HTML Working Group and the W3C are failing miserably on this front. There is more confusion today about picking Microdata or RDFa precisely because they accomplish the same thing in effectively the same way. The only reason both exist is political.

If we step back and look at the technical arguments, there is no compelling reason that Microdata should be a W3C Recommendation. There is no compelling reason to have two specifications that do the same thing in basically the same way. Therefore, as a member of the HTML Working Group (not as a chair or editor of RDFa) I object to the publication of Microdata as a Candidate Recommendation.

Note that this is not a W3C formal objection. This is an informal objection to publish Microdata along the Recommendation track. This objection will not become an official W3C formal objection if the HTML Working Group holds a poll to gather consensus around whether Microdata should proceed along the Recommendation publication track. I believe the publication of a W3C Note will continue to allow Google to support Microdata in schema.org, but will hopefully correct the confused message that the W3C has been sending to Web developers regarding RDFa and Microdata. We don’t need two specifications that do almost exactly the same thing.

The message sent by the W3C needs to be very clear: There is one recommendation for doing structured data markup in HTML. That recommendation is RDFa. It addresses all of the use cases that have been put forth by the general Web community, and it’s ready for broad adoption and implementation today.

If you agree with this blog post, make sure to let the HTML Working Group know that you do not think that the W3C should ratify two specifications that do almost exactly the same thing in almost exactly the same way. Now is the time to speak up!

Summary of Facts and Arguments

Below is a summary of arguments presented as a basis for publishing Microdata along the W3C Note track:

  1. RDFa 1.1 is already a ratified Web standard as of June 7th 2012 and absorbed almost every Microdata feature before it became official. If the majority of the differences between RDFa and Microdata boil down to different attribute names (property vs. itemprop), then the two solutions have effectively converged on syntax and W3C should not ratify two solutions that do effectively the same thing in almost exactly the same way.
  2. RDFa is supported by all of the major search crawlers, including Google (and schema.org), Microsoft, Yahoo!, Yandex, and Facebook. Microdata is not supported by Facebook.
  3. RDFa Lite 1.1 is feature-equivalent to Microdata. Over 99% of Microdata markup can be expressed easily in RDFa Lite 1.1. Converting from Microdata to RDFa Lite is as simple as a search and replace of the Microdata attributes with RDFa Lite attributes. Conversely, Microdata does not support a number of the more advanced RDFa features, like being able to tell the difference between feet and meters.
  4. You can mix vocabularies with RDFa Lite 1.1, supporting both schema.org and Facebook’s Open Graph Protocol (OGP) using a single markup language. You don’t have to learn Microdata for schema.org and RDFa for Facebook – just use RDFa for both.
  5. The creator of the Microdata specification doesn’t like Microdata. When people are not passionate about the solutions that they create, the desire to work on those solutions and continue improving them is muted. The RDFa community is passionate about the technology that they have created together and has strived to make it better since the standardization of RDFa 1.0 back in 2008.
  6. RDFa Lite 1.1 is fully upward-compatible with RDFa 1.1, allowing you to seamlessly migrate to a more feature-rich language as your Linked Data needs grow. Microdata does not support any of the more advanced features provided by RDFa 1.1.
  7. RDFa deployment is broader than Microdata. RDFa deployment continues to grow at a rapid pace.
  8. The economic damage generated by publishing both RDFa and Microdata along the Recommendation track should not be underestimated. W3C should try to provide clear direction in an attempt to reduce the economic waste that a “let the market sort it out among two nearly identical solutions” strategy will generate. At some point, the market will figure out that both solutions are nearly identical, but only after publishing and building massive amounts of content and tooling for both.
  9. The W3C Technical Architecture Group (TAG), which is responsible for ensuring that the core architecture of the Web is sound, has raised their concern about the publication of both Microdata and RDFa as recommendations. After the W3C TAG raised their concerns, the RDFa Working Group created RDFa Lite 1.1 to be a near feature-equivalent replacement for Microdata that was also backwards-compatible with RDFa 1.0.
  10. Publishing a standard that does almost exactly the same thing as an existing standard in almost exactly the same way is a failure to standardize.

Counter-arguments and Rebuttals

[This is a] classic case of monopolistic anti-competitive protectionism.

No, this is an objection to publishing two specifications that do almost exactly the same thing in almost exactly the same way along the W3C Recommendation publication track. Protectionism would have asked that all work on Microdata be stopped and the work scuttled. The proposed resolution does not block anybody from using Microdata, nor does it try to stop or block the Microdata work from happening in the HTML WG. The objection asks that the W3C decide what the best path forward for Web developers is based on a fairly complicated set of predicted outcomes. This is not an easy decision. The objection is intended to ensure that the HTML Working Group has this discussion before we proceed to Candidate Recommendation with Microdata.

<manu1> I'd like the W3C to work as well, and I think publishing two specs that accomplish basically 
        the same thing in basically the same way shows breakage.
<annevk> Bit late for that. XDM vs DOM, XPath vs Selectors, XSL-FO vs CSS, XSLT vs XQuery, 
         XQuery vs XQueryX, RDF/XML vs Turtle, XForms vs Web Forms 2.0, 
         XHTML 1.0 vs HTML 4.01, XML 1.0 4th Edition vs XML 1.0 5th Edition, 
         XML 1.0 vs XML 1.1, etc.

[link to full conversation]

While W3C does have a history of publishing competing specifications, there have been features in each competing specification that were compelling enough to warrant the publication of both standards. For example, XHTML 1.0 provided a standard set of rules for validating documents that was aligned with XML, as well as a decentralized extension mechanism, neither of which HTML 4.01 provided. Those two major features were viewed as compelling enough to publish both specifications as Recommendations via W3C.

For authors, the differences between RDFa and Microdata are so small that, for 99% of documents in the wild, you can convert a Microdata document to an RDFa Lite 1.1 document with a simple search and replace of attribute names. That demonstrates that the syntaxes for both languages are different only in the names of the HTML attributes, and that does not seem like a very compelling reason to publish both specifications as Recommendations.

Microdata’s processing algorithm is vastly simpler, which makes the data extracted more reliable and, when something does go wrong, makes it easier for 1) users to debug their own data, and 2) easier for me to debug it if they can’t figure it out on their own.

Microdata’s processing algorithm is simpler primarily because the language supports far fewer features than RDFa.

The complexity of implementing a processor has little bearing on how easy it is for developers to author documents. For example, XHTML 1.0 had a simpler processing model, which made the extracted data more reliable, and when something went wrong it was easier to debug. However, HTML5 supports more use cases and recovers from errors where it can, which has made it more popular with Web developers in the long run.

Additionally, authors of Microdata and RDFa should be using tools like RDFa Play to debug their markup. This is true for any Web technology. We debug our HTML, JavaScript, and CSS by loading it into a browser and bringing up the debugging tools. This is no different for Microdata and RDFa. If you want to make sure your markup does what you want, verify it with a tool rather than trying to memorize the processing rules and run them through your head.

For what it is worth, I personally think RDFa is generally a technically better solution. But as Marcos says, “so what”? Our job at W3C is to make standards for the technology the market decides to use.

If we think one of these technologies is a technically better solution than the other, we should signal that realization at some level. The most basic thing we could do is make one an official Recommendation and the other a Note. I also agree that our job at W3C is to make standards for the technology the market decides to use, but clearly this particular case isn’t that cut-and-dried. In the beginning, schema.org only supported Microdata, and since authors didn’t want to risk not showing up in the search engines, they used Microdata. This forced the market to go in one direction.

This discussion would be in a different place had Google kept the playing field level. That is not to say that Google didn’t have good reasons for making the decisions that they did at the time, but those reasons influenced the development of RDFa, and RDFa Lite 1.1 was the result. The differences between Microdata and RDFa have been removed and a new question is in front of us: given two almost identical technologies, should the W3C publish two specifications that do almost exactly the same thing in almost exactly the same way?

… the [HTML] Working Group explicitly decided not to pick a winner between HTML Microdata and HTML+RDFa

The question before the HTML WG at the time was whether or not to split Microdata out of the HTML5 specification. The HTML Working Group did not discuss whether the publishing track for the Microdata document should be the W3C Note track or the W3C Recommendation track. At the time the decision was made, RDFa Lite 1.1 did not exist and was not a W3C Recommendation, nor did the RDFa and Microdata functionality overlap as greatly as it does now. Additionally, the HTML WG decision at that time states the following under the “Revisiting the issue” section:

“If Microdata and RDFa converge in syntax…”

Microdata and RDFa have effectively converged in syntax. Since Microdata can be interpreted as RDFa through a simple search-and-replace of attributes, the two languages now differ in syntax only in their attribute names. The proposal is not to have work on Microdata stopped. Let work on Microdata proceed in this group, but let it proceed on the W3C Note publication track.

Closing Statements

I felt uneasy raising this issue because it’s a touchy and painful subject for everyone involved. Even if the discussion is painful, it is a healthy one for a standardization body to have from time to time. What I wanted was for the HTML Working Group to have this discussion. If the upcoming poll finds that the consensus of the HTML Working Group is to continue with the Microdata specification along the Recommendation track, I will not pursue a W3C Formal Objection. I will respect whatever decision the HTML Working Group makes as I trust the Chairs of that group, the process that they’ve put in place, and the aggregate opinion of the members in that group. After all, that is how the standardization process is supposed to work and I’m thankful to be a part of it.

The Problem with RDF and Nuclear Power

Full disclosure: I am the chair of the RDFa Working Group, the JSON-LD Community Group, a member of the RDF Working Group, as well as other Semantic Web initiatives. I believe in this stuff, but am critical about the path we’ve been taking for a while now.

The Resource Description Framework (a model for publishing data on the Web) has this horrible public perception akin to how many people in the USA view nuclear power. The coal industry campaigned quite aggressively to implant the notion that nuclear power was not as safe as coal. Couple this public misinformation campaign with a few nuclear-power-related catastrophes and it is no surprise that the current public perception toward nuclear power can be summarized as: “Not in my back yard”. Nevermind that, per terawatt, nuclear power generation has killed far fewer people since its inception than coal. Nevermind that it is one of the more viable power sources if we gaze hundreds of years into Earth’s future, especially with the recent renewed interest in Liquid Fluoride Thorium Reactors. When we look toward the future, the path is clear, but public perception is preventing us from proceeding down that path at the rate that we need to in order to prevent more damage to the Earth.

RDF shares a number of similarities with nuclear power. RDF is one of the best data modeling mechanisms that humanity has created. Looking into the future, there is no equally powerful, viable alternative. So, why has progress been slow on this very exciting technology? There was no public misinformation campaign, so where did this negative view of RDF come from?

In short, RDF/XML was the Semantic Web’s Three Mile Island incident. When it was released, developers confused RDF/XML (bad) with the RDF data model (good). There weren’t enough people and time to counteract the negative press that RDF was receiving as a result of RDF/XML, and thus we are where we are today because of this negative perception of RDF. Even Wikipedia’s page on the matter seems to imply that RDF/XML is RDF. Some purveyors of RDF think that the public perception problem isn’t that bad. I think that when developers hear RDF, they think: “Not in my back yard”.

The solution to this predicament: Stop mentioning RDF and the Semantic Web. Focus on tools for developers. Do more dogfooding.

To explain why we should adopt this strategy, we can look to Tesla for inspiration. Elon Musk, founder of PayPal and now the CEO of Tesla Motors, recently announced the Tesla Supercharger project. At a high-level, the project accomplishes the following jaw-dropping things:

  1. It creates a network of charging stations for electric cars that are capable of charging a Tesla in less than 30 minutes.
  2. The charging stations are solar powered and generate more electricity than the cars use, feeding the excess power into the local power grid.
  3. The charging stations are free to use for any person that owns a Tesla vehicle.
  4. The charging stations are operational and available today.

This means that, in 4-5 years, any owner of a Tesla vehicle will be able to drive anywhere in the USA, for free, powered by the sun. No person in their right mind (with the money) would pass up that offer. No fossil fuel-based company will ever be able to provide “free”, clean energy. This is the sort of proposition we, the RDF/Linked Data/Semantic Web community, need to make; I think we can re-position ourselves to do just that.

Here is what the RDF and Linked Data community can learn from Tesla:

  1. The message shouldn’t be about the technology. It should be about the problems we have today and a concrete solution on how to address those problems.
  2. Demonstrate real value. Stop talking about the beauty of RDF, theoretical value, or design. Deliver production-ready, open-source software tools.
  3. Build a network of believers by spending more of your time working with Web developers and open-source projects to convince them to publish Linked Data. Dogfood our work.

Here is how we’ve applied these lessons to the JSON-LD work:

  1. We don’t mention RDF in the specification unless absolutely necessary, and in many cases it isn’t necessary. RDF is plumbing; it’s in the background, and developers don’t need to know about it to use JSON-LD (see the snippet after this list).
  2. We purposefully built production-ready tools for JSON-LD from day one; a playground, multiple production-ready implementations, and a JavaScript implementation of the browser-based API.
  3. We are working with Wikidata, Wikimedia, Drupal, the Web Payments and Read Write Web groups at W3C, and a number of other private clients to ensure that we’re providing real value and dogfooding our work.
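
As a small illustration of the first point, here is what JSON-LD looks like to a developer: plain, idiomatic JSON. The @context quietly maps the keys to a vocabulary (schema.org terms in this made-up snippet), and the RDF data model underneath never has to be mentioned.

// To a Web developer this is just JSON; the @context does the Linked Data
// work in the background. The values are illustrative.
const person = {
  "@context": {
    "name": "http://schema.org/name",
    "homepage": { "@id": "http://schema.org/url", "@type": "@id" }
  },
  "name": "Jane Developer",
  "homepage": "http://jane.example.com/"
};

// Ordinary JSON tooling keeps working:
console.log(person.name); // "Jane Developer"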

Ultimately, RDF and the Semantic Web are of no interest to Web developers. They also have a really negative public perception problem. We should stop talking about them. Let’s shift the focus to be on Linked Data, explaining the problems that Web developers face today, and concrete, demonstrable solutions to those problems.

Note: This post isn’t meant as a slight against any one person or group. I was just working on the JSON-LD spec, aggressively removing prose discussing RDF, and the analogy popped into my head. This blog post was an exercise in organizing my thoughts on the matter.

HTML5 and RDFa 1.1

Full disclosure: I’m the chair of the newly re-chartered RDFa Working Group at the W3C as well as a member of the HTML WG.

The newly re-chartered RDFa Working Group at the W3C published a First Public Working Draft of HTML5+RDFa 1.1 today. This might be confusing to those of you that have been following the RDFa specifications. Keep in mind that HTML5+RDFa 1.1 is different from XHTML+RDFa 1.1, RDFa Core 1.1, and RDFa Lite 1.1 (which are official specs at this point). This is specifically about HTML5 and RDFa 1.1. The HTML5+RDFa 1.1 spec reached Last Call (aka: almost done) status at W3C via the HTML Working Group last year. So, why are we doing this now and what does it mean for the future of RDFa in HTML5?

Here’s the issue: the document was being unnecessarily held up by the HTML5 specification. In the most favorable scenario, HTML5 is expected to become an official standard in 2014. RDFa Core 1.1 became an official standard in June 2012. Per the W3C process, HTML5+RDFa 1.1 would have had to wait until 2014 to become an official W3C specification, even though it would be ready to go a few months from now. W3C policy states that all specs that your spec depends on must reach official status before your spec becomes official. Since HTML5+RDFa 1.1 is a language profile for RDFa 1.1 that is layered on top of HTML5, it had no choice but to wait for HTML5 to become official. Boo.

Thankfully the chairs of the HTML WG, RDFa WG, and W3C staff found an alternate path forward for HTML5+RDFa 1.1. Since the specification doesn’t depend on any “at risk” features in HTML5, and since all of the features that RDFa 1.1 uses in HTML5 have been implemented in all of the Web browsers, there is very little chance that those features will be removed in the future. This means that HTML5+RDFa 1.1 could become an official W3C specification before HTML5 reaches that status. So, that’s what we’re going to try to do. Here’s the plan:

  1. Get approval from W3C member companies to re-charter the RDFa WG to take over publishing responsibility of HTML5+RDFa 1.1. [Done]
  2. Publish the HTML5+RDFa 1.1 specification under the newly re-chartered RDFa WG. [Done]
  3. Start the clock on a new patent exclusion period and resolve issues. Wait a minimum of 6 months to go to W3C Candidate Recommendation (feature freeze) status, due to patent policy requirements.
  4. Fast-track to an official W3C specification (test suite is already done, inter-operable implementations are already done).

There are a few minor issues that still need to be ironed out, but the RDFa WG is on the job and those issues will get resolved in the next month or two. If everything goes according to plan, we should be able to publish HTML5+RDFa 1.1 as an official W3C standard in 7-9 months. That’s good for RDFa, good for Web Developers, and good for the Web.

Mythical Differences: RDFa Lite vs. Microdata

Full disclosure: I’m the current chair of the standards group at the World Wide Web Consortium that created the newest version of RDFa.

RDFa 1.1 became an official Web specification last month. Google started supporting RDFa in Google Rich Snippets some time ago and has recently announced that they will support RDFa Lite for schema.org as well. These announcements have led to a weekly increase in the number of times the following question is asked by Web developers on Twitter and Google+:

“What should I implement on my website? Microdata or RDFa?”

This blog post attempts to answer the question once and for all. It dispels some of the myths around the Microdata vs. RDFa debate and outlines how the two languages evolved to solve the same problem in almost exactly the same way.


Here’s the short answer for those of you that don’t have the time to read this entire blog post: Use RDFa Lite – it does everything important that Microdata does, it’s an official standard, and has the strongest deployment of the two.

Functionally Equivalent

Microdata was initially designed as a simple subset of RDFa and Microformats, primarily focusing on the core features of RDFa. Unfortunately, when this was done, the choice was made to break compatibility with RDFa and effectively fork the specification. Conversely, RDFa Lite highlights the same subset of RDFa that Microdata adopted, but does so in a way that does not break backwards compatibility with RDFa. This was done on purpose, so that Web developers wouldn’t have a hard decision in front of them.

RDFa Lite contains all of the simplicity of Microdata coupled with the extensibility of, and compatibility with, RDFa. This is an important point that is often lost in the debate – there is no longer any solid technical reason for choosing Microdata over RDFa Lite. There may have been a year ago, but RDFa Lite made a few tweaks that achieve feature parity with Microdata today, while still being able to do much more than Microdata if you ever need the flexibility. If you don’t want to code yourself into a corner – use RDFa Lite.

To examine why RDFa Lite is a better choice, let’s take a look at the markup attributes for Microdata and the functionally equivalent ones provided by RDFa Lite:

Microdata 1.0     RDFa Lite 1.1    Purpose
itemid            resource         Used to identify the exact thing that is being described using a URL, such as a specific person, event, or place.
itemprop          property         Used to identify a property of the thing being described, such as a name, date, or location.
itemscope         not needed       Used to signal that a new thing is being described.
itemtype          typeof           Used to identify the type of thing being described, such as a person, event, or place.
itemref           not needed       Used to copy-paste a piece of data and associate it with multiple things.
not supported     vocab            Used to specify a default vocabulary that contains terms that are used by markup.
not supported     prefix           Used to mix different vocabularies in the same document, like ones provided by Facebook, Google, and open source projects.

As you can see above, both languages have exactly the same number of attributes. There are nuanced differences on what each attribute allows one to do, but Web developers only need to remember one thing from this blog post: Over 99% of all Microdata markup in the wild can be expressed in RDFa Lite just as easily. This is a provable fact – replace all Microdata attributes with the equivalent RDFa Lite attributes, add vocab="http://schema.org/" to the markup block, and you’re done.
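
As a sketch of just how mechanical that conversion is, the function below performs the search-and-replace on a markup string. It handles the common schema.org case described above and deliberately ignores corner cases like itemref; a real converter should use an HTML parser rather than regular expressions, and the function name is mine.

// A sketch of the "search and replace" conversion described above. Common
// case only: itemref and other corner cases are intentionally ignored.
function microdataToRdfaLite(html, vocab = 'http://schema.org/') {
  return html
    // itemscope itemtype="http://schema.org/X"  ->  vocab="http://schema.org/" typeof="X"
    .replace(new RegExp(`\\bitemscope\\s+itemtype="${vocab}([^"]+)"`, 'g'),
             `vocab="${vocab}" typeof="$1"`)
    .replace(/\bitemprop=/g, 'property=')
    .replace(/\bitemid=/g, 'resource=')
    .replace(/\bitemref="[^"]*"\s*/g, '')  // rarely used in the wild; dropped here
    .replace(/\bitemscope\s*/g, '');       // not needed in RDFa Lite
}

const microdata =
  '<div itemscope itemtype="http://schema.org/Product">\n' +
  '  <span itemprop="name">Dell UltraSharp 30" LCD Monitor</span>\n' +
  '</div>';

console.log(microdataToRdfaLite(microdata));
// <div vocab="http://schema.org/" typeof="Product">
//   <span property="name">Dell UltraSharp 30" LCD Monitor</span>
// </div>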

At this point, you may be asking yourself why the two languages are so similar. There is almost 8 years of history here, but to summarize: RDFa was created around the 2004 time frame; Microdata came much later and used RDFa as a design template. Microdata chose a subset of the original RDFa design to support, but did so in an incompatible way. RDFa Lite then highlighted the same subset of functionality that Microdata did, but in a way that is backwards compatible with RDFa. RDFa Lite did this while keeping the flexibility of the original RDFa intact.

That leaves us where we are today – with two languages, Microdata and RDFa Lite, that accomplish the same things using the same markup patterns. The reason both exist is a very long story involving politics, egos, and a fair amount of dysfunction between various standards groups – none of which has any impact on the actual functionality of either language. The bottom line is that we now have two languages that do almost exactly the same thing. One of them, RDFa Lite 1.1, is currently an official standard. The other one, Microdata, probably won’t become a standard until 2014.

Markup Similarity

The biggest deployment of Microdata on the Web is for implementing the schema.org vocabulary by Google. Recently, with the release of RDFa Lite 1.1, Google has announced their intent to “officially” support RDFa as well. To see what this means for Web developers, let’s take a look at some markup. Here is a side-by-side comparison of two markup examples – one in Microdata and another in RDFa Lite 1.1:

Microdata 1.0:

<div itemscope itemtype="http://schema.org/Product">
  <img itemprop="image" src="dell-30in-lcd.jpg" />
  <span itemprop="name">Dell UltraSharp 30" LCD Monitor</span>
</div>

RDFa Lite 1.1:

<div vocab="http://schema.org/" typeof="Product">
  <img property="image" src="dell-30in-lcd.jpg" />
  <span property="name">Dell UltraSharp 30" LCD Monitor</span>
</div>

If the markup above looks similar to you, that was no accident. RDFa Lite 1.1 is designed to function as a drop-in replacement for Microdata.

The Bits that Don’t Matter

Only two features of Microdata aren’t supported by RDFa Lite: itemref and itemscope. Regarding itemref, the RDFa Working Group discussed the addition of that property and, upon reviewing Microdata markup in the wild, saw almost no use of itemref in production code. The schema.org examples steer clear of using itemref as well, so it was fairly clear that itemref is, and will continue to be, an unused feature of Microdata. The itemscope property is redundant in RDFa Lite and is thus unnecessary.

5 Reasons

For those of you that still are not convinced, here are the top five reasons that you should pick RDFa Lite 1.1 over Microdata:

  1. RDFa is supported by all of the major search crawlers, including Google (and schema.org), Microsoft, Yahoo!, Yandex, and Facebook. Microdata is not supported by Facebook.
  2. RDFa Lite 1.1 is feature-equivalent to Microdata. Over 99% of Microdata markup can be expressed easily in RDFa Lite 1.1. Converting from Microdata to RDFa Lite is as simple as a search and replace of the Microdata attributes with RDFa Lite attributes. Conversely, Microdata does not support a number of the more advanced RDFa features, like being able to tell the difference between feet and meters.
  3. You can mix vocabularies with RDFa Lite 1.1, supporting both schema.org and Facebook’s Open Graph Protocol (OGP) using a single markup language. You don’t have to learn Microdata for schema.org and RDFa for Facebook – just use RDFa for both.
  4. RDFa Lite 1.1 is fully upward-compatible with RDFa 1.1, allowing you to seamlessly migrate to a more feature-rich language as your Linked Data needs grow. Microdata does not support any of the more advanced features provided by RDFa 1.1.
  5. RDFa deployment is greater than Microdata. RDFa deployment continues to grow at a rapid pace.

Hopefully the reasons above are enough to convince most Web developers that RDFa Lite is the best bet for expressing Linked Data in web pages, boosting your Search Engine Page rank, and ensuring that you’re future-proofing your website as your data markup needs grow over the next several years. If it’s not, please leave a comment below explaining why you’re still not convinced.

If you’d like to learn more about RDFa, try the rdfa.info website. If you’d like to see more RDFa Lite examples and play around with the live RDFa editor, check out RDFa Play.

Thanks to Tattoo Tabatha for the artwork in this blog piece.

Blindingly Fast RDFa 1.1 Processing

The fastest RDFa processor in the world just got a big update – librdfa 1.1 has just been released! librdfa is a SAX-based RDFa processor, written in pure C – which makes it very portable to a variety of different software and hardware architectures. It’s also tiny and fast – the binary is smaller than this web page (around 47KB), and it’s capable of extracting roughly 5,000 triples per second per CPU core from an HTML or XML document. If you use Raptor or the Redland libraries, you use librdfa.

The timing for this release coincides with the push for a full standard at W3C for RDFa 1.1. The RDFa 1.1 specification has been in feature-freeze for over a month and is proceeding to W3C vote to finalize it as an officially recognized standard. There are now 5 fully conforming implementations for RDFa in a variety of languages – librdfa in C, PyRDFa in Python, RDF::RDFa in Ruby, Green Turtle in JavaScript, and clj-rdfa in Clojure.

It took about a month of spare-time hacking on librdfa to update it to support RDFa 1.1. It has also been given a new back-end document processor: a migration from libexpat to libxml2 was performed in order to better support processing of badly authored HTML documents as well as well-formed XML documents. Support for all of the new features in RDFa 1.1 has been added, including the @vocab attribute, @prefix, and @inlist. Full support for RDFa Lite 1.1 has also been included. A great deal of time was also put into making sure that there were absolutely no memory leaks or pointer issues across all 700+ tests in the RDFa 1.1 Test Suite. There is still some work that needs to be done to add HTML5 @datetime attribute support and fix xml:base processing in SVG files, but that’s fairly small stuff that will be implemented over the next month or two.

Many thanks to Daniel Richard G., who updated the build system to be more cross-platform and pure C compliant on a variety of different architectures. Also thanks to Dave Longley who fixed the very last memory leak, which turned out to be a massive pain to find and resolve. This version of librdfa is ready for production use for processing all XML+RDFa and XHTML+RDFa documents. This version also supports both RDFa 1.0 and RDFa 1.1, as well as RDFa Lite 1.1. While support for HTML5+RDFa is 95% of the way there, I expect that it will be 100% in the next month or two.

Google Indexing RDFa 1.0 + schema.org Markup

Full disclosure: I am the chair of the RDF Web Applications Working Group at the World Wide Web Consortium – RDFa is one of the technologies that we’re working on.

Google is building a gigantic Knowledge Graph that will change search forever. The purpose of the graph is to understand the conceptual “things” on a web page and produce better search results for the world. Clearly, the people and companies that end up in this Knowledge Graph first will have a huge competitive advantage over those that do not. So, what can you do today to increase your organization’s chances of ending up in this Knowledge Graph, and thus ending up higher in the Search Engine Result Pages (SERPs)?

One possible approach is to mark your pages up with RDFa and schema.org. “But wait”, you might ask, “schema.org doesn’t support RDFa, does it?”. While schema.org launched with only Microdata support, Google has said that they will support RDFa 1.1 Lite, which is slated to become an official specification in the next couple of months.

However, that doesn’t mean that the Google engineers are sitting still while the RDFa 1.1 spec moves toward official standard status. RDFa 1.0 became an official specification in October 2008. Many people have been wondering if Google would start indexing RDFa 1.0 + schema.org markup while we wait for RDFa 1.1 to become official. We have just discovered that Google is not only indexing schema.org data expressed as RDFa 1.0, but also enhancing search result listings based on that data!

Here’s what it looks like in the live Google search results:

Enhanced Google search result showing event information
The image above shows a live, enhanced Google search result with event information extracted from the RDFa 1.0 + schema.org data, including date and location of the event.

Enhanced Google search result showing recipe preparation time information
The image above shows a live, enhanced Google search result with recipe preparation time information, also extracted from the RDFa 1.0 + schema.org data that was on the page.

Enhanced Google search result showing detailed event information with click-able links
The image above shows a live, enhanced Google search result with very detailed event information gleaned from the RDFa 1.0 + schema.org data, including date, location and links to the individual event pages.

Looking at the source code for the pages above, a few things become apparent:

  1. All of the pages contain a mixture of RDFa 1.0 + schema.org markup. There is no Microformats or Microdata markup used to express the data shown in the live search listings. The RDFa 1.0 + schema.org data is definitely being used in live search listing displays.
  2. The Drupal schema.org module seems to be used for all of the pages, so if you use Drupal and want the benefit of enhanced Google search listings, you will probably want to install that module.
  3. The search and social companies are serious about indexing RDFa content, which means that you may want to get serious about adding it into your pages before your competitors do.

Google isn’t the only company that is building a giant global graph of knowledge. Last year, Facebook launched a similar initiative called the Open Graph, which is also built on RDFa. The end result of all of this work is better search listings, more relevant social interactions, and a more unified way of expressing “things” in Web pages using RDFa.

Does your website talk about any of the following things: Applications, Authors, Events, Movies, Music, People, Products, Recipes, Reviews, and/or TV Episodes? If so, you should probably be expressing that structured data as RDFa so that both Facebook and Google can give you better visibility over those that don’t in the coming years. You can get started by viewing the RDFa schema.org examples or reading more about Facebook’s Open Graph markup. If you don’t know anything about RDFa, you may want to start with the RDFa Lite document, or the RDFa Primer.
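
As a rough sense of what that markup looks like in practice, here is an illustrative sketch of schema.org Event data expressed as RDFa 1.0. The snippet and values are invented rather than taken from the pages above, and BeautifulSoup stands in for a real RDFa processor:

    # Illustrative sketch only: schema.org Event data expressed as RDFa 1.0
    # (CURIE-style prefix declared via xmlns), plus a naive extraction pass.
    # This is not Google's parser or the Drupal module's output.
    from bs4 import BeautifulSoup

    html = """
    <div xmlns:s="http://schema.org/" typeof="s:Event">
      <span property="s:name">RDFa Meetup</span>
      <span property="s:startDate" content="2012-06-01T19:00">June 1st, 7pm</span>
      <span property="s:location">Blacksburg, VA</span>
    </div>
    """

    soup = BeautifulSoup(html, "html.parser")
    event = soup.find(attrs={"typeof": True})
    print("type:", event["typeof"])
    for node in event.find_all(attrs={"property": True}):
        # Prefer @content when present, otherwise fall back to the element text,
        # which is (very roughly) how RDFa treats literal values.
        value = node.get("content") or node.get_text(strip=True)
        print(node["property"], "=", value)

The RDFa 1.1 Lite version of the same markup is even simpler, since @vocab removes the need for prefix declarations.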

Many thanks to Stéphane Corlosquet for spotting this and creating the Drupal 7 schema.org module. Also thanks to Chris Olafson for spotting that RDFa 1.0 + schema.org markup is now consistently being displayed in live Google search results.

Searching for Microformats, RDFa, and Microdata Usage in the Wild

A few weeks ago, we announced the launch of the Data Driven Standards Community Group at the World Wide Web Consortium (W3C). The focus is on researching, analyzing and publicly documenting current usage patterns on the Internet. Inspired by the Microformats Process, the group aims to enlighten standards development with real-world data. It will collect and report data from large Web crawls, produce detailed reports on protocol usage across the Internet, document yearly changes in usage patterns and promote findings that demonstrate that the current direction of a particular specification should be changed based on publicly available data. All data, research, and analysis will be made publicly available to ensure the scientific rigor of the findings. The group will be a collection of search engine companies, academic researchers, hobbyists, protocol designers and specification editors in search of data that will guide the Internet toward a brighter future.

We had launched the group with the intent of regularly analyzing the Common Crawl data set. The goal of Common Crawl is to build and maintain an open crawl of the web that can be used by researchers, educators and innovators. The crawl currently contains roughly 40TB of compressed data, around 5 billion web pages, and is hosted on Amazon’s S3 service. To analyze the data, you have to write a small piece of analysis software that is then applied to all of the data using Amazon’s Elastic Map Reduce service.

I spent a few hours a couple of nights ago and wrote the analysis software, which is available as open source on github. This blog post won’t go into how the software was written, but rather the methodology and data that resulted from the analysis. There were three goals that I had in mind when performing this trial run:

  • Quickly hack something together to see if Microformats, RDFa and Microdata analysis was feasible.
  • Estimate the cost of performing a full analysis.
  • See if the data correlates with the Yahoo! study or the Web Data Commons project.

Methodology

The analysis software was executed against a very small subset of the Common Crawl data set. The directory that was analyzed (/commoncrawl-crawl-002/2010/01/07/18/) contained 1,273 ARC files, each weighing in at roughly 100MB, for around 124GB of data processed. It took 8 EC2 machines a total of 14 hours and 23 minutes to process the data, for a grand total of 120 CPU hours utilized.

The analysis software streams each file from disk, decompresses it and breaks each file into the data that was retrieved from a particular URL. Each file is checked to ensure that it is an HTML or XHTML file; if it isn’t, it is skipped. If the file is an XHTML or HTML file, an HTML4 DOM is constructed from the file using a very forgiving tag soup parser. At that point, CSS selectors are executed on the resulting DOM to search for HTML elements that contain attributes for each language. For example, the CSS selector “[property]” is executed to retrieve a count of all RDFa property attributes on the page. The same is done for Microdata and Microformats. You can see the exact CSS queries used in the source file for the markup language detector.
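
For readers who want the gist without digging through the github repository, the detection step can be sketched in a few lines. This is not the actual analysis software (which ran as a streaming job over ARC files on Elastic Map Reduce); it is a minimal stand-in showing the kind of CSS selectors involved, with BeautifulSoup playing the role of the forgiving tag soup parser and the Microformats class names reduced to a rough subset:

    # Minimal sketch of the per-document detection step, not the real
    # map-reduce job: parse with a tolerant parser, then count attributes
    # and class names that signal RDFa, Microdata, or Microformats.
    from bs4 import BeautifulSoup

    def count_markup(html: str) -> dict:
        soup = BeautifulSoup(html, "html.parser")  # forgiving tag soup parsing
        return {
            # RDFa 1.0/1.1 attributes
            "rdfa": sum(len(soup.select(s)) for s in
                        ["[property]", "[about]", "[resource]",
                         "[typeof]", "[datatype]", "[prefix]", "[vocab]"]),
            # Microdata attributes
            "microdata": sum(len(soup.select(s)) for s in
                             ["[itemscope]", "[itemprop]", "[itemtype]"]),
            # A few common Microformats root class names (hCard, hCalendar, hReview)
            "microformats": sum(len(soup.select(s)) for s in
                                [".vcard", ".vevent", ".hreview"]),
        }

    page = '<div typeof="s:Event"><span property="s:name">x</span></div>'
    print(count_markup(page))  # {'rdfa': 2, 'microdata': 0, 'microformats': 0}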

Findings

Here are the types of documents that we found in the sample set:

Document Type Count Percentage
HTML or XHTML 10,598,873 100%
Microformats 14,881 0.14%
RDFa 4,726 0.045%
Microdata* 0 0%
* The sample size was clearly too small, since there were already reports of Microdata in the wild by this point in time (early 2010).

The numbers above clearly deviate from both the Yahoo! study and the Web Data Commons project. The problem with our data set was that it was probably too small to really tell us anything useful, so please don’t use the numbers in this blog post for anything of importance.

The analysis software also counted the RDFa 1.1 attributes:

RDFa Attribute Count
property 3746
about 3671
resource 833
typeof 302
datatype 44
prefix 31
vocab 1

The property, about, resource, typeof, and datatype attributes have a usage pattern that is not very surprising. I didn’t check for combinations of attributes, like property and content on the same element, due to time constraints – I only had one night to figure out how to write the software, write it, and run it. This sort of co-attribute detection should be included in future analysis of the data. What was surprising was that the prefix and vocab attributes were already in use in the wild before those features were introduced into RDFa 1.1, though not to a degree that should concern the people designing the RDFa 1.1 language.

The Good and the Bad

The good news is that it does not take a great deal of effort to write a data analysis tool and run it against the Common Crawl data set. I’ve published both our methodology and findings such that anybody could re-create them if they so desired. So, this is good for open Web Science initiatives everywhere.

However, there is bad news. It cost around $12.46 USD to run the test using Amazon’s Elastic Map Reduce system. The Common Crawl site states that they believe it would cost roughly $150 to process the entire data set, but my calculations show a very different picture when you start doing non-trivial analysis. Keep in mind that only 124GB of the total 40TB of data was processed – that is, about 0.31% of the data set was processed for $12.46. At that rate, processing the entire Common Crawl corpus would cost around $4,020 USD. Clearly far more than what any individual would want to spend, but still very much within the reach of small companies and research groups.
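
The extrapolation is simple back-of-the-envelope arithmetic, assuming the cost scales linearly with the amount of data processed and treating the corpus as roughly 40,000GB:

    # Back-of-the-envelope extrapolation from the trial run to the full corpus.
    processed_gb = 124        # data processed in this trial run
    corpus_gb = 40 * 1000     # ~40TB Common Crawl corpus (approximate)
    trial_cost_usd = 12.46    # Elastic Map Reduce bill for the trial run

    fraction = processed_gb / corpus_gb        # ~0.0031, i.e. about 0.31%
    full_cost = trial_cost_usd / fraction      # ~$4,019
    print(f"processed {fraction:.2%} of the corpus; full run ~${full_cost:,.0f}")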

Funding a full analysis of the entire Common Crawl dataset seemed within reach, but after discovering what the price would be, I’m having second thoughts about performing the full analysis without a few other companies or individuals pitching in to cover the costs.

Potential Ways Forward

We may have run the analysis in a way that caused the price to far exceed what was predicted by the Common Crawl folks. I will be following up with them to see if there is a trick to reducing the cost of the EC2 instances.

One option would be to bid a very low price for Amazon EC2 Spot Instances. The downside is that processing would only happen when nobody else was willing to outbid us, so the processing job could take weeks. Another approach would be to use regular expressions to process each document instead of building an in-memory HTML DOM for it. Regular expressions would be able to detect RDFa, Microdata and Microformats using far less CPU than the DOM-based approach. Yet another approach would be for an individual or company with $4K to spend on this research project to fund the analysis of the full data set.
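
As a hypothetical sketch of that regular-expression approach (it trades accuracy for speed, and will happily match attribute names inside comments or scripts, so its counts would be noisier than the DOM-based ones):

    # Rough sketch of the cheaper regex-based detection mentioned above.
    # No DOM is built, at the cost of false positives (e.g. matches inside
    # comments, scripts, or unrelated attribute values).
    import re

    RDFA = re.compile(r'\b(?:property|about|typeof|resource|vocab|prefix)\s*=', re.I)
    MICRODATA = re.compile(r'\b(?:itemscope|itemprop|itemtype)\b', re.I)
    MICROFORMATS = re.compile(r"""class\s*=\s*["'][^"']*\b(?:vcard|vevent|hreview)\b""", re.I)

    def quick_scan(html: str) -> dict:
        return {
            "rdfa": len(RDFA.findall(html)),
            "microdata": len(MICRODATA.findall(html)),
            "microformats": len(MICROFORMATS.findall(html)),
        }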

Overall, I’m excited that doing this sort of analysis is becoming available to those of us without access to Google or Facebook-like resources. It is only a matter of time before we will be able to do a full analysis on the Common Crawl data set. If you are reading this and think you can help fund this work, please leave a comment on this blog, or e-mail me directly at: msporny@digitalbazaar.com.

Web Data Commons Launches

Some interesting numbers were just published regarding Microformats, RDFa and Microdata adoption as of October 2010 (fifteen months ago). The source of the data is the new CommonCrawl dataset, which is being analyzed by the Web Data Commons project. They sampled 1% of the 40 Terabyte data set (1.3 million pages) and came up with the following number of total statements (triples) made by pages in the sample set:

Markup Format Statements
Microformats 30,706,071
RDFa 1,047,250
Microdata 17,890
Total 31,771,211

Based on this preliminary data, of the structured data on the Web: 96.6% of it was Microformats, 3.2% of it was RDFa, and 0.05% of it was Microdata. Microformats is the clear winner in October 2010, with the vast majority of the data consisting of markup of people (hCard) and their relationships with one another (xfn). I also did a quick calculation on percentage of the 1.3 million URLs that contain Microformats, RDFa and Microdata markup:

Format Percentage of Pages
Microformats 88.9%
RDFa 12.1%
Microdata 0.09%

These findings deviate wildly from the findings by Yahoo around the same time. Additionally, the suggestion that 88.9% of all pages on the Web contain Microformats markup – much as I’d love to see that happen – is wishful thinking.

There are a few things that could have caused these numbers to be off. The first is that the Web Data Commons’ parsers are generating false positives or negatives, resulting in bad statement counts. A quick check of the data, which they released in full, will reveal if this is true. The other cause could be that the Yahoo study was flawed in the same way, but we may never know if that is true because they will probably never release their data set or parsers for public viewing. By looking at the RDFa usage numbers (3.2% for the Yahoo study vs. 12.1% for Web Data Commons) and the Microformats usage numbers (roughly 5% for the Yahoo study vs. 88.9% for Web Data Commons), the Web Data Commons numbers seem far more suspect. Data publishing in HTML is taking off, but it’s not that popular yet.

I would be wary of doing anything with these preliminary findings until the Web Data Commons folks release something more final. Nevertheless, it is interesting as a data point and I’m looking forward toward the full analysis that these researchers do in the coming months.

Web Payments: PaySwarm vs. OpenTransact Shootout (Part 3)

This is a continuing series of blog posts analyzing the differences between PaySwarm and OpenTransact. The originating blog post and subsequent discussion is shown below:

  1. Web Payments: PaySwarm vs. OpenTransact Shootout by Manu Sporny
  2. OpenTransact the payment standard where everything is out of scope by Pelle Braendgaard
  3. Web Payments: PaySwarm vs. OpenTransact Shootout (Part 2) by Manu Sporny
  4. OpenTransact vs PaySwarm part 2 – yes it’s still mostly out of scope by Pelle Braendgaard

It is Pelle’s latest post that this blog post will address. All of the general points made in the previous analysis still hold, so familiarizing yourself with them before continuing will give you some context. In summary:

TL;DR – The OpenTransact standard does not specify the minimum necessary algorithms and processes required to implement an interoperable, open payment network. It, accidentally, does the opposite – further enforcing silo-ed payment networks, which is exactly what PaySwarm is attempting to prevent.

You may jump to each section of this blog post:

  1. Why OpenTransact Fails To Standardize Web Payments
  2. General Misconceptions (continued)
  3. Detailed Rebuttal (continued)
  4. Conclusion

Why OpenTransact Fails to Standardize Web Payments

After analyzing OpenTransact over the past few weeks, the major issue with the technology is becoming very clear. The Web Payments work is about writing a world standard. The purpose of a standard is to formalize, in explicit detail, the data formats and protocols that teach a developer how two pieces of software should interoperate. If any two pieces of software implement the standard, it is known that they will be able to communicate and carry out any of the actions defined in the standard. The OpenTransact specification does not achieve this most fundamental goal of a standard. It does not specify how any two payment processors may interoperate; instead, it is a document that suggests one possible way for a single payment processor to implement its Web API.

Here is why this is a problem: When a vendor lists an OpenTransact link on their website, and a customer clicks on that link, the customer is taken to the vendor’s OpenTransact payment processor. If the customer does not have an account on that payment processor, they must create an account, verify their information, put money into the account, and jump through all of the other hoops required by that payment provider. In other words, OpenTransact changes absolutely nothing about how payment is performed online today.

For example, if you go to a vendor and they have a PayPal button on their site, you have to go to PayPal and get an account there in order to pay the vendor. If they have an Amazon Payments button instead, you have to go to Amazon and get an account there in order to pay the vendor. Even worse, OpenTransact doesn’t specify how individuals are identified on the network. One OpenTransact provider could use e-mail addresses for identification, while another one might use Facebook accounts or Twitter handles. There is no interoperability because these problems are considered out of scope for the OpenTransact standard.

PaySwarm, on the other hand, defines exactly how payment processors interoperate and identities are used. A customer may choose their payment processor independently of the vendor, and the vendor may choose their payment processor independently of the customer. The PaySwarm specification also details how a vendor can list items for sale in an interoperable way such that any transaction processor may process a sale of the item. PaySwarm enables choice in a payment processor, OpenTransact does not.

OpenTransact continues to lock customers and merchants into a particular payment processor. It requires that they both choose the same one if they are to exchange payment. While Pelle has asserted that this is antithetical to OpenTransact, the specification fails to detail how a customer and a merchant could use two different payment processors to perform a purchase. Leaving something as crucial as sending payment from one payment processor to the next unspecified will only mean that payment processors will implement mechanisms that are non-interoperable with one another. Given this scenario, it doesn’t really matter what the API is for the payment processor as everyone has to be using the same system anyway.

Therefore, the argument that OpenTransact can be used as a basic building block for online commerce is fatally flawed. The only thing that you can build on top of OpenTransact is a proprietary walled garden of payments, an ivory tower of finance. This is exactly what payment processors do today, and will do with OpenTransact. It is in their best interest to create closed financial networks as it strips market power away from the vendor and the customer and places it into their ivory tower.

Keep this non-interoperability point in mind when you see an “out of scope” argument on behalf of OpenTransact – there are some things that can be out of scope, but not at the expense of choice and interoperability.

General Misconceptions (continued)

There are a number of misconceptions that Pelle’s latest post continues to hold regarding PaySwarm that demonstrate a misunderstanding of the purpose of the specification. These general misconceptions are addressed below, followed by a detailed analysis of the rest of Pelle’s points.

PaySwarm is a fully featured idealistic multi layered approach where you must buy into a whole different way running your business.

The statement is hyperbolic – no payment technology requires you to “buy into a whole different way of running your business”. Vendors on the Web list items for sale and accept payment for those items in a number of different ways. This is usually accomplished by using shopping cart software that supports a variety of different payment mechanisms – eCheck, credit card, PayPal, Google Checkout, etc. PaySwarm would be one more option that a vendor could employ to receive payment.

PaySwarm standardizes an open, interoperable way that items are listed for sale on the Web, the protocol that is used to perform a transaction on the Web, and how transaction processors may interoperate with one another.

PaySwarm is a pragmatic approach that provides individuals and businesses with a set of tools to make Web-based commerce easier for their customers and thus provides a competitive advantage for those businesses that choose to adopt it. Businesses don’t need to turn off their banking and credit card processing services to use PaySwarm – it would be foolish for any standard to take that route.

PaySwarm doesn’t force any sort of out-right replacement of what businesses and vendors do today; it is something that can be phased in gradually. Additionally, it provides built-in functionality that you cannot accomplish via traditional banking and credit card services – functionality like micro-payments, crowd-funding, a simple path to browser integration, digital receipts, and a variety of innovative new business models for those willing to adopt them. That is, individuals and businesses will adopt PaySwarm because: 1) it provides a competitive advantage, 2) it allows new forms of economic value exchange to happen on the Web, 3) it is designed to fight vendor lock-in, and 4) it thoroughly details how to achieve interoperability as an open standard.

It is useless to call a technology idealistic, as every important technology starts from idealism and then gets whittled down into a practical form – PaySwarm is no different. The proposal for the Web was idealistic at the time, it was a multi-layered approach, and it does require a whole different way of running a Web-based business (only because Web-based businesses did not exist before the Web). It’s clear today that all of those adjectives (“idealistic”, “multi-layered”, and “different”) were some of the reasons that the Web succeeded, even if none of those words apply to PaySwarm in the negative way that is asserted in Pelle’s blog post.

However the basic PaySwarm philosophy of wanting to design a whole world view is very similar to central planning or large standards bodies like ANSI, IEEE etc. OpenTransact follows the market based approach that the internet was based on of small standards that do one thing well.

As stated previously, PaySwarm is limited in scope by the use cases that have been identified as being important to solve. It is important to understand the scope of the problem before attempting a solution. OpenTransact fails to grasp the scope of the problem and thus falls short of providing a specification that defines how interoperability is achieved.

Furthermore, it is erroneous to assert that the Internet was built using a market-based approach and small standards. The IEEE, ANSI, and even government had a very big part to play in the development of the Internet and the Web as we know it today.

Here are just a few of the technologies that we enjoy today because of the IEEE: Ethernet, WiFi, Mobile phones, Mobile Broadband, POSIX, and VHDL. Here are the technologies that we enjoy today because of ANSI: The standard for encoding most of the letters on your screen right now (ASCII and UTF-8), the C programming language standard, and countless safety specifications covering everything from making sure that commercial airliners are inspected properly, to hazardous waste disposal guidelines that take human health into account, to a uniform set of colors for warning and danger signs in the workplace.

The Internet wouldn’t exist in the form that we enjoy today without these IEEE and ANSI standards: Ethernet, ASCII, the C programming language, and many of the Link Layer technologies developed by the IEEE on which the foundation of the Internet was built. It is incorrect to assume that the Internet followed purely market-based forces and small standards. Let’s not forget that the Internet was a centrally planned, government funded (DARPA) project.

The point is that technologies are developed and come into existence through a variety of channels. There is not one overriding philosophy that is correct in every instance. Some technologies require one to move fast in the market, some require thoughtful planning and oversight, and some require a mixture of both. What is important in the end is that the technology works, is well thought out, and achieves the use cases it set out to achieve.

There are many paths to a standard. What is truly important in the end is that the technology works in an interoperable fashion, and in that vein, the assertion that OpenTransact does not meet the basic interoperability requirements of an open Web standard has still not been addressed.

Detailed Rebuttal (continued)

In the responses below, Pelle’s comment on his latest blog post is quoted and the rebuttal follows each section of quoted text. Pay particular attention to how most of the responses are effectively that the “feature is out of scope”, but no solution is forthcoming to the problem that the feature is designed to address. That is, the problem is just kicked down the road for OpenTransact, whereas the PaySwarm specification makes a concerted effort to address each problem via the feature under discussion.

Extensible Machine Readable Metadata

Again this falls completely out of the scope. An extension could easily be done using JSON-LD as JSON-LD is simply an extension to JSON. I don’t think it would help the standard by specifying how extensions should be done at this point. I think JSON-LD is a great initiative and it may well be that which becomes an extension format. But there are also other simpler extensions that might better be called conventions that probably do not need the complication of JSON-LD. Such as Lat/Lng which has become a standard geo location convention in many different applications.

The need for extensible machine-readable metadata was explained previously. Addressing this problem is a requirement for PaySwarm because without it you have a largely inflexible messaging format. Pelle mentions that the extensibility issue could be addressed using JSON-LD, which is what PaySwarm does, but does not provide any concrete plans to do this for OpenTransact. That is, the question is left unaddressed in OpenTransact and thus the extensibility and interoperability issue remains.

When writing standards, one cannot assert that a solution “could easily be done”. Payment standards are never easy and hand waving technical issues away is not the same thing as addressing those technical issues. If the solution is easy, then surely something could be written on the topic on the OpenTransact website.
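
To make the extensibility point concrete, here is a minimal sketch of what JSON-LD buys a payment message. The context URL and the vocabulary terms are invented for illustration – they are not taken from either specification – but the mechanism is the point: a third party can add new, globally unambiguous terms without breaking software that only understands the core ones.

    # Illustrative only: the context URL and terms below are invented.
    # JSON-LD lets an extension map "loyaltyPoints" to a globally unique
    # identifier, so it cannot collide with core terms or other extensions.
    message = {
        "@context": [
            "https://example.com/payment-vocab",  # hypothetical core vocabulary
            {"loyaltyPoints": "https://vendor.example.com/terms#loyaltyPoints"},
        ],
        "amount": "5.00",
        "currency": "USD",
        "loyaltyPoints": 125,  # the extension, ignorable by core-only software
    }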

Transactions (part 1)

I don’t like the term transaction as Manu is using it here. I believe it is being used here using computer science terminology. But leaving that aside. OpenTransact does not support multi step transactions in itself right now. I think most of these can be easily implemented in the Application Layer and thus is out of scope of OpenTransact.

The term transaction is being used in the traditional English sense, the Merriam-Webster Dictionary defines a transaction as: something transacted; especially: an exchange or transfer of goods, services, or funds (electronic transactions). Wikipedia defines a transaction as: an agreement, communication, or movement carried out between separate entities or objects, often involving the exchange of items of value, such as information, goods, services, and money. Further, a financial transaction is defined as: an event or condition under the contract between a buyer and a seller to exchange an asset for payment. It involves a change in the status of the finances of two or more businesses or individuals. This demonstrates that the use of “transaction” in PaySwarm is in-line with its accepted English meaning.

The argument that multi-step transactions can be easily implemented is put forward again. This is technical hand-waving. If the solution is so simple, then it shouldn’t take but a single blog post to outline how a multi-step transaction happens between a decentralized set of transaction processors. The truth of the matter is that multi-step transactions are a technically challenging problem to solve in a decentralized manner. Pushing the problem up to the application layer just pushes the problem off to someone else rather than solving it in the standard so that the application developers don’t have to create their own home-brew multi-part transaction mechanism.

Transactions (part 2)

I could see a bulk payment extension supporting something similar in the future. If the need comes up lets deal with [it].

Here are a few reasons why PaySwarm supports multiple financial transfers to multiple financial accounts as part of a single transaction: 1) it makes the application layer simpler, and thus the developer’s life easier, 2) ensuring that all financial transfers made it to their destination prevents race conditions where some people get paid and some people do not (read: you could be sued for non-payment), and 3) assuming a transfer where money is disbursed to 100 people, doing it in one HTTP request is faster and more efficient than doing it in 100 separate requests. The need for multiple financial transfers in a single transaction is already there. For example, paying taxes on items sold is a common practice; in this case, the transaction is split between at least two entities: the vendor and the taxing authority.
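
To make the vendor-plus-taxing-authority example concrete, a single transaction with multiple transfers might look something like the sketch below. The property names are placeholders, not the actual PaySwarm vocabulary; the point is simply that one transaction can carry several transfers that the transaction processor settles together.

    # Illustrative only: property names are placeholders rather than the
    # PaySwarm vocabulary. One transaction, several transfers, settled as a unit.
    transaction = {
        "id": "https://processor.example.com/transactions/1234",  # hypothetical
        "currency": "USD",
        "transfers": [
            {   # payment to the vendor
                "destination": "https://vendor-bank.example.com/accounts/vendor",
                "amount": "9.50",
            },
            {   # sales tax split out to the taxing authority
                "destination": "https://tax.example.org/accounts/state",
                "amount": "0.50",
            },
        ],
    }

    total = sum(float(t["amount"]) for t in transaction["transfers"])
    assert total == 10.00  # the buyer authorizes a single $10.00 charge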

OpenTransact does not address the problem of performing multiple financial transfers in a single transaction and thus pushes the problem on to the application developer, who must then know quite a bit about financial systems in order to create a valid solution. If the application developer makes a design mistake, which is fairly easy to do when dealing with decentralized financial systems, they could place their entire company at great financial risk.

Currency Exchange

…most of us have come to the conclusion that we may be able to get away with just using plain open transact for this.

While the people working on OpenTransact may have come to this conclusion, there is absolutely no specification text outlining how to accomplish the task of performing a currency exchange. The analysis was on features that are supported by each specification and the OpenTransact specification still does not intend to provide any specification text on how a currency exchange could be implemented. Saying that a solution exists, but then not elaborating upon the solution in the specification in an interoperable way is not good standards-making. It does not address the problem.

Decentralized Publishing of X (part 1)

These features listed are necessary if you subscribe to the world view that the entire worlds commerce needs to be squeezed into a web startup.

I don’t quite understand what Pelle is saying here, so I’m assuming this interpretation: “The features listed are necessary if you subscribe to the world view that all of the worlds commerce needs have to be squeezed into a payment standard.”

This is not the world-view that PaySwarm assumes. As stated previously, PaySwarm assumes a limited set of use cases that were identified by the Web Payments community as being important. Decentralization is important to PaySwarm because it ensures: 1) that the system is resistant to failure, 2) that the customer is treated fairly due to very low transaction processor switching costs, and 3) that market forces act quickly on the businesses providing PaySwarm services.

OpenTransact avoids the question of how to address these issues and instead, accidentally, further enforces silo-ed payment networks and walled gardens of finance.

Decentralized Publishing of X (part 2)

I think [decentralized publishing] would make a great standard in it’s own right that could be published separately from the payment standard. Maybe call it CommerceSwarm or something like that.

There is nothing preventing PaySwarm from splitting the publishing of assets and listings out of the main specification once we have addressed the limited set of use cases put forth by the Web Payments community. As stated previously, the PaySwarm specification can always be broken down into simpler, modularized specifications. This is an editorial issue, not a design issue.

The concern about the OpenTransact specification is not an editorial issue, it is a design issue. OpenTransact does not specify how multiple transaction processors interoperate nor does it describe how one publishes assets, listings and other information associated with the payment network on the Web. Thus, OpenTransact, accidentally, supports silo-ed payment networks and walled gardens of finance.

Decentralized Publishing of X (part 3)

If supporting these are a requirement for an open payment standard, I think it will be very hard for any existing payment providers or e-commerce suppliers to support it as it requires a complete change in their business, where OpenTransact provides a fairly simple easy implementable payment as it’s only requirement.

This argument is spurious for at least two reasons.

The first is that OpenTransact only has one requirement, and thus all a business would have to implement is that one requirement. Alternatively, if businesses only want to implement simple financial transfers in PaySwarm (roughly equivalent to transactions in OpenTransact), they need only do that. Therefore, PaySwarm can be as simple as OpenTransact to the vast majority of businesses that only require simple financial transfers. However, if more advanced features are required, PaySwarm can support those as well.

The second reason is that it is effectively the buggy whip argument – if you were to ask businesses that depended on horses to transport their goods before the invention of the cargo truck, most would recoil at the thought of having to replace their investment in horses with a new investment in trucks. However, new businesses would choose the truck because of its many advantages. Some would use a mixture of horses and trucks until the migration to the better technology was complete. The same applies to both PaySwarm and OpenTransact – the only thing that is going to cause individuals and businesses to switch is that the technology provides a competitive advantage to them. The switching costs for new businesses are going to be less than the switching costs for old businesses with a pre-existing payment infrastructure.

Verifiable Receipts (part 1)

However I don’t want us to stall the development and implementation of OpenTransact by inventing a new form of PKI or battling out which of the existing PKI methods we should use. See my section on Digital Signatures in the last post.

A new form of PKI has not been invented for PaySwarm. It uses the industry standard for both encryption and digital signatures – AES and RSA. The PKI methods are clearly laid out in the specification and have been settled for quite a while, not a single person has mentioned that they want to use a different set of PKI methods or implementations, nor have they raised any technical issues related to the PKI portion of the specification.

Pelle might be referring to how PaySwarm specifies how to register public keys on the Web, but if he is, there is very little difference between that and having to manage OAuth 2 tokens, which is a requirement imposed on developers by the OpenTransact specification.
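
For readers unfamiliar with what “industry standard” PKI looks like in practice, the sketch below signs and verifies a serialized receipt with an RSA key using a stock crypto library. This is not the PaySwarm signature scheme itself (the specification defines its own message format and how public keys are registered on the Web); it is a minimal illustration that the primitives involved are ordinary RSA and SHA-256 rather than a new form of PKI. The receipt fields are invented for illustration.

    # Minimal sketch: RSA-sign a serialized receipt with the Python
    # `cryptography` package, then verify it with the matching public key.
    import json
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    receipt = {"asset": "https://vendor.example.com/articles/42",
               "amount": "0.05", "currency": "USD"}
    payload = json.dumps(receipt, sort_keys=True).encode("utf-8")

    signature = private_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())

    # Anyone holding the public key can check that the receipt was not altered;
    # verify() raises an exception if the signature does not match.
    private_key.public_key().verify(signature, payload,
                                    padding.PKCS1v15(), hashes.SHA256())
    print("receipt signature verified")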

Verifiable Receipts (part 2)

Thus we have taken the pragmatic approach of letting businesses do what they are already doing now. Sending an email and providing a transaction record via their web site.

PaySwarm does not prevent businesses from doing what they do now. Sending an e-mail and providing a transaction record via their website are still possible using PaySwarm. However, these features become increasingly unnecessary since PaySwarm has a digital receipt mechanism built into the standard. That is, businesses no longer need to send an e-mail or have a transaction record via their website because PaySwarm transaction processors are responsible for holding on to this information on behalf of the customer. This means far less development and financial management headaches for website operators.

Additionally, neither e-mail nor proprietary receipts support Data Portability or system interoperability. That is, these are not standard, machine-readable mechanisms for information exchange. More to the point, OpenTransact is kicking the problem down the road instead of attempting to address the problem of machine-verifiable receipts.

Secure X-routed Purchases

These are neat applications that could be performed in some way through an application. You know I’m going to say it’s out of scope of OpenTransact. OpenTransact was designed as a simple way of performing payments over the web. Off line standards are thus out of scope.

The phrase “performed in some way through an application” is technical hand-waving. OpenTransact does not propose any sort of technical solution to a use case that has been identified by the Web Payments community as being important. Purchasing an item using an NFC-enabled mobile phone at a Web-enabled kiosk is not a far fetched use case – many of these kiosks exist today and more will become Web-enabled over time. That is, if one device has Web connectivity – is the standard extensible enough to allow a transaction to occur?

With PaySwarm, the answer is “yes” and we will detail exactly how to accomplish this in a PaySwarm specification. Note that it will probably not be in the main PaySwarm specification, but an application developer specification that thoroughly documents how to perform a purchase through a PaySwarm proxy.

Currency Mints

Besides BitCoin all modern alternative currencies have the mint and the transaction processor as the same entity.

These are but a few of the modern alternative currencies where the mint and the transaction processor are not the same entity (the year that the currency was launched is listed beside the currency): BerkShares (2006), Calgary Dollar (1996), Ithaca Hours (1998), Liberty Dollar (1998-2009), and a variety of LETS and InterLETS systems (as recently as 2011).

OpenTransact assumes that the mint and the transaction processor are the same entity, but as demonstrated above, this is not the case in already successful alternative currencies. The alternative currencies above, where the mint and the transaction processor are different, should be supported by a payment system that purports to support alternative currencies. Making the assumption that the mint and the transaction processor are one and the same ignores a large part of the existing alternative currency market. It also does not protect against monopolistic behavior on behalf of the mint. That is, if a mint handles all minting and transaction processing, processing fees are at the whim of the mint, not the market. Conflating a currency mint with a transaction processor results in negative market effects – a separation of concerns is a necessity in this case.

Crowd-funding

Saying that you can not do crowd funding with OpenTransact is like saying you can’t do Crowd Funding with http. Obviously KickStarter and many others are doing so and yes you can do so with OpenTransact as a lower level building block.

The coverage of the Crowd Funding feature was never about whether OpenTransact could be used to perform Crowd Funding, but rather how one could perform Crowd Funding with OpenTransact and whether that would be standardized. The answers to those questions are still “Out of Scope” and “No”.

Quite obviously there are thousands of ways technology can be combined with value exchange mechanisms to support crowd funding. The assertion was that OpenTransact does not provide any insight into how it would be accomplished and furthermore, contains a number of design issues that would make it very inefficient and difficult to implement Crowd Funding, as described in the initial analysis, on top of the OpenTransact platform.

Data Portability

We are very aware of concerns of vendor lock in, but as OpenTransact is a much simpler lower level standard only concerned with payments, data portability is again outside the scope. We do want to encourage work in this area.

PaySwarm adopts the philosophy that data portability and vendor lock-in are important concerns and must be addressed by a payment standard. Personal financial data belongs to those transacting, not to the payment processors. Ultimately, solutions that empower people become widely adopted.

OpenTransact, while encouraging work in the area, adopts no such philosophy for Data Portability as evidenced in the specification.

Conclusion

In doing this analysis between PaySwarm and OpenTransact, a few things have come to light that we did not know before:

  1. There are some basic philosophies that are shared between PaySwarm and OpenTransact, but there are many others that are not. Most fundamentally, PaySwarm attempts to think about the problem broadly, whereas OpenTransact only attempts to think about one aspect of the Web payments problem.
  2. There are a number of security concerns that were raised when performing the review of the OpenTransact specification, more of which will be detailed in a follow-up blog post.
  3. There were a number of design concerns that we found in OpenTransact. One of the most glaring issues is something that was an issue with PaySwarm in its early days, until the design error was fixed. In the case that OpenTransact adopts digital receipts, excessive HTTP traffic and the duplication of functionality between digital signatures and OAuth 2 will become a problem.
  4. While we assumed that Data Portability was important to the OpenTransact specification, it was a surprise that there were no plans to address the issue at all.
  5. There was an assumption that the OpenTransact specification would eventually detail how transaction processors may interoperate with one another, but Pelle has made it clear that there are no current plans to detail interoperability requirements.

In order for the OpenTransact specification to continue along the standards track, it should be demonstrated that the design concerns, security concerns, and interoperability concerns have been addressed. Additionally, the case should be made for why the Web Payments community should accept that the list of features not supported by OpenTransact is acceptable from the standpoint of a world standards setting organization. These are all open questions and concerns that OpenTransact will eventually have to answer as a part of the standardization process.

* Many thanks to Dave Longley, who reviewed this post and suggested a number of very helpful changes.

Web Payments: PaySwarm vs. OpenTransact Shootout (Part 2)

The Web Payments Community group is currently evaluating two designs for an open payment platform for the Web. A thorough analysis of PaySwarm and OpenTransact was performed a few weeks ago, followed by a partial response by one of the leads behind the OpenTransact work. This blog post will analyze the response by the OpenTransact folks, offer corrections to many of the claims made in the response, and further elaborate on why PaySwarm actually solves the hard problem of creating a standard for an interoperable, open payment platform for the Web.

TL;DR – The OpenTransact standard does not specify the minimum necessary algorithms and processes required to implement an interoperable, open payment network. It, accidentally, does the opposite – further enforcing silo-ed payment networks, which is exactly what PaySwarm is attempting to prevent.

You can jump to each section below:

  1. The Purpose of a Standard
  2. Web Payments – The Hard Problem
  3. The Problem Space
  4. General Misconceptions
  5. Detailed Rebuttal
  6. Continuing the Discussion

The Purpose of a Standard

Ultimately, the purpose of a standard is to propose a solution to a problem that ensures interoperability among implementations of that standard. Furthermore, standards that establish a network of systems, like the Web, must detail how interoperability functions among the various systems in the network. This is the golden rule of standards – if you don’t detail how interoperability is accomplished, you don’t have a standard.

In Pelle’s blog post, he states:

OpenTransact [is] the payment standard where everything is out of scope

This is the major issue with OpenTransact. By declaring that just about everything is out of scope for OpenTransact, it fails to detail how systems on the payment network communicate with one another and thus does not support the golden rule of standards – interoperability. This is the point that I will be hammering home in this blog post, so keep it in mind while reading the rest of this article.

What OpenTransact does is outline “library interoperability”. The specification enables developers to write one software library that can be used to initiate monetary transfers by building OpenTransact URLs, but then does not specify what happens when you go to the URL. It does not specify how money gets from one system to the next, nor does it specify how those messages are created and passed from system to system. OpenTransact overly simplifies the problem and proposes a solution that is insufficient for use as a global payment standard.

In short, it does not solve the hard problem of creating an open payment platform.

Web Payments – The Hard Problem

Overall, the general argument put forward by Pelle on behalf of the OpenTransact specification is that it focuses on merely initiating a monetary transfer and nothing else because that is the fundamental building block for every other economic activity. His argument is that we should standardize the most basic aspect of transferring value and leave the other stuff out until the basic OpenTransact specification gains traction.

The problem with this line of reasoning is this: When you don’t plan ahead, you run the very high risk of creating a solution that works for the simple use cases, but is not capable of addressing the real problems of creating an interoperable, open payment platform. PaySwarm acknowledges that we need to plan ahead if we are to create a standard that can be applied to a variety of use cases. This does not mean that every use case must be addressed. Rather, the assertion is made that designing solutions that solve more than just the initiation of a simple monetary transfer is important because the world of commerce consists of much more than the initiation of simple monetary transfers.

Clearly, we should not implement solutions to every use case, but rather figure out the maximum number of use cases that can be solved by a minimal design. “Don’t bloat the specification” is often repeated as guidance throughout the standardization process. Where to draw the line on spec bloat is one of the primary topics of conversation in standards groups. It should be an ongoing discussion within the community, not a hard-line philosophical stance.

The hard problem has always been interoperability and the OpenTransact specification postpones addressing that issue to a later point in time. The point of a standard is to establish interoperability such that anyone can read the standard, implement it, and is guaranteed interoperability from others that have implemented the standard. From Pelle’s response:

We don’t specify how one payment provider transacts with another payment provider, but it is generally understood that they do so with OpenTransact.

and

An exchange system between multiple financial institutions can be achieved by many different means as they are today. But all of these methods are implementation details and the developer or end user does not need to understand what is going on inside the black box.

A specification elaborates on the implementation details so that you can guarantee interoperability among those that implement the standard. Implementation details are important because without those, you do not have interoperability and without interoperability, you do not have an open payment platform. Without interoperability, you have the state of online payment providers today – payment vendor lock-in.

General Misconceptions

There are a number of general misconceptions that are expressed in Pelle’s response that need to be corrected before addressing the rest of his feedback:

PaySwarm attempts to solve every single problem up front and thus creates a standard that is very smart in many ways but also very complex.

What PaySwarm attempts to do is identify real-world use cases that exist today with payment online and proposes a way to address those use cases. There are a number of use cases that the community postponed because we didn’t feel that addressing them now was reasonable. There were also use cases that we dropped entirely because we didn’t see a need to support those use cases now or in the future. To say that “PaySwarm attempts to solve every single problem up front” is hyperbolic. It is true that PaySwarm is more complex than OpenTransact today, but that’s because it attempts to address a much larger set of real-world use cases.

It’s background is I understand in a P2P media market place called Bitmunk where licenses, distribution contacts and other media DRM issues are considered important.

PaySwarm did start out as a platform to enable peer-to-peer media marketplace transactions. That was in 2004. The technology and specification have evolved considerably since that time. For example, mandatory DRM was never implemented, but watermarking was – both technologies have been dropped from the specification due to the needless complexity introduced by supporting those features. There was never a concept of a “distribution contract”, but there are digital contracts – and outlining exactly what was purchased, the licenses associated with that purchase, and the costs associated with the transaction seems like a reasonable thing to support in an open payment platform.

Manu Sporny of Digital Bazaar has also been a chair of the RDFa working group so PaySwarm comes with a lot of linked data luggage as well.

I’m also the Chair of the RDF Web Applications Working Group and the JSON-LD Community Group, am a member of the HTML Working Group, founded the Data-Driven Standards Community Group, and am a member of the Semantic Web Coordination Group. Based on those qualifications, I would like to think that I know my way around Web standards and Linked Data – others may disagree :) . While I don’t know if Pelle meant “luggage” in a negative sense, if he did, one must ask what the alternative is? If we are going to create an open payment platform that is interoperable and decentralized like the Web, then what alternative is there to Linked Data?

Many people do not know that we started working with RDFa and JSON-LD because we needed a viable solution to the machine-readable decentralized listing of things for sale problem in PaySwarm. That is, we didn’t get involved with Linked Data first and then carried that work into PaySwarm. We started out with PaySwarm and needed Linked Data to solve the machine-readable decentralized listing of things for sale problem.

OpenTransact comes from the philosophy that we don’t solve a problem until the problem exists and several people have real experiences solving it.

This is a perfectly reasonable philosophy to employ. In fact, PaySwarm adheres to the same philosophy. PaySwarm’s implementation of the philosophy diverges from OpenTransact because it takes more real-world problems into account. Online e-commerce has existed for over a decade now, with a fairly rich history of lessons-learned with regard to how the Web has been used for commerce. This history includes many more types of transactions than just a simple monetary transfer and therefore, PaySwarm attempts to take these other types of transactions into account during the design process.

The Problem Space

In his response, Pelle outlines a number of lessons learned from OpenID and OAuth development. These are all good lessons and we should make sure that we do not fall into the same trap that OpenID did in the beginning – attempting to solve too many problems, too soon in the process.

Pelle implies that PaySwarm falls into this trap and that OpenTransact avoids the trap by being very focused on just initiating payment transfers. The reasoning is spurious as the world is composed of many more types of value exchange than just a simple payment initiation. The main design failure of OpenTransact is to not attempt to detail how the standard applies to the real-world online payment use cases established over the past decade.

It is not that PaySwarm attempts to address too many use cases too soon, but rather that OpenTransact attempts to do too little, and by being hyper-focused, does not solve the problem of creating an open payment platform that is applicable to the state of online commerce today.

Detailed Rebuttal

The following section provides detailed responses to a number of points that are made in Pelle’s blog post:

IRIs for Identifiers

I’m sorry calling URI’s IRI just smells of political correctness. Everyone calls them URI’s and knows what it means. No one knows what a IRI is. Even though W3C pushes it I’m going to use the term URI to avoid confusion.

Wikipedia defines the Internationalized Resource Identifier (IRI) as: a generalization of the Uniform Resource Identifier (URI). While URIs are limited to a subset of the ASCII character set, IRIs may contain characters from the Universal Character Set (Unicode/ISO 10646), including Chinese or Japanese kanji, Korean, Cyrillic characters, and so forth. It is defined by RFC 3987.

PaySwarm is on a world-standards track and thus takes the position that being able to express identifiers in one’s native language is important. When writing standards, it is important to be technically specific and use terminology that has been previously defined by standards groups. Usage of the term IRI is not only technically correct, it acknowledges the notion that we must support non-English identifiers in a payment standard meant for the world to use.

IRIs for Identifiers (cont.)

We don’t want to specify what lives at the end of an account URI. There are many other proposals for standardizing it, we don’t need to deal with that. Until the day that a universal machine readable account URI standard exist, implementers of OpenTransact can either do some sensing of the URI as they already do today (Twitter, Facebook, Github) or use proposals like WebFinger or even enter the world of linked data and use that.

The problem with the argument is expressed in this phrase – Until the day that a universal machine readable account URI standard exist[s]. PaySwarm defines a universal, machine-readable account URI standard. This mechanism is important for interoperability – without it, it becomes difficult to publish information in a decentralized, machine-readable fashion. Without describing what lives at the end of an account IRI, you can’t figure out who owns a financial account, you can’t understand what the currency of the account is, nor can you extend the information associated with the account in an interoperable way. PaySwarm asserts that we cannot just gloss over this part of the problem space as it is important for interoperability.
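
As a rough illustration of why this matters, here is the kind of machine-readable document that could live at the end of an account IRI. The property names are placeholders rather than the actual PaySwarm vocabulary; the point is that the owner, the currency, and any extensions become discoverable simply by dereferencing the identifier.

    # Illustrative only: property names are placeholders, not the PaySwarm
    # vocabulary. Dereferencing the account IRI yields a machine-readable
    # description that any payment processor can interpret.
    import json

    account = {
        "@context": "https://example.com/payment-vocab",   # hypothetical context
        "id": "https://mypay.example.com/accounts/bob",    # the account IRI itself
        "owner": "https://mypay.example.com/people/bob",   # who controls the account
        "currency": "USD",
        "label": "Bob's spending account",
    }

    print(json.dumps(account, indent=2))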

Basic Financial Transfer

OpenTransact does not specify how a transfer physically happens as that is an implementation detail. It could be creating a record in a ledger, uploading a text file to a mainframe via ftp, calling multiple back end systems, generating a bitcoin, shipping a gold coin by fedex, etc.

At no point does the PaySwarm specification state what must happen physically. What happens physically is outside of the scope of the specification. What matters is how the digital exchange happens. This is primarily because any open payment platform for the Web is digitally native. That is, when you pay someone, the transfer is executed and recorded digitally, at times, between two payment processors. This is the same sort of procedure that happens at banks today. Rarely does physical cash move when you use your credit or debit card.

The point of supporting a Basic Financial Transfer between systems boils down to interoperability. OpenTransact doesn’t mention how you transfer $1 from PaymentServiceA to PaymentServiceB. That is, if you are bob@mypay.com and you want to send $1 to jane@superfund.com, how do you initiate the transfer from mypay.com to superfund.com? What is the protocol? OpenTransact is silent on how this cross-payment-processor transfer happens. PaySwarm asserts that specifying this cross-processor monetary exchange protocol in great detail is vital to ensure that the standard enables a fair and efficient transaction processor marketplace – it is what allows new payment processor competitors to enter the marketplace with as little friction as possible. If a payment standard does not specify how this works, it enables vendor lock-in and payment network silos.

When it comes to standards, implementation details like this matter because without explicitly stating how two systems may exchange money with one another, interoperability suffers.
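
To see what “implementation details” have to be pinned down, consider a hypothetical sketch of the minimum a cross-processor transfer involves. The endpoints, field names, and flow below are invented for illustration – this is neither a quote of the PaySwarm protocol nor something OpenTransact defines – but each step is something an interoperable standard must specify explicitly.

    # Hypothetical sketch: endpoints, fields, and flow are invented for
    # illustration. The point is that an interoperable standard must define
    # each of these steps, otherwise the two processors cannot cooperate.
    import requests

    def send_payment(payee_id: str, amount: str, currency: str, sign) -> dict:
        # 1. Discovery: dereference the payee identifier to learn which
        #    transaction processor (and which endpoint) handles their account.
        payee = requests.get(payee_id, headers={"Accept": "application/json"}).json()
        endpoint = payee["transferEndpoint"]      # invented field name

        # 2. Build and digitally sign the transfer message so the receiving
        #    processor can verify who authorized it.
        message = {"destination": payee_id, "amount": amount, "currency": currency}
        message["signature"] = sign(message)      # caller-supplied signing function

        # 3. Deliver it to the payee's processor and return the digital receipt.
        return requests.post(endpoint, json=message, timeout=30).json()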

Transacted Item Identifier

It would be great to have a standard way of specifying every single item in this world and that is pretty much what RDF is about. However until such day that every single object in the world is specified by RDF, we believe it is best to just identify the purchased item with a url.

This argument seems to be saying two contradictory things; 1) It would be great to have a standard way of describing machine-readable items on the Web and 2) until that happens, we should just use URLs.

PaySwarm defines exactly how to express machine-readable items on the Web. Since the first part of the statement is true today, the last part of the statement becomes unnecessary. Furthermore, both OpenTransact and PaySwarm use IRIs for transacted item identifiers – that was never in question. OpenTransact uses an IRI to identify the transacted item. PaySwarm uses an IRI to identify the transacted item, but also ensures that the item is machine-readable and digitally signed by the seller for security purposes.

There are at least two reasons that you cannot just depend on URLs for describing items on the Web without also specifying how those items can be machine-readable and verifiable.

The first reason is that the seller can change what is at the end of a URL over time, and that is a tremendous liability to those purchasing that item if the item’s description is not stored at the time of sale. For example, assume someone sells you an item described by the URL http://example.com/products/82737. When you look at that URL just before you buy the item, it states that you are purchasing tickets to a concert. However, after you make the purchase, the person that sold you the item changes the item at the end of the URL to make it seem as if you purchased an article about their experience at the concert and not the ticket to go to the concert. PaySwarm protects against this attack by requiring a machine-readable description of what is being transacted; that description is shown to the buyer before the sale and then embedded in the receipt of sale.

The second reason is that the URL, if served over HTTP, can be intercepted and changed, such that the buyer ends up purchasing something that the seller did not approve for sale. PaySwarm addresses this security issue by ensuring that all offers for sale must be digitally signed by the seller.
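
As a rough illustration of the first protection (a sketch, not taken from the PaySwarm specification), the receipt below freezes the item description at the time of sale by embedding a hash of it; a later change to whatever lives at the item’s URL then fails verification.

```python
import hashlib
import json

def description_digest(description: dict) -> str:
    """Hash a machine-readable item description in a canonical form."""
    canonical = json.dumps(description, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Description shown to the buyer at the time of sale.
offered = {"id": "http://example.com/products/82737",
           "title": "Concert ticket",
           "price": "30.00", "currency": "USD"}

receipt = {"item": offered["id"], "itemDigest": description_digest(offered)}

# Later, the seller quietly changes the item into an article about the concert.
current = {"id": "http://example.com/products/82737",
           "title": "My night at the concert (article)",
           "price": "30.00", "currency": "USD"}

print(description_digest(current) == receipt["itemDigest"])  # False: tampering detected
```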

Alternative Currencies

In most cases the currency mint is equal to the transaction processor.

The currency mint is not equivalent to the transaction processor. Making that assertion conflates two important concepts; 1) the issuer of a currency, and 2) the transaction processors that are capable of transacting in that currency. To put it in different terms, that’s as if one were to say that the US Treasury (the issuer of the USD currency) is the same thing as a local bank in San Francisco (an entity transacting in USD).

Access Control Delegation

But Digital Signatures only solve the actual access process so you have to create your home built authorization and revocation scheme to match what OAuth 2 gives us for free.

In software development, nothing is free. There are always design trade-offs and the design trade-off that OpenTransact has made is to adopt OAuth 2 and punt on the problem of machine-readable and verifiable assets, licenses, listings, and digital contracts. While Pelle makes the argument that OpenTransact may add digital signature support in the future, the final solution would require that both OAuth 2 and digital signatures be implemented.

PaySwarm does not reject OAuth 2 because it is a poor specification; it rejects it because it overly complicates the implementation details of the open payment platform. PaySwarm relies on digital signatures instead of OAuth 2 for the same reason that it relies on JSON instead of XML. XML is a perfectly good technology, but JSON is simpler and solves the problem in a more elegant way. That is, adding XML to the specification would needlessly over-complicate the solution, which is why it was rejected.

Furthermore, PaySwarm had previously been implemented using OAuth, and we found it to be overly complicated for this very reason. OAuth and digital signatures largely duplicate functionality, and since PaySwarm requires digital signatures to offer a secure, distributed, open payment platform, the most logical thing was to remove OAuth. By removing OAuth, no functionality was sacrificed and the overall system was simplified as a result.

Machine Readable Metadata

Every aspect of PaySwarm falls apart if everything isn’t created using machine readable metadata. This would be great in a perfect greenfield world. However while meta data is improving as people are adding open graph and other stuff to their pages for better SEO and Facebook integration, there are many ways of doing it and a payment standard should not be specifying how every product is listed, sold or otherwise.

This argument is a bit strange – on one hand, it asserts that it would be great if a product could be listed in a machine-readable way, while simultaneously stating that a payment standard shouldn’t specify how to do it. More simply, the argument is – it would be great if we did X, but we shouldn’t do X.

Why shouldn’t a payment platform standard specify how items should be listed for sale? If there are many ways of specifying how a product should be listed for sale, isn’t that a good candidate for standardization? After all, when product listings are machine-readable, we can automate a great deal of what previously required human intervention.

The reason that Open Graph and Facebook integration happened so quickly across a variety of websites is that it provided good value for the website owners as well as Facebook. It allowed websites to be more accurately listed in feeds. It also allowed Facebook to leverage the people in its enormous social network to categorize and label content, something that had been impossible on a large scale before. The same is true for Google Rich Snippets and the recent schema.org work launched by Google, Microsoft, Yahoo! and Yandex. Website owners can now mark up people, events, products, reviews, and recipes in a way that is machine-readable and that shows up directly, in an enhanced form compared to regular search listings, in the search engine results pages.

Making things in a web page machine-readable, like products for sale, automates a very large portion of what used to require human oversight. When we automate processes like these, we are able to gain efficiencies and innovate on top of that automation. Specifying how a product should be marked up on the Web in order to be transacted via an open Web payment platform is exactly what should be standardized in a specification, and this is exactly what PaySwarm does.

Recurring payments

With OpenTransact we are still discussing how to specify recurring payments. Before we add it to the standard we would like a couple of real world implementations experiment with it.

This is a chicken and egg problem – at some point, someone has to propose a way to perform recurring payments for an open payment platform. When the OpenTransact specification states that it won’t specify recurring payments until somebody else implements them, the problem is just shifted to another community that must do the hard work of figuring out how to implement recurring payments.

PaySwarm has gone to the trouble of specifying exactly how recurring payments are performed. Recurring payments have also been implemented by many credit card transaction processors, PayPal, Google Checkout, and Amazon Payments, to name a few. There are many real-world implementations of recurring payments today, so it is difficult to understand exactly what the designers of OpenTransact are waiting on.

Financial Institution Interoperability

OpenTransact is most certainly capable of interoperability between financial institutions. We don’t specify how one payment provider transacts with another payment provider, but it is generally understood that they do so with OpenTransact.

The statement above seems to contradict itself. On one hand, it states that OpenTransact is capable of interoperability between financial institutions. On the other hand, it states that OpenTransact does not specify how one payment provider transacts with another payment provider.

By definition, you do not have interoperability if you do not specify how one system interoperates with another. Furthermore, claiming that two systems interoperate without specifying how they interoperate is an invitation for collusion between financial institutions, and it is a step backwards from how financial institutions operate today: at the very least, there is an inter-bank monetary transfer protocol that a bank can utilize. This functionality, of detailing how two payment processors interact, is out of scope for OpenTransact.

Digital Signatures

Digital signatures are beautiful engineering constructs that most engineers who have worked with them tend to hold up in near religious reverence. You often hear that a digital signature makes a contract valid and it supports non-repudiation.

PaySwarm does not hold up digital signatures in religious reverence, nor does it assert that by using a digital signature, a digital contract is automatically a legally enforceable agreement. What PaySwarm does do is utilize digital signatures as a tool to provide system security. It also utilizes digital signatures so that simple forgeries on digital contracts cannot be performed.

By not supporting digital signatures in its core protocol, OpenTransact greatly limits the set of use cases that can be addressed with the standard. The use cases that OpenTransact does not address are regarded as very important to the PaySwarm work and thus cannot be ignored.

Secure Communication over HTTP

We are not trying to reinvent TLS because certs are expensive, which is what PaySwarm proposes.

PaySwarm does not try to re-invent TLS. PaySwarm utilizes TLS to provide security against man-in-the-middle attacks. It also utilizes TLS to create a secure channel from the customer to their PaySwarm Authority and from the merchant to their PaySwarm Authority. What PaySwarm also does is allow sellers on the system to run an online storefront from their website over regular HTTP, thus greatly reducing the cost of setting up and operating an online storefront. The goal is to make the barrier to entry for a vendor on PaySwarm cost nothing at all, enabling a large group of people that were previously unable to participate in electronic commerce via their websites to do so without an up-front monetary investment.

Continuing the Discussion

The fundamental point made in this blog post is that by being hyper-focused on initiating payment transfers, OpenTransact misses the bigger picture of ensuring interoperability in an open payment platform. Until this issue is addressed and it is demonstrated that OpenTransact is capable of addressing more than just a few of the simplest use cases supported by PaySwarm, I fear that it will not pass the rigors of the standardization process.

In his blog post, Pelle responded to only around half of the analysis of OpenTransact; further analysis will be performed on the rest of his responses when he finds time to post them.

If you are interested in listening in on or participating in the discussion, please consider joining the Web Payments Community Group mailing list at the World Wide Web Consortium (W3C) (it’s free, and anyone can join!).

Follow-up to this blog post

[Update 2012-01-02: Second part of response by Pelle to this blog post: OpenTransact vs PaySwarm part 2 - yes it's still mostly out of scope]

[Update 2012-01-08: Rebuttal to second part of response by Pelle to this blog post: Web Payments: PaySwarm vs. OpenTransact Shootout (Part 3)]

* Many thanks to Dave Longley, who reviewed this post and suggested a number of very useful changes.

Web Payments: PaySwarm vs. OpenTransact Shootout

The W3C Web Payments Community Group was officially launched in August 2011 for the purpose of standardizing technologies for performing Web-based payments. The group launched at that time because Digital Bazaar had made a commitment to publish the PaySwarm technology as an open standard and eventually place it under the standardization direction of the W3C. During last week’s Web Payments telecon, a discussion ensued about using the OpenTransact specification as the basis for the Web Payments work at W3C. Inevitably, the group will have to thoroughly vet both technologies to see if it should standardize PaySwarm, OpenTransact, or both.

This blog post is a comparison of the list of features that both technologies have outlined as being standardization candidates for version 1.0 of each specification. The comparison uses the latest published specifications as of the time of this blog post, OpenTransact (October 19th, 2011) and PaySwarm (December 14th, 2011). Here is a brief summary table on the list of features supported by each proposed standard:

Feature | PaySwarm 1.0 | OpenTransact 1.0
IRIs for Identifiers | Yes | Yes
Basic Financial Transfer | Yes | Yes
Payment Links | Yes | Yes
Item For Sale Identifier | Yes | Yes
Micropayments | Yes | Yes
Access Control Delegation | Digital Signatures | OAuth 2.0
Alternative Currencies | Yes | Centralized
Machine Readable Metadata | Yes | Only for Items for sale
Recurring Payments | Yes | Use case exists, but no spec text
Transaction Processor Interoperability | Yes | No
Extensible Machine Readable Metadata | Yes | No
Transactions | Yes | No
Currency Exchange | Yes | No
Digital Signatures | Yes | No
Secure Communication over HTTP | Yes | No
Decentralized Publishing of Items for Sale | Yes | No
Decentralized Publishing of Licenses | Yes | No
Decentralized Publishing of Listings | Yes | No
Digital Contracts | Yes | No
Verifiable Receipts | Yes | No
Affiliate Sales | Yes | No
Secure Vendor-routed Purchases | Yes | No
Secure Customer-routed Purchases | Yes | No
Currency Mints | Yes | No
Crowd-funding | Yes | No
Data Portability | Yes | No

IRIs for Identifiers

If a payment technology is going to integrate cleanly with the Web, it should identify the things that it operates on in a Web-friendly way. Identifiers are at the heart of most Internet-based systems, and therefore it is important that an identifier can be used across the world and across different systems operating in different locations. The Internationalized Resource Identifier (IRI), of which Uniform Resource Locators (URLs) are a sub-set, provides a globally scalable mechanism for creating distributed, de-reference-able identifiers.

PaySwarm uses IRIs to identify things like identities (http://example.com/identities/jane), financial accounts (http://example.com/accounts/jane/college-fund), assets (http://example.com/ebooks/the-republic), licenses (http://example.com/licenses/personal-use), listings (http://example.com/ebooks/the-republic#half-off-retail), transactions (http://example.com/transactions/2011/12/18/12345), contracts (http://example.com/contracts/2011/12/18/54321), and a variety of the other things that must be expressed when building an open protocol for a financial system.

OpenTransact uses IRIs to identify Asset Services (http://megabank.example.com/current), identities (bill@test.example.com), transfer receipts (http://epay.example.com/transactions/aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d), callbacks (http://epay.example.com/payment-callback), and providers of items for sale (http://vendor.example.com/). While OpenTransact does not detail many of the “things” that PaySwarm does, the implicit assumption is that those “things” would also have IRIs as their identifiers.

Basic Financial Transfer

An open payment protocol for the Web must be able to perform a simple financial transfer from one financial account to another. This simple exchange is different from the more complex transaction, as outlined below, which allows multiple financial transfers to occur across multiple accounts during a single transaction.

PaySwarm supports transfers from one financial account to another both within systems and between systems.

OpenTransact outlines transfers from one account to another within a system. It is unclear whether it supports financial transfers between systems since the implementation details of how the transfer occurs are not explained in the specification.

Payment Links

A Payment Link is a clickable link in a Web browser that enables one person or organization on the Web to request payment from another person or organization on the Web. Clicking on the link initiates the transfer process. The URL query parameters are standardized such that all payment processors on the Web would implement a single set of known query parameters, thus easing the implementation burden when integrating with multiple payment systems.

A PaySwarm Authority will depend on the Payment Links specification to specify the proper query parameters for a Payment Link. These query parameters are intended to overlap almost entirely with the OpenTransact query parameters.

The OpenTransact specification outlines a set of query parameters for payment links.
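
As a sketch of the general idea (the parameter names below are invented for illustration and are not the ones standardized by either specification), a payment link is simply a URL whose query parameters carry the payment request:

```python
from urllib.parse import urlencode

def payment_link(processor: str, to: str, amount: str, currency: str, memo: str) -> str:
    """Build a hypothetical payment link; parameter names are illustrative only."""
    query = urlencode({"to": to, "amount": amount, "currency": currency, "memo": memo})
    return f"{processor}?{query}"

print(payment_link("https://epay.example.com/pay",
                   "jane@superfund.com", "5.00", "USD", "Lunch"))
# https://epay.example.com/pay?to=jane%40superfund.com&amount=5.00&currency=USD&memo=Lunch
```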

Transacted Item Identifier

Being able to identify an item that is the cause of a financial transfer on the Web is important because it enables the payment system to understand why a particular transfer occurred. Furthermore, ensuring that the item identifier is de-reference-able allows humans and in some cases, machines, to view details about the item for sale.

PaySwarm calls an item that can be transacted an asset and ensures that the description of every asset on the network meets five important criteria; the asset is identified via an IRI, the asset description can be retrieved by de-referencing the IRI, the asset description is human-readable, the asset description is machine-readable, and the asset description can be frozen in time (to ensure the description of the asset does not change from one purchase to the next).

OpenTransact leaves the identification of the item being transacted a bit more open-ended and does not assign a conceptual name to the item. It ensures that the item is identified by an IRI and that de-referencing the IRI results in an item description. The specification is silent on whether or not the item description is required to be human-readable or machine-readable and does not require that the item description can be frozen in time. Since the item IRI is saved in the receipt but a machine-readable description is not, it allows the vendor to change the description of the purchased item at any point. For example, if the buyer purchased an ebook on day one, the vendor can change the purchase to seem as if it were a rental of the ebook on day two.

Micropayments

Micropayments allow the transmission of very small amounts of money from sender to receiver. For example, paying $0.05 for access to an article would be considered a micropayment. One of the primary benefits of micropayments is that they enable easy-to-use pay-as-you-go services. Micropayments also allow one to transfer small amounts of funds at a time without incurring high per-transaction fees.

PaySwarm places no lower limit on transaction amounts. In practice, however, most transaction processors will limit payments to no smaller than 1/10,000th of the smallest whole denomination of a currency. For example, the smallest US Dollar transaction possible using the PaySwarm software that Digital Bazaar is developing is $0.0000001 (one ten-millionth of a US Dollar).

OpenTransact is not limited in the smallest amount that is transferable.
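
Amounts this small are one reason payment code should avoid binary floating point; a quick sketch of the pitfall:

```python
from decimal import Decimal

# The binary float literal 0.0000001 cannot be stored exactly; converting it
# to Decimal reveals the true stored value, which is slightly off 1E-7.
print(Decimal(0.0000001))

# A Decimal built from the string is exact and safe for accumulating
# micropayment amounts.
print(Decimal("0.0000001"))  # 1E-7
```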

Alternative Currencies

Alternative currencies like Ven, Bitcoin, time banking, Bernal Bucks and various gaming currencies like XBox Points are being increasingly used to solve niche economic problems that traditional fiat currencies have not been able to address. In order to ensure that experimentation with alternative currencies is supported, a Web-based payment protocol should ensure that those currencies can be easily created and exchanged.

PaySwarm allows currencies to be specified by either an internationalized currency code, like “USD”, or by an IRI that identifies a currency. This means that anyone capable of minting an IRI, which is just about anybody on the Web, has the ability to create a new alternative currency. The one drawback for alternative currencies is that there must also be a location, or network, on the Web that acts as the currency mint. The concept of a currency mint will be covered later in this blog post.

OpenTransact supports alternative currencies by creating what it calls Asset Service endpoints. These endpoints can be for the transmission of fixed currencies, stocks, bonds, or other financial instruments.

Access Control Delegation

When implementing features like recurring payments, often some form of access control is required to ensure that a vendor honors the agreement that they created with the buyer. For example, giving permission to a vendor to bill you for up to $10 USD per month requires that some sort of access control privileges are assigned to the vendor. This access control mechanism allows the vendor to make withdrawals without needing to repeatedly bother the buyer. It also gives power to the buyer if they ever want to halt payment to the vendor. There are two primary approaches to access control on the Web; OAuth and digital signatures.

PaySwarm relies on digital signatures to ensure that a request for financial transfer is coming from the proper vendor. Setting up access control is a fairly simple process in PaySwarm, consisting of two steps. The first step requires the vendor to request access from the buyer via a Web browser link. The second step requires that the buyer select which privileges, and limits on those privileges, they are granting to the vendor. These privileges and limits may effectively make statements like: “I authorize this vendor to bill me up to $10 per month”. The vendor may then make financial transfer requests against the buyer’s financial account for up to $10 per month.

OpenTransact enables access control via the OAuth 2 protocol, which is capable of supporting similar privileges and limitations as PaySwarm. However, the OAuth 2 protocol does not allow for digital signatures external to the protocol and thus would also require a separate digital signature stack in order to support things like machine-verifiable receipts and decentralized assets, licenses and listings.

Machine Readable Metadata

One of the benefits of creating an open payment protocol is that the protocol can be further automated by computers if certain information is exposed in a machine-readable way. For example, if a digital receipt were exposed such that it was machine-readable, access to a website could be granted merely by transmitting the digital receipt to the website.

PaySwarm requires that all data that participates in a transaction is machine readable in a deterministic way. This ensures that assets, licenses, listings, contracts and digital receipts (to name a few) can be transacted and verified without a human in the loop. This increased automation leads to far better customer experiences on websites, where a great deal of the annoying practice of filling out forms can be skipped entirely because machines can automatically process things like digital receipts.

OpenTransact describes exactly what receipt metadata looks like – it’s a JSON object that contains a number of fields. It also outlines that implementations could ask for descriptions of OpenTransact Assets and the associated list of transactions that are associated with those Assets. However, there is no mechanism that allows this metadata to be extended in a deterministic fashion. This limitation will be further detailed below.

Recurring Payments

A recurring payment enables a buyer to specify that a particular vendor can bill them at a periodic interval and removes the burden of having to remember to pay bills every month. Recurring payments require a certain level of Access Control Delegation.

Recurring payments are supported in PaySwarm by pre-authorizing a vendor to spend a certain limit at a pre-specified time interval. Many other rules can be put into place as well. For example, the buyer could limit the time period that the vendor can operate or the transaction processor could send an e-mail every time the vendor withdraws money from the buyer.

While a recurring payments use case exists for OpenTransact, no specification text has been written on how one can accomplish this feat from a technical standpoint.
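
A minimal sketch of how a transaction processor might enforce an “up to $10 USD per month” recurring authorization, assuming a simple pre-authorization record (the field names are invented, not taken from the PaySwarm specification):

```python
from datetime import date
from decimal import Decimal

# Hypothetical pre-authorization granted by the buyer to a vendor.
authorization = {
    "vendor": "https://vendor.example.com/",
    "currency": "USD",
    "monthly_limit": Decimal("10.00"),
    "expires": date(2012, 12, 31),
}

def may_charge(auth, prior_charges, amount, on_date):
    """Return True if a new charge fits within the monthly limit and validity window."""
    if on_date > auth["expires"]:
        return False
    spent_this_month = sum(
        (amt for (day, amt) in prior_charges
         if day.year == on_date.year and day.month == on_date.month),
        Decimal("0"),
    )
    return spent_this_month + amount <= auth["monthly_limit"]

charges = [(date(2012, 1, 3), Decimal("6.00"))]
print(may_charge(authorization, charges, Decimal("3.00"), date(2012, 1, 20)))  # True
print(may_charge(authorization, charges, Decimal("5.00"), date(2012, 1, 20)))  # False
```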

Transaction Processor Interoperability

For an open payment protocol to be truly open, it must provide interoperability between systems. At the most basic level, this means that a transaction processor must be able to take funds from an account on one system and transfer those funds to a different account on a different system. Interoperability often goes deeper than that, however, as transferring transaction history, accounts, and preference information from one system to the next is important as well.

Since PaySwarm lists financial accounts in a decentralized way (as IRIs), there is no reason that two financial accounts must reside on the same system. In fact, PaySwarm is built with the assumption that payments will freely flow between various payment processors during a single transaction. This means that a single transaction could contain multiple payees and each one of those payees could reside on a different system, and as long as each system adheres to the PaySwarm protocol, the money will be transmitted to each account. While the specification text has not yet been written, the PaySwarm system will not be fully standardized until the protocol for performing the back-haul communication between each PaySwarm Authority is detailed in the specification. That is, system to system inter-operability is a requirement of the protocol.

The OpenTransact specification identifies senders and receivers in a decentralized way (as IRIs). There are no plans to specify system-to-system transactions in the OpenTransact 1.0 specification. The type of interoperability that OpenTransact provides is library API-level compatibility. That is, those writing software libraries for financial services need only implement one set of query parameters to work with an OpenTransact system. However, the specification does not specify how money flows from one system to the next. There is no system-to-system interoperability in OpenTransact and thus each implementation does nothing to prevent transaction processor lock-in.

Extensible Machine Readable Metadata

Having machine-readable data allows computers to automate certain processes, such as receipt verification or granting access based on details in a digital contract. However, machine-readable data comes at the cost of having to use rigid data structures. These rigid data structures can be extended in a variety of ways that provide the best of both worlds – machine readability and extensibility. Allowing extensibility in the data structures enables innovation. For example, the addition of new terms in a contract or license would enable new business models not considered by the designers of the core protocol.

PaySwarm utilizes JSON-LD to express data in such a way as to be easily usable by Web programmers, but extensible in a way that is guaranteed to not conflict with future versions of the protocol. This means that assets, licenses, listings, digital contracts, and receipts may be extended by transaction processors in order to enable new business models without needing to coordinate with all transaction processors as a first step.

OpenTransact utilizes JSON to provide machine-readable data for receipts, Assets and transactions. However, it does not specify any mechanism that allows one to extend the data structures without running the risk of conflicting with future advancements to the language.
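
To illustrate the JSON-LD extension pattern that PaySwarm relies on (the context URLs and the extension term below are hypothetical, not the actual PaySwarm vocabulary), a processor-specific term can be mapped to that processor’s own IRI so it cannot clash with future core terms:

```python
import json

# A hypothetical machine-readable listing. "loyaltyPoints" is an extension
# term defined by one transaction processor; mapping it to that processor's
# own vocabulary IRI keeps it from conflicting with future core terms.
listing = {
    "@context": [
        "https://example.org/payswarm-v1.jsonld",  # hypothetical core context
        {"loyaltyPoints": "https://processor.example.com/vocab#loyaltyPoints"},
    ],
    "@type": "Listing",
    "asset": "http://example.com/ebooks/the-republic",
    "license": "http://example.com/licenses/personal-use",
    "payee": "http://example.com/accounts/jane",
    "amount": "4.99",
    "currency": "USD",
    "loyaltyPoints": 50,
}

print(json.dumps(listing, indent=2))
```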

Transactions

A transaction is defined as a collection of one or more transfers. An example of a transaction is paying a bill at a restaurant. Typically, that transaction results in a number of transfers – there is one from the buyer to the restaurant, another from the buyer to a tax authority, and yet another transfer from the buyer to tip the waiter (in certain countries). While the restaurant gives you a single receipt, the transfer of money is often more complex than just a simple transmission from one sender to one receiver.

PaySwarm models transactions in the way that we model them in the real world – a transaction is a collection of monetary transfers from a sender to multiple receivers. An example of a transaction can be found in the PaySwarm Commerce vocabulary.
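
A rough sketch of the restaurant bill above as data (the field names are illustrative, not the PaySwarm Commerce vocabulary itself): one transaction, several transfers, and a check that the transfers account for the full amount.

```python
from decimal import Decimal

transaction = {
    "id": "http://example.com/transactions/2011/12/18/12345",
    "currency": "USD",
    "total": Decimal("23.00"),
    "transfers": [
        {"to": "http://example.com/accounts/restaurant", "amount": Decimal("18.00")},
        {"to": "http://example.com/accounts/tax-authority", "amount": Decimal("1.50")},
        {"to": "http://example.com/accounts/waiter", "amount": Decimal("3.50")},
    ],
}

def transfers_balance(tx):
    """Check that the individual transfers sum to the transaction total."""
    return sum(t["amount"] for t in tx["transfers"]) == tx["total"]

print(transfers_balance(transaction))  # True
```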

OpenTransact models only the most low-level concept of a financial transfer and leaves implementation of transactions to a higher-level application. That is, transactions are considered out-of-scope for the OpenTransact specification and are expected to be implemented at the application layer.

Currency Exchange

A currency exchange allows one to exchange a set amount of one currency for a set amount of another currency. For example, exchanging US Dollars for Japanese Yen, or exchanging goats for chickens. A currency exchange functions as a mechanism for transforming something of value into something else of value.

PaySwarm supports currency exchanges via digital contracts that outline the terms of the exchange.

OpenTransact asserts that currency exchanges can be implemented on top of the specification, but states that the details of that implementation are strictly outside of the specification. One of the primary features that are required for a functioning currency exchange is the concept of a currency mint, which is also outside of the scope of OpenTransact.

Digital Signatures

A digital signature is a mechanism that is used to verify the authenticity of digital messages and documents. Much like a hand-written signature, it can be used to prove the identity of the person sending the message or signing a document. Digital signatures have a variety of uses in financial systems, including; sending and receiving verifiable messages, access control delegation, sending and receiving secure/encrypted messages, counter-signing financial agreements, and ensuring authenticity of digital goods.

PaySwarm has direct support for digital signatures and utilizes them to provide the following capabilities; sending and receiving verifiable messages, access control delegation, sending and receiving secure/encrypted messages, counter-signing financial agreements, ensuring the authenticity of assets, licenses, listings, digital contracts, intents to purchase, and verifiable receipts.
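
For readers unfamiliar with the mechanics, here is a minimal sign-and-verify sketch using RSA via the third-party Python `cryptography` package; it illustrates the general technique only and is not the PaySwarm signature format.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a key pair (in practice the signer publishes the public key).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# A message to sign -- e.g. a machine-readable listing or contract.
message = json.dumps({"asset": "http://example.com/ebooks/the-republic",
                      "amount": "4.99", "currency": "USD"},
                     sort_keys=True).encode("utf-8")

signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Verification raises InvalidSignature if the message was tampered with.
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```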

OpenTransact does not support digital signatures in the specification.

Secure Communication over HTTP

While HTTP has served as the workhorse protocol for the Web, its major weakness is that it is not secure unless wrapped in a Transport Layer Security (TLS, aka SSL) connection. Unfortunately, TLS/SSL Certificates are costly and one of the design goals for an open protocol for Web Payments should be reducing the financial burden placed on the network participants. Another mechanism that can be used to secure HTTP traffic is to encrypt (for example, with AES) or digitally sign parts of the HTTP message. This approach results in a zero-financial-cost solution for implementing a secure message channel inside of an HTTP message.

Since PaySwarm supports AES and digital signatures by default, it is also capable of securing communication over HTTP.
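
As an illustration of the general idea (not the PaySwarm wire format), the sketch below encrypts a message body with an AES-based scheme before it would be placed inside an ordinary HTTP request, using the third-party `cryptography` package’s Fernet recipe and a key the two parties are assumed to have already exchanged.

```python
from cryptography.fernet import Fernet

# Assume the sender and receiver have already exchanged this symmetric key
# (for example, wrapped in an RSA-encrypted envelope).
shared_key = Fernet.generate_key()
channel = Fernet(shared_key)

# Encrypt the body before placing it in a plain HTTP request.
body = b'{"account": "http://example.com/accounts/jane", "amount": "5.00"}'
ciphertext = channel.encrypt(body)

# The receiver decrypts (Fernet also authenticates the ciphertext).
assert channel.decrypt(ciphertext) == body
print("round trip ok")
```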

OpenTransact relies on OAuth 2 and thus requires a financial commitment from the participant if they want to secure their network traffic via TLS. There is no way to use OpenTransact over HTTP in a secure manner without TLS. However, this is not a problem for the subset of use cases that OpenTransact aims to solve. It does not address the cases where a digital signature or encryption is required to communicate over an un-encrypted HTTP message channel.

Decentralized Publishing of Items for Sale

A vendor would like to have their products listed for sale as widely as possible while retaining control over the item’s machine-readable description, regardless of where it is listed on the Web. It is important to be able to list items for sale in a secure manner, but allow the flexibility for that item to be expressed on sites that are not under your control. Decoupling the machine-readable description of an item for sale from the payment processor allows both mechanisms to be innovated upon on different time-frames by different people. Centralized solutions are often easier to implement, but far less flexible than decentralized solutions. This holds true for how items for sale are listed. Allowing vendors to have full control over how their items for sale should appear to those browsing their wares is something that a centralized solution cannot easily offer.

PaySwarm establishes the concept of an asset and describes how an asset can be expressed in a secure, digitally signed, decentralized manner.

OpenTransact does not support machine-readable descriptions of items, nor does it support digital signatures to ensure that items cannot be tampered with by third-parties, or even the originating party.

Decentralized Publishing of Licenses

When a sale occurs, there is typically a license that governs the terms of sale. Often, this license is implied based on the laws of commerce governing the transaction in the region in which the transaction occurs. What would be better is if the license could be encapsulated into the receipt of sale and specified in a way that is decoupled from the financial transaction processor and from the item being purchased. This would ensure that people and organizations that specialize in law could innovate and standardize a set of licenses independently of the rest of the financial system.

PaySwarm establishes the concept of a license and describes how it can be expressed in a secure, digitally signed, decentralized manner. Licenses typically contain boilerplate text which is sprinkled with configurable parameters such as “warranty period in days from purchase”, and “maximum number of physical copies” (for things like manufacturing).

OpenTransact does not support machine-readable licenses or the embedding of licenses in receipts, nor does it support digital signatures to ensure that the license cannot be tampered with by third parties. License tampering isn’t just a problem when transmitting the license over insecure channels; it is also an issue if the originator of the license changes the contents of the license.

Decentralized Publishing of Listings

A listing specifies the payment details and license under which an asset can be transacted. Giving a vendor full control over when, where and how a listing is published is vital to ensuring that new business models that depend on when and how items are listed for sale can be innovated upon independently of the financial network. So, it becomes important that listings can not only be expressed in a decentralized manner, but are also tamper-proof and re-distributable across the Web while ensuring that the vendor stays in control of how long a particular offer lasts.

PaySwarm establishes the concept of a listing and describes how it can be expressed in a secure, digitally signed, decentralized manner. Decentralized listings allow assets described in the listings to be sold on a separate site, under terms set forth by the original asset owner. That is, a pop-star could release their song as a listing on their website, and the fans could sell it on behalf of the pop-star while making a small profit from the sale. In this scenario, the pop-star gets the royalties they want, the fan gets a cut of the sale, and mass-distribution of the song is made possible through a strongly motivated grass-roots effort by the fans.

OpenTransact does not support machine-readable listings, nor does it support digital signatures to ensure that the listing cannot be tampered with by third parties.

Digital Contracts

A contract is the result of a commercial transaction and contains information such as the item purchased, the pricing information, the parties involved in the transaction, the license outlining the rights to the item, and payment information associated with the transaction. A digital contract is machine-readable, is self-contained and is digitally signed to ensure authenticity.

PaySwarm supports digital contracts as the primary mechanism for performing complex exchanges of value. Digital contracts support business practices like intent-to-purchase, being able to purchase an asset under different licenses (such as personal use and broadcast use), and digital receipts.

OpenTransact does not support digital contracts nor does it support digital signatures.

Verifiable Receipts

A verifiable receipt is a receipt that contains a digital signature such that you can verify the accuracy of the receipt contents. Verifiable receipts are helpful when you need to show the receipt to a third party to assert ownership over a physical or virtual good. For example, a music fan could show a verifiable receipt confirming that they purchased a certain song from an artist to get a discount on tickets to an upcoming show. There would not need to be any coordination between the original vendor of the songs and the vendor of the tickets if a verifiable receipt was used as a proof-of-purchase.

PaySwarm supports verifiable receipts, even when the signatory of the receipt is offline. The only piece of information necessary is the public key of the PaySwarm Authority. This means that receipts can be verified even if the transaction processor is offline or goes away entirely.

OpenTransact does support receipts delivered via the Asset Service, using OAuth 2 as the access control mechanism. This means that retrieving a receipt for validation requires having a human in the loop. A verifying website would need to request an OAuth 2 token, the receipt-holder would need to grant access to the verifying website, and then the verifying website would use the token to retrieve the receipt. Currently, OpenTransact does not support digitally signed receipts, and thus it does not support receipt verification if the Asset Service is offline.

Affiliate Sales

When creating content for the Web, getting massively wide distribution typically leads to larger profits. Therefore, it is important for people that create items for sale to be able to grant others the ability to redistribute and profit off of the redistribution, as long as the original creator is compensated under their terms for their creation. Typically, this is called the affiliate model – where a single creator allows wide-distribution of their content through a network of affiliate sellers.

PaySwarm supports affiliate re-sale through digitally signed listings. A listing associates an asset for sale, the license under which asset use is governed, and the payment amount and rules associated with the asset. These listings can be used on their originating site, or a third party site. The security of the terms of sale specified in the listings is ensured through the use of digital signatures.

OpenTransact does not support affiliate sales.

Secure Vendor-routed Purchases

At times, network-connectivity can be a barrier for performing a financial transaction. In these cases, the likelihood that at least one of the transaction participants has an available network connection is high. For example, if a vendor has a physical place of business with a network connection, the customers that frequent that location can depend on the vendor’s network connection instead of requiring their own when processing payments. This is useful when implementing a payment device in hardware, like a smart payment card, without also requiring that hardware device to have a large-area network communication capability, like a mobile phone. A typical payment flow is outlined below:

  1. The vendor presents a bill to the buyer.
  2. The buyer views the bill and digitally signs the bill, stating that they agree to the charges.
  3. The vendor takes the digitally signed bill and forwards it to the buyer’s payment processor for payment.

PaySwarm supports vendor-routed purchases through the use of digital signatures on digital contracts. When the vendor provides a digital contract to a buyer, the buyer may accept the terms of sale by counter-signing the digital contract from the vendor. The counter-signed digital contract can then be uploaded to the buyer’s PaySwarm Authority to transfer the funds from the buyer to the vendor. The digital contract is returned to the vendor with the PaySwarm Authority’s signature on the contract to assert that all financial transfers listed in the contract have been processed.

OpenTransact does not support vendor-routed purchases, requiring instead that both buyer and vendor have network connectivity when performing a purchase.

Secure Customer-routed Purchases

At times, network-connectivity can be a barrier for performing a financial transaction. In these cases, the likelihood that at least one of the transaction participants has an available network connection is high. For example, a vendor could set up a sales kiosk without any network connection (such as a vending machine) and route purchase processing via a customer’s mobile phone. A typical payment flow is outlined below:

  1. The buyer selects the product that they would like to purchase from the kiosk (like a soda).
  2. The kiosk generates a bill and transmits it to the mobile device via a NFC connection.
  3. The buyer’s mobile phone digitally signs the bill and sends it to their payment processor for processing.
  4. The digitally signed receipt is delivered back to the buyer’s mobile device, which then transmits it via NFC to the kiosk.
  5. The kiosk checks the digital signature and, upon successful verification, delivers the product to the buyer (a cold, refreshing soda).

PaySwarm supports customer-routed purchases through the use of digital signatures on digital contracts. When the buyer receives the digital contract for the purchase from the kiosk, it is already signed by the vendor which implies that the vendor is amenable to the terms in the contract. The buyer then counter-signs the contract and sends it up to the PaySwarm Authority for processing. The PaySwarm Authority then counter-signs the contract, which is delivered back to the buyer, which then routes the finalized contract back to the kiosk. The kiosk checks the digital signature of the PaySwarm Authority on the contract and delivers the product to the buyer.

OpenTransact does not support customer-routed purchases, requiring instead that both buyer and vendor have network connectivity when performing a purchase.

Currency Mints

A currency mint is capable of creating new units of a particular currency. A currency mint is closely related to the topic of alternative currencies, as previously mentioned in this blog post. In order for an alternative currency to enter the financial network, there must be a governing authority or algorithm that ensures that the generation of the currency is limited. If the generation of a currency is not limited in any way, hyperinflation of the currency becomes a risk. The most vital function of a currency mint is to carefully generate and feed currency into payment processors.

PaySwarm supports currency mints by allowing the currency mint to specify an alternative currency via an IRI on the network. The currency IRI is then used as the origin of currency across all PaySwarm systems. The currency mint can then deposit amounts of the alternative currency into accounts on any PaySwarm Authority through an externally defined process.

OpenTransact does not support currency mints. It does support alternative currencies, but does not specify how the alternative currency can be used across multiple payment processors and therefore only supports alternative currencies in a non-inter-operable way.

Crowd-funding

Crowd-funding is the act of pooling a set of funds together with the goal of effecting a change of some kind. Kickstarter is a great example of crowd-funding in action. There are a number of requirements when crowd-funding: money must not be exchanged until a funding goal has been reached (an intent to fund); if the funding goal is reached, the money must be collected en masse (bulk transfer); and if the funding goal is not reached, the intent to fund must be invalidated (cancellation).

PaySwarm supports crowd-funding. The digital contracts that PaySwarm uses can express an intent to fund. Mass-collection of a list of digital contracts expressing an intent to fund can occur in an atomic operation, which is important to make sure that the entire amount is available at once. Finally, the digital contracts containing an intent to fund also contain an expiration time for the offer to fund.
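
A sketch of the all-or-nothing collection step, assuming a simple list of pledge records with expiration times (the field names are invented for illustration):

```python
from datetime import datetime
from decimal import Decimal

pledges = [
    {"from": "http://example.com/accounts/alice", "amount": Decimal("25.00"),
     "expires": datetime(2012, 3, 1)},
    {"from": "http://example.com/accounts/bob", "amount": Decimal("40.00"),
     "expires": datetime(2012, 3, 1)},
]

def collect(pledges, goal, now):
    """Collect every valid pledge, or none of them, once the goal is reached."""
    valid = [p for p in pledges if p["expires"] > now]
    total = sum(p["amount"] for p in valid)
    if total < goal:
        return None  # goal not reached: all intents to fund are simply invalidated
    # In a real system this step would be an atomic bulk transfer.
    return total

print(collect(pledges, Decimal("50.00"), datetime(2012, 1, 15)))  # 65.00
print(collect(pledges, Decimal("50.00"), datetime(2012, 4, 1)))   # None (pledges expired)
```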

OpenTransact does not support crowd-funding as described above.

Data Portability

Data portability provides the mechanism that allows people and organizations to easily transfer their data across inter-operable systems. This includes identity, financial transaction history, public keys, and other financial data in a way that ensures that payment processors cannot take advantage of platform lock-in. Making data portability a key factor of the protocol ensures that the customers always have the power to walk away if they become unhappy with their payment processor, thus ensuring strong market competition among payment processors.

PaySwarm ensures data portability by expressing all of its Web Service data as JSON-LD. There will also be a protocol defined that ensures that data portability is a requirement for all payment processors implementing the PaySwarm protocol. That is, data portability is a fundamental design goal for PaySwarm.

OpenTransact does not specify any data portability requirements, nor does it provide any system inter-operability requirements. While certainly not done on purpose, this creates a dangerous formula for vendor lock-in and non-interoperability between payment processors.

Follow-up to this blog post

[Update 2011-12-21: Partial response by Pelle to this blog post: OpenTransact the payment standard where everything is out of scope]

[Update 2012-01-01: Rebuttal to Pelle's partial response to this blog post: Web Payments: PaySwarm vs. OpenTransact Shootout (Part 2)]

[Update 2012-01-02: Second part of response by Pelle to this blog post: OpenTransact vs PaySwarm part 2 - yes it's still mostly out of scope]

[Update 2012-01-08: Rebuttal to second part of response by Pelle to this blog post: Web Payments: PaySwarm vs. OpenTransact Shootout (Part 3)]

W3Conf – Day Two

W3Conf 2011: HTML5 and the Open Web Platform

The W3C, the folks that create many of the Web technologies you use today, is holding its first conference. If you want to know more about the future of HTML5 and the open Web platform – you’ve come to the right place.

The second day of events is being live-blogged on this page.

If you have access to online video, the event is being live streamed right now. If you don’t have access to video, we’ll be covering what’s going on on this page. If the page doesn’t auto-refresh for you every minute, just hit the refresh button to see the latest.

[liveblog]

W3Conf LiveBlog – Day One

W3Conf 2011: HTML5 and the Open Web Platform

The W3C, the folks that create many of the Web technologies you use today, is holding its first conference. If you want to know more about the future of HTML5 and the open Web platform – you’ve come to the right place.

The first and second days of events are being live-blogged on this page.

If you have access to online video, the event is being live streamed right now. If you don’t have access to video, we’ll be covering what’s going on on this page. If the page doesn’t auto-refresh for you every minute, just hit the refresh button to see the latest.

[liveblog]

The Need for Data-Driven Standards

Summary: The way we create standards used by Web designers and authors (e.g. HTML, CSS, RDFa) needs to employ more publicly-available usage data on how each standard is being used in the field. The Data-Driven Standards Community Group at the W3C is being created to accomplish this goal – please get an account and join the group.

Over the past month, there have been two significant events demonstrating that the way that we are designing languages for the Web could be improved upon. The latest one was the swift removal of the <time> element from HTML5 and then the even swifter re-introduction of the same. The other was a claim by Google that Web authors were getting a very specific type of RDFa markup wrong 30% of the time, which went counter to the RDFa Community’s experiences. Neither side’s point was backed up with publicly-available usage data. Nevertheless, the RDFa Community decided to introduce RDFa Lite with the assumption that Google’s private analysis drew the correct conclusions and understanding that the group would verify the claims, somehow, before RDFa Lite became an official standard.

Here is what is wrong with the current state of affairs: No public data or analysis methodologies were presented by people on either side of the debate, and that’s just bad science.

Does it do What you Want it to do?

How do you use science to design a Web standard such as HTML or RDFa? Let’s look, first, at the kinds of technologies that we employ on the Web.

A dizzying array of technologies were just leveraged to show this Web page to you. To take bits off of a hard drive and blast them toward you over the Web at, quite literally, the speed of light is an amazing demonstration of human progress over the last century. The more you know about how the Web fits together, the more amazing it is that it works with such dependability – the same way for billions of people around the world, each and every day.

There are really two sides to the question of how well the Web “works”. The first side questions how well it works from a technical standpoint. That is, does the page even get to you? What is the failure rate of the hardware and software relaying the information to you? The other side asks the question of how easy it was for someone to author the page in the first place. The first side has to do more with back-end technologies, the second with front-end technologies. It is the design of these front-end technologies that this blog post will be discussing today.

Let’s take a look at the technologies that went into delivering this page to you and try to put them into two broad categories; back-end and front-end.

Here are the two (incomplete) lists of technologies that are typically used to get a Web page to you:

Back-end Technologies: Ethernet, 802.11, TCP/IP, HTTP, TrueType, JavaScript*, PHP*
Front-end Technologies: HTML, CSS, RDFa, Microdata, Microformats

* It is debatable that JavaScript and PHP should also go in the front-end category, but since you can write a web page without them, let’s keep things simple and ignore them for now.

Back-end technologies, such as TCP/IP, tend to be more prescriptive and thus easier to test. The technology either works reliably, or it doesn’t. There is very little wiggle room in most back-end technology specifications. They are fairly strict in what they expect as input and output.

Front-end technologies, such as HTML and RDFa, tend to be more expressive and thus much more difficult to test. That is, the designers of a language know what the intent of particular elements and attributes is, but the intent can be mis-interpreted by the people that use the technology. Much like the English language can be butchered by people that don’t write good, the same principle applies to front-end technologies. An example of this is the rev attribute in HTML – experts know its purpose, but it has traditionally not been used correctly (or at all) by Web authors.

So, how do we make sure that people are using the front-end technologies in the way that they were intended to be used?

Data Leads the Way

Many front-end technology standards, like HTML and RDFa, are frustrating to standardize because the language designers rarely have a full picture of how the technology will be used in the future. There is always an intent to how the technology should be used, but how it is used in the field can deviate wildly from the intent.

During standards development, it is common to have a gallery of people yelling “You’re doing it wrong!” from the sidelines. More frustratingly, for everyone involved, some of them may be right, but there is no way to tell which ones are and which ones are not. This is one of the places that the scientific process can help us. Data-driven science has a successful track record of answering questions that are difficult for language designers, as individuals with biases, to answer. Data can help shed light on a situation when your community of authors cannot.

While the first draft of a front-end language won’t be able to fully employ data-driven design, most Web standards go through numerous revisions. It is during the design process of the latter revisions that one can utilize good usage data on the Web to influence a better direction for the language.

Unfortunately, good data is exactly what is missing from most of the front-end technology standardization work that all of us do. The whole <time> element fiasco could have been avoided if the editor of the HTML5 specification had just pointed to a set of public data that showed, conclusively, that very few people were using the element. The same assertion holds true for the changes to the property attribute in RDFa. If Google could have just pointed us to some solid, publicly-available data, it would have been easy for us to make the decision to extend the property attribute. Neither happened because we just don’t have the infrastructure necessary to do good data-driven design, and that’s what we intend to change.

Doing a Web-scale Crawl

The problem with getting good usage data on Web technologies is that none of us have the crawling infrastructure that Google or Microsoft have built over the years. A simple solution would be to leverage that large corporate infrastructure to continuously monitor the Web for important changes that impact Web authors. We have tried to get data from the large search companies over the years. Getting data from large corporations is problematic for at least three reasons. The first is that there are legal hurdles that both people outside the organization and people inside the organization must overcome to publish any data publicly. These hurdles often take multiple months to overcome. The second is that some see the data as a competitive advantage and are unwilling to publish the data publicly. The third is that the raw data and the methodology are not always available, resulting in just the publication of the findings, which puts the public in the awkward position of having to trust that a corporation has their best interests in mind.

Thankfully, new custom search services have recently launched that allow us to do Web-Scale crawls. We have an opportunity now to create excellent, timely crawl data that can be publicly published. One of these new services is called 80legs, which does full, customized crawls of the Web. The other is called Common Crawl, which indexes roughly 5 billion web pages and is provided as a non-profit service to researchers. These two places are where we are going to start asking the questions that we should have been asking all along.

What Are We Looking For?

To kick-start the work, there is interest in answering the following questions:

  1. How many Web pages are using the <time> element?
  2. How many Web pages are using ARIA accessibility attributes?
  3. How many Web pages are using the <article> and <aside> elements?
  4. How many sites are using OGP vs. Schema.org markup?
  5. How many Web pages are using the RDFa property attribute incorrectly?

Getting answers to these questions will allow the front-end technology developers to make more educated decisions about the standards that all of us will end up using. More importantly, having somewhere that we can all go and ask these questions is vitally important to the standards that will drive the future of the Web.
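
As a sketch of the kind of measurement involved, the snippet below counts a few HTML5 elements in a handful of pages using only the Python standard library; in practice the pages would come from a corpus such as Common Crawl or an 80legs crawl rather than the inline samples shown here.

```python
from collections import Counter
from html.parser import HTMLParser

TRACKED = {"time", "article", "aside"}

class ElementCounter(HTMLParser):
    """Tally occurrences of a few HTML5 elements in a page."""
    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        if tag in TRACKED:
            self.counts[tag] += 1

# Stand-ins for pages pulled from a Web-scale crawl.
pages = [
    "<article><time datetime='2011-11-11'>Nov 11</time> ...</article>",
    "<aside>Related links</aside><p>No time element here.</p>",
]

totals = Counter()
for html in pages:
    parser = ElementCounter()
    parser.feed(html)
    totals.update(parser.counts)

print(dict(totals))  # e.g. {'article': 1, 'time': 1, 'aside': 1}
```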

The Data-Driven Standards Community Group

I propose that we start a Data-Driven Standards Community Group at the W3C. The Data-Driven Standards Community Group will focus on researching, analyzing, and publicly documenting current usage patterns on the Internet. Inspired by the Microformats Process, the goal of this group is to enlighten standards development with real-world data. This group will collect and report data from large Web crawls, produce detailed reports on protocol usage across the Internet, document yearly changes in usage patterns, and promote findings that demonstrate that the current direction of a particular specification should be changed based on publicly available data. All data, research, and analysis will be made publicly available to ensure the scientific rigor of the findings. The group will be a collection of search engine companies, academic researchers, hobbyists, web authors, protocol designers, and specification editors in search of data that will guide the Internet toward a brighter future.

If you support this initiative, please go to the W3C Community Groups page and join to show your support for the group.

A New Way Forward for HTML5

By halting the XHTML2 work and announcing more resources for the HTML5 project, the World Wide Web Consortium has sent a clear signal on the future markup language for the Web: it will be HTML5. Unfortunately, the decision comes at a time when many working with Web standards have taken issue with the way the HTML5 specification is being developed.

The shutdown of the XHTML2 Working Group has brought to a head a long-standing set of concerns related to how the new specification is being developed. This page outlines the current state of development and suggests that there is a more harmonious way to move forward. By adopting some or all of the proposals outlined below, the standards community will ensure that the greatest features for the Web are integrated into HTML5.

What’s wrong with HTML5?

There are likely as many reasons for why HTML5 is problematic as there are for why HTML5 will succeed where XHTML2 didn’t. Some of these reasons are technical in nature, some are based on process, and others may lack sufficient evidential support.

Many, including the author of this document, have praised the WHAT WG for making steady progress on the next version of HTML. Using implementation data to back up additions, removals or re-writes to the core HTML specification has helped improve the standard. In general, browser implementors have been very supportive of the current direction, so there is much to be celebrated when it comes to HTML5.

The HTML5 editorial process, however, has also drawn complaints from long-time members of the web standards community who find themselves marginalized as the specification proceeds. The problem has more to do with politics than with science, but, as we find in the real world, the politics are shaping the science.

The biggest complaint with the current process is that the power to change the originating specification lies with a single individual. Concentrating that power in one individual, or in any small group, creates an advantage that has led to an acrimonious environment.

In a Consortium like the W3C, a process that shows favor to certain members by giving them privileges that other members can never attain is fundamentally unfair.

In this particular case, it tilts the table toward the current HTML5 specification editor and toward the browser manufacturers. Search companies, tool developers, web designers and developers, usability experts, and many others have suddenly found themselves without a voice. Some say that this approach is a good thing: it focuses on those who must adhere to the standard and on those who are producing results. Unfortunately, the approach also creates conflict. The secondary communities sense unfairness because they feel their needs for the Web are not being met. It is not a simple problem with an easy solution.

HTML5 is now the way forward. To ensure that valid dissenting arguments can have an impact on the specification, we must subtly change the editorial process. The changes should not affect the speed at which HTML5 is proceeding, so a certain finesse must be applied to any action we take to make the HTML5 community better.

This set of proposals addresses our current situation and draws on what other, similar communities have done to improve their own development processes.

The Goal

The goal of the actions listed in this document is to allow all of the communities interested in HTML5 to collaborate on the specification in an efficient and agreeable manner. The tools that we elect to use shape how the editorial process is perceived. Currently, the process does not allow for wide-scale collaboration, nor for the sharing and discussion of proposals that consortium-based specification authoring requires.

The Strategy

The strategy for moving HTML5 forward should focus on being inclusive without increasing disruption, red tape, or maintenance headaches for those who are contributing the most to the current HTML5 specification. Any strategy employed should ensure that we create a more open, egalitarian environment where everyone who would like to contribute to the HTML5 specification has the ability to do so without the barriers to entry that exist today.

About the Author

Manu Sporny is a Founder of Digital Bazaar and the Commons Design Initiative, an Invited Expert to the W3C, the editor for the hAudio Microformat specification, the RDF Audio and Video vocabularies, a member of the RDFa Task Force and the editor of the HTML5+RDFa specification.

Over the next six months, he will be raising funding from private enterprise and public institutions to address many of the issues outlined below. If your company depends on the Internet and can afford to fund just a small fraction of the work below (minimum $8K grant), then please contact him at msporny@digitalbazaar.com. If you know of an institution that is able to fund the work described on this page, please have them contact Manu.

The Issues

The majority of this document lists some of the current HTML5 issues and proposes actions that the standards community could take to address them.

Problem: A Kitchen Sink Specification

The HTML5 specification currently weighs in at 4.1 megabytes of HTML. If one were to print it out, single-spaced and using 12-point font, the document would span 844 pages. It is not even complete yet and it is already roughly the same length as the new edition of “War and Peace” – a full 3 inches thick.

Reading an 844-page technical specification is daunting for even the most masochistic of standards junkies. The HTML 4.01 specification, problematic in its own right, is 389 pages. Not only are large specifications overwhelming, but it can be almost impossible to find all of the information you need in them. Clearly the specification needs to be long enough to be specific, but no longer than that. The issue has more to do with focus and accessibility to web authors and developers than with length.

The current HTML5 specification contains information that is of interest to web authors and designers, parser writers, browser manufacturers, HTML rendering engine developers, and CSS developers. Unfortunately, the spec attempts to address all of these audiences at once, quickly losing its focus. A document that is not focused on its intended audience will reduce its utility for all audiences. Therefore, some effort should be placed into making the current document more coherent for each intended audience.

Action: Splitting HTML5 into Logically Targeted Documents

“Know your audience” is a lesson that many creative writers learn long before their first comic, book, or novel is published. Instead of creating a single uber-specification, we should split the document into logically targeted sections that are more focused on their intended audience. For example, the following is one such break-out:

  • HTML5: The Language – Syntax and Semantics
    • This document lists all of the language elements and their intended use
    • Useful to authors and content creators
  • HTML5: Parsing and Processing Rules
    • This document lists all of the text->DOM parsing and conversion rules
    • Useful for validator and parser writers
  • HTML5: Rendering and Display Rules
    • This document lists all of the document rendering rules
    • Useful for browser manufacturers and developers of other applications that perform visual and auditory display
  • HTML5: An Implementers Guide
    • This document lists implementation guidelines, common algorithms, and other implementation details that don’t fit cleanly into the other 3 documents.
    • Useful for application writers who consume HTML5

Problem: Commit Then Review

The HTML5 specification, to date, has been edited in a way that has enjoyed large success in other open source projects. It uses a development philosophy called Commit-Then-Review (CTR). This process is used from time to time at the W3C for small changes, with larger, possibly contentious changes using a process called Consensus-Then-Commit (CTC).

In a CTR process, developers make changes to an open source project and commit their changes for review by other developers. If the new changes work better, they are kept. If they don’t work, they are removed. Source control systems such as CVS, Subversion, and Git are heavily relied upon to provide the ability to rewind history and recover lost changes. This process of making additions to a specification can cause an unintended psychological effect: ideas that are already in a specification are granted more weight than those that are not. In the worst case, the mere existence of text that solves a certain problem can be used to squelch arguments for better solutions.

In a CTC process, as used at the W3C, consensus should be reached before an editor changes a document. This approach assumes that it is much harder to remove language than it is to add it. The approach is often painfully slow, and it can take weeks to reach consensus on particularly touchy items. Many have asserted that the HTML5 specification could not have been developed via CTC, an assertion that the author of this document shares.

There is nothing wrong with either approach as long as certain presumptions hold. One of the presumptions that CTR makes is that there are many people who may edit a given project. This ensures that good ideas are improved upon, bad ideas are quickly replaced by better ideas, and that no one has the ability to impose his or her views on an entire community without being challenged. However, in HTML5, there is only one person who has editing privileges for the source document. This shifts the power to that individual and requires everyone else in the project to react to changes implemented by that committer.

CTR also requires a community where there is mutual trust among the project leaders and contributors. The HTML5 community is, unfortunately, not in that position yet.

Action: More Committers + Distributed Source Control

Commit Then Review is a valuable philosophy, but it is dangerous when you only have one committer. Having only one committer creates a barrier to dissenting opinion. It does not allow anyone else to make a lasting impact on the only product of HTML WG and WHAT WG – the HTML5 specification.

It is imperative that more editors are empowered in order to level the playing field for the HTML5 specification. It is also important that edit-wars are prevented by adopting a system that allows for distributed editing and source management.

The Git source control system is one such distributed editing and management solution. It doesn’t require linear development and it is used to develop the Linux kernel – a project dealing with many more changes per day than the HTML5 specification. It allows for separate change sets to be used and, most importantly, there is no one in “control” of the repository at any given point. There is no central authority, no political barrier to entry, and no central gatekeeper preventing someone from working on any part of the specification.

Distributed source control systems are like peer-to-peer networks. They make the playing field flat and are very difficult to censor. The more people that have the power to clone and edit the source repository, the larger the network effects. We would go from the one editor we have now, to ten editors in a very short time, and perhaps a hundred contributors over the next decade.

Problem: No Way for Experts to Contribute in a Meaningful Way

There have been three major instances spanning 12-24 months in which expert opinion was not respected during the development of the current HTML5 specification. These instances concerned Scalable Vector Graphics (SVG), Resource Description Framework in Attributes (RDFa), and the Web Accessibility Initiative (WAI).

The situation has received enough attention so that there are now web comics and blogs devoted to the conflict between various web experts and the author of the HTML5 specification. These disagreements have become a spectacle with many experts now refusing to take part in the development of the HTML5 specification citing others’ unsuccessful attempts to convey decades of research to the current editor of the HTML5 specification.

To be fair, the editor of the current specification has many valid reasons not to integrate suggestions into the massive document that is HTML5. However, an imperfection in a particular suggestion does not eliminate the need for a discussion of alternative proposals. The way the problem is being approached, by both sides, is fundamentally flawed.

Action: Alternate, Swappable Specification Sections

The HTML5 document should be broken up into smaller sections. Let’s call them microsections. The current specification source is 3.4 megabytes of editable text and is very difficult to author. It is especially daunting to someone who only wants to edit a small section related to their area of expertise. It is even more challenging if one wishes to re-arrange how the sections fit together or to re-use a section in two documents without having to keep them in sync with one another.

Ideally, certain experts, W3C Task Forces, Working Groups, and technology providers could edit the HTML5 specification microsections without having to worry about larger formatting, editing, merging, or layout issues. These microsections could then be processed by a documentation build system into different end-products. For example, HTML5+RDFa or HTML5+RDFa+ARIA, or HTML5+ARIA-RDFa-Microdata. Specifications containing different technologies could be produced very quickly without the overhead of having to author and maintain an entirely new set of documents. You wouldn’t need an editor per proposal to keep all of them in sync with one another, thus reducing the cost of making new proposals. Some microsections could even be used across 2-3 HTML5-related documents.

This approach, coupled with the move to a more decentralized source control mechanism, would provide a path for anyone who can clone a Git repository and edit an HTML file to contribute to the HTML5 specification. Merging changes into the “official” repository could be done via W3C staff contacts to ensure fair treatment to all proposals.

Problem: Mixing Experimental Features with Stable Ones

There is language in the HTML5 specification that indicates that different parts of the specification are at different levels of maturity. However, it is difficult to tell which parts of the specification are at which level of maturity without deep knowledge of the history of the document. This needs to change if we are going to start presenting HTML5 as a stable specification.

When someone who is not knowledgeable about the arcane history of the HTML5 Editors Draft sees the <datagrid> element in the same document as the <canvas> element, it is difficult for them to discern the level of maturity of each. In other words, canvas is implemented in many browsers while datagrid is not, yet they are outlined in the same document. The HTML5 specification does have a pop-up noting which features are implemented, have test cases, or are marked for removal; however, it is still difficult to read the document and know exactly which paragraphs, sentences, and features are experimental and which ones are not.

This is not to say that either <datagrid> or <canvas> shouldn’t be in the HTML5 specification. Rather, there should be different HTML5 specification maturity levels and only features that have reached maturity should be placed into a document entering Last Call at the W3C. We should clearly mark what is and isn’t experimental in the HTML5 specification. We should not standardize on any features that don’t have working implementations.

Action: Shifting to an Unstable/Testing/Stable Release Model

There is much that the standards community can learn from the release processes of larger communities like Debian, Ubuntu, RedHat, the Linux kernel, FreeBSD and others. Each of those communities clearly differentiates software that is very experimental, software that is entering a pre-release testing phase, and software that is tested and intended for public consumption.

If the HTML5 specification generation process adopts Microsections and a distributed source control mechanism, it should be easy to add, remove, and migrate features from an experimental document (Editors Draft), to a testing specification (Working Draft), to a stable specification (Recommendation).

While it may seem as if this is how W3C already operates, note that there is usually only a single stage that is being worked on at a time. This doesn’t fit well with the way HTML5 is being developed in that there are many people working on stable features, testing features, and experimental features simultaneously. A new release process is needed to ensure a smooth, non-disruptive transition from one phase to the next.

Problem: Two Communities, One Specification

When the WHAT WG started what was to become HTML5, the group was asked to do the work outside of the World Wide Web Consortium process. As a benefit of working outside the W3C process, the HTML5 specification was authored and gained support and features very quickly. Letting anybody join the group, having a single editor, dealing directly with the browser manufacturers, and focusing on backwards compatibility all resulted in the HTML5 specification as we know it today.

When the W3C and WHAT WG decided to collaborate on HTML5 as the future language of the web, it was decided that work would continue both in the HTML WG and the WHAT WG. People from the WHAT WG joined the HTML WG and vice-versa to show good faith and move towards openly collaborating with one another. At first, it seemed as if things were fine between the two communities. That is, until the emergence of an “us vs. them” undercurrent – both at the W3C and in WHAT WG.

Keeping both communities active was and will continue to be a mistake. Instead of combining mailing lists, source control systems, bug trackers and the variety of other resources controlled by each group, we now operate with duplicates of many of the systems. It is not only confusing, but inefficient to have duplicate resources for a community that is supposed to be working on the same specification. It sends the wrong signal to the public. Why would two communities that are working on the same thing continue to separate themselves from one another, unless there was a more fundamental issue that existed?

Action: Merging the Communities

The communities should be merged slowly. Data should be migrated from each duplicate system and a single system should be selected. The mailing lists should be one of the first things to be merged. If either community feels that the other community isn’t the proper place for a list, then a completely new community should be created that merges everyone into a single, cohesive group.

The two communities should bid on an html5.xyz domain (html5.org, html5.info, html5.us) and consolidate resources. This would not only be more efficient, but also eventually remove the “us vs. them” undercurrent.

Problem: Specification Ambiguity

A common problem for specification writers is that, over the years, their familiarity with the specification makes them unable to see ambiguities and errors. This is one of the reasons why all W3C specifications must go through a very rigorous internal review process as well as a public review process. Public review and feedback are necessary in order to clarify specification details, gather implementation feedback, and ultimately produce a better specification.

In order to contribute bug reports, features, or comments on the HTML5 specification, one must send an e-mail to either the HTML WG or the WHAT WG. The combined HTML5 and WHAT WG mailing list traffic can range from 600 to 1,200 very technical e-mails a month. Asking those who would like to comment on the HTML5 specification to devote a large amount of their time to write and respond to mailing list traffic is a formidable request. So formidable that many choose not to participate in the development of HTML5.

Action: In-line Specification Commenting and Bug Tracking

There are many websites that allow people to interactively comment on articles or reply to comments on web pages. We need to ensure that there are as few barriers as possible for commenting on the HTML5 specification. Sending an e-mail to the HTML WG or WHAT WG mailing lists should not be a requirement for providing specification feedback. It should be fairly easy to create a system that allows specification readers to comment directly on text ambiguities, propose alternate solutions, or suggest text deletions when viewing the HTML5 specification.

The downside to this approach is that there may be a large amount of noise and a small amount of signal, but that would be better than no signal at all. We must understand that many web developers, authors, and others who have an interest in the future of the Web cannot put as much time into the HTML5 specification as those who are paid to work on it.
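
As a minimal sketch of the data such a commenting system would need to track, consider the following. The section identifiers, the in-memory list, and the helper names are invented for illustration; a real system would also need storage, authentication, and moderation.

    # Sketch of the data an in-line commenting system might track.
    # Section ids, the in-memory list, and helper names are illustrative only.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Comment:
        section_id: str  # e.g. the id attribute of the paragraph being discussed
        author: str
        kind: str        # "ambiguity", "alternative", or "deletion"
        text: str
        created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    comments = []  # all feedback, kept in memory for the sake of the sketch

    def comment_on(section_id, author, kind, text):
        """Record feedback against one section without writing a mailing list post."""
        comments.append(Comment(section_id, author, kind, text))

    def feedback_for(section_id):
        """Everything reviewers have said about one section, newest first."""
        return sorted((c for c in comments if c.section_id == section_id),
                      key=lambda c: c.created, reverse=True)

    comment_on("the-time-element", "a-web-author", "ambiguity",
               "It is unclear whether the datetime attribute is required here.")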

Problem: No Way to Propose Lasting Alternate Proposals

If one were to go to the trouble of adding several new sections to HTML5 or modifying parts of the document, they would then need to keep their changes in sync with the latest version of the document at svn.whatwg.org. This is because there is currently only one person who is allowed to make changes to the “official” HTML5 specification. Keeping documents in sync is time consuming and should not be required of participants in order to effect change.

Since the HTML5 specification document is changed on an almost daily basis, the current approach forces editors to play a perpetual game of catch-up. This results in them spending more time merging changes than contributing to the HTML5 document. It may also cause the feeling that their changes are less important than those further upstream.

Action: At-will Specification Generation from Interchangeable Modules

As previously mentioned, moving to a microsectioned approach can help in this case as well. There can be a number of alternative microsections that editors may author so that each section can be a drop-in replacement. The document build system could be instructed, via a configuration file, on which microsections to use for a particular output product. Therefore if someone wanted to construct a version of HTML5 with a certain feature X, all that would be required is the authoring of the microsection and an instruction for the documentation build system to generate an alternative specification with the microsection included.
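
A minimal sketch of such a build step, assuming that microsections live as individual HTML files and that a plain-text configuration file lists which ones belong in a given output product, might look like the following. The directory layout, file names, and configuration format are invented for illustration, not a description of any existing tool.

    # Sketch: assemble a specification variant from microsection files.
    # The "microsections/" directory, the file names, and the config format
    # are assumptions made for illustration only.
    from pathlib import Path

    def build_spec(config_file, output_file):
        """Concatenate the microsections listed in a config file, in order."""
        lines = Path(config_file).read_text().splitlines()
        sections = [l.strip() for l in lines if l.strip() and not l.startswith("#")]
        body = "\n".join(Path("microsections", name).read_text() for name in sections)
        Path(output_file).write_text(body)

    # A hypothetical "html5-plus-rdfa.cfg" might list syntax.html, parsing.html,
    # rdfa.html, and so on; swapping a proposal in or out is a one-line change.
    build_spec("html5-plus-rdfa.cfg", "html5-plus-rdfa.html")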

Problem: Partial Distributed Extensibility

One of the things that will vanish when the XHTML2 Working Group is shut down at the end of this year is the idea of a unified, distributed platform extensibility mechanism for the web. In short, distributed platform extensibility allows the HTML language to be extended to contain any XML document. Examples of XHTML-based extensibility include embedding SVG, MathML, and a variety of other specialized XML-based markup languages into XHTML documents.

HTML5 is specifically designed not to be extended in a decentralized manner for the non-XML version of the language. It special-cases SVG and MathML into the HTML platform. It also disallows both platform extensibility and language extensibility in HTML5 (not XHTML5), applying the same restrictive rubric to what are clearly different types of extensibility mechanisms.

Many proponents of distributed extensibility are very concerned by this rubric and the resulting design decision. At the heart of distributed extensibility is the assertion that anyone should be able to extend HTML in the future to address their markup needs. It is a forward-looking statement that asserts that the current generation cannot know how the world might want to extend HTML. The power should be placed in the hands of web developers so that they have more tools available to solve their particular set of problems.
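
To make the idea concrete, here is a small sketch, using an invented XHTML fragment, of what namespace-based extensibility buys a consumer: a generic, namespace-aware parser can walk embedded SVG without any SVG-specific code, and the same would hold for a vocabulary invented long after the parser was written.

    # Sketch: a namespace-aware parser handles an embedded vocabulary (SVG here)
    # without special-casing it. The tiny document below is invented for illustration.
    import xml.etree.ElementTree as ET

    xhtml = """<html xmlns="http://www.w3.org/1999/xhtml">
      <body>
        <p>A circle:</p>
        <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20">
          <circle cx="10" cy="10" r="8"/>
        </svg>
      </body>
    </html>"""

    for element in ET.fromstring(xhtml).iter():
        namespace, _, local_name = element.tag.partition("}")
        print(namespace.lstrip("{"), local_name)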

Action: A Set of Proposals for Distributed Extensibility

Whether or not distributed extensibility will ever be used on a large scale is not the issue. The issue is that there are currently no proposals for distributed extensibility in HTML5 (again, not XHTML5). Without a proposal, there is no counter-point for the “no distributed extensibility” assertion that HTML5 makes. Thus, if the W3C were to form consensus at this point, there would be only one option.

Consensus around a single option is not consensus. At the very least, HTML5 needs draft language for distributed extensibility. It doesn’t need to be the same solution that XHTML5 provides; it doesn’t even need to be close; but alternatives should exist. XHTML spent many years solving this problem, and because of that, SVG and MathML found a home in XHTML documents. Enabling specialist communities, such as the visual arts and mathematics, to extend HTML to do things that were not conceivable during the early days of the Web is a fundamental pillar of the way the Web operates today.

Similarly, a set of tools to provide data extensibility does not yet exist in a form that is acceptable to the standards community. These tools are also going to fundamentally shape the way we embed and transfer information in web pages. If we are realistic about the expanse of problems that the Web is being called upon to solve, we should ensure that data extensibility capabilities are provided as we move forward.

We cannot be everything to everyone, but we should provide some combination of features like JavaScript, embedded XML documents, and RDFa in both HTML5 and XHTML5 to help web developers solve their own problems without needing to effect change in the HTML5 specification.

Problem: Disregarding Input from the Accessibility Community

Accessibility is rarely seen as important until one’s own vision, hearing, or motor skills, or those of a loved one, no longer function at a level that makes it easy to navigate the web. Accessible websites are important not only to those with disabilities, but also to those who cannot interact with a website in a typical fashion. For example, web accessibility also matters when one is using a small form factor device, a text-only interface, or a sound-based interface.

Members of the Web Accessibility Initiative (WAI) and the creators of the Accessible Rich Internet Applications (ARIA) technical specification have noted on a number of occasions that they feel as if they are being ignored by the HTML5 community.

Action: Integrate the Accessibility Community’s Input

Empowering the WAI to edit the HTML5 specification in a way that does not conflict with others, yet produces an accessibility-enhanced HTML5 specification, is important to the future of the Web. Microsections and distributed source control would allow this type of collaboration without affecting the speed at which HTML5 is being developed. It may be that the WAI needs a specification writer who is capable of producing unambiguous language that will enable browser manufacturers to easily create interoperable implementations.

The Plan of Action

In order to have the greatest impact on the near-term health of HTML5, the proposals listed above should be carried out in the following order (which is not the order in which they are presented above):

  1. Implementation of Git for distributed source control.
  2. Microsection splitter and documentation build system.
  3. Recruit more committers into the HTML5 community.
  4. Split features based on their experimental nature into unstable and testing during Last Call.
  5. Implement in-line feedback mechanism for HTML5 spec.
  6. Distributed extensibility proposals for HTML5.
  7. Better, more precise accessibility language for HTML5.
  8. Merge the HTML WG and WHAT WG communities.

Acknowledgements

The author would like to thank the following people for reviewing this document and providing feedback and guidance (in alphabetical order): Ben Adida, John Allsopp, Tab Atkins Jr., L. David Baron, Dan Connolly, John Drinkwater, Micah Dubinko, Michael Hausenblas, Ian Hickson, Mike Johnson, David I. Lehn, Dave Longley, Samantha Longley, Shelley Powers, Sam Ruby, Doug Schepers, and Kyle Weems.