All posts in PaySwarm

Web Payments: The Architect, the Sage, and the Moral Voice

Three billion people use the Web today. That number will double to more than six billion people by the year 2020. Let that sink in for a second; in five years, the Web will reach 90% of all humans over the age of 6. A very large percentage of these people will be using their mobile phones as their only means of accessing the Web.

TL;DR: In 2015, the World Wide Web Consortium (The Architect), the US Federal Reserve (The Sage), and the Bill and Melinda Gates Foundation (The Moral Voice) have each started initiatives to dramatically improve the world’s payment systems. The organizations are highly aligned in their thinking. Imagine what we could accomplish if we joined forces.

The Problem

With the power of the Web, we can send a message from New York to New Delhi in the blink of an eye. Millions of people around the globe can read a story published by a citizen journalist, writing about a fragile situation, from a previously dark corner of the world. Yet, when it comes to exchanging economic value, the Web has not fulfilled the promise it delivered for the exchange of information.

While it costs fractions of a penny to send a message around the globe, on average, it costs tens of thousands of times that to send money the same distance. Furthermore, two and a half billion adults on this planet do not have access to modern financial infrastructure, which places families in precarious situations when a shock such as a medical emergency hits the household. Worse, it leaves these families in a vicious cycle of living hand-to-mouth, unable to fully engage in the local economy much less the global one.

The Architect

What if we were able to use the most ubiquitous communication network that exists today to move economic value in the same way that we move instant messages? What if we could lower the costs to the point that we could pull those 2.5 billion people out of the precarious situation in which they’re operating now? Doing this would have dramatically positive financial and societal effects for those people as well as the people and businesses operating in industrialized nations.

That’s the premise behind the Web Payments Activity at the World Wide Web Consortium (W3C). The W3C, along with its 400 member organizations, standardized the Web and is one of the main reasons you’re able to view this web page today from wherever you’re sitting on this planet of ours. For the last five years or so, a number of us have been involved in trying to standardize the way money is sent and received by building that mechanism into the core architecture of the Web.

It has been a monumental effort, and we’re very far from being done, but we’ve gained far more momentum than we predicted. For example, here is just a sampling of the organizations involved in the work: Bloomberg, Google, The US Federal Reserve, Alibaba, Tencent, Apple, Opera, Target, Intel, Deutsche Telekom, Ripple Labs, Oracle, Yandex… the list goes on. What was a pipe dream a few years ago at W3C is a very real possibility today. There is a strong upside here for customers, merchants, financial institutions, and developers.

The Sage

At one time, the US had the most advanced payment system in the world. One of the problems with being first is that you quickly start accruing technical debt. Today, the US payment system ranks among the worst in the world in many of the categories by which these systems are rated. For the last several years, the US Fed has been running an initiative to improve the state of the US payment ecosystem.

Two of the US Fed’s strengths are 1) poring over massive amounts of research on our financial system and producing a cohesive summary of its state, and 2) convening a large number of financial players around a particular topic of interest. Their call for papers on ideas for improving the US payment system resulted in 190 submitted papers and a coherent strategy for fixing it. Their most recent Faster Payments Task Force has attracted over 320 organizations that will be attempting to propose systems to fix a number of the US payment system’s rough spots.

If we are going to try to upgrade the payment systems in the world, it’s important to be able to make decisions based on data. The research and convening ability of the US Fed is a powerful force and the W3C and US Fed are already collaborating on the Web Payments work. The plan should be to deepen these relationships over the next couple of years.

The Moral Voice

The Bill and Melinda Gates Foundation just announced the LevelOne Project, which is an initiative to dramatically increase financial inclusion around the world by building a system that will work for the 2.5 billion people that have little to no access to financial infrastructure in their countries. This isn’t just a developing world problem. At least 30% of people in places like the United States and the European Union don’t have access to modern financial infrastructure.

The Gates Foundation has just proposed a research-backed formula for success for launching a new payment system designed to foster financial inclusion, and here’s where it gets interesting.

The Collaboration

Building the next generation payments system for the world requires answering the ‘what’, ‘how’, and ‘why’. The organizations mentioned previously will play a crucial role in elaborating on those answers. The US Fed (the sage) can influence what we are building; they can explain what has been and what should be. The W3C (the architect) can influence how we build what is needed; they can explain how all the pieces fit together into a cohesive whole. Finally, the Gates Foundation (the moral voice) can explain the why behind what we are building in the way that we’re constructing it.

I’ve had the great pleasure of working with the people in these initiatives over the past several years. Aside from everyone I’ve spoken with being deeply dedicated to the task at hand, I can also say from first-hand experience that there is a tremendous amount of alignment between the three organizations. It’ll take time to figure out the logistics of how to most effectively work together, but it is certainly something worth pursuing. At a minimum, each organization should be publicly supportive of each other’s work. My hope is that the organizations start to become deeply involved with each other where it makes sense to do so.

The first opportunity to collaborate in person is going to be the Web Payments face-to-face meeting in New York City happening on June 16th-18th 2015. W3C and US Fed will be there. We need to get someone from the Gates Foundation there.

If this collaboration ends up being successful, the future is looking very bright for the Web and the 6 billion people that will have access to Web Payments in a few years’ time.

Identity Credentials and Web Login

In a previous blog post, I outlined the need for a better login solution for the Web and why Mozilla Persona, WebID+TLS, and OpenID Connect currently don’t address important use cases that we’re considering in the Web Payments Community Group. The blog post contained a proposal for a new login mechanism for the Web that was simultaneously more decentralized, more extensible, enabled a level playing field, and was more privacy-aware than the previously mentioned solutions.

In the private conversations we have had with companies large and small, the proposal was met with a healthy dose of skepticism and excitement. There was enough excitement generated to push us to build a proof-of-concept of the technology. We are releasing this proof-of-concept to the Web today so that other technologists can take a look at it. It’s by no means done; there are plenty of bugs and security issues that we plan to fix in the next several weeks, but the core of the idea is there and you can try it out.

TL;DR: There is now an open source demo of credential-based login for the Web. We think it’s better than Persona, WebID+TLS, and OpenID Connect. If we can build enough support for Identity Credentials over the next year, we’d like to standardize it via the W3C.

The Demo

The demonstration that we’re releasing today is a proof-of-concept asserting that we can have a unified, secure identity and login solution for the Web. The technology is capable of storing and transmitting your identity credentials (email address, payment processor, shipping address, driver’s license, passport, etc.) while also protecting your privacy from those that would want to track and sell your online browsing behavior. It is in the same realm of technology as Mozilla Persona, WebID+TLS, and OpenID Connect. Benefits of using this technology include:

  • Solving the NASCAR login problem in a way that greatly increases identity provider competition.
  • Removing the need for usernames and passwords when logging into 99.99% of the websites that you use today.
  • Auto-filling information that you have to repeat over and over again (shipping address, name, email, etc.).
  • Solving the NASCAR payments problem in a way that greatly increases payment processor competition.
  • Storage and transmission of credentials, such as email addresses, driver’s licenses, and digital passports, via the Web that cryptographically prove that you are who you say you are.

The demonstration is based on the Identity Credentials technology being developed by the Web Payments Community Group at the World Wide Web Consortium. It consists of an ecosystem of four example websites. The purpose of each website is explained below:

Identity Provider

The Identity Provider stores your identity document and any information about you including any credentials that other sites may issue to you. This site is used to accomplish several things during the demo:

  • Create an identity.
  • Register your identity with the Login Hub.
  • Generate a verified email credential and store it in your identity.

Login Hub

This site helps other websites discover your identity provider in a way that protects your privacy from both the website you’re logging into and your identity provider. Eventually the functionality of this website will be implemented directly in browsers, but until that happens, it is used to bootstrap identity provider discovery and the login/credential transmission process. This site is used to do the following things during the demo:

  • Register your identity, creating an association between your identity provider and the email address and passphrase you use on the login hub.
  • Login to a website.

Credential Issuer

This site is responsible for verifying information about you like your home address, driver’s license, and passport information. Once the site has verified some information about you, it can issue a credential to you. For the purposes of the demonstration, all verifications are simulated and you will immediately be given a credential when you ask for one. All credentials are digitally signed by the issuer which means their validity can be proven without the need to contact the issuer (or be online). This site is used to do the following things during the demo:

  • Login using an email credential.
  • Issue other credentials to yourself like a business address, proof of age, driver’s license, and digital passport.

Single Sign-On Demo

The single sign-on website, while not implemented yet, will be used to demonstrate the simplicity of credential-based login. The sign-on process requires you to click a login button, enter your email and passphrase on the Login Hub, and then verify that you would like to transmit the requested credential to the single sign-on website. This website will allow you to do the following in a future demo:

  • Present various credentials to log in.

How it Works

The demo is split into four distinct parts. Each part will be explained in detail in the rest of this post. Before you try the demo, it is important that you understand that this is a proof-of-concept. The demo is pretty easy to break because we haven’t spent any time polishing it. It’ll be useful for technologists who understand how the Web works. It has only been tested in Google Chrome, versions 31 – 35. There are glaring security issues with the demo that have solutions which have not been implemented yet due to time constraints. We wanted to publish our work as quickly as possible so others could critique it early rather than sitting on it until it was “done”. With those caveats clearly stated up front, let’s dive into the demo.

Creating an Identity

The first part of the demo requires you to create an identity for yourself on the Identity Provider website. Your short name can be something simple like your first name or a handle you use online. The passphrase should be something long and memorable that is specific to you. When you click the Create button, you will be redirected to your new identity page.

Note the text displayed in the middle of the screen. This is your raw identity data in JSON-LD format. It is a machine-readable representation of your credentials. There are only three pieces of information in it in the beginning. The first is the JSON-LD @context value, which tells machines how to interpret the information in the document. The second is the id value, which is the location of this particular identity on the Web. The third is the sysPasswordHash, which is just a bcrypt hash of your login passphrase for the identity website.
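To make the shape of that document concrete, here is a minimal sketch of the three-field identity document, expressed as a Python dict. Every value below is a made-up placeholder (the context URL, identity URL, and hash are not the demo's actual output); only the field names come from the description above.

```python
import json

# Hypothetical initial identity document with the three fields described
# above; all values are illustrative placeholders, not real demo output.
identity = {
    "@context": "https://w3id.org/identity/v1",  # how machines interpret the fields
    "id": "https://idp.example.com/i/alice",     # where this identity lives on the Web
    "sysPasswordHash": "$2a$10$N9qo8uLOickgx2ZMRZoMye...",  # bcrypt hash of the passphrase
}

print(json.dumps(identity, indent=2))
```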

Global Web Login Network

Now that you have an identity, you need to register it with the global Web login network. The purpose of this network is to help map your preferred email address to your identity provider. Keep in mind that in the future, the piece of software that will do this mapping will be your web browser. However, until this technology is built into the browser, we will need to bootstrap the email to identity document mapping in another way.

Mozilla Persona and OpenID approach this in fairly similar ways. OpenID assumes that the domain of your email address is your identity provider, so a login name at a given domain implies that the domain vouches for you. Mozilla Persona went a step further by saying that if your email provider wouldn’t vouch for your email address, Mozilla would. Persona would first check whether your email provider spoke the Persona protocol, and if it didn’t, the burden of validating the email address would fall back to Mozilla. This approach put Mozilla in the unenviable position of running a lot of infrastructure to make sure the entire system stayed up and running.

The Identity Credentials solution goes a step further than Mozilla Persona and states that you are the one who decides which identity provider your email address maps to. So, even if your email address is hosted by one company, you can use a completely different organization as your identity provider. You can probably imagine that this makes the large identity providers nervous, because it means they’re now going to have to compete for your business. You have the choice of who is going to be your identity provider regardless of what your email address is.

So, let’s register your new identity on the global web login network. Click the text on the screen that says “Click here to register”. That will take you to the Login Hub website, which serves two purposes. The first is to map your preferred email address to your identity provider. The second is to protect your privacy as information flows from your identity provider to other websites on the Internet (more on this later).

You should be presented with a screen that asks you for three pieces of information: your preferred email address, a passphrase, and a verification of that passphrase. When you enter this information, it will be used to do a number of things. First, a public/private keypair will be generated for the device that you’re using (your web browser, for instance). This keypair will be used as a second factor of authentication in later steps of the process. Second, your email address and passphrase will be used to generate a query token, which will later be used to query the decentralized Telehash-based identity network. Third, your query-token-to-identity-document mapping will be encrypted and placed onto the Telehash network.
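The query token step is small enough to sketch directly. The post says the email address and passphrase are SHA-256'd together; the exact way the demo concatenates them before hashing is an assumption here, as are the example inputs.

```python
import hashlib

def query_token(email: str, passphrase: str) -> str:
    """Derive a deterministic lookup token from email + passphrase.

    The demo SHA-256s the pair; the exact concatenation used here is an
    illustrative assumption, not the demo's actual scheme.
    """
    return hashlib.sha256(f"{email}:{passphrase}".encode("utf-8")).hexdigest()

# The same inputs always produce the same token, so any of your devices can
# recompute it; without the passphrase the token reveals nothing useful.
token = query_token("alice@example.com", "correct horse battery staple")
print(token)
```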

The Decentralized Database (Telehash)

We could spend an entire blog post itself on Telehash, but the important thing to understand about it is that it provides a mechanism to store data in a decentralized database and query that database at a later time for the data. By storing this query token and query response in the decentralized database, it allows us to find your identity provider mapping regardless of which device you’re using to access the Web and independent of who your email provider is.

In fact, note that I said “preferred email address” above. It doesn’t need to be an email address; it could be a simple string like “Bob” and a unique passphrase. Even though there are many “Bob”s in the world, it’s unlikely that two of them would use the same 20+ character passphrase, so a first name paired with a complex passphrase would work. That said, we’re suggesting that most non-technical people use a preferred email address because most people won’t understand the dangers of SHA-256 collisions on weak username+passphrase combinations like sha256(“Bob” + “password”). As a further aside, the decentralized database doesn’t need to be Telehash. It could just as easily be a decentralized ledger like Namecoin or Ripple.

Once you have filled out your preferred email address and passphrase, click the Register button. You will be sent back to your identity provider and will see three new pieces of information. The first is sysIdpMapping, which is the decentralized database query token (query) and passphrase-encrypted mapping (queryResponse). The second is sysDeviceKeys, which is the public key associated with the device through which you registered your identity and which will be used as a second factor of authentication in later versions of the demo. The third is sysRegistered, an indicator that the identity has been registered with the decentralized database.
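After registration, the identity document might look roughly like this. The field names (sysIdpMapping, sysDeviceKeys, sysRegistered) come from the description above, while every value is a made-up placeholder:

```python
# Hypothetical post-registration identity document; field names follow the
# post, values are illustrative placeholders only.
registered_identity = {
    "@context": "https://w3id.org/identity/v1",
    "id": "https://idp.example.com/i/alice",
    "sysPasswordHash": "$2a$10$N9qo8uLOickgx2ZMRZoMye...",
    "sysIdpMapping": {
        "query": "7f3a...",                        # SHA-256 query token
        "queryResponse": "base64-ciphertext...",   # passphrase-encrypted IdP pointer
    },
    "sysDeviceKeys": {
        "publicKeyPem": "-----BEGIN PUBLIC KEY-----...",  # this browser's device key
    },
    "sysRegistered": True,   # mapping has been written to the decentralized database
}
```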

Acquiring an Email Credential

At this point, you can’t really do much with your identity since it doesn’t have any useful credential information associated with it. So, the next step is to put something useful into your identity. When you create an account on most websites, the first thing the website asks you for is an email address. It uses this email address to communicate with you. The website will typically verify that it can send and receive an email to that address before fully activating your account. You typically have to go through this process over and over again, once for each new site that you join. It would be nice if an identity solution designed for the Web would take care of this email registration process for you. For those of you familiar with Mozilla Persona, this approach should sound very familiar to you.

The Identity Credentials technology is a bit different from Mozilla Persona in that it enables a larger number of organizations to verify your email address than just your email provider or Mozilla. In fact, we see a future where there could be tens, if not hundreds, of organizations that could provide email verification. For the purposes of the demo, the Identity Provider will provide a “simulated verification” (aka fake) of your email address. To get this credential, click on the text that says “Click here to get one”.

You will be presented with a single input field for your email address. Any email address will do, but you may want to use the preferred one you entered earlier. Once you have entered your email address, click “Issue Email Credential”. You will be sent back to your identity page and you should see your first credential listed in your JSON-LD identity document beside the credential key. Let’s take a closer look at what constitutes a credential in the system.

The EmailCredential is a statement that a 3rd party has done an email verification on your account. Any credential that conforms to the Identity Credentials specification is composed of a set of claims and a signature value. The claims tie the information that the 3rd party is asserting, such as an email address, to the identity. The signature is composed of a number of fields that can be used to cryptographically prove that only the issuer of the credential was capable of issuing this specific credential. The details of how the signature is constructed can be found in the Secure Messaging specification.
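The claims-plus-signature shape can be sketched as follows. The real system uses public-key Linked Data Signatures as defined in the Secure Messaging specification; the HMAC below is a stand-in chosen only so the sketch runs on the standard library, and the key, identifiers, and document shape are assumptions made for illustration. The important property it demonstrates is the last one mentioned above: verification needs no call back to the issuer.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key; real credentials are signed with the issuer's
# private key and verified with its public key.
ISSUER_KEY = b"hypothetical-issuer-key"

def sign_credential(claims: dict) -> dict:
    """Attach a signature over a canonicalized form of the claims."""
    payload = json.dumps(claims, sort_keys=True).encode("utf-8")
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claims, "signature": {"signatureValue": sig}}

def verify_credential(credential: dict) -> bool:
    """Re-derive the signature locally; no contact with the issuer needed."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode("utf-8")
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"]["signatureValue"])

credential = sign_credential({"id": "https://idp.example.com/i/alice",
                              "email": "alice@example.com"})
print(verify_credential(credential))  # True

# Tampering with a claim invalidates the signature:
tampered = {"claim": {**credential["claim"], "email": "mallory@example.com"},
            "signature": credential["signature"]}
print(verify_credential(tampered))    # False
```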

Now that you have an email credential, you can use it to log into a website. The next demonstration will use the email credential to log into a credential issuer website.

Credential-based Login

Most websites will only require an email credential to log in. There are other sites, such as ecommerce sites or high-security websites, that will require a few more credentials to successfully log in or use their services. For example, an ecommerce site might require your payment processor and shipping address to send you the goods you purchased. A website that sells local wines might request that you provide a credential proving that you are above the required drinking age in your locality. A travel website might request your digital passport to ease your security clearing process if you are traveling internationally. There are many more types of specialty credentials that one may issue and use via the Identity Credentials technology. The next demo will entail issuing some of these credentials to yourself. However, before we do that, we have to log in to the credential issuer website using our newly acquired email credential.

Go to the credential issuer website and click the “Login” button. This will immediately send you to the login hub website where you previously registered your identity. The request sent to the login hub will effectively be a request for your email credential. Once you’re on the login hub, enter your preferred email address and passphrase and then click “Login”.

While you were entering your email address and passphrase, the page connected to the Telehash network and readied itself to send a query. When you click “Login”, your email address and passphrase are SHA-256’d and sent as a query to the Telehash network. Your identity provider will receive the request and respond to the query with an encrypted message that will then be decrypted using your passphrase. The contents of that message will tell the login hub where your identity provider is holding your identity. The request for the email credential is then forwarded to your identity provider. Note that at this point your identity provider has no idea where the request for your email credential is coming from because it is masked by the login hub website. This masking process protects your privacy.
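The decrypt-with-passphrase step in that flow can be illustrated with a toy symmetric cipher. The demo's actual encryption scheme is not described here, so the SHA-256-derived XOR keystream below is purely an illustrative assumption; what it demonstrates is that the directory's query response is opaque to everyone except the holder of the passphrase.

```python
import hashlib

def keystream(passphrase: str, n: int) -> bytes:
    # Deterministic keystream derived from the passphrase; a real system
    # would use a vetted KDF and cipher, this is only for illustration.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(f"{passphrase}:{counter}".encode("utf-8")).digest()
        counter += 1
    return out[:n]

def encrypt(passphrase: str, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(passphrase, len(data))))

decrypt = encrypt  # XOR is its own inverse

# At registration, the identity provider stores this encrypted pointer
# under the user's query token:
mapping = encrypt("open sesame", b"https://idp.example.com/i/alice")

# At login, the hub receives the query response and decrypts it with the
# passphrase the user just typed, revealing the identity provider's location:
idp_url = decrypt("open sesame", mapping)
print(idp_url)  # b'https://idp.example.com/i/alice'
```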

Once the request for your email credential is received by your identity provider, a response is packaged up and sent back to the login hub, which then relays that information back to the credential issuer website. Once the credential issuer receives your email credential, it will log you into the website. Note that at this point you didn’t have to enter a single password on the credential issuer’s website; all you needed was an email credential to log in. Now that you have logged in, you can start issuing additional credentials to yourself.

Issuing Additional Credentials

The previous section introduced the notion that you can issue many different types of credentials. Once you have logged in to the credential issuer website, you may issue a few of these credentials to yourself. Since this is a demonstration, no attempt will be made to have the credentials verified by a 3rd party. The credentials that you can issue to yourself include a business address, proof of age, payment processor, driver’s license, and passport. You may enter any information you’d like in the input fields to see how the credential would look if it held real data.

Once you have filled out the various fields, click the blue button to issue the credential. The credential will be digitally signed and sent to your identity provider, which will then show you the credential that was issued to you. You have a choice to accept or reject the credential. If you accept the credential, it is written to your identity.

You may repeat this process as many times as you would like. Note how the passport credential has an issued-on date as well as an expiration date, demonstrating that credentials can have a time limit associated with them.

Known Issues

As mentioned throughout this post, this demonstration has a number of shortcomings and areas that need improvement, among them are:

  • Due to a lack of time, we didn’t set up our own HTTPS Telehash seed, which meant we couldn’t serve the demo over TLS due to security settings in most web browsers related to WebSocket connections. Not using TLS leaves a gigantic man-in-the-middle attack possibility. A future version will, of course, use both TLS and HSTS on all of the websites.
  • The Telehash query/response database isn’t decentralized yet. There are a number of complexities associated with creating a decentralized storage/query network, and we haven’t decided on what the proper approach should be. There is no reason why the decentralized database couldn’t be NameCoin or Ripple-based, and it would probably be good if we had multiple backend databases that supported the same query/response protocol.
  • We don’t check digital signatures yet, but will soon. We were focused on the flow of data first and ensuring security parameters were correct second. Clearly, you would never want to run such a system in production, but we will improve it such that all digital signatures are verified.
  • We do not yet use the public/private keypair generated in the browser to limit the domain and validity length of credentials. When the system is productionized, implementing this will be a requirement and will protect you even if your credentials are stolen through a phishing attack on the login hub.
  • We expect there to be many more security vulnerabilities that we haven’t detected yet. That said, we do believe that there are no major design flaws in the system and are releasing the proof-of-concept, along with source code, to the general public for feedback.

Feedback and Future Work

If you have any questions or concerns about this particular demo, please leave them as comments on this blog post or send them as comments to the mailing list.

Just as you logged in to the website using your email credential, you may also use other credentials such as your driver’s license or passport to log in to websites. Future work on this demo will add functionality to demonstrate the use of other forms of credentials to perform logins while also addressing the security issues outlined in the previous section.

The Marathonic Dawn of Web Payments

A little over six years ago, a group of doe-eyed Web developers, technologists, and economists decided that the way we send and receive money over the Web was fundamentally broken and needed to be fixed. The tiring dance of filling out your personal details on every website you visited seemed archaic. This was especially true when handing over your debit card number, which is basically a password into your bank account, to any fly-by-night operation that had something you wanted to buy. It took days to send money where an email would take milliseconds. Even with the advent of Bitcoin, not much has changed since 2007.

At the time, we naively thought that it wouldn’t take long for the technology industry to catch on to this problem and address it like they’ve addressed many of the other issues around publishing and communication over the Web. After all, getting paid and paying for services is something all of us do as a fundamental part of modern day living. Change didn’t come as fast as we had hoped. So we kept our heads down and worked for years gathering momentum to address this issue on the Web. I’m happy to say that we’ve just had a breakthrough.

The first ever W3C Web Payments Workshop happened two weeks ago. It was a success. Through it, we have taken significant steps toward a better future for the Web and those that make a living by using it. This is the story of how we got from there to here, what the near future looks like, and the broad implications this work has for the Web.

TL;DR: The W3C Web Payments Workshop was a success, we’re moving toward standardizing some technologies around the way we send and receive money on the Web; join the Web Payments Community Group if you want to find out more.

Primordial Web Payment Soup

In late 2007, our merry little band of collaborators started piecing together bits of the existing Web platform in an attempt to come up with something that could be standardized. After a while, it became painfully obvious that the Web Platform was missing some fundamental markup and security technologies. For example, there was no standard machine-readable or automate-able way of describing an item for sale on the Web. This meant that search engines couldn’t index all the things on the Web that are offered for sale. It also meant that all purchasing decisions had to be made by people. You couldn’t tell your Web browser something like “I trust the New York Times, let them charge me $0.05 per article up to $10 per month for access to their website”. Linked Data seemed like the right solution for machine-readable products, but the Linked Data technologies at the time seemed mired in complex, draconian solutions (SOAP, XML, XHTML, etc.): the bane of most Web Developers.
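A machine-readable version of that "New York Times" rule could be as simple as a small structured policy that a browser agent evaluates on your behalf. The field names below are invented for illustration and are not drawn from any actual PaySwarm vocabulary:

```python
# Hypothetical machine-readable spending rule; every field name here is
# invented for illustration, not part of a real standard.
spending_policy = {
    "trustedMerchant": "https://www.nytimes.com/",
    "maxPricePerItem": 0.05,    # USD per article
    "maxSpendPerMonth": 10.00,  # USD
}

def authorize(policy: dict, merchant: str, price: float, spent_this_month: float) -> bool:
    """Decide a purchase automatically, the way a browser agent could."""
    return (merchant == policy["trustedMerchant"]
            and price <= policy["maxPricePerItem"]
            and spent_this_month + price <= policy["maxSpendPerMonth"])

print(authorize(spending_policy, "https://www.nytimes.com/", 0.05, 9.90))  # True
print(authorize(spending_policy, "https://www.nytimes.com/", 0.05, 9.96))  # False
```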

We became involved in the Microformats community and in the creation of technologies like RDFa in the hope that we could apply it to the Web Payments work. When it became apparent that RDFa was only going to solve part of the problem (and potentially produce a new set of problems), we created JSON-LD and started to standardize it through the W3C.

As these technologies started to grow out of the need to support payments on the Web, it became apparent that we needed to get more people from the general public, government, policy, traditional finance, and technology sectors involved.

Founding a Payment Incubator for the Web

We needed to build a movement around the Web Payments work and the founding of a community was the first step in that movement. In 2009, we founded the PaySwarm Community and worked on the technologies related to payments on the Web with a handful of individuals. In 2011, we transitioned the PaySwarm Community to the W3C and renamed the group to the Web Payments Community Group. To be clear, Community Groups at W3C are never officially sanctioned by W3C’s membership, but they are where most of the pre-standardization work happens. The purpose of the Web Payments Community Group was to incubate payment technologies and lobby W3C to start official standardization work related to how we exchange monetary value on the Web.

What started out as nine people spread across the world has grown into an active community of more than 150 people today. That community includes interesting organizations like Bloomberg, Mozilla, Stripe, Yandex, Ripple Labs, Citigroup, Opera, Joyent, and Telefónica. We have 14 technologies that are in the pre-standardization phase, ready to be placed into the standardization pipeline at W3C if we can get enough support from Web developers and the W3C member organizations.


In 2013, a number of us thought there was enough momentum to lobby W3C to hold the world’s first Web Payments Workshop. The purpose of the workshop would be to get major payment providers, government organizations, telecommunication providers, Web technologists, and policy makers into the same room to see if they thought that payments on the Web were broken and to see if people in the room thought that there was something that we could do about it.

In November of 2013, plans were hatched to hold the world’s first Web Payments Workshop. Over the next several months, the W3C, the Web Payments Workshop Program Committee, and the Web Payments Community Group worked to bring together as many major players as possible. The result was something better than we could have hoped for.

The Web Payments Workshop

In March 2014, the Web Payments Workshop was held in the beautiful, historic, and apropos Paris stock exchange, the Palais Brongniart. It was packed with an all-star list of financial and technology industry titans like the US Federal Reserve, Google, SWIFT, Yandex, Mozilla, Bloomberg, ISOC, Rabobank, and 103 other people and organizations that shape financial and Web standards. In true W3C form, every single session was minuted and is available to the public. The sessions focused on the following key areas related to payments and the Web. The entire contents of each session, all 14 hours of discussion, are linked to below:

  1. Introductions by W3C and European Commission
  2. Overview of Current and Future Payment Ecosystems
  3. Toward an Ideal Web Payments Experience
  4. Back End: Banks, Regulation, and Future Clearing
  5. Enhancing the Customer and Merchant Experience
  6. Front End: Wallets – Initiating Payment and Digital Receipts
  7. Identity, Security, and Privacy
  8. Wrap-up of Workshop and Next Steps

I’m not going to do any sort of deep dive into what happened during the workshop. W3C has released a workshop report that does justice to summarizing what went on during the event. The rest of this blog post will focus on what will most likely happen if we continue down the path we’ve started on with regard to Web Payments at W3C.

The Next Year in Web Payments

The next step of the W3C process is to convene an official group that will take all of the raw input from the Web Payments Workshop, the papers submitted to the event, input from various W3C Community Groups and from the industry at large, and reduce the scope of work down to something that is narrowly focused but will have a very large series of positive impacts on the Web.

This group will most likely operate for 6-12 months to make its initial set of recommendations for work that should start immediately in existing W3C Working Groups. It may also recommend that entirely new groups be formed at W3C to start standardization work. Once standardization work starts, it will be another 3-4 years before we see an official Web standard. While that sounds like a long time, keep in mind that large chunks of the work will happen in parallel, or have already happened. For example, the first iteration of the RDFa and JSON-LD bits of the Web Payments work are already done and standardized. The HTTP Signatures work is quite far along from a technical standpoint, though it still needs a thorough security review and consensus to move forward.

So, what kind of new work can we expect to get started at W3C? While nothing is certain, looking at the 14 pre-standards documents that the Web Payments Community Group is working on helps us understand where the future might take us. The payment problems of highest concern mentioned in the workshop papers also hint at the sorts of issues that need to be addressed for payments on the Web. Below are a few ideas of what may spin out of the work over the next year. Keep in mind that these predictions are mine and mine alone; they are in no way tied to any sort of official consensus at either the W3C or the Web Payments Community Group.

Identity and Verified Credentials

One of the most fundamental problems that was raised at the workshop was the idea that identity on the Web is broken. That is, being able to prove who you are to a website, such as a bank or merchant, is incredibly difficult. Since it’s hard for us to prove who we are on the Web, fraud levels are much higher than they should be, and peer-to-peer payments require a network of trusted intermediaries (which drives up the cost of even the simplest transaction).

The Web Payments Community Group is currently working on technology called Identity Credentials that could be applied to this problem. It’s also closely related to the website login problem that Mozilla Persona was attempting to solve. Security and privacy concerns abound in this area, so we have to make sure to carefully design for those concerns. We need a privacy-conscious identity solution for the Web, and it’s possible that a new Working Group may need to be created to push forward initiatives like credential-based login for the Web. I personally think it would be unwise for W3C members to put off the creation of an Identity Working Group for much longer.
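
As a rough sketch, a verified credential in this model might look something like the JSON-LD document below. Note that the context URL, credential type, signature suite name, and all values here are illustrative placeholders, not terms taken from the Identity Credentials specification:

```json
{
  "@context": "https://example.org/identity/v1",
  "id": "https://issuer.example.gov/credentials/3732",
  "type": "ExampleGovernmentIdCredential",
  "claim": {
    "id": "https://example.com/people/alice",
    "name": "Alice Example",
    "birthDate": "1984-11-15"
  },
  "signature": {
    "type": "ExampleSignature2014",
    "creator": "https://issuer.example.gov/keys/27",
    "signatureValue": "InJhbmRvbSBpbGx1c3RyYXRpdmUgYnl0ZXMi"
  }
}
```

The important idea is that an issuer (here, a hypothetical government site) digitally signs a set of claims about a person, and any website can verify the signature without contacting a central intermediary.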

Wallets, Payment Initiation, and Digital Receipts

Another agreement that seemed to come out of the workshop was the belief that we need to create a level playing field for payments while also not attempting to standardize one payment solution for the Web. The desire was to standardize on the bare minimum necessary to make it so that websites only needed a few ways to initiate payments and receive confirmation for them. The ideal case was that your browser or wallet software would pick the best payment option for you based on your needs (best protection, fastest payment confirmation, lowest fees, etc.).

Digital wallets that hold different payment mechanisms, loyalty cards, personal data, and receipts were discussed. Unfortunately, the scope of a wallet’s functionality was not clear. Would a wallet consist of a browser-based API? Would it be cloud-based? Both? How would you sync data between wallets on different devices? What sort of functionality would be the bare minimum? These are questions that the upcoming W3C Payments Interest Group should answer. The desired outcome, however, seemed to be fairly concrete: provide a way for people to do a one-click purchase on any website without having to hand over all of their personal information. Make it easy for Web developers to integrate this functionality into websites using a standards-based approach.

Shifting to a Bitcoin-like protocol seemed to be a non-starter for almost everyone in the room; however, the idea that we could create Bitcoin/USD/Euro wallets that could initiate payment and provide a digital receipt proving that funds were moved seemed to be one possible implementation target. This would allow Visa, Mastercard, PayPal, Bitcoin, and banks to avoid reinventing their entire payment networks in order to support simple one-click purchases on the Web. The Web Payments Community Group does have a Web Commerce API specification and a Web Commerce protocol that cover this area, but they may need to be modified or expanded based on the outcome of the “What is a digital wallet and what does it do?” discussion.

Everything Else

The three major areas where it seemed like work could start at W3C revolved around verified identity, payment initiation, and digital receipts. In order to achieve those broad goals, we’re also going to have to work on some other primitives for the Web.

For example, JSON-LD was mentioned a number of times as the digital receipt format. If JSON-LD is going to fill that role, we will need a way of digitally signing those receipts. JOSE is one approach, Secure Messaging is another, and there is currently a debate over which is best suited to digitally signing JSON-LD data.

If we are going to have digital receipts, then what goes into those receipts? How are we going to express the goods and services that someone bought in an interoperable way? We need something like the product ontology to help us describe the supply and demand for products and services on the Web.
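
Tying these pieces together, a digital receipt might be sketched in JSON-LD along the following lines. Every URL and vocabulary term below is an illustrative placeholder rather than a term from an actual specification:

```json
{
  "@context": "https://example.org/commerce/v1",
  "type": "Receipt",
  "payer": "https://example.com/people/alice",
  "payee": "https://merchant.example.com/",
  "amount": "4.99",
  "currency": "USD",
  "asset": {
    "id": "https://merchant.example.com/albums/42",
    "title": "Example Album",
    "type": "MusicAlbum"
  },
  "created": "2014-05-01T15:03:21Z"
}
```

Because the receipt is Linked Data, the asset description could reuse terms from something like the product ontology, and the whole document could be digitally signed.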

If JSON-LD is going to be utilized, some work needs to be put into Web vocabularies related to commerce, identity, and security. If mobile-based NFC payment is a part of the story, we need to figure out how that’s going to fit into the bigger picture, and so on.

Make a Difference, Join us

As you can see, even if the payments scope is very narrow, there is still a great deal of work that needs to be done. The good news is that the narrow scope above would focus on concrete goals and implementations. We can measure progress for each one of those initiatives, so it seems like what’s listed above is quite achievable over the next few years.

There also seems to be broad support for addressing many of the most fundamental problems with payments on the Web. That’s why I’m calling this a breakthrough. For the first time, we have some broad agreement that something needs to be done and that W3C can play a major role in this work. That’s not to say that a W3C Payments Interest Group, if formed, couldn’t self-destruct for one reason or another, but based on the sensible discussion at the Web Payments Workshop, I wouldn’t bet on that outcome.

If the Web Payments work at W3C is successful, it means a more privacy-conscious, secure, and semantically rich Web for everyone. It also means it will be easier for you to make a living through the Web because the proper primitives to do things like one-click payments on the Web will finally be there. That said, it’s going to take a community effort. If you are a Web developer, designer, or technical writer, we need your help to make that happen.

If you want to become involved, or just learn more about the march toward Web Payments, join the Web Payments Community Group.

If you are a part of an organization that would specifically like to provide input to the Web Payments Steering Group Charter at W3C, join here.

Web Payments and the World Banking Conference

The standardization group for all of the banks in the world (SWIFT) was kind enough to invite me to speak about the Web Payments work at the W3C at the world’s premier banking conference. The conference, called SIBOS, happened last week and brings together 7,000+ people from banks and financial institutions around the world. This year, the event was held in Dubai. They wanted me to present on the new Web Payments work being done at the World Wide Web Consortium, including the work we’re doing with PaySwarm, Mozilla, the Bitcoin community, and Ripple Labs.

If you’ve never been to Dubai, I highly recommend visiting. It is a city of extremes. It contains a stunning density of award-winning skyscrapers, while vast expanses of desert loom just outside the city. Man-made islands dot the coastline, willed into shapes like a multi-mile-wide palm tree or massive lumps of stone, sand, steel, and glass resembling all of the countries of the world. I saw the largest in-mall aquarium in the world and ice skated in 105-degree weather. Poverty lines the outskirts of Dubai, while ATMs that vend gold can be found throughout the city. Lamborghinis, Ferraris, Maybachs, and Porsches roared down the densely packed highways while plants struggled to survive in the oppressive heat and humidity.

The extravagances nestle closely to the other extremes of Dubai: a history of indentured servitude, women’s rights issues, zero-tolerance drug possession laws, and political self-censorship of the media. In a way, it was the perfect location for the world’s premier banking conference. The capital it took to achieve everything that Dubai had to offer flowed through the banks represented at the conference at some point in time.

The Structure of the Conference

The conference was broken into two distinct areas. The more traditional banking side was on the conference floor and resembled what you’d expect of a well-established trade show. It was large, roughly the size of four football fields. Innotribe, the less-traditional and much hipper innovation track, was outside of the conference hall and focused on cutting-edge thinking, design, and new technologies. The banks are late to the technology game, but that’s to be expected in any industry that has a history that can be measured in centuries. Innotribe is trying to fix the problem of innovation in banking.


One of the most surprising things that I learned during the conference was the different classes of customers a bank has and which class is most profitable to the banks. Many people are under the false impression that the most valuable customer a bank can have is the one that walks into one of its branches and opens an account. In general, the technology industry tends to treat the individual customer as the primary motivator for everything that it does. This impression, with respect to the banking industry, was shattered when I heard the head of an international bank utter the following about banking branches: “80% of our customers add nothing but sand to our bottom line.” The banker was alluding to the perception that the most significant thing that customers bring into a banking branch is the sand on the bottom of their shoes. The implication is that most customers are not very profitable to banks and are thus not a top priority. This summarizes the general tone of the conference with respect to customers when it came to the managers of these financial institutions.

Fundamentally, a bank’s motives are not aligned with most of its customers’ needs because that’s not where it makes the majority of its money. Most of a bank’s revenue comes from activities like short-term lending, utilizing leverage against deposits, float-based leveraging, high-frequency trading, derivatives trading, and other financial exercises that are far removed from what most people in the world picture when they think of the type of activities one does at a bank.

For example, it has been possible to do realtime payments over the current banking network for a while now. The standards and technology exist to do so within the vast majority of the bank systems in use today. In fact, enabling this has been put to a vote in each of the last five years. Every time it has been up for a vote, the banks have voted against it. The banks make money on the day-to-day float against the transfers, so the longer it takes to complete a transfer, the more money the banks make.

I did hear a number of bankers state publicly that they cared about the customer experience and wanted to improve upon it. However, those statements rang pretty hollow when it came to the product focus on the show floor, which revolved around B2B software, high-frequency trading protocols, high net-value transactions, etc. There were a few customer-focused companies, but they were dwarfed by the size of the major banks and financial institutions in attendance at the conference.

The Standards Team

I was invited to the conference by two business units within SWIFT. The first was the innovation group inside of SWIFT, called Innotribe. The second was the core standards group at SWIFT. There are over 6,900 banks that participate in the SWIFT network. Their standards team is very big, many times larger than the W3C’s, and extremely well funded. The primary job of the standards team at SWIFT is to create standards that help their member companies exchange financial information with the minimum amount of friction. Their flagship product is a standard called ISO 20022, a 3,463-page document that outlines every sort of financial message that the SWIFT network supports today.

The SWIFT standards team is a very helpful group of people who are trying their hardest to pull their membership into the future. They fundamentally understand the work that we’re doing in the Web Payments group and are interested in participating more deeply. They know that technology is going to eventually disrupt their membership, and they want to make sure that there is a transition path, even if their membership would like to view new technologies like Bitcoin, PaySwarm, and Ripple as interesting corner cases.

In general, the banks don’t view technical excellence as a fundamental part of their success. Most view personal relationships as the fundamental thing that keeps their industry ticking. Most bankers come from an accounting background of some kind and don’t think of technology as something that can replace the sort of work that they do. This means that standards and new technologies almost always take a back seat to other more profitable endeavors such as implementing proprietary high frequency trading and derivatives trading platforms (as opposed to customer-facing systems like PaySwarm).

SWIFT’s primary customers are the banks, not the banks’ customers. Compare this with the primary customer of most Web-based organizations and the W3C, which is the individual. Since SWIFT is primarily designed to serve the banks, and banks make most of their money doing things like derivatives and high-frequency trading, there really is no champion for the customer in the banking organizations. This is why using your bank is a fairly awful experience. Speaking from a purely capitalistic standpoint, individuals that have less than a million dollars in deposits are not a priority.

Hobbled by Complexity

I met with over 30 large banks while I was at SIBOS and had a number of low-level discussions with their technology teams. The banking industry seems to be crippled by the complexity of its current systems. Minor upgrades cost millions of dollars due to the requirement to keep backwards compatibility. For example, at one point during the conference, it was explained that there was a proposal to make the last digit of an IBAN a particular value if the organization was not a bank. The push-back on the proposal was so great that it was never implemented, since it would cost thousands of banks several million dollars each to implement the feature. Many of the banks are still running systems as part of their core infrastructure that were created in the 1980s, written in COBOL or Fortran, and well past their initially intended lifecycles.

A bank’s legacy systems mean that they have a very hard time innovating on top of their current architecture, and it could be that launching a parallel financial systems architecture would be preferable to broadly interfacing with the banking systems in use today. Startups launching new core financial services are at a great advantage as long as they limit the number of places that they interface with these old technology infrastructures.

Commitment to Research and Development

The technology utilized in the banking industry is, from a technology industry point of view, archaic. For example, many of the high-frequency trading messages are short ASCII text strings that look like this:
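
For instance, a FIX-style new-order message, with `|` standing in for the unprintable ASCII SOH delimiter, looks roughly like the line below. The tag numbers (35 = message type, 55 = symbol, 44 = price, and so on) are real FIX tags, but this particular message is invented for illustration:

```
8=FIX.4.2|9=178|35=D|49=SENDER|56=TARGET|34=4|52=20141001-14:30:05|11=ORD10001|21=1|55=IBM|54=1|38=100|40=2|44=150.25|10=128
```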


Imagine anything like that being accepted as a core part of the Web. Messages are kept to very short sequences because they must be processed in less than 5 microseconds. There is no standard binary protocol, even for high-frequency trading. Many of the systems that are core to a bank’s infrastructure pre-date the Web, sometimes by more than a decade or two. At most major banking institutions, there is very little R&D investment into new models of value transfer like PaySwarm, Bitcoin, or Ripple. In a room of roughly 100 bank technology executives, when asked how many of them had an R&D or innovation team, only around 5% of the people in the room raised their hands.

Compare this with the technology industry, which devotes a significant portion of their revenue to R&D activities and tries to continually disrupt their industry through the creation of new technologies.

No Shared Infrastructure

The technology utilized in the banking industry is typically created and managed in-house. It is also highly fractured; the banks share the messaging data model, but that’s about it. The SWIFT data model is implemented over and over again by thousands of banks. There is no set of popular open source software that one can use to do banking, which means that almost every major bank writes their own software. There is a high degree of waste when it comes to technology re-use in the banking industry.

Compare this with how much of the technology industry shares in the development of core infrastructure like operating systems, Web servers, browsers, and open source software libraries. This sort of shared development model does not exist in the banking world and the negative effects of this lack of shared architecture are evident in almost every area of technology associated with the banking world.

Fear of Technology Companies

The banks are terrified of the thought of Google, Apple, or Amazon getting into the banking business. These technology companies have hundreds of millions of customers, deep brand trust, and have shown that they can build systems to handle complexity with relative ease. At one point it was said that if Apple, Google, or Amazon wanted to buy Visa, they could. Then in one fell swoop, one of these technology companies could remove one of the largest networks that banks rely on to move money in the retail space.

While all of the banks seemed to be terrified of being disrupted, there seemed to be very little interest in doing any sort of drastic change to their infrastructure. In many cases, the banks are just not equipped to deal with the Web. They tend to want to build everything internally and rarely acquire technology companies to improve their technology departments.

Relatively few of the bank executives I spoke with were able to carry on a fairly high-level conversation about things like Web technology. This suggests that it will still be some time until the financial industry can understand the sort of disruption that things like PaySwarm, Bitcoin, and Ripple could trigger. Many know that a large chunk of jobs is going to go away, but those same individuals either do not have the skill set to react to the change or are too busy with paying customers to focus on the coming disruption.

A Passing Interest in Disruptive Technologies

There was a tremendous amount of interest in Bitcoin, PaySwarm, and Ripple and how they could disrupt banking. However, much like the music industry, all but a few of the banks seemed to want to learn only how they could adopt or co-opt the technology. Many of the conversations ended with a general malaise related to technological disruption, with no real motivation to dig deeper lest they find something truly frightening. Most executives would express how nervous they were about competition from technology companies, but were not willing to make any deep technological changes that would undermine their current revenue streams. There were parallels between many bank executives I spoke with, the innovator’s dilemma, and how many of the music industry executives I had been involved with in the early 2000s reacted to the rise of Napster, peer-to-peer file trading networks, and digital music.

Many higher-level executives were dismissive about the sorts of lasting changes Web technologies could have on their core business, often to the point of being condescending when they spoke about technologies like Bitcoin, PaySwarm, and Ripple. Most arguments boiled down to the customer needing to trust some financial institution to carry out the transaction, demonstrating that they did not fundamentally understand the direction that technologies like Bitcoin and Ripple are headed.

Lessons Learned

We were able to get the message out about the sort of work that we’re doing at W3C when it comes to Web Payments and it was well received. I have already been asked to present at next year’s conference. There is a tremendous opportunity here for the technology sector to either help the banks move into the future, or to disrupt many of the services that have been seen as belonging to the more traditional financial institutions. There is also a big opportunity for the banks to seize the work that is being done in Web Payments, Bitcoin, and Ripple, and apply it to a number of the problems that they have today.

The trip was a big success in that the Web Payments group now has very deep ties into SWIFT, major banks, and other financial institutions. Many of the institutions expressed a strong desire to collaborate with the group on future Web Payments work. The financial institutions we spoke with thought that many of these technologies were 10 years away from affecting them, so there was no real sense of urgency to integrate the technology. I’d put the timeline closer to 3-4 years than 10. That said, there was general agreement that these technologies mattered. The lines of communication are now more open than they used to be between the traditional financial industry and the Web Payments group at W3C. That’s a big step in the right direction.

Interested in becoming a part of the Web Payments work, or just peeking in from time to time? It’s open to the public. Join here.

Linked Data Signatures vs. JavaScript Object Signing and Encryption

The Web Payments Community Group at the World Wide Web Consortium (W3C) is currently performing a thorough analysis of the MozPay API. The first part of the analysis examined the contents of the payment messages. This is the second part of the analysis, which focuses on whether the JavaScript Object Signing and Encryption (JOSE) group’s solutions to message security are adequate, or whether the Web Payments group’s solutions should be used instead.

The Contenders

The IETF JOSE Working Group is actively standardizing the following specifications for the purposes of adding message security to JSON:

JSON Web Algorithms (JWA)
Details the cryptographic algorithms and identifiers that are meant to be used with the JSON Web Signature (JWS), JSON Web Encryption (JWE), JSON Web Token (JWT), and JSON Web Key (JWK) specifications. For example, when specifying a signing algorithm, a JSON key/value pair that has alg as the key may have HS256 as the value, which means HMAC using the SHA-256 hash algorithm.
JSON Web Key (JWK)
Details a data structure that represents one or more cryptographic keys. If you need to express one of the many types of cryptographic key types in use today, this specification details how you do that in a standard way.
JSON Web Token (JWT)
Defines a way of representing claims such as “Bob was born on November 15th, 1984”. These claims are digitally signed and/or encrypted using either the JSON Web Signature (JWS) or JSON Web Encryption (JWE) specifications.
JSON Web Encryption (JWE)
Defines a way to express encrypted content using JSON-based data structures. Basically, if you want to encrypt JSON data so that only the intended receiver can read the data, this specification tells you how to do it in an interoperable way.
JSON Web Signature (JWS)
Defines a way to digitally sign JSON data structures. If your application needs to be able to verify the creator of a JSON data structure, you can use this specification to do so.
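
To make the JOSE approach concrete, here is a minimal sketch of JWS compact serialization with the HS256 algorithm, written against nothing but the Python standard library. It illustrates the general shape (base64url-encoded header, payload, and HMAC tag); it is not a conforming JOSE implementation:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JOSE uses base64url encoding with the trailing padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def jws_sign_hs256(claims: dict, key: bytes) -> str:
    # The protected header names the algorithm, per the JWA registry
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode("utf-8"))
        + "."
        + b64url(json.dumps(claims, separators=(",", ":")).encode("utf-8"))
    )
    # HS256 means HMAC using the SHA-256 hash over the signing input
    tag = hmac.new(key, signing_input.encode("ascii"), hashlib.sha256).digest()
    return signing_input + "." + b64url(tag)

# A claim like "Bob was born on November 15th, 1984", as in the JWT spec
token = jws_sign_hs256({"name": "Bob", "birthDate": "1984-11-15"}, b"shared-secret")
```

A verifier holding the same shared secret recomputes the HMAC over the first two segments and compares the result to the third.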

The W3C Web Payments group is actively standardizing a similar specification for the purpose of adding message security to JSON messages:

Linked Data Signatures (code named: HTTP Keys)
Describes a simple, decentralized security infrastructure for the Web based on JSON, Linked Data, and public key cryptography. This system enables Web applications to establish identities for agents on the Web, associate security credentials with those identities, and then use those security credentials to send and receive messages that are both encrypted and verifiable via digital signatures.

Both groups are relying on technology that has existed and been used for over a decade to achieve secure communications on the Internet (symmetric and asymmetric cryptography, public key infrastructure, X509 certificates, etc.). The key differences between the two have to do more with flexibility, implementation complexity, and how the data is published on the Web and used between systems.

Basic Differences

In general, the JOSE group is attempting to create a flexible, generalized way of expressing cryptography parameters in JSON. They then use that information to encrypt or sign specific data (called claims in the specifications).

The Web Payments group’s specification achieves the same thing while not trying to be as generalized as the JOSE work. Flexibility and generalization tend to 1) make the ecosystem more complex than it needs to be for 95% of the use cases, 2) make implementations harder to security audit, and 3) make it more difficult to achieve interoperability between all implementations. The Linked Data Signatures specification attempts to outline a single best practice that will work for 95% of the applications out there. The 5% of Web applications that need more than the Linked Data Signatures spec provides can use the JOSE specifications. The Linked Data Signatures specification is also more Web-y, which gives us a number of benefits, such as a Web-scale public key infrastructure as a pleasant side effect, that we will get into below.

JSON-LD Advantages over JSON

Fundamentally, the Linked Data Signatures specification relies on the Web and Linked Data to remove some of the complexity that exists in the JOSE specs while also achieving greater flexibility from a data model perspective. Specifically, the Linked Data Signatures specification utilizes Linked Data via a new standards-track technology called JSON-LD to allow anyone to build on top of the core protocol in a decentralized way. JSON-LD data is fundamentally more Web-y than JSON data. Here are the benefits of using JSON-LD over regular JSON:

  • A universal identifier mechanism for JSON objects via the use of URLs.
  • A way to disambiguate JSON keys shared among different JSON documents by mapping them to URLs via a context.
  • A standard mechanism in which a value in a JSON object may refer to a JSON object on a different document or site on the Web.
  • A way to associate datatypes with values such as dates and times.
  • The ability to annotate strings with their language. For example, the word ‘chat’ means something different in English and French and it helps to know which language was used when expressing the text.
  • A facility to express one or more directed graphs, such as a social network, in a single document. Graphs are the native data structure of the Web.
  • A standard way to map external JSON application data to your application data domain.
  • A deterministic way to generate a hash on JSON data, which is helpful when attempting to figure out if two data sources are expressing the same information.
  • A standard way to digitally sign JSON data.
  • A deterministic way to merge JSON data from multiple data sources.

Plain old JSON, while incredibly useful, does not allow you to do the things mentioned above in a standard way. There is a valid argument that applications may not need this amount of flexibility, and for those applications, JSON-LD does not require any of the features above to be used and does not require the JSON data to be modified in any way. So people who want to remain in the plain ol’ JSON bucket can do so without having to jump into the JSON-LD bucket with both feet.
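
The deterministic-hashing point deserves a concrete illustration. Plain JSON has no standard canonical form; a common workaround is to serialize with sorted keys and fixed separators before hashing, as sketched below. JSON-LD's normalization algorithm goes further, since it canonicalizes the underlying graph rather than the JSON text, but the naive version conveys the idea:

```python
import hashlib
import json

def naive_json_hash(document: dict) -> str:
    # Sorted keys and fixed separators give a canonical byte string,
    # so documents that differ only in key order hash identically.
    canonical = json.dumps(document, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = {"name": "Alice", "homepage": "https://example.com/alice"}
b = {"homepage": "https://example.com/alice", "name": "Alice"}
assert naive_json_hash(a) == naive_json_hash(b)
```

This lets two data sources check whether they are expressing the same information by comparing a single digest instead of walking both documents.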

JSON Web Algorithms vs. Linked Data Signatures

The JSON Web Algorithms specification details the cryptographic algorithms and identifiers that are meant to be used with the JSON Web Signature (JWS), JSON Web Encryption (JWE), JSON Web Token (JWT), and JSON Web Key (JWK) specifications. For example, when specifying a signing algorithm, a JSON key/value pair that has alg as the key may have HS256 as the value, which means HMAC using the SHA-256 hash algorithm. The specification is 70 pages long and is effectively just a collection of the values that are allowed for each key used in JOSE-based JSON documents. The design approach taken for the JOSE specifications requires that such a document exist.

The Linked Data Signatures specification takes a different approach. Rather than declare all of the popular algorithms and cryptography schemes in use today, it defines just one digital signature scheme (RSA signatures with SHA-256 hashing), one encryption scheme (128-bit AES with cipher block chaining), and one way of expressing keys (as PEM-formatted data). If placed into a single specification, like the JWA spec, it would be just a few pages long (really, just 1 page of actual content).

The most common argument against the Linked Data Signatures spec, with respect to the JWA specification, is that it lacks the cryptographic algorithm agility that the JWA specification provides. While this may seem like a valid argument on the surface, keep in mind that the core algorithms used by the Linked Data Signatures specification can be changed at any point to any other set of algorithms. So, the specification achieves algorithm agility while greatly reducing the need for a large 70-page specification detailing the allowable values for the various cryptographic algorithms. The other benefit is that since the cryptography parameters are outlined in a Linked Data vocabulary, instead of a process-heavy specification, they can be added to at any point as long as there is community consensus. Note that while the vocabulary can be extended, thus providing algorithm agility if a particular cryptography scheme is weakened or broken, cryptography schemes already defined in the vocabulary must not be changed once they become widely used, to ensure that production deployments using the older mechanism aren’t broken.

Providing just one way, the best practice at the time, to do digital signatures, encryption, and key publishing reduces implementation complexity. Reducing implementation complexity makes it easier to perform security audits on implementations. Reducing implementation complexity also helps ensure better interoperability and more software library implementations, as the barrier to creating a fully conforming implementation is greatly reduced.

The Web Payments group believes that new digital signature and encryption schemes will have to be updated every 5-7 years. It is better to delay the decision to switch to another primary algorithm for as long as possible (and as long as it is safe to do so). Delaying the cryptographic algorithm decision ensures that the group will be able to make a more educated decision than it would by attempting to predict which cryptographic algorithms may be the successors to currently deployed algorithms.

Bottom line: The Linked Data Signatures specification utilizes a much simpler approach than the JWA specification while supporting the same level of algorithm agility.

JSON Web Key vs. Linked Data Signatures

The JSON Web Key (JWK) specification details a data structure that is capable of representing one or more cryptographic keys. If you need to express one of the many types of cryptographic keys in use today, JWK details how to do that in a standard way. A typical RSA public key looks like the following using the JWK specification:

  {
    "keys": [{
      "n": "0vx7agoe ... DKgw",
      ...
    }]
  }

A similar RSA public key looks like the following using the Linked Data Signatures specification:

  {
    "@context": "",
    "@id": "",
    "@type": "Key",
    "owner": "",
    "publicKeyPem": "-----BEGIN PUBLIC KEY-----\nMIIBG0BA...OClDQAB\n-----END PUBLIC KEY-----\n"
  }

There are a number of differences between the two key formats. Specifically:

  1. The JWK format expresses key information by specifying the key parameters directly. The Linked Data Signatures format places all of the key parameters into a PEM-encoded blob. This approach was taken because it is easier for developers to use the PEM data without introducing errors. Since most Web developers do not understand what variables like dq (the second factor Chinese Remainder Theorem exponent parameter) or d (the Elliptic Curve private key parameter) are, the likelihood of transporting and publishing that sort of data without error is lower than it is when all parameters are placed in an opaque blob of information that has a clear beginning and end (-----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY-----).
  2. In the general case, the Linked Data Signatures key format assigns URL identifiers to keys and publishes them on the Web as JSON-LD, and optionally as RDFa. This means that public key information is discoverable and both human- and machine-readable by default, which means that all of the key parameters can be read from the Web. The JWK mechanism does assign a key ID to keys, but does not require that they be published to the Web if they are to be used in message exchanges. The JWK specification could be extended to enable this, but by default, doesn’t provide this functionality.
  3. The Linked Data Signatures format is also capable of specifying an identity that owns the key, which allows a key to be tied to an identity and that identity to be used for things like access control to Web resources and REST APIs. The JWK format has no such mechanism outlined in the specification.

Bottom line: The Linked Data Signatures specification provides four major advantages over the JWK format: 1) the key information is expressed at a higher level, which makes it easier to work with for Web developers, 2) it allows key information to be discovered by dereferencing the key ID, 3) the key information can be published (and extended) in a variety of Linked Data formats, and 4) it provides the ability to assign ownership information to keys.

JSON Web Tokens vs. Linked Data Signatures

The JSON Web Tokens (JWT) specification defines a way of representing claims such as “Bob was born on November 15th, 1984”. These claims are digitally signed and/or encrypted using either the JSON Web Signature (JWS) or JSON Web Encryption (JWE) specifications. Here is an example of a JWT document:

  {
    "iss": "joe",
    "exp": 1300819380,
    "": true
  }

JWT documents contain claim names that are public, such as iss and exp above, and claim names that are private (which could conflict with claim names defined by the JWT specification or by other applications). The data format is fairly free-form, meaning that any data can be placed inside a JWT Claims Set like the one above.

Since the Linked Data Signatures specification utilizes JSON-LD for its data expression mechanism, it takes a fundamentally different approach. There are no headers or claims sets in the Linked Data Signatures specification, just data. For example, the data below is effectively a JWT claims set expressed in JSON-LD:

  {
    "@context": "",
    "@type": "Person",
    "name": "Manu Sporny",
    "gender": "male",
    "homepage": ""
  }

Note that there are no keywords specific to the Linked Data Signatures specification, just keys that are mapped to URLs (to prevent collisions) and data. In JSON-LD, these keys and data are machine-interpretable in a standards-compliant manner (unlike JWT data), and can be merged with other data sources without the danger of data being overwritten or colliding with other application data.
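
The collision-avoidance idea can be sketched in a few lines of Python. This is a toy illustration only; real JSON-LD expansion is defined by the JSON-LD specification and handles datatypes, language tags, and nested structures. The schema.org mappings below are just example context entries:

```python
# A toy context: maps short, developer-friendly keys to globally
# unique URLs so that data from different sources cannot collide.
CONTEXT = {
    "name": "http://schema.org/name",
    "gender": "http://schema.org/gender",
    "homepage": "http://schema.org/url",
}

def expand(doc, context):
    """Replace each known short key with its globally unique URL;
    unknown keys pass through unchanged in this simplified sketch."""
    return {context.get(k, k): v for k, v in doc.items()}

person = {"name": "Manu Sporny", "gender": "male"}
expanded = expand(person, CONTEXT)
# → {"http://schema.org/name": "Manu Sporny", "http://schema.org/gender": "male"}
```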

Bottom line: The Linked Data Signatures specification’s use of a native Linked Data format removes the requirement for a specification like JWT. As far as the Linked Data Signatures specification is concerned, there is just data, which you can then digitally sign and encrypt. This makes the data easier to work with for Web developers as they can continue to use their application data as-is instead of attempting to restructure it into a JWT.

JSON Web Encryption vs. Linked Data Signatures

The JSON Web Encryption (JWE) specification defines a way to express encrypted content using JSON-based data structures. Basically, if you want to encrypt JSON data so that only the intended receiver can read the data, this specification tells you how to do it in an interoperable way. A JWE-encrypted message looks like this:

  {
    "protected": "eyJlbmMiOiJBMTI4Q0JDLUhTMjU2In0",
    "unprotected": {"jku": ""},
    "recipients": [{
      "header": { ... },
      "encrypted_key": "UGhIOgu ... MR4gp_A"
    }],
    "iv": "AxY8DCtDaGlsbGljb3RoZQ",
    "ciphertext": "KDlTtXchhZTGufMYmOYGS4HffxPSUrfmqCHXaI9wOGY",
    "tag": "Mz-VPPyU4RlcuYv1IwIvzw"
  }

To decrypt this information, an application would retrieve the private key identified by recipients[0].header and use it to decrypt the encrypted_key. It would then use the decrypted encrypted_key, the iv, the algorithm specified in the protected header, and the ciphertext to recover the original message.

For comparison purposes, a Linked Data Signatures encrypted message looks like this:

  {
    "@context": "",
    "@type": "EncryptedMessage2012",
    "data": "VTJGc2RH ... Fb009Cg==",
    "encryptionKey": "uATte ... HExjXQE=",
    "iv": "vcDU1eWTy8vVGhNOszREhSblFVqVnGpBUm0zMTRmcWtMrRX==",
    "publicKey": ""
  }

To decrypt this information, an application would use the private key associated with the publicKey to decrypt the encryptionKey and iv. It would then use the decrypted encryptionKey and iv to decrypt the value in data, retrieving the original message as a result.

The Linked Data Signatures encryption protocol is simpler than the JWE protocol for three major reasons:

  1. The @type of the message, EncryptedMessage2012, encapsulates all of the cryptographic algorithm information in a machine-readable way (that can also be hard-coded in implementations). The JWE specification utilizes the protected field to express the same sort of information, which is allowed to get far more complicated than the Linked Data Signatures equivalent, leading to more complexity.
  2. Key information is expressed in one entry, the publicKey entry, which is a link to a machine-readable document that can express not only the public key information, but who owns the key, the name of the key, creation and revocation dates for the key, as well as a number of other Linked Data values that result in a full-fledged Web-based PKI system. Not only is Linked Data Signatures encryption simpler than JWE, but it also enables many more types of extensibility.
  3. The key data is expressed in a PEM-encoded format, which is a base64-encoded blob of information. This approach was taken because it is easier for developers to use the data without introducing errors. Since most Web developers do not understand what variables like dq (the second factor Chinese Remainder Theorem exponent parameter) or d (the Elliptic Curve private key parameter) are, the likelihood of transporting and publishing that sort of data without error is lower than it is when all parameters are placed in an opaque blob of information that has a clear beginning and end (-----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY-----).

The rest of the entries in the JSON are typically required for the encryption method selected to secure the message. There is not a great deal of difference between the two specifications when it comes to the parameters that are needed for the encryption algorithm.

Bottom line: The major difference between the Linked Data Signatures and JWE specification has to do with how the encryption parameters are specified as well as how many of them there can be. The Linked Data Signatures specification expresses only one encryption mechanism and outlines the algorithms and keys external to the message, which leads to a reduction in complexity. The JWE specification allows many more types of encryption schemes to be used, at the expense of added complexity.

JSON Web Signatures vs. Linked Data Signatures

The JSON Web Signatures (JWS) specification defines a way to digitally sign JSON data structures. If your application needs to be able to verify the creator of a JSON data structure, you can use this specification to do so. A JWS digital signature looks like the following:

  {
    "payload": "eyJpc ... VlfQ",
    "signatures": [{
      "header": { ... },
      "signature": "cC4hi ... 77Rw"
    }]
  }

For the purposes of comparison, a Linked Data Signatures message and signature looks like the following:

  {
    "@context": ["", ""],
    "@type": "Person",
    "name": "Manu Sporny",
    "homepage": "",
    "signature": {
      "@type": "GraphSignature2012",
      "creator": "",
      "created": "2013-08-04T17:39:53Z",
      "signatureValue": "OGQzN ... IyZTk="
    }
  }

There are a number of stark differences between the two specifications when it comes to digital signatures:

  1. The Linked Data Signatures specification does not need to base-64 encode the payload being signed. This makes it easier for a developer to see (and work with) the data that was digitally signed. Debugging signed messages is also simplified as special tools to decode the payload are unnecessary.
  2. The Linked Data Signatures specification does not require any header parameters for the payload, which reduces the number of things that can go wrong when verifying digitally signed messages. One could argue that this also reduces flexibility. The counter-argument is that different signature schemes can always be switched in by just changing the @type of the signature.
  3. The signer’s public key is available via a URL. This means that, in general, all Linked Data Signatures signatures can be verified by dereferencing the creator URL and utilizing the published key data to verify the signature.
  4. The Linked Data Signatures specification depends on a normalization algorithm that is applied to the message. This algorithm is non-trivial, typically implemented behind a JSON-LD library .normalize() method call. JWS does not require data normalization. The trade-off is simplicity at the expense of requiring your data to always be encapsulated in the message. For example, the Linked Data Signatures specification is capable of pointing to a digital signature expressed in RDFa on a website using a URL. An application can then dereference that URL, convert the data to JSON-LD, and verify the digital signature. This mechanism is useful, for example, when you want to publish items for sale along with their prices on a Web page in a machine-readable way. This sort of use case is not achievable with the JWS specification; all data is required to be in the message. In other words, Linked Data Signatures performs a signature on information that could exist anywhere on the Web, whereas the JWS specification performs a signature on a string of text in a message.
  5. The JWS mechanism enables HMAC-based signatures while the Linked Data Signatures mechanism avoids the use of HMAC altogether, taking the position that shared secrets are typically a bad practice.
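
Point 1 above is easy to see in code: a JWS payload is opaque until it is base64url-decoded, which is why debugging signed messages needs extra tooling. A small Python sketch of the round-trip (the claim values are made up):

```python
import base64
import json

def b64url_decode(data: str) -> bytes:
    """Decode base64url data, restoring the padding JOSE strips off."""
    padding = "=" * (-len(data) % 4)
    return base64.urlsafe_b64decode(data + padding)

# Encoding a claims set the way a JWS payload is encoded makes it
# unreadable to a human scanning the message...
claims = {"iss": "joe", "admin": True}
payload = base64.urlsafe_b64encode(
    json.dumps(claims).encode("utf-8")).rstrip(b"=").decode("ascii")

# ...until it is decoded back into JSON.
assert json.loads(b64url_decode(payload)) == claims
```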

Bottom line: The Linked Data Signatures specification does not need to encode its payloads, but does require a rather complex normalization algorithm. It supports discovery of signature key data so that signatures can be verified using standard Web protocols. The JWS specification is more flexible from an algorithmic standpoint and simpler from a signature verification standpoint. The downside is that the only data input format must be from the message itself and can’t be from an external Linked Data source, like an HTML+RDFa web page listing items for sale.


The Linked Data Signatures and JOSE designs, while attempting to achieve the same basic goals, deviate in the approaches taken to accomplish those goals. The Linked Data Signatures specification leverages more of the Web with its use of a Linked Data format and URLs for identifying and verifying identity and keys. It also attempts to encapsulate a single best practice that will work for the vast majority of Web applications in use today. The JOSE specifications are more flexible in the type of cryptographic algorithms that can be used which results in more low-level primitives used in the protocol, increasing complexity for developers that must create interoperable JOSE-based applications.

From a specification size standpoint, the JOSE specs weigh in at 225 pages, while the Linked Data Signatures specification weighs in at around 20 pages. Page count is rarely a good way to compare specifications, and doesn’t always result in an apples-to-apples comparison. It does, however, give a general idea of the amount of text required to explain the details of each approach, and thus a ballpark idea of the complexity associated with each specification. Like all specifications, picking one depends on the use cases that an application is attempting to support. The goal with the Linked Data Signatures specification is that it will be good enough for 95% of Web developers out there; for the remaining 5%, there is the JOSE stack.

[Editor’s Note: The original text of this blog post contained the phrase “Secure Messaging”, which has since been rebranded to “Linked Data Signatures”.]

Technical Analysis of 2012 MozPay API Message Format

The W3C Web Payments group is currently analyzing a new API for performing payments via web browsers and other devices connected to the web. This blog post is a technical analysis of the MozPay API with a specific focus on the payment protocol and its use of JOSE (JSON Object Signing and Encryption). The first part of the analysis takes the approach of examining the data structures used today in the MozPay API and compares them against what is possible via PaySwarm. The second part of the analysis examines the use of JOSE to achieve the use case and security requirements of the MozPay API and compares the solution to JSON-LD, which is the mechanism used to achieve the use case and security requirements of the PaySwarm specification.

Before we start, it’s useful to have an example of what the current MozPay payment initiation message looks like. This message is generated by a MozPay Payment Provider and given to the browser to initiate a native purchase process:

  {
    "aud": "",
    "typ": "mozilla/payments/pay/v1",
    "iat": 1337357297,
    "exp": 1337360897,
    "request": {
      "id": "915c07fc-87df-46e5-9513-45cb6e504e39",
      "pricePoint": 1,
      "name": "Magical Unicorn",
      "description": "Adventure Game item",
      "icons": {
        "64": "",
        "128": ""
      },
      "productData": "user_id=1234&my_session_id=XYZ",
      "postbackURL": "",
      "chargebackURL": ""
    }
  }

The message is effectively a JSON Web Token. I say effectively because it seems like it breaks the JWT spec in subtle ways, but it may be that I’m misreading the JWT spec.

There are a number of issues with the message that we’ve had to deal with when creating the set of PaySwarm specifications. It’s important that we call those issues out first to get an understanding of the basic concerns with the MozPay API as it stands today. The comments below use the JWT code above as a reference point.

Unnecessarily Cryptic JSON Keys

  "aud": "",
  "typ": "mozilla/payments/pay/v1",
  "iat": 1337357297,
  "exp": 1337360897,

This is more of an issue with the JOSE specs than it is with the MozPay API. I can’t think of a good argument for shortening things like ‘issuer’ to ‘iss’ and ‘type’ to ‘typ’ (seriously :), the ‘e’ was too much?). It comes off as 1980s protocol design, trying to save bits on the wire. Making code less readable by trying to save characters in a human-readable message format works against the notion that the format should be readable by a human. I had to look up what iss, aud, iat, and exp meant. The only reason that I could come up with for using such terse entries was that the JOSE designers were attempting to avoid conflicts with existing data in JWT claims objects. If this was the case, they should have used a prefix like “@” or “$”, or placed the data in a container value associated with a key like ‘claims’.

PaySwarm always attempts to use terminology that doesn’t require you to go and look at the specification to figure out basic things. For example, it uses creator for iss (issuer), validFrom for iat (issued at), and validUntil for exp (expire time).



The MozPay API specification does not require the APPLICATION_KEY to be a URL. Since it’s not a URL, it’s not discoverable. The application key is also specific to each Marketplace, which means that one Marketplace could use a UUID, another could use a URL, and so on. If the system is intended to be decentralized and interoperable, the APPLICATION_KEY should either be dereferenceable on the public Web without coordination with any particular entity, or a format for the key should be outlined in the specification.

All identities and keys used in digital signatures in PaySwarm use URLs for the identifiers that must contain key information in some sort of machine-readable format (RDFa and JSON-LD, for now). This means that 1) they’re Web-native, 2) they can be dereferenced, and 3) when they’re dereferenced, a client can extract useful data from the document retrieved.


  "aud": "",

It’s not clear what the aud parameter is used for in the MozPay API, other than to identify the marketplace.

Issued At and Expiration Time

  "iat": 1337357297,
  "exp": 1337360897,

The iat (issued at) and exp (expiration time) values are encoded as the number of seconds since January 1st, 1970. These values are not very human-readable and make debugging issues with purchases more difficult than it needs to be.

PaySwarm uses the W3C Date/Time format, which consists of human-readable strings that are also easy for machines to process. For example, November 5th, 2013 at 1:15:30 PM (Zulu / Universal Time) is encoded as: 2013-11-05T13:15:30Z.
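
For illustration, converting the iat value from the example above into the W3C format takes one call in Python (this helper is a sketch for debugging purposes, not part of either specification):

```python
from datetime import datetime, timezone

def to_w3c_datetime(epoch_seconds: int) -> str:
    """Render a Unix timestamp in the W3C date/time format."""
    dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%SZ")

# The opaque iat value becomes something a human can debug:
print(to_w3c_datetime(1337357297))  # → 2012-05-18T16:08:17Z
```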

The Request

  "request": {
    "id": "915c07fc-87df-46e5-9513-45cb6e504e39",
    "pricePoint": 1,
    "name": "Magical Unicorn",

This object in the MozPay API is a description of the thing that is to be sold. Technically, it’s not really a request; the outer object is the request. There is a bit of a conflation of terminology here that should probably be fixed at some point.

In PaySwarm, the contents of the MozPay request value is called an Asset. An asset is a description of the thing that is to be sold.

Request ID

  "request": {
    "id": "915c07fc-87df-46e5-9513-45cb6e504e39",

The MozPay API encodes the request ID as a universally unique identifier (UUID). The major downside to this approach is that other applications can’t find the information on the Web to 1) discover more about the item being sold, 2) discuss the item being sold by referring to it by a universal ID, 3) feed it to a system that can read data published at the identifier address, and 4) index it for the purposes of searching.

The PaySwarm specifications use a URL as the identifier for assets and publish machine-readable data at the asset location so that other systems can discover more information about the item being sold, refer to the item in discussions (like reviews of the item), start a purchase by referencing the URL, and index the item such that it may be utilized in price-comparison and search engines.

Price Point

  "request": {
    "pricePoint": 1,

The pricePoint for the item being sold is currently a whole number. This is problematic because prices are usually decimal numbers with a fractional part and an associated currency.

PaySwarm publishes its pricing information in a currency-agnostic way that is compatible with all known monetary systems. Some of these systems include USD, EUR, JPY, RMB, Bitcoin, Brixton Pound, Bernal Bucks, Ven, and a variety of other alternative currencies. The amount is specified as a decimal with a fractional part, alongside a currency URL. A URL is utilized for the currency because PaySwarm allows arbitrary currencies to be created and managed external to the PaySwarm system.
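
A sketch of what a decimal-plus-currency price might look like in Python; the field names and currency URL here are hypothetical illustrations, not taken from the PaySwarm vocabulary:

```python
from decimal import Decimal

# A hypothetical price: a decimal amount plus a currency identified
# by URL, rather than a whole-number "price point".
price = {
    "amount": Decimal("0.05"),
    "currency": "https://example.com/currencies/usd",  # hypothetical URL
}

# Decimal arithmetic avoids binary floating-point surprises
# (0.1 + 0.2 != 0.3 with floats, but holds with Decimal):
total = price["amount"] * 3
assert total == Decimal("0.15")
```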

Icons

  "request": {
    "icons": {
      "64": "",
      "128": ""

Icon data is currently modeled in a way that is useful to developers by indexing the information as a square pixel size for the icon. This allows developers to access the data like so: icons.64 or icons.128. Values are image URLs, which is the right choice.

PaySwarm uses JSON-LD and can support this sort of data layout through a feature called data indexing. Another approach is to just have an array of objects for icons, which would allow us to include extended information about the icons. For example:

  "request": {
    "icon": [{"size": 64, "id": "", "label": "Magical Unicorn"}, ...]

Product Data

  "request": {
    "productData": "user_id=1234&my_session_id=XYZ",

If the payment technology we’re working on is going to be useful to society at large, we have to allow richer descriptions of products. For example, model numbers, rich markup descriptions, pictures, ratings, colors, and licensing terms are all important parts of a product description. The value needs to be larger than a 256-byte string and needs to support decentralized extensibility. For example, Home Depot should be able to list UPC numbers and internal reference numbers in the asset description, and the payment protocol should preserve that extra information, placing it into digital receipts.

PaySwarm uses JSON-LD and thus supports decentralized extensibility for product data. This means that any vendor may express information about the asset in JSON-LD and it will be preserved in all digital contracts and digital receipts. This allows the asset and digital receipt format to be used as a platform that can be built on top of by innovative retailers. It also increases data fidelity by allowing far more detailed markup of asset information than what is currently allowed via the MozPay API.

Postback URL

  "request": {
    "postbackURL": "",

The postback URL is a pretty universal concept among Web-based payment systems. The payment processor needs a URL endpoint that the result of the purchase can be sent to. The postback URL serves this purpose.

PaySwarm has a similar concept, but just lists it in the request URL as ‘callback’.

Chargeback URL

  "request": {
    "chargebackURL": ""

The chargeback URL is a URL endpoint that is called whenever a refund is issued for a purchased item. It’s not clear if the vendor has a say in whether or not this should be allowed for a particular item. For example, what happens when a purchase is performed for a physical good? Should chargebacks be easy to do for those sorts of items?

PaySwarm does not build chargebacks into the core protocol. It lets the merchant request the digital receipt of the sale to figure out if the sale has been invalidated. It seems like a good idea to have a notification mechanism built into the core protocol. We’ll need more discussion on this to figure out how to correctly handle vendor-approved refunds and customer-requested chargebacks.


There are a number of improvements that could be made to the basic MozPay API that would enable more use cases to be supported in the future while keeping the level of complexity close to what it currently is. The second part of this analysis will examine the JSON Object Signing and Encryption (JOSE) technology stack and determine if there is a simpler solution that could be leveraged to simplify the digital signature requirements set forth by the MozPay API.

[UPDATE: The second part of this analysis is now available]

Verifiable Messaging over HTTP

Problem: Figure out a simple way to enable a Web client or server to authenticate and authorize itself to do a REST API call. Do this in one HTTP round-trip.

There is a new specification that is making the rounds called HTTP Signatures. It enables a Web client or server to authenticate and authorize itself when doing a REST API call and only requires one HTTP round-trip to accomplish the feat. The meat of the spec is 5 pages long, and the technology is simple and awesome.

We’re working on this spec in the Web Payments group at the World Wide Web Consortium because it’s going to be a fundamental part of the payment architecture we’re building into the core of the Web. When you send money to or receive money from someone, you want to make sure that the transaction is secure. HTTP Signatures help to secure that financial transaction.

However, the really great thing about HTTP Signatures is that it can be applied anywhere password or OAuth-based authentication and authorization is used today. Passwords, and shared secrets in general, are increasingly becoming a problem on the Web. OAuth 2 sucks for a number of reasons. It’s time for something simpler and more powerful.

HTTP Signatures:

  1. Work over both HTTP and HTTPS. You don’t need to spend money on expensive SSL/TLS security certificates to use it.
  2. Protect messages sent over HTTP or HTTPS by digitally signing the contents, ensuring that the data cannot be tampered with in transit. In the case that HTTPS security is breached, it provides an additional layer of protection.
  3. Identify the signer and establish a certain level of authorization to perform actions over a REST API. It’s like OAuth, only way simpler.

When coupled with the Web Keys specification, HTTP Signatures:

  1. Provide a mechanism where the digital signature key does not need to be registered in advance with the server. The server can automatically discover the key from the message and determine what level of access the client should have.
  2. Enable a fully distributed Public Key Infrastructure for the Web. This opens up new ways to more securely communicate over the Web, which is timely considering the recent news concerning the PRISM surveillance program.
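
As a rough sketch of how the signing works, the draft builds a newline-separated “signing string” from the headers to be signed and then signs it. The Python below illustrates that construction with made-up header values and the (request-target) convention from later drafts; real deployments would typically sign with an RSA key discovered via Web Keys rather than the HMAC shown here:

```python
import base64
import hashlib
import hmac

def signing_string(method: str, path: str, headers: dict,
                   signed_headers: list) -> bytes:
    """Concatenate the headers to be signed, one per line, roughly as
    in the HTTP Signatures draft's signing-string construction."""
    lines = []
    for name in signed_headers:
        if name == "(request-target)":
            lines.append(f"(request-target): {method.lower()} {path}")
        else:
            lines.append(f"{name}: {headers[name]}")
    return "\n".join(lines).encode("utf-8")

# Hypothetical request headers:
headers = {"date": "Thu, 05 Jan 2014 21:31:40 GMT", "host": "example.com"}
msg = signing_string("GET", "/foo", headers, ["(request-target)", "date"])

# Signing with hmac-sha256 (shared-secret schemes exist in the draft,
# though public-key signatures are the usual choice on the open Web):
sig = base64.b64encode(hmac.new(b"secret", msg, hashlib.sha256).digest())
```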

If you’re interested in learning more about HTTP Signatures, the meat of the spec is 5 pages long and is a pretty quick read. You can also read (or listen to) the meeting notes where we discuss the HTTP Signatures spec a week ago, or today. If you want to keep up with how the spec is progressing, join the Web Payments mailing list.

Browser Payments 1.0

Kumar McMillan (Mozilla/FirefoxOS) and I (PaySwarm/Web Payments) have just published the first draft of the Browser Payments 1.0 API. The purpose of the spec is to establish a way to initiate payments from within the browser. It is currently a direct port of the mozPay API framework that is integrated into Firefox OS. It enables Web content to initiate payment or issue a refund for a product or service. Once implemented in the browser, a Web author may call the navigator.payment() function to initiate a payment.

This is work that we intend to pursue in the Web Payments Community Group at W3C. The work will eventually be turned over to a Web Payments Working Group at W3C, which we’re trying to kick-start at some point this year.

The current Browser Payments 1.0 spec can be read here:

The github repository for the spec is here:

Keep in mind that this is a very early draft of the spec. There are lots of prose issues as well as bugs that need to be sorted out. There are also a number of things that we need to discuss about the spec and how it fits into the larger Web ecosystem. Things like how it integrates with Persona and PaySwarm are still details that we need to suss out. There is a bug and issue tracker for the spec here:

The Mozilla guys will be on next week’s Web Payments telecon (Wednesday, 11am EST) for a Q/A session about this specification. Join us if you’re interested in payments in the browser. The call is open to the public, details about joining and listening in can be found here:

Aaron Swartz, PaySwarm, and Academic Journals

For those of you that haven’t heard yet, Aaron Swartz took his own life two days ago. Larry Lessig has a follow-up on one of the reasons he thinks led to his suicide (the threat of 50 years in jail over the JSTOR case).

I didn’t know Aaron at all. A large number of people that I deeply respect did, and have written about his life with great admiration. I, like most of you that have read the news, have done so while brewing a cauldron of mixed emotions. Saddened that someone that had achieved so much good in their life is no longer in this world. Angry that Aaron chose this ending. Sickened that this is the second recent suicide, Ilya’s being the first, involving a young technologist trying to make the world a better place for all of us. Afraid that other technologists like Aaron and Ilya will choose this path over persisting in their noble causes. Helpless. Helpless because this moment will pass, just like Ilya’s did, with no great change in the way our society deals with mental illness. With no great change, in what Aaron was fighting for, having been realized.

Nobody likes feeling helpless. I can’t mourn Aaron because I didn’t know him. I can mourn the idea of Aaron, of the things he stood for. While reading about what he stood for, several disconnected ideas kept rattling around in the back of my head:

  1. We’ve hit a point of ridiculousness in our society where people at HSBC who knowingly laundered money for drug cartels get away with it, while people like Aaron are labeled felons and face upwards of 50 years in jail for “stealing” academic articles – even after the publisher of said academic articles dropped the charges. MIT never dropped its charges.
  2. MIT should make it clear that he was not a felon or a criminal. MIT should posthumously pardon Aaron and commend him for his life’s work.
  3. The way we do peer-review and publish scientific research has to change.
  4. I want to stop reading about all of this, it’s heartbreaking. I want to do something about it – make something positive out of this mess.

Ideas, Floating

I was catching up on news this morning when the following floated past on Twitter:

clifflampe: It seems to me that the best way for we academics to honor Aaron Swartz’s memory is to frigging finally figure out open access publishing.

1Copenut: @clifflampe And finally implement a micropayment system like @manusporny’s #payswarm. I don’t want the paper-but I’ll pay for the stories.

1Copenut: @manusporny These new developments with #payswarm are a great advance. Is it workable with other backends like #Middleman or #Sinatra?

This was interesting because we have been talking about how PaySwarm could be applied to academic publishing for a while now. All the discussions to this point have been internal; we didn’t know whether anybody would make the connection between the infrastructure that PaySwarm provides and how it could be applied to academic journals. Academic publishing is up on our ideas board as a potential area where PaySwarm could be applied:

  • PaySwarm for peer-reviewed, academic publishing
    • Use the PaySwarm identity mechanism to establish trusted reviewer and author identities for peer review
    • Use the micropayment mechanism to fund research
    • Enable university-based group accounts for purchasing articles, or refunding researcher purchases

Journals as Necessary Evils

For those in academia, journals are often viewed as a necessary evil. They cost a fortune to subscribe to, farm out most of their work to academics who do it for free, and keep an iron grip on the scientific publication process. Most academics I speak with would do away with journal organizations in a heartbeat if there were a viable alternative. Most of the problem is political, which is why we haven’t felt compelled to pursue fixing it. Political problems often need a groundswell of support and a number of champions working inside the community. I think the groundswell is almost here. I don’t know who the academic champions are that will push this forward. And if nobody takes the initiative to build such a system, things won’t change.

Here’s what we (Digital Bazaar) have been thinking. To fix the problem, you need at least the following core features:

  • Web-scale identity mechanisms – so that you can identify reviewers and authors for the peer-review process regardless of which site is publishing or reviewing a paper.
  • Decentralized solution – so that universities and researchers drive the process – not the publishers of journals.
  • Some form of remuneration system – you want to reward researchers with heavily cited papers, but in a way that makes it very hard to game the system.

Scientific Remuneration

PaySwarm could be used to implement each of these core features. At its core, PaySwarm is a decentralized payment mechanism for the Web. It also has a solid decentralized identity mechanism that does not violate your privacy. There is a demo that shows how it can be applied to WordPress blogs: just an abstract is published, and if readers want to see more of the article, they can pay a small fee to read it. It doesn’t take a big stretch of the imagination to replace “blog article” with “research paper”. The hope is that researchers would set access prices on articles such that any purchase of access to a research paper would go directly to funding their current research. This would empower universities and researchers with an additional revenue stream while reducing the grip that scientific publishers currently have on our higher-education institutions.

A Decentralized Peer-review Process

Remuneration is just one aspect of the problem. Arguably, it is the lesser of the problems in academic publishing. The biggest technical problem is how you do peer review on a global, distributed scale. Quite obviously, you need a solid identity system that can identify scientists over the long term. You need to understand a scientist’s body of work and how respected their research is in their field. You also need a review system that is capable of pairing scientists with papers in need of review. PaySwarm has a strong identity system in place, using the Web as the identification mechanism. Here is the PaySwarm identity that I use for development: Clearly, paper publishing systems wouldn’t expose that identity URL to people using the system, but I include it to show what a Web-scale identifier looks like.

Web-scale Identity

If you go to that identity URL, you will see two sets of information: my public financial accounts and my digital signature keys. A PaySwarm Authority can annotate this identity with even more information, like whether or not an e-mail address has been verified against the identity. Is there a verified cellphone on record for the identity? Is there a verified driver’s license on record for the identity? What about a Twitter handle? A Google+ handle? All of these pieces of information can be added and verified by the PaySwarm Authority in order to build an identity that others can trust on the Web.
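
As a rough sketch of what such an annotated identity could look like (the URLs and property names here are hypothetical illustrations, not the actual PaySwarm vocabulary):

```python
# A hypothetical, simplified PaySwarm identity document, annotated by a
# PaySwarm Authority with verified attributes. All names and URLs below
# are illustrative, not the real PaySwarm vocabulary.
identity = {
    "@context": "https://w3id.org/payswarm/v1",    # assumed context URL
    "id": "https://payswarm.example.com/i/alice",  # Web-scale identifier
    "publicKeys": [
        {"id": "https://payswarm.example.com/i/alice/keys/1",
         "publicKeyPem": "-----BEGIN PUBLIC KEY-----..."}
    ],
    "financialAccounts": [
        {"id": "https://payswarm.example.com/i/alice/accounts/research"}
    ],
    # Annotations added and verified by the PaySwarm Authority:
    "verified": {"email": True, "cellphone": True, "driversLicense": False},
}

def is_trusted(identity, required=("email", "cellphone")):
    """Check that the Authority has verified the attributes we require."""
    return all(identity["verified"].get(attr, False) for attr in required)

print(is_trusted(identity))  # the email and cellphone checks both pass
```

The key idea is that relying parties decide which verified attributes they require; the Authority merely vouches for them.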

What sorts of pieces of information need to be added to a PaySwarm identity to trust its use for academic publishing? Perhaps a list of articles published by the identity? Review comments for all other papers that have been reviewed by the identity? Areas of research that others have certified the identity as an expert in? This is pretty basic Web-of-trust stuff, but it’s important to understand that PaySwarm has this sort of thing baked into the core of the design.

The Process

Leveraging identity to make decentralized peer-review work is the goal, and here is how it would work from a researcher perspective:

  1. A researcher would get a PaySwarm identity from any PaySwarm Authority; there is no cost associated with getting such an identity. This sub-system is already implemented in PaySwarm.
  2. A researcher would publish an abstract of their paper in a Linked Data format such as RDFa. This abstract would identify the authors of the paper and some other basic information about the paper. It would also have a digital signature on the information using the PaySwarm identity that was acquired in the previous step. The researcher would set the cost to access the full article using any PaySwarm-compatible system. All of this is already implemented in PaySwarm.
  3. A paper publishing system would be used to request a review among academic peers. Those peers would review the paper and publish digital signatures on review comments, possibly with a notice that the paper is ready to be published. This sub-system is fairly trivial to implement and would mirror the current review process with the important distinction that it would not be centralized at journal publications.
  4. Once a pre-set limit on the number of positive reviews has been met, the paper publishing system would place its stamp of approval on the paper. Note that different paper publishing systems may have different metrics just as journals have different metrics today. One benefit to doing it this way is that you don’t need a paper publishing system to put its stamp of approval on a paper at all. If you really wanted to, you could write the software to calculate whether or not the paper has gotten the appropriate amount of review because all of the information is on the Web by default. This part of the system would be fairly trivial to write once the metrics were known. It may take a year or two to get the correct set of metrics in place, but it’s not rocket science and it doesn’t need to be perfect before systems such as this are used to publish papers.
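
The researcher-side flow above can be sketched as follows. The listing fields, URLs, and threshold are hypothetical illustrations, and the digital signature value is elided:

```python
# Step 2: a hypothetical published abstract listing. Field names are
# illustrative; the digital signature PaySwarm would attach is elided.
abstract = {
    "title": "On Decentralized Peer Review",
    "authors": ["https://payswarm.example.com/i/alice"],
    "accessPrice": {"amount": "0.50", "currency": "USD"},
    "signature": "...",  # signature over the listing (elided)
}

def ready_to_publish(reviews, threshold=3):
    """Step 4: a paper publishing system (or anyone at all, since the data
    is on the Web by default) counts digitally signed positive reviews
    against a pre-set threshold."""
    positives = sum(1 for r in reviews if r["recommendPublication"])
    return positives >= threshold

reviews = [{"reviewer": f"https://example.com/i/reviewer{i}",
            "recommendPublication": True} for i in range(3)]
print(ready_to_publish(reviews))
```

Note that the approval metric lives outside the data: anyone can recompute it from the published, signed reviews, which is what makes the stamp of approval optional.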

From a reviewer perspective, it would work like so:

  1. You are asked to review papers by your peers once you have an acceptable body of published work. All of your work can be verified because it is tied to your PaySwarm identity. All review comments can be verified as they are tied to other PaySwarm identities. This part is fairly trivial to implement, most of the work is already done for PaySwarm.
  2. Once you review a paper, you digitally sign your comments on the paper. If it is a good paper, you also include a claim that it is ready for broad publication. Again, technically simple to implement.
  3. Your reputation builds as you review more papers. The way that reputation is calculated is outside of the scope of this blog post mainly because it would need a great deal of input from academics around the world. Reputation is something that can be calculated, but many will argue about the algorithm and I would expect this to oscillate throughout the years as the system grows. In the end, there will probably be multiple reputation algorithms, not just one. All that matters is that people trust the reputation algorithms.
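
As a toy illustration of how one such reputation algorithm might look (an arbitrary sketch, not a proposal; as noted above, the real algorithms would come from the academic community):

```python
# A toy reputation score: the number of reviews written, weighted by how
# many of the reviewer's endorsed papers were ultimately published. This
# is one arbitrary illustration among the many possible algorithms.
def reputation(reviews_written, endorsements, endorsed_and_published):
    if reviews_written == 0:
        return 0.0
    # Endorsement accuracy; a reviewer with no endorsements yet is not
    # penalized for it.
    accuracy = endorsed_and_published / endorsements if endorsements else 1.0
    return reviews_written * accuracy

print(reputation(reviews_written=10, endorsements=8, endorsed_and_published=6))
```

Because every review is tied to a verifiable PaySwarm identity, any such score can be recomputed independently by anyone who distrusts the algorithm in use.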

Freedom to Research and Publish

The end-goal is to build a system that empowers researchers and research institutions, is far more transparent than the current peer-reviewed publishing system, and remunerates the people doing the work more directly. You will also note that at no point does a traditional journal enter the picture to give you a stamp of approval and charge you a fee for publishing your paper. Researchers are in control of the costs at all stages. As I’ve said above, the hard part isn’t the technical nature of the project, it’s the political nature of it. I don’t know if this is enough of a pain-point among academics to actually start doing something about it today. I know some are, but I don’t know if many would use such a system over the draw of publications like Nature, PLOS, Molecular Genetics and Genomics, and Planta. Quite obviously, what I’ve proposed above isn’t a complete road map. There are issues and details that would need to be hammered out. However, I don’t understand why a system like this doesn’t already exist, so I implore the academic community to explain why what I’ve laid out above hasn’t been done yet.

It’s obvious that a system like this would be good for the world. Building such a system might have reduced the chances of losing someone like Aaron in the way that we did. He was certainly fighting for something like it. Talking about it makes me feel a bit less helpless than I did yesterday. Maybe making something good out of this mess will help some of you out there as well. If others offer to help, we can start building it.

So how about it researchers of the world, would you publish all of your research through such a system?

Web Payments: PaySwarm vs. OpenTransact Shootout (Part 3)

This is a continuing series of blog posts analyzing the differences between PaySwarm and OpenTransact. The originating blog post and subsequent discussion is shown below:

  1. Web Payments: PaySwarm vs. OpenTransact Shootout by Manu Sporny
  2. OpenTransact the payment standard where everything is out of scope by Pelle Braendgaard
  3. Web Payments: PaySwarm vs. OpenTransact Shootout (Part 2) by Manu Sporny
  4. OpenTransact vs PaySwarm part 2 – yes it’s still mostly out of scope by Pelle Braendgaard

It is the last post of Pelle’s that this blog post will address. All of the general points made in the previous analysis still hold and so familiarizing yourself with them before continuing will give you some context. In summary,

TL;DR – The OpenTransact standard does not specify the minimum necessary algorithms and processes required to implement an interoperable, open payment network. It, accidentally, does the opposite – further enforcing silo-ed payment networks, which is exactly what PaySwarm is attempting to prevent.

You may jump to each section of this blog post:

  1. Why OpenTransact Fails To Standardize Web Payments
  2. General Misconceptions (continued)
  3. Detailed Rebuttal (continued)
  4. Conclusion

Why OpenTransact Fails to Standardize Web Payments

After analyzing OpenTransact over the past few weeks, the major issue with the technology has become very clear. The Web Payments work is about writing a world standard. The purpose of a standard is to formalize, in explicit detail, the data formats and protocols that tell developers how two pieces of software should interoperate. If any two pieces of software implement the standard, it is known that they will be able to communicate and carry out any of the actions defined in the standard. The OpenTransact specification does not achieve this most fundamental goal of a standard. It does not specify how any two payment processors may interoperate; instead, it is a document that suggests one possible way for a single payment processor to implement its Web API.

Here is why this is a problem: When a vendor lists an OpenTransact link on their website, and a customer clicks on that link, the customer is taken to the vendor’s OpenTransact payment processor. If the customer does not have an account on that payment processor, they must get an account, verify their information, put money in the account, and go through all of the hoops required to get an account on that payment provider. In other words, OpenTransact changes absolutely nothing about how payment is performed online today.

For example, if you go to a vendor and they have a PayPal button on their site, you have to go to PayPal and get an account there in order to pay the vendor. If they have an Amazon Payments button instead, you have to go to Amazon and get an account there in order to pay the vendor. Even worse, OpenTransact doesn’t specify how individuals are identified on the network. One OpenTransact provider could use e-mail addresses for identification, while another one might use Facebook accounts or Twitter handles. There is no interoperability because these problems are considered out of scope for the OpenTransact standard.

PaySwarm, on the other hand, defines exactly how payment processors interoperate and identities are used. A customer may choose their payment processor independently of the vendor, and the vendor may choose their payment processor independently of the customer. The PaySwarm specification also details how a vendor can list items for sale in an interoperable way such that any transaction processor may process a sale of the item. PaySwarm enables choice in a payment processor, OpenTransact does not.

OpenTransact continues to lock customers and merchants into a particular payment processor; it requires that they both choose the same one if they are to exchange payment. While Pelle has asserted that this is antithetical to OpenTransact, the specification fails to detail how a customer and a merchant could use two different payment processors to perform a purchase. Leaving something as crucial as sending payment from one payment processor to another unspecified will only mean that payment processors implement mechanisms that are not interoperable across processors. Given this scenario, it doesn’t really matter what the API for the payment processor is, as everyone has to be using the same system anyway.

Therefore, the argument that OpenTransact can be used as a basic building block for online commerce is fatally flawed. The only thing that you can build on top of OpenTransact is a proprietary walled garden of payments, an ivory tower of finance. This is exactly what payment processors do today, and will do with OpenTransact. It is in their best interest to create closed financial networks as it strips market power away from the vendor and the customer and places it into their ivory tower.

Keep this non-interoperability point in mind when you see an “out of scope” argument on behalf of OpenTransact – there are some things that can be out of scope, but not at the expense of choice and interoperability.

General Misconceptions (continued)

There are a number of misconceptions that Pelle’s latest post continues to hold regarding PaySwarm that demonstrate a misunderstanding of the purpose of the specification. These general misconceptions are addressed below, followed by a detailed analysis of the rest of Pelle’s points.

PaySwarm is a fully featured idealistic multi layered approach where you must buy into a whole different way running your business.

The statement is hyperbolic – no payment technology requires you to “buy into a whole different way of running your business”. Vendors on the Web list items for sale and accept payment for those items in a number of different ways. This is usually accomplished by using shopping cart software that supports a variety of different payment mechanisms – eCheck, credit card, PayPal, Google Checkout, etc. PaySwarm would be one more option that a vendor could employ to receive payment.

PaySwarm standardizes an open, interoperable way that items are listed for sale on the Web, the protocol that is used to perform a transaction on the Web, and how transaction processors may interoperate with one another.

PaySwarm is a pragmatic approach that provides individuals and businesses with a set of tools to make Web-based commerce easier for their customers and thus provides a competitive advantage for those businesses that choose to adopt it. Businesses don’t need to turn off their banking and credit card processing services to use PaySwarm – it would be foolish for any standard to take that route.

PaySwarm doesn’t force any sort of out-right replacement of what businesses and vendors do today; it is something that can be phased in gradually. Additionally, it provides built-in functionality that you cannot accomplish via traditional banking and credit card services – functionality like micro-payments, crowd-funding, a simple path to browser integration, digital receipts, and a variety of innovative new business models for those willing to adopt them. That is, individuals and businesses will adopt PaySwarm because: 1) it provides a competitive advantage, 2) it allows new forms of economic value exchange to happen on the Web, 3) it is designed to fight vendor lock-in, and 4) it thoroughly details how to achieve interoperability as an open standard.

It is useless to call a technology idealistic, as every important technology starts from idealism and then gets whittled down into a practical form – PaySwarm is no different. The proposal for the Web was idealistic at the time, it was a multi-layered approach, and it does require a whole different way of running a Web-based business (only because Web-based businesses did not exist before the Web). It’s clear today that all of those adjectives (“idealistic”, “multi-layered”, and “different”) were some of the reasons that the Web succeeded, even if none of those words apply to PaySwarm in the negative way that is asserted in Pelle’s blog post.

However the basic PaySwarm philosophy of wanting to design a whole world view is very similar to central planning or large standards bodies like ANSI, IEEE etc. OpenTransact follows the market based approach that the internet was based on of small standards that do one thing well.

As stated previously, PaySwarm is limited in scope by the use cases that have been identified as being important to solve. It is important to understand the scope of the problem before attempting a solution. OpenTransact fails to grasp the scope of the problem and thus falls short of providing a specification that defines how interoperability is achieved.

Furthermore, it is erroneous to assert that the Internet was built using a market-based approach and small standards. The IEEE, ANSI, and even government had a very big part to play in the development of the Internet and the Web as we know it today.

Here are just a few of the technologies that we enjoy today because of the IEEE: Ethernet, WiFi, Mobile phones, Mobile Broadband, POSIX, and VHDL. Here are the technologies that we enjoy today because of ANSI: The standard for encoding most of the letters on your screen right now (ASCII and UTF-8), the C programming language standard, and countless safety specifications covering everything from making sure that commercial airliners are inspected properly, to hazardous waste disposal guidelines that take human health into account, to a uniform set of colors for warning and danger signs in the workplace.

The Internet wouldn’t exist in the form that we enjoy today without these IEEE and ANSI standards: Ethernet, ASCII, the C programming language, and many of the Link Layer technologies developed by the IEEE on which the foundation of the Internet was built. It is incorrect to assume that the Internet followed purely market-based forces and small standards. Let’s not forget that the Internet was a centrally planned, government funded (DARPA) project.

The point is that technologies are developed and come into existence through a variety of channels. There is not one overriding philosophy that is correct in every instance. The development of some technologies require one to move fast in the market, some require thoughtful planning and oversight, and some require a mixture of both. What is important in the end is that the technology works, is well thought out, and achieves the use cases it set out to achieve.

There are many paths to a standard. What is truly important in the end is that the technology works in an interoperable fashion, and in that vein, the assertion that OpenTransact does not meet the basic interoperability requirements of an open Web standard has still not been addressed.

Detailed Rebuttal (continued)

In the responses below, Pelle’s comments from his latest blog post are quoted, and the rebuttal follows each section of quoted text. Pay particular attention to how most of the responses are effectively that the “feature is out of scope”, but no solution is forthcoming to the problem that the feature is designed to address. That is, the problem is just kicked down the road for OpenTransact, whereas the PaySwarm specification makes a concerted effort to address each problem via the feature under discussion.

Extensible Machine Readable Metadata

Again this falls completely out of the scope. An extension could easily be done using JSON-LD as JSON-LD is simply an extension to JSON. I don’t think it would help the standard by specifying how extensions should be done at this point. I think JSON-LD is a great initiative and it may well be that which becomes an extension format. But there are also other simpler extensions that might better be called conventions that probably do not need the complication of JSON-LD. Such as Lat/Lng which has become a standard geo location convention in many different applications.

The need for extensible machine-readable metadata was explained previously. Addressing this problem is a requirement for PaySwarm because without it you have a largely inflexible messaging format. Pelle mentions that the extensibility issue could be addressed using JSON-LD, which is what PaySwarm does, but does not provide any concrete plans to do this for OpenTransact. That is, the question is left unaddressed in OpenTransact and thus the extensibility and interoperability issue remains.
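
To make the extensibility point concrete, here is a minimal sketch of how a plain JSON payment message becomes extensible once a JSON-LD context is attached. The context URLs are illustrative, not the actual PaySwarm vocabulary:

```python
# A plain JSON payment message. Terms like "amount" are ambiguous: two
# payment processors may mean different things by the same key.
message = {
    "amount": "10.00",
    "currency": "USD",
}

# Adding a JSON-LD @context maps each term to an unambiguous URL, so a
# vendor-specific extension (geo-location, as in Pelle's lat/lng example)
# can be added without colliding with anyone else's "lat"/"lng". The
# vocabulary URLs below are illustrative.
message_ld = dict(message)
message_ld["@context"] = {
    "amount": "https://w3id.org/payswarm#amount",
    "currency": "https://w3id.org/payswarm#currency",
    "lat": "http://www.w3.org/2003/01/geo/wgs84_pos#lat",
    "lng": "http://www.w3.org/2003/01/geo/wgs84_pos#long",
}
message_ld["lat"] = 37.2296
message_ld["lng"] = -80.4139

# Consumers that don't understand the geo terms can safely ignore them;
# consumers that do can interpret them unambiguously via the context.
```

This is exactly the property that bare "conventions" lack: without the context, two implementations that both use `lat` have no machine-readable way to confirm they mean the same thing.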

When writing standards, one cannot assert that a solution “could easily be done”. Payment standards are never easy and hand waving technical issues away is not the same thing as addressing those technical issues. If the solution is easy, then surely something could be written on the topic on the OpenTransact website.

Transactions (part 1)

I don’t like the term transaction as Manu is using it here. I believe it is being used here using computer science terminology. But leaving that aside. OpenTransact does not support multi step transactions in itself right now. I think most of these can be easily implemented in the Application Layer and thus is out of scope of OpenTransact.

The term transaction is being used in the traditional English sense, the Merriam-Webster Dictionary defines a transaction as: something transacted; especially: an exchange or transfer of goods, services, or funds (electronic transactions). Wikipedia defines a transaction as: an agreement, communication, or movement carried out between separate entities or objects, often involving the exchange of items of value, such as information, goods, services, and money. Further, a financial transaction is defined as: an event or condition under the contract between a buyer and a seller to exchange an asset for payment. It involves a change in the status of the finances of two or more businesses or individuals. This demonstrates that the use of “transaction” in PaySwarm is in-line with its accepted English meaning.

The argument that multi-step transactions can be easily implemented is put forward again. This is technical hand-waving. If the solution is so simple, then it shouldn’t take but a single blog post to outline how a multi-step transaction happens between a decentralized set of transaction processors. The truth of the matter is that multi-step transactions are a technically challenging problem to solve in a decentralized manner. Pushing the problem up to the application layer just pushes the problem off to someone else rather than solving it in the standard so that the application developers don’t have to create their own home-brew multi-part transaction mechanism.

Transactions (part 2)

I could see a bulk payment extension supporting something similar in the future. If the need comes up lets deal with [it].

Here are a few reasons why PaySwarm supports multiple financial transfers to multiple financial accounts as part of a single transaction: 1) it makes the application layer simpler, and thus the developer’s life easier; 2) ensuring that all financial transfers made it to their destination prevents race conditions where some people get paid and some people do not (read: you could be sued for non-payment); 3) for a transfer where money is disbursed to 100 people, doing it in one HTTP request is faster and more efficient than doing it in 100 separate requests. The need for multiple financial transfers in a single transaction is already there. For example, paying taxes on items sold is a common practice; in this case, the transaction is split between at least two entities: the vendor and the taxing authority.
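
A minimal sketch of the taxation example, assuming hypothetical account URLs and field names; the all-or-nothing validation is the point:

```python
from decimal import Decimal

# A hypothetical single transaction containing multiple transfers, in the
# style PaySwarm allows: the sale amount is split between the vendor and
# the taxing authority atomically, so nobody is paid unless everyone is.
# Account URLs and field names here are illustrative.
transaction = {
    "amount": Decimal("10.50"),
    "transfers": [
        {"destination": "https://example.com/accounts/vendor",
         "amount": Decimal("10.00"), "comment": "Sale of item"},
        {"destination": "https://example.com/accounts/tax-authority",
         "amount": Decimal("0.50"), "comment": "Sales tax (5%)"},
    ],
}

def validate(tx):
    """Reject the whole transaction unless the transfers sum exactly to
    the total; this all-or-nothing check is what prevents the
    partial-payment race conditions described above."""
    total = sum(t["amount"] for t in tx["transfers"])
    return total == tx["amount"]

print(validate(transaction))  # prints True: 10.00 + 0.50 == 10.50
```

Pushing this to the application layer would mean every developer re-implementing the atomicity check (and the failure handling) themselves.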

OpenTransact does not address the problem of performing multiple financial transfers in a single transaction and thus pushes the problem on to the application developer, who must then know quite a bit about financial systems in order to create a valid solution. If the application developer makes a design mistake, which is fairly easy to do when dealing with decentralized financial systems, they could place their entire company at great financial risk.

Currency Exchange

…most of us have come to the conclusion that we may be able to get away with just using plain open transact for this.

While the people working on OpenTransact may have come to this conclusion, there is absolutely no specification text outlining how to accomplish the task of performing a currency exchange. The analysis was on features that are supported by each specification and the OpenTransact specification still does not intend to provide any specification text on how a currency exchange could be implemented. Saying that a solution exists, but then not elaborating upon the solution in the specification in an interoperable way is not good standards-making. It does not address the problem.

Decentralized Publishing of X (part 1)

These features listed are necessary if you subscribe to the world view that the entire worlds commerce needs to be squeezed into a web startup.

I don’t quite understand what Pelle is saying here, so I’m assuming this interpretation: “The features listed are necessary if you subscribe to the world view that all of the world’s commerce needs have to be squeezed into a payment standard.”

This is not the world-view that PaySwarm assumes. As stated previously, PaySwarm assumes a limited set of use cases that were identified by the Web Payments community as being important. Decentralization is important to PaySwarm because it ensures: 1) that the system is resistant to failure, 2) that the customer is treated fairly due to very low transaction processor switching costs, and 3) that market forces act quickly on the businesses providing PaySwarm services.

OpenTransact avoids the question of how to address these issues and instead, accidentally, further enforces silo-ed payment networks and walled gardens of finance.

Decentralized Publishing of X (part 2)

I think [decentralized publishing] would make a great standard in it’s own right that could be published separately from the payment standard. Maybe call it CommerceSwarm or something like that.

There is nothing preventing PaySwarm from splitting the publishing of assets and listings out of the main specification once we have addressed the limited set of use cases put forth by the Web Payments community. As stated previously, the PaySwarm specification can always be broken down into simpler, modularized specifications. This is an editorial issue, not a design issue.

The concern about the OpenTransact specification is not an editorial issue, it is a design issue. OpenTransact does not specify how multiple transaction processors interoperate nor does it describe how one publishes assets, listings and other information associated with the payment network on the Web. Thus, OpenTransact, accidentally, supports silo-ed payment networks and walled gardens of finance.

Decentralized Publishing of X (part 3)

If supporting these are a requirement for an open payment standard, I think it will be very hard for any existing payment providers or e-commerce suppliers to support it as it requires a complete change in their business, where OpenTransact provides a fairly simple easy implementable payment as it’s only requirement.

This argument is spurious for at least two reasons.

The first is that OpenTransact has only one requirement, and thus all a business would have to implement is that one requirement. Alternatively, if businesses only want to implement simple financial transfers in PaySwarm (roughly equivalent to transactions in OpenTransact), they need only do that. Therefore, PaySwarm can be as simple as OpenTransact for the vast majority of businesses that only require simple financial transfers. However, if more advanced features are required, PaySwarm can support those as well.

The second reason is that it is effectively the buggy whip argument – if you were to ask businesses that depended on horses to transport their goods before the invention of the cargo truck, most would recoil at the thought of having to replace their investment in horses with a new investment in trucks. However, new businesses would choose the truck because of its many advantages. Some would use a mixture of horses and trucks until the migration to the better technology was complete. The same applies to both PaySwarm and OpenTransact – the only thing that is going to cause individuals and businesses to switch is that the technology provides a competitive advantage to them. The switching costs for new businesses are going to be less than the switching costs for old businesses with a pre-existing payment infrastructure.

Verifiable Receipts (part 1)

However I don’t want us to stall the development and implementation of OpenTransact by inventing a new form of PKI or battling out which of the existing PKI methods we should use. See my section on Digital Signatures in the last post.

A new form of PKI has not been invented for PaySwarm. It uses the industry standards for both encryption and digital signatures – AES and RSA. The PKI methods are clearly laid out in the specification and have been settled for quite a while. Not a single person has mentioned that they want to use a different set of PKI methods or implementations, nor has anyone raised any technical issues related to the PKI portion of the specification.

Pelle might be referring to how PaySwarm specifies how to register public keys on the Web, but if he is, there is very little difference between that and having to manage OAuth 2 tokens, which is a requirement imposed on developers by the OpenTransact specification.
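To make the comparison concrete, here is a minimal sketch of the "public key registered on the Web" pattern. All URLs and field names are illustrative assumptions, not the normative PaySwarm vocabulary, and an in-memory dictionary stands in for HTTP:

```python
# Sketch: a public key published as a Web resource is just a JSON document
# that anyone can dereference by its URL -- no per-service token issuance.
import json

# A hypothetical in-memory "Web": URL -> published JSON document.
web = {
    "https://example.com/people/alice/keys/1": json.dumps({
        "id": "https://example.com/people/alice/keys/1",
        "owner": "https://example.com/people/alice",
        "publicKeyPem": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
    })
}

def dereference_key(key_url):
    """Fetch and parse the key document named by its URL (stand-in for HTTP GET)."""
    return json.loads(web[key_url])

# A signed message carries the URL of the key that signed it, so any
# verifier can look the key up and check the signature.
message = {"destination": "https://example.com/people/bob/accounts/main",
           "amount": "1.00",
           "signature": {"creator": "https://example.com/people/alice/keys/1",
                         "signatureValue": "..."}}

key = dereference_key(message["signature"]["creator"])
print(key["owner"])  # the verifier now knows who claims to have signed
```

The operational burden is comparable to OAuth 2 token management: one fetch per key, cacheable like any other Web resource.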

Verifiable Receipts (part 2)

Thus we have taken the pragmatic approach of letting businesses do what they are already doing now. Sending an email and providing a transaction record via their web site.

PaySwarm does not prevent businesses from doing what they do now. Sending an e-mail and providing a transaction record via their website are still possible using PaySwarm. However, these features become increasingly unnecessary because PaySwarm has a digital receipt mechanism built into the standard. That is, businesses no longer need to send an e-mail or maintain a transaction record on their website because PaySwarm transaction processors are responsible for holding on to this information on behalf of the customer. This means far fewer development and financial-management headaches for website operators.

Additionally, neither e-mail nor proprietary receipts support Data Portability or system interoperability. That is, these are not standard, machine-readable mechanisms for information exchange. More to the point, OpenTransact is kicking the problem down the road instead of attempting to address the problem of machine-verifiable receipts.
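To illustrate the point about Data Portability, here is a sketch of what a standard, machine-readable receipt buys you. The field names here are illustrative assumptions, not the normative PaySwarm receipt vocabulary:

```python
# Sketch: a digital receipt as a machine-readable record that any
# conforming processor can parse, so the buyer is not locked into one
# vendor's e-mail archive or proprietary web page.
import json

receipt = {
    "type": "Receipt",
    "contract": {
        "asset": "https://merchant.example/articles/42",
        "assetProvider": "https://merchant.example/",
        "buyer": "https://authority.example/people/alice",
        "amount": "0.50",
        "currency": "USD",
        "date": "2012-01-15T10:30:00Z",
    },
}

# Because the format is standard, portability is trivial: the buyer can
# export every receipt from one processor and import it at another.
exported = json.dumps(receipt)
imported = json.loads(exported)
assert imported == receipt
print(imported["contract"]["asset"])
```

An e-mail or an HTML order page carries the same facts, but no other system can reliably extract them; that is the interoperability gap being described.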

Secure X-routed Purchases

These are neat applications that could be performed in some way through an application. You know I’m going to say it’s out of scope of OpenTransact. OpenTransact was designed as a simple way of performing payments over the web. Off line standards are thus out of scope.

The phrase “performed in some way through an application” is technical hand-waving. OpenTransact does not propose any sort of technical solution to a use case that the Web Payments community has identified as important. Purchasing an item using an NFC-enabled mobile phone at a Web-enabled kiosk is not a far-fetched use case – many of these kiosks exist today and more will become Web-enabled over time. That is, if only one of the devices has Web connectivity, is the standard extensible enough to allow a transaction to occur?

With PaySwarm, the answer is “yes” and we will detail exactly how to accomplish this in a PaySwarm specification. Note that it will probably not be in the main PaySwarm specification, but an application developer specification that thoroughly documents how to perform a purchase through a PaySwarm proxy.

Currency Mints

Besides BitCoin all modern alternative currencies have the mint and the transaction processor as the same entity.

These are but a few of the modern alternative currencies where the mint and the transaction processor are not the same entity (the year that the currency was launched is listed beside the currency): BerkShares (2006), Calgary Dollar (1996), Ithaca Hours (1998), Liberty Dollar (1998-2009), and a variety of LETS and InterLETS systems (as recently as 2011).

OpenTransact assumes that the mint and the transaction processor are the same entity, but as demonstrated above, this is not the case in already-successful alternative currencies. The alternative currencies above, where the mint and the transaction processor are different, should be supported by any payment system that purports to support alternative currencies. Assuming that the mint and the transaction processor are one and the same ignores a large part of the existing alternative currency market. It also does not protect against monopolistic behavior on the part of the mint. That is, if a mint handles all minting and transaction processing, processing fees are at the whim of the mint, not the market. Conflating a currency mint with a transaction processor results in negative market effects – a separation of concerns is a necessity in this case.
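The market effect of that separation of concerns can be sketched in a few lines. All class and processor names here are hypothetical, purely for illustration:

```python
# Sketch: one issuer, many competing processors, each free to set its
# own fee -- a structure the mint-equals-processor assumption rules out.

class Mint:
    """Issues a currency; does not process payments."""
    def __init__(self, currency):
        self.currency = currency

class Processor:
    """Transacts in a currency it did not issue, at its own fee."""
    def __init__(self, name, mint, fee_percent):
        self.name, self.mint, self.fee_percent = name, mint, fee_percent
    def cost(self, amount):
        return amount * self.fee_percent / 100

berkshares = Mint("BerkShares")

# Two independent processors compete on fees for the same currency.
processors = [Processor("CoopBank", berkshares, 2.0),
              Processor("TownCredit", berkshares, 0.5)]
cheapest = min(processors, key=lambda p: p.cost(100))
print(cheapest.name)  # market pressure selects the lower fee
```

If the standard hard-wires mint and processor into one entity, the `min()` step – competition on fees – has nothing to choose from.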


Saying that you can not do crowd funding with OpenTransact is like saying you can’t do Crowd Funding with http. Obviously KickStarter and many others are doing so and yes you can do so with OpenTransact as a lower level building block.

The coverage of the Crowd Funding feature was never about whether OpenTransact could be used to perform Crowd Funding, but rather how one could perform Crowd Funding with OpenTransact and whether that would be standardized. The answers to those questions are still “Out of Scope” and “No”, respectively.

Quite obviously, there are thousands of ways technology can be combined with value exchange mechanisms to support crowd funding. The assertion was that OpenTransact does not provide any insight into how this would be accomplished and, furthermore, contains a number of design issues that, as described in the initial analysis, would make Crowd Funding very inefficient and difficult to implement on top of the OpenTransact platform.

Data Portability

We are very aware of concerns of vendor lock in, but as OpenTransact is a much simpler lower level standard only concerned with payments, data portability is again outside the scope. We do want to encourage work in this area.

PaySwarm adopts the philosophy that data portability and vendor lock-in are important concerns and must be addressed by a payment standard. Personal financial data belongs to those transacting, not to the payment processors. Ultimately, solutions that empower people become widely adopted.

OpenTransact, while encouraging work in the area, adopts no such philosophy for Data Portability as evidenced in the specification.


In doing this analysis between PaySwarm and OpenTransact, a few things have come to light that we did not know before:

  1. There are some basic philosophies that are shared between PaySwarm and OpenTransact, but there are many others that are not. Most fundamentally, PaySwarm attempts to think about the problem broadly where OpenTransact only attempts to think about one aspect of the Web payments problem.
  2. There are a number of security concerns that were raised when performing the review of the OpenTransact specification, more of which will be detailed in a follow-up blog post.
  3. There were a number of design concerns that we found in OpenTransact. One of the most glaring issues is something that was also a problem with PaySwarm in its early days, until the design error was fixed: if OpenTransact adopts digital receipts, excessive HTTP traffic and the duplication of functionality between digital signatures and OAuth 2 will become a problem.
  4. While we assumed that Data Portability was important to the OpenTransact specification, it was a surprise that there were no plans to address the issue at all.
  5. There was an assumption that the OpenTransact specification would eventually detail how transaction processors may interoperate with one another, but Pelle has made it clear that there are no current plans to detail interoperability requirements.

In order for the OpenTransact specification to continue along the standards track, it should be demonstrated that the design concerns, security concerns, and interoperability concerns have been addressed. Additionally, the case should be made for why the Web Payments community should accept that the list of features not supported by OpenTransact is acceptable from the standpoint of a world standards setting organization. These are all open questions and concerns that OpenTransact will eventually have to answer as a part of the standardization process.

* Many thanks to Dave Longley, who reviewed this post and suggested a number of very helpful changes.

Web Payments: PaySwarm vs. OpenTransact Shootout (Part 2)

The Web Payments Community group is currently evaluating two designs for an open payment platform for the Web. A thorough analysis of PaySwarm and OpenTransact was performed a few weeks ago, followed by a partial response by one of the leads behind the OpenTransact work. This blog post will analyze the response by the OpenTransact folks, offer corrections to many of the claims made in the response, and further elaborate on why PaySwarm actually solves the hard problem of creating a standard for an interoperable, open payment platform for the Web.

TL;DR – The OpenTransact standard does not specify the minimum necessary algorithms and processes required to implement an interoperable, open payment network. It accidentally does the opposite – further entrenching siloed payment networks, which is exactly what PaySwarm is attempting to prevent.

You can jump to each section below:

  1. The Purpose of a Standard
  2. Web Payments – The Hard Problem
  3. The Problem Space
  4. General Misconceptions
  5. Detailed Rebuttal
  6. Continuing the Discussion

The Purpose of a Standard

Ultimately, the purpose of a standard is to propose a solution to a problem that ensures interoperability among implementations of that standard. Furthermore, standards that establish a network of systems, like the Web, must detail how interoperability functions among the various systems in the network. This is the golden rule of standards – if you don’t detail how interoperability is accomplished, you don’t have a standard.

In Pelle’s blog post, he states:

OpenTransact [is] the payment standard where everything is out of scope

This is the major issue with OpenTransact. By declaring that just about everything is out of scope for OpenTransact, it fails to detail how systems on the payment network communicate with one another and thus does not support the golden rule of standards – interoperability. This is the point that I will be hammering home in this blog post, so keep it in mind while reading the rest of this article.

What OpenTransact does is outline “library interoperability”. The specification enables developers to write one software library that can be used to initiate monetary transfers by building OpenTransact URLs, but then does not specify what happens when you go to the URL. It does not specify how money gets from one system to the next, nor does it specify how those messages are created and passed from system to system. OpenTransact overly simplifies the problem and proposes a solution that is insufficient for use as a global payment standard.

In short, it does not solve the hard problem of creating an open payment platform.

Web Payments – The Hard Problem

Overall, the general argument put forward by Pelle on behalf of the OpenTransact specification is that it focuses on merely initiating a monetary transfer and nothing else because that is the fundamental building block for every other economic activity. His argument is that we should standardize the most basic aspect of transferring value and leave the other stuff out until the basic OpenTransact specification gains traction.

The problem with this line of reasoning is this: When you don’t plan ahead, you run the very high risk of creating a solution that works for the simple use cases, but is not capable of addressing the real problems of creating an interoperable, open payment platform. PaySwarm acknowledges that we need to plan ahead if we are to create a standard that can be applied to a variety of use cases. This does not mean that every use case must be addressed. Rather, the assertion is made that designing solutions that solve more than just the initiation of a simple monetary transfer is important because the world of commerce consists of much more than the initiation of simple monetary transfers.

Clearly, we should not implement solutions to every use case, but rather figure out the maximum number of use cases that can be solved by a minimal design. “Don’t bloat the specification” is often repeated as guidance throughout the standardization process. Where to draw the line on spec bloat is one of the primary topics of conversation in standards groups. It should be an ongoing discussion within the community, not a hard-line philosophical stance.

The hard problem has always been interoperability, and the OpenTransact specification postpones addressing that issue to a later point in time. The point of a standard is to establish interoperability such that anyone can read the standard, implement it, and be guaranteed interoperability with others that have implemented the standard. From Pelle’s response:

We don’t specify how one payment provider transacts with another payment provider, but it is generally understood that they do so with OpenTransact.


An exchange system between multiple financial institutions can be achieved by many different means as they are today. But all of these methods are implementation details and the developer or end user does not need to understand what is going on inside the black box.

A specification elaborates on the implementation details so that you can guarantee interoperability among those that implement the standard. Implementation details are important because without those, you do not have interoperability and without interoperability, you do not have an open payment platform. Without interoperability, you have the state of online payment providers today – payment vendor lock-in.

General Misconceptions

There are a number of general misconceptions that are expressed in Pelle’s response that need to be corrected before addressing the rest of his feedback:

PaySwarm attempts to solve every single problem up front and thus creates a standard that is very smart in many ways but also very complex.

What PaySwarm attempts to do is identify real-world use cases that exist today with online payments and propose a way to address those use cases. There are a number of use cases that the community postponed because we didn’t feel that addressing them now was reasonable. There were also use cases that we dropped entirely because we didn’t see a need to support them now or in the future. To say that “PaySwarm attempts to solve every single problem up front” is hyperbolic. It is true that PaySwarm is more complex than OpenTransact today, but that’s because it attempts to address a much larger set of real-world use cases.

It’s background is I understand in a P2P media market place called Bitmunk where licenses, distribution contacts and other media DRM issues are considered important.

PaySwarm did start out as a platform to enable peer-to-peer media marketplace transactions. That was in 2004. The technology and specification have evolved considerably since that time. For example, mandatory DRM was never implemented, but watermarking was – both technologies have since been dropped from the specification due to the needless complexity introduced by supporting them. There was never a concept of a “distribution contract”, but digital contracts – outlining exactly what was purchased, the licenses associated with that purchase, and the costs associated with the transaction – seem like reasonable things to support in an open payment platform.

Manu Sporny of Digital Bazaar has also been a chair of the RDFa working group so PaySwarm comes with a lot of linked data luggage as well.

I’m also the Chair of the RDF Web Applications Working Group and the JSON-LD Community Group, am a member of the HTML Working Group, founded the Data-Driven Standards Community Group, and am a member of the Semantic Web Coordination Group. Based on those qualifications, I would like to think that I know my way around Web standards and Linked Data – others may disagree :). While I don’t know if Pelle meant “luggage” in a negative sense, if he did, one must ask what the alternative is? If we are going to create an open payment platform that is interoperable and decentralized like the Web, then what alternative is there to Linked Data?

Many people do not know that we started working with RDFa and JSON-LD because we needed a viable solution to the machine-readable decentralized listing of things for sale problem in PaySwarm. That is, we didn’t get involved with Linked Data first and then carried that work into PaySwarm. We started out with PaySwarm and needed Linked Data to solve the machine-readable decentralized listing of things for sale problem.

OpenTransact comes from the philosophy that we don’t solve a problem until the problem exists and several people have real experiences solving it.

This is a perfectly reasonable philosophy to employ. In fact, PaySwarm adheres to the same philosophy. PaySwarm’s implementation of the philosophy diverges from OpenTransact because it takes more real-world problems into account. Online e-commerce has existed for over a decade now, with a fairly rich history of lessons-learned with regard to how the Web has been used for commerce. This history includes many more types of transactions than just a simple monetary transfer and therefore, PaySwarm attempts to take these other types of transactions into account during the design process.

The Problem Space

In his response, Pelle outlines a number of lessons learned from OpenID and OAuth development. These are all good lessons and we should make sure that we do not fall into the same trap that OpenID did in the beginning – attempting to solve too many problems, too soon in the process.

Pelle implies that PaySwarm falls into this trap and that OpenTransact avoids the trap by being very focused on just initiating payment transfers. The reasoning is spurious as the world is composed of many more types of value exchange than just a simple payment initiation. The main design failure of OpenTransact is to not attempt to detail how the standard applies to the real-world online payment use cases established over the past decade.

It is not that PaySwarm attempts to address too many use cases too soon, but rather that OpenTransact attempts to do too little, and by being hyper-focused, does not solve the problem of creating an open payment platform that is applicable to the state of online commerce today.

Detailed Rebuttal

The following section provides detailed responses to a number of points that are made in Pelle’s blog post:

IRIs for Identifiers

I’m sorry calling URI’s IRI just smells of political correctness. Everyone calls them URI’s and knows what it means. No one knows what a IRI is. Even though W3C pushes it I’m going to use the term URI to avoid confusion.

Wikipedia defines the Internationalized Resource Identifier (IRI) as: a generalization of the Uniform Resource Identifier (URI). While URIs are limited to a subset of the ASCII character set, IRIs may contain characters from the Universal Character Set (Unicode/ISO 10646), including Chinese or Japanese kanji, Korean, Cyrillic characters, and so forth. It is defined by RFC 3987.

PaySwarm is on a world-standards track and thus takes the position that being able to express identifiers in one’s native language is important. When writing standards, it is important to be technically specific and use terminology that has been previously defined by standards groups. Usage of the term IRI is not only technically correct, it acknowledges the notion that we must support non-English identifiers in a payment standard meant for the world to use.
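To make the distinction concrete, here is a minimal sketch of the RFC 3987 idea: an identifier written in a native script is a valid IRI as-is, and there is a standard mapping down to an ASCII-only URI for systems that need one. The example path is purely illustrative:

```python
# An IRI may contain non-ASCII characters directly; RFC 3987 maps it to a
# URI by percent-encoding those characters as UTF-8 octets.
from urllib.parse import quote, unquote

iri_path = "/счёт/главный"            # a financial-account path in Cyrillic
uri_path = quote(iri_path, safe="/")  # percent-encode non-ASCII (UTF-8)

print(uri_path)

# A system that only speaks URIs can still round-trip the identifier.
assert unquote(uri_path) == iri_path
```

The two terms are not interchangeable: every URI is an IRI, but the Cyrillic path above is an IRI that is not a URI until the mapping is applied.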

IRIs for Identifiers (cont.)

We don’t want to specify what lives at the end of an account URI. There are many other proposals for standardizing it, we don’t need to deal with that. Until the day that a universal machine readable account URI standard exist, implementers of OpenTransact can either do some sensing of the URI as they already do today (Twitter, Facebook, Github) or use proposals like WebFinger or even enter the world of linked data and use that.

The problem with the argument is expressed in this phrase – Until the day that a universal machine readable account URI standard exist[s]. PaySwarm defines a universal, machine-readable account URI standard. This mechanism is important for interoperability – without it, it becomes difficult to publish information in a decentralized, machine-readable fashion. Without describing what lives at the end of an account IRI, you can’t figure out who owns a financial account, you can’t understand what the currency of the account is, nor can you extend the information associated with the account in an interoperable way. PaySwarm asserts that we cannot just gloss over this part of the problem space as it is important for interoperability.
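A sketch of what "what lives at the end of an account IRI" buys you, with property names that are illustrative assumptions rather than the normative PaySwarm vocabulary:

```python
# Dereferencing the account IRI yields a machine-readable document, so
# software can learn the owner and currency instead of guessing.
import json

# Stand-in for the document served at the account IRI.
account_document = json.dumps({
    "id": "https://authority.example/accounts/alice-main",
    "owner": "https://authority.example/people/alice",
    "currency": "USD",
})

def describe_account(doc):
    """Parse the account document and extract the interoperable basics."""
    account = json.loads(doc)
    return account["owner"], account["currency"]

owner, currency = describe_account(account_document)
print(owner, currency)
```

With no standard for this document, each processor invents its own shape and the `describe_account` step must be rewritten per vendor – which is the interoperability failure being described.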

Basic Financial Transfer

OpenTransact does not specify how a transfer physically happens as that is an implementation detail. It could be creating a record in a ledger, uploading a text file to a mainframe via ftp, calling multiple back end systems, generating a bitcoin, shipping a gold coin by fedex, etc.

At no point does the PaySwarm specification state what must happen physically. What happens physically is outside of the scope of the specification. What matters is how the digital exchange happens. This is primarily because any open payment platform for the Web is digitally native. That is, when you pay someone, the transfer is executed and recorded digitally, at times, between two payment processors. This is the same sort of procedure that happens at banks today. Rarely does physical cash move when you use your credit or debit card.

The point of supporting a Basic Financial Transfer between systems boils down to interoperability. OpenTransact doesn’t mention how you transfer $1 from PaymentServiceA to PaymentServiceB. That is, if you hold an account at PaymentServiceA and you want to send $1 to an account at PaymentServiceB, how do you initiate the transfer from one service to the other? What is the protocol? OpenTransact is silent on how this cross-payment-processor transfer happens. PaySwarm asserts that specifying this inter-processor monetary exchange protocol in great detail is vital to ensure that the standard enables a fair and efficient transaction processor marketplace. That is, specifying how this works is vital for ensuring that new payment processor competitors can enter the marketplace with as little friction as possible. If a payment standard does not specify how this works, it enables vendor lock-in and payment network silos.

When it comes to standards, implementation details like this matter because without explicitly stating how two systems may exchange money with one another, interoperability suffers.
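As a sketch of what such a specification must pin down, here is the kind of message one processor might send another to initiate a transfer. Every field name and URL is a hypothetical assumption for illustration only:

```python
# Sketch: how processor A asks processor B to credit an account. A
# standard tells both sides exactly which fields to expect -- that shared
# expectation is what makes them interoperable.
transfer_request = {
    "type": "Transfer",
    "source": "https://processor-a.example/accounts/alice",
    "destination": "https://processor-b.example/accounts/bob",
    "amount": "1.00",
    "currency": "USD",
    # In a real protocol this would be a digital signature by the source
    # processor, so the destination can verify authenticity.
    "signature": {"creator": "https://processor-a.example/keys/1",
                  "signatureValue": "..."},
}

def receiving_processor_accepts(msg):
    """Processor B's minimal validation of an incoming transfer request."""
    required = {"type", "source", "destination", "amount", "currency", "signature"}
    return required.issubset(msg) and msg["type"] == "Transfer"

print(receiving_processor_accepts(transfer_request))  # True
```

Absent a standard for this message, processor B cannot implement `receiving_processor_accepts` at all without a bilateral agreement with processor A – the definition of a payment silo.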

Transacted Item Identifier

It would be great to have a standard way of specifying every single item in this world and that is pretty much what RDF is about. However until such day that every single object in the world is specified by RDF, we believe it is best to just identify the purchased item with a url.

This argument seems to be saying two contradictory things: 1) it would be great to have a standard way of describing machine-readable items on the Web, and 2) until that happens, we should just use URLs.

PaySwarm defines exactly how to express machine-readable items on the Web. Since the first part of the statement is true today, the last part of the statement becomes unnecessary. Furthermore, both OpenTransact and PaySwarm use IRIs for transacted item identifiers – that was never in question. OpenTransact uses an IRI to identify the transacted item. PaySwarm uses an IRI to identify the transacted item, but also ensures that the item is machine-readable and digitally signed by the seller for security purposes.

There are at least two reasons that you cannot just depend on URLs for describing items on the Web without also specifying how those items can be machine-readable and verifiable.

The first reason is that the seller can change what is at the end of a URL over time, which is a tremendous liability to those purchasing the item if the item’s description is not stored at the time of sale. For example, assume someone sells you an item described by a URL. When you look at that URL just before you buy the item, it states that you are purchasing tickets to a concert. However, after you make the purchase, the person that sold you the item changes the content at the end of the URL to make it seem as if you purchased an article about their experience at the concert, not the ticket to attend the concert. PaySwarm protects against this attack by ensuring that a machine-readable description of what is being transacted is shown to the buyer before the sale and then embedded in the receipt of sale.

The second reason is that the URL, if served over HTTP, can be intercepted and changed, such that the buyer ends up purchasing something that the seller did not approve for sale. PaySwarm addresses this security issue by ensuring that all offers for sale must be digitally signed by the seller.
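The first attack can be sketched concretely. PaySwarm uses full digital signatures over the listing; a bare SHA-256 digest here is a simplified stand-in, and the listing fields are illustrative:

```python
# Sketch: hash the listing as the buyer saw it and embed that digest in
# the receipt, so a later change at the same URL is detectable.
import hashlib, json

def digest(listing):
    # Canonical serialization so the same listing always hashes the same.
    return hashlib.sha256(
        json.dumps(listing, sort_keys=True).encode()).hexdigest()

listing = {"id": "https://seller.example/items/1",
           "title": "Concert ticket", "price": "25.00"}
receipt = {"asset": listing["id"], "assetDigest": digest(listing)}

# Later, the seller swaps the content at the same URL...
tampered = {"id": "https://seller.example/items/1",
            "title": "An article about the concert", "price": "25.00"}

print(digest(listing) == receipt["assetDigest"])   # True
print(digest(tampered) == receipt["assetDigest"])  # False -- tamper detected
```

A bare URL in the receipt, with nothing fixing its content at the time of sale, offers no equivalent check.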

Alternative Currencies

In most cases the currency mint is equal to the transaction processor.

The currency mint is not equivalent to the transaction processor. Making that assertion conflates two important concepts; 1) the issuer of a currency, and 2) the transaction processors that are capable of transacting in that currency. To put it in different terms, that’s as if one were to say that the US Treasury (the issuer of the USD currency) is the same thing as a local bank in San Francisco (an entity transacting in USD).

Access Control Delegation

But Digital Signatures only solve the actual access process so you have to create your home built authorization and revocation scheme to match what OAuth 2 gives us for free.

In software development, nothing is free. There are always design trade-offs and the design trade-off that OpenTransact has made is to adopt OAuth 2 and punt on the problem of machine-readable and verifiable assets, licenses, listings, and digital contracts. While Pelle makes the argument that OpenTransact may add digital signature support in the future, the final solution would require that both OAuth 2 and digital signatures be implemented.

PaySwarm does not reject OAuth 2 because it is a bad specification; it rejects it because it overly complicates the implementation details of the open payment platform. PaySwarm relies on digital signatures instead of OAuth 2 for the same reason that it relies on JSON instead of XML. XML is a perfectly good technology, but JSON is simpler and solves the problem in a more elegant way. That is, adding XML to the specification would needlessly over-complicate the solution, which is why it was rejected.

Furthermore, PaySwarm had previously been implemented using OAuth and we found it to be overly complicated because of this very reason. OAuth and digital signatures largely duplicate functionality and since PaySwarm requires digital signatures to offer a secure, distributed, open payment platform, the most logical thing was to remove OAuth. By removing OAuth, no functionality was sacrificed and the overall system was simplified as a result.
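The "signed message instead of bearer token" pattern can be sketched as follows. The standard library has no RSA, so HMAC-SHA256 stands in for the RSA signature the specification actually uses, and the message shape is an illustrative assumption:

```python
# Sketch: the request itself carries the proof of who authorized it, so
# there is no token to issue, store, refresh, or revoke.
import hashlib, hmac, json

SECRET = b"stand-in for Alice's private signing key"

def sign(message):
    # Canonical serialization, then sign the resulting bytes.
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(message, signature):
    return hmac.compare_digest(sign(message), signature)

msg = {"destination": "https://authority.example/accounts/bob",
       "amount": "1.00"}
sig = sign(msg)

print(verify(msg, sig))                          # True
print(verify({**msg, "amount": "100.00"}, sig))  # False -- altered in transit
```

The second check is the key property: the signature covers the message contents, so it simultaneously authorizes the request and protects it from modification, which is the duplicated functionality the post describes.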

Machine Readable Metadata

Every aspect of PaySwarm falls apart if everything isn’t created using machine readable metadata. This would be great in a perfect greenfield world. However while meta data is improving as people are adding open graph and other stuff to their pages for better SEO and Facebook integration, there are many ways of doing it and a payment standard should not be specifying how every product is listed, sold or otherwise.

This argument is a bit strange – on one hand, it is asserted that it would be great if a product could be listed in a way that is machine-readable while simultaneously stating that a payment standard shouldn’t do it. More simply, the argument is – it would be great if we did X, but we shouldn’t do X.

Why shouldn’t a payment platform standard specify how items should be listed for sale? If there are many ways of specifying how a product should be listed for sale, isn’t that a good candidate for standardization? After all, when product listings are machine-readable, we can automate a great deal of what previously required human intervention.

The reason that Open Graph and Facebook integration happened so quickly across a variety of websites is that it provided good value for the website owners as well as Facebook. It allowed websites to be more accurately listed in feeds. It also allowed Facebook to leverage the people in its enormous social network to categorize and label content, something that had been impossible on a large scale before. The same is true for Google Rich Snippets and the recent work launched by Google, Microsoft, Yahoo! and Yandex. Website owners can now mark up people, events, products, reviews, and recipes in a way that is machine-readable and that shows up directly in search engine results pages, in an enhanced form compared to regular search listings.

Making things in a web page machine-readable, like products for sale, automates a very large portion of what used to require human oversight. When we automate processes like these, we are able to gain efficiencies and innovate on top of that automation. Specifying how a product should be marked up on the Web in order to be transacted via an open Web payment platform is exactly what should be standardized in a specification, and this is exactly what PaySwarm does.

Recurring payments

With OpenTransact we are still discussing how to specify recurring payments. Before we add it to the standard we would like a couple of real world implementations experiment with it.

This is a chicken and egg problem – at some point, someone has to propose a way to perform recurring payments for an open payment platform. When the OpenTransact specification states that it won’t implement recurring payments until somebody else implements recurring payments, then the problem is just shifted to another community that must do the hard work of figuring out how to implement recurring payments.

PaySwarm has gone to the trouble of specifying exactly how recurring payments are performed. There are many other implementations of recurring payments implemented by the many credit card transaction processors, PayPal, Google Checkout, and Amazon Payments, to name a few. There are many real-world implementations of recurring payments today, so it is difficult to understand exactly what the designers of OpenTransact are waiting on.
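One of the details a recurring-payments specification must nail down is the schedule itself. Here is a minimal sketch, an illustrative assumption rather than the PaySwarm algorithm, of computing monthly charge dates with the day-of-month clamped to month length:

```python
# Sketch: given a start date, when is each monthly charge due?
# Dates past the end of a month are clamped (e.g. Jan 31 -> Feb 29).
import calendar
from datetime import date

def monthly_charges(start, count):
    """Yield `count` monthly charge dates beginning at `start`."""
    year, month, day = start.year, start.month, start.day
    for _ in range(count):
        last_day = calendar.monthrange(year, month)[1]
        yield date(year, month, min(day, last_day))
        month += 1
        if month > 12:
            month, year = 1, year + 1

schedule = list(monthly_charges(date(2012, 1, 31), 3))
print(schedule)  # Jan 31, Feb 29 (2012 is a leap year), Mar 31
```

Even this tiny example exposes a real interoperability question – what happens to a charge dated the 31st in February – that each of the existing proprietary implementations answers differently, which is exactly why a standard should answer it once.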

Financial Institution Interoperability

OpenTransact is most certainly capable of interoperability between financial institutions. We don’t specify how one payment provider transacts with another payment provider, but it is generally understood that they do so with OpenTransact.

The statement above seems to contradict itself. On one hand, it states that OpenTransact is capable of interoperability between financial institutions. On the other hand, it states that OpenTransact does not specify how one payment provider transacts with another payment provider.

By definition, you do not have interoperability if you do not specify how one system interoperates with another. Furthermore, claiming that two systems interoperate while not specifying how they interoperate is an invitation for collusion between financial institutions and a step backwards from how financial institutions operate today – at the very least, there is an inter-bank monetary transfer protocol that you can utilize if you are a bank. This functionality, detailing how two payment processors interact, is out of scope for OpenTransact.

Digital Signatures

Digital signatures are beautiful engineering constructs that most engineers who have worked with them tend to hold up in near religious reverence. You often hear that a digital signature makes a contract valid and it supports non-repudiation.

PaySwarm does not hold up digital signatures in religious reverence, nor does it assert that by using a digital signature, a digital contract is automatically a legally enforceable agreement. What PaySwarm does do is utilize digital signatures as a tool to provide system security. It also utilizes digital signatures so that simple forgeries on digital contracts cannot be performed.

By not supporting digital signatures in its core protocol, OpenTransact greatly limits the use cases that can be addressed with the standard. Those use cases are regarded as very important to the PaySwarm work and thus cannot be ignored.

Secure Communication over HTTP

We are not trying to reinvent TLS because certs are expensive, which is what PaySwarm proposes.

PaySwarm does not try to re-invent TLS. PaySwarm utilizes TLS to provide security against man-in-the-middle attacks and to create a secure channel from the customer to their PaySwarm Authority and from the merchant to their PaySwarm Authority. What PaySwarm also does is allow sellers to run an online storefront from their website over regular HTTP, greatly reducing the cost of setting up and operating an online storefront. The goal is for the barrier to entry for a vendor on PaySwarm to cost absolutely nothing, enabling a large group of people who were previously unable to participate in electronic commerce via their websites to do so without an up-front monetary investment.

Continuing the Discussion

The fundamental point made in this blog post is that by being hyper-focused on initiating payment transfers, OpenTransact misses the bigger picture of ensuring interoperability in an open payment platform. Until this issue is addressed and it is demonstrated that OpenTransact is capable of addressing more than just a few of the simplest use cases supported by PaySwarm, I fear that it will not pass the rigors of the standardization process.

In his blog post, Pelle responded to only about half of the analysis of OpenTransact; further analysis of his responses will be published once he finds the time to post the remainder.

If you are interested in listening in on or participating in the discussion, please consider joining the Web Payments Community Group mailing list at the World Wide Web Consortium (W3C) (it’s free, and anyone can join!).

Follow-up to this blog post

[Update 2012-01-02: Second part of response by Pelle to this blog post: OpenTransact vs PaySwarm part 2 – yes it’s still mostly out of scope]

[Update 2012-01-08: Rebuttal to second part of response by Pelle to this blog post: Web Payments: PaySwarm vs. OpenTransact Shootout (Part 3)]

* Many thanks to Dave Longley, who reviewed this post and suggested a number of very useful changes.

Web Payments: PaySwarm vs. OpenTransact Shootout

The W3C Web Payments Community Group was officially launched in August 2011 for the purpose of standardizing technologies for performing Web-based payments. The group launched at that time because Digital Bazaar had made a commitment to publish the PaySwarm technology as an open standard and eventually place it under the standardization direction of the W3C. During last week’s Web Payments telecon, a discussion ensued about using the OpenTransact specification as the basis for the Web Payments work at W3C. Inevitably, the group will have to thoroughly vet both technologies to see if it should standardize PaySwarm, OpenTransact, or both.

This blog post is a comparison of the list of features that both technologies have outlined as being standardization candidates for version 1.0 of each specification. The comparison uses the latest published specifications as of the time of this blog post, OpenTransact (October 19th, 2011) and PaySwarm (December 14th, 2011). Here is a brief summary table on the list of features supported by each proposed standard:

| Feature | PaySwarm 1.0 | OpenTransact 1.0 |
| --- | --- | --- |
| IRIs for Identifiers | Yes | Yes |
| Basic Financial Transfer | Yes | Yes |
| Payment Links | Yes | Yes |
| Item For Sale Identifier | Yes | Yes |
| Micropayments | Yes | Yes |
| Access Control Delegation | Digital Signatures | OAuth 2.0 |
| Alternative Currencies | Yes | Centralized |
| Machine Readable Metadata | Yes | Only for items for sale |
| Recurring Payments | Yes | Use case exists, but no spec text |
| Transaction Processor Interoperability | Yes | No |
| Extensible Machine Readable Metadata | Yes | No |
| Transactions | Yes | No |
| Currency Exchange | Yes | No |
| Digital Signatures | Yes | No |
| Secure Communication over HTTP | Yes | No |
| Decentralized Publishing of Items for Sale | Yes | No |
| Decentralized Publishing of Licenses | Yes | No |
| Decentralized Publishing of Listings | Yes | No |
| Digital Contracts | Yes | No |
| Verifiable Receipts | Yes | No |
| Affiliate Sales | Yes | No |
| Secure Vendor-routed Purchases | Yes | No |
| Secure Customer-routed Purchases | Yes | No |
| Currency Mints | Yes | No |
| Crowd-funding | Yes | No |
| Data Portability | Yes | No |

IRIs for Identifiers

If a payment technology is going to integrate cleanly with the Web, it should identify the things that it operates on in a Web-friendly way. Identifiers are at the heart of most Internet-based systems, so it is important that an identifier can be used across different systems operating in different locations around the world. The Internationalized Resource Identifier (IRI), of which Uniform Resource Locators (URLs) are a subset, provides a globally scalable mechanism for creating distributed, dereferenceable identifiers.

PaySwarm uses IRIs to identify things like identities, financial accounts, assets, licenses, listings, transactions, contracts, and a variety of the other things that must be expressed when building an open protocol for a financial system.

OpenTransact uses IRIs to identify Asset Services, identities, transfer receipts, callbacks, and providers of items for sale. While OpenTransact does not detail many of the “things” that PaySwarm does, the implicit assumption is that those “things” would also have IRIs as their identifiers.

Basic Financial Transfer

An open payment protocol for the Web must be able to perform a simple financial transfer from one financial account to another. This simple exchange is different from the more complex transaction, as outlined below, which allows multiple financial transfers to occur across multiple accounts during a single transaction.

PaySwarm supports transfers from one financial account to another both within systems and between systems.

OpenTransact outlines transfers from one account to another within a system. It is unclear whether it supports financial transfers between systems since the implementation details of how the transfer occurs are not explained in the specification.

Payment Links

A Payment Link is a clickable link in a Web browser that enables one person or organization on the Web to request payment from another person or organization on the Web. Clicking on the link initiates the transfer process. The URL query parameters are standardized such that all payment processors on the Web would implement a single set of known query parameters, thus easing implementation burden when integrating with multiple payment systems.

A PaySwarm Authority will rely on the Payment Links specification for the proper query parameters for a Payment Link. These query parameters are intended to overlap almost entirely with the OpenTransact query parameters.

The OpenTransact specification outlines a set of query parameters for payment links.
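To illustrate the idea, here is a minimal Python sketch of building and parsing such a link. The endpoint URL and the parameter names (`to`, `amount`, `note`) are hypothetical stand-ins, not the exact names defined by either specification.

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_payment_link(base, **params):
    """Construct a payment request link from a standardized set of query parameters."""
    return base + "?" + urlencode(params)

# Hypothetical payment processor endpoint and parameter names.
link = build_payment_link(
    "https://payments.example.com/usd",
    to="bob@example.com",   # payee identifier
    amount="5.00",          # requested amount
    note="Lawn mowing",     # human-readable reason for the request
)

# Any processor implementing the same parameter names can parse the request.
parsed = parse_qs(urlparse(link).query)
```

Because every processor would parse the same parameter names, a payment library only needs to be written once to work against any compliant system.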

Transacted Item Identifier

Being able to identify an item that is the cause of a financial transfer on the Web is important because it enables the payment system to understand why a particular transfer occurred. Furthermore, ensuring that the item identifier is dereferenceable allows humans, and in some cases machines, to view details about the item for sale.

PaySwarm calls an item that can be transacted an asset and ensures that the description of every asset on the network meets five important criteria: the asset is identified via an IRI, the asset description can be retrieved by dereferencing the IRI, the asset description is human-readable, the asset description is machine-readable, and the asset description can be frozen in time (to ensure the description of the asset does not change from one purchase to the next).
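A minimal sketch of the "frozen in time" criterion: hash a canonical serialization of the asset description so that any later change is detectable. The field names below are illustrative, not the actual PaySwarm vocabulary.

```python
import json
import hashlib

# Hypothetical machine-readable asset description (illustrative field names).
asset = {
    "id": "https://vendor.example.com/assets/song-42",  # identified via an IRI
    "title": "My Song",                                 # human-readable detail
    "creator": "https://vendor.example.com/me",
}

def freeze(description):
    """'Freeze' a description by hashing its canonical serialization:
    any later change to the asset yields a different digest."""
    canonical = json.dumps(description, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

digest = freeze(asset)
assert freeze(asset) == digest                          # unchanged asset verifies
assert freeze({**asset, "title": "Rental"}) != digest   # tampering is detectable
```

Storing the digest alongside the purchase record is what prevents the ebook-turned-rental problem described below for OpenTransact.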

OpenTransact leaves the identification of the item being transacted a bit more open-ended and does not assign a conceptual name to the item. It ensures that the item is identified by an IRI and that de-referencing the IRI results in an item description. The specification is silent on whether or not the item description is required to be human-readable or machine-readable and does not require that the item description can be frozen in time. Since the item IRI is saved in the receipt but a machine-readable description is not, it allows the vendor to change the description of the purchased item at any point. For example, if the buyer purchased an ebook on day one, the vendor can change the purchase to seem as if it were a rental of the ebook on day two.

Micropayments

Micropayments allow the transmission of very small amounts of money from sender to receiver. For example, paying $0.05 for access to an article would be considered a micropayment. One of the primary benefits of micropayments is that they enable easy-to-use pay-as-you-go services. Micropayments also allow one to transfer small amounts of funds at a time without incurring high per-transaction fees.

There is no lower limit on the amount of a PaySwarm transaction. However, most transaction processors will limit payments to no less than one ten-millionth of the smallest whole denomination of a currency. For example, the smallest transaction possible in US Dollars using the PaySwarm software that Digital Bazaar is developing is $0.0000001 (one ten-millionth of a US Dollar).

OpenTransact is not limited in the smallest amount that is transferable.
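Amounts this small are a reason implementations tend to use exact decimal arithmetic rather than binary floats; a small sketch of handling the $0.0000001 granularity mentioned above:

```python
from decimal import Decimal

# Represent amounts as Decimal strings so a one-ten-millionth-of-a-dollar
# micropayment is exact; binary floats cannot represent 0.0000001 exactly.
tick = Decimal("0.0000001")
total = tick * 10_000_000   # ten million micro-transfers
```

Summed this way, ten million micropayments come out to exactly one dollar, with no accumulated rounding drift.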

Alternative Currencies

Alternative currencies like Ven, Bitcoin, time banking, Bernal Bucks and various gaming currencies like XBox Points are being increasingly used to solve niche economic problems that traditional fiat currencies have not been able to address. In order to ensure that experimentation with alternative currencies is supported, a Web-based payment protocol should ensure that those currencies can be easily created and exchanged.

PaySwarm allows currencies to be specified by either an internationalized currency code, like “USD”, or by an IRI that identifies a currency. This means that anyone capable of minting an IRI, which is just about anybody on the Web, has the ability to create a new alternative currency. The one drawback for alternative currencies is that there must also be a location, or network, on the Web that acts as the currency mint. The concept of a currency mint is covered later in this blog post.
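The dual identifier scheme can be sketched as a small validation routine; the exact rules PaySwarm applies may differ, so treat this as illustrative only.

```python
from urllib.parse import urlparse

def is_valid_currency(c):
    """Accept either an ISO-style three-letter currency code ("USD") or a
    dereferenceable IRI identifying an alternative currency (a sketch;
    the actual PaySwarm validation rules may differ)."""
    if c.isalpha() and c.isupper() and len(c) == 3:
        return True  # conventional currency code
    parsed = urlparse(c)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

assert is_valid_currency("USD")
assert is_valid_currency("https://mint.example.com/currencies/ven")  # hypothetical mint IRI
assert not is_valid_currency("dollars")
```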

OpenTransact supports alternative currencies by creating what it calls Asset Service endpoints. These endpoints can be for the transmission of fixed currencies, stocks, bonds, or other financial instruments.

Access Control Delegation

When implementing features like recurring payments, some form of access control is often required to ensure that a vendor honors the agreement that they created with the buyer. For example, giving permission to a vendor to bill you for up to $10 USD per month requires that some sort of access control privileges be assigned to the vendor. This access control mechanism allows the vendor to make withdrawals without needing to repeatedly bother the buyer. It also gives power to the buyer if they ever want to halt payment to the vendor. There are two primary approaches to access control on the Web: OAuth and digital signatures.

PaySwarm relies on digital signatures to ensure that a request for financial transfer is coming from the proper vendor. Setting up access control is a fairly simple process in PaySwarm, consisting of two steps. The first step requires the vendor to request access from the buyer via a Web browser link. The second step requires that the buyer select which privileges, and limits on those privileges, they are granting to the vendor. These privileges and limits may effectively make statements like: “I authorize this vendor to bill me up to $10 per month”. The vendor may then make financial transfer requests against the buyer’s financial account for up to $10 per month.
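The signature check plus spending limit can be sketched in a few lines. PaySwarm actually uses public-key signatures; a shared-secret HMAC stands in here so the example runs with only the standard library, and all names are illustrative.

```python
import hmac
import hashlib
import json

# Stand-in for the vendor's real signing key (PaySwarm uses public-key pairs).
vendor_key = b"vendor-secret"

def sign(request, key):
    """Sign a canonical serialization of the transfer request."""
    payload = json.dumps(request, sort_keys=True).encode("utf-8")
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def authorize(request, signature, key, monthly_limit):
    """Honor the request only if the signature checks out and the amount
    stays within the buyer-granted limit ('up to $10 per month')."""
    expected = sign(request, key)
    return hmac.compare_digest(expected, signature) and request["amount"] <= monthly_limit

req = {"account": "https://bank.example.com/accounts/alice", "amount": 9.50}
sig = sign(req, vendor_key)
assert authorize(req, sig, vendor_key, monthly_limit=10.00)
# A tampered amount invalidates the signature, so the request is refused.
assert not authorize({**req, "amount": 50.0}, sig, vendor_key, monthly_limit=10.00)
```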

OpenTransact enables access control via the OAuth 2 protocol, which is capable of supporting similar privileges and limitations as PaySwarm. However, the OAuth 2 protocol does not allow for digital signatures external to the protocol and thus would also require a separate digital signature stack in order to support things like machine-verifiable receipts and decentralized assets, licenses and listings.

Machine Readable Metadata

One of the benefits of creating an open payment protocol is that the protocol can be further automated by computers if certain information is exposed in a machine-readable way. For example, if a digital receipt were exposed such that it was machine-readable, access to a website could be granted merely by transmitting the digital receipt to the website.

PaySwarm requires that all data that participates in a transaction is machine readable in a deterministic way. This ensures that assets, licenses, listings, contracts and digital receipts (to name a few) can be transacted and verified without a human in the loop. This increased automation leads to far better customer experiences on websites, where a great deal of the annoying practice of filling out forms can be skipped entirely because machines can automatically process things like digital receipts.

OpenTransact describes exactly what receipt metadata looks like – it’s a JSON object that contains a number of fields. It also outlines that implementations could ask for descriptions of OpenTransact Assets and the associated list of transactions that are associated with those Assets. However, there is no mechanism that allows this metadata to be extended in a deterministic fashion. This limitation will be further detailed below.

Recurring Payments

A recurring payment enables a buyer to specify that a particular vendor can bill them at a periodic interval, removing the burden of having to remember to pay bills every month. Recurring payments require a certain level of Access Control Delegation.

Recurring payments are supported in PaySwarm by pre-authorizing a vendor to spend a certain limit at a pre-specified time interval. Many other rules can be put into place as well. For example, the buyer could limit the time period that the vendor can operate or the transaction processor could send an e-mail every time the vendor withdraws money from the buyer.
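The per-interval limit described above can be sketched as simple bookkeeping on the transaction processor's side; the class and field names are illustrative, not part of the PaySwarm specification.

```python
from collections import defaultdict

class RecurringAuthorization:
    """Sketch of a pre-authorization: the vendor may withdraw up to `limit`
    per billing period (names and behavior are illustrative only)."""

    def __init__(self, limit):
        self.limit = limit
        self.spent = defaultdict(float)  # billing period -> amount withdrawn so far

    def withdraw(self, period, amount):
        if self.spent[period] + amount > self.limit:
            return False                 # would exceed the buyer-granted cap
        self.spent[period] += amount
        return True

auth = RecurringAuthorization(limit=10.00)
assert auth.withdraw("2012-01", 7.50)
assert not auth.withdraw("2012-01", 5.00)  # would exceed $10 this month
assert auth.withdraw("2012-02", 5.00)      # new interval, fresh limit
```

The e-mail notification rule mentioned above would simply hook into each successful `withdraw` call.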

While a recurring payments use case exists for OpenTransact, no specification text has been written on how one can accomplish this feat from a technical standpoint.

Transaction Processor Interoperability

For an open payment protocol to be truly open, it must provide interoperability between systems. At the most basic level, this means that a transaction processor must be able to take funds from an account on one system and transfer those funds to a different account on a different system. Interoperability often goes deeper than that, however, as transferring transaction history, accounts, and preference information from one system to the next is important as well.

Since PaySwarm lists financial accounts in a decentralized way (as IRIs), there is no reason that two financial accounts must reside on the same system. In fact, PaySwarm is built with the assumption that payments will freely flow between various payment processors during a single transaction. This means that a single transaction could contain multiple payees and each one of those payees could reside on a different system, and as long as each system adheres to the PaySwarm protocol, the money will be transmitted to each account. While the specification text has not yet been written, the PaySwarm system will not be fully standardized until the protocol for performing the back-haul communication between each PaySwarm Authority is detailed in the specification. That is, system to system inter-operability is a requirement of the protocol.

The OpenTransact specification identifies senders and receivers in a decentralized way (as IRIs). There are no plans to specify system-to-system transactions in the OpenTransact 1.0 specification. The type of interoperability that OpenTransact provides is library API-level compatibility. That is, those writing software libraries for financial services need only implement one set of query parameters to work with an OpenTransact system. However, the specification does not specify how money flows from one system to the next. There is no system-to-system interoperability in OpenTransact, and thus nothing prevents transaction processor lock-in.

Extensible Machine Readable Metadata

Having machine-readable data allows computers to automate certain processes, such as receipt verification or granting access based on details in a digital contract. However, machine-readable data comes at the cost of having to use rigid data structures. These rigid data structures can be extended in a variety of ways that provide the best of both worlds – machine readability and extensibility. Allowing extensibility in the data structures enables innovation. For example, the addition of new terms in a contract or license would enable new business models not considered by the designers of the core protocol.

PaySwarm utilizes JSON-LD to express data in such a way as to be easily usable by Web programmers, but extensible in a way that is guaranteed to not conflict with future versions of the protocol. This means that assets, licenses, listings, digital contracts, and receipts may be extended by transaction processors in order to enable new business models without needing to coordinate with all transaction processors as a first step.
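A minimal sketch of the JSON-LD-style extension mechanism: a processor adds a term under its own vocabulary IRI, so it can never collide with terms added to the core vocabulary later. Both vocabulary IRIs and all field names below are hypothetical.

```python
import json

# Illustrative JSON-LD-style listing; the vocabulary IRIs are hypothetical.
listing = {
    "@context": {
        "ps": "https://payswarm.example.com/vocab#",    # stand-in core vocabulary
        "ex": "https://processor.example.com/vocab#",   # processor's extension vocabulary
    },
    "ps:asset": "https://vendor.example.com/assets/song-42",
    "ps:price": "0.99",
    # New business-model term, safely namespaced under the extension prefix:
    "ex:loyaltyPoints": 5,
}

# The document remains plain JSON, so any consumer can still round-trip it;
# JSON-LD-aware consumers expand "ex:loyaltyPoints" to a full IRI.
roundtripped = json.loads(json.dumps(listing))
```

A plain-JSON format, by contrast, has no equivalent rule for where third-party fields may go without risking collision with future core fields.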

OpenTransact utilizes JSON to provide machine-readable data for receipts, Assets and transactions. However, it does not specify any mechanism that allows one to extend the data structures without running the risk of conflicting with future advancements to the language.

Transactions

A transaction is defined as a collection of one or more transfers. An example of a transaction is paying a bill at a restaurant. Typically, that transaction results in a number of transfers – there is one from the buyer to the restaurant, another from the buyer to a tax authority, and yet another transfer from the buyer to tip the waiter (in certain countries). While the restaurant gives you a single receipt, the transfer of money is often more complex than just a simple transmission from one sender to one receiver.

PaySwarm models transactions in the way that we model them in the real world – a transaction is a collection of monetary transfers from a sender to multiple receivers. An example of a transaction can be found in the PaySwarm Commerce vocabulary.
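The restaurant example can be written down as data: one transaction containing several transfers, with a simple consistency check. The structure below is illustrative, not the actual PaySwarm Commerce vocabulary.

```python
# Illustrative transaction: one payment event, several monetary transfers.
transaction = {
    "total": 23.00,
    "transfers": [
        {"from": "buyer", "to": "restaurant",    "amount": 19.00},
        {"from": "buyer", "to": "tax-authority", "amount": 1.00},
        {"from": "buyer", "to": "waiter",        "amount": 3.00},  # the tip
    ],
}

# A processor can verify that the transfers account for the full amount
# before issuing a single receipt for the whole transaction.
assert sum(t["amount"] for t in transaction["transfers"]) == transaction["total"]
```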

OpenTransact models only the most low-level concept of a financial transfer and leaves implementation of transactions to a higher-level application. That is, transactions are considered out-of-scope for the OpenTransact specification and are expected to be implemented at the application layer.

Currency Exchange

A currency exchange allows one to exchange a set amount of one currency for a set amount of another currency. For example, exchanging US Dollars for Japanese Yen, or exchanging goats for chickens. A currency exchange functions as a mechanism for transforming something of value into something else of value.

PaySwarm supports currency exchanges via digital contracts that outline the terms of the exchange.

OpenTransact asserts that currency exchanges can be implemented on top of the specification, but states that the details of that implementation are strictly outside of the specification. One of the primary features required for a functioning currency exchange is the concept of a currency mint, which is also outside of the scope of OpenTransact.

Digital Signatures

A digital signature is a mechanism that is used to verify the authenticity of digital messages and documents. Much like a hand-written signature, it can be used to prove the identity of the person sending a message or signing a document. Digital signatures have a variety of uses in financial systems, including: sending and receiving verifiable messages, access control delegation, sending and receiving secure/encrypted messages, counter-signing financial agreements, and ensuring the authenticity of digital goods.

PaySwarm has direct support for digital signatures and utilizes them to provide the following capabilities: sending and receiving verifiable messages, access control delegation, sending and receiving secure/encrypted messages, counter-signing financial agreements, and ensuring the authenticity of assets, licenses, listings, digital contracts, intents to purchase, and verifiable receipts.

OpenTransact does not support digital signatures in the specification.

Secure Communication over HTTP

While HTTP has served as the workhorse protocol for the Web, its major weakness is that it is not secure unless wrapped in a Transport Layer Security (TLS, aka SSL) connection. Unfortunately, TLS/SSL certificates are costly, and one of the design goals for an open protocol for Web payments should be reducing the financial burden placed on the network participants. Another way to secure HTTP traffic is to encrypt parts of the HTTP message (for example, with AES) or to digitally sign them. This approach results in a zero-financial-cost solution for implementing a secure message channel inside of an HTTP message.

Since PaySwarm supports AES and digital signatures by default, it is also capable of securing communication over HTTP.

OpenTransact relies on OAuth 2 and thus requires a financial commitment from the participant if they want to secure their network traffic via TLS. There is no way to use OpenTransact over HTTP in a secure manner without TLS. However, this is not a problem for the subset of use cases that OpenTransact aims to solve. It does not address the cases where a digital signature or encryption is required to communicate over an un-encrypted HTTP message channel.

Decentralized Publishing of Items for Sale

A vendor would like to have their products listed for sale as widely as possible while retaining control over the item’s machine-readable description, regardless of where it is listed on the Web. It is important to be able to list items for sale in a secure manner, while allowing the flexibility for that item to be expressed on sites that are not under the vendor’s control. Decoupling the machine-readable description of an item for sale from the payment processor allows both mechanisms to be innovated upon on different time-frames by different people. Centralized solutions are often easier to implement, but far less flexible than decentralized solutions. This holds true for how items for sale are listed. Allowing vendors to have full control over how their items for sale appear to those browsing their wares is something that a centralized solution cannot easily offer.

PaySwarm establishes the concept of an asset and describes how an asset can be expressed in a secure, digitally signed, decentralized manner.

OpenTransact does not support machine-readable descriptions of items, nor does it support digital signatures to ensure that items cannot be tampered with by third-parties, or even the originating party.

Decentralized Publishing of Licenses

When a sale occurs, there is typically a license that governs the terms of sale. Often, this license is implied based on the laws of commerce governing the transaction in the region in which the transaction occurs. What would be better is if the license could be encapsulated into the receipt of sale and specified in a way that is decoupled from the financial transaction processor and from the item being purchased. This would ensure that people and organizations that specialize in law could innovate and standardize a set of licenses independently of the rest of the financial system.

PaySwarm establishes the concept of a license and describes how it can be expressed in a secure, digitally signed, decentralized manner. Licenses typically contain boilerplate text which is sprinkled with configurable parameters such as “warranty period in days from purchase” and “maximum number of physical copies” (for things like manufacturing).

OpenTransact does not support machine-readable licenses or embedded licenses, nor does it support digital signatures to ensure that the license cannot be tampered with by third-parties. License tampering isn’t just a problem when transmitting the license over insecure channels; it is also an issue if the originator of the license changes the contents of the license.

Decentralized Publishing of Listings

A listing specifies the payment details and license under which an asset can be transacted. Giving a vendor full control over when, where and how a listing is published is vital to ensuring that new business models that depend on when and how items are listed for sale can be innovated upon independently of the financial network. So, it becomes important that listings can not only be expressed in a decentralized manner, but are also tamper-proof and redistributable across the Web, while ensuring that the vendor stays in control of how long a particular offer lasts.

PaySwarm establishes the concept of a listing and describes how it can be expressed in a secure, digitally signed, decentralized manner. Decentralized listings allow assets described in the listings to be sold on a separate site, under terms set forth by the original asset owner. That is, a pop-star could release their song as a listing on their website, and the fans could sell it on behalf of the pop-star while making a small profit from the sale. In this scenario, the pop-star gets the royalties they want, the fan gets a cut of the sale, and mass-distribution of the song is made possible through a strongly motivated grass-roots effort by the fans.

OpenTransact does not support machine-readable listings, nor does it support digital signatures to ensure that the listing cannot be tampered with by third-parties.

Digital Contracts

A contract is the result of a commercial transaction and contains information such as the item purchased, the pricing information, the parties involved in the transaction, the license outlining the rights to the item, and payment information associated with the transaction. A digital contract is machine-readable, is self-contained, and is digitally signed to ensure authenticity.

PaySwarm supports digital contracts as the primary mechanism for performing complex exchanges of value. Digital contracts support business practices like intent-to-purchase, being able to purchase an asset under different licenses (such as personal use and broadcast use), and digital receipts.

OpenTransact does not support digital contracts nor does it support digital signatures.

Verifiable Receipts

A verifiable receipt is a receipt that contains a digital signature such that you can verify the accuracy of the receipt contents. Verifiable receipts are helpful when you need to show the receipt to a third party to assert ownership over a physical or virtual good. For example, a music fan could show a verifiable receipt confirming that they purchased a certain song from an artist to get a discount on tickets to an upcoming show. There would not need to be any coordination between the original vendor of the songs and the vendor of the tickets if a verifiable receipt was used as a proof-of-purchase.

PaySwarm supports verifiable receipts, even when the signatory of the receipt is offline. The only piece of information necessary is the public key of the PaySwarm Authority. This means that receipts can be verified even if the transaction processor is offline or goes away entirely.
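The offline property can be sketched as a purely local verification function: only the authority's key material is needed, so no network call is made. Real PaySwarm receipts carry public-key signatures; an HMAC stands in here so the sketch runs with only the standard library.

```python
import hmac
import hashlib
import json

# Stand-in for the PaySwarm Authority's signing key (really a public-key pair).
authority_key = b"authority-key"

def issue_receipt(contract):
    """Authority signs a canonical serialization of the contract."""
    body = json.dumps(contract, sort_keys=True).encode("utf-8")
    sig = hmac.new(authority_key, body, hashlib.sha256).hexdigest()
    return {"contract": contract, "signature": sig}

def verify_receipt(receipt):
    """Purely local check: works even if the authority is offline or gone."""
    body = json.dumps(receipt["contract"], sort_keys=True).encode("utf-8")
    expected = hmac.new(authority_key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])

receipt = issue_receipt({"item": "song-42", "price": "0.99"})
assert verify_receipt(receipt)            # no network access needed
receipt["contract"]["price"] = "0.01"
assert not verify_receipt(receipt)        # tampering is detected
```

This is what lets the ticket vendor in the example above accept a proof-of-purchase without ever contacting the song vendor.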

OpenTransact does support receipts delivered via the Asset Service, using OAuth 2 as the access control mechanism. This means that retrieving a receipt for validation requires having a human in the loop. A verifying website would need to request an OAuth 2 token, the receipt-holder would need to grant access to the verifying website, and then the verifying website would use the token to retrieve the receipt. Currently, OpenTransact does not support digitally signed receipts, and thus it does not support receipt verification if the Asset Service is offline.

Affiliate Sales

When creating content for the Web, getting massively wide distribution typically leads to larger profits. Therefore, it is important for people that create items for sale to be able to grant others the ability to redistribute and profit off of the redistribution, as long as the original creator is compensated under their terms for their creation. Typically, this is called the affiliate model – where a single creator allows wide-distribution of their content through a network of affiliate sellers.

PaySwarm supports affiliate re-sale through digitally signed listings. A listing associates an asset for sale, the license under which use of the asset is governed, and the payment amount and rules associated with the asset. These listings can be used on their originating site or on a third-party site. Security of the terms of sale specified in the listings is ensured through the use of digital signatures.

OpenTransact does not support affiliate sales.

Secure Vendor-routed Purchases

At times, network connectivity can be a barrier to performing a financial transaction. In these cases, the likelihood that at least one of the transaction participants has an available network connection is high. For example, if a vendor has a physical place of business with a network connection, the customers that frequent that location can depend on the vendor’s network connection instead of requiring their own when processing payments. This is useful when implementing a payment device in hardware, like a smart payment card, without also requiring that hardware device to have a wide-area network communication capability, like a mobile phone. A typical payment flow is outlined below:

  1. The vendor presents a bill to the buyer.
  2. The buyer views the bill and digitally signs the bill, stating that they agree to the charges.
  3. The vendor takes the digitally signed bill and forwards it to the buyer’s payment processor for payment.

PaySwarm supports vendor-routed purchases through the use of digital signatures on digital contracts. When the vendor provides a digital contract to a buyer, the buyer may accept the terms of sale by counter-signing the digital contract from the vendor. The counter-signed digital contract can then be uploaded to the buyer’s PaySwarm Authority to transfer the funds from the buyer to the vendor. The digital contract is returned to the vendor with the PaySwarm Authority’s signature on the contract to assert that all financial transfers listed in the contract have been processed.
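The counter-signing chain described above can be sketched as follows; HMAC keys stand in for the participants' real public-key signing keys, and all field names are illustrative.

```python
import hmac
import hashlib
import json

# Stand-in signing keys for the three participants (really public-key pairs).
keys = {"vendor": b"v-key", "buyer": b"b-key", "authority": b"a-key"}

def countersign(doc, party):
    """Append `party`'s signature over the contract body and all prior
    signatures, so each signer also vouches for the chain so far."""
    payload = json.dumps(doc, sort_keys=True).encode("utf-8")
    sig = hmac.new(keys[party], payload, hashlib.sha256).hexdigest()
    return {**doc, "signatures": doc.get("signatures", []) + [(party, sig)]}

bill = {"item": "dinner", "amount": "23.00"}
contract = countersign(bill, "vendor")         # 1. vendor presents the bill
contract = countersign(contract, "buyer")      # 2. buyer agrees to the charges
contract = countersign(contract, "authority")  # 3. processor settles and signs

signers = [party for party, _ in contract["signatures"]]
```

Only the final, authority-signed contract is returned to the vendor, which is why the buyer never needs a network connection of their own.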

OpenTransact does not support vendor-routed purchases, requiring instead that both buyer and vendor have network connectivity when performing a purchase.

Secure Customer-routed Purchases

At times, network connectivity can be a barrier to performing a financial transaction. In these cases, the likelihood that at least one of the transaction participants has an available network connection is high. For example, a vendor could set up a sales kiosk without any network connection (such as a vending machine) and route purchase processing via a customer’s mobile phone. A typical payment flow is outlined below:

  1. The buyer selects the product that they would like to purchase from the kiosk (like a soda).
  2. The kiosk generates a bill and transmits it to the mobile device via a NFC connection.
  3. The buyer’s mobile phone digitally signs the bill and sends it to their payment processor for processing.
  4. The digitally signed receipt is delivered back to the buyer’s mobile device, which then transmits it via NFC to the kiosk.
  5. The kiosk checks the digital signature and, upon successful verification, delivers the product to the buyer (a cold, refreshing soda).

PaySwarm supports customer-routed purchases through the use of digital signatures on digital contracts. When the buyer receives the digital contract for the purchase from the kiosk, it is already signed by the vendor which implies that the vendor is amenable to the terms in the contract. The buyer then counter-signs the contract and sends it up to the PaySwarm Authority for processing. The PaySwarm Authority then counter-signs the contract, which is delivered back to the buyer, which then routes the finalized contract back to the kiosk. The kiosk checks the digital signature of the PaySwarm Authority on the contract and delivers the product to the buyer.
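The kiosk’s final check is the interesting part: it can verify the Authority’s signature entirely offline. Here is a minimal sketch in Python, with a hash-based `sign()` standing in for real public-key signatures and hypothetical field names:

```python
import hashlib
import json

def sign(doc, key):
    # Illustrative signature: hash of the canonicalized document plus a key.
    return hashlib.sha256((json.dumps(doc, sort_keys=True) + key).encode()).hexdigest()

def kiosk_verify(contract, authority_key):
    """The offline kiosk only needs the Authority's key material to decide
    whether to dispense the product -- no network connection required."""
    claimed = contract.get("authoritySignature")
    unsigned = {k: v for k, v in contract.items() if k != "authoritySignature"}
    return claimed == sign(unsigned, authority_key)

# The finalized contract, as routed back through the buyer's phone via NFC.
contract = {"asset": "urn:example:soda", "amount": "1.00", "currency": "USD"}
contract["authoritySignature"] = sign(contract, "authority-key")

assert kiosk_verify(contract, "authority-key")      # dispense the soda
assert not kiosk_verify(contract, "attacker-key")   # reject a forged receipt
```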

OpenTransact does not support customer-routed purchases, requiring instead that both buyer and vendor have network connectivity when performing a purchase.

Currency Mints

A currency mint is capable of creating new units of a particular currency. A currency mint is closely related to the topic of alternative currencies, as previously mentioned in this blog post. In order for an alternative currency to enter the financial network, there must be a governing authority or algorithm that ensures that the generation of the currency is limited. If the generation of a currency is not limited in any way, hyperinflation of the currency becomes a risk. The most vital function of a currency mint is to carefully generate and feed currency into payment processors.

PaySwarm supports currency mints by allowing the currency mint to specify an alternative currency via an IRI on the network. The currency IRI is then used as the origin of the currency across all PaySwarm systems. The currency mint can then deposit amounts of the alternative currency into accounts on any PaySwarm Authority through an externally defined process.
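A rough sketch of the idea, with a hypothetical mint IRI and field names (not taken from the PaySwarm spec):

```python
# A currency identified by IRI is unambiguous across independent
# payment processors: the IRI is the currency's single point of origin.
currency_iri = "https://mint.example.com/currencies/fun-bucks"  # hypothetical mint

# Deposits made on two different PaySwarm Authorities...
deposit_a = {
    "to": "https://authority-a.example.com/accounts/alice",
    "amount": "100.00",
    "currency": currency_iri,
}
deposit_b = {
    "to": "https://authority-b.example.com/accounts/bob",
    "amount": "50.00",
    "currency": currency_iri,
}

# ...agree on exactly which currency is meant, because both reference
# the same origin IRI -- the inter-operability that OpenTransact lacks.
assert deposit_a["currency"] == deposit_b["currency"]
```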

OpenTransact does not support currency mints. It does support alternative currencies, but does not specify how the alternative currency can be used across multiple payment processors, and therefore only supports alternative currencies in a non-interoperable way.


Crowd-funding

Crowd-funding is the act of pooling a set of funds together with the goal of effecting a change of some kind. Kickstarter is a great example of crowd-funding in action. There are a number of requirements when crowd-funding:

  • ensure that money is not exchanged until a funding goal has been reached (an intent to fund),
  • if a funding goal is reached – allow the mass collection of money (bulk transfer), and
  • if a funding goal is not reached – invalidate the intent to fund (cancellation).

PaySwarm supports crowd-funding. The digital contracts that PaySwarm uses can express an intent to fund. Mass-collection of a list of digital contracts expressing an intent to fund can occur in an atomic operation, which is important to make sure that the entire amount is available at once. Finally, the digital contracts containing an intent to fund also contain an expiration time for the offer to fund.
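The three crowd-funding requirements can be sketched as a single all-or-nothing collection step; the pledge structure below is hypothetical, not from the PaySwarm spec:

```python
from datetime import datetime, timezone

def collect_pledges(pledges, goal, now):
    """All-or-nothing collection: transfer funds only if the unexpired
    intents to fund meet the goal; otherwise every intent is invalidated."""
    valid = [p for p in pledges if p["expires"] > now]
    total = sum(p["amount"] for p in valid)
    if total >= goal:
        # The bulk transfer would happen here, as one atomic operation.
        return {"funded": True, "collected": total}
    # Goal missed: cancel the intents to fund, no money moves.
    return {"funded": False, "collected": 0}

now = datetime(2011, 12, 1, tzinfo=timezone.utc)
pledges = [
    {"amount": 60, "expires": datetime(2012, 1, 1, tzinfo=timezone.utc)},
    {"amount": 50, "expires": datetime(2012, 1, 1, tzinfo=timezone.utc)},
    {"amount": 40, "expires": datetime(2011, 11, 1, tzinfo=timezone.utc)},  # expired
]
result = collect_pledges(pledges, goal=100, now=now)
assert result == {"funded": True, "collected": 110}
```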

OpenTransact does not support crowd-funding as described above.

Data Portability

Data portability provides the mechanism that allows people and organizations to easily transfer their data across inter-operable systems. This includes identity, financial transaction history, public keys, and other financial data in a way that ensures that payment processors cannot take advantage of platform lock-in. Making data portability a key factor of the protocol ensures that the customers always have the power to walk away if they become unhappy with their payment processor, thus ensuring strong market competition among payment processors.

PaySwarm ensures data portability by expressing all of its Web Service data as JSON-LD. There will also be a protocol defined that ensures that data portability is a requirement for all payment processors implementing the PaySwarm protocol. That is, data portability is a fundamental design goal for PaySwarm.
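As a rough illustration, here is what a portable, JSON-LD-expressed transaction record might look like; the @context IRI and field names are hypothetical, not from the PaySwarm spec:

```python
import json

# A hypothetical transaction record expressed as JSON-LD.  Because every
# term is mapped to an unambiguous IRI via @context, any payment processor
# can interpret the record without vendor-specific, private schemas --
# which is what lets a customer walk away with their data.
record = """
{
  "@context": "https://payswarm.example.com/contexts/v1",
  "@type": "Transfer",
  "amount": "2.50",
  "currency": "USD",
  "date": "2011-12-11T09:30:00Z"
}
"""
data = json.loads(record)
assert data["@type"] == "Transfer"
```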

OpenTransact does not specify any data portability requirements, nor does it provide any system inter-operability requirements. While certainly not done on purpose, this creates a dangerous formula for vendor lock-in and non-interoperability between payment processors.

Follow-up to this blog post

[Update 2011-12-21: Partial response by Pelle to this blog post: OpenTransact the payment standard where everything is out of scope]

[Update 2012-01-01: Rebuttal to Pelle’s partial response to this blog post: Web Payments: PaySwarm vs. OpenTransact Shootout (Part 2)]

[Update 2012-01-02: Second part of response by Pelle to this blog post: OpenTransact vs PaySwarm part 2 – yes it’s still mostly out of scope]

[Update 2012-01-08: Rebuttal to second part of response by Pelle to this blog post: Web Payments: PaySwarm vs. OpenTransact Shootout (Part 3)]

Standardizing Payment Links

Why online tipping has failed.

TL;DR – Standardizing Payment Links for the Web is not enough – we must also focus on listing and transacting assets that provide value to people.

Today was a busy day for PaySwarm/Bitcoin integration. We had a very productive discussion about the PaySwarm use cases, which includes supporting Bitcoin as an alternative currency to government-backed currencies. I also had a very interesting discussion with Amir Taaki, who is one of the primary developers behind Bitcoin, about standardization of a Bitcoin IRI scheme for the Internet. Between those two meetings, Dan Brickley asked an interesting question:

danbri Sept 23rd 2011 12:36pm: @manusporny any thoughts re bitcoin/foaf, vs general ways of describing online payability? @graingert @melvincarvalho

This question comes up often during the PaySwarm work – we’ve been grappling with it for a number of years.

Payment Links Solutions

Payment links have quite a history on the Internet. People have been trying to address payment via the Web for over a decade now with many failures and lessons learned. The answer to the question really boils down to what you’re trying to do with the payment. I’ll first answer what I think Dan was asking: “Do you think we should add a bitcoin term to the Friend of a Friend Vocabulary? Should we think about generalizing this to all types of payment online?”

First, I think that adding a bitcoin term to the FOAF Vocabulary would be helpful for Bitcoin, but a bit short-sighted. This is what people wanting to be paid via Bitcoin typically do today:

Support more articles like this by donating via Bitcoin: 1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa

If you wanted to donate, you would copy/paste the crazy gobbledygook text starting with 1A1z and dump that into your Bitcoin client. One could easily make something like this machine-readable via HTML+RDFa to say that they can be paid at a particular Bitcoin address. To make the HTML above machine-readable, one could do the following:

<div about="#dan-brickley">
Support more articles like this by donating via Bitcoin: 
   <span property="foaf:bitcoin">1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa</span>
</div>

However, that wouldn’t trigger a Bitcoin client to be launched when clicked. The browser would have to know to do something with that data and we’re many years away from that happening. So, using some sort of new Bitcoin IRI scheme that was discussed today might be a better short-term solution:

<div about="#me">
Support more articles like this by 
   <a rel="foaf:tipjar" href="bitcoin:1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa?amount=0.25">donating via Bitcoin</a>.
</div>

This is a pretty typical approach to an online donation system or tipjar. You see PayPal buttons like this on a number of websites today. There are a few problems with it:

  • What happens when there are multiple Bitcoin addresses? How does a Web Browser automatically choose which one to use? It would be nice if we could integrate payment directly into the browser, but if we do that, we need more information associated with the Bitcoin address.
  • What if we want to use other payment systems? The second approach is better because it’s not specific to Bitcoin – it uses an IRI – but can we be more generalized than that? Requiring the creation of a new IRI scheme for every new payment protocol seems like overkill.
  • How should this be used to actually transact a digital good? Is it only good for tipjars? How does this work in a social setting – that is, do most people tip online?

The answer to the first question can be straightforward. A browser can’t automatically choose which Bitcoin address to use unless there is more machine-readable information in the page about each Bitcoin address or unless you can follow-your-nose to the Bitcoin address. You can’t do the latter yet with Bitcoin, so the former is the only option. For example, if the reason for payment were outlined for each Bitcoin address in the page, an informative UI could be displayed for the person browsing the page.

Luckily, this is pretty easy to do with HTML+RDFa, but it does require slightly more markup to associate descriptions with Bitcoin addresses. However, what if we want to move beyond tips? Just describing a payment endpoint is often confusing to people who want to pay for some specific good. Browsers or Bitcoin software may need to know more about the transaction to produce a reasonable summary or receipt for the person browsing, and that can’t be done with the markup of a single link.

The second question is a bit more difficult to answer. It would be short-sighted to just have a vocabulary term for Bitcoin. What happens if Bitcoin fails for some reason in the future? Should FOAF also add a term for Ven payments? What about PaySwarm payments? The FOAF vocabulary already has a term for a tipjar, so why not just use that coupled with a special IRI for the payment method? What may be better is a new term for “preferred payment IRI” – maybe “foaf:financialAccount” could work? Or maybe we should add a new term to the Commerce Vocabulary?

Depending on a payment protocol-specific IRI would require every payment method on the Web to register a new Internet protocol scheme. This makes the barrier to entry for new payment protocols pretty high. There is no reason why many of these payment mechanisms cannot be built on top of HTTP or other existing Internet standards, like SIP. However, if we have multiple payment protocols that run over HTTP, how can we differentiate one from another? Perhaps each payment mechanism needs its own vocabulary term, for example this for Bitcoin:

<a rel="bitcoin:payment" href="bitcoin:1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa?amount=0.25">tip me</a>.

and this for Ven:

<a rel="ven:payment" href="">tip me</a>.

and this for PaySwarm:

<a rel="ps:payment" href="">tip me</a>.

The key thing to remember with all of these payment protocols is that what you do with the given IRI is different in each case. That is, the payment protocol matters and thus we may not want to generalize using the first two, but we do want a generalized solution. PaySwarm is a little different, in that it is currency agnostic. The standard aims to enable payments in Bitcoin, Ven, Bernal Bucks, or any current or future currency. So, one could just specify a person to pay via PaySwarm, like so:

<a rel="ps:payment" href="">tip me</a>.

The financial account could be selected automatically based on the type of payment currency. If the person is transmitting Bitcoins, a selection of target Bitcoin accounts could be automatically discovered by retrieving the contents of the URL above and extracting all accounts with a currency type of “Bitcoin”. So, that may be a good technical solution, but that doesn’t mean it is a good social solution. The hard problem remains – most people don’t tip online.
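That discovery step can be sketched in a few lines of Python; the account listing and field names below are hypothetical, not from the PaySwarm spec:

```python
# A hypothetical account listing that a client might retrieve by
# dereferencing the payee's PaySwarm URL in the "tip me" link above.
accounts = [
    {"id": "https://example.com/accounts/1", "currency": "USD"},
    {"id": "https://example.com/accounts/2", "currency": "Bitcoin"},
    {"id": "https://example.com/accounts/3", "currency": "Bitcoin"},
]

def targets_for(accounts, currency):
    """Select every destination account denominated in the payer's currency."""
    return [a["id"] for a in accounts if a["currency"] == currency]

# A payer transmitting Bitcoins discovers only the Bitcoin-denominated accounts.
assert targets_for(accounts, "Bitcoin") == [
    "https://example.com/accounts/2",
    "https://example.com/accounts/3",
]
```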

Tipping is a Niche Solution

Sure, there are solutions like Flattr and PayPal donations. These solutions will always be niche transactions because they’re socially awkward financial transactions. People like paying for refined goods – they only give on very rare occasions, usually when the payment is for a specific good. Even tipping wait staff at a restaurant isn’t the same as a tip on a website. When you tip wait staff, you are reimbursing them for the time and courtesy that they provided you. You are paying them for something that is scarce, for something that has value to you.

Now, think of how often you tip online vs. how often you actually buy things online. A good summary of why people rarely tip online can be found on Gregory Rader’s blog – first in why people have a hard time paying for unrefined goods and why tips and donations rarely work for small websites. The core of what I took away from Greg’s articles is that, generally speaking, asking for tips on a small website is easily dismissible if done correctly and incredibly awkward if done incorrectly. You’re not getting the same sort of individual attention from a website as you are when you tip at a restaurant. You are far less likely to tip online than you are during a face-to-face encounter at a restaurant. Anonymity protects you from feeling bad about not tipping online.

People have a much easier time making a payment for something of perceived value online, even if it is virtual. These goods include songs, shares in a for-profit project, a pre-release for a short film, an item in a game, or even remotely buying someone a coffee in exchange for a future article that one may write. In order to do this, however, we must be able to express the payment in a less abstract form than just a simple Payment Link. It helps to be able to describe an asset that is being transacted so that there is less confusion about why the transaction is happening. Describing a transaction in detail also helps make the browser UIs more compelling, which results in a greater degree of trust that you’re not being scammed when you decide to enter into a transaction with a website.

Refined Payments

So, if we have a solution for Payment Links on the Web, we need to make sure that they:

  1. Are capable of expressing that they are for something of refined value, even if virtual.
  2. Are machine-readable and can be described in great detail.

The Web has a fairly dismal track record of tipping for content – people expect most unrefined content to be free. So, applying plain old Payment Links to that problem will probably not have the effect that most people expect it to have. The problem isn’t with ease of payment – the problem is a deeper social issue of paying for unrefined content. The solution is to be able to describe what is being transacted in far more detail, marked up in a form that is machine-readable and currency agnostic.

Expressing a PaySwarm Asset and Listing in a page, with Bitcoin or Ven or US Dollars as the transaction currency, is one such approach that meets these criteria. The major drawback is that expressing this information in a page is far more complicated than just expressing a Payment Link. So, perhaps we need both PaySwarm and Payment Links, but we should know that the problem space for Payment Links is much more socially complex than it may seem at first.

To answer Dan Brickley’s question more directly: I don’t think FOAF should add a “bitcoin” vocabulary term. Perhaps it should add something like “financialAccount”. However, once that term has been added, exactly what problem do you hope to solve with that addition?

Building a Better World with Web Payments

TL;DR: Web Payments will enable a new era of people-powered finance on the Web.

The World Wide Web Consortium (W3C) just announced their Community Group standardization process, which is basically an incubator for technology projects that could eventually become world standards. One of those groups is called the Web Payments Community Group and is something that I’m really excited to see moving forward at W3C. This blog post outlines what we’re hoping to accomplish in that group and what it will mean for the world.

One of the first reactions that I usually get when I tell people about Web Payments and PaySwarm is this sort of wide-eyed look that usually translates into one of two things: either they get the long-term implications of having a universal payment mechanism for the world, or they’re having a very hard time figuring out how PaySwarm is any different from credit cards, ACH, Google Checkout, Amazon Payments, Square, PayPal, Flattr and the myriad of other payment solutions on the Web.

We have a ton of ways to get money from point A to point B, but all of them fail at least one of these two principles:

  • Open – Is how the technology works publicly documented and available for implementation under patent and royalty-free terms?
  • Decentralized – Does the technology work with the architecture of the Web – is it decentralized?

If you go down the list of payment technologies above, you will find that all of them fail at least one of the two tests – but who cares!? Why are those two principles important?

The Foundation of the Internet and the Web

Centralized systems work quite well most of the time, but when things go wrong, they really go off the rails. Having a single point of failure in any system that is meant to be a foundation on which humanity builds is a terrible idea. This concept of decentralization is at the core of how the Internet and the Web were developed. All systems that are meant to become a core part of the infrastructure for the Internet and the Web must meet the two requirements above. In fact, every major technology that is baked into your Web browser meets those requirements. TCP/IP, HTTP, HTML, CSS, JavaScript, XML, JSON – each of them is decentralized in design and an open Web standard. That is how technology becomes ubiquitous. That is how technology gets into the browser.

It is the year 2011, and we still don’t have anything like that on the Web for transferring the fundamental unit of value exchange used by the human race – money. There are great inefficiencies that will be resolved if we can do that, and, more importantly, many lives will be made better with a solution to this problem.

Exchanging Value

Today’s systems are built on technology developed during the 1980s, based on thinking from the 1940s, that arose from the same antiquated monetary guidelines that banks have operated upon for hundreds of years. In the past 20 years, you could count on one hand the number of companies that have fundamentally changed the way an individual distributes and collects capital online. People-powered finance has not been a priority for a variety of reasons. One could say that PayPal was really the first to break ground in this area and be successful – but it is still a closed, proprietary and centralized solution. PayPal, in its current form, will never be a part of the Web like HTTP, HTML and JavaScript are today. That’s not to say that PayPal is a bad thing – it just doesn’t address the issue that we’re interested in addressing.

The world needs an open payment platform that everyone can innovate upon. We need the freedom to innovate upon a financial system that is open, transparent and built by people that truly understand how the Web works.

What the Future Holds

We are not attempting to create another PayPal, or Flattr, or Google Checkout. We are attempting to change the way we fundamentally collect and distribute capital online. There is already a demo of how this technology works, so I won’t go into that here. I am also not attempting to state that PayPal, Flattr or any of the other services are inherently bad. The currently popular payment services online are all very good at what they do – they work for their intended purpose. However, I’d like to focus on what is possible to do with a payment platform like the one being worked on at the World Wide Web Consortium.

Wouldn’t it be great if you could receive money from anyone or send money to anyone on the Web with just one click in a browser? Wouldn’t it be great if this mechanism was open and universal? That is, it worked in the same way across every single blog, Web App, news site, Web-based game, tablet, and mobile phone? What could you accomplish with such a universal payment system? Today, exchanging cash is really the only analogue. Not everybody takes credit cards, not everybody takes PayPal – but everyone does take a set of bills and coins – you can buy any product or service with those economic instruments. Wouldn’t it be great if you could do the same via the Web?

If there were a universal payment mechanism, almost anyone who creates digital content could make a living on the Web. Creating an app and launching it through a website would be simple, as the payment solution would just be a part of the Web’s infrastructure, just like delivering an HTML page from a server to a client is a part of the Web’s infrastructure. Hardly anyone has to think about how HTTP and the Internet get a document from one side of the world to the other. It just works. Our goal is to make payments on the Web as ubiquitous as HTTP.

Automatic Micro-donations to Eradicate Malaria

Let’s assume that you believe that the Bill and Melinda Gates Foundation is changing the world for the better and that you would like to help out. The W3C Web Payments work would allow you to fund local or regional change by automatically transferring $0.25/month into a few initiatives that you believe in. You could set aside a budget for the year, say $25, and trickle your money toward the groups that you believe are making a difference. There is no long-term commitment like there is today. This would not only be a way of helping those non-profits achieve their goals, but it would also be a vote of confidence – “I still believe in what you are doing and the direction you’re taking.” If a better non-profit comes along, or you feel that they are no longer being effective with your money, you can always re-allocate the monthly donations somewhere else. With a universal payment infrastructure and micropayments, funding world-changing initiatives doesn’t have to be an involved process. Just allocate the payments at the beginning of the year and forget about it until you decide to change them at your leisure.
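The budget math above is simple enough to sketch, assuming three hypothetical initiatives receiving $0.25/month each:

```python
# Numbers from the scenario above: a $25 yearly budget, with $0.25/month
# trickling to each of a few (hypothetical) initiatives you believe in.
yearly_budget = 25.00
monthly_donation = 0.25
initiatives = ["malaria-nets", "clean-water", "literacy"]

yearly_spend = monthly_donation * len(initiatives) * 12
assert yearly_spend <= yearly_budget  # $9.00/year, well within the $25 budget
```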

People-powered Politics

If you are politically inclined and participate in the election of your government, wouldn’t it be nice to be able to fund the candidates that best represent you on a monthly basis during the election cycle? What about live donations during a televised national debate over the Web? Many of the donations during the last presidential election were very small dollar amounts. How would allowing people to automatically donate $1 here and there change the way we elect officials? How would crowd-financing a lobbyist to work on your behalf for Net Neutrality or Technology Education change the way our nations operate? It is currently very difficult to give money through the Web – far more difficult than just sending somebody an e-mail. There must be a reduction in friction to remove the large separation that exists between the people and their representatives.

Micro-funding for Start-ups

What about funding people, non-profits and start-ups? Kickstarter is a fantastic service that supports an all-or-nothing funding model that has been very successful at funding artists, writers, directors and software developers. Innovators ask investors to cover their start-up costs and investors get something in return for supporting the innovator. Kiva is another great example of people-powered finance – where hundreds of thousands of lenders around the world loan small amounts to people in developing countries that build or grow something and repay the loan after they sell their goods. Kiva has a loan repayment rate of 98.83% – which is impressive. This model could be applied to a variety of areas that depend on more traditional funding these days: blockbuster films, farming, construction, small business loans, and open-source software development, to name a few. Having to create a new account and enter your credit card information across 10-20 sites is too much to ask of most people browsing the Web. Eventually, with the Web Payments work at W3C, a single click is all it will take to fund these types of endeavors.

The Big Picture

So, hopefully the long-term goal is more visible now. The Web Payments work at W3C is not just about making payments easier on the Web, it is about empowering people to effect positive change in the world.

There are a number of ways that you can participate. The easiest is to follow this feed on Twitter (@manusporny) or on Google+; I’ll be updating folks about this stuff over the next couple of years through the standardization process. If you want to keep tabs on the technical progress of the Web Payments standardization work, join the Web Payments mailing list. If you would like to contribute to the technical work or make sure that your use case is supported, join the Web Payments Community Group.