All posts in Web Payments

The State of W3C Web Payments in 2017

Challenges In Building a Sustainable Web Platform by Manu Sporny and Dave Longley

There is a general API pattern that is emerging at the World Wide Web Consortium (W3C) where the browser mediates the sharing of information between two different websites. For the purposes of this blog post, let’s call this pattern the “Web Handler” pattern. Web Handlers are popping up in places like the Web Payments Working Group, Verifiable Credentials Working Group, and Social Web Working Group. The problem is that these groups are not coordinating their solutions and are instead reinventing bits of the Web Platform in ways that are not reusable. This lack of coordination will most likely be bad for the Web.

This blog post is about drawing attention to this growing problem and suggesting a way to address it that doesn’t harm the long term health of the Web Platform.

The Web Payments Polyfill

Digital Bazaar recently announced a polyfill for the Payment Request and Payment Handler APIs. It enables two things to happen.

The first thing it enables is a “digital wallet” website, called a payment handler, that helps you manage payment interactions. You store your credit cards securely on the site and then provide the card information to merchants when you want to make a purchase. The second thing it does is enable a merchant to collect information from you during a purchase, such as credit card information, billing address, shipping address, email address, and phone number.

When a merchant asks you to pay for something, you typically:

  1. select the card you want to use,
  2. select a shipping address (for physical goods), and
  3. send the information to the merchant.
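
For readers unfamiliar with the API, the merchant side of this flow looks roughly like the sketch below. The line items, amounts, and the “basic-card” payment method are illustrative assumptions, not taken from the demo:

```typescript
// Hypothetical merchant code preparing a Payment Request; items and
// amounts are invented for illustration.
interface Item {
  label: string;
  amount: { currency: string; value: string };
}

const lineItems: Item[] = [
  { label: "T-shirt", amount: { currency: "USD", value: "20.00" } },
  { label: "Shipping", amount: { currency: "USD", value: "5.00" } },
];

// The API wants the total as a decimal string, so sum the items up front.
function sumItems(items: Item[]): string {
  return items
    .reduce((total, item) => total + Number(item.amount.value), 0)
    .toFixed(2);
}

const details = {
  displayItems: lineItems,
  total: {
    label: "Total",
    amount: { currency: "USD", value: sumItems(lineItems) },
  },
};

// In a browser (or with the polyfill loaded), the merchant would then run:
//   const request = new PaymentRequest(
//     [{ supportedMethods: "basic-card" }], details);
//   const response = await request.show(); // user picks a handler, approves
//   await response.complete("success");
```

The user-facing steps (select a card, select a shipping address, send) all happen inside `request.show()`, which is exactly the part the browser or polyfill mediates.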

Here is a video demo of the Web Payments polyfill in action:

It’s one thing to mock up something that looks like it works, but implementations must demonstrate their level of conformance by passing tests from a W3C test suite. What you will find below is the current state of how Digital Bazaar’s polyfill does against the official W3C test suite:

As you can see, it passes 702 out of 832 tests and we don’t see any barriers to getting very close to passing 100% in the months to come.

The other thing that’s important for a polyfill is ensuring that it works in a wide variety of browsers, including Google Chrome, Apple Safari, Mozilla Firefox, and Internet Explorer. Here is the current state, where blue shows native support in Google Chrome on Android and the polyfill provides the support in the rest of the browsers:

The great news here is that this solution is compatible with roughly 3.3 billion browsers in use today, reaching around 85% of the people using the Web. To take advantage of this opportunity, the polyfill, just like the native implementations, has to be deployed by merchants and payment app providers on their websites.

Credential Handler Polyfill

Digital Bazaar also recently announced that they have created an experimental polyfill for the Verifiable Credentials work. This polyfill enables a “digital wallet” website, called a credential handler, to help you manage your verifiable credential interactions. This feature enables websites to ask you for things like your shipping address, proof of age, professional qualifications, driver’s license, and other sorts of 3rd-party attested information that you may keep in your wallet. Here is a video demo of the Credential Handler polyfill in action:

Like the Web Payments polyfill, this solution is compatible with roughly 3.3 billion existing browsers, which is around 85% of the people using the Web today. Again, this doesn’t mean that 3.3 billion people are using it today; the polyfill still has to be deployed on issuer, verifier, and digital wallet websites to make that a reality.
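
The wallet-and-query idea at the heart of the credential handler can be sketched in a few lines. This is not the polyfill’s actual API; the types, names, and example credentials are invented for illustration:

```typescript
// Sketch of a credential handler: a wallet of third-party attested
// credentials, filtered against a relying party's request.
interface StoredCredential {
  type: string[];                 // e.g. ["Credential", "ProofOfAgeCredential"]
  issuer: string;                 // who attested to the claims
  claim: Record<string, unknown>; // the attested statements
}

const wallet: StoredCredential[] = [
  {
    type: ["Credential", "ProofOfAgeCredential"],
    issuer: "https://dmv.example.gov",
    claim: { ageOver: 21 },
  },
  {
    type: ["Credential", "AddressCredential"],
    issuer: "https://post.example.com",
    claim: { postalCode: "12345" },
  },
];

// When a site asks for a credential type, the handler offers only the
// credentials in the wallet that can satisfy the request; the user then
// approves (or declines) sharing one of them.
function matchCredentials(
  wallet: StoredCredential[],
  requested: string
): StoredCredential[] {
  return wallet.filter((c) => c.type.includes(requested));
}
```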

General Problem Statement

There is a common Web Handler pattern evident in the above implementations that is likely to be repeated for sharing social information (friends) and data (such as media files) in the coming years. At this point, the general pattern is starting to become clear:

  • There is a website, called a Web Handler, that manages requests for information.
  • There is a website, called the Relying Party, that requests information from you.
  • There is a process, called Web Handler Registration, where the Web Handler asks for authorization to handle specific types of requests for you and you authorize it to do so.
  • There is a process, called Web Handler Request, where the Relying Party asks you to provide a specific piece of information, and the Browser asks you to select from a list of options (associated with Web Handlers) that are capable of providing the information.
  • There is a feature that enables a Web Handler to optionally open a task-specific or contextual window to provide a user interface.
  • There is a process, called Web Handler Processing, that generates an information response that is then approved by you and sent to the Relying Party via the Browser.
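
The registration and matching steps of this pattern can be sketched as follows; every name here is hypothetical, since no such generic API has been standardized:

```typescript
// Minimal sketch of the generalized Web Handler flow described above.
interface WebHandler {
  origin: string;    // the Web Handler website
  accepts: string[]; // request types it is authorized to handle
}

interface HandlerRequest {
  type: string;      // e.g. "payment", "credential", "share"
}

// Web Handler Registration: the browser records what each handler can do
// (in a real design, only after the user grants permission).
const registry: WebHandler[] = [];
function registerHandler(handler: WebHandler): void {
  registry.push(handler);
}

// Web Handler Request: the browser matches a Relying Party's request
// against the registry and asks the user to pick from the resulting list.
function matchHandlers(request: HandlerRequest): WebHandler[] {
  return registry.filter((h) => h.accepts.includes(request.type));
}
```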

If this sounds like the Web Intents API or the Web Share API, that’s because they also fit this general pattern. There is a good write-up of the reasons the Web Intents API failed to gain traction. I won’t go into the details here, but the takeaway is that Web Intents did not fail because the problem wasn’t important; rather, we needed more data, implementations, and use cases, and we needed to identify the proper browser primitives, before we could make progress. Hubris also played a part in its downfall.

Fundamentally, we didn’t have a general pattern or the right composable components identified when Web Intents failed in 2015, but we do now with the advent of Web Payments and Verifiable Credentials.

Two Years of the Web Payments WG

For those of you that have seen the Payment Request API, you may be wondering what happened to composability over the first two years of the Working Group’s existence. Some of us did try to solve the Web Payments use cases using simpler, more composable primitives. There is a write-up on why convincing the Web Payments WG to design modular components for e-commerce failed. We had originally wanted at least two layers: one layer to request payment (and that’s all it did), and another layer to provide Checkout API functionality (such as billing and shipping address collection). These two layers could be composed together, though in hindsight even that was probably not the right level of abstraction. In the end, it didn’t matter, as it became clear that the browser manufacturers wanted to execute on a fairly monolithic design. Fast forward to today, and that’s what we have.

Payment Request: A Monolithic API

When we tried to convince the browser vendors that they were choosing a problematic approach, our concern was that Payment Request would become a monolithic API: that it would bundle too many responsibilities into the same API, in so highly specialized a way that it couldn’t be reused in other parts of the Web Platform.

Our argument was that if the API design did not create core reusable primitives, it could not be slotted in easily amongst other Web Platform features while maintaining a separation of concerns. It would instead create a barrier between the space where Web developers can compose primitives in their own creative ways and a new space where they must ask browser vendors for improvements, for lack of control and extensibility via existing Web Platform features. We were therefore concerned that an increasing number of requests would be made to add functionality to the API where said functionality already exists, or could exist, in a core primitive elsewhere in the Web Platform.

Now that we have implemented the Payment Request API and pass 702 out of 832 tests, we truly understand what it takes to implement and program to the specification. We are also convinced that some of our concerns about the API have been realized. Payment Request is a monolithic API that confuses responsibilities and is so specialized that it can only ever be used for a very narrow set of use cases. To be clear, this doesn’t mean that Payment Request isn’t useful. It still is a sizeable step forward for the Web, even if it isn’t very composable.

This lack of composability will most likely harm its long term adoption and it will eventually be replaced by something more composable, just like AppCache was replaced by Service Workers, and how XMLHttpRequest is being replaced by Fetch. While developers love these new features, browsers will forever have the dead code of AppCache and XMLHttpRequest rotting in their code bases.

Ignoring Web Handlers at our Peril

We now know that there is a general pattern emerging among the Payments, Verifiable Credentials, and Social Web work:

  1. Relying Party: Request Information
  2. Browser: Select Web Handler
  3. Web Handler: Select and Deliver Information to Relying Party

We know, through implementation work we described above, that the code and data formats look very similar. We also know that there are other W3C Working Groups that are grappling with these cross-origin user-centric data sharing use cases.

If each of these Working Groups does what the Payment Request API does, we’ll expend three times the effort to create highly specific APIs that are only useful for the narrow set of use cases each Working Group has decided to work on. Compare this to expending far less effort to create the Web Handler API, with appropriate extension points, which would be able to address many more use cases than just Payments.

Components for Web Handlers

There are really only four composable components that we would have to create to solve the generalized Web Handler problem:

  1. Permissions that a user can grant to a website to let them manage information and perform actions for the user (payments, verifiable credentials, friends, media, etc.).
  2. A set of APIs for the Web Handler to register contextual hints that will be displayed by the browser when performing Web Handler selection.
  3. A set of APIs for Relying Parties to use when requesting information from the user.
  4. A task-specific or contextual window the Web Handler can open to present a user interface if necessary.
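
Component 1, the permission grants, could be modeled as simply as the following sketch; the origins and capability strings are illustrative:

```typescript
// Sketch of per-origin permission grants that a user gives to a Web
// Handler site. Capability names like "payments" are assumptions.
const grants = new Map<string, Set<string>>(); // handler origin -> capabilities

// Record that the user has authorized this origin for this capability.
function grantPermission(origin: string, capability: string): void {
  if (!grants.has(origin)) grants.set(origin, new Set());
  grants.get(origin)!.add(capability);
}

// Check a grant before letting a handler act on the user's behalf.
function isGranted(origin: string, capability: string): boolean {
  return grants.get(origin)?.has(capability) ?? false;
}
```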

The W3C Process makes it difficult for Working Groups chartered to work on a more specific problem, like the Web Payments WG, to work at this level of abstraction. However, there is hope as Service Workers and Fetch do exist. Other Working Groups at W3C have successfully created composable APIs for the Web and the Web Payments work should not be an exception to the rule.


It should be illuminating that both the Web Payments API and the Credential Handler API were able to achieve 85% browser compatibility for 3.3 billion people without needing any new features from the browser. So, why are we spending so much time creating specifications for native code in the browser for something that doesn’t need a lot of native code in the browser?

The polyfill implementations reuse existing primitives like Service Worker, iframes, and postMessage. It is true that some parts of the security model and experience, such as the UI that a person uses to select the Web Handler, registration, and permission management, would be best handled by native browser code, but the majority of the other functionality does not need it. We were able to achieve a complete implementation of Payment Request and Payment Handler because there were existing composable APIs in the Web Platform that had nothing to do with Web Payments, and that’s pretty neat.
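
The security-critical piece of that cross-origin relay is checking the sender’s origin before trusting a message. A minimal sketch, with hypothetical origins:

```typescript
// Only accept postMessage traffic from origins we expect; everything else
// is dropped. Origins here are hypothetical.
function isTrustedOrigin(origin: string, allowed: string[]): boolean {
  return allowed.includes(origin);
}

// In the handler's iframe, the relay looks roughly like:
//   window.addEventListener("message", (event) => {
//     if (!isTrustedOrigin(event.origin, ["https://merchant.example"])) return;
//     // ...process the request, then reply to the sender:
//     (event.source as Window).postMessage(response, event.origin);
//   });
```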

When Web Intents failed, the pendulum swung far too aggressively in the other direction, toward highly specialized and focused APIs for the Web Platform. Overspecialization fundamentally harms innovation in the Web Platform as it creates unnecessarily restrictive environments for Web Developers and causes duplication of effort. For example, due to the design of the Payment Request API, merchants unnecessarily lose a significant amount of control over their checkout process. This is the danger of this new overspecialized API focus at W3C. It’s more work for a less flexible Web Platform.

The right thing to do for the Web Platform is to acknowledge this Web Handler pattern and build an API that fits the pattern, not merely charge ahead with what we have in Payment Request. However, one should be under no illusion that the Web Payments WG will drastically change its course as that would kick off an existential crisis in the group. If we’ve learned anything about W3C Working Groups over the past decade, it is that the larger they are, the less introspective and less likely they are to question their existence.

Whatever path the Web Payments Working Group chooses, the Web will get a neat new set of features around Payments, and that has exciting ramifications for the future of the Web Platform. Let’s just hope that future work can be reconfigured on top of lower-level primitives so that this trend of overspecialized APIs doesn’t continue, as that would have dire consequences for the future of the Web Platform.

Advancing the Web Payments HTTP API and Core Messages

Summary: This blog post strongly recommends that the Web Payments HTTP API and Core Messages work be allowed to proceed at W3C.

For the past six years, the W3C has had a Web Payments initiative in one form or another. First came the Web Payments Community Group (2010), then the Web Payments Interest Group (2014), and now the Web Payments Working Group (2015). The titles of those groups share two very important words: “Web” and “Payments”.

“Payments” is a big and complex landscape, and there have been international standards in this space for a very long time. These standards are used over a variety of channels, protocols, and networks. ISO-8583 (credit cards), ISO-20022 (inter-bank messages), ISO-13616 (international bank account numbers): the list is long, and it has taken decades to get this work to where it is today. We should take these messaging standards into account while doing our work.

The “Web” is a big and complex landscape as well. The Web has its own set of standards: HTML, HTTP, URL; the list is equally long, with many years of effort to get this work to where it is today. Like payments, there are also many sorts of devices on the Web that access the network in many different ways. People are most familiar with the Web browser as a way to access the Web, but tend to be unaware that many other systems such as banks, business-to-business commerce systems, phones, televisions, and now increasingly appliances, cars, and home utility meters also use the Web to provide basic functionality. The protocol these systems use is often HTTP (outside of the Web browser), and those systems also need to initiate payments.

It seems as if the Web Payments Working Group is poised to delay the Core Messaging and HTTP API. This is important work that the group is chartered to deliver. The remainder of this blog post elaborates on why delaying this work is not in the best interest of the Web.

Why a Web Payments HTTP API is Important

At least 33% of all payments[1][2], like subscriptions and automatic bill payments, are non-interactive. The Web Payments Working Group has chosen to deprioritize those use cases for the past 10 months, and the group’s charter expires in 14 months. We’re almost halfway down the road with no First Public Working Draft of an HTTP API or of Web Payments Core Messages, and given the current rate of progress and the way we’re operating (working on specifications serially instead of in parallel), there is a real danger that the charter will expire before we get the HTTP API out there.

The Case Against the Web Payments HTTP API and Core Messages

Some in the group have warned against publication of the Web Payments HTTP API and Core Messages specification on the following grounds:

  1. The Web Payments HTTP API and Core Messages are a low priority.
  2. There is a lack of clear support.
  3. There is low interest from implementers in the group.
  4. The use cases are not yet well understood.
  5. It’s too soon to conclude that payment messages will share common parts between the Browser API and the HTTP API.

While some of the arguments seem reasonable on the surface, deconstructing them shows that only one side of the story is being told. Let’s analyze each argument to see where we end up.

The Web Payments HTTP API and Core Messages are a low priority.

The group decided that the Web Payments HTTP API and Core Messages specs were a relatively lower priority than the Browser API and the Payment Apps API until June 2016. There was consensus around this in the group, and our company agreed with that consensus. What we did not agree to is that the HTTP API and Core Messages are a low priority in the sense that it’s work that we really don’t need to do. One of the deliverables in the charter of the Working Group is a Web Payments Messages Recommendation. The charter also specifies that “request messages are passed to a server-side wallet, for example via HTTP, JavaScript, or some other approach”. Our company was involved in writing the charter of this group, and we certainly intended its language to include HTTP, which it does.

So, while these specs may be lower priority, they are work that we are chartered to do. This work was one of the reasons our company joined the Web Payments Working Group. Delaying it is making a number of us very concerned about what the end result is going to look like. The fact that there is no guiding architecture or design document for the group makes the situation even worse. The group is waiting for an architecture to emerge, and that is troubling because we only have around 8 months left to figure this out and then 6 months to get implementations and testing sorted. One way to combat the uncertainty is to do work in parallel, as it will help us uncover issues sooner rather than at the end, when it will be too late.

There is a lack of clear support.

In the list of concerns, it was noted that:

“Activity on the issue lists and the repositories for these deliverables has been limited to two organizations… This suggests that the Working Group as a whole is not engaged in this work.”

Previous iterations of the HTTP API and Core Messages specifications have been in development for more than 5 years, with far more than two organizations collaborating on the documents. It is true that there has been a lack of engagement in the Web Payments Working Group, primarily because the work was deprioritized. That being said, there are only ever so many people who actively work on a given specification. We need to let the people who are willing and able to work on these specifications proceed in parallel with the other documents the group is working on.

There is low interest from implementers in the group.

We were asked not to engage the group, which we didn’t, and we still ended up with two implementations and another commitment to implement. Note that this is before First Public Working Draft. Commitments to implement are typically not requested until a specification enters Candidate Recommendation, so requesting implementation commitments before a First Public Working Draft is strange and is not required by the W3C Process.

If the group is going to require implementations as a prerequisite for First Public Working Draft publication, then these new requirements should apply equally to all specifications developed by the group. I personally think this requirement is onerous and sets a bad precedent, as it raises the bar for starting work in the group so high that it’ll result in a number of good initiatives being halted before they have a chance to get a foothold in the group. For example, I expect that the SEPA Credit Transfer and crypto-currency payment methods will languish in the group as a result of this requirement.

The use cases are not yet well understood.

It has also been asserted that the basic use cases for the Web Payments HTTP API are not well understood. We have had a use cases document for quite a while now, which makes this assertion shaky. To restate what has been said before in the group, the generalized use case for the HTTP API is simple:

A piece of software (that is not a web browser) operating on behalf of a payer attempts to access a service on a website that requires payment. The payee software provides a payment request to the payer software. The payment request is processed and access is granted to the service.

Any system that may need to process payments in an automated fashion could leverage the HTTP API to do so. Remember, at least 33% of all payments are automated and perhaps many more could be automated if there were an international standard for doing so.
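
That flow can be sketched in a few lines. The use of HTTP status 402 (“Payment Required”) and the message fields are assumptions for illustration; the actual wire format would be defined by the HTTP API and Core Messages specifications:

```typescript
// Sketch of the machine-to-machine flow: payer software requests a
// resource, and a 402 response carries a payment request to process
// before retrying. Field names are illustrative assumptions.
interface PaymentRequestMessage {
  amount: string;
  currency: string;
  payTo: string;
}

type ClientAction =
  | { action: "pay-and-retry"; request: PaymentRequestMessage }
  | { action: "use-response" };

// Payer software inspects the payee's response: a 402 means "pay first,
// then retry"; anything else is the service response itself.
function nextAction(status: number, body: unknown): ClientAction {
  if (status === 402) {
    return { action: "pay-and-retry", request: body as PaymentRequestMessage };
  }
  return { action: "use-response" };
}
```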

There are more use cases that would benefit from an HTTP API that were identified years ago and placed into the Web Payments Use Cases document: Point of Sale, Mobile, Freemium, Pre-auth, Trialware, In-Vehicle, Subscription, Invoices, Store Credit, Automatic Selection, Payer-initiated, and Electronic Receipts. Additional use cases from the W3C Automotive Working Group related to paying for parking, tolls, and gasoline have been proposed as well.

The use cases have been understood for quite some time.

It’s too soon to conclude that payment messages will share common parts between the Browser API and the HTTP API.

The work has already been done to determine whether there are common parts, and those who have done the work have discovered around 80% overlap between the Browser API messages and the HTTP API messages. Even if this were not the case, I had suggested that we could address the concerns in at least two ways:

  1. The first was to mark this concern as an issue in the specification before publication.
  2. The second was to relabel the “Core Messages” as “HTTP Core Messages” and change the label back if the group was able to reconcile the messages between the Browser API and the HTTP API.

These options were not surfaced to the group in the call for consensus, which is frustrating.

The long-term effects of pushing off discussion of core messages, however, are more concerning. If we cannot find common messages, then the road we’re headed down is one where a developer will have to use different Web Payments messages depending on whether payment is initiated via the browser or via non-browser software. In addition, this further confuses our relationship to ISO-20022 and ISO-8583 and will make Web developers’ lives far more complex than necessary. We’re advocating for two ways of doing something when we should be striving for convergence.

The group is chartered to deliver a Web Payments Messages Recommendation; I suggest we do that. We are more than 40% of the way through our chartered timeline and we haven’t even started having this discussion yet. We need to get this document sorted as soon as possible.

The Problem With Our Priorities

The problem with our priorities is that we have placed the Browser API front-and-center in the group. The group did this for two reasons:

  1. A subset of the group wanted to improve the “checkout experience” and iterate quickly instead of focusing on initiating payments, which was more basic and easier to accomplish.
  2. A subset of the group was very concerned that the browser vendors would lose interest if their work was not done first.

I understand and sympathize with both of these realities, but as a result, the majority of other organizations in the group are now non-browser second-class citizens. This is not a new dynamic at W3C; it happens regularly, and as one of the smaller W3C member companies, it is thoroughly frustrating.

It would be more accurate to have named ourselves the Browser Payments Working Group, because that is primarily what we’ve been working on since the group’s inception, and if we don’t correct course, that is all we will have time to do. This focus on the browser checkout experience, and on pushing things out quickly without much thought to the architecture of what we’re building, does get “product” out there faster. It is also short-sighted, results in technical debt, and makes it harder to reconcile things like the HTTP API and Core Messages after the Browser API is “fully baked”. We are supportive of browser specifications, but we are not supportive of only browser specifications.

This approach is causing the design of the Web Payments ecosystem to be influenced by the way things are done in browsers to a degree that is deeply concerning. Those of us in the group who are concerned with this direction have been asked not to “distract” the group by raising concerns related to the HTTP API and Core Messages specifications. “Payments” and the “Web” are much bigger than just browsers; it’s time the group started acting accordingly.

Parallel and Non-Blocking is a Better Approach

The Web Payments Working Group has been working on specifications in a serial fashion since the inception of the group. The charter expires in 14 months and we typically need around 6 months to get implementations and testing done. That means we really only have 8 months left to wrap up the specs. We’re not going to get there by working on specs in a serial fashion.

We need to start working on these issues in parallel. We should stop blocking people who are motivated to work on specifications that are needed for the work to be successful. Other groups work in this fashion. For example, the Web Apps Sec group has over 13 specs that it’s working on, many of them in parallel. The same is true for the Web Apps Working Group, which has roughly 32 specs that it’s working on, often in parallel. These are extremes, but so is focusing on only one specification at a time in a Working Group.

We should start working in parallel. Let’s publish the HTTP API and HTTP Core Messages specifications as First Public Working Drafts and get on with it.

Credentials: A Retrospective of Mild Successes and Dramatic Failures

Over the past 20 years, various organizations have tried to “solve” problems related to identity and credentialing on the Web and have been met with varying degrees of mild success, but mostly failure. Understanding what worked and what did not is necessary if we’re going to make progress on identity and credentialing on the Web.

The word identity means different things to different people and is often discussed as a problem waiting to be solved on the Web. In the physical world, we have many identities. We have an identity for work life and home life. We have an identity that we use when we talk with our friends and one that we use when we talk with our families. The concept of identity is as nuanced as it is broad.

There are aspects of our identities that have very little consequence to others, such as whether we have dark brown hair or black hair. There are also aspects of our identities that are vital for proving that we should be able to perform certain tasks, like a drivers license or nursing license. Then there are aspects of our identities that are important for social reasons, such as the rapport that we build with our friends over multiple decades.

Many aspects of our identity are often expressed via credentials, which can be seen as verifiable statements made by one person or organization about another. There have been multiple attempts at formalizing credentials on the Web; each one has been met with varying degrees of mild success, but mostly failure. This blog post explores the development goals and system capabilities that would lead to a healthy credentialing ecosystem and why previous attempts to achieve these goals have met with limited success.

Credentialing Ecosystem Goals

A healthy credentialing ecosystem should have a number of qualities:

  • Credentials should be interoperable and portable. Credentials should be usable by as broad a range of organizations as possible. The recipient of a credential should be able to store, manage, and share credentials throughout their lifetime with relative ease.
  • The ecosystem should scale to the 3 billion people using the Web today and then to the 6 billion people that will be using the Web by the year 2020.
  • The process of exchanging a credential should be privacy-enhancing and recipient-controlled: the system should protect the privacy of the individual or organization using the credential by placing the recipient in control of who is allowed to access it.
  • Implementing systems that issue and consume credentials should be easy for Web developers in order to lower barriers to entry and increase the amount of software solutions in the ecosystem.
  • Creating systems that are accessible should be a fundamental design criterion, as 10% of the world’s population has disabilities and the solution should be usable by as many people as possible.
  • The solution should follow a number of core Web principles such as being patent and royalty-free, adhering to Web architecture fundamentals, supporting network and device independence, and being machine-readable where possible to enable automation and engagement of non-human actors.

Credentialing Capabilities

A solution for a healthy credential ecosystem should have the following capabilities:

  • Extensible Data Model – A data model that supports an entity making an unbounded set of claims about another entity. This enables a very broad applicability of credentials to different use cases and market verticals.
  • Choice of Syntax – A data model that is capable of being expressed in a variety of data syntaxes. This increases interoperability between disparate credentialing systems and increases the long-term viability of the technology.
  • Decentralized Vocabularies – A formal mechanism of expressing new types of claims without centralized coordination. This promotes a high degree of parallel adoption and innovation.
  • Web-based PKI – A digital signature mechanism that does not require out-of-band information to verify the authenticity of claims; instead it should enable public keys to be automatically fetched via the Web during verification. It should not render the signed data opaque because opaque data is harder to learn from, program to, and debug. This makes the digital signature mechanism easier to use for developers and system integrators.
  • Choice of Storage – A protocol for storing a credential at an arbitrary identity provider after it has been issued by an arbitrary issuer. This helps create a level playing field for all actors in the ecosystem.
  • App Integration – A protocol for managing credentials by a recipient using arbitrary 3rd party applications. This promotes a healthy application ecosystem for managing credentials.
  • Privacy-enhanced Sharing – A protocol that enables the recipient to share their credentials without revealing the intended destination to their identity provider. This enhances privacy.
  • Credential Portability – A protocol for migrating from one identity provider to another without the need to reissue each credential. This promotes a healthy identity provider ecosystem.
  • Credential Revocation – A protocol for revocation of a previously issued credential by the credential issuer. This enables issuers to ensure that the credentials they have issued accurately represent their claims.
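
The first capability, an extensible data model, can be illustrated with a short sketch. The shape loosely follows verifiable credentials, but every field name here is an assumption for illustration:

```typescript
// An issuer makes an unbounded set of claims about a subject; terms from
// different vocabularies can be mixed without centralized coordination.
const credential = {
  issuer: "https://dmv.example.gov",   // who makes the claims
  issued: "2017-06-01",
  claim: {
    id: "https://people.example/alice", // the subject of the claims
    ageOver: 21,                        // any vocabulary term may appear,
    licenseClass: "C",                  // no registry approval needed
  },
};

// List the claim properties other than the subject identifier; a consumer
// can discover what an issuer asserted without knowing the terms up front.
function claimProperties(claim: Record<string, unknown>): string[] {
  return Object.keys(claim).filter((k) => k !== "id");
}
```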

A Foreword on the Analysis

There are a number of existing identity, credentialing, and general authentication solutions that have been deployed in the past and have seen success according to their own design goals. The design goals and capabilities listed in the previous sections do not always align with those of the systems analyzed below. It is fair to say that some of the analysis is “unfair”, as some of the systems are being judged against criteria they were not meant to address. These solutions are included below because 1) they come up in conversation as potential solutions, 2) they have been successful in meeting some of the criteria listed above, and/or 3) they have failed in important ways that we need to learn from to ensure that a new initiative doesn’t make the same mistakes.

With that said, let’s get on with the analysis.


SAML

The grandparent of these identity initiatives is SAML, an XML-based, open-standard data format for exchanging authentication and authorization data between parties, in particular, between an identity provider and a service provider.

Capability Rating Summary
  • Extensible Data Model: Problematic. XML-based data model. Extensible, but extensibility is rarely used, leading to very limited claim types.
  • Choice of Syntax: Poor. Only XML is supported.
  • Decentralized Vocabularies: Problematic. The use of simple key-value pairs created the possibility of name clashes, limiting decentralized innovation and adoption.
  • Web-based PKI: Problematic. Service Providers had to explicitly trust Identity Providers, leading to scalability issues.
  • Choice of Storage: Poor. Recipients can only have credentials issued and stored via a single identity provider.
  • App Integration: Poor. A recipient could only manage their credentials via their identity provider interface.
  • Privacy-enhanced Sharing: Poor. Identity providers cannot be prevented from knowing where recipients use their credentials.
  • Credential Portability: Poor. Credentials cannot be transferred from one identity provider to another.
  • Credential Revocation: Problematic. Credentials can only be issued and revoked by identity providers.

SAML has failed to gain traction outside the education and public service sectors and is rarely offered as an option to log into non-enterprise websites. SAML isn't a viable solution for the goals and capabilities listed above because 1) it is hobbled by older technologies such as XML and SOAP, 2) it has scalability issues due to the need to manually set up federations of identity and service providers, 3) it restricts the organizations that can issue valid credentials to identity providers, 4) it enables tracking and violates a number of the privacy requirements listed above, and 5) it encourages centralization and super providers as more people use the system, because of the administrative overhead of managing the identity and service provider federations as they grow.

Windows CardSpace / Infocard

Microsoft released CardSpace (code named InfoCard) in late 2006. CardSpace stored references to your digital identity, presenting them for selection as visual Information Cards. CardSpace provided a consistent interface designed to help you easily and securely use these identities in applications and websites.

Capability Rating Summary
  • Extensible Data Model: Problematic. Extensibility was possible (XML and URLs), but hard-coded strings were used for parameter names.
  • Choice of Syntax: Poor. Only XML was supported.
  • Decentralized Vocabularies: Problematic. URLs were used for parameter values, but the solution was ultimately Windows-only.
  • Web-based PKI: Good. XML enveloping signatures with attached X.509 certificates were used.
  • Choice of Storage: Good. Credentials were stored in an application controlled by the recipient.
  • App Integration: Problematic. Credentials were managed by a Windows application.
  • Privacy-enhanced Sharing: Good. Credential consumers requested credentials directly from the recipient.
  • Credential Portability: Problematic. Credentials could only live on Windows devices, with no automatic cross-device synchronization capability (although manual export/import was supported).
  • Credential Revocation: Good. Security tokens are not granted for revoked credentials.

Windows CardSpace failed to gain traction and the project was canceled in 2011. The initiative has been replaced with U-Prove, a Microsoft Research project. CardSpace did get a number of things right, but ultimately failed because 1) it was largely a Microsoft-centric solution that required Windows and Active Directory, 2) early adopters that initially backed the project did not feel there was strong near-term demand for a solution and thus didn't roll out products to support the standard, 3) it didn't scale to mobile and was largely tied to the desktop, 4) it was a separate product requiring manual installation rather than a feature of Internet Explorer, 5) while it supported verified claims through a trusted third party, very few applications surfaced that enabled that optional feature, and 6) it was partly based on the WS-* technology stack, which has been derided as "bloated, opaque, and insanely complex".


Shibboleth

Shibboleth is a middleware initiative for an architecture and open-source implementation for identity management and federated identity-based authentication and authorization. It is based on SAML and thus has many of the same advantages and drawbacks.

Capability Rating Summary
  • Extensible Data Model: Problematic. XML-based data model. Extensible, but extensibility is rarely used, leading to very limited claim types.
  • Choice of Syntax: Poor. Only XML is supported.
  • Decentralized Vocabularies: Problematic. The use of simple key-value pairs created the possibility of name clashes, limiting decentralized innovation and adoption.
  • Web-based PKI: Problematic. Service Providers had to explicitly trust Identity Providers, leading to scalability issues.
  • Choice of Storage: Poor. Recipients can only have credentials issued and stored via a single identity provider.
  • App Integration: Poor. A recipient can only manage their credentials via their identity provider interface.
  • Privacy-enhanced Sharing: Poor. Identity providers cannot be prevented from knowing where recipients use their credentials.
  • Credential Portability: Poor. Credentials cannot be transferred from one identity provider to another.
  • Credential Revocation: Problematic. Credentials can only be issued and revoked by identity providers.

Shibboleth has failed to gain traction outside the research, education, and public service sectors and is rarely offered as an option to log into non-enterprise websites. Shibboleth is not a viable solution for the goals and capabilities listed above because 1) it is hobbled by older technologies such as XML and SOAP, 2) it has scalability issues due to the need to manually set up federations of identity and service providers, 3) it restricts the organizations that can issue valid credentials to identity providers, 4) it enables tracking and violates a number of the privacy requirements listed above, and 5) it encourages centralization and super providers as more people use the system, because of the administrative overhead of managing the identity and service provider federations as they grow.

OAuth 2.0

While not an identity or credentialing solution, OAuth 2.0 often comes up as being in the same class of solution and has been used (by Facebook and OpenID Connect) to build authentication/authorization solutions. Introduced in 2006 and finalized as OAuth 2.0 in 2012, the framework is widely deployed by Facebook, Google, and Microsoft, but suffers from multiple non-interoperable implementations.

Capability Rating Summary
  • Extensible Data Model: Poor. Key-value pairs, which require centralized coordination to avoid conflicts.
  • Choice of Syntax: Poor. Only string-based key-value pairs are supported.
  • Decentralized Vocabularies: Poor. Key-value pairs, which require centralized coordination to avoid conflicts.
  • Web-based PKI: N/A. OAuth 2.0 is not designed to perform digital signatures.
  • Choice of Storage: N/A. OAuth 2.0 is not designed to express credentials.
  • App Integration: N/A. OAuth 2.0 is not designed to manage credentials.
  • Privacy-enhanced Sharing: N/A. OAuth 2.0 is not designed to request credentials.
  • Credential Portability: N/A. OAuth 2.0 is not designed to port credentials.
  • Credential Revocation: Problematic. OAuth 2.0 tokens time out after a while, but operate as bearer tokens (if they are stolen, they can be used by other people for a non-trivial amount of time).

OAuth 2.0 was never designed to achieve the goals and capabilities listed at the beginning of this blog post, and so the comparison above isn’t very fair. That said, there are a number of reasons that OAuth 2.0 is not a good fit for the goals and required capabilities listed above: 1) it is often criticized as being problematic from a security standpoint, overly complex, and favoring large enterprise deployments, 2) it doesn’t have a data model that is flexible enough to model more than the most basic type of credentials, 3) it does not support digital signatures, and 4) it doesn’t solve the credentialing problem described above because it is designed for a completely different set of use cases: providing access to 3rd parties that want to perform certain operations on your account. While OAuth 2.0 is not a solution to the credentialing problem, it can be used as part of a credentialing solution, which is what OpenID Connect does.

OpenID Connect

OpenID Connect is a simple identity layer on top of the OAuth 2.0 protocol, which allows clients to verify the identity of an end-user based on the authentication performed by an authorization server, as well as to obtain basic profile information about the end-user in an interoperable and REST-like manner.

Capability Rating Summary
  • Extensible Data Model: Problematic. JSON is extensible, but needs a single centralized registry for terms, or full URLs must be used.
  • Choice of Syntax: Poor. Only JSON is supported.
  • Decentralized Vocabularies: Poor. Extensions require a single centralized registry for terms, or full URLs must be used.
  • Web-based PKI: Poor. URLs can be used for JWK key IDs, but the mechanism to do discovery isn't specified in the specifications.
  • Choice of Storage: Poor. Credential information is not portable.
  • App Integration: Problematic. Recipients can only manage credentials via their identity provider.
  • Privacy-enhanced Sharing: Poor. Identity providers can track all logins.
  • Credential Portability: Poor. Credentials are not portable between identity providers.
  • Credential Revocation: Problematic. Distributed credentials can be revoked at any time, but are rarely used.

The OpenID initiative started in 2005. Today Google, Microsoft, Deutsche Telekom, and Salesforce use the OpenID Connect protocol to support federated login, but Facebook and Twitter do not. OpenID Connect has been fairly successful in creating a Web-scale single sign-on solution, but it fails to address the goals and required capabilities for the following reasons: 1) it depends heavily on centralized registries to define types of credentials, 2) credentials issued by 3rd parties and digital signatures are rarely used (there is no rich 3rd party credential ecosystem), 3) there is no protocol for transferring credentials from one provider to another, 4) it enables tracking and other privacy violations, and 5) it promotes centralization and super providers due to a reliance on email addresses as identifiers, ensuring that email super providers become the new credential super providers.


WebID+TLS

WebID+TLS, previously known as FOAF+SSL, is a decentralized and secure authentication protocol built on W3C standards around Linked Data, utilizing client-side X.509 certificates and TLS to bootstrap the identity discovery process.

Capability Rating Summary
  • Extensible Data Model: Good. WebID is based on RDF, which is a provably extensible data model.
  • Choice of Syntax: Good. WebID is based on RDF and thus supports multiple syntaxes.
  • Decentralized Vocabularies: Good. All data is expressed using vocabularies, which can be created by anyone. Data merging is designed to happen automatically.
  • Web-based PKI: Problematic. All claims in a WebID are self-issued claims. No clear specification for 3rd party claims.
  • Choice of Storage: Good. WebIDs can be stored at the recipient's preferred location.
  • App Integration: Problematic. Possible, but no clear specification exists to do so.
  • Privacy-enhanced Sharing: Problematic. Identity Providers can track logins by default (unless a mixnet is used).
  • Credential Portability: Problematic. Self-issued credentials can be ported easily. If there were 3rd party credentials tied to a WebID URL, those credentials would not be portable.
  • Credential Revocation: Problematic. Revocation is possible, but there is no clear specification for doing so.

WebID+TLS was first presented for the W3C Workshop on the Future of Social Networking in 2009, but has failed to gain any significant traction in the past six years. WebID+TLS is also problematic because 1) it depends on browser client-side certificate handling, which is a bad UI experience, 2) it depends on the KEYGEN HTML element in a non-trivial way, which is currently under discussion to be removed from certain browsers, and 3) it is not well specified how you achieve many of the credential goals and required capabilities listed in the beginning of this post.

Mozilla Persona

Persona was launched in 2011 and shares some of its goals with similar authentication systems like OpenID or Facebook Connect, but it differed in several important ways: 1) it used email addresses as identifiers, 2) it was more focused on privacy, and 3) it was intended to be fully integrated into the browser.

Capability Rating Summary
  • Extensible Data Model: N/A. Not a design requirement.
  • Choice of Syntax: N/A. Not a design requirement.
  • Decentralized Vocabularies: N/A. Not a design requirement.
  • Web-based PKI: Problematic. Public keys are discoverable but content is largely hidden.
  • Choice of Storage: Poor. Identity provider is tied to login domain.
  • App Integration: N/A. There are no credentials to manage.
  • Privacy-enhanced Sharing: Good. Logins are privacy-enhancing and not easily trackable.
  • Credential Portability: Poor. There are no credentials to port.
  • Credential Revocation: N/A. There are no credentials to revoke.

Persona is solely an authentication system and was not designed to support arbitrary credentials. Mozilla was hoping for widespread adoption of the protocol, but that did not occur because super providers like Google and Facebook had already developed their own competing login mechanisms. All full-time developers were pulled from the project in 2014.

Omitted Technologies

There are a number of technologies, such as U-Prove, the WebAppSec Login/Credentials API, and Mozilla’s Firefox Account API, that were not included in this analysis because they are still very early in the research and development phases. APIs like Facebook Connect and Login with Google were not included because they are not intended to be open standards.


There are a number of goals and required capabilities that need to be fulfilled in order to create a vibrant credentialing ecosystem. There have been various attempts at addressing subsets of the problems in the ecosystem, and those solutions have been met with small successes and varied failures. There still is no widely deployed and adopted way of issuing, storing, managing, and transmitting credentials on the Web today, but we do have quite a bit of insight into why the prior attempts at solving the general identity/credentialing problem have failed.

These findings lead to a simple question: Is it time to do something about Credentials on the Web?

The Case for Standardizing Credentials via W3C

Credentials solve a variety of real authentication problems on the Web today. We can implement them in a way that doesn't threaten anonymity or privacy and that avoids the mistakes made in previous attempts. The W3C is the right place to do this important work.

Over the past two years a growing number of organizations in education, finance, and healthcare have been gathering in W3C Community Groups around the concept of a standard format for credentials on the Web.

A credential is a qualification, achievement, quality, or piece of information about an entity’s background, typically used to indicate suitability. A credential could include information such as a name, home address, government ID, professional license, or university degree.

We use credentials when we show a shopper's card to get discounts at the grocery store, when we show a driver's license to order a drink at the bar, and when we show a passport to enter a foreign country. The use of credentials to demonstrate capability, membership, status, and minimum legal requirements is something that we do on a regular basis as we go about our lives.

A variety of identity and authentication solutions exist today to perform things like single sign-on and limited attribute verification, but they are not deployed widely enough to be ubiquitous on the Web. These problems persist, in part, because it is difficult to provide a flexible, digitally verifiable credential:

  • In 2014, 12.7 million U.S. consumers experienced identity fraud, with losses of over $16 billion.
  • Over the last 5 years, more than $100B in identity-related fraud losses have been detected in the United States alone. That number is in the several hundreds of billions of dollars worldwide.
  • The cost to law enforcement ranges from $15,000 to $25,000 to investigate each identity theft case; many of them are not investigated. Victims spend, on average, 600 hours trying to clear damaged credit or even criminal records caused by the thief.
  • In 2010, 7 million people self-reported illegal use of prescription drugs in the previous month in the US. These drugs are often acquired through the use of faked credentials. The healthcare costs alone of nonmedical use of prescription opioids – the most commonly misused class of prescription drugs – are estimated to total $72.5 billion annually.
  • Educational and professional service testing fraud has led to multiple hundred million dollar losses when test takers are not who they say they are and the testing agencies are held accountable as a result.
  • Forty thousand legitimate Ph.D.s are awarded annually in the U.S., while 50,000 spurious Ph.D.s are purchased; that is, more than 50% of new Ph.D. degrees are fake, and more than 500,000 working Americans have a fraudulent degree. More than 25% of job applicants inflate their educational achievements on their resumes.

So, the simple question has been raised in the W3C community: Is it time to do something about Credentials and if so, can W3C add value to the standardization process? Or is this already a solved problem?

Myriad Credentials

There are many different types of credentials in the world and many industry verticals that use credentials in a variety of different ways.

There are many categories of credentials that have been identified as important to society; here is a sample of them:


Education

Academic credentials and co-curricular activities are recognized and exchanged among learners, institutions, employers, or consumers.

Workforce

A worker's certified skill or license is a condition for employment, professional development, and promotability.

Civil Society

Access to social benefits and contracts may be based on verifiable conditions such as marital status.

Healthcare

Ensuring that medical professionals are properly licensed to write prescriptions for controlled substances or provide certain services to patients.

Legal

Regulations require the proper identification of parties in high value transactions. The legal right to purchase a product depends on the verifiable age or location of the buyer.

The use of credentials is a big part of how our society operates. In order to standardize their usage, it’s important that a generalized mechanism is used to express all the data associated with the types of credentials above. To put it another way, the solution shouldn’t be specific to each category above because there are too many different types of credentials for a single group to standardize. Rather, the solution should be a generalized mechanism that enables verifiable claims to be made about an entity where the contents of the credential can be specified by a specific industry or market vertical (such as university entrance testing, or pharmaceutical distribution, or wealth management).
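One way to picture such a generalized mechanism: a common envelope whose claim types and properties are identified by URLs that each industry can mint for itself. The sketch below is hypothetical; the `@context` value, type names, and vocabulary URLs are invented for illustration and do not come from any published specification.

```python
# Two credentials from unrelated verticals sharing one generic envelope.
# Each vertical defines its own types and properties via vocabulary URLs,
# so no central registry has to approve new credential types.
degree_credential = {
    "@context": "https://w3id.org/credentials/v1",
    "type": ["Credential", "https://vocab.example-university.edu/DegreeCredential"],
    "claim": {
        "id": "https://example.org/people/jane",
        "https://vocab.example-university.edu/degree": "PhD",
    },
}

pharma_credential = {
    "@context": "https://w3id.org/credentials/v1",
    "type": ["Credential", "https://vocab.example-pharma.org/DistributorLicense"],
    "claim": {
        "id": "https://example.org/companies/acme",
        "https://vocab.example-pharma.org/licensedRegion": "US-NY",
    },
}

# A generic processor only needs to understand the envelope; the
# vertical-specific meaning lives behind the URLs.
for cred in (degree_credential, pharma_credential):
    print(sorted(cred.keys()))
```

The design point is that the envelope stays stable across verticals while the vocabulary URLs carry the industry-specific semantics, which is what lets one standard serve university testing, pharmaceutical distribution, and wealth management alike.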

HTML, CSS, XML, and JSON-LD are examples of formats that have been used by many different market verticals to encode data. WebRTC, Geolocation, Drag and Drop, and Web Audio are examples of new ways of encoding and exchanging data over the Web that have found use in a variety of industries. Each of these technologies was standardized at W3C.

The purpose of this post is to help build a shared understanding of the use cases for credentials and, in doing so, help accelerate work at W3C. What follows is a list of concerns and hesitations that have been raised over the past several months. It is helpful to discuss these concerns as part of a larger conversation around use cases and technical merits related to the work.

Hesitation #1: The End of Anonymity on the Web

The spectrum of identity needs across diverse use cases suggests there will be different kinds of credentials to match strong privacy use cases and strong identity use cases.

One of the hesitations expressed by privacy proponents is that if we make it easy to identify people using credentials, we are going to lose anonymity on the Web. The Web started as a largely anonymous medium of communication. In the early days you could jump from site to site without having to identify yourself or expose any personally identifying information. Things have changed for the worse since then with the advent of IP tracking, evercookies, device fingerprinting, browser fingerprinting, email addresses as identifiers, analytics packages, and ad-driven business models. One could argue that anonymity on the Web was lost long ago.

Whether or not you believe that anonymity on the Web is a real thing today, there is a strong argument to not make things worse. Where strong privacy is required, bearer credentials can be used. Where strong identity assurances are required, we can use tightly-bound credentials.

A tightly-bound credential contains a set of claims that are associated with an identifier, effectively stating things like “John Doe is over the age of 21 — signed by some mutually trusted organization or person”. The “John Doe” part of the credential is the problem, because if you provide that credential to multiple websites, you can be tracked across those websites because they know who you are. A tightly-bound credential, however, can contain a link to retrieve a bearer credential. A bearer credential is a short lived, untrackable (by itself) credential that effectively states things like “The holder of this credential is over the age of 21 — signed by some mutually trusted organization or person”.

Bearer credentials do have downsides. Sophisticated attackers can intercept them and replay them across multiple sites. For this reason, bearer credentials have a very short lifetime and may not be accepted by credential consumer sites that require a stronger form of authentication, such as a tightly-bound credential. That said, there are many websites where using bearer credentials to verify things like age or postal code should be acceptable.
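The distinction between the two credential shapes can be sketched in a few lines of Python. This is an illustrative model only; the field names, the `did:` style identifier, and the five-minute lifetime are assumptions made for the example, not drawn from any specification.

```python
import time

def tightly_bound_credential():
    # Names the holder, so reuse across sites is linkable.
    return {"subject": "did:example:john-doe",
            "claim": {"ageOver": 21},
            "issuer": "https://trusted.example"}

def derive_bearer_credential(bound, ttl_seconds=300):
    # Carries the claim but drops the holder identifier; a short expiry
    # limits the replay window if the credential is intercepted.
    return {"claim": bound["claim"],
            "issuer": bound["issuer"],
            "expires": time.time() + ttl_seconds}

def accept_bearer(cred, now=None):
    # A consumer accepts only fresh credentials that carry no identifier.
    now = time.time() if now is None else now
    return now < cred["expires"] and "subject" not in cred

bearer = derive_bearer_credential(tightly_bound_credential())
print(accept_bearer(bearer))                         # True while fresh
print(accept_bearer(bearer, now=time.time() + 600))  # False once expired
```

The short time-to-live is the mitigation for the replay risk described above: a stolen bearer credential stops working within minutes, whereas the tightly-bound credential remains tied to its named subject.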

Do bearer credentials ensure anonymity on the Web? No. To defeat them, all a site needs to do is ask the entity holding the bearer credential for their email address, and it has a universally trackable identifier. What bearer credentials do, however, is help keep the tracking problem from getting worse.

Hesitation #2: Credentials as a Gateway Drug

If an easy mechanism exists to ask for personally invasive credentials, it does not necessarily follow that every website will ask for those credentials or that people will willingly hand them over.

Another concern that is often raised is that if there is a good way to strongly identify people via credentials, and the mechanism is easy to use, more websites will require credentials in order to use their services. This is certainly a concern, and predicting what will happen here is difficult, particularly because there are many market forces at work.

Websites have varying degrees of utility to people. Depending on each site's utility, people are willing to give up more or less of their personal information to access it. A website that is effectively a collection of cat pictures will probably not convince many people to give up their email address in exchange for more cat pictures. By contrast, a personal banking website will most likely be able to convince you to provide far more personally identifiable information. This dynamic will most likely not change as long as it's clear what information is being transmitted to each site via a credential.

Ultimately, this choice is up to the person sitting behind the browser and the choice must be presented in such a way as to ensure informed consent before the credential is transmitted.

Hesitation #3: We’ve Tried This Before

While it may sound like this has been tried before, the capabilities required to meet the identified goals are more advanced than what the state of the art today supports.

The third major hesitation about starting the Credentials work as an official work item at W3C is the misconception that many of the identified goals and capabilities are the same as those of previous attempts at a solution. SAML, InfoCard, Shibboleth, OAuth, OpenID Connect, WebID+TLS, Mozilla Persona: none have solved the credentialing problem stated at the beginning of this blog post in a significant way.

A detailed analysis of each of these initiatives has been performed in a separate blog post titled Credentials: A Retrospective of Mild Successes and Dramatic Failures. Here’s a summary of the state of the art ranked against desired capabilities:

Capability                   SAML 2.0     OAuth 2.0    OpenID Connect  Mozilla Persona
Extensible Data Model        Problematic  Poor         Problematic     N/A
Choice of Syntax             Poor         Poor         Poor            N/A
Decentralized Vocabularies   Problematic  Poor         Poor            N/A
Web-based PKI                Problematic  N/A          Poor            Problematic
Choice of Storage            Poor         N/A          Poor            Poor
App Integration              Poor         N/A          Problematic     N/A
Privacy-enhanced Sharing     Poor         N/A          Poor            Good
Credential Portability       Poor         N/A          Poor            Poor
Credential Revocation        Problematic  Problematic  Problematic     N/A

The primary finding of the analysis above is simple: there still is no widely deployed and adopted way of exchanging credentials on the Web today, but we do have quite a bit of insight into why the prior attempts at solving the general identity/credentialing problem have failed.

Hesitation #4: There Is Nothing to Do

While the problem may seem trivial on the surface, to solve it correctly requires carefully understanding the current gaps in the Open Web Platform.

Another assertion that has been made before is that all of the technology necessary to create a robust open credentialing ecosystem on the Web already exists, and it is just a simple matter of stringing HTTP, JSON, JavaScript Object Signing and Encryption (JOSE), OAuth, OpenID Connect, and a few other technologies together to create the solution. Thus, there is nothing for W3C to do, as the problem is more or less solved and developers only need to be educated about the proper solution.

Clearly, we should reuse existing technologies wherever possible to build a single coherent, stable solution. We do not want to reinvent the wheel.

If it is true that all the technologies already exist, we could quickly solve the problem and move on to other pressing problems on the Web. Unfortunately, this view is not held by the majority of the community that has had contact with the credentialing problem. Participants in the community have tried and failed to implement credentialing solutions using all of the technologies listed at the beginning of this section. It is true that pieces of the solution exist across multiple standardization organizations, but a set of standards for a vibrant credentialing ecosystem does not yet exist, as detailed in this blog post: Credentials: A Retrospective of Mild Successes and Dramatic Failures.

We use credentials in our daily lives. We often receive them from one party and then share them with another. If this is a solved problem on the Web, then why is there no similarly vibrant ecosystem of disparate but interoperable Web-based credentialing systems? Where is the specification that outlines how to express a digitally signed credential? Or the protocol for issuing a credential to a storage location? Or the protocol for requesting a credential?

Even if all of the core technologies exist, they have yet to be put together into a coherent, secure, easy-to-integrate solution for the Web, and that sounds like something where the W3C could add value.


This work is worth doing at the W3C because the Open Web Platform lacks functionality that society depends on to run its education, healthcare, finance, and government sectors. It's worth doing because fraud is a big problem on the Web for high-stakes exchanges, and the problem is only getting worse. It's worth doing because the W3C has a platform that 3 billion people around the world use, and we could solve this problem at an international scale if successful. It's worth doing because we plan to add another 3 billion people to the Web in the next 5 years, many of them lacking the basic credentials they need to participate in society, and we can improve upon that condition. It's worth doing because the W3C membership has solved problems like this before, and has changed the world for the better in doing so.

Web Payments: The Architect, the Sage, and the Moral Voice

Three billion people use the Web today. That number will double to more than six billion people by the year 2020. Let that sink in for a second; in five years, the Web will reach 90% of all humans over the age of 6. A very large percentage of these people will be using their mobile phones as their only means of accessing the Web.

TL;DR: In 2015, the World Wide Web Consortium (The Architect), the US Federal Reserve (The Sage), and the Bill and Melinda Gates Foundation (The Moral Voice) each started initiatives to dramatically improve the world's payment systems. The organizations are highly aligned in their thinking. Imagine what we could accomplish if we joined forces.

The Problem

With the power of the Web, we can send a message from New York to New Delhi in the blink of an eye. Millions of people around the globe can read a story published by a citizen journalist writing about a fragile situation in a previously dark corner of the world. Yet, when it comes to exchanging economic value, the Web has not delivered on the promise it fulfilled for exchanging information.

While it costs fractions of a penny to send a message around the globe, on average, it costs tens of thousands of times that to send money the same distance. Furthermore, two and a half billion adults on this planet do not have access to modern financial infrastructure, which places families in precarious situations when a shock such as a medical emergency hits the household. Worse, it leaves these families in a vicious cycle of living hand-to-mouth, unable to fully engage in the local economy much less the global one.

The Architect

What if we were able to use the most ubiquitous communication network that exists today to move economic value in the same way that we move instant messages? What if we could lower the costs to the point that we could pull those 2.5 billion people out of the precarious situation in which they’re operating now? Doing this would have dramatically positive financial and societal effects for those people as well as the people and businesses operating in industrialized nations.

That’s the premise behind the Web Payments Activity at the World Wide Web Consortium (W3C). The W3C, along with its 400 member organizations, standardized the Web and is one of the main reasons you’re able to view this web page today from wherever you’re sitting on this planet of ours. For the last five years or so, a number of us have been involved in trying to standardize the way money is sent and received by building that mechanism into the core architecture of the Web.

It has been a monumental effort, and we're far from being done, but the momentum we've gained so far has far exceeded our predictions. For example, here is just a sampling of the organizations involved in the work: Bloomberg, Google, The US Federal Reserve, Alibaba, Tencent, Apple, Opera, Target, Intel, Deutsche Telekom, Ripple Labs, Oracle, Yandex... the list goes on. What was a pipe dream a few years ago at W3C is a very real possibility today. There is a strong upside here for customers, merchants, financial institutions, and developers.

The Sage

At one time, the US had the most advanced payment system in the world. One of the problems with being first is that you quickly start accruing technical debt. Today, the US payment system ranks among the worst in the world in many of the categories used to evaluate such systems. For the last several years, the US Fed has been running an initiative to improve the state of the US payment ecosystem.

Two of the US Fed’s strengths are 1) poring over massive amounts of research on our financial system and producing a cohesive summary of its state, and 2) its ability to convene a large number of financial players around a particular topic of interest. Their call for papers on ideas for improving the US payment system resulted in 190 submitted papers and a coherent strategy for fixing it. Their most recent Faster Payments Task Force has attracted over 320 organizations that will be proposing systems to fix a number of the US payment system’s rough spots.

If we are going to try to upgrade the payment systems of the world, it’s important to be able to make decisions based on data. The research and convening ability of the US Fed is a powerful force, and the W3C and the US Fed are already collaborating on the Web Payments work. The plan should be to deepen these relationships over the next couple of years.

The Moral Voice

The Bill and Melinda Gates Foundation just announced the LevelOne Project, an initiative to dramatically increase financial inclusion around the world by building a system that will work for the 2.5 billion people who have little to no access to financial infrastructure in their countries. This isn’t just a developing-world problem: at least 30% of people in places like the United States and the European Union don’t have access to modern financial infrastructure.

The Gates Foundation has just proposed a research-backed formula for success for launching a new payment system designed to foster financial inclusion, and here’s where it gets interesting.

The Collaboration

Building the next-generation payment system for the world requires answering the ‘what’, ‘how’, and ‘why’. The organizations mentioned previously will play a crucial role in elaborating on those answers. The US Fed (the sage) can influence what we are building; it can explain what has been and what should be. The W3C (the architect) can influence how we build what is needed; it can explain how all the pieces fit together into a cohesive whole. Finally, the Gates Foundation (the moral voice) can explain the why behind what we are building and the way we’re constructing it.

I’ve had the great pleasure of working with the people in these initiatives over the past several years. Aside from everyone I’ve spoken with being deeply dedicated to the task at hand, I can also say from first-hand experience that there is a tremendous amount of alignment between the three organizations. It’ll take time to figure out the logistics of how to work together most effectively, but it is certainly something worth pursuing. At a minimum, each organization should be publicly supportive of the others’ work. My hope is that the organizations become deeply involved with one another where it makes sense to do so.

The first opportunity to collaborate in person is going to be the Web Payments face-to-face meeting in New York City on June 16th–18th, 2015. The W3C and the US Fed will be there. We need to get someone from the Gates Foundation there as well.

If this collaboration ends up being successful, the future is looking very bright for the Web and the 6 billion people who will have access to Web Payments in a few years’ time.

Three Important Upcoming Web Payments Events

The next two months will bring three important events related to the Web Payments work at W3C:

  1. The deadline for voting in favor of an official Web Payments Activity at W3C is October 10th, 2014. If you know a W3C Member company, be sure to push them to vote in favor of the initiative via the online form. If you have any questions, contact Stephane Boyera.
  2. The first face-to-face meeting for the official Web Payments work will be October 27th and 28th at the W3C Technical Plenary, provided the Web Payments Activity is approved (see #1). If you are a W3C member and want to participate, make sure you have registered for W3C TPAC on those dates; early bird registration ends soon. If you have any questions, contact Stephane Boyera.
  3. The Web Payments use cases are being finalized this week in preparation for a vote by the Web Payments Community Group. Make sure you review the use cases and weigh in on those that concern you by October 3rd, 2014. If you have any further questions, contact Manu Sporny.

If you want more information about any of these upcoming events, feel free to ask about them in the comments below, or email me directly. Details about each one of these events can be found below.

The Web Payments Activity Vote

The W3C is a member organization composed of roughly 389 technology companies, financial institutions, universities, governments, and NGOs. To start work officially, the W3C follows a set process: typically, it holds a workshop to gather input, proposes a charter for one or more groups, and then has the membership vote on the charter.

The plan to initiate the Web Payments work followed this tried-and-true process. The world’s first Web Payments Workshop happened in March of this year in Paris. A charter for a Web Payments Interest Group was then proposed by the W3C and refined by the workshop participants over the summer of 2014. The Web Payments Interest Group’s purpose is to identify gaps in the Web Platform with respect to payments and to propose technical Working Groups to address those gaps.

This is all happening under a larger initiative at W3C called the Web Payments Activity. As the last set of pre-vote comments trickled in, the W3C opened the Web Payments Activity to an official vote by the W3C membership; the vote is currently open (vote now!) and will close on October 10th, 2014.

The W3C Technical Plenary

The W3C holds a technical plenary every year, usually led by a W3C host organization. Last year it was in Shenzhen, China. This year it will be in Santa Clara, California from October 27th-31st. If you want to be involved in the first Web Payments face-to-face meeting, register now for at least the first two days of the plenary. This meeting will set the tone and work plan for much of the next year. As an added bonus, this year is the 25th anniversary of the Web and the 20th anniversary of W3C. There will be a big gala dinner that you will not want to miss. It’s open to the public, so if you’re in Silicon Valley during that time, you may want to get your tickets now.

Web Payments Use Cases

The Web Payments Community Group has been hard at work organizing and refining the use cases that were raised at the Web Payments Workshop. The Community Group has spent close to 18 hours of teleconference time removing unnecessary cruft, more clearly expressing the core needs of each use case, and debating the merits of addressing each use case in the first iteration or later iterations of the Web Payments work. The current list will go through a final review this week and will then be voted on by the Community Group, formalized in a document, and sent as input to the Web Payments Interest Group. If you would like to send your comments to the Web Payments Community Group, please do so by October 3rd, 2014.