When a credential from an outsourced Credential Service Provider (CSP) shows up at the front door of a relying party (RP), the RP needs two pieces of information. First, an answer to the question "Are you the same person this credential was issued to?" and second, information to uniquely resolve and enroll the credential holder at the RP. We have more or less standardized the first, but have not been as mindful about the second.
I have my own opinions as to why this has not been done before:
- This is typically a federation requirement, and successful federations exist in industry verticals where this is addressed by the operator of that closed community
- A driver for this requirement where multi-sector standardization would have significant benefits, e.g. public sector service delivery, has only recently started to come online
- Entities who, in the absence of RP access to authoritative identity establishment sources, have become gatekeepers to identity resolution may desire to protect their IP ("magic sauce")
At the same time, I do believe that in order to deliver public sector services, it is critical to address this issue. But it needs to be done in a manner that looks at the world as it exists and not as we would wish it to be, which in the U.S. means that:
- There is little to no direct access to authoritative identity establishment sources
- Identity verification and validation are done by corroborating different sources of non-authoritative information
- Entities with IP ("magic sauce") to bring to the table when dealing with that aggregated set of data have a role to play
- RPs need a set of quantitative criteria to evaluate what they get from such an entity, i.e. an "identity proofing component"
To make this happen will require three things:
- A clear understanding by the RP of the various approaches it can utilize to enroll users
- An understanding of the context in which IP/proprietary approaches have a role in identity resolution, e.g. at the "identity proofing component"
- Development and standardization of the quantitative criteria used by the RP to evaluate the information it needs for identity resolution
If identity is defined as a set of attributes that uniquely describe an individual, identity resolution is the confirmation that an identity has been resolved to a unique individual within a particular context. In a federation environment, identity resolution is a means to an end; namely user enrollment. This blog post looks at identity resolution in two separate contexts, at the identity proofing component and at the RP.
My earlier blog post on Identity Establishment, Verification and Validation provided a description of those terms. Given that, some things to keep in mind:
- Verification and validation are two separate functions. Validation is typically performed as a subset of verification.
- Verification and validation could be done by different providers but are typically done by a single “identity proofing component” (e.g. CSP or IM)
- An identity proofing component must be able to resolve to a unique individual, within its context, before performing a verification and/or validation function
- An RP is responsible for resolving an identity to a unique individual within its context
- The context of the identity proofing component could be the entire population of the U.S., while the context of the RP is the set of identity records it holds
This leads to the following question. Given the different contexts, is the set of attributes required by the RP for identity resolution the same as the set of attributes used by the identity proofing component when it does identity resolution?
Some initial thoughts that may lead to an answer:
- If the attributes are self-asserted to the RP by the individual, and the RP passes them to the identity proofing component, there has to be prior agreement that the information passed is enough for the identity proofing component to do the resolution, verification and validation
- If the identity proofing component performs the resolution, verification and validation first, it determines the mechanisms and sources used, and the verified attributes sent to the RP could be a subset of what the identity proofing component holds
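The first bullet above can be sketched in code. This is a toy illustration of the RP checking, before it calls out, that a self-asserted bundle covers the attribute set previously agreed with the identity proofing component; the attribute names and the agreed set are my own hypothetical examples, not part of any standard:

```python
# Hypothetical minimum attribute set agreed in advance between the RP
# and the identity proofing component. Names are illustrative only.
AGREED_RESOLUTION_SET = {"given_name", "family_name", "date_of_birth", "address"}

def ready_for_proofing(self_asserted: dict) -> bool:
    """True if the bundle contains a non-empty value for every attribute
    the proofing component needs to attempt resolution, verification
    and validation under the prior agreement."""
    provided = {name for name, value in self_asserted.items() if value}
    return AGREED_RESOLUTION_SET <= provided

bundle = {"given_name": "Pat", "family_name": "Smith", "date_of_birth": "1970-01-01"}
print(ready_for_proofing(bundle))  # address is missing, so this prints False
```

The point of the check is the prior agreement itself: without it, the RP has no way to know whether what it collected is sufficient for the proofing component's context.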
My earlier blog post on proxy/broker/hub/exchange architecture introduced two deployment patterns which I called unified proxy and split proxy. This blog post explores the capabilities that could be implemented by the attribute validation component of a split proxy architecture.
I am becoming more and more convinced that a unified proxy implementation that combines both authentication and attribute validation into a single physical instance limits architectural flexibility and increases privacy and operational burdens.
I won't focus here on the authentication proxy component, but will simply point you to the Government of Canada's SecureKey Concierge Credential Broker Service as an example of a successful, large scale, public sector implementation of a pure authentication proxy. Mike Waddingham has a screen-by-screen walk-through of how it works for our northern neighbors.
At its core, the attribute validation proxy is all about the specialized brokering of attributes from sources that are external AND internal to the RP's trust domain. It must also be interoperable with other attribute brokers (e.g. ID DataWeb Attribute Exchange Network) that exist.
The following are some of the "questions" that I would expect a public sector attribute validation proxy to be able to answer:
- Here is an identifier; send the previously agreed upon verified attribute bundle that enables identity resolution for the individual associated with that identifier
- Here is a self-asserted attribute bundle; verify and validate it
- Here is a self-asserted attribute bundle; return a MATCH/NO-MATCH on a per attribute basis
- Here is an identifier and a policy URI; use the policy URI to look up previously agreed upon actions that need to be performed (e.g. retrieve verified attributes 1, 2, 3, do policy evaluation X, use answer format Y) and provide the answer such that it does not reveal anything sensitive about the individual associated with the identifier
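To make the third question concrete, here is a minimal sketch of a per-attribute MATCH/NO-MATCH answer. The endpoint shape, field names, and the notion of a JSON request are my own assumptions for illustration; the key property, taken from the question itself, is that the proxy confirms or denies each asserted attribute without disclosing the verified values it holds:

```python
import json

def match_response(asserted: dict, verified: dict) -> dict:
    """Return MATCH/NO-MATCH for each self-asserted attribute without
    revealing the verified values the proxy holds."""
    return {
        name: "MATCH" if verified.get(name) == value else "NO-MATCH"
        for name, value in asserted.items()
    }

# Hypothetical self-asserted bundle forwarded by the RP.
self_asserted = {
    "given_name": "Pat",
    "family_name": "Smith",
    "date_of_birth": "1970-01-01",
}

# The proxy's view of the subject, normally assembled from its sources.
verified_record = {
    "given_name": "Pat",
    "family_name": "Smith",
    "date_of_birth": "1971-01-01",
}

answer = match_response(self_asserted, verified_record)
print(json.dumps(answer))
```

Note that the response leaks one bit per attribute, which is why the fourth question's policy-URI indirection matters for more sensitive evaluations.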
What other questions would you want an attribute validation proxy to answer?
Judging the health of a public sector online service delivery program has traditionally been hard. It is even harder when it is in its infancy or has never been attempted before. The following are three indicators I would look for to evaluate if the program is on the path to success.
The last month or so has been very educational for me; sometimes painfully so. At the RSA conference and earlier, I had the opportunity to have in-depth conversations with folks from outside the usual echo chamber including some from very different global jurisdictions.
What I found fascinating were the three points that came up consistently as indicators to program success:
- A focus on listening and delivering over talking and messaging
- The first and highest priority is solving the transaction pain of the individual, with the critical caveat that pain points are identified based on what individuals do, not what they say
- An understanding of the critical role of transaction volume, backed by the resource investments needed to bring it in, measure it, meter it, and monetize it for everyone in the transaction flow
Specialization in the identity service industry has given us component identity services; a good thing. In a typical integration with a Credential Service Provider (CSP) or a Token Manager (TM) the point of integration is often a profile of a protocol, such as SAML, to minimize interoperability issues. This blog post looks at the current status of attribute validation (remote identity proofing) APIs where proprietary is the name of the game.
A relying party will sometimes choose to keep in-house the binding between the token and the identity, and outsource the identity management function to an identity manager. In such a case, the RP will ask for self-asserted information from an individual, and outsource the verification and validation of that information to an Identity Manager (i.e. a "remote identity proofing service").
The challenge in this world is that everyone from specialists like Socure and Trulioo, who focus on social identity verification, to the big guys like Equifax, Experian, Lexis-Nexis and others have proprietary APIs they use for integrating with RPs. This requires one-off integrations when an RP is using multiple Identity Managers, or when it wants to move from an existing vendor to a new one.
At the same time, if you speak with these folks, you quickly realize that their value proposition is not at the protocol level but in the payload they offer. The analytics they offer on top of their aggregated data is what they see as their unique selling point. So it has always been a point of curiosity to me that there has not been more of a movement to standardize the APIs/Interfaces they provide.
Given that they often need to partner with multiple token managers to provide a full CSP solution, and with industry Trust Frameworks seeking clarity around the division of labor between token managers and identity managers, my hope is that this is an area where these folks will come together to standardize at the protocol level and compete at the data/payload level.
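In the absence of such a standard, the one-off integrations described above usually get managed with an adapter layer. The sketch below is my own illustration of that pattern, not any vendor's actual API; the vendor names, request shapes, and responses are all hypothetical stubs standing in for proprietary calls:

```python
from abc import ABC, abstractmethod

class IdentityManager(ABC):
    """Hypothetical common interface an RP might code against. Today
    each vendor requires a bespoke adapter behind it, because there is
    no standardized remote identity proofing API."""

    @abstractmethod
    def verify(self, attributes: dict) -> dict:
        """Verify and validate a self-asserted attribute bundle."""

class VendorAAdapter(IdentityManager):
    def verify(self, attributes: dict) -> dict:
        # In reality: translate to vendor A's proprietary request format,
        # call its service, and normalize the proprietary response.
        return {"verified": True, "source": "vendor-a"}

class VendorBAdapter(IdentityManager):
    def verify(self, attributes: dict) -> dict:
        # Same idea, different proprietary wire format (stubbed here).
        return {"verified": True, "source": "vendor-b"}

def proof(manager: IdentityManager, attributes: dict) -> dict:
    """The RP's code depends only on the interface, so switching vendors
    means swapping adapters rather than rewriting the integration."""
    return manager.verify(attributes)

print(proof(VendorAAdapter(), {"given_name": "Pat"}))
```

A protocol-level standard would, in effect, move the adapter boundary from each RP into the vendors themselves, leaving the vendors to differentiate on the payload, which is where they say their value lies anyway.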
Yahoo just got a lot more interesting to me. Not because of any new application or content strategy, and only peripherally due to their recent federation announcement. No, it is because of what their recent announcement is signaling about their realization of what they have given away, and what they are willing to do to get it back. Let me explain.
Amidst the brilliantly managed and orchestrated global symphony performance that was the OpenID Connect launch, there was a discordant note from Yahoo, an OpenID Foundation corporate board member, when it announced that it would no longer be a Google or facebook relying party. I am sure that the awkwardness of the timing of the announcement was unintentional, but what it signifies about Yahoo is very interesting.
In a previous blog post, I had written about platforms in a multi-sided market, and used Google as an example (just as applicable to facebook and others) of how the Google platform is put together in order to drive consumers across their properties while packaging their targeted knowledge of the consumer to earn revenue from advertisers.
The starting point to make this happen effectively in a seamless and joined-up way across multiple channels is predicated on "owning the identity/user/account/consumer". This was the critical piece that Yahoo was giving up to Google and facebook when it allowed their users to log in to Yahoo using their existing credentials. No more!
What this signifies to me is two things:
- Yahoo leadership is willing to let go of the past and make the tough calls needed for success. Very Drucker-ish
- Yahoo is building out its platform strategy and is executing on the critical role that identity plays in that strategy's success
I am simultaneously impressed and disappointed. Impressed as to the leadership being demonstrated to pivot the strategic direction of an internet-scale company. Disappointed that Yahoo, one of the early giants of the internet, is becoming just one more company that will collect, process, slice and dice our behavior to sell that information to the highest bidder.
In the comments of my previous blog post on fraudulent account activity signaling, Steve Howard pointed to NISTIR 7817: A Credential Reliability and Revocation Model for Federated Identities (PDF) by Hilde Ferraiolo as being relevant to the discussion. It is, and I was rather mortified to realize that it had slipped my mind. So this blog post provides a short synopsis of that work as it applies to fraudulent activity monitoring in federated identity implementations.
To keep it relevant, let me focus on what the report calls the Three Party Model (Credential Holder, Identity Provider and Service Provider) and the Four Party Model (Credential Holder, Identity Provider, Attribute Provider and Service Provider). I would encourage you to read the overview which outlines the various models in which actors in an authentication and attribute validation scenario can come together.
Really liked the emphasis on this bit:
> Evidence of malicious activity at the service provider is not generally shared with the identity provider. This situation is unfortunate, as the service provider is at the forefront of attacks. It has all audit trails and knowledge of suspicious or malicious account activities [...] Service provider feedback is especially useful and indicative in the federation since the feedback is likely reported by several service providers in the federation, thus providing strong evidence of credential compromise.
NISTIR 7817: A Credential Reliability and Revocation Model for Federated Identities
Other points from the report worth noting:
- The introduction sets up what the report calls a Uniform Reliability and Revocation Service (URRS), which "... provides revocation status information to and from identity providers, service providers, attribute providers, and users"
- A role for a credential holder to inform the URRS about a credential compromise
- The concept of a 'Reliability Score' that can be updated by an SP and can be used by other SPs or Identity Providers to make a risk-based decision on future action
- Discussion about how privacy enhancing technologies such as selective disclosure schemes and anonymous credentials could play in this model
Like the shared signals report, this one requires a trusted service that interacts with both Identity Providers and Service Providers, with all the non-technical challenges that implies.
I found the focus on credential revocation checking and status notification (Revoked, Suspended, Active) via the URRS a bit baffling, since in a Three or Four Party Model a credential that has been revoked or suspended by an Identity Provider is not usable in a federation scheme anyway. At the same time, I found much value in the concept of a shared 'Reliability Score' that decreases with each negative feedback from the SPs and serves as input into a risk-based decision by each SP on the suitability of a presented credential in an authentication event.
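A toy sketch of that 'Reliability Score' idea follows. The report describes the concept; the multiplicative decay, its factor, and the per-SP threshold below are my own assumptions chosen to show the mechanics, not numbers from the report:

```python
class ReliabilityScore:
    """Shared score for a credential: each negative SP report lowers it,
    and each SP compares it against its own risk threshold. The decay
    model here is an illustrative assumption, not from NISTIR 7817."""

    def __init__(self, score: float = 1.0, decay: float = 0.8):
        self.score = score
        self.decay = decay

    def report_negative(self) -> None:
        """An SP reports suspicious or malicious activity."""
        self.score *= self.decay

    def acceptable(self, threshold: float) -> bool:
        """Risk-based decision made independently by each SP."""
        return self.score >= threshold

cred = ReliabilityScore()
cred.report_negative()  # first SP reports fraud: 1.0 -> 0.8
cred.report_negative()  # second SP reports fraud: 0.8 -> 0.64
print(round(cred.score, 2), cred.acceptable(0.7))
```

The interesting property is the one the report highlights: corroborating reports from independent SPs compound, so two weak signals together can push the score below a threshold that neither would cross alone.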
My sense is that there are points from both this report and the shared signals paper that are complementary, and could be the core of a shared fraud analytics platform service.
And since I am, at least on a thought exercise level, expending some energy on this and since any seemingly valuable effort/task/time-wasting-exercise requires a good acronym, I hereby name this particular windmill that I am tilting at the Federation-wide Reliable Account Usage Data (FRAUD) Service.