One of the first steps taken to protect a system from authentication errors is the determination of its assurance level requirement. That risk assessment process takes as input potential harm and likelihood of harm. This blog post looks at the applicability of the likelihood factor when assessing assurance level requirements for Internet connected systems.
The classic "E-Authentication Guidance for Federal Agencies" (OMB M-04-04) [PDF] defines risk from authentication error as a function of two factors: (a) the potential harm or impact and (b) the likelihood of such harm or impact. The categories of harm and impact, and how to apply them per OMB M-04-04, can be found in my earlier blog post on HOW-TO Conduct a Risk Assessment to Determine Acceptable Credentials.
The key point to note is that most risk assessment methodologies allow for “tuning” the risk using a “likelihood of harm/impact” factor, which looks something like this:
Risk of Authentication Error = Potential Impact/Harm * Likelihood of Impact/Harm
But how does one determine the "likelihood of harm" number? The two classic approaches are to explore "base rates" or to consult with experts. But there is a gotcha with experts:
The simplest and most intuitive advice we can offer [...] is that when you’re trying to gather good information and reality-test your ideas, go talk to an expert. Here’s what is less intuitive: Be careful what you ask them. Experts are pretty bad at predictions. But they are great at assessing base rates.
Decisive: How to Make Better Choices in Life and Work
So a prediction by an expert may not be all that valuable. But what about the base rates? My concern there is the constantly evolving threat environment that is the Internet, and how base rates that are based on past data are an unreliable predictor of the future.
So my recommendation in this particular case is rather simple: in this type of evaluation, set the "likelihood" factor equal to 1. DO NOT discount the likelihood of harm, and ALWAYS assume there is a likelihood of harm:
Risk of Authentication Error = Potential Impact/Harm * 1
What that means is that, if as part of your assurance assessment you need to factor in the impact or harm from an alien invasion, do not discount the likelihood! Stand firm, fully account for it, and put into place compensating controls to mitigate the consequences.
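The recommendation above can be sketched as a trivial calculation. This is a hypothetical illustration only: the impact categories and their numeric scores are assumptions for the example, not values from OMB M-04-04.

```python
# Hypothetical impact categories and scores; the numeric values here are
# assumed for illustration, not taken from OMB M-04-04.
IMPACT_SCORES = {"low": 1, "moderate": 2, "high": 3}

def authentication_risk(potential_impact: str, likelihood: float = 1.0) -> float:
    """Risk of Authentication Error = Potential Impact/Harm * Likelihood.

    Per the recommendation above, likelihood defaults to 1 so that the
    potential impact is never discounted for Internet-connected systems.
    """
    return IMPACT_SCORES[potential_impact] * likelihood

# With likelihood pinned at 1, risk equals the raw impact score.
print(authentication_risk("high"))        # 3.0
print(authentication_risk("high", 0.5))   # 1.5 -- discounting, discouraged
```

The point of leaving `likelihood` as a parameter is to make the discounting decision explicit: any call that passes a value below 1 is visibly choosing to bet against the threat environment.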
Identity, authentication, attribute management and authorization domain experts tend to seek clear distinctions between each of those facets. The operational folks who actually deal with these issues often blur the boundaries between them. This blog post shows an example of laying out access control use cases from an operational perspective that I found rather educational.
With the current buzz around mobility and BYOD, there is sometimes a belief that the infrastructure and choices that exist today will have to be completely re-done in order to accommodate new devices. While I am not sure about that, I recently saw a public NASA ICAM presentation that outlined a framework for how to look at access control from an operational perspective that I found relevant.
I've kept the concept, but changed some of the details for the sake of clarity:
The key to the above visualization is knowing that no one does credentialing and authentication for its own sake; they are a means to manage access to a system or resource. From an operational perspective, it lets you call out an end-to-end process in natural language: "A person who is anonymous, using an organization-managed PC, on the organization's network, wants to access administrator-level functions during normal business hours".
You can then lay out the use case variations using a tabular format:
| Use Case | Applicability | Priority | Criteria A |
| -------- | ------------- | -------- | ---------- |
It immediately gives you a way to articulate possibilities that may or may not apply to you: What if it was a smartphone instead of the PC? What if the connection is from the Internet? It also gives you insight into which aspects change and which remain the same.
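Laying out the variations is essentially enumerating the cross-product of the dimensions in that natural-language template. A minimal sketch, where the dimension names come from the example sentence in the post and the specific values are my own assumptions:

```python
from itertools import product

# Dimensions drawn from the natural-language template ("A person who is
# <identity>, using <device>, on <network>, wants to access <function>
# during <time>"); the candidate values below are assumed for illustration.
dimensions = {
    "identity": ["anonymous", "authenticated employee"],
    "device": ["organization-managed PC", "personal smartphone"],
    "network": ["organization network", "Internet"],
    "function": ["administrator functions", "read-only reports"],
    "time": ["normal business hours", "after hours"],
}

# Every combination is a candidate use case row for the table above.
use_cases = [" | ".join(row) for row in product(*dimensions.values())]

print(use_cases[0])
print(len(use_cases))  # 32 candidate rows from 5 binary dimensions
```

Most of the 32 combinations will be marked not applicable or low priority; the value of enumerating them is that the pruning becomes an explicit, reviewable decision rather than an omission.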
Do you have any pointers to frameworks like these that help to clarify choices people need to make regarding access controls?
Web APIs, API Management, and Open Data are hot topics these days for application developers. At the same time, protecting the information and data transferred over a variety of delivery channels is top of mind for identity and security folks. I am seeking current practices and approaches that address the needs and concerns of both communities.
For application developers, these are exciting times during which more and more data is available over Web APIs, and there is increasing relevance for the "Internet of Things". For identity and security folks these are "interesting" times in which perimeters are disappearing, delivery channels are expanding, and security controls are applied no longer at the device or app server but at the level of data and information. The security marketplace has responded with acquisitions such as Intel/Mashery, Axway/Vordel and CA/Layer 7.
As I've noted before, these changes do not need to be treated in isolation but as an opportunity to work together. As such, I've been trying to be more intentional about stepping outside the usual "identity, access, compliance, security" bubble to seek out, learn and understand the needs and priorities on the service delivery and application development side of the house.
In trying to educate myself by having discussions with the smart people in this domain, I have also started to put together a set of questions that need to be answered to meet the needs of all concerned:
- What are the current approaches and best practices for securing web APIs?
- How easy is it to use the capability from an API consumer's (developer's) perspective?
- What options exist for the management of APIs across multiple organizations?
- Are there consistent approaches for securing APIs that deliver data over multiple channels (web, mobile etc.)?
- What approaches exist for integrating the API and API management into an organization's existing security and identity infrastructure?
- What current protocols bridge the gap between how identity and security is done on the web side to how it is done on the web API side?
- Are there best practices around implementing identity and access management for publicly facing APIs that are used by those outside your organization?
- What federation protocols play well across both Web SSO and Web APIs? Are there particular use cases where they work best?
I am not sure if I am on the right track (and I know that my questions are weighted towards the security side, which is not ideal), so am looking to become smarter about this topic. If you have knowledge and expertise in this area and are interested in having a conversation, please feel free to ping me directly (if you have my contact info) or via LinkedIn.
First a new idea is attacked as absurd; then it is admitted to be true, but obvious and insignificant; finally it is seen to be so important that its adversaries claim they themselves discovered it.
Many of the current conversations about identity are triggering echoes in my mind of the Cycle of Time quote from Battlestar Galactica "All of this has happened before, and all of it will happen again". So in the interest of not reinventing the wheel, I wanted to provide pointers to some existing definitions regarding Assurance Concepts and Trust Frameworks that could serve as the foundation for meaningful conversations.
The phrase "Identity is the New Money" is something I saw first on Dave Birch's blog post and the concept became much more real to me when he provided a synopsis of a recent SXSW Session on "Identity+30" by Sam Lessin, Head of the Identity Product Group at Facebook. It yielded some very interesting insights about the role of identity at some of the big players in the industry, and how it is driving their current behaviour.
At the same time, in order to get to an operational "trust and trade layer" leveraging the social graph and/or credentials, standardized identity assurance is needed as the currency of trust. As such, clarity on assurance and related aspects is foundational to understanding the big picture.
Unfortunately, this is where I see a lot of re-inventing the wheel happening these days.
So, if you are looking for a model on assurance and related concepts, a good place to start is with the definitions in the Pan-Canadian Assurance Model. [Credit to Tim Bouma from Canada TBS, who put the above model together and from whom I got the phrase "standardized assurance is the currency of trust"] The only minor terminology issue I have with the Pan-Canadian Assurance Model is its use of "Credential" instead of "Token".
As to the definition of a Trust Framework, I like the one from the American Bar Association's Federated Identity Management Legal Task Force:
An Identity Trust Framework is the governance structure for a specific identity system consisting of:

- the Technical and Operational Specifications that have been developed
  - to define requirements for the proper operation of the identity system (i.e., so that it works),
  - to define the roles and operational responsibilities of participants, and
  - to provide adequate assurance regarding the accuracy, integrity, privacy and security of its processes and data (i.e., so that it is trustworthy); and
- the Legal Rules that govern the identity system in order to
  - regulate the content of the Technical and Operational Specifications,
  - make the Technical and Operational Specifications legally binding on and enforceable against the participants, and
  - define and govern the legal rights, responsibilities, and liabilities of the participants of the identity system.

What Is an Identity Trust Framework? (PPT)
There are 40 miles of the storied Appalachian Trail (A.T.) in Maryland. We have been having some spectacular weekend weather in the local area and I am taking the opportunity to hike sections of it.
The terrain in Maryland is considered fairly easy by A.T. standards, with only a 1,650-foot change in elevation from the low point at the Potomac River (250' elevation) to the high point at High Rock (1,900' elevation). But there are some sections with impressive scenery as well as historic sites.
My recent hike took me past the first completed monument dedicated to the memory of George Washington. It was erected by the citizens of the city of Boonsboro in 1827.
I believe there is a place for the use of compensating controls when it comes to identity assurance, but am ambivalent about the approach that is referred to as "trust elevation". This blog post describes my understanding of the two approaches and why I believe the former is a more valid and realistic approach in the current time frame.
In a recent discussion on compensating controls and trust elevation, a friend who focuses on the privacy side of the house asked me whether the difference between them is simply a matter of semantics. I found that to be a fair question and thought I would provide my answer here as well.
Compensating controls, as they relate to identity assurance, are measures implemented by the relying party when there is a mismatch between the assurance available from a credential service provider and what the relying party needs. I've written about this before, so won't repeat it here. The point is that, for a variety of reasons, including the unavailability of credentials at the needed LOA or the need to make the user journey as frictionless as possible, an RP may put into place controls that seek to mitigate the risk of mis-identification.
Those controls are often transactional in nature and allow the RP to operate within a risk profile it is comfortable with. There are trade-offs the RP needs to be aware of in making this decision, and the key take-away is that the manner in which the compensating controls are implemented is unique to each RP, which is fully responsible for the consequences. The uniqueness of the implementation, combined with the RP-specific risk appetite, means that it is almost impossible to quantify the control mechanisms such that they can be used outside the RP's domain.
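The transactional nature of these controls can be sketched as a simple decision: compare the required LOA against what the presented credential provides, and select RP-specific mitigations for the gap. This is a hypothetical illustration; the control names and gap-to-control mapping are assumptions, and a real RP's mapping would reflect its own risk appetite.

```python
# Hypothetical, RP-specific mapping from assurance gap to compensating
# controls; the control names here are illustrative assumptions.
COMPENSATING_CONTROLS = {
    1: ["transaction amount limits", "out-of-band notification"],
    2: ["step-up knowledge checks", "manual review queue"],
}

def controls_for(required_loa: int, credential_loa: int) -> list[str]:
    """Select transactional mitigations for an assurance mismatch."""
    gap = required_loa - credential_loa
    if gap <= 0:
        return []  # credential already meets the requirement
    # A gap too large to compensate for means the transaction is denied.
    return COMPENSATING_CONTROLS.get(gap, ["deny transaction"])
```

Note that the output is a set of actions applied within the RP's own domain; nothing here changes the credential's assurance level, which is exactly the distinction from trust elevation discussed next.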
Trust elevation, as I understand it, seeks to quantify a set of "other factors" that can be used in combination with an existing credential in order to "elevate" the assurance level of that credential beyond its original level. It is similar in concept to the multi-token assurance level escalation in NIST SP 800-63-1, but uses factors such as behavior and context instead of additional tokens.
Conceptually, I understand the intent and the approach.
What I disagree with are:
- the presumption that the approach is raising the assurance level instead of mitigating the risk of using a lower assurance credential
- that the "elevation" can persist beyond the session; A Bad Idea
- that the "elevated" level of assurance can then be asserted downstream to another application or in a federation context; Another Bad Idea
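The session-scoping objection can be made concrete with a small sketch. The contextual signals that produced the elevation were observed only for the current session, so in this (hypothetical) model the elevated level lives on the session object, never on the credential, and anything asserted downstream falls back to the credential's original level. The class and attribute names are my own illustration:

```python
from dataclasses import dataclass

@dataclass
class Credential:
    loa: int  # the level the credential was actually issued at

@dataclass
class Session:
    credential: Credential
    effective_loa: int = 0

    def elevate(self, contextual_boost: int) -> None:
        # Elevation from behavior/context applies to this session only.
        self.effective_loa = self.credential.loa + contextual_boost

    def assertable_loa(self) -> int:
        # Anything asserted to another application or federation partner
        # is capped at the credential's original level.
        return self.credential.loa

session = Session(credential=Credential(loa=2))
session.elevate(1)
print(session.effective_loa)    # 3 -- local to this session
print(session.assertable_loa()) # 2 -- what downstream parties may rely on
```

The design choice is the point: because the elevated value is never serialized onto the credential, it cannot persist past the session or leak into a federation assertion by accident.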
Is my understanding of trust elevation correct? Do you believe that factors such as behavior can be quantified such that they can be consistently used across organizations?