
IdentityServer and Signing Key Rotation

August 9, 2019

When maintaining keys used for cryptographic operations (such as when running a token server that maintains keys used to sign tokens), a good security practice is to periodically rotate your keys. This is the process of retiring one key and onboarding another.

Within IdentityServer, the way you indicate your primary signing key is with the AddSigningCredential extension method we provide as part of registering IdentityServer in the ASP.NET Core dependency injection system. AddSigningCredential can accept an X509 certificate, the subject distinguished name or thumbprint of an X509 certificate stored in the Windows certificate store, or just a plain old RSA key. The public portion of the key used for signing will be included in the discovery document.

We also provide an AddValidationKey extension method to allow additional keys to be included in discovery, such as keys that are not yet active or that have recently been retired. In other words, the keys that you plan to use for signing, or that were recently used.

All of those calls might look like this in your ConfigureServices in Startup:

services.AddIdentityServer()
   .AddSigningCredential("CN=currentKeyName")
   .AddValidationKey("CN=lastKeyName")
   .AddValidationKey("CN=nextKeyName");
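
If you prefer to load the certificates yourself rather than reference them by subject name, a roughly equivalent registration might look like this. This is a minimal sketch: LoadCertificate is a hypothetical helper, the store location and subject names are assumptions, and the exact overloads available depend on your IdentityServer version.

using System.Security.Cryptography.X509Certificates;

// hypothetical helper: load a certificate by subject name from the machine store
static X509Certificate2 LoadCertificate(string subjectName)
{
    using (var store = new X509Store(StoreName.My, StoreLocation.LocalMachine))
    {
        store.Open(OpenFlags.ReadOnly);
        return store.Certificates.Find(
            X509FindType.FindBySubjectDistinguishedName, subjectName, validOnly: false)[0];
    }
}

services.AddIdentityServer()
   .AddSigningCredential(LoadCertificate("CN=currentKeyName"))
   .AddValidationKey(LoadCertificate("CN=lastKeyName"))
   .AddValidationKey(LoadCertificate("CN=nextKeyName"));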

So, what’s the process for performing key rotation?

When you first deploy your IdentityServer, you will have your first signing key (let’s call it key1). You will run with this in production for some amount of time (say 90 days, or 9 months, or whatever you deem the acceptable duration your key should be in use). This is what the AddSigningCredential API is for.

You will then prepare key2 as the next key to be used, but you can’t switch immediately to using it. The reason is that normally OpenID Connect and/or OAuth2 consumers will cache your token server’s key material from the discovery document. If you were to immediately change keys, then new tokens signed with key2 would be delivered to consumers that have only key1 in their cache. What is needed is to introduce key2 into the discovery document prior to the switch over to using key2. This is what the AddValidationKey API is for.

Now in your discovery document you still have key1 as the active signing credential, and additionally key2 as a validation credential. You will leave this running for some amount of time (say 2-5 days or longer depending on cache durations) to allow consumers to update their caches from your updated discovery document. Then you can switch over and promote key2 to your active signing credential.

But what about key1? Well, you need to maintain it in the discovery document even though you won’t be using it anymore for signing. Why?

Let’s say that you just issued a token signed with key1, and then you switch keys (and drop key1 from your discovery document), and then at that exact moment a consumer reloads their cache. This would mean that the consumer would then only have key2 in their cache and would not be able to find the correct key to validate the token signed with key1.

In short, when you retire a key, you need to keep it in discovery. This means when you retire a key you will just switch the two keys used in the calls to AddSigningCredential and AddValidationKey.

Then after some more amount of time (longer than the expiration of any issued token), you can finally remove key1 from discovery.
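
Putting the whole workflow together, the configuration over one rotation cycle might look roughly like the following three phases, each being a separate deployment of the Startup shown earlier (a sketch; key1 and key2 are placeholder names):

// Phase 1: key1 is the only key and is used for signing.
services.AddIdentityServer()
   .AddSigningCredential("CN=key1");

// Phase 2: announce key2 in discovery ahead of time, but keep signing with key1.
// Run like this long enough for consumers to refresh their cached key material.
services.AddIdentityServer()
   .AddSigningCredential("CN=key1")
   .AddValidationKey("CN=key2");

// Phase 3: promote key2 to the signing credential, but keep key1 in discovery
// until every token signed with key1 has expired. Then key1 can be removed.
services.AddIdentityServer()
   .AddSigningCredential("CN=key2")
   .AddValidationKey("CN=key1");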

Given the above workflow, it’s possible that you could have two keys in discovery (if not three or more, depending on how narrow your rotation window is). Just to validate my point, here is a screenshot of Azure Active Directory’s key material from its discovery document. As you can see, there are three signing keys:

[Image: Azure Active Directory’s signing keys from its discovery document]

This also means there are three steps in a key rotation lifecycle. Depending on how you build and deploy your IdentityServer, this might be a manual (and potentially tedious) process. But in short, the primitives are in place for you to implement key rotation in IdentityServer.

HTH

Update:

Rock Solid Knowledge and I have teamed up to release a commercial component that performs key management and key rotation automatically. It can be found here. Enjoy!

Scope and claims design in IdentityServer

February 25, 2019

Very often I see developers that are confused about the relationship of scopes and claims in IdentityServer. Hopefully this blog post will help.

In OpenID Connect and OAuth 2.0, a scope is defined as a resource that a client application is trying to get access to. This concept of a resource is deliberately vague, and the confusion is exacerbated by the two specs using the scope concept for two similar yet disparate purposes. Also, designing scopes is left up to the application developer, which means you must impart semantics into your scopes, and this requires some amount of planning and/or design.

OpenID Connect and Identity Scopes

Given that OpenID Connect is all about an application authenticating a user, then the scope, as a resource, means that the application wants identity data about a user. For example, the user’s unique ID, name, email, employee ID, or something else along those lines. This scope is an identity resource and is an alias for some number of claims that the application requires about the user.

The OpenID Connect specification defines some scopes, for example openid which simply maps to the user’s unique ID (or sub claim), and profile which maps to about 10+ claims which include the user’s first name, last name, display name, website, location, etc. Custom identity scopes are allowed and the scope of the scope, so to speak, is defined by the application developer. So a custom scope called employee_info could be defined which could represent the employee ID, building number, and office number.

In IdentityServer, these identity scopes are modeled with the IdentityResource class. The constructor allows you to pass the name of the scope (e.g. employee_info) and a string array which is the list of claims that scope represents.

new IdentityResource("employee_info", new[] {
    "employee_id", "building_number", "office_number"})

The scopes (and corresponding claims) defined by the OpenID Connect specification are provided by the IdentityResources class and its nested classes such as OpenId, Profile, etc.

var identityResources = new[] { 
    new IdentityResources.OpenId(), 
    new IdentityResources.Profile(),
};

The nice aspect of this design is that the claims are only delivered to the application if needed as expressed by the scopes requested, and different applications can receive different claims by requesting different scopes.

OAuth 2.0 and API Scopes

Given that OAuth 2.0 is all about allowing a client application access to an API, then the scope is simply an abstract identifier for an API. A scope could be as coarse grained as “the calendar API” or “the document storage API”, or as fine grained as “read-only access to the calendar API” or “read-write access to the calendar API”. It’s possible that other semantics could be infused into your scope definitions as well. This scope is an API scope and models an application’s ability to use an API.

In IdentityServer, these API scopes are modeled with the ApiResource class. The constructor allows you to pass the name of the scope (e.g. calendar or documents).

var apiResources = new[] {
    new ApiResource("calendar"),
    new ApiResource("documents"),
};

The access token used to call these APIs will contain a minimal set of claims. Some of these claims are protocol claims (e.g. scope, issuer, expiration, etc.), and there is one main user-related claim, which is the user’s unique ID (or sub claim). If other claims about the user are needed in one of the APIs, then the ApiResource class provides an additional constructor parameter as a string array, which is the list of claims needed. This, in essence, allows the ApiResource class to model an API and the user claims needed by that API.

var apiResources = new[] {
    new ApiResource("calendar", new[] { "employee_id" }),
    new ApiResource("documents", new[] { "country" }),
};

The nice aspect of this design is that the claims are only present in the access token if the access token is meant to be used at those APIs.
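
To tie the two together, a client registration lists which identity and API scopes it is allowed to request, and the resources are registered alongside it using the in-memory configuration stores. A rough sketch (the client ID, grant type, redirect URI, and custom scope names are illustrative):

var client = new Client
{
    ClientId = "hr_portal",   // hypothetical client
    AllowedGrantTypes = GrantTypes.Code,
    RedirectUris = { "https://hr.example.com/signin-oidc" },

    // both the identity scopes and the API scopes this client may request
    AllowedScopes = { "openid", "profile", "employee_info", "calendar", "documents" }
};

services.AddIdentityServer()
    .AddInMemoryIdentityResources(identityResources)
    .AddInMemoryApiResources(apiResources)
    .AddInMemoryClients(new[] { client });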

Summary

We feel this is a nice balance for how to work with the abstract scope concepts in the OpenID Connect and OAuth 2.0 protocols, and at the same time allowing a concrete pattern for expressing what claims are needed by apps and APIs.

Hope this helps.

 

Using OAuth and OIDC with Blazor

January 11, 2019

I am sometimes asked what OIDC/OAuth2 protocol flow a Blazor application would use. Since a Blazor application is just a browser-based client-side application, the answer is the same as for a JavaScript browser-based client-side application (or SPA). More specifically, I’d expect most Blazor applications to be same-domain. Here’s the updated guidance for that.

Same-site cookies, ASP.NET Core, and external authentication providers

January 11, 2019

Recently Safari on iOS made changes to its same-site cookie implementation to be more stringent with lax mode (which is purportedly more in line with the spec). In my testing, I noticed that strict-mode same-site cookies exhibit the same behavior on both Chrome and Firefox running on Windows. This behavior affected ASP.NET Core’s handling of external authentication providers for any security protocol, including OpenID Connect, OAuth 2, Google/Facebook logins, etc. The solution, unfortunately, was to configure cookies to completely disable the same-site feature. Sad.
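
For reference, that blunt workaround amounts to something like this in ConfigureServices. This is a minimal sketch assuming cookie authentication; your app’s actual cookie configuration will differ, and later framework and browser versions additionally require Secure when same-site is disabled.

services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        // disable same-site entirely so the cookie is sent on the cross-site
        // requests coming back from the external provider
        options.Cookie.SameSite = SameSiteMode.None;
    });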

I was curious if we could really figure out what was happening and come up with a solution that allowed us to keep using same-site cookies for our application’s main authentication cookie. I think I have, and this solution could work with any server-side technology stack that works similarly to how ASP.NET Core does when processing authentication responses from external providers (cross-site).

Recap of what’s not working

Here’s a summary of the expected flow:

[Diagram: the expected login flow between the application and the external provider]

The flow fails at the last step, step4, where the user is not logged in. It turns out that’s not exactly what’s happening. Here are the details:

Step1: An anonymous user is in their browser on your application’s website. The user attempts to request a page that requires authorization, so a login request is created and the user is redirected to the authentication provider (which is cross-site).

Step2: The user is presented with a login page and they fill that in and submit. If the credentials are valid then the provider creates a token for the user, and this token needs to be delivered back to the client application. This delivery is performed by sending the token back into the browser and then having the browser deliver it to the application’s callback endpoint. This delivery could be via a redirect with a GET or a form submission via a POST. The problem with same-site cookies is not affected by the method of delivery back to the client application, so either of these triggers the issue.

The key point here is that, from the browser’s perspective, the user is starting a workflow from the login page in the provider’s domain. The response is then sending the user back to the client application (which is cross-site).

Step3: The client application will receive and validate the token, and then issue a local authentication cookie while redirecting the user back to the original page they requested.

This is the step that I think is easy to misunderstand. Because the request from the provider back to the app is cross-site, there is a belief that the issued cookie is ignored by the browser. It is not, though, and the browser will in fact maintain this cookie issued from your application.

Step4: The last redirect in the workflow sends the user back to the original page they requested. This is the step that fails from the end-user’s perspective. The cookie issued from step3 is not sent to the server, and so the user seems to not have been authenticated.

The reason this step fails is not because the cookie was not issued to the browser, but because the current redirect workflow started from the provider’s login page, which is cross-site, so the browser refuses to send the cookie just issued in step3. If at this point the user were to refresh the page, or manually navigate their browser to the original page, the browser would send the cookie and the user would be logged in. The reason is that a refresh or manual navigation is not a cross-site request.

The fix in general

The solution to this problem, then, is to change how the final redirect in step3 is performed. In ASP.NET Core it’s done with a 302 status code, but there’s another way. Instead, the response in step3 could be a 200 OK that renders this HTML:

<meta http-equiv='refresh' 
      content='0;url=https://yourapp.com/path_to_original_page' />

This response in step3, in essence, ends the cross-site redirect workflow from the browser’s perspective and then asks the browser to make a new request from the client side. The trick is that this request starts a new workflow and is considered same-site, since it originates from a page on the application’s own website, so the authentication cookie will be sent. Not Sad.

The fix specifically for ASP.NET Core

Given that the redirect in step3 is handled by ASP.NET Core’s authentication system, we need a way to hook into it and override the redirect. Unfortunately there’s no event that’s raised at the right time for us to change how the redirect is done. So instead we use middleware so we can catch the response before it leaves the pipeline:

public void Configure(IApplicationBuilder app)
{
   app.Use(async (ctx, next) =>
   {
      await next();

      if (ctx.Request.Path == "/signin-oidc" && 
          ctx.Response.StatusCode == 302)
      {
          var location = ctx.Response.Headers["location"];
          ctx.Response.StatusCode = 200;
          var html = $@"
             <html><head>
                <meta http-equiv='refresh' content='0;url={location}' />
             </head></html>";
          await ctx.Response.WriteAsync(html);
      }
   });
   app.UseAuthentication();
   app.UseMvc();
}

This puts a middleware in front of the authentication middleware. It will run after the rest of the pipeline and inspect responses on the way out. If the request was for the application’s authentication redirect callback from step3 (in this case the typical path when using OpenID Connect) and the response is a redirect, then we capture that redirect location and change how it’s done using the client-side <meta> tag approach instead.

Front-channel sign-out notification for OpenID Connect

It turns out there’s another type of request into your app from the external provider when using OpenID Connect, which is the front-channel sign-out notification request. This request is performed in an <iframe> and requires the user’s authentication cookie to perform the sign-out. Since this request is absolutely cross-site, the same-site cookie would be blocked by the browser. We need to perform the same sort of trick to get the browser to make this request originating from our application so the browser considers it same-site.

Here’s the additional code to handle this type of request:

public void Configure(IApplicationBuilder app)
{
   app.Use(async (ctx, next) =>
   {
      if (ctx.Request.Path == "/signout-oidc" && 
          !ctx.Request.Query["skip"].Any())
      {
         var location = ctx.Request.Path + 
            ctx.Request.QueryString + "&skip=1";
         ctx.Response.StatusCode = 200;
         var html = $@"
            <html><head>
               <meta http-equiv='refresh' content='0;url={location}' />
            </head></html>";
         await ctx.Response.WriteAsync(html);
         return;
      }

      await next();

      if (ctx.Request.Path == "/signin-oidc" &&
          ctx.Response.StatusCode == 302)
      {
          var location = ctx.Response.Headers["location"];
          ctx.Response.StatusCode = 200;
          var html = $@"
              <html><head>
                 <meta http-equiv='refresh' content='0;url={location}' />
              </head></html>";
          await ctx.Response.WriteAsync(html);
      }
   });
   app.UseAuthentication();
   app.UseMvc();
}

The workflow for this request is simply re-issuing the request to the sign-out notification endpoint, with the difference being that it will now be same-site. The “skip” flag is needed to ensure we don’t re-issue the request again on that next request.

More general ASP.NET Core solution

The above code is fine if you’re willing to hand-code (and know) the endpoints where you need to convert cross-site redirects into same-site redirects. But if you have several endpoints because you’re dealing with several external providers, this might get tedious. Here’s a more generalized solution to the problem:

public void Configure(IApplicationBuilder app)
{
   app.Use(async (ctx, next) =>
   {
        var schemes = ctx.RequestServices.GetRequiredService<IAuthenticationSchemeProvider>();
        var handlers = ctx.RequestServices.GetRequiredService<IAuthenticationHandlerProvider>();
        foreach (var scheme in await schemes.GetRequestHandlerSchemesAsync())
        {
            var handler = await handlers.GetHandlerAsync(ctx, scheme.Name) as IAuthenticationRequestHandler;
            if (handler != null && await handler.HandleRequestAsync())
            {
                // start same-site cookie special handling
                string location = null;
                if (ctx.Response.StatusCode == 302)
                {
                    location = ctx.Response.Headers["location"];
                }
                else if (ctx.Request.Method == "GET" && !ctx.Request.Query["skip"].Any())
                {
                    location = ctx.Request.Path + ctx.Request.QueryString + "&skip=1";
                }

                if (location != null)
                {
                    ctx.Response.StatusCode = 200;
                    var html = $@"
                        <html><head>
                            <meta http-equiv='refresh' content='0;url={location}' />
                        </head></html>";
                    await ctx.Response.WriteAsync(html);
                }
                // end same-site cookie special handling

                return;
            }
        }

      await next();
   });
   app.UseAuthentication();
   app.UseMvc();
}

The above code is, in essence, the same code ASP.NET Core’s UseAuthentication runs when dealing with requests from external providers. I have simply woven the redirect-handling logic into the normal processing that ASP.NET Core authentication was already doing. Perhaps this type of behavior might make its way into ASP.NET Core in the future.

HTH

 

The State of the Implicit Flow in OAuth2

January 3, 2019

This blog post is a summary of my interpretation and perspective of what’s been going on recently with the implicit flow in OAuth2, mainly spurred on by the recent draft of the OAuth 2.0 for Browser-Based Apps (which I will refer to here as OBBA) and the updated OAuth 2.0 Security Best Current Practice (which I will refer to as the BCP) documents from the OAuth2 IETF working group. These are still in draft, so it’s possible they might be changed in the future.

This is a long post because these new documents have forced the community to rethink the security practices we’ve been using for several years now.

A brief history of the implicit flow

The implicit flow in OAuth2, later adopted in OpenID Connect (OIDC), was originally designed to accommodate client-side browser-based JavaScript applications (also known as “single page applications” or “SPAs”). At the time it was introduced into the specification with trepidation due to concerns with the nature of these public clients running in the browser. A public client is one that runs on a user’s device and thus can’t keep a secret and can’t properly authenticate back to the token server. Native apps also fall under that category.

This trepidation was documented in RFC6819, the OAuth 2.0 Threat Model and Security Considerations spec. In fact, many threats for all the flows are covered in that RFC, and any decent client and token server implementation should heed its advice (for example, using the state parameter for cross-site request forgery (CSRF) protection, exact redirect URI matching, etc.). But the aspect of the implicit flow that is most criticized as difficult to protect is also the fundamental mechanic of what defines the implicit flow, namely that the access token is returned from the token server to the client via the authorize endpoint.

Concretely, the concern is that the access token is delivered to the client via the front-channel in a hash fragment parameter in the redirect URI. Returning the access token in the URL means it’s visible in the browser’s address bar, browser history, and possibly in referrer headers. Given the complexity of HTML, CSS, JavaScript, and browsers there is potential for this access token to leak from the URL. Also, OAuth2 (by itself), doesn’t provide a mechanism for a client to validate that the access token wasn’t injected maliciously. Now that doesn’t mean there were not mitigations against these concerns. Anyone who has ever come to our workshop or hired us for consulting would get an earful on the steps that you need to take in your applications to mitigate these threats. But these mitigations made use of features from OIDC and some strict programming practices. Not everyone using OAuth2 knew to use these mitigations.

One question that would commonly be asked about making browser-based clients more secure is “what about code flow – why can’t we use that instead?”. It turns out code flow (by itself) was worse because 1) public clients don’t use a real secret to exchange the code at the token endpoint, so an attacker could just as easily steal the code to obtain the access token, 2) codes passed via the query string are sent to the server (whereas fragment values are not), so they would be exposed more than when using implicit flow, 3) the client is required to make more requests to complete the protocol for no additional security, and 4) to use the token endpoint the token server would need to support CORS, and CORS was not yet widely enough supported by browsers. At that time, the spec designers could not take a dependency on CORS thus they had to find an alternative. I think this is (at least) one of the main reasons implicit as a flow was originally devised.

The spec committee has long wanted something built into the protocol itself to help protect against this threat of access token leakage from the URL. There are numerous posts on the working group email list that discuss this (from the time OAuth2 started being developed in 2010 and since its completion in 2012). But, at the time, the implicit flow was the best they had to offer. It was important that they provided some guidance rather than no guidance, for fear of people inventing their own security protocols.

And just so we’re all clear on the value of specifications: they are pre-vetted threat models. That’s why we like them and (typically) follow them, because there is a high level of scrutiny, many people have thought about the attacks, documented the approaches we can take to mitigate them, and educated us on the current known issues for the types of activities we’re trying to perform. Without them, each developer would have to come up with their own security design and threat model it themselves, and that historically hasn’t sufficed.

A new hope

Since October 2012 when the OAuth2 RFC was released, the implicit flow was “the best we had” for client-side browser-based JavaScript applications. As a point of reference, recall that client-side JavaScript and full-blown SPAs still weren’t mainstream. For example, AngularJS didn’t really start to get popular until 2014.

Nonetheless, in the working group there was still a desire to “do better” when it came to the implicit flow. Looming on the horizon was hope: HTTP token binding (first introduced in 2015). HTTP token binding was a new spec that would (basically) tie the token to a particular TLS connection ensuring only the rightful client could use that token. This provided a solution to address the concerns about exfiltrating tokens from the browser (and other types of clients too). Things were looking up and the future of security was good. In fact, work was in progress in 2016 to incorporate this new token binding feature into the OAuth2 protocol, but then, unfortunately, in 2018 Google made the decision to drop support for this “unused feature” in their Chrome browser. This effectively made token binding impractical for browser-based clients (despite the final token binding RFCs being completed in the same month).

Around the same time (in 2015) the OAuth2 working group devised RFC7636 Proof Key for Code Exchange by OAuth Public Clients (also known as PKCE) to address an attack against native clients. The attack involved stealing the authorization code as it was being sent back to the client in the redirect URI, and since public clients don’t have a real secret then the authorization code issued was as good as the access token. The mitigation used in PKCE was to create a new dynamic secret each time a client needed to connect to the authorize endpoint. This dynamic secret would then be used on the token endpoint and the token server would help guarantee that only the rightful client could use the code to obtain the corresponding access token.
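
The mechanics are simple enough to sketch: the client creates a random code_verifier per request, sends a SHA-256 hash of it (the code_challenge) on the authorize request, and later presents the original verifier at the token endpoint so the token server can match the two. Roughly, and only for illustration:

using System;
using System.Security.Cryptography;
using System.Text;

// create the per-request secret (the code_verifier)
var bytes = new byte[32];
using (var rng = RandomNumberGenerator.Create())
{
    rng.GetBytes(bytes);
}
var codeVerifier = Convert.ToBase64String(bytes)
    .TrimEnd('=').Replace('+', '-').Replace('/', '_');   // base64url encode

// derive the value sent on the authorize request (code_challenge, method S256)
string codeChallenge;
using (var sha256 = SHA256.Create())
{
    var hash = sha256.ComputeHash(Encoding.ASCII.GetBytes(codeVerifier));
    codeChallenge = Convert.ToBase64String(hash)
        .TrimEnd('=').Replace('+', '-').Replace('/', '_');
}

// the authorize request carries: code_challenge=<codeChallenge>&code_challenge_method=S256
// the token request later carries: code_verifier=<codeVerifier>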

I don’t think anyone in the OAuth2 working group anticipated it, but PKCE turned out to be useful for all types of clients not just native ones. Given that token binding had fizzled, the idea of using code flow with PKCE became a candidate to address the issue of access tokens being exposed in the redirect URI for implicit clients. Also, by this time CORS had finally become well enough supported that it could be utilized for these browser-based clients. This confluence is what seems to have galvanized the work on the documents that this blog post is about.

The OAuth 2.0 for Browser-Based Apps document

In addition to including many of the suggestions already described in the existing RFC6819 OAuth 2.0 Threat Model and Security Considerations, and the use of some other already well-known security practices for JavaScript apps (such as using CSP and CORS), OBBA, in short, recommends using code flow with PKCE to mitigate the potential exposure of the access token from the URL. That’s the main thrust of the document. That’s it.

And having thought about it, despite having all the existing mitigations for implicit flow, I agree that code flow with PKCE is valid advice and an improvement. Justin Richer, on the working group email list, summarized it the best in my opinion:

“The limitations and assumptions that surrounded the design of the implicit flow back when we started no longer apply today. It was an optimization for a different time. Technology and platforms have moved forward, and our advice should move them forward as well.”

As such, I have updated my OIDC certified client library oidc-client-js to support code flow with PKCE as of 1.6.0 (released in December 2018). I will follow up with another blog post on those details.

Other recommendations in OBBA

My summary of OBBA is a bit curt; it actually does provide a few other recommendations for JavaScript apps. The first has to do with same-domain apps, and the second has to do with the elephant in the room now that code flow is in play: refresh tokens. I have concerns about the latter.

Same-domain apps

Same-domain applications are those where the client-side browser-based application is hosted from the same domain as the API that it is invoking. Often an application would use a cookie as the authentication mechanism when making the calls to the backend API, but that design was discouraged for many years due to the potential for CSRF attacks on the API. Using token-based authentication was an approach to mitigate the CSRF attack, and those tokens would be obtained by the client-side JavaScript application using OAuth2 (and presumably with the implicit flow).

The somewhat surprising recommendation in OBBA for same-domain applications is to not use OAuth2 at all and instead return to the old approach of using cookies to authenticate to the backend API. Suggesting an approach that makes security worse seems counterproductive, but the recommendation is based on yet another recent security specification: same-site cookies. Same-site cookies allow a web server, when issuing a cookie, to instruct the browser to only send the cookie when a request comes from the domain that issued it. This behavior mitigates the CSRF attack.

Given that same-site cookies have sufficient browser support now, it seems practical to rely upon them for our CSRF protection, and thus this recommendation in OBBA is also convincing. But at the same time it’s sort of ironic that the OAuth 2.0 for Browser-Based Apps best practices document suggests not using OAuth.

Anyway, this doesn’t help you if your API must be accessed by clients on different domains or from clients that aren’t running in the browser, so using OAuth2 and token-based security for your API for those scenarios is still appropriate.

Token renewal and refresh tokens

Access tokens expire and client applications need some user-friendly mechanism for renewing them. Prior to OBBA, a common renewal technique for implicit clients was to make a new request to the authorization endpoint using a hidden iframe. In fact, the OIDC spec even added a provision for this style with the prompt=none authorization request parameter. This relies upon the user’s authentication session (typically in the form of a cookie) at the token server to succeed. This approach works fairly well (if you can get over being incensed at still using an iframe in this day and age). Recall, though, that when the implicit flow was developed CORS was still not a viable option. And even with the updated guidance in OBBA, the iframe approach is still a valid one.

Now that our browser-based JavaScript applications will be using code flow, the obvious question comes up: “why couldn’t we use refresh tokens instead of iframes to renew access tokens?”. This is technically possible (again, assuming CORS), but the concern is that if the refresh token is exfiltrated from the browser then it can be used by an attacker to perpetually access the API on behalf of the user. Unbound refresh tokens issued to public clients are more powerful (and therefore more dangerous) than an individual access token. As such, OBBA states that you should not use refresh tokens for browser-based applications. Case closed, right? Maybe…

The practical effects of OBBA and BCP

The intent of the OBBA guidance was to simplify the work a browser-based JavaScript application must do to obtain access tokens, and to reduce the mental burden on application developers so that they do not need to be security experts. But, unfortunately, I worry these documents will have the opposite effect.

The OBBA has language that seems to contradict the earlier statement about not using refresh tokens. The OBBA and the BCP (the other document this post is about) both indicate that you need to evaluate for yourself if you want to use refresh tokens in browser-based clients.

To their credit, both documents provide guidance on ways to protect the refresh token in the browser and mitigate abuse by an attacker. The main mitigations include using a client-bound refresh token and/or performing refresh token expiration and rotation when the refresh token is used. Unfortunately, these mitigations might not be available based on the situation. Having said that, I also added refresh token support to oidc-client-js in 1.6.0, including renewal and revocation. Again, more about that in another blog post.
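
As an illustration of what those mitigations look like on the token server side, in IdentityServer rotation and expiration are per-client settings roughly along these lines (a sketch; the client ID and lifetime values are examples, not recommendations):

var client = new Client
{
    ClientId = "browser_spa",          // hypothetical browser-based client
    AllowedGrantTypes = GrantTypes.Code,
    RequirePkce = true,
    RequireClientSecret = false,       // public client
    AllowOfflineAccess = true,         // allows a refresh token to be issued

    // rotation: each refresh token may be used once, and a replacement is issued
    RefreshTokenUsage = TokenUsage.OneTimeOnly,

    // expiration: sliding lifetime with an absolute upper bound
    RefreshTokenExpiration = TokenExpiration.Sliding,
    SlidingRefreshTokenLifetime = 15 * 60,
    AbsoluteRefreshTokenLifetime = 8 * 60 * 60
};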

If you wish to use refresh tokens in the browser, this means you must use a token server that has the mitigation features described in these documents. Not all token servers support these recommendations. I believe one intended audience of these documents is the token server vendors. But on the working group email list I see some vendors that express concern that they will not be able to accommodate the approaches recommended (or at least not soon). Having said that, IdentityServer already has support for many of the recommendations, and we are making plans to add additional mitigations.

This escape clause is the concern I have with these documents. I’m left uneasy with the burden now being back on the developer to decide whether to use refresh tokens and to evaluate their token server’s support for the mitigations needed in browser-based clients. The developer will need to know how to use them, configure them properly, know how to protect them in their client, and how to threat model those decisions. We know that most developers are not security specialists. From that perspective, perhaps this is a step backwards in terms of improving overall security. I suspect we will see compromises in the future based on refresh tokens being used in the browser.

Where are we?

After all of this, what options are we left with? It seems we have three styles for our client-side browser-based JavaScript clients calling APIs:

Same-domain apps with cookies

This style would be for same-domain applications that use same-site (and HTTP-only, as always) cookies to authenticate to an API backend. The backend would issue the cookie based on the user’s authentication (which itself could be the result of SSO to an OIDC token server), and cookies would be renewed while the user is still active in the client. This style of application would use well-established approaches for securing the client, including CSP. This style is admittedly the easiest for the developer. The main downside might be if you are required to support older browsers that lack same-site cookie support.
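
A minimal sketch of that style’s cookie configuration in ASP.NET Core might be (the cookie settings are the point; everything else about the app is assumed):

services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        options.Cookie.HttpOnly = true;              // not readable from JavaScript
        options.Cookie.SameSite = SameSiteMode.Lax;  // mitigates CSRF against the API backend
        options.SlidingExpiration = true;            // renew while the user is still active
    });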

OAuth2 clients without using refresh tokens

This style would be for browser-based clients that need to use cross-domain APIs, or an API that only accepts tokens (and doesn’t support cookies). This client would use code flow with PKCE to obtain the access token, but the rest would be essentially the same as what an implicit client does today, including using an iframe to renew access tokens. Again, standard approaches should be used to secure the client application (including CSP). The token server will need to support CORS and PKCE, and the ability to renew tokens is based on the user’s session at the token server.

OAuth2 clients using refresh tokens

This style is essentially the same as the previous, except that refresh tokens would be obtained by the client and used to renew access tokens. To mitigate the attacks against the refresh token being leaked the token server needs to support some sort of client-bound refresh tokens, or a refresh token expiration and rotation strategy.

Choose wisely.

 

Beware the combined authorize filter mechanics in ASP.NET Core 2.1

July 15, 2018

In ASP.NET Core 2.1 one of the security changes was related to how authorization filters work. In essence, the filters are now combined, whereas previously they were not. This change in behavior is controlled via the AllowCombiningAuthorizeFilters property on MvcOptions, and it is also set with the new SetCompatibilityVersion API that you frequently see in the new templates.
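
For reference, both knobs live in ConfigureServices and might be set like this (a sketch):

// opt in to all of the 2.1 behaviors (including combined authorize filters)
services.AddMvc()
    .SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

// or control this specific behavior explicitly
services.AddMvc(options =>
{
    options.AllowCombiningAuthorizeFilters = false;   // keep the pre-2.1 semantics
});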

Prior to 2.1 each authorization filter would run independently, and all the authorization filters would need to succeed to allow the user access to the action method. For example:

[Authorize(Roles = "role1", AuthenticationSchemes = "Cookie1")]
public class SecureController : Controller
{
    [Authorize(Roles = "role2", AuthenticationSchemes = "Cookie2")]
    public IActionResult Index()
    {
        return View();
    }
}

The above code would trigger the first authorization filter, run “Cookie1” authentication, set the HttpContext’s User property with the resultant ClaimsPrincipal, and then check the claims for a role called “role1”. The second authorization filter would then run “Cookie2” authentication, overwrite the HttpContext’s User property (thus losing the “Cookie1” user’s claims) with the resultant ClaimsPrincipal, and then check the claims for a role called “role2”. In short, the user had to have both cookies to be granted access. A side effect of this is that in the action method, the code would only see the claims from “Cookie2”.

With the new compatibility changes in 2.1, the behavior of the above authorization filters has changed. The mechanics are that the authorization filters are now combined (somewhat). The roles are still kept separate, meaning the user must still have both “role1” and “role2”. But the surprising change is that now instead of both schemes being required, now only one is.

What happens is that both “Cookie1” and “Cookie2” are authenticated (if present) and the resultant claims are combined into the one User object. Then the checks for both “role1” and “role2” are done. So if both roles were only in one cookie, then access would be granted. And, of course, in the action method the combined claims from “Cookie1” and “Cookie2” would be available.

This is a different semantic than the way things previously worked. In essence, your authorize filter requirements might be relaxed due to the presence of other authorize filters in the action method invocation hierarchy.

A scenario where this might be an issue is an app that has both UI and APIs. A common technique is to use a global filter as blanket protection to require that all users be authenticated in the rest of the app:

services.AddMvc(options =>
{
   var policy = new AuthorizationPolicyBuilder()
      .AddAuthenticationSchemes("ApplicationCookie")
      .RequireAuthenticatedUser()
      .Build();
   options.Filters.Add(new AuthorizeFilter(policy));
});

And then an API action method like this:

[Authorize(AuthenticationSchemes = "Bearer")]
public IActionResult PostData()
{
    ...
}

This new behavior opens us up to possible XSRF attacks on our APIs, whereas pre-2.1 the explicit authentication scheme on the action method protected us.

Now of course, policy schemes (aka virtual schemes), which are also new in 2.1, could help us address this, but that’s a design change in your app.
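
For illustration, such a policy scheme might look roughly like this (a sketch; the scheme names and the selector logic are assumptions for this example):

services.AddAuthentication(options =>
{
    options.DefaultScheme = "ApplicationCookieOrBearer";
})
.AddPolicyScheme("ApplicationCookieOrBearer", "Cookie or Bearer", options =>
{
    // forward API requests carrying an Authorization header to the bearer handler,
    // and everything else to the application cookie
    options.ForwardDefaultSelector = ctx =>
        ctx.Request.Headers.ContainsKey("Authorization") ? "Bearer" : "ApplicationCookie";
})
.AddCookie("ApplicationCookie")
.AddJwtBearer("Bearer", options => { /* ... */ });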

I agree with the suggestion by Microsoft in the docs regarding this new feature:

We recommend you test your application using the latest version (CompatibilityVersion.Version_2_1).

But not so sure about their other comment:

We anticipate that most applications will not have breaking behavior changes using the latest version.

So, beware the side effects of the new combined authorization filter behavior in ASP.NET Core 2.1.

IdentityManager2

July 9, 2018

In 2014 I developed and released the first version of IdentityManager. The intent was to provide a simple, self-contained administrative tool for managing users in your ASP.NET Identity or MembershipReboot identity databases. It targeted the Katana framework, and it served its purpose.

But now that we’re in the era of ASP.NET Core and ASP.NET Identity 3 has come to supplant the prior identity frameworks, it’s time to update IdentityManager for ASP.NET Core. Unfortunately I’ve been so busy with other projects that I have not had time. Luckily Scott Brady, of Rock Solid Knowledge, has had the time! They have taken on stewardship of this project so it can continue to live on.

I’m happy to see they have released the first version of IdentityManager2. Here’s a post on getting started. Congrats!