diridp: replace access keys with public key crypto

How often do you change your passwords? Even with password managers, we tend to stick with the same passwords for years. Protecting user accounts is better served by measures like two-factor authentication.

But two-factor is not really a thing for most applications. Think about all the passwords, tokens, secrets and access keys your applications use to talk to providers like AWS, GCP, Azure, other specialized services, or even just regular old databases. They may be strong random passwords, but how many years can you reuse a password like that before it becomes a problem?

I ran into an interesting alternative in Kubernetes, which implements an OpenID Connect provider. Normally, that's a way for users to identify themselves on the web across services, but Kubernetes uses it for containers. OpenID Connect is a fancy specification, but stripped down here it can be explained fairly simply: create an RSA key pair, sign some JSON Web Tokens (JWTs) with the private key, then publish the public key over HTTPS.
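Concretely, 'publish over HTTPS' means serving two small JSON documents: a discovery document at /.well-known/openid-configuration, and the key set it points to. Trimmed down to the fields that matter here (the key set URL and key ID are just illustrative placeholders), they look roughly like:

https://example.com/.well-known/openid-configuration:

{
  "issuer": "https://example.com",
  "jwks_uri": "https://example.com/jwks.json"
}

https://example.com/jwks.json:

{
  "keys": [
    {
      "kty": "RSA",
      "alg": "RS256",
      "use": "sig",
      "kid": "key-1",
      "n": "<base64url RSA modulus>",
      "e": "AQAB"
    }
  ]
}

A service that receives a signed token looks up these documents to find the public key to verify it with.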

As it turns out, OpenID Connect is starting to become a supported way for applications to identify themselves to some services. In AWS, and probably other cloud providers too, you can exchange a JWT for a temporary access key, then proceed as normal. This is a flow supported by all the official SDKs, with little configuration and automatic refresh.

The advantage is not only that everything is automatically rotated, but also that applications read the JWTs at run-time, instead of having some static place where credentials have to exist and be tracked, like application config files or Terraform state.

I wanted this outside of Kubernetes, so I wrote 'diridp'.

diridp

Diridp is a simple tool that generates an RSA key pair and rotates it regularly. You configure it to write JWTs to specific paths for your applications to find, and it rotates those regularly too. Finally, it maintains a webroot directory that you can serve from Nginx, Apache httpd, or any HTTPS webserver as static files, and voilà: you are now an OpenID Connect identity provider!
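To give an idea of the webserver side: serving that webroot can be as small as an Nginx server block like the one below. The paths here are hypothetical (use whatever matches your setup and diridp configuration); the only requirement is that the files are reachable over HTTPS at the issuer address.

server {
    listen 443 ssl;
    server_name example.com;

    # Certificate from Let's Encrypt (or any other CA).
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # The directory diridp maintains, served as plain static files.
    root /var/lib/diridp/webroot;
}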

On the diridp side, a simple configuration looks like:

providers:
  main:
    issuer: "https://example.com"
    keys:
      - alg: RS256
    tokens:
      - path: "/run/diridp/my-application/token"
        claims:
          sub: "my-application"
          aud: "some-cloud-service.example.com"

Some knowledge of JWTs is useful here. Basically, a JWT is a set of 'claims' signed with the RSA private key, and the claims are just properties on a JSON object (a full example payload follows the list):

  • iss (issuer), which diridp fills with the issuer value from the config, and which is the HTTPS address of the provider. Your application independently hands the JWT over to some service, and the issuer field is how that service finds the provider it came from. It can then fetch the RSA public key from the provider and validate the signature.

  • iat (issued at) / exp (expires) / nbf (not before) are timestamps that restrict during which time window the token is valid. These are automatically added by diridp.

  • Other claims are manually configured in the claims section in diridp config. The above example demonstrates the two most common ones:

    • sub (subject) is used to distinguish applications, users, etc. from the same provider, and can be used by the receiver to assign the correct set of permissions.

    • aud (audience) restricts the JWT to a specific receiver, so that for example, when you use a JWT to authenticate to an Amazon service, Amazon can't maliciously reuse the JWT at a Google service.
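Put together, the decoded payload of a token generated from the example configuration above would look something like this (the timestamps are made up):

{
  "iss": "https://example.com",
  "sub": "my-application",
  "aud": "some-cloud-service.example.com",
  "iat": 1651230000,
  "nbf": 1651230000,
  "exp": 1651316400
}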

In addition, some knowledge of Unix permissions is useful, especially on servers that run multiple applications or are even shared by multiple users. Typically, you'd place the JWT in some directory that has access restricted to just the intended application.
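As a sketch, assuming diridp runs as root and the application as a dedicated user called my-app (both assumptions, not something diridp requires), the token directory could be locked down like this:

# Only root (diridp) can write; only the my-app group can enter and read.
chown root:my-app /run/diridp/my-application
chmod 0750 /run/diridp/my-application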

Using JWTs

So now you have a JWT as a file on disk. How to use it? That depends on what service your application talks to, but let's take AWS as an example.

In the AWS web console, you can go to Identity & Access Management (IAM) and create an 'identity provider'. This is the record on the AWS side that tells it what JWTs to accept. Most importantly, here you provide the 'audience' value, which must simply match the aud claim in the JWT.
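If you prefer the CLI over the web console, the same record can be created with (the thumbprint value is a placeholder):

aws iam create-open-id-connect-provider \
  --url https://example.com \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list <thumbprint of the issuing CA>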

Next you create an IAM role, which is the actual identity for your application on the AWS side. The first step in creating a role is to select a 'trusted entity', which is the important part. Here you pick 'web identity' and select the provider you just created. From there, you continue as normal, assigning permissions and a name.

The AWS web console essentially helped you create a 'trust policy' document in their custom JSON syntax. By default it doesn't restrict on the sub claim, but I recommend adding it to the conditions. Altogether, the policy may look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/example.com"
      },
      "Condition": {
        "StringEquals": {
          "example.com:sub": ["my-application"],
          "example.com:aud": ["sts.amazonaws.com"]
        }
      }
    }
  ]
}

(A Terraform example for all of the above is provided in the diridp README.)

Now your IAM role is all set up to be used by your application. How does that work? You can tell every official AWS SDK where your JWT file is and what IAM role to use with environment variables:

AWS_WEB_IDENTITY_TOKEN_FILE="/run/diridp/my-application/token"
AWS_ROLE_ARN="arn:aws:iam::123456789012:role/my_application"

If your application has custom means to configure AWS credentials, throw it all away! You're done! Creating a client is now as simple as:

import { S3Client } from "@aws-sdk/client-s3";

// Credentials are picked up automatically from the environment variables above.
const s3 = new S3Client({
  apiVersion: "2006-03-01",
  region: "eu-west-1",
});

In fact, you can even get rid of region if you set AWS_REGION as an environment variable.

An important detail is that AWS SDKs have to do an additional call to exchange the JWT for a temporary access key. The result of this call is cached per client object, which is sufficient even if you have several client objects for different AWS services, or multiple application processes. It only becomes something to be aware of if you create lots of client objects on-demand.
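If you're curious what that exchange looks like, you can reproduce it by hand with the AWS CLI; it returns a temporary access key ID, secret key and session token (the session name is just an arbitrary label):

aws sts assume-role-with-web-identity \
  --role-arn arn:aws:iam::123456789012:role/my_application \
  --role-session-name my-application \
  --web-identity-token "$(cat /run/diridp/my-application/token)"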

However, if you're using the AWS SDK for PHP, I'm sorry to tell you this, but you're the exception! You have to provide a cache yourself, a consequence of the request-based nature of PHP. It may look something like:

use Symfony\Component\Cache\Adapter\FilesystemAdapter;

// A PSR-6 cache pool on disk, so the temporary credentials survive across requests.
$cache = new FilesystemAdapter('my-application');

$s3 = new \Aws\S3\S3Client([
  'version' => '2006-03-01',
  'region' => 'eu-west-1',
  // Passing an Aws\CacheInterface wraps the default credential chain
  // (including the web identity exchange) in this cache.
  'credentials' => new \Aws\PsrCacheAdapter($cache),
]);

Security

It's important to realize that the security of this solution as a whole is rooted in HTTPS. These days, that is pretty much tied to DNS domain ownership, because services like Let's Encrypt happily issue a certificate for you as long as their automated check verifies you own the domain.

AWS requires that you provide it with thumbprints of the Certificate Authority (e.g. Let's Encrypt) that issued the HTTPS certificate, a kind of 'certificate pinning'. This, plus CAA records, can add some additional defense.
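For example, a CAA record that tells every CA except Let's Encrypt to refuse issuance for the domain looks like:

example.com.  IN  CAA  0 issue "letsencrypt.org"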

The functionality provided by diridp can probably be strung together just as well with cron, some bash scripts and the OpenSSL CLI, but I felt a proper tool would be more robust. I'm a big fan of Rust, and tried to leverage its great error handling capabilities as much as I could.

One of the great things about leaving the HTTPS part to a separate webserver is that diridp itself requires zero network access, and essentially just fiddles with files. This means it can be sandboxed a great deal, and the included systemd unit file does so. Here's what systemd-analyze has to say about it:

→ Overall exposure level for diridp.service: 0.4 SAFE 😀
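For reference, that score comes from sandboxing directives along these lines. This is a rough excerpt with assumed paths, not necessarily the exact unit file shipped with diridp:

[Service]
# diridp needs no network access at all; it only reads config and writes files.
PrivateNetwork=yes
# Read-only view of the system, with write access only to its output directories.
ProtectSystem=strict
ReadWritePaths=/run/diridp /var/lib/diridp
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes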

Conclusion

I hope you find diridp useful, or maybe you already have a far better solution to managing application credentials. In that case, do share!

Quite a bit of effort is spent on user identity on the web, but not as much on how we manage application credentials. I'm excited to see some new options in this space that hopefully turn out to be a security benefit.