Issuing and validating tokens part 1: Problem description

Many of the projects I work on, both at work and in my free time, require two components to exchange and validate some sort of token - whether it's OAuth2 access tokens, session cookies, API keys or other data that is generally passed around as a string and represents information about the user.

We generally apply one of two approaches to issuing and validating tokens: either the token is an opaque identifier and the data it represents lives in a database, or the data is encoded into the token itself and secured cryptographically.

The first approach lets us keep tokens small (as the additional information is stored in the DB) and makes them easier to expire, but it requires DB lookups and API calls. The second approach means the tokens are larger and harder to expire, but it doesn't require a DB and (in the case of asymmetric encryption) can save API calls.
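
To make the distinction concrete, here is a minimal Python sketch of both token shapes. The in-memory dict stands in for the database, and the HMAC signature stands in for whatever signing or encryption scheme the project ends up using; all names are illustrative rather than a final design.

```python
import base64
import hashlib
import hmac
import json
import secrets

SECRET_KEY = b"demo-only-secret"   # stand-in for a real signing key
TOKEN_STORE: dict[str, dict] = {}  # stand-in for a Postgres/Redis lookup

# Approach 1: opaque token - the string is just a random identifier,
# all data lives server-side and validation requires a lookup.
def issue_opaque_token(payload: dict) -> str:
    token = secrets.token_urlsafe(32)
    TOKEN_STORE[token] = payload
    return token

def validate_opaque_token(token: str) -> dict | None:
    return TOKEN_STORE.get(token)  # a DB/Redis query in a real system

# Approach 2: self-contained token - the data travels inside the token,
# validation only needs the key, not a storage lookup.
def issue_self_contained_token(payload: dict) -> str:
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def validate_self_contained_token(token: str) -> dict | None:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))
```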

I want to compare the performance of those two approaches, so I plan to develop a simple API project that issues tokens and allows clients to validate them with an API call. For some background, let's assume users can purchase subscriptions in our book library, and whenever a purchase is completed we issue a token that can be used to retrieve the subscription's expiry date, its level (say "All books" or "Books older than 1 year") and the platforms on which it is available.
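
For reference, the data behind such a token might look roughly like this (the field names are placeholders, not a final schema):

```python
# Hypothetical subscription data carried by (or referenced from) the token.
subscription = {
    "user_id": 42,
    "expires_at": "2026-01-31T23:59:59Z",    # subscription expiry date
    "level": "all_books",                    # or "books_older_than_1_year"
    "platforms": ["web", "ios", "android"],  # platforms it is available on
}
```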

The API will have two endpoints - one for issuing tokens and one for verifying them. We are most interested in the performance of the verify endpoint, as it directly impacts the (otherwise quick) user experience. We will investigate a few variants: validating tokens against Postgres, validating them against Redis, and issuing self-contained tokens secured with an asymmetric key pair.
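
As a rough outline of that API surface - the framework and route names are my placeholders for illustration, not a commitment - the two endpoints could look something like this in FastAPI, with an in-memory dict standing in for the Postgres/Redis variants:

```python
import secrets

from fastapi import FastAPI, HTTPException

app = FastAPI()
TOKEN_STORE: dict[str, dict] = {}  # stand-in for the Postgres/Redis variants

@app.post("/tokens")
def issue_token(payload: dict) -> dict:
    # The DB-backed variants would INSERT the payload here instead.
    token = secrets.token_urlsafe(32)
    TOKEN_STORE[token] = payload
    return {"token": token}

@app.get("/tokens/{token}")
def verify_token(token: str) -> dict:
    # This is the endpoint whose latency we care about most.
    payload = TOKEN_STORE.get(token)
    if payload is None:
        raise HTTPException(status_code=404, detail="unknown token")
    return payload
```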

The last option also lets us share our public key with a trusted client so that the payload can be decrypted without hitting the API at all - we'll try to benchmark that as a separate option. I included both Redis and Postgres because I generally use Postgres as the source-of-truth database - we'll see how much can be gained by moving hot data to Redis.
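
One common way to get that property is a token signed with an asymmetric key pair, which the client can verify and read locally (encrypting the payload is the other route). Below is a sketch using PyJWT with RS256 and hypothetical claims; the actual scheme is still to be decided:

```python
import time

import jwt  # PyJWT; RS256 support requires the `cryptography` package
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Key pair held by the API; only the public half is shared with the client.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
private_pem = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# Server side: issue a self-contained, signed token (claims are made up).
token = jwt.encode(
    {"level": "all_books", "platforms": ["web"], "exp": int(time.time()) + 3600},
    private_pem,
    algorithm="RS256",
)

# Client side: a trusted client holding only the public key can validate the
# token and read its payload locally, without calling the verify endpoint.
claims = jwt.decode(token, public_pem, algorithms=["RS256"])
print(claims["level"])
```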

The next post will include code and basic tests, and we'll see where we can go from there.