Apr 9, 2016 · 2 minute read
Many of the projects I do at work and in my free time require two components to exchange and validate some sort of token - whether it’s OAuth2 access tokens, session cookies, API keys or other types of data that are generally passed as strings and represent information about the user.
We generally apply one of two approaches to issuing and validating tokens:
- persist them in some sort of database and load them on every request
- encrypt, sign and verify them cryptographically
The first approach allows us to keep the token small (as additional information is generally stored in the DB) and makes it easy to expire tokens, but it requires DB lookups and API calls. The second approach means the tokens are larger and harder to expire, but it doesn’t require a DB and (in the case of asymmetric encryption) can save API calls.
I want to compare the performance of those two approaches, so I plan to develop a simple API project that will issue tokens and allow clients to validate them using an API call. To add some background, let’s assume users can purchase subscriptions in our book library, and whenever a purchase is completed we issue a token that can be used to get the subscription expiry date, its level (say, All books or Books older than 1 year) and the platforms for which it is available.
The API will have two endpoints - one for issuing tokens and one for verifying them. We are most interested in the performance of the verify endpoint, as it directly impacts the (otherwise quick) user experience. We will investigate:
- GUIDs and data stored in Postgres
- GUIDs and data stored in Redis
- JWTs containing all user data and encrypted symmetrically
- JWTs containing all user data and encrypted asymmetrically
The last option gives us the possibility of sharing our public key with a trusted client so that the payload can be decrypted without hitting the API - we’ll try to benchmark that as a separate option. I included both Redis and Postgres as I generally use Postgres as the source of truth database - we’ll see how much can be gained by moving hot data to Redis.
Next post will include code and basic tests, and we’ll see where we can go from there.
Mar 5, 2016 · 2 minute read
Another Golang issue with goroutines and for loops today :) This time let’s assume we start with a simple for loop that calls an anonymous function:
package main

import (
	"fmt"
	"sync"
)

func main() {
	numbers := []int{1, 2, 3, 4, 5, 6}
	// WaitGroup will be used to wait for child goroutines
	var wg sync.WaitGroup
	for _, n := range numbers {
		wg.Add(1)
		func() {
			fmt.Printf("%d ", n)
			wg.Done()
		}()
	}
	wg.Wait()
}
This works fine and prints
1 2 3 4 5 6
but to run the anonymous function in child goroutines we will add a go keyword before the function call:
package main

import (
	"fmt"
	"sync"
)

func main() {
	numbers := []int{1, 2, 3, 4, 5, 6}
	// WaitGroup will be used to wait for child goroutines
	var wg sync.WaitGroup
	for _, n := range numbers {
		wg.Add(1)
		go func() {
			fmt.Printf("%d ", n)
			wg.Done()
		}()
	}
	wg.Wait()
}
and check the result - we would expect to get the same thing as above, or the same numbers in a different order, but instead we get
6 6 6 6 6 6
What’s wrong? All goroutines see the same value of n, and the value they see is the last value of this variable. This suggests that the goroutines access the variable not when they are started, but at a later time, when the for loop has already run through all elements of numbers.
This is in fact true - the anonymous function closes over the variable, and uses its value from the time it executes, not from the time the goroutine was started. To fix the issue we can do one of two things - copy the loop variable into a new variable inside the for block:
for _, n := range numbers {
	wg.Add(1)
	var n = n // copy the loop variable; the closure captures this copy
	go func() {
		fmt.Printf("%d ", n)
		wg.Done()
	}()
}
or bind the variable to a parameter of the anonymous function:
for _, n := range numbers {
	wg.Add(1)
	go func(n int) {
		fmt.Printf("%d ", n)
		wg.Done()
	}(n)
}
Both of those are correct; I prefer the second one, but this is really a matter of taste. This problem is not specific to Go, and some other languages go to great lengths to help programmers avoid this trap - Microsoft introduced a backwards-incompatible change in C# 5.0 to fix this.
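The capture semantics at the heart of this trap can be seen without any goroutines at all - a minimal sketch, with the variable names being my own:

```go
package main

import "fmt"

func main() {
	n := 1
	// The closure captures the variable n itself, not a snapshot of its value.
	f := func() { fmt.Println(n) }
	n = 2
	f() // prints 2: the closure reads n when it runs, not when it was created

	// Copying into a new variable before creating the closure
	// snapshots the value instead - this is exactly what the
	// "var n = n" fix does inside the loop body.
	m := 1
	c := m
	g := func() { fmt.Println(c) }
	m = 2
	g() // prints 1
}
```

(As a side note for readers on newer toolchains: since Go 1.22 the range variable is redeclared per iteration, so the buggy loop above prints each number once even without the workarounds - but closures still capture variables, not values, as this sketch shows.)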
Jul 20, 2015 · 3 minute read
Go has been my go-to side project language for quite some time now (since before v1.0), and when I started the Matasano crypto challenges it seemed like a perfect fit for a number of reasons - it doesn’t force me to write a lot of boilerplate, is low-level enough to allow implementing your own crypto primitives and it comes with a rich standard library (I am looking at you Scala). I made my way through the first set and while solving one of the problems I wanted to run a certain function in parallel.
The simplest way of making this use all CPU cores is to run each calculation in its own goroutine, and the standard way of tracking whether all of them have completed is a sync.WaitGroup - for simplicity’s sake this code assumes we only care about side effects (printing) and do not consume the result:
package main

import (
	"fmt"
	"sync"
)

func main() {
	numbers := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
	var wg sync.WaitGroup
	for _, n := range numbers {
		wg.Add(1)
		go func(in int) {
			fmt.Printf("%d: %d\n", in, cpuIntensive(in))
			wg.Done()
		}(n)
	}
	wg.Wait()
}

// does something CPU intensive
func cpuIntensive(n int) int {
	return n * n * n
}
This works fine, but wouldn’t it be nice to extract the anonymous function to make things more readable and testable:
package main

import (
	"fmt"
	"sync"
)

func main() {
	numbers := []int{1, 10, 100, 1000}
	var wg sync.WaitGroup
	for _, n := range numbers {
		wg.Add(1)
		go runInGoroutine(n, wg)
	}
	wg.Wait()
}

func runInGoroutine(in int, wg sync.WaitGroup) {
	fmt.Printf("cpuIntensive(%d): %d\n", in, cpuIntensive(in))
	wg.Done()
}

// does something CPU intensive
func cpuIntensive(n int) int {
	return n * n * n
}
Turns out it’s not that simple - this code completes the calculations but fails with
fatal error: all goroutines are asleep - deadlock!
What is happening here - we only extracted a function? Go is kind enough to let us know that our program is deadlocked - but why? Our simple refactoring wasn’t correct - we changed the way the code run in the goroutine uses the sync.WaitGroup variable. Previously it closed over it (as in a closure); now it takes it as a parameter.
The issue is that the WaitGroup is passed by value, so each goroutine gets a copy of the WaitGroup. This means that when we call wg.Wait() we are waiting on a WaitGroup that will never be modified by child goroutines, and we will never exit the main function. The fix is simple - we pass a pointer to all goroutines, letting WaitGroup take care of concurrent modifications:
package main

import (
	"fmt"
	"sync"
)

func main() {
	numbers := []int{1, 10, 100, 1000}
	var wg sync.WaitGroup
	for _, n := range numbers {
		wg.Add(1)
		go runInGoroutine(n, &wg)
	}
	wg.Wait()
}

func runInGoroutine(in int, wg *sync.WaitGroup) {
	fmt.Printf("cpuIntensive(%d): %d\n", in, cpuIntensive(in))
	wg.Done()
}

// does something CPU intensive
func cpuIntensive(n int) int {
	return n * n * n
}
Point to remember - whenever you are mutating parameters (or for that matter method receivers) make sure you are referencing the original object, not a copy.
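The same value-versus-pointer rule can be shown in isolation - a small sketch of my own, independent of WaitGroup:

```go
package main

import "fmt"

type counter struct{ n int }

// incByValue receives a copy of the struct;
// the increment is lost when the function returns.
func incByValue(c counter) { c.n++ }

// incByPointer mutates the caller's struct through the pointer.
func incByPointer(c *counter) { c.n++ }

func main() {
	c := counter{}
	incByValue(c)
	fmt.Println(c.n) // prints 0 - only the copy was incremented
	incByPointer(&c)
	fmt.Println(c.n) // prints 1 - the original was incremented
}
```

This is exactly what went wrong with the WaitGroup: passed by value, each goroutine called Done() on its own copy, so the counter wg.Wait() was watching never reached zero.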