Imported Google Group message.
Sender: Martin Buhr
Date: Wednesday, 21 January 2015 10:52:28 UTC
Hi Chris,
We just ran a more thorough load test with JMeter against our main test machine; I've put the results below. The setup of the test is the same as yours. The tests were run from an 8GB Core i7 MBP, which was benchmarked beforehand to make sure it had the capacity to run them, and each test ran for 3 minutes with a 1-minute ramp-up. The raw results are below; I've also put in a brief summary for those not wishing to trawl through and compare figures.
Summary of results
NginX performance hums along (as in your test) at a stable average of around 80ms throughout all three scenarios, and CPU usage never gets above 10% (averaged across both cores). This is our baseline.
With Tyk, we saw it add 10ms to the request time at 200rps and 49ms at 400rps; it became unstable at 800rps, with response times going up to more than 2 seconds.
We noticed that the application was only using one CPU. With a Go app, if the task is CPU-bound, as a web server's is, it is possible to make the application use more cores by setting the GOMAXPROCS environment variable. We did this and re-ran the 400rps and 800rps tests.
With GOMAXPROCS set, Tyk added 12ms at 400rps and 421ms at 800rps; CPU load in the highest-load scenario was now maxing out at an average of 94% across both cores.
This indicates that more cores do improve performance, but only with GOMAXPROCS set properly, and that Tyk performs pretty well at high loads, even on an untuned single server.
We have, in separate tests, noticed that a Redis server that is far away (i.e. not on the same network, or not physically close to the main machines) can introduce the largest increase in response time. Naturally, a local Redis server is not a likely production scenario, but the latency impact is worth keeping in mind for anyone implementing Tyk.
Test Setup and Results
Target machine:
Digital Ocean VM
2 CPUs
2GB RAM
Ubuntu 64 bit
The software setup:
Tyk running:
Analytics data collection enabled
No additional middleware, standard token-based Auth
Version 1.3
Tyk analytics (v0.9) running vanilla configuration
Redis installed locally
ulimit set to 99999, as it otherwise runs out of file descriptors; this is the only tuning we did
Mongo installed locally (latest version 2.6.6)
NginX running locally, serving a single JSON file from root
The test setup:
Two test groups, one testing NginX on its own, the other testing Tyk (port 80 vs. port 5000)
Three scenarios: 200 rps, 400 rps and 800 rps for both groups
Two additional scenarios for the Tyk group: setting the GOMAXPROCS value to the number of CPUs for 400rps and 800rps
The results:
Test Group 1: NginX Baseline
200 RPS (12,000 rpm):
Throughput: ~200 rps
Average: 82ms
Median: 77ms
CPU AVG: 4% (global system utilisation)
400 RPS (24,000 rpm):
Throughput: ~404 rps
Average: 79ms
Median: 77ms
CPU AVG: 6-7% (global system utilisation)
800 RPS (48,000 rpm):
Throughput: ~810 rps
Average: 81ms
Median: 77ms
CPU AVG: 10% (global system utilisation)
Test Group 2: Tyk Simple Auth - 1 Key
200 RPS (12,000 rpm):
Throughput: ~209 rps
Average: 92ms
Median: 80ms
CPU AVG: 30-40% (global system utilisation), which mellowed to 12%
400 RPS (24,000 rpm):
Throughput: ~412 rps
Average: 128ms
Median: 88ms
CPU AVG: 50-60% (global system utilisation)
400 RPS (24,000 rpm) with GOMAXPROCS set to num CPUs (2):
Throughput: ~420 rps
Average: 91ms
Median: 81ms
CPU AVG: 60-66% (global system utilisation)
800 RPS (48,000 rpm):
Throughput: ~668 rps
Average: 2949ms (high error rate, above 1%)
Median: 3278ms
CPU AVG: 76% (global system utilisation) (CPU1 MAXED)
800 RPS (48,000 rpm) Test 2 - GOMAXPROCS set to num CPUs (2):
Throughput: ~795 rps
Average: 502ms (high error rate, above 1%)
Median: 401ms
CPU AVG: 80-100% (global system utilisation) (CPU1 MAXED)
Thanks for taking the time to test it out. We really want to make sure Tyk is performant, and having the community contribute is so valuable.
Really appreciate it.
Cheers,
Martin