gRPC performance metrics

I have gone through the performance metrics for Tyk gRPC and they are looking great.
I am planning to leverage Tyk gRPC for my Go project.
I would like to know on what basis those metrics were calculated:

  1. What is the system configuration, and what percentage of CPU was utilised? Are the client, server, and Tyk Gateway running on the same machine?
  2. What is the size of the message that is considered per hit?
  3. What is the time gap between two different connections?
  4. What is the time gap between two different messages sent from the same client?

It would be of great help if you could share the details.



Hi, we have more detailed information about another benchmark, which involved 6 c5.4xlarge AWS instances running at around 80% CPU utilisation.

The performance was around 60k requests per second with sub-15ms added latency, with the gRPC plugin listening on a UNIX domain socket, effectively acting as a sidecar to the gateway on the same machine.
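If you want to reproduce that sidecar setup, here is a minimal sketch of a gRPC server bound to a UNIX domain socket. The socket path and the commented-out service registration are assumptions for illustration; the actual service must be generated from Tyk's coprocess .proto definitions.

```go
package main

import (
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
)

func main() {
	// Hypothetical socket path; the gateway's coprocess gRPC server
	// setting would point at the same address.
	const sock = "/tmp/tyk-plugin.sock"
	_ = os.Remove(sock) // clear a stale socket from a previous run

	lis, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatalf("listen: %v", err)
	}

	srv := grpc.NewServer()
	// Register the service generated from Tyk's coprocess .proto files,
	// e.g. coprocess.RegisterDispatcherServer(srv, &myDispatcher{})

	log.Printf("gRPC plugin listening on %s", sock)
	if err := srv.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}
```

The point of the socket over a TCP port is that local gateway-to-plugin traffic skips the TCP/IP stack entirely, which is why the sidecar arrangement keeps added latency low.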

The implementation involved some crypto functions; you can inspect it here.

We’re planning to run new benchmarks and will share the results in the coming weeks. We’ll keep you updated.

Regards.


Hi @LakshmiMekala,

I’ve just done some quick and dirty performance benchmarks:

A compute-optimised DigitalOcean server (8 virtual cores, 16GB RAM) running Ubuntu 18. This single machine hosts all of the following services:

Load β†’ Upstream β†’ Load (Baseline)

48k rps
95th %ile 1.7ms

Load β†’ Gateway β†’ Upstream β†’ Gateway β†’ Load

23.3k rps
95th %ile 3.7ms

As such, this extra hop introduces 2ms of 95th %ile latency (3.7ms vs 1.7ms).

Load β†’ Gateway β†’ gRPC β†’ Gateway β†’ Upstream β†’ Gateway β†’ Load

13k rps
95th %ile 6.7ms

The gRPC middleware has introduced a further 3ms of 95th %ile latency (6.7ms vs 3.7ms) and reduced throughput by approximately 45% (13k vs 23.3k rps).

These tests were all run on the same machine, so resource contention is included in the numbers. By properly separating components onto their own auto-scaling VMs, you should be able to achieve a more optimal setup.

The drop in throughput with the gRPC server is due to serialising the message and sending it over the wire to the gRPC server, which then has to de-serialise it. On the return trip, the gRPC server must re-serialise the message, and the gateway decodes it again.
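To get a feel for that serialisation cost in isolation, here is a rough microbenchmark of one marshal/unmarshal pair. The message shape (a structpb.Struct with a ~1KB string field) and the payload size are assumptions for illustration, not what the benchmark above used:

```go
package main

import (
	"fmt"
	"strings"
	"time"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/structpb"
)

func main() {
	// Hypothetical stand-in for a coprocess request object: a ~1KB body.
	msg, err := structpb.NewStruct(map[string]interface{}{
		"body": strings.Repeat("x", 1024),
	})
	if err != nil {
		panic(err)
	}

	const n = 100_000
	start := time.Now()
	for i := 0; i < n; i++ {
		// Each gateway->plugin hop pays one marshal and one unmarshal;
		// a full request/response round trip pays this twice.
		b, err := proto.Marshal(msg)
		if err != nil {
			panic(err)
		}
		var out structpb.Struct
		if err := proto.Unmarshal(b, &out); err != nil {
			panic(err)
		}
	}
	fmt.Printf("avg marshal+unmarshal: %v\n", time.Since(start)/n)
}
```

Whatever the per-message figure comes out to on your hardware, remember it is paid twice per request on top of the socket round trip itself, which is where the throughput gap between the two gateway configurations comes from.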

We are actively working on improving throughput, alongside native Go plugins, which should improve performance considerably.

PoC - DO NOT MERGE: native golang plugins by asoorm Β· Pull Request #1868 Β· TykTechnologies/tyk Β· GitHub - This PoC was closed, but we expect a proper implementation to be ready and GA within months.
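For context on why native plugins should help: an in-process middleware avoids the serialisation round trip entirely. The PoC above was closed and the eventual API may differ, so treat the signature below as a hypothetical sketch of what a native Go middleware could look like:

```go
package main

import "net/http"

// AddHelloHeader is a hypothetical middleware function. Because it runs
// inside the gateway's own process, the request arrives as a plain
// *http.Request: no marshalling, and no hop over a socket.
func AddHelloHeader(rw http.ResponseWriter, r *http.Request) {
	r.Header.Add("X-Hello", "world")
}

func main() {}
```

Such a plugin would presumably be compiled as a Go shared object (go build -buildmode=plugin) and loaded by the gateway at startup, trading the gRPC plugin's language flexibility for in-process speed.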