gRPC performance metrics

Hi @LakshmiMekala,

I’ve just done some quick and dirty performance benchmarks:

Compute-optimised, 8 virtual core, 16GB DigitalOcean server running Ubuntu 18. This single server hosts all of the services involved, and the following configurations were tested:

Load → Upstream → Load (Baseline)

48k rps
95th %ile 1.7ms

Load → Gateway → Upstream → Gateway → Load

23.3k rps
95th %ile 3.7ms

As such, the extra gateway hop adds roughly 2ms of 95th %ile latency (3.7ms vs 1.7ms).

Load → Gateway → gRPC → Gateway → Upstream → Gateway → Load

13k rps
95th %ile 6.7ms

The gRPC middleware adds roughly 3ms of 95th %ile latency (6.7ms vs 3.7ms) and reduces throughput by approximately 45% (13k vs 23.3k rps).

These tests were all run on the same machine, so resource contention is included in the numbers. By separating the components onto their own auto-scaling VMs, you should be able to achieve noticeably better results.

The drop in throughput with the gRPC middleware comes from serialising the message, sending it over the wire to the gRPC server, and de-serialising it there. On the way back, the gRPC server re-serialises the message and the gateway decodes it again.
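
To make that concrete, here is a small, purely illustrative sketch of those four steps. It is not Tyk's coprocess protocol: it uses gob over an in-memory pipe as a stand-in for protobuf over gRPC, and the `Request` type and its fields are made up for the example.

```go
// Illustrative only: gob over net.Pipe stands in for protobuf over gRPC.
package main

import (
	"encoding/gob"
	"fmt"
	"net"
)

// Request is a hypothetical stand-in for the object passed to the middleware.
type Request struct {
	Path    string
	Headers map[string]string
	Body    []byte
}

func main() {
	gatewaySide, pluginSide := net.Pipe() // stand-in for the gateway <-> gRPC server connection

	// "gRPC middleware" side.
	go func() {
		dec, enc := gob.NewDecoder(pluginSide), gob.NewEncoder(pluginSide)
		var req Request
		_ = dec.Decode(&req)                // (2) de-serialise at the gRPC server
		req.Headers["X-Plugin"] = "applied" // run the plugin logic
		_ = enc.Encode(req)                 // (3) re-serialise the modified request
	}()

	// "Gateway" side.
	enc, dec := gob.NewEncoder(gatewaySide), gob.NewDecoder(gatewaySide)
	original := Request{
		Path:    "/api",
		Headers: map[string]string{"Host": "example.com"},
		Body:    []byte("hello"),
	}
	_ = enc.Encode(original) // (1) serialise and send over the wire

	var modified Request
	_ = dec.Decode(&modified) // (4) decode again at the gateway before proxying upstream
	fmt.Println(modified.Headers["X-Plugin"])
}
```

Every request pays all four of those steps, which is where the extra latency and the throughput drop come from.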

We are actively working on improving throughput, alongside native Go plugins, which should improve performance considerably.

See "PoC - DO NOT MERGE: native golang plugins" by asoorm (Pull Request #1868, TykTechnologies/tyk). This PoC was closed, but we expect a proper implementation to be ready and GA within months.
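
To give a feel for why native Go plugins should be much cheaper: the plugin runs inside the gateway process, so the request is handed to it as an in-memory pointer rather than serialised over the wire. Below is a minimal sketch; the function name and loading details are assumptions for illustration, not the final API.

```go
// Hypothetical native Go plugin, compiled with `go build -buildmode=plugin -o middleware.so`.
// Because it runs in-process, no per-request serialisation is needed.
package main

import "net/http"

// AddPluginHeader is an example middleware the gateway could call directly on each request.
func AddPluginHeader(rw http.ResponseWriter, r *http.Request) {
	r.Header.Set("X-Plugin", "native")
}

func main() {} // empty main; the exported function above is what the gateway would load
```

The gateway could load the compiled .so with the standard library `plugin` package and call the function directly, removing the serialisation round trip described above.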