Redis sending too much data

Hi,

I have installed Tyk Professional Edition on cloud-based virtual machines, with one host each for the Tyk Gateway, Dashboard, MongoDB, Redis, and Tyk Pump.

I have set up a very basic API which uses an auth token for authentication. Since I want to test how fast the gateway can go, I have not defined any policies and have set the rate limiter on the key to a really large number (10000000000). The API forwards requests to an nginx web server. There are no endpoints defined for the API, and the rest of the configuration is the default.
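For reference, the key's session object looked roughly like this (a sketch: `rate` and `per` are the standard Tyk session fields, but `my-api-id` is just a placeholder and the exact payload may vary by version):

```json
{
  "rate": 10000000000,
  "per": 1,
  "quota_max": -1,
  "access_rights": {
    "my-api-id": {
      "api_id": "my-api-id",
      "api_name": "my-api",
      "versions": ["Default"]
    }
  }
}
```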

I can see that the gateway cannot process more than 1000 tps. CPU and memory on all boxes seem fine. However, I can see that Redis is sending 25 MB/sec of data to the gateway. Since this is cloud infrastructure, there is a bandwidth limit of 25 MB/sec on these boxes. Is there a way to reduce the amount of data Redis is sending to the gateway (decrease the number of times Redis sends message rates to the gateway, disable rate limiting completely, etc.)?

Take a look at this:

https://tyk.io/docs/tyk-api-gateway-v-2-0/deploy-to-production/

Out of the box, Tyk 2.1 doesn’t use our newer rate limiter and has health checks enabled (which add more data and a bottleneck). You can also disable analytics, which will be a large chunk of what you are seeing.
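Concretely, the tyk.conf changes look something like this (a sketch; `enable_analytics`, `health_check` and `enable_sentinel_rate_limiter` are the relevant 2.x gateway options, but treat the linked page as the authoritative list):

```json
{
  "enable_analytics": false,
  "health_check": {
    "enable_health_checks": false
  },
  "enable_sentinel_rate_limiter": true
}
```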

With the settings described on that page (many of which will become defaults in the next version) you’ll see smoother performance.

Also, please remember that a single key will run “hot”. When load testing, we tend to go with a larger keyspace to make the test more representative.

Thanks, Martin.

I’ve already tried all those settings. Disabling analytics does not help reduce the bandwidth usage.

What do you mean by “a single key will run ‘hot’”? Could you please clarify that statement?

Also, is there a way to turn off rate limiting altogether?

All rate limiting synchronisation happens in Redis, so one key means one rate limiter transaction, and that will run hot: Redis will bottleneck on that one token. More tokens mean fewer operations in a single rate limiter bucket.

You could make the rate limit smaller (e.g. 2000 per second). It’s a hash list that we use to do the work, and if you have a very high rate limit then that whole list needs to be processed: the larger the list, the more data has to be transferred.
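To illustrate the mechanism (this is not Tyk’s actual code), a sliding-window limiter of this kind can be sketched with a Redis sorted set; note how every check pulls the whole window back from Redis, so a bigger window means more bytes on the wire:

```python
import time
import uuid

import redis

r = redis.Redis()

def allowed(key: str, limit: int, per: float) -> bool:
    """Sliding-window check: record this request, then count the window.

    Illustrative only; Tyk's real limiter differs in detail.
    """
    now = time.time()
    pipe = r.pipeline()
    pipe.zremrangebyscore(key, 0, now - per)  # drop entries older than the window
    pipe.zadd(key, {str(uuid.uuid4()): now})  # record this request
    pipe.zrange(key, 0, -1)                   # pull the whole window back -- this
                                              # transfer is what eats bandwidth
    pipe.expire(key, int(per) + 1)
    _, _, window, _ = pipe.execute()
    return len(window) <= limit
```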

Rate limiting is built into Tyk from the ground up, so it can’t be switched off.

Can I ask what rate limit you set, and which cloud you are using?

Looking into how AWS scales inter-instance bandwidth, it apparently scales up with instance size.

I had initially set the rate limit to a very large number (10000000000) in the hope that the gateway would run as fast as it could.

However, I have now set the rate limit to 1000 every 0 seconds. This seems to drastically reduce the amount of data being sent back from Redis, and I am now able to get around 3500 tps.


Great to hear it. Is that off a single instance, or off a load-balanced set?

It’s on a single instance. Do you know why setting the rate to 1000 every 0 seconds gave me this improvement?

The rate limiter is pulling less data across from Redis for the time period set. The rate limiter uses a list segment length to check the rate window, and having a large limit with a long period means all the data gets sliced, as opposed to a small segment.
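To put rough numbers on that (every figure below is an assumption for illustration, not a measurement):

```python
# Back-of-envelope: why a long rate window can saturate a 25 MB/s link.
tps = 1000          # requests per second hitting the gateway (assumed)
window = 60         # seconds covered by the rate window (assumed)
entry_bytes = 25    # rough wire size of one entry in the list (assumed)

entries_per_check = tps * window                  # ~60,000 entries in the window
mb_per_sec = entries_per_check * entry_bytes * tps / 1e6
print(mb_per_sec, "MB/s")                         # ~1500 MB/s if every request
                                                  # pulls the whole window back
```

With a small limit and a short window the list stays tiny, so each check only moves a few kilobytes, which matches the improvement you saw.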