Issues with overloading redis

Hi!

We are deploying Tyk with a cluster of Redis nodes, mostly for analytics. We have two issues:

  1. For the rate limit metric, every record gets a new connection to the cluster. Is there a way to use a connection pool for this? This isn’t something that has to be real-time, so it wouldn’t matter if the gateway had to wait for an available connection.

  2. Even if we give it every node in the cluster, it only talks to one main node and overloads it, then moves on to the next. Can we set it to at least a round robin or something?

Both of these could be handled with a plugin, but if we can just change something in a config that would be awesome.

Hello @razorsh4rk and welcome to the community.

> For the rate limit metric, every record gets a new connection to the cluster. Is there a way to use a connection pool for this? Not something that really has to be real time, so it wouldn’t matter if the gateway had to wait for an available connection.

For issue 1, I think Tyk does use connection pools. By default, 100 idle connections are created. You can change the number of idle connections with the storage.optimisation_max_idle config option, and also cap the total number of connections with storage.optimisation_max_active.
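As a sketch, the relevant fragment of tyk.conf might look like the following (the field names come from the Tyk Gateway storage config; the values here are only illustrative, not recommendations):

```json
{
  "storage": {
    "type": "redis",
    "optimisation_max_idle": 100,
    "optimisation_max_active": 200
  }
}
```

With a cap on active connections set, the gateway should wait for a free connection from the pool rather than opening a new one, which sounds like the behaviour you described wanting.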

> Even if we give it every node in the cluster, it only talks to one main node and overloads it, then moves on to the next. Can we set it to at least a round robin or something?

For the second issue, I don’t think we have a config for this. This is also a grey area for me, so I would have to reach out internally for more info. Can you share how you are observing or monitoring this?

@razorsh4rk I haven’t heard back from you. Some details on how you are monitoring and observing this may help. Sharing your configuration could also be useful.

Anyway, I asked internally: the gateway communicates with all Redis master nodes directly.

This is because Redis shards data when operating as a cluster. This means that each record is stored on a specific master node (and its replicas). You can read more about this process on the Redis website.

> Redis Cluster provides a way to run a Redis installation where data is automatically sharded across multiple Redis nodes.

It uses a process they call hash slots:

> Redis Cluster does not use consistent hashing, but a different form of sharding where every key is conceptually part of what we call a hash slot.
>
> There are 16384 hash slots in Redis Cluster, and to compute the hash slot for a given key, we simply take the CRC16 of the key modulo 16384.
>
> Every node in a Redis Cluster is responsible for a subset of the hash slots, so, for example, you may have a cluster with 3 nodes, where:
>
> • Node A contains hash slots from 0 to 5500.
> • Node B contains hash slots from 5501 to 11000.
> • Node C contains hash slots from 11001 to 16383.
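To make the sharding concrete, here is a small sketch in plain Python (no Redis client required) that computes a key’s hash slot using the CRC16 variant Redis Cluster uses (XMODEM: polynomial 0x1021, initial value 0) and maps it onto the three example node ranges above. The node names are just the ones from the quoted example, and this sketch ignores hash tags (the `{...}` pattern real clients also honour):

```python
def crc16(data: bytes) -> int:
    """CRC16-XMODEM (polynomial 0x1021, init 0x0000), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    """Hash slot for a key: CRC16 of the key modulo 16384."""
    return crc16(key) % 16384

def owning_node(slot: int) -> str:
    """Map a slot onto the three example ranges quoted above."""
    if slot <= 5500:
        return "Node A"
    if slot <= 11000:
        return "Node B"
    return "Node C"

# Example keys (hypothetical names, just for illustration):
for key in (b"user:1000", b"ratelimit.apikey-1", b"analytics-record-42"):
    slot = hash_slot(key)
    print(key.decode(), "-> slot", slot, "->", owning_node(slot))
```

Because different keys land in different slots, a cluster-aware client naturally spreads writes across all the masters, which is why the gateway talking to every master matters here.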