Clustering Tyk gateway for high availability

Is there any option to cluster multiple Tyk CE (headless) gateway nodes for high availability?

Suppose there are n nodes behind a load balancer, a request is routed to one Tyk node, and that node is down. Is there any in-built option in Tyk itself to forward the request to another active node?

Or can I only use load balancer features to implement this functionality?

What would be the best way to implement the same? Kindly help me with this.

PS: I am not speaking about Redis clustering.

Also, it would be great if someone could shed some light on the correct way to deploy multiple Tyk gateway nodes and cluster them. Though my setup is working fine, I am seeing an error in the log when it starts up.

"AddOrUpdateServer error. Seems like you running multiple segmented Tyk groups in same Redis." error="DRL has no information on current host, waiting…"

Thanks in advance.


Have you checked out some of the existing threads on the forum regarding load balancing?
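Failover between gateway nodes is generally handled by whatever sits in front of them rather than by the gateway itself. As a rough sketch of the load-balancer side (assuming nginx purely for illustration; the IPs, ports and thresholds are placeholders for your own setup):

    upstream tyk_gateways {
        # drop a node out of rotation after a few failed attempts
        server 10.0.0.11:8080 max_fails=3 fail_timeout=10s;
        server 10.0.0.12:8080 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://tyk_gateways;
            # retry the request on the next node if this one errors or times out
            proxy_next_upstream error timeout http_502 http_503 http_504;
        }
    }

Any load balancer with health checks or retries (HAProxy, an AWS ALB, and so on) does the same job; the point is that node failover lives in that layer, not inside Tyk.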

As for the error message below, could you share your configuration file?

"AddOrUpdateServer error. Seems like you running multiple segmented Tyk groups in same Redis." error="DRL has no information on current host, waiting…"

What I had been trying to do was create a Redis cluster and point all the Tyk nodes at that same cluster in each of their conf files, and then use the same files (API definitions, policy files, etc.) for all of them.

The Redis part in each tyk.conf file is:

"storage": {
    "type": "redis",
    "enable_cluster": true,
    "addrs": [
        "ip-address-of-redis-1:6108",
        "ip-address-of-redis-2:6101",
        "ip-address-of-redis-3:6107",
        "ip-address-of-redis-4:6106",
        "ip-address-of-redis-5:6104",
        "ip-address-of-redis-6:6105"
    ]
},

What else needs to be done to cluster Tyk gateway nodes?


I think your setup of publishing the same APIs across the board, with a shared Redis and some kind of load balancer in front, is fine. The error message you shared is what I am looking into. For starters, I believe those are two separate messages rather than one; I had earlier assumed it was one error altogether.

It seems the errors have something to do with the distributed rate limiter (DRL).

So I would suggest checking whether rate limiting works fine.
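One quick way to check (just a sketch; the API ID, name and numbers below are placeholders): create a key with a deliberately low rate through the Gateway API and confirm that requests beyond it start coming back with 429.

    {
        "rate": 5,
        "per": 60,
        "allowance": 5,
        "quota_max": -1,
        "access_rights": {
            "your-api-id": {
                "api_id": "your-api-id",
                "api_name": "your-api-name",
                "versions": ["Default"]
            }
        }
    }

POST that to /tyk/keys/create on one of the gateways with the x-tyk-authorization header set to your gateway secret, then send more than five requests within a minute through the load balancer and make sure the extra ones get rejected.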

If there are no issues there, could you confirm that nothing else is trying to use the same Redis? Or perhaps set up a different Redis database for a similar test.
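For that isolation test, the simplest thing is usually to point the gateways at a standalone Redis with a non-default database number while you debug (a sketch; host, port and database are placeholders, and numbered databases only apply outside cluster mode):

    "storage": {
        "type": "redis",
        "enable_cluster": false,
        "host": "localhost",
        "port": 6379,
        "database": 1
    },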

Also, could you share the use_db_app_configs and db_app_conf_options settings from your gateway config file?
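For reference, on a CE/headless deployment I would expect dashboard-sourced configs to be switched off in favour of file-based APIs and policies, something along these lines (the paths are just the packaged defaults I am assuming):

    "use_db_app_configs": false,
    "app_path": "/opt/tyk-gateway/apps",
    "policies": {
        "policy_source": "file",
        "policy_record_name": "/opt/tyk-gateway/policies/policies.json"
    },

As far as I know, db_app_conf_options (with its node_is_segmented and tags fields) only comes into play when a dashboard is the source of the API definitions, and segmentation tags are also what the "segmented Tyk groups" wording in that log line refers to.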

I am not using a dashboard.

Nor am I using the Tyk nodes to load APIs separately (sharding). I just want high availability. So is it okay to just ignore that error?

Yes, you can ignore it as long as nothing is wrong with your rate limiting.