Replication Between TYK Servers

Can we set up replication between two Tyk servers if we set up an HA environment?

Yes, you can. Assuming you are trying to replicate only the gateway (CE), you would need to copy the essential files:

  • config file
  • policies
  • JavaScript files if used

I would also assume that the gateways would share a single Redis. If not, then you would need to look into Redis replication.
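For illustration, here is a minimal sketch (docker-compose style) of two CE gateways sharing one Redis and the same local files. The image name, mounted paths and TYK_GW_* environment variables reflect the usual defaults of the official Docker image, but treat them as assumptions and verify against your own setup:

```yaml
# Minimal sketch only: two CE gateways, one shared Redis, same local files.
# Image name, paths and TYK_GW_* variables are assumed defaults -- verify locally.
version: "3"
services:
  redis:
    image: redis:6

  tyk-gateway-1:
    image: tykio/tyk-gateway:latest
    environment:
      - TYK_GW_STORAGE_TYPE=redis
      - TYK_GW_STORAGE_HOST=redis   # both gateways point at the same Redis
      - TYK_GW_STORAGE_PORT=6379
    volumes:
      - ./tyk.conf:/opt/tyk-gateway/tyk.conf   # same config file
      - ./apps:/opt/tyk-gateway/apps           # same API definitions
      - ./policies:/opt/tyk-gateway/policies   # same policies
    ports:
      - "8080:8080"

  tyk-gateway-2:
    image: tykio/tyk-gateway:latest
    environment:
      - TYK_GW_STORAGE_TYPE=redis
      - TYK_GW_STORAGE_HOST=redis
      - TYK_GW_STORAGE_PORT=6379
    volumes:
      - ./tyk.conf:/opt/tyk-gateway/tyk.conf
      - ./apps:/opt/tyk-gateway/apps
      - ./policies:/opt/tyk-gateway/policies
    ports:
      - "8081:8080"
```

With both containers pointing at the same Redis, keys created on one gateway are immediately visible to the other; the config, API and policy files still have to be kept in step yourself (or via whatever deployment tooling you use).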

Is there any automated way available through some configuration?

For example, if there is a change on any of the Tyk servers, it should be detected automatically and the changes replicated to the other server.

Are you looking for only open source options here? If not, and you would consider a paid solution, that functionality is part of the Tyk Dashboard and makes it simple to manage clusters or multiple clusters across different deployments and/or regions.

You can get a trial by signing up at Tyk.io

Yes, we are using the open source edition.

The Dashboard (and the tiers above it) offers republishing everything on change via event handlers. There is no such event generation using the OSS gateway alone.

For gateway (CE) on K8s: Is it advisable/safe to just increase the replicaCount to n (e.g., 3)?

Hello @ben, for a CE gateway deployment on k8s, increasing the replicaCount to ‘n’ will not do much good, since the replicas will be in an inconsistent state: it is a stateless deployment and there is no inbuilt mechanism to sync the data between replicas. So it would be better to work with a single instance unless you have a way to mount the data on the pods.

Thanks @Cherry. This picture is in the OSS docs:

So I assumed that the OSS gateway can scale horizontally easily.

We use EKS and the AWS ALB Ingress Controller.

Aside from just buying a pro version are the following options reasonable?

  • Sharing /opt/tyk-gateway (anything else?) using a persistent volume (e.g. AWS EFS/EBS).
  • Implementing a lightweight orchestration tool subsuming the Tyk Gateway API (i.e., as an OSS alternative to the Dashboard)

P.S.: We are an early-stage not-for-profit and simply cannot afford a pro license…

The diagram above is correct: as long as all your gateways share the same Redis instance, they will be in sync. It’s only if you have separate clusters that you need something to sync them.

Thanks for the clarification! So we can just increase the number of replicas (in a cluster).

Hi,

Yes, you can just increase the number of replicas. However, just to be clear, each of the gateways needs to have the same (or very similar) configuration, which you can set using environment variables in the k8s definition rather than relying on the gateway config file alone. Furthermore, the policies and API definitions are loaded from local files by the gateway process (so inside the container); you can create ConfigMaps for these and mount them the same way on every gateway pod, or use another consistent mechanism that scales with your deployment. This way all the gateways have the same config (via a local file or, preferably, environment variables) and the same API and policy definitions to load, and when sharing the same Redis instance they will use the same keystore and act like a cluster. By that I mean things like load balancing and rate limiting can be enforced across the group of gateways rather than individually.
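As a rough sketch of the above (the names, image tag and paths are placeholders based on the default Tyk Docker layout, so adjust them to your own manifests), a Deployment with the shared Redis settings supplied via environment variables and the API/policy definitions mounted from ConfigMaps might look like this:

```yaml
# Sketch only: replicated CE gateways sharing Redis settings via env vars
# and loading identical API/policy definitions from ConfigMap mounts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tyk-gateway
spec:
  replicas: 3                         # scale horizontally; every replica is configured the same way
  selector:
    matchLabels:
      app: tyk-gateway
  template:
    metadata:
      labels:
        app: tyk-gateway
    spec:
      containers:
        - name: tyk-gateway
          image: tykio/tyk-gateway:latest
          env:
            - name: TYK_GW_STORAGE_TYPE
              value: redis
            - name: TYK_GW_STORAGE_HOST
              value: redis              # the shared Redis service
            - name: TYK_GW_STORAGE_PORT
              value: "6379"
          volumeMounts:
            - name: tyk-apis
              mountPath: /opt/tyk-gateway/apps      # same API definitions on every pod
            - name: tyk-policies
              mountPath: /opt/tyk-gateway/policies  # same policies on every pod
      volumes:
        - name: tyk-apis
          configMap:
            name: tyk-apis
        - name: tyk-policies
          configMap:
            name: tyk-policies
```

Updating the ConfigMaps and rolling the Deployment then acts as the “replication” step: every replica picks up the same definitions when it restarts.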

Hope that makes sense.

Best Regards,
Chris

Got it. Thanks for the detailed explanation :slight_smile:

As per my understanding, key-related data is stored in the Redis database, so if we use the same Redis that will be in sync. But what about the API definitions and tyk.conf — can they be kept in sync by enabling some sort of configuration?

Correct my understanding if it’s different.

Hi @saloni512,

There is no inbuilt configuration to sync the API definitions and configuration files. You can sync them using the approach described in the first option quoted below:

  • Sharing /opt/tyk-gateway (anything else?) using a persistent volume (e.g. AWS EFS/EBS).

If you are using Kubernetes, then @chris.f has shared above how to do this using environment variables and ConfigMap mounts.
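For the persistent-volume option quoted above, a rough sketch could look like the following. It assumes an EFS-backed ReadWriteMany StorageClass; the class name "efs-sc" and claim name are placeholders, not anything Tyk-specific:

```yaml
# Sketch only: one shared, writable-by-many volume mounted over the
# API definitions directory on every gateway pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tyk-gateway-data
spec:
  accessModes:
    - ReadWriteMany            # all gateway pods mount the same volume
  storageClassName: efs-sc     # placeholder; use your cluster's shared-filesystem class
  resources:
    requests:
      storage: 1Gi
---
# In the gateway Deployment, reference the claim instead of a ConfigMap:
#
# volumes:
#   - name: tyk-gateway-data
#     persistentVolumeClaim:
#       claimName: tyk-gateway-data
# volumeMounts:
#   - name: tyk-gateway-data
#     mountPath: /opt/tyk-gateway/apps
```

Note that EBS volumes generally cannot be mounted ReadWriteMany across nodes, so a shared filesystem such as EFS is the better fit for this option.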

Hope this helps.