Tyk-Gateway configuration persistence

We are currently evaluating Tyk-Gateway on Kubernetes with the Operator but without the Dashboard. Our Tyk-Gateway is configured with Redis and MongoDB.

Everything was working perfectly until our Tyk-Gateway container got restarted; at that point we lost all registered APIs. Restarting the Operator fixed the problem, but I have difficulty understanding how Tyk-Gateway persists its configuration when configured through the API by the Operator.

I understand that when it is used in conjunction with Tyk-Dashboard, that is where it gets its configuration from. I can think of scenarios where a Tyk-Gateway would be restarted and would be unable to reach the control plane. Is there a form of cache or local storage that is supposed to take place in Redis or MongoDB?

Thanks in advance

Hi @Yannick

I have difficulty understanding how Tyk-Gateway persists its configuration when configured through the API by the operator.

This is done via the apps directory, which defaults to /opt/tyk-gateway/apps. When an API is published via the Operator, the Gateway saves the definition into /opt/tyk-gateway/apps, and during startup all APIs found there are loaded.
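To illustrate, each file saved in that directory is just a JSON API definition. A minimal sketch is below; all names and URLs are made up for illustration, and the exact set of fields varies by Tyk version:

```json
{
  "name": "example-api",
  "api_id": "example-api",
  "org_id": "default",
  "use_keyless": true,
  "active": true,
  "proxy": {
    "listen_path": "/example/",
    "target_url": "http://example-service.default.svc:8080",
    "strip_listen_path": true
  }
}
```

The directory itself is configurable via `app_path` in tyk.conf, or the `TYK_GW_APPPATH` environment variable, which is why pointing it at a persistent mount works.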

It sounds like having a persistent /opt/tyk-gateway/apps would solve your problem.

I wanted to add a little bit of information about MongoDB's use with Tyk. In a Pro install (with Tyk-Dashboard), MongoDB is used for persistence of API definitions and policies, as well as being the place where tyk-pump stores API analytics (along with numerous other Tyk Pro related things). It's not needed in Tyk OSS unless you're also using Pump to send analytics to it.

Redis, on the other hand, is essential for running the OSS Gateway, and care needs to be taken to ensure its persistence once you start using things like the certificate store, rate limits, quotas, etc. In fact, making sure your Redis instance can be shut down and restarted without loss of data is best practice for all installs, so please do that.
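As a concrete example of that best practice, Redis durability is controlled in redis.conf. A minimal sketch enabling append-only-file persistence (the values shown are common starting points, not tuned recommendations):

```
# Persist every write to an append-only file, fsynced once per second
appendonly yes
appendfsync everysec
# Directory where the AOF (and RDB snapshots) are written; mount this
# path on a persistent volume in Kubernetes
dir /data
```

With this in place, a Redis restart replays the AOF and keys, certificates, and quota counters survive.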

Cheers,
Pete


@Pete,

Thanks for your answer. It works like a charm now! Here is what I did, in case anyone else is interested. This solves both my persistence problem and the synchronization between both instances.

1- Created a PVC as such:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-tykconf
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs
  resources:
    requests:
      storage: 2Gi

2- I mounted the volume in values.yaml:

gateway:
  [....]
  extraEnvs:
    - name: TYK_GW_APPPATH
      value: /mnt/apppath
  extraVolumes:
    - name: app-path
      persistentVolumeClaim:
        claimName: efs-tykconf
  extraVolumeMounts:
    - name: app-path
      mountPath: /mnt/apppath