I deployed a new Tyk Gateway CE using the Helm chart, following the tutorial here.
It seems to be set up successfully.
I can create a new API, and it responds with:
You can verify that your APIs are being loaded by the gateway if you follow on from that line. One reason the API call can return an empty array [ ] is that one of the API definitions is formatted incorrectly. If that's the case, you may even see an error in the logs.
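For reference, a minimal, correctly formatted Tyk classic API definition looks roughly like this (a sketch based on the standard keyless example; the name, api_id, and target_url values are placeholders):

{
  "name": "hello-world",
  "api_id": "hello-world",
  "org_id": "1",
  "use_keyless": true,
  "auth": { "auth_header_name": "Authorization" },
  "version_data": {
    "not_versioned": true,
    "versions": { "Default": { "name": "Default" } }
  },
  "proxy": {
    "listen_path": "/hello-world/",
    "target_url": "http://httpbin.org/",
    "strip_listen_path": true
  },
  "active": true
}

A missing or malformed required block (version_data and proxy are common culprits) is exactly the kind of formatting problem that makes the gateway skip a definition.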
I am using the Helm chart, and its DaemonSet created 3 pods.
I found that replicaCount is set to 1, so I'm not sure why 3 pods were generated.
gateway:
  # The hostname to bind the Gateway to.
  hostName: tyk-gw.local
  # When true, sets the gateway protocol to HTTPS.
  tls: false
  kind: DaemonSet
  replicaCount: 1
I tried chart versions 0.9.4 and 0.9.5.
Both created 3 gateway-tyk-ce-tyk-headless pods and 1 gateway-tyk-ce-tyk-headless DaemonSet. bitnami/redis is used.
The values YAML is below.
## Default values for tyk-headless chart.
## This is a YAML-formatted file.
## Declare variables to be passed into your templates.
## See Tyk Helm documentation for installation details:
## https://tyk.io/docs/tyk-oss/ce-helm-chart/
## Registry for all Tyk images - https://hub.docker.com/u/tykio

# Chart name override. Truncates to 63 characters.
# Default value: tyk-headless.name
nameOverride: ""

# App name override. Truncates to 63 characters.
# Default value: tyk-headless.fullname
fullnameOverride: ""

# These are your Tyk stack secrets. They map directly to the following Tyk
# stack configuration:
secrets:
  # tyk.conf node_secret
  # tyk.conf secret
  APISecret: xxx
  # If you don't want to store plaintext secrets in the Helm values file and
  # would rather provide the k8s Secret externally, please populate the value
  # below.
  useSecretName: ""

redis:
  # The addrs value will allow you to set your Redis addresses. If you are
  # using a Redis cluster, you can list the endpoints of the Redis instances
  # or use the cluster configuration endpoint.
  # Default value: redis.{{ .Release.Namespace }}.svc.cluster.local:6379
  addrs:
    # - redis.tyk.svc.cluster.local:6379
    # This is the DNS name of Redis as set when using the Bitnami chart
    - tyk-redis-master.tyk.svc.cluster.local:6379
  # Redis password
  # If you're using the Bitnami Redis chart, please input your password in the field below
  pass: xxx
  # Enables SSL for the Redis connection. The Redis instance will have to support that.
  # Default value: false
  # useSSL: true
  # The enableCluster value will allow you to indicate to Tyk whether you are
  # running a Redis cluster or not.
  # enableCluster: true
  # Enables sentinel connection mode for Redis. If enabled, sentinelPass and masterName values are required.
  # enableSentinel: false
  # Redis sentinel password, only required while enableSentinel is true.
  # sentinelPass: ""
  # Redis sentinel master name, only required while enableSentinel is true.
  # masterName: ""
  # By default the database index is 0. Setting the database index is not
  # supported with Redis cluster. As such, if you have enableCluster: true,
  # then this value should be omitted or explicitly set to 0.
  storage:
    database: 0

mongo:
  # If you want to collect analytics through the Mongo pumps you can turn this
  # option on. Once on, Tyk Pump will assume that MongoDB is available at
  # mongo.tyk.svc.cluster.local:27017; if it is not, please change the mongoURL
  # value below.
  enabled: true
  # The mongoURL value will allow you to set your MongoDB address.
  # Default value: mongodb://mongo.{{ .Release.Namespace }}.svc.cluster.local:27017/tyk_analytics
  # mongoURL: mongodb://mongo.tyk.svc.cluster.local:27017/tyk_analytics
  # If your MongoDB has a password you can add the username and password to the URL:
  mongoURL: mongodb://root:[email protected]:27017/tyk_analytics?authSource=admin
  # Enables SSL for the MongoDB connection. The MongoDB instance will have to support that.
  # Default value: false
  # useSSL: true

gateway:
  # The hostname to bind the Gateway to.
  hostName: tyk-gw.local
  # When true, sets the gateway protocol to HTTPS.
  tls: false
  kind: DaemonSet
  replicaCount: 1
  containerPort: 8080
  image:
    repository: docker.tyk.io/tyk-gateway/tyk-gateway
    tag: v3.2.1
    pullPolicy: IfNotPresent
  service:
    type: NodePort
    port: 443
    externalTrafficPolicy: Local
    annotations: {}
  control:
    enabled: false
    containerPort: 9696
    port: 9696
    type: ClusterIP
    annotations: {}
  # Creates an Ingress object in k8s. Will require an ingress controller and
  # annotations for that ingress controller.
  ingress:
    enabled: true
    # Specify your ingress controller class name below
    className: "nginx"
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
      # kubernetes.io/tls-acme: "true"
    hosts:
      - host: xxx
        paths:
          - path: /
            pathType: ImplementationSpecific
    tls:
      - secretName: xxx
        hosts:
          - xxx
  resources: {}
  # We usually recommend not specifying default resources and leaving this as
  # a conscious choice for the user. This also increases the chances the chart
  # runs on environments with few resources, such as Minikube. If you do want
  # to specify resources, uncomment the following lines, adjust them as
  # necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  nodeSelector: {}
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  affinity: {}
  extraEnvs: []
  mounts: []

# If pump is enabled, the Gateway will create and collect analytics data to
# send to a data store of your choice. These can be set up in the pump config.
# The possible pump configs can be found here:
# https://github.com/TykTechnologies/tyk-pump#configuration
pump:
  # Determines whether or not the pump component should be installed.
  enabled: true
  replicaCount: 1
  image:
    repository: docker.tyk.io/tyk-pump/tyk-pump
    tag: v1.4.0
    pullPolicy: IfNotPresent
  annotations: {}
  resources: {}
  # We usually recommend not specifying default resources (see the note under
  # gateway.resources above).
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []
  mounts: []

rbac: true
I tried with 0.9.5, but no luck.
I changed redis.addrs, redis.pass, and secrets.APISecret, but 3 pods are still created.
## Default values for tyk-headless chart.
## This is a YAML-formatted file.
## Declare variables to be passed into your templates.
## See Tyk Helm documentation for installation details:
## https://tyk.io/docs/tyk-oss/ce-helm-chart/
## Registry for all Tyk images - https://hub.docker.com/u/tykio

# Chart name override. Truncates to 63 characters.
# Default value: tyk-headless.name
nameOverride: ""

# App name override. Truncates to 63 characters.
# Default value: tyk-headless.fullname
fullnameOverride: ""

# These are your Tyk stack secrets. They map directly to the following Tyk
# stack configuration:
secrets:
  # tyk.conf node_secret
  # tyk.conf secret
  APISecret: xxx
  # If you don't want to store plaintext secrets in the Helm values file and
  # would rather provide the k8s Secret externally, please populate the value
  # below.
  useSecretName: ""

redis:
  # The addrs value will allow you to set your Redis addresses. If you are
  # using a Redis cluster, you can list the endpoints of the Redis instances
  # or use the cluster configuration endpoint.
  # Default value: redis.{{ .Release.Namespace }}.svc.cluster.local:6379
  addrs:
    # - redis.tyk.svc.cluster.local:6379
    # This is the DNS name of Redis as set when using the Bitnami chart
    - tyk-redis-master.tyk.svc.cluster.local:6379
  # Redis password
  # If you're using the Bitnami Redis chart, please input your password in the field below
  pass: "xxx"
  # Enables SSL for the Redis connection. The Redis instance will have to support that.
  # Default value: false
  # useSSL: true
  # The enableCluster value will allow you to indicate to Tyk whether you are
  # running a Redis cluster or not.
  # enableCluster: true
  # Enables sentinel connection mode for Redis. If enabled, sentinelPass and masterName values are required.
  # enableSentinel: false
  # Redis sentinel password, only required while enableSentinel is true.
  # sentinelPass: ""
  # Redis sentinel master name, only required while enableSentinel is true.
  # masterName: ""
  # By default the database index is 0. Setting the database index is not
  # supported with Redis cluster. As such, if you have enableCluster: true,
  # then this value should be omitted or explicitly set to 0.
  storage:
    database: 0

mongo:
  # If you want to collect analytics through the Mongo pumps you can turn this
  # option on. Once on, Tyk Pump will assume that MongoDB is available at
  # mongo.tyk.svc.cluster.local:27017; if it is not, please change the mongoURL
  # value below.
  enabled: false
  # The mongoURL value will allow you to set your MongoDB address.
  # Default value: mongodb://mongo.{{ .Release.Namespace }}.svc.cluster.local:27017/tyk_analytics
  # mongoURL: mongodb://mongo.tyk.svc.cluster.local:27017/tyk_analytics
  # If your MongoDB has a password you can add the username and password to the URL:
  # mongoURL: mongodb://root:[email protected]:27017/tyk_analytics?authSource=admin
  # Enables SSL for the MongoDB connection. The MongoDB instance will have to support that.
  # Default value: false
  # useSSL: true

gateway:
  # The hostname to bind the Gateway to.
  hostName: tyk-gw.local
  # When true, sets the gateway protocol to HTTPS.
  tls: false
  kind: DaemonSet
  replicaCount: 1
  containerPort: 8080
  image:
    repository: docker.tyk.io/tyk-gateway/tyk-gateway
    tag: v3.2.1
    pullPolicy: IfNotPresent
  service:
    type: NodePort
    port: 443
    externalTrafficPolicy: Local
    annotations: {}
  control:
    enabled: false
    containerPort: 9696
    port: 9696
    type: ClusterIP
    annotations: {}
  # Creates an Ingress object in k8s. Will require an ingress controller and
  # annotations for that ingress controller.
  ingress:
    enabled: false
    # Specify your ingress controller class name below
    className: ""
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    hosts:
      - host: chart-example.local
        paths:
          - path: /
            pathType: ImplementationSpecific
    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - chart-example.local
  resources: {}
  # We usually recommend not specifying default resources and leaving this as
  # a conscious choice for the user. This also increases the chances the chart
  # runs on environments with few resources, such as Minikube. If you do want
  # to specify resources, uncomment the following lines, adjust them as
  # necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  nodeSelector: {}
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  affinity: {}
  extraEnvs: []
  mounts: []

# If pump is enabled, the Gateway will create and collect analytics data to
# send to a data store of your choice. These can be set up in the pump config.
# The possible pump configs can be found here:
# https://github.com/TykTechnologies/tyk-pump#configuration
pump:
  # Determines whether or not the pump component should be installed.
  enabled: false
  replicaCount: 1
  image:
    repository: docker.tyk.io/tyk-pump/tyk-pump
    tag: v1.4.0
    pullPolicy: IfNotPresent
  annotations: {}
  resources: {}
  # We usually recommend not specifying default resources (see the note under
  # gateway.resources above).
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []
  mounts: []

rbac: true
Thanks for sharing. I'll see if I can set up something similar to try to reproduce this.
Backtracking to the initial issue: is gateway.hydrogenx.live the domain name of one of the gateways? Possibly the one which loads the 2 APIs?
If yes, are you also making the same POST call to the other 2 gateways? I ask because Tyk stores the API definitions on disk at the app_path, so you could be seeing the inconsistency observed earlier.
If no, then what is it? I'm curious to know whether it's a load balancer.
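For context, app_path is the directory the gateway loads file-based API definitions from, configured in tyk.conf. In the official images it defaults to the following (treat the exact path as an assumption if you have customized the image):

{
  "app_path": "/opt/tyk-gateway/apps"
}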
The issue, as you both spotted, is that kind: DaemonSet creates one pod per schedulable node, which is why you get 3 API gateways; with a DaemonSet, replicaCount is ignored. If you want replicaCount to apply, switch the workload to a Deployment (sketch below).
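A minimal values override for that, assuming your chart version accepts Deployment for gateway.kind (worth verifying against the chart's templates):

gateway:
  # A Deployment honors replicaCount; a DaemonSet ignores it and schedules
  # one pod per node instead.
  kind: Deployment
  replicaCount: 1

Apply it with a normal helm upgrade against your existing values file.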
Now the second issue: an API created via the Tyk Gateway API is only written to the pod the request happens to hit via the ingress -> service; there is no config replication between pods. You end up with the definition in one gateway while the rest don't have it. To resolve this you need to create APIs via config files, distribute them to all your gateway pods, and hot-reload each one (for example via a ConfigMap, sketched below).
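One declarative way to do that distribution is a ConfigMap mounted into every gateway pod's apps directory. A sketch, where the ConfigMap name, the httpbin.json content, and the mount wiring (e.g. via the chart's gateway.mounts, whose exact schema depends on the chart version) are all placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tyk-api-definitions
  namespace: tyk
data:
  # One file per API definition; the gateway loads every JSON file found
  # in its app_path directory on (hot-)reload.
  httpbin.json: |
    {
      "name": "httpbin",
      "api_id": "httpbin",
      "org_id": "default",
      "use_keyless": true,
      "version_data": { "not_versioned": true, "versions": { "Default": { "name": "Default" } } },
      "proxy": { "listen_path": "/httpbin/", "target_url": "http://httpbin.org/", "strip_listen_path": true },
      "active": true
    }

Since every pod mounts the same ConfigMap, a hot reload on each gateway then picks up identical definitions.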
tyk-sync might help, but IIRC I had some issues with the policies, and the project doesn't see much movement. So if you plan on using Tyk CE in production with multiple pods, you will need to develop or find a way to distribute the APIs and policies to all the pods. The same happens with Tyk Operator.
We ran into the same issue and found a workaround: when running multiple pods, if you mount a PersistentVolume which all pods share, then all pods will have the same API definitions. When you add an API definition on one pod, that pod writes the JSON definition to the local 'apps' folder, then uses Redis to trigger a reload on all pods. You'll need to make sure the PersistentVolume uses a storage class which supports NFS / EFS, since the pods in Kubernetes will run on different nodes / VMs.
Hope this helps. We had to make a small change to the Helm chart to support the PVC. In our case we're running on AWS EKS, so we had to enable the EFS storage driver in our cluster. I'm sure other Kubernetes distributions have similar support for sharing volumes across nodes.
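For anyone trying the same thing, a sketch of the kind of PersistentVolumeClaim involved (the claim name and the efs-sc storage class are placeholders; any ReadWriteMany-capable class works):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tyk-gateway-apps
  namespace: tyk
spec:
  # ReadWriteMany is what lets gateway pods on different nodes share the volume.
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 1Gi

The chart change then amounts to adding this claim as a volume on the gateway workload, with a volumeMount at the apps path (/opt/tyk-gateway/apps by default).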