List APIs response not consistent

Hi all,

I deployed a new Tyk Gateway CE using the Helm chart, following the tutorial here.
It seems to have been set up successfully.
I can create a new API and it responds with

{
    "key": "xxx",
    "status": "ok",
    "action": "added"
}

After that I call “/tyk/reload/group” and it responds with

{
    "status": "ok",
    "message": ""
}

However, the new API does not show up in the list APIs response every time.
Sometimes it does, but sometimes the call simply returns an empty array [].

Any advice would be appreciated. Thanks.

Hi @CKLFish and welcome to the community. Can you check in the gateway logs whether your APIs are being loaded?

There should be an info log line like the one below. You can try filtering for the word "Detect".

tyk-open-source-gateway    | time="Jun 09 10:30:32" level=info msg="Detected 2 APIs" prefix=main

You can verify that your APIs are being loaded by the gateway if you follow through from that line. One of the reasons the API call can return an empty array [ ] is that one of the API definitions is formatted incorrectly. If that’s the case you may even see an error in the logs.

Please confirm that and let us know.

Hi @Olu,

I got these in 1 of the pods.

time="Jun 09 10:33:44" level=info msg="Loading API configurations." prefix=main
time="Jun 09 10:33:44" level=info msg="Tracking hostname" api_name=xxx domain="(no host)" prefix=main
time="Jun 09 10:33:44" level=info msg="Initialising Tyk REST API Endpoints" prefix=main
time="Jun 09 10:33:44" level=info msg="API bind on custom port:0" prefix=main
time="Jun 09 10:33:44" level=info msg="Checking security policy: Open" api_id=xxx api_name=xxx org_id=1
time="Jun 09 10:33:44" level=info msg="API Loaded" api_id=xxx api_name=xxx org_id=1 prefix=gateway server_name=-- user_id=-- user_ip=--
time="Jun 09 10:33:44" level=info msg="Loading uptime tests..." prefix=host-check-mgr
time="Jun 09 10:33:44" level=info msg="Initialised API Definitions" prefix=main
time="Jun 09 10:33:44" level=info msg="API reload complete" prefix=main
time="Jun 09 10:33:44" level=info msg="reload: complete" prefix=main
time="Jun 09 10:33:44" level=info msg="Initiating coprocess reload" prefix=main
time="Jun 09 10:33:44" level=info msg="Reloading middlewares" prefix=coprocess
time="Jun 09 10:33:44" level=info msg="coprocess reload complete" prefix=main
time="Jun 09 10:33:44" level=info msg="reload: cycle completed in 3.557609ms" prefix=main

But the other 2 pods only got these.

time="Jun 09 10:33:44" level=info msg="Detected 0 APIs" prefix=main
time="Jun 09 10:33:44" level=warning msg="No API Definitions found, not reloading" prefix=main
time="Jun 09 10:33:44" level=info msg="reload: complete" prefix=main
time="Jun 09 10:33:44" level=info msg="Initiating coprocess reload" prefix=main
time="Jun 09 10:33:44" level=info msg="Reloading middlewares" prefix=coprocess
time="Jun 09 10:33:44" level=info msg="coprocess reload complete" prefix=main
time="Jun 09 10:33:44" level=info msg="reload: cycle completed in 261.625µs" prefix=main

In which pod is the issue occurring? The one that loaded the APIs or the 2 pods that didn’t?

Hi @Olu,

I am using the Helm chart and its DaemonSet created 3 pods.
I found that replicaCount is specified as 1.
I am not sure why it generated 3 pods.

gateway:
  # The hostname to bind the Gateway to.
  hostName: tyk-gw.local
  # When true, sets the gateway protocol to HTTPS.
  tls: false

  kind: DaemonSet
  replicaCount: 1

I just tried it now. I am not sure why it created 3 pods for you. Have you tried it and did the same thing occur? What version did you use?

I tried with 0.9.4 and 0.9.5.
Both created 3 gateway-tyk-ce-tyk-headless pods and 1 gateway-tyk-ce-tyk-headless DaemonSet.
bitnami/redis is used.

The values YAML is below.

## Default values for tyk-headless chart.
## This is a YAML-formatted file.
## Declare variables to be passed into your templates.
## See Tyk Helm documentation for installation details:
## https://tyk.io/docs/tyk-oss/ce-helm-chart/
## Registry for all Tyk images - https://hub.docker.com/u/tykio

# Chart name override. Truncates to 63 characters.
# Default value: tyk-headless.name
nameOverride: ""

# App name override. Truncates to 63 characters.
# Default value: tyk-headless.fullname
fullnameOverride: ""

# These are your Tyk stack secrets. They will directly map to the following Tyk stack
# configuration:
secrets:
  # tyk.conf node_secret
  # tyk.conf secret
  APISecret: xxx
  # If you don't want to store plaintext secrets in the Helm value file and would
  # rather provide the k8s Secret externally please populate the value below
  useSecretName: ""

redis:
  # The addrs value will allow you to set your Redis addresses. If you are
  # using a redis cluster, you can list the endpoints of the redis instances
  # or use the cluster configuration endpoint.
  # Default value: redis.{{ .Release.Namespace }}.svc.cluster.local:6379
  addrs:
  #   - redis.tyk.svc.cluster.local:6379
  #   This is the DNS name of the redis as set when using Bitnami
  - tyk-redis-master.tyk.svc.cluster.local:6379

  # Redis password
  # If you're using Bitnami Redis chart please input your password in the field below
  pass: xxx

  # Enables SSL for Redis connection. Redis instance will have to support that.
  # Default value: false
  # useSSL: true

  # The enableCluster value will allow you to indicate to Tyk whether you are
  # running a Redis cluster or not.
  # enableCluster: true

  # Enables sentinel connection mode for Redis. If enabled, sentinelPass and masterName values are required.
  # enableSentinel: false

  # Redis sentinel password, only required while enableSentinel is true.
  # sentinelPass: ""

  # Redis sentinel master name, only required while enableSentinel is true.
  # masterName: ""

  # By default the database index is 0. Setting the database index is not
  # supported with redis cluster. As such, if you have enableCluster: true,
  # then this value should be omitted or explicitly set to 0.
  storage:
    database: 0

mongo:
  # If you want to collect analytics through the mongo pumps you can turn this
  # option on. Once on, Tyk Pump will assume that MongoDB is available at
  # mongo.tyk.svc.cluster.local:27017; if it is not, please change the mongoURL
  # value below.
  enabled: true

  # The mongoURL value will allow you to set your MongoDB address.
  # Default value: mongodb://mongo.{{ .Release.Namespace }}.svc.cluster.local:27017/tyk_analytics
  # mongoURL: mongodb://mongo.tyk.svc.cluster.local:27017/tyk_analytics
  # If your MongoDB has a password you can add the username and password to the url
  mongoURL: mongodb://root:[email protected]:27017/tyk_analytics?authSource=admin

  # Enables SSL for MongoDB connection. MongoDB instance will have to support that.
  # Default value: false
  # useSSL: true

gateway:
  # The hostname to bind the Gateway to.
  hostName: tyk-gw.local
  # When true, sets the gateway protocol to HTTPS.
  tls: false

  kind: DaemonSet
  replicaCount: 1
  containerPort: 8080
  image:
    repository: docker.tyk.io/tyk-gateway/tyk-gateway
    tag: v3.2.1
    pullPolicy: IfNotPresent
  service:
    type: NodePort
    port: 443
    externalTrafficPolicy: Local
    annotations: {}
  control:
    enabled: false
    containerPort: 9696
    port: 9696
    type: ClusterIP
    annotations: {}
  # Creates an ingress object in k8s. Will require an ingress-controller and
  # annotation to that ingress controller.
  ingress:
    enabled: true
    # specify your ingress controller class name below
    className: "nginx"
    annotations: 
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
      # kubernetes.io/tls-acme: "true"
    
    hosts:
      - host: xxx
        paths:
          - path: /
            pathType: ImplementationSpecific
    tls: 
    - secretName: xxx
      hosts:
        - xxx

  resources: {}
    # We usually recommend not to specify default resources and to leave this
    # as a conscious choice for the user. This also increases chances charts
    # run on environments with little resources, such as Minikube. If you do
    # want to specify resources, uncomment the following lines, adjust them
    # as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #  cpu: 100m
    #  memory: 128Mi
    # requests:
    #  cpu: 100m
    #  memory: 128Mi
  nodeSelector: {}
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  affinity: {}
  extraEnvs: []
  mounts: []

# If pump is enabled the Gateway will create and collect analytics data to send
# to a data store of your choice. These can be set up in the pump config. The
# possible pump configs can be found here:
# https://github.com/TykTechnologies/tyk-pump#configuration
pump:
  # Determines whether or not the pump component should be installed.
  enabled: true

  replicaCount: 1
  image:
    repository: docker.tyk.io/tyk-pump/tyk-pump
    tag: v1.4.0
    pullPolicy: IfNotPresent
  annotations: {}
  resources: {}
    # We usually recommend not to specify default resources and to leave this
    # as a conscious choice for the user. This also increases chances charts
    # run on environments with little resources, such as Minikube. If you do
    # want to specify resources, uncomment the following lines, adjust them
    # as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #  cpu: 100m
    #  memory: 128Mi
    # requests:
    #  cpu: 100m
    #  memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []
  mounts: []

rbac: true

I see there are some changes from the default headless Helm values. Things like

  • mongo enablement
  • pump enablement
  • and ingress controller enablement

Can you try disabling those, or using the default values with just your Redis settings applied, and confirm whether you can replicate the issue?
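For clarity, the toggles I mean are the ones already present in your values file; a minimal sketch of the suggested change (same keys, just switched off) would be:

mongo:
  enabled: false

gateway:
  ingress:
    enabled: false

pump:
  enabled: false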

Hi @Olu,

I tried with 0.9.5 but no luck.
I changed redis.addrs, redis.pass and secrets.APISecret, but 3 pods were still created.

## Default values for tyk-headless chart.
## This is a YAML-formatted file.
## Declare variables to be passed into your templates.
## See Tyk Helm documentation for installation details:
## https://tyk.io/docs/tyk-oss/ce-helm-chart/
## Registry for all Tyk images - https://hub.docker.com/u/tykio

# Chart name override. Truncates to 63 characters.
# Default value: tyk-headless.name
nameOverride: ""

# App name override. Truncates to 63 characters.
# Default value: tyk-headless.fullname
fullnameOverride: ""

# These are your Tyk stack secrets. They will directly map to the following Tyk stack
# configuration:
secrets:
  # tyk.conf node_secret
  # tyk.conf secret
  APISecret: xxx
  # If you don't want to store plaintext secrets in the Helm value file and would
  # rather provide the k8s Secret externally please populate the value below
  useSecretName: ""

redis:
  # The addrs value will allow you to set your Redis addresses. If you are
  # using a redis cluster, you can list the endpoints of the redis instances
  # or use the cluster configuration endpoint.
  # Default value: redis.{{ .Release.Namespace }}.svc.cluster.local:6379
  addrs:
  #   - redis.tyk.svc.cluster.local:6379
  #   This is the DNS name of the redis as set when using Bitnami
    - tyk-redis-master.tyk.svc.cluster.local:6379

  # Redis password
  # If you're using Bitnami Redis chart please input your password in the field below
  pass: "xxx"

  # Enables SSL for Redis connection. Redis instance will have to support that.
  # Default value: false
  # useSSL: true

  # The enableCluster value will allow you to indicate to Tyk whether you are
  # running a Redis cluster or not.
  # enableCluster: true

  # Enables sentinel connection mode for Redis. If enabled, sentinelPass and masterName values are required.
  # enableSentinel: false

  # Redis sentinel password, only required while enableSentinel is true.
  # sentinelPass: ""

  # Redis sentinel master name, only required while enableSentinel is true.
  # masterName: ""

  # By default the database index is 0. Setting the database index is not
  # supported with redis cluster. As such, if you have enableCluster: true,
  # then this value should be omitted or explicitly set to 0.
  storage:
    database: 0

mongo:
  # If you want to collect analytics through the mongo pumps you can turn this
  # option on. Once on, Tyk Pump will assume that MongoDB is available at
  # mongo.tyk.svc.cluster.local:27017; if it is not, please change the mongoURL
  # value below.
  enabled: false

  # The mongoURL value will allow you to set your MongoDB address.
  # Default value: mongodb://mongo.{{ .Release.Namespace }}.svc.cluster.local:27017/tyk_analytics
  # mongoURL: mongodb://mongo.tyk.svc.cluster.local:27017/tyk_analytics
  # If your MongoDB has a password you can add the username and password to the url
  # mongoURL: mongodb://root:[email protected]:27017/tyk_analytics?authSource=admin

  # Enables SSL for MongoDB connection. MongoDB instance will have to support that.
  # Default value: false
  # useSSL: true

gateway:
  # The hostname to bind the Gateway to.
  hostName: tyk-gw.local
  # When true, sets the gateway protocol to HTTPS.
  tls: false

  kind: DaemonSet
  replicaCount: 1
  containerPort: 8080
  image:
    repository: docker.tyk.io/tyk-gateway/tyk-gateway
    tag: v3.2.1
    pullPolicy: IfNotPresent
  service:
    type: NodePort
    port: 443
    externalTrafficPolicy: Local
    annotations: {}
  control:
    enabled: false
    containerPort: 9696
    port: 9696
    type: ClusterIP
    annotations: {}
  # Creates an ingress object in k8s. Will require an ingress-controller and
  # annotation to that ingress controller.
  ingress:
    enabled: false
    # specify your ingress controller class name below
    className: ""
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    hosts:
      - host: chart-example.local
        paths:
          - path: /
            pathType: ImplementationSpecific
    tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

  resources: {}
    # We usually recommend not to specify default resources and to leave this
    # as a conscious choice for the user. This also increases chances charts
    # run on environments with little resources, such as Minikube. If you do
    # want to specify resources, uncomment the following lines, adjust them
    # as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #  cpu: 100m
    #  memory: 128Mi
    # requests:
    #  cpu: 100m
    #  memory: 128Mi
  nodeSelector: {}
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  affinity: {}
  extraEnvs: []
  mounts: []

# If pump is enabled the Gateway will create and collect analytics data to send
# to a data store of your choice. These can be set up in the pump config. The
# possible pump configs can be found here:
# https://github.com/TykTechnologies/tyk-pump#configuration
pump:
  # Determines whether or not the pump component should be installed.
  enabled: false

  replicaCount: 1
  image:
    repository: docker.tyk.io/tyk-pump/tyk-pump
    tag: v1.4.0
    pullPolicy: IfNotPresent
  annotations: {}
  resources: {}
    # We usually recommend not to specify default resources and to leave this
    # as a conscious choice for the user. This also increases chances charts
    # run on environments with little resources, such as Minikube. If you do
    # want to specify resources, uncomment the following lines, adjust them
    # as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #  cpu: 100m
    #  memory: 128Mi
    # requests:
    #  cpu: 100m
    #  memory: 128Mi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: []
  mounts: []

rbac: true

This is strange. Let me ask internally if anyone might have run into this same issue and get back to you.

@CKLFish Could you help answer a few questions below?

  1. What OS or environment are you using?
  2. Are you doing anything different when creating the pods since the replica count is 1?
  3. How are you putting the APIs into the cluster?

Hi @Olu ,

  1. Deployed on a Kubernetes cluster created by kOps, hosted on AWS.
    Master Instance Type: t3.medium
    Worker Instance Type: c6a.2xlarge *2
    ARCH: amd64
    OS: ubuntu-focal-20.04-amd64-server-20220404

  2. I used Lens to deploy the Helm chart.

  3. Via Postman, POST to https://gateway.hydrogenx.live/tyk/apis with the body below:

{
    "name": "xxx",
    "slug": "xxx",
    "api_id": "xxx",
    "org_id": "1",
    "use_keyless": true,
    "version_data": {
      "not_versioned": true,
      "versions": {
        "Default": {
          "name": "Default",
          "use_extended_paths": true
        }
      }
    },
    "proxy": {
      "listen_path": "/xxx/",
      "target_url": "xxx",
      "strip_listen_path": true
    },
    "active": true
}

Thanks for sharing. I’ll see if I can set up something similar to try and reproduce.

Backtracking to the initial issue, is this gateway.hydrogenx.live the domain name of one of the gateways? Possibly the one which loads the 2 APIs?

If yes, then are you also making the same POST call to the other 2 gateways? I ask because Tyk stores the API definitions at the app_path on disk, so that could explain the inconsistency you observed earlier.

If no, then what is it? I am a bit interested to know if it’s a load balancer.

Hi @Olu,

I think I found the issue.
I changed gateway.kind from DaemonSet to Deployment and it created only 1 pod instead of 3 (1 pod per node).

Is there any problem if I use Deployment instead of DaemonSet?
Thanks~

As for the domain, it is the load balancer.
I enabled the ingress in the Helm chart.

As gateway.kind is DaemonSet and a pod is created on every node,
the load balancer has 3 endpoints behind it.

If I keep gateway.kind as DaemonSet, should I create an ingress per node and make the POST call to each of them?

I suspected the kind setting was the cause here.

Is there any problem if I use Deployment instead of DaemonSet?

There should be no problem at all. It should work fine.
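For anyone following along, the change being discussed is only the workload kind in the gateway section of the values file (keys as shown above); a minimal sketch:

gateway:
  kind: Deployment
  replicaCount: 1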

If I keep gateway.kind as DaemonSet, should I create an ingress per node and make the POST call to each of them?

Yes. However, how you set up your environment is still up to you. The key point is that each gateway needs to have a copy of the API definition.

I see. Thanks for the help.~

The issue, as you spotted, is that kind: DaemonSet creates one pod per node, and that is why you get 3 gateways.
The second issue is that an API created via the Tyk Gateway REST API only gets created in the pod the request happens to hit through the ingress -> service; there is no config replication to the other pods. So you end up with the config in only one gateway and the rest will not have it. To resolve this you need to create APIs as definition files, distribute them to all your gateway pods, and hot-reload each one.
tyk-sync might help, but I had some issues with policies IIRC and it does not see much movement. So if you plan on using Tyk CE in production with multiple pods, you will need to develop or find a way to distribute the APIs and policies to all the pods. The same applies with Tyk Operator.
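As a rough illustration of the file-based approach (names are placeholders, and it assumes the default app_path, /opt/tyk-gateway/apps in the standard image), you could ship the same API definition JSON as a ConfigMap, mount it into every gateway pod (e.g. through the chart's gateway.mounts value), and then call /tyk/reload/group:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tyk-api-definitions   # placeholder name
  namespace: tyk
data:
  # One file per API definition; same shape as the JSON posted earlier in the thread
  my-api.json: |
    {
      "name": "xxx",
      "slug": "xxx",
      "api_id": "xxx",
      "org_id": "1",
      "use_keyless": true,
      "version_data": {
        "not_versioned": true,
        "versions": {
          "Default": { "name": "Default", "use_extended_paths": true }
        }
      },
      "proxy": {
        "listen_path": "/xxx/",
        "target_url": "xxx",
        "strip_listen_path": true
      },
      "active": true
    }

Mounted at the apps path in each gateway pod, every gateway loads the same definitions on the next reload.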

Cheers!

We ran into the same issue and have found a workaround: when running multiple pods, if you mount a PersistentVolume which all pods share, then all pods will have the same API definitions. When you add an API definition on one pod, that pod writes the JSON definition to its local ‘apps’ folder and then uses Redis to trigger a reload on all pods. You’ll need to make sure the PersistentVolume uses a storage class which supports NFS / EFS, since the pods in Kubernetes will run on different nodes / VMs.
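For illustration, a minimal sketch of the kind of claim we mean (names are placeholders; it assumes a ReadWriteMany-capable storage class, e.g. one backed by EFS), mounted at the gateway’s apps path in every pod:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tyk-gateway-apps      # placeholder name
  namespace: tyk
spec:
  accessModes:
    - ReadWriteMany           # every gateway pod needs read/write access to the shared apps folder
  storageClassName: efs-sc    # placeholder; any RWX-capable class (NFS / EFS) works
  resources:
    requests:
      storage: 1Gi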

Hope this helps. We had to make a small change to the Helm chart to support the PVC. In our case, we’re running on AWS EKS, so we had to enable the EFS storage driver in our cluster. I’m sure other Kubernetes distributions have similar support for sharing volumes across nodes.

Kind regards, Barry
