Interacting with Tyk APIs without port forwarding on a Kubernetes cluster


We are in the process of evaluating Tyk for API gateway use cases, using the Tyk community edition. From the documentation, we understand that the Gateway REST API is not exposed outside of the pod; the only way to interact with it is to use the kubectl port-forward feature.

We are just wondering if we can interact with the Gateway REST API without using kubectl port forwarding. We would like to know if the Helm charts for Tyk can be customized to automatically configure Kubernetes port forwarding after installing Tyk CE. Has anyone given this a try and succeeded?

The documentation on the community edition does not provide enough information. We would appreciate it if someone could give us insights on a more sophisticated way to do port forwarding.
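For reference, the per-session workflow we are trying to avoid looks roughly like this (namespace, pod name, and ports are assumptions based on the default chart values; adjust to your release):

```shell
# Forward the gateway's listen port (8080) and control API port (9696)
# from one tyk-gateway pod to the local machine.
kubectl port-forward -n tyk pod/tyk-gateway-abc12 8080:8080 9696:9696

# In a second terminal, the gateway health check should now answer locally:
curl http://localhost:8080/hello
```

This has to be re-run per session and only ever reaches a single pod, which is why we are looking for something cluster-level.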

Configure the service for the gateway as either a NodePort or a LoadBalancer and you can access the internal APIs from outside the cluster.

If using Minikube or Docker Desktop, this will normally mean you can access it from localhost.
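A minimal values override for this might look like the following sketch (the exact key paths are an assumption and may differ between chart versions, so verify against the chart you installed):

```yaml
# values-nodeport.yaml -- sketch only; key names assumed from chart defaults
gateway:
  service:
    type: NodePort   # or LoadBalancer on a cloud provider
```

Applied with something like `helm upgrade --install tyk <chart> -f values-nodeport.yaml`, after which `kubectl get svc` shows the assigned node ports.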


As suggested, we have configured the service for the gateway as a NodePort. Both the control_api_port and the listen_port (9696 and 8080 respectively, per the default configuration of the Tyk Helm charts) are now exposed for interaction from outside the cluster.
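With the NodePort in place, reaching the Gateway REST API from outside the cluster looks roughly like this (the node address, NodePort number, and secret are placeholders; the secret must match the `secret` value in tyk.conf):

```shell
# Node IP and assigned NodePort are examples only.
NODE=10.0.0.11
PORT=30969   # NodePort mapped to control_api_port 9696

# List API definitions via the Gateway REST API.
curl -H "X-Tyk-Authorization: <gateway-secret>" \
  "http://$NODE:$PORT/tyk/apis"
```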

We understand that the only option available in the community edition is without the dashboard, i.e. a file-based installation. Ours is an on-premises Kubernetes cluster with 3 master and 7 worker nodes. Here is what we have observed and have issues with:

  1. Tyk on Kubernetes is deployed as a DaemonSet. In our evaluation test environment, 7 tyk-gateway pods are allocated to the 7 worker nodes (one per worker node).

  2. While creating API definitions via the Tyk REST API, the API definition was stored on only one of the pods (the pod is selected randomly for this purpose), in the /opt/tyk-gateway/apps folder.

  3. When retrieving via REST calls, we did not get back the created API definitions if the request was served by another pod.
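For context, the calls we make look roughly like this (node address, port, and secret are placeholders as before). The catch is that both the write to /opt/tyk-gateway/apps and the single-gateway hot reload only affect whichever pod happens to serve the request:

```shell
# Create a keyless API definition. The file lands in
# /opt/tyk-gateway/apps on ONE pod only.
curl -X POST -H "X-Tyk-Authorization: <gateway-secret>" \
  -d '{"name":"demo","api_id":"demo","use_keyless":true,
       "proxy":{"listen_path":"/demo/","target_url":"http://httpbin.org/","strip_listen_path":true},
       "version_data":{"not_versioned":true,"versions":{"Default":{"name":"Default"}}}}' \
  "http://$NODE:$PORT/tyk/apis"

# Hot reload -- /tyk/reload only reloads the gateway that receives the call.
curl -H "X-Tyk-Authorization: <gateway-secret>" \
  "http://$NODE:$PORT/tyk/reload"
```

A subsequent `GET /tyk/apis` routed to a different pod then comes back without the new definition, which is the behavior described in point 3.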

We came across a couple of threads in this forum mentioning that file synchronization across pods is up to the implementor.

Now, we would like to know whether this still holds true in the latest versions of the community edition. If yes, what would be the suggested approach?

Thanks in advance,
Suresh Charan