Interacting with Tyk APIs without port forwarding on a k8s cluster

Hi,

We are in the process of evaluating Tyk for API gateway use cases, using the Tyk Community Edition. From the documentation, we understand that the APIs are not exposed outside of the pod, and that the only way to interact with the Gateway REST API is to use the kubectl port-forward feature.
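For reference, the port-forward we are using today looks like this (the service name and namespace are assumptions based on the default chart; check yours with `kubectl get svc`):

```sh
# Forward the gateway service to localhost; the Gateway REST API is
# then reachable at http://localhost:8080/tyk/... (if control_api_port
# is set in tyk.conf, forward that port instead).
kubectl port-forward -n tyk svc/tyk-gateway 8080:8080
```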

We are wondering if we can interact with the Gateway REST API without using kubectl port-forwarding. We would like to know whether the Tyk Helm charts can be customized to automatically configure Kubernetes port forwarding after installing Tyk CE. Has anyone given this a try and succeeded?

The documentation on the Community Edition does not provide enough information here. We would appreciate it if someone could share insights on how to do this in a more sophisticated way.

Configure the service for the gateway as either a NodePort or a LoadBalancer and you can access the internal APIs from outside the cluster.

If you're using minikube or Docker Desktop, this will normally mean you can access them from localhost.
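As a sketch, the values override could look something like this (key paths are assumptions based on the tyk-headless chart; verify against your chart version's values.yaml):

```yaml
# values-override.yaml -- expose the gateway outside the cluster.
gateway:
  service:
    type: NodePort   # or LoadBalancer on a cloud provider
    port: 8080       # the gateway's listen_port
  control:
    enabled: true    # expose the control API (Gateway REST API) separately
    port: 9696
```

Then apply it with something like `helm upgrade --install tyk tyk-helm/tyk-headless -f values-override.yaml -n tyk`.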

Hi,

As suggested, we have configured the service for the gateway as a NodePort. Both the control_api_port and the listen_port (9696 and 8080 respectively, per the default configuration of the Tyk Helm charts) are now reachable from outside the cluster.

We understand that the only option available in the Community Edition is without the Dashboard, i.e. file-based API definitions. Ours is an on-premise Kubernetes cluster with 3 master and 7 worker nodes. Here is what we have observed and the issues we have:

  1. Tyk on k8s is deployed as a DaemonSet. In our evaluation environment, 7 tyk-gateway pods are allocated across the 7 worker nodes (1 per worker node).

  2. When creating API definitions via the Tyk Gateway REST API, the API definition was stored on only one of the pods (whichever pod the service happened to route the request to) in the /opt/tyk-gateway/apps folder (see the curl sketch after this list).

  3. When retrieving via REST calls, we did not get the created API definitions if the request was served by another pod.
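To make the behaviour concrete, this is roughly the sequence (the gateway secret and node address below are placeholders for our actual values):

```sh
# Create an API definition via the control API exposed on the NodePort.
# The request lands on one arbitrary gateway pod, which writes the
# definition to its own local /opt/tyk-gateway/apps folder.
curl -H "x-tyk-authorization: $TYK_GW_SECRET" \
     -X POST -d @api-definition.json \
     http://<node-ip>:<nodeport-for-9696>/tyk/apis

# A subsequent list call may be routed to a different pod,
# which knows nothing about the definition just created.
curl -H "x-tyk-authorization: $TYK_GW_SECRET" \
     http://<node-ip>:<nodeport-for-9696>/tyk/apis
```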

We came across a couple of threads in this forum mentioning that file synchronization across pods is up to the implementor.

We would like to know whether this still holds true in the latest versions of the Community Edition. If yes, what would be the suggested approach?

Thanks in advance,
Suresh Charan


We are also facing this issue. We have three pods running but only one of them is getting the newly created APIs.
How can we sync this data across the pods? We are creating APIs through the tyk-operator.

Thanks.

Hello @Anup_Rai,

Unfortunately, you cannot sync configuration across clustered gateways in the CE version. This is managed by the Tyk Dashboard, which is part of the paid product.

Here is how that is achieved using the Dashboard: https://tyk.io/docs/tyk-configuration-reference/gateway-dashboard/#how-it-works

Thanks @zaid. Is there any recommended way to sync up the pods in a cluster for the CE version?

Some 9 months later, this limitation isn't documented, nor is it made clear in the Helm charts.

Like a lot of companies, we're testing out the OSS version and evaluating what the community is like for support.

Did you resolve this issue?

I'm tempted to use a PVC with RWX access, something like the sketch below. But that means manually editing the Helm templates to bind to an existing PVC preloaded with a blank config.

Or, if you're really good, create the PVC as part of the deployment.
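A minimal sketch of what I mean, assuming your cluster has an RWX-capable storage class and that every gateway pod mounts this claim at /opt/tyk-gateway/apps:

```yaml
# Shared claim so an API definition written by one gateway pod is
# visible to all of them. storageClassName is an assumption; use
# whatever ReadWriteMany-capable class (e.g. NFS) your cluster provides.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tyk-gateway-apps
  namespace: tyk
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
```

You'd still need a hot reload against the gateways after each change, since file-based definitions are only re-read on reload.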

An easier method, of course, is not to use the K8s Service but to manually connect to each gateway and apply the config (rough sketch below). The problem with this approach is that there is no shared state for rate limits, etc.
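A rough sketch of that approach (the namespace, label selector, and secret are assumptions based on the default chart; adjust to your install):

```sh
# Push the same API definition to every gateway pod directly, then
# hot-reload each one, so every pod's local apps folder has the definition.
SECRET="<gateway secret from tyk.conf>"
for POD in $(kubectl get pods -n tyk -l app=gateway-tyk-headless \
               -o jsonpath='{.items[*].metadata.name}'); do
  kubectl port-forward -n tyk "pod/$POD" 9696:9696 &
  PF_PID=$!
  sleep 2   # crude wait for the forward to be ready
  curl -s -H "x-tyk-authorization: $SECRET" -X POST \
       -d @api-definition.json http://localhost:9696/tyk/apis
  curl -s -H "x-tyk-authorization: $SECRET" http://localhost:9696/tyk/reload
  kill "$PF_PID"
done
```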

If I'd known about this type of limitation, I would've abandoned this 3 weeks ago. I even had a call about the paid version, believing I had a fully functional OSS version.