Setting up the ingress controller


Hi everyone,

I’m trying to set up Tyk in my on-prem Kubernetes cluster (which did not prove to be a walk in the park, unfortunately). I did manage to get the gateway up and running, and the pump also appears to be running (though I’m not sure what I’m doing there, but that’s probably another topic)! So the candle was lit and the party started! I also managed to run the examples via the API!

However, now I’m ready to open “gate number 6” and combine this with the brand-new ingress controller. What’s unclear to me is:

  • Does this still fall under the open-source offering? The example Helm chart seems to link to the dashboard for some reason.
  • Is there a more structured explanation of how this thing is “wired” together? Currently I’m trying to piece it together from the Helm chart, but that can only be used by licensed users (which I am not, unfortunately).
  • Has anyone using the open-source offering been able to get this up and running?
  • Is there documentation that explains the config file below?

The only thing I’ve found so far is the cheerful blog post announcing that it has been released :frowning:
The ingress controller looks really promising and seems to help abstract things away a bit. In the past I tried this with Kong, which was quite easy, but it didn’t come as “batteries included” the way Tyk does.

To be clear: I’m already stranded at the config part, so I cannot run the tyk-k8s pod.

I’d already be happy with some pointers that could help me get this up and running.

Thank you already for reading this and hopefully helping out :slight_smile:


I’ll try to answer your questions bit by bit:

While the controller itself is open source, using it requires the Dashboard API, as that is what it interacts with to handle the orchestration.

The k8s ingress controller for Tyk uses the Tyk Dashboard API to centralise API definitions for gateways running across the cluster; that’s mainly why the dashboard is needed for this to work.

The controller monitors the K8s API for ingress spec creation and deletion and, when it finds one, converts it into an API Definition (uploading any associated certificates) via the Dashboard API, or removes the linked routes on deletion. This then triggers the gateways to pull in their new configurations.
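For illustration, this is the kind of Ingress resource the controller would react to. Note that the `kubernetes.io/ingress.class: tyk` annotation, the API version, and the service name are my assumptions for the sketch, not confirmed details; check the tyk-k8s README for the exact ingress class your version expects:

```yaml
# Hypothetical example of an Ingress the tyk-k8s controller would pick up
# and translate into a Tyk API Definition via the Dashboard API.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-ingress
  annotations:
    # Assumed ingress class; verify against the controller's documentation
    kubernetes.io/ingress.class: tyk
spec:
  rules:
    - host: my-service.example.com
      http:
        paths:
          - path: /
            backend:
              # my-service is a placeholder for your own K8s Service
              serviceName: my-service
              servicePort: 8080
```

Deleting this resource would, per the description above, cause the controller to remove the linked routes again.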

We haven’t set up the controller to work with the raw gateway API yet, as it would require synchronisation of all gateway API stores and there’s no other centralised point except for the dashboard at the moment.

However, now that I think about it, this could be accomplished with a sidecar deployed alongside each gateway: the sidecars would monitor for changes and modify only the gateway they are associated with, using its local API. :thinking:
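As a rough sketch of that idea (all container names and images below are placeholders, not real artifacts), each gateway pod would carry its own watcher container that talks only to the gateway next to it:

```yaml
# Sketch only: pairing the open-source gateway with a hypothetical
# per-pod watcher sidecar instead of a centralised dashboard.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tyk-gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tyk-gateway
  template:
    metadata:
      labels:
        app: tyk-gateway
    spec:
      containers:
        - name: gateway
          image: tykio/tyk-gateway:latest
          ports:
            - containerPort: 8080
        - name: ingress-watcher           # hypothetical sidecar
          image: example/ingress-watcher  # placeholder image
          env:
            # The sidecar would watch the K8s API for ingress changes and
            # push the resulting API definitions to its local gateway only.
            - name: GATEWAY_API_URL
              value: "http://localhost:8080"
```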

Definitely worth investigating.

Not yet, but I can provide a snapshot here:

  tyk_k8s.yaml: |-
      # The port the sidecar injector webhook listener should run on; this
      # listens for K8s deployment events and modifies deployments on the fly
      # to inject the appropriate sidecar configuration
      addr: ":443"
      # The certificate to use; this webhook must be secured
      certFile: "/etc/tyk-k8s/certs/cert.pem"
      keyFile: "/etc/tyk-k8s/certs/key.pem"
      # The dashboard service URL
      url: "http://dashboard-svc-{{ include "tyk-pro.fullname" . }}.{{ .Values.nameSpace }}:{{ .Values.dash.service.port }}"
      # These can be safely ignored as they are set by environment variables,
      # but for testing they can be set here
      secret: "set-by-env"
      # The organisation to pin API definitions to
      org: "set-by-env"
      # Should the injector generate new routes in the API definition for each
      # sidecar (set to 'false' to disable the injector)
      createRoutes: true
      # This section describes the injected container
      containers: ...
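For context, that snippet is a data key of a ConfigMap. Stripped of the Helm templating, a standalone version could look like the sketch below; the resolved dashboard URL, namespace, and ConfigMap name are assumptions for illustration, and the `containers` section is elided in the original, so it is omitted here:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tyk-k8s-conf
data:
  tyk_k8s.yaml: |-
    addr: ":443"
    certFile: "/etc/tyk-k8s/certs/cert.pem"
    keyFile: "/etc/tyk-k8s/certs/key.pem"
    # Assumed resolved form of the templated dashboard service URL
    url: "http://dashboard-svc-tyk-pro.tyk:3000"
    secret: "set-by-env"
    org: "set-by-env"
    createRoutes: true
```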

I hope that clarifies things a little. I do think there’s a way to get this all working with the open-source gateway without centralising things; I’ll need to do some tinkering on the controller to see what can be achieved.