Gateway health check endpoint

Hi

Is there a gateway health check endpoint, e.g. /healthz, so I can perform liveness and readiness checks within Kubernetes?

Cheers
Tom

Hi Tom, we do have a health check endpoint; check this page.

Best.

Hi Matías

Is this endpoint to check the health of an API added to the gateway or the health of the gateway itself? I’m after the latter of the two.

Thanks for the quick response!

Tom

Do not use this endpoint; it is being deprecated and can cause performance issues!
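(For context: the data gathering behind that endpoint is toggled in tyk.conf. The fragment below is a sketch from memory of the gateway's config fields, so double-check them against the docs for your version; keeping the feature disabled avoids the overhead mentioned above.)

```json
{
  "health_check": {
    "enable_health_checks": false,
    "health_check_value_timeouts": 60
  }
}
```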

To check whether a gateway is running, just send a GET to /hello
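For reference, wiring /hello into the Kubernetes probes Tom asked about looks roughly like this. A sketch only: the port and timing values are assumptions to tune for your own deployment.

```yaml
# Hypothetical probe settings for a Tyk gateway container listening on port 80.
livenessProbe:
  httpGet:
    path: /hello
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /hello
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
```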


Thanks Martin! That is exactly what I am after.

Thanks for your suggestion too, Matías.

Good to know that /tyk/health is being deprecated. Do you maintain a list of upcoming changes such as deprecations, or document them anywhere?

Tom

We are going to kill the healthcheck API dead - it’s been quite problematic :-/

The only reason it is still around is because we really don't like to deprecate things… It will be noted in the docs if it gets dropped, and we'll make sure the change log for any release makes it clear.

Makes sense. We are pumping our logs into Elasticsearch, so we will be able to generate metrics such as average upstream latency and requests per second ourselves when we need them in the future. :slight_smile:

To check if a gateway is running just send a GET to /hello

Is this comment still valid? I only get a 404 page not found when trying to request that endpoint.
The only endpoints "working" are under the /tyk/ "namespace", but those fail with {"status":"error","message":"Forbidden"} since I do not provide auth (which I don't want for the health check).

Yes that endpoint is still available - what version are you running?

According to log: time="Jan 9 16:09:33" level=info msg="Gateway started (v2.4.2)"

I am running in Kubernetes and got it working (somehow).

Then I deleted my namespace to set it up again (this is a test environment; I have been messing around a lot and am trying to make sure the setup is properly documented, so I want to get it right).

Now I have the same problem again, the gateway won’t respond on /hello.

My deployment conf:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tyk-gateway
  namespace: test-tyk
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: tyk-gateway
    spec:
      containers:
      - image: tykio/tyk-gateway:latest
        imagePullPolicy: Always
        name: tyk-gateway
        env:
          - name: REDIGOCLUSTER_SHARDCOUNT
            value: "128"
        command: ["/opt/tyk-gateway/tyk", "--conf=/etc/tyk-gateway/tyk.conf"]
        workingDir: /opt/tyk-gateway
        ports:
        - containerPort: 80
        volumeMounts:
          - name: tyk-gateway-apps
            mountPath: /apps
          - name: tyk-gateway-conf
            mountPath: /etc/tyk-gateway
        readinessProbe:
          httpGet:
            path: /hello
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 3
          successThreshold: 1
          failureThreshold: 3
      volumes:
        - name: tyk-gateway-apps
          gcePersistentDisk:
            pdName: test-tyk-gateway
            fsType: ext4
        - name: tyk-gateway-conf
          configMap:
            name: tyk-gateway-conf
            items:
              - key: tyk.conf
                path: tyk.conf

Even on localhost of the pod I get:

root@tyk-gateway-6bb4955b99-6sgsn:/opt/tyk-gateway# curl http://localhost/hello
404 page not found

Can you pull the logs for the service? It might be that the server is listening but it hasn’t loaded any routes yet.

(still not working a night later)

tyk-gateway  2018-01-09T16:09:33.299911129Z  time="Jan 9 16:09:33" level=info msg="--> PID: 1"
tyk-gateway  2018-01-09T16:09:33.299903615Z  time="Jan 9 16:09:33" level=info msg="--> Listening on port: 80"
tyk-gateway  2018-01-09T16:09:33.299739120Z  time="Jan 9 16:09:33" level=info msg="--> Listening on address: (open interface)"
tyk-gateway  2018-01-09T16:09:33.299619913Z  time="Jan 9 16:09:33" level=info msg="Gateway started (v2.4.2)"
tyk-gateway  2018-01-09T16:09:33.299589982Z  time="Jan 9 16:09:33" level=info msg="Detected 0 APIs"
tyk-gateway  2018-01-09T16:09:33.292990111Z  time="Jan 9 16:09:33" level=warning msg="Insecure configuration detected (allowing)!"
tyk-gateway  2018-01-09T16:09:33.285995705Z  time="Jan 9 16:09:33" level=info msg="Starting gateway rate limiter notifications..."
tyk-gateway  2018-01-09T16:09:33.283222986Z  time="Jan 9 16:09:33" level=info msg="Initialising distributed rate limiter"
tyk-gateway  2018-01-09T16:09:33.283189684Z  time="Jan 9 16:09:33" level=info msg="Node registered" id=b042f158-9381-41c6-4f20-4879eda3d60d
tyk-gateway  2018-01-09T16:09:33.243356236Z  time="Jan 9 16:09:33" level=info msg="Starting Poller"
tyk-gateway  2018-01-09T16:09:33.241290898Z  time="Jan 9 16:09:33" level=info msg="Registering node."
tyk-gateway  2018-01-09T16:09:33.241285595Z  time="Jan 9 16:09:33" level=info msg="Setting up Server"
tyk-gateway  2018-01-09T16:09:33.241279257Z  time="Jan 9 16:09:33" level=info msg="--> Standard listener (http)" port=":80"
tyk-gateway  2018-01-09T16:09:33.241258917Z  time="Jan 9 16:09:33" level=info msg="Initialising Tyk REST API Endpoints"
tyk-gateway  2018-01-09T16:09:33.239398957Z  time="Jan 9 16:09:33" level=info msg="PIDFile location set to: /var/run/tyk-gateway.pid"
tyk-gateway  2018-01-09T16:09:33.219013172Z  time="Jan 9 16:09:33" level=info msg="--> Using clustered mode"

This is all I get in the logs of the gateway pod. (Am I supposed to dig further into some other log?)

I found the “problem”.

Since this was a new instance, I did not have any APIs defined in the dashboard.
I am not sure this is the exact point, but while digging in the source code around here:

It looks like the endpoint /hello is not initialized if no API definitions are available.

Once I added an API pointing to anything (httpbin), the /hello endpoint started working and my health check is now green.
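For anyone hitting the same thing: a minimal keyless API is enough to wake the router, whether added via the Dashboard or as a file-based definition. The snippet below is a sketch from memory of the standard Tyk definition fields (names and values are assumptions; verify against the docs for your version):

```json
{
  "name": "placeholder-httpbin",
  "api_id": "placeholder-httpbin",
  "org_id": "default",
  "use_keyless": true,
  "version_data": {
    "not_versioned": true,
    "versions": {
      "Default": {
        "name": "Default"
      }
    }
  },
  "proxy": {
    "listen_path": "/httpbin/",
    "target_url": "http://httpbin.org/",
    "strip_listen_path": true
  },
  "active": true
}
```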

That’s a good spot - I’ll make sure the gateway reports a health status on bootstrap!

I’ve raised the issue here:


This is now fixed on master, and will be pushed out with our next patch.
