Tyk-pump cannot pump

Hello,
I have Tyk CE on K8s and enabled the pump in the values.yaml file, as below:

...
  extraEnvs:
    - name: TYK_LOGLEVEL
      value: debug
    - name: TYK_GW_ENABLEHASHEDKEYSLISTING
      value: true
    - name: TYK_GW_ENABLEANALYTICS
      value: true
    - name: TYK_DB_USESHARDEDANALYTICS
      value: true  
    - name: TYK_PMP_PUMPS_PROM_TYPE
      value: prometheus
    - name: TYK_PMP_PUMPS_PROM_TIMEOUT
      value: 10
    - name: TYK_PMP_PUMPS_PROM_META_LISTENADDRESS
      value: "prometheus-server.monitoring.svc.cluster.local:80"
    - name: TYK_PMP_PUMPS_PROM_META_PATH
      value: "/metrics"

    - name: TYK_PMP_PUMPS_PROM_ANALYTICSSTORAGETYPE
      value: redis
    - name: TYK_PMP_PUMPS_PROM_ANALYTICSSTORAGECONFIG_HOST
      value: tyk-redis-master.tyk.svc.cluster.local
    - name: TYK_PMP_PUMPS_PROM_ANALYTICSSTORAGECONFIG_PORT
      value: 6379
    - name: TYK_PMP_PUMPS_PROM_ANALYTICSSTORAGECONFIG_PASSWORD
      value: "veryhiddenpassword"
    # - name: TYK_PMP_PUMPS_PROM_ANALYTICSSTORAGECONFIG_DATABASE
    #   value: 0
    - name: TYK_PMP_PUMPS_PROM_ANALYTICSSTORAGECONFIG_MAXIDLE
      value: 100
    - name: TYK_PMP_PUMPS_PROM_ANALYTICSSTORAGECONFIG_MAXACTIVE
      value: 100
    - name: TYK_PMP_PUMPS_PROM_ANALYTICSSTORAGECONFIG_ENABLECLUSTER
      value: false
    - name: TYK_PMP_PUMPS_PROM_ANALYTICSSTORAGECONFIG_MASTERNAME
      value: tyk-redis-master
    - name: TYK_PMP_ANALYTICSSTORAGECONFIG_REDISUSESSL
      value: false
    - name: TYK_PMP_ANALYTICSSTORAGECONFIG_REDISSSLINSECURESKIPVERIFY
      value: true
  mounts: []

# If pump is enabled the Gateway will create and collect analytics data to send
# to a data store of your choice. These can be set up in the pump config. The
# possible pump configs can be found here:
# https://github.com/TykTechnologies/tyk-pump#configuration
pump:
  # Determines whether or not the pump component should be installed.
  enabled: true

  replicaCount: 1
  image:
    repository: docker.tyk.io/tyk-pump/tyk-pump
    tag: v1.4.0
    pullPolicy: IfNotPresent
...

but there is no activity in the logs; it looks like the pump cannot pump.

k logs -n tyk pump-tyk-ce-tyk-headless-6799f5d555-9hvmz -f
time="Oct  5 21:20:02" level=info msg="## Tyk Analytics Pump, 1.4.0 ##"
time="Oct  5 21:20:02" level=info msg="--> [REDIS] Creating single-node client"
time="Oct  5 21:20:02" level=info msg="Serving health check endpoint at http://localhost:8083/health ..."

also there is nothing at prometheus.

Am I missing something? Please advise…

Regards

Hello @tirelibirefe, since you are configuring the pump with Prometheus for the first time, setting ENV variables will not work, as they are only meant to override settings already present in the Tyk Pump config file.

The config file for the pump is defined in a ConfigMap named “pump-conf-tyk-ce-tyk-headless”. Remove the env variables you have set for the pump in the values.yaml file and edit this ConfigMap with the Prometheus config (see the Prometheus pump config documentation), adjusted for your environment.

Once the changes are saved in the ConfigMap, restart the pump pod to pick up the changes, and you should see the Prometheus Pump initialization in the pump pod logs.
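
For example, something along these lines (a rough sketch only; the Deployment name is inferred from the pod name above, so adjust it to your release):

k edit configmap -n tyk pump-conf-tyk-ce-tyk-headless
# restart the pump so it re-reads pump.conf
k rollout restart deployment -n tyk pump-tyk-ce-tyk-headless
k logs -n tyk deployment/pump-tyk-ce-tyk-headless -f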

Hello @Cherry
here is my Prometheus namespace:

k get svc -n monitoring
NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
grafana                         ClusterIP   10.100.92.209    <none>        80/TCP     2d7h
prometheus-alertmanager         ClusterIP   10.100.149.234   <none>        80/TCP     2d7h
prometheus-kube-state-metrics   ClusterIP   10.100.155.134   <none>        8080/TCP   2d7h
prometheus-node-exporter        ClusterIP   None             <none>        9100/TCP   2d7h
prometheus-pushgateway          ClusterIP   10.100.250.177   <none>        9091/TCP   2d7h
prometheus-server               ClusterIP   10.100.205.165   <none>        80/TCP     2d7h

Here is my configmap

apiVersion: v1
data:
  pump.conf: |-
    {
      "analytics_storage_type": "redis",
      "analytics_storage_config": {
        "type": "redis",
        "enable_cluster": false,
        "host": "tyk-redis-master.tyk.svc.cluster.local",
        "username": "",
        "password": "",
        "database": 0,
        "optimisation_max_idle": 2000,
        "optimisation_max_active": 4000
      },
      "purge_delay": 4,
      "pumps": {
        "mongo": {
          "name": "mongo",
          "meta": {
            "collection_name": "tyk_analytics_headless",
            "mongo_url": ""
          }
        },
        "mongo-pump-aggregate": {
          "name": "mongo-pump-aggregate",
          "meta": {
            "mongo_url": "",
            "use_mixed_collection": true
          }
        }
      },
        "prometheus": {
          "type": "prometheus",
            "meta": {
                "listen_address": "prometheus-server.monitoring.svc.cluster.local:80",
                "path": "/metrics"
            }
        },
      "uptime_pump_config": {
        "collection_name": "tyk_uptime_analytics_headless",
        "mongo_url": ""
      },
      "dont_purge_uptime_data": false
    }
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: tyk-ce
    meta.helm.sh/release-namespace: tyk
  creationTimestamp: "2021-10-05T21:20:01Z"
  labels:
    app: pump-tyk-ce-tyk-headless
    app.kubernetes.io/managed-by: Helm
    chart: tyk-headless-0.9.3
    heritage: Helm
    release: tyk-ce
  name: pump-conf-tyk-ce-tyk-headless
  namespace: tyk
  resourceVersion: "3033085"
  uid: ebca7276-77a8-4559-9165-07c5fa9e6016

I edited the ConfigMap and reapplied it, then deleted the pump pod so it was recreated. I also removed the environment variables from the gateway’s values.yaml file.

…and the result:

k logs -n tyk pump-tyk-ce-tyk-headless-6799f5d555-4f6bb -f
time="Oct  6 07:11:11" level=info msg="## Tyk Analytics Pump, 1.4.0 ##"
time="Oct  6 07:11:11" level=info msg="--> [REDIS] Creating single-node client"
time="Oct  6 07:11:11" level=info msg="Serving health check endpoint at http://localhost:8083/health ..."
time="Oct  6 07:11:12" level=info msg=Init collection_name="tyk_analytics_headless" url=
time="Oct  6 07:11:12" level=info msg="-- No max batch size set, defaulting to 10MB"
time="Oct  6 07:11:12" level=info msg="-- No max document size set, defaulting to 10MB"

Nothing changed; the pump pod sits there without pumping.

P.S. Do I not need to fill in the Redis auth information? As you know, Redis has only a password, which is defined in the values.yaml file, and the gateway pods authenticate using it. Does the pump pod read it and use the same information to log on to Redis?

Do I not need to fill in the Redis auth information?

I don’t think you need the Redis auth information in the pump. My Redis section looks similar to yours, so that should be fine.

Regarding your pump config, I noticed that your prometheus section is outside the pumps section. It needs to be a child of the pumps section. Also, if you are using CE, you could remove the mongo configurations. I have dropped a snippet below:

{
  "analytics_storage_type": "redis",
  "analytics_storage_config": {
    "type": "redis",
    "enable_cluster": false,
    "host": "tyk-redis-master.tyk.svc.cluster.local",
    "username": "",
    "password": "",
    "database": 0,
    "optimisation_max_idle": 2000,
    "optimisation_max_active": 4000
  },
  "log_level": "debug",
  "log_format": "text",
  "purge_delay": 4,
  "pumps": {
    "mongo": {
      "name": "mongo",
      "meta": {
        "collection_name": "tyk_analytics_headless",
        "mongo_url": ""
      }
    },
    "mongo-pump-aggregate": {
      "name": "mongo-pump-aggregate",
      "meta": {
        "mongo_url": "",
        "use_mixed_collection": true
      }
    },
    "prometheus": {
      "type": "prometheus",
      "meta": {
        "listen_address": "prometheus-server.monitoring.svc.cluster.local:80",
        "path": "/metrics"
      }
    }
  },
  "uptime_pump_config": {
    "collection_name": "tyk_uptime_analytics_headless",
    "mongo_url": ""
  },
  "dont_purge_uptime_data": true
}

I have also enabled debug logging for more information if errors occur. Let us know how it goes.


Hello @tirelibirefe, I was reproducing this in my cluster, and it turns out that we cannot connect the pump to Prometheus because the pump deployed via the Helm chart currently does not expose any Service endpoint that Prometheus can use to scrape metrics.

Hoping this will be added in an upcoming release!

Feel free to open a GitHub issue in the tyk-helm-chart repo!
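
In the meantime, a Service for the pump pod could be created by hand as a stopgap. This is a minimal sketch only, assuming the pump pods carry the app: pump-tyk-ce-tyk-headless label (as on the ConfigMap above) and the Prometheus pump is bound to :9090 as discussed further down:

apiVersion: v1
kind: Service
metadata:
  name: tyk-pump-metrics            # hypothetical name
  namespace: tyk
spec:
  selector:
    app: pump-tyk-ce-tyk-headless   # assumed pod label, check your Deployment
  ports:
    - name: metrics
      port: 9090
      targetPort: 9090

Prometheus could then scrape tyk-pump-metrics.tyk.svc:9090/metrics.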

Thank you @Cherry, this last comment is very helpful; I was struggling with it…
OK, I will open an issue on GitHub…

You can add a PodMonitor target; there is no need for a Service there.
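
A rough sketch of what that could look like, assuming the Prometheus Operator CRDs are installed, the pump pods carry the app: pump-tyk-ce-tyk-headless label, and the pump container exposes port 9090 under the name metrics (all assumptions to verify against your manifests):

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: tyk-pump                      # hypothetical name
  namespace: tyk
spec:
  selector:
    matchLabels:
      app: pump-tyk-ce-tyk-headless   # assumed pod label
  podMetricsEndpoints:
    - port: metrics                   # named container port serving :9090
      path: /metrics

Note that this only applies if Prometheus runs via the Operator (e.g. kube-prometheus-stack); the plain prometheus chart shown earlier would need a scrape config instead.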

This looks promising. But the address in the Prometheus pump config is the one the pump needs to bind to, so it is incorrect to use the Prometheus service there… it should be just :9090 or 0.0.0.0:9090.
That is the endpoint Prometheus will scrape from.

Now I did that and it started, at least. But the logs do not show any activity.
How can I know if it’s working?

Thanks!

Hey @brahama_von,

But the address in the Prometheus pump config is the one the pump needs to bind to, so it is incorrect to use the Prometheus service there… it should be just :9090 or 0.0.0.0:9090.
That is the endpoint Prometheus will scrape from.

Yes, this is correct.

When you make an API call, you should see entries like this in pump’s logs. You would need to set log_level to “debug” to see all but the last line.
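
For instance, a single proxied request like the one behind the decoded record below is enough to produce a batch on the next purge cycle (the listen address and path are taken from that record, so adjust them to your own API):

curl http://localhost:8080/anything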

`time="Nov 03 11:45:01" level=debug msg="Unpacked vals: 1" prefix=redis`

`time="Nov 03 11:45:01" level=debug msg="Decoded Record: {GET httpbin.org /anything /anything 0 curl/7.79.1 3 November 2022 11 200 00000000 2022-11-03 11:45:00.471233 +0000 UTC Non Versioned HTTPBin f84c14977fa94f39665db0949d1db1e0 5e9d9544a1dcd60001d0ed20  417 R0VUIC9hbnl0aGluZyBIVFRQLzEuMQ0KSG9zdDogbG9jYWxob3N0OjgwODANClVzZXItQWdlbnQ6IGN1cmwvNy43OS4xDQpBY2NlcHQ6ICovKg0KDQo= SFRUUC8xLjEgMjAwIE9LDQpDb250ZW50LUxlbmd0aDogMzgzDQpBY2Nlc3MtQ29udHJvbC1BbGxvdy1DcmVkZW50aWFsczogdHJ1ZQ0KQWNjZXNzLUNvbnRyb2wtQWxsb3ctT3JpZ2luOiAqDQpDb250ZW50LVR5cGU6IGFwcGxpY2F0aW9uL2pzb24NCkRhdGU6IFRodSwgMDMgTm92IDIwMjIgMTE6NDU6MDAgR01UDQpTZXJ2ZXI6IGd1bmljb3JuLzE5LjkuMA0KWC1SYXRlbGltaXQtTGltaXQ6IDANClgtUmF0ZWxpbWl0LVJlbWFpbmluZzogMA0KWC1SYXRlbGltaXQtUmVzZXQ6IDANCg0KewogICJhcmdzIjoge30sIAogICJkYXRhIjogIiIsIAogICJmaWxlcyI6IHt9LCAKICAiZm9ybSI6IHt9LCAKICAiaGVhZGVycyI6IHsKICAgICJBY2NlcHQiOiAiKi8qIiwgCiAgICAiQWNjZXB0LUVuY29kaW5nIjogImd6aXAiLCAKICAgICJIb3N0IjogImh0dHBiaW4ub3JnIiwgCiAgICAiVXNlci1BZ2VudCI6ICJjdXJsLzcuNzkuMSIsIAogICAgIlgtQW16bi1UcmFjZS1JZCI6ICJSb290PTEtNjM2M2E5YmMtN2Q2YmRkNTM0MzUzZGE5YzQxZWE4ZDZjIgogIH0sIAogICJqc29uIjogbnVsbCwgCiAgIm1ldGhvZCI6ICJHRVQiLCAKICAib3JpZ2luIjogIjE3Mi4yMC4wLjEsIDQxLjczLjEuNjkiLCAKICAidXJsIjogImh0dHA6Ly9odHRwYmluLm9yZy9hbnl0aGluZyIKfQo= 172.20.0.1 {{} {0 map[]} {0 0 }} {0 0 0 0} {417 406} [key-00000000 org-5e9d9544a1dcd60001d0ed20 api-f84c14977fa94f39665db0949d1db1e0]  false 2022-11-10 11:45:00.486679 +0000 UTC }" prefix=main`

`time="Nov 03 11:45:01" level=debug msg="Writing to: Prometheus Pump" prefix=main`

`time="Nov 03 11:45:01" level=debug msg="Attempting to write 1 records..." prefix=prometheus-pump`

`time="Nov 03 11:45:01" level=debug msg="Processing metric:tyk_http_status" prefix=prometheus-pump`

`time="Nov 03 11:45:01" level=debug msg="Processing metric:tyk_http_status_per_path" prefix=prometheus-pump`

`time="Nov 03 11:45:01" level=debug msg="Processing metric:tyk_http_status_per_key" prefix=prometheus-pump`

`time="Nov 03 11:45:01" level=debug msg="Processing metric:tyk_http_status_per_oauth_client" prefix=prometheus-pump`

`time="Nov 03 11:45:01" level=debug msg="Processing metric:tyk_latency" prefix=prometheus-pump`

`time="Nov 03 11:45:01" level=debug msg="Processing metric:tyk_http_requests_total" prefix=prometheus-pump`

`time="Nov 03 11:45:01" level=debug msg="Processing metric:tyk_http_latency" prefix=prometheus-pump`

`time="Nov 03 11:45:01" level=info msg="Purged 1 records..." prefix=prometheus-pump`

Thanks @Ubong for that. I am not seeing anything. I see the logs in the Gateway, but then nothing in the pump. Also, the only metrics I get when I curl pump:9090 are promhttp_ and go_ ones, none with tyk_.
So it might seem like the pump is not pulling anything from the GW? How can I check that? Do I need to enable something in the GW to generate those metrics?

My pump config is

{
  "log_level": "debug",
  "log_format":"text",
  "analytics_storage_type": "redis",
  "analytics_storage_config": {
    "type": "redis",
    "enable_cluster": false,
    "host": "tyk-redis-master.tyk.svc:6379",
    "username": "",
    "password": "",
	  "database": 0,
    "optimisation_max_idle": 2000,
    "optimisation_max_active": 4000
  },
  "dont_purge_uptime_data": true,
  "purge_delay": 60,
  "pumps": {
    "prometheus": {
      "type": "prometheus",
      "meta": {
        "listen_address": "0.0.0.0:9090",
        "path": "/metrics",
        "custom_metrics":[
          {
              "name":"tyk_http_requests_total",
              "description":"Total of API requests",
              "metric_type":"counter",
              "labels":["response_code","api_name","method","api_key","alias","path"]
          },
          {
              "name":"tyk_http_latency",
              "description":"Latency of API requests",
              "metric_type":"histogram",
              "labels":["type","response_code","api_name","method","api_key","alias","path"]
          }
        ]
      }
    }
  },
  "uptime_pump_config": {
    "collection_name": "tyk_uptime_analytics_headless"
  }
}

My pump logs

time="Nov  3 01:32:29" level=info msg="## Tyk Analytics Pump, 1.6.0 ##"
time="Nov  3 01:32:29" level=error msg="Instrumentation is enabled, but no connectionstring set for statsd"
time="Nov  3 01:32:29" level=debug msg="Connecting to redis cluster"
time="Nov  3 01:32:29" level=debug msg="Creating new Redis connection pool"
time="Nov  3 01:32:29" level=info msg="--> [REDIS] Creating single-node client"
time="Nov  3 01:32:29" level=debug msg="[STORE] SET Raw key is: pump"
time="Nov  3 01:32:29" level=debug msg="Input key was: version-check-pump"
time="Nov  3 01:32:29" level=debug msg="[STORE] Setting key: version-check-pump"
time="Nov  3 01:32:29" level=debug msg="Input key was: version-check-pump"
time="Nov  3 01:32:29" level=info msg="Serving health check endpoint at http://localhost:8083/health ..."
time="Nov  3 01:32:29" level=debug msg="Checking default Prometheus Pump env variables with prefix TYK_PMP_PUMPS_PROMETHEUS"
time="Nov  3 01:32:29" level=info msg="Starting prometheus listener on:0.0.0.0:9090"
time="Nov  3 01:32:29" level=info msg="Prometheus Pump Initialized"
time="Nov  3 01:32:29" level=info msg="Init Pump: prometheus"
time="Nov  3 01:32:29" level=info msg=Init collection_name= url=
time="Nov  3 01:32:29" level=debug msg="Checking MongoDB Pump env variables with prefix TYK_PMP_PUMPS_MONGO_META"
time="Nov  3 01:32:29" level=info msg="-- No max batch size set, defaulting to 10MB"
time="Nov  3 01:32:29" level=info msg="-- No max document size set, defaulting to 10MB"

That’s it. It stays there forever.

Thanks a lot! :slight_smile:

Hmm.

Your purge_delay is quite high. The analytics records in Redis may have expired and been removed before the pump’s cycle ends/starts, so it sees nothing to pick up.
Try something much lower… 5?

In your analytics config, you might want to define the port separately from the host:

"analytics_storage_config": {
    "type": "redis",
    "enable_cluster": false,
    "host": "tyk-redis-master.tyk.svc",
    "port": 6379
    "username": "",
    "password": "",
	  "database": 0,
    "optimisation_max_idle": 2000,
    "optimisation_max_active": 4000
  },

Although, if the connection to Redis were not successful, you would see a message like the one below. Worth checking, though.

time="Nov  3 13:10:13" level=error msg="Multi command failed: dial tcp 127.0.0.1:6379: connect: connection refused"

Lastly, there’s a known bug in pump v1.6.0 with custom metrics. Use this instead for now:

"custom_metrics":[
    {
        "name":"tyk_http_requests_total",
        "description":"Total of API requests",
        "metric_type":"counter",
        "labels":["response_code","api_name","method","api_key","path"]
    },
    {
        "name":"tyk_http_latency",
        "description":"Latency of API requests",
        "metric_type":"histogram",
        "labels":["response_code","api_name","method","api_key","path"]
    }
]

Thanks @Ubong

Yes, for the purge delay I already tried 2 and 4, and have now set 60, but same results. No connection errors, but I will try all of that again and come back with feedback…

Thanks for your detailed answers :slight_smile:

Still the same issue. I tried with 2 seconds. I am running a while loop doing curl, and I see that a key is generated in Redis:

tyk-redis-master:6379> keys *
1) "tyk-liveness-probe"
2) "version-check-pump"
3) "host-checker:PollerActiveInstanceID"
4) "apikey-c12dd6c19e2ea6c"
5) "redis-test-c72a19fe-274d-4613-88fe"
6) "apikey-6604a0196c949e331"
7) "apikey-4dc21d5c51fe66"
8) "analytics-tyk-system-analytics"

But the pump logs stay the same:

time="Nov  3 16:01:09" level=info msg="## Tyk Analytics Pump, 1.6.0 ##"
time="Nov  3 16:01:09" level=error msg="Instrumentation is enabled, but no connectionstring set for statsd"
time="Nov  3 16:01:09" level=debug msg="Connecting to redis cluster"
time="Nov  3 16:01:09" level=debug msg="Creating new Redis connection pool"
time="Nov  3 16:01:09" level=info msg="--> [REDIS] Creating single-node client"
time="Nov  3 16:01:09" level=debug msg="[STORE] SET Raw key is: pump"
time="Nov  3 16:01:09" level=info msg="Serving health check endpoint at http://localhost:8083/health ..."
time="Nov  3 16:01:09" level=debug msg="Input key was: version-check-pump"
time="Nov  3 16:01:09" level=debug msg="[STORE] Setting key: version-check-pump"
time="Nov  3 16:01:09" level=debug msg="Input key was: version-check-pump"
time="Nov  3 16:01:09" level=debug msg="Checking default Prometheus Pump env variables with prefix TYK_PMP_PUMPS_PROMETHEUS"
time="Nov  3 16:01:09" level=info msg="Starting prometheus listener on::9090"
time="Nov  3 16:01:09" level=info msg="Prometheus Pump Initialized"
time="Nov  3 16:01:09" level=info msg="Init Pump: prometheus"
time="Nov  3 16:01:09" level=info msg=Init collection_name= url=
time="Nov  3 16:01:09" level=debug msg="Checking MongoDB Pump env variables with prefix TYK_PMP_PUMPS_MONGO_META"
time="Nov  3 16:01:09" level=info msg="-- No max batch size set, defaulting to 10MB"
time="Nov  3 16:01:09" level=info msg="-- No max document size set, defaulting to 10MB"

Perhaps there is something obvious in the logs. For instance, among those SETs I see the version one (version-check-pump), but I do not see the pump one.
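
(As an aside, a rough way to see whether records pile up in Redis and whether the pump ever drains them, using the analytics key name from the listing above:)

tyk-redis-master:6379> TYPE analytics-tyk-system-analytics
tyk-redis-master:6379> LLEN analytics-tyk-system-analytics

If the length grows with traffic but never drops, the pump is probably not purging; if it stays at zero, the gateway side is likely the problem.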

Thanks!

Just in case, these are the logs from the Gateway!

time="Nov 03 22:18:01" level=debug msg="Using /etc/tyk-gateway/tyk.conf for configuration" prefix=main
time="Nov 03 22:18:01" level=info msg="Tyk API Gateway 4.2.1" prefix=main
time="Nov 03 22:18:01" level=warning msg="Insecure configuration allowed" config.allow_insecure_configs=true prefix=checkup
time="Nov 03 22:18:01" level=warning msg="AnalyticsConfig.PoolSize unset. Defaulting to number of available CPUs" prefix=checkup runtime.NumCPU=8
time="Nov 03 22:18:01" level=warning msg="AnalyticsConfig.RecordsBufferSize < minimum - Overriding" minRecordsBufferSize=1000 prefix=checkup
time="Nov 03 22:18:01" level=warning msg="AnalyticsConfig.StorageExpirationTime is 0, defaulting to 60s" prefix=checkup storageExpirationTime=0
time="Nov 03 22:18:01" level=error msg="Could not set version in versionStore" error="storage: Redis is either down or was not configured" prefix=main
time="Nov 03 22:18:01" level=debug msg="Setting up analytics DB connection" prefix=main
time="Nov 03 22:18:01" level=debug msg="Analytics pool worker buffer size" workerBufferSize=125
time="Nov 03 22:18:01" level=debug msg="No Primary instance found, assuming control" prefix=host-check-mgr
[Nov 03 22:18:01] DEBUG Using serializer msgpack for analytics

time="Nov 03 22:18:01" level=error msg="cannot set key in pollerCacheKey" error="storage: Redis is either down or was not configured"
time="Nov 03 22:18:01" level=info msg="Starting Poller" prefix=host-check-mgr
time="Nov 03 22:18:01" level=debug msg="---> Initialising checker" prefix=host-check-mgr
time="Nov 03 22:18:01" level=debug msg="[HOST CHECKER] Config:TriggerLimit: 3"
time="Nov 03 22:18:01" level=debug msg="[HOST CHECKER] Config:Timeout: ~10"
time="Nov 03 22:18:01" level=debug msg="[HOST CHECKER] Config:WorkerPool: 8"
time="Nov 03 22:18:01" level=debug msg="[HOST CHECKER] Init complete"
time="Nov 03 22:18:01" level=debug msg="---> Starting checker" prefix=host-check-mgr
time="Nov 03 22:18:01" level=debug msg="[HOST CHECKER] Starting..."
time="Nov 03 22:18:01" level=debug msg="[HOST CHECKER] Check loop started..."
time="Nov 03 22:18:01" level=debug msg="[HOST CHECKER] Host reporter started..."
time="Nov 03 22:18:01" level=debug msg="---> Checker started." prefix=host-check-mgr
time="Nov 03 22:18:01" level=debug msg="Starting routine for flushing network analytics" prefix=main
time="Nov 03 22:18:01" level=debug msg="Notifier will not work in hybrid mode" prefix=main
time="Nov 03 22:18:01" level=info msg="PIDFile location set to: /mnt/tyk-gateway/tyk.pid" prefix=main
time="Nov 03 22:18:01" level=error msg="Instrumentation is enabled, but no connectionstring set for statsd"
time="Nov 03 22:18:01" level=warning msg="The control_api_port should be changed for production" prefix=main
time="Nov 03 22:18:01" level=debug msg="Initialising default org store" prefix=main
time="Nov 03 22:18:01" level=error msg="Connection to Redis failed, reconnect in 10s" error="storage: Redis is either down or was not configured" prefix=pub-sub
time="Nov 03 22:18:01" level=info msg="Initialising Tyk REST API Endpoints" prefix=main
time="Nov 03 22:18:01" level=debug msg="Creating new Redis connection pool"
time="Nov 03 22:18:01" level=info msg="--> [REDIS] Creating single-node client"
time="Nov 03 22:18:01" level=debug msg="Loaded API Endpoints" prefix=main
time="Nov 03 22:18:01" level=info msg="--> Standard listener (http)" port=":8080" prefix=main
time="Nov 03 22:18:01" level=warning msg="Starting HTTP server on:[::]:8080" prefix=main
time="Nov 03 22:18:01" level=info msg="Initialising distributed rate limiter" prefix=main
time="Nov 03 22:18:01" level=debug msg="DRL: Setting node ID: solo-be11ce5e-c364-4aa9-9822-95edd02496d8|gateway-tyk-tyk-headless-776876b5fc-5kvqz"
time="Nov 03 22:18:01" level=info msg="Tyk Gateway started (4.2.1)" prefix=main
time="Nov 03 22:18:01" level=info msg="--> Listening on address: (open interface)" prefix=main
time="Nov 03 22:18:01" level=info msg="--> Listening on port: 8080" prefix=main
time="Nov 03 22:18:01" level=info msg="--> PID: 1" prefix=main
time="Nov 03 22:18:01" level=info msg="Loading policies" prefix=main
time="Nov 03 22:18:01" level=info msg="Starting gateway rate limiter notifications..."
time="Nov 03 22:18:01" level=info msg="Policies found (1 total):" prefix=main

I’ve been at this for the last two days, and it seems like some connection in the docs is missing… I will try destroying and recreating.

So my setup is pretty simple: one GW (open source), Redis, and Pump, all in K8s running from the Tyk Helm chart with those settings.

OK, finally I have something working. It looks like the Helm chart introduces some misconfigurations. I ended up setting my own pump.conf with the values I needed.
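
For anyone landing here later, this is roughly the shape of pump.conf the thread converges on, Redis storage plus the Prometheus pump only (hosts, password, and ports are placeholders for your environment):

{
  "log_level": "debug",
  "analytics_storage_type": "redis",
  "analytics_storage_config": {
    "type": "redis",
    "host": "tyk-redis-master.tyk.svc.cluster.local",
    "port": 6379,
    "password": "",
    "database": 0,
    "optimisation_max_idle": 2000,
    "optimisation_max_active": 4000
  },
  "purge_delay": 5,
  "dont_purge_uptime_data": true,
  "pumps": {
    "prometheus": {
      "type": "prometheus",
      "meta": {
        "listen_address": ":9090",
        "path": "/metrics"
      }
    }
  }
}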

Hey @brahama_von,

are you good now?

Hey @Ubong! Yes… sorry, I was not allowed to add a new reply, so I added the line at the end of my previous post.

Yes, it’s working, and I raised a few issues in the Helm chart repo. There is a bug there, and also the custom metrics one that is being fixed in the next release.

Cheers!


Hey @brahama_von, reviving this old thread since I’m facing the exact same issue as you. Do you remember how you ended up resolving this?