Tyk-pump not pumping

Running tyk, redis, and tyk-pump in kubernetes, each in their own pods. It's all running in a single-node master/minion config, so there is no cross-node issue. Tyk is working fine and communicating with redis. I know this because I can see the connection with netstat, and when the redis pod is shut down tyk proxies noticeably slower and stops enforcing rate limiting and quotas.

Here is my tyk config:

"storage": {
    "type": "redis",
    "host": "tyk-redis",
    "port": 6379,
    "username": "",
    "password": "",
    "database": 0,
    "optimisation_max_idle": 100
}

Tyk-pump's log complains that it can't connect to redis, and it isn't creating the CSV file. Here are the log and config.

2016-12-05T17:42:13.792232606Z time="Dec 5 17:42:13" level=info msg="## Tyk Analytics Pump, v0.3.0.0 ##"
2016-12-05T17:42:13.792492381Z time="Dec 5 17:42:13" level=info msg="Init Pump: CSV Pump"
2016-12-05T17:42:13.792705732Z time="Dec 5 17:42:13" level=info msg="Starting purge loop @10(s)"
2016-12-05T17:42:23.800919022Z time="Dec 5 17:42:23" level=warning msg="Connection dropped, connecting..."

{
    "analytics_storage_type": "redis",
    "analytics_storage_config": {
        "type": "redis",
        "host": "tyk-redis",
        "port": 6379,
        "hosts": null,
        "username": "",
        "password": "",
        "database": 0,
        "optimisation_max_idle": 100,
        "optimisation_max_active": 0,
        "enable_cluster": false
    },
    "purge_delay": 10,
    "pumps": {
        "csv": {
            "name": "csv",
            "meta": {
                "csv_dir": "./"
            }
        }
    },
    "dont_purge_uptime_data": true
}

Started with:

/opt/tyk-pump/tyk-pump -c /opt/tyk-pump/provisionPump/pump.conf

Can you see anything wrong with the config, or do you know anything else that might be causing the problem?

Hi Troy,

I suspect that your tyk.conf file may not have been configured correctly. This user received a similar error message and was able to solve the problem by setting the enable_analytics value to "true".
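For reference, a minimal sketch of the relevant tyk.conf fragment (field names per the Tyk gateway config reference; all other gateway settings omitted):

```json
{
    "enable_analytics": true
}
```

With this set, the gateway writes analytics records into redis, which is what the pump then reads and purges.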

Let me know if that fixes the problem or if you run into any further issues.

Kind regards,
Jess @ Tyk

Yep, I had already read that whole post and made that change prior to posting the above. It fixed the mongo log messages I was seeing, but not the redis connect warning, and still no csv file.

I don’t have anything in the hosts config property like he did and didn’t see that explained in the tyk-pump config documentation. Could that be the problem?

Added a hosts entry and set clustering to true, but I'm guessing this is merely to specify multiple redis nodes, which isn't necessary in Kubernetes since it automagically load balances nodes of a service. Here is the resulting log, but still no csv file being produced.

time="Dec 6 14:13:31" level=info msg="## Tyk Analytics Pump, v0.3.0.0 ##"
time="Dec 6 14:13:31" level=info msg="Init Pump: CSV Pump"
time="Dec 6 14:13:31" level=info msg="Starting purge loop @10(s)"
time="Dec 6 14:13:41" level=warning msg="Connection dropped, connecting..."
time="Dec 6 14:13:41" level=info msg="--> Using clustered mode"

Interestingly, the redis hosts setting requires the port as a string, while the host/port pair requires the port as an integer.
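To spell out the type quirk described above, here are the two forms side by side inside analytics_storage_config (values taken from the config earlier in the thread):

```json
{
    "host": "tyk-redis",
    "port": 6379,
    "hosts": {
        "tyk-redis": "6379"
    }
}
```

Note the port is the integer 6379 in the port field, but the string "6379" when used as a value in the hosts map.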

Still can’t get this to work. I have the tyk and tyk-pump redis host and port set identically.

Hi Troy,

I’ve looked into this a little more and the warning message you’re getting in the terminal is actually expected behaviour for Tyk Pump logs and should indicate that everything is fine. As you’ve correctly determined, the hosts value only needs to be set for Redis clusters (there’s an article on how to set this up here if you’re interested but you shouldn’t need to do this for a master/minion configuration).

Now I realise that you've already read the link I sent you earlier, but as you're still unable to generate CSV files, could you please confirm whether Tyk Pump definitely has access to the directory specified in the csv_dir setting before I escalate this to one of my colleagues?

Many thanks,

Jess @ Tyk

Yes, the user running tyk owns the dir and has 777 access. I set csv_dir to the absolute path /opt/tyk-pump, which is the WORKDIR tyk-pump runs from.

I’m now seeing this in the log at the end.

time="Dec 6 15:23:49" level=error msg="Multi command failed: EOF"

Pod details

tycollinsworth1ro:tyk tycollinsworth$ kubectl get po
NAME                     READY     STATUS    RESTARTS   AGE
gs-query-service-95km1   1/1       Running   0          12d
nginx-ingress-rc-0q8kb   1/1       Running   0          12d
tyk-7gngu                1/1       Running   0          1m
tyk-pump-alwvw           1/1       Running   5          4m
tyk-redis-lm3ys          1/1       Running   0          1m

tycollinsworth1ro:tyk tycollinsworth$ kubectl exec tyk-pump-alwvw -c tyk-pump -i -t -- bash -il

root@tyk-pump-alwvw:/opt/tyk-pump# ls -al
total 14376
drwxrwxrwx 4 root root 4096 Dec 6 15:09 .
drwxr-xr-x 7 root root 4096 Dec 6 15:08 ..

root@tyk-pump-alwvw:/opt/tyk-pump/provisionPump# ps aux | grep pump
root 1 0.0 0.0 17976 1468 ? Ss Dec05 0:00 /bin/bash /opt/tyk-pump/startPump.sh
root 7 0.0 0.2 294096 10604 ? Sl Dec05 0:07 /opt/tyk-pump/tyk-pump -c /opt/tyk-pump/provisionPump/pump.conf

pump.conf

{
    "analytics_storage_type": "redis",
    "analytics_storage_config": {
        "type": "redis",
        "host": "tyk-redis",
        "port": 6379,
        "hosts": {
            "tyk-redis": "6379"
        },
        "username": "",
        "password": "",
        "database": 0,
        "optimisation_max_idle": 100,
        "optimisation_max_active": 0,
        "enable_cluster": false
    },
    "purge_delay": 10,
    "pumps": {
        "csv": {
            "name": "csv",
            "meta": {
                "csv_dir": "/opt/tyk-pump"
            }
        }
    },
    "dont_purge_uptime_data": true
}

Dockerfile

FROM tykio/tyk-pump-docker-pub
MAINTAINER Troy Collinsworth

# install gettext for envsubst
RUN apt-get update && apt-get install -y gettext-base && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

WORKDIR /opt/tyk-pump

RUN chmod 777 /opt/tyk-pump

COPY provisionPump /opt/tyk-pump/provisionPump

COPY provisionPump/pump.conf /opt/tyk-pump/pump.conf

COPY ./scripts/startPump.sh /opt/tyk-pump/startPump.sh
RUN chmod u+x /opt/tyk-pump/startPump.sh

CMD ["/opt/tyk-pump/startPump.sh"]

EXPOSE 8080

Hi Troy,

One of my colleagues wants to replicate this issue. Are you able to provide the steps taken to set up the pods you used in Kubernetes?

Many thanks,

Jess @ Tyk

Absolutely, below should be everything you need.

***** below are masked intentionally

tyk-pump.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: tyk-pump
  labels:
    app: tyk-pump
spec:
  replicas: 1
  selector:
    app: tyk-pump
  template:
    metadata:
      name: tyk-pump
      labels:
        app: tyk-pump
    spec:
      containers:
      - name: tyk-pump
        image: *.com//tykpump:dev
        imagePullPolicy: Always
        #ports:
        #- # name: http
        #  containerPort: 8080
      imagePullSecrets:
      - name: *********

tyk-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: tyk
  labels:
    app: tyk
spec:
  replicas: 1
  selector:
    app: tyk
  template:
    metadata:
      name: tyk
      labels:
        app: tyk
    spec:
      containers:
      - name: tyk
        image: .com/**/tyk:dev
        imagePullPolicy: Always
        ports:
        - # name: http
          containerPort: 8080
      imagePullSecrets:
      - name: **********

tyk-redis-rc.yaml (test, not for production)

apiVersion: v1
kind: ReplicationController
metadata:
  name: tyk-redis
  labels:
    app: tyk-redis
spec:
  replicas: 1
  selector:
    app: tyk-redis
  template:
    metadata:
      name: tyk-redis
      labels:
        app: tyk-redis
    spec:
      containers:
      - name: tyk-redis
        image: redis:3.2.5-alpine
        imagePullPolicy: Always
        ports:
        - # name: redis
          containerPort: 6379
      imagePullSecrets:
      - name: **********

tyk-redis-svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: tyk-redis
  labels:
    app: tyk-redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP
  selector:
    app: tyk-redis

tyk-svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: tyk-svc
  labels:
    app: tyk-svc
spec:
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: tyk

reset_kube script

#!/bin/bash

kubectl delete -f deployment/gs-query-service-ingress.yaml
kubectl delete -f deployment/tyk-pump.yaml
kubectl delete -f deployment/tyk-rc.yaml
kubectl delete -f deployment/tyk-redis-rc.yaml
kubectl delete -f deployment/tyk-config.yaml
kubectl delete -f deployment/tyk-svc.yaml
kubectl delete -f deployment/tyk-redis-svc.yaml

kubectl create -f deployment/tyk-config.yaml
kubectl create -f deployment/tyk-redis-svc.yaml
kubectl create -f deployment/tyk-redis-rc.yaml
kubectl create -f deployment/tyk-svc.yaml
kubectl create -f deployment/tyk-rc.yaml
kubectl create -f deployment/gs-query-service-ingress.yaml
kubectl create -f deployment/tyk-pump.yaml

Hi Troy,

As Jess said, this warning is nothing to worry about; if there were a redis connection error, there would be a red angry error stating the return message from the redis server/driver.

This needs to be set to a value, though AFAIK it may fall back to a default if set to 0.

Correct, this setting is specifically if you are using a redis cluster - it is a very different configuration to load balancing.

That's a bit more serious: it means that Tyk Pump could not connect to the redis instance, or that it was connected but redis terminated the connection and returned nil.

If the pump is working, you will see it "purging x records".

I would suggest lowering the purge delay to, say, 2 seconds or so (easier to debug). You can check whether Tyk Pump is connected to redis by killing redis; the pump should start spouting errors.
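For example, the suggested debugging change in pump.conf would just be (all other keys unchanged):

```json
{
    "purge_delay": 2
}
```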

I noticed earlier you said that redis is load balanced. If your redis service is load balanced, there is a possibility that Tyk Pump is purging from an empty node, because redis does not load balance unless you are using a redis cluster in clustered mode.

M.

Hi Troy,

I think we know what the issue is with the pump here: if you are using one of the newer releases (0.4), there was an issue with the settings map, so purge_delay needed to be purgedelay, etc. This has been fixed in the latest patch. This would also have affected optimisation_max_active.
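If I've understood the settings-map bug correctly, a pump.conf for the affected 0.4 release would hypothetically need the underscore-less key, e.g. (and presumably the same renaming would apply to optimisation_max_active; this is my reading of the description above, not a documented format):

```json
{
    "purgedelay": 10
}
```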

Let us know if updating helps.

M.

Thanks, probably won’t get to testing this till Monday. I’ll let you know.