Running tyk, redis, and tyk-pump in Kubernetes, each in their own pods. It’s all running in a single-node master/minion config, so there is no cross-node issue. Tyk is working fine and communicating with redis. I know this because I can see the connection with netstat, and Tyk proxies noticeably slower and stops enforcing rate limiting and quotas when the redis pod is shut down.
I suspect that your tyk.conf file may not have been configured correctly. This user received a similar error message and was able to solve the problem by setting the enable_analytics value to “true”.
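For reference, the relevant fragment of tyk.conf looks something like this (a minimal sketch, with all other keys omitted):

```json
{
  "enable_analytics": true
}
```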
Let me know if that fixes the problem or if you run into any further issues.
Yep, I had already read that whole post and made that change prior to posting the above. It fixed the mongo log messages I was seeing, but not the redis connect warning, and still no csv file.
I don’t have anything in the hosts config property like he did, and I didn’t see that explained in the tyk-pump config documentation. Could that be the problem?
Added a hosts entry and set clustering to true, but I’m guessing this is merely to specify multiple redis nodes, which isn’t necessary in Kubernetes since it automagically load balances nodes of a service. Here is the resulting log file, but still no csv file is being produced.
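Here’s roughly what I added under analytics_storage_config in pump.conf; the hosts map shape (hostname to port string) is my best reading of the docs, and the node names are placeholders for my redis service:

```json
{
  "analytics_storage_type": "redis",
  "analytics_storage_config": {
    "type": "redis",
    "enable_cluster": true,
    "hosts": {
      "redis-node-1": "6379",
      "redis-node-2": "6379"
    }
  }
}
```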
I’ve looked into this a little more and the warning message you’re getting in the terminal is actually expected behaviour for Tyk Pump logs and should indicate that everything is fine. As you’ve correctly determined, the hosts value only needs to be set for Redis clusters (there’s an article on how to set this up here if you’re interested but you shouldn’t need to do this for a master/minion configuration).
Now I realise that you’ve already read the link I sent you earlier, but as you’re still unable to generate CSV files, could you please confirm whether Tyk Pump definitely has access to the directory specified in the csv_dir setting before I escalate this to one of my colleagues?
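For comparison, a working CSV pump section in pump.conf would look something like the sketch below. The csv_dir value is a placeholder path, and depending on your pump version the selector key may be “name” rather than “type”; whichever directory you point it at must already exist and be writable by the pump process inside its pod:

```json
{
  "pumps": {
    "csv": {
      "type": "csv",
      "meta": {
        "csv_dir": "./csv-export"
      }
    }
  }
}
```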
As Jess said, this warning is nothing to worry about; if there were a redis connection error, there would be a red angry error stating the return message from the redis server/driver.
This needs to be set to a value, though AFAIK it may fall back to a default if set to 0.
Correct, this setting is specifically for when you are using a redis cluster; it is a very different configuration from load balancing.
That’s a bit more serious: it means that Tyk could not connect to the redis instance, or that it was connected but redis terminated the connection and returned nil.
If the pump is working, you will see it “purging x records”.
I would suggest setting the purge delay lower, to say 2 seconds or so, to make this easier to debug. You can then check whether Tyk Pump is connected to redis by killing redis; the pump should start spouting errors.
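Concretely, that just means something like this at the top level of pump.conf (purge_delay is in seconds):

```json
{
  "purge_delay": 2
}
```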
I noticed earlier you said that redis is load balanced. Is your redis actually sitting behind a load-balanced service? If so, there is a possibility that Tyk Pump is purging from an empty node: a plain load balancer just distributes connections between independent redis instances, and redis does not share data between nodes unless you are using a redis cluster in clustered mode.
I think we know what the issue is with the pump here: if you are using one of the newer releases (0.4), there was an issue with the settings map, so purge_delay needed to be purgedelay, etc. This has been fixed in the latest patch. It would also have affected optimisation_max_active.
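So on the affected 0.4 release, the workaround would have been the underscore-less key, e.g. (a sketch based on the above; check your exact pump version before relying on this):

```json
{
  "purgedelay": 2
}
```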