Imported Google Group message. Import Date: 2016-01-19 21:20:42 +0000.
Sender: Tor Inge Skaar.
Date: Wednesday, 19 August 2015 11:21:50 UTC+1.
In my setup I’ve daemonized three Tyk processes on the same host behind an nginx reverse proxy. They all use the same tyk.conf, and hence the same Redis and MongoDB instances (I use --port as an input argument when starting tyk to override the port setting in tyk.conf, so each process listens on a different port). Redis and Mongo run on the same host.
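For reference, a minimal nginx sketch of this layout might look like the following (the ports and addresses are illustrative assumptions, not copied from my actual config):

```nginx
# Hypothetical ports -- each Tyk process is started with --port=<n>
upstream tyk_gateways {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
}

server {
    listen 80;
    location / {
        proxy_pass http://tyk_gateways;
    }
}
```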
Now, this seems to work just fine, and by tailing my nginx log I can see that nginx load-balances the requests between the three Tyk processes.
However, is there a potential conflict when multiple Tyk processes use the same Redis and MongoDB instance? And are there any configuration options that I need to consider? Btw, I’m also running Tyk Dashboard on the same host.
Happy to provide the entire configuration if needed.
Tor Inge Skaar
Imported Google Group message.
Date: Wednesday, 19 August 2015 11:41:28 UTC+1.
The Tyk gateway process never writes to MongoDB except when it is flushing analytics to it. That flush happens via a Redis transaction, so only one set of data is ever written (no duplication) and all three processes can write at the same time. The records may end up slightly out of order, but since all the views are aggregates this doesn’t affect the dashboard. Everything else is read-only.
If you want to avoid this, you would need to split the three processes out with separate tyk.conf files and set purge_delay to -1 on two of them (or run a fourth, non-active node elsewhere that does the purging and switch it off on the rest; then analytics flow to Redis only, and a single non-active node handles the purge).
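As a sketch, assuming tyk.conf is the usual JSON file, the two non-purging nodes would carry (alongside the rest of your settings):

```json
{
  "purge_delay": -1
}
```

while the single purging node keeps purge_delay at a positive interval in seconds.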
As for Redis, all writes of importance (rate limiting and quotas) are wrapped in transactions so they are atomic; there should be no issues there either.
Tyk works best when one process has access to all cores. You can do this by setting the GOMAXPROCS environment variable to the number of cores on the machine; I think you’ll actually get far more bang for your buck out of that than from multiple processes running individually on the same host.
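A sketch of how you might set this before launching the daemon (nproc is a Linux coreutils command; adjust for your own init script):

```shell
# Give the single Tyk process access to all cores on the machine
export GOMAXPROCS=$(nproc)
echo "GOMAXPROCS set to $GOMAXPROCS"
```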
Running it all together is absolutely fine and you shouldn’t expect any issues with shared resources; we’ve made sure (as far as possible) that there are no side effects.