Imported Google Group message.
Date: Monday, 15 December 2014 09:13:22 UTC.
There isn’t currently an AMI for Amazon; we’re looking into getting Tyk onto the marketplace and how exactly we would achieve it (there are some hoops to jump through). We are working towards Docker integration in the upcoming version, which should make it easier to launch Tyk using Docker and very easy to deploy onto any cloud architecture.
With regards to MongoDB: you can deploy and run Tyk without it. Tyk only requires MongoDB for analytics data collection, the dashboard, and centralising API definitions (e.g. if you are running multiple Tyk instances and want to manage them using the dashboard). In the upcoming release we are working towards adding third-party data sinks for analytics data (for example CloudWatch), and making this interface easy to extend so further data sinks can be created as part of the open source project.
I should elaborate on this point:
Tyk can store configuration locally or in Mongo. It only loads configurations at start-up or during hot-reload, which means that once a configuration is loaded there are no more active hits to the database, except when analytics data is being purged (and this can be handled by a dedicated process if needed).
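For illustration, a local-configuration set-up might look something like this in tyk.conf; treat the values as placeholders and check the field names against your Tyk version:

```json
{
  "listen_port": 8080,
  "secret": "352d20ee67be67f6340b4c0605b044b7",
  "use_db_app_configs": false,
  "app_path": "/opt/tyk/apps/",
  "enable_analytics": true,
  "analytics_config": {
    "purge_delay": 10
  }
}
```

With `use_db_app_configs` set to false, API definitions are read from the files in `app_path` rather than from Mongo.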
If deploying without a DB (which I believe most of our users do), you can manage configurations using Puppet or Salt to sync the files from a repo or a NAS share, and then hot-reload them with an API call once the files have synchronised.
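As a sketch, the hot-reload trigger is just an authenticated call to the gateway's `/tyk/reload` endpoint; the gateway address and node secret below are placeholders (the secret is the `secret` field from tyk.conf):

```python
import urllib.request

GATEWAY = "http://localhost:8080"  # assumed gateway address
NODE_SECRET = "352d20ee67be67f6340b4c0605b044b7"  # placeholder: the `secret` from tyk.conf

def build_reload_request(base=GATEWAY, secret=NODE_SECRET):
    """Build the request that asks a single Tyk node to hot-reload its API definitions."""
    req = urllib.request.Request(base + "/tyk/reload")
    req.add_header("x-tyk-authorization", secret)
    return req

# urllib.request.urlopen(build_reload_request())  # fire this after the files have synced
```

A configuration-management tool can simply run this (or the equivalent curl command) as the last step of its sync job.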
If you are using the database and have a license, the dashboard will notify all running Tyk hosts that they should hot-reload; these can be spread across multiple machines, run on a single machine as multiple processes, or both.
Hot reloading is done with zero downtime: any requests currently being processed will complete their middleware chain until the upstream API has responded.
Tyk stores analytics data in Redis for a “purge period”; when this time elapses, the data is shovelled across into MongoDB for long-term storage, where the dashboard can run data pipelines.
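The purge cycle can be sketched as follows; plain Python lists stand in for the Redis analytics store and the MongoDB collection, and the record fields are made up for illustration:

```python
# Sketch of the purge cycle: records accumulate in Redis for the purge period,
# then get moved across in one batch into MongoDB for long-term storage.
redis_analytics = []   # stand-in for the Redis store Tyk appends analytics records to
mongo_analytics = []   # stand-in for the MongoDB collection the dashboard reads

def record_hit(path, status):
    """Tyk appends one record per request to the Redis store."""
    redis_analytics.append({"path": path, "status": status})

def purge():
    """When the purge period elapses, shovel everything to Mongo and clear Redis."""
    mongo_analytics.extend(redis_analytics)
    redis_analytics.clear()

record_hit("/widgets", 200)
record_hit("/widgets", 404)
purge()
# After the purge, Redis is empty and Mongo holds both records.
```

Disabling the purge simply means `purge()` never runs, so the records stay in Redis for your own tooling to collect.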
We have some users who have decided to collect data but de-activate the purge, so they can manually feed it into their own BI solutions.
As for session splitting and instances going up and down: Tyk runs as a series of nodes, and you can have one or many. They have idempotent configurations which are stored locally on the host (if not using MongoDB as a central configuration store) and can be destroyed or activated as needed. All sessions are stored in Redis, so as long as your Redis cluster is healthy and active, so is the session data.
With Tyk we took the approach that you should be able to install and run it with minimal external dependencies, and that it should be flexible enough to work with your existing set-up without imposing any usage patterns on you. This is in part why we haven’t put Tyk on Docker yet or pre-baked AMI images: it’s such a small application that requires very little to run, so deploying it as part of a devops solution should be trivial.
MongoDB is primarily used as a data sink to support the dashboard (it has great data-processing capabilities; the way you can run data-pipeline requests makes it trivial to create aggregate data out of vast swathes of raw analytics). This means that as part of an overall Tyk deployment it’s a non-critical requirement: it doesn’t get hit often, and it is mainly used by the dashboard.
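To illustrate the kind of rollup such a data pipeline produces, here is the equivalent in plain Python over some hypothetical raw analytics records (a MongoDB aggregation pipeline grouping on the same assumed field names would do this server-side):

```python
from collections import Counter

# Hypothetical raw analytics records, as they might land in MongoDB after a purge
raw_records = [
    {"path": "/widgets", "status": 200},
    {"path": "/widgets", "status": 200},
    {"path": "/gadgets", "status": 500},
]

# Aggregate hits per path: the sort of summary the dashboard charts are built from
hits_per_path = Counter(r["path"] for r in raw_records)
print(dict(hits_per_path))  # {'/widgets': 2, '/gadgets': 1}
```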
But you are correct that we haven’t provided a simple demo environment; we hope that the Docker integration will address this to some extent.
Let me know if you have any other questions.