What is the complexity of onboarding & maintaining APIs in Tyk?
Can we publish APIs into Tyk using Swagger documentation, or do we have to publish each API individually through the Dashboard? If we can do it through Swagger, what is the process for publishing it?
What is meant by cloud & self-managed? Can we deploy to our own cloud, in our own VPC?
Your timing is perfect for your third question. We have just released Tyk 4.1 which brings a new level of ‘early access’ OAS 3.0.x support for Tyk. You can import an existing API defined in OAS 3.0.x via an API call and Tyk will automatically add in the specific fields to make it a live API running inside Tyk. You can see a little video of this in action here:
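To make the import concrete, the sketch below is a minimal OAS 3.0.x document of the kind you would POST to the Gateway's OAS import endpoint. Note this is an illustration, not an official example: the API name, upstream URL, and operation are made up, and you should confirm the exact endpoint and auth header against the import tutorial linked above.

```json
{
  "openapi": "3.0.3",
  "info": { "title": "Httpbin Example", "version": "1.0.0" },
  "servers": [{ "url": "http://httpbin.org" }],
  "paths": {
    "/get": {
      "get": {
        "operationId": "getData",
        "responses": { "200": { "description": "OK" } }
      }
    }
  }
}
```

In my understanding, POSTing this document to the Gateway's `/tyk/apis/oas/import` endpoint with your Gateway secret in the `x-tyk-authorization` header returns the ID of the newly created API; treat that path and header as assumptions to verify against the docs.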
A tutorial for the ‘import’ can be found here:
and if you are using VS Code, this might be helpful:
To get going more generally with OAS, you can find the documentation here:
This talks through the high level concepts, then walks through a series of tutorials.
If you have any feedback, please don’t hesitate to add a post and tag it with the ‘OAS’ category.
If you deploy Tyk to K8s, then a standard rolling update is all that is needed. If Helm charts were used, just update the versions of the components and Tyk will upgrade.
What is the complexity of onboarding & maintaining APIs in Tyk?
You can manage your APIs using our Tyk Operator. I think you can also use the Dashboard here.
What is meant by cloud & self-managed?
Self-managed is simply self-hosted or on-prem. You handle the deployment, management and maintenance yourself. You can read more here.
For cloud, we handle the deployment for you. You can compare our offerings to make the best decision.
Can we deploy to our own cloud, in our own VPC?
Does VPC mean Virtual Private Cloud? If yes, then I think this should be possible with our Self-Managed offering.
Thanks @Olu
I have a few more questions:
1- Is there any limit on the number of APIs that can be present in the catalog?
2- Can rate limiting be done on a session object? It's not clear from these docs.
3- Which algorithm is used for rate limiting? Do we have algorithm options in the open-source version?
4- What is the key in key-based rate limiting? Is it the auth token?
5- What other keys are supported for key-based rate limiting?
6- What is meant by environment & team in the architecture section of plans & pricing?
However, if you meant to ask about the hashing algorithm, then please have a read of our docs.
4- What is the key in key-based rate limiting? Is it the auth token?
I assume the key here means the access key or token that is used for authorizing a request. However, can you share the source or link where you saw this? It would help with context.
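To make the link between the key and rate limiting concrete: the rate limit settings live on the session object that the key points at. Below is a hedged sketch of such a session object, based on my reading of the docs; the API ID and values are placeholders, so verify the field names against the session object reference before relying on them.

```json
{
  "rate": 100,
  "per": 60,
  "quota_max": 10000,
  "quota_renewal_rate": 3600,
  "access_rights": {
    "example-api-id": {
      "api_id": "example-api-id",
      "api_name": "Example API",
      "versions": ["Default"]
    }
  }
}
```

Here `rate` and `per` mean "allow at most `rate` requests every `per` seconds" for any request presenting that key.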
5- What other keys are supported for key-based rate limiting?
6- What is meant by environment & team in the architecture section of plans & pricing?
A team is a collection of people in an organisation, while an environment is a group of deployments that can contain one or more of your gateways and control planes (Dashboard, Portal and MDCB). You can find more information here.
It is assumed that the APIs being protected with a rate limit are using our Authentication Token authentication mode and already have policies created.
Is the auth token the key here?
Attached link as text since new users can only attach 2 links in a post.
For question 3:
I'm interested in the sliding window algorithm, and it's not clear from the docs above which algorithm is used. Can you please let me know if Tyk supports algorithms like sliding window, sliding log, leaky bucket, etc. for rate limiting?
The key is the access key (token, certificate, username/password, etc.) that is used for authorising the request. Except for keyless, all methods mentioned in my earlier statement below are supported.
Based on my reading of what a sliding window rate limiter is, Tyk's rate limiting options would work for you. The ultimate goal is to ensure the number of requests does not exceed a limit over a given period. The choice with Tyk is about deciding between performance and accuracy:
Distributed - this is the default. It's the fastest, but may be inaccurate due to how the Gateways estimate how many requests they should allow. The estimate is based on the number of Gateways in the cluster and the current load, with that information shared through Redis pub/sub channels.
Transactional and non-transactional - both read/write Redis data to keep an accurate record of usage. This approach has a performance impact, but is very accurate.
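For intuition, the rolling-window behaviour that the accurate (Redis-backed) limiters aim for can be sketched in a few lines. This is a purely illustrative in-memory model, not Tyk's implementation (Tyk keeps these counters in Redis, as discussed further down the thread):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Illustrative rolling-window limiter: allow at most `rate`
    requests in any trailing `per`-second window."""

    def __init__(self, rate, per):
        self.rate = rate          # max requests per window
        self.per = per            # window length in seconds
        self.hits = deque()       # timestamps of allowed requests

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have slid out of the trailing window.
        while self.hits and self.hits[0] <= now - self.per:
            self.hits.popleft()
        if len(self.hits) < self.rate:
            self.hits.append(now)
            return True
        return False

# 3 requests per 10-second window: the 4th immediate request is rejected,
# but once the oldest hit slides out of the window, requests pass again.
limiter = SlidingWindowLimiter(rate=3, per=10.0)
print([limiter.allow(now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
print(limiter.allow(now=10.5))                       # True: the t=0 hit expired
```

Tyk's Redis-backed limiters achieve the same trailing-window effect with a sorted set of timestamps per key rather than an in-process deque, which is what makes the count accurate across multiple gateways.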
Yes, it can. I think NGINX was used to get the headers. Tyk has context variables that can be used to retrieve that same information, so you don't have to use NGINX.
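As a hedged sketch of what that looks like: context variables are switched on in the API definition and can then be referenced in, for example, global header injection. The field names and variable names below are from my recollection of the docs, so verify them against the context variables reference.

```json
{
  "enable_context_vars": true,
  "version_data": {
    "versions": {
      "Default": {
        "global_headers": {
          "X-Client-IP": "$tyk_context.remote_addr",
          "X-Request-ID": "$tyk_context.request_id"
        }
      }
    }
  }
}
```

With this in place, the upstream receives the client IP and a per-request ID as headers without any NGINX layer in front.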
Yes @Olu, this is exactly what we needed.
But which of these implements sliding window, and which one are you suggesting in the above answer ("Tyk's rate limiting options would work for you") for our use case?
No. It's a commercial product that requires a license. Our open and commercial products are displayed in our stack documentation. You can read more about our licensing here. If you have more questions, then please check out the already-asked questions.
I do recommend entering your keyword in the search bar and looking for any relevant documentation. The docs are a great resource and explain it better than I could.
Do we store rate limiting counters locally or in Redis?
Yes, the rate limit counters are stored in Redis when using the transactional and non-transactional rate limiters. For transactional (both Redis rolling and sentinel), you can observe the prefixed key rate-limiter-<key-hash>, just like quotas, in your associated Redis. However, for non-transactional, the key name is random and hard to detect.
For distributed rate limiters, no centralized Redis is used and the counters are stored locally in memory.
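To illustrate why the distributed (in-memory) limiter trades accuracy for speed: conceptually, each gateway grants itself a share of the global rate based on the known cluster size, so no shared counter is consulted per request. This is a toy model of that idea, not Tyk's actual DRL code:

```python
def per_gateway_allowance(global_rate, gateway_count):
    """Toy model of the distributed approach: each gateway locally
    enforces an equal share of the global rate. No shared Redis
    counter is read per request, which is fast, but the global limit
    is only accurate if traffic is spread evenly across gateways."""
    return global_rate / gateway_count

# A 100 req/s global limit across 4 gateways: each locally enforces ~25 req/s.
print(per_gateway_allowance(100, 4))  # 25.0
```

If one gateway receives most of the traffic, it still only allows its local share, which is how the estimate can drift from the true global limit; the Redis-backed limiters avoid this by keeping a single shared counter.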
How can OAuth2 be used as a rate limiting key?
Have you checked out our documentation on OAuth 2.0?