Tiered policy and unexpected behavior of API key

Hi,
Via the dashboard, I created a policy, assigned rates to it (6 calls per hour), and added two APIs to it (to make it tiered).
Then I generated a key based on that policy and started calling the gateway with that key.

Now when I call one of the APIs using the key, it gives me an error after 3 calls, which I kind of expected, since the policy is shared between the 2 APIs; but when I call the other one, it gives me the same error again (i.e. 'Rate limit exceeded').

Could you tell me why it is behaving like this?

There seems to be a misunderstanding: there is no such thing as a tiered policy. Quotas and rate limits are global to the token, across all APIs that the policy grants access to.

Thanks Martin for the reply. Maybe using the term 'tiered' was not proper; what I was trying to say was having one policy for my 2 APIs.
So you are saying that it doesn't matter how many APIs are added to the policy, the rates and quotas are global, right? Then why am I observing such behavior with my token based on this policy? (Basically it only allows me to use half the specified rate, and there is no quota set on the policy.)

Are you talking about a rate limit or a quota?

A rate limit (e.g. 100 requests per second) is subtly different from a quota (10,000 requests over a month): the first smooths the flow, while the second enforces billing / governance.

So are you getting odd rate limits (as in you set 50 p/s and you can only get 10 p/s), or are you getting odd quota counts (see the X-Ratelimit headers in the response)?
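The distinction Martin draws can be sketched in code. This is a minimal illustration only (the class names and window logic are my own for the sketch, not Tyk's implementation): a rate limiter counts requests in a rolling window to smooth flow, while a quota is a simple allowance over a long billing period.

```python
import collections


class RateLimiter:
    """Sliding-window rate limiter: smooths flow (e.g. 100 requests/second)."""

    def __init__(self, limit, per_seconds):
        self.limit = limit
        self.per = per_seconds
        self.stamps = collections.deque()  # timestamps of allowed requests

    def allow(self, now):
        # Discard requests that have fallen out of the window.
        while self.stamps and now - self.stamps[0] >= self.per:
            self.stamps.popleft()
        if len(self.stamps) < self.limit:
            self.stamps.append(now)
            return True
        return False  # the gateway would answer 429 Too Many Requests


class Quota:
    """Fixed allowance over a long period: enforces billing / governance."""

    def __init__(self, max_requests):
        self.remaining = max_requests

    def allow(self):
        if self.remaining > 0:
            self.remaining -= 1
            return True
        return False  # exhausted until the quota period resets
```

The key difference: the rate limiter "forgets" old requests as the window slides, so capacity comes back continuously, while the quota only comes back when the period rolls over.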

There is no quota set. I only have 6 requests per 3600s; it only allows 3 of them, and the next ones respond with '429' 'Too Many Requests'.
The response headers for the calls with 200 status look like this:
< X-Ratelimit-Limit: -1
< X-Ratelimit-Remaining: -1
< X-Ratelimit-Reset: 1507744130

For the responses with 429 status, I don't receive those headers.

Those headers are for quotas, not rate limits. 3600s might actually be outside of the cache limit for rate limits in memory; can you try a lower limit like 3 / 10s?

I am observing the same behavior for 3 per 10s.
Is enabling the cache option for the API relevant to the rates?

It shouldn’t be. So with a key rate limit set at 3/10s you can now only make approx 1 request p/s?

This would only happen if there is another gateway on the same Redis instance. Are you sure you do not have more than one gateway connected to the same Redis DB? Even a dev one or a demo?

Sorry for the delayed response. We have set up HA for the Tyk gateway with 2 nodes; do you think that's the issue?

It depends, do they use the same Redis DB?

Yes, they do use the same instance of Redis.

So when you send the request to the gateway, are you sending it to the load balancer or to the gateways individually?

It is sent to the load balancer (which is one of the Tyk gateway nodes).

Wait, Tyk is load balancing itself?

Hi Martin,

Following up on this issue: yes, we have two gateways sharing the same Redis and Mongo clusters. We have an HAProxy cluster load balancing the two gateways.

We observe that the rate limit is counted twice. E.g. if I set a rate limit of 10 requests per minute, I get 429s after 5 requests.
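For what it's worth, one mechanism that would produce exactly this halving (and the earlier 3-of-6 symptom) is a distributed rate limiter that divides the configured rate evenly across the active gateway nodes, so traffic that reaches only one node only sees that node's share. A toy sketch, purely illustrative (the function name and the even split are assumptions for the sketch, not Tyk's actual code):

```python
def per_node_allowance(configured_rate, active_nodes):
    """Each node independently enforces its share of the global rate."""
    return configured_rate // active_nodes


# With 2 gateways registered but all traffic hitting one node,
# a 10-per-minute key behaves like 5 per minute on that node,
# and a 6-per-hour key behaves like 3 per hour.
```

If that is what is happening, spreading traffic evenly across both nodes (rather than sending everything to one of them) should restore the full configured rate.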

Should each gateway instance have its own Redis and Mongo? What would be considered the recommended HA setup? Our requirement is to be able to survive single-VM failures…

BR/Sid.