Rate limit does not get reset when using more than 1 gateway pod in Kubernetes

I am configuring a rate of 100 requests per 60 seconds with a Python plugin.
When I am using a single gateway pod everything works fine, but when we scale up the number of pods we are seeing that:

  1. for the first 60 seconds we get 200 responses
  2. after that, the rate never gets reset and we are constantly getting 429s

My middleware looks like this:
from tyk.decorators import *
from gateway import TykGateway as tyk
from policies import policy
import datetime
import os
import sys

parent_dir = os.path.abspath(os.path.dirname(__file__))
vendor_dir = os.path.join(parent_dir, 'vendor')
sys.path.append(vendor_dir)

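# CustomKeyCheck hook: key each request by its client IP and attach the
# rate/quota settings from the "voyages" policy to the session.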
@Hook
def MyAuthMiddleware(request, session, metadata, spec):
    tyk.log("Starting Auth middleware", "info")
    default_policy = policy["voyages"]["default"]
    try:
        tyk.log(f"request ip is {request.get_header('X-Real-Ip')}", "info")
        if str(request.get_header('X-Real-Ip')) != '':
            metadata["token"] = request.get_header('X-Real-Ip')
            session.rate = default_policy["rate"]
            session.per = default_policy["per"]
            session.quota_max = default_policy["per"]
            session.quota_renewal_rate = default_policy["quota_renewal_rate"]
            session.quota_renews = int(datetime.datetime.now().timestamp()) + 60
            session.session_lifetime = 300
    except Exception as exc:
        tyk.log(f"something went wrong: {exc}", "error")
    return request, session, metadata
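
For context, the policies module referenced above just exposes a plain dict. The shape below is illustrative only (rate and per correspond to the 100-per-60-seconds setting mentioned above; the other values are placeholders):

# policies.py (illustrative shape only, not the actual file)
policy = {
    "voyages": {
        "default": {
            "rate": 100,                 # requests allowed per window
            "per": 60,                   # window length in seconds
            "quota_max": -1,             # placeholder; -1 disables the quota
            "quota_renewal_rate": 3600,  # placeholder
        }
    }
}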

I am using Redis, with these gateway settings:

    - name: "TYK_GW_ENABLEREDISROLLINGLIMITER"
      value: "true"
    - name: "TYK_GW_ENABLENONTRANSACTIONALRATELIMITER"
      value: "true"

All three pods are using the same volume claims.

Could you set log_level="debug" and turn on enable_key_logging, then share the verbose logs of the resulting behaviour?

You should see something like this

level=debug msg="[RATELIMIT] Inbound raw key is: eyJvcmciOiJkZWZhdWx0IiwiaWQiOiI4ZWU3NGY3ZmRkMzM0YmQ4YWNlN2Q2YzIxYzhmM2Q1NSIsImgiOiJtdXJtdXI2NCJ9"
level=debug msg="[RATELIMIT] Rate limiter key is: rate-limit-eyJvcmciOiJkZWZhdWx0IiwiaWQiOiI4ZWU3NGY3ZmRkMzM0YmQ4YWNlN2Q2YzIxYzhmM2Q1NSIsImgiOiJtdXJtdXI2NCJ9"
level=debug msg="Incrementing raw key: rate-limit-eyJvcmciOiJkZWZhdWx0IiwiaWQiOiI4ZWU3NGY3ZmRkMzM0YmQ4YWNlN2Q2YzIxYzhmM2Q1NSIsImgiOiJtdXJtdXI2NCJ9"
level=debug msg="keyName is: rate-limit-eyJvcmciOiJkZWZhdWx0IiwiaWQiOiI4ZWU3NGY3ZmRkMzM0YmQ4YWNlN2Q2YzIxYzhmM2Q1NSIsImgiOiJtdXJtdXI2NCJ9"

You can use the key rate-limit-* to inspect Redis for the key and check the TTL.
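
For example, with redis-py (a sketch; adjust host, port and auth to match your Redis deployment):

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

for key in r.scan_iter("rate-limit-*"):
    # type should be 'zset' for the rolling limiter; the TTL should roughly match session.per
    print(key, r.type(key), r.ttl(key))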

:thinking: On second look, you might just be hitting your quota limit (note that quota_max is being set to the per value). Kindly see the token object details for more info:

session.per = default_policy["per"]
session.quota_max = default_policy["per"]

Thanks for your reply @Olu

Here are the logs

time="Dec 28 07:05:46" level=debug msg=Started api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting mw=VersionCheck org_id= origin=18.200.43.40 path=/v5/voyages/search ts=1703747146359939381
time="Dec 28 07:05:46" level=debug msg=Finished api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting code=200 mw=VersionCheck ns=38217 org_id= origin=18.200.43.40 path=/v5/voyages/search
time="Dec 28 07:05:46" level=debug msg=Started api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting mw=RateCheckMW org_id= origin=18.200.43.40 path=/v5/voyages/search ts=1703747146359989612
time="Dec 28 07:05:46" level=debug msg=Finished api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting code=200 mw=RateCheckMW ns=6731 org_id= origin=18.200.43.40 path=/v5/voyages/search
time="Dec 28 07:05:46" level=debug msg=Started api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting mw=MiddlewareContextVars org_id= origin=18.200.43.40 path=/v5/voyages/search ts=1703747146360005328
time="Dec 28 07:05:46" level=debug msg=Finished api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting code=200 mw=MiddlewareContextVars ns=28262 org_id= origin=18.200.43.40 path=/v5/voyages/search
time="Dec 28 07:05:46" level=debug msg=Started api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting mw=CoProcessMiddleware org_id= origin=18.200.43.40 path=/v5/voyages/search ts=1703747146360047738
time="Dec 28 07:05:46" level=debug msg="CoProcess Request, HookType: CustomKeyCheck" api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting mw=CoProcessMiddleware org_id= origin=18.200.43.40 path=/v5/voyages/search
time="Dec 28 07:05:46" level=info msg="Starting Auth middleware" prefix=python
time="Dec 28 07:05:46" level=info msg="request ip is 18.200.43.40" prefix=python
time="Dec 28 07:05:46" level=debug msg="gRPC request processing took" api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting ms=0.352706 mw=CoProcessMiddleware org_id= origin=18.200.43.40 path=/v5/voyages/search
time="Dec 28 07:05:46" level=debug msg=Finished api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting code=200 mw=CoProcessMiddleware ns=896964 org_id= origin=18.200.43.40 path=/v5/voyages/search
time="Dec 28 07:05:46" level=debug msg=Started api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting key=18.200.43.40 mw=KeyExpired org_id= origin=18.200.43.40 path=/v5/voyages/search ts=1703747146360980215
time="Dec 28 07:05:46" level=debug msg=Finished api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting code=200 key=18.200.43.40 mw=KeyExpired ns=14618 org_id= origin=18.200.43.40 path=/v5/voyages/search
time="Dec 28 07:05:46" level=debug msg=Started api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting key=18.200.43.40 mw=AccessRightsCheck org_id= origin=18.200.43.40 path=/v5/voyages/search ts=1703747146361015503
time="Dec 28 07:05:46" level=debug msg=Finished api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting code=200 key=18.200.43.40 mw=AccessRightsCheck ns=14026 org_id= origin=18.200.43.40 path=/v5/voyages/search
time="Dec 28 07:05:46" level=debug msg=Started api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting key=18.200.43.40 mw=GranularAccessMiddleware org_id= origin=18.200.43.40 path=/v5/voyages/search ts=1703747146361072043
time="Dec 28 07:05:46" level=debug msg=Finished api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting code=200 key=18.200.43.40 mw=GranularAccessMiddleware ns=6985 org_id= origin=18.200.43.40 path=/v5/voyages/search
time="Dec 28 07:05:46" level=debug msg=Started api_id=YWR0L3ZveWFnZXMtYXBpLWdsb2JhLXJhdGUtbGltaXRpbmctcHJldmlldy1hcGktZ2xvYmEtcmF0ZS1saW1pdGluZw api_name=voyages-api-globa-rate-limiting-preview-api-globa-rate-limiting key=18.200.43.40 mw=RateLimitAndQuotaCheck org_id= origin=18.200.43.40 path=/v5/voyages/search ts=1703747146361098993
time="Dec 28 07:05:46" level=debug msg="[RATELIMIT] Inbound raw key is: 18.200.43.40"
time="Dec 28 07:05:46" level=debug msg="[RATELIMIT] Rate limiter key is: rate-limit-a76c8f79"
time="Dec 28 07:05:46" level=debug msg="Incrementing raw key: rate-limit-a76c8f79"
time="Dec 28 07:05:46" level=debug msg="keyName is: rate-limit-a76c8f79"
time="Dec 28 07:05:46" level=debug msg="Now is:2023-12-28 07:05:46.361129879 +0000 UTC m=+569.346817242"
time="Dec 28 07:05:46" level=debug msg="Then is: 2023-12-28 07:04:46.361129879 +0000 UTC m=+509.346817242"
time="Dec 28 07:05:46" level=debug msg="Returned: 199"

One thing I am observing (with the 100-per-minute setting) is:

  1. for the first 100 calls, the API call is successful and the logs say ‘Returned: <some number between 1-99>’
  2. for the next 100 calls, I get 429 with logs saying ‘Returned: <some number between 101-199>’
  3. after that, I still get 429 with logs saying ‘Returned: <199 or 200>’

On the Redis side, I can see that Tyk is creating a ‘rate-limit-*’ key of type zset, and every time the key gets updated a new member is added to the zset.
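
The members of that zset can be listed like this (a redis-py sketch, assuming direct access to the gateway's Redis; the member count appears to be what the ‘Returned: N’ log line reports):

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)  # assumed connection details

print(r.zcard("rate-limit-a76c8f79"))                           # how many requests are currently in the window
print(r.zrange("rate-limit-a76c8f79", 0, -1, withscores=True))  # one member per recorded request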

Here is the config in the middleware:

session.rate = default_policy["rate"]
session.per = default_policy["per"]
session.quota_max = default_policy["quota_max"]
session.quota_renewal_rate = default_policy["quota_renewal_rate"]
session.quota_renews = int(datetime.datetime.now().timestamp()) + 60
session.session_lifetime = 300
session.expires = -1

and the gateway config is as follows:

extraEnvs:
    - name: TYK_GW_STORAGE_SSLINSECURESKIPVERIFY
      value: "true"
    - name: TYK_GW_ENABLEHASHEDKEYSLISTING
      value: "true"
    - name: TYK_GW_LOGLEVEL
      value: "debug"
    - name: TYK_GW_ENABLEBUNDLEDOWNLOADER
      value: "true"
    - name: TYK_GW_BUNDLEBASEURL
      value: "https://vortexa-develop-tyk-bundles.s3.eu-west-1.amazonaws.com/bundles/"
    - name: "TYK_GW_BUNDLEINSECURESKIPVERIFY"
      value: "true"
    - name: "TYK_GW_COPROCESSOPTIONS_ENABLECOPROCESS"
      value: "true"
    - name: "TYK_GW_COPROCESSOPTIONS_PYTHONPATHPREFIX"
      value: "/opt/tyk-gateway"
    - name: "TYK_GW_CONTROLAPIHOSTNAME"
      value: "tyk-control.develop.vortexa.com"
    - name: TYK_GW_ENABLEANALYTICS
      value: "true"
    - name: "VTX_ENVIRONMENT"
      value: "develop"
    - name: "TYK_PMP_DECODERAWREQUEST"
      value: "true"
    - name: "TYK_GW_ENABLEREDISROLLINGLIMITER"
      value: "true"
    - name: "TYK_GW_ENABLENONTRANSACTIONALRATELIMITER"
      value: "true"
    - name: "TYK_GW_LOCALSESSIONCACHE_DISABLECACHESESSIONSTATE"
      value: "true"
    - name: "TYK_GW_ENABLEKEYLOGGING"
      value: "true"
    - name: "TYK_GW_HASHKEYS"
      value: "true"

I have yet to try this with the structure of your plugin, but when I test with a round-robin load balancer, I don’t experience any issues.

On the Redis side, I can see that Tyk is creating a ‘rate-limit-*’ key of type zset, and every time the key gets updated a new member is added to the zset.

From Redis, what is the Time To Live (TTL) of the key rate-limit-a76c8f79? It should start from the value set in per and count down every second.

The TTL for this key is 60, which is the same as what is set in session.per, and it is counting down every second.

@Olu could you help me with this?
This is something that is stopping us from putting Tyk into production.

I think the issue here is that the Redis rate limiter requires the traffic to be below the ‘rate’ for a full ‘per’ period before it will allow more calls through. It isn’t a leaky bucket like the DRL (distributed rate limiter).

Another way to think of this is that 429s also count towards the rate, so your overall traffic has to drop below the rate limit for anything to be allowed through.
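
A rough sketch of that behaviour (not Tyk's actual code, just the sliding-window idea that the Redis rolling limiter implements with its sorted sets):

# Every request, allowed or rejected, is recorded, which is why 429s keep the window full.
class SlidingWindowLimiter:
    def __init__(self, rate, per):
        self.rate = rate      # max requests allowed per window
        self.per = per        # window length in seconds
        self.hits = []        # timestamps of all observed requests

    def allow(self, now):
        window_start = now - self.per
        self.hits = [t for t in self.hits if t > window_start]  # expire old entries
        self.hits.append(now)                                   # a rejected request is recorded too
        return len(self.hits) <= self.rate

limiter = SlidingWindowLimiter(rate=100, per=60)
allowed = sum(limiter.allow(i * 0.3) for i in range(400))  # ~3.3 req/s for two minutes
print(allowed)  # 100: only the first window's worth succeed, and nothing recovers
                # until the client stays below the rate for a full 'per' period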

Please see the description at Rate Limiting in Tyk for more details

Cheers,
Pete