Key quota is not respected

Hi, I created a policy with this body using the gateway API:

{
        "id": "bb051e8128a24be1a3bf5e8608839ece",
        "name": "Test quota",
        "org_id": "",
        "quota_max": 1,
        "quota_renewal_rate": 10,
        "access_rights": {
            ...
        }
}

Then I created an API key with this body:

{
        "apply_policies": ["bb051e8128a24be1a3bf5e8608839ece"]
}

However, the quota isn’t actually respected when I try to access the API using the key. I can make more than 1 request during the 10-second quota renewal window.

Thanks

@bls Can you share your Tyk version for confirmation?

Also, when making the request, do you see the following response headers?

X-Ratelimit-Limit: 1
X-Ratelimit-Remaining: 0
X-Ratelimit-Reset: 1692371046
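
For example, a request along these lines against your API (the listen path and key here are placeholders for your own values):

GET /your-api-listen-path/ HTTP/1.1
Host: {{host}}:{{port}}
Authorization: {{api_key}}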

If you don’t see those headers, could you enable debug mode and share the gateway logs? I have added a snippet of what you should see in the debug logs; however, sharing the full log from the time of the request would help.

time="Aug 18 15:21:54" level=debug msg="[RATELIMIT] Inbound raw key is: default7032f78352384339a1eae4dbf138bd49"
time="Aug 18 15:21:54" level=debug msg="[RATELIMIT] Rate limiter key is: rate-limit-default7032f78352384339a1eae4dbf138bd49"
time="Aug 18 15:21:54" level=debug msg="Incrementing raw key: rate-limit-default7032f78352384339a1eae4dbf138bd49"
time="Aug 18 15:21:54" level=debug msg="keyName is: rate-limit-default7032f78352384339a1eae4dbf138bd49"
time="Aug 18 15:21:54" level=debug msg="Now is:2023-08-18 15:21:54.6989418 +0000 UTC m=+14.569715001"
time="Aug 18 15:21:54" level=debug msg="Then is: 2023-08-18 15:20:54.6989418 +0000 UTC m=-45.430284999"
time="Aug 18 15:21:54" level=debug msg="Returned: 0"
time="Aug 18 15:21:54" level=debug msg="[QUOTA] Quota limiter key is: quota-default7032f78352384339a1eae4dbf138bd49"
time="Aug 18 15:21:54" level=debug msg="Renewing with TTL: 10"
time="Aug 18 15:21:54" level=debug msg="Incremented key: quota-default7032f78352384339a1eae4dbf138bd49, val is: 1"

Tyk version is 5.0.0. Yes, I see those exact response headers (except X-Ratelimit-Reset has its own reset timestamp).

One observation: when I get the key right after making a request, quota_remaining for the API does go down to 0 and resets to 1 after 10 seconds. But it’s still letting me make more than 1 request in the 10-second window.

Can you try updating the quota_renews field in the key to a value a minute in the future and try again?

Or try creating a new key and ensure the quota_renews value is set to a future time (at least 10 seconds from the creation date).
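
If it helps, an update along these lines should work (the key ID and timestamp below are placeholders; the body should be the full key definition returned by GET /tyk/keys/{key_id}, with quota_renews bumped to a Unix timestamp in the future):

PUT /tyk/keys/{key_id} HTTP/1.1
Host: {{host}}:{{port}}
x-tyk-authorization: {{gateway_secret}}
Content-Type: application/json

{
  "quota_max": 1,
  "quota_renewal_rate": 10,
  "quota_renews": 1692371106,
  "apply_policies": ["bb051e8128a24be1a3bf5e8608839ece"],
  ...
}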

I set quota_renewal_rate to 120 in the policy, then recreated the key with that policy to make sure everything was reset. Initially, quota_renews is 0, I guess because no requests have been made yet. After the first request, quota_renews becomes 2 minutes in the future as expected. But I am still able to make unlimited requests.

I found out that the quota works when I create the key with the quota embedded directly in it instead of applying the policy. The key:

{
   "quota_max": 1,
   "quota_renewal_rate": 10,
   "access_rights": {
      "api1": {
         "allowed_urls": [],
         "api_id": "api1",
         "api_name": "api1",
         "limit": {
            "quota_max": 1,
            "quota_renewal_rate": 10
         }
      }
   }
}

But when I place the same settings in a policy and apply it to the key, it behaves strangely. When I use the policy with quota_max=1, it hits the quota on the 8th request.

Does this same query work if you strip out the limit object?

  "limit": {
    "quota_max": 1,
     "quota_renewal_rate": 10
}

Also, is this a snippet of your key definition or the full payload?

I can’t seem to replicate the issue. I may need your full policy definition, API definition, and key creation request to test.

I have shared mine below, so let me know if there is anything I’m missing.

API Definition

{
  "active": true,
  "allowed_ips": [],
  "api_id": "api_01",
  "auth_configs": {
    "authToken": {
      "auth_header_name": "Authorization"
    }
  },
  "enable_context_vars": false,
  "enable_ip_whitelisting": false,
  "internal": false,
  "name": "Api 01",
  "org_id": "default",
  "proxy": {
    "listen_path": "/api_01/",
    "preserve_host_header": false,
    "target_url": "http://host.docker.internal:80/headers",
    "strip_listen_path": true,
    "transport": {
      "proxy_url": "",
      "ssl_min_version": 771,
      "ssl_ciphers": [],
      "ssl_insecure_skip_verify": true,
      "ssl_force_common_name_check": false
    }
  },
  "slug": "api_01",
  "use_keyless": false,
  "use_standard_auth": true,
  "version_data": {
    "not_versioned": true,
    "versions": {
      "Default": {
        "name": "Default",
        "use_extended_paths": true,
        "global_headers": {},
        "extended_paths": {}
      }
    }
  }
}

Policy Definition

{
  "id": "bb051e8128a24be1a3bf5e8608839ece",
  "name": "Test quota",
  "org_id": "default",
  "quota_max": 1,
  "quota_renewal_rate": 10,
  "access_rights": {
    "api_01": {
      "allowed_urls": [],
      "api_id": "api_01",
      "api_name": "api_01"
    }
  }
}

Key creation request

###################################
# Create a key definition via policy
###################################
POST /tyk/keys HTTP/1.1
Host: {{host}}:{{port}}
x-tyk-authorization: {{gateway_secret}}
Content-Type: application/json

{
  "apply_policies": [
    "bb051e8128a24be1a3bf5e8608839ece"
  ]
}

When I remove the limit object, no quota is applied to the API call because there is no API-level quota anymore.

It is the full payload.

I tried your API/policy/key objects, and when I use the key it only reports the quota as exceeded on the 8th request, which is strange (the same count I saw earlier).

That’s weird. That should not be the default behaviour. Could you share the result when you fetch the key properties:

  • with the limit stripped
  • without the limit stripped

##################################
# Read a key definition definition
##################################
GET /tyk/keys/{key_id} HTTP/1.1
Host: {{host}}:{{port}}
x-tyk-authorization: {{gateway_secret}}

Also, can you omit any sensitive info and share your gateway config?

With limit stripped:

{
    "last_check": 0,
    "allowance": 0,
    "rate": 0,
    "per": 0,
    "throttle_interval": 0,
    "throttle_retry_limit": 0,
    "max_query_depth": 0,
    "date_created": "2023-08-22T13:48:19.189542267Z",
    "expires": 0,
    "quota_max": 1,
    "quota_renews": 1692712129,
    "quota_remaining": 0,
    "quota_renewal_rate": 10,
    "access_rights": {
        "api1": {
            "api_name": "api1",
            "api_id": "api1",
            "versions": null,
            "allowed_urls": [],
            "restricted_types": null,
            "allowed_types": null,
            "limit": {
                "rate": 0,
                "per": 0,
                "throttle_interval": 0,
                "throttle_retry_limit": 0,
                "max_query_depth": 0,
                "quota_max": 0,
                "quota_renews": 0,
                "quota_remaining": 0,
                "quota_renewal_rate": 0
            },
            "field_access_rights": null,
            "disable_introspection": false,
            "allowance_scope": ""
        }
    },
    "org_id": "",
    "oauth_client_id": "",
    "oauth_keys": null,
    "certificate": "",
    "basic_auth_data": {
        "password": "",
        "hash_type": ""
    },
    "jwt_data": {
        "secret": ""
    },
    "hmac_enabled": false,
    "enable_http_signature_validation": false,
    "hmac_string": "",
    "rsa_certificate_id": "",
    "is_inactive": false,
    "apply_policy_id": "",
    "apply_policies": null,
    "data_expires": 0,
    "monitor": {
        "trigger_limits": null
    },
    "enable_detail_recording": false,
    "enable_detailed_recording": false,
    "meta_data": {},
    "tags": [],
    "alias": "",
    "last_updated": "",
    "id_extractor_deadline": 0,
    "session_lifetime": 0
}

Without limit stripped:

{
    "last_check": 0,
    "allowance": 0,
    "rate": 0,
    "per": 0,
    "throttle_interval": 0,
    "throttle_retry_limit": 0,
    "max_query_depth": 0,
    "date_created": "2023-08-22T15:31:13.540722303Z",
    "expires": 0,
    "quota_max": 1,
    "quota_renews": 0,
    "quota_remaining": 0,
    "quota_renewal_rate": 10,
    "access_rights": {
        "api1": {
            "api_name": "api1",
            "api_id": "api1",
            "versions": null,
            "allowed_urls": [],
            "restricted_types": null,
            "allowed_types": null,
            "limit": {
                "rate": 0,
                "per": 0,
                "throttle_interval": 0,
                "throttle_retry_limit": 0,
                "max_query_depth": 0,
                "quota_max": 1,
                "quota_renews": 0,
                "quota_remaining": 1,
                "quota_renewal_rate": 10
            },
            "field_access_rights": null,
            "disable_introspection": false,
            "allowance_scope": "auth-service-api.bs.1"
        }
    },
    "org_id": "",
    "oauth_client_id": "",
    "oauth_keys": null,
    "certificate": "",
    "basic_auth_data": {
        "password": "",
        "hash_type": ""
    },
    "jwt_data": {
        "secret": ""
    },
    "hmac_enabled": false,
    "enable_http_signature_validation": false,
    "hmac_string": "",
    "rsa_certificate_id": "",
    "is_inactive": false,
    "apply_policy_id": "",
    "apply_policies": null,
    "data_expires": 0,
    "monitor": {
        "trigger_limits": null
    },
    "enable_detail_recording": false,
    "enable_detailed_recording": false,
    "meta_data": {},
    "tags": [],
    "alias": "",
    "last_updated": "",
    "id_extractor_deadline": 0,
    "session_lifetime": 0
}

The difference I see is that with the limit stripped, the key’s global limit is not applied to the API limit, so the API quota is 0.

I will send the gateway config soon.

Here is the config:

{
    "listen_port": 8080,
    "template_path": "/opt/tyk-gateway/templates",
    "tyk_js_path": "/opt/tyk-gateway/js/tyk.js",
    "middleware_path": "/mnt/tyk-gateway/middleware",
    "use_db_app_configs": false,
    "db_app_conf_options": {
        "connection_string": "",
        "node_is_segmented": false,
        "tags": []
    },
    "app_path": "/mnt/tyk-gateway/apps",
    "enable_hashed_keys_listing": true,
    "enable_redis_rolling_limiter": true,
    "enable_sentinel_rate_limiter": false,
    "storage": {
        "type": "redis",
        "enable_cluster": true,
        "host": "tyk-redis-redis-cluster.tyk.svc.cluster.local",
        "port": 6379,
        "username": "",
        "password": "",
        "database": 0,
        "optimisation_max_active": 4000,
        "optimisation_max_idle": 2000
    },
    "enable_analytics": false,
    "analytics_config": {
        "type": "mongo",
        "csv_dir": "/tmp",
        "mongo_url": "",
        "mongo_db_name": "",
        "mongo_collection": "",
        "purge_delay": -1,
        "ignored_ips": []
    },
    "health_check": {
        "enable_health_checks": false,
        "health_check_value_timeouts": 60
    },
    "optimisations_use_async_session_write": true,
    "enable_non_transactional_rate_limiter": true,
    "enable_sentinel_rate_limiter": false,
    "enable_jsvm": true,
    "allow_master_keys": false,
    "policies": {
        "policy_source": "file",
        "policy_path": "/mnt/tyk-gateway/policies"
    },
    "hash_keys": true,
    "hash_key_function": "murmur128",
    "close_connections": false,
    "http_server_options": {
        "enable_websockets": true,
        "use_ssl": true,
        "server_name": "*",
        "min_version": 771,
        "certificates": [{
            "domain_name": "*",
            "cert_file": "/etc/certs/cert.pem",
            "key_file": "/etc/certs/key.pem"
        }]
    },
    "allow_insecure_configs": true,
    "coprocess_options": {
        "enable_coprocess": true,
        "coprocess_grpc_server": "",
        "python_path_prefix": "/opt/tyk-gateway"
     }, 
    "enable_bundle_downloader": true,
    "bundle_base_url": "http://tyk-bundle-service-http-folder.tyk.svc.cluster.local/",
    "public_key_path": "",
    "global_session_lifetime": 100,
    "force_global_session_lifetime": false,
    "max_idle_connections_per_host": 500,
    "enable_custom_domains": true,
    "pid_file_location": "/mnt/tyk-gateway/tyk.pid"
}

I have been able to reproduce the issue with your config. The number of allowed requests varies for me as soon as I swap to your config, but I cannot pinpoint exactly which field is causing this strange behaviour. I’ll dig deeper and see what I find.

Ok, thanks. If you can share your config with me as well, that would be great.

Sure

{
  "allow_insecure_configs": true,
  "allow_master_keys": true,
  "analytics_config": {
    "csv_dir": "/tmp",
    "enable_detailed_recording": false,
    "mongo_url": "",
    "mongo_db_name": "",
    "mongo_collection": "",
    "purge_delay": -1,
    "type": "mongo"
  },
  "app_path": "/opt/tyk-gateway/apps/",
  "bundle_base_url": "<redacted>",
  "close_connections": false,
  "coprocess_options": {
    "coprocess_grpc_server": "",
    "enable_coprocess": true,
    "python_path_prefix": "/opt/tyk-gateway"
  },
  "disable_ports_whitelist": true,
  "drl_enable_sentinel_rate_limiter": false,
  "drl_notification_frequency": 1,
  "drl_threshold": 5,
  "enable_analytics": false,
  "enable_bundle_downloader": true,
  "enable_hashed_keys_listing": true,
  "enable_http_profiler": true,
  "enable_jsvm": true,
  "enable_non_transactional_rate_limiter": true,
  "enable_redis_rolling_limiter": true,
  "enable_sentinel_rate_limiter": false,
  "enable_websockets": true,
  "force_global_session_lifetime": false,
  "global_session_lifetime": 100,
  "hash_key_function": "murmur128",
  "hash_keys": true,
  "health_check": {
    "enable_health_checks": false,
    "health_check_value_timeouts": 60
  },
  "http_server_options": {
    "certificates": [
      {
        "domain_name": "*",
        "cert_file": "/opt/tyk-gateway/certs/localhost/cert.pem",
        "key_file": "/opt/tyk-gateway/certs/localhost/key.pem"
      }
    ],
    "enable_http2": true,
    "enable_websockets": true,
    "flush_interval": 1,
    "min_version": 771,
    "server_name": "*",
    "ssl_certificates": [],
    "ssl_insecure_skip_verify": false,
    "use_ssl": false
  },
  "jsvm_timeout": 120,
  "listen_port": 8080,
  "local_session_cache": {
    "disable_cached_session_state": true
  },
  "log_level": "info",
  "max_idle_connections_per_host": 500,
  "middleware_path": "/opt/tyk-gateway/plugins/",
  "monitor": {
    "enable_trigger_monitors": false,
    "configuration": {
      "method": "POST",
      "target_path": "<redacted>",
      "template_path": "templates/monitor_template.json",
      "header_map": {
        "some-secret": "0123456"
      },
      "event_timeout": 10
    },
    "global_trigger_limit": 100,
    "monitor_user_keys": true,
    "monitor_org_keys": false
  },
  "newrelic": {
    "app_name": "",
    "license_key": ""
  },
  "optimisations_use_async_session_write": true,
  "policies": {
    "_allow_explicit_policy_id": true,
    "_policy_record_name": "/opt/tyk-gateway/policies/policies.json",
    "policy_source": "file",
    "policy_path": "/opt/tyk-gateway/policies/"
  },
  "proxy_enable_http2": true,
  "public_key_path": "",
  "secret": "<redacted>",
  "storage": {
    "addrs": [
      "server1:6379",
      "server2:6380",
      "server3:6381"
    ],
    "database": 0,
    "enable_cluster": true,
    "host": "host.docker.internal",
    "hosts": null,
    "optimisation_max_active": 4000,
    "optimisation_max_idle": 2000,
    "password": "",
    "port": 6379,
    "timeout": 10,
    "type": "redis",
    "username": ""
  },
  "template_path": "/opt/tyk-gateway/templates",
  "tracing": {
    "enabled": false,
    "name": "zipkin",
    "options": {
      "reporter": {
        "batch_size": 0,
        "max_backlog": 0,
        "url": "<redacted>"
      },
      "sampler": {
        "mod": 0,
        "name": "",
        "rate": 0,
        "salt": 0
      }
    }
  },
  "track_404_logs": true,
  "tyk_js_path": "/opt/tyk-gateway/middleware/javascript/eventHandler/sessionHandler.js",
  "uptime_tests": {
    "config": {
      "checker_pool_size": 0,
      "enable_uptime_analytics": false,
      "failure_trigger_sample_size": 3,
      "time_wait": 60
    },
    "disable": true,
    "poller_group": ""
  },
  "use_db_app_configs": false
}

But I think the cause of the issue is:

  "local_session_cache": {
    "disable_cached_session_state": true
  }

Can you add it to yours and let me know the result?
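
For reference, it sits at the top level of tyk.conf alongside your other settings; a trimmed sketch of how it would look merged into your config:

{
    "listen_port": 8080,
    "enable_redis_rolling_limiter": true,
    "local_session_cache": {
        "disable_cached_session_state": true
    }
}

My working assumption is that with the cached session state left enabled, the gateway can serve a few requests from an in-memory copy of the key before the updated quota counter is read back from Redis, which would explain the extra requests slipping through.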

Adding this to the config fixed the issue! Thanks