Tyk latency taking quite a long time

Hi,
While analysing logs in Kibana, I can see that latency.total is 570.625 ms and latency.upstream is 400.782 ms, so the Tyk latency is around ~160 ms. This is quite large in our scenario because we have only two policies, RateLimitByIp and recordRestHeader. We have also observed this in the environment where we have around 90 lakh (9 million) calls during the time period. So, is it because of the huge number of calls, or is there some other reason?

We don’t have built-in policies or middlewares named RateLimitByIp and recordRestHeader, so these are likely custom plugins, possibly JavaScript plugins or Virtual Endpoints.

You can investigate the execution time of each middleware, including plugins, by enabling debug logs and examining the ns (nanoseconds) field.
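
As a minimal sketch, assuming a standard tyk.conf, debug logging can be enabled like this (the same setting is also available via the TYK_LOGLEVEL environment variable):

```json
{
  "log_level": "debug"
}
```

With debug logging on, each middleware emits a Finished line with its timing, for example: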

```
...msg=Finished api_id=879ad0c50b954bcf638d621ab5dc7a9e api_name=gql-test code=200 mw=VersionCheck ns=83000 org_id=657b29246e9ffa0001ee4634 origin=xxx.xxx.xx.x path=/gql-test/

...msg=Finished api_id=879ad0c50b954bcf638d621ab5dc7a9e api_name=gql-test code=200 mw=RateCheckMW ns=27750 org_id=657b29246e9ffa0001ee4634 origin=xxx.xxx.xx.x path=/gql-test/
```
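
To get an overall picture rather than reading lines by hand, a small script can aggregate the ns values per middleware. This is a hypothetical sketch (middleware_totals is not part of Tyk; it just parses logfmt-style Finished lines like those shown above, and the log file path is an assumption):

```python
import re
from collections import defaultdict

# Hypothetical helper: sums per-middleware execution time from Tyk debug logs.
# Assumes logfmt-style "Finished" lines containing mw=<name> and ns=<nanoseconds>.
MW_NS = re.compile(r"mw=(\S+)\s.*?ns=(\d+)")

def middleware_totals(log_lines):
    totals = defaultdict(int)
    for line in log_lines:
        match = MW_NS.search(line)
        if match:
            totals[match.group(1)] += int(match.group(2))
    return totals

if __name__ == "__main__":
    with open("tyk_gateway.log") as f:  # assumed log file location
        # Print middlewares sorted by total time, slowest first, in milliseconds.
        for mw, ns in sorted(middleware_totals(f).items(), key=lambda kv: -kv[1]):
            print(f"{mw}: {ns / 1e6:.3f} ms total")
```

A middleware (or custom plugin) that dominates these totals is the first place to look for the missing ~160 ms.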

Alternatively, you could use tracing or OpenTelemetry (OTel) to discover long-running operations.
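
For example, with Tyk Gateway's native OpenTelemetry support (v5.2+), a sketch of the relevant tyk.conf block might look like the following, assuming a collector listening on the default gRPC port; enabling detailed tracing on the API definition should additionally give you one span per middleware:

```json
{
  "opentelemetry": {
    "enabled": true,
    "exporter": "grpc",
    "endpoint": "localhost:4317"
  }
}
```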
