"request error: Key has expired, please renew"

I have a custom JSVM middleware plugin with “Auth Token” security that generates a custom session and sets its expires to now + 60s. The “hashedSessionKey” below is what is set as the custom “Auth Token” header for the API:

// build session object (as an object literal, then stringified below)
var newSession = {
	"expires": expires_at, // THIS IS NOW + 60s
	"quota_max": 1000,
	"quota_remaining": 1000,
	"quota_renewal_rate": session_expires_seconds,
	"alias": hashedSessionKey,
	"access_rights": {}
};
// grant access to this API only
newSession.access_rights[config.config_data.api_id] = {
	"api_name": config.config_data.api_name,
	"api_id": config.config_data.api_id,
	"versions": ["Default"],
	"allowed_urls": null
};

After this, it does the following:

TykSetKeyData(hashedSessionKey, JSON.stringify(newSession));

This works fine, and in a test, I have a client making continual requests to this API every second.

What I’ve noticed, however, is that when a request coincides with the session expiry, my plugin ensures a new session is created and TykSetKeyData completes successfully.

The problem is that the middleware returns/exits, and then Tyk rejects the request with:

Key has expired, please renew

And this is in the logs

2017-08-03T16:12:51.556954853Z time="Aug  3 16:12:51" level=info msg="curr_epoch=1501776771 new session expires_at:1501776831" type=log-msg
2017-08-03T16:12:51.557408370Z time="Aug  3 16:12:51" level=info msg="new session object = {\"access_rights\":{\"3b30b68340b8492858ea00be8c9b248e\":{\"allowed_urls\":null,\"api_id\":\"3b30b68340b8492858ea00be8c9b248e\",\"api_name\":\"/xx/my/auth/1.0\",\"versions\":[\"Default\"]}},\"alias\":\"5976507eb46f2e00016bb4a13a4e7ca913238ee243102a32d27e56bc\",\"expires\":1501776831,\"quota_max\":1000,\"quota_remaining\":1000,\"quota_renewal_rate\":60}" type=log-msg
2017-08-03T16:12:51.557571137Z time="Aug  3 16:12:51" level=info msg="Reset quota for key." inbound-key="****56bc" key=quota-3e769148
2017-08-03T16:12:51.558528163Z time="Aug  3 16:12:51" level=info msg="Key added or updated." api_id=-- expires=1501776831 key="****56bc" org_id= path=-- server_name=system user_id=system user_ip=--
2017-08-03T16:12:51.559237225Z time="Aug  3 16:12:51" level=info msg="Attempted access from expired key." key=5976507eb46f2e00016bb4a13a4e7ca913238ee243102a32d27e56bc origin= path="/xx/my/auth/1.0"
2017-08-03T16:12:51.559272986Z time="Aug  3 16:12:51" level=error msg="request error: Key has expired, please renew" api_id=3b30b68340b8492858ea00be8c9b248e org_id=5976507eb46f2e00016bb4a1 path="/" server_name="" user_id="****56bc" user_ip=

The next request by the client succeeds.

So the session bound to that key has its expires incremented and set properly via TykSetKeyData, but the current request fails. What should I do to correct this? It’s as if the session I set via TykSetKeyData is not being used after the middleware returns, and the “old” session with the old expires_at value is used instead.

Can you check if your tyk.conf has disable_cached_session_state set to true? The config reference describes it as:

By default sessions are set to cache; set this to true to stop Tyk from caching keys locally on the node.

verified adding this to gateway tyk.conf fixes it

"local_session_cache": {
      "disable_cached_session_state": true
}

What are the effects of disabling this performance wise? Why does this use case require it, but normal use-cases don’t?

Because you are manually managing the key and then re-setting its expiry. Normal use cases would just issue a new unique token, so a cache improves lookup times and defers writes.

The difference in terms of performance is that with the cache the lookups are in memory, whereas without it each lookup is a round trip to Redis.
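The stale session described above follows directly from that design. A toy read-through cache (all names here are hypothetical, not Tyk internals) shows why a write that goes straight to the backing store leaves the node-local copy stale:

```javascript
// Toy read-through session cache. If something else updates the
// backing store directly (as TykSetKeyData does from the plugin),
// this cache keeps serving the old record until it is invalidated --
// which is the stale "expired" session seen in the logs.
function SessionCache(backingStore) {
    this.store = backingStore; // stand-in for Redis
    this.local = {};           // stand-in for the node-local cache
}

SessionCache.prototype.get = function (key) {
    if (!(key in this.local)) {
        this.local[key] = this.store[key]; // cache miss: fetch from "Redis"
    }
    return this.local[key]; // cache hit: may be stale
};

var store = { mykey: { expires: 1501776771 } };
var cache = new SessionCache(store);
cache.get("mykey");                    // populates the local cache
store.mykey = { expires: 1501776831 }; // out-of-band update, like TykSetKeyData
cache.get("mykey");                    // still returns the old record
```

Disabling the cache trades those in-memory hits for a Redis round trip on every lookup, but every lookup then sees the latest write.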

@Martin FYI - am still seeing this happening despite

"local_session_cache": {
      "disable_cached_session_state": true
}

It could be a race condition where the token is read, or your changes are overwritten, before your changes are written; it’s all concurrent after all.

An easy fix would be to make your renewal check shorter than the actual expiry and then renew.
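A minimal sketch of that early-renewal check, as it might look inside the JSVM plugin (the function name and margin constant are my own, not Tyk APIs; the caller would invoke TykSetKeyData when it returns true):

```javascript
// Decide whether a session is due for early renewal. Tyk's own check
// only rejects the key once `expires` has passed; renewing a few
// seconds before that avoids ever hitting the "Key has expired" path.
// RENEW_MARGIN_SECONDS is a hypothetical tuning knob -- make it larger
// than the worst-case delay for the session write to land in Redis.
var RENEW_MARGIN_SECONDS = 10;

function dueForRenewal(session, nowEpochSeconds) {
    return session.expires - nowEpochSeconds < RENEW_MARGIN_SECONDS;
}

// e.g. in the middleware, before the gateway's own expiry check runs:
var now = Math.floor(Date.now() / 1000);
var session = { expires: now + 5 };
dueForRenewal(session, now); // true: only 5s left, renew now
```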

So the write back to Redis is async, is what you are saying?

So… the issue we are seeing is that we have a client whose last request was > 24 hours ago.

It makes a request, the session check fails (expired), a new session is created and written back via TykSetKeyData, the x-internal-authorization header is set, and my auth plugin exits.

The old session is still fetched by the auth framework in Tyk and still shows as expired (a stale record).

So if I adjust my comparison on session.expires, for a session fetched via TykGetKeyData, to be less than the actual intended expiry, I still don’t see how this helps. (I can see it working for sessions that are still within the expiry window, but not for already-expired ones.)

Or are you referring to the session_lifetime property on the API definition in combination with this expires attribute?
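For context, session_lifetime sits at the top level of the API definition (the values here are illustrative, reusing the api_id from the logs above); as I understand it, it controls how long the session record itself lives in Redis, independently of the expires set on the key:

```
{
    "name": "/xx/my/auth/1.0",
    "api_id": "3b30b68340b8492858ea00be8c9b248e",
    "session_lifetime": 120
}
```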

Right, you can try locking it down using this setting:

optimisations_use_async_session_write: false

(it usually defaults to true); this might fix the slow write.

@Martin I have set my tyk.conf with the following:

"local_session_cache": {
       "disable_cached_session_state": true
},
"optimisations_use_async_session_write": false

but the problem still exists. It always says, Key has expired, please renew

@Martin I think I found the reason for this situation. I use auth0 with Tyk, and I think there is a timezone difference between the Tyk servers I installed and auth0. auth0 issues keys using its own timezone, and the default expiry time in Tyk is 1 hour, so a key can be expired even if you just logged in to the system. When I changed the default expiration time to 24 hours, it started to work. At least it works in my case.

Update: the JWT expiry time should be the same on both Tyk and the provider (in my case, auth0). You will get the same error even if you define a wider expiration time on Tyk.
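The clock-difference failure mode above can be sketched as a simple check of the JWT exp claim against the local clock (the function name and leeway value are mine, not Tyk’s; Tyk’s actual validation logic may differ):

```javascript
// Compare a JWT `exp` claim (epoch seconds) against the gateway's
// clock, allowing some leeway for skew between the issuer (e.g.
// auth0) and the gateway. If the two clocks disagree by more than
// the leeway, a freshly issued token can already look expired here.
var CLOCK_SKEW_LEEWAY_SECONDS = 30;

function jwtExpired(expClaim, nowEpochSeconds) {
    return expClaim + CLOCK_SKEW_LEEWAY_SECONDS < nowEpochSeconds;
}

// A token issued "now" by a server whose clock is far ahead of the
// issuer's will fail this check immediately, matching the symptom
// of a just-issued key being rejected as expired.
```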

There will be two expiration checks: one on the JWT itself and one on the internal session associated with the sub (owner) of the JWT.

It might be that the JWT is not expired but the user is.

Sorry - that was vague…

The way to fix this would be to get rid of the trial period on the policy (never expires); then the key will only be bounced when the JWT expires, and a new JWT will simply continue to work, retaining the quota / rate-limit data from the previous JWT.