How is Tyk doing response caching?

Imported Google Group message. Import Date: 2016-01-19 21:06:38 +0000.
Sender: Khirod Kant Naik.
Date: Tuesday, 17 February 2015 07:36:06 UTC.

Okay, we can cache an endpoint, and from what I can see in the code I'm guessing it only caches the responses that match those endpoints. What we want to add is caching a response only if it contains a caching header set to true. We also want to set a TTL in a header to specify how long the response remains cached.

Of course, when a request arrives we still have to check for its response in the cache, and only proxy the request on a cache miss. For a POST request we should skip the cache lookup, but that shouldn't be a problem since the code currently only caches safe requests. Any suggestions?
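For context, a minimal Go sketch of the lookup side of that flow might look like the following; the store type and key scheme here are placeholders, not Tyk internals:

```go
package cachesketch

import (
	"net/http"
	"sync"
	"time"
)

// entry is a cached response body plus its expiry time.
type entry struct {
	body    []byte
	expires time.Time
}

// store is a minimal in-memory cache keyed by method + request URI.
type store struct {
	mu sync.Mutex
	m  map[string]entry
}

// get returns a cached body if it exists and has not expired.
func (s *store) get(key string) ([]byte, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	e, ok := s.m[key]
	if !ok || time.Now().After(e.expires) {
		return nil, false
	}
	return e.body, true
}

// lookup mirrors the flow described above: only safe methods are checked
// against the cache, everything else (e.g. POST) is proxied unconditionally.
func lookup(s *store, r *http.Request) ([]byte, bool) {
	if r.Method != http.MethodGet && r.Method != http.MethodHead {
		return nil, false // unsafe method: skip the cache and proxy
	}
	return s.get(r.Method + ":" + r.URL.RequestURI())
}
```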

Imported Google Group message.
Sender: Martin Buhr.
Date: Tuesday, 17 February 2015 08:36:52 UTC.

Hi Khirod,

You're correct, the cache in Tyk is quite naive (the branch was actually called naive-cache); it basically just caches the response as specified in the endpoint path spec. So at the moment caching is quite global and controlled downstream.

It should be possible to extend the caching system to match against a path spec (as it does now), but add an option to the API Definition to specify an only_cache_when header map. This would basically allow you to specify cached paths (which activates the middleware), but only cache when your upstream system specifies it (the same would go for the TTL header). For simplicity I would suggest these go into a global options configuration rather than being set per path spec (that's possible, but trickier).
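As an illustration, such a global option block might look something like the sketch below; the field and header names are purely hypothetical and do not reflect anything that exists in the API Definition:

```go
package cachesketch

// CacheOptions is a hypothetical global options block of the kind described
// above; none of these fields existed in tykcommon at the time of writing.
type CacheOptions struct {
	EnableUpstreamControl bool   `json:"enable_upstream_cache_control"` // hand cache decisions to the upstream
	CacheEnableHeader     string `json:"cache_enable_header"`           // header that must be "true" (or "1") to cache
	CacheTTLHeader        string `json:"cache_ttl_header"`              // header carrying the TTL in seconds
}
```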

I'll put this on the roadmap - I'm always looking for ways to improve the cache, and putting the option to enable control upstream makes sense and adds significant flexibility.

Getting this to work would involve extending the APIDefinition object in the tykcommon library to include some sort of bool to enable upstream control, plus two options specifying the header keys to look for regarding the TTL and enabling the cache (as well as its truth value, i.e. 1 or true or whatever), then modifying the cache middleware to check those spec options and either build the cache entry (with the TTL) or just proxy. It wouldn't be too onerous to set up.
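A rough sketch of what that middleware check could boil down to, reusing the hypothetical CacheOptions above (shouldCache and the header semantics are assumptions, not existing Tyk code):

```go
package cachesketch

import (
	"net/http"
	"strconv"
	"time"
)

// shouldCache inspects an upstream response against the hypothetical
// CacheOptions above and returns the TTL to cache it for, if caching is allowed.
func shouldCache(opts CacheOptions, resp *http.Response) (time.Duration, bool) {
	if !opts.EnableUpstreamControl {
		return 0, false // upstream control disabled: keep existing behaviour
	}
	// The upstream must explicitly opt in via the configured header.
	if v := resp.Header.Get(opts.CacheEnableHeader); v != "true" && v != "1" {
		return 0, false
	}
	// The TTL is read from the configured header, in seconds.
	ttl, err := strconv.Atoi(resp.Header.Get(opts.CacheTTLHeader))
	if err != nil || ttl <= 0 {
		return 0, false
	}
	return time.Duration(ttl) * time.Second, true
}
```

The middleware would call something like this once the upstream response is available: a true result means store the body with the returned TTL, otherwise just pass the response through.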

But apart from direct modification of the code, there’s no workaround at the moment for upstream cache control :-/

Thanks,
Martin
