URL-based Throttling as in RESTify


#1

Imported Google Group message. Original thread at: https://groups.google.com/forum/#!topic/tyk-community-support/1TabOLukEdM Import Date: 2016-01-19 21:04:41 +0000.
Sender: Khirod Kant Naik.
Date: Sunday, 8 February 2015 13:35:19 UTC.

I wanted to check whether it is possible to throttle requests on individual API endpoints.
Basically, our use case is that we want to limit the number of calls on different API endpoints and queue any additional requests so that they can be handled later. Any suggestions …


#2

Imported Google Group message.
Sender: Martin Buhr.
Date: Monday, 9 February 2015 08:59:16 UTC.

Hi Khirod,

Documentation on internals: Not yet, we’re working on it, but we’ve been focusing on getting the end-user docs right first :-/

RateLimitExceeded Event: Yes, this is the event that gets fired when a rate limit is hit by a specific key. You can also script your own event handlers in JavaScript now too, for more complex interaction with an external reporting API (such as an ESB).

In order to get a copy of the outbound request, you’d need to extend the EVENT_RateLimitExceededMeta struct in event_system.go with an additional field, OutboundReq []byte, into which you could copy the request (using a byte buffer and the Write() method). This would give you the wire-protocol version of the request as a string (http://golang.org/pkg/net/http/#Request.Write), so you would need to parse it back into an HTTP request in your middleware if you wanted to redirect it. Alternatively, you could just encode the request and pass it as-is to your ESB and let your worker handle decoding and parsing the raw request data, or copy the salient http.Request fields into a custom struct so that you have dot-notation access to them in the JS event handler.

I think having the raw request encoded in something like Base64 would be the best thing to do, as it means that there is no data loss and it leaves handling it up to the integrator.

I’ll put this on the roadmap; it should be a quick change. We could even make it global, since most events in the system currently hang off the middleware, which means copying the request in would benefit all of them.

Cheers,
Martin

On Monday, February 9, 2015 at 7:03:10 AM UTC, Khirod Kant Naik wrote:
And the throttling system event we are talking about is the RateLimitExceeded event, as written in the documentation, right?


#3

Imported Google Group message.
Sender: Martin Buhr.
Date: Sunday, 8 February 2015 13:56:27 UTC.

Hi Khirod,

The throttler in Tyk operates across the whole API definition, which means any API endpoints it manages within that definition are throttled the same way.

To get something like this working you could use some clever redirects though:

For each endpoint, create a separate API definition. This means each definition will need to listen on a new path (multiple definitions can’t listen on the same inbound URL). However, you could then use NGINX to proxy traffic upstream according to your existing API endpoint configuration. This way the interface to your API stays the same across the board, but Tyk treats inbound requests as different definitions and therefore applies different throttlers to those endpoints.
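A minimal sketch of the NGINX side (hostnames, ports and listen paths are all illustrative assumptions, not from an actual setup):

```nginx
# Each public endpoint is proxied to a separate Tyk API definition,
# each listening on its own path, so each gets its own throttler.
upstream tyk_gateway {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name api.example.com;

    # Public /users endpoint -> Tyk definition listening on /users-api/
    location /users/ {
        proxy_pass http://tyk_gateway/users-api/;
    }

    # Public /orders endpoint -> Tyk definition listening on /orders-api/
    location /orders/ {
        proxy_pass http://tyk_gateway/orders-api/;
    }
}
```

Clients keep calling one hostname; only NGINX knows that each path maps to a different definition behind the gateway.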

It’s a bit complicated, and could mean having many definitions, which makes the configuration harder to manage, but you will get the result you want.

For the second part of your request, I’m not sure we can help with anything out of the box: all throttled requests are blocked, so redirecting them into a queue isn’t possible. You could, however, develop a custom event handler that handles the throttle event and is passed the raw request. Currently event handlers only get limited metadata about the event (path, key, event), but it should be possible to pass the entire request data through, though you run the risk of having many long-lived goroutines and potentially flooding your queue / ESB.

It’s an interesting idea to make the request available to the event handler, I’ll add it to the roadmap as it could be useful to others.

Hope that helps :)

Martin

On Sunday, February 8, 2015 at 1:35:19 PM UTC, Khirod Kant Naik wrote:
I wanted to check whether it is possible to throttle requests on individual API endpoints.
Basically, our use case is that we want to limit the number of calls on different API endpoints and queue any additional requests so that they can be handled later. Any suggestions …


#4

Imported Google Group message.
Sender: Khirod Kant Naik.
Date: Monday, 9 February 2015 06:05:19 UTC.

Okay! That resolved my doubts. Is there any documentation for learning more about the internals of Tyk? :)


#5

Imported Google Group message.
Sender: Khirod Kant Naik.
Date: Monday, 9 February 2015 07:03:10 UTC.

And the throttling system event we are talking about is the RateLimitExceeded event, as written in the documentation, right?

