Tyk SSE holding streaming data

Hi,

We are currently running the open source Tyk gateway on Kubernetes and are trying to proxy an upstream API that uses SSE. When we connect to this upstream API through our Tyk gateway, the gateway does not pass the stream back to the client immediately. Instead, it seems to withhold the stream until the upstream service has finished, and only then releases all of the data to the client at once.

We have tried setting the flush interval to its lowest value, but this had no effect on the gateway's behaviour. We also tried disabling OTel and setting both close_connections and proxy_close_connections to false, but the SSE problem persists.
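For anyone hitting the same wall, these are the two settings we believe matter for SSE: a non-buffering flush interval in the gateway config, and caching disabled on the API definition. The field names below are from the Tyk config schema as we understand it; please verify them against the docs for your gateway version.

```json
{
  "http_server_options": {
    "flush_interval": 1
  }
}
```

And in the API definition, make sure response caching is off:

```json
{
  "cache_options": {
    "enable_cache": false
  }
}
```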

Any help would be appreciated.

Nik

We have found an issue in the implementation of Tyk streaming (SSE) which causes caching not to be disabled on streaming requests. This causes the gateway to withhold the stream until it finishes.


Thanks for reporting this. I can replicate the issue on v5.3.x.

Seems like it works fine on v5.0.12 and v5.2.6.

An internal ticket has been created and linked to the github issue raised.

I’ll reply to this topic as soon as I get an update on the ticket.

Hi Olu, just following up on this issue. Has there been any progress made here? I have a similar use case in mind.

@jay039 Hello and welcome to the community :partying_face:

I can see a fix was pushed upstream: [TT-12318] SSE streaming is broken by titpetric · Pull Request #6391 · TykTechnologies/tyk · GitHub. However, tests are still being run to check its stability.

There should be an update in the coming weeks. The fix should ship in releases v5.3.3 and v5.5.0.

Hello,
I’ve worked on this topic following up on this ticket: https://community.tyk.io/t/multipart-streaming-mode/7264

I ended up forking Tyk to apply my modifications.
This fork of Tyk Gateway 5.2.6 supports streaming by modifying the condition in reverse_proxy.go so that it also matches content types of the form “multipart/*”.
This version can handle large file uploads without memory side effects: the stream is pushed directly to the backend, part by part.

I also modified some code on the on_success / on_error side to avoid buffering the payload.

I'm available if you want more information.

PS: I haven’t pushed a pull request yet.

@Alexandre_Wintrebert I missed your comment. Would love to see the PR!