Hybrid: panic stack traces, container dies

Noticed one of our hybrid containers was reporting this over and over in the logs, then it died:

goroutine 102386537 [chan send, 4 minutes]:
main.(*DynamicMiddleware).ProcessRequest.func1.1(0xc4785a7ce0)
/src/github.com/TykTechnologies/tyk/plugins.go:181 +0x6a
main.(*DynamicMiddleware).ProcessRequest.func1(0xc4785a7ce0, 0xc45b38d4e0, 0xc428b2f7d0, 0xf, 0xc429748700, 0x34e, 0x646, 0xc4267dd680, 0x222, 0x24a, …)
/src/github.com/TykTechnologies/tyk/plugins.go:185 +0x1e6
created by main.(*DynamicMiddleware).ProcessRequest
/src/github.com/TykTechnologies/tyk/plugins.go:185 +0x109f

goroutine 102393883 [chan send, 4 minutes]:
main.(*DynamicMiddleware).ProcessRequest.func1.1(0xc4507a1f80)
/src/github.com/TykTechnologies/tyk/plugins.go:181 +0x6a
main.(*DynamicMiddleware).ProcessRequest.func1(0xc4507a1f80, 0xc46c2365c0, 0xc428b2f7a8, 0x7, 0xc45abf6e00, 0x379, 0x64c, 0xc45fd1e500, 0x222, 0x24a, …)
/src/github.com/TykTechnologies/tyk/plugins.go:185 +0x1e6
created by main.(*DynamicMiddleware).ProcessRequest
/src/github.com/TykTechnologies/tyk/plugins.go:185 +0x109f

goroutine 102406513 [chan send, 3 minutes]:
main.(*DynamicMiddleware).ProcessRequest.func1.1(0xc4666e11a0)
/src/github.com/TykTechnologies/tyk/plugins.go:181 +0x6a
main.(*DynamicMiddleware).ProcessRequest.func1(0xc4666e11a0, 0xc45d477e90, 0xc428b2f7d0, 0xf, 0xc467a9d800, 0x34e, 0x646, 0xc45b5a8000, 0x222, 0x24a, …)
/src/github.com/TykTechnologies/tyk/plugins.go:185 +0x1e6
created by main.(*DynamicMiddleware).ProcessRequest
/src/github.com/TykTechnologies/tyk/plugins.go:185 +0x109f

Hello!

Thank you for the report!

Can you please clarify which version you are using?

Hybrid version 2.3.8

Any further information on this?

No direct info, but this error is emitted while Tyk tries to recover from a middleware crashing out (the JSVM crashing) - it attempts to recover and, failing that, terminates the process immediately.

So the issue may be with what the middleware itself is trying to do - is there any more to this stack trace (e.g. where it began)? Otherwise it might be worth investigating the middleware function itself.
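For context on the trace shape: "[chan send, 4 minutes]" means each of those goroutines has been blocked for minutes trying to send on a channel nothing is receiving from, which is the classic signature of a goroutine leak when the caller gives up (e.g. times out) before the worker finishes. Here is a minimal sketch of that failure mode and one common fix - the names and timeouts are illustrative, not Tyk's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// runPluginLeaky mirrors the leaking shape: if the caller times out and
// stops receiving, the worker goroutine blocks forever on `results <- ...`
// and shows up in stack dumps as "[chan send, N minutes]".
func runPluginLeaky() (string, error) {
	results := make(chan string) // unbuffered: send blocks until received
	go func() {
		results <- slowPlugin() // leaks if nobody ever receives
	}()
	select {
	case r := <-results:
		return r, nil
	case <-time.After(100 * time.Millisecond):
		return "", fmt.Errorf("plugin timed out") // worker above is now stuck
	}
}

// runPluginSafe uses a buffered channel so the send always completes,
// letting the worker goroutine exit even when the caller has given up.
func runPluginSafe() (string, error) {
	results := make(chan string, 1) // buffer of 1: send never blocks
	go func() {
		results <- slowPlugin()
	}()
	select {
	case r := <-results:
		return r, nil
	case <-time.After(100 * time.Millisecond):
		return "", fmt.Errorf("plugin timed out")
	}
}

func slowPlugin() string {
	time.Sleep(200 * time.Millisecond) // stand-in for JSVM middleware work
	return "ok"
}

func main() {
	fmt.Println(runPluginLeaky())
	fmt.Println(runPluginSafe())
}
```

If enough of these blocked goroutines accumulate (one per slow request), memory use grows until the container dies, which would match the behaviour described above.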

Haven’t had it happen since, but shouldn’t the middleware execution be firewalled/sandboxed so that a failure there can’t kill the entire gateway?

It is sandboxed: the idea is that we catch the panic with a deferred recover, but if there’s a memory leak, or the failure recurs over a long period, some other aspect of it may still kill the gateway.
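The usual Go pattern for that sandboxing is a deferred recover() around the plugin invocation, which turns a panic into an error the gateway can log instead of dying. A minimal sketch, with illustrative names rather than Tyk's actual implementation:

```go
package main

import "fmt"

// callPlugin wraps a middleware invocation so a panic inside it is
// converted into an error instead of crashing the whole gateway process.
func callPlugin(fn func() string) (result string, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("plugin panicked: %v", r)
		}
	}()
	return fn(), nil
}

func main() {
	// A panicking plugin is contained...
	if _, err := callPlugin(func() string { panic("JSVM blew up") }); err != nil {
		fmt.Println("recovered:", err)
	}
	// ...but recover() cannot help with goroutine leaks or with fatal
	// runtime errors such as "concurrent map iteration and map write",
	// which terminate the process regardless of any deferred recovery.
	out, _ := callPlugin(func() string { return "ok" })
	fmt.Println(out)
}
```

That last point is why sandboxing alone isn't a complete answer: leaks and fatal runtime errors bypass recover() entirely, as in the related issue below.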

Another one: panic/die: fatal error: concurrent map iteration and map write (TykTechnologies/tyk#1216)