Tyk Pump Panic with "stack exceeds 1000000000-byte limit"

Every now and then, my Pump instance running in Docker crashes with this panic.

Any ideas? I purge to MongoDB every 2 seconds.

You may want to run more than one purger. Also, this usually only happens when the purger runs rampant (with a purge delay of 0); perhaps you have misspelled the setting?

Well, I have a purge delay of 2. Maybe I should try it with 1 then? :confused:

    "analytics_storage_type": "redis",
    "analytics_storage_config": {
        "type": "redis",
        "host": "redis",
        "port": 6379,
        "hosts": null,
        "username": "",
        "password": "xxxx",
        "database": 0,
        "optimisation_max_idle": 100,
        "optimisation_max_active": 0,
        "enable_cluster": false
    },
    "purge_delay": 2,
    "pumps": {
        "mongo": {
            "name": "mongo",
            "meta": {
                "collection_name": "tyk_analytics",
                "mongo_url": "mongodb://mongo:27017/tyk_analytics"
            }
        },
        "elasticsearch": {
            "name": "elasticsearch",
            "meta": {
                "index_name": "tyk_analytics",
                "elasticsearch_url": "liwnd070.my-tts.net:9200",
                "enable_sniffing": "false",
                "document_type": "tyk_analytics"
            }
        }
    },
    "uptime_pump_config": {
        "collection_name": "tyk_uptime_analytics",
        "mongo_url": "mongodb://mongo:27017/tyk_analytics"
    },
    "dont_purge_uptime_data": false

Your `optimisation_max_active` is set to 0, so essentially your purger has no Redis connections to use to do the purge; I reckon bumping this to a higher value should solve it.
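For reference, a sketch of the relevant `analytics_storage_config` section with the connection cap raised. The value 200 here is just an illustrative choice, not an official recommendation; the point is that `optimisation_max_active` should be a positive number so the purger can actually acquire connections from the pool:

```json
"analytics_storage_config": {
    "type": "redis",
    "host": "redis",
    "port": 6379,
    "optimisation_max_idle": 100,
    "optimisation_max_active": 200,
    "enable_cluster": false
}
```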


Hi @Lukas_Zaiser,

Did this fix the issue for you? We noticed there was a bug in the last 0.4 release that meant it wasn’t picking up configurations properly, if you update to the latest version it should be ok now.


I will try the update. The Pump runs properly but crashes about once a week. Since the container restarts automatically we don’t really notice it, but it’s still not ideal.

There’s been another update recently (0.4.2) which may address this.