Tyk Pump not working

I’m running tyk-gateway as the installed (non-Docker) version. Tyk-gateway and tyk-pump are installed on the same instance.

I’m trying to send the data to Moesif but have had no luck.

This is my pump.conf (the application_id is not the real one):

{
   "analytics_storage_type": "redis",
   "analytics_storage_config": {
       "type": "redis",
       "host": "localhost",
       "port": 6379,
       "hosts": null,
       "username": "",
       "password": "",
       "database": 0,
       "optimisation_max_idle": 100,
       "optimisation_max_active": 0,
       "enable_cluster": false,
       "redis_use_ssl": false,
       "redis_ssl_insecure_skip_verify": false
   },
   "log_level": "debug",
   "log_format": "text",
   "purge_delay": 1,
   "health_check_endpoint_name": "hello",
   "health_check_endpoint_port": 8083,
   "pumps": {
       "moesif": {
           "name": "moesif",
           "meta": {
             "application_id": "eyJhcaiOiIzOTE6MjE0IiwidmVysdfoi4234wIiw432b3JnIjoiMTYzOjE3OSIsImlhdCI6MTYzMDQ1NDQwMH0._NDy3XQHMRnHBACQM-p5j8DyehobD1nFQgzXXHSAYYs"
           }
         }
   },
   "dont_purge_uptime_data": false,
   "omit_detailed_recording": false,
   "max_record_size": 1000
}

This is my analytics-related configuration inside tyk.conf:

 "storage": {
        "type": "redis",
        "host": "localhost",
        "port": 6379,
        "username": "",
        "password": "",
        "database": 0,
        "optimisation_max_idle": 2000,
        "optimisation_max_active": 4000
    },
    "enable_analytics": true,
    "analytics_config": {
 	    "type": "",
    	"ignored_ips": [],
        "enable_detailed_recording": true
    },

When I look at the logs, everything seems OK:

Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=info msg="## Tyk Analytics Pump, 1.4.0 ##"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=debug msg="Connecting to redis cluster"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=debug msg="Creating new Redis connection pool"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=info msg="--> [REDIS] Creating single-node client"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=debug msg="[STORE] SET Raw key is: pump"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=debug msg="Input key was: version-check-pump"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=debug msg="[STORE] Setting key: version-check-pump"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=debug msg="Input key was: version-check-pump"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=info msg="Serving health check endpoint at http://localhost:8083/hello ..."
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=debug msg="Checking default Moesif Pump env variables with prefix TYK_PMP_PUMPS_MOESIF_META"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=info msg="Moesif Pump Initialized"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=info msg="Init Pump: moesif"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=info msg="'dont_purge_uptime_data' set to false, attempting to start Uptime pump! MongoDB Pump"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=info msg=Init collection_name= url=
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=debug msg="Trying to set uptime pump with PMP_MONGO env vars"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=info msg="-- No max batch size set, defaulting to 10MB"
Sep 15 10:47:49 tyk-pump[28672]: time="Sep 15 10:47:49" level=info msg="-- No max document size set, defaulting to 10MB"

But I’m not receiving anything on the Moesif side. I need some help with this. Thanks.

Hi @javiertc, can you share the version of the gateway and pump you are using?

Hi @Olu, I just changed "dont_purge_uptime_data" from false to true and now it works. The documentation is not very clear about what this parameter does, and I wonder why the examples have it set to false.
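For reference, the only change was this one key in pump.conf; everything else stayed exactly as posted above:

    "dont_purge_uptime_data": true,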


That’s great! Happy to hear it now works. I tried it on my end and that fixed the issue.

Seems the issue was right under our noses in the logs.
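For anyone who hits this later: the log lines above point at the cause. With dont_purge_uptime_data set to false, the pump attempts to start the MongoDB uptime pump, but it is initialised with an empty collection name and URL (the "Init collection_name= url=" line). If uptime purging is actually wanted instead of disabling it, pump.conf would need an uptime pump section roughly like the sketch below. This is based on the standard pump.conf example; the Mongo URL and collection name are placeholders, so double-check the keys against the Tyk Pump docs for your version:

    "dont_purge_uptime_data": false,
    "uptime_pump_config": {
        "collection_name": "tyk_uptime_analytics",
        "mongo_url": "mongodb://localhost:27017/tyk_analytics"
    }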