Detailed Request Logging Not Working


#1

Hello, I’ve enabled detailed request logging. My analytics_config looks like:

"enable_analytics": true,
  "analytics_config": {
    "type": "mongo",
    "csv_dir": "/tmp",
    "mongo_url": "localhost",
    "mongo_db_name": "tyk_analytics",
    "mongo_collection": "transactions",
    "purge_delay": 100,
    "ignored_ips": [],
    "enable_detailed_recording": true,
    "enable_geo_ip": false,
    "geo_ip_db_path": "",
    "normalise_urls": {
          "enabled": true,
          "normalise_uuids": true,
          "normalise_numbers": true,
          "custom_patterns": []
      }
}

But every time I try to pull up logs in the Dashboard, I don’t see anything, and there are no errors server-side.

EDIT:

There is an error in the log if you wait long enough. It seems it can’t connect, but the database does exist; I’ve checked.

Jan 11 18:25:11 vastyk tyk-analytics[58128]: time="Jan 11 18:25:11" level=error msg="Something went wrong, couldn't get Log data: read tcp 127.0.0.1:45096->127.0.0.1:27017: i/o timeout"
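That i/o timeout suggests the Dashboard can’t reach MongoDB in time. As a quick sanity check (a sketch assuming MongoDB is on 127.0.0.1:27017 as in the error, with the database name from the config above), you can test connectivity from the same host with the mongo shell:

```shell
# Ping mongod on the address shown in the error message (assumed defaults).
mongo --host 127.0.0.1 --port 27017 --eval 'printjson(db.runCommand({ ping: 1 }))'

# Confirm the analytics database is reachable and list its collections.
mongo --host 127.0.0.1 --port 27017 tyk_analytics --eval 'printjson(db.getCollectionNames())'
```

If the ping itself hangs or times out, the problem is connectivity or load on mongod rather than anything in the Tyk config.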


#2

Hi dmtroll,

I think I’m going to need further details in order to investigate this issue. Is the error that you’ve posted still the only output in your logs, or have you received any subsequent errors since then? If not, could you please confirm that the services for the Gateway, the Dashboard and the Pump are running on your server? Can you also confirm whether Redis and MongoDB are running, and that your Dashboard has been configured with the correct host name (details regarding the installation process can be found here)?

Kind regards,
Jess @ Tyk


#3

Hi Jess,

I have the same issue as dmtroll has.


#4

Hi Eric,

Are the settings in your tyk.conf file the same as they were when you raised that issue last Friday? If so, I noticed that the enable_analytics option was set to false which might explain this error in your case. If setting this to true and restarting Tyk doesn’t fix the issue, there might be a problem with your Redis connection.

Kind regards,
Jess @ Tyk


#5

Hi Jess,

Nope, unfortunately it’s set to true.

I’ll focus on monitoring the Redis logs then and let you know if I find something strange.
Thanks!


#6

Just to get an idea of the process: logs end up in Redis first and then get written to MongoDB, right?

I noticed mongo using a lot of virtual memory:

Last week I did some performance testing and might have flooded MongoDB at some point.

  • Is there a way to see whether Mongo is hitting limits?
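One rough way to check this yourself, as a sketch using standard mongo shell commands (the database and collection names here are taken from the config in post #1):

```shell
# Resident vs. virtual memory used by mongod, in MB.
mongo --eval 'printjson(db.serverStatus().mem)'

# Data size and index footprint of the analytics collection.
mongo tyk_analytics --eval 'printjson(db.transactions.stats())'
```

If the collection’s size plus its indexes is larger than the RAM on the box, range queries over it will have to page in from disk, which lines up with the timeouts described below.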

#7

Hi Eric,

Yes, data from the Gateway is initially stored in Redis. It is then pushed into MongoDB by the Pump, and the Dashboard uses that analytics data when outputting logs.
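If you want to confirm that records are actually reaching Redis before the Pump moves them on, a quick check with redis-cli can help. Note the list key name below is the usual default for the Gateway’s analytics queue, so treat it as an assumption and verify it against your own instance:

```shell
# Number of analytics records currently queued in Redis awaiting the Pump.
redis-cli LLEN analytics-tyk-system-analytics

# Peek at one raw record without removing it from the queue.
redis-cli LRANGE analytics-tyk-system-analytics 0 0
```

A count that keeps growing suggests the Pump isn’t draining the queue; a count that stays at zero while traffic flows suggests the Gateway isn’t recording at all.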

An issue in MongoDB could potentially explain this. I’m not sure we would be able to help you with performance monitoring in MongoDB specifically (I think there are a few third-party tools available that could help with this). It may be worth adding more RAM to your instance, however. We also usually recommend that the server MongoDB is installed on use an SSD for improved performance.

Kind regards,
Jess @ Tyk


#8

Hi Jess,

We are using a SAN which has some SSDs in it.
I’ll check with systems engineering whether we’re allowed to use them.

Thanks!


#9

What is likely to have happened here is that the data was recorded and was sent to your MongoDB instance.

However, when MongoDB runs a query over a range, it will try to load the entire working set into memory. If there is not enough RAM for the query, it will take a long time and potentially time out.

Since detailed logging generates massive amounts of data, your tyk_analytics collection will balloon and consume most of your available RAM, causing even simple queries to fail.

Now, the tyk_analytics collection where this data is stored is only used for complex queries and the log browser, not for the aggregate high-level views used in other parts of the Dashboard.

This means that the collection can be safely converted to a capped collection, which will basically turn it into a circular buffer where the oldest records are discarded as the cap is reached.

Converting to a capped collection means that you will no longer see the timeouts, as long as the cap is smaller than your available RAM.

To convert to a capped collection is pretty easy:

https://docs.mongodb.com/manual/reference/command/convertToCapped/
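As a sketch of the command itself (the collection name here follows this post; the config in post #1 sets mongo_collection to "transactions", so substitute whatever your mongo_collection actually is — and the 1 GiB size is purely illustrative; pick a cap smaller than your RAM):

```shell
# Convert the analytics collection in place to a capped collection.
# "size" is the cap in bytes; 1073741824 = 1 GiB, chosen only as an example.
mongo tyk_analytics --eval 'printjson(db.runCommand({ convertToCapped: "tyk_analytics", size: 1073741824 }))'
```

Run this during a quiet period: the conversion holds a lock on the database while it copies documents into the new capped collection.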

M.