I updated my Docker images from 2.2 to 2.3.
I also did an update of the redis image.
I have all my configs and the redis data as VOLUME on my host system,
so if I delete all the containers and rebuild them, this should have no effect on the data.
After updating to 2.3, my old API keys were invalid. After creating new ones, it worked again.
Is this expected behaviour, e.g. because of the redis update?
That’s quite worrying, and we’ll investigate. Tokens should be working the same as before; there should be no difference between versions.
Can you confirm that there are still tokens in your redis instance (ones that are not new)?
If redis did not have persistence configured, then a restart will lose all your data; if persistence was configured, then the keys may be incompatible for some reason.
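For anyone hitting the same thing: a minimal sketch of what persistence looks like in redis.conf (these are stock Redis directives; adjust paths and thresholds to taste). Either RDB snapshots or the append-only file needs to be enabled for keys to survive a container restart, and in Docker the data directory must point at the mounted volume:

```conf
# redis.conf -- enable persistence so keys survive restarts
appendonly yes      # append-only file (AOF) persistence
dir /data           # where AOF/RDB files are written; mount this path as a volume
save 900 1          # RDB snapshot: after 900s if at least 1 key changed
```

If `dir` points at a path inside the container's writable layer rather than the volume, the data is still lost when the container is deleted, even with persistence on.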
We just ran an upgrade scenario from 2.2 to 2.3 where a token was generated in 2.2 and then used in 2.3 with the same redis DB (this is not dockerised, this was a local installation from our package cloud - the same binaries running in the containers). The scenario showed that tokens continued to be valid across binary versions of the gateway - so I suspect that redis dropped your data when you restarted the container.
Thanks for your investigation! I will check my redis configuration and data store now.
I updated the tyk dashboard and gateway to the most recent versions. All the details are there, but when I use old API keys I get an invalid API key error.
As per the previous discussion, I checked the redis server data. There I could see many keys (e.g. apikey-bba1234), and it also has API, organisation, user quota and other related data.
Any help would be highly appreciated.
What version did you upgrade from, and did you back up your tyk.conf and dashboard.conf before upgrading?
Dashboard v0.9.7.0 to v1.3.7
Gateway v1.9 to v2.3
Yes, I have copies of both configuration files.
Wow, that’s a very big jump.
I would suggest you transpose your configurations (redis and mongo) into the templates that ship in the install/ directory of both the gateway and the dashboard, as a lot has changed.
Post installation, I executed the setup script with the same redis and mongo details, and it created config files for the dashboard and gateway.
I compared the old configuration files with the current ones; the server details are the same in both (old and new).
Should I post my configuration files here ?
Before you upgraded were you using plaintext keys or hashed keys?
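For context: in recent gateway versions this is controlled by the `hash_keys` flag in tyk.conf (shown here as a minimal, hedged fragment; check your own file for the exact surrounding structure). If the old and new installs disagree on this setting, existing keys will not be found:

```json
{
  "hash_keys": true
}
```

With `hash_keys` enabled, the gateway stores and looks up keys by their hash rather than the plaintext token, so flipping this value mid-life invalidates every previously issued key.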
Is there any log data that you can share from when the request gets bounced?
Also, can you edit old keys in the dashboard?
Here is the log generated:
[Jul 20 05:13:08] WARN auth-mgr: Key not found in storage engine err=Key not found inbound-key=****d7ab
[Jul 20 05:13:08] INFO Attempted access with non-existent key. key=56499d8fcaa45b2d050000015efa83921e51234dr345f345ffbfd76a origin=xxx.x.x.x path=/search/courses
[Jul 20 05:13:08] ERROR gateway: request error: Key not authorised api_id=f343f3931123424c368712d75b1fe59a3 org_id=12345678fcaa45b2d05000001 path=/courses server_name=http://localhost:8801/ user_id= user_ip=xxx.x.x.x
No, I was not able to edit old keys, but I can generate new keys for a user.
Ok, that’s a bit odd.
Did you compile the old version of tyk yourself?
Yes, we compiled it with some changes; we added one middleware to meet some of our needs.
And that right there is the problem. It’s the murmur hash lib we use: ours is vendored because the original author changed the seed value of the lib (and made it non-configurable), breaking backwards compatibility, which we can’t tolerate.
So basically, if you compiled Tyk with the non-vendored version, it likely generates hashes that are incompatible with the Tyk mainline.
The only option for you is to compile the latest version of Tyk Gateway yourself, but remove the murmur3 library from the vendor list so that you use the same version that your old Tyk was built with.
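To illustrate why a seed change breaks key lookups, here is a from-scratch sketch of the 32-bit x86 MurmurHash3 variant. This is not Tyk’s actual vendored library (Tyk may use a different murmur3 variant and width); the seed values and the `apikey-...` input are arbitrary, just to show the mechanism:

```python
def murmur3_32(data: bytes, seed: int = 0) -> int:
    """MurmurHash3 x86 32-bit; the seed changes every output hash."""
    c1, c2 = 0xCC9E2D51, 0x1B873593
    h = seed & 0xFFFFFFFF
    rounded = len(data) & ~3

    # Body: process 4-byte little-endian blocks.
    for i in range(0, rounded, 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
        h = ((h << 13) | (h >> 19)) & 0xFFFFFFFF
        h = (h * 5 + 0xE6546B64) & 0xFFFFFFFF

    # Tail: up to 3 trailing bytes.
    tail = data[rounded:]
    k = 0
    if len(tail) >= 3:
        k ^= tail[2] << 16
    if len(tail) >= 2:
        k ^= tail[1] << 8
    if len(tail) >= 1:
        k ^= tail[0]
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k

    # Finalisation mix.
    h ^= len(data)
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & 0xFFFFFFFF
    h ^= h >> 16
    return h


# Same token, two different seeds: the stored hash and the lookup hash
# no longer match, so the key is reported as "not found in storage engine".
h_vendored = murmur3_32(b"apikey-bba1234", seed=0)
h_upstream = murmur3_32(b"apikey-bba1234", seed=42)
print(h_vendored != h_upstream)
```

This is exactly the failure mode above: a key stored in redis under the hash from one build of the library can never be found by a gateway binary that hashes the same token with a different seed.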