Redis key storage

Imported Google Group message. Import Date: 2016-01-19 21:08:20 +0000.
Sender: Ben King.
Date: Thursday, 19 March 2015 10:52:46 UTC.

Hi all,

Can I just confirm that the access keys are used in plain text format for the Redis lookup, rather than being hashed?
I was wondering about the security implications of storing unencrypted keys in the Redis store. Are the keys themselves considered non-secure data?

Thanks,

Ben

Imported Google Group message.
Sender: Martin Buhr.
Date: Thursday, 19 March 2015 11:03:53 UTC.

Hi Ben,

That’s correct: they are not hashed before being put into the keystore. A few notes around that:

  1. The key structure is meaningful - it’s orgId+key - which makes it very quick to search for keys (see the sketch below)
  2. It’s assumed that keys are ephemeral and would be refreshed regularly; they aren’t like passwords, so to some degree they aren’t considered secure data
  3. We could probably do a lot more around securing the data store, which we haven’t looked at yet - e.g. basic auth passwords are still stored in plaintext in the session object - so there are things that need improving on that front.

:frowning:
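For illustration, here is a minimal Go sketch of the orgId+key structure mentioned in point 1. The function name and the "apikey-" prefix are assumptions for the example, not the actual Tyk code:

```go
package main

import "fmt"

// Sketch of the lookup-key convention described above: the Redis key is the
// organisation ID concatenated with the raw access key, which keeps
// per-organisation prefix searches cheap. The "apikey-" prefix and function
// name are illustrative only.
func redisKey(orgID, accessKey string) string {
	return "apikey-" + orgID + accessKey
}

func main() {
	fmt.Println(redisKey("53ac07777cbb8c2d53000002", "abc123def456"))
	// Output: apikey-53ac07777cbb8c2d53000002abc123def456
}
```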

Thanks,
Martin

Imported Google Group message.
Sender: Ben King.
Date: Thursday, 19 March 2015 12:07:55 UTC.

Sounds good, Martin - as long as all these points are understood.

I guess the issue is that, with key refresh being a manual process, how frequently would you want to be doing it from a practical perspective…

Thanks!

Imported Google Group message.
Sender: Martin Buhr.
Date: Thursday, 19 March 2015 12:21:31 UTC.

Hi Ben,

Indeed - key refresh is manual, but it can be automated using the REST API; it depends on how Tyk is integrated with your other systems.
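For anyone wanting to script this, a rough sketch of driving the gateway's key API from Go follows. The /tyk/keys/create endpoint, the x-tyk-authorization header and the session fields shown are assumptions based on the gateway REST API; check the docs for your version before relying on them:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// Sketch of automating key refresh: create (or replace) a key by POSTing a
// session definition to the gateway's key API. Endpoint, header and field
// names are assumptions for illustration.
func createKey(gatewayURL, gatewaySecret string) error {
	session := []byte(`{
		"allowance": 1000,
		"rate": 1000,
		"per": 60,
		"expires": 1735689600,
		"org_id": "53ac07777cbb8c2d53000002",
		"access_rights": {}
	}`)

	req, err := http.NewRequest("POST", gatewayURL+"/tyk/keys/create", bytes.NewReader(session))
	if err != nil {
		return err
	}
	req.Header.Set("x-tyk-authorization", gatewaySecret)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	fmt.Println("gateway responded with", resp.Status)
	return nil
}

func main() {
	if err := createKey("http://localhost:8080", "your-gateway-secret"); err != nil {
		fmt.Println("key creation failed:", err)
	}
}
```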

When the portal feature is complete, we can look at having users self-serve key-refreshes (maybe not in the first version, but in later ones).

Needs thinking about, will ponder :slight_smile: Any ideas are welcome

Cheers,
Martin

Imported Google Group message.
Sender: Ben King.
Date: Thursday, 19 March 2015 14:55:10 UTC.

Thanks, Martin.

Unfortunately, I’ve spoken to a security consultant regarding this, and storing keys in plain text may make Tyk a no-go for me.

We could update the Gateway to use hashed keys easily enough, but I’m guessing there’d be quite a few changes required on the Management API side too? :frowning:

Regards,

Ben

Imported Google Group message.
Sender: Martin Buhr.
Date: Thursday, 19 March 2015 17:46:27 UTC.

Hi Ben,

That’s a shame :frowning: but completely understandable.

I did a bit of experimenting and have an experimental branch (experiment/hash) in the Tyk repo, which uses murmur3 to hash the keys as they pass through the storage manager. The tests seem to be passing, and the only non-functional element is the key listing in the dashboard. However, analytics (if you go to the key analytics URL directly) works fine.

However, this breaks the multi-organisational structure of Tyk: since keys are segmented into two parts, you would need to hash orgIDs and keyIDs separately before running the search, which would mean pushing the hashing function up a level into the implementation instead of keeping it in just the Redis storage driver. There’s a way to do it, I just haven’t thought it through yet :slight_smile:
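To make the idea concrete, here is a hedged sketch of hashing the two segments separately so the org prefix stays searchable. The murmur3 package, key layout and function names are illustrative, not the actual Tyk storage code:

```go
package main

import (
	"fmt"

	"github.com/spaolacci/murmur3"
)

// Hash each segment independently so that "all keys for org X" remains a
// cheap prefix scan on the hashed org ID, which a single hash over the whole
// orgID+keyID string would destroy.
func hashSegment(s string) string {
	return fmt.Sprintf("%x", murmur3.Sum32([]byte(s)))
}

func storageKey(orgID, keyID string) string {
	return hashSegment(orgID) + hashSegment(keyID)
}

func main() {
	fmt.Println(storageKey("53ac07777cbb8c2d53000002", "abc123def456"))
}
```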

Will continue investigating, watch this space!

Cheers,
Martin

Imported Google Group message.
Sender: Martin Buhr.
Date: Thursday, 19 March 2015 18:24:00 UTC.

Actually, having thought about it some more: if you are not using the dashboard for key management, everything will work. You can still add keys, view their analytics and settings, and even update them; there is just no listing.

Segmentation and org-level permissions should all still work fine from a security perspective. So actually all that would be required is a change to the UI.

The feature may need to be restructured a bit so that the interface can react when hashed storage is in use.

:slight_smile:

Martin

Imported Google Group message.
Sender: Martin Buhr.
Date: Thursday, 19 March 2015 19:55:06 UTC.

I’ve added this to the roadmap for the next version of Tyk (1.6), as it’s a valid issue considering that these keys need to be secure.

However, and this is a bit annoying I guess, the new portal feature we are adding allows devs to self-serve API keys, and in their dashboard they get to see their usage graphs. The keys that are generated here are stored alongside the developer profile in Mongo and are not hashed (we need them for the analytics lookups as well as some ownership tests).

Which poses a dilemma: we close a potential security hole in the gateway only for it to still exist in the portal and dashboard, since if the database is breached there’s a treasure trove of key data to be exploited.

API keys are also stored alongside analytics data; this would need to be changed to use the hashed representation. The hashed key could then be stored alongside the developer profile, which would make analytics work for the portal (sketched below).

In the dashboard, API key rankings (biggest users, etc.) wouldn’t be possible, as only the hashed key would be available to the admin.
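As a sketch of what that could look like, the analytics record would carry only the hashed token, and the same hash would sit on the developer profile so the two can still be joined. The struct and field names below are assumptions for illustration, not Tyk's actual schema:

```go
package main

import (
	"fmt"
	"time"

	"github.com/spaolacci/murmur3"
)

// Analytics record keyed by the hashed token rather than the raw key, so a
// breached analytics store exposes no usable credentials. Field names are
// illustrative.
type AnalyticsRecord struct {
	APIID     string
	Path      string
	KeyHash   string // murmur3 hash of the token, never the raw token
	Timestamp time.Time
}

func main() {
	token := "raw-token-abc123"
	rec := AnalyticsRecord{
		APIID:     "api-1234",
		Path:      "/widgets",
		KeyHash:   fmt.Sprintf("%x", murmur3.Sum32([]byte(token))),
		Timestamp: time.Now(),
	}
	fmt.Printf("%+v\n", rec)
}
```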

Overall it’s quite a large bit of work to make all the components behave properly, but it’s key to securing the solution so I’m quite eager to solve it.

Any input would be appreciated :slight_smile:

Cheers,
Martin

Imported Google Group message.
Sender: Ben King.
Date: Friday, 20 March 2015 11:50:57 UTC.

Well - I’ve done a bit of architectural re-jigging, and it seems like we have a way to avoid some of the stricter security requirements here, so hopefully we are okay! I am planning to implement additional authentication of our own, which means we are only using the Tyk key as a user identifier, rather than as a secure credential.

It seems to me as though you can’t really treat the key as an authentication credential (i.e. always stored hashed/encrypted at rest) unless it is totally separate and distinct from the concept of the user. The analytics/dashboard management would then be based around a user identifier, with the key merely being an attribute within it (which can therefore always be stored in a hashed state). I guess the tricky bit here is doing so without making the key check more convoluted.

Ben

Imported Google Group message.
Sender: Martin Buhr.
Date: Friday, 20 March 2015 12:00:45 UTC.

Hi Ben,

That is exactly what we are doing :slight_smile:

So far, we’ve done the following:

  1. The gateway will hash all keys as they come in, so they are stored securely.
  2. The gateway will also hash key data that is stored in the DB, so that it is anonymised.
  3. The dashboard “API Keys” UI will be re-factored to use a direct lookup: you type the full key in and request the data, the dashboard requests the key data from the gateway via the API, and the gateway does the Redis lookup via the hashed key, so the unencrypted key is only available within that function and must be known beforehand. This means editing and updating keys is fine so long as they are known.
  4. In the next version we are introducing a portal, and the portal has a concept of “Developers”. Developer records have a map of api-id:key in their record, and we will store the obfuscated key in this map instead of the raw one. Now you have the relationship you are speaking about: user profile -> key-hash -> analytics data. The auth token is never visible or available to anyone, which means it needs to be regenerated if it is lost (because it is now impossible to retrieve).

The best thing is that you can basically still do all key management with the original key, so long as it is known. This means that all integrations can work with raw keys if they need to, and the key is only exposed to the user once and never again.
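A small sketch of the developer-record relationship from point 4, assuming murmur3 for the token hash; struct and field names are illustrative rather than the actual portal schema:

```go
package main

import (
	"fmt"

	"github.com/spaolacci/murmur3"
)

// The portal stores only the hash of each token against the API it belongs
// to, so the profile can be joined to analytics data without the raw token
// ever being persisted.
type Developer struct {
	Email string
	// api-id -> hashed token; the raw token is shown to the user once and
	// never stored, so a lost token has to be regenerated.
	Keys map[string]string
}

func hashToken(token string) string {
	return fmt.Sprintf("%x", murmur3.Sum32([]byte(token)))
}

func main() {
	dev := Developer{
		Email: "dev@example.com",
		Keys:  map[string]string{"api-1234": hashToken("raw-token-shown-once")},
	}
	fmt.Println(dev.Keys["api-1234"])
}
```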

This will make expiring keys extremely important, because otherwise floating keys might exist!

Anyway, in the hash branch all tests are passing now and it’s stable; we’re working on portal integration to make the UI cleaner :slight_smile:

Very exciting stuff.

Thanks again,

Cheers,
Martin

Imported Google Group message.
Sender: Martin Buhr.
Date: Friday, 20 March 2015 16:21:22 UTC.

This feature is now available in master. If you enable “hash_keys” in tyk.conf, all keys will be hashed in the DB. It’s not backwards compatible with existing installs though :slight_smile:
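Conceptually, the flag makes the storage layer hash every token before it is used as a Redis key, which is also why existing installs (whose Redis entries are raw keys) can’t be migrated transparently. A hedged sketch, with illustrative names rather than the actual gateway code:

```go
package main

import (
	"fmt"

	"github.com/spaolacci/murmur3"
)

// Illustrative config flag mirroring "hash_keys" in tyk.conf.
type Config struct {
	HashKeys bool
}

// When hashing is enabled, only the murmur3 hash of the token ever reaches
// Redis; when disabled, the raw token is the Redis key (legacy behaviour).
func storageToken(cfg Config, token string) string {
	if !cfg.HashKeys {
		return token
	}
	return fmt.Sprintf("%x", murmur3.Sum32([]byte(token)))
}

func main() {
	cfg := Config{HashKeys: true}
	fmt.Println(storageToken(cfg, "53ac07777cbb8c2d53000002abc123"))
}
```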

M.

I’m curious why murmur3 was chosen. Correct me if I’m wrong, but the purpose of hashing the API keys is to mitigate the effect of large data breaches (e.g. someone gets read access to your Redis cluster). This is why we hash passwords. As far as I know, murmur is a non-cryptographic hashing function, which means it is not specifically designed to be difficult to reverse.

murmur3 was chosen for speed and the low chance of namespace collisions. Tokens are not passwords, so we don’t treat them as such (passwords in Tyk are hashed using bcrypt).

Since we access tokens constantly, we do not want to lose performance by hashing them with an expensive function on each request.

It would be trivial to replace though, so we’re open to PRs making this a configurable setting.
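To make the trade-off concrete, here is a small comparison, assuming the spaolacci murmur3 package and x/crypto bcrypt; the values are illustrative only:

```go
package main

import (
	"fmt"

	"github.com/spaolacci/murmur3"
	"golang.org/x/crypto/bcrypt"
)

func main() {
	token := "53ac07777cbb8c2d53000002abc123def456"

	// Fast, non-cryptographic: fine as a lookup key for every request, but
	// not designed to resist brute-force reversal of low-entropy inputs.
	fmt.Printf("murmur3: %x\n", murmur3.Sum32([]byte(token)))

	// Deliberately expensive: appropriate for passwords, far too slow to run
	// on every proxied request.
	hash, err := bcrypt.GenerateFromPassword([]byte("secret-password"), bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}
	fmt.Printf("bcrypt:  %s\n", hash)
}
```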

Thought so. You might be right: a cryptographic function might just be too slow for constant access. But it’s good to be aware of all the trade-offs and their implications. Thanks for the answer.

I will give adding a configurable hash function a try. I’ve been diving into the code lately and it seems fairly simple to add this as an option.

As a side note, it somewhat defeats the purpose of hashing if all keys and their associated hashes are logged together. E.g.:
time="Apr 13 10:52:13" level=info msg="Reset quota for key." inbound-key=56fc0a4e38c3015ba4000001e28df584baa4494359075986d3992817 key=quota-5cdb385c.

Is there a way to selectively turn off log prefixes?

Haha, yes, it is an issue, but we thought it would be more valuable for an API owner to be able to view log files and trace activity for token actions than to hash everything there too, since the log data would then be meaningless.

But yeah, that particular log line might need amending; we don’t need to know their quota bucket.

Not that I am aware of; you could change the log level to Error only.