Multiple segmented Tyk groups in same Redis

I get this debug message. Is this an error or not?

time="Aug 17 10:30:25" level=debug msg="AddOrUpdateServer error. Seems like you running multiple segmented Tyk groups in same Redis." error="Node notification from different tag group, ignoring." serverData={9c5909aef86a solo-0926a772-3f07-4c08-b252-d89aab79a3ef 1 0 external}

I have two Tyk OSS gateways (Docker) on different hosts, using the same Redis. The gateways are tagged with different tags, "internal" and "external", for GW sharding.

Should I turn off the GW's "node_is_segmented" if not using the Dashboard?

Hello @bkomac

This is expected. In many cases it’s safe to ignore those messages, but there are a few instances when it isn’t.

Gateways that are tagged should not share a Redis DB as they will conflict with one another’s key operations.
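To illustrate, here is a minimal sketch of a `tyk.conf` fragment for one segmented group, pointing it at its own Redis logical database so its key operations don't collide with another group's. The host, port, `database` index, and tag value are placeholders for this example, not values from the thread:

```json
{
  "storage": {
    "type": "redis",
    "host": "redis.example.internal",
    "port": 6379,
    "database": 1
  },
  "db_app_conf_options": {
    "node_is_segmented": true,
    "tags": ["external"]
  }
}
```

The "internal" group would use the same layout but a different `database` index (e.g. `0`) and `"tags": ["internal"]`, so each tag group flushes its own keyspace.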

If Gateway A and Gateway B serve different APIs but share the same Redis under different tags ("internal" and "external" in your case), you end up with duplicate key operations, as each group ID will have one node flush the MDCB key-operations stack.

That in itself isn't so bad, but it can cause some odd race conditions where key operations overwrite one another. Worse, when you update APIs you can trigger multiple hot reloads, as each group hot-reloads the cluster - and since they share a Redis DB, every gateway across all group IDs will reload multiple times.

For non-MDCB deployments it's OK for Gateways to share the same Redis, whether they are sharded or not.

It's a different situation for an MDCB deployment, where the Dashboard needs a local Gateway for managing keys in the control-plane Redis. For Gateways in the data plane, it's best to create a separate cluster for each shard, since the gateways in a shard are meant to operate together.

Thanks

Ok, thanks. I'm not using MDCB. The same APIs can be on both GWs, but with different keys (different access levels to the API). The GWs are on different domains, with APIs configured for that particular domain (set dynamically with GitOps).