One shared gateway cluster, or multiple per-API setups?

We’ve begun adopting Tyk as our API Gateway and so far have a few setups for different APIs. As we add more, we’re discussing the long-term approach: one shared cluster of gateways for all APIs vs. API-specific setups. My thoughts so far:

One shared:

  • (+) Easier to manage
  • (+) More efficient use of resources
  • (-) If one API gets DDoS’d or otherwise breaks the gateway, all APIs are affected
  • (-) Long term, as more customization/plugins are added, performance may suffer or more resources may be required

Multiple setups, in addition to the inverse of the points above:

  • (+) Gateways can be set up closer to the origin to avoid extra network hops

Is all this true? Anything else to consider? Is there a general best practice?

Note: even if shared, we’d probably still have a few gateways in different clouds/regions/datacenters/environments, but fewer than if each API spun up its own.

Hello @Supergibbs, I’ve seen both models adopted. It really does boil down to trade-offs between:

  • Hosting costs as you scale
  • Operational overhead of managing additional clusters
  • Compliance requirements
  • Performance requirements

If you’re starting out, I’d suggest going with one shared cluster and monitoring API consumption and user feedback on performance. You might then wish to move the busier APIs into a separate cluster dedicated to those APIs only.
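If you do split busier APIs out later, Tyk’s gateway sharding (segment tags) is one way to pin specific APIs to a dedicated set of gateways. Below is a minimal sketch of the relevant tyk.conf fragment for the dedicated gateways in a Dashboard-managed setup; the “high-traffic” tag name is just an example I’m assuming here, and the matching tag would need to be added to those API definitions:

```json
{
  "db_app_conf_options": {
    "node_is_segmented": true,
    "tags": ["high-traffic"]
  }
}
```

With segmentation enabled, those gateways only load APIs carrying a matching tag, so the shared cluster and the dedicated cluster can serve different subsets of APIs from the same Dashboard.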

Are you hooking up Tyk Pump with Prometheus/Elasticsearch and visualizing with Grafana? It would be a good way to start getting insights into trends to shape the future design.
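For reference, a minimal pump.conf sketch with just the Prometheus pump enabled, assuming Redis as the temporary analytics store; the Redis host/port and the listen address are placeholders, and field names can differ between Pump versions, so treat this as a starting point rather than a definitive config:

```json
{
  "analytics_storage_type": "redis",
  "analytics_storage_config": {
    "type": "redis",
    "host": "localhost",
    "port": 6379
  },
  "pumps": {
    "prometheus": {
      "type": "prometheus",
      "meta": {
        "listen_address": ":9090",
        "path": "/metrics"
      }
    }
  }
}
```

Prometheus then scrapes the pump’s /metrics endpoint and Grafana queries Prometheus; per-API request rates and latencies are usually enough to see which APIs are busy enough to justify their own cluster.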
