We’re moving towards making our Tyk deployment public-facing, so that it becomes the central access point into our APIs.
We’re doing some internal testing on our APIs and the infrastructure around them, and were hoping for a bit of insight into how much testing Tyk does against each release, and more generally what is tested.
We’re looking to get an idea of your approach to testing, and of which areas (if any) would be worth us spending time double-checking ourselves.
For all features, where possible, we write a series of tests.
These range from simple unit tests to integration tests where we run requests through the whole stack under various conditions (a sketch of that pattern follows this list).
All features are manually tested while in development, before they make it into the develop branch.
When we feel we have a stable release, we load test it with Blitz to make sure nothing untoward is happening (memory leaks, crashes, pointer exceptions); a simple in-process version of that kind of check is also sketched below.
Stable release candidates then make it to the cloud platform, where they are monitored “in anger”.
The final release gets cut and pushed to the repositories.
We then manually test the installation procedure for each major installation type (Ubuntu, CentOS, Docker).
We finalise the release and make it public.
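For context, an integration test in this sense builds the handler chain and pushes a real HTTP request through it end to end, rather than testing each layer in isolation. A rough sketch of the pattern in Go follows; the rateLimit middleware and the request path are hypothetical placeholders, not Tyk's actual internals:

```go
package gateway_test

import (
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
	"testing"
)

// rateLimit stands in for one gateway middleware layer (hypothetical, pass-through).
func rateLimit(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		next.ServeHTTP(w, r)
	})
}

// TestRequestThroughStack sends a real HTTP request through the assembled
// middleware chain and reverse proxy, asserting on what the client sees.
func TestRequestThroughStack(t *testing.T) {
	// Fake upstream API that the gateway proxies to.
	upstream := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "upstream ok")
	}))
	defer upstream.Close()

	target, _ := url.Parse(upstream.URL)
	proxy := httputil.NewSingleHostReverseProxy(target)

	// Wire the chain the same way the gateway would at runtime.
	gw := httptest.NewServer(rateLimit(proxy))
	defer gw.Close()

	resp, err := http.Get(gw.URL + "/test-api/ping")
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "upstream ok" {
		t.Fatalf("unexpected response: %d %q", resp.StatusCode, body)
	}
}
```

The value of this style is that it catches wiring problems between layers that unit tests on each component individually would miss.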
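Blitz itself is a hosted load-testing service, but the kind of check it gives us can be approximated in-process with a small concurrent driver while watching the gateway's memory and pprof output. This is only an illustrative sketch; the target URL, worker count, and duration are placeholders:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"sync/atomic"
	"time"
)

// A minimal concurrent load sketch: hammer an endpoint for a fixed duration
// and count failures. Memory-leak detection comes from observing the gateway
// process (RSS, pprof heap profiles) while this runs, not from the code itself.
func main() {
	const (
		workers  = 50
		duration = 30 * time.Second
		target   = "http://localhost:8080/test-api/ping" // placeholder endpoint
	)

	var total, failed uint64
	deadline := time.Now().Add(duration)

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			client := &http.Client{Timeout: 5 * time.Second}
			for time.Now().Before(deadline) {
				atomic.AddUint64(&total, 1)
				resp, err := client.Get(target)
				if err != nil || resp.StatusCode != http.StatusOK {
					atomic.AddUint64(&failed, 1)
				}
				if resp != nil {
					resp.Body.Close()
				}
			}
		}()
	}
	wg.Wait()

	fmt.Printf("requests: %d, failures: %d\n", total, failed)
}
```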
Our main focus for testing is the gateway component, since it is the most public-facing and most critical piece.
The areas we have the most trouble testing are the dashboard UI and portal UI, which are completely manually tested at the moment.
We also have difficulty testing the various combinations of middleware components. For example, we recently saw a bug (now fixed) where an interaction between the URL rewriter and the caching layer caused some odd behaviour; there are a lot of possible combinations.
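One way to keep those combinations manageable is a table-driven test that runs the same request through each chain ordering. The urlRewrite and cache handlers below are deliberately simplified stand-ins for illustration, not the real middleware:

```go
package gateway_test

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// Hypothetical stand-in for the URL rewriter: prefixes every path.
func urlRewrite(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.URL.Path = "/rewritten" + r.URL.Path
		next.ServeHTTP(w, r)
	})
}

// Hypothetical stand-in for the cache: keys responses by request path only.
func cache(next http.Handler) http.Handler {
	seen := map[string][]byte{}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if body, ok := seen[r.URL.Path]; ok {
			w.Write(body)
			return
		}
		rec := httptest.NewRecorder()
		next.ServeHTTP(rec, r)
		seen[r.URL.Path] = rec.Body.Bytes()
		w.Write(rec.Body.Bytes())
	})
}

// TestMiddlewareCombinations sends the same request through each chain
// ordering, so interaction bugs (e.g. caching against a pre-rewrite path)
// surface in one table-driven test instead of ad-hoc manual checks.
func TestMiddlewareCombinations(t *testing.T) {
	base := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(r.URL.Path))
	})

	cases := []struct {
		name  string
		chain http.Handler
	}{
		{"plain", base},
		{"rewrite only", urlRewrite(base)},
		{"cache only", cache(base)},
		{"cache before rewrite", cache(urlRewrite(base))},
		{"rewrite before cache", urlRewrite(cache(base))},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			req := httptest.NewRequest(http.MethodGet, "/orders", nil)
			rec := httptest.NewRecorder()
			tc.chain.ServeHTTP(rec, req)
			if rec.Code != http.StatusOK {
				t.Fatalf("%s: got status %d", tc.name, rec.Code)
			}
		})
	}
}
```

The table makes the set of orderings explicit, so adding another middleware layer means adding rows rather than new test scaffolding.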
Hope that makes you feel better: we try very hard to ensure everything is tested thoroughly before it goes out the door, and our focus is on the gateway, security, and performance.