Error using consul LoadBalancing

Hi,
I have one Tyk Gateway and 2 docker containers of a REST test service (on 2 EC2 VM).
When I add a manual load balancing between the 2 containers it works perfectly.

But when a container shuts down, Tyk keeps trying to reach the dead target, so I decided to try service discovery with consul.io

I have a Consul cluster, and in the service definition in the dashboard I keep “Enable round-robin load balancing” checked and clear the “Add LB targets” field.

I check “Enable service discovery”
Query endpoint : http://XXX.XXX.XXX.XXX:8500/v1/catalog/service/chuckrest
Does this endpoint return a list? : Checked
Are the values nested? : Unchecked
Data path : array.ServiceAddress
Is port information separate from the hostname? Checked
Port data path : array.ServicePort

When I make a REST call to the Tyk endpoint, I get an error:

 Error distribution:
 [100] Get http://XXX.XXX.XXX.XXX:8080/chuckrest/api/getrandomfact: EOF

This output is taken from an “ApacheBench like” container (https://hub.docker.com/r/ondrejmo/boom/)

And in Fiddler I see this:

HTTP/1.1 504 Fiddler - Receive Failure
Date: Wed, 23 Mar 2016 14:42:34 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
Cache-Control: no-cache, must-revalidate
Timestamp: 15:42:34.013
[Fiddler] ReadResponse() failed: The server did not return a complete response for this request. Server returned 0 bytes

If I remove the discovery, everything works again.

I’m having no more luck this morning; all my tests failed. Here is the proxy section of my service definition:

"proxy": {
        "listen_path": "/chuckrest/",
        "target_url": "",
        "strip_listen_path": true,
        "enable_load_balancing": true,
        "target_list": [],
        "check_host_against_uptime_tests": false,
        "service_discovery": {
            "use_discovery_service": true,
            "query_endpoint": "http://XX.XX.XX.XX:8500/v1/catalog/service/chuckrest",
            "use_nested_query": false,
            "parent_data_path": "",
            "data_path": "array.ServiceAddress",
            "port_data_path": "array.ServicePort",
            "use_target_list": false,
            "cache_timeout": 60,
            "endpoint_returns_list": true
        }
    }

What’s going wrong?

Your configuration is wrong for Consul, whose catalog endpoint returns output like this:

```json
[
  {
    "Node": "foobar",
    "Address": "10.1.10.12",
    "ServiceID": "redis",
    "ServiceName": "redis",
    "ServiceTags": null,
    "ServiceAddress": "",
    "ServicePort": 8000
  },
  {
    "Node": "foobar2",
    "Address": "10.1.10.13",
    "ServiceID": "redis",
    "ServiceName": "redis",
    "ServiceTags": null,
    "ServiceAddress": "",
    "ServicePort": 8000
  }
]
```
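To see why the original data path fails, here is a quick Python sketch (not Tyk code) that extracts targets from the sample Consul response above, first with `ServiceAddress` and then with `Address`:

```python
import json

# Sample Consul /v1/catalog/service/<name> response, as shown above
consul_response = json.loads("""
[
  {"Node": "foobar",  "Address": "10.1.10.12", "ServiceID": "redis",
   "ServiceName": "redis", "ServiceTags": null,
   "ServiceAddress": "", "ServicePort": 8000},
  {"Node": "foobar2", "Address": "10.1.10.13", "ServiceID": "redis",
   "ServiceName": "redis", "ServiceTags": null,
   "ServiceAddress": "", "ServicePort": 8000}
]
""")

# With a data path of "ServiceAddress" the extracted hosts are empty
# strings (Consul only fills ServiceAddress when it is set explicitly
# in the service registration), so the gateway proxies to nothing and
# the client sees EOF.
bad_targets = ["%s:%s" % (n["ServiceAddress"], n["ServicePort"])
               for n in consul_response]

# With a data path of "Address" each node's IP is used instead.
good_targets = ["%s:%s" % (n["Address"], n["ServicePort"])
                for n in consul_response]

print(bad_targets)   # [':8000', ':8000']
print(good_targets)  # ['10.1.10.12:8000', '10.1.10.13:8000']
```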

You need the following settings:

isNested = false
isTargetList = true
endpointReturnsList = true
portSeperate = true
dataPath = "Address"
parentPath = ""
portPath = "ServicePort"

At least that's how our test validates consul:

https://github.com/TykTechnologies/tyk/blob/master/service_discovery_test.go

Yesss! You saved my day! It works.
What’s the aim of “isTargetList=true”?

Thank you.

Basically, the service discovery client is quite flexible and can handle a lot of different data types. isTargetList (I think, IIRC) checks whether the target path is a list object instead of a map object, in which case it changes its behaviour.
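As a rough illustration of that list-vs-map distinction (a hypothetical sketch, not Tyk’s actual implementation), a discovery client supporting both shapes might do something like:

```python
def extract_targets(data, data_path, port_path=None):
    """Pull host targets out of a discovery response that may be either
    a list of objects or a single object (map).
    Hypothetical sketch -- not Tyk's real code."""
    # If the endpoint returned a single object, wrap it in a list so
    # both shapes go through the same loop -- roughly what a
    # "target list" flag would toggle.
    nodes = data if isinstance(data, list) else [data]
    targets = []
    for node in nodes:
        host = node[data_path]
        if port_path is not None:
            host = "%s:%s" % (host, node[port_path])
        targets.append(host)
    return targets

# A list-shaped response, like Consul's catalog output:
print(extract_targets(
    [{"Address": "10.1.10.12", "ServicePort": 8000},
     {"Address": "10.1.10.13", "ServicePort": 8000}],
    "Address", "ServicePort"))  # ['10.1.10.12:8000', '10.1.10.13:8000']

# A map-shaped response from some other discovery store:
print(extract_targets({"Address": "10.1.10.14", "ServicePort": 8000},
                      "Address", "ServicePort"))  # ['10.1.10.14:8000']
```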