POST Request for creating Tyk Policy fails

My request:

{
	"default": {
		"rate": 1000,
		"per": 1,
		"quota_max": 100,
		"quota_renewal_rate": 60,
		"access_rights": {
			"moo": {
				"api_name": "moo",
				"api_id": "moo",
				"versions": [
					"Default"
				]
			}
		},
		"org_id": "moofarm",
		"hmac_enabled": false
	}
}

The output I get when running this against /tyk/policies:

{
    "status": "error",
    "message": "Failed to create file!"
}

Hi @IkePCampbell,
Thank you for posting your issue here.
Are you using the Open Source Tyk Gateway installed in Kubernetes?
If so, I can replicate your issue when using the Tyk Gateway installed in Kubernetes. It seems you are running into the same issue as this one posted on our GitHub page. As mentioned in the response there, this is somewhat expected, since in the OSS version both APIs and policies are stored as files; you would get the same error from the API endpoint in this case. So you can either mount a volume and make it writable, or create policies only as files and avoid the REST API if you can't make the volume writable.

On the other hand, in my testing I can create the policy successfully when using the open-source Tyk Gateway installed in my local Docker.
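To illustrate the "mount a volume and make it writable" option above, here is a minimal sketch of a gateway pod spec fragment. The container name and mount path are only examples and should match whatever your policy_path (and app_path, if you also create APIs) point at:

# hypothetical excerpt from the gateway pod spec
spec:
  containers:
    - name: tyk-gtw
      volumeMounts:
        - name: policies                 # writable directory for policy files
          mountPath: /mnt/tyk-gateway/policies
  volumes:
    - name: policies
      emptyDir: {}                       # or a PersistentVolumeClaim if policies must survive pod restarts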

Hi, I hope it’s fine to revive this thread because I’d like to clarify something.

I have a similar issue. I'm trying to set up Tyk for tests using the official documentation for a Helm installation on a K8s cluster. I see the same error in the logs while creating a new policy:

level=debug msg="Requesting policy forZGVmYXVsdC9odHRwYmlu"
level=error msg="Policy doesn't exist." polID=ZGVmYXVsdC9odHRwYmlu prefix=policy
level=debug msg="Creating new definition file"
level=error msg="Failed to create file! - open ZGVmYXVsdC9odHRwYmlu.json: read-only file system"

I understand that the gateway process can’t create a file because of the read-only file system. What I don’t understand is where it tries to create the file. What is the full path?

The full path is the value of policies.policy_path in the gateway config. My guess is that there is a write restriction at that path.

Might I ask how the policy definition ZGVmYXVsdC9odHRwYmlu.json was created?

The policy is being created by tyk-operator.

The user who is running the Tyk process in the container has write access to the policies file.

Here is a bit of the relevant configuration (/etc/tyk-gateway/tyk.conf):

        "policies": {
            "allow_explicit_policy_id": true,
            "policy_source": "file",
            "policy_record_name": "/mnt/tyk-gateway/policies/policies.json"
        },

Addition: it seems like the Tyk process inside the container can't create a temporary file, but without knowing the exact path it's hard to debug.

Thanks for sharing the config file.

I think you need to change your config to point at a directory path rather than a single resource file. You can modify the config file, but a much better way would be to use environment variables:

{
  ...
  "policies": {
    "policy_source": "file",
    "policy_path": "/mnt/tyk-gateway/policies"
  }
}
- name: TYK_GW_POLICIES_POLICYPATH
  value: /mnt/tyk-gateway/policies
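(As far as I know, TYK_GW_* environment variables take precedence over the corresponding fields in tyk.conf, so setting TYK_GW_POLICIES_POLICYPATH on the gateway container should be enough without editing the file baked into the image.)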

Kindly remember to call the hot reload API after calling the policies create API.
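For reference, assuming the default gateway API paths, the hot reload is a plain GET against the gateway itself, authenticated with the x-tyk-authorization header set to the gateway secret from tyk.conf (the value below is a placeholder):

GET /tyk/reload/group
x-tyk-authorization: <your gateway secret>

/tyk/reload reloads only the node you call, while /tyk/reload/group reloads the whole gateway group.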

Let us know how it goes

UPD: It works, see my next message.

If I change policy_path to a directory, I see the following error message in the gateway logs:

level=error msg="Couldn't unmarshal policies: read /mnt/tyk-gateway/policies/: is a directory" prefix=policy

And I still see:

level=error msg="Failed to create file! - open dHlrL2h0dHBiaW4.json: read-only file system"

Apologies, setting policy_path to a directory actually solves the issue. Before, I was mistakenly altering policy_record_name instead. What's the difference between them, by the way? I wasn't able to figure that out from the documentation.


Great to hear that you got it working.

  • policy_record_name: This points to a single file and was the only way of managing policies before the introduction of the policy REST APIs. It loads all the policies you need from that one file.

  • policy_path: This points to a directory/folder and was introduced in version 4.1 when managing policies through REST APIs was added. This is now the recommended way of managing your policies.
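To make that concrete, a minimal sketch (all values are placeholders): with policy_record_name the gateway loads everything from one JSON file keyed by policy ID, e.g. policies.json:

{
  "default": {
    "rate": 1000,
    "per": 1,
    "active": true,
    "access_rights": {}
  }
}

whereas with policy_path the gateway keeps one <policy-id>.json file per policy in that directory, which is also what it writes when a policy is created through the REST API (hence the ZGVmYXVsdC9odHRwYmlu.json file in the logs earlier in this thread).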


Hi @Olu! I hope you don't mind me asking a few more questions related to this topic: I'm running into the same issue as described in this thread. I have the Open Source Tyk Gateway installed using the Tyk Helm charts. What I don't understand is the following:

  • While experimenting with this issue, I mounted a non-persistent volume (emptyDir) to the Gateway just for testing. It worked, but when I killed the pod to see how it behaves on a Gateway pod restart, things went wrong:

    • The Operator had to be restarted to reconcile its settings with the Gateway; from this thread I understand that this is by design. Has anything changed since that 2020 post? And are there any recommendations for dealing with this, e.g. if a pod gets restarted in production for some reason (like K8s scaling), how do we ensure the Operator updates the Gateway as soon as possible?

    • After forcefully restarting the Operator to trigger a reconciliation, the Operator logs showed errors that it wasn't able to apply the missing policies, and the Gateway showed errors that it couldn't find the requested policies (as the volume was wiped on restart). While I understand that with a persistent volume this shouldn't happen, I was still a bit surprised that the Operator/Gateway couldn't get out of this error state. I had to delete all SecurityPolicies and reinstall the Operator and Gateway to get things working again. Is there a more graceful way to get the Operator and Gateway to recover from such error states?

  • What if I would like to have multiple gateway pods for load distribution/redundancy: does the Tyk Operator make sure that all Gateway pods get updated with policy changes, given that they're stored as files inside the containers? I noticed the following note in the installation guide:

    Please note that by default, Gateway runs as Deployment with ReplicaCount is 1. You should not update this part because multiple instances of OSS gateways won’t sync the API Definition.

    Does this mean that the OSS Gateway on K8s can't be used with multiple pods at all? That would be a challenge for our production workloads, which require some redundancy to spread load and reduce the risk of downtime.