HTTP PUT request gives a 400 response (Bad Request) on KNIME Server, but 200 (update successful) when run locally

We have a workflow scheduled to run on KNIME Server. The workflow is built in such a way that it is also possible to run it manually on a local machine. When the workflow runs locally, it works perfectly fine: the PUT request's status response is 200, meaning the update succeeded. When the exact same workflow is run on the server, everything works fine except for the PUT request at the end.

What is strange is that the workflow successfully completes a series of GET requests at the beginning of the flow, which rules out an authentication problem. We know this because we have the workflow send emails at an intermediate point in the flow.

I am not quite sure how to proceed, since I cannot share a workflow with the required credentials to reproduce the problem. So my main question is: has anyone seen this type of behaviour before, where the server returns 400 responses to HTTP requests while the exact same workflow run locally works fine? And does anyone have tips on how to debug this issue?

What API are you calling with the PUT Request node? The output table contains a column with the response body, even for failed requests. The message in there may contain more information about the client error.
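If it helps to reproduce that check outside KNIME, here is a minimal Python sketch of the same request; the URL is taken from this thread, while the credentials and payload are placeholders, not values from the workflow:

import requests

url = "https://api.webshopapp.com/nl/products/111460083.json"
payload = '{"product": {"data01": "...", "data03": "..."}}'

resp = requests.put(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    auth=("api_key", "api_secret"),  # hypothetical credentials
)

# The body is returned even for a 4xx response, so print it on failure.
print(resp.status_code)
print(resp.text)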

Thanks for the suggestion to look at the error cause in the response body. The 400 response seems to be caused by invalid data input. However, the exact same workflow produces a 200 response (successful update) when run locally. To be more precise, this is the error response:
{ "error" : { "code" : 400, "method" : "UPDATE", "request" : "/nl/products/111460083.json", "message" : "Invalid data input." } }

Some more background:
I am calling the Lightspeed API; documentation can be found in its Introduction pages.

I am updating products by calling the URL
join("https://api.webshopapp.com/nl/products/", string($Internal_ID$), ".json")

The payload consists of a JSON string containing two fields called data01 and data03, each of which is itself a JSON string.
string(join(“{"product": {”, “"data01": "”, $Data_01$, “" , "data03": "”, $Data_03$, “"}}” ))

An example of a data03 field is
{"variable1":"value1","variable2":"value2","variable3":"value3","var4":"val4","var5":"val5","var6":"val6","var7":"val7","var8":"val8","var9":"val9"}

We use this because we can conveniently "cram" a lot of relevant data into our maximum of three custom fields.
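For what it's worth, building the payload with a JSON library instead of string concatenation sidesteps escaping problems entirely, since quotes and control characters in the field values are escaped automatically. A minimal Python sketch of the same payload shape (the field values are placeholders standing in for $Data_01$ and $Data_03$):

import json

# Placeholder values standing in for $Data_01$ and $Data_03$.
data01 = '{"variable1":"value1","variable2":"value2"}'
data03 = '{"var4":"val4","var5":"val5"}'

# json.dumps escapes the embedded quotes and any newlines in the
# values, which manual concatenation with join() does not.
payload = json.dumps({"product": {"data01": data01, "data03": data03}})
print(payload)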

But the core question remains: how is it possible for this workflow to run perfectly fine locally while failing on the server? Can it be that the server treats escape characters differently? What is causing this?

The "Server" has nothing to do with it. The workflow is run in the executor, a headless Analytics Platform (AP) instance. Therefore I would check:

  • Are the AP versions identical locally and on the server?
  • Is the data you are sending 100% identical? For example, a different OS can produce slightly different data (e.g. line endings), depending on how you create the input for the request; see the sketch below.
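To illustrate the line-ending point: a literal, unescaped newline inside a JSON string value is invalid JSON, and a strict parser on the server side will reject it with exactly this kind of 400. A quick Python demonstration:

import json

# A payload whose string value contains a raw newline character.
bad = '{"product": {"data01": "line one\nline two"}}'

try:
    json.loads(bad)
except json.JSONDecodeError as exc:
    # Raises "Invalid control character ..." - strict parsers reject
    # unescaped newlines (\n or \r\n) inside string values.
    print(exc)

# The escaped form is valid:
ok = '{"product": {"data01": "line one\\nline two"}}'
print(json.loads(ok))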

Good stuff. There was a node in the flow clearing any Windows line endings (\r\n) from the payload; extending it to also remove Linux line endings (\n) resulted in a successful PUT.
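For reference, the equivalent of that fix outside KNIME is to strip both Windows (\r\n) and Unix (\n) line endings from the field values before they are embedded in the payload. A short Python sketch (the cleaning itself was done by a KNIME node in the actual workflow):

import re

def clean(value: str) -> str:
    # Remove both Windows (\r\n) and Unix (\n) line endings, as the
    # extended cleaning node in the workflow now does.
    return re.sub(r"[\r\n]+", "", value)

print(clean("line one\r\nline two\nline three"))
# -> "line oneline twoline three"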
