LLM Prompter Error

I’m having trouble running a KNIME workflow that calls ChatGPT through the OpenAI API. If I run the POST Request node, having passed it my API key and the prompt I want to ask, it works fine. However, when I use the LLM Prompter node, I get a connection error, even though the OpenAI Authenticator and OpenAI LLM Connector nodes previously ran successfully. What should I change to get the LLM Prompter to run correctly? This is what I get after LLM Prompter execution:
File "C:\Users\fmc00\AppData\Local\Programs\KNIME\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\openai\_base_client.py", line 1534, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

ERROR LLM Prompter 3:35 Execute failed: Connection error.
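For reference, the call that works through the POST Request node corresponds to this raw request. A minimal sketch using only the Python standard library; the environment variable name and the model are my choices for illustration:

```python
# Reproduce outside KNIME what the POST Request node sends to OpenAI.
# Uses only the standard library; OPENAI_API_KEY and gpt-3.5-turbo are
# illustrative choices, not something the node requires.
import json
import os
import urllib.request

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build the same POST the KNIME POST Request node issues."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send it (needs a valid key and network access):
# req = build_chat_request(os.environ["OPENAI_API_KEY"], "gpt-3.5-turbo", "Say hello")
# with urllib.request.urlopen(req, timeout=30) as resp:
#     print(json.loads(resp.read()))
```

If this succeeds from a plain Python shell but the Prompter nodes still fail, the key and the network path are fine and the problem is specific to the bundled Python environment the AI nodes run in.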

Hey there,

Can you share screenshots of your workflow?

In particular:
OpenAI Authenticator → some model connector → prompter.

Also: the configuration dialog of the Authenticator node with advanced settings expanded.

Without that, helping will be difficult, as a connection error is very generic…

Hi Martin,
Thanks so much for your help. I tried with a simple workflow. When I try with a POST Request node, it works OK. However, I get an error using an LLM Prompter node. I enclose screenshots of my workflow. Thanks again!


Hey there,

I quickly replicated your setup - I think the issue is that the model you are using is a chat model. For the other models listed it worked (e.g. the instruct model that shows as the default).

It really should not be listed as an option on the LLM Prompter, I think…
Error:

When selecting the same model via Chat Model Connector and prompting with Chat Model Prompter it works:

I think the issue is that “All Models” literally lists all models, irrespective of whether they are compatible with the API that the LLM Prompter or Chat Model Prompter node is using…
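That mismatch can be sketched in code: chat models are only served by the `/v1/chat/completions` endpoint, while the LLM Prompter targets the legacy `/v1/completions` endpoint, which accepts instruct-style models. The routing function and model lists below are an illustrative heuristic, not the node’s actual logic:

```python
# Illustrative heuristic for the endpoint mismatch. The model sets are
# examples, not an exhaustive list of what OpenAI serves.
CHAT_ONLY_MODELS = {"gpt-3.5-turbo", "gpt-4", "gpt-4o", "gpt-4o-mini"}
LEGACY_COMPLETION_MODELS = {"gpt-3.5-turbo-instruct", "davinci-002", "babbage-002"}

def endpoint_for(model: str) -> str:
    """Return the API path a model must be sent to (illustrative heuristic)."""
    if model in LEGACY_COMPLETION_MODELS:
        return "/v1/completions"       # what the LLM Prompter calls
    return "/v1/chat/completions"      # what the Chat Model Prompter calls

# Sending a chat-only model such as gpt-3.5-turbo to /v1/completions is
# rejected by the API, which matches the error seen with the LLM Prompter.
```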


Hi Martin. I replicated my query using the Chat Model Connector and Chat Model Prompter and I get the same connection error as I got using the LLM Prompter. I guess this must be something in my PC or KNIME configuration, but when using POST Request the API calls work OK. I’m sending you my configuration of those new nodes, but I think they are similar to the ones used with the LLM nodes. Thanks again for your help.

Question:

Does any model run? Can you try e.g. GPT-4o or GPT-4o-mini?

If not:

What version are you on?

Can you also go and check the version of KNIME AI Extension:

Help => About KNIME Analytics Platform => Installation Details:

Thanks again, Martin, for your help and for looking into my issue. My LLM Prompter & Chat Model Prompter don’t work no matter which model I select. The POST Request node works OK with gpt-3.5-turbo.

and this is what I have in the preferences section

Hmm. One thing that might be worth trying is to uninstall the AI Extension (you can select it in that overview you shared, then click Uninstall at the bottom right) and then re-install it…

I have to admit that I am not familiar with proxy settings in KNIME, so I’m unsure whether adding *.openai.com also covers api.openai.com/v1 (your base URL)…
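One thing worth checking from the Python side, since the Prompter nodes run on the bundled `openai`/`httpx` stack rather than KNIME’s Java networking: proxy exclusion lists match host names, not URL paths, and httpx also reads the standard proxy environment variables. A small sketch with example values:

```python
# Proxy exclusions match on the host name only, never the URL path, so the
# entry to exclude is the hostname, not the full base URL. httpx (used by the
# openai client inside KNIME's Python nodes) also honors the standard proxy
# environment variables. Values below are examples.
import os
from urllib.parse import urlparse

# Base URL configured in the OpenAI Authenticator:
base_url = "https://api.openai.com/v1"

# The host to put in a proxy exclusion list:
host = urlparse(base_url).hostname
print(host)  # api.openai.com  (no "/v1": a path in an exclusion list won't match)

# Worth checking what the Python processes started by KNIME actually see:
for var in ("HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"):
    print(var, "=", os.environ.get(var))
```

A system proxy that only the Python side picks up would explain why the Java-based POST Request node works while the Prompter nodes fail, though that is a guess.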

I uninstalled the AI Extension & AI Assistant (Labs) and installed them again, and I added more proxy settings like api.openai* or api.openai.com/v1, but I still get the connection error.

OK, I am really sorry, but I’m out of ideas… :frowning: My best guess is that it is indeed some sort of proxy issue, but I have no clue how to debug that further… maybe someone else here can pick it up :slight_smile:

Just chiming in, as I’m experiencing the exact same issue while attempting to apply newly acquired knowledge from the recent AI courses. I tried several models and verified my API key has no restrictions.

Console Output with stack trace

  File "C:\Program Files\knime_5.3.0\plugins\org.knime.python3.nodes_5.4.1.v202501291500\src\main\python\_node_backend_launcher.py", line 1055, in execute
    outputs = self._node.execute(exec_context, *inputs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\plugins\org.knime.python3.nodes_5.4.1.v202501291500\src\main\python\knime\extension\nodes.py", line 1237, in wrapper
    results = func(*args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\plugins\org.knime.python.llm_5.4.3.v202503051153\src\main\python\src\models\base.py", line 748, in execute
    responses = _call_model_with_output_format_fallback(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\plugins\org.knime.python.llm_5.4.3.v202503051153\src\main\python\src\models\base.py", line 1179, in _call_model_with_output_format_fallback
    return response_func(model)
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\plugins\org.knime.python.llm_5.4.3.v202503051153\src\main\python\src\models\base.py", line 742, in get_responses
    return asyncio.run(
           ^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\asyncio\runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\asyncio\base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\plugins\org.knime.python.llm_5.4.3.v202503051153\src\main\python\src\models\base.py", line 640, in aprocess_batches_concurrently
    return await util.abatched_apply(func, prompts, n_requests)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\plugins\org.knime.python.llm_5.4.3.v202503051153\src\main\python\src\util.py", line 277, in abatched_apply
    outputs.extend(await afn(batch))
                   ^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\plugins\org.knime.python.llm_5.4.3.v202503051153\src\main\python\src\models\base.py", line 623, in aprocess_batch
    responses = await llm.abatch(sub_batch)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\langchain_core\runnables\base.py", line 905, in abatch
    return await gather_with_concurrency(configs[0].get("max_concurrency"), *coros)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\langchain_core\runnables\utils.py", line 71, in gather_with_concurrency
    return await asyncio.gather(*coros)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\langchain_core\runnables\base.py", line 902, in ainvoke
    return await self.ainvoke(input, config, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\langchain_core\language_models\chat_models.py", line 306, in ainvoke
    llm_result = await self.agenerate_prompt(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\langchain_core\language_models\chat_models.py", line 871, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\langchain_core\language_models\chat_models.py", line 831, in agenerate
    raise exceptions[0]
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\langchain_core\language_models\chat_models.py", line 999, in _agenerate_with_cache
    result = await self._agenerate(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\langchain_openai\chat_models\base.py", line 951, in _agenerate
    response = await self.async_client.create(**payload)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\openai\resources\chat\completions\completions.py", line 1927, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\openai\_base_client.py", line 1767, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\openai\_base_client.py", line 1461, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\knime_5.3.0\bundling\envs\org_knime_python_llm_5.4.3\Lib\site-packages\openai\_base_client.py", line 1562, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

2025-04-08 13:11:03,526 : ERROR : KNIME-Worker-1088-LLM Prompter 8:1462 :  : Node : LLM Prompter : 8:1462 : Execute failed: Error code: 404 - {'error': {'message': 'The model `gpt-4` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
org.knime.python3.nodes.PythonNodeRuntimeException: Error code: 404 - {'error': {'message': 'The model `gpt-4` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
	at org.knime.python3.nodes.CloseablePythonNodeProxy$FailureState.throwIfFailure(CloseablePythonNodeProxy.java:805)
	at org.knime.python3.nodes.CloseablePythonNodeProxy.execute(CloseablePythonNodeProxy.java:568)
	at org.knime.python3.nodes.DelegatingNodeModel.lambda$4(DelegatingNodeModel.java:180)
	at org.knime.python3.nodes.DelegatingNodeModel.runWithProxy(DelegatingNodeModel.java:237)
	at org.knime.python3.nodes.DelegatingNodeModel.execute(DelegatingNodeModel.java:178)
	at org.knime.core.node.NodeModel.executeModel(NodeModel.java:596)
	at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1284)
	at org.knime.core.node.Node.execute(Node.java:1049)
	at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:603)
	at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:98)
	at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:198)
	at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:117)
	at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:369)
	at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:223)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
	at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)
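The 404 (`model_not_found`) above can be verified outside KNIME by asking the API which models the key can actually see. A sketch using only the standard library; the environment variable name is an assumption:

```python
# List the models an API key has access to via GET /v1/models; a model
# missing from this list would produce exactly the 404 model_not_found above.
import json
import os
from urllib.request import Request, urlopen

def build_models_request(api_key: str) -> Request:
    """Request for GET /v1/models, which lists the models a key can access."""
    return Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def model_ids(models_response: dict) -> list[str]:
    """Extract and sort the model ids from the JSON body of GET /v1/models."""
    return sorted(m["id"] for m in models_response["data"])

# Usage (needs network access and a valid key in OPENAI_API_KEY):
# with urlopen(build_models_request(os.environ["OPENAI_API_KEY"])) as resp:
#     ids = model_ids(json.load(resp))
#     print("gpt-4" in ids)  # False here would explain the 404
```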

What bugs me a bit is that the credentials have a user-secret pair, but the OpenAI API only provides a secret key with an optional name.

Very basic question - do you guys have credits in your account?

I think API usage is prepaid only…

Maybe if you have 0 credits you get an error?

You can check here: https://platform.openai.com/settings/organization/billing/overview

If the balance says 0 you may need to top up.


Just found that as well. It’s quite hidden, and the budget in the top right is rather misleading. ChatGPT Plus seems not to automatically grant access to the API.

I think they changed from pay-as-you-go with monthly billing to prepaid for the average ChatGPT Plus user… it might be different for enterprise accounts etc.

So, this turns out to be really confusing for anyone who already pays for ChatGPT Plus. When accessing https://platform.openai.com/ with an already existing and active ChatGPT user, you are greeted with this in the top right.


However, accessing the billing page (https://platform.openai.com/settings/organization/billing/overview) shows that there is no payment method available and the account is on a free tier. Not quite user- or consumer-friendly to charge twice, but OpenAI must find ways to make money.

Solution

  1. Pay for ChatGPT and/or API usage
  2. Build a bot in KNIME, e.g. using the Selenium, Palladian or Web Interaction nodes, that opens a browser session

Using the API is faster and great for automation, but the ChatGPT web UI is more intuitive for “single purpose” tasks such as writing.

Adding to the confusion, the KNIME nodes turn green, indicating “yeah, let’s get cracking”. The 404 Not Found response then doesn’t help either.

PS: This is actually quite frustrating, to say the least, as even after refilling my budget, the $20 for ChatGPT Plus are still added on top.


Best
Mike

Thanks again, both, for helping me. In my case I don’t think the issue is related to the credit balance of my OpenAI API account, as I believe I have enough credits. Besides, I can use this API key through the POST Request node of KNIME without problems. But when I use either the Chat Model Prompter or LLM Prompter nodes I get a connection error, even though before those nodes I connected successfully to OpenAI through the OpenAI Authenticator and the OpenAI Chat Model Connector or OpenAI LLM Connector. I don’t know what else to do. I uninstalled the AI Extensions, the KNIME software and the Python software, and installed them all again, but I can still only work using the POST Request node…

I managed to solve this issue! In fact, I saw a video on the internet which shows how to use a KNIME workflow from the KNIME Hub (2.2 - ChatGPT Data App). Studying the implementation of that workflow, I noticed that the Chat Model Prompter node needs not only the columns Prompt (the message) and System (the context or generic message) as input, BUT ALSO the MESSAGE AND SYSTEM MESSAGE as flow variables. If you feed the node both a table with those columns and the flow variables, it works, and I get the answer to my prompt through that node and my OpenAI API key! By doing this, I avoided the connection error message and now it works! Thanks a lot for your help. I hope this will be useful for everybody!
