Agent Prompter Execute failed: The model does not support tool calls.

In my flow below I get the error: Execute failed: The model does not support tool calls.

I'm using the gpt4all-falcon-newbpe-q4_0.gguf model, and the "Workflow to Tool" node returns, as in every video out there, "Issues while reading workflows". The LLM Prompter works, but the Agent Prompter does not. Did anybody manage to use a local LLM with the Agent Prompter? Or where is the issue with the gguf model?

Some models were not trained / built to understand tool calling, and the model you are running seems to be one of those.
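For context: "tool calling" means the model must emit a structured function-call payload rather than plain text. A minimal sketch of what an OpenAI-style tool-call response looks like, and how an agent would parse it (the JSON shape follows the OpenAI chat-completions convention that most local runtimes mimic; the `get_weather` tool is a made-up example):

```python
import json

# An assistant message as a tool-capable model would return it.
# Models not trained for tool calling never produce this structure,
# hence "The model does not support tool calls".
response_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_0",
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool name
                "arguments": json.dumps({"city": "Berlin"}),
            },
        }
    ],
}

def extract_tool_calls(message):
    """Return (name, args) pairs the agent should execute, or [] for plain text."""
    calls = []
    for call in message.get("tool_calls") or []:
        fn = call["function"]
        calls.append((fn["name"], json.loads(fn["arguments"])))
    return calls

print(extract_tool_calls(response_message))
# A plain-text reply from a model without tool training yields no calls:
print(extract_tool_calls({"role": "assistant", "content": "It is sunny."}))
```

An agent loop depends entirely on that structured output; if the model only ever emits free text, there is nothing for the agent to execute.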

I tested agents with local models between 4bn and 8bn params in size - e.g. Qwen3…

You can read about my findings here:


I tried 10 of them, including those gguf models, with tool calling, without success.

What may also be happening is that GPT4all in the current implementation does not support the underlying API that Agent Prompter / Chat View etc. use.

I tried to find hints in the documentation, but was not able to find any.

What I can recommend is to explore installing Ollama (which is what I do):

This allows you to run models locally and the “latest” open source models are available pretty soon after they are released.
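Ollama also exposes an OpenAI-compatible endpoint at `http://localhost:11434/v1/chat/completions`, so once a tool-capable model is pulled (e.g. `ollama pull qwen3:4b` — the model choice here is just an example), tool calling is a plain HTTP request. A sketch of building such a request (the `get_weather` tool is made up; actually sending it requires a running Ollama server, so the call is left commented out):

```python
import json
# from urllib.request import Request, urlopen  # only needed to actually send

# Request payload in the OpenAI chat-completions format that Ollama accepts.
payload = {
    "model": "qwen3:4b",  # example of a tool-capable local model
    "messages": [{"role": "user", "content": "What is the weather in Berlin?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

body = json.dumps(payload).encode()
# With Ollama running locally, POST this to its OpenAI-compatible endpoint:
# req = Request("http://localhost:11434/v1/chat/completions", data=body,
#               headers={"Content-Type": "application/json"})
# print(urlopen(req).read().decode())
print(len(body), "bytes ready to send")
```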

With Ollama installed you can then use e.g. @mlauber71’s simple example which includes an Ollama component to get started very quickly:


Yes, GPT4All does not support tool calling at all.
Unfortunately, it’s also not a matter of updating since we are on the latest version.
At the moment it looks like GPT4All is actually no longer developed (the last update was in Feb '25), which is why we are looking for alternatives. One that seemed promising was llama-cpp-python, but there are rumors of its demise as well.
The underlying library that does the heavy lifting is llama.cpp, which is very actively developed, but the same can't be said for its Python bindings.

If anyone has a recommendation for an actively maintained Python binding that is not at the risk of being abandoned, please let us know.

Ollama is a good option to run models locally next to the Analytics Platform (another option is LM Studio) but it’s not a good replacement for GPT4All which runs the models inside of the Analytics Platform.


Martin is too kind, since the example is actually his. I just added the Ollama part 🙂


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.