In my flow below I get the error: Execute failed: The model does not support tool calls.
I’m using gpt4all-falcon-newbpe-q4_0.gguf, and the “Workflow to Tool” node returns, as in every video out there, “Issues while reading workflows”. The LLM Prompter works, but the Agent Prompter does not. Has anybody managed to use a local LLM with the Agent Prompter? Or is the issue with the GGUF model?
There are some models that were not trained / built to understand tool calling, and the model you are running seems to be one of those.
I tested agents with local models between 4B and 8B parameters in size - e.g. Qwen3…
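To illustrate what "understanding tool calling" means: the model has to emit structured JSON that matches a declared tool schema instead of free text. Below is a minimal sketch of such a schema in the OpenAI-compatible format that llama.cpp-based servers accept (`get_weather` is a made-up example tool, not anything from the thread). A model that was never trained on this format cannot produce the matching `tool_calls` output, which is exactly the "does not support tool calls" failure:

```python
import json

# A tool definition in the OpenAI-compatible schema. A tool-capable model
# receives this alongside the prompt and must answer with a structured
# "tool_calls" object when it decides to invoke the tool.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# The shape of the response a tool-capable model is expected to emit;
# a model without tool-call training just returns plain text instead.
expected_tool_call = {
    "tool_calls": [
        {"function": {"name": "get_weather", "arguments": {"city": "Berlin"}}}
    ]
}

print(json.dumps(expected_tool_call, indent=2))
```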
Yes, GPT4All does not support tool calling at all.
Unfortunately, it’s also not a matter of updating, since we are already on the latest version.
At the moment it looks like GPT4All is no longer actively developed (the last update was in Feb 25), which is why we are looking for alternatives. One that seemed promising was llama-cpp-python, but there are rumors of its demise as well.
The underlying library that does the heavy lifting is llama.cpp, which is very actively developed, but the same can’t be said for its Python bindings.
If anyone has a recommendation for an actively maintained Python binding that is not at risk of being abandoned, please let us know.
Ollama is a good option for running models locally alongside the Analytics Platform (another option is LM Studio), but it’s not a drop-in replacement for GPT4All, which runs the models inside the Analytics Platform itself.
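For anyone going the Ollama route: since Ollama runs as a separate local server rather than in-process, the workflow talks to it over HTTP. A minimal stdlib-only sketch of building a tool-calling request against Ollama's `/api/chat` endpoint might look like this (assumes a local Ollama server on the default port 11434; the model tag `qwen3:4b` is an assumption based on the Qwen3 models mentioned above):

```python
import json
import urllib.request

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # Ollama's default endpoint

def build_chat_request(model: str, prompt: str, tools: list) -> urllib.request.Request:
    """Build (but do not send) a chat request for a local Ollama server.

    The payload shape follows Ollama's REST API: a model tag, a list of
    chat messages, and an optional list of tool definitions the model
    may call.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": tools,
        "stream": False,
    }
    return urllib.request.Request(
        OLLAMA_CHAT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Actually sending the request requires a running Ollama server, e.g.:
# req = build_chat_request("qwen3:4b", "What's the weather in Berlin?", [])
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The point of the sketch is the architectural difference: with GPT4All the model lived inside the process, whereas with Ollama the workflow is just an HTTP client of an external service.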