If I use the “OpenAI Authenticator” to connect to Ollama over the local network while behind a corporate proxy (Ollama runs locally), I get an error message until I deactivate the option “Verify settings”, which is somewhat counterintuitive.
Hello @mlauber71,
Thank you for reporting this issue.
Can you use the model with a Chat Model or LLM Prompter afterwards?
The “Verify settings” option controls whether the Authenticator tries to connect to the API by listing the available models, which not all providers support.
If I remember correctly, Ollama also doesn’t support it, which would explain why the node fails if “Verify settings” is enabled.
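If you want to rule out the endpoint itself, a quick check outside of KNIME could look roughly like the sketch below. This is a minimal Python example that simply calls the OpenAI-compatible “list models” endpoint, assuming Ollama’s default base URL (http://localhost:11434/v1); adjust the URL if your setup differs.

```python
import requests

# Assumed default base URL of Ollama's OpenAI-compatible API; adjust if needed.
BASE_URL = "http://localhost:11434/v1"

# This is essentially what "Verify settings" does: call the list-models endpoint.
response = requests.get(f"{BASE_URL}/models", timeout=10)
response.raise_for_status()

# Print the model IDs returned by the endpoint.
for model in response.json().get("data", []):
    print(model.get("id"))
```

If this call succeeds outside of KNIME but the node still fails, the endpoint is probably not the problem and something else (e.g. the proxy configuration) is interfering.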
In the blog post by @roberto_cadili on “How to leverage open source LLMs locally via Ollama”, he mentions the following:
Note. Make sure you uncheck the “Verify settings” box. This ensures that we don’t call the list models endpoint, allowing the node to execute successfully with the provided base URL.
I believe unchecking this box is required for the node to execute successfully with the specified base URL.
@nemad the REST API does list the available models, though. It is not a big deal, I am just curious why “Verify settings” does not work well with the proxy. But once you know about it, this is not a problem.
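To illustrate what I suspect is happening: with corporate proxy variables (HTTP_PROXY/HTTPS_PROXY) set in the environment, the verification call may be routed through the proxy even though Ollama is local. A rough sketch (again assuming the default base URL, not the actual node internals) to compare the two cases:

```python
import requests

BASE_URL = "http://localhost:11434/v1"  # assumed default Ollama base URL

# Request that honours HTTP_PROXY/HTTPS_PROXY from the environment
# (what many tools do by default, so it may go through the corporate proxy).
via_env = requests.get(f"{BASE_URL}/models", timeout=10)

# Same request with environment proxy settings ignored entirely.
session = requests.Session()
session.trust_env = False  # do not pick up proxy variables from the environment
direct = session.get(f"{BASE_URL}/models", timeout=10)

print("honouring env proxies:", via_env.status_code)
print("bypassing env proxies:", direct.status_code)
```

If only the direct call succeeds, the proxy (or a missing NO_PROXY entry for localhost) is the likely culprit rather than the endpoint itself.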
That’s odd; if the API provides the models endpoint, then “Verify settings” should work. We’ll investigate this and also improve the error message to mention a missing endpoint as a potential cause of the error.