Hi.
At our company we are deploying our own local LLM. The team has set up access to be similar to the OpenAI API, in that to use the LLM we need to supply both a URL and an API key.
Based on a previous post I found, can I just double-check: for the API key, do I simply enter it into the "Password" field of the Credentials Configuration node (all other fields can be ignored) and change the URL to our local LLM in the advanced settings of the OpenAI Authenticator node? After that, I am not sure whether the Chat and LLM Connector nodes would find our LLM's model name.
Any clarification from someone with knowledge of this would be appreciated. Thanks, Mark
Hey there,
I take it that:
- you have an endpoint where an LLM is deployed
- it is compatible with the OpenAI API
- you are unsure whether it is a chat model (Chat Model Prompter) or an instruct model (LLM Prompter)
Regarding the API key: you can use the Credentials Widget / Configuration node and set the API key using the password field, as you correctly outlined. You can then pass the created credentials variable to the OpenAI Authenticator node and select it there. In the Authenticator you can also set your base URL under the advanced options.
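If you want to sanity-check the endpoint and key outside of KNIME first, a few lines of Python against the OpenAI-compatible API will do. This is a minimal sketch, assuming your deployment exposes the standard endpoints; the base URL and key below are placeholders for your setup:

```python
# Minimal sketch: verify the endpoint and discover model names outside KNIME.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder: your local LLM endpoint
    api_key="YOUR_LOCAL_API_KEY",         # placeholder: the key you put in the password field
)

# If the endpoint is OpenAI-compatible, this lists the deployed models;
# the "id" values are the exact model names the Connector nodes need.
for model in client.models.list():
    print(model.id)
```

The model IDs printed here are what you would feed into the Connector nodes below, which also answers the "would the connectors find our model name" question.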
Then, in order to work out whether you have an instruct or a chat model, you can simply try the following:
- grab both the OpenAI Chat Model Connector and the OpenAI LLM Connector node
- create, via a String Widget, a variable that holds the model name (e.g. llama3.1); set both Connector nodes to "All Models" and then set the model name via the created variable
- then connect the Connector nodes to the Chat Model Prompter and LLM Prompter respectively, pass in some data (a Prompt column for the LLM Prompter; Role and Message columns for the Chat Model Prompter), and run both of them; whichever prompter succeeds tells you which kind of model you have (a Python sketch of the same check, outside KNIME, follows below)
Your setup should look something like this:
Set up the Connector nodes like this:
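For reference, here is the same chat-vs-instruct test as a minimal Python sketch, again assuming an OpenAI-compatible endpoint; the base URL, key, and model name are placeholders:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder
    api_key="YOUR_LOCAL_API_KEY",         # placeholder
)
MODEL = "llama3.1"  # placeholder: whatever your deployment reports

# Chat-style call (what the Chat Model Prompter does):
try:
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print("chat endpoint works:", reply.choices[0].message.content)
except Exception as exc:
    print("chat endpoint failed:", exc)

# Completion-style call (what the LLM Prompter does):
try:
    reply = client.completions.create(model=MODEL, prompt="Say hello.")
    print("completions endpoint works:", reply.choices[0].text)
except Exception as exc:
    print("completions endpoint failed:", exc)
```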