AI - First Time configuration (How To)

Hi friends,

For the first time, I'm trying to configure an AI.
I'm still learning, and I'm using Gemini.

First
I asked in this forum if it would be possible to connect to Gemini.

Second
I downloaded this workflow and I want to change the LLM configuration to Gemini, so I can test the workflow with the Gemini API.

My Steps :writing_hand:

1-Credentials Configuration
I got the API key from Google AI Studio.

I copied and pasted it into the Credentials Configuration node.

2-OpenAI Authenticator

  • I passed in the "credential" variable.
  • Then I copied and pasted a URL as @MartinDDDD explained in the other forum thread.

Result: both nodes are GREEN like in the picture above (so I'm assuming everything is OK up to this point).

3-OpenAI Chat Model Connector
The OpenAI Chat Model Connector has a list of models to choose from.
But I need to pass a Gemini model.

So my intent was to create a string and pass it in as a flow variable.

The node doesn't let me type in a model name, only choose from a fixed selection.

So I have tried some model strings:

  • gemini-2.0-flash
  • gemini-1.5-flash


But it ended with the error message below:

[screenshot of the error message]

And I tried to execute the next node, and nothing happened.

Summary: could you help me? Where am I making a mistake?

Hey there again,

First of all kudos for the very detailed explanation of what you’ve done so far and what you’ve tried - makes it really easy and enjoyable to help.

I think in general you are on the right track with your setup, including passing in a model name via flow variable.

The only thing I can see on the screenshots that might be causing your problem is using the LLM Prompter instead of the Chat Model Prompter:

As far as I know, the LLM Prompter uses the completions API structure, while the Chat Model Prompter uses the chat completions structure. If you try to use a model that needs chat completions in the LLM Prompter, you may get that error.
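To illustrate the difference, here is a minimal sketch using the OpenAI Python client against Gemini's OpenAI-compatible endpoint (the base URL and model name are just assumptions taken from the docs mentioned further down in this thread):

from openai import OpenAI

# Sketch only: assumes Gemini's OpenAI-compatible endpoint and a valid API key.
client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

# Roughly what the LLM Prompter sends: the older completions structure,
# which takes a single prompt string. A chat-only model may reject this.
# completion = client.completions.create(
#     model="gemini-2.0-flash",
#     prompt="Explain to me how AI works",
# )

# Roughly what the Chat Model Prompter sends: the chat completions structure,
# which takes a list of role/content messages.
chat = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": "Explain to me how AI works"}],
)
print(chat.choices[0].message.content)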

When making the change, be mindful that the Chat Model Prompter expects a conversation as input, i.e. a table with a role column (ai or human) and a message column (with your prompt). This basic example should help you work that out:
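For instance, the conversation table could look roughly like this (a pandas sketch just to illustrate the shape; the exact column names in your workflow may differ):

import pandas as pd

# Rough shape of the conversation table the Chat Model Prompter expects:
# one row per turn, with a role column (ai or human) and a message column.
# Column names here are only an assumption for illustration.
conversation = pd.DataFrame(
    {
        "Role": ["human", "ai", "human"],
        "Message": [
            "Hello",
            "Hi! How can I help you?",
            "Explain to me how AI works",
        ],
    }
)
print(conversation)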


Hello Martin.
I apologize, but I was still unable to get it to execute.

I think the first time the OpenAI Authenticator worked was because I hadn’t checked “Verify settings.”

What I mean is that perhaps the initial error is already in the URL or something related to the API.

Am I using the API incorrectly?

1-Do I need to use the Username or Password fields? (Credentials Configuration node)


[screenshot of the Credentials Configuration node]

2-What if I pasted the URL wrong?
Should it be something like the one below? (I had tried that, but it did not work.)

Well, I’m lost :sweat_smile:


I see.

I think it is less likely that the API key is the issue.

I looked into the Gemini docs and think the base URL should be either:

https://generativelanguage.googleapis.com/v1beta/openai

or

https://generativelanguage.googleapis.com/v1beta/openai/v1

This is based on the code example from here:

from openai import OpenAI

# Point the standard OpenAI client at Gemini's OpenAI-compatible endpoint.
client = OpenAI(
    api_key="GEMINI_API_KEY",  # your key from Google AI Studio
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

# Chat-completions style request: a list of role/content messages.
response = client.chat.completions.create(
    model="gemini-2.5-flash",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Explain to me how AI works"
        }
    ]
)

print(response.choices[0].message)
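If the model name itself turns out to be the problem, you could also list which model IDs the endpoint exposes and pass one of those exactly, e.g. via your flow variable (again just a sketch, reusing the client from the example above):

# List the model IDs exposed by the OpenAI-compatible Gemini endpoint,
# so you know the exact string to pass to the connector node.
for model in client.models.list():
    print(model.id)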