FAISS Vector Store Creator - fails because of invalid API key

Dear KNIMErs,

I am playing around with building a custom chatbot as laid out here: How to build a custom AI-powered job finder chatbot | KNIME

However, I already fail at the very first step, creating a vector store DB using the FAISS Vector Store Creator node.

It always returns this error:
Execute failed: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-proj-********************************************************************************************************************************************************wrQA. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

It sounds like the API key is not correct, but I double- and triple-checked, created new API keys in OpenAI’s web interface (platform.openai.com), and carefully made sure I really have the right ones, yet I always get the same error. The workflow executes fine (e.g. the OpenAI Authenticator and OpenAI Embeddings Connector nodes) up until the point where I want to create my custom vector DB.
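One way to rule out KNIME itself is to test the key outside the workflow. Here is a rough Python sketch (the `check_openai_key` helper is my own, not part of KNIME or the OpenAI SDK) that sends a minimal authenticated request to OpenAI’s models endpoint; a 401 there confirms the key is rejected by the API itself, not by the node:

```python
import urllib.error
import urllib.request

def check_openai_key(api_key: str, base_url: str = "https://api.openai.com/v1") -> bool:
    """Return True if the key is accepted by a minimal authenticated request."""
    req = urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # 401 means the key itself is rejected; other status codes point
        # elsewhere (e.g. a temporary outage on the API side).
        return False

# Usage (with a real key):
# check_openai_key("sk-proj-...")
```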

Has anyone else experienced this?

Can you post a screenshot of how your WF is wired up?

In general it sounds like that should be OK though.

In your OpenAI Auth node, under advanced settings, which base URL do you use?

Are you using the nodes from your own computer or from a computer within a corporate environment?

Thank you for your response @MartinDDDD

I found out (by coincidence) that it may eventually have been a hiccup on the API side.

How did I find out? I tried to log in to see if my KNIME workflow had consumed any tokens (which would have been an indicator that it somehow “accepted” the API credentials).

But I couldn’t log in and was sent into a strange website reload loop when I tried to access my API dashboard at OpenAI.

It was resolved after an hour, and then the API credentials were also immediately accepted.

Not sure if this really was the reason, but it sounds “reasonable”.

I could finally build the workflow / chatbot as per the example mentioned above. Now I need to wrap my head around how I could eventually replace OpenAI with DeepSeek for a better cost position (I find OpenAI’s API pricing quite expensive - nice to play around with, but not feasible for any productive use with 100s or even 1,000s of chats / requests).

Again, thanks for your kind support!

Glad you managed to work it out.

Regarding cheaper embedding models:

You should be able to use any embedding model provided that it uses the structure of the OpenAI Embeddings API Endpoint:

https://platform.openai.com/docs/api-reference/embeddings

Unfortunately that may rule out some providers - e.g. I tried running local embedding models via Ollama, but as Ollama’s embeddings endpoint is not OpenAI-compatible (yet), it doesn’t work.
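To make “uses the structure of the OpenAI Embeddings API endpoint” concrete, here is a rough sketch of the request and response shape a compatible provider must support (the embedding values are made up for illustration), plus a small heuristic check you could run against a provider’s JSON response:

```python
# Request body an OpenAI-compatible embeddings endpoint must accept
# (POST {base_url}/embeddings):
request_payload = {
    "model": "text-embedding-3-small",
    "input": ["First document chunk", "Second document chunk"],
}

# Response shape it must return (vectors truncated / invented here):
sample_response = {
    "object": "list",
    "data": [
        {"object": "embedding", "index": 0, "embedding": [0.01, -0.02, 0.03]},
        {"object": "embedding", "index": 1, "embedding": [0.04, 0.05, -0.06]},
    ],
    "model": "text-embedding-3-small",
    "usage": {"prompt_tokens": 8, "total_tokens": 8},
}

def looks_openai_compatible(response: dict) -> bool:
    """Heuristic check that an embeddings response matches the OpenAI shape."""
    data = response.get("data")
    return (
        response.get("object") == "list"
        and isinstance(data, list)
        and all("embedding" in item and "index" in item for item in data)
    )
```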

As far as I know, DeepSeek currently does not offer any embedding models, so you may have to cross them off the list…

I found an article that compares OpenAI with open-source alternatives - that might come in handy for your further research on this topic:

Thanks for the feedback.

One question, if I may: the embeddings model is ONLY needed when adding my “custom information” to the vector DB, right?

Or do I also need it to retrieve it from the DB?

When you retrieve something, your user message is also sent to the embeddings model – however, I think the config is stored with your vector store, so you may not see the embeddings nodes anywhere near your “Chat Model” node.
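In other words, the same embedding model runs twice: once at ingestion time over your documents, and once at query time over the user message, so both land in the same vector space. A toy sketch of that idea (the `embed` function here is a hash-based stand-in, not a real model, so it only “matches” identical text):

```python
import hashlib
import math
import random

def embed(text: str, dim: int = 8) -> list[float]:
    """Toy deterministic stand-in for an embeddings model (hypothetical)."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    vec = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]  # unit length, so dot product = cosine

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Ingestion: embed every document once and store the vectors.
docs = ["KNIME job finder tutorial", "FAISS vector store basics", "API pricing notes"]
index = [(doc, embed(doc)) for doc in docs]

# Retrieval: the *same* embed() runs on the user message, then rank by similarity.
query_vec = embed("FAISS vector store basics")
best_doc = max(index, key=lambda pair: cosine(pair[1], query_vec))[0]
```

With a real embeddings model the query would not need to match a document word for word; semantically similar text ends up close in the vector space.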

I covered this, including a setup for creating a document ingestion pipeline as a separate workflow to add more docs to your vector store, in a video - so from one YouTuber to another, feel free to check this one out :slight_smile:

The example workflows also contain a python view-based chat app that should render nicely on Hub!

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.