Exploring Embedding Integration: Connecting OpenAI Nodes with Ollama for Advanced Use Cases

Hello,

I am currently exploring this sample workflow [Create a Vector Store]

I have successfully used the OpenAI nodes to connect to Ollama hosted on my localhost, utilizing the Llama 3.1 model for simple queries and prompts.

I am now looking to explore the next step, which involves using the embedding nodes. Does anyone know whether the current OpenAI Embedding Connectors can be used to connect with Ollama? If so, which open-source model available on Ollama supports embeddings? I understand that Llama 3.1 is an LLM, not an embedding model. I would like to test a use case by connecting an open-source embedding model through the OpenAI Embedding Connectors, via Ollama hosted locally.

Any assistance would be greatly appreciated.

@bluecar007 you could take a look at this example as well as at the accompanying article. The example has PDFs as well as CSV files or just text (log) files.

The node uses Llama 3, but Llama 3.1 works as well. The embedding model is mxbai-embed-large, though you could also use Llama itself for embeddings; it will just take more time.
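To sanity-check the connection outside of the workflow, here is a minimal sketch of hitting Ollama's OpenAI-compatible `/v1/embeddings` endpoint directly, which is the same interface the OpenAI Embedding Connectors speak. This assumes a default local Ollama install on port 11434 and that the `mxbai-embed-large` model has already been pulled (`ollama pull mxbai-embed-large`); the helper names are mine, not part of any node.

```python
# Sketch: query Ollama's OpenAI-compatible embeddings endpoint locally.
# Assumes Ollama is running on localhost:11434 with mxbai-embed-large pulled.
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible base URL


def build_embedding_request(texts, model="mxbai-embed-large"):
    """Build an OpenAI-style /embeddings request body."""
    return {"model": model, "input": texts}


def embed(texts, model="mxbai-embed-large"):
    """POST the texts to the local Ollama server and return embedding vectors."""
    body = json.dumps(build_embedding_request(texts, model)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_BASE}/embeddings",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Ollama ignores the key, but OpenAI-style clients require one.
            "Authorization": "Bearer ollama",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [item["embedding"] for item in data["data"]]


if __name__ == "__main__":
    vectors = embed(["hello world"])
    print(len(vectors), len(vectors[0]))
```

If this returns vectors, the same base URL and model name should work in the connector configuration; swapping the model for a chat model such as Llama 3.1 will also produce embeddings, just more slowly.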
