KNIME Extension - Node View connecting to Flask Chat App

Hi all,

I'm looking for some advice. I've successfully built an extension to use Ollama-hosted LLMs. It works in that a system prompt and user message are passed in via a table, the node runs inference with a local LLM, and it outputs the original table with an added "Answer" column.
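
For illustration, a minimal sketch of the per-row inference step, assuming the LangChain community ChatOllama under the hood (the column names, model name, and helper function are placeholders, not the node's actual implementation):

    import pandas as pd
    from langchain_community.chat_models import ChatOllama
    from langchain_core.messages import HumanMessage, SystemMessage

    def add_answer_column(df: pd.DataFrame, model: str = "phi3") -> pd.DataFrame:
        """Run local inference for each row and append an 'Answer' column."""
        llm = ChatOllama(model=model)
        answers = []
        for _, row in df.iterrows():
            messages = [
                SystemMessage(content=row["System Prompt"]),
                HumanMessage(content=row["User Message"]),
            ]
            answers.append(llm.invoke(messages).content)
        df["Answer"] = answers
        return df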

I've started developing a second node that should allow the user to "chat" with an Ollama-hosted LLM.

I've already managed to get it working so that, on node execution, a subprocess is spun up that runs a Flask-based chat interface built with Google's Mesop, and the interface can be accessed by entering the right URL:port in a browser.
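
For context, a rough sketch of how such a subprocess launch can look during node execution (the script name, port, and arguments are assumptions, not the actual implementation):

    import subprocess
    import sys

    # Launch the Mesop/Flask chat app as a child process so the node can
    # finish executing while the web UI keeps serving on the given port.
    chat_proc = subprocess.Popen(
        [sys.executable, "chat_app.py", "--port", "32123"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )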

In a perfect world I want to access and show that in the node view.

So I have tried:

  1. To generate basic HTML with an iframe and pass it to knext.view_html():
    html = f"""
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <title>Chat Application</title>
    </head>
    <body>
        <iframe src="http://localhost:5000/chat" width="100%" height="600px"></iframe>
    </body>
    </html>
    """

    return knext.view_html(html)

==> results in:

(Uncaught TypeError: Cannot assign to read only property '__read' of object '[object Window]')

  2. As a backup option, I tried to generate HTML with a clickable link:
    html = f"""
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <title>Chat Application</title>
    </head>
    <body>
        <p>Click the link below to open the chat application in your browser:</p>
        <a href="http://localhost:32123/chat" target="_blank">Open Chat Application</a>
    </body>
    </html>
    """

    return knext.view_html(html)

==> results in the error:

ERROR CEFNodeView Blocked opening ‘http://localhost:32123/chat’ in a new window because the request was made in a sandboxed frame whose ‘allow-popups’ permission is not set.

So right now only the undesired backup option 3 works, where the view just shows a message with the URL, which then needs to be copied and pasted into a browser…
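
For completeness, this fallback view is essentially just static HTML that prints the URL as text, roughly along these lines:

    html = """
    <!DOCTYPE html>
    <html lang="en">
    <body>
        <p>The chat app is running. Copy this URL into your browser:</p>
        <p>http://localhost:32123/chat</p>
    </body>
    </html>
    """

    return knext.view_html(html)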

Question: Any ideas on how to resolve these errors (honestly, at this stage I assume that what I am doing is simply not meant to be done…)? Any alternative approaches? Or is this just a bad idea in general? :-)

Dear @MartinDDDD,

awesome! Always nice to see what people develop with our development framework!

Thanks for the detailed error messages. I asked internally how we might resolve this.

Meanwhile, if you're interested, we'd be happy to have you and your extension on board.

Best regards
Steffen

Hi Steffen,

thanks a lot for your response - I've already read through the guides on how to publish a node, and it's on my list for the coming weeks.

For anyone who wants to try the already-working Ollama node, the "zipped" build / update site can be downloaded from my GitHub here:

Any feedback is very welcome :-). Under the hood I'm using the LangChain community ChatOllama.

Dear @MartinDDDD,

thanks for that! Unfortunately, accessing a locally hosted website from a pure-Python node will not be possible, as it poses a security risk.

Just in general, did you try accessing your Ollama server with our OpenAI nodes?

Best regards
Steffen

I did - I think the endpoints are not compatible:

OpenAI Chat Completions:
POST https://api.openai.com/v1/chat/completions

Ollama:
POST http://localhost:11434/api/chat

As the OpenAI Authenticator only allows changing the base URL, the outgoing request still hits /chat/completions rather than /api/chat…

I've managed to prompt LLMs hosted via other methods that mimic the OpenAI API 1:1 (e.g. llama.cpp), but not Ollama.

Ollama is probably the most convenient way to host LLMs locally right now…

I have to admit, though, that I haven't tried this again recently - I recall having seen some posts about it on LinkedIn…
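
For reference, a direct call against Ollama's native /api/chat endpoint looks roughly like this from Python (the model name is just an example):

    import requests

    # Ollama's native chat endpoint expects the full message list in the body.
    payload = {
        "model": "phi3",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"},
        ],
        "stream": False,
    }
    response = requests.post("http://localhost:11434/api/chat", json=payload)
    print(response.json()["message"]["content"])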

Understood - I thought the error was there for a reason :-). How about the approach of including a link that opens a browser window, or is that in the same basket?

If it is not possible, then so be it - I have some other ideas that I will look into next ;-)

Hi @MartinDDDD,

that is not my area, but maybe Ollama's OpenAI-compatible API could help out here? According to one of our devs, you could then try to specify the model via a flow variable.
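
A minimal sketch of that route, assuming Ollama's OpenAI-compatible endpoint under /v1 (the model name is just an example; the API key is required by the client but ignored by Ollama):

    from openai import OpenAI

    # Point the standard OpenAI client at the local Ollama server.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    response = client.chat.completions.create(
        model="phi3",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)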

Maybe @mlauber71 has more expertise here than I do.

The first error does indeed seem to be in the same basket, unfortunately.

Let me know about your other ideas once you explore them :-)

@steffen_KNIME, @MartinDDDD I have integrated Llama 3 (via Ollama) using the REST API and Python nodes. The downside is that the chat history will not be there (unless you reload it with every request) because the session will not stay open (which is what you would want). See the sketch below.
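
A rough sketch of that "reload it every time" approach: keep the conversation in a list and resend the full history with each request (endpoint and model name are examples):

    import requests

    history = []  # full conversation; resent with every request to preserve context

    def chat_turn(user_input: str, model: str = "llama3") -> str:
        """Send the accumulated history plus the new user message to Ollama."""
        history.append({"role": "user", "content": user_input})
        response = requests.post(
            "http://localhost:11434/api/chat",
            json={"model": model, "messages": history, "stream": False},
        )
        answer = response.json()["message"]["content"]
        history.append({"role": "assistant", "content": answer})
        return answer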

So far I have only done this via a Streamlit app that allows you to select documents and then have a chat where the context stays open (for a while):

Thanks, both. I'll definitely check whether I can make it work with the OpenAI nodes using Ollama directly.

The Ollama inference node was a first prototype to learn how to build an extension anyway, but I have a more useful idea I'll pursue next :-).

Edit: I did a quick test: it is possible to use the local endpoint via the OpenAI Authenticator when passing in a random API key. However, the OpenAI Chat Model Connector seems to have validation built in for the selected model, even if it comes in via a flow variable. In my test case I passed in "phi3", and as this does not pass validation, it falls back to GPT-3.5. When I then prompt the model, I get an error that the model is not known (by my local Ollama).

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.