Created Ollama nodes for AI Extension

Hi KNIME

I made Ollama nodes for the KNIME AI Extension.

Notes:

Code for the KNIME extension: KNIME AI Extension

After installing the extension, I found the source code here:

%AppData%/../Local/Programs/KNIME/plugins/org.knime.python.llm_5.4.3.v202503051153

In KNIME Analytics Platform: Help > About... > Installation Details. Searching for KNIME AI Extension, the class is org.knime.python.features.llm.feature.group.

Installed the environment via the Miniforge3 Prompt.

cd C:/KNIME/knime-nodes/org.knime.python.llm_5.4.3.v20250416/src/main/python
conda env create -f env.yml
# conda env update --name llm_ollama_env -f env.yml --prune

Needed to fix the category in org.knime.python.llm_5.4.3.v20250416/src/main/python/src/util.py: path="/labs" was changed to path="/community", and name to "AI Tutorial Development".
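
For reference, a minimal sketch of the kind of change, assuming the category in util.py is declared with knext.category; the variable name, level_id, description and icon below are illustrative, not the actual values in the shipped file:

import knime.extension as knext

# Hypothetical sketch of the adjusted category declaration in util.py.
ai_extension_category = knext.category(
    path="/community",               # was path="/labs"
    level_id="kai",                  # illustrative level_id
    name="AI Tutorial Development",  # renamed so it does not clash with the installed extension
    description="Local test build of the KNIME AI Extension",
    icon="icons/ml.png",             # placeholder icon path
)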

Deleted the .pk12 file and a Linux binary of ~60 MB.

Updated env.yml:

  • Added knime-extension to be able to run tests.
  • Changed the conda packages to use the conda-forge:: channel prefix.
  • Removed packages that did not seem to be in use: scipy, bokeh, matplotlib, ipython.
  • Removed the pip packages text_generation, griffe, ragas, umap-learn.
  • Added the missing langchain-community.
  • Added langchain-ollama.

Added folder: src\main\python\src\models\ollama
Took the code from the DeepSeek nodes and modified it to fit Ollama.

How do I do a Pull Request?

5 Likes

First of all - great that you worked out how to do that :-).

To your question: I think right now pull requests for KNIME-provided AI extensions are not “foreseen”. I know that the KNIME team is looking into options for how people can build on top of existing extensions (e.g. I have built some AI-related nodes as well, but could not re-use the port objects the AI extension uses…), but I don’t think there is an ETA on such a feature as of yet…

Hi Martin.

Yes, the issue with the port objects was quite surprising.
I had initially made an “Ollama only” package, but ran into this problem:

knime.extension.nodes.InvalidParametersError: 
WARN  LLM Prompter
The provided input is unknown and therefore incompatible with the expected type LLM 
(got org.knime.python.ollama.models.ollama._chat.OllamaChatModelPortObject). 
Connect one of the following nodes: HF Hub LLM Connector, HF Hub Chat Model Connector, HF TGI LLM Connector, 
HF TGI Chat Model Connector, OpenAI LLM Connector, ... and 9 more.

But after moving the code into the original package, everything worked together.
Thanks to the Forum for pointing this out!
Re-use Custom Port Object from existing extensions (Python-based extensions) - Node Development - KNIME Community Forum. Cool person :wink:

As far as I understand, KNIME must somehow extract from the package which nodes implement the LLMPortObject/LLMPortObjectSpec defined in src.models.base.

And since this port specification is not part of core KNIME, all objects have to refer to the same specification.
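
A minimal sketch of why this matters; the class and port type names below are illustrative stand-ins, not the actual AI Extension code:

import knime.extension as knext


# Stand-ins for the spec/object classes defined in src.models.base:
class LLMPortObjectSpecA(knext.PortObjectSpec):
    def serialize(self) -> dict:
        return {}

    @classmethod
    def deserialize(cls, data: dict):
        return cls()


class LLMPortObjectA(knext.PortObject):
    def serialize(self) -> bytes:
        return b""

    @classmethod
    def deserialize(cls, spec: LLMPortObjectSpecA, storage: bytes):
        return cls(spec)


# A copy of the same classes living in a separate "Ollama only" extension:
class LLMPortObjectSpecB(LLMPortObjectSpecA):
    pass


class LLMPortObjectB(LLMPortObjectA):
    pass


# Downstream nodes such as the LLM Prompter declare their input port against
# the port type built from the A classes. A port type built from the
# (otherwise identical) B classes is still a different type, so connecting a
# B object fails with the InvalidParametersError shown above.
llm_port_type_a = knext.port_type("LLM", LLMPortObjectA, LLMPortObjectSpecA)
llm_port_type_b = knext.port_type("LLM (copy)", LLMPortObjectB, LLMPortObjectSpecB)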

If KNIME hasn’t made their code publicly available in a repository, then there really is only one way.

Ref: 4 steps for your Python Team to develop KNIME nodes | KNIME

External sharing to the whole KNIME community on KNIME Hub 
requires sharing your code in a publicly available git repository 
(e.g. BitBucket, GitLab, GitHub), providing some testflows, and 
sending the link to KNIME community contributions mail. 
We will run some automated tests and get back to you. 

So:
1: Fork the package and develop a community version.
2: Ask KNIME to add it as a Community Extension.

But that just sends a silly message…

Hey, don't use KNIME's Package, but 
this community package instead.
It has more features and accepts PR from the community. 

Found this in the Databricks nodes:

[image]

[image]

That seems very interesting, if it’s possible to reference Port types from other imported modules.

1 Like

That looks interesting indeed - I tried a fair few things that didn’t seem to work… if you can replicate this, that’d be great.

I think my issue was that I tried to re-use a port object from a Python-based extension in another Python-based extension, but if in the Databricks example both are Python extensions, that may just make it work…

Hi @tescnovonesis,

Firstly, kudos on spelunking into the codebase and implementing the nodes! You even took care of the nice icon.

Secondly, you’ve certainly sparked a few conversations internally! We’ve long had the idea to open source (or, for starters, at least make the code public) the AI Extension precisely to enable these sorts of contributions, but never quite gotten around to it. And seeing this certainly moves the needle firmly in that direction.

Right now, we can’t give any promises though, since we still need to figure out a way forward (lots of moving parts involved in the process, but I’m positively hopeful!). But as soon as there’s any news to share, we’ll get right back to you.

In the meantime, would you mind sharing your code? And maybe share a bit about why you decided to develop the nodes in the first place :slightly_smiling_face:

Cheers,
Ivan @ KNIME

3 Likes

Record a “+1” for me - having the ability to contribute or to “extend an extension” would be amazing :slight_smile:

1 Like

Hi @ivan_prigarin

We have a company hackathon coming up in May, with 20 cool projects submitted. Looking through the list, 4-5 of these ideas are interested in exploring AI for their business case.

Data is not allowed to leave the company network unless it has been scrutinized by Security. That’s why the OpenAI/DeepSeek nodes are banned, while there might be a way for the Azure nodes - but that still requires a “scrutiny round”. For a hackathon and exploration phase, this is poison to the process.

We have a decent GPU server in R&D with Ollama loaded.
So for a hackathon, to explore the business ideas, this would be ideal.
You can throw all the data you want at it, without data leaving the company network.

I know that Ollama has OpenAI compatibility, but
this only works with the Chat/LLM prompting nodes,
not with the Embedding nodes.
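
For the chat case, here is a minimal sketch (outside KNIME) of how the OpenAI compatibility layer can be used via langchain-openai; the base URL and model name are just examples for a local server:

from langchain_openai import ChatOpenAI

# Ollama exposes an OpenAI-compatible endpoint under /v1. An API key is
# required by the client but is ignored by a default Ollama installation.
chat = ChatOpenAI(
    base_url="http://localhost:11434/v1",  # internal Ollama server
    api_key="ollama",                      # placeholder, not checked by Ollama
    model="gemma3:12b",                    # example model pulled on the server
    temperature=0.7,
)

print(chat.invoke("Reply with a one-line greeting.").content)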

And the Vector stores seem to only work on the local computer.

So, I would like to explore the options for a Snowflake / PostgreSQL database for vector storage.

I will look into these options.
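
As a first, untested sketch of the PostgreSQL direction (langchain-postgres and a psycopg driver are not in the env.yml below; the connection string and model name are placeholders):

from langchain_ollama import OllamaEmbeddings
from langchain_postgres import PGVector

# Embeddings served by the in-house Ollama server (model name is an example).
embeddings = OllamaEmbeddings(
    model="nomic-embed-text",
    base_url="http://localhost:11434",
)

# Vector store backed by a PostgreSQL database with the pgvector extension.
store = PGVector(
    embeddings=embeddings,
    collection_name="hackathon_docs",
    connection="postgresql+psycopg://user:password@db.internal:5432/vectors",
)

store.add_texts(["Internal document text that never leaves the company network."])
print(store.similarity_search("company network", k=1))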

Here is the code:

env.yml

name: llm_ollama_env
channels:
  - conda-forge
  - knime
dependencies:
  - conda-forge::python=3.11
  - conda-forge::pip=25.0.1
  - knime::knime-python-base>=5.4
  - knime::knime-extension>=5.4
  - conda-forge::huggingface_hub=0.23.4
  - conda-forge::chromadb=0.5.23
  - conda-forge::faiss-cpu=1.7.4
  - conda-forge::pydantic=2.10.2
  - conda-forge::beautifulsoup4=4.12.3
  - pip:
      - langchain==0.3.14
      - langchain-community==0.3.14
      - langchain-openai==0.3.0
      - langchain-ollama==0.3.1
      - langchain-chroma==0.2.0
      - gpt4all==2.8.2
      - giskard==2.14.2

src\models\ollama

_auth.py

import knime.extension as knext
from ollama import Client
from base import AIPortObjectSpec

_default_ollama_api_base = "http://localhost:11434"


class OllamaAuthenticationPortObjectSpec(AIPortObjectSpec):
    def __init__(
        self, base_url: str = _default_ollama_api_base
    ) -> None:
        super().__init__()
        self._base_url = base_url

    @property
    def base_url(self) -> str:
        return self._base_url

    def validate_context(self, ctx: knext.ConfigurationContext):
        if not self.base_url:
            raise knext.InvalidParametersError("Please provide a base URL.")

    def validate_api_connection(self, ctx: knext.ExecutionContext):
        try:
            self._get_models_from_api(ctx)
        except Exception as e:
            raise RuntimeError(f"Could not access Ollama API at {self.base_url}") from e

    def _get_models_from_api(
        self, ctx: knext.ConfigurationContext | knext.ExecutionContext
    ) -> list[str]:
        ollama = Client(host=self.base_url, timeout=2)
        models_response = ollama.list()
        return [model["model"] for model in models_response["models"]]

    def get_model_list(self, ctx: knext.ConfigurationContext) -> list[str]:
        try:
            return self._get_models_from_api(ctx)
        except Exception:
            return ["ollama-chat", "ollama-reasoner"]

    def serialize(self) -> dict:
        return {
            "base_url": self._base_url,
        }

    @classmethod
    def deserialize(cls, data: dict):
        return cls(
            data.get("base_url", _default_ollama_api_base)
        )


class OllamaAuthenticationPortObject(knext.PortObject):
    def __init__(self, spec: OllamaAuthenticationPortObjectSpec):
        super().__init__(spec)

    @property
    def spec(self) -> OllamaAuthenticationPortObjectSpec:
        return super().spec

    def serialize(self) -> bytes:
        return b""

    @classmethod
    def deserialize(cls, spec: OllamaAuthenticationPortObjectSpec, storage: bytes):
        return cls(spec)

auth.py

import knime.extension as knext
# These imports have to be relative. When using "src.models.ollama" the nodes disappear in the KNIME GUI.
from .base import ollama_icon, ollama_category
from ._auth import _default_ollama_api_base, OllamaAuthenticationPortObject, OllamaAuthenticationPortObjectSpec


ollama_auth_port_type = knext.port_type(
    "Ollama Authentication",
    OllamaAuthenticationPortObject,
    OllamaAuthenticationPortObjectSpec,
)


@knext.node(
    name="Ollama Authenticator",
    node_type=knext.NodeType.SOURCE,
    icon_path=ollama_icon,
    category=ollama_category,
    keywords=["Ollama", "GenAI"],
)
@knext.output_port(
    "Ollama API Authentication",
    "Authentication for the Ollama API",
    ollama_auth_port_type,
)
class OllamaAuthenticator:
    """Authenticates with the Ollama API via API key.

    **Note**: Default installation of Ollama has no API key.
    """

    base_url = knext.StringParameter(
        "Base URL",
        "The base URL of the Ollama API.",
        default_value=_default_ollama_api_base,
        is_advanced=False,
    )

    validate_api_connection = knext.BoolParameter(
        "Validate API Connection",
        "If set, the API connection is validated during execution by fetching the available models.",
        True,
        is_advanced=False,
    )

    def configure(
        self, ctx: knext.ConfigurationContext
    ) -> OllamaAuthenticationPortObjectSpec:
        spec = self.create_spec()
        spec.validate_context(ctx)
        return spec

    def execute(self, ctx: knext.ExecutionContext) -> OllamaAuthenticationPortObject:
        spec = self.create_spec()
        if self.validate_api_connection:
            spec.validate_api_connection(ctx)
        return OllamaAuthenticationPortObject(spec)

    def create_spec(self) -> OllamaAuthenticationPortObjectSpec:
        return OllamaAuthenticationPortObjectSpec(
            base_url=self.base_url
        )

_chat.py

import knime.extension as knext
from langchain_ollama import ChatOllama
# These imports have to be relative. When using "src.models.x" the nodes disappear in the KNIME GUI.
from .._base import ChatModelPortObject, ChatModelPortObjectSpec, OutputFormatOptions
from ._auth import OllamaAuthenticationPortObjectSpec



class OllamaChatModelPortObjectSpec(ChatModelPortObjectSpec):
    """Spec of a Ollama Chat Model"""

    def __init__(
        self,
        auth: OllamaAuthenticationPortObjectSpec,
        model: str,
        temperature: float,
        num_predict: int,
        n_requests=1,
    ):
        super().__init__(n_requests)
        self._auth = auth
        self._model = model
        self._temperature = temperature
        self._num_predict = num_predict

    @property
    def model(self) -> str:
        return self._model

    @property
    def temperature(self) -> float:
        return self._temperature

    @property
    def num_predict(self) -> int:
        return self._num_predict

    @property
    def auth(self) -> OllamaAuthenticationPortObjectSpec:
        return self._auth

    def validate_context(self, ctx):
        self.auth.validate_context(ctx)

    def serialize(self) -> dict:
        return {
            "auth": self._auth.serialize(),
            "n_requests": self._n_requests,
            "model": self._model,
            "temperature": self._temperature,
            "num_predict": self._num_predict,
        }

    @classmethod
    def deserialize(cls, data: dict):
        auth = OllamaAuthenticationPortObjectSpec.deserialize(data["auth"])
        return cls(
            auth=auth,
            model=data["model"],
            temperature=data["temperature"],
            num_predict=data["num_predict"],
            n_requests=data.get("n_requests", 1),
        )


class OllamaChatModelPortObject(ChatModelPortObject):
    @property
    def spec(self) -> OllamaChatModelPortObjectSpec:
        return super().spec

    def create_model(
        self,
        ctx: knext.ExecutionContext,
        output_format: OutputFormatOptions = OutputFormatOptions.Text,
    ):
        if "reasoner" in self.spec.model:
            return ChatOllama(
                base_url=self.spec.auth.base_url,
                model=self.spec.model,
                temperature=1,
                num_predict=self.spec.num_predict,
            )

        return ChatOllama(
            base_url=self.spec.auth.base_url,
            model=self.spec.model,
            temperature=self.spec.temperature,
            num_predict=self.spec.num_predict,
        )

chat.py

import knime.extension as knext
# These imports have to be relative. When using "src.models.ollama" the nodes disappear in the KNIME GUI.
from .base import ollama_icon, ollama_category
from .auth import ollama_auth_port_type
from ._auth import OllamaAuthenticationPortObject, OllamaAuthenticationPortObjectSpec
from ._chat import OllamaChatModelPortObject, OllamaChatModelPortObjectSpec


ollama_chat_model_port_type = knext.port_type(
    "Ollama Chat Model", OllamaChatModelPortObject, OllamaChatModelPortObjectSpec
)

def _list_models(ctx: knext.ConfigurationContext):
    if (specs := ctx.get_input_specs()) and (auth_spec := specs[0]):
        return auth_spec.get_model_list(ctx)
    return ["ollama-chat", "ollama-reasoner"]


@knext.node(
    name="Ollama Chat Model Connector",
    node_type=knext.NodeType.SOURCE,
    icon_path=ollama_icon,
    category=ollama_category,
    keywords=["Ollama", "GenAI", "Reasoning"],
)
@knext.input_port(
    "Ollama Authentication",
    "The authentication for the Ollama API.",
    ollama_auth_port_type,
)
@knext.output_port(
    "Ollama Chat Model",
    "The Ollama chat model which can be used in the LLM Prompter and Chat Model Prompter.",
    ollama_chat_model_port_type,
)
class OllamaChatModelConnector:
    """Connects to a chat model provided by the Ollama API.

    This node establishes a connection with an Ollama Chat Model. After successfully authenticating
    using the **Ollama Authenticator** node, you can select a chat model from a predefined list.

    **Note**: A default installation of Ollama has no API key.
    """

    model = knext.StringParameter(
        "Model",
        description="The model to use. The available models are fetched from the Ollama API if possible.",
        default_value="ollama-chat",
        choices=_list_models,
    )

    temperature = knext.DoubleParameter(
        "Temperature",
        description="""
        Sampling temperature to use, between 0.0 and 2.0.

        Higher values will lead to less deterministic but more creative answers.
        Recommended values for different tasks:

        - Coding / math: 0.0
        - Data cleaning / data analysis: 1.0
        - General conversation: 1.3
        - Translation: 1.3
        - Creative writing: 1.5
        """,
        default_value=1,
    )

    num_predict = knext.IntParameter(
        "Num Predict",
        description="The maximum number of tokens to generate in the response",
        default_value=4096,
    )

    def configure(
        self,
        ctx: knext.ConfigurationContext,
        auth: OllamaAuthenticationPortObjectSpec,
    ) -> OllamaChatModelPortObjectSpec:
        auth.validate_context(ctx)
        return self.create_spec(auth)

    def create_spec(
        self, auth: OllamaAuthenticationPortObjectSpec
    ) -> OllamaChatModelPortObjectSpec:
        return OllamaChatModelPortObjectSpec(
            auth=auth,
            model=self.model,
            temperature=self.temperature,
            num_predict=self.num_predict,
        )

    def execute(
        self, ctx: knext.ExecutionContext, auth: OllamaAuthenticationPortObject
    ) -> OllamaChatModelPortObject:
        return OllamaChatModelPortObject(self.create_spec(auth.spec))

base.py

import knime.extension as knext
from ..base import model_category

ollama_icon = "icons/ollama.png"
ollama_category = knext.category(
    path=model_category,
    name="Ollama",
    level_id="ollama",
    description="Ollama models",
    icon=ollama_icon,
)

src\models\_base.py
(moved from base.py to allow for testing)

import knime.extension as knext
from base import AIPortObjectSpec


class OutputFormatOptions(knext.EnumParameterOptions):
    Text = (
        "Text",
        "Text output message generated by the model.",
    )

    JSON = (
        "JSON",
        """
        When JSON is selected, the model is constrained to only generate strings 
        that parse into valid JSON object. Make sure you include the string "JSON"
        in your prompt or system message to instruct the model to output valid JSON 
        when this mode is selected.  
        For example: "Tell me a joke. Please only reply in valid JSON."
        Please refer to the OpenAI [guide](https://platform.openai.com/docs/guides/structured-outputs/structured-outputs-vs-json-mode) 
        to see which models currently support JSON outputs.
        """,
    )


class LLMPortObjectSpec(AIPortObjectSpec):
    """Most generic spec of LLMs. Used to define the most generic LLM PortType"""

    def __init__(
        self,
        n_requests: int = 1,
    ) -> None:
        super().__init__()
        self._n_requests = n_requests

    @property
    def n_requests(self) -> int:
        return self._n_requests

    @property
    def supported_output_formats(self) -> list[OutputFormatOptions]:
        return [OutputFormatOptions.Text]


class LLMPortObject(knext.PortObject):
    def __init__(self, spec: LLMPortObjectSpec) -> None:
        super().__init__(spec)

    def serialize(self) -> bytes:
        return b""

    @classmethod
    def deserialize(cls, spec: LLMPortObjectSpec, storage: bytes):
        return cls(spec)

    def create_model(self, ctx: knext.ExecutionContext):
        raise NotImplementedError()


class ChatModelPortObjectSpec(LLMPortObjectSpec):
    """Most generic chat model spec. Used to define the most generic chat model PortType."""


class ChatModelPortObject(LLMPortObject):
    def __init__(self, spec: ChatModelPortObjectSpec) -> None:
        super().__init__(spec)

    def serialize(self):
        return b""

    @classmethod
    def deserialize(cls, spec, data: dict):
        return cls(spec)

    def create_model(self, ctx: knext.ExecutionContext):
        raise NotImplementedError()

src\knime_llm.py
(No src\models\ollama\__init__.py, since this will interfere with testing)

from models.ollama.auth import OllamaAuthenticator
from models.ollama.chat import OllamaChatModelConnector

tests\test__auth.py

import unittest
from unittest.mock import MagicMock, patch
from ollama._types import ResponseError, ListResponse, ModelDetails
import datetime
import pathlib
import sys
sys.path.append(str(pathlib.Path(__file__).parent.parent.joinpath('src')))
from src.models.ollama._auth import OllamaAuthenticationPortObjectSpec


class TestGetModelsFromAPI(unittest.TestCase):
    def setUp(self):
        # Set up a mock context and credentials
        self.mock_ctx = MagicMock()
        self.spec = OllamaAuthenticationPortObjectSpec(base_url="http://localhost:11434")

    @patch("src.models.ollama._auth.Client")
    def test_get_models_success(self, mock_ollama_client):
        mock_response = MagicMock()
        mock_response.list.return_value = ListResponse(models=[
            ListResponse.Model(model='gemma3:12b', modified_at=datetime.datetime(2025, 3, 17), digest='6fd036cefda5093cc827b6c16be5e447f23857d4a472ce0bdba0720573d4dcd9', size=8149190199, details=ModelDetails(parent_model='', format='gguf', family='gemma3', families=['gemma3'], parameter_size='12.2B', quantization_level='Q4_K_M')), 
            ListResponse.Model(model='qwen2.5-coder:32b', modified_at=datetime.datetime(2025, 3, 8), digest='4bd6cbf2d094264457a17aab6bd6acd1ed7a72fb8f8be3cfb193f63c78dd56df', size=19851349856, details=ModelDetails(parent_model='', format='gguf', family='qwen2', families=['qwen2'], parameter_size='32.8B', quantization_level='Q4_K_M')), 
            ListResponse.Model(model='mxbai-embed-large:latest', modified_at=datetime.datetime(2025, 1, 27), digest='468836162de7f81e041c43663fedbbba921dcea9b9fefea135685a39b2d83dd8', size=669615493, details=ModelDetails(parent_model='', format='gguf', family='bert', families=['bert'], parameter_size='334M', quantization_level='F16')), 
            ListResponse.Model(model='nomic-embed-text:latest', modified_at=datetime.datetime(2025, 1, 27), digest='0a109f422b47e3a30ba2b10eca18548e944e8a23073ee3f3e947efcf3c45e59f', size=274302450, details=ModelDetails(parent_model='', format='gguf', family='nomic-bert', families=['nomic-bert'], parameter_size='137M', quantization_level='F16'))
        ])
        mock_ollama_client.return_value = mock_response
        # Call the method
        models = self.spec._get_models_from_api(self.mock_ctx)
        assert isinstance(models, list)
        assert len(models) > 0

    @patch("src.models.ollama._auth.Client")
    def test_get_models_404_not_found(self, mock_ollama_client):
        mock_response = MagicMock()
        mock_response.list.side_effect = ResponseError("HTTP Error 404. The requested resource is not found")
        mock_ollama_client.return_value = mock_response
        # Call the method
        self.spec._base_url = "http://localhost"
        with self.assertRaises(ResponseError) as context:
            self.spec._get_models_from_api(self.mock_ctx)

tests\test__chat.py

import unittest
from unittest.mock import MagicMock, patch
import pathlib
import sys
sys.path.append(str(pathlib.Path(__file__).parent.parent.joinpath('src')))
from src.models.ollama._chat import OllamaChatModelPortObject


class TestCreateModel(unittest.TestCase):
    def setUp(self):
        # Set up a mock context and credentials
        self.mock_ctx = MagicMock()
        mock_spec = MagicMock()
        self.portobj = OllamaChatModelPortObject(spec=mock_spec)

    @patch("src.models.ollama._chat.ChatOllama")
    def test_create_model_standard(self, mock_chat_ollama):
        # Setup mock spec
        self.portobj.spec.model = "normal-model"
        self.portobj.spec.auth.credentials = "test_creds"
        self.portobj.spec.auth.base_url = "http://localhost:11434"
        self.portobj.spec.temperature = 0.7
        self.portobj.spec.num_predict = 1000

        # Call method
        self.portobj.create_model(self.mock_ctx)

        # Verify ChatOllama was called with correct params
        mock_chat_ollama.assert_called_once_with(
            base_url="http://localhost:11434",
            model="normal-model",
            temperature=0.7,
            num_predict=1000
        )

    @patch("src.models.ollama._chat.ChatOllama")
    def test_create_model_reasoner(self, mock_chat_ollama):
        # Setup mock spec
        self.portobj.spec.model = "reasoner-model"
        self.portobj.spec.auth.credentials = "test_creds"
        self.portobj.spec.auth.base_url = "http://localhost:11434"
        self.portobj.spec.num_predict = 1000

        # Call method 
        self.portobj.create_model(self.mock_ctx)

        # Verify ChatOllama was called with correct params for reasoner
        mock_chat_ollama.assert_called_once_with(
            base_url="http://localhost:11434",
            model="reasoner-model", 
            temperature=1,
            num_predict=1000
        )

Tests

conda activate llm_ollama_env
python -m unittest discover tests
# Or Test Single file / Class / Function
python -m unittest tests.test_utils
#
python -m unittest tests.test__auth
python -m unittest tests.test__auth.TestGetModelsFromAPI
python -m unittest tests.test__auth.TestGetModelsFromAPI.test_get_models_404_not_found
#
python -m unittest tests.test__chat
python -m unittest tests.test__chat.TestCreateModel

Hi @MartinDDDD

I did not manage to figure out how to reference an Extension port.

In org.knime.bigdata.databricks_5.4.1.v202501301151\plugin.xml I found:

In org.knime.python.llm_5.4.3.v20250416\plugin.xml I only see this:

I tried almost all combinations of port references by ID, but I failed miserably.

So, forking seems the only way forward here.

1 Like

Hi @ivan_prigarin

I made a repo on GitHub: tlinnet/kollama: KNIME Ollama Extension

It’s a pretty clean setup, separating Ollama nodes from KNIME AI Extension nodes.

I will submit a request to have Kollama added as a KNIME Community Extension.

1 Like

That looks great!

One question:

For now, can you have both the KNIME AI Extension and this one installed in parallel, or do you get errors because of the PortObject names?

One suggestion: can you bundle the extension and provide the content for a local update site in your GitHub repo as well? Users can then (for now) download it from there and set it up locally.

Hi @MartinDDDD

You can have both extensions installed. But as you see, it’s not very convenient.

You can download a zip file that can serve as a local update site here: Releases · tlinnet/kollama

1 Like

Hi @tescnovonesis!

You’re moving a little bit quicker than us :smiley:. Which is fantastic, thanks for all the effort!

As you mentioned, duplicating the entire rest of the AI Extension in the Kollama extension is not ideal, since then you have duplicates of all nodes with a lot of moving parts and dependencies, just to provide the Ollama connectivity. And having a separate extension with just the Ollama nodes won’t work nicely with the port types of the AI Extension due to certain limitations of our Python extension framework.

Until we close that gap, we would like to include the Ollama nodes directly in the AI Extension but clearly label and categorise them as a community contribution.

Here’s what that would look like:

  1. We make the AI Extension codebase public ASAP
  2. We provide you with a Contributor License Agreement (CLA) to read through and sign
  3. You open a PR with the Ollama nodes
  4. We work together on getting the PR merged, temporarily including the nodes in the extension, marked as a community contribution
  5. When the Python extension framework starts supporting cross-extension port types, we’d circle around and separate the Ollama nodes into their own proper community extension integrated with all the necessary port types of the AI Extension (you’ve already sent us a request to get Kollama onto the community update site, but it would make more sense to do that once we’ve solved the port type limitation :+1: )

All things going well, the nodes would be available in the nightly of the Analytics Platform and the AI Extension soon after that, and included in the next feature release.

Let us know what you think!

Ivan & the team

6 Likes

Hi @ivan_prigarin

Sounds like a great plan and exactly what I had hoped for.

Thanks !

6 Likes