Dear KNIME Team,
While the current GPT4All integration for local Large Language Models (LLMs) in the KNIME Analytics Platform is a great start, it falls short of many professional and academic needs. As you know, GPT4All supports only a limited catalog of models, and the available models' parameter counts are relatively low.
This is where integrating Ollama could dramatically enhance KNIME’s capabilities. An Ollama integration would provide significant advantages:
- Extensive Model Support: Access to hundreds of modern LLMs, including popular ones like Gemma, DeepSeek, Qwen, and Llama.
- Scalability: Allows users to choose models that fit their hardware and needs, with parameter counts ranging from a few billion to hundreds of billions.
- Multimodal Capabilities: Enables the use of multimodal models like LLaVA, allowing workflows to process images alongside text.
Developing user-friendly Ollama integration nodes, similar to the existing ones for GPT4All, would make KNIME a much more powerful and flexible platform for local LLM development.
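To illustrate how straightforward such an integration could be: Ollama exposes a simple local REST API, which can already be reached today from a KNIME Python Script node. Below is a minimal sketch, assuming an Ollama server is running on its default port 11434 and that a model such as `llama3` has been pulled locally (the function names here are illustrative, not part of any existing KNIME node):

```python
import json
import urllib.request

# Assumption: Ollama is running locally on its default port (11434).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON response instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a local Ollama server and return the generated text."""
    body = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Wrapping calls like this in dedicated nodes (model selection, prompt input, response output) would give KNIME users the same point-and-click experience the GPT4All nodes offer today, but across Ollama's far larger model catalog.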
We would be very interested to know if this is on your future roadmap. Is there any potential timeline for an Ollama integration?
Thank you for your consideration and for all your great work.
Best regards,
Ali Alkan