Hi Geo, many thanks for the feedback and thoughts!
You raise a few really good points, let me contribute with some thoughts of my own.
Regarding general-purpose LLMs on the web being able to answer KNIME-related questions: yes, true to a degree, but that degree is likely much lower than it seems. From how you talk about LLMs, you probably know quite a bit about the not-so-inspiring reality of huge portions of LLM output being false. While hallucination is literally how they are meant to work (they hallucinate the answer by design), you can, of course, steer those hallucinations towards correctness by providing concrete bits of relevant information. As you said, the chatbots online are pretty good at browsing the web to retrieve some relevant information, but usually that's simply not enough. I can confidently say that because even K-AI, which has access to vector stores with carefully curated KNIME-specific information, can and does make mistakes in its answers. From this perspective, K-AI does provide quite a bit of value in terms of onboarding and learning; it's just difficult to make this case when answers from ChatGPT always look so convincing.
Regarding K-AI’s workflow-building capabilities compared to LLM-generated code in a scripting node: I totally understand your point, but this is more of a question about visual programming vs. high-code programming. To me the value of LLM-generated visual workflows is very apparent compared to LLM-generated scripts - clear representation of the flow of data, clear abstraction of each transformation applied to that data, presumably well documented via node and workflow annotations. Sure, you can compress all those steps into a single node with a Python script inside, but you lose all the benefits that come from programming such workflows visually. Even so, you do have access to K-AI inside scripting nodes, and you can still benefit from the rest of KNIME’s ecosystem even with a single scripting node in your workflow (deploy, schedule, etc.).
Regarding privacy: we recently rolled out a note on this in our documentation, have a look - KNIME Analytics Platform User Guide. But yes, the two super important points you mention here are on our minds as well. We’re planning to let users control what’s accessible to K-AI and what isn’t (e.g. only table specs, or also node configurations, or perhaps even samples from actual data, and so on). And making K-AI’s backend LLM configurable only makes sense; that’s definitely something we want to do as well.
Regarding turning all AI features off if you’d rather not have them - you can already do that right away: Preferences → KNIME → KNIME Modern UI → AI Assistant.
Really enjoyed reading your feedback, many thanks again.
-Ivan