The LLM Prompter node can only be executed successfully (green) when
HuggingFaceH4/zephyr-7b-beta is used; even switching to bigscience/mt0-large produces a 404 error,
despite the Credentials Configuration (Hugging Face API Key), HF Hub Authenticator, and HF Hub LLM Connector nodes all executing successfully (green).
You’re right: the LLM Connector and Prompter nodes in KNIME currently only support models that are hosted directly on Hugging Face’s Inference API. That’s why models like HuggingFaceH4/zephyr-7b-beta work, while others such as bigscience/mt0-large or the deepseek-ai models don’t, since they aren’t served through Hugging Face’s own infrastructure.
When picking a model, look for ones that list HF Inference API under the “Inference Providers” section of their model page on Hugging Face.
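If you want to verify this outside KNIME before wiring a model into the workflow, a minimal sketch like the one below can help. It assumes the `huggingface_hub` Python package is installed and that your Hugging Face token is available in an `HF_TOKEN` environment variable; the model names are just examples:

```python
import os
from huggingface_hub import InferenceClient

# Sketch: check whether a model responds via Hugging Face's own Inference API.
# Assumes the HF_TOKEN environment variable holds the same API key used in the
# Credentials Configuration node.
def is_served_by_hf_inference(model_id: str) -> bool:
    client = InferenceClient(model=model_id, token=os.environ.get("HF_TOKEN"))
    try:
        client.text_generation("Hello, world!", max_new_tokens=10)
        return True
    except Exception as err:
        # Models not hosted on HF's own infrastructure typically fail here
        # (for example with a 404), mirroring the error in the LLM Prompter node.
        print(f"{model_id}: not reachable via the HF Inference API ({err})")
        return False

print(is_served_by_hf_inference("HuggingFaceH4/zephyr-7b-beta"))  # expected: True
print(is_served_by_hf_inference("bigscience/mt0-large"))          # expected: False
```

If a model fails this check, it will also fail in the HF Hub LLM Connector / LLM Prompter nodes, regardless of whether the authentication nodes execute successfully.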
There is already a ticket (AP-24349) to support other Inference Providers in the Hugging Face Hub Connectors in a future release.