Troubleshooting Connection Issues with Company’s Internal Self-Hosted AI LLM

I am attempting to connect to my company’s internally self-hosted AI Large Language Model (LLM). To test this connection to the LLM server, I am using two nodes:

  1. The Credentials Configuration Node - where I enter the token.

  2. The OpenAI Authentication Node - as my company claims their API URL is OpenAI-compatible.

The OpenAI-compatible base URL provided by my company, which I manually entered into the OpenAI Authentication Node, is:
https://ABC-llm-coordinator-preprod.ml-5w114b15-555.apps.apps.prod5.abc.com/chat/completions

When I execute the “OpenAI Authentication Node,” I receive the following error:
Execute failed: API connection failed. Please make sure that your base URL ‘https://ABC-llm-coordinator-preprod.ml-5w114b15-555.apps.apps.prod5.abc.com/chat/completions’ is valid and uses a supported (‘http://’ or ‘https://’) protocol. It might also be caused by your network settings, proxy configuration, SSL certificates, or firewall rules.

Could I get some advice on how to troubleshoot this? Here are my specific questions:

  1. Could this be an issue of the Base URL not being compatible with the “OpenAI Authentication Node”? How can I determine this, and what steps can I take to verify it? Is there a way to inspect the source code of the node so I can collaborate with my IT team to confirm if it’s compatible with the base URL and its required parameters?

  2. I’ve noticed that other base URLs I’ve previously tested work fine, such as https://api.openai.com/v1 and http://localhost:11434/v1 (for a connection with self-hosted Ollama). Does the base URL need to follow a specific format, like ending with /v1, or is /chat/completions acceptable and irrelevant to the issue?

  3. When I enter the base URL (https://ABC-llm-coordinator-preprod.ml-5w114b15-555.apps.apps.prod5.abc.com/chat/completions) into my web browser, I can reach the gateway without any issues and receive a response on the page (not a 404 or blocked page). Can I assume there are no network, firewall, or proxy issues based on this?

  4. Should I test the provided base URL using a Python script to rule out network or connectivity problems and narrow the issue down to the Base URL, the “OpenAI Authentication Node,” or my company’s internal LLM server?

I’d appreciate any advice on how to proceed with troubleshooting this issue.

Hey there,

So, from the OpenAI docs, the full endpoint for chat completions is:

https://api.openai.com/v1/chat/completions

For KNIME nodes you need to provide the base URL only up to /v1:

https://api.openai.com/v1

Thus I would expect things to work if you define the base URL as:

https://ABC-llm-coordinator-preprod.ml-5w114b15-555.apps.apps.prod5.abc.com

Provided that there are no proxy issues, API key issues, etc.…
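To illustrate why the base URL should stop at the API root: OpenAI-compatible clients append the endpoint path themselves, so a configured URL that already ends in /chat/completions gets the path appended twice. This is a minimal sketch of that joining logic, not KNIME's actual source code:

```python
# Sketch: an OpenAI-compatible client builds the request URL by appending
# the endpoint path to whatever base URL you configure.
def chat_completions_url(base_url: str) -> str:
    return base_url.rstrip("/") + "/chat/completions"

# Correct: base URL stops at the API root
print(chat_completions_url("https://api.openai.com/v1"))
# -> https://api.openai.com/v1/chat/completions

# Wrong: base URL already contains the endpoint, so the path doubles up
print(chat_completions_url("https://api.openai.com/v1/chat/completions"))
# -> https://api.openai.com/v1/chat/completions/chat/completions
```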


I tried both the API root URL and the full endpoint as suggested above, and it's still throwing the same error.

When I execute the “OpenAI Authentication Node,” I receive the following error:
Execute failed: API connection failed. Please make sure that your base URL ‘https://ABC-llm-coordinator-preprod.ml-5w114b15-555.apps.apps.prod5.abc.com/chat/completions’ is valid and uses a supported (‘http://’ or ‘https://’) protocol. It might also be caused by your network settings, proxy configuration, SSL certificates, or firewall rules.

I decided to test whether it's a network/certificate/SSL-related issue rather than a node issue, and I performed the following:

I created a Python script on the same laptop and ran it against the OpenAI-compatible base URL using the script below.

import requests

# Configuration
API_URL = "https://abc-llm-coordinator-preprod.ml-5w114b15-555.apps.apps.prod5.abc.com/chat/completions"  # Replace with your server's URL
BEARER_TOKEN = "xx"  # Replace with your actual token

# Test payload
payload = {
    "model": "Meta-Llama-3.1-70B",  # or any model your server supports
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
}

headers = {
    "Authorization": f"Bearer {BEARER_TOKEN}",
    "Content-Type": "application/json",
}

# Make the request; catch connection-level failures (DNS, SSL, proxy)
# so they are printed instead of raising a traceback
try:
    response = requests.post(API_URL, json=payload, headers=headers, timeout=30)
except requests.exceptions.RequestException as exc:
    print(f"Error: {exc}")
else:
    # Output response
    if response.status_code == 200:
        print("Connection successful!")
        print("Response:")
        print(response.json())
    else:
        print(f"Failed to connect. Status code: {response.status_code}")
        print("Response:")
        print(response.text)

I received the following error message:

Error: HTTPSConnectionPool(host='abc-llm-coordinator-preprod.ml-5w114b15-555.apps.apps.prod5.abc.com', port=443): Max retries exceeded with url: /chat/completions (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)')))

This led me to suspect that the “OpenAI Authentication Node” may be failing due to a certificate issue.

Should my next step be to navigate to https://ABC-llm-coordinator-preprod.ml-5w114b15-555.apps.apps.prod5.abc.com where the LLM server is hosted, export the certificate into a .pem file, and then add the root certificate of my LLM server to KNIME using this method → keytool -import -alias our_root_cert -keystore “C:\Program Files\KNIME\plugins\org.knime.binary.jre.win32.x86_64_1.8.0.252-b09\jre\lib\security\cacerts” -file our-root-cert.pem
(I am taking guidance from another post here: Adding root certificate to KNIME - #3 by kixxalot)
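Before changing the KNIME keystore, the diagnosis can be confirmed with the same Python script, this time passing the exported certificate as the CA bundle. If the request then succeeds, the failure is purely a trust-store problem. The hostname, model name, token, and the our-root-cert.pem path are placeholders from this thread:

```python
import requests

API_URL = "https://abc-llm-coordinator-preprod.ml-5w114b15-555.apps.apps.prod5.abc.com/chat/completions"
CA_BUNDLE = "our-root-cert.pem"  # path to the exported root certificate

def build_request(ca_bundle):
    # requests accepts verify=<path-to-pem> to validate the server
    # against a specific CA file instead of its default bundle
    return {
        "headers": {"Authorization": "Bearer xx",
                    "Content-Type": "application/json"},
        "json": {"model": "Meta-Llama-3.1-70B",
                 "messages": [{"role": "user", "content": "Hello!"}]},
        "verify": ca_bundle,
        "timeout": 30,
    }

try:
    response = requests.post(API_URL, **build_request(CA_BUNDLE))
    # 200 here means the certificate trust chain was the only problem
    print(response.status_code, response.text[:200])
except (requests.exceptions.RequestException, OSError) as exc:
    print(f"Error: {exc}")
```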

I'd like some advice on whether I am on the right track in resolving this issue. Are there other possible issues I should consider, or other approaches to resolving it?

A note on my runtime environment: I am using the free KNIME version on a laptop, trying to have a KNIME node connect to an internal LLM server within my company.

I think you may be on to something with cert/proxy settings - really sorry, but I have absolutely no clue about these topics… hope someone else can pick this up :slight_smile:

Does anyone have advice on how to disable SSL verification on the OpenAI Authenticator node? I think this is what is causing the error.

I see there is a “verify settings” checkbox on the OpenAI Authenticator node, but that is to “verify the settings by calling the list models endpoint”. I don't think this disables SSL verification on the OpenAI Authenticator node.

I tried asking ChatGPT for advice on how to disable SSL verification for KNIME; below is the advice I received.

Edit the knime.ini file in your KNIME installation directory (same folder as knime.exe or the knime launcher). Add the following line at the end of the file:

-Dcom.knime.ssl.disable=true

Save and restart KNIME.

I did try this, but I am still receiving the error. I would like to understand technically from KNIME support whether it is possible to disable SSL verification, or to ignore all SSL errors, for the OpenAI Authenticator node.
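In the meantime, one way to confirm that the certificate is the *only* problem is to rerun the earlier Python script with verification disabled. This is a diagnostic only (it removes man-in-the-middle protection, so never use it for real traffic) and does not change anything in KNIME; hostname, token, and model name are placeholders:

```python
import requests
import urllib3

# Diagnostic only: verify=False skips certificate validation entirely.
# It is unsafe for production use and serves only to confirm whether the
# SSL chain is the sole cause of the failure.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

API_URL = "https://abc-llm-coordinator-preprod.ml-5w114b15-555.apps.apps.prod5.abc.com/chat/completions"

def diagnostic_kwargs():
    return {
        "headers": {"Authorization": "Bearer xx"},
        "json": {"model": "Meta-Llama-3.1-70B",
                 "messages": [{"role": "user", "content": "Hello!"}]},
        "verify": False,  # skip TLS verification (diagnostic only)
        "timeout": 30,
    }

try:
    response = requests.post(API_URL, **diagnostic_kwargs())
    # If this now returns 200, the certificate chain is the only issue
    print(response.status_code)
except requests.exceptions.RequestException as exc:
    print(f"Still failing, so SSL is probably not the only issue: {exc}")
```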

Hello @bluecar007,

the OpenAI Authenticator does not support disabling SSL.
Disabling SSL is also generally not advised because it opens you up to man-in-the-middle attacks.
If you know that you can trust the certificate of your self-hosted AI, then adding it to your trusted certificates is the right way to go.

It might also be the case that your laptop already trusts the certificate and it’s only KNIME that doesn’t trust it because it brings its own set of certificates.
You can test this by using another program, e.g. curl, to query the LLM; if that succeeds, you can configure KNIME to use your system certificates.
This can be done by adding the following lines to your knime.ini:

-Djavax.net.ssl.trustStore=NONE
-Djavax.net.ssl.trustStoreType=Windows-ROOT
-Dknime.python.cacerts=AP

The first two lines tell KNIME to use your system certificates and the last line configures all Python-based nodes (including the AI Extension) to use the certificates used by the AP.

I hope these steps can help you to get it to work.

Best regards,
Adrian

