5.4.4 Snowflake Connector : Execute failed: Invalid state: Connection pool shut down. A potential cause is closing of a connection when a query is still running.

Hi,
We updated executors on our HUB instance to 5.4.4.

One user has 2 flows that each run every 30 min.

After the update, the user got this error twice in one day.

Execute failed: Invalid state: Connection pool shut down. A potential cause is closing of a connection when a query is still running.

The same error happened to me. Re-running the flow completed without error.

What the user and I may have in common is that we both run multiple flows that use the same KNIME HUB Microsoft OAuth2 secrets and the same connection to Snowflake.

The user has 2-4 flows that are scheduled close together in time, i.e. every 30 min.
While I have 1 flow that orchestrates execution of 6 other flows in parallel/sequence as needed.

2 additional users have reported issues.

On the question:
Currently, the observation is that it happens sporadically. Is that the same observation you have? Or does it happen every time?

Eric reports:
It happens every time from a scheduled workflow. I can get it to run properly when I manually kick it off.

Joe reports:
I have been getting the connection pool error about 30% of the time on my workflows. What seemed to work for me was configuring a new Snowflake Connector and checking the "use latest version" box, instead of using the connector in the Snowflake CDP Default component. After the change, I had one workflow scheduled with the new connector and another scheduled with the old connector. So far the workflow with the new connector has succeeded 50 times in a row. The workflow that I have not changed continues to fail sporadically.

Driver: snowflake_cdp_default is delivered via a Customization Profile, and was delivered before the KNIME 5.4.4 update.

snowflake-jdbc-3.23.1.jar with a size of 78.2 MB (82,092,164 bytes) and SHA256 hash: 56FA6E900472092D8C562AB02E4B44E1A52AAFB883CE9F40D8425BC49E685EEF

Downloaded from:

In AppData\Local\Programs\KNIME\plugins\org.knime.snowflake_5.4.4.v202504301444.jar,
snowflake-jdbc-3.23.1.jar: 78.2 MB (82,092,164 bytes) with SHA256 hash: 56FA6E900472092D8C562AB02E4B44E1A52AAFB883CE9F40D8425BC49E685EEF

So, the two jars are identical.
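For anyone who wants to repeat the comparison, a minimal sketch of a hash check (the file paths are whatever your downloaded and bundled copies are; nothing here is KNIME-specific):

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Stream the file in 1 MiB chunks so the ~78 MB jar is never fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest().upper()

if __name__ == "__main__" and len(sys.argv) == 3:
    # e.g. python compare_jars.py downloaded/snowflake-jdbc-3.23.1.jar bundled/snowflake-jdbc-3.23.1.jar
    a, b = sha256_of(sys.argv[1]), sha256_of(sys.argv[2])
    print("MATCH" if a == b else "MISMATCH", a, b)
```

If both hashes print as 56FA6E90… the driver bundled in the plugin and the downloaded one are byte-identical, which rules the jar itself out as the cause.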

Eric's issue turned out to be that he had scheduled a workflow through the Classic KNIME Desktop interface, thereby creating a schedule in the "TEAM" context.

This broke both Kerberos execution against a HANA database and retrieval of a personal OAuth2 secret for the Snowflake connection.

So, we are back to 3 users reporting sporadic failure.

Ooohhhh sheize.

Just found a job that a user had scheduled via Desktop, so it was created in the TEAM context.
That job failed to find a SharePoint OAuth2 secret in the personal secret store.

It was set to run every 2 minutes, with 4 retries per run, and with no notification mail to the user.
It may have consumed a LOT of pool resources.

Snowflake Sporadic failure

ISSUE RESOLVED (Awaiting confirmation)

A user had issues with scheduling workflows that did not work as expected. Jobs were failing with strange errors.

During troubleshooting, the user had set the 2 jobs to run every 2 minutes.

One job connects to SharePoint and one job connects to Snowflake. The Snowflake job was set to retry 3 extra times on failure. The user had not set a notification mail about failures, and apparently forgot about the jobs.

The jobs had been running the whole day, failing every 2 minutes, and apparently consumed the pool of connections to Snowflake.
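This is not KNIME's actual pooling code, but the leak pattern is easy to sketch with a hypothetical fixed-size pool: a job that borrows a connection and then fails never hands it back, so a retrying scheduler drains the pool one attempt at a time:

```python
import queue

POOL_SIZE = 4                      # hypothetical pool size, not KNIME's actual setting
pool = queue.Queue()
for i in range(POOL_SIZE):
    pool.put(f"conn-{i}")          # stand-ins for pooled JDBC connections

def scheduled_job():
    """Borrows a connection, then fails before it can return the connection."""
    conn = pool.get(timeout=1)
    raise RuntimeError("OAuth2 secret not found in personal secret store")

# The scheduler re-runs the failing job every 2 minutes; each attempt leaks one connection.
for attempt in range(POOL_SIZE):
    try:
        scheduled_job()
    except RuntimeError:
        pass                       # no notification mail, so nobody notices

# A healthy workflow now finds the pool drained.
try:
    pool.get(timeout=1)
except queue.Empty:
    print("pool exhausted: no connections left for other workflows")
```

The point of the sketch is that the run-away job does not have to touch the other users' workflows at all; it only has to keep borrowing from the shared pool faster than connections are returned.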

The issue the user was facing was the new "Default TEAM context".

After the 2 jobs were disabled by the KNIME HUB admin, the sporadic failure could no longer be triggered.

This chain of events seems most plausible when compared against the timeline of the KNIME HUB upgrade and the user's actions.

Conclusion: The Snowflake connection pool was consumed by run-away scheduled jobs. Those run-away jobs had been created while debugging the unfamiliar new TEAM-context feature.
