Any idea why a DB Query Reader just keeps running?

Using the Snowflake connector - we’ve got a DB Query Reader which runs OK when I am in the office. When I run it from home - it just keeps running!

It seems to connect OK - just does not return the ‘green traffic light’

Any ideas why it would operate differently based on my location?


Hello @bradley_peter ,
Do you connect to a VPN and then run it, or are you just connected to the local network?

Thanks,
Sanket


Hello - thanks for the reply - yes originally via VPN - which I thought might be the issue. I turned it off and it still made no difference. Node just runs. In the office it is fine.

If you can, try converting your query to a stored procedure. That runs on the server and returns just the result set.

Hello,

Initial thoughts are that it seems, to me, like it would be something network-related, since it operates as expected in-office but doesn’t when you’re at home. The strange thing is that it is getting 2510 rows so far, so it’s doing something - I would honestly have expected either full results or no results. So you’ve got several oddities going on. And the fact that VPN doesn’t matter is weird, too.

To really dig into this, I’d want to know what version of AP is being used, what version of the Snowflake extensions/JDBC drivers are being used, and what the query is. I’d also go into AP and enable DEBUG logging, and then rerun the workflow to try to see what the DEBUG logs are reporting is going on in the blow-by-blow, to see if there’s timeout warnings, or exceptions, or anything like that.

Another thought: if you use a DB Table Reader node after the DB Query node, the result should help point towards whether this is a db/driver issue or a network issue.

Regards,
Nickolaus


Fixed it - I lowered the fetch size on the connector node and it worked. Thanks for the help, Nick - it's always appreciated.
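For anyone unfamiliar with the setting: the fetch size on a JDBC connector controls how many rows the driver pulls from the server per round trip (JDBC's `Statement.setFetchSize`). A rough DB-API analogue in Python - using sqlite3 purely as a stand-in, since there is no Snowflake involved here - shows the same batched-fetch idea via `cursor.arraysize` and `fetchmany`:

```python
import sqlite3

# Stand-in database; in the real setup this would be the Snowflake JDBC
# connection managed by the KNIME connector node.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(2510)])

# arraysize plays the role of the connector's "fetch size": rows are
# pulled in batches of this many per fetchmany() call.
cur.arraysize = 100

cur.execute("SELECT id FROM t")
rows = []
while True:
    batch = cur.fetchmany()  # fetches cur.arraysize rows at a time
    if not batch:
        break
    rows.extend(batch)

print(len(rows))  # all 2510 rows arrive, 100 per batch
```

Lowering the fetch size trades more round trips for smaller individual transfers, which is why it is a common first knob to turn on a flaky network.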


I spoke too soon.

I was doing a number of things at the same time and forgot I had added a LIMIT 10 to the bottom of the SQL statement. With that in place, the node runs the query and goes to the green traffic light.

If I increase this to 200, it is fine. At 500 it just keeps running.

So there is obviously nothing wrong with the driver or the query - just very odd behaviour depending on the amount of data being returned.

I tried 3 different drivers, 3.12 through to 3.14, with the fetch size left at 100, and none of them returned the data if the LIMIT was set to 500.

I tried to find the 'cut-off point': LIMIT 475 worked and LIMIT 476 didn't. Anything over 475 failed to return a green light on the node.
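The manual hunt for that cut-off can be automated as a simple bisection over the LIMIT value. A minimal sketch, where `run_query` is a hypothetical stand-in for executing the node with a given LIMIT and reporting success (green light) or a hang/timeout:

```python
def find_cutoff(run_query, lo=1, hi=1000):
    """Largest LIMIT for which run_query(limit) still succeeds.

    run_query is a hypothetical callable returning True on success and
    False on a hang/timeout; lo is assumed to succeed and hi to fail.
    """
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if run_query(mid):
            lo = mid  # mid still works, search higher
        else:
            hi = mid  # mid hangs, search lower
    return lo

# Simulate the behaviour reported above: LIMIT 475 works, 476 hangs.
cutoff = find_cutoff(lambda limit: limit <= 475)
print(cutoff)  # 475
```

About ten probes instead of dozens of manual runs; the exact boundary is often a useful clue, since a hard threshold on result size points at a size-dependent mechanism rather than a flaky query.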

I also set up the connection in DBeaver and it connects and runs the SQL fine. So there must be a setting I am missing in KNIME.

Hi @bradley_peter, I wonder if it may be related to this in some way, as there is an element to this that appears to be network related:

“JDBC driver internal error: Timeout waiting for the download of #chunk0 when running JDBC application and fetching large data from Snowflake”
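That error fits the observed cut-off nicely: Snowflake returns small result sets inline in the JDBC response, but spills larger ones to cloud-storage "chunks" that the driver downloads from separate endpoints. If a home router or firewall blocks those endpoints, small LIMITs succeed while anything above the inline threshold hangs. A quick way to probe this from the affected network is a plain TCP reachability check; the host names below are purely illustrative (the real per-account list comes from Snowflake, e.g. the SYSTEM$WHITELIST() output referenced in their docs):

```python
import socket

def reachable(host, port=443, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative endpoints only - substitute the hosts from your own
# account's allowlist output.
endpoints = [
    "myaccount.snowflakecomputing.com",  # hypothetical account URL
    "sfc-stage.s3.amazonaws.com",        # hypothetical chunk/stage host
]
for host in endpoints:
    print(host, "OK" if reachable(host) else "BLOCKED")
```

Running the same check from the office and from home and comparing the output would show immediately whether a chunk-download endpoint is unreachable from home.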

This article on stackoverflow also appears to be related to your problem and points back at that snowflake “endpoints not whitelisted” issue:

Of particular interest is the note added at the bottom of that StackOverflow post.
