This might be a nudge to @ScottF, but I previously reported issues with Send to Tableau Server timing out.
The issue is still happening, perhaaps more frequently, and my data size hasn’t grown much since the original post (22M rows then, about 25M rows now).
I have tried partitioning the data table ahead of the Send to Tableau Server node: I split the table in half and used two Send to Tableau Server nodes, the first set to overwrite and the second to append, so roughly 12M rows each. Not only did this not solve the issue, but the Partition node added another ~1hr to the workflow even with a simple 50% cutoff. So I would prefer not to have to use partitioning.
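For reference, the split I tried is equivalent to something like this minimal pandas sketch (a toy table stands in for the real ~25M-row data; the actual upload happens in the two Send to Tableau Server nodes, not in code):

```python
import pandas as pd

def split_for_two_sends(df, cutoff=0.5):
    """Split a table at a row-count cutoff: the first part goes to a
    Send to Tableau Server node set to overwrite, the second to one
    set to append."""
    split_at = int(len(df) * cutoff)
    return df.iloc[:split_at], df.iloc[split_at:]

# Toy stand-in table (the real data is ~25M rows).
table = pd.DataFrame({"id": range(10)})
first_half, second_half = split_for_two_sends(table)
```

The two halves together are exactly the original table, so overwrite-then-append reproduces the full data set on the server.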
Would appreciate any pointers on resolving this timeout issue.
Hi @qdmt -
Sorry to hear you are still running into problems. Let me alert one of our developers to see if they can help.
Thanks very much! I’d welcome any help they can provide.
Hi @qdmt -
One of our devs looked at your logs and wasn’t able to find anything definitive. He hypothesized there might be a couple of issues at play:
- Expiration of your Tableau access token, whose lifetime is hardcoded in the node (AP-19366)
- A too-small chunk size when sending data, which is also hardcoded (AP-19783)
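To give a sense of why the chunk size matters (a hypothetical Python sketch, not the node’s actual implementation): sending a table means looping over fixed-size row slices, so a small chunk size on a large table turns into many network round trips, each one an opportunity for a connection timeout.

```python
import pandas as pd

def iter_chunks(df, chunk_size):
    # Yield consecutive row slices of at most chunk_size rows each.
    for start in range(0, len(df), chunk_size):
        yield df.iloc[start:start + chunk_size]

# With a small hardcoded chunk size, a 25M-row table becomes a very
# long series of requests; a toy 25-row table with chunk_size=10
# already needs three separate sends.
df = pd.DataFrame({"x": range(25)})
chunks = list(iter_chunks(df, 10))
```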
We have tickets in place to address these issues, and I’ve added +1s from you on both. Sorry I don’t have better news for you in the short term.
Thanks @ScottF. Appreciate the update. In case a fresh log is helpful, I ran it last night, and the node failed at some point with:
ERROR Send to Tableau Server 3:2082 Execute failed: java.net.ConnectException: Connection timed out: no further information
I ran it again just now and it went through successfully (it works roughly 50% of the time).
Other potentially relevant notes:
- On #1, I’m not sure if this refers to the token expiry that’s set on Tableau’s side. If it does: I recently created a new instance and had to refresh my tokens, but the issue persisted even with a newly created token.
- The node typically fails somewhere between 90% and 100% progress, i.e., after the rows have been processed.
- Could this issue also be causing the ‘hang’ I’m experiencing with Task Scheduler? Though I’d expect the batch process to end when a node fails (currently it times out with code 0x41306).