Issues with Parquet & Tableau Writer Nodes in 4.5.0

Hello!

I’m currently writing a table from a database to a directory on a shared network drive that will ultimately be ingested into Tableau.

The CSV and Excel Writers both work perfectly and are able to write files to the folder in question.

DB Nodes → DB Reader → CSV Writer (or Excel Writer)
Writer Options

  • Local File System
  • Overwrite if Exists

Path is \\aaa\bbb\ccc & ccc\eee eee eee\fff ff\file.csv or file.xlsx

There are spaces and an ampersand in the directory names; nothing I can do about that.
I assume path length is not an issue, as the CSV and Excel Writers are both functioning.

The Tableau Writer throws the following error when writing to the same directory (.hyper extension).
I select the directory manually using the browse interface.

ERROR Tableau Writer 3:17 Execute failed: The database "hyper.file:\ *** DIRECTORY PATH *\ " could not be created. Hyper was unable to resolve the database file path "\ ** DIRECTORY PATH ": Directory does not exist
Context: 0x5fdfad59

Again, the directory does exist; the node can see it when I select it manually in the dialog box.

The Parquet Writer seems to have slightly different issues when writing to the same directory. (.parquet extension)

  • Inside the node, the full path in the “\\network domain name\folder\folder” format is present, selected using the browse function with either the Local File System or Mountpoint option.
  • In the error log, several portions of the path are ignored and changed to what you see in the following

ERROR Parquet Writer 3:15 Execute failed: C:*** folder structure***

This network drive is not C:…

There are no flow variables in play in any of these.

They are all DB Reader → Writer Node chains.
Nothing else is in between the DB Reader and the Writer nodes.

While Parquet is new to me, the Tableau Writer worked in 4.4.x and other versions.

Workflows that used to work no longer do, and they now throw the same error.

Thanks,

Nathan

Thank you for reporting this, Nathan. Could you share your workflow and real or fake data that reproduces this error? Screenshots are a second resort, but only if sharing the workflow is prohibited by your company policy. Here is an example of how to share screenshots if you must.

@BenjaminMoser , I wonder if this is connected to the issue in the forum post I have linked as a reference above for Nathan.

Hi Nathan, welcome to the forum. Thanks for bringing up this issue.

@victor_palacios, this could be related indeed. This is currently on my list for investigation. I’ll let you know once I have more information.

Hi Victor (and @BenjaminMoser, thanks for your feedback), here is some more detail from the log file.

I cannot share the workflow file, but you will burst out laughing when you see the workflow picture (it could not be simpler). This is not the same data I was using in my original work, but to make your bug hunt easier, I replicated the errors using the Kaggle Titanic dataset (train.csv).

If there were a way for me to share non-publicly (perhaps a quick Zoom call?), that would probably be preferable; if so, please contact me directly.

What I notice when I look at the log files:

  • Parquet Writer: the file directory in the log changes based on the Local/Mountpoint setting, even if the text in the dialog box does not
  • Both are unable to resolve the directory path
  • CSV / Excel Writer have no issues and are writing to the exact same directory

The network paths are partially redacted in the logs, but the drive is not C:.
Sadly, our network directories have spaces and & in them.
This decision predates me.

Log file for Tableau Writer

2021-12-22 10:40:23,654 : ERROR : KNIME-Worker-4-Tableau Writer 0:2 :  : Node : Tableau Writer : 0:2 : Execute failed: The database "hyper.file:\\***********************\KNIME%20Dev\Data\Titanic\tableau_writer.hyper" could not be created. Hyper was unable to resolve the database file path "\\*****************\KNIME%20Dev\Data\Titanic": Directory does not exist
Context: 0x5fdfad59
com.tableau.hyperapi.HyperException: The database "hyper.file:\\************************\KNIME%20Dev\Data\Titanic\tableau_writer.hyper" could not be created. Hyper was unable to resolve the database file path "\\***********************************\KNIME%20Dev\Data\Titanic": Directory does not exist
Context: 0x5fdfad59
	at com.tableau.hyperapi.Connection.<init>(Connection.java:177)
	at com.tableau.hyperapi.Connection.<init>(Connection.java:78)
	at org.knime.ext.tableau.hyperapi.TableauHyperAPIProcess$HyperAPIConnection.<init>(TableauHyperAPIProcess.java:210)
	at org.knime.ext.tableau.hyperapi.TableauHyperAPIProcess.createConnection(TableauHyperAPIProcess.java:125)
	at org.knime.ext.tableau.hyperapi.TableauHyperWriter.<init>(TableauHyperWriter.java:90)
	at org.knime.ext.tableau.hyperapi.write.TableauHyperWriterNodeModel2.execute(TableauHyperWriterNodeModel2.java:133)
	at org.knime.core.node.NodeModel.execute(NodeModel.java:758)
	at org.knime.core.node.NodeModel.executeModel(NodeModel.java:549)
	at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1267)
	at org.knime.core.node.Node.execute(Node.java:1041)
	at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:559)
	at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:95)
	at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:201)
	at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:117)
	at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:365)
	at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:219)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
	at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)

Log file for the Parquet Writer when the configuration is set to
Mountpoint / LOCAL

2021-12-22 10:40:21,077 : ERROR : KNIME-Worker-3-Parquet Writer 0:3 :  : Node : Parquet Writer : 0:3 : Execute failed: /\\**********************************\KNIME Dev\Data\Titanic\parquet_writer.parquet
java.nio.file.NoSuchFileException: /\\**********************************\KNIME Dev\Data\Titanic\parquet_writer.parquet
	at org.knime.filehandling.core.fs.knime.local.workflowaware.LocalWorkflowAwareFileSystem.toLocalPathWithAccessibilityCheck(LocalWorkflowAwareFileSystem.java:311)
	at org.knime.filehandling.core.fs.knime.local.workflowaware.LocalWorkflowAwareFileSystem.getEntity(LocalWorkflowAwareFileSystem.java:205)
	at org.knime.filehandling.core.fs.knime.local.workflowaware.LocalWorkflowAwareFileSystemProvider.getEntity(LocalWorkflowAwareFileSystemProvider.java:121)
	at org.knime.filehandling.core.fs.knime.local.workflowaware.LocalWorkflowAwareFileSystemProvider.checkSupport(LocalWorkflowAwareFileSystemProvider.java:112)
	at org.knime.filehandling.core.fs.knime.local.workflowaware.LocalWorkflowAwareFileSystemProvider.newOutputStreamInternal(LocalWorkflowAwareFileSystemProvider.java:126)
	at org.knime.filehandling.core.fs.knime.local.workflowaware.LocalWorkflowAwareFileSystemProvider.newOutputStreamInternal(LocalWorkflowAwareFileSystemProvider.java:1)
	at org.knime.filehandling.core.connections.base.BaseFileSystemProvider.newOutputStream(BaseFileSystemProvider.java:500)
	at java.base/java.nio.file.Files.newOutputStream(Unknown Source)
	at org.knime.bigdata.hadoop.filesystem.NioFileSystem.create(NioFileSystem.java:151)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:891)
	at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:329)
	at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:292)
	at org.apache.parquet.hadoop.ParquetWriter$Builder.build(ParquetWriter.java:658)
	at org.knime.bigdata.fileformats.parquet.ParquetFileFormatWriter.<init>(ParquetFileFormatWriter.java:112)
	at org.knime.bigdata.fileformats.parquet.ParquetFormatFactory.getWriter(ParquetFormatFactory.java:172)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.createWriter(FileFormatWriter2NodeModel.java:289)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.writeToFile(FileFormatWriter2NodeModel.java:257)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.write(FileFormatWriter2NodeModel.java:214)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.execute(FileFormatWriter2NodeModel.java:185)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.execute(FileFormatWriter2NodeModel.java:1)
	at org.knime.core.node.NodeModel.executeModel(NodeModel.java:549)
	at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1267)
	at org.knime.core.node.Node.execute(Node.java:1041)
	at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:559)
	at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:95)
	at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:201)
	at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:117)
	at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:365)
	at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:219)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
	at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)

Log file extract when the Parquet Writer is set to Local File System

2021-12-22 10:47:08,087 : ERROR : KNIME-Worker-11-Parquet Writer 0:3 :  : Node : Parquet Writer : 0:3 : Execute failed: C:\**************************\KNIME Dev\Data\Titanic\parquet_writer.parquet
java.nio.file.NoSuchFileException: C:\*************************\KNIME Dev\Data\Titanic\parquet_writer.parquet
	at java.base/sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
	at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
	at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
	at java.base/sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
	at org.knime.filehandling.core.fs.local.fs.LocalFileSystemProvider.newByteChannel(LocalFileSystemProvider.java:148)
	at java.base/java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
	at java.base/java.nio.file.Files.newOutputStream(Unknown Source)
	at org.knime.bigdata.hadoop.filesystem.NioFileSystem.create(NioFileSystem.java:151)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:891)
	at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:329)
	at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:292)
	at org.apache.parquet.hadoop.ParquetWriter$Builder.build(ParquetWriter.java:658)
	at org.knime.bigdata.fileformats.parquet.ParquetFileFormatWriter.<init>(ParquetFileFormatWriter.java:112)
	at org.knime.bigdata.fileformats.parquet.ParquetFormatFactory.getWriter(ParquetFormatFactory.java:172)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.createWriter(FileFormatWriter2NodeModel.java:289)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.writeToFile(FileFormatWriter2NodeModel.java:257)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.write(FileFormatWriter2NodeModel.java:214)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.execute(FileFormatWriter2NodeModel.java:185)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.execute(FileFormatWriter2NodeModel.java:1)
	at org.knime.core.node.NodeModel.executeModel(NodeModel.java:549)
	at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1267)
	at org.knime.core.node.Node.execute(Node.java:1041)
	at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:559)
	at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:95)
	at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:201)
	at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:117)
	at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:365)
	at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:219)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
	at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)

Image of workflow also attached.

Nathan

@NKlassen, out of curiosity: have you tried using the new Path variables? You could construct the filename in a Java node, convert it to a variable that the Parquet Writer might use, and see if that makes any difference:

And can you tell us what kind of file system it is you are writing to - what kind of shared directory it is?

Hello @mlauber71

Thanks for the suggestion.
I have not yet tried that, but this worked in 4.4 without a workaround.
I’ll give it a shot in the interim.

I do not have much insight into the architectural backend of the organization I work for.
So I’m not sure what I can provide that would be helpful.
It’s an NTFS shared drive that, to the best of my knowledge, is accessed mostly by Windows users.
It is accessible to me as either \\Domain Name\Myriad\of\Directories
or
Drive_Letter:\Myriad\of\Directories
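As a side note, the difference between those two forms of the same share can be illustrated with Python's pathlib (names here are made up, not the real share). A UNC path's "drive" is the \\host\share pair, so code that drops or mangles that prefix and resolves the remainder against the default drive would end up under C:\, which would be consistent with the Parquet error above — a hypothesis, not a confirmed diagnosis:

```python
from pathlib import PureWindowsPath

# The same share, addressed two ways (illustrative names only):
unc = PureWindowsPath(r"\\DomainName\Myriad\of\Directories\file.parquet")
drv = PureWindowsPath(r"Z:\Myriad\of\Directories\file.parquet")

# For a UNC path, the "drive" is the \\host\share pair;
# for a mapped drive, it is the drive letter.
print(unc.drive)  # \\DomainName\Myriad
print(drv.drive)  # Z:

# Code that strips the UNC prefix and keeps only the relative tail
# has nothing left to anchor the path to but the default drive.
```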

@victor_palacios @BenjaminMoser

I did try to replicate this locally at home over Christmas on my personal computer on a local drive using the same version (4.5) of KNIME.

I tried a variety of combinations of nested directories and directory names with spaces and symbols in them on my Windows 10 desktop. I didn’t encounter any issues on my local drive.

So this may be a network folder specific issue.

Nathan

Hi all,

My intuition tells me this is due to how these nodes handle spaces in UNC paths (i.e. on shared drives).
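One hint in that direction is the %20 visible in the Hyper error message above: that is a URI-style percent-encoded space, which suggests the path is being encoded somewhere on its way into the Hyper API and then used literally. A minimal Python sketch of that effect (the path is made up, purely to illustrate):

```python
from urllib.parse import quote, unquote

# Illustrative path only; not the real share name.
unc = r"\\host\share\KNIME Dev\Data\Titanic"

# Percent-encoding turns the space into %20 (keep backslashes literal here).
encoded = quote(unc, safe="\\:")
print(encoded)  # \\host\share\KNIME%20Dev\Data\Titanic

# A consumer that treats the encoded string as a literal file path will
# look for a directory named "KNIME%20Dev", which does not exist.
# Decoding restores the real path:
assert unquote(encoded) == unc
```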

This is on my list to look into. I think I can do this by the end of the week. Chances are good that the next release will contain a fix for this issue, but I can’t make any promises right now.

I’ll let you know once I have further details.

Thanks for all the support. @NKlassen, thanks for offering a call to elaborate. I don’t think it is necessary right now but I’ll get back to you if I have a need.

Best,
Benjamin

My pleasure!

Just curious: was there a change in UNC-related functionality between 4.4 and 4.5 that may have caused this?

Please let me know if I can provide any other support!

Take care!

@NKlassen Not to UNC paths per se (as far as I’m aware), but we did make changes to how special characters in paths are handled.

Hi,

I had a look at this and a fix is currently awaiting review (AP-18190 for internal reference).

I have not really had the chance to think much about a workaround for the interim. Here is one thing that worked for my simple test case; however, you may run into authentication issues or other problems.

The idea is to create a symbolic link to the problematic folder such that the link name does not contain spaces.

In the Windows command prompt (you may need admin privileges), you can create a symbolic directory link in the current working directory with mklink /d link-target-name "\\host\path\files to read".

This creates a new link called link-target-name that points at the shared directory. You can then set the node to read from this path, e.g. C:\Users\Knime\Desktop\files-to-read-link instead of \\host\path\files to read.
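The same idea can be sketched cross-platform in Python, using a throwaway temp directory to stand in for the real share (names are illustrative only; on Windows, creating symlinks may require elevated privileges):

```python
import os
import pathlib
import tempfile

base = pathlib.Path(tempfile.mkdtemp())

# Stand-in for the problematic directory with spaces in its name
target = base / "files to read"
target.mkdir()
(target / "data.csv").write_text("a,b\n1,2\n")

# Space-free alias pointing at the same directory
link = base / "files-to-read-link"
os.symlink(target, link, target_is_directory=True)

# Reading through the link reaches the same file as the original path
print((link / "data.csv").read_text())
```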

Best Regards,
Ben

Thanks @BenjaminMoser for the update!

I’ll give this work around a test in the near future and let you know if it works :slight_smile:

Thanks for all your help!

Nathan

Hi @NKlassen,

the Parquet nodes do not support UNC paths, but there is a new SMB Connector node for connecting to Windows network shares. Does the new connector node fix the problem with Parquet?

Cheers,
Sascha

Hi all,

a potential workaround that is probably safer and easier is to map the share (i.e., the UNC path) to a local drive. In the node configuration, you should then be able to select the target file/directory on the newly mapped drive just as if it were your good old C:\ drive.

Cheers,
Ben

Thank you!

I was also finally able to find a ‘traditional’ file path (not a UNC path) that worked for the node in the interim as well.

Still looking forward to a future fix so I can implement the same way I do for other writer nodes :slight_smile:

Thank you for everyone’s help,

Nathan

Hi @sascha.wolke,

I’ll give this a try too and report back - thanks.

Nathan

@NKlassen I just realised you had already mentioned in an earlier post that you can access the directory via a drive letter. That’s exactly what I had in mind when proposing another possible workaround yesterday.

In any case, we have a bugfix release coming up next week and chances are good that it will contain a change that improves the handling of UNC paths with special characters such as spaces. This will potentially fix your issue with the Tableau Writer.

Cheers,
Ben

Hi,

a few days ago we released 4.5.1, which should fix the regressions with the Tableau Writer. You can download the new version here.

Cheers,
Ben

Hi @BenjaminMoser ,

Did some testing using 4.5.1 - same workflow as before.

The Tableau Writer works, even with a UNC path! :slight_smile:
Unfortunately, the Parquet Writer does not work with a UNC path. :frowning:
The Parquet Writer still requires the specific drive letter and associated path.

The node itself has the following directory configured and is set to Local File System.

\\DOMAIN\Area\Silly Directory Name with & and Spaces\Another Silly Directory with Spaces\KNIME Dev\Data\Titanic\test.parquet

The error message shows the following, but with C:…, which is not the appropriate drive letter.

ERROR Parquet Writer       4:9        Execute failed: C:\Area\Silly Directory Name with & and Spaces\Another Silly Directory with Spaces\KNIME Dev\Projects\Bug Reporting\test.parquet

The error message is different when the node is set to Mountpoint / LOCAL

ERROR Parquet Writer       4:6        Execute failed:  /\\DOMAIN\Area\Silly Directory Name with & and Spaces\Another Silly Directory with Spaces\KNIME Dev\Data\Titanic\parquet_writer_mtpoint.parquet

Please let me know how/if I can help.

Nathan

Forgot to post the logs; apologies, @BenjaminMoser.

Local File System Setting for Parquet Node

2022-01-25 11:49:24,074 : ERROR : KNIME-Worker-26-Parquet Writer 4:9 :  : Node : Parquet Writer : 4:9 : Execute failed: C:\Area\Silly Directory with & and Spaces\Another Directory with Spaces\KNIME Dev\Projects\Bug Reporting\test.parquet
java.nio.file.NoSuchFileException: C:\Area\Silly Directory with & and Spaces\Another Directory with Spaces\KNIME Dev\Projects\Bug Reporting\test.parquet
	at java.base/sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
	at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
	at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
	at java.base/sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
	at org.knime.filehandling.core.fs.local.fs.LocalFileSystemProvider.newByteChannel(LocalFileSystemProvider.java:148)
	at java.base/java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
	at java.base/java.nio.file.Files.newOutputStream(Unknown Source)
	at org.knime.bigdata.hadoop.filesystem.NioFileSystem.create(NioFileSystem.java:146)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:891)
	at org.apache.parquet.hadoop.util.HadoopOutputFile.createOrOverwrite(HadoopOutputFile.java:81)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:327)
	at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:292)
	at org.apache.parquet.hadoop.ParquetWriter$Builder.build(ParquetWriter.java:658)
	at org.knime.bigdata.fileformats.parquet.ParquetFileFormatWriter.<init>(ParquetFileFormatWriter.java:112)
	at org.knime.bigdata.fileformats.parquet.ParquetFormatFactory.getWriter(ParquetFormatFactory.java:172)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.createWriter(FileFormatWriter2NodeModel.java:289)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.writeToFile(FileFormatWriter2NodeModel.java:257)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.write(FileFormatWriter2NodeModel.java:214)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.execute(FileFormatWriter2NodeModel.java:185)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.execute(FileFormatWriter2NodeModel.java:1)
	at org.knime.core.node.NodeModel.executeModel(NodeModel.java:549)
	at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1267)
	at org.knime.core.node.Node.execute(Node.java:1041)
	at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:559)
	at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:95)
	at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:201)
	at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:117)
	at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:367)
	at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:221)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
	at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)

Mountpoint / Local Settings for Parquet Node

2022-01-25 11:51:36,378 : ERROR : KNIME-Worker-27-Parquet Writer 4:6 :  : Node : Parquet Writer : 4:6 : Execute failed: /\\DOMAIN\Area\Silly Directory with & and Spaces\Another Directory with Spaces\KNIME Dev\Data\Titanic\parquet_writer_mtpoint.parquet
java.nio.file.NoSuchFileException: /\\DOMAIN\Area\Silly Directory with & and Spaces\Another Directory with Spaces\KNIME Dev\Data\Titanic\parquet_writer_mtpoint.parquet
	at org.knime.filehandling.core.fs.knime.local.workflowaware.LocalWorkflowAwareFileSystem.toLocalPathWithAccessibilityCheck(LocalWorkflowAwareFileSystem.java:311)
	at org.knime.filehandling.core.fs.knime.local.workflowaware.LocalWorkflowAwareFileSystem.getEntity(LocalWorkflowAwareFileSystem.java:205)
	at org.knime.filehandling.core.fs.knime.local.workflowaware.LocalWorkflowAwareFileSystemProvider.getEntity(LocalWorkflowAwareFileSystemProvider.java:121)
	at org.knime.filehandling.core.fs.knime.local.workflowaware.LocalWorkflowAwareFileSystemProvider.checkSupport(LocalWorkflowAwareFileSystemProvider.java:112)
	at org.knime.filehandling.core.fs.knime.local.workflowaware.LocalWorkflowAwareFileSystemProvider.newOutputStreamInternal(LocalWorkflowAwareFileSystemProvider.java:126)
	at org.knime.filehandling.core.fs.knime.local.workflowaware.LocalWorkflowAwareFileSystemProvider.newOutputStreamInternal(LocalWorkflowAwareFileSystemProvider.java:1)
	at org.knime.filehandling.core.connections.base.BaseFileSystemProvider.newOutputStream(BaseFileSystemProvider.java:500)
	at java.base/java.nio.file.Files.newOutputStream(Unknown Source)
	at org.knime.bigdata.hadoop.filesystem.NioFileSystem.create(NioFileSystem.java:146)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:891)
	at org.apache.parquet.hadoop.util.HadoopOutputFile.createOrOverwrite(HadoopOutputFile.java:81)
	at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:327)
	at org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:292)
	at org.apache.parquet.hadoop.ParquetWriter$Builder.build(ParquetWriter.java:658)
	at org.knime.bigdata.fileformats.parquet.ParquetFileFormatWriter.<init>(ParquetFileFormatWriter.java:112)
	at org.knime.bigdata.fileformats.parquet.ParquetFormatFactory.getWriter(ParquetFormatFactory.java:172)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.createWriter(FileFormatWriter2NodeModel.java:289)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.writeToFile(FileFormatWriter2NodeModel.java:257)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.write(FileFormatWriter2NodeModel.java:214)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.execute(FileFormatWriter2NodeModel.java:185)
	at org.knime.bigdata.fileformats.node.writer2.FileFormatWriter2NodeModel.execute(FileFormatWriter2NodeModel.java:1)
	at org.knime.core.node.NodeModel.executeModel(NodeModel.java:549)
	at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1267)
	at org.knime.core.node.Node.execute(Node.java:1041)
	at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:559)
	at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:95)
	at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:201)
	at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:117)
	at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:367)
	at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:221)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
	at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
	at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)

Hi @NKlassen,

thanks again for the detailed feedback.

As indicated above, the Parquet nodes do not officially support UNC paths. For the time being, you can look into Sascha’s suggestion to use the SMB Connector node or try one of the workarounds I suggested.

Cheers,
Ben