I've just tried your steps on one of my Macs with macOS 10.13 + KNIME 3.4.1 and cannot confirm these issues. The workflow loads just fine, and the data.zip file can be extracted without any problems.
Sorry, I cannot give any input on the actual issue, but macOS 10.13 alone is probably not the culprit.
One thing to mention: I see the very same behavior that I described in the PDF on one iMac and on one MacBook Pro. Both are now on High Sierra, so I don't believe it's something like a bad setting I could have made somewhere.
I upgraded to High Sierra, then found the problem. Then I fully reinstalled KNIME 3.4.1; the problem was still there. Then I upgraded the MacBook Pro, and it's the same.
It has nothing to do with the XLS node; it can go wrong with any node, such as a text file reader.
To illustrate, create a new workflow reading one text file with 40 rows and one with 30,000 rows, plus one XLS file with 40 rows and one with 30,000 rows.
Run it.
Then save it.
Reopen it and check what data was saved for each node. Only the nodes that processed small amounts of data will show what they recorded into the .zip; the others will show "Loading port content" in the middle of a white screen.
You have made my day; I would not have understood why this fails for me but works for you :)
This bug makes me believe that KNIME is now very risky on High Sierra, since we have no guarantee that correct data is passed from one node to the next....
Until the KNIME specialists have a look, I'm moving back to the Microsoft world.
I've just sent some detailed info to you, Bernd. To keep this thread complete, here's just the exception:
DEBUG Column Filter 0:2 Execute failed: Exception while accessing file: "/Users/blablabla/macOS_Loading_Test/File Reader (#1)/port_1/data.zip": invalid block type
java.lang.RuntimeException: Exception while accessing file: "/Users/blablabla/macOS_Loading_Test/File Reader (#1)/port_1/data.zip": invalid block type
at org.knime.core.data.container.ContainerTable.ensureBufferOpen(ContainerTable.java:294)
at org.knime.core.data.container.ContainerTable.size(ContainerTable.java:145)
at org.knime.core.node.BufferedDataTable.size(BufferedDataTable.java:383)
at org.knime.core.data.container.RearrangeColumnsTable.size(RearrangeColumnsTable.java:605)
at org.knime.core.node.BufferedDataTable.size(BufferedDataTable.java:383)
at org.knime.core.node.NodeModel.executeModel(NodeModel.java:652)
at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1128)
at org.knime.core.node.Node.execute(Node.java:915)
at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:561)
at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:95)
at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:179)
at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:110)
at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:328)
at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:204)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)
Caused by: java.util.zip.ZipException: invalid block type
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
at java.util.zip.ZipInputStream.read(ZipInputStream.java:194)
at org.knime.core.util.FileUtil.copy(FileUtil.java:266)
at org.knime.core.data.container.CopyOnAccessTask.createBuffer(CopyOnAccessTask.java:191)
at org.knime.core.data.container.CopyOnAccessTask.createBuffer(CopyOnAccessTask.java:157)
at org.knime.core.data.container.ContainerTable.ensureBufferOpen(ContainerTable.java:292)
... 17 more
A quick summary of what we've learned so far: it's a problem when writing those zip files. This particular zip file is really a container of three files: two XML files (metadata) and one binary file (the actual data). The different parts are written with different compression levels, which the zip standard/API supports. This seems to be broken on the new macOS version.
We have isolated the problem in a small standalone Java program and will submit a bug report to Apple.
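If you want to check your own machine, here is a minimal sketch of that kind of reproducer (not the actual program we will send to Apple; the class name, entry names, and sizes are made up for illustration). It writes a zip whose entries use different compression levels, as KNIME does for its port data.zip files, and then reads the archive back:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class MixedLevelZipTest {
    public static void main(String[] args) throws IOException {
        File zip = File.createTempFile("mixed-level", ".zip");
        byte[] big = new byte[1 << 20]; // 1 MiB of zeros, compresses very well

        // Write two entries with different compression levels -- the same
        // pattern KNIME uses for the XML and binary parts of data.zip.
        try (ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zip))) {
            out.setLevel(Deflater.BEST_SPEED);
            out.putNextEntry(new ZipEntry("meta.xml"));
            out.write("<dummy/>".getBytes(StandardCharsets.UTF_8));
            out.closeEntry();

            out.setLevel(Deflater.BEST_COMPRESSION);
            out.putNextEntry(new ZipEntry("data.bin"));
            out.write(big);
            out.closeEntry();
        }

        // Read everything back. On an affected system this is where a
        // ZipException ("invalid block type" etc.) shows up.
        try (ZipInputStream in = new ZipInputStream(new FileInputStream(zip))) {
            byte[] buf = new byte[8192];
            ZipEntry entry;
            while ((entry = in.getNextEntry()) != null) {
                long total = 0;
                for (int n; (n = in.read(buf)) != -1; ) total += n;
                System.out.println(entry.getName() + ": " + total + " bytes read OK");
            }
        }
        zip.delete();
    }
}
```

On a healthy system both entries print their sizes; on an affected High Sierra installation the read-back step throws a ZipException like the one in the stack trace above.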
I have exactly the same problem with "Table Writer" and "Table Reader". If I write a table of about 25 MB and try to read it back afterwards, I get confusing errors like:
Execute failed: invalid code lengths set
Execute failed: invalid distance too far back
Execute failed: invalid block type
The worst part is that those files cannot be read with another KNIME version on Windows either. This means the written files are actually damaged, which is a big problem!
Yes indeed, but it's clearly a bug in High Sierra. We have created a minimal Java application that reproduces the problem on High Sierra but works fine on every other operating system, including Sierra. We have opened a bug report with Apple but haven't received any response so far.
I also encounter the problems that Ralph mentions. Sorry to say this, but to be on the safe side we should stop using KNIME on High Sierra, in case there are inconsistencies elsewhere that go unnoticed.
Maybe High Sierra users should be warned so that they don't run into problems, and Sierra users told not to upgrade for now....
On my side, I am downgrading one of my Macs back to Sierra, but it's a lot of work since the whole HD must be erased before installing Sierra, so all programs have to be reinstalled (not only KNIME).
Would it be possible to get an estimated date from Apple for a fix?
We'll add a warning to KNIME's welcome screen that there are issues on the newest macOS.
We can't comment on when Apple will release a fix. It seems others have run into the same issue, too: https://stackoverflow.com/questions/46539453/tomcat-with-compression-enabled-causes-error-on-os-x-high-sierra (we believe it's the same issue: zip compression in Java programs)
Apparently that didn't do it. Can someone provide a sequence of zlib calls that reproduces the issue? Possibly from the example program in the issue on GitHub?
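For what it's worth, the ZipOutputStream reproducer above boils down to the sequence below; the comments note the zlib call that each java.util.zip operation maps to in OpenJDK's native code. This is a sketch of the suspected trigger (the mid-stream level change, which goes through deflateParams), not a confirmed minimal case:

```java
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DeflateLevelChange {
    public static void main(String[] args) throws DataFormatException {
        byte[] input = new byte[1 << 16];      // 64 KiB of zeros
        byte[] compressed = new byte[1 << 17]; // plenty of room

        // deflateInit2(strm, Z_BEST_SPEED, Z_DEFLATED, -15, ...)  -- raw deflate
        Deflater def = new Deflater(Deflater.BEST_SPEED, true);
        def.setInput(input);
        int len = def.deflate(compressed, 0, compressed.length); // deflate(strm, Z_NO_FLUSH)

        // The next deflate call applies
        // deflateParams(strm, Z_BEST_COMPRESSION, Z_DEFAULT_STRATEGY)
        def.setLevel(Deflater.BEST_COMPRESSION);
        def.setInput(input);
        def.finish();
        while (!def.finished()) {
            len += def.deflate(compressed, len, compressed.length - len); // deflate(strm, Z_FINISH)
        }
        def.end(); // deflateEnd(strm)

        // Inflate the stream back; on an affected zlib this is where
        // "invalid block type" or a similar error appears.
        Inflater inf = new Inflater(true);
        inf.setInput(compressed, 0, len + 1); // +1: raw streams need a dummy trailing byte
        byte[] restored = new byte[input.length * 2];
        int total = 0;
        while (!inf.finished()) {
            total += inf.inflate(restored, total, restored.length - total);
        }
        inf.end();
        System.out.println("round-tripped " + total + " bytes");
    }
}
```

If the level change really is the trigger, the equivalent C-level sequence would be deflateInit2, deflate(Z_NO_FLUSH), deflateParams, deflate(Z_FINISH), deflateEnd.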