Unable to Reset Workflow and Save Latest Version - Help Needed

Hello everyone,

I’m currently facing an issue with my KNIME workflow where I’m unable to reset it properly. Additionally, when I try to save the workflow, it seems to only save an older version from about an hour ago along with some recent changes.

I would like to save the workflow exactly as it is right now, but also ensure that it can be reset and restarted correctly later. However, I’ve also noticed that I can’t copy any nodes over to a new workflow, which makes it even more challenging to troubleshoot or rebuild.

Has anyone else encountered this issue or can anyone offer advice on how to resolve it?

I’ve tried manually resetting individual nodes, but nothing seems to work. Any help or suggestions would be greatly appreciated!

Thank you!

@Th_offy Welcome to the KNIME forum. Can you tell us more about your system and where your KNIME workspace is located? Is it by any chance on a shared or cloud space?

Also: what operating system do you have, and what version of KNIME? Can you access the log and tell us what it says about the attempts to save the workflow?

Have you tried exporting it as a .knwf file?

Thank you!

My PC has the following specs:

  • CPU: i9-10900K @ 3.7 GHz
  • GPU: RTX 3070 Ti
  • RAM: 64 GB
  • SSD: 2 TB
  • OS: Windows 11
  • KNIME 5.3.1

My workspace is located in a cloud space, but it’s saved locally.

After a while, KNIME went blank, and I had to restart it, so exporting was not an option.

Do you happen to know what the error “malloc of size 64 failed” means? My knime.ini is configured to allocate 16 GB of my system’s RAM.
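
For reference, the 16 GB is just the usual -Xmx entry in my knime.ini, roughly like this (a sketch, the remaining lines are left at their defaults):

    -vmargs
    -Xmx16g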


What kind of cloud would that be? Would the files be physically present at all times? Are other (KNIME) processes using the same space?

Maybe you can start a fresh log in debug mode and see if the problem appears again.

My workspace is saved in my user folder, which is synchronized over OneDrive.

Last lines of the log file:
  File "C:\Users\thors\AppData\Local\Programs\KNIME\plugins\org.knime.python3.nodes_5.3.1.v202407291559\src\main\python\knime\extension\nodes.py", line 1199, in wrapper
    results = func(*args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\thors\AppData\Local\Programs\KNIME\plugins\org.knime.python.llm_5.3.1.v202407251209\src\main\python\src\indexes\base.py", line 354, in execute
    for batch in input_table.batches():
  File "C:\Users\thors\AppData\Local\Programs\KNIME\plugins\org.knime.python3.arrow_5.3.1.v202407291559\src\main\python\knime_arrow_table.py", line 400, in batches
    yield ArrowTable(self._source[batch_idx])
                     ~~~~~~~~~~~~^^^^^^^^^^^
  File "C:\Users\thors\AppData\Local\Programs\KNIME\plugins\org.knime.python3.arrow_5.3.1.v202407291559\src\main\python\knime_arrow_backend.py", line 261, in __getitem__
    batch_without_names = self._get_batch(index)
                          ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\thors\AppData\Local\Programs\KNIME\plugins\org.knime.python3.arrow_5.3.1.v202407291559\src\main\python\knime_arrow_backend.py", line 257, in _get_batch
    return self._reader.get_batch(normalize_index(index, len(self)))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\thors\AppData\Local\Programs\KNIME\plugins\org.knime.python3.arrow_5.3.1.v202407291559\src\main\python\knime_arrow_backend.py", line 201, in get_batch
    return pa.ipc.read_record_batch(self._source_file, self.schema)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pyarrow\ipc.pxi", line 1392, in pyarrow.lib.read_record_batch
  File "pyarrow\error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowMemoryError: malloc of size 64 failed

2024-08-25 23:59:48,223 : ERROR : KNIME-Worker-75-Vector Store Retriever 5:1847:1809:139 : : Node : Vector Store Retriever : 5:1847:1809:139 : Execute failed: malloc of size 64 failed
org.knime.python3.nodes.PythonNodeRuntimeException: malloc of size 64 failed
at org.knime.python3.nodes.CloseablePythonNodeProxy$FailureState.throwIfFailure(CloseablePythonNodeProxy.java:798)
at org.knime.python3.nodes.CloseablePythonNodeProxy.execute(CloseablePythonNodeProxy.java:562)
at org.knime.python3.nodes.DelegatingNodeModel.lambda$4(DelegatingNodeModel.java:180)
at org.knime.python3.nodes.DelegatingNodeModel.runWithProxy(DelegatingNodeModel.java:237)
at org.knime.python3.nodes.DelegatingNodeModel.execute(DelegatingNodeModel.java:178)
at org.knime.core.node.NodeModel.executeModel(NodeModel.java:588)
at org.knime.core.node.Node.invokeFullyNodeModelExecute(Node.java:1286)
at org.knime.core.node.Node.execute(Node.java:1049)
at org.knime.core.node.workflow.NativeNodeContainer.performExecuteNode(NativeNodeContainer.java:594)
at org.knime.core.node.exec.LocalNodeExecutionJob.mainExecute(LocalNodeExecutionJob.java:98)
at org.knime.core.node.workflow.NodeExecutionJob.internalRun(NodeExecutionJob.java:198)
at org.knime.core.node.workflow.NodeExecutionJob.run(NodeExecutionJob.java:117)
at org.knime.core.util.ThreadUtils$RunnableWithContextImpl.runWithContext(ThreadUtils.java:367)
at org.knime.core.util.ThreadUtils$RunnableWithContext.run(ThreadUtils.java:221)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at org.knime.core.util.ThreadPool$MyFuture.run(ThreadPool.java:123)
at org.knime.core.util.ThreadPool$Worker.run(ThreadPool.java:246)

This happens within a loop, so I can't finish the workflow…

I get "Execute failed: malloc of size 64 failed"

with the LLM Prompter and the Vector Store Retriever.

@Th_offy First, concerning OneDrive, please be aware of this:


Then: how large is this vector store you are trying to use? Your machine in general seems to be capable.

In addition, you could check whether your Python and pyarrow versions are up to date while keeping the dependencies aligned with the LLM Python packages. Development moves so fast that it may be necessary to keep track.
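
For example, something along these lines, run with the Python environment the extension actually uses (bundled or your own, depending on your setup), prints the versions you would compare against the extension's requirements:

    import sys
    import pyarrow

    # Report the interpreter and pyarrow versions of the environment in use
    print("Python :", sys.version.split()[0])
    print("pyarrow:", pyarrow.__version__)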


Then you could try and see if a smaller example would work on your machine. Maybe download the whole workflow group.
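
If you want to rule out sheer input size, you could also cut the table down before the loop, either with a Row Sampling node or with a small Python Script node along these lines (just a sketch, the row count is arbitrary):

    import knime.scripting.io as knio

    # Pass only the first 100 rows downstream as a smoke test before the loop
    df = knio.input_tables[0].to_pandas()
    knio.output_tables[0] = knio.Table.from_pandas(df.head(100))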

As a side note: log files are best attached as files, or, if short, formatted as code, to keep the forum readable.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.