Unfortunately, we don’t have much to go on here. I see you have already followed the suggestions from the related forum post.
I’m wondering whether memory may still be the bottleneck: R can impose its own memory limits independently of what the server makes available. If possible, successfully running the R script on a smaller or simpler data set may help narrow down whether a resource limit is the issue.
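As a quick way to test this, here is a minimal sketch of a down-sampled run inside a KNIME R Snippet node (`knime.in` and `knime.out` are the standard KNIME R integration variables; the 10% sampling fraction is an arbitrary choice for illustration):

```r
# Sketch: run the existing processing on a random 10% sample of the input
# to see whether the failure depends on data volume.
set.seed(42)                                # reproducible sample
n <- nrow(knime.in)
idx <- sample(n, size = ceiling(0.1 * n))   # pick roughly 10% of the rows
small_data <- knime.in[idx, ]

# ... place the original script logic here, operating on small_data ...

knime.out <- small_data                     # hand the result back to KNIME
```

If the node succeeds on the sample but fails on the full table, that points toward a memory or size limit rather than a bug in the script itself.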
The full logs that admins can download in the WebPortal may also provide a stack trace with additional details (assuming they are downloaded soon after the error occurred). Should this be the case, you’d be welcome to share them with our enterprise support (support@knime.com) for further investigation.
@wurz, can you tell us what this workflow does and how large the data is? And does it fail when transferring the data back to KNIME, or already within the R code?
It is possible to run the same R script on a smaller set of data. How can I get further information on resource usage from such a run? Also from the log files on the server?
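For example, I could add some diagnostics to the script, along these lines (a minimal sketch; `gc()` and `object.size()` are standard base/utils R functions), so that memory figures show up in the node's console output:

```r
# Sketch: print basic memory diagnostics from within the R script.
cat("Input table size:\n")
print(object.size(knime.in), units = "MB")  # size of the incoming KNIME table
cat("Memory after garbage collection:\n")
print(gc())                                 # used / max-used memory in this R session
```

Would that output also end up in the server's log files?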
We’re also monitoring the KNIME server’s resources. The R node also fails even when the available resources are not fully used, so I would assume it is a limitation of the node itself.