This is more of an Analytics Platform question, as it is independent of whether the workflow runs on an Executor or in the Analytics Platform.
In fact, this is a question you can pose for any software. In KNIME, there are ways of handling data in batches; check out chunk loops, for instance (sketched below).
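To make the batching idea concrete outside of KNIME, here is a minimal pandas sketch (the file name, column name, and chunk size are all hypothetical); a Chunk Loop Start / Loop End pair in a workflow follows the same pattern:

```python
import pandas as pd

# Read and aggregate a large CSV in fixed-size chunks, so that only one
# chunk is held in memory at a time (the same idea as a KNIME chunk loop).
total = 0.0
for chunk in pd.read_csv("big_table.csv", chunksize=100_000):
    total += chunk["value"].sum()  # partial result per chunk

print(total)  # combined result over all chunks
```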
Some nodes have a setting to either run their (faster) algorithm entirely in memory or to cache intermediate results on disk. If memory is tight, be sure to untick any “Process in memory” checkboxes.
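The same trade-off can be sketched in plain pandas, assuming the same hypothetical table and that a parquet engine such as pyarrow is installed: spill the intermediate result to disk, free the memory, and reload it only when a later step needs it.

```python
import pandas as pd

df = pd.read_csv("big_table.csv")                  # hypothetical input
intermediate = df.assign(doubled=df["value"] * 2)  # some upstream step

# Cache the intermediate table on disk and release the RAM it occupies,
# analogous to unticking "Process in memory" on a node.
intermediate.to_parquet("intermediate.parquet")
del df, intermediate

later = pd.read_parquet("intermediate.parquet")    # reload on demand
```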
Of course, you can also raise the allowed heap size beyond the physical memory installed and configure the OS to use the main disk as swap, but this will be painfully slow; I’d advise against this approach.
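For reference, the heap limit itself lives in the knime.ini file in the installation directory: everything below -vmargs is passed to the JVM, and the 8g below is only an example value. Keeping -Xmx below your physical RAM avoids exactly the swapping problem described above.

```
-vmargs
-Xmx8g
```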
You may also be able to change the order in which you process your data (filter first, then apply transformations), or use algorithms that are more memory-efficient than others; see the sketch below. I’m afraid this is very dependent on the task at hand.
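As a toy illustration of the ordering point, reusing the hypothetical table from above: filtering first means the transformation only ever materializes data for the rows you keep.

```python
import pandas as pd

df = pd.read_csv("big_table.csv")  # hypothetical input, as above

# Memory-hungry order: build the new column for every row, then drop most rows.
# wide = df.assign(squared=df["value"] ** 2)
# result = wide[wide["category"] == "A"]

# Leaner order: filter first, so the new column exists only for surviving rows.
small = df[df["category"] == "A"]
result = small.assign(squared=small["value"] ** 2)
```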