I am using the Create Databricks Environment node to connect KNIME to a database in Databricks. It works well, but when I run a second node (DB Query Reader with an SQL query) I get the following error: Execute failed: Error running query: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 675 tasks (10.0 GB) is bigger than spark.driver.maxResultSize (10.0 GB). I am using KNIME Analytics Platform v4.1.4.
Could you please help me with this error? The objective is to obtain a KNIME data table with the result of the database query.
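From what I have read, the error seems related to the Spark driver setting spark.driver.maxResultSize. My assumption (I am not sure this is the right fix, or where it belongs in the KNIME setup) is that it could be raised in the Databricks cluster's Spark config before the cluster starts, for example:

```
# Databricks cluster > Advanced Options > Spark config
# Assumption: the driver has enough memory to hold 20g of results
spark.driver.maxResultSize 20g
```

Is that the right place to change it, or is there a better way to get a large query result into KNIME?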
Thank you in advance.