I am trying to run some code, but in my PySpark environment it keeps throwing this error on the import:

```python
import knime.scripting.io as knio
# ModuleNotFoundError: No module named 'knime'
```

I want to use this module to execute the following line:

```python
df = knio.input_tables[0].to_pandas()
```
@najihyeon the Local Big Data node would provide you with a basic PySpark context, as in this example:
Other than that, you would have to set up your (Py)Spark environment separately:
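The import fails because `knime.scripting.io` is injected by KNIME into its Python Script node; it is not a package you can pip-install into a standalone PySpark environment. A minimal sketch of a guard that makes this visible (the helper name `knime_available` is my own, not part of any KNIME API):

```python
def knime_available() -> bool:
    """Return True only when running inside a KNIME Python Script node,
    where the knime.scripting.io module is provided by KNIME itself."""
    try:
        import knime.scripting.io  # noqa: F401  # supplied by KNIME, not by pip
        return True
    except ModuleNotFoundError:
        return False

if knime_available():
    # Inside KNIME: read the first input table as a pandas DataFrame.
    import knime.scripting.io as knio
    df = knio.input_tables[0].to_pandas()
else:
    # Outside KNIME (e.g. a plain PySpark session): the module does not exist,
    # so load your data with pandas/PySpark directly instead.
    print("Not running inside a KNIME Python Script node.")
```

Running this in a plain Python or PySpark session takes the `else` branch, which is exactly the situation behind the error above.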
Hi @najihyeon -
Please don’t start two separate topics for the same question. To keep the forum tidy, I’ll close this thread, and we can keep the discussion going in your other one.