Scott Fincher (@ScottF) and Cynthia Padilla (@cpadilla) from KNIME held a webinar on the new features and functionality now available in the release of KNIME Analytics Platform 4.3 and KNIME Server 4.12.
We’ll take you through the new File Handling extensions for more powerful and user-friendly data access and prep; new nodes that reduce the number of steps needed in a workflow, plus faster table transformations. We’ll also look at the new Python enhancements that make it easier to share and productionize workflows and components containing Python Scripting nodes.
You’ll hear about the enhanced monitoring and admin interface in KNIME Server, an additional high availability feature, and elastic scaling on Azure.
Find out about our new additions to the Deep Learning extension around Keras and TensorFlow, take a walk around the KNIME Hub, and hear about recent additions to the collection of KNIME Verified Components.
Watch a recording of this webinar on our KNIMETV channel on YouTube.
Are there any plans for an integration with PowerBI?
We already have an integration with Power BI: please check out Send to Power BI on the KNIME Hub.
Will BERT models be available in languages other than Portuguese and English?
Our partner Redfield just updated the BERT Model Selector node to allow for selection of HuggingFace models, which support many languages other than English.
Regarding the file handling framework, I noticed there is no option to rename files/folders in the new Transfer Files/Folders node. Is this planned on the roadmap?
Renaming was possible with the old Copy/Move Files node from the previous framework, but that node lacked file system inputs.
Is it possible to force the type to be string in the CSV Reader?
Not yet, but this is planned for the next major release.
Can you explain how to share a private space on the KNIME Hub?
This is demoed in the recording of the webinar (22 mins into the recording). Essentially there is a Manage Contributors section where you can add other forum members to your space.
Is something like the conda env propagation planned for R?
We have this on the list!
Are there plans to support multiple Python Conda environments, picked on a per-node basis?
Yes, this is already possible! The environments can be set via the flow variables python2Command and python3Command (for Python 2 and 3, respectively). There are also plans to expose these settings via dedicated components in the node dialog. The flow variables expect the full path to a conda environment directory, e.g. /home/marcel/miniconda3/envs/py3_knime on Linux, or the full path to a non-conda Python executable.
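As a minimal illustration, the value assigned to such a flow variable is simply a path string pointing at the environment directory. The helper function below is hypothetical (not part of KNIME); it just shows how that path is typically composed from a conda root and an environment name:

```python
# Hypothetical sketch: composing the value for the python3Command flow
# variable, which expects the full path to a conda environment directory
# (or, alternatively, the path to a non-conda Python executable).
def python3_command(conda_root: str, env_name: str) -> str:
    """Return the conda environment path to assign to python3Command."""
    # Conda environments live under <conda_root>/envs/<env_name>.
    return f"{conda_root}/envs/{env_name}"

# e.g. a default Miniconda install on Linux:
print(python3_command("/home/marcel/miniconda3", "py3_knime"))
# → /home/marcel/miniconda3/envs/py3_knime
```

Assigning a different path per node (via the node's flow variable settings) is what allows each Python Scripting node in a workflow to run in its own environment.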
When is Spark 3.0 expected to be supported with Databricks?
We plan to release support for Spark 3.0 before April.