Performance problems with 20,000 and more data sets (parallel coordinates visualization)

Hello everyone,

How are you? My task is to develop a visualization tool based on pixel-based techniques (recursive patterns) for large amounts of high-dimensional data. After evaluating many existing tools, we decided to use KNIME to get the job done. My problem is the following: I tested the visualization performance using the "Parallel Coordinates" node, which is also the one intended to be used. I tested it on computers with different CPUs and different amounts of RAM. The result was that performance becomes unusable when trying to display 20,000 or more data sets. More RAM does not help (I tested it on a machine with over 90 GB of RAM ;-) That should be enough...). And since the node is not multithreaded, more CPU cores will not help either. The problem is that there are use cases where over a million data sets should be displayed.
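
To put rough numbers on it: assuming, say, 10 dimensions (just an example figure), every data set becomes a polyline with 9 segments, so 20,000 data sets already mean 180,000 line segments per repaint, and a million data sets would mean around 9,000,000 segments. That makes me suspect the bottleneck is rendering rather than memory.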

1) Can anyone give me a hint on how to solve, or at least mitigate, this problem?

2) Does anyone know whether pixel-based visualization techniques perform better or worse than parallel coordinates?
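
(My naive expectation is that a pixel-based display writes roughly one pixel per value, so a million data sets with, say, 10 dimensions would be about 10 million pixel writes and no line rasterization at all, but I do not know how actual implementations behave.)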

3) Could a custom implementation improve performance? (I do not know whether the problem can even be parallelized.) Or were the nodes that ship with KNIME written by professional developers and other experts, so that it would be implausible for someone else (and I am no expert) to produce a faster implementation of, e.g., parallel coordinates? To make the parallelization part more concrete, see the sketch below.
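
Here is a rough sketch in plain Java2D of what I mean by parallelizing the drawing (nothing KNIME-specific; the data layout, dimension count, and all names are made up for illustration): split the data sets into chunks, let every thread draw its chunk into its own image, and composite the partial images at the end.

```java
import java.awt.*;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class ParallelPolylines {

    // rows[i][d] = value of data set i in dimension d, already scaled to pixel y-coordinates
    static BufferedImage render(double[][] rows, int width, int height, int threads)
            throws InterruptedException, ExecutionException {
        int dims = rows[0].length;
        int chunk = (rows.length + threads - 1) / threads;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<BufferedImage>> parts = new ArrayList<>();

        for (int t = 0; t < threads; t++) {
            final int from = t * chunk;
            final int to = Math.min(rows.length, from + chunk);
            parts.add(pool.submit(() -> {
                // each thread draws into its own image, so no synchronization is needed
                BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g = img.createGraphics();
                g.setColor(new Color(0, 0, 255, 20)); // translucent lines so overplotting stays readable
                for (int i = from; i < to; i++) {
                    for (int d = 0; d < dims - 1; d++) {
                        int x1 = d * width / (dims - 1);
                        int x2 = (d + 1) * width / (dims - 1);
                        g.drawLine(x1, (int) rows[i][d], x2, (int) rows[i][d + 1]);
                    }
                }
                g.dispose();
                return img;
            }));
        }

        // composite the partial images; with a single shared color the order does not matter
        BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = result.createGraphics();
        for (Future<BufferedImage> part : parts) {
            g.drawImage(part.get(), 0, 0, null);
        }
        g.dispose();
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        // synthetic test data, just to time the rendering path
        int n = 1_000_000, dims = 10, w = 1200, h = 600;
        double[][] rows = new double[n][dims];
        java.util.Random rnd = new java.util.Random(42);
        for (double[] row : rows)
            for (int d = 0; d < dims; d++)
                row[d] = rnd.nextDouble() * h; // already in pixel coordinates
        long start = System.currentTimeMillis();
        render(rows, w, h, Runtime.getRuntime().availableProcessors());
        System.out.println("rendered in " + (System.currentTimeMillis() - start) + " ms");
    }
}
```

Since the polylines are independent of each other, the drawing itself looks embarrassingly parallel to me; what I cannot judge is whether the node's real bottleneck is the rasterization or something else (hiliting, event handling, data structures).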

I would be grateful for any answer.

Thanks in advance

Jonnyboy