I recently read the whitepaper on smart energy data from the Irish Smart Energy Trials, and I also downloaded the Energy Usage Prediction example workflow.
I have a similar smart meter dataset on which I would like to perform a clustering analysis.
While going through the workflow, I got stuck at the metanode that performs the hourly and intraday value calculations. I was wondering about the reasoning behind the design shown in the workflow figure attached to this mail. Why is the dataset split using Row Splitter nodes, with metanodes then called to perform the intraday value calculations? Does this make the workflow execute faster? Also, since my METERID numbers are all over the place (not sequential), I am having to redo this portion of the workflow, so it is crucial for me to understand why it was designed this way.
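For reference, this is my current understanding of what the metanode computes, written as a rough Python/pandas sketch. The column names (METERID, TIMESTAMP, USAGE) and the normalization step are my own assumptions about the workflow's schema, not taken from the whitepaper, so please correct me if I have misread it:

```python
import pandas as pd
import numpy as np

# Toy smart meter readings: half-hourly usage for two meters over one day.
# METERID, TIMESTAMP, and USAGE are assumed column names, not the actual schema.
rng = pd.date_range("2024-01-01", periods=48, freq="30min")
df = pd.DataFrame({
    "METERID": [1001] * 48 + [2002] * 48,   # note: IDs need not be sequential
    "TIMESTAMP": list(rng) * 2,
    "USAGE": np.arange(96, dtype=float),
})

# Hourly values: average usage per meter for each hour of the day.
df["HOUR"] = df["TIMESTAMP"].dt.hour
hourly = df.groupby(["METERID", "HOUR"])["USAGE"].mean().unstack()

# Intraday profile: each hourly value as a fraction of the meter's daily total,
# so meters with different absolute consumption become comparable for clustering.
intraday = hourly.div(hourly.sum(axis=1), axis=0)
print(intraday.shape)  # one row per meter, one column per hour
```

If this is roughly what the metanode does, I don't yet see why the Row Splitter step is needed before it, since a group-by on METERID already handles each meter separately.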
Also, in one of the DM blogs Rosaria mentions using the Rush Accelerator for faster execution of workflows, but I could not find where to download the Rush Accelerator to try it.
Any ideas would help! Thanks so much in advance.