Energy Usage Prediction Workflow

I recently read the whitepaper on smart energy data from the Irish Smart Energy Trials, and I also downloaded the energy usage prediction example workflow.

I have a similar smart-meter dataset on which I would like to perform a clustering analysis.

While going through the workflow, I am stuck at the point where a metanode performs the hourly and intraday value calculations. I was wondering about the reasoning behind the design of the workflow in the figure attached to this mail. Why do we split the dataset with Row Splitter nodes and then call metanodes to perform the intraday value calculations? Does this make the workflow execute faster? Also, since my METERID numbers are all over the place, I have to redo this portion of the workflow, so it is crucial for me to understand why it has been designed this way.

Also, in one of the DM blogs, Rosaria mentions using Rush Accelerator for faster workflow execution, but I could not find where to download Rush Accelerator to try it.

Any ideas will help! Thanks so much in advance.


Hi Supriya,

My laptop at the time could not handle all of this data at once, hence all the splitting. You can definitely remove it if your laptop can handle the data all together.
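For intuition, the splitting strategy amounts to processing one batch of meter IDs at a time so that only a slice of the table is in memory, then stacking the partial results. Here is a minimal pandas sketch of that idea; the column names `METERID`, `timestamp`, and `usage`, and the toy data, are assumptions for illustration, not the actual schema of the trial dataset:

```python
import pandas as pd
import numpy as np

# Toy stand-in for a smart-meter table (column names are assumptions):
# two meters, two days of hourly readings each.
rng = pd.date_range("2024-01-01", periods=48, freq="h")
df = pd.DataFrame({
    "METERID": [1001] * 48 + [2002] * 48,
    "timestamp": list(rng) * 2,
    "usage": np.random.default_rng(0).random(96),
})

def hourly_means(chunk: pd.DataFrame) -> pd.DataFrame:
    """Average usage per meter per hour of day (an 'intraday' profile)."""
    return (chunk.assign(hour=chunk["timestamp"].dt.hour)
                 .groupby(["METERID", "hour"], as_index=False)["usage"]
                 .mean())

# Emulate the Row Splitter design: handle one batch of meter IDs at a
# time so only part of the data is loaded, then concatenate the parts.
meter_ids = df["METERID"].unique()
parts = [hourly_means(df[df["METERID"].isin(batch)])
         for batch in np.array_split(meter_ids, 2)]
profiles = pd.concat(parts, ignore_index=True)
print(profiles.shape)  # one row per (meter, hour-of-day) pair
```

Because each batch is aggregated independently and the groups are disjoint by meter ID, the concatenated result is identical to aggregating the whole table at once; the splitting only bounds peak memory.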

I am working on a new version that uses big data engines instead of the complex and long-running calculations in this old workflow.

Rush Accelerator has changed since then; it is now called DataFlow. There is a trial version available, but I am not sure how to integrate it anymore.

I hope this answers your questions.


Dr Rosaria,

Thanks so much for your response. I am definitely looking forward to the newer version of this workflow! In the meantime, I will try to fix the metanodes that fail on my data, mainly because of the differing METER IDs, and see if I can get the workflow running on my dataset (a smaller version of it, I mean).

Best Regards,