This week we’ll focus on a very important part of workflow development: testing! If you build a component or workflow that is part of a larger project, its potential bugs (or unexpected behaviors) may have catastrophic consequences down the line. To prevent that, automated testing is very effective. Do you usually create tests for your components and workflows?
Here is the challenge. Let’s use this thread to post our solutions to it, which should be uploaded to your public KNIME Hub spaces with tag JKISeason2-23.
Need help with tags? To add tag JKISeason2-23 to your workflow, go to the description panel in KNIME Analytics Platform, click the pencil to edit it, and you will see the option for adding tags right there. Let us know if you have any problems!
First of all, I am honoured to have been named COTM this month. I owe a great deal of gratitude to my colleagues in this weekly challenge: I have learned so much from you, and I hope to keep learning together. Thank you!
As for this week’s challenge, nothing special: my solution builds on my colleagues’ previous solutions.
Here is my solution. Like many other participants, I used Integrated Deployment; however, I tried to make the deployment process automatic, so that if the test fails the workflow is never deployed.
Just as an example, I created a simple second part of the workflow that follows the tested part, so if the first part fails, the second part is never executed. Of course, the second part can also be tested, as can any other part of the workflow.
Finally, if everything passes, all the parts are combined and the final workflow is deployed.
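Outside KNIME, the same gating idea can be sketched in plain Python. The `run_part1_tests`, `run_part2_tests`, and `deploy` functions below are hypothetical placeholders for the workflow segments, just to show the control flow:

```python
# Minimal sketch: deployment only happens if every tested part passes.
# All function names here are made-up stand-ins for workflow segments.

def run_part1_tests():
    # Stand-in for testing the first part of the workflow.
    return True

def run_part2_tests():
    # Only reached if part 1 passed; stand-in for the second part.
    return True

def deploy():
    print("deploying combined workflow")

def main():
    if not run_part1_tests():
        print("part 1 failed; part 2 skipped, nothing deployed")
        return False
    if not run_part2_tests():
        print("part 2 failed; nothing deployed")
        return False
    # Everything passed: combine the parts and deploy.
    deploy()
    return True
```

The key point is the early return: a failing test short-circuits the pipeline, so the later parts never run and nothing reaches deployment.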
Hi @rfeigel
Your workflow is interesting. By the way, can you explain in detail how the “Workflow Service Output” node works? Or how will the output of this node be used? Thanks!
My workflow is very similar to everyone else’s this week. I used the -Rule Engine- node to transform the table so I could then find the table differences.
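The same two-step idea (normalize the rows first, then diff) can be sketched in plain Python. The column names and the normalization rule below are invented purely for illustration:

```python
# Hedged sketch: normalize rows (the role the Rule Engine plays in the
# workflow), then report the rows that differ from the expected table.

def normalize(row):
    # Hypothetical rule: map numeric status codes to labels before comparing.
    mapping = {1: "pass", 0: "fail"}
    return {**row, "status": mapping.get(row["status"], row["status"])}

def table_diff(expected, actual):
    # Rows of the normalized actual table that are absent from expected.
    normalized = [normalize(r) for r in actual]
    return [r for r in normalized if r not in expected]

expected = [{"id": 1, "status": "pass"}, {"id": 2, "status": "fail"}]
actual = [{"id": 1, "status": 1}, {"id": 2, "status": 1}]
print(table_diff(expected, actual))  # only row id=2 differs after normalization
```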
As always on Tuesdays, here’s our solution to last week’s Just KNIME It! challenge.
Note how we use a Try/Catch block to make sure that, even if the test fails, we can still gather information from its execution. This is particularly useful if one wants to build a report on multiple unit tests.