I’m trying to use loops to automate repetitive tasks in my workflows, but I’m still not confident enough with these nodes.
I will explain my situation so maybe you can help me out.
First of all, I have two datasets:
Dataset 1: a Google Ads export table with sales performance from various ad campaigns; each campaign contains ads for a specific product set, and different campaigns can refer to the same product set
Dataset 2: a CRM export table with sales data from each product set and the campaign that generated that sale
My final objective is to join the two tables so that I have both Google Ads and CRM results in the same file. That is the easy part, though.
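To show what I mean by the easy part, here is a minimal sketch of that join in pandas, with made-up column names and toy data (the real exports have different columns, so this is just to illustrate the logic):

```python
import pandas as pd

# Toy stand-ins for the two exports; column names are assumptions.
google_ads = pd.DataFrame({
    "campaign": ["Camp_A", "Camp_B", "Camp_C"],
    "product_set": ["Product-Set_1", "Product-Set_1", "Product-Set_2"],
    "ads_sales": [10, 7, 4],
})
crm = pd.DataFrame({
    "campaign": ["Camp_A", "Camp_B", "Camp_C"],
    "product_set": ["Product-Set_1", "Product-Set_1", "Product-Set_2"],
    "crm_sales": [9, 6, 5],
})

# Left join on campaign + product set keeps every Google Ads row
# and pulls the matching CRM results alongside it.
merged = google_ads.merge(crm, on=["campaign", "product_set"], how="left")
```

In KNIME the equivalent would be a Joiner node with campaign and product set as the joining columns.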
Problems arise when the CRM file includes results for some product sets without a campaign. My manager asked me to take the results from these product sets and add them to the best-performing campaign for each set.
Example: Product-Set_1 has generated 3 sales without a campaign, so I need to add those 3 sales to the results of the best-performing campaign for that product set.
The problem is that the number of product sets generating campaign-less sales will vary every time I have to make this report. So I wanted to try using a loop to automate the filtering and the math operations without having to do them by hand every time.
I created this example workflow to better explain the situation:
My objective is to sum the values shown in the GroupBy node into the top-selling campaign for the corresponding product set in the Google Ads export.
Notice that every time I update the CRM export I might find different product sets that generated sales outside of a campaign. In this case these are product sets 1, 2 and 5, but next time it might be 2, 3 and 4.
My initial idea was to filter the GroupBy node and the Google Ads export for every product set, but that is way too long a process and it requires editing every time I update my dataset. That’s why I wanted to try using loops to automate this process.
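For reference, this is the overall logic I’m after, sketched in pandas with made-up column names and toy data (the real exports differ; in KNIME I imagine this would be a Group Loop or similar, which is exactly the part I’m unsure about):

```python
import pandas as pd

# Toy stand-ins for the two exports; column names are assumptions.
google_ads = pd.DataFrame({
    "campaign": ["Camp_A", "Camp_B", "Camp_C"],
    "product_set": ["Product-Set_1", "Product-Set_1", "Product-Set_2"],
    "sales": [10, 7, 4],
})
crm = pd.DataFrame({
    "campaign": ["Camp_A", None, None, "Camp_C"],
    "product_set": ["Product-Set_1", "Product-Set_1",
                    "Product-Set_2", "Product-Set_2"],
    "sales": [9, 3, 2, 5],
})

# 1. Sum campaign-less CRM sales per product set.
#    This handles any number of affected product sets, so nothing
#    needs re-editing when the CRM export changes.
orphan = crm[crm["campaign"].isna()].groupby("product_set")["sales"].sum()

# 2. Find the best-performing campaign for each product set.
best_idx = google_ads.groupby("product_set")["sales"].idxmax()
best = google_ads.loc[best_idx].set_index("product_set")["campaign"]

# 3. Add the orphan sales to that campaign's row.
for product_set, extra in orphan.items():
    mask = ((google_ads["product_set"] == product_set)
            & (google_ads["campaign"] == best[product_set]))
    google_ads.loc[mask, "sales"] += extra
```

With the toy data, Product-Set_1’s 3 orphan sales land on Camp_A (its top seller) and Product-Set_2’s 2 orphan sales land on Camp_C. Steps 1 and 2 map naturally onto GroupBy nodes; it’s step 3, applied per product set, that I’d like to express as a loop.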