Query parallelization: Executing multiple queries with different output columns in parallel

Hi all,
I’m trying to execute multiple queries in parallel on the same database by using parallel chunk start and end.
However, since the number of output columns is different for each query, I’m getting the error below:

Execute failed: Cell count in row “Row1_#1” is not equal to length of column names array: 2 vs 4

Can someone help with a solution to make this work?

Just an idea:

Unpivot everything so that you have a flat structure (Column Names, Column Values, and some ID that tells you how to put everything back together — if you’re inside a loop, maybe the iteration number)?
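To illustrate the idea outside of KNIME, here is a minimal plain-Python sketch (the function name `unpivot` and the field names are my own, purely hypothetical) showing how rows with different column counts can be flattened into uniform (query_id, column, value) records that can safely be concatenated into one table:

```python
# Hypothetical sketch (not a KNIME node): unpivot query results with
# differing column counts into one flat, uniform record structure.

def unpivot(rows, query_id):
    """Turn a list of dict rows into flat (query_id, row, column, value) records."""
    flat = []
    for row_idx, row in enumerate(rows):
        for col, val in row.items():
            flat.append({"query_id": query_id,  # which query this came from
                         "row": row_idx,        # row index, to rebuild later
                         "column": col,
                         "value": val})
    return flat

# Two queries with different output columns:
q1 = [{"id": 1, "name": "a"}]                       # 2 columns
q2 = [{"id": 1, "city": "x", "zip": "1", "n": 2}]   # 4 columns

flat = unpivot(q1, 0) + unpivot(q2, 1)
# Every record now has the same four fields, so the results can be
# appended into a single table regardless of the original column counts.
```

The `query_id` (or loop iteration number) is what lets you pivot each query’s rows back into their original shape downstream.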


Thanks @MartinDDDD … This worked…
Now I need to separate the output of each query into a different flow. Also, the number of queries is not known in advance; it needs to be set using a flow variable.
Is there any node in which the number of output ports can be set using a flow variable?
For example, if I’m executing 4 queries in the loop, I need 4 output ports with each query’s output in its own port; if I have 5 queries, then 5 output ports, and so on.

I don’t think there is anything that allows a dynamic number of output ports.

But you still need to work out which workflow each query goes to, right? I take it there cannot be an infinite number of different treatments of your data downstream (you need to define one for each case).

Let’s say you have 10 downstream processes for 10 different query outputs. I’d iterate with a Group Loop over each query’s results, determine the correct output type inside the loop (say 0–9, so 10 options), and then route it with Case Switch Start to 10 different branches.
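The routing logic above can be sketched in plain Python (the names `route` and `handlers` are mine, purely illustrative — in KNIME this would be a Group Loop plus Case Switch Start, not code): group the flat records by their query ID, then dispatch each group to one of a fixed set of branches:

```python
# Hypothetical sketch: re-group flat records by query_id and route each
# group to one of a fixed number of handlers (the "branches").

from collections import defaultdict

def route(flat_records, handlers):
    """Group records by query_id, then dispatch each group to a branch."""
    groups = defaultdict(list)
    for rec in flat_records:
        groups[rec["query_id"]].append(rec)
    results = {}
    for qid, recs in groups.items():
        # Map the query ID onto one of the fixed branches (0..N-1).
        handler = handlers[qid % len(handlers)]
        results[qid] = handler(recs)
    return results

flat = [{"query_id": 0, "column": "id", "value": 1},
        {"query_id": 1, "column": "city", "value": "x"}]

# Two example branches: one counts records, one collects values.
handlers = [lambda recs: len(recs),
            lambda recs: [r["value"] for r in recs]]

print(route(flat, handlers))  # → {0: 1, 1: ['x']}
```

The key point mirrors the forum advice: the number of queries can vary at run time, but the set of branches (handlers) is fixed up front, so every query must map onto one of those predefined cases.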