Bug when changing the number of columns


My workflow consists of a pivoting node followed, a bit later, by a Linear Correlation (LC) node. When the number of columns in the pivot output changes (after removing one group from the input table), the LC node becomes misconfigured and will not execute. (Adding a column does not break execution, but the additional column is not considered.) I believe this is a problem for many other nodes as well.

Could you please describe a standard process to avoid this behavior when developing my own nodes?

Do you actually consider this a bug, or are you saying that this behavior is normal and that removing or adding columns will change the outcome dramatically, in a way that the user should (1) be aware of and (2) act on each time? (I would disagree with the latter statement...)




I see your point and agree the node shouldn't revert to "unconfigured" if the input changes. There are a few nodes that behave like that. We'd like to change this by adding an "Enforce Exclusion"/"Enforce Inclusion" option to the filter, specifying what happens if the input changes. (There is already a working implementation in the "Column Filter" node.)

My suggested workaround is to place a Column Filter right before the Linear Correlation node and remove the irrelevant columns there (with the "Enforce Inclusion" option checked).

As for good coding practice: use the same approach as in the Column Filter implementation (FilterColumnNodeModel).
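The core of that pattern is to reconcile the stored column selection with the current input spec at configure time instead of invalidating the whole configuration. Stripped of the KNIME API, the policy logic looks roughly like this standalone sketch (the class and method names are illustrative, not KNIME's, and the "enforce exclusion" branch is simplified to work from an include list only):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

/** Standalone sketch of an "Enforce Inclusion"/"Enforce Exclusion" policy. */
public class ColumnFilterPolicy {

    /**
     * Reconciles the user's configured include list with the columns that
     * actually arrive at the input.
     *
     * @param configuredIncludes columns the user selected in the dialog
     * @param inputColumns       columns present in the current input table
     * @param enforceInclusion   true: keep only configured columns that still
     *                           exist; false ("enforce exclusion"): also pass
     *                           through new, previously unseen columns
     * @return the columns the node should operate on
     */
    public static List<String> reconcile(List<String> configuredIncludes,
            List<String> inputColumns, boolean enforceInclusion) {
        Set<String> input = new LinkedHashSet<>(inputColumns);
        List<String> result = new ArrayList<>();
        // Keep configured columns that are still present; silently drop the
        // missing ones instead of turning the node "unconfigured".
        for (String col : configuredIncludes) {
            if (input.contains(col)) {
                result.add(col);
            }
        }
        if (!enforceInclusion) {
            // Enforce-exclusion mode: columns the user never saw (and hence
            // never excluded) are included by default.
            for (String col : inputColumns) {
                if (!configuredIncludes.contains(col)) {
                    result.add(col);
                }
            }
        }
        return result;
    }
}
```

With "Enforce Inclusion", a removed column simply disappears from the working set and a new column is ignored; with "Enforce Exclusion", the new column is picked up automatically. In a real node you would run this reconciliation in `configure()` against the incoming table spec before validating the settings.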


I am experiencing similar problems, but I cannot use your workaround because in my case the column names vary between two different executions (I would like to automate the analysis of several tables with different structures). Currently the Linear Correlation and other nodes do not have the "Enforce Inclusion" policy; is there any workaround to handle tables with different structures in an automated way?

Thanks in advance