New feature ideas: Wireless connector or ghost nodes

Hi @dpowyslybbe, if it’s just a single connection, you can drag the additional node onto the connection between the 2 nodes, and KNIME will link the additional node in automatically, like this:
[screenshot]

Drag the String Manipulation node (or any other node) from the Node Repository to the connection:
[screenshot]

The connection will turn red, meaning that the node will be inserted on that connection, with KNIME linking the 3 nodes for you:
[screenshot]

It’s a quick way to add a node without having to manually link the nodes.

But for multiple connections, you can’t insert the node like this on all connections at once:
From:
[screenshot]

To:
[screenshot]

You can’t drag the node the same way as above and have KNIME automatically rewire multiple connections; you can only drop it onto a single connection. For example (see the connection that turns red):
[screenshot]

In this case, only Node 7 will get the results of the String Manipulation.

EDIT: Similarly, you can also easily replace a node by dragging a node from the Repository onto the node you want to replace. The new node will take the old node’s place, and the connections will be restored automatically, provided that the new node has the same number and types of ports as the old node. That’s kind of the “hack” I used to add the String Manipulation without having to manually reconnect all these connections. Unfortunately I can’t demonstrate it, as my screen capture does not capture what I am dragging with the mouse.


Hello there!

Indeed, it might be. Request is noted, @dpowyslybbe.

Regarding the use case explained by @bruno29a, one suggestion I have seen on the forum is to “centralize”, or better said combine/overlay, all outgoing connections from one node into a single line up to some point. This helps with decluttering the workflow, but it could possibly solve this case as well, since dropping a node onto that line could then work.

Br,
Ivan

I did not know about replacing a node by dragging it onto the one I want to replace - that’s a superb hack. Thanks for sharing!


Yup, this is a very useful feature @dpowyslybbe, no need to disconnect and reconnect :smiley:


+1 from myself too. The ability to “beam” data would tremendously ease workflow orientation & management and free up time we can spend on workflow creation instead.

This becomes especially true for Color and Report connections that are just passed down when components are leveraged to keep things organized, like here. Two out of three connections do almost nothing.

Here is an example of how to beam color data by writing it to and reading it from a table. The only requirement is to map the color variables in the component once.
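
To make the pattern concrete, here is a minimal, purely illustrative sketch in Python (e.g. runnable in a Python Script node or outside KNIME): the category-to-color mapping is persisted as a plain table and later joined back onto the data. The column names, colors and file path are made up for the example and are not taken from the workflow above.

```python
import pandas as pd

# Hypothetical category-to-color mapping, persisted as a plain table
# (the "write" side of the beam).
color_map = pd.DataFrame({"category": ["A", "B", "C"],
                          "color": ["#1f77b4", "#ff7f0e", "#2ca02c"]})
color_map.to_csv("color_map.csv", index=False)

# Elsewhere in the workflow (the "read" side): re-attach the colors by
# joining the persisted mapping back onto the data.
data = pd.DataFrame({"category": ["B", "A", "B"], "value": [10, 20, 30]})
restored = data.merge(pd.read_csv("color_map.csv"), on="category", how="left")
print(restored)
```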

What I find imperative to implement for the ghost nodes, or as I’d call them data-beam nodes, is that if the input node got executed, the corresponding output nodes must count as executed too, so that data does NOT get read over and over again, saving IOPS.

Hey,

although I like the idea of having cleaner workflows, I am not a huge fan of wireless connections, as it will make understanding the workflows harder (personal opinion).

But I want to propose something different and get your opinion on that:
I would call the concept highway connections, meaning that you can define a point on your connection from which you want to drag out new connections. With that you could organise the connections much more easily. In theory this is already possible, as you can stack multiple connections on top of each other, but it is currently cumbersome to do.

[screenshot] vs [screenshot]

Very interested in what you all think.
Greetings,
Daniel


You can create something like this already using an empty component or metanode:


[screenshot]

Very useful for routing flow variables in complex workflows.

best,
Gabriel


Maybe we should differentiate between the different connection / port types. My example was primarily aimed at those types which do not carry data one would usually process, like the color information or the report template data.

Though, that information becomes necessary when writing PDF / HTML reports within a component, or after data got transformed or read, which causes it to lose its color information.

When it comes to regular data connections, I mostly agree that the initial purpose is almost essential and should not be “violated”, as that can degrade the level of fidelity which helps provide context to the workflow. Though, I must agree that managing connections and ports could be improved, especially in the Modern UI. So I really welcome your idea of a data highway, @DanielBog, since I find myself using the Cache or NoOp nodes from NodePit quite frequently when connections traverse long distances.

[screenshot]


@mwiegand NoOp is an interesting node. I haven’t used it before. Thanks!

Here is my two cents.

Since I have an EE background, I like to borrow ideas directly from PCB routing to deal with complexity.

In the workflow, I simply put those buses together and then branch them at the appropriate places. For example, in the picture, the gray connection line actually superimposes the red and blue lines.

In PCB software, some lines look good through automatic layout, but in KNIME you can only turn off the curved connections option first and then manually adjust the connections line by line. (Yes, I know, Auto Layout just works, but not the way I expected.)


Hi @gab1one, yes, this “distributor” pattern is currently what I do to avoid large numbers of such flows over long distances.

Something that makes it simpler to place the “distributor” component when there are already a large number of connected nodes would be good. (i.e. when trying to “refactor” the workflow).

This animation gives an idea of how I would currently go about refactoring the above example (relatively) painlessly:

[animation]

However, that method only really works for a situation where the data flow is what I would call simple, and I’m not having to “rewire” a whole bunch of upstream connectors on the original (“Data Generator”) node.

Take an example where the refactoring has to include inbound flows too (which we can imagine may be from a wide variety of nodes scattered around the workflow):

If I am to include a downstream “distributor” node, I really don’t want to have to go about reconnecting all the upstream nodes, which is what I’d have to do with the above approach. So in this case, the method I have found for refactoring involves temporarily creating a component for the “original” node, performing the refactoring inside the component, and then expanding the component again, as per the following animation:

[animation]

It would be nice to see some option for performing this kind of refactoring in fewer steps than the above, although it isn’t too arduous. I would also welcome a core “distributor” node, similar to the NoOp nodes, because there have been a few times when I have released public components containing the NoOp nodes and then found myself having to answer questions on the forum about why the component doesn’t work (the user needs to install the NodePit Power Nodes, and I ultimately find it easier to replace them with a “do nothing” proxy node such as “add new rows” set to add zero rows) :wink:
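
As an aside, a dependency-free stand-in for such a “do nothing” proxy could also be a Python Script node that simply passes its input through. This is only a minimal sketch assuming the newer knime.scripting.io scripting API; please verify it matches your KNIME version.

```python
# Minimal pass-through for a KNIME Python Script node (assumes the
# knime.scripting.io API available in recent KNIME versions).
import knime.scripting.io as knio

# Hand the first input table through to the first output table unchanged.
knio.output_tables[0] = knio.input_tables[0]
```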


I really like your analogy @HaveF. I found the curved connections much easier to comprehend and work with, especially if the grid is enabled. Here is a comparison showing the issue; it was exactly the point in time after which I switched to curved connections.

A few enclosed components down the line it becomes quite challenging to comprehend the incoming data and their ports. Hence, I resorted to using the Cache node (before getting to know the NoOp nodes) to add clarity.

Picking up your example of the PCB: what happens when a trace is continued on the other side of the PCB, wouldn’t you try to add a little annotation? Or, even worse, when it is routed through a chip, which requires you to use the schematics (if available)?

Thinking about this more thoroughly, more advantages of a “data beam node” come to mind:

  1. Ability to quickly jump to its origin, kind of traversing through the workflow
  2. Ability to add more context compared to component inputs, as those can change (port order and count)
  3. Each time data is passed, IOPS are incurred. The idea of a “data beam node” might ease that. Worth noting that my assumption might be totally wrong.

@takbb nice trick! I love it. Thanks for your animation!

I believe refactoring is a must-have feature in the future; it will greatly improve readability and maintainability (this will ultimately lead to a less error-prone and more streamlined workflow). Of course, the refactoring feature is actually composed of many small operations or option improvements.

Your animation reminded me of another thing. This is actually a very typical pattern of operations. I was thinking that if there were a macro recorder like in Excel that could record these actions, then the next time I encounter the same problem, playing back those macros could handle most of the repetitive operations!
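
Just to illustrate the idea, here is a tiny, purely hypothetical record-and-replay sketch in Python; nothing here corresponds to an actual KNIME API, and the actions are just stand-ins for UI operations.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Purely illustrative: neither MacroRecorder nor these "actions" exist in KNIME.
@dataclass
class MacroRecorder:
    steps: List[Callable[[], None]] = field(default_factory=list)

    def record(self, action: Callable[[], None]) -> None:
        """Store an action and execute it immediately."""
        self.steps.append(action)
        action()

    def replay(self) -> None:
        """Re-run every recorded action in order."""
        for action in self.steps:
            action()

# Dummy actions standing in for the refactoring steps from the animation:
recorder = MacroRecorder()
recorder.record(lambda: print("collapse node into temporary component"))
recorder.record(lambda: print("insert distributor node"))
recorder.record(lambda: print("expand component again"))
recorder.replay()  # repeats the same three steps on demand
```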


What about just allowing connection lines to be made invisible, instead of “wireless” node connections (which I never really liked personally in Alteryx)? That way the platform could have a show / hide setting for visibility, and the lines could show automatically when selecting a node. I might use that to clean something up, as long as the UI cleanly showed the flow lines when needed.

The connection spots on the nodes should show a clear visual sign that hidden connections are present.


It just occurred to me that KNIME already has a quite similar feature … the Call Workflow Service node and, respectively, the Container Input (Table) node.

[screenshot]
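
For reference, a workflow exposing a Container Input (Table) node can also be fed from outside, e.g. via a REST call. The sketch below is only a rough illustration using Python’s requests library: the endpoint URL, credentials, parameter name (“table-input”) and exact JSON layout are placeholders/assumptions and depend on the KNIME Server or Hub version, so please check the respective REST API documentation.

```python
import requests

# Placeholder endpoint: the real path depends on your KNIME Server / Hub setup.
URL = "https://knime.example.com/path-to-deployed-workflow:execution"

# Hypothetical payload, keyed by the Container Input (Table) node's parameter
# name; the expected JSON structure may differ between KNIME versions.
payload = {
    "table-input": {
        "table-spec": [{"category": "string"}, {"value": "int"}],
        "table-data": [["A", 1], ["B", 2]],
    }
}

response = requests.post(URL, json=payload, auth=("user", "password"), timeout=60)
response.raise_for_status()
print(response.json())  # tables returned by Container Output nodes, if any
```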

About refactoring, I submitted a few feature requests some time ago to improve upon that by:

Or, during 5.3 Community Hacking Days, to improve the virtual port type of the report connection.

Refactoring is always a pain and becomes exponentially difficult when components wrapped in components are used. Though, I currently cannot envision an improvement except what @takbb pointed out: to use a component so that not all connections get severed, which otherwise results in a situation where you ask yourself “where was that port connected to?”.