Changing only cells containing a specific character


It’s my first post here, so sorry for any mistakes.

So, SAP exports negative values in some reports with the minus sign (-) as the last character, and I want it as the first character so the values are usable in Excel.
I tried some ideas with Column Expressions, but nothing worked.

In the example below I just want to change the “12.240-” to “-12.240” in column P05.

Any ideas of how I can do this?

Thank you so much,

Hello @acout0

Welcome to the KNIME forum! A way to solve this is by using a Column List Loop.

What it does:

  • Goes through each individual column of choice (in this case, all the P columns)
  • Uses a Column Expressions node to check if the string contains a dash (if you don’t have the node, drag and drop it from Column Expressions – KNIME Hub)
  • Rebuilds the number with a leading dash
  • Appends all processed columns again
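In plain JavaScript the per-cell logic of the Column Expressions step could look like the sketch below. The function name is made up for illustration; inside KNIME you would read the cell value through the node’s column accessor rather than a function argument:

```javascript
// Sketch of the per-cell logic for moving a trailing minus to the front.
// moveTrailingMinus is a hypothetical stand-in for the expression body.
function moveTrailingMinus(value) {
  // Only touch string cells that actually end with a dash, e.g. "12.240-"
  if (typeof value === "string" && value.endsWith("-")) {
    // Drop the trailing dash and rebuild the value with a leading one
    return "-" + value.slice(0, -1);
  }
  // Everything else (positive values, missing cells) passes through unchanged
  return value;
}
```

For example, `moveTrailingMinus("12.240-")` returns `"-12.240"`, while `"12.240"` is left untouched.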

Changing only cells containing a specific character.knwf (13.1 KB)

Hope this helps!


It worked!
Thank you so much!
Loops are unusual in my workflows, as I don’t understand them very well yet, but I will get there :slight_smile:


I have borrowed @ArjenEX’s great solution and tried a String Regex approach
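The regex alternative boils down to a single substitution: capture everything before a trailing dash and emit the dash first. A minimal sketch in plain JavaScript (how exactly the pattern is wired into a KNIME node is left open here, so treat the node setup as an assumption):

```javascript
// Move a trailing minus sign to the front using a regular expression.
// The pattern captures everything before a final dash and swaps the order.
function regexFixSign(value) {
  return value.replace(/^(.+)-$/, "-$1");
}
```

Values without a trailing dash do not match the pattern and come back unchanged, so the replace is safe to run over an entire column.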




Great alternative approach!


Thank you Daniel!

I made both solutions in order to learn different ways to solve this!
It definitely helped me improve in KNIME!

Thank you both for this amazing help


Hi @acout0 and welcome to the KNIME Community.

Next time, please provide sample data, as it helps to have something to work with when we try to come up with solutions :slight_smile:

Looking at your screenshot, it looks like your data was generated via a loop already. Could this not be fixed during the generation of the data? Having to re-run each column through a loop again afterwards seems very inefficient. I would probably integrate what @ArjenEX proposed in the Column Expressions node into your original loop, instead of adding the whole workflow that @ArjenEX suggested.

Alternatively, a (Multi Column) node such as the one @Daniel_Weikert used is much quicker than looping through and processing each column individually.

So, it might actually be better if you could show us your workflow and what you are doing.


Thank you for this tip @bruno29a !
I will remember it in my next topic.
Can I just drag and drop the workflow here? Does it include the data I’m using?

Also, you are correct, I already have a loop in my workflow.
Thanks in advance for any tips to get a better workflow and keep improving

See it below :slight_smile:
Análise Custos Fábrica.knwf (345.0 KB)

Hi @acout0 , I’ve checked your workflow.

So what I was trying to see was:

  1. How the values were originally generated, and it looks like they are coming from the csv files.
  2. What your original loop was about, and it’s for reading the multiple files in a folder, so you can’t really apply the changes in the original loop.

That being said, here are my suggestions:

  1. I hope you know that the CSV Reader node allows you to read all the files (with an option to filter) that are in a folder, using the “Files in folder” option:
    However, I can understand why you are looping through each file: you only want to extract some rows, between row 30 and row 1000, via the Row Filter. Does this range make sense, and is it needed? Looking at your last file, it only had 86 records, which means you’ll get 57 records (you are skipping the first 29 rows to start from the 30th: 86 - 29 = 57).
    If you don’t need this filtering, you don’t need the loop.

  2. You can use the “Append path column” option from the “Advanced Settings”, and the CSV Reader will automatically append a column for you containing the path of the source file:

This gets rid of the need to manually add the path via the Path to String (Variable) + Constant Value Column nodes.

  3. Looking at both sets of csv files that you have, they share the same structure (9 String columns). This means you can combine them all at the very beginning and then process the whole data set, instead of having 2 parallel identical sets of processes:
  • This is much faster to run
  • It’s much easier to maintain. Currently, if you need to change something, you need to do the same change in 2 places

If you still need to distinguish between the 2 sets of files (it seems you want to compare some aggregations between them), you can add an extra column as an identifier.

I would probably add that identifier right after your original loop (or you may extract it from the path info that you add via #2 above), and then concatenate from there.
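If the identifier is extracted from the appended path column, it is just string parsing. A hypothetical sketch in plain JavaScript, assuming the parent folder name distinguishes the two sets of files (the folder layout and function name are illustrative, not from the workflow):

```javascript
// Derive a source identifier from a file path, assuming the parent
// folder name tells the two sets of CSV files apart.
// Hypothetical example: "/data/set_a/report1.csv" -> "set_a"
function sourceFromPath(path) {
  const parts = path.split("/");                 // split into path segments
  return parts.length >= 2 ? parts[parts.length - 2] : "unknown";
}
```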

Something like this would replace the first portion of your workflow:

The red portion of your workflow would be replaced by the green portion of my workflow.

Here’s a better view of it:

The changes are:

  1. No need for the Path to String (Variable) and the Constant Value Column nodes to get the path. I use the built-in option of the CSV Reader node to automatically include the path.

  2. I added the source of the files via the Constant Value Column:
    Top source:

Bottom source:

  3. In your GroupBy, you just need to add the column “Source” as an additional column to group by

  4. We then split the rows based on Source, where the top and bottom split records should give you the same results as your original GroupBy nodes (Node 163 and Node 166)

The rest remains the same.

A side note:

You don’t need to link Node 138 to Node 167. You have already linked Node 138 to Node 136, meaning that any node after Node 136 (downstream) will inherit the variables that exist at that point.


Thank you so much for those tips @bruno29a !
I really didn’t know about the path option on the CSV Reader, so I made some adjustments and it is shorter now:

I had built it that way based on YouTube videos, and it was very hard for me to create this workflow; it’s amazing to see an easier way to do this.

Also, I don’t have to distinguish the files. These reports are accounting bookings from some areas of the company where I work, and SAP can’t export a report covering an entire fiscal year (October to September), so I have to extract these two reports and join them to get a full year of information.
So, is there a better option than the Joiner to get the information below?

Thank you again! I’m learning a lot from it


Hi @acout0 , yes, the Joiner is perfectly fine in this case if you want them on the same line.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.