Why would random leading zero be dropped only on some items?

Hi and thanks for any input,

You see on my image below that some of the items contain a leading zero and others do not.

In the input file for this data, all items do have a leading zero and I have verified such.

Any reason why sometimes I don’t get the leading zero brought in and sometimes I do?

thanks again for any insight.

What node do you use to load the data? My guess is that it could be an encoding issue.


It is loaded from Snowflake Connection node. See image below. Maybe the “Group By” node is causing this?

Hello @wethernt ,
It seems there’s a difference between the input value and the displayed value at the origin. Your field is originally stored as text, and you are currently importing the stored input values rather than the displayed ones.

You can fix it at the origin, or emulate the displayed text in KNIME. To populate the missing zeroes, you can use the ‘String Manipulation’ node to pad to a fixed length (e.g. 10):

padLeft($numbers_column$, 10, "0")
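To see what the padding does outside of KNIME, here is a minimal plain-Python sketch of the same fixed-width zero padding; the sample values are hypothetical, not taken from the poster’s data:

```python
# Plain-Python equivalent of padLeft($numbers_column$, 10, "0"):
# left-pad each value with zeros to a fixed width of 10.
values = ["123456789", "0123456789", "42"]  # hypothetical sample items

padded = [v.rjust(10, "0") for v in values]
print(padded)  # ['0123456789', '0123456789', '0000000042']
```

Values that already have the full width are left unchanged, which is why padding to a known fixed length is safe here.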


Thank you for your help. I have been using KNIME for a couple of years, but I am no more than an intermediate user, or perhaps an advanced beginner. Thanks for the help.

If my input is an Excel file, are you saying I can set the column as “Text” before it goes to Snowflake?

Aside from this question, I tried the String Manipulation node per your suggestion (I also tried it a few different ways yesterday). The output remains identical to my original image, with the leading zero still missing in spots.

Thanks again - sort of stumped.

Hello @wethernt, I’m a KNIME advanced beginner as well.

If your origin is Excel, bear in mind that data is stored on a cell-by-cell basis. I’m not an expert in Snowflake either, but I’ve been converting complex data from Excel quite often. Excel users tend to be very creative.

A cell in Excel can have a numeric format yet store a numeric text item (including leading and trailing whitespace) while still behaving as a number; there are different levels of storage in a cell (stored vs. displayed). A common ETL pattern when reading from Excel is to read the column twice, first as number and then as text, and fill the resulting numeric nulls with the text values; that is where selective leading zeroes get lost. It’s hard to say without seeing your source data and how the Snowflake import works…
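The double-read pattern described above can be sketched in a few lines of Python; the function name and sample cells are illustrative only, an assumption about how such an importer might behave:

```python
def coalesce_reads(raw_cells):
    """For each raw Excel cell string, prefer a numeric parse and fall back
    to the trimmed text when the cell is not numeric. Note how the numeric
    path drops leading zeroes - which is exactly where they get lost."""
    result = []
    for cell in raw_cells:
        text = cell.strip()
        try:
            result.append(str(int(text)))   # numeric read: '0123' -> '123'
        except ValueError:
            result.append(text)             # text read keeps the string as-is
    return result

print(coalesce_reads([" 0123 ", "ABC07", "0042"]))  # ['123', 'ABC07', '42']
```

Cells that happen to parse as numbers come out without their leading zeroes, while non-numeric cells keep them, matching the “some items have the zero, some don’t” symptom.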

So it makes more sense to concentrate on populating the leading zeroes in your KNIME String column. The other options sound like wasted time to me.

Have you checked for trailing whitespaces in your data?

	padLeft(replaceChars($numbers_column$, " ", ""), 10, "0")


P.S. Other encoded characters could be hidden in there as well. If you can provide a sample of your column, we can try to help.

Thanks. Yes, I did try this with regard to spaces in the data, both yesterday and just now with your statement. The result is the same: the data still looks like the image in my initial post above. It’s strange, because even if the data is missing a leading zero here and there, I have no clue why padLeft and the other things I’ve tried in the String Manipulation node seem to have no impact.

A different approach: select only the numeric characters with a regex:

	padLeft(regexReplace($numbers_column$, ".*?(\\d+).*", "$1"), 10, "0")
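For reference, here is the same extract-digits-then-pad logic as a small Python sketch; the function name and sample input are assumptions for illustration:

```python
import re

# Keep only the first run of digits, then left-pad to width 10 -
# mirrors padLeft(regexReplace($numbers_column$, ".*?(\\d+).*", "$1"), 10, "0").
def digits_padded(value, width=10):
    extracted = re.sub(r".*?(\d+).*", r"\1", value, flags=re.DOTALL)
    return extracted.rjust(width, "0")

print(digits_padded("ID:123456789x"))  # '0123456789'
```

This also discards any stray non-digit characters around the number, which plain whitespace removal would miss.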



Well, thanks so much, but sorry to say nothing changed when I ran that setup in the String Manipulation node.

Perhaps I can mock something up and post it for further comment and review.


Ok, my idiot self has found the problem: when I deployed the recommendation(s), I had the box checked in the node to append a new column. When I reported back here, I was looking at the existing column, NOT the new column. The new column does in fact have the leading zero.

Please accept my apologies and know that I thank you for your help. I will mark the final suggestion as solution.


Don’t blame yourself. It happens to all of us.
I’m glad that you found a solution.


