Yes, tokenization plays an important role here. Punctuation erasure is useful if you have, e.g., a token "and.". This token would not be found and filtered by the stop word filter, because the filter uses exact matching ("and" != "and."). Removing the punctuation turns the token "and." into "and", which can then be filtered by the stop word filter.
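To make the exact-match behaviour concrete, here is a minimal Python sketch. It is not the KNIME implementation; the stop word set and the strip_punctuation helper are just made up for illustration:

```python
import string

STOP_WORDS = {"and", "the", "of"}  # illustrative stop word list

def strip_punctuation(token: str) -> str:
    """Mimic punctuation erasure: remove punctuation characters from a token."""
    return token.translate(str.maketrans("", "", string.punctuation))

tokens = ["apples", "and.", "oranges"]

# Without punctuation erasure: "and." != "and", so the stop word survives.
print([t for t in tokens if t not in STOP_WORDS])
# ['apples', 'and.', 'oranges']

# With punctuation erasure applied first, "and." becomes "and" and is filtered.
cleaned = [strip_punctuation(t) for t in tokens]
print([t for t in cleaned if t not in STOP_WORDS])
# ['apples', 'oranges']
```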
Tokenization is applied only by the Strings to Document node, the parser nodes, and the tagger nodes. Other nodes, such as the filter nodes, do not change the tokenization. Replacing characters in tokens will not change the tokenization either, no matter whether there are whitespaces in the resulting token or not.
What you can do to affect the tokenization via replacements is to apply the replacements to the string columns first (e.g. replace "I'UE" with "I UE") and then apply the Strings to Document node, which does the tokenization. "I" and "UE" will then be treated as two tokens instead of "I'UE" as one token.
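Here is a small Python sketch of that difference, again only an illustration, with a plain whitespace tokenizer standing in for the Strings to Document node:

```python
def tokenize(text: str) -> list[str]:
    """Whitespace tokenization, standing in for the Strings to Document node."""
    return text.split()

raw = "I'UE is one token"

# Replacement AFTER tokenization: the token boundaries are already fixed,
# even though the replacement introduces a whitespace inside the token.
tokens = [t.replace("'", " ") for t in tokenize(raw)]
print(tokens)
# ['I UE', 'is', 'one', 'token']  <- "I UE" is still a single token

# Replacement BEFORE tokenization: the tokenizer sees the whitespace
# and produces two separate tokens.
print(tokenize(raw.replace("'", " ")))
# ['I', 'UE', 'is', 'one', 'token']
```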
I hope this helps.
Cheers, Kilian