Latent semantic analysis (LSA)

Whatever algorithm you end up applying, all of the documents, including the one to be matched, should be preprocessed first (lower-casing, punctuation removal, stop-word filtering, stemming, and the like). I suppose that has already been done.
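For illustration, here is a minimal Python sketch of the kind of preprocessing meant here. The stop-word list and suffix rules are placeholder assumptions for the example, not what a real preprocessing pipeline would ship with:

```python
import re

# Placeholder stop-word list; a real pipeline uses a much fuller resource.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are"}

def preprocess(text: str) -> list[str]:
    """Lower-case, strip punctuation, drop stop words, crude suffix stemming."""
    tokens = re.findall(r"[a-z]+", text.lower())
    tokens = [t for t in tokens if t not in STOP_WORDS]
    # Very rough stemming: strip a few common English suffixes.
    return [re.sub(r"(ing|ed|es|s)$", "", t) for t in tokens]

print(preprocess("The matched documents are being preprocessed."))
# ['match', 'document', 'be', 'preprocess']
```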

If the String Distances node is connected to Similarity Search, you'll have a wider range of string distance functions at your disposal. Which one to choose depends on considerations such as the importance of word order, word frequency, and document length.
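To make the idea concrete, here is a small sketch of what such a similarity search boils down to, using Levenshtein distance as one representative string distance function; the corpus and query are made-up placeholders:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

corpus = ["payment received", "payment rejected", "refund issued"]
query = "payment recieved"   # note the typo
best = min(corpus, key=lambda doc: levenshtein(query, doc))
print(best)   # payment received
```

Note that Levenshtein operates on raw character sequences, which is exactly why word order and document length matter for the choice of function.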

If this simple approach fails, you could try the Bag of Words approach. For your application you would want to boost rarer words rather than only frequent ones, so the combined term weight TF×IDF is appropriate for building a term dictionary. After term feature selection, Similarity Search can be applied using a numeric distance function. Which function to choose depends on whether bigger differences should be penalised more heavily or not.
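Here is a minimal sketch of that pipeline, assuming the documents have already been preprocessed into token lists (the toy corpus is an assumption for the example). Euclidean distance is used because it penalises larger per-term differences quadratically, whereas e.g. Manhattan distance would not:

```python
import math
from collections import Counter

def tfidf_vectors(docs: list[list[str]]) -> list[dict[str, float]]:
    """Weight each term by TF x IDF over the whole collection."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def euclidean(u: dict[str, float], v: dict[str, float]) -> float:
    """Numeric distance that penalises bigger differences more heavily."""
    terms = u.keys() | v.keys()
    return math.sqrt(sum((u.get(t, 0.0) - v.get(t, 0.0)) ** 2 for t in terms))

docs = [["payment", "received"], ["payment", "rejected"], ["refund", "issued"]]
query = ["payment", "received"]

# The query is part of the collection, so it contributes to the IDF counts too.
vecs = tfidf_vectors(docs + [query])
best = min(range(len(docs)), key=lambda i: euclidean(vecs[i], vecs[-1]))
print(docs[best])   # ['payment', 'received']
```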

The advantage of the Bag of Words approach is that you'll be able to trace the search results back to the source terms. This will give you valuable insight into the adequacy of the preprocessing and term dictionary steps.
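As a hypothetical illustration of that traceability, you can rank the terms by how much each one contributes to the distance between the query and a candidate; the toy vectors below stand in for real TF×IDF output:

```python
def term_contributions(u: dict[str, float],
                       v: dict[str, float]) -> list[tuple[str, float]]:
    """Per-term squared differences, largest first: which terms drive the distance?"""
    terms = u.keys() | v.keys()
    return sorted(((t, (u.get(t, 0.0) - v.get(t, 0.0)) ** 2) for t in terms),
                  key=lambda kv: -kv[1])

# Toy TF x IDF vectors for a query and one candidate document.
query_vec = {"payment": 0.21, "received": 0.35}
cand_vec  = {"payment": 0.21, "rejected": 0.35}
for term, contrib in term_contributions(query_vec, cand_vec):
    print(f"{term:10s} {contrib:.4f}")
```

Terms with a large contribution point you straight at the preprocessing or dictionary decision that produced them.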

Only if all of this fails would I go for more complex methods such as LSA.