Crawl websites not just single pages

Hi all,

Is there a way to have KNIME and/or Palladian crawl an entire site by following the links it comes across in the HTML itself, and then store the HTML content of each page as an individual document?

I just want to set some outer limits, like whether to follow external domain links, and let KNIME build a dataset for me. The "parse webpage" example builds customised URLs, whereas I want the crawler to roam around and follow the links it finds by itself.

I'm only a new user of KNIME, so maybe I'm not thinking about the problem in the KNIME way.

Not automatically. You need to extract the hyperlinks contained in a page yourself and loop over them until a breaking condition is reached. You can do this in KNIME using loops, e.g. the Recursive Loop Start node.
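To make the idea concrete, here is a minimal sketch of that loop outside of KNIME, in plain Python with only the standard library: extract the links from each fetched page, queue the ones you haven't seen, and stop at a breaking condition (a page limit, and optionally staying on the starting domain). All names here (`crawl`, `extract_links`, `max_pages`, `follow_external`) are illustrative, not part of any KNIME or Palladian API.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))


def extract_links(html, base_url):
    """Return all hyperlinks in an HTML string as absolute URLs."""
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links


def crawl(start_url, max_pages=50, follow_external=False):
    """Breadth-first crawl; returns a dict mapping each URL to its HTML.

    Breaking conditions: the queue runs dry, or max_pages is reached.
    With follow_external=False, links leaving the start domain are skipped.
    """
    start_domain = urlparse(start_url).netloc
    queue, seen, pages = deque([start_url]), {start_url}, {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except Exception:
            continue  # skip unreachable or non-decodable pages
        pages[url] = html  # store each page's content individually
        for link in extract_links(html, url):
            if not follow_external and urlparse(link).netloc != start_domain:
                continue  # outer limit: stay on the starting domain
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return pages
```

In KNIME itself, the same structure maps to a Recursive Loop: the link-extraction step feeds the next iteration's URL list, and the loop end checks the breaking condition.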

Cheers, Kilian