@mwiegand using some sort of trigger or blocking file to steer executions (or prevent them from running at the same time) is one way to work around such limitations in extreme cases.
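If you do go down that road, a minimal sketch of such a guard could look like the following (the lock-file name and the guarded step are just placeholders, not anything specific to your workflow):

```python
import os

# Hypothetical lock-file name - adjust to wherever your workflow actually runs
LOCK_FILE = "workflow.lock"

def run_guarded(step):
    """Run `step` only if no other execution currently holds the lock file."""
    try:
        # O_CREAT | O_EXCL creates the file atomically and fails if it already exists
        fd = os.open(LOCK_FILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        print("Another execution is already running - skipping this one.")
        return
    try:
        step()
    finally:
        os.close(fd)
        os.remove(LOCK_FILE)

# Example: guard a dummy step
run_guarded(lambda: print("doing the actual work"))
```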
In general, I would strongly advise against writing to the same file in parallel, even if you manage to pull it off initially and get results. I would only use dedicated systems such as databases that are built to handle parallel write operations. Depending on the operating system, the environment, an aggressive virus scanner, or small timing differences, you will most likely run into trouble in a real-life scenario.
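As a rough illustration of what I mean (SQLite is only a stand-in for a proper database here, and the file, table and column names are made up), each worker can open its own connection and let the database serialize the concurrent inserts instead of several processes fighting over one file handle:

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor

DB = "results.db"  # stand-in only; a server-based database would handle this even better

def init_db():
    with sqlite3.connect(DB) as con:
        con.execute("CREATE TABLE IF NOT EXISTS results (worker INTEGER, value TEXT)")

def write_result(worker_id):
    # Each worker uses its own connection; the database engine serializes
    # the concurrent INSERTs safely (timeout covers brief lock contention).
    with sqlite3.connect(DB, timeout=30) as con:
        con.execute(
            "INSERT INTO results (worker, value) VALUES (?, ?)",
            (worker_id, f"result from worker {worker_id}"),
        )

init_db()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(write_result, range(8)))
```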
Deliberately trying to break a system can be interesting for testing fringe cases, but in most production environments you are likely setting yourself up for problems that are hard to detect and even harder to fix. Unless it is really important to squeeze the last microseconds out of a process (in which case you might need advanced in-memory techniques), it is best to keep things easy to track: give the files unique names and rely on a loop or something similar to collect them safely (a minimal sketch follows below). Built-in pauses and caches can also go a long way towards improving stability, as can retry and try/catch constructs like the ones you have already used.
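Something along these lines, assuming a hypothetical "parts" folder and CSV output purely for illustration:

```python
import csv
import glob
import os
import time
import uuid

def write_part(rows, out_dir="parts"):
    """Each execution writes its own uniquely named file - nobody shares a file."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, f"part_{uuid.uuid4().hex}.csv")
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    return path

def collect(out_dir="parts", retries=3, delay=1.0):
    """Collect all part files in a plain loop, with a retry/catch around each read."""
    merged = []
    for path in sorted(glob.glob(os.path.join(out_dir, "part_*.csv"))):
        for _attempt in range(retries):
            try:
                with open(path, newline="") as f:
                    merged.extend(csv.reader(f))
                break
            except OSError:
                time.sleep(delay)  # short built-in pause, then try again
    return merged
```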
The question when it comes to robust processes is not whether you are paranoid - the question is: are you paranoid enough …
On another note: I currently get an error message when trying to install the NodePit Power Nodes and do not have the energy to investigate right now. I would just use the Cache node instead.