Is there a way to see the full prompt/message that the LLM Prompter node actually sends to the model? I’ve been experimenting with some local GPT4All models, which seem to use different prompt templates, and I’m finding it hard to understand exactly what is going on. I can add templates in the GPT4All node, create a prompt in a table, and also add system messages in the Prompter node. If I could see exactly what gets sent to the model, it would be much easier to work out how to structure things like examples for in-context learning.
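To make it concrete, here is a rough sketch of what I *imagine* is happening under the hood: the system message, the template, and any few-shot examples get concatenated into one final string before it reaches the model. All the names and the instruct-style template below are my own assumptions, not KNIME's or GPT4All's actual internals; it's exactly this assembled string that I'd like to be able to inspect.

```python
# Hypothetical sketch of how the final prompt might be assembled from a
# system message, a GPT4All-style instruct template, and few-shot examples.
# The template format and all names here are assumptions for illustration.

SYSTEM_MESSAGE = "You are a helpful classifier. Answer with one word."
TEMPLATE = "### Human:\n{prompt}\n### Assistant:\n"  # assumed template shape

# Few-shot examples for in-context learning (made-up data).
FEW_SHOT_EXAMPLES = [
    ("The movie was fantastic!", "positive"),
    ("Terrible service, never again.", "negative"),
]

def build_prompt(user_input: str) -> str:
    """Assemble the full text the model might actually receive."""
    parts = [SYSTEM_MESSAGE, ""]
    # Each example is rendered through the template, followed by its answer.
    for text, label in FEW_SHOT_EXAMPLES:
        parts.append(TEMPLATE.format(prompt=text) + label)
    # The real question goes last, leaving the assistant turn open.
    parts.append(TEMPLATE.format(prompt=user_input))
    return "\n".join(parts)

print(build_prompt("Pretty average overall."))
```

If something like this is roughly right, seeing the real assembled string would tell me whether my examples and system message are landing in the template the way I expect.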