@Feynix there was a somewhat similar task where raw strings were fed in chunks to an LLM, asking for a structured return. Maybe you can adapt that approach. The key will be engineering the prompt so the model knows exactly what to do.
In that case it uses a local LLM (Llama 3.2), but it should also be possible to do this by connecting to ChatGPT via KNIME nodes.
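A minimal sketch of the chunk-and-prompt idea in Python (the chunk size, prompt wording, and JSON schema are just placeholders; the actual model call would go through the KNIME LLM nodes or a local Llama 3.2 endpoint):

```python
def chunk_text(text, max_chars=200):
    """Split a raw string into chunks of at most max_chars,
    preferring to break on whitespace."""
    chunks = []
    while text:
        if len(text) <= max_chars:
            chunks.append(text)
            break
        cut = text.rfind(" ", 0, max_chars)
        if cut == -1:  # no space found, hard cut
            cut = max_chars
        chunks.append(text[:cut])
        text = text[cut:].lstrip()
    return chunks

def build_prompt(chunk):
    """Engineer a prompt that tells the model to return structured JSON
    (the extraction task and schema here are only an example)."""
    return (
        "Extract every person name and date from the text below and "
        'return ONLY valid JSON of the form {"names": [...], "dates": [...]}.\n\n'
        f"Text:\n{chunk}"
    )

raw = "Some long raw input string ..."
for c in chunk_text(raw, max_chars=80):
    prompt = build_prompt(c)
    # send `prompt` to the local Llama 3.2 (or ChatGPT via KNIME nodes)
    # and parse the JSON the model returns
```

The structured-return part then comes down to parsing the model's JSON per chunk and collecting the results into a table.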