Feature request: Enable output of Agent Chat View / Agent Chat Widget conversation history as raw JSON

Hi all,

I have recently been experimenting with more complex agent setups (20+ tools) using smaller models (e.g. NVIDIA Nemotron 3 Nano, FunctionGemma 270M), and as of 5.9, with the random numbering of params removed, I have observed better performance.

For FunctionGemma I wanted to try fine-tuning for a specific use case, which requires example data for that use case. I wanted to generate this data using the agent while being steered by a superior model. To make something like this easier, I’d like to request an option to output the conversation history in “raw” JSON format, so that it is not necessary to use the Message Part Extractor node and then build the JSON structure for fine-tuning by hand.
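For context, the kind of record I’m ultimately after is roughly the common chat-style fine-tuning format. This is just an illustration of that format, with made-up field names and values, not what KNIME produces internally:

```python
# Sketch of the kind of fine-tuning record I'd like to end up with.
# The schema follows the widely used chat-message format; the exact
# field names and the tool-call shape are my assumptions.
import json

record = {
    "messages": [
        {"role": "user", "content": "What's the weather in Berlin?"},
        {"role": "assistant", "content": None,
         "tool_calls": [{"name": "get_weather", "arguments": {"city": "Berlin"}}]},
        {"role": "tool", "content": "{\"temp_c\": 18}"},
        {"role": "assistant", "content": "It's currently 18 °C in Berlin."},
    ]
}
print(json.dumps(record, indent=2))
```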

I assume that “under the hood” the raw output is already transformed into the Message data type (which is probably just a representation of the raw JSON), so maybe it’s not that big of a deal.

I envision a simple checkbox, “output message history as JSON”, and then just “one cell” in the output table.

Hello @MartinDDDD,

That’s an interesting use case, and I understand the desire for raw output.
A single cell containing the entire conversation doesn’t really fit into the tables we have at the moment. Would a JSON cell per message be OK instead? That could be an additional column in the conversation output.
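To make that concrete, a per-message JSON cell might look roughly like this. The field names here are illustrative assumptions, not a committed schema:

```python
# Rough sketch of what a single per-message JSON cell could contain.
message_cell = {
    "role": "ai",                 # e.g. "user", "ai", or "tool"
    "content": "It's currently 18 °C in Berlin.",
    "tool_calls": [],             # filled when the model invokes a tool
}
```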


I think that could work. Could you just put that into a list using group by and then save it as JSON?

I’ve given this another thought. I think another requirement for my use case is the ability to identify “conversation pairs”, i.e. a user message and the corresponding model response (or the chain of responses to it). Right now I’m not 100% sure whether there is anything in the response that comes back from the LLM that would allow this, but it would be incredibly helpful.

> Could you just put that into a list using group by and then save it as JSON?

You’d probably need the Table to JSON node in order to get a single JSON cell.
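If you prefer scripting, the same aggregation is a few lines in a Python Script node. A sketch, assuming one JSON string per row in a hypothetical “message” column:

```python
# Sketch: collapse a column of per-message JSON strings into one JSON
# array, i.e. roughly what GroupBy + Table to JSON would give you.
# The "message" column and its contents are assumptions for illustration.
import json

rows = [
    '{"role": "user", "content": "Hi"}',
    '{"role": "ai", "content": "Hello! How can I help?"}',
]

conversation = [json.loads(r) for r in rows]
print(json.dumps({"messages": conversation}, indent=2))
```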

> I think another requirement for my use case is the ability to identify “conversation pairs”, i.e. a user message and the corresponding model response (or the chain of responses to it).

The role, i.e. user or AI, is part of the messages, and the user message is included in the output. It should be possible to construct the necessary pairs with a few nodes.

Even more sophisticated scenarios where both input and output consist of multiple messages are possible.
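As a rough sketch of the pairing logic in plain Python (this is not an existing node, just an illustration, assuming each message carries a role field as described above):

```python
# Sketch: split a flat message list into (input, response-chain) pairs.
# A user message after a completed response chain starts a new pair;
# consecutive user messages are grouped into one multi-message input,
# and all following non-user messages (AI replies, tool results) form
# the corresponding response chain.
def to_pairs(messages):
    pairs, current = [], None
    for msg in messages:
        if msg["role"] == "user":
            if current is None or current["responses"]:
                if current is not None:
                    pairs.append(current)
                current = {"input": [], "responses": []}
            current["input"].append(msg)
        elif current is not None:
            current["responses"].append(msg)
    if current is not None:
        pairs.append(current)
    return pairs

conversation = [
    {"role": "user", "content": "Weather in Berlin?"},
    {"role": "ai", "content": "Let me check."},
    {"role": "tool", "content": '{"temp_c": 18}'},
    {"role": "ai", "content": "18 °C and sunny."},
    {"role": "user", "content": "Thanks!"},
]
for pair in to_pairs(conversation):
    print(len(pair["input"]), "input message(s) ->",
          len(pair["responses"]), "response message(s)")
```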
