Yup, looks like parsing that 3 GB JSON file requires more than 16 GB of memory (on a side note, I tried with 28 GB and it still wasn't enough). I suppose the JSON Reader node, in its current state, isn't meant to parse files of that size. As a workaround, you could read the file line by line using a Line Reader node and then parse the JSON objects manually. I've attached a workflow that should do that legwork, though it won't exactly be fast; it can probably be heavily optimized.
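Outside of KNIME, the same line-by-line idea can be sketched in Python, assuming the file is newline-delimited JSON (one object per line) — that assumption may not hold for every large dump, so check your file's layout first. The file name and sample data below are just for illustration:

```python
import json
import os
import tempfile

# Create a small newline-delimited JSON file to stand in for the real 3 GB one.
path = os.path.join(tempfile.mkdtemp(), "big.json")
sample = [{"id": i, "value": i * 2} for i in range(3)]
with open(path, "w", encoding="utf-8") as f:
    for obj in sample:
        f.write(json.dumps(obj) + "\n")

# Parse one line at a time, so memory use stays bounded by the largest
# single object instead of the whole file.
records = []
with open(path, "r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:  # skip blank lines
            records.append(json.loads(line))

print(len(records))  # number of parsed objects
```

In practice you would process or write out each record inside the loop rather than accumulate them all in a list, otherwise memory grows with the file again.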
Hi @marc-bux,
thank you for this interesting workaround.
It takes several hours to complete the process, but it seems to work well. So far I have only tested it with a limited number of rows.