Then I input my OpenAI API key and try to run this workflow. Everything is green; however, after I execute my LLM Prompter node I get only this message and no results: “To show the port output, please execute the selected node”.
Glad to see you are making progress - and thanks a lot for the recording.
I think the message that you are seeing is a generic one that you see in the table preview, if a node produces an error. Could you hover over the “red X” on the LLM Prompter and share what the error message says there?
You should also see the error when you open the Workflow Monitor in the panel on the left:
This is my error: Execute failed: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
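For what it's worth, this 429 is not a rate limit in the usual sense: the `insufficient_quota` code means the account or project has no remaining credit. A small Python sketch (the helper function and its wording are my own illustration, not part of any KNIME node) that distinguishes the two common 429 causes from the error payload:

```python
# Interpret an OpenAI 429 error payload. The payload below is the one
# from the post above; the helper function is illustrative only.

def explain_openai_429(payload: dict) -> str:
    err = payload.get("error", {})
    code = err.get("code")
    if code == "insufficient_quota":
        # 429 here means no remaining credit/quota, not "too many requests"
        return "insufficient_quota: no credit left on the account or project"
    if code == "rate_limit_exceeded":
        return "rate_limit_exceeded: too many requests, retry with backoff"
    return f"unknown 429 cause: {code}"

payload = {
    "error": {
        "message": "You exceeded your current quota, please check your "
                   "plan and billing details.",
        "type": "insufficient_quota",
        "param": None,
        "code": "insufficient_quota",
    }
}

print(explain_openai_429(payload))
# -> insufficient_quota: no credit left on the account or project
```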
I went to check my OpenAI project API page and it shows no usage, so I'm not sure why there is a quota limit error. The permission for my API key is set to “All”. Strange.
I've also attached my credential configuration node. Did I set anything up wrongly?
Hmm, OK, at least we now know it's nothing technical on the KNIME side of things (for reference, I just ran a query to gpt-4o-mini using my credentials and got no error on my end). I'm also on a personal paid plan (ChatGPT Plus…) and use pay-as-you-go, so I have to prepay and can then consume tokens.
Is the error linked to the base URL for the API when I select the GPT-4 model? Any idea which OpenAI base URL I should select for GPT-4?
Can you open the table and increase the row width to verify that the cells are indeed empty strings? I want to rule out that there are just “\n” (newline characters) at the top of the response. Normally, if a cell is truly empty, you see a “missing value” indicator (red question mark).
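One quick way to check this outside KNIME (a plain-Python illustration; `resp` is just a stand-in for a suspicious cell value):

```python
# A "blank-looking" cell may start with newlines rather than being empty.
resp = "\n\nHello"          # stand-in for a suspicious cell value
print(repr(resp))           # repr() reveals hidden newline characters
print(resp.strip() == "")   # False: the string is not actually empty
```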
There are different APIs for chat models and instruct models - for chat models you have to use the Chat Model Prompter (with an OpenAI Chat Model Connector before it) - this is the case for gpt-4o-mini:
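To illustrate the split (a sketch with example model names I'm assuming, not an exhaustive mapping): chat models such as gpt-4o-mini go to the chat completions endpoint, while the legacy instruct models use the plain completions endpoint.

```python
# Illustrative mapping of model family to OpenAI API endpoint.
# The instruct list is a small example set, not a complete catalogue.

def endpoint_for(model: str) -> str:
    instruct_models = {"gpt-3.5-turbo-instruct", "davinci-002", "babbage-002"}
    if model in instruct_models:
        return "/v1/completions"        # legacy instruct/completions API
    return "/v1/chat/completions"       # chat models (gpt-4o, gpt-4o-mini, ...)

print(endpoint_for("gpt-4o-mini"))
# -> /v1/chat/completions
```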
Yes, you are right; I just wasn't able to see it in this view. When I change the data renderer to “String” instead of “Multi-line String”, I can see it now.
A follow-up question: currently I am using the Table Creator node for (A), and I have to manually enter the data for the task and prompt columns. I would like to replace it with an Excel file containing these two columns so I can dynamically update the data. Which nodes should I use?
Additionally, the output “Response” column is currently shown in the result table of the LLM Prompter node. Instead, can I have the output written to an Excel file? Which node should I use for that?