Instructions:
Client side:
1. Add the Stream element to your Bubble page
2. Add the Generate tokens action to a workflow
3. In the same workflow, add the Call GPT action
4. Populate the token and cipher fields with the result of the Generate tokens action
5. Populate the fields in the Call GPT action as per the documentation (this is important)
6. Add the "Stream complete" event. It triggers when GPT has finished and gives you access to the "Streamed response" value.
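Under the hood, a streamed response arrives as incremental text chunks that are concatenated into the final "Streamed response" value. A minimal sketch of that accumulation, using hypothetical chunk data rather than a live API call:

```python
def accumulate_stream(chunks):
    """Concatenate streamed text chunks into one final response string,
    the way a streamed completion is assembled piece by piece."""
    response = ""
    for chunk in chunks:
        response += chunk  # each chunk is a small piece of the reply
        # a UI element bound to the stream would re-render here
    return response

# Hypothetical chunks as they might arrive from a streamed completion
chunks = ["Hello", ", ", "world", "!"]
print(accumulate_stream(chunks))  # -> Hello, world!
```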
Note 1: If "stream" is set to "yes", you can bind this exposed state to any text element (or other appropriate element), which will then take on the value of GPT's streamed response as it arrives.
Note 2: Function calling is optional, but if you use it you cannot stream the response. This is a GPT limitation.
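Since function calling and streaming are mutually exclusive here, any request that defines functions has to be sent without streaming. A hypothetical request-builder illustrating that rule (the model name and tool definition are illustrative, not values the plugin prescribes):

```python
def build_request(messages, tools=None):
    """Build a chat-completion request body; disable streaming
    whenever function/tool definitions are supplied."""
    payload = {"model": "gpt-4o", "messages": messages}  # illustrative model
    if tools:
        payload["tools"] = tools
        payload["stream"] = False  # streaming not allowed with function calling
    else:
        payload["stream"] = True
    return payload

req = build_request(
    [{"role": "user", "content": "What's the weather?"}],
    tools=[{"type": "function",
            "function": {"name": "get_weather", "parameters": {}}}],
)
print(req["stream"])  # -> False
```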
Note 3: The exposed state "Call cost" is calculated from the model you are using, the number of input tokens, and the number of output tokens, priced as per the OpenAI pricing documentation. Pricing is in USD.
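The cost calculation reduces to: input tokens times the model's input rate, plus output tokens times its output rate. A sketch with illustrative per-million-token prices (placeholders, not current OpenAI rates; check the pricing page for real figures):

```python
# Illustrative USD prices per 1M tokens; real rates vary by model
# and change over time -- see the OpenAI pricing documentation.
PRICES = {"example-model": {"input": 2.50, "output": 10.00}}

def call_cost(model, input_tokens, output_tokens):
    """Compute USD cost from token counts and per-million-token rates."""
    p = PRICES[model]
    return (input_tokens / 1_000_000) * p["input"] \
         + (output_tokens / 1_000_000) * p["output"]

# 1,000 input tokens and 500 output tokens at the rates above
print(f"{call_cost('example-model', 1000, 500):.4f}")  # -> 0.0075
```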
Note 4: Pass-through text is documented in the action itself
Check out the service URL for much more detailed instructions:
https://llm-connector-demo.bubbleapps.io/version-test