OPENAI - GPT STREAMING WITH RAG
====================================
OPENAI - GPT STREAMING WITH RAG - ELEMENT DESCRIPTION
------------------------------------------------------------------------------
OPENAI - GPT STREAMING WITH RAG provides GPT streaming with web search and app data retrieval capabilities.
STEP-BY-STEP SETUP
--------------------------------
0) Register on OpenAI and get your OPENAI API KEY.
1) Implement the ERROR workflow as per the demo so any errors are reported to your app.
2) Register on plugins.wiseable.io. Create a new Credential which associates your BUBBLE APP URL and your OPENAI API KEY.
The registration service will generate your PUBLIC ACCESS KEY. This key serves as a secure proxy for your real API key. It allows your application to communicate with the service without exposing your real API key. Since this PUBLIC ACCESS KEY is explicitly tied to your registered BUBBLE APP URL, it can only be used from that domain, ensuring that even if the key is publicly visible, it remains safe and cannot be misused by unauthorized sources.
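The domain binding described above can be sketched as follows. This is an illustrative model only: the credential fields, key values, and function names are hypothetical, and the actual plugins.wiseable.io implementation is not published.

```typescript
// Hypothetical shape of a registered credential on the proxy service.
type Credential = { publicKey: string; appUrl: string; openAiKey: string };

const credentials: Credential[] = [
  { publicKey: "pub_123", appUrl: "https://myapp.bubbleapps.io", openAiKey: "sk-..." },
];

// Resolve the real API key only when the request's origin matches the
// app URL registered with that public key; otherwise reject the request.
function resolveApiKey(publicKey: string, origin: string): string | null {
  const cred = credentials.find((c) => c.publicKey === publicKey);
  if (!cred || cred.appUrl !== origin) return null;
  return cred.openAiKey;
}

console.log(resolveApiKey("pub_123", "https://myapp.bubbleapps.io")); // "sk-..."
console.log(resolveApiKey("pub_123", "https://evil.example.com"));    // null
```

This is why a leaked PUBLIC ACCESS KEY is harmless on its own: without a request originating from the registered BUBBLE APP URL, the proxy never reveals the real API key.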
3) In the Plugin Settings, enter your PUBLIC ACCESS KEY generated at the previous step.
4) Add the OPENAI - GPT STREAMING WITH RAG element to the page on which the chat must be integrated. Select the RESULT DATA TYPE as Threads (OpenAI).
5) Add an element supporting text input for the user prompt.
6) Integrate the logic into your application using the following OPENAI - GPT STREAMING WITH RAG element's states and actions:
FIELDS:
- RESULT DATA TYPE : Must always be selected as Threads (OpenAI CHATGPT).
- MODEL NAME : Name of the CHATGPT model. See
https://platform.openai.com/docs/models. You may hide this data from eavesdroppers by using GET DATA FROM AN EXTERNAL API > OPENAI CHATGPT - ENCRYPT and setting this field to the result of that API.
- ROLE INFORMATION : Define the role of the AI assistant. You may hide this data from eavesdroppers by using GET DATA FROM AN EXTERNAL API > OPENAI CHATGPT - ENCRYPT and setting this field to the result of that API.
- APP DATA RETRIEVAL : Set to yes to activate data retrieval from your app when needed to answer the user prompt. Your app's DATA API must be enabled.
- MAX TOKENS : The maximum number of tokens to generate in the completion. The token count of your prompt plus MAX TOKENS can't exceed the model's context length.
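The MAX TOKENS constraint above can be illustrated with a small budget calculation. The context length used here is an example figure, not a claim about any specific model:

```typescript
// Largest MAX TOKENS value that still fits: the prompt's token count
// plus the completion budget must not exceed the model's context length.
function maxCompletionTokens(promptTokens: number, contextLength: number): number {
  return Math.max(0, contextLength - promptTokens);
}

// e.g. a 1,200-token prompt against a hypothetical 128,000-token context:
console.log(maxCompletionTokens(1200, 128000)); // 126800
```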
EVENTS :
- ERROR : Event triggered when an error occurs.
- STREAMING STARTED : Event triggered when the streaming starts.
- STREAMING ENDED : Event triggered when the streaming ends.
- CALL FUNCTION : Event triggered when the model requests a function call, as defined in the FUNCTIONS field of the SEND USER PROMPT action.
- THREADS TO SAVE : Event triggered when any thread has changed.
- READY : Event triggered when this element's fields have been set and the element is ready to serve requests.
EXPOSED STATES:
Use any element able to show/process the data of interest (such as a Group with a Text field) stored within the result of the following states of the OPENAI - GPT STREAMING WITH RAG element:
- ERROR : Error message upon Error event trigger.
- IS STREAMING : Returns true when streaming is in progress.
- FUNCTION NAME : Name of the function, set upon CALL FUNCTION event.
- FUNCTION ARGUMENTS : Arguments of the function, set upon CALL FUNCTION event.
- FUNCTION CALL ID : Unique identifier of the function call, set upon CALL FUNCTION event.
- LATEST STOP REASON : Latest stop reason of the AI engine, populated upon STREAMING ENDED event.
- LATEST INPUT TOKEN USAGE : Latest input token usage of the AI engine, populated upon STREAMING ENDED event.
- LATEST INPUT CACHED USAGE : Latest input cached token usage of the AI engine, populated upon STREAMING ENDED event.
- LATEST OUTPUT TOKEN USAGE : Latest output token usage of the AI engine, populated upon STREAMING ENDED event.
- LATEST OUTPUT REASONING TOKEN USAGE : Latest output reasoning token usage of the AI engine, populated upon STREAMING ENDED event.
- ALL THREADS : List of threads, each containing a list of roles and message contents.
- ALL THREADS (RAW DATA) : String containing all current threads in JSON format. You may use this string to load threads via the "Load Threads" action.
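The four LATEST * usage states can be accumulated in the STREAMING ENDED workflow to track total token consumption across a session. A minimal sketch, assuming you wire the element's states into a handler like this (the field names and sample numbers are illustrative):

```typescript
// Mirror of the element's LATEST * token usage states.
type Usage = { input: number; cachedInput: number; output: number; reasoning: number };

// Running totals, e.g. stored on a Bubble data type in a real app.
const totals: Usage = { input: 0, cachedInput: 0, output: 0, reasoning: 0 };

// Call this from the STREAMING ENDED workflow with the element's states.
function onStreamingEnded(latest: Usage): void {
  totals.input += latest.input;
  totals.cachedInput += latest.cachedInput;
  totals.output += latest.output;
  totals.reasoning += latest.reasoning;
}

// Two simulated completions:
onStreamingEnded({ input: 500, cachedInput: 100, output: 250, reasoning: 0 });
onStreamingEnded({ input: 800, cachedInput: 400, output: 300, reasoning: 0 });
console.log(totals.input, totals.output); // 1300 550
```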
ELEMENT ACTIONS - TRIGGERED IN WORKFLOW:
- SEND USER PROMPT : Send Prompt to the specified Thread ID, which is autogenerated if not set.
Inputs Fields :
- PROMPT : The user prompt.
- THREAD ID : Send the user prompt to this Thread ID. A valid Thread ID is one from the exposed state ALL THREADS. Autogenerated if not set.
- FUNCTIONS : Array containing function definitions. See
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#basic-concepts
- TEMPERATURE : The sampling temperature to use, between 0 and 2. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 for ones with a well-defined answer.
DATA RETRIEVAL SETTINGS
- WEB SEARCH : Allow models to search the web for the latest information before generating a response.
- VECTOR STORE IDS : List of Vector Store IDs of files previously uploaded to OpenAI.
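A FUNCTIONS array follows the JSON schema format used by OpenAI function calling (see the cookbook link above). The function name and parameters below are a hypothetical example, not part of the plugin:

```typescript
// Example function definition in the OpenAI function-calling schema:
// a name, a description, and a JSON Schema describing the parameters.
const functions = [
  {
    name: "get_current_weather",
    description: "Get the current weather for a given city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name, e.g. Paris" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["city"],
    },
  },
];

// Depending on how the field is wired in Bubble, the array may need to
// be serialized to a string before being passed to SEND USER PROMPT.
console.log(JSON.stringify(functions));
```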
- SET THREAD TITLE : Set a custom Thread Title for a given Thread ID.
Inputs Fields :
- THREAD ID : Set the title for this Thread ID. A valid Thread ID is one from the exposed state ALL THREADS.
- SEND FUNCTION RESULT : Send the function result. See
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#basic-concepts
Inputs Fields :
- FUNCTION CALL ID : Unique identifier of the function call from the FUNCTION CALL ID state, which is set upon the CALL FUNCTION event.
- FUNCTION RESULT : Result of the function to pass.
- TEMPERATURE : The sampling temperature to use, between 0 and 2. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 for ones with a well-defined answer.
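The full round trip between the CALL FUNCTION event and the SEND FUNCTION RESULT action can be sketched as follows. The dispatch table and weather lookup are hypothetical; in Bubble, the equivalent logic lives in your CALL FUNCTION workflow:

```typescript
// Hypothetical handlers, one per function name declared in FUNCTIONS.
const handlers: Record<string, (args: any) => string> = {
  get_current_weather: (args) => JSON.stringify({ city: args.city, tempC: 21 }),
};

// Simulated values of the exposed states set upon the CALL FUNCTION event.
const functionName = "get_current_weather";
const functionArguments = '{"city":"Paris"}';
const functionCallId = "call_abc123";

// 1) Look up and run the requested function with its parsed arguments.
const result = handlers[functionName](JSON.parse(functionArguments));

// 2) Pass the result back, paired with the original call ID
//    (mirrors the SEND FUNCTION RESULT action's input fields).
function sendFunctionResult(callId: string, functionResult: string) {
  return { functionCallId: callId, functionResult };
}

console.log(sendFunctionResult(functionCallId, result));
```

After the result is sent, the model resumes streaming its answer using the function output, which is why the FUNCTION CALL ID must match the one exposed when the event fired.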
- DELETE THREAD ID : Delete the specified Thread ID.
Inputs Fields :
- THREAD ID : Delete the specified Thread ID.
- LOAD THREADS : Load previously saved threads.
Inputs Fields :
- THREADS (RAW DATA) : JSON-formatted string containing the threads, such as the value of the ALL THREADS (RAW DATA) state.
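The THREADS TO SAVE event and the LOAD THREADS action together form a persistence round trip: save the raw string when threads change, feed it back on page load. A sketch, where the storage map stands in for a Bubble data type and the thread JSON shape is a placeholder, not the plugin's actual schema:

```typescript
// Stand-in for a Bubble data type keyed by user.
const storage = new Map<string, string>();

// On THREADS TO SAVE: store the ALL THREADS (RAW DATA) state as-is.
function onThreadsToSave(userId: string, allThreadsRawData: string): void {
  storage.set(userId, allThreadsRawData);
}

// On page load: retrieve the saved string and pass it to LOAD THREADS.
function loadThreadsFor(userId: string): string | undefined {
  return storage.get(userId);
}

const raw = '[{"threadId":"t1","messages":[]}]'; // placeholder shape
onThreadsToSave("user-1", raw);
console.log(loadThreadsFor("user-1") === raw); // true
```

The key point is to persist and restore the string verbatim: the plugin produced it, and the plugin parses it back.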
7) (Optional) Add the OPENAI - GPT STREAMING WITH RAG - MARKDOWN PARSER element to render Markdown formatting.
IMPLEMENTATION EXAMPLE
======================
Feel free to browse the app editor in the Service URL for an implementation example.
TROUBLESHOOTING
================
Any plugin-related error will be posted to the Logs tab, "Server logs" section of your App Editor.
Make sure that "Plugin server side output" and "Plugin client side output" are selected in "Show Advanced".
> Server Logs Details:
https://manual.bubble.io/core-resources/bubbles-interface/logs-tab#server-logs
PERFORMANCE CONSIDERATIONS
===========================
N/A
QUESTIONS ?
===========
Contact us at bubble@wiseable.io for feature requests or support questions.