MISTRAL AI - CHAT STREAMING WITH RAG
====================================
ELEMENT DESCRIPTION
----------------------------------
MISTRAL AI - CHAT STREAMING WITH RAG provides ChatGPT-like streaming capabilities to your app.
STEP-BY-STEP SETUP
--------------------------------
0) Register on Mistral AI. You will get your MISTRAL AI Key.
1) Implement the ERROR workflow as per the demo so any errors are caught by your app.
2) Register on plugins.wiseable.io. Create a new Credential which associates your BUBBLE APP URL and your MISTRAL AI KEY.
The registration service will generate your PUBLIC ACCESS KEY. This key serves as a secure proxy for your real API key. It allows your application to communicate with the service without exposing your real API key. Since this PUBLIC ACCESS KEY is explicitly tied to your registered BUBBLE APP URL, it can only be used from that domain, ensuring that even if the key is publicly visible, it remains safe and cannot be misused by unauthorized sources.
3) In the Plugin Settings, enter your PUBLIC ACCESS KEY generated at the previous step.
4) Add the MISTRAL AI - CHAT STREAMING WITH RAG to the page on which the chat must be integrated. Select the RESULT DATA TYPE as Threads (Mistral AI).
5) Add an element supporting input text for the user prompt.
6) Integrate the logic into your application using the following MISTRAL AI - CHAT STREAMING WITH RAG element's states and actions:
FIELDS:
- RESULT DATA TYPE : Must always be selected as Threads (Mistral AI).
- MODEL NAME : Valid values are listed at
https://docs.mistral.ai/platform/endpoints. You may hide this value from eavesdroppers by using GET DATA FROM AN EXTERNAL API > MISTRAL AI - ENCRYPT and setting this field to the result of that API.
- ROLE INFORMATION : Defines the role of the AI assistant. You may hide this value from eavesdroppers by using GET DATA FROM AN EXTERNAL API > MISTRAL AI - ENCRYPT and setting this field to the result of that API.
- APP DATA AGENT : Set to yes to activate data retrieval from your app when needed to answer the user prompt. Your app's DATA API must be enabled.
- MAX TOKENS : The maximum number of tokens to generate in the completion. The token count of your prompt plus MAX TOKENS cannot exceed the model's context length; see the model's documentation for its context window.
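The MAX TOKENS budget rule above can be sketched as simple arithmetic. This is an illustration only: the context length below is a placeholder, not a value for any specific Mistral model.

```python
# Sketch of the MAX TOKENS budget rule: prompt tokens + MAX TOKENS
# must fit within the model's context length.
# CONTEXT_LENGTH is hypothetical; use your model's documented window.
CONTEXT_LENGTH = 4096


def max_completion_tokens(prompt_tokens: int,
                          context_length: int = CONTEXT_LENGTH) -> int:
    """Largest MAX TOKENS value that still fits alongside the prompt."""
    return max(context_length - prompt_tokens, 0)


# A 1000-token prompt leaves at most 3096 tokens for the completion.
print(max_completion_tokens(1000))
```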
EVENTS :
- ERROR : Event triggered when an error occurs.
- STREAMING STARTED : Event triggered when the streaming starts.
- STREAMING ENDED : Event triggered when the streaming ends.
- CALL FUNCTION : Event triggered when the model calls a function, as defined in the FUNCTIONS field of the SEND PROMPT action.
- THREADS TO SAVE : Event triggered when any of the threads has changed.
- READY : Event triggered when this element's fields have been set and the element is ready to serve requests.
EXPOSED STATES:
Use any element able to show/process the data of interest (such as a Group with a Text field) stored within the result of the following states of the MISTRAL AI - CHAT STREAMING WITH RAG element:
- ERROR : Error message upon Error event trigger.
- IS STREAMING : Returns true when streaming is in progress.
- FUNCTION NAME : Name of the function, set upon CALL FUNCTION event.
- FUNCTION ARGUMENTS : Arguments of the function, set upon CALL FUNCTION event.
- FUNCTION CALL ID : Unique identifier of the function call, set upon CALL FUNCTION event.
- LATEST STOP REASON : Latest stop reason of the AI engine, populated upon STREAMING ENDED event.
- LATEST INPUT TOKEN USAGE : Latest input token usage of the AI engine, populated upon STREAMING ENDED event.
- LATEST OUTPUT TOKEN USAGE : Latest output token usage of the AI engine, populated upon STREAMING ENDED event.
- ALL THREADS : List of threads containing a list of role and message content.
- ALL THREADS (RAW DATA) : String containing All Current Threads in JSON format. You may use this string to load threads in "Load Threads" action.
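The function-calling states above can be sketched as a dispatch step. This is a hedged illustration only: the `get_weather` handler, its arguments, and the state values are all hypothetical, and in a real app this dispatch happens in your Bubble workflow via the CALL FUNCTION event and the SEND FUNCTION RESULT action.

```python
import json


# Hypothetical handler your app exposes to the model.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # placeholder result


HANDLERS = {"get_weather": get_weather}

# Values the element would expose upon a CALL FUNCTION event
# (FUNCTION NAME, FUNCTION ARGUMENTS, FUNCTION CALL ID); hypothetical here.
function_name = "get_weather"
function_arguments = '{"city": "Paris"}'
function_call_id = "call_123"

# Your workflow would run the matching handler...
result = HANDLERS[function_name](**json.loads(function_arguments))

# ...then return it via the SEND FUNCTION RESULT action,
# pairing FUNCTION CALL ID with FUNCTION RESULT.
payload = {"function_call_id": function_call_id, "function_result": result}
print(json.dumps(payload))
```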
ELEMENT ACTIONS - TRIGGERED IN WORKFLOW:
- SEND PROMPT : Send the user prompt.
Inputs Fields :
- PROMPT : The user prompt.
- FILES : List of files to add to the prompt. Supported only by models that accept text and image inputs.
- THREAD ID : Send the user prompt to this Thread ID. A valid Thread ID is one from the exposed state ALL THREADS. Autogenerated if not set.
- FUNCTIONS : Array containing function definitions. See
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#basic-concepts
- TEMPERATURE : The sampling temperature to use, between 0 and 2. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 for those with a well-defined answer.
- SET THREAD TITLE : Set a custom title for a given Thread ID.
Inputs Fields :
- THREAD ID : Set the title for this Thread ID. A valid Thread ID is one from the exposed state ALL THREADS.
- TITLE : New title.
- SEND FUNCTION RESULT : Send the function result. See
https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#basic-concepts
Inputs Fields :
- FUNCTION CALL ID : Unique identifier of the function call from the FUNCTION CALL ID state, which is set upon the CALL FUNCTION event.
- FUNCTION RESULT : Result of the function to pass.
- DELETE THREAD ID : Delete the specified Thread ID.
Inputs Fields :
- THREAD ID : The Thread ID to delete.
- LOAD THREADS : Load the threads.
Inputs Fields :
- THREADS (RAW DATA) : JSON-safe string containing the threads in JSON format, typically the value of the ALL THREADS (RAW DATA) state.
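The ALL THREADS (RAW DATA) state and the LOAD THREADS action amount to a JSON save/load round trip. The thread structure below is purely illustrative (the plugin's actual JSON schema is not documented here); it only demonstrates persisting a JSON string and feeding it back.

```python
import json

# Hypothetical thread structure: a list of threads, each holding
# role/content messages, mirroring the ALL THREADS description above.
# The real schema used by the plugin may differ.
threads = [
    {
        "thread_id": "thread_1",
        "title": "First chat",
        "messages": [
            {"role": "user", "content": "Hello"},
            {"role": "assistant", "content": "Hi! How can I help?"},
        ],
    }
]

# What you would persist from ALL THREADS (RAW DATA), e.g. in your database...
raw_data = json.dumps(threads)

# ...and later pass to the LOAD THREADS action's THREADS (RAW DATA) field.
restored = json.loads(raw_data)
print(restored[0]["messages"][0]["content"])
```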
7) (Optional) Add the MISTRAL AI - MARKDOWN & LATEX PARSER element to display markdown formatting.
IMPLEMENTATION EXAMPLE
======================
Feel free to browse the app editor in the Service URL for an implementation example.
TROUBLESHOOTING
================
Any plugin-related error will be posted to the Logs tab, "Server logs" section of your App Editor.
Make sure that "Plugin server side output" is selected in "Show Advanced".
> Server Logs Details:
https://manual.bubble.io/core-resources/bubbles-interface/logs-tab#server-logs
PERFORMANCE CONSIDERATIONS
===========================
N/A
QUESTIONS ?
===========
Contact us at
[email protected] for any additional feature you may require or for support questions.