0️⃣: AUTOMATED CONFIGURATION
=============================================
Steps 0) and 1) below can be performed automatically by using this deployment template:
https://console.aws.amazon.com/cloudformation/home?#/stacks/create/review?stackName=BubbleComprehendSyncOnly&templateURL=https://bubble-resources.s3.amazonaws.com/deployment-assets/CloudFormation-AWSComprehendSyncOnly.yaml
You will find the required parameter values used across the plugin in the "Outputs" tab of the created stack.
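If you prefer to read the stack outputs programmatically rather than from the console, here is a short sketch using @aws-sdk/client-cloudformation. The region is an assumption; the stack name mirrors the one in the template link above.
```typescript
import {
  CloudFormationClient,
  DescribeStacksCommand,
} from "@aws-sdk/client-cloudformation";

// Sketch: read the stack outputs (the same values shown in the
// console's "Outputs" tab). Region "us-east-1" is an assumption.
const cfn = new CloudFormationClient({ region: "us-east-1" });

async function getStackOutputs(stackName = "BubbleComprehendSyncOnly") {
  const { Stacks } = await cfn.send(
    new DescribeStacksCommand({ StackName: stackName })
  );
  // Each output is an { OutputKey, OutputValue } pair.
  return Stacks?.[0]?.Outputs ?? [];
}
```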
1️⃣: DETECT UNSAFE & TOXIC PROMPT (BACK-END)
=============================================
📋 ACTION DESCRIPTION
--------------------------------
DETECT UNSAFE & TOXIC PROMPT (BACK-END) inspects text and returns two safety classifications (SAFE PROMPT and UNSAFE PROMPT) along with eight toxicity classifications (TOXICITY, PROFANITY, HATE SPEECH, INSULT, GRAPHIC, HARASSMENT OR ABUSE, SEXUAL, VIOLENCE OR THREAT), each with a confidence score ranging from zero to one, where one is the highest confidence.
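As an illustration of how these scores can be consumed, here is a minimal TypeScript sketch of a moderation check. The result shape and field names are hypothetical stand-ins for the output fields listed in the setup below, and the 0.5 threshold is an arbitrary example, not a recommendation.
```typescript
// Illustrative only: hypothetical result shape mirroring the action's
// output fields; these names are not the plugin's actual API.
interface ToxicityResult {
  safeScore: number;   // 0..1, higher = more confident the prompt is safe
  unsafeScore: number;
  toxicityScore: number;
  profanityScore: number;
  hateSpeechScore: number;
  insultScore: number;
  graphicScore: number;
  harassmentOrAbuseScore: number;
  sexualScore: number;
  violenceOrThreatScore: number;
}

// Example policy: block a prompt when it is classified unsafe or any
// toxicity dimension exceeds a chosen threshold (0.5 here is arbitrary).
function shouldBlock(r: ToxicityResult, threshold = 0.5): boolean {
  const toxicity = [
    r.toxicityScore, r.profanityScore, r.hateSpeechScore, r.insultScore,
    r.graphicScore, r.harassmentOrAbuseScore, r.sexualScore,
    r.violenceOrThreatScore,
  ];
  return r.unsafeScore > r.safeScore || toxicity.some(s => s > threshold);
}
```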
🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The credential configuration (steps 0-1) can be automatically performed using the deployment template mentioned in the AUTOMATED CONFIGURATION section.
0) Sign up for AWS COMPREHEND:
https://console.aws.amazon.com/comprehend/home?p=ply&cp=bn&ad=c
1) Create your AWS ACCESS KEY & ACCESS KEY SECRET, then attach the AWS COMPREHEND READ-ACCESS policy to those credentials:
https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys
2) In the Plugin Settings, enter the following (a sketch of the equivalent AWS SDK configuration follows this list):
- AWS ACCESS KEY & ACCESS KEY SECRET
- AWS SERVICE ENDPOINT REGION (defaults to "us-east-1" if not provided).
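For reference, the settings above map onto a standard AWS SDK client configuration. This is a minimal sketch using @aws-sdk/client-comprehend; the plugin performs the equivalent internally, so you never write this code yourself.
```typescript
import { ComprehendClient } from "@aws-sdk/client-comprehend";

// Sketch: how the plugin settings map onto an AWS SDK client.
// The environment variable names are placeholders.
const client = new ComprehendClient({
  region: "us-east-1", // AWS SERVICE ENDPOINT REGION (default)
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY!,            // AWS ACCESS KEY
    secretAccessKey: process.env.AWS_ACCESS_KEY_SECRET!, // ACCESS KEY SECRET
  },
});
```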
3) Set up the "DETECT UNSAFE & TOXIC PROMPT (BACK-END)" action in the workflow (an illustrative sketch of the underlying service call follows the field lists below).
Input Fields:
- TEXT TO ANALYZE: A UTF-8 text string in English, with a maximum size of 10 KB.
Output Fields:
- SAFE SCORE: Returns the confidence score that the prompt is safe.
- UNSAFE SCORE: Returns the confidence score that the prompt is unsafe.
- TOXICITY SCORE: Returns the overall toxicity score of the prompt.
- PROFANITY SCORE: Returns the score for profane content in the prompt.
- HATE SPEECH SCORE: Returns the score for hate speech in the prompt.
- INSULT SCORE: Returns the score for insulting content in the prompt.
- GRAPHIC SCORE: Returns the score for graphic content in the prompt.
- HARASSMENT OR ABUSE SCORE: Returns the score for harassing or abusive content in the prompt.
- SEXUAL SCORE: Returns the score for sexual content in the prompt.
- VIOLENCE OR THREAT SCORE: Returns the score for violent or threatening content in the prompt.
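For the curious, the toxicity half of this action corresponds to Comprehend's DetectToxicContent API. The sketch below is an assumption about the plugin's internals, not its actual source; the SAFE/UNSAFE scores come from Comprehend's separate prompt-safety classification, which is not shown.
```typescript
import {
  ComprehendClient,
  DetectToxicContentCommand,
} from "@aws-sdk/client-comprehend";

const client = new ComprehendClient({ region: "us-east-1" });

async function detectToxicity(text: string) {
  // Enforce the documented input limit before calling the service:
  // UTF-8, English, at most 10 KB.
  if (new TextEncoder().encode(text).length > 10 * 1024) {
    throw new Error("TEXT TO ANALYZE exceeds the 10 KB limit");
  }
  const { ResultList } = await client.send(
    new DetectToxicContentCommand({
      TextSegments: [{ Text: text }],
      LanguageCode: "en",
    })
  );
  // Each result carries an overall Toxicity score plus one Score per
  // label (PROFANITY, HATE_SPEECH, INSULT, GRAPHIC, HARASSMENT_OR_ABUSE,
  // SEXUAL, VIOLENCE_OR_THREAT), all in the 0..1 range.
  return ResultList?.[0];
}
```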
2️⃣: AWS COMPREHEND - UNSAFE PROMPT (FRONT-END)
==============================================
📋 ELEMENT DESCRIPTION
--------------------------------
AWS COMPREHEND - UNSAFE PROMPT (FRONT-END) is a visual element that provides the DETECT UNSAFE & TOXIC PROMPT (FRONT-END) action for analyzing text for safety and toxicity. The front-end element is suited to applications where reactivity is desired, such as mobile applications.
🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The credential configuration (steps 0-1) can be automatically performed using the deployment template mentioned in the AUTOMATED CONFIGURATION section.
0) Sign up for AWS COMPREHEND:
https://console.aws.amazon.com/comprehend/home?p=ply&cp=bn&ad=c
1) Create your AWS ACCESS KEY & ACCESS KEY SECRET, then attach the AWS COMPREHEND READ-ACCESS policy to those credentials:
https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys
2) Register on plugins.wiseable.io and create a new Credential that associates your BUBBLE APP URL with your AWS ACCESS KEY & ACCESS KEY SECRET.
The registration service will generate your PUBLIC ACCESS KEY. This key serves as a secure proxy for your real API key, allowing your application to communicate with the service without exposing it. Because the PUBLIC ACCESS KEY is explicitly tied to your registered BUBBLE APP URL, it can only be used from that domain; even if the key is publicly visible, it remains safe and cannot be misused by unauthorized sources.
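To illustrate why this is safe, here is a hypothetical sketch of the check such a proxy can perform. The key, URL, and function names are invented for illustration and do not reflect the actual service implementation.
```typescript
// Hypothetical illustration: the proxy only honors requests whose
// Origin matches the BUBBLE APP URL registered for that key.
const registeredApps = new Map<string, string>([
  ["pk_live_example123", "https://myapp.bubbleapps.io"], // invented example
]);

function isAuthorized(publicKey: string, originHeader: string): boolean {
  const allowedOrigin = registeredApps.get(publicKey);
  return allowedOrigin !== undefined && originHeader === allowedOrigin;
}
```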
3) In the Plugin Settings, enter the following:
- PUBLIC ACCESS KEY (generated from plugins.wiseable.io)
- AWS SERVICE ENDPOINT REGION (defaults to "us-east-1" if not provided).
4) Add the AWS COMPREHEND - UNSAFE PROMPT (FRONT-END) element to the page where you want to analyze text.
5) Integrate the logic into your application using the following element's states and actions:
EVENTS:
- SUCCESS: Event triggered upon successful analysis
- ERROR: Event triggered upon error
EXPOSED STATES:
Use any element able to display or process the data of interest (such as a Group with a Text element) from the following states:
- SAFE SCORE: Populated upon SUCCESS event. Returns the confidence score that the prompt is safe.
- UNSAFE SCORE: Populated upon SUCCESS event. Returns the confidence score that the prompt is unsafe.
- TOXICITY SCORE: Populated upon SUCCESS event. Returns the overall toxicity score of the prompt.
- PROFANITY SCORE: Populated upon SUCCESS event. Returns the score for profane content in the prompt.
- HATE SPEECH SCORE: Populated upon SUCCESS event. Returns the score for hate speech in the prompt.
- INSULT SCORE: Populated upon SUCCESS event. Returns the score for insulting content in the prompt.
- GRAPHIC SCORE: Populated upon SUCCESS event. Returns the score for graphic content in the prompt.
- HARASSMENT OR ABUSE SCORE: Populated upon SUCCESS event. Returns the score for harassing or abusive content in the prompt.
- SEXUAL SCORE: Populated upon SUCCESS event. Returns the score for sexual content in the prompt.
- VIOLENCE OR THREAT SCORE: Populated upon SUCCESS event. Returns the score for violent or threatening content in the prompt.
- ERROR MESSAGE: Populated upon ERROR event.
- IS PROCESSING: Set to true when processing is in progress, false otherwise.
ELEMENT ACTIONS - TRIGGERED IN WORKFLOW:
- DETECT UNSAFE & TOXIC PROMPT (FRONT-END): Analyze text for safety and toxicity scores (a simplified sketch of such an element action follows below).
Input Fields:
- TEXT TO ANALYZE: A UTF-8 text string in English, with a maximum size of 10 KB.
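To show how the pieces fit together, here is a simplified TypeScript sketch of how a Bubble element action can publish the states and events listed above. It is illustrative only: the state names, the analyzeText helper, and the overall structure are assumptions, not the plugin's actual source.
```typescript
// Hypothetical helper that calls the plugins.wiseable.io proxy.
declare function analyzeText(
  text: string
): Promise<{ safeScore: number; unsafeScore: number }>;

// Simplified sketch of a Bubble element action; `instance` and
// `properties` follow Bubble's element-action signature.
async function detectUnsafeToxicPrompt(instance: any, properties: any) {
  instance.publishState("is_processing", true); // IS PROCESSING state
  try {
    const result = await analyzeText(properties.text_to_analyze);
    instance.publishState("safe_score", result.safeScore);
    instance.publishState("unsafe_score", result.unsafeScore);
    // ...the eight toxicity scores are published the same way...
    instance.triggerEvent("success"); // SUCCESS event
  } catch (err: any) {
    instance.publishState("error_message", err.message); // ERROR MESSAGE state
    instance.triggerEvent("error"); // ERROR event
  } finally {
    instance.publishState("is_processing", false);
  }
}
```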
🔍 IMPLEMENTATION EXAMPLE
======================
Feel free to browse the app editor at the Service URL for an implementation example.
ℹ️ ADDITIONAL INFORMATION
======================
> Entities details:
https://docs.aws.amazon.com/comprehend/latest/dg/how-entities.html
> Syntax details:
https://docs.aws.amazon.com/comprehend/latest/dg/how-syntax.html
> Key Phrases details:
https://docs.aws.amazon.com/comprehend/latest/dg/how-key-phrases.html
> AWS COMPREHEND service limits:
https://docs.aws.amazon.com/comprehend/latest/dg/guidelines-and-limits.html#limits-all
> AWS services availability per region:
https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/
> AWS Service endpoints list:
https://docs.aws.amazon.com/general/latest/gr/rande.html
⚠️ TROUBLESHOOTING
================
Any plugin-related error will be posted to the Logs tab, "Server logs" section of your App Editor.
Make sure that "Plugin server side output" and "Plugin client side output" are selected in "Show Advanced".
For front-end actions, you can also open your browser's developer console (F12 or Ctrl+Shift+I in most browsers) to view detailed error messages and logs.
Always check the ERROR MESSAGE state of the element and implement error handling using the ERROR event to provide a better user experience.
> Server Logs Details:
https://manual.bubble.io/core-resources/bubbles-interface/logs-tab#server-logs
⚡ PERFORMANCE CONSIDERATIONS
===========================
For back-end actions, execution time is capped at 30 seconds, after which results can no longer be retrieved; this limit does not apply to front-end actions.
⏱️ BACK-END ACTION START DELAY
-----------------------------------------------
Each time a server-side action is called, Bubble initializes a small virtual machine to execute the action. If the same action is called shortly after, the caching mechanism kicks in, resulting in faster execution on subsequent calls.
A useful workaround is to fire a dummy execution at page load, which pre-warms the Bubble engine for the next few minutes, reducing the impact of cold starts for your users.
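In Bubble itself the pre-warm is simply a "Page is loaded" workflow that runs the back-end action once with throwaway input. The sketch below only illustrates the general pattern, with a hypothetical runBackendAction function standing in for that workflow step.
```typescript
// Hypothetical stand-in for the Bubble workflow step that triggers the
// back-end action; in practice this is configured in the workflow editor.
declare function runBackendAction(args: {
  textToAnalyze: string;
}): Promise<void>;

// Fire a dummy execution on page load to pre-warm the server-side
// virtual machine, so real user calls hit a warm (cached) instance.
window.addEventListener("load", () => {
  void runBackendAction({ textToAnalyze: "warmup" });
});
```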
❓ QUESTIONS?
===========
Contact us at
[email protected] for feature requests or support questions.