1️⃣: CAMERA FOR AWS REKOGNITION - DETECT OBJECT
==========================
📋 ELEMENT DESCRIPTION
--------------------------------
CAMERA FOR AWS REKOGNITION - DETECT OBJECT is a visual element that provides real-time object detection directly from your device's camera. It identifies objects, events, and concepts in the camera feed and displays them with bounding boxes and labels.
🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ Steps 0) and 1) can be performed automatically by using the deployment template mentioned in the back-end actions setup below.
0) Sign up for AWS REKOGNITION:
https://console.aws.amazon.com/rekognition/home?p=rkn&cp=bn&ad=c
1) In the Plugin Settings, enter the following:
- AWS ACCESS KEY & SECRET ACCESS KEY
- AWS SERVICE ENDPOINT REGION (if not provided, default endpoint is "us-east-1").
- PUBLIC ACCESS KEY (generated from plugins.wiseable.io)
2) Register on plugins.wiseable.io. Create a new Credential which associates your BUBBLE APP URL, AWS ACCESS KEY & SECRET ACCESS KEY.
The registration service will generate your PUBLIC ACCESS KEY. This key acts as a secure proxy, allowing your application to communicate with the service without exposing your real API key. Since the PUBLIC ACCESS KEY is explicitly tied to your registered BUBBLE APP URL, it can only be used from that domain: even if the key is publicly visible, it remains safe and cannot be misused by unauthorized sources.
3) Add the CAMERA FOR AWS REKOGNITION - DETECT OBJECT element to your page.
4) Configure the element's appearance and behavior:
FIELDS:
- RESULT DATA TYPE: Returned type, must always be set to "RESULT (REKOGNITION VIDEO - DETECT OBJECTS)".
- LABEL BACKGROUND: Background color for non-focused labels.
- LABEL TEXT: Text color for non-focused labels.
- BOX COLOR: Box color for non-focused labels.
- FOCUSED BOX COLOR: Box color for focused labels.
- FOCUSED LABEL BACKGROUND: Background color for focused labels.
- FOCUSED LABEL TEXT: Text color for focused labels.
- FOCUS INSTANCE LABELS: List of labels to focus on.
EVENTS:
- SUCCESS: Event triggered upon successful detection of objects within a frame.
- ERROR: Event triggered upon error.
EXPOSED STATES:
Use any element capable of displaying or processing the data of interest stored in the following states:
- IS CAMERA ACTIVE: Set to true when camera is active, false otherwise.
- IS PROCESSING: Set to true when processing is in progress, false otherwise.
- ERROR MESSAGE: Populated upon ERROR event.
- RESULTS: Populated upon SUCCESS event. Returns a list of Labels. For each, it returns the object name, bounding box size & coordinates, and confidence level.
- INPUT CAMERAS: List of detected cameras, populated after DETECT DEVICES action.
ELEMENT ACTIONS - TRIGGERED IN WORKFLOW:
- START CAMERA: Initialize the camera with optional camera name parameter.
- DETECT DEVICES: Detect available camera devices and populate INPUT CAMERAS state.
- START DETECTING OBJECTS: Begin object detection with the following parameters:
• MIN CONFIDENCE: Minimum confidence threshold (0-100).
• CAPTURE FREQUENCY: How often to capture frames (milliseconds).
• MAX LABELS TO SHOW: Maximum number of labels to display.
- STOP CAMERA: Stop the camera and all detection processes.
- STOP DETECTING OBJECTS: Stop detection but keep camera active.
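The display parameters above can be pictured as a per-frame filtering step. The sketch below is illustrative only (the function and field names are assumptions, not the plugin's actual internals): labels below MIN CONFIDENCE are dropped, focused labels are flagged, and the list is capped at MAX LABELS TO SHOW.

```python
# Hypothetical sketch of how MIN CONFIDENCE, MAX LABELS TO SHOW and
# FOCUS INSTANCE LABELS could be applied to one frame's detections.

def filter_labels(labels, min_confidence=50, max_labels=None, focus_labels=None):
    """Keep labels at or above min_confidence, sort by confidence,
    flag focused labels, and cap the list at max_labels."""
    focus = {name.lower() for name in (focus_labels or [])}
    kept = [
        {**label, "focused": label["name"].lower() in focus}
        for label in labels
        if label["confidence"] >= min_confidence
    ]
    kept.sort(key=lambda l: l["confidence"], reverse=True)
    return kept[:max_labels] if max_labels else kept

frame = [
    {"name": "Person", "confidence": 99.1},
    {"name": "Dog", "confidence": 87.4},
    {"name": "Chair", "confidence": 42.0},
]
shown = filter_labels(frame, min_confidence=50, max_labels=2, focus_labels=["Dog"])
```

With these inputs, the low-confidence "Chair" label is dropped and only the two strongest labels remain, with "Dog" marked as focused so it gets the FOCUSED BOX COLOR treatment.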
2️⃣: START & GET OBJECT DETECTION (ASYNC)
================================
📋 ACTION DESCRIPTION
--------------------------------
Detect instances of real-world entities within an MPEG-4 or MOV video, encoded using the H.264 codec and stored in AWS S3.
🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ If you do not have AWS S3 configured yet, the configuration steps can be automatically performed by using this deployment template:
https://console.aws.amazon.com/cloudformation/home?#/stacks/create/review?stackName=BubbleS3&param_BucketName=BucketNameOfYourChoice&templateURL=https://bubble-resources.s3.amazonaws.com/deployment-assets/CloudFormation-AWSS3Plugin.yaml
You will find the required parameter values used to configure your AWS S3 plugin, for which "AWS S3 DROPZONE & SQS UTILITIES" is suggested, in the "OUTPUT" tab of the created stack.
ℹ️ The steps from 0) to 3) b) of START & GET OBJECT DETECTION (ASYNC) can be automatically performed by using this deployment template:
https://console.aws.amazon.com/cloudformation/home?#/stacks/create/review?stackName=BubbleRekognition&templateURL=https://bubble-resources.s3.amazonaws.com/deployment-assets/CloudFormation-AWSRekognitionAsync.yaml
You will find the required parameter values used across the plugin in the "OUTPUT" tab of the created stack.
Otherwise, follow these manual steps:
0) Sign up for AWS REKOGNITION:
https://console.aws.amazon.com/rekognition/home?p=rkn&cp=bn&ad=c
1) Configure AMAZON REKOGNITION VIDEO by following ALL the instructions:
https://docs.aws.amazon.com/rekognition/latest/dg/api-video-roles.html
Write down your:
- ACCESS KEY & SECRET ACCESS KEY
- AWS SERVICE ENDPOINT REGION
- NOTIFICATION ROLE ARN
- SNS TOPIC ARN
- SQS QUEUE URL
/!\ Make sure that SQS QUEUE encryption is disabled.
2) In the Plugin Settings, enter the following:
- AWS ACCESS KEY & SECRET ACCESS KEY
- AWS SERVICE ENDPOINT REGION (if not provided, default endpoint is "us-east-1").
3) Set up in your workflow an action that returns the BUCKET and KEY of the file to analyze.
a) If you do not already have such an action, install the plugin "AWS S3 & SQS UTILITIES".
b) Create an AWS S3 BUCKET that will be used to store the file to analyze:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html
c) Set up the "PUT FILE TO S3" action in the workflow.
Input Fields:
- FILE URL TO STORE: The file URL from the Bubble.io uploader, a protocol-relative URL (//server/file.ext), or an HTTPS file URL (https://server/file.ext). The file must be accessible through the HTTPS protocol.
- AWS S3 BUCKET NAME: Bucket Name to which the file will be saved.
- AWS S3 FILE NAME: Path & File Name to save to. The format must be [path/]filename.ext.
Example 1: path1/path2/filename.ext.
Example 2: filename.ext if the file is at the root of the bucket.
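The URL and key handling described for "PUT FILE TO S3" can be sketched as two small helpers. These are illustrative stand-ins under the stated rules (protocol-relative URLs resolve to HTTPS, keys follow the [path/]filename.ext format), not the plugin's actual code:

```python
# Hypothetical helpers mirroring the PUT FILE TO S3 input rules.

def normalize_file_url(url):
    """Resolve a protocol-relative URL to HTTPS; reject non-HTTPS schemes,
    since the file must be accessible through the HTTPS protocol."""
    if url.startswith("//"):
        return "https:" + url
    if url.startswith("https://"):
        return url
    raise ValueError("File must be reachable over HTTPS: " + url)

def build_s3_key(path, filename):
    """Produce a [path/]filename.ext key; an empty path stores the file
    at the root of the bucket."""
    path = path.strip("/")
    return f"{path}/{filename}" if path else filename

url = normalize_file_url("//server/file.ext")
key = build_s3_key("path1/path2", "filename.ext")
```

The resulting key matches Example 1 above, while an empty path reproduces Example 2 (a file at the bucket root).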
4) Set up the "START OBJECT DETECTION JOB" action in the workflow.
Input Fields:
- MIN CONFIDENCE: Specifies the minimum confidence that Amazon Rekognition Video must have in order to return a detected label. Confidence represents how certain Amazon Rekognition is that a label is correctly identified. 0 is the lowest confidence. 100 is the highest confidence. Default value is 50 if not specified.
- AWS S3 BUCKET NAME: AWS S3 bucket name from which the input file will be read.
- AWS S3 FILE NAME: AWS S3 file name for the input file. Enter here the video file from the Bubble.io uploader, a protocol-relative URL (//server/video.mov), or an HTTPS video URL (https://server/video.mov). The video must be encoded using the H.264 codec. The supported file formats are MPEG-4 and MOV.
Example 1: path1/path2/filename.ext.
Example 2: filename.ext if the file is at the root of the bucket.
- NOTIFICATION ROLE ARN: ARN of an IAM role giving AWS REKOGNITION publishing permissions to the AWS SNS topic.
- SNS TOPIC ARN: AWS SNS topic ARN to which AWS REKOGNITION posts the completion status.
Output Fields:
- JOB ID: ID of the Job, to be reused in the "GET JOB STATUS FROM SQS" and "GET OBJECT DETECTION RESULTS".
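Behind this action, AWS exposes the StartLabelDetection API. The sketch below shows how the plugin fields above map onto that request, assuming the boto3 SDK; all ARNs, bucket names and keys are placeholders, and the network call itself is left commented out:

```python
# Sketch of the request behind START OBJECT DETECTION JOB.
# Builds the parameters for boto3's start_label_detection so the
# mapping from plugin fields is visible; values are placeholders.

def start_label_detection_params(bucket, key, min_confidence,
                                 sns_topic_arn, role_arn):
    return {
        "Video": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MinConfidence": min_confidence,  # 0-100; plugin defaults to 50
        "NotificationChannel": {
            "SNSTopicArn": sns_topic_arn,  # completion status is posted here
            "RoleArn": role_arn,           # NOTIFICATION ROLE ARN
        },
    }

params = start_label_detection_params(
    "my-bucket", "path1/video.mov", 50,
    "arn:aws:sns:us-east-1:123456789012:RekognitionTopic",
    "arn:aws:iam::123456789012:role/RekognitionSNSRole",
)
# import boto3
# job_id = boto3.client("rekognition").start_label_detection(**params)["JobId"]
```

The returned JobId is the value this action exposes as JOB ID for the later polling and results steps.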
5) Install the plugin "AWS S3 & SQS UTILITIES"
Set up the action "GET JOB STATUS FROM SQS" in a recurring workflow ('Do every x seconds'), to poll the job completion status on a regular basis.
Configure this recurring workflow to execute the next step once the job status is SUCCEEDED, using 'Only When' Event Condition, to retrieve the results.
Input Fields:
- QUEUE URL: URL of AWS SQS you set up at step 1, used to poll for AWS REKOGNITION job status messages.
- JOBID: ID of the job to poll, returned by "START OBJECT DETECTION JOB" action.
Output Fields:
- JOB STATUS: Valid values are SUCCEEDED, POLLING, IN_PROGRESS, PARTIAL_SUCCESS, FAILED, and ERROR; error or failure messages are appended to the status.
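The polling step works because Rekognition publishes its completion status to SNS, and SNS wraps that payload in the "Message" field of each SQS message body. The sketch below shows one way to match such a message to a job; treat the exact field names as an assumption to verify against your own queue:

```python
import json

# Sketch of matching a Rekognition completion notification (delivered
# via SNS to SQS) to a specific job while polling the queue.

def job_status_from_sqs_body(body, job_id):
    """Return the status string if this SQS message is for job_id,
    else None. The SQS body is SNS JSON whose "Message" field holds
    the Rekognition notification as a nested JSON string."""
    notification = json.loads(json.loads(body)["Message"])
    if notification.get("JobId") != job_id:
        return None
    return notification.get("Status")  # e.g. "SUCCEEDED" or "FAILED"

# Simulated SQS body for demonstration:
sqs_body = json.dumps({
    "Message": json.dumps({"JobId": "abc123", "Status": "SUCCEEDED"})
})
status = job_status_from_sqs_body(sqs_body, "abc123")
```

This double JSON decode is why an encrypted queue (see the warning at step 1) or a raw-message-delivery mismatch can make status polling fail silently.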
6) Set up the action "GET OBJECT DETECTION RESULTS" in the workflow.
Input Fields:
- JOB ID: ID of the job to poll, returned by "START OBJECT DETECTION JOB" action.
- MAX RESULTS: Maximum results per paginated call from AWS. The largest value you can specify is 1000; any greater value will return 1000 results. The default value is 1000. This plugin auto-paginates the AWS response based on this parameter.
- RESULT DATA TYPE: Returned type, must always be set to "RESULT (REKOGNITION VIDEO - DETECT OBJECTS)".
Output Fields:
- RESULTS: Returns a list of Labels. For each, it returns the object name, bounding box size & coordinates, confidence level, and timestamp.
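The auto-pagination this action performs can be sketched as a loop over AWS's NextToken mechanism. In this illustration, `fetch_page` is a hypothetical stand-in for boto3's get_label_detection, simulated here with canned pages so the loop itself is visible:

```python
# Sketch of the auto-pagination behind GET OBJECT DETECTION RESULTS:
# keep fetching pages until AWS stops returning a NextToken.

def collect_all_labels(fetch_page, max_results=1000):
    """Accumulate Labels across paginated responses linked by NextToken."""
    labels, token = [], None
    while True:
        page = fetch_page(MaxResults=min(max_results, 1000), NextToken=token)
        labels.extend(page["Labels"])
        token = page.get("NextToken")
        if not token:
            return labels

# Simulated two-page response for demonstration:
pages = [
    {"Labels": [{"Name": "Car", "Timestamp": 0}], "NextToken": "t1"},
    {"Labels": [{"Name": "Dog", "Timestamp": 500}]},
]
results = collect_all_labels(lambda **kw: pages.pop(0))
```

Capping MaxResults at 1000 in the request mirrors the MAX RESULTS behavior documented above.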
🔍 IMPLEMENTATION EXAMPLE
======================
Feel free to browse the app editor in the Service URL for an implementation example.
ℹ️ ADDITIONAL INFORMATION
======================
> AWS REKOGNITION service limits:
https://docs.aws.amazon.com/rekognition/latest/dg/limits.html
> AWS services availability per region:
https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/
> AWS Service endpoints list:
https://docs.aws.amazon.com/general/latest/gr/rande.html
⚠️ TROUBLESHOOTING
================
Any plugin related error will be posted to the Logs tab, "Server logs" section of your App Editor.
Make sure that "Plugin server side output" and "Plugin client side output" is selected in "Show Advanced".
For front-end actions, you can also open your browser's developer console (F12 or Ctrl+Shift+I in most browsers) to view detailed error messages and logs.
Always check the ERROR MESSAGE state of the element and implement error handling using the ERROR event to provide a better user experience.
> Server Logs Details:
https://manual.bubble.io/core-resources/bubbles-interface/logs-tab#server-logs
⚡ PERFORMANCE CONSIDERATIONS
===========================
GENERAL
-------------
For back-end actions, the maximum retrievable result set is capped at a 30-second duration; this cap does not apply to front-end actions.
⏱️ BACK-END ACTION START DELAY
-----------------------------------------------
Each time a server-side action is called, Bubble initializes a small virtual machine to execute the action. If the same action is called shortly after, the caching mechanism kicks in, resulting in faster execution on subsequent calls.
A useful workaround is to fire a dummy execution at page load, which pre-warms the Bubble engine for the next few minutes, reducing the impact of cold starts for your users.
❓ QUESTIONS?
===========
Contact us at [email protected] for feature requests or support questions.