1️⃣: GOOGLE VISION - FACES RECOGNITION (FRONT-END DESKTOP & NATIVE MOBILE)
===========================================================
📋 ELEMENT DESCRIPTION
--------------------------------
GOOGLE VISION - FACES RECOGNITION (FRONT-END DESKTOP & NATIVE MOBILE) provides a visual element for client-side face detection processing. It detects multiple faces within an image along with associated key facial attributes such as emotional state or the presence of headwear.
The front-end or native mobile element is suitable for applications where reactivity is desired, such as, but not limited to, mobile applications. It supports multiple image formats and automatically optimizes images to meet Google Vision API requirements.
🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ Steps 0) and 1) can be performed automatically by logging in to your Google Cloud Console, opening the Cloud Shell (top right corner of the page), pasting the following command and pressing Enter:
wget -q https://storage.googleapis.com/bubblegcpdemo/demo-assets/wiseable-gcp-vision.py && python3 wiseable-gcp-vision.py
Otherwise, follow these manual steps:
0) Set up a project from the Google Cloud Console (https://cloud.google.com/vision/docs/setup):
- Create or select a project
- Enable the CLOUD VISION API for that project
- Create a service account
- Download a private key as JSON.
1) Open the private key JSON file with a text editor and copy/paste the following parameters from the file into the Plugin settings:
- CLIENT_EMAIL
- PROJECT_ID
- PRIVATE_KEY, including the -----BEGIN PRIVATE KEY-----\n prefix and \n-----END PRIVATE KEY-----\n suffix.
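The three settings above map directly onto standard fields of the downloaded service-account JSON key. A minimal sketch of that mapping (the sample key below is a dummy, non-functional value):

```python
import json

def extract_plugin_settings(key_json: str) -> dict:
    """Pull the three Plugin settings out of a service-account key file."""
    key = json.loads(key_json)
    return {
        "CLIENT_EMAIL": key["client_email"],
        "PROJECT_ID": key["project_id"],
        # The private key must be copied verbatim, including the BEGIN/END
        # markers and the embedded \n escape sequences.
        "PRIVATE_KEY": key["private_key"],
    }

# Dummy key file for illustration only:
sample = json.dumps({
    "type": "service_account",
    "project_id": "my-vision-project",
    "client_email": "vision-sa@my-vision-project.iam.gserviceaccount.com",
    "private_key": "-----BEGIN PRIVATE KEY-----\\nFAKEKEYDATA\\n-----END PRIVATE KEY-----\\n",
})
print(extract_plugin_settings(sample)["PROJECT_ID"])  # my-vision-project
```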
2) Register on plugins.wiseable.io. Create a new Credential which associates your BUBBLE APP URL, GCP PROJECT_ID, CLIENT_EMAIL & PRIVATE_KEY.
The registration service will generate your PUBLIC ACCESS KEY. This key serves as a secure proxy for your real API key. It allows your application to communicate with the service without exposing your real API key. Since this PUBLIC ACCESS KEY is explicitly tied to your registered BUBBLE APP URL, it can only be used from that domain, ensuring that even if the key is publicly visible, it remains safe and cannot be misused by unauthorized sources.
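The actual wiseable.io proxy implementation is not published; the sketch below only illustrates how such a domain binding *could* work, by honoring the PUBLIC ACCESS KEY only when the request's Origin matches the registered app URL (function and parameter names are assumptions):

```python
from urllib.parse import urlparse

def is_origin_allowed(request_origin: str, registered_app_url: str) -> bool:
    """Accept the key only from the registered Bubble app's HTTPS origin."""
    req = urlparse(request_origin)
    reg = urlparse(registered_app_url)
    # Require HTTPS and an exact hostname match with the registered URL.
    return (req.scheme, req.hostname) == ("https", reg.hostname)

print(is_origin_allowed("https://myapp.bubbleapps.io", "https://myapp.bubbleapps.io"))  # True
print(is_origin_allowed("https://evil.example.com", "https://myapp.bubbleapps.io"))     # False
```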
3) Enter in the PLUGIN SETTINGS your PUBLIC ACCESS KEY (used for front-end element only).
4) Add the GOOGLE VISION - FACES RECOGNITION (FRONT-END DESKTOP & NATIVE MOBILE) element to the page where the face detection feature must be integrated. Set its RESULT DATA TYPE field (the Returned type) to "RESULT (VISION - FACES RECOGNITION)".
5) Integrate the logic into your application using the following GOOGLE VISION - FACES RECOGNITION (FRONT-END DESKTOP & NATIVE MOBILE) element's states and actions:
FIELDS:
- RESULT DATA TYPE: Returned type, must always be set to "RESULT (VISION - FACES RECOGNITION)".
EVENTS:
- SUCCESS: Event triggered upon success
- ERROR: Event triggered upon error
EXPOSED STATES:
Use any element able to show/process the data of interest (such as a Group with a Text field) stored within the result of the following states of the GOOGLE VISION - FACES RECOGNITION (FRONT-END DESKTOP & NATIVE MOBILE) element:
- RESULTS: Populated upon SUCCESS event. Returns a list of face details. These details include the bounding box of each face, face position coordinates, a confidence value (that the bounding box contains a face), emotions, and a fixed set of attributes such as facial landmarks (for example, coordinates of the eyes and mouth) and the presence of headwear.
- ERROR MESSAGE: Populated upon ERROR event.
- IS PROCESSING: Set to true when processing is in progress, false otherwise.
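A common pattern when consuming the RESULTS state is to keep only detections whose confidence passes a threshold before displaying them. A minimal sketch (the field names "confidence", "bounding_box" and "headwear" are illustrative assumptions; inspect the element's actual state structure in your app):

```python
def confident_faces(results, min_confidence=0.8):
    """Keep only detections whose bounding box is likely a real face."""
    return [face for face in results if face["confidence"] >= min_confidence]

# Hypothetical sample of the per-face details the RESULTS state carries:
sample_results = [
    {"confidence": 0.97, "bounding_box": (10, 10, 120, 140), "headwear": False},
    {"confidence": 0.42, "bounding_box": (300, 5, 40, 40), "headwear": True},
]
print(len(confident_faces(sample_results)))  # 1
```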
ELEMENT ACTIONS - TRIGGERED IN WORKFLOW:
- DETECT FACES ON IMAGE (FRONT-END DESKTOP & NATIVE MOBILE): Detects faces in an image file. Populates the RESULTS state upon completion.
Input Fields:
- IMAGE: Image from a Bubble.io uploader, a protocol-relative URL (//server/file.ext), an HTTPS file URL (https://server/file.ext) or a Google Storage URI (gs://bucket/image.jpg). For both protocol-relative and HTTPS URLs, the file must be accessible over HTTPS.
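The accepted IMAGE input forms can be told apart by their prefix. An illustrative helper (not part of the plugin) that classifies a reference the way the element interprets it:

```python
def classify_image_ref(ref: str) -> str:
    """Classify the IMAGE input forms the element accepts."""
    if ref.startswith("gs://"):
        return "google-storage-uri"
    if ref.startswith("//"):
        # Protocol-relative: resolved against the page's protocol, which
        # must end up being HTTPS for the file to be fetchable.
        return "protocol-relative-url"
    if ref.startswith("https://"):
        return "https-url"
    return "uploader-file-or-unsupported"

print(classify_image_ref("gs://bucket/image.jpg"))  # google-storage-uri
print(classify_image_ref("//server/file.ext"))      # protocol-relative-url
```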
2️⃣: DETECT FACES ON IMAGE (BACK-END)
=======================
📋 ACTION DESCRIPTION
--------------------------------
DETECT FACES ON IMAGE (BACK-END) detects faces in an image file and returns a list of face details. These details include the bounding box of each face, face position coordinates, a confidence value (that the bounding box contains a face), emotions, and a fixed set of attributes such as facial landmarks (for example, coordinates of the eyes and mouth) and the presence of headwear.
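Under the hood, face detection maps onto the Cloud Vision REST endpoint (POST https://vision.googleapis.com/v1/images:annotate). The plugin builds the equivalent request for you; this sketch only shows what such a request body looks like at the API level (no network call is made):

```python
import json

def build_face_detection_request(image_uri: str, max_results: int = 10) -> dict:
    """Build the images:annotate request body for a FACE_DETECTION feature."""
    return {
        "requests": [{
            "image": {"source": {"imageUri": image_uri}},
            "features": [{"type": "FACE_DETECTION", "maxResults": max_results}],
        }]
    }

payload = build_face_detection_request("gs://bucket/image.jpg")
print(json.dumps(payload, indent=2))
```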
🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ Steps 0) and 1) can be performed automatically by logging in to your Google Cloud Console, opening the Cloud Shell (top right corner of the page), pasting the following command and pressing Enter:
wget -q https://storage.googleapis.com/bubblegcpdemo/demo-assets/wiseable-gcp-vision.py && python3 wiseable-gcp-vision.py
Otherwise, follow these manual steps:
0) Set up a project from the Google Cloud Console (https://cloud.google.com/vision/docs/setup):
- Create or select a project
- Enable the CLOUD VISION API for that project
- Create a service account
- Download a private key as JSON.
1) Open the private key JSON file with a text editor and copy/paste the following parameters from the file into the Plugin settings:
- CLIENT_EMAIL
- PROJECT_ID
- PRIVATE_KEY, including the -----BEGIN PRIVATE KEY-----\n prefix and \n-----END PRIVATE KEY-----\n suffix.
2) Set up the "DETECT FACES ON IMAGE (BACK-END)" action in the workflow.
Input Fields:
- IMAGE: JPEG, PNG8, PNG24, GIF, Animated GIF (first frame only), BMP, WEBP, RAW, ICO, PDF or TIFF image file from the Bubble.io picture uploader, a protocol-relative URL (//server/image.jpg), an HTTPS image URL (https://server/image.jpg) or a Google Storage URI (gs://bucket/image.jpg). For both protocol-relative and HTTPS URLs, the file must be accessible over HTTPS.
- RESULT DATA TYPE: Returned type, must always be set to "RESULT (VISION - FACES RECOGNITION)".
Output Fields:
- RESULTS: Returns a list of face details. These details include the bounding box of each face, face position coordinates, a confidence value (that the bounding box contains a face), emotions, and a fixed set of attributes such as facial landmarks (for example, coordinates of the eyes and mouth) and the presence of headwear.
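The Vision API reports emotions and headwear as likelihood enums (VERY_UNLIKELY through VERY_LIKELY) rather than raw scores. A common post-processing step maps them onto a numeric scale for thresholding; the numeric values below are an arbitrary illustrative choice:

```python
# Map the Vision API likelihood enum onto a 0..1 scale (values are arbitrary).
LIKELIHOOD_SCALE = {
    "UNKNOWN": None,
    "VERY_UNLIKELY": 0.0,
    "UNLIKELY": 0.25,
    "POSSIBLE": 0.5,
    "LIKELY": 0.75,
    "VERY_LIKELY": 1.0,
}

def likelihood_score(value: str):
    """Return a numeric score for a likelihood string, or None if unknown."""
    return LIKELIHOOD_SCALE.get(value)

print(likelihood_score("VERY_LIKELY"))  # 1.0
print(likelihood_score("POSSIBLE"))     # 0.5
```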
🔍 IMPLEMENTATION EXAMPLE
======================
Feel free to browse the app editor in the Service URL for an implementation example.
ℹ️ ADDITIONAL INFORMATION
======================
> Supported image formats:
https://cloud.google.com/vision/docs/supported-files
> GOOGLE VISION service limits:
https://cloud.google.com/vision/quotas
⚠️ TROUBLESHOOTING
================
Any plugin related error will be posted to the Logs tab, "Server logs" section of your App Editor.
Make sure that "Plugin server side output" and "Plugin client side output" are selected under "Show Advanced".
For front-end actions, you can also open your browser's developer console (F12 or Ctrl+Shift+I in most browsers) to view detailed error messages and logs.
Always check the ERROR MESSAGE state of the element and implement error handling using the ERROR event to provide a better user experience.
> Server Logs Details:
https://manual.bubble.io/core-resources/bubbles-interface/logs-tab#server-logs
⚡ PERFORMANCE CONSIDERATIONS
===========================
GENERAL
-------------
For back-end actions, this implementation posts the file data to the Google Cloud Vision API for non-Google Storage URLs (i.e., anything other than gs://). The maximum processing duration of this action is capped at 30 seconds by Bubble.io design. This time limitation does not apply to front-end actions.
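The effect of such a hard cap can be sketched in a few lines: any call that outlives its budget is abandoned and surfaces as an error instead of a result. Here a sub-second budget stands in for Bubble's 30-second limit (function names are illustrative, not part of the plugin):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def slow_detection(duration: float) -> str:
    """Stand-in for a detection call that takes `duration` seconds."""
    time.sleep(duration)
    return "faces detected"

def run_with_cap(duration: float, cap: float) -> str:
    """Run the call, but give up once the time budget `cap` is spent."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(slow_detection, duration)
        try:
            return future.result(timeout=cap)
        except TimeoutError:
            return "timed out"

print(run_with_cap(0.01, 0.5))  # faces detected
print(run_with_cap(1.0, 0.1))   # timed out
```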
⏱️ BACK-END ACTION START DELAY
-----------------------------------------------
Each time a server-side action is called, Bubble initializes a small virtual machine to execute the action. If the same action is called shortly after, the caching mechanism kicks in, resulting in faster execution on subsequent calls.
A useful workaround is to fire a dummy execution at page load, which pre-warms the Bubble engine for the next few minutes, reducing the impact of cold starts for your users.
❓ QUESTIONS?
===========
Contact us at [email protected] for any additional feature you may require or any support question.