
AWS S3 Dropzone & SQS Utilities

Published March 2021
   •    Updated this week

Plugin details

This plugin features a blazing-fast AWS S3 dropzone with support for file preview, image and video resizing and compression, audio compression, and advanced upload capabilities including parallel, resumable, and multipart uploads. It also includes a powerful set of AWS S3 utilities, usable either as a standalone solution or in support of other plugin operations.

A script is provided to automatically configure your AWS account settings.

To use these actions in conjunction with our other plugins, please refer directly to those plugins' instructions.

The following elements for AWS S3 are provided:
- AWS S3 DROPZONE visual element
- AWS DROPZONE FILE PREVIEWER
- AWS S3 DROPZONE FILE UTILITIES (FRONT-END)

The following actions for AWS S3 are provided:

𝗕𝗮𝗰𝗸-𝗘𝗻𝗱 𝗔𝗰𝘁𝗶𝗼𝗻𝘀:
- GET UPLOAD PRESIGNED EXPIRING URL (BACK-END)
- GENERATE DOWNLOAD PRESIGNED EXPIRING URL (BACK-END)
- PUT FILE TO S3 (BACK-END)
- GET FILE BASE64 DATAURI FROM S3 (BACK-END)
- PUT BASE64 DATAURI TO S3 (BACK-END)
- DELETE FILE OR FOLDER FROM S3 (BACK-END)
- GET FILE METADATA FROM S3 (BACK-END)
- LIST FILES FROM S3 (BACK-END)
- COPY FILE BETWEEN S3 BUCKETS (BACK-END)
- SET FILE PUBLIC ACCESS IN S3 (BACK-END)
- CREATE BUCKET IN S3 (BACK-END)
- DELETE BUCKET IN S3 (BACK-END)

𝗙𝗿𝗼𝗻𝘁-𝗘𝗻𝗱 𝗔𝗰𝘁𝗶𝗼𝗻𝘀:
- GENERATE DOWNLOAD PRESIGNED EXPIRING URL (FRONT-END)
- DELETE FILE OR FOLDER FROM S3 (FRONT-END)
- SET FILE PUBLIC ACCESS IN S3 (FRONT-END)

The following actions for AWS SQS are provided:
- GET JOB STATUS FROM SQS

You may use the Cloud2Cloud File Transfer Plugin (https://bubble.io/plugin/cloud2cloud-transfer-1682686574569x774496654310506500) to transfer files between storage providers, including Bubble.io.

Demo Link: https://awsutilitiesdemo.bubbleapps.io/version-test

Editor Link: https://bubble.io/page?type=page&name=index&id=awsutilitiesdemo-editor&tab=tabs-1

💡 𝗦𝘂𝗯𝘀𝗰𝗿𝗶𝗽𝘁𝗶𝗼𝗻𝘀 𝗮𝗿𝗲 𝗽𝗿𝗼𝗿𝗮𝘁𝗲𝗱. 𝗜𝗳 𝘆𝗼𝘂 𝗶𝗻𝘀𝘁𝗮𝗹𝗹 𝗮𝗻𝗱 𝘂𝗻𝘀𝘂𝗯𝘀𝗰𝗿𝗶𝗯𝗲 𝘁𝗵𝗶𝘀 𝗽𝗹𝘂𝗴𝗶𝗻 𝗶𝗻 𝗼𝗻𝗲 𝗱𝗮𝘆 𝘁𝗼 𝘁𝗲𝘀𝘁 𝗶𝘁 𝗼𝘂𝘁, 𝘆𝗼𝘂'𝗹𝗹 𝗼𝗻𝗹𝘆 𝗯𝗲 𝗰𝗵𝗮𝗿𝗴𝗲𝗱 𝟭/𝟯𝟬𝘁𝗵 𝗼𝗳 𝘁𝗵𝗲 𝗺𝗼𝗻𝘁𝗵𝗹𝘆 𝘀𝘂𝗯𝘀𝗰𝗿𝗶𝗽𝘁𝗶𝗼𝗻 𝗳𝗲𝗲.

📖 𝗦𝘁𝗲𝗽-𝗯𝘆-𝗦𝘁𝗲𝗽 𝗶𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝗶𝗼𝗻𝘀 𝗮𝗿𝗲 𝗶𝗻 𝘁𝗵𝗲 "𝗜𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝗶𝗼𝗻𝘀" 𝘀𝗲𝗰𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝗗𝗲𝗺𝗼 𝗘𝗱𝗶𝘁𝗼𝗿 𝗶𝘀 𝗶𝗻 𝘁𝗵𝗲 "𝗟𝗶𝗻𝗸𝘀" 𝘀𝗲𝗰𝘁𝗶𝗼𝗻 𝗼𝗳 𝘁𝗵𝗲 𝗣𝗹𝘂𝗴𝗶𝗻 𝗣𝗮𝗴𝗲.

Our plugin portfolio: https://bubble.io/contributor/wiseable-1586609424436x711052886532460500

Contact us at [email protected] for feature requests or support questions.

$129

One time  •  Or  $8/mo

4.8 stars   •   12 ratings
735 installs  
This plugin does not collect or track your personal data.

Platform

Web & Native mobile

Contributor details

wise:able
Joined 2020   •   122 Plugins
View contributor profile

Instructions

1️⃣: AWS S3 DROPZONE
=============================

📋 ELEMENT DESCRIPTION
--------------------------------
AWS S3 DROPZONE is a visual element that lets users drop or select single or multiple files, resize and compress them, and upload, open, and delete them, with advanced multipart upload capabilities for large files.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using this deployment template:

https://console.aws.amazon.com/cloudformation/home?#/stacks/create/review?stackName=BubbleS3&param_BucketName=BucketNameOfYourChoice&templateURL=https://bubble-resources.s3.amazonaws.com/deployment-assets/CloudFormation-AWSS3Plugin.yaml

You will find the required parameter values used across the plugin in the "OUTPUTS" tab of the created stack.

Otherwise, follow these manual steps:

0) Sign up for AWS S3: https://console.aws.amazon.com/s3/home?p=ply&cp=bn&ad=c

1) In AWS S3, create a BUCKET, go to PERMISSIONS, and paste the following code into the CROSS-ORIGIN RESOURCE SHARING (CORS) section:

[
   {
       "AllowedHeaders": [
           "*"
       ],
       "AllowedMethods": [
           "GET",
           "PUT",
           "POST",
           "HEAD",
           "DELETE"
       ],
       "AllowedOrigins": [
           "*"
       ],
       "ExposeHeaders": [
           "Content-Length",
           "ETag",
           "Connection"
       ],
       "MaxAgeSeconds": 0
   }
]

In the "BLOCK PUBLIC ACCESS" area unlock public access to all options to allow access via the links generated by the plugin.

2) Create your AWS S3 ACCESS KEY & ACCESS KEY SECRET, then attach to those credentials the AWS S3 FULL-ACCESS policy scoped to the said BUCKET: https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys
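
ℹ️ For reference, a minimal sketch of such a scoped policy, written as a TypeScript constant that prints the JSON you would attach in the IAM console. The bucket name is a placeholder, not a value from this plugin:

// A minimal sketch of a policy granting full S3 access to a single bucket.
// "YOUR-BUCKET" is a placeholder; adapt it to your own bucket name.
const bucketFullAccessPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Action: "s3:*",
      Resource: [
        "arn:aws:s3:::YOUR-BUCKET",
        "arn:aws:s3:::YOUR-BUCKET/*",
      ],
    },
  ],
};
console.log(JSON.stringify(bucketFullAccessPolicy, null, 2));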

3) Register on plugins.wiseable.io. Create a new Credential which associates your BUBBLE APP URL, AWS ACCESS KEY & ACCESS KEY SECRET.
The registration service will generate your PUBLIC ACCESS KEY. This key serves as a secure proxy for your real ACCESS KEY. It allows your application to communicate with the service without exposing your real ACCESS KEY. Since this PUBLIC ACCESS KEY is explicitly tied to your registered BUBBLE APP URL, it can only be used from that domain, ensuring that even if the key is publicly visible, it remains safe and cannot be misused by unauthorized sources.

4) In the Plugin Settings, enter the following:
  - AWS S3 ACCESS KEY & ACCESS KEY SECRET (for back-end actions)
  - AWS SERVICE ENDPOINT REGION (if not provided, default endpoint is "us-east-1")
  - PUBLIC ACCESS KEY (generated from plugins.wiseable.io) (for front-end elements)

5) Drag and drop the AWS S3 DROPZONE visual element onto the page of your app that will contain the dropzone.

6) Select the AWS S3 DROPZONE element and, in the APPEARANCE section, configure the following fields:

FIELDS:
- RESULT DATA TYPE: Returned type. Must always be set to FILE METADATA (AWS S3 DROPZONE).
- ACCEPTED FILES TYPES: Either a case-insensitive filename extension (format .ext; examples: .jpg, .pdf, .doc) or a standard MIME type with no extension (format type/subtype; examples: image/png, video/mp4). The string audio/* matches any audio file, video/* matches any video file, and image/* matches any image file. For more information, see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17. Leave empty to accept any type.
- CAPTURE TYPE: When capturing live media on mobile devices, specifies either the user-facing or environment-facing media input to use, such as camera or microphone. Supported values: user | environment
- RETAIN FOLDER STRUCTURE: Retain folder structure when uploading a folder.
- MAX NUMBER OF FILES: Limit the maximum number of files in the Dropzone. Files beyond this limit will be in REJECTED FILES state.
- MAX FILE SIZE (MIB): Maximum allowed file size in MiB.
- UPLOAD BUCKET: Bucket Name of the file.
- UPLOAD FOLDER: Upload Folder. The format must be [path/]. Example 1: path1/path2/. Example 2: Leave empty if the file is at the root of the bucket.
- REGION: Bucket Region. See https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-regions-quotas.html#s3-tables-regions
- MAX PARALLEL UPLOADS: Max Parallel Uploads of Files, excluding chunks.
- AUTO UPLOAD: Set to true to trigger upload as soon as dropped in the Dropzone.
- PART SIZE (MIB): Part size in MiB to split the file into. Minimum value of 5 MiB. Please make sure that the total count of chunks for a given file does not exceed 1000 (see the sizing sketch after this list).
- CREATE THUMBNAILS: Create thumbnails upon file drop.
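
ℹ️ To respect both the 5 MiB minimum and the 1000-chunk limit, you can derive a safe PART SIZE from the largest file you expect. A minimal sketch of that arithmetic (the function name is illustrative, not part of the plugin):

// Pick a part size (in MiB) so a file splits into at most `maxChunks` parts,
// while respecting the 5 MiB minimum part size.
function suggestedPartSizeMiB(fileSizeMiB: number, maxChunks = 1000): number {
  return Math.max(5, Math.ceil(fileSizeMiB / maxChunks));
}

// Example: a 12 GiB file (12288 MiB) needs parts of at least 13 MiB.
console.log(suggestedPartSizeMiB(12288)); // 13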


IMAGE RESIZING & COMPRESSION SETTINGS:
- MAX WIDTH: If set, the image will be resized to this maximum width before upload. The other dimension will be adjusted proportionally to maintain the aspect ratio and avoid distortion.
- MAX HEIGHT: If set, the image will be resized to this maximum height before upload. The other dimension will be adjusted proportionally to maintain the aspect ratio and avoid distortion.
- IMAGE QUALITY: Image quality. The higher the number the higher the quality. Valid range is 0.0-1.0.

VIDEO RESIZING & COMPRESSION SETTINGS:
- CODEC: Target codec. Valid values: libx264 | libx265 | libvpx-vp9 | av1
- QUALITY FACTOR: Quality factor (0=best, 51=worst). Valid values: 0–51.
- PRESET: Encoding speed vs. compression. Valid values: ultrafast | superfast | veryfast | faster | fast | medium | slow | slower | veryslow.
- TUNE: Content optimization. Valid values: film | animation | fastdecode | zerolatency.
- OUTPUT RESOLUTION: Output resolution in WxH format (e.g., 1280x720).

AUDIO COMPRESSION SETTINGS:
- CODEC: Target codec. Valid values: aac | libmp3lame | libvorbis | libopus | pcm_s16le | pcm_s24le | pcm_s32le.
- BITRATE: Target Bitrate. Valid values: 32k | 64k | 96k | 128k | 160k | 192k | 224k | 256k | 320k

7) Integrate the logic into your application using the following AWS S3 DROPZONE events, states, and actions:

EVENTS:
- ACCEPTED FILE LOOP: Event triggered for each accepted file, satisfying the ACCEPTED FILES TYPES and MAX FILE SIZE (MIB).
- REJECTED FILE LOOP: Event triggered for each rejected file, e.g. not satisfying the ACCEPTED FILES TYPES or the MAX FILE SIZE (MIB).
- UPLOADED FILE LOOP: Event triggered upon successful file upload.
- ALL FILES UPLOADED: Event triggered when all files have been processed.
- ERROR: Event triggered upon error.

EXPOSED STATES:
Use any element able to show/process the data of interest (such as a Group with a Text field) stored within the result of the following states of the AWS S3 DROPZONE element:
- ACCEPTED FILES: List of files' metadata satisfying the ACCEPTED FILES TYPES criteria in the dropzone. The available metadata are:
 • file_name - name of the dropped file.
 • upload_percentage - number between 0 and 100 showing the upload progress.
 • compression_percentage - number between 0 and 100 showing the compression progress.
 • file_size_bytes - size of the file in bytes.
 • object_url - value to set as input of the previewer.
 • mime_type - MIME type of the file; value to set as input of the previewer.
 • status - added, queued, uploading, compressing, error, success.
   ▪ added - the file has been dropped into the dropzone and the destination URL is not yet assigned.
   ▪ queued - the destination URL is assigned but other files are still uploading (files upload in parallel up to MAX PARALLEL UPLOADS).
   ▪ uploading - the file is uploading.
   ▪ compressing - the file is compressing.
   ▪ error - there was an error during the upload.
   ▪ success - the upload was successful.
 • uuid - a random identifier for this file.
 • upload_filepath - Path & File Name. The format is [path/]filename.ext.
 • upload_bytes_sent - upload progress in bytes.
 • bucket - bucket name where the file is uploaded.
 • region - region where the bucket is located.
- REJECTED FILES: List of rejected files' metadata. Same metadata as ACCEPTED FILES, plus the following:
 • message - rejection reason.
- TOTAL UPLOAD PROGRESS: Total uploaded bytes as a percentage of the combined size of the files in the upload list.
- DRAGGING OVER: Returns yes if a file is being dragged over the dropzone, no otherwise.
- ERROR MESSAGE: Populated upon ERROR event.

ELEMENT ACTIONS - TRIGGERED IN WORKFLOW:
- REMOVE FILE FROM: Removes the specified file from the dropzone, identified by its UUID.
- RESET: Resets the dropzone to its initial state.
- CANCEL FILE UPLOAD: Cancels the upload of the specified file, identified by its UUID.
- PROCESS UPLOAD QUEUE: Manually triggers the upload of queued files when AUTO UPLOAD is set to no.

2️⃣: AWS DROPZONE FILE PREVIEWER
================================

📋 ELEMENT DESCRIPTION
--------------------------------
AWS DROPZONE FILE PREVIEWER is a visual element that displays a media preview, along with controls when supported.

🔧 STEP-BY-STEP SETUP
--------------------------------

1) Drag and drop the AWS DROPZONE FILE PREVIEWER visual element onto your app's page.

2) Select the AWS DROPZONE FILE PREVIEWER element and, in the APPEARANCE section, configure the following fields:

FIELDS:
- FILE OBJECT URL: Must contain the OBJECT_URL of the file to preview of the AWS S3 DROPZONE's ACCEPTED FILES state.
- MIME TYPE: Must contain the MIME-TYPE of the file to preview of the AWS S3 DROPZONE's ACCEPTED FILES state.
- SHOW CONTROLS: Show or hide media controls.

3) Integrate the logic into your application using the following AWS DROPZONE FILE PREVIEWER states:

EXPOSED STATES:
Use any element able to show/process the data of interest (such as a Group with a Text field) stored within the result of the following states of the AWS DROPZONE FILE PREVIEWER element:
- IS PREVIEWABLE: Returns yes if the media is previewable.

3️⃣: AWS S3 DROPZONE FILE UTILITIES (FRONT-END)
===============================================

📋 ELEMENT DESCRIPTION
--------------------------------
AWS S3 DROPZONE FILE UTILITIES (FRONT-END) provides front-end actions for AWS S3 operations with client-side processing, ideal for mobile applications and improved responsiveness.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0 to 4 of the AWS S3 DROPZONE setup.

1) Add the AWS S3 DROPZONE FILE UTILITIES (FRONT-END) element to the page where S3 operations must be integrated.

2) Integrate the logic into your application using the following element's states and actions:

EVENTS:
- SUCCESS: Event triggered upon success.
- ERROR: Event triggered upon error.

EXPOSED STATES:
- RESULTS: Populated upon SUCCESS event. Returns the operation results.
- ERROR MESSAGE: Populated upon ERROR event. Always check this state and implement error handling using the ERROR event to provide a better user experience.
- REQUESTED ACTION: The most recently requested action.

ELEMENT ACTIONS - TRIGGERED IN WORKFLOW:
- GENERATE DOWNLOAD PRESIGNED EXPIRING URL (FRONT-END): Generates a URL allowing access to the specified object, expiring after the specified duration.
- DELETE FILE OR FOLDER FROM S3 (FRONT-END): Deletes a file or folder from your AWS S3 Bucket.
- SET FILE PUBLIC ACCESS IN S3 (FRONT-END): Enables or disables public access via the virtual-hosted-style URL.

4️⃣: GET UPLOAD PRESIGNED EXPIRING URL (BACK-END)
===============================================

📋 ACTION DESCRIPTION
--------------------------------
GET UPLOAD PRESIGNED EXPIRING URL generates a URL allowing an upload to the specified bucket and path. This action is used in conjunction with the AWS S3 DROPZONE visual element.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0, 2, and 3 of the AWS S3 DROPZONE setup.

1) Set up the "GET UPLOAD PRESIGNED EXPIRING URL (BACK-END)" action in the workflow.

Input Fields:
- REGION: Bucket Region. Defaults to us-east-1 when not specified.
- BUCKET NAME: Bucket Name of the file.
- PATH & FILE NAME: Path & File Name to process. The format must be [path/]filename.ext.
 Example 1: path1/path2/filename.ext.
 Example 2: filename.ext if the file is at the root of the bucket.

Output Fields:
- URL: Returns the presigned expiring URL.
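
ℹ️ Under the hood, this corresponds to a standard S3 presigned PUT request. A minimal sketch with the AWS SDK for JavaScript v3, assuming placeholder bucket, key, and region (the expiry the plugin applies is handled for you; 3600 seconds here is illustrative):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

// Generate a URL that authorizes a single PUT to this key until it expires.
const uploadUrl = await getSignedUrl(
  s3,
  new PutObjectCommand({ Bucket: "YOUR-BUCKET", Key: "path1/path2/filename.ext" }),
  { expiresIn: 3600 } // seconds
);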

5️⃣: GENERATE DOWNLOAD PRESIGNED EXPIRING URL (BACK-END)
======================================================

📋 ACTION DESCRIPTION
--------------------------------
GENERATE DOWNLOAD PRESIGNED EXPIRING URL generates a URL allowing access to the specified object, expiring after the specified duration. Presigned URLs are useful when you want a user or customer to download a specific object from your bucket without requiring AWS security credentials, permissions, or public bucket access.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0, 2, and 3 of the AWS S3 DROPZONE setup.

1) Set up the "GENERATE DOWNLOAD PRESIGNED EXPIRING URL (BACK-END)" action in the workflow.

Input Fields:
- REGION: Bucket Region. Defaults to us-east-1 when not specified.
- BUCKET NAME: Bucket Name of the file.
- PATH & FILE NAME: Path & File Name to process. The format must be [path/]filename.ext.
 Example 1: path1/path2/filename.ext.
 Example 2: filename.ext if the file is at the root of the bucket.
- EXPIRE AFTER (S): The number of seconds before the presigned URL expires. Defaults to 60 minutes (3600 seconds).

Output Fields:
- URL: Returns the presigned expiring URL in Amazon S3 virtual-hosted-style format. Format is https://bucket-name.s3.Region.amazonaws.com/key-name?token.
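
ℹ️ A minimal sketch of the equivalent call with the AWS SDK for JavaScript v3, where EXPIRE AFTER (S) maps to the expiresIn option (bucket and key are placeholders):

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

// The generated URL grants read access to this object until it expires.
const downloadUrl = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: "YOUR-BUCKET", Key: "path1/path2/filename.ext" }),
  { expiresIn: 3600 } // EXPIRE AFTER (S)
);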

6️⃣: PUT FILE TO S3 (BACK-END)
============================

📋 ACTION DESCRIPTION
--------------------------------
PUT FILE TO S3 stores a file in your AWS S3 Bucket, returning the object key or URL if the operation is successful. The file must be less than 22 megabytes to be executable in a backend workflow.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0, 2, and 3 of the AWS S3 DROPZONE setup.

1) Optionally, add the "DETECT FILE TYPE" action from our FILE & MEDIA TYPE DETECTOR plugin to automatically detect the MIME type, which otherwise defaults to "application/octet-stream" in AWS S3 and might prevent the browser from displaying the media correctly.

2) Set up the "PUT FILE TO S3 (BACK-END)" action in the workflow.

Input Fields:
- FILE TO STORE (URL): File URL from the Bubble.io uploader, a protocol-relative URL (//server/file.ext), or an HTTPS file URL (https://server/file.ext). The file must be accessible through the HTTPS protocol and less than 22 megabytes.
- CONTENT TYPE: A standard MIME type describing the format of the contents. Format type/subtype. Example: image/png, video/mp4. For more information, see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17. Defaults to "application/octet-stream".
- REGION: Bucket Region. Defaults to us-east-1 when not specified.
- BUCKET NAME: Bucket Name to which the file will be saved.
- PATH & FILE NAME: Path & File Name to save to. The format must be [path/]filename.ext.
 Example 1: path1/path2/filename.ext.
 Example 2: filename.ext if the file is at the root of the bucket.
- GENERATE URLS: Generate virtual-hosted-style URLs format in the response. Format is https://bucket-name.s3.Region.amazonaws.com/key-name.

Output Fields:
- FILE: Returns the object key or URL. For URLs, the format is https://bucket-name.s3.Region.amazonaws.com/key-name. Use this URL to retrieve the file, provided your bucket's permissions allow GetObject access from the Internet.
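
ℹ️ Conceptually, the action fetches the source URL and writes the bytes to S3. A minimal sketch with the AWS SDK for JavaScript v3 (URL, bucket, and key are placeholders; the plugin's internals may differ):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// Fetch the source file over HTTPS, then store it in S3.
const response = await fetch("https://server/file.ext");
const body = new Uint8Array(await response.arrayBuffer());

await s3.send(new PutObjectCommand({
  Bucket: "YOUR-BUCKET",
  Key: "path1/path2/filename.ext",
  Body: body,
  ContentType: "image/png", // falls back to application/octet-stream if omitted
}));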

7️⃣: GET FILE BASE64-DATAURI FROM S3 (BACK-END)
==============================================

📋 ACTION DESCRIPTION
--------------------------------
GET FILE BASE64-DATAURI FROM S3 retrieves the file's Data URI from your AWS S3 Bucket, encoded in Base64. Use this action to load the URI into an element supporting this format, such as an audio player, or to store the data stream in a database. The file must be less than 4.5 megabytes to be executable in a backend workflow.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0, 2, and 3 of the AWS S3 DROPZONE setup.

1) Set up the "GET FILE BASE64-DATAURI FROM S3 (BACK-END)" action in the workflow.

Input Fields:
- REGION: Bucket Region. Defaults to us-east-1 when not specified.
- BUCKET NAME: Bucket Name from which the file will be retrieved.
- PATH & FILE NAME: Path & File Name to retrieve. The file must be less than 4.5 megabytes. The format must be [path/]filename.ext.
 Example 1: path1/path2/filename.ext.
 Example 2: filename.ext if the file is at the root of the bucket.

Output Fields:
- BASE64 DATAURI: Returns the base64-encoded file data.
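
ℹ️ A minimal sketch of the equivalent retrieval with the AWS SDK for JavaScript v3 (bucket and key are placeholders). Note that S3 reports the object's MIME type as ContentType:

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

const { Body, ContentType } = await s3.send(
  new GetObjectCommand({ Bucket: "YOUR-BUCKET", Key: "path1/path2/filename.ext" })
);

// Encode the object bytes as a Base64 data URI.
const bytes = await Body!.transformToByteArray();
const dataUri =
  `data:${ContentType ?? "application/octet-stream"};base64,` +
  Buffer.from(bytes).toString("base64");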

8️⃣: PUT BASE64 DATAURI TO S3 (BACK-END)
======================================

📋 ACTION DESCRIPTION
--------------------------------
PUT BASE64 DATAURI TO S3 stores a Base64 Data URI as a file in your AWS S3 Bucket, returning the object key or URL if the operation is successful. The file must be less than 22 megabytes to be executable in a backend workflow.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0, 2, and 3 of the AWS S3 DROPZONE setup.

1) Set up the "PUT BASE64 DATAURI TO S3 (BACK-END)" action in the workflow.

Input Fields:
- BASE64 DATAURI: Base64-encoded DataURI. The expected format is data:[mimeType];base64,[base64Data] and length must be less than 22 megabytes.
- REGION: Bucket Region. Defaults to us-east-1 when not specified.
- BUCKET NAME: Bucket Name to which the file will be saved.
- PATH & FILE NAME: Path & File Name to save to. The format must be [path/]filename.ext.
 Example 1: path1/path2/filename.ext.
 Example 2: filename.ext if the file is at the root of the bucket.
- GENERATE URLS: Generate virtual-hosted-style URLs format in the response. Format is https://bucket-name.s3.Region.amazonaws.com/key-name.

Output Fields:
- FILE: Returns the object key or URL. For URLs, the format is https://bucket-name.s3.Region.amazonaws.com/key-name. Use this URL to retrieve the file, provided your bucket's permissions allow GetObject access from the Internet.
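
ℹ️ Conceptually, the action splits the data URI into its MIME type and Base64 payload, then writes the decoded bytes to S3. A minimal sketch (the data URI, bucket, and key are placeholders):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// Split a data:[mimeType];base64,[base64Data] URI into its two parts.
const dataUri = "data:image/png;base64,iVBORw0KGgo..."; // placeholder
const match = dataUri.match(/^data:(.+?);base64,(.+)$/);
if (!match) throw new Error("Invalid Base64 data URI");
const [, mimeType, base64Data] = match;

await s3.send(new PutObjectCommand({
  Bucket: "YOUR-BUCKET",
  Key: "path1/path2/filename.ext",
  Body: Buffer.from(base64Data, "base64"),
  ContentType: mimeType,
}));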

9️⃣: DELETE FILE OR FOLDER FROM S3 (BACK-END)
============================================

📋 ACTION DESCRIPTION
--------------------------------
DELETE FILE OR FOLDER FROM S3 deletes a file or folder from your AWS S3 Bucket, returning the object key or URL if the operation is successful.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0, 2, and 3 of the AWS S3 DROPZONE setup.

1) Set up the "DELETE FILE OR FOLDER FROM S3 (BACK-END)" action in the workflow.

Input Fields:
- REGION: Bucket Region. Defaults to us-east-1 when not specified.
- BUCKET NAME: Bucket Name from which the file will be deleted.
- PATH TO FILE OR FOLDER: Path to File or Folder to delete. The format must be [path/][filename.ext].
 Example 1: path1/path2/.
 Example 2: path1/path2/filename.ext.
 Example 3: filename.ext if the file is at the root of the bucket.
- GENERATE URLS: Generate virtual-hosted-style URLs format in the response. Format is https://bucket-name.s3.Region.amazonaws.com/key-name.

Output Fields:
- FILE OR FOLDER: Returns the object key or URL.
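
ℹ️ One plausible implementation of the file-vs-folder distinction with the AWS SDK for JavaScript v3: a trailing slash is treated as a folder, whose keys are listed and batch-deleted (bucket and paths are placeholders; the plugin's internals may differ):

import {
  S3Client,
  DeleteObjectCommand,
  DeleteObjectsCommand,
  ListObjectsV2Command,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });
const bucket = "YOUR-BUCKET";
const path = "path1/path2/"; // trailing slash => folder

if (path.endsWith("/")) {
  // Folder: list every key under the prefix, then batch-delete them.
  const listed = await s3.send(new ListObjectsV2Command({ Bucket: bucket, Prefix: path }));
  const objects = (listed.Contents ?? []).map((o) => ({ Key: o.Key! }));
  if (objects.length > 0) {
    await s3.send(new DeleteObjectsCommand({ Bucket: bucket, Delete: { Objects: objects } }));
  }
} else {
  // Single object.
  await s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: path }));
}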

🔟: GET FILE METADATA FROM S3 (BACK-END)
=======================================

📋 ACTION DESCRIPTION
--------------------------------
GET FILE METADATA FROM S3 retrieves the metadata of the specified object.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0, 2, and 3 of the AWS S3 DROPZONE setup.

1) Set up the "GET FILE METADATA FROM S3 (BACK-END)" action in the workflow.

Input Fields:
- REGION: Bucket Region. Defaults to us-east-1 when not specified.
- BUCKET NAME: Bucket Name of the file.
- PATH & FILE NAME: Path & File Name to process. The format must be [path/]filename.ext.
 Example 1: path1/path2/filename.ext.
 Example 2: filename.ext if the file is at the root of the bucket.

Output Fields:
- FILE SIZE: Returns the size of the file's content in bytes.
- CREATED AT: Returns the date and time at which the file was created (RFC 3339 date-time).
- TAGS: Returns a list of tags. Each tag is formatted as key=value.
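
ℹ️ A minimal sketch of the equivalent calls with the AWS SDK for JavaScript v3 (bucket and key are placeholders). Note that S3 itself exposes a LastModified timestamp, which equals the creation time for objects written once:

import { S3Client, HeadObjectCommand, GetObjectTaggingCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });
const params = { Bucket: "YOUR-BUCKET", Key: "path1/path2/filename.ext" };

// HeadObject returns size and timestamp without downloading the body.
const head = await s3.send(new HeadObjectCommand(params));
console.log(head.ContentLength, head.LastModified);

// Tags come from a separate call, as { Key, Value } pairs.
const tagging = await s3.send(new GetObjectTaggingCommand(params));
const tags = (tagging.TagSet ?? []).map((t) => `${t.Key}=${t.Value}`);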

1️⃣1️⃣: LIST FILES FROM S3 (BACK-END)
==================================

📋 ACTION DESCRIPTION
--------------------------------
LIST FILES FROM S3 returns the list of file keys or URLs from an S3 bucket.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0, 2, and 3 of the AWS S3 DROPZONE setup.

1) Set up the "LIST FILES FROM S3 (BACK-END)" action in the workflow.

Input Fields:
- REGION: Bucket Region. Defaults to us-east-1 when not specified.
- BUCKET NAME: Bucket Name from which the list will be retrieved.
- PREFIX FILTER: Limit the response to keys that begin with the specified string, starting from the bucket name root.
- GENERATE URLS: Generate virtual-hosted-style URLs format in the response. Format is https://bucket-name.s3.Region.amazonaws.com/key-name.

Output Fields:
- LIST: Returns the list of file keys or URLs.
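
ℹ️ A minimal sketch of the equivalent listing with the AWS SDK for JavaScript v3, including the pagination S3 applies beyond 1000 keys (bucket and prefix are placeholders):

import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// Collect all keys under a prefix, following continuation tokens.
const keys: string[] = [];
let token: string | undefined;
do {
  const page = await s3.send(new ListObjectsV2Command({
    Bucket: "YOUR-BUCKET",
    Prefix: "path1/",
    ContinuationToken: token,
  }));
  keys.push(...(page.Contents ?? []).map((o) => o.Key!));
  token = page.NextContinuationToken;
} while (token);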

1️⃣2️⃣: COPY FILE BETWEEN S3 BUCKETS (BACK-END)
=============================================

📋 ACTION DESCRIPTION
--------------------------------
COPY FILE BETWEEN S3 BUCKETS copies a file from a source bucket and path to a target bucket and path, then returns the object key or URL of the target file.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0, 2, and 3 of the AWS S3 DROPZONE setup.

1) Set up the "COPY FILE BETWEEN S3 BUCKETS (BACK-END)" action in the workflow.

Input Fields:
- SOURCE BUCKET REGION: Source Bucket Region. Defaults to us-east-1 when not specified.
- SOURCE BUCKET NAME: Bucket Name of the source file.
- SOURCE PATH & FILE NAME: Source Path & File Name that will be copied. The format must be [path/]filename.ext.
 Example 1: path1/path2/filename.ext.
 Example 2: filename.ext if the file is at the root of the bucket.
- TARGET BUCKET REGION: Target Bucket Region. Defaults to us-east-1 when not specified.
- TARGET BUCKET NAME: Bucket Name to which the file will be copied.
- TARGET PATH & FILE NAME: Target Path & File Name. The format must be [path/]filename.ext.
 Example 1: path1/path2/filename.ext.
 Example 2: filename.ext if the file is at the root of the bucket.
- GENERATE URLS: Generate virtual-hosted-style URLs format in the response. Format is https://bucket-name.s3.Region.amazonaws.com/key-name.

Output Fields:
- FILE: Returns the object key or URL.
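
ℹ️ A minimal sketch of the equivalent server-side copy with the AWS SDK for JavaScript v3 (bucket names and keys are placeholders):

import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";

// The client is created against the TARGET bucket's region.
const s3 = new S3Client({ region: "us-east-1" });

await s3.send(new CopyObjectCommand({
  Bucket: "TARGET-BUCKET",
  Key: "target/path/filename.ext",
  // CopySource is "source-bucket/source-key", URL-encoded.
  CopySource: encodeURIComponent("SOURCE-BUCKET/source/path/filename.ext"),
}));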

1️⃣3️⃣: SET FILE PUBLIC ACCESS IN S3 (BACK-END)
=============================================

📋 ACTION DESCRIPTION
--------------------------------
SET FILE PUBLIC ACCESS IN S3 enables or disables public access via the virtual-hosted-style URL.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0, 2, and 3 of the AWS S3 DROPZONE setup.

The next two steps must be performed only if the automated configuration script has not been used.

1) In your BUCKET settings, in the "OBJECT OWNERSHIP" section, check the "ACLS ENABLED" option. This enables the object-level access grants required by this action.

2) Set up the "SET FILE PUBLIC ACCESS IN S3 (BACK-END)" action in the workflow.

Input Fields:
- REGION: Bucket Region. Defaults to us-east-1 when not specified.
- BUCKET NAME: Bucket Name of the file.
- PATH & FILE NAME: Path & File Name. The format must be [path/]filename.ext.
 Example 1: path1/path2/filename.ext.
 Example 2: filename.ext if the file is at the root of the bucket.
- PUBLIC ACCESS: Enable public access via the virtual-hosted-style URL.
- GENERATE URLS: Generate virtual-hosted-style URLs format in the response. Format is https://bucket-name.s3.Region.amazonaws.com/key-name.

Output Fields:
- FILE: Returns the object key or URL.
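
ℹ️ A minimal sketch of the equivalent ACL change with the AWS SDK for JavaScript v3 (bucket and key are placeholders; ACLs must be enabled on the bucket, see step 1 above):

import { S3Client, PutObjectAclCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

await s3.send(new PutObjectAclCommand({
  Bucket: "YOUR-BUCKET",
  Key: "path1/path2/filename.ext",
  ACL: "public-read", // use "private" to disable public access
}));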

1️⃣4️⃣: CREATE BUCKET IN S3 (BACK-END)
====================================

📋 ACTION DESCRIPTION
--------------------------------
CREATE BUCKET IN S3 creates a bucket in the specified region.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0, 2, and 3 of the AWS S3 DROPZONE setup.

1) Set up the "CREATE BUCKET IN S3 (BACK-END)" action in the workflow.

Input Fields:
- BUCKET REGION: Bucket Region. Defaults to us-east-1 when not specified.
- BUCKET NAME: Bucket Name to create.

Output Fields:
- BUCKET NAME: Returns the bucket name if successful.
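
ℹ️ A minimal sketch of the equivalent call with the AWS SDK for JavaScript v3 (bucket name and region are placeholders). One S3 quirk worth knowing: us-east-1 is the default region and must not be passed as a LocationConstraint:

import { S3Client, CreateBucketCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "eu-west-1" });

await s3.send(new CreateBucketCommand({
  Bucket: "your-new-bucket-name",
  // Required for any region other than us-east-1; omit it entirely for us-east-1.
  CreateBucketConfiguration: { LocationConstraint: "eu-west-1" },
}));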

1️⃣5️⃣: DELETE BUCKET IN S3 (BACK-END)
====================================

📋 ACTION DESCRIPTION
--------------------------------
DELETE BUCKET IN S3 deletes the specified bucket, which must be empty.

🔧 STEP-BY-STEP SETUP
--------------------------------
ℹ️ The steps from 0) to 2) can be automatically performed by using the deployment template mentioned in the AWS S3 DROPZONE setup.

0) If not already done, perform steps 0, 2, and 3 of the AWS S3 DROPZONE setup.

1) Set up the "DELETE BUCKET IN S3 (BACK-END)" action in the workflow.

Input Fields:
- REGION: Bucket Region. Defaults to us-east-1 when not specified.
- BUCKET NAME: Bucket Name to delete.

Output Fields:
- BUCKET NAME: Returns the bucket name if successful.
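
ℹ️ A minimal sketch of the equivalent call with the AWS SDK for JavaScript v3 (bucket name is a placeholder):

import { S3Client, DeleteBucketCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// Fails with BucketNotEmpty unless every object has been deleted first.
await s3.send(new DeleteBucketCommand({ Bucket: "your-bucket-name" }));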

1️⃣6️⃣: GET JOB STATUS FROM SQS
=============================

📋 ACTION DESCRIPTION
--------------------------------
GET JOB STATUS FROM SQS retrieves the Job status based on a JOBID from a valid AWS SQS QUEUE.

🔧 STEP-BY-STEP SETUP
--------------------------------
Please refer to the plugin requiring AWS SQS for detailed setup instructions.

Input Fields:
- AWS SQS QUEUE URL: URL of the queue to poll, which contains the job's message to get the status from.
- JOB ID: JobID to get the Status from.

Output Fields:
- JOB STATUS: Valid statuses are SUCCEEDED, POLLING, IN_PROGRESS, PARTIAL_SUCCESS, FAILED, and ERROR, with error or failure messages appended to the status.
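
ℹ️ A heavily hedged sketch of what such polling can look like with the AWS SDK for JavaScript v3. The message body fields (jobId, status) are assumptions for illustration only; the actual message format is defined by the plugin that produces the jobs:

import { SQSClient, ReceiveMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });

// Poll the queue and look for a message whose body references our job.
const { Messages } = await sqs.send(new ReceiveMessageCommand({
  QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/your-queue", // placeholder
  MaxNumberOfMessages: 10,
  WaitTimeSeconds: 5, // long polling
}));

const match = (Messages ?? [])
  .map((m) => JSON.parse(m.Body ?? "{}"))
  .find((body) => body.jobId === "YOUR-JOB-ID"); // hypothetical field

console.log(match?.status ?? "POLLING");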

🔍 IMPLEMENTATION EXAMPLE
======================
Feel free to browse the app editor via the Editor Link in the "Links" section for an implementation example.

ℹ️ ADDITIONAL INFORMATION
======================
▶ Presigned URL may expire before the set expiration time depending on your credentials: https://aws.amazon.com/premiumsupport/knowledge-center/presigned-url-s3-bucket-expiration/

▶ Permissions details
- GET UPLOAD PRESIGNED EXPIRING URL (BACK-END) requires PutObject permission.
- GENERATE DOWNLOAD PRESIGNED EXPIRING URL (BACK-END) requires GetObject permission.
- PUT FILE TO S3 (BACK-END) requires PutObject permission.
- GET FILE BASE64 DATAURI FROM S3 (BACK-END) requires GetObject permission.
- PUT BASE64 DATAURI TO S3 (BACK-END) requires PutObject permission.
- DELETE FILE OR FOLDER FROM S3 (BACK-END) requires DeleteObject permission.
- GET FILE METADATA FROM S3 (BACK-END) requires GetObject, GetObjectTagging permissions.
- LIST FILES FROM S3 (BACK-END) requires ListBucket permission.
- COPY FILE BETWEEN S3 BUCKETS (BACK-END) requires GetObject, PutObject permissions.
- SET FILE PUBLIC ACCESS IN S3 (BACK-END) requires HeadObject, GetObject, PutObjectAcl permissions.
- CREATE BUCKET IN S3 (BACK-END) requires CreateBucket, PutBucketOwnershipControls, PutBucketPublicAccessBlock, PutBucketCors permissions.
- DELETE BUCKET IN S3 (BACK-END) requires DeleteBucket permission.

⚠️ TROUBLESHOOTING
================
Any plugin-related error will be posted to either:
- Your browser's JavaScript console: instructions at https://webmasters.stackexchange.com/questions/8525/how-do-i-open-the-javascript-console-in-different-browsers
- The Logs tab, "Server logs" section of your App Editor.

Make sure that "Plugin server side output" and "Plugin client side output" are selected in "Show Advanced". Server Logs details: https://manual.bubble.io/core-resources/bubbles-interface/logs-tab#server-logs

For front-end actions, you can also open your browser's developer console (F12 or Ctrl+Shift+I in most browsers) to view detailed error messages and logs.

Always check the ERROR MESSAGE state of the element and implement error handling using the ERROR event to provide a better user experience.

⚡ PERFORMANCE CONSIDERATIONS
===========================

𝗚𝗘𝗡𝗘𝗥𝗔𝗟
-------------
Back-end actions are capped at 30 seconds of execution time, which limits the largest result set retrievable from AWS S3; this cap does not apply to front-end actions.

⏱️ 𝗕𝗔𝗖𝗞-𝗘𝗡𝗗 𝗔𝗖𝗧𝗜𝗢𝗡 𝗦𝗧𝗔𝗥𝗧 𝗗𝗘𝗟𝗔𝗬
-----------------------------------------------
Each time a server-side action is called, Bubble initializes a small virtual machine to execute the action. If the same action is called shortly after, the caching mechanism kicks in, resulting in faster execution on subsequent calls.

A useful workaround is to fire a dummy execution at page load, which pre-warms the Bubble engine for the next few minutes, reducing the impact of cold starts for your users.

𝗣𝗨𝗧 𝗙𝗜𝗟𝗘 𝗧𝗢 𝗦𝟯 / 𝗣𝗨𝗧 𝗕𝗔𝗦𝗘𝟲𝟰 𝗗𝗔𝗧𝗔𝗨𝗥𝗜 𝗧𝗢 𝗦𝟯 (𝗕𝗔𝗖𝗞-𝗘𝗡𝗗)
-------------------------
These implementations post file data to AWS S3. The file must be less than 22 megabytes to be executable in a backend workflow, as the maximum allowable file size is capped by Bubble.io's maximum Workflow Action execution time for this transfer operation.

𝗚𝗘𝗧 𝗙𝗜𝗟𝗘 𝗕𝗔𝗦𝗘𝟲𝟰-𝗗𝗔𝗧𝗔𝗨𝗥𝗜 𝗙𝗥𝗢𝗠 𝗦𝟯 (𝗕𝗔𝗖𝗞-𝗘𝗡𝗗)
----------------------------------------------
This implementation gets file data from AWS S3.
The file must be less than 4.5 megabytes to be executable in a backend workflow, as the maximum allowable file size is capped by Bubble.io's maximum Workflow Action execution time for this transfer operation.

❓ QUESTIONS?
===========
Contact us at [email protected] for feature requests or support questions.

Types

This plugin can be found under the following types:
Api   •   Background Services   •   Element   •   Event   •   Action

Categories

This plugin can be found under the following categories:
Technical   •   Data (things)   •   Media   •   Video   •   Image   •   Input Forms   •   Visual Elements

Resources

Support contact
Documentation
Tutorial

Rating and reviews

Average rating (4.8)

Absolutely Amazing Plugin + Support
May 4th, 2025
This plugin works really great, exactly as described. What impressed me even more was the developer’s responsiveness. I had a specific feature request, and not only did they listen, but they added the feature very quickly. Highly recommend it to anyone considering it!
Very efficient
April 2nd, 2025
I used it to build an app for photo albums and it works perfectly. Support has been very fast to help me fix an issue I had. Great plugin
Works great!
June 23rd, 2024
Works great and the support was quick to reply and fixed the issue I was having with my setup!
Great add on!
January 29th, 2024
When it is crucial to store files non-public. It is also inevitable to sometimes connect with other pieces of software (e.g. AI API integrations). As this outbound security is not inherent to Bubble's setup it is great that people invest time to create a plugin like this and make it available! Thanks!
Great Plugin
November 27th, 2023
Any chance you could add deleting an empty bucket? Thanks