
PromptLock AI Security

Published December 2025   •   Updated December 2025

Plugin details

Protect Bubble AI workflows from prompt injection and PII leaks - a drop-in security firewall for any LLM integration.
Building AI features in Bubble? PromptLock scans every prompt before it reaches your AI model, blocking attacks and redacting sensitive data automatically.

What it does:

• Blocks prompt injection and jailbreak attempts in real time
• Redacts PII/PHI (emails, phones, credit cards, SSNs, medical data)
• Returns sanitized "clean text" + risk score + detailed threat analysis
• Works with any AI API: OpenAI, Anthropic, Gemini, and more

Built for compliance: Automatic policy detection for HIPAA (healthcare), PCI-DSS (payments), and GDPR (personal data). Add AI features without worrying about regulatory exposure.

Simple integration: One API call. Send user input → get back safe, redacted text with a threat assessment. Use the risk score to block, flag, or allow - your choice.
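The block/flag/allow decision above can be sketched as a small function. The field name risk_score follows the listing; the thresholds here are illustrative assumptions, not PromptLock defaults - the listing only says the score is yours to act on.

```javascript
// Sketch: acting on PromptLock's risk score in your own code.
// Thresholds are hypothetical examples - tune them to your app's risk tolerance.
function decideAction(analysis, blockAt = 0.8, flagAt = 0.5) {
  if (analysis.risk_score >= blockAt) return "block"; // reject the prompt outright
  if (analysis.risk_score >= flagAt) return "flag";   // allow, but log for review
  return "allow";                                     // pass through unchanged
}
```

In a Bubble workflow this maps naturally onto "Only when" conditions: run the AI step only when the decision is not "block".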

Perfect for:

• AI chatbots handling customer data
• Form inputs connected to LLMs
• Any workflow where users interact with AI

Setup: Get your free API key at https://promptlock.io (includes 3,000 free prompts/month)

Documentation & live demo:

https://promptlock.io/bubble

https://promptlock.io/demo

Free

For everyone

0 ratings   •   0 installs
This plugin may track or collect your data; see "Data collection and tracking" below.

Platform

Web & Native mobile

Contributor details

Matthew
Joined 2025   •   1 Plugin

Instructions

Step-by-Step Setup

Demo page: https://promptlock-plugin.bubbleapps.io

1. Install the Plugin

Search for "PromptLock" in the Bubble plugin marketplace and click install.

Plugin Name: PromptLock AI Security

2. Configure API Key

Add your PromptLock API key to the plugin settings.

Settings → Plugins → PromptLock → API Key

3. Add Security Action

Insert the "Analyze Prompt" action before your AI workflow step.

Actions → Plugins → PromptLock → Analyze Prompt

4. Use Redacted Output

Pass the redacted_prompt result to your OpenAI or other AI action.

AI Action Input: Result of step X (Analyze Prompt)'s redacted_prompt
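The step-4 handoff can be sketched as a guard function, assuming the field names shown above (redacted_prompt, plus the risk score from the Analyze Prompt result); the blocking threshold is an assumption for illustration:

```javascript
// Sketch: what the AI action should receive after Analyze Prompt runs.
// Returns the sanitized text, or null when the prompt is too risky to forward.
// The 0.8 threshold is a hypothetical example, not a PromptLock default.
function buildAiInput(analysis, blockThreshold = 0.8) {
  // Refuse to forward high-risk prompts to the model at all.
  if (analysis.risk_score >= blockThreshold) return null;
  // The AI action sees only the redacted text, never the raw user input.
  return analysis.redacted_prompt;
}
```

The key design point: the raw user input never reaches the AI step - only the redacted output (or nothing) does.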

Data collection and tracking

This plugin sends user-submitted prompts to the PromptLock API for real-time security analysis.
Data processed:

- Prompt text submitted through the plugin
- No personal account information is collected

How data is used:

- Prompts are analyzed for security threats (prompt injection, jailbreak attempts)
- PII/PHI is detected and redacted before reaching your AI model
- Analysis results (risk scores, threat classifications) are returned to your app

Data handling:

- Prompts are processed in real time and not stored permanently
- No data is sold or shared with third parties
- Processing occurs on secure servers

For complete details, see our Privacy Policy at https://promptlock.io/privacy

Types

This plugin can be found under the following types:
API   •   Action

Categories

This plugin can be found under the following categories:
AI   •   Compliance

Resources

Support contact
Documentation
Tutorial

Rating and reviews

No reviews yet

This plugin has not received any reviews.