
OpenLLM Hub

Published January 2026   •   Updated January 2026

Plugin details

🚀 EXCLUSIVE: NOW SUPPORTING LLAMA 4! Get access to the cutting-edge Llama 4 Scout & Maverick models via Groq. The fastest and smartest open-source AI is now in Bubble.
WHY OpenLLM Hub?

1. 👁️ Llama 4 Vision (Multimodal) The new Llama 4 models can "see". Upload images directly to the API to get descriptions, extract data from receipts, or analyze photos.

Action: Analyze Image (Vision)

2. 🧠 Auto-Context Memory (Chatbot Logic) Stop fighting with JSON lists. Our server-side action prepares the perfect chat history for you.

Automatically manages token limits.

Stitches User vs. AI messages effortlessly.

Action: Prepare Chat Context (a conceptual sketch of this stitching logic follows this list)

3. 🧱 Strict JSON Mode Force the AI to output structured data directly to your database. Perfect for data extraction apps.
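
For anyone curious what "Auto-Context Memory" does conceptually, here is a rough Python sketch of stitching the two message lists into one alternating chat history under a token budget. The function name and the ~4-characters-per-token estimate are illustrative assumptions, not the plugin's actual implementation; in Bubble you simply run the server-side action.

# Conceptual sketch only (assumed logic, not the plugin's real code):
# interleave past user/AI messages, append the current message, then drop
# the oldest turns until a rough token budget is respected.
def prepare_chat_context(user_msgs, ai_msgs, current_msg, max_tokens=6000):
    history = []
    for user_text, ai_text in zip(user_msgs, ai_msgs):
        history.append({"role": "user", "content": user_text})
        history.append({"role": "assistant", "content": ai_text})
    history.append({"role": "user", "content": current_msg})

    def estimate_tokens(msgs):
        # Very rough heuristic: ~4 characters per token.
        return sum(len(m["content"]) for m in msgs) // 4

    while estimate_tokens(history) > max_tokens and len(history) > 1:
        history.pop(0)  # forget the oldest message first
    return history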

⚡️ POWERED BY GROQ

Zero Markup: Bring your own API Key.

Incredible Speed: 500+ tokens/s.

⚠️ Note on RAG: Vector Embeddings are currently disabled by the provider and will be re-added once available.


Demo Test: https://demo-app-56978.bubbleapps.io/version-test/openllmhub_free/1767896591797x820934283424475000

Editor: https://bubble.io/page?id=demo-app-56978&test_plugin=1767896733171x753026553353601000_current&tab=Design&name=openllmhub_free

Free

For everyone

0 ratings   •   1 install
This plugin does not collect or track your personal data.


Platform

Web & Native mobile

Contributor details

NoCoddo
Joined 2025   •   61 Plugins
View contributor profile

Instructions

⚙️ SETUP & QUICK START GUIDE
1️⃣ GET YOUR API KEY (FREE)
Go to https://console.groq.com/keys, create a key, and copy it.
In the Plugin Settings, paste it into the "Authorization" fields using this format:
👉 Bearer gsk_your_key_here_...
(Don't forget the word 'Bearer' followed by a space.)
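
For context, the value you paste is sent verbatim as the HTTP Authorization header when the plugin calls Groq. A minimal illustration (you never write this yourself in Bubble; the key is the placeholder from above):

# Illustration only: what the pasted value becomes in the outgoing request.
headers = {
    "Authorization": "Bearer gsk_your_key_here_...",  # the word "Bearer", one space, then your key
    "Content-Type": "application/json",
}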

2️⃣ HOW TO BUILD A CHATBOT WITH MEMORY (The 2-Step Logic)
To make the AI remember previous messages, use our Server-Side helper:

Step A: Run Action "Prepare Chat Context (Pro)"
  - User Messages: List of texts (e.g., Current User's Chat Messages' Content)
  - AI Messages: List of texts (e.g., Current User's Chat Messages' AI Response)
  - Current Message: Input's value
  - Max Tokens: 6000 (adjust as needed)

Step B: Run Action "Generate Text (Advanced)"
  - Model ID: meta-llama/llama-4-scout-17b-16e-instruct
  - Json Context List: Result of Step A's json_body
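
For reference, here is a hedged sketch of the raw call Step B ends up making, assuming Groq's OpenAI-compatible chat completions endpoint and the standard messages format. In Bubble you only wire "Result of Step A's json_body" into "Json Context List"; the example conversation below is made up.

# Sketch of the underlying request (assumed endpoint and message format).
import requests

messages = [  # roughly what "Prepare Chat Context" assembles from your lists
    {"role": "user", "content": "What plans do you offer?"},
    {"role": "assistant", "content": "We offer a Free plan and a Pro plan."},
    {"role": "user", "content": "How much is Pro?"},  # the Current Message
]

response = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    headers={"Authorization": "Bearer gsk_your_key_here_..."},
    json={
        "model": "meta-llama/llama-4-scout-17b-16e-instruct",
        "messages": messages,
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])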

3️⃣ HOW TO USE VISION (IMAGES)
Use the action "Analyze Image (Vision)".
  - Model ID: meta-llama/llama-4-scout-17b-16e-instruct
  - Image URL: Must start with "https://".
  ⚠️ TIP: If using Bubble's file uploader, add "https:" before the dynamic URL.
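
Under the hood this maps to a multimodal chat request. A minimal sketch, assuming Groq's OpenAI-compatible vision message format; the prompt text and image URL are placeholders:

# Sketch of the request behind "Analyze Image (Vision)" (assumed format).
import requests

body = {
    "model": "meta-llama/llama-4-scout-17b-16e-instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                # Bubble file URLs are protocol-relative ("//s3..."), hence the https: tip above.
                {"type": "image_url", "image_url": {"url": "https://example.com/receipt.jpg"}},
            ],
        }
    ],
}
response = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    headers={"Authorization": "Bearer gsk_your_key_here_..."},
    json=body,
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])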

4️⃣ STRICT JSON MODE (DATA EXTRACTION)
To force the AI to output clean data (for saving to DB):
  1. Set "Output Type" to "json_object".
  2. IMPORTANT: You MUST write in your System Prompt: "You are a JSON generator. Output valid JSON only."
  3. The result will be a parseable JSON text.
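
A hedged sketch of the equivalent raw request, assuming Groq's OpenAI-compatible response_format parameter; the extraction prompt and fields are only an example. The reply's content comes back as a JSON string you can parse or save directly.

# Sketch of what "Output Type = json_object" maps to (assumed parameter name).
import json
import requests

body = {
    "model": "meta-llama/llama-4-scout-17b-16e-instruct",
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system", "content": "You are a JSON generator. Output valid JSON only."},
        {"role": "user", "content": "Extract the customer name and total from: Invoice for Acme Corp, total $42.50"},
    ],
}
response = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    headers={"Authorization": "Bearer gsk_your_key_here_..."},
    json=body,
    timeout=30,
)
data = json.loads(response.json()["choices"][0]["message"]["content"])  # parseable JSON text
print(data)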

Types

This plugin can be found under the following types:
Api   •   Action

Categories

This plugin can be found under the following categories:
Technical   •   AI

Resources

Support contact
Tutorial

Rating and reviews

No reviews yet

This plugin has not received any reviews.