
Open Source LLMs - Together.ai

Published February 2025
   •    Updated December 2025

Plugin details

Enhance your Bubble applications with the power of open-source Large Language Models (LLMs) & AI image generation tools using the Together AI connector plugin.
This plugin simplifies integration with a variety of open-source AI models, eliminating the need for costly server infrastructure. Together AI manages the complex technical aspects, allowing you to effortlessly invoke AI capabilities through backend workflows and seamlessly display generated responses within your Bubble frontend.

Open-source Large Language Models (LLMs) consistently deliver performance comparable to, or better than, their closed-source counterparts, ensuring you don't compromise on quality.

A key advantage of open-source models lies in their affordability and scalability. While closed-source LLMs often cost $10 to $30 per million tokens, open-source alternatives are priced significantly lower (as low as $0.10 per million tokens).
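To make the savings concrete, token costs scale linearly with usage. A minimal sketch of the arithmetic (the prices used here are illustrative examples from the rates quoted on this page):

```python
def token_cost(tokens: int, price_per_million: float) -> float:
    """Cost in dollars for a given token count at a per-million-token rate."""
    return tokens / 1_000_000 * price_per_million

# 1M tokens through a $0.10/1M open-source model vs. a $30/1M closed-source model
print(token_cost(1_000_000, 0.10))   # open-source rate
print(token_cost(1_000_000, 30.00))  # closed-source rate
```

At these rates, the same million-token workload costs $0.10 instead of $30.00, a 300x difference.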

Furthermore, Together AI provides a generous allocation of free credits upon signup, enabling you to experience the platform's capabilities firsthand without any initial cost.

This plugin allows you to call any of the 50 chat, language, or image AI models on Together AI, including the following:

Language & Chat:
moonshotai/Kimi-K2-Instruct | $1.00+ / 1M tokens
Gryphe/MythoMax-L2-13b-Lite | $0.10 / 1M tokens
Mixtral 8x22B Instruct v0.1 | $1.20 / 1M tokens
Mistral 7B Instruct v0.2 | $0.20 / 1M tokens
Mistral 7B Instruct v0.3 | $0.20 / 1M tokens
DeepSeek V3 | $1.25 / 1M tokens
DeepSeek LLM 67B Chat | $0.90 / 1M tokens
Llama 3.3 70B Instruct Turbo | $0.88 / 1M tokens
Llama 3.1 8B Instruct Turbo | $0.18 / 1M tokens
Llama 405B Instruct Turbo | $3.50 / 1M tokens
Qwen2.5 72B Instruct Turbo | $1.20 / 1M tokens
Qwen2-VL-72B-Instruct | $1.20 / 1M tokens
Qwen2.5-Coder-32B-Instruct | $0.80 / 1M tokens
Gemma-2 Instruct (27B) | $0.80 / 1M tokens
Gemma-2 Instruct (9B) | $0.30 / 1M tokens
Typhoon 1.5 8B Instruct | $0.18 / 1M tokens
Typhoon 1.5X 70B-awq | $0.88 / 1M tokens

Image (text prompt):
FLUX.1 [dev] | $0.025 / 1M pixels @ 28 steps
FLUX.1 [schnell] | $0.003 / 1M pixels @ 4 steps

Image (text + image prompt):
FLUX.1 Depth [dev] | $0.025 / 1M pixels @ 28 steps
FLUX.1 Canny [dev] | $0.025 / 1M pixels @ 28 steps
FLUX.1 Redux [dev] | $0.025 / 1M pixels @ 28 steps
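Since the FLUX models bill per million pixels, you can estimate the cost of a single generation from its resolution. A rough sketch (ignoring any adjustment for non-default step counts):

```python
def image_cost(width: int, height: int, price_per_megapixel: float) -> float:
    """Approximate cost in dollars of one image billed per million pixels."""
    return width * height / 1_000_000 * price_per_megapixel

# A 1024x1024 image on FLUX.1 [schnell] at $0.003 / 1M pixels
print(image_cost(1024, 1024, 0.003))
```

A 1024x1024 image is about 1.05 megapixels, so it costs roughly a third of a cent on the schnell model.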

We eliminate the complexity of integrating new LLMs. As new open-source models become available, simply choose the desired LLM from Together.ai's library and update your application settings. This streamlined process ensures your AI workflows always utilize the most advanced models.

$49

One time  •  Or  $5/mo

0 ratings
9 installs  
This plugin does not collect or track your personal data.

Platform

Web & Native mobile

Contributor details

Nebulum logo
Nebulum
Joined 2020   •   42 Plugins
View contributor profile

Instructions

TO BEGIN
👉  Step 1: Install the plugin. After installing, go to the plugin settings page within Bubble (click Plugins in the sidebar, then click this plugin to open its settings page). On this page you'll need to enter your Together.ai API key. The key must begin with "Bearer" (without quotation marks), followed by a space, and then your API key. Like this:

"Bearer 44569....."
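The "Bearer" prefix matters because (as is typical for API connector plugins) the field is presumably passed straight through as the HTTP Authorization header that Together AI's API expects. A sketch of what that header looks like, using the placeholder key above:

```python
# The value exactly as entered in the plugin settings page
api_key_field = "Bearer 44569....."

# Typical headers for a Together AI request; the plugin builds these for you
headers = {
    "Authorization": api_key_field,   # must read "Bearer <your key>"
    "Content-Type": "application/json",
}
print(headers["Authorization"].startswith("Bearer "))
```

If the word "Bearer" (plus the space) is missing, the API will reject the request as unauthorized.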

👉  Step 2: After you've entered your API key, add a button to a page so you can trigger a workflow. Within the workflow editor, search for "LLM" or "image" and the various language and image models will appear.

👉  Step 3: Within the actions, there is documentation telling you how to fill out the prompt and system messages.

👉  Step 4 (for LLM responses): After calling the LLM, in the next step you'll need to set a state with the response (i.e. "response of step 1"). Select "response of step 1, choices, first item, message content".

👉  Step 5 (for image responses): After calling the image model, in the next step you'll need to set a state with the response (i.e. "response of step 1"). Select "response of step 1, data, first item, URL".

That's it. You can now show the responses in your front-end by simply showing the value contained within the state holding the response.
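Steps 4 and 5 are just walking a path through the JSON that the API returns. Viewed as plain data, the paths look roughly like this (both responses below are hypothetical, abridged samples, not real API output):

```python
# Hypothetical, abridged LLM response (Step 4: choices → first item → message content)
llm_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello from an open-source LLM!"}}
    ]
}
text = llm_response["choices"][0]["message"]["content"]

# Hypothetical, abridged image response (Step 5: data → first item → URL)
image_response = {
    "data": [
        {"url": "https://example.com/generated.png"}
    ]
}
image_url = image_response["data"][0]["url"]

print(text)
print(image_url)
```

The expression you build in Bubble's editor follows the same drill-down, one field at a time.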


👀Demo:
https://plugin-demos-48052.bubbleapps.io/version-test/ai_open_source_llm

🎓 Demo Editor:
https://bubble.io/page?id=plugin-demos-48052&tab=Design&name=ai_open_source_llm&type=page

Types

This plugin can be found under the following types:
API   •   Action

Categories

This plugin can be found under the following categories:
Chat   •   Productivity   •   AI

Resources

Support contact
Documentation
Tutorial

Rating and reviews

No reviews yet

This plugin has not received any reviews.