Langflow Meets Regolo.ai: How to Integrate Regolo.ai into Langflow in Minutes

Regolo.ai exposes OpenAI-compatible APIs, which means you can point Langflow’s “OpenAI” nodes at Regolo.ai’s endpoint and everything just works—no custom plugin required.
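To see what "OpenAI-compatible" means in practice, here is a minimal Python sketch that builds a standard chat-completions request against the Regolo.ai endpoint using only the standard library. The model name and placeholder key are illustrative; the request is constructed but not sent, since sending requires a valid key:

```python
import json
import urllib.request

# Regolo.ai speaks the OpenAI chat-completions wire format, so any
# OpenAI-compatible client only needs the base URL and an API key.
BASE_URL = "https://api.regolo.ai/v1"
API_KEY = "YOUR_REGOLO_API_KEY"  # replace with a real key before sending

payload = {
    "model": "gpt-oss-120b",
    "messages": [{"role": "user", "content": "Say hello in five words."}],
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(request) would send it; here we only build it.
print(request.full_url)
```

This is exactly the shape of request Langflow's OpenAI node emits under the hood, which is why repointing the base URL is all the integration needed.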

In this tutorial you’ll install Langflow, run it locally or on a server, and build a small flow using Regolo.ai chat models.

What is Langflow?

Langflow is a visual framework for building applications with large language models (LLMs). Instead of writing code, you connect blocks (called components) that represent models, prompts, tools, or data sources.

It’s built on top of LangChain, so you get the same flexibility and integrations, but with an intuitive drag-and-drop interface. This makes it easy to prototype, share, and deploy AI workflows in minutes.

What you’ll need

  • Regolo.ai API Key: You can generate this from your Regolo.ai account.
  • Regolo.ai Base URL: https://api.regolo.ai/v1

    To explore the available chat models, open the Regolo.ai API Swagger interface:
    • Click Authorize and enter your Regolo.ai API key.
    • Navigate to the endpoint GET /v1/model/info, click Try it out, and then Execute.
    • You’ll get a list of all available Regolo.ai models.
    • From there, just pick the model name you want. For example, in this guide we’ll use gpt-oss-120b.
      ⚠️ Note: Pay attention to the "mode" parameter. Use "chat" for chat models, and "embedding" if you want an embedding model.
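If you'd rather filter the model list in code than eyeball it in Swagger, the sketch below shows the idea. The response shape here is an assumption based on the fields described above (model name plus a "mode" parameter); adjust the keys to match what you actually see in the Swagger response:

```python
# Hypothetical shape of a /v1/model/info response; verify the real keys
# in the Swagger interface before relying on this structure.
models = {
    "data": [
        {"model_name": "gpt-oss-120b", "model_info": {"mode": "chat"}},
        {"model_name": "an-embedding-model", "model_info": {"mode": "embedding"}},
    ]
}

# Keep only chat models, per the "mode" note above.
chat_models = [
    m["model_name"]
    for m in models["data"]
    if m["model_info"].get("mode") == "chat"
]
print(chat_models)  # ['gpt-oss-120b']
```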

Install & run Langflow

You can use pip, pipx, or uv. For example:

uv pip install -U langflow
uv run langflow run --host 0.0.0.0 --port 7860

If you don’t have uv:

curl -LsSf https://astral.sh/uv/install.sh | sh

Open your browser at:
http://localhost:7860

If running on a remote server, open http://<SERVER_IP>:7860 (ensure port 7860 is allowed by your firewall/security group).

👉 Having issues with the installation? Don’t worry — you can find the full step-by-step guide here:
🔗 Langflow Installation Guide

Build the demonstration flow

Components

  1. Chat Input
  2. OpenAI
  3. Chat Output

Wiring

  • Chat Input → OpenAI (connect Chat Message to the OpenAI node’s Input)
  • OpenAI → Chat Output (connect Model Response to the Chat Output node’s Inputs)

Configure the OpenAI node to use Regolo.ai

Open the OpenAI node and set:

  • System Message
    Paste something like:
    You’re a brilliant copywriter. Write a lively, 5-sentence mini-script about the following topic. Keep it under 80 words and end with a short CTA.

  • Model Name
    Type the exact Regolo model name: gpt-oss-120b
    (You can substitute any chat model from the list you retrieved earlier.)

  • OpenAI API Key
    Paste your Regolo.ai API key.

  • OpenAI API Base
    Set it to: https://api.regolo.ai/v1

⚠️ If the Model Name dropdown blocks typing on your build, set it via Environment Variables (Settings → Environment Variables), e.g.:

REGOLO_API_BASE = https://api.regolo.ai/v1
REGOLO_API_KEY = <your-key>
REGOLO_CHAT_MODEL = gpt-oss-120b

Then, in the OpenAI node, click the 🌐 icon next to each field and pick those variables.
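As a side benefit, the same three variables can drive any script that talks to Regolo.ai outside Langflow. A small sketch, assuming the variable names above (the fallback defaults here are illustrative):

```python
import os

# Read the same variables you set in Langflow's Environment Variables panel.
# The defaults are illustrative fallbacks, not required values.
api_base = os.environ.get("REGOLO_API_BASE", "https://api.regolo.ai/v1")
model = os.environ.get("REGOLO_CHAT_MODEL", "gpt-oss-120b")
api_key = os.environ.get("REGOLO_API_KEY", "")  # empty means "not configured"

print(api_base, model)
```

Keeping the key in an environment variable also means it never ends up hard-coded in an exported flow.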

Run the flow

  1. In Chat Input → Input Text, type a topic, e.g.:
    Regolo.ai: bringing flexibility and control to AI workflows
  2. Click Run (top right) or run the Chat Output node.
  3. The generated copy appears in Chat Output.

You’ve just proven that Regolo.ai drives an “OpenAI” node in Langflow!