# Using Regolo models with OpenCode

OpenCode is an open-source, terminal-native AI coding agent that supports 75+ LLM providers through an extensible configuration system. Because Regolo exposes a fully OpenAI-compatible inference API, wiring it into OpenCode takes less than five minutes and requires no special plugin — just a JSON file.

The result is a private, GPU-accelerated coding workflow: every prompt and code snippet stays within Regolo's European data centers, under a zero-retention policy, and fully inside EU jurisdiction.

## What you need before starting

- A terminal with a modern emulator (WezTerm, Ghostty, Alacritty, or Kitty).

- An active [Regolo account](https://regolo.ai/pricing) with an API key generated from your dashboard.

- OpenCode installed (see Step 1 below).

## Step 1 — Install OpenCode

OpenCode ships as a single binary with installers for every major platform.

```bash
# macOS (Homebrew — recommended, always up to date)
brew install opencode-ai/tap/opencode

# Linux (curl installer)
curl -fsSL https://opencode.ai/install | bash

# Windows (Scoop or Chocolatey)
scoop install opencode
choco install opencode

# Verify
opencode --version
```

After installation, `opencode` will be available in your `$PATH`.

## Step 2 — Get your Regolo API key

Head to the [Regolo.ai dashboard](https://regolo.ai/), sign in, and create a new API key. Keep it in a secure place — never commit it directly to your repository.

We recommend storing the key in an environment variable:

```bash
export REGOLO_API_KEY="your-regolo-api-key-here"
```

You can add that line to your `~/.bashrc`, `~/.zshrc`, or equivalent shell profile so it is available in every terminal session.
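Before launching OpenCode, it is worth confirming the variable is actually visible in your current session. A minimal POSIX-shell check (the variable name matches the one used throughout this guide) might look like:

```shell
# Sanity check: confirm the key is exported without printing the secret itself.
if [ -n "$REGOLO_API_KEY" ]; then
  echo "REGOLO_API_KEY is set"
else
  echo "REGOLO_API_KEY is missing" >&2
fi
```

If the check reports the key as missing, re-source your shell profile or re-export the variable in the same terminal where you will run `opencode`.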

## Step 3 — Configure OpenCode with Regolo

OpenCode resolves its configuration from multiple files, merging them in order of precedence. For a quick start, create a project-level config file `opencode.json` in your project directory. For user-wide access to Regolo models across all your projects, place the same file at `~/.config/opencode/opencode.json`.

The repository [regolo-ai/opencode-configs](https://github.com/regolo-ai/opencode-configs) contains ready-to-use, copy-paste configurations. The core configuration below registers Regolo.ai as a custom OpenAI-compatible provider:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "compaction": {
    "auto": true,
    "prune": true,
    "reserved": 10000
  },
  "provider": {
    "regolo-ai": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "regolo.ai",
      "options": {
        "baseURL": "https://api.regolo.ai/v1",
        "apiKey": "{env:REGOLO_API_KEY}"
      },
      "models": {
        "qwen3-coder-next": {
          "name": "qwen3-coder-next"
        },
        "gpt-oss-120b": {
          "name": "gpt-oss-120b",
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto",
            "include": ["reasoning.encrypted_content"]
          }
        },
        "gpt-oss-20b": {
          "name": "gpt-oss-20b"
        },
        "llama-3.3-70b-instruct": {
          "name": "Llama-3.3-70B-Instruct"
        },
        "llama-3.1-8b-instruct": {
          "name": "Llama-3.1-8B-Instruct"
        },
        "mistral-small3.2": {
          "name": "mistral-small3.2"
        },
        "mistral-small-4-119b": {
          "name": "mistral-small-4-119b"
        },
        "minimax-m2.5": {
          "name": "minimax-m2.5",
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto",
            "include": ["reasoning.encrypted_content"]
          }
        },
        "qwen3.5-122b": {
          "name": "qwen3.5-122b",
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto",
            "include": ["reasoning.encrypted_content"]
          }
        },
        "qwen3.5-9b": {
          "name": "qwen3.5-9b"
        },
        "gemma4-31b": {
          "name": "gemma4-31b"
        },
        "apertus-70b": {
          "name": "apertus-70b"
        }
      }
    }
  }
}
```

A few key points about this configuration:

- **`npm: "@ai-sdk/openai-compatible"`** tells OpenCode to use the generic OpenAI-compatible SDK adapter, which works for any provider exposing a `/v1/chat/completions` endpoint — including Regolo.ai.
- **`baseURL: "https://api.regolo.ai/v1"`** points all requests to Regolo's EU inference infrastructure.
- **`{env:REGOLO_API_KEY}`** is OpenCode's variable substitution syntax for reading from environment variables, keeping your key out of the config file.
- The `reasoningEffort`, `textVerbosity`, and `reasoningSummary` options on models like `gpt-oss-120b`, `minimax-m2.5`, and `qwen3.5-122b` activate extended reasoning capabilities on those models when they support it.

> Always verify the latest model IDs and base URL against the official Regolo.ai documentation, as these details may change as new models are added to the platform.
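To verify the key and base URL independently of OpenCode, you can query the standard OpenAI-compatible model listing endpoint. This is a quick sketch assuming `REGOLO_API_KEY` is exported and the usual `/v1/models` route is available:

```shell
# Lists the models your key can access; a JSON response confirms that both
# the base URL and the API key are correct. The fallback message keeps the
# command from failing silently when offline.
curl -s https://api.regolo.ai/v1/models \
  -H "Authorization: Bearer $REGOLO_API_KEY" \
  || echo "Request failed: check your network, base URL, and API key" >&2
```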

## Step 4 — Start coding

Navigate to your project directory and launch OpenCode:

```bash
cd your-project
opencode
```

Once the TUI loads, run `/models` to browse all available Regolo models and pick one. You can also set a default model directly in the config:

```json
{
  "model": "regolo-ai/qwen3-coder-next"
}
```

Use OpenCode's **Plan mode** (press `Tab`) to have the model analyze and propose changes without touching any files, then switch to **Build mode** to apply them.

## Choosing a model for your workload

Regolo exposes a range of [open models](https://regolo.ai/modes) suited to different coding tasks. Here is a practical breakdown:

| Model | Best for |
|---|---|
| **qwen3-coder-next** | General coding — recommended starting point; strong balance of quality and speed |
| **gpt-oss-120b** | Everyday feature work; better for documentation |
| **mistral-small-4-119b** | Boilerplate, small scripts, CI automation |
| **qwen3.5-122b / minimax-m2.5** | Reasoning-heavy tasks; extended thinking enabled |
| **gemma4-31b** | Balanced mid-tier alternative |

For most teams starting out, `qwen3-coder-next` is the practical default. Switch to `minimax-m2.5` when working on deep refactoring, designing architecture, or analyzing large surface areas of code.

---

## Using environment variables instead of inline keys

If you prefer not to write the API key anywhere on disk, you can authenticate via environment variable alone. Remove the `apiKey` field from the config and set:

```bash
export OPENAI_API_KEY="your-regolo-api-key-here"
```

OpenCode's `@ai-sdk/openai-compatible` adapter will pick this up automatically for the configured `baseURL`. Alternatively, you can run `/connect` inside the TUI, scroll to **Other**, and enter the provider ID `regolo-ai` along with your key — OpenCode will store credentials in `~/.local/share/opencode/auth.json`.

---

## Global vs. project config

OpenCode merges configuration from multiple sources. Understanding the precedence order helps you keep things tidy:

| Location | Scope | When to use |
|---|---|---|
| `~/.config/opencode/opencode.json` | User-wide | Provider registration, your personal API key |
| `opencode.json` (project root) | Per-project | Model selection, compaction settings, permissions |
| `OPENCODE_CONFIG` env var | One-off override | CI pipelines, scripts |

A clean pattern is to register the `regolo-ai` provider once in your global config with `{env:REGOLO_API_KEY}`, then control model selection per project. That way the API key never touches any project repository.
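Under this pattern, a project-level `opencode.json` can stay as small as a single model selection — a sketch, assuming the `regolo-ai` provider block already lives in your global config:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "regolo-ai/qwen3-coder-next"
}
```

This file is safe to commit: it pins the team's default model without embedding any credentials.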

---

## Why this setup keeps your code private

Regolo.ai operates entirely within European data centers on renewable energy, with **zero data retention** on inference content. When you run OpenCode against Regolo, your prompts, code snippets, and context are processed in-memory for the duration of the request and never logged or stored on the provider side. For teams subject to GDPR, the EU AI Act, or internal IP-protection policies, this is the material difference compared to routing inference through third-party US-based infrastructure.

---

## Troubleshooting

**Models don't appear in `/models`**: confirm that the provider ID in the config (`regolo-ai`) matches the ID used if you ran `/connect`. Run `opencode auth list` to inspect stored credentials.

**401 Unauthorized**: check that `REGOLO_API_KEY` is exported in the shell where you run `opencode`, or that the `apiKey` field in the config resolves correctly.

**Wrong base URL errors**: the correct endpoint is `https://api.regolo.ai/v1`. Double-check there is no trailing slash and no extra path component.

**Reasoning models return empty responses**: models like `gpt-oss-120b` with `reasoningEffort` set require that `"reasoning.encrypted_content"` is included in the `include` array, as shown in the config above.

---

## FAQ

**Do I need to install anything beyond OpenCode itself?**
No. The `@ai-sdk/openai-compatible` npm package is loaded on demand by OpenCode's runtime — you do not need to install it separately.

**Can I use Regolo alongside other providers like OpenAI or Anthropic?**
Yes. OpenCode supports multiple providers simultaneously. You can have a `regolo-ai` block and an `openai` block in the same config and switch between them with `/models` at any time.

**Is the Regolo API fully compatible with the OpenAI interface?**
Yes. Regolo exposes `/v1/chat/completions` with the same request/response format as OpenAI, so any OpenAI-compatible client or SDK works without modification.
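As an illustration, the raw request below has the same shape any OpenAI client sends; only the host and key differ. This is a sketch assuming `REGOLO_API_KEY` is exported and uses one of the model IDs from the config above:

```shell
# Minimal chat completion against Regolo's OpenAI-compatible endpoint.
# The fallback message keeps the command from failing silently when offline.
curl -s https://api.regolo.ai/v1/chat/completions \
  -H "Authorization: Bearer $REGOLO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen3-coder-next",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }' \
  || echo "Request failed: check your network and API key" >&2
```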

**Where can I find updated model IDs?**
Check the [Regolo.ai documentation](https://docs.regolo.ai/) and the [regolo-ai/opencode-configs](https://github.com/regolo-ai/opencode-configs) repository, which we keep updated as new models are added to the platform.

**Is there a free tier to try this?**
Regolo.ai offers a 30-day free trial. You can start at [regolo.ai](https://regolo.ai/) and generate your API key immediately without entering payment details upfront.

---

**Start your free 30-day trial at [regolo.ai](https://regolo.ai/) and deploy LLMs with complete privacy by design.**

👉 [Talk with our Engineers](https://regolo.ai/contacts/) or [Start your 30 days free →](https://regolo.ai/pricing)

---

- [Discord](https://discord.gg/ZzZvuR2y) - Share your thoughts
- [GitHub Repo](https://github.com/regolo-ai/) - Ready-to-use code from our blog articles
- Follow Us on X [@regolo\_ai](https://x.com/regolo_ai)
- Open discussion on our [Subreddit Community](https://www.reddit.com/r/regolo_ai/)

---

*Built with ❤️ by the Regolo team.* Questions? [regolo.ai/contact](https://regolo.ai/contact) or chat with us on [Discord](https://discord.gg/ZzZvuR2y)