Handling big projects with AI in your terminal with Plandex

If you loved how Aider lets you handle tasks in simple projects with AI, Plandex will become your go-to self-hosted utility when you are working on larger-scale projects.

But why bother self-hosting a more complex tool when Aider works just fine with a standard installation? And can you use Regolo to provide the LLMs that make it work perfectly?

About Plandex

Plandex is an open-source, terminal-based AI coding agent built to support the development of large and complex software projects.

This tool is designed to assist developers with a wide array of tasks, ranging from the generation of new features and applications to debugging, refactoring, and understanding existing codebases.

A key attribute of Plandex is its capacity to manage extensive project contexts, with an effective context window of up to two million tokens, enabling it to process and understand large codebases. It also features a cumulative diff review sandbox, which lets developers examine and approve AI-generated changes before they are integrated into the project, ensuring a controlled and safe development process.

Furthermore, Plandex offers configurable autonomy, allowing developers to choose the level of AI assistance that suits their specific task and comfort level. Notably, Plandex is designed to work with multiple language models from various providers, including Regolo, offering flexibility in leveraging different AI capabilities. Being open source, Plandex provides transparency and the option for users to host the application on their own infrastructure.

To sum up, Plandex’s design specifically targets the challenges encountered when applying AI coding tools to large-scale software development.

Why Self-Host Plandex (and connect it to a privacy-focused AI provider)

Choosing to self-host Plandex presents several advantages compared to utilizing its cloud-hosted options.

One significant benefit is the enhanced data privacy and security afforded by keeping sensitive project code and AI interaction data within a user’s own infrastructure. Local deployment, particularly through containerization technologies like Docker, provides a secure environment for managing proprietary information.

While Plandex’s architecture ensures that API keys are only stored temporarily in memory during active use, regardless of the hosting method, self-hosting grants users complete control over the server environment and its security measures.

Another compelling reason for self-hosting is the potential for cost control: running the Plandex server locally is presented as a free option, although this excludes the costs associated with the necessary infrastructure and the chosen LLM.

Given that heavy use of OpenAI’s API can incur substantial expenses, self-hosting Plandex and integrating it with alternative LLM solutions, especially open-source models, can lead to significant long-term cost savings.

While Plandex Cloud offers the advantage of a rapid setup process, the decision to self-host caters to developers and organizations with distinct needs concerning data management, budgetary constraints, and the desire for enhanced technical control over their AI-powered coding workflow.

The trade-off lies between the convenience of a managed cloud service and the autonomy and potential long-term benefits of a self-managed environment.

Setting Up Your Self-Hosted Plandex Environment

The initial step towards utilizing a self-hosted instance of Plandex involves ensuring that the necessary prerequisites are met. For the local mode quickstart, the system must have Git, Docker, and Docker Compose installed.

Alternatively, for a more advanced self-hosting setup that does not rely on Docker, the system should have a PostgreSQL database (version 14 or later is recommended), a persistent file system, and Git installed.

If you intend to build the Plandex server from its source code, the build tools gcc, g++, and make are also required due to a dependency on the tree-sitter library. The specific hardware requirements, such as CPU, RAM, and disk space, will be contingent upon the anticipated usage and the resource demands of any custom LLM that will also be self-hosted.
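
As a hypothetical example on a Debian or Ubuntu system, the source-build prerequisites could be installed with apt (package names vary by distribution):

    # Debian/Ubuntu example; adjust package names for your distribution
    sudo apt-get update
    sudo apt-get install -y git gcc g++ make
    # Go is also needed if you plan to run the server from source (see below)
    sudo apt-get install -y golang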

So, to install with Docker Compose, make sure you have Docker Compose installed on your OS. If you need help installing it, you can refer to the official Docker docs, as we’ve seen in other tutorials.
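
Before proceeding, you can quickly verify that both tools are available on your PATH:

    # Confirm Docker and the Compose plugin are installed
    docker --version
    docker compose version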

Self-hosting Plandex

The Plandex server can be installed in several ways; here we cover two of them: Docker Compose and building from source code.

Running with Docker Compose (the more straightforward method)

For users opting for the Docker Compose-based local mode quickstart, the process begins by cloning the Plandex repository from GitHub with git clone https://github.com/plandex-ai/plandex.git. After navigating into the app directory with cd plandex/app, run the local start script with ./start_local.sh. For a more fine-grained approach, copy the default environment file to .env (cp _env .env), edit .env to override any default environment variables as needed, build the Docker image with docker compose build, and then start the containers with docker compose up.
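
Put together, the quickstart and the fine-grained variant look like this (the fine-grained commands are commented out; edit .env to suit your environment):

    # Quickstart: clone the repository and run the local start script
    git clone https://github.com/plandex-ai/plandex.git
    cd plandex/app
    ./start_local.sh

    # Fine-grained alternative:
    # cp _env .env          # copy the default environment file
    # $EDITOR .env          # override default environment variables as needed
    # docker compose build
    # docker compose up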

Running from source code (check the official Plandex docs for any additional dependencies)

If you wish to run the server from source, the process involves cloning the repository (git clone https://github.com/plandex-ai/plandex.git), navigating to the repository’s root directory (cd plandex/), identifying the desired server version (VERSION=$(cat app/server/version.txt)), checking out the corresponding version tag (git checkout server/v$VERSION), moving into the server directory (cd app/server), setting the environment variable for the base directory where Plandex will store its files (export PLANDEX_BASE_DIR=~/plandex-server), and finally running the server using the Go runtime (go run main.go).

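The same sequence as a single script, taken directly from the steps above:

    # Clone the repository and check out the matching server version tag
    git clone https://github.com/plandex-ai/plandex.git
    cd plandex/
    VERSION=$(cat app/server/version.txt)
    git checkout server/v$VERSION

    # Run the server, storing its files under ~/plandex-server
    cd app/server
    export PLANDEX_BASE_DIR=~/plandex-server
    go run main.go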

Once the Plandex server is operational, the next step is the initial configuration and account setup. By default, the server listens on port 8099.
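
As a quick, generic sanity check (not a documented Plandex endpoint), you can confirm that something is answering on that port; any HTTP response, even an error status, means the server is listening:

    # Expect some HTTP response once the server is up
    curl -i http://localhost:8099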

To interact with the self-hosted server, the Plandex Command Line Interface (CLI) must be installed on the user’s local machine. This can be done using the provided one-line installation script:

    curl -sL https://plandex.ai/install.sh | bash

To create a new account on the self-hosted server, the user should execute the command plandex sign-in in their terminal and follow the prompts.

When prompted to choose between Plandex Cloud and another host, selecting ‘Local mode host’ and confirming the default host address (http://localhost:8099) will establish the connection.
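
The sign-in flow, with the interactive choices noted as comments:

    # Create an account on the self-hosted server (interactive)
    plandex sign-in
    # When prompted: choose 'Local mode host' rather than Plandex Cloud,
    # then confirm the default host address http://localhost:8099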

The Docker installation method simplifies deployment by encapsulating dependencies, whereas the source installation offers greater control and the ability to integrate more deeply with existing infrastructure.

Furthermore, the requirement to install the Plandex CLI separately after the server is set up highlights the client-server architecture of Plandex. The CLI serves as the primary interface through which developers interact with the Plandex server, irrespective of its hosting location.

Integrating Plandex with Regolo

A key aspect of self-hosting Plandex is the ability to integrate it with custom Large Language Model (LLM) providers.

Plandex is designed to communicate with LLMs that adhere to the OpenAI API specification, so it works perfectly with Regolo.

Configuring Plandex to utilize a custom LLM provider can be achieved through two primary methods:

  • The less guided method involves using environment variables. Setting the OPENAI_API_BASE environment variable to the base URL of the custom LLM provider’s API (https://api.regolo.ai/v1) directs Plandex to use that endpoint for its LLM interactions. Since Regolo requires an API key for authentication, the OPENAI_API_KEY environment variable also needs to be set (see the sketch after this list).

    This is conceptually similar to the way Aider handles models; for additional steps, please refer to the Plandex docs.

  • The most guided method involves using the Plandex CLI to add a custom model. The command plandex models add allows users to register a custom model. It will prompt you to select a provider; select custom, then add a custom provider named “regolo”, enter a model name from Regolo’s available models (e.g. “Llama-3.3-70B-Instruct”), choose an ID for the model (you can leave this blank to use the model name instead), and choose a description (you can leave this blank too).
    Then, add the base URL of regolo.ai (https://api.regolo.ai/v1) and your API key environment variable (e.g. REGOLO_API_KEY), which you can set with export REGOLO_API_KEY="<your_api_key>".
    To finalize, choose the model’s limits and settings; our choices are: Max tokens = 128000, Default Max Convo Tokens = 10000, Max Output Tokens = 8192, Reserved Output Tokens = 8192, Preferred Output Format = XML, multi-modal image support = n.
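
For the environment-variable method, a minimal sketch looks like this; the key values are placeholders you replace with your own Regolo API key:

    # Point Plandex at Regolo's OpenAI-compatible endpoint
    export OPENAI_API_BASE=https://api.regolo.ai/v1
    export OPENAI_API_KEY="<your_regolo_api_key>"

    # For the CLI method, the key is instead read from the environment
    # variable you named during 'plandex models add', e.g.:
    export REGOLO_API_KEY="<your_regolo_api_key>"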

Once a custom model has been added to the CLI, it can be assigned to specific roles within Plandex using the plandex set-model command. In our case, we’ll create a project directory with mkdir project, enter it with cd project, and create a new plan with plandex new. Then, we can use plandex set-model to select our model.
It will prompt us to choose a role for our model, for example “whole-file-builder”; then we’ll be able to select a model, choosing “regolo”, and then “Llama-3.3-70B-Instruct”.
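
The commands from this walkthrough, in order (the project name is just an example):

    # Create a project directory and initialize a new plan inside it
    mkdir project
    cd project
    plandex new

    # Assign the custom model to a role (interactive prompts follow)
    plandex set-model
    # When prompted: role 'whole-file-builder' -> provider 'regolo'
    # -> model 'Llama-3.3-70B-Instruct'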

Advanced Configuration and Customization

Plandex provides several avenues for users to fine-tune the behavior of their AI coding assistant, especially when working with custom LLMs.

Model settings can be adjusted using the plandex models command, which displays the currently configured AI models and their parameters for the active plan. To view a comprehensive list of all available models, including any custom ones that have been added, the command plandex models available can be used.

Specific parameters for individual models can be modified using the plandex set-model command, followed by the role (e.g., planner, builder), the setting to be changed (e.g., temperature, max-tokens), and the desired new value.

For instance, to control the randomness of the code generation process performed by the builder model, the temperature parameter can be adjusted using a command like plandex set-model builder temperature 0.7.
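
For example, the commands described above look like this in practice:

    # Show the models configured for the current plan
    plandex models
    # List every available model, including custom ones
    plandex models available
    # Adjust the builder model's temperature to control generation randomness
    plandex set-model builder temperature 0.7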

For more comprehensive control over model configurations, Plandex allows the creation and utilization of custom model packs. Model packs enable users to define a specific set of models to be used for all the different roles within Plandex simultaneously.

The plandex model-packs command lists the available model packs. To create a new custom model pack, the plandex model-packs create command is used. Users will be prompted to provide a name and a description for the new pack and then to select a specific model for each of the roles within Plandex (planner, architect, coder, builder, and summarizer) from the available providers, including any custom models that have been added.

Once a custom model pack is created, it can be applied to the current plan using the plandex set-model command followed by the name of the model pack. Furthermore, a model pack can be set as the default for all new plans by using the command plandex set-model default followed by the model pack’s name.
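
A sketch of the model-pack workflow; the pack name regolo-pack is a hypothetical example:

    # List the available model packs
    plandex model-packs
    # Create a custom pack (interactive: name, description, a model per role)
    plandex model-packs create

    # Apply a pack to the current plan, then make it the default for new plans
    plandex set-model regolo-pack
    plandex set-model default regolo-pack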

The ability to precisely control model settings and to define entire model packs within Plandex offers a robust mechanism for tailoring the behavior of the AI coding assistant when utilizing custom LLMs.

This level of granularity enables developers to optimize the tool for specific tasks, achieve a balance between cost and performance, and experiment with different LLM configurations to identify the most suitable setup for their particular development needs.

Moreover, the fact that model settings and even entire model packs are subject to version control within Plandex aligns with the tool’s overarching design philosophy of providing a safe and iterative development workflow.

This feature empowers users to confidently explore various LLM configurations and easily revert to previously established, stable settings if a new configuration does not yield the desired outcomes.

Final words

As always, we’d be happy to hear that this tool has found a place in your daily workflow, making the usage of your terminal more efficient, and perhaps more enjoyable too.

We hope this tutorial has been useful in helping you integrate Plandex with Regolo. If you have any questions or advice for us, please reach out to us on our Discord community, or send us an e-mail through Regolo’s “contact us” section.