How I Use OpenCode, Oh-My-OpenCode-Slim, and OpenSpec to Build My Own AI Coding Environment

Ride the wave of AI coding, don't get swept away by it

Image by DALL-E-3

Introduction

I have never used Claude Code. The reason is simple. Claude Code is too expensive. Even with a subscription, the cost-to-value ratio does not work for my research.

So I have been building my own AI coding environment using OpenCode as the foundation, combined with Oh-My-OpenCode-Slim (multi-agent orchestration) and OpenSpec (spec-driven development, or SDD).

My take is this: if you understand what you want to build, and you know how to use coding tools properly, especially with well-written Spec files as constraints, frontier open-source models like Qwen3.6-Plus, Kimi-k2.5, and GLM-5 can handle your daily coding tasks just fine.

There is another huge advantage to open-source software: community power. The community can tune system prompts and model parameters to fit different models and get the most out of them.

In this article, I want to share what I have learned from using OpenCode and its surrounding tools. I will skip the generic tutorials you find everywhere online and focus only on the details I think actually matter. I hope this helps you make better choices.


Tool Installation and Environment Setup

I will cover my experience in two parts: installing and configuring OpenCode and its related plugins, and my AI coding workflow.

Let's start with tool installation and environment setup, beginning with OpenCode itself.

Installing OpenCode

Unlike most coding agents that only offer a TUI-based command-line tool, OpenCode also comes with a desktop app with a graphical interface. I use the desktop app for my daily coding work. It is clearly much more efficient than the TUI version.

That said, you still need to install the command-line program first. From my testing, some plugins need a command-line environment to check whether OpenCode is installed on your machine during project initialization.

First, make sure you have Node.js installed. Then run this npm command to install the OpenCode CLI:

npm i -g opencode-ai

After that, go to the official website, download the OpenCode Desktop installer, and double-click to install.

Configuring OpenCode

After installing OpenCode Desktop, open the app. Once you select your project directory, you will land on the main OpenCode interface. The features are fairly intuitive, so I will not walk through each one. But before you type your first Hello World, you should check your terminal configuration first.

Configuring the terminal

I use Windows 11. On Windows, OpenCode Desktop defaults to PowerShell as its terminal. Many companies, though, do not allow PowerShell. If you are in a non-English locale, OpenCode may run into character encoding issues when running shell commands through PowerShell, causing those commands to fail.

In that case, you need to change your default terminal.

OpenCode uses the SHELL environment variable to determine which terminal to use. You can configure Windows Command Prompt (cmd.exe), WSL, or git bash. Personally, I prefer cmd.exe because I had already installed a lot of CLI tools before setting up OpenCode. Using cmd.exe directly saves me from reinstalling everything.

SET SHELL=%windir%\system32\cmd.exe
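(Note that with cmd.exe's `SET`, quoting the value would make the quotes part of the variable.) Since OpenCode reads the SHELL environment variable, the equivalent move on WSL or Git Bash is simply exporting SHELL before launching OpenCode. A minimal sketch, with `/bin/bash` as an assumed path:

```shell
# Point OpenCode at the shell you want it to run commands through.
# /bin/bash is an assumption; substitute the shell binary you actually use.
export SHELL=/bin/bash
echo "OpenCode will use: $SHELL"
```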

Configuring model providers

Next, let's talk about how to configure model providers.

Open the settings window and select "Providers". You will see a list of the most popular model providers and API relays. If you want to use open-source models, though, they probably will not be on that list.

At that point, you might click "Custom provider" and manually fill in the model ID, base URL, API key, and so on. The problem is that OpenCode then has no idea about your model's context window size or pricing, which causes features like automatic context compression to stop working correctly.

The right approach is to click the "Show more providers" link at the bottom, find the provider you want to add, and enter that provider's API key.

Click the "Show more providers" link and pick a provider for your open-source model. Image by Author

Once configured, all models from that provider will appear in the model list. These models come with metadata like context size and pricing, so context management plugins can work correctly.

By setting up your provider correctly, you'll get all kinds of metadata about your models. Image by Author

The downside is that you cannot directly see your provider ID this way, which makes it tricky to configure Oh-My-OpenCode-Slim later.

No worries. OpenCode already saved your provider configuration when you selected your provider. You can find it at ~/.local/share/opencode/auth.json. Your provider ID and API key are both there.
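As a sketch of how to pull the provider ID out of that file: the sample below is my guess at the file's general shape (a JSON object whose top-level keys are provider IDs), not the documented schema, and it assumes python3 is available for JSON parsing.

```shell
# Write a sample auth.json so the example is self-contained; the real file
# lives at ~/.local/share/opencode/auth.json. Its exact schema may differ
# between OpenCode versions.
cat > /tmp/auth-sample.json <<'EOF'
{
  "openrouter": { "type": "api", "key": "sk-..." }
}
EOF
# Top-level keys are the provider IDs:
python3 -c 'import json,sys; [print(k) for k in json.load(open(sys.argv[1]))]' /tmp/auth-sample.json
```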

You can find your provider ID in the auth.json file. Image by Author

Enable workspaces

The biggest difference between AI coding and traditional coding is that while you wait for the AI to work, you can actually work on another requirement at the same time. If you use git for version control, you would normally need to create a separate directory and check out a new branch.

Or you can use git's worktree feature to create a new worktree on top of your current branch. When you are doing parallel development, using worktrees is much more convenient than creating new branches.
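For readers who have not used worktrees before, here is a minimal sketch of the raw git workflow that OpenCode's workspace feature wraps; a throwaway repo keeps the example self-contained.

```shell
set -e
# Create a disposable repo so the example is self-contained.
work=$(mktemp -d)
cd "$work"
git init -q main-checkout
cd main-checkout
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init

# One worktree per requirement: a new branch checked out in its own directory,
# so you can develop in parallel without disturbing the main checkout.
git worktree add -q ../feature-x -b feature-x
git worktree list

# When the work is done and merged, clean up the directory and branch.
git worktree remove ../feature-x
git branch -q -d feature-x
```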

Compared to OpenCode CLI, the desktop app has a clear advantage here: it natively supports the worktree feature. In OpenCode Desktop, this feature is called "workspace".

The way to open a workspace is a bit hidden. Right-click the project icon in the top-left corner of the window, then select "Enable Workspace" from the menu. From there, you can create multiple workspaces in the conversation list and work on them simultaneously. The corresponding branches and code directories will be created automatically.

Right-click on your project to enable the workspace. Image by Author

When the coding work in a workspace is done, you can ask the AI to submit a PR for the current code, then close the workspace. The branch and code directory that were created for it will be cleaned up automatically. Very convenient.

Choosing the right agent

If you want to use OpenCode without any extra plugins, pay attention to how you use agents.

OpenCode has two types of agents: primary agents, which you choose yourself, and sub-agents, which the primary agent calls on its own when needed.

Without any plugins installed, OpenCode provides only two primary agents: Build and Plan. The Build agent has full tool access and is the default choice. The Plan agent has no editing permissions; its job is to ask you clarifying questions when you describe a requirement and eventually produce an execution plan.

When you first try this, you might go straight to the Build agent. But for complex tasks, Build tends to just start coding based on its own interpretation. That is like looking through a straw. It fixes things locally without thinking through the overall architecture and design patterns.

The right approach is to start every new requirement with the Plan agent for requirement clarification, and get a solid execution plan first. Only then should you hand things off to Build to start development.

But even that is not enough. Model context is limited. As coding progresses, the execution plan from earlier in the conversation can get pushed out of the context window.

A better approach is to ask Build to save the execution plan as a Markdown file before starting to code. Review that file, confirm everything looks good, then start a fresh session and have Build load the execution plan document back in before executing.
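The saved plan does not need to be fancy; a simple Markdown checklist that Build can reload in a fresh session works well. A sketch (the goal and tasks below are invented for illustration):

```markdown
<!-- plan.md — execution plan saved before coding starts -->
## Goal
Add rate limiting to the /api/search endpoint.

## Tasks
- [ ] 1. Add a token-bucket limiter module with unit tests.
- [ ] 2. Wire the limiter into the search route.
- [ ] 3. Run the full test suite and fix any regressions.
```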

The Plan agent generates a tasks file, and the Build agent executes them. Image by Author

Once you start working this way, you will feel how much value comes from planning before executing. It also sets you up well for SDD coding later, which I will cover when we get to OpenSpec.

Always start a new session

I just mentioned that after forming a development plan, you should create a new session before continuing with coding. Why?

Because anyone familiar with LLMs knows that even though context windows are long today, and OpenCode does offer context compression, context rot is still a real problem. LLMs pay more attention to the beginning and end of the context window, and less to the middle. I call this positional bias.

So to make sure the LLM follows instructions accurately based on the conversation context, especially after forming an execution plan where you need precise execution, start a new session after each major milestone. Do not keep working in the same session forever.

Do not forget to create AGENTS.md

The AGENTS.md file is called "rules" in OpenCode. You can create it automatically with the /init command. It tells the LLM what rules to follow during coding.

You may ask: if this file is created by the LLM, does that not mean the LLM already knows all these rules internally? Is saving them to a file redundant?

Not at all. AGENTS.md is a file written specifically for the LLM to read. In my view, it serves three important purposes:

First, AGENTS.md acts as long-term memory for the project. It locks in facts and choices. For example, after asking the LLM to set up the project structure or create a new module, run /init once. The project architecture gets locked into AGENTS.md. Without this, the LLM will scan the entire project from scratch every time you start a new session, wasting a huge number of tokens.

Another example: if you use uv to manage your project and use uv sync --prerelease=allow to sync prerelease dependencies, write that clearly in AGENTS.md. This prevents the LLM from making errors with dependency management.

Second, AGENTS.md narrows the probability distribution and reduces hallucinations. LLMs are probability models. When facing a question, an LLM generates several possible answers with associated probabilities, then picks one based on the temperature parameter.

For example, when a method parameter can accept multiple types, the LLM might consider these options:

  1. Use Optional[int] (40% chance)
  2. Use int | None (40% chance)
  3. Use no type annotation at all (20% chance)

At that point, the LLM will randomly pick Optional[int] or int | None.

But once you explicitly require the int | None union syntax in AGENTS.md, the probability distribution shifts to:

  1. Use int | None (100% chance)
  2. Use Optional[int] (0% chance)

At this point, the LLM will just go ahead and pick int | None as the final answer.

Third, AGENTS.md serves a harness engineering purpose. You can give the LLM direct instructions through AGENTS.md that it must follow. For example, you can tell the LLM to communicate with you in Chinese, or require that during the spec-driven process, it cannot create new proposals without your approval.
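Putting the three purposes together, a minimal AGENTS.md might look like this sketch (the section headings and project facts are invented for illustration; only the uv command comes from the examples above):

```markdown
## Project facts (long-term memory)
- Dependencies are managed with uv; sync with `uv sync --prerelease=allow`.

## Coding conventions (narrow the distribution)
- Type annotations use the union syntax: `int | None`, never `Optional[int]`.

## Process rules (harness engineering)
- Communicate with me in Chinese.
- In the spec-driven process, never create a new proposal without my approval.
```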

Improve the success rate of Skills loading

By now, most people in the AI coding space have heard of Skills. But many find that Skills do not load reliably under normal conditions.

There are two reasons for this. On one hand, the description in a Skill's front matter is often unclear. We need to write the front matter carefully, especially the description part. It should clearly explain when the Skill applies and what it provides, so the LLM can load the right Skill for the situation.
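As a sketch, a carefully written front matter might look like the following; the name/description fields follow the common SKILL.md convention (check your Skill tooling's docs for the exact schema), and the skill itself is made up:

```markdown
---
name: pdf-form-filling
description: >
  Use this Skill whenever the task involves reading, extracting, or filling
  PDF form fields. Provides step-by-step guidance and helper scripts for
  inspecting form fields and writing values back.
---
```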

On the other hand, for common coding scenarios, LLMs have learned so much during pretraining that they do not feel the need to load a Skill for extra guidance.

For this, there is a simple and proven fix: add an explicit rule to AGENTS.md telling the model to check the available Skills before starting a task and load any whose description matches the job at hand.
