Overview

The ICRL CLI is an interactive, tool-calling coding assistant that lives in your terminal. If you’ve used Claude Code or OpenAI Codex, you’ll feel right at home — the core experience is the same: type a task in natural language, and the agent reads files, writes code, runs commands, and iterates until the job is done. The key difference is that ICRL gets better the more you use it. Every successful run is stored as a trajectory. On future tasks, the agent retrieves similar past trajectories and uses them as in-context examples, producing better plans and fewer mistakes over time. It’s a coding agent that learns your codebase.

icrl chat — Interactive Mode

icrl chat is the primary way to use ICRL. It launches a multi-turn terminal UI where you can have an ongoing conversation with the agent.
uv run icrl chat
When you start a session, you’ll see:
  ___ ____ ____  _
 |_ _/ ___|  _ \| |
  | | |   | |_) | |
  | | |___|  _ <| |___
 |___\____|_| \_\_____|

Type a task and press Enter. '/clear' to reset, 'exit' to quit.
Claude Opus 4.5 · ~/my-project · 12 examples
--------------------------------------------------
>>
The status line shows your current model, working directory, the number of stored trajectories, and (after the first turn) the current turn number.

What the Agent Can Do

Just like Claude Code, the ICRL chat agent has full access to your local environment:
  • Read, write, and edit files in your working directory
  • Run shell commands — git, python, npm, cargo, anything in your PATH
  • Search your codebase with glob patterns and regex
  • Search the web for documentation or solutions
  • Fetch web pages and parse their content
  • Ask you questions when the task is ambiguous
The agent follows a think-act-observe loop: it reads files to understand context, makes targeted changes, runs commands to verify, and iterates until done.
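The think-act-observe loop can be sketched as a simple driver function. This is an illustrative sketch only: the function names, message shapes, and `"done"` signal are assumptions for clarity, not ICRL's actual internals.

```python
# Hypothetical sketch of a think-act-observe agent loop.
# `model` picks the next tool call from the history (think),
# the chosen tool runs (act), and its output is appended (observe).

def run_agent(task, model, tools, max_steps=200):
    """Iterate until the model signals completion or max_steps is reached."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model(history)                 # think: choose the next tool call
        if action["tool"] == "done":
            return action["summary"]
        result = tools[action["tool"]](**action["args"])        # act
        history.append({"role": "tool", "content": result})     # observe
    return "max steps reached"
```

The `max_steps` bound mirrors the `max_steps` configuration key (default 200), which caps tool-call steps per turn.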

What Makes It Different from Claude Code

The agent works like Claude Code on any individual task. The difference is what happens between tasks:
  1. After a successful run, ICRL asks whether to store the trajectory (the full sequence of reasoning, actions, and observations).
  2. On future tasks, the agent retrieves similar past trajectories using semantic search and includes them as in-context examples.
  3. Over time, a curation system prunes low-utility trajectories so the example set stays high quality.
This means the agent learns patterns specific to your codebase — your project structure, your testing conventions, your preferred libraries — and applies that knowledge automatically.
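The retrieval step in item 2 amounts to a semantic top-k search over stored trajectories. A minimal sketch, assuming embedding vectors and cosine similarity (the record layout and scoring function here are illustrative, not ICRL's actual storage format):

```python
# Illustrative top-k trajectory retrieval by cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, trajectories, k=3):
    """Return the k stored trajectories most similar to the query embedding."""
    ranked = sorted(trajectories,
                    key=lambda t: cosine(query_vec, t["vec"]),
                    reverse=True)
    return ranked[:k]
```

The `k` here corresponds to the `k` configuration key (default 3), which controls how many past examples are placed in context.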

Example Session

$ uv run icrl chat
>> Add input validation to the /api/users endpoint

  Read src/api/users.ts
  Read src/utils/validate.ts
  Edited src/api/users.ts
  $ npm test
  All tests passed.

Done

Added Zod schema validation to the POST /api/users handler,
checking email format, password length, and required fields.
Returns 400 with field-level errors on invalid input.

Store this successful run as a new example? [Y/n]: y

>> Now do the same for /api/posts
On the second task, the agent retrieves the /api/users trajectory and uses it as a reference — it already knows your project uses Zod, how your validation utilities are structured, and your test patterns.

Multi-Turn Conversations

The session maintains full conversation history across turns. You can ask follow-up questions, request changes to what the agent just did, or start entirely new tasks — all within the same session.
  • Type /clear to reset the conversation and start fresh
  • Type exit, quit, or q to end the session

Options

uv run icrl chat [OPTIONS]
Flag                                      Description                                                    Default
-m, --model TEXT                          Override the LLM model                                         claude-opus-4-5
-d, --dir PATH                            Set the working directory                                      Current directory
--compare                                 Generate two candidate strategies and choose which to store    Off
--stats / --no-stats                      Show latency, token usage, and cache statistics                --stats
-y, --auto-approve / --no-auto-approve    Auto-approve file writes without confirmation                  --auto-approve

Compare Mode

Compare mode (--compare) is useful when you want to explore different approaches to a task:
uv run icrl chat --compare
The agent proposes two distinct strategies, executes both independently, and presents the results side by side. You then choose which trajectory to store (or reject both). This is particularly useful for tasks where the best approach isn’t obvious.
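The control flow of compare mode can be summarized in a few lines. This is a hedged sketch under assumed names (`execute`, `choose`); the real CLI's prompt and storage logic are not shown in this document.

```python
# Minimal sketch of compare mode: run two candidate strategies,
# then keep only the trajectory the user selects (or neither).

def compare_run(task, strategies, execute, choose):
    """Execute each strategy independently and return the chosen result.

    `choose` receives both results and returns an index, or None to
    reject both (in which case nothing is stored).
    """
    results = [execute(task, s) for s in strategies]
    pick = choose(results)
    return results[pick] if pick is not None else None
```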

icrl run — Single-Task Mode

If you just need to fire off a one-shot task without an interactive session:
uv run icrl run "fix failing tests in this repo"
This runs the agent once, stores the trajectory if successful, and exits. It accepts the same options as chat, plus a few extras:
Flag                         Description
--no-train                   Don't store the trajectory even if the run succeeds
--ablate                     Run with and without retrieved examples and print a comparison
-v, --verbose                Verbose output
--vertex-credentials PATH    Path to Vertex AI credentials file
--vertex-project TEXT        Google Cloud project ID for Vertex AI
--vertex-location TEXT       Google Cloud region for Vertex AI

Configuration

View Configuration

uv run icrl config show

Set a Value

uv run icrl config set <key> <value>
Available keys:
Key                              Description                                Default
model                            LLM model identifier                       claude-opus-4-5
temperature                      Sampling temperature                       1.0
max_tokens                       Max tokens per response                    16384
max_steps                        Max tool-call steps per turn               200
k                                Number of examples to retrieve             3
context_compression_threshold    Token threshold for context compression    80000
show_stats                       Show performance statistics                true
auto_approve                     Auto-approve file operations               true
db_path                          Custom trajectory database path            Project-local
vertex_credentials_path          Path to Vertex AI credentials
vertex_project_id                Google Cloud project for Vertex AI
vertex_location                  Google Cloud region for Vertex AI
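To illustrate what `context_compression_threshold` controls, here is a hedged sketch of a threshold-triggered compression pass. The summarization step is stubbed and the "keep the last two turns verbatim" policy is an assumption for illustration; ICRL's actual compression logic may differ.

```python
# Sketch: compress older conversation turns once the running token
# count crosses context_compression_threshold (default 80000).

def maybe_compress(history, token_count, threshold=80000, summarize=None):
    """Return history unchanged below the threshold; otherwise summarize
    everything except the most recent turns."""
    if token_count < threshold:
        return history
    summarize = summarize or (lambda turns: [
        {"role": "system", "content": "summary of earlier turns"}
    ])
    keep = history[-2:]              # recent turns stay verbatim
    return summarize(history[:-2]) + keep
```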

Reset Configuration

uv run icrl config reset

Trajectory Database

Every project gets its own trajectory database at <working_dir>/.icrl/trajectories. Use the --global flag on any db command to target the global fallback database instead.

Inspect

# Summary stats
uv run icrl db stats

# List stored trajectories
uv run icrl db list [--limit N]

# Show a specific trajectory
uv run icrl db show <trajectory_id_or_prefix>

# Semantic search across trajectories
uv run icrl db search "query" [--k N]

Manage

# Clear the database
uv run icrl db clear [--force]

# Validate that stored code artifacts still exist
uv run icrl db validate [trajectory_id] [--include-deprecated]

# List deprecated trajectories
uv run icrl db deprecated

# Prune low-utility or deprecated trajectories
uv run icrl db prune [--min-utility FLOAT] [--dry-run] [--force]

# Backfill artifacts for older trajectories
uv run icrl db extract-artifacts
All db commands accept --dir PATH to specify the project directory and --global to use the global database.
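The `prune` command above filters on a utility score and a deprecated flag. A minimal sketch of that filtering pass, assuming a simple record shape (`utility` and `deprecated` fields are illustrative, not ICRL's actual schema):

```python
# Illustrative pruning pass: drop trajectories that are deprecated
# or score below min_utility; dry_run reports without deleting.

def prune(trajectories, min_utility=0.2, dry_run=False):
    """Return (kept, removed) lists of trajectory records."""
    removed = [t for t in trajectories
               if t.get("deprecated") or t.get("utility", 0.0) < min_utility]
    if dry_run:
        return trajectories, removed   # nothing actually dropped
    kept = [t for t in trajectories if t not in removed]
    return kept, removed
```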

Provider Selection

The CLI auto-detects which LLM provider to use based on the model string:
  • If the model matches a Vertex AI alias or pattern, the CLI uses the Anthropic Vertex provider
  • Otherwise, it uses the generic LiteLLM provider (supports OpenAI, Anthropic, and many others)
The default model is claude-opus-4-5. To use a different model:
uv run icrl chat -m gpt-4o
uv run icrl chat -m claude-sonnet-4-5
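The routing rule above is a pattern match on the model string. A sketch of that dispatch, with made-up Vertex patterns for illustration (the CLI's actual alias list is not documented here):

```python
# Sketch of model-string-based provider routing.
# The patterns below are hypothetical examples, not ICRL's real list.
import re

VERTEX_PATTERNS = [r"@\d{8}$", r"^vertex/"]  # e.g. versioned IDs or a vertex/ prefix

def detect_provider(model):
    """Route Vertex-style model strings to the Vertex provider, else LiteLLM."""
    if any(re.search(p, model) for p in VERTEX_PATTERNS):
        return "vertex"
    return "litellm"
```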

Helpers

uv run icrl version    # Print version
uv run icrl --help     # Top-level help