Overview
The mock LLM provider lets you run the full ICRL training and evaluation pipeline without API keys or network access. It uses pattern matching on prompts to generate deterministic plans, reasoning, and actions for file system tasks.

Source Files
| File | Purpose |
|---|---|
| examples/mock_llm.py | MockLLMProvider implementation |
| tests/test_with_mock.py | End-to-end demo using the mock |
Run
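Given the demo file listed above, a plausible invocation is a direct script run from the repository root; the exact entry point is an assumption and may instead be a pytest suite:

```shell
# Assumed invocation from the repository root; the actual entry point
# may differ (e.g. `pytest tests/test_with_mock.py`).
python tests/test_with_mock.py
```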
Why Use It
| Benefit | Description |
|---|---|
| Deterministic | Same inputs produce same outputs |
| Fast | No network latency |
| Zero cost | No API usage |
| CI-friendly | Tests run without secrets |
Provider Contract
The mock implements the same interface as any other LLM provider, so you can swap MockLLMProvider for LiteLLMProvider without changing the Agent wiring.
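A minimal sketch of what such a shared contract can look like. The method name `complete`, the `LLMProvider` protocol, and the canned responses are illustrative assumptions, not the project's actual API (which lives in examples/mock_llm.py):

```python
from typing import Protocol


class LLMProvider(Protocol):
    """Hypothetical provider contract; any provider exposing this
    structural interface can be handed to the Agent unchanged."""

    def complete(self, prompt: str) -> str: ...


class MockLLMProvider:
    """Pattern-matches keywords in the prompt to canned, deterministic
    responses -- no network, no API key, same output for same input."""

    _RESPONSES = {
        "plan": "1. List files\n2. Read target file",
        "action": "read_file('notes.txt')",
    }

    def complete(self, prompt: str) -> str:
        for keyword, response in self._RESPONSES.items():
            if keyword in prompt.lower():
                return response
        return "No matching pattern."


# Because the contract is structural, the mock satisfies LLMProvider
# without inheriting from it.
provider: LLMProvider = MockLLMProvider()
print(provider.complete("Write a plan for the task"))
```

Swapping in a real provider then only changes the object you construct, not any call sites.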
What the Demo Covers
tests/test_with_mock.py runs four phases:
- Training — Trains on file system tasks; successful trajectories are stored
- Persistence — Creates a new agent that loads trajectories from disk
- Retrieval — Shows semantic search over stored trajectories
- Evaluation — Runs held-out tasks with retrieval enabled
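The four phases above can be sketched end to end with stand-in components. Everything here is illustrative: the trajectory format, the JSON store, and the keyword-overlap retrieval (a toy stand-in for the real semantic search) are assumptions, not the demo's actual code:

```python
import json
import os
import tempfile

def mock_llm(prompt: str) -> str:
    # Deterministic pattern matching, mirroring the mock provider's behavior.
    return "read_file" if "read" in prompt else "list_files"

# Phase 1: Training -- run tasks and keep successful trajectories.
tasks = ["read the notes file", "list the project files"]
trajectories = []
for task in tasks:
    action = mock_llm(task)
    success = True  # the mock is deterministic, so these tasks succeed
    if success:
        trajectories.append({"task": task, "action": action})

# Phase 2: Persistence -- write to disk, then reload as a "new agent" would.
path = os.path.join(tempfile.mkdtemp(), "trajectories.json")
with open(path, "w") as f:
    json.dump(trajectories, f)
with open(path) as f:
    loaded = json.load(f)

# Phase 3: Retrieval -- naive keyword overlap stands in for semantic search.
def retrieve(query: str, store: list) -> dict:
    q = set(query.split())
    return max(store, key=lambda t: len(q & set(t["task"].split())))

# Phase 4: Evaluation -- answer a held-out task with retrieval enabled.
hit = retrieve("read the config file", loaded)
print(hit["action"])
```

The held-out query never appeared during training, but retrieval surfaces the closest stored trajectory, which is the point of the evaluation phase.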

