## What ICRL Is

ICRL (In-Context Reinforcement Learning) is a trajectory-learning framework for LLM agents. It works by:

- Running tasks in an environment.
- Storing successful trajectories.
- Retrieving similar prior steps during future runs.
- Pruning low-utility trajectories over time (curation).
## What You Get

- Python package: `icrl`
- TypeScript package: `icrl`
- Python CLI: `icrl` (tool-calling coding assistant)
- TypeScript web demo: Next.js + Convex example
## Package Scope

- The Python package focuses on:
  - ReAct loop (`Agent`, `ReActLoop`)
  - FAISS-backed `TrajectoryDatabase`
  - Built-in providers (`LiteLLMProvider`, `AnthropicVertexProvider`)
  - CLI and database utilities
- The TypeScript package focuses on:
  - The same algorithmic abstractions (`Agent`, `TrajectoryDatabase`, `TrajectoryRetriever`, `CurationManager`)
  - Pluggable storage (`StorageAdapter`), including `FileSystemAdapter`
  - Built-in providers (`OpenAIProvider`, `AnthropicProvider`, `AnthropicVertexProvider`)
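To make the shared abstractions concrete, here is a toy sketch of a trajectory database with retrieval and curation. The class, method names, and word-overlap scoring are illustrative stand-ins only (the real packages use FAISS-backed vector search and their own APIs), so treat this as a conceptual model rather than working `icrl` code:

```python
# Toy stand-in for TrajectoryDatabase + TrajectoryRetriever + CurationManager.
# NOT the icrl packages' real API: retrieval here uses word overlap instead of
# FAISS embeddings, and the schema is invented for illustration.

class ToyTrajectoryDatabase:
    def __init__(self):
        self.rows = []  # each: {"goal": str, "steps": list, "utility": float}

    def store(self, goal, steps):
        # New trajectories start at full utility.
        self.rows.append({"goal": goal, "steps": steps, "utility": 1.0})

    def retrieve(self, goal, k=2):
        # Rank stored trajectories by word overlap with the query goal
        # (a crude stand-in for embedding similarity search).
        query = set(goal.lower().split())
        scored = sorted(
            self.rows,
            key=lambda r: len(query & set(r["goal"].lower().split())),
            reverse=True,
        )
        return scored[:k]

    def curate(self, min_utility=0.5):
        # Drop low-utility trajectories, as a curation pass would.
        self.rows = [r for r in self.rows if r["utility"] >= min_utility]

db = ToyTrajectoryDatabase()
db.store("fix failing unit test", ["run tests", "patch bug"])
db.store("write README", ["draft", "edit"])
db.rows[1]["utility"] = 0.1        # simulate negative retrieval feedback
hits = db.retrieve("fix the unit test")
db.curate()                        # removes the low-utility trajectory
```

The same shape maps onto both packages: `store` on success, `retrieve` before planning, `curate` periodically.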
## How The Algorithm Runs

1. Call `reset(goal)` on the environment.
2. Generate a plan using retrieved examples.
3. Repeat reasoning/action/observation steps.
4. If successful in training mode, store the trajectory.
5. Update retrieval feedback and run curation periodically.
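The steps above can be sketched as a minimal loop. Everything here (the `ToyEnv` stub, `run_episode`, and the `train` flag) is a hypothetical illustration of the control flow, not the package's actual API:

```python
# Illustrative sketch of the ICRL episode loop; names are hypothetical
# stand-ins, not the icrl package's real classes or signatures.

class ToyEnv:
    """Stub environment: the goal is a target number, actions add to a counter."""
    def reset(self, goal):
        self.goal, self.state = goal, 0
        return self.state

    def step(self, action):
        self.state += action
        return self.state, self.state >= self.goal  # (observation, done)

def run_episode(env, goal, examples, store, train=True):
    obs = env.reset(goal)                                # 1. reset(goal)
    # 2. Plan from retrieved examples; fall back to unit steps with none.
    plan = examples[-1]["actions"] if examples else [1] * goal
    trajectory = []
    for action in plan:                                  # 3. act/observe loop
        obs, done = env.step(action)
        trajectory.append((action, obs))
        if done:
            break
    success = obs >= goal
    if success and train:                                # 4. store on success
        store.append({"goal": goal, "actions": [a for a, _ in trajectory]})
    # 5. Retrieval feedback and periodic curation would run here (omitted).
    return success

store = []
run_episode(ToyEnv(), 3, [], store)   # first run: no examples, stores result
```

A second call can then reuse the stored trajectory as its retrieved example: `run_episode(ToyEnv(), 3, store, store)`.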
## Next Steps

- Start with `/installation`
- Build your first run at `/quickstart`
- Read algorithm details at `/core-concepts/icrl-algorithm`

