The AI-Native
LLM Framework
for Rust.
Build production-grade agents with built-in AI assistance. With 11 Claude Code skills, you describe what you need and get working Rust code, instantly.
// Build a production ReAct agent in <15 lines
use wesichain_graph::ReActGraphBuilder;
use wesichain_llm::OpenAiClient;
use wesichain_checkpoint_sqlite::SqliteCheckpointer;
let agent = ReActGraphBuilder::new()
.llm(OpenAiClient::gpt4o(api_key))
.tools(vec![web_search, calculator])
.checkpointer(
SqliteCheckpointer::new("agent.db").await?
)
.build()?;
let answer = agent
.invoke("Research Rust async runtimes", thread_id)
.await?;
— generated by Claude Code via the wesichain-react skill
How it works
From description to production code in minutes
Install the Skills Pack
Run one command to install all 11 skills into Claude Code automatically.
curl -fsSL https://wesichain.pages.dev/skills.sh | bash
Describe what you need
Tell Claude Code what to build. "Create a RAG pipeline with Qdrant and Ollama embeddings" is all it takes.
Ship production Rust
Get complete, idiomatic Wesichain code — proper error handling, checkpointing, and streaming — ready to compile and run.
Python → Rust
Familiar patterns, Rust performance
Wesichain mirrors LangChain and LangGraph concepts so migration is straightforward.
from langchain.agents import AgentExecutor
from langchain_openai import ChatOpenAI
llm = ChatOpenAI()
tools = [search, calculator]
agent = AgentExecutor(
llm=llm,
tools=tools,
handle_parsing_errors=True
)
result = agent.invoke({"input": "What is 2+2?"})
use wesichain_graph::ReActGraphBuilder;
let graph = ReActGraphBuilder::new()
.llm(openai)
.tools(vec![search, calc])
.build::<AppState>()?;
let result = graph.invoke_graph(state).await?;
AI-Native Development
The only Rust LLM framework with built-in AI assistance
Download the Wesichain Skills Pack and Claude Code instantly understands the entire framework — every API, pattern, and golden rule. No more digging through docs.
"Build a ReAct agent with web search"
Generates GraphBuilder, ToolNode, and checkpoint code
"Which memory type for 10k token context?"
Recommends ConversationSummaryMemory with context compression
"Add Postgres checkpointing to my agent"
Complete setup with connection pooling and migration handling
One-line install
curl -fsSL https://wesichain.pages.dev/skills.sh | bash
Create a RAG pipeline with Ollama embeddings
Using wesichain-rag skill — setting up the pipeline:
use wesichain_rag::WesichainRag;
use wesichain_embeddings::OllamaEmbedding;
let rag = WesichainRag::builder()
.with_embedder(
OllamaEmbedding::new("http://localhost:11434")
.model("nomic-embed-text")
)
.build()?;
rag.add_documents(docs).await?;
let res = rag.query("Your question").await?;
Why Wesichain
Built for developers who have been burned by the alternatives
AI-Native Development
Framework knowledge, instantly
The problem
Other frameworks demand days of documentation reading before you, or your AI assistant, can write correct code.
The solution
Install the Skills Pack with one command and start building. "Create a ReAct agent" generates working code instantly.
Performance That Scales
Rust-native throughput
The problem
Python agents hit the GIL wall. Text processing, chunking, and embedding operations block your event loop.
The solution
Wesichain uses Tokio for true parallelism. Recursive text splitting hits 201–221 MiB/s — 2–4× typical Python performance.
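The principle is easy to demonstrate outside the framework: CPU-bound chunking work can fan out across real OS threads with no interpreter lock in the way. A minimal stand-in using only the standard library (illustrative only, not Wesichain's actual splitter):

```rust
use std::thread;

// Split a document into fixed-size byte chunks. (ASCII input here, so
// chunk boundaries never fall inside a multi-byte character.)
fn split_into_chunks(text: &str, chunk_size: usize) -> Vec<String> {
    text.as_bytes()
        .chunks(chunk_size)
        .map(|c| String::from_utf8_lossy(c).into_owned())
        .collect()
}

fn main() {
    let doc = "lorem ipsum ".repeat(1000);
    let chunks = split_into_chunks(&doc, 256);
    // Fan the chunks out across threads; each thread processes its
    // chunk independently -- no GIL, no blocked event loop.
    let handles: Vec<_> = chunks
        .into_iter()
        .map(|c| thread::spawn(move || c.len()))
        .collect();
    let total: usize = handles.into_iter().map(|h| h.join().unwrap()).sum();
    assert_eq!(total, doc.len());
    println!("processed {total} bytes in parallel");
}
```

In async code the same pattern shows up as `tokio::task::spawn_blocking`, which keeps heavy text processing off the async executor's worker threads.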
Production-Ready
Deploy anywhere
The problem
Python deployments need virtualenvs, dependency management, and platform-specific wheels.
The solution
Compile to a single binary. Deploy to AWS Lambda, Kubernetes, or edge — no runtime dependencies. 2–5 MB memory per agent vs 80–150 MB in Python.
Built for production
Familiar patterns from LangChain and LangGraph, rewritten for Rust performance and reliability.
Composable Chains
LCEL-style composition with type-safe chaining. Build once, reuse everywhere, retry with a single method call.
let chain = prompt
.then(llm)
.then(parser)
.with_retries(3);
Learn more →
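The retry idea boils down to re-running a fallible step in a loop. A simplified synchronous sketch of what a combinator like `.with_retries(3)` does (this is an assumption about the shape of the pattern, not Wesichain's actual implementation):

```rust
// Rerun a fallible closure up to `attempts` times, returning the
// first success or the last error.
fn with_retries<T, E>(
    mut f: impl FnMut() -> Result<T, E>,
    attempts: u32,
) -> Result<T, E> {
    let mut last = f();
    for _ in 1..attempts {
        if last.is_ok() {
            break;
        }
        last = f();
    }
    last
}

fn main() {
    let mut calls = 0;
    // Fails twice, then succeeds: three attempts total.
    let result = with_retries(
        || {
            calls += 1;
            if calls < 3 { Err("transient") } else { Ok(42) }
        },
        3,
    );
    assert_eq!(result, Ok(42));
    assert_eq!(calls, 3);
}
```

A production version would typically add backoff between attempts and distinguish retryable from fatal errors.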
Resumable Graphs
Stateful agent workflows with checkpoint persistence across SQLite, Postgres, or Redis. Pause, resume, debug.
// Checkpoint anywhere
let state = graph.checkpoint().await?;
// Resume later
graph.resume(state).await?;
Learn more →
Streaming-First
Native async/await with Tokio. Stream tokens, tool calls, and state updates to any consumer in real time.
let mut stream = chain.stream(input);
while let Some(chunk) = stream.next().await {
sse.send(chunk).await?;
}
Find your crate →
Supported LLM Providers
Benchmark artifacts, not guesswork
Numbers below are pulled from versioned benchmark snapshots in the Wesichain repository.
Recursive splitter throughput
2–4× vs typical Python splitters (50–100 MiB/s baseline)
Connector payload microbenchmarks
| Benchmark | Wesichain | Baseline shape | Delta |
|---|---|---|---|
| Qdrant payload construction | 0.801 ms | 0.998 ms | 1.25x faster |
| Weaviate payload construction | 1.099 ms | 1.450 ms | 1.32x faster |
See full caveats and commands on /benchmarks.
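For context on what a "payload construction" microbenchmark measures, here is a toy version of the pattern: build a batch payload in memory and time only that step. This is illustrative only; the real harness, inputs, and commands live in the repository's benchmarks directory.

```rust
use std::time::Instant;

// Toy stand-in for connector payload construction: serialize a batch
// of points into a JSON-shaped string.
fn build_payload(n: usize) -> String {
    let items: Vec<String> = (0..n)
        .map(|i| format!("{{\"id\":{i},\"vec\":[0.1,0.2]}}"))
        .collect();
    format!("[{}]", items.join(","))
}

fn main() {
    // Time only the construction step, the way a microbenchmark would.
    let start = Instant::now();
    let payload = build_payload(10_000);
    let elapsed = start.elapsed();
    assert!(payload.len() > 10_000);
    println!("built {} bytes in {:?}", payload.len(), elapsed);
}
```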
Benchmark highlights — recursive text splitting
221 MiB/s
2–4× faster than typical Python splitters · 201 MiB/s at 1 MB input
Start building with
Wesichain today.
22 focused crates. 11 Claude Code skills. Reproducible benchmarks. Production Rust agents that compile to a single binary.
Sources: wesichain/docs/benchmarks/README.md · wesichain/docs/benchmarks/data/qdrant-2026-02-16.json · wesichain/docs/benchmarks/data/weaviate-2026-02-16.json