The Model Changes
Your AI assistant updates overnight. Suddenly, code generation breaks. Your workflow is destroyed. Your team is blocked.
Stop depending on external AI. rbee (pronounced "are-bee") gives you an OpenAI-compatible API that runs on all your home-network hardware (GPUs, Macs, workstations) with zero ongoing costs.

You're building complex codebases with AI assistance. But what happens when your provider changes the rules?
rbee orchestrates AI inference across every device in your home network, turning idle hardware into a private, OpenAI-compatible AI platform.
Swap in the API, scale across your hardware, route with code, and watch jobs stream in real time.
TypeScript utilities for LLM pipelines and agentic workflows.
import { invoke } from '@llama-orch/utils';
const response = await invoke({
  prompt: 'Generate a TypeScript function that validates email addresses',
  model: 'llama-3.1-70b',
  maxTokens: 500,
});
console.log(response.text);

Works with any OpenAI-compatible client.
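Because the API is OpenAI-compatible, the standard openai TypeScript SDK can also talk to rbee by overriding its base URL. The sketch below is illustrative, not taken from the rbee docs: the endpoint address, port, and API key placeholder are assumptions you would replace with your own rbee configuration.

import OpenAI from 'openai';

// Point the standard OpenAI client at a local rbee endpoint.
// baseURL and apiKey are assumptions for illustration; use the address
// your rbee instance actually listens on.
const client = new OpenAI({
  baseURL: 'http://localhost:8080/v1', // hypothetical rbee endpoint
  apiKey: 'not-needed-locally',        // placeholder; inference runs on your own hardware
});

// Stream tokens as the job runs, matching the "watch jobs stream in real time" flow.
const stream = await client.chat.completions.create({
  model: 'llama-3.1-70b',
  messages: [{ role: 'user', content: 'Generate a TypeScript function that validates email addresses' }],
  max_tokens: 500,
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}

Existing code that already uses an OpenAI client only needs the base URL swapped; the request and response shapes stay the same.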
Run rbee free at home. Add collaboration and governance when your team grows.
Every plan includes the full rbee orchestrator. No feature gates. No artificial limits.
Prices exclude VAT. OSS license applies to Home/Lab.
“Spent $80/mo on Claude. Now I run Llama-70B on my gaming PC + old workstation. Same quality, $0 cost.”
“We pooled our team's hardware and cut AI spend from $500/mo to zero. OpenAI-compatible API—no code changes.”
“Cascading shutdown ends orphaned processes and VRAM leaks. Ctrl+C and everything cleans up.”
Join developers who've taken control of their AI infrastructure.
100% open source. No credit card required. Install in 15 minutes.