
Introducing Lobu

Tags: announcement, open-source

I started working on this last summer as a Slack bot called Peerbot. The idea: mention the bot in any channel, get a sandboxed Claude Code instance just for you. I spent months on the hard infrastructure — worker isolation, persistent volumes, a credential proxy so workers never touch real API keys.

By October I had a working product, but it was Slack-only. The core infrastructure wasn’t platform-specific at all — only the message handling was. So I refactored the platform layer into thin adapters and opened it up to Telegram and WhatsApp.

The biggest unlock came in February when I integrated OpenClaw. I’d been managing tool execution, session lifecycle, and process state manually — OpenClaw handles all of that. My gateway became purely about orchestration while OpenClaw handled the agent runtime.

Messaging platforms
  • Slack: Block Kit, interactive actions
  • Telegram: Mini App, inline buttons
  • WhatsApp: Reply buttons, list menus
  • Discord: Servers, DMs, markdown replies
  • Teams: Channels, bots, enterprise workflows
  • Google Chat: Cards v2, Workspace spaces
  • Link users across platforms with single sign-on
  • Approval flows, rich cards, buttons, and more

How it fits together: bring your own agent, equip it with Lobu Skills and Lobu Memory, and let the control plane run it.

Lobu Control Plane
  • Workers never see secrets
  • HTTP proxy with domain allowlist
  • MCP proxy with per-user OAuth
  • BYO provider keys (Anthropic etc.)

OpenClaw Runtime (one isolated sandbox per user)
  • One sandbox per user and channel
  • Kata Containers / Firecracker microVMs / gVisor on GCP
  • Virtualized bash for scaling beyond 1,000 users
  • No direct internet access (internal network)
  • Nix reproducible environments
  • OpenTelemetry for observability

What we solve on top of OpenClaw

OpenClaw is a great agent runtime. But running it for a team exposes real problems that the runtime itself doesn’t address.

Serverless execution. Stock OpenClaw runs as a long-lived process — you start it, it stays up, waiting for input. That’s fine on your laptop, but it doesn’t work for multi-tenant infrastructure: you’d need one always-on process per user, burning compute 24/7. Lobu runs OpenClaw as serverless workers that scale to zero when idle and wake on the next message. Persistent volumes keep session state across restarts, so the agent picks up right where it left off.
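The wake-on-message loop can be sketched in a few lines. This is a toy in-process model, not Lobu’s actual gateway code — the real workers are containers with persistent volumes — but the lifecycle is the same: spawn on first message, route to the running worker, reap after an idle timeout.

```typescript
// Toy sketch of scale-to-zero worker management (illustrative only).
type Worker = { userId: string; lastSeen: number; stop: () => void };

class WorkerPool {
  private workers = new Map<string, Worker>();

  constructor(
    private spawn: (userId: string) => Worker,
    private idleMs: number,
  ) {}

  // Route a message, waking a worker if none is running.
  handleMessage(userId: string, _msg: string): Worker {
    let w = this.workers.get(userId);
    if (!w) {
      w = this.spawn(userId); // cold start: session state reloads from volume
      this.workers.set(userId, w);
    }
    w.lastSeen = Date.now();
    return w;
  }

  // Periodically reap idle workers so they scale to zero.
  reap(now = Date.now()): void {
    for (const [id, w] of this.workers) {
      if (now - w.lastSeen > this.idleMs) {
        w.stop();
        this.workers.delete(id);
      }
    }
  }
}
```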

Credential isolation. OpenClaw needs API keys to talk to LLM providers. In a multi-tenant setup, you can’t just set environment variables — every user has their own keys, and a compromised agent shouldn’t be able to read them. Workers don’t receive real API keys, ever. The gateway generates placeholder tokens (lobu_secret_<uuid>) and passes those instead. The real credentials stay in Redis. All outbound traffic flows through the gateway’s HTTP proxy, which swaps placeholders for real keys at request time. A compromised worker literally doesn’t have the secrets.
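The swap can be sketched like this — illustrative names only, with a plain Map standing in for Redis:

```typescript
// Sketch of placeholder-token swapping at the gateway's HTTP proxy.
// A Map stands in for Redis; names are illustrative, not Lobu's code.
import { randomUUID } from "crypto";

const realKeys = new Map<string, string>(); // placeholder -> real key

// Gateway side: mint the placeholder the worker is allowed to see.
function issuePlaceholder(realKey: string): string {
  const placeholder = `lobu_secret_${randomUUID()}`;
  realKeys.set(placeholder, realKey);
  return placeholder;
}

// Proxy side: rewrite the Authorization header at request time.
function rewriteAuth(headers: Record<string, string>): Record<string, string> {
  const auth = headers["authorization"] ?? "";
  const match = auth.match(/lobu_secret_[0-9a-f-]+/);
  if (!match) throw new Error("no placeholder token: request denied");
  const real = realKeys.get(match[0]);
  if (!real) throw new Error("unknown placeholder: request denied");
  return { ...headers, authorization: auth.replace(match[0], real) };
}
```

The worker only ever holds the placeholder; even dumping its entire environment yields nothing usable outside the proxy.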

Network isolation. Workers sit on an internal-only Docker network with no direct internet access. Outbound connections are denied by default — you control what domains workers can reach through allowlists. Even if an agent tries to call home, there’s no route out.
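The allowlist decision itself is simple. A hypothetical sketch of the deny-by-default check an egress proxy might run, with suffix matching so subdomains of an allowed domain pass:

```typescript
// Hypothetical deny-by-default egress check: a host passes only if it
// equals an allowlisted domain or is a subdomain of one.
function isAllowed(host: string, allowlist: string[]): boolean {
  const h = host.toLowerCase();
  return allowlist.some((d) => {
    const dom = d.toLowerCase();
    return h === dom || h.endsWith("." + dom);
  });
}
```

Note the leading dot in the suffix check: it prevents a lookalike like notanthropic.com from matching an allowlisted anthropic.com.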

Stability. OpenClaw agents can brick their own environment — install a bad package, corrupt the shell config, fill the disk. In a multi-tenant system that’s unacceptable. Each Lobu worker runs in an isolated container (Kata Containers, Firecracker microVMs, or gVisor) with resource limits, or in a lightweight virtualized-bash mode when scaling beyond 1,000 users. If an agent trashes its environment, it only affects that one user’s sandbox. The next session starts fresh from a clean image, or resumes from the last good persistent-volume snapshot.

MCP proxy. OpenClaw supports MCP servers, but in a multi-tenant setup you need per-user authentication. Lobu’s gateway proxies MCP calls so each user authenticates once via OAuth, and the gateway injects their credentials transparently. Workers don’t manage MCP tokens — they just call the MCP endpoint and the gateway handles auth.
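The injection step might look like this — an illustrative sketch, not Lobu’s actual code:

```typescript
// Illustrative sketch: the gateway attaches a user's stored OAuth token to
// an outgoing MCP request, so workers never handle tokens themselves.
const oauthTokens = new Map<string, string>(); // userId -> access token

function withUserAuth(
  userId: string,
  headers: Record<string, string>,
): Record<string, string> {
  const token = oauthTokens.get(userId);
  if (!token) throw new Error(`user ${userId} has not completed OAuth`);
  return { ...headers, authorization: `Bearer ${token}` };
}
```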

Skills and Nix

Every agent is configurable through a settings page — providers, skills, MCP servers, Nix packages, and permissions. All without touching config files.

Skills are modular bundles of instructions, MCP servers, system packages, and network requirements. A skill declares what it needs: integrations, Nix packages, and domains to allowlist. Tool visibility and approval policy live separately in lobu.toml, which keeps the capability manifest distinct from security controls. Lobu and Owletto now ship separate starter skills you can install with npx @lobu/cli@latest skills add lobu and npx owletto@latest skills add owletto. Teams can still create project-owned local skills, and agents can request skill installation mid-conversation — the user gets a prefilled settings link, approves, and the agent resumes.
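As a rough illustration, a skill’s declaration could look something like this. The field names here are hypothetical, not Lobu’s actual schema — they just mirror what the text says a skill declares: integrations, Nix packages, and domains to allowlist.

```toml
# Hypothetical skill manifest (illustrative field names, not the real schema).
[skill]
name = "video-tools"

[skill.requires]
integrations = ["youtube"]
nix-packages = ["ffmpeg", "yt-dlp"]
allow-domains = ["youtube.com", "googlevideo.com"]
```

Tool visibility and approval policy would stay in lobu.toml, per the separation described above.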

Nix is how we handle reproducible environments. Instead of baking every possible tool into the worker image, users install what they need from the settings page — ffmpeg, python, curl, whatever. Nix gives us deterministic, conflict-free package management across sandboxes. It’s the same approach Replit uses for their development environments, and for the same reason: when you have thousands of isolated environments, you need package management that’s reproducible and doesn’t break between runs. The worker image ships with Nix tooling; packages are resolved and available at container startup.

One bot, everything in-app

The most important user flows still happen inside the same bot thread: messaging, auth handoffs, permission grants, and connection prompts. Teams can also manage agents from the admin UI when they want a broader control plane.

On Telegram, settings open as a native Mini App inside the chat. Authentication is handled by Telegram’s built-in signed payload — no tokens in URLs, no login screens. On Slack, the same settings page opens via Block Kit buttons with short-lived claim codes.
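Telegram documents how to verify that signed payload server-side: derive a secret as HMAC-SHA256 of the bot token with the constant key "WebAppData", then HMAC the sorted data-check string and compare against the provided hash. A sketch using Node’s crypto module (not Lobu’s code):

```typescript
import { createHmac } from "crypto";

// Verify Telegram Mini App initData per Telegram's documented scheme:
// secret = HMAC_SHA256(key="WebAppData", msg=botToken), then compare
// HMAC_SHA256(key=secret, msg=dataCheckString) with the provided hash.
function verifyInitData(initData: string, botToken: string): boolean {
  const params = new URLSearchParams(initData);
  const hash = params.get("hash") ?? "";
  params.delete("hash");
  // Data-check string: "key=value" pairs, sorted, joined by newlines.
  const dataCheckString = [...params.entries()]
    .map(([k, v]) => `${k}=${v}`)
    .sort()
    .join("\n");
  const secret = createHmac("sha256", "WebAppData").update(botToken).digest();
  const expected = createHmac("sha256", secret)
    .update(dataCheckString)
    .digest("hex");
  return expected === hash;
}
```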

Both platforms share the same React settings page. When an agent needs a permission grant or a new integration mid-conversation, it posts the right UI natively — inline keyboard on Telegram, Block Kit button on Slack — back into the same thread. The user approves, and the agent continues.

Pricing

Pricing has evolved since launch. Today you can self-host the open-source stack for free, use the managed cloud while it’s free in beta, or work with us on an expert implementation. The latest details live on the pricing page.

Where this is going

The gateway is being rewritten with multi-tenancy as a first-class concern — usage tracking, audit logs, and a control plane so teams can manage agents without touching Kubernetes. The end state: push your agent config and Lobu handles the rest.

Try it

Add to Slack — free, BYO keys, nothing to deploy. For self-hosting: Docker or Kubernetes. The getting started guide walks through both.