Skills
A skill is a self-contained capability definition for a Lobu agent. Skills declare what external services, system packages, MCP servers, network access, and tool permissions an agent session needs. The gateway resolves these declarations at session start, so the worker container launches with the right environment — no manual setup required.
How skills work
```
SKILL.md (declarative config)
├── integrations   → third-party API access, managed by Owletto
├── mcpServers     → MCP tool servers, proxied through gateway
├── nixConfig      → system packages provisioned via Nix
├── networkConfig  → domain allowlist for the sandbox
└── toolsConfig    → agent tool permissions (allow / deny / strict)
```

When a user enables a skill in the settings page:
- Gateway reads the skill manifest and resolves each section.
- Credentials for integrations are managed by Owletto; MCP server credentials are resolved by the gateway proxy — workers never see raw secrets.
- Nix packages are passed as environment variables (`NIX_PACKAGES`, `NIX_FLAKE_URL`) to the worker entrypoint.
- Network and tool policies are enforced at the orchestrator level (Docker / Kubernetes).
- The worker container starts with everything in place.
System packages with Nix
Every Lobu worker container ships with Nix pre-installed. This gives agents access to over 100,000 packages from Nixpkgs without baking dependencies into a custom Docker image.
Why Nix
Traditional approaches to giving containers extra tools (installing via `apt-get` at runtime, or maintaining custom images per use case) are slow, fragile, or impossible when the container filesystem is read-only. Nix solves this:
- Declarative: list package names, get reproducible results.
- No root required: Nix installs into `/nix/store` under the `worker` user.
- Persistent across scale-to-zero: the Nix store is symlinked (Docker) or subPath-mounted (K8s) onto the workspace PVC. Packages survive container restarts.
- Composable: packages from settings, skill manifests, and repo-level config all merge into one `nix-shell` invocation.
Resolution order
The worker entrypoint evaluates Nix configuration in priority order:
| Priority | Source | Mechanism |
|---|---|---|
| 1 | NIX_FLAKE_URL env var | nix develop <url> — full flake-based dev shell |
| 2 | NIX_PACKAGES env var | nix-shell -p <pkgs> — individual packages from Nixpkgs |
| 3 | flake.nix in workspace | Auto-detected, nix develop . |
| 4 | shell.nix in workspace | Auto-detected, nix-shell |
| 5 | .nix-packages file | One package name per line, fed to nix-shell -p |
Sources 1 and 2 come from the API (skill settings or agent config). Sources 3–5 are repo-level files the agent can create or a user can commit. API sources always win.
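The priority cascade above can be sketched as a shell function. This is a hypothetical simplification, not the actual entrypoint script (`resolve_nix_env` and `ENTRYPOINT` are illustrative names); it prints the command it would run instead of exec'ing it:

```shell
resolve_nix_env() {
  if [ -n "$NIX_FLAKE_URL" ]; then
    echo "nix develop $NIX_FLAKE_URL --command $ENTRYPOINT"
  elif [ -n "$NIX_PACKAGES" ]; then
    # comma-separated list from the API becomes space-separated -p arguments
    echo "nix-shell -p $(echo "$NIX_PACKAGES" | tr ',' ' ') --command $ENTRYPOINT"
  elif [ -f flake.nix ]; then
    echo "nix develop . --command $ENTRYPOINT"
  elif [ -f shell.nix ]; then
    echo "nix-shell --command $ENTRYPOINT"
  elif [ -f .nix-packages ]; then
    # one package name per line
    echo "nix-shell -p $(tr '\n' ' ' < .nix-packages) --command $ENTRYPOINT"
  else
    echo "$ENTRYPOINT"
  fi
}

ENTRYPOINT="bun run src/index.ts"
unset NIX_FLAKE_URL
NIX_PACKAGES="ffmpeg,imagemagick" resolve_nix_env
# prints: nix-shell -p ffmpeg imagemagick --command bun run src/index.ts
```

Because the env-var branches are checked first, API-provided configuration always shadows repo-level files, matching the table above.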
Example: adding ffmpeg and ImageMagick
In the settings page, add `ffmpeg` and `imagemagick` to the agent’s system packages. The gateway passes `NIX_PACKAGES=ffmpeg,imagemagick` to the worker. On startup:

```sh
# Worker entrypoint (simplified)
nix-shell -p ffmpeg imagemagick --command "bun run src/index.ts"
```

The agent process and every child process (Bash tool calls, etc.) now have `ffmpeg` and `convert` on `$PATH`.
Persistence model
| Environment | Strategy |
|---|---|
| Docker (dev) | Nix store copied to /workspace/.nix-store, then symlinked back to /nix/store. Survives container restart. |
| Kubernetes (prod) | Init container copies /nix/store to PVC subpath. Main container mounts PVC subpath at /nix/store and /nix/var. |
Both approaches mean the first boot pays the install cost; subsequent sessions reuse the cached store.
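The Docker-side strategy can be sketched as a copy-then-symlink step. This is a hypothetical helper for illustration (the real entrypoint differs), parameterized so it can be tried on scratch paths:

```shell
persist_store() {
  store="$1"    # e.g. /nix/store
  cache="$2"    # e.g. /workspace/.nix-store (on the workspace volume)
  if [ ! -d "$cache" ]; then
    cp -a "$store" "$cache"   # first boot pays the copy cost
  fi
  rm -rf "$store"
  ln -s "$cache" "$store"     # the store path now points at the cached copy
}

# Try it on throwaway directories:
tmp=$(mktemp -d)
mkdir -p "$tmp/store" && echo demo > "$tmp/store/pkg"
persist_store "$tmp/store" "$tmp/cache"
cat "$tmp/store/pkg"   # prints: demo  (read through the symlink)
```

After the first run the cache directory exists, so subsequent calls skip the copy and only re-create the symlink, which is why later sessions start fast.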
Flake support
For complex environments (pinned Nixpkgs, overlays, custom derivations), point the agent at a flake:

```yaml
nixConfig:
  flakeUrl: "github:user/my-agent-env"
```

The worker runs `nix develop github:user/my-agent-env --command <entrypoint>`, giving the agent a fully reproducible shell.
Integrations
Integration auth for third-party APIs (GitHub, Google, Linear, etc.) is handled by Owletto. Workers access these APIs through Owletto MCP tools, which manage OAuth credentials, token refresh, and API proxying. Workers never see OAuth tokens directly.
MCP servers
MCP (Model Context Protocol) servers extend agent capabilities with additional tools. Skills can declare MCP servers, and the gateway handles proxying and credential injection.
- HTTP/SSE transport: workers connect to MCP servers through the gateway proxy. No direct outbound network required.
- Per-user credentials: each user authenticates independently via a device-code flow. The gateway injects the correct token when proxying MCP requests for a given user.
```yaml
mcpServers:
  my-memory:
    url: https://memory.example.com/mcp
    type: sse
```

Agent discovery
Agents can discover and install additional skills at runtime:
- `SearchSkills` — search the registry for skills matching a task
- `InstallSkill` (with `id`) — request installation or upgrade of a specific skill
- The user receives a prefilled settings link to review and approve
- Once approved, the skill is active for future sessions
Users always control what integrations and permissions their agent uses.
SKILL.md format
Skills are defined as `SKILL.md` files, Markdown with YAML frontmatter:

```markdown
---
name: My Skill
description: What this skill does

integrations:
  - id: google
    authType: oauth

mcpServers:
  my-mcp:
    url: https://my-mcp.example.com
    type: sse

nixConfig:
  packages: [jq, ripgrep, pandoc]

networkConfig:
  allowedDomains:
    - api.example.com

toolsConfig:
  allowedTools:
    - Read
    - Bash(git:*)
  deniedTools:
    - DeleteFile
  strictMode: true
---

# My Skill

Instructions and behavioral rules for the agent go here as Markdown.
```

The body acts as a system prompt extension: it shapes how the agent uses the declared capabilities.

Frontmatter reference
| Field | Type | Description |
|---|---|---|
| `name` | string | Display name shown in settings and search results |
| `description` | string | Short summary for the skill registry |
| `integrations` | array | Third-party services the skill requires (managed by Owletto) |
| `integrations[].id` | string | Integration identifier (e.g. `google`, `github`) |
| `integrations[].authType` | `oauth` \| `apiKey` | Authentication method |
| `mcpServers` | object | MCP server connections (keyed by server ID) |
| `mcpServers.<id>.url` | string | Server endpoint URL |
| `mcpServers.<id>.type` | `sse` \| `http` | Transport type |
| `nixConfig.packages` | string[] | Nix packages to install (from Nixpkgs) |
| `nixConfig.flakeUrl` | string | Nix flake URL for a full dev shell |
| `networkConfig.allowedDomains` | string[] | Domains the worker sandbox can reach |
| `toolsConfig.allowedTools` | string[] | Tools the agent is allowed to use |
| `toolsConfig.deniedTools` | string[] | Tools explicitly blocked |
| `toolsConfig.strictMode` | boolean | When `true`, only `allowedTools` are available |
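The interaction between `allowedTools`, `deniedTools`, and `strictMode` can be modeled as a small predicate. This is a simplified illustration using exact tool names only (the real configuration also supports patterns like `Bash(git:*)`); `tool_allowed` is a hypothetical helper, not part of the product:

```shell
# Deny always wins; with strictMode, only explicitly allowed tools pass.
tool_allowed() {
  tool="$1"; allowed="$2"; denied="$3"; strict="$4"
  case " $denied " in *" $tool "*) return 1 ;; esac
  if [ "$strict" = "true" ]; then
    case " $allowed " in *" $tool "*) return 0 ;; *) return 1 ;; esac
  fi
  return 0   # non-strict: anything not denied is allowed
}

tool_allowed Read "Read Bash" "DeleteFile" true  && echo yes   # prints: yes
tool_allowed DeleteFile "Read" "DeleteFile" true || echo no    # prints: no
tool_allowed WebFetch "Read" "DeleteFile" false  && echo yes   # prints: yes
```

Note the last call: with `strictMode: false`, a tool absent from `allowedTools` still passes as long as it is not denied.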
Markdown body
The body after the frontmatter is injected into the agent’s system prompt when the skill is active. Use it to define behavioral rules, workflows, or domain-specific instructions. The agent follows these instructions alongside its base prompt.
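Splitting a `SKILL.md` into frontmatter and body is mechanical. A minimal sketch, assuming the file starts with a `---` fence (`frontmatter` is a hypothetical helper):

```shell
# Print just the YAML frontmatter: the lines between the first pair of '---'.
frontmatter() {
  awk '/^---$/ { n++; next } n == 1 { print } n >= 2 { exit }' "$1"
}

printf -- '---\nname: My Skill\ndescription: Demo\n---\n# My Skill\nBody text.\n' > /tmp/SKILL.md
frontmatter /tmp/SKILL.md
# prints:
# name: My Skill
# description: Demo
```

Everything after the second `---` is the Markdown body that gets appended to the system prompt.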
Full registry source
The built-in system skills and providers are config-driven:

- Local: `config/system-skills.json`
- GitHub: `config/system-skills.json`
View full system-skills.json
{
"skills": [
{
"id": "owletto",
"name": "Owletto Memory",
"description": "Long-term memory across conversations. Use when you need to remember user preferences, recall past context, or store important facts.",
"instructions": "Check Owletto memory at the start of each conversation for relevant context. Store important user preferences, facts, and decisions.",
"hidden": true,
"mcpServers": [
{
"id": "owletto",
"name": "Owletto",
"url": "${env:AUTH_MCP_URL}",
"type": "sse"
}
]
},
{
"id": "groq",
"name": "Groq",
"description": "Fast LLM inference via Groq",
"providers": [
{
"displayName": "Groq",
"iconUrl": "https://www.google.com/s2/favicons?domain=groq.com&sz=128",
"envVarName": "GROQ_API_KEY",
"upstreamBaseUrl": "https://api.groq.com/openai",
"apiKeyInstructions": "Get your API key from <a href=\"https://console.groq.com/keys\" target=\"_blank\" class=\"text-blue-600 hover:underline\">Groq Console</a>",
"apiKeyPlaceholder": "gsk_...",
"sdkCompat": "openai",
"defaultModel": "llama-3.3-70b-versatile",
"modelsEndpoint": "/v1/models",
"stt": {
"enabled": true,
"sdkCompat": "openai",
"transcriptionPath": "/v1/audio/transcriptions",
"model": "whisper-large-v3-turbo"
}
}
]
},
{
"id": "gemini",
"name": "Gemini",
"description": "Google Gemini models",
"providers": [
{
"displayName": "Gemini",
"iconUrl": "https://www.google.com/s2/favicons?domain=gemini.google.com&sz=128",
"envVarName": "GEMINI_API_KEY",
"upstreamBaseUrl": "https://generativelanguage.googleapis.com/v1beta/openai",
"apiKeyInstructions": "Get your API key from <a href=\"https://aistudio.google.com/apikey\" target=\"_blank\" class=\"text-blue-600 hover:underline\">Google AI Studio</a>",
"apiKeyPlaceholder": "AIza...",
"sdkCompat": "openai",
"defaultModel": "gemini-2.0-flash",
"modelsEndpoint": "/v1/models"
}
]
},
{
"id": "together-ai",
"name": "Together AI",
"description": "Open-source models hosted by Together",
"providers": [
{
"displayName": "Together AI",
"iconUrl": "https://www.google.com/s2/favicons?domain=together.ai&sz=128",
"envVarName": "TOGETHER_API_KEY",
"upstreamBaseUrl": "https://api.together.xyz/v1",
"apiKeyInstructions": "Get your API key from <a href=\"https://api.together.ai/settings/api-keys\" target=\"_blank\" class=\"text-blue-600 hover:underline\">Together AI Dashboard</a>",
"apiKeyPlaceholder": "tok_...",
"sdkCompat": "openai",
"defaultModel": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
"modelsEndpoint": "/v1/models"
}
]
},
{
"id": "nvidia",
"name": "NVIDIA NIM",
"description": "NVIDIA-hosted models and inference APIs",
"providers": [
{
"displayName": "NVIDIA NIM (free)",
"iconUrl": "https://www.google.com/s2/favicons?domain=nvidia.com&sz=128",
"envVarName": "NVIDIA_API_KEY",
"upstreamBaseUrl": "https://integrate.api.nvidia.com/v1",
"apiKeyInstructions": "Get your API key from <a href=\"https://build.nvidia.com/settings/api-keys\" target=\"_blank\" class=\"text-blue-600 hover:underline\">NVIDIA Build</a>",
"apiKeyPlaceholder": "nvapi-...",
"sdkCompat": "openai",
"modelsEndpoint": "/v1/models",
"defaultModel": "nvidia/moonshotai/kimi-k2.5"
}
]
},
{
"id": "z-ai",
"name": "z.ai",
"description": "z.ai coding and reasoning models",
"providers": [
{
"displayName": "z.ai",
"iconUrl": "https://www.google.com/s2/favicons?domain=z.ai&sz=128",
"envVarName": "Z_AI_API_KEY",
"upstreamBaseUrl": "https://api.z.ai/api/coding/paas/v4",
"apiKeyInstructions": "Get your API key from <a href=\"https://z.ai/manage-apikey/apikey-list\" target=\"_blank\" class=\"text-blue-600 hover:underline\">z.ai</a>",
"apiKeyPlaceholder": "zai-...",
"sdkCompat": "openai"
}
]
},
{
"id": "elevenlabs",
"name": "ElevenLabs",
"description": "Voice and speech models",
"providers": [
{
"displayName": "ElevenLabs",
"iconUrl": "https://www.google.com/s2/favicons?domain=elevenlabs.io&sz=128",
"envVarName": "ELEVENLABS_API_KEY",
"upstreamBaseUrl": "https://api.elevenlabs.io",
"apiKeyInstructions": "Get your API key from <a href=\"https://elevenlabs.io/app/api-keys\" target=\"_blank\" class=\"text-blue-600 hover:underline\">ElevenLabs</a>",
"apiKeyPlaceholder": "sk_...",
"sdkCompat": "openai"
}
]
},
{
"id": "fireworks",
"name": "Fireworks AI",
"description": "Fast inference for open-source models",
"providers": [
{
"displayName": "Fireworks AI",
"iconUrl": "https://www.google.com/s2/favicons?domain=fireworks.ai&sz=128",
"envVarName": "FIREWORKS_API_KEY",
"upstreamBaseUrl": "https://api.fireworks.ai/inference/v1",
"apiKeyInstructions": "Get your API key from <a href=\"https://fireworks.ai/account/api-keys\" target=\"_blank\" class=\"text-blue-600 hover:underline\">Fireworks Dashboard</a>",
"apiKeyPlaceholder": "fw_...",
"sdkCompat": "openai",
"defaultModel": "accounts/fireworks/models/llama-v3p3-70b-instruct",
"modelsEndpoint": "/v1/models"
}
]
},
{
"id": "mistral",
"name": "Mistral",
"description": "Mistral AI models",
"providers": [
{
"displayName": "Mistral",
"iconUrl": "https://www.google.com/s2/favicons?domain=mistral.ai&sz=128",
"envVarName": "MISTRAL_API_KEY",
"upstreamBaseUrl": "https://api.mistral.ai/v1",
"apiKeyInstructions": "Get your API key from <a href=\"https://console.mistral.ai/api-keys\" target=\"_blank\" class=\"text-blue-600 hover:underline\">Mistral Console</a>",
"apiKeyPlaceholder": "sk-...",
"sdkCompat": "openai",
"defaultModel": "mistral-large-latest",
"modelsEndpoint": "/v1/models"
}
]
},
{
"id": "deepseek",
"name": "DeepSeek",
"description": "DeepSeek reasoning and coding models",
"providers": [
{
"displayName": "DeepSeek",
"iconUrl": "https://www.google.com/s2/favicons?domain=deepseek.com&sz=128",
"envVarName": "DEEPSEEK_API_KEY",
"upstreamBaseUrl": "https://api.deepseek.com",
"apiKeyInstructions": "Get your API key from <a href=\"https://platform.deepseek.com/api_keys\" target=\"_blank\" class=\"text-blue-600 hover:underline\">DeepSeek Platform</a>",
"apiKeyPlaceholder": "sk-...",
"sdkCompat": "openai",
"defaultModel": "deepseek-chat",
"modelsEndpoint": "/v1/models"
}
]
},
{
"id": "openrouter",
"name": "OpenRouter",
"description": "Multi-provider model router with per-user billing",
"providers": [
{
"displayName": "OpenRouter",
"iconUrl": "https://www.google.com/s2/favicons?domain=openrouter.ai&sz=128",
"envVarName": "OPENROUTER_API_KEY",
"upstreamBaseUrl": "https://openrouter.ai/api/v1",
"apiKeyInstructions": "Get your API key from <a href=\"https://openrouter.ai/keys\" target=\"_blank\" class=\"text-blue-600 hover:underline\">OpenRouter</a>, or connect via OAuth for per-user billing",
"apiKeyPlaceholder": "sk-or-...",
"sdkCompat": "openai",
"defaultModel": "anthropic/claude-sonnet-4",
"modelsEndpoint": "/v1/models",
"stt": {
"enabled": true,
"sdkCompat": "openai",
"transcriptionPath": "/audio/transcriptions",
"model": "whisper-1"
}
}
]
},
{
"id": "cerebras",
"name": "Cerebras",
"description": "Ultra-fast inference on Cerebras hardware",
"providers": [
{
"displayName": "Cerebras",
"iconUrl": "https://www.google.com/s2/favicons?domain=cerebras.ai&sz=128",
"envVarName": "CEREBRAS_API_KEY",
"upstreamBaseUrl": "https://api.cerebras.ai/v1",
"apiKeyInstructions": "Get your API key from <a href=\"https://cloud.cerebras.ai/\" target=\"_blank\" class=\"text-blue-600 hover:underline\">Cerebras Cloud</a>",
"apiKeyPlaceholder": "csk-...",
"sdkCompat": "openai",
"defaultModel": "llama-3.3-70b",
"modelsEndpoint": "/v1/models"
}
]
},
{
"id": "opencode-zen",
"name": "OpenCode Zen",
"description": "Curated AI gateway with 40+ coding-optimized models",
"providers": [
{
"displayName": "OpenCode Zen",
"iconUrl": "https://www.google.com/s2/favicons?domain=opencode.ai&sz=128",
"envVarName": "OPENCODE_ZEN_API_KEY",
"upstreamBaseUrl": "https://opencode.ai/zen/v1",
"apiKeyInstructions": "Get your API key from <a href=\"https://opencode.ai/auth\" target=\"_blank\" class=\"text-blue-600 hover:underline\">OpenCode Zen</a>",
"apiKeyPlaceholder": "zen-...",
"sdkCompat": "openai",
"defaultModel": "anthropic/claude-sonnet-4",
"modelsEndpoint": "/v1/models"
}
]
},
{
"id": "xai",
"name": "xAI",
"description": "Grok models by xAI",
"providers": [
{
"displayName": "xAI",
"iconUrl": "https://www.google.com/s2/favicons?domain=x.ai&sz=128",
"envVarName": "XAI_API_KEY",
"upstreamBaseUrl": "https://api.x.ai/v1",
"apiKeyInstructions": "Get your API key from <a href=\"https://console.x.ai/\" target=\"_blank\" class=\"text-blue-600 hover:underline\">xAI Console</a>",
"apiKeyPlaceholder": "xai-...",
"sdkCompat": "openai",
"defaultModel": "grok-3",
"modelsEndpoint": "/v1/models"
}
]
},
{
"id": "perplexity",
"name": "Perplexity",
"description": "Search-augmented AI models",
"providers": [
{
"displayName": "Perplexity",
"iconUrl": "https://www.google.com/s2/favicons?domain=perplexity.ai&sz=128",
"envVarName": "PERPLEXITY_API_KEY",
"upstreamBaseUrl": "https://api.perplexity.ai",
"apiKeyInstructions": "Get your API key from <a href=\"https://www.perplexity.ai/settings/api\" target=\"_blank\" class=\"text-blue-600 hover:underline\">Perplexity Settings</a>",
"apiKeyPlaceholder": "pplx-...",
"sdkCompat": "openai",
"defaultModel": "sonar-pro",
"modelsEndpoint": "/v1/models"
}
]
},
{
"id": "cohere",
"name": "Cohere",
"description": "Enterprise AI with Command and Embed models",
"providers": [
{
"displayName": "Cohere",
"iconUrl": "https://www.google.com/s2/favicons?domain=cohere.com&sz=128",
"envVarName": "COHERE_API_KEY",
"upstreamBaseUrl": "https://api.cohere.com/compatibility/v1",
"apiKeyInstructions": "Get your API key from <a href=\"https://dashboard.cohere.com/api-keys\" target=\"_blank\" class=\"text-blue-600 hover:underline\">Cohere Dashboard</a>",
"apiKeyPlaceholder": "co-...",
"sdkCompat": "openai",
"defaultModel": "command-r-plus",
"modelsEndpoint": "/v1/models"
}
]
},
{
"id": "openai",
"name": "OpenAI",
"description": "OpenAI Models",
"providers": [
{
"displayName": "OpenAI",
"iconUrl": "https://www.google.com/s2/favicons?domain=openai.com&sz=128",
"envVarName": "OPENAI_API_KEY",
"upstreamBaseUrl": "https://api.openai.com",
"apiKeyInstructions": "Get your API key from <a href=\"https://platform.openai.com/api-keys\" target=\"_blank\" class=\"text-blue-600 hover:underline\">OpenAI Dashboard</a>",
"apiKeyPlaceholder": "sk-...",
"sdkCompat": "openai",
"defaultModel": "gpt-4o",
"modelsEndpoint": "/v1/models"
}
]
}
]
}
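Since the registry is plain JSON, it can be inspected with standard tools. For example, listing each provider's API-key env var (this assumes `jq` is available; shown against an inline sample, but in the repo you would point `jq` at `config/system-skills.json` instead):

```shell
jq -r '.skills[].providers[]?.envVarName' <<'EOF'
{"skills":[
  {"id":"owletto"},
  {"id":"groq","providers":[{"envVarName":"GROQ_API_KEY"}]},
  {"id":"gemini","providers":[{"envVarName":"GEMINI_API_KEY"}]}
]}
EOF
# prints:
# GROQ_API_KEY
# GEMINI_API_KEY
```

The `[]?` form skips skills (like `owletto`) that declare no `providers` array instead of erroring on them.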