Core Concepts
If you are confused about how the pieces of Savine fit together, this page is your mental model reference.
Architecture
Savine acts as the control plane between your application and the underlying LLMs and sandbox environments.
```mermaid
flowchart TD
    App[Your Application] -->|API Request| Gateway[Savine API Gateway]
    subgraph Savine Platform
        Gateway --> Engine[AgentGraphEngine]
        Engine --> LLM[LLM Provider / BYOK]
        Engine --> Queue[Task Queue]
        Engine --> Mem[Memory Layer]
        Engine --> Tool[Tool Gateway]
        Tool --> Sandbox[Sandboxed Execution Environment]
        Engine -.-> Obs[Observability & Billing]
    end
```

Key Components
Agent
What it is: A configuration file (agent.json), not running code. Why it matters: The platform runs the agent; agent.json only defines its behaviour. Savine provides the engine; you provide the blueprint. Read the agent.json spec →
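To make the "configuration, not code" idea concrete, here is a minimal sketch of what an agent manifest might look like. The field names (`name`, `model`, `instructions`, `tools`) are illustrative assumptions, not the actual agent.json schema.

```python
import json

# Hypothetical agent manifest -- field names are illustrative
# assumptions, not the documented Savine schema.
agent_manifest = {
    "name": "summarizer",
    "model": "gpt-4o",           # resolved via your BYOK provider config
    "instructions": "Summarize the input text in three bullet points.",
    "tools": ["web_search"],     # executed via the Tool Gateway, not locally
}

# The manifest is plain data: the platform runs it, you only declare it.
print(json.dumps(agent_manifest, indent=2))
```

The point is that everything above is inert data; nothing in the file executes on your machine.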
Task
What it is: The unit of work. One task is a single invocation of an agent. Why it matters: A task has an input, runs through an execution loop, and produces an output. All tasks run asynchronously by default and can be polled or streamed.
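The submit-then-poll lifecycle can be sketched as follows. The client below is a stub standing in for HTTP calls to the Savine API; the method names, endpoints, and response fields (`task_id`, `status`, `output`) are assumptions for illustration only.

```python
import time

class StubClient:
    """Stand-in for an HTTP client talking to the Savine API (hypothetical)."""
    def __init__(self):
        self._polls = 0

    def create_task(self, agent, input_text):
        # Task is accepted immediately; work happens asynchronously.
        return {"task_id": "task_123", "status": "queued"}

    def get_task(self, task_id):
        # Simulate a task that finishes on the third poll.
        self._polls += 1
        status = "succeeded" if self._polls >= 3 else "running"
        result = {"status": status}
        if status == "succeeded":
            result["output"] = "summary of the input"
        return result

client = StubClient()
task = client.create_task("summarizer", "long article text ...")
while True:
    state = client.get_task(task["task_id"])
    if state["status"] in ("succeeded", "failed"):
        break
    time.sleep(0)  # in real code, back off between polls
```

Streaming would replace the polling loop with a long-lived connection, but the input → loop → output shape is the same.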
Execution Loop
What it is: The strict THINK → ACT → OBSERVE cycle that powers autonomy. Why it matters: By enforcing this cycle strictly, Savine prevents agents from falling into infinite loops, hallucinating tool calls, or spinning out of control. The platform controls the loop.
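The cycle, plus the kind of hard step limit that keeps it bounded, can be sketched in a few lines. The `think`/`act` logic here is a toy stand-in; only the loop shape and the cap reflect the concept described above.

```python
# Minimal sketch of THINK -> ACT -> OBSERVE with a hard step limit.
def run_loop(goal, max_steps=5):
    observations = []
    for step in range(max_steps):
        thought = f"step {step}: decide next action for {goal!r}"   # THINK
        action = "finish" if observations else "lookup"             # ACT
        if action == "finish":
            return {"steps": step + 1, "trace": observations}
        observations.append(f"observed result of {action}")         # OBSERVE
    # Cap reached: the engine stops the agent instead of looping forever.
    return {"steps": max_steps, "trace": observations, "truncated": True}

result = run_loop("find the capital of France")
```

Because the loop lives in the platform rather than in the prompt, the cap cannot be talked around by the model.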
System
What it is: A connected network of agents defined via system.json. Why it matters: Rather than building massive monolithic prompts, you pass tasks between narrow, specialised agents. One API endpoint orchestrates the whole pipeline. Read the system.json spec →
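The pipeline idea can be sketched as a sequence of narrow agents, each consuming the previous agent's output. In Savine this wiring lives in system.json; the dict and runner below are illustrative stand-ins, not the real schema or engine.

```python
# Hypothetical system definition: three narrow agents run in sequence.
system = {
    "name": "research-pipeline",
    "agents": ["searcher", "summarizer", "formatter"],
}

def run_system(system, task_input, run_agent):
    data = task_input
    for agent in system["agents"]:
        data = run_agent(agent, data)  # one narrow agent per step
    return data

# Toy agent runner: tags the payload with each agent that touched it.
output = run_system(system, "query", lambda agent, d: f"{agent}({d})")
```

One call into the pipeline exercises all three agents, which is what "one API endpoint orchestrates the whole pipeline" means in practice.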
Tool Gateway
What it is: The proxy layer that all tool requests pass through. Why it matters: Your agent doesn't execute Python directly; it asks the Tool Gateway to do it. This centralizes permission checks, metering, and audit logging. Read the Tools guide →
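A minimal sketch of why one choke point matters: every call is logged, and permissions are checked in one place. Class and method names are illustrative, not Savine's API.

```python
class ToolGateway:
    """Toy gateway: audit every request, allow only whitelisted tools."""
    def __init__(self, allowed_tools):
        self.allowed = set(allowed_tools)
        self.audit_log = []

    def call(self, agent, tool, args):
        self.audit_log.append((agent, tool, args))      # logged even if denied
        if tool not in self.allowed:
            raise PermissionError(f"{tool!r} not permitted for {agent!r}")
        return f"{tool} executed with {args}"           # would run in the sandbox

gw = ToolGateway(allowed_tools=["python"])
ok = gw.call("summarizer", "python", {"code": "1+1"})
```

Because denial happens after logging, even rejected requests leave an audit trail.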
Runtime Engine
What it is: The XState state machine (AgentGraphEngine) that controls every task execution. Why it matters: It is infrastructure running underneath your configuration, transparently handling retries, backoff, checkpointing, and circuit breaking.
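As one example of what the engine absorbs so your configuration doesn't have to, here is a sketch of retry with exponential backoff. This is a generic illustration of the technique, not AgentGraphEngine's actual implementation; delays are computed but not slept, to keep the example instant.

```python
def with_retries(fn, max_attempts=4, base_delay=0.5):
    """Retry fn with exponentially growing delays; re-raise on final failure."""
    delays = []
    for attempt in range(max_attempts):
        try:
            return fn(), delays
        except Exception:
            if attempt == max_attempts - 1:
                raise
            delays.append(base_delay * 2 ** attempt)  # 0.5, 1.0, 2.0, ...

calls = {"n": 0}
def flaky():
    # Fails twice, then succeeds -- a typical transient error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result, delays = with_retries(flaky)
```

In Savine this behaviour is the platform's responsibility; agent.json never needs a retry block.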
Sandbox
What it is: The isolated execution environment (using gVisor) where tools run. Why it matters: If an agent runs arbitrary Python or accesses a filesystem, it happens in a completely isolated container with strictly defined network and resource rules, which is destroyed immediately after use.
BYOK (Bring Your Own Key)
What it is: Supplying your own API keys for LLM providers (e.g. OpenAI, Google). Why it matters: You keep your billing relationship with the AI labs, and you own your data. Savine encrypts your keys at rest and only injects them into runtime memory.
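The `key_ref` indirection from the glossary below works roughly like this: the manifest names an environment variable, and the key itself never appears in the config. The field names here are assumptions for illustration, not the documented schema.

```python
import os

# Hypothetical manifest fragment: references the key, never contains it.
manifest = {"provider": "openai", "key_ref": "OPENAI_API_KEY"}

os.environ["OPENAI_API_KEY"] = "sk-example"  # set here for demonstration only

def resolve_key(manifest):
    """Look up the referenced env var at runtime; fail loudly if unset."""
    key = os.environ.get(manifest["key_ref"])
    if key is None:
        raise KeyError(f"missing env var {manifest['key_ref']}")
    return key

api_key = resolve_key(manifest)
```

Keeping only a reference in the manifest means the manifest can be committed to version control while the key stays in your deployment environment.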
Glossary
| Term | Definition | Documentation |
|---|---|---|
| AgentGraphEngine | The core deterministic state machine that executes agents. | Runtime Engine |
| BYOK | Bring Your Own Key. Supplying API tokens for LLMs directly to Savine. | LLM Configuration |
| key_ref | The string in agent.json that references an API key stored in an environment variable. | Deployment Guide |
| Manifest | The JSON configuration describing an agent or system. | Agent Spec |
| Sandbox | Secure container environment where your agent's code/tools execute. | Security Guide |
| System | Orchestration of multiple Agents working together in sequence/parallel. | System Spec |
| Task | A single request/response lifecycle given to an agent or system. | API Reference |
| Trace | The step-by-step log of an execution loop (THINK, ACT, OBSERVE). | Observability |