
Migrate from LangChain

If you're moving to Savine from a bloated LangChain script, this guide will help you adapt your mental models. The conceptual difference is simple: Savine defines agents declaratively; LangChain builds them imperatively.

Concept Mapping

| LangChain | Savine | Difference |
| --- | --- | --- |
| `LLMChain` | `agent.json` | No Python required; LLM settings are JSON keys. |
| `AgentExecutor` | `AgentGraphEngine` | Savine's state machine handles retries and loops automatically underneath. |
| `Tool` / `@tool` | Tool Gateway | Tools run in strictly sandboxed environments; custom tools are HTTP endpoints. |
| `RunnableSequence` | `system.json` sequence | Routing and sequential flows are plain DAG JSON objects. |
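
As a purely illustrative sketch of the last row (Savine's actual `system.json` schema is not shown in this guide, so every key below is an assumption), a sequential two-agent flow expressed as a DAG might look like:

```json
{
  "name": "research-then-write",
  "sequence": [
    { "agent": "researcher", "next": "writer" },
    { "agent": "writer", "next": null }
  ]
}
```

Check your Savine version's schema reference for the real key names before copying this.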

Code Comparison

Before (LangChain)

A fragile imperative script that you must host and operate yourself.

```python
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, Tool
from custom_web_search import get_search

llm = ChatOpenAI(temperature=0, model="gpt-4")
tools = [
    Tool(
        name="Search",
        func=get_search,
        description="useful for when you need to answer questions about current events"
    )
]

agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

try:
    response = agent.run("Who won the superbowl last night?")
    print(response)
except Exception as e:
    # Manual error handling
    print("Agent crashed:", e)
```

After (Savine)

A durable 5-line JSON file deployed as an API.

```json
{
  "name": "langchain-migrated-agent",
  "llm": { "provider": "openai", "model": "gpt-4o", "key_ref": "OPENAI_API_KEY" },
  "tools": ["web_search"]
}
```

Then simply call it:

```bash
savine deploy
savine tasks submit --agent langchain-migrated-agent --input "Who won the superbowl last night?"
```

Workarounds for Unsupported Features

1. LangSmith Integration

Savine ships with built-in observability (Traces, Latency, Loops), so no separate LangSmith integration is needed. Open the Traces tab in the Savine dashboard.

2. Output Parsers (Pydantic / Structured Output)

LangChain relies heavily on OutputParsers. In Savine, declare a structured JSON schema constraint via the response_format key in agent.json; the LLM is then constrained to emit output that validates against that schema, so no post-hoc parsing hacks are needed.
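
As a hedged sketch, such a constraint in agent.json might look like the following. The nesting under response_format here mirrors OpenAI's structured-output shape and is an assumption; Savine's exact key layout may differ:

```json
{
  "name": "structured-agent",
  "llm": { "provider": "openai", "model": "gpt-4o", "key_ref": "OPENAI_API_KEY" },
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "game_result",
      "schema": {
        "type": "object",
        "properties": {
          "winner": { "type": "string" },
          "score": { "type": "string" }
        },
        "required": ["winner"]
      }
    }
  }
}
```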

3. Extremely Complex Tool Chains

If your LangChain script relies on large, deeply nested legacy Python functions, wrap that code in a small Flask or FastAPI service and expose it to Savine as a Custom Tool schema, calling it over HTTP. Do not try to port 1,000 lines of complex Python directly into python_exec.