Integration Guide
LangGraph
Connect LangGraph to DashClaw and get your first governed action into /decisions in under 20 minutes.
Deploy DashClaw
Get a running instance. Click the Vercel deploy button or run locally.
Already have an instance? Skip to Step 2.
Install the DashClaw Python SDK and LangGraph
Create a virtual environment and install the required packages.
Terminal
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install dashclaw==2.6.0 langgraph==1.1.3 langchain-core==1.2.21 python-dotenv
Set environment variables
Create a .env file with your DashClaw connection details. No LLM API key required for the example.
.env
DASHCLAW_BASE_URL=https://your-dashclaw-instance.example.com
DASHCLAW_API_KEY=<your-workspace-api-key>
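It can help to fail fast at startup if either variable is missing. A minimal stdlib-only check, as a sketch (the `missing_env` helper is a hypothetical name, not part of the DashClaw SDK):

```python
import os

# Required DashClaw connection variables from the .env file above.
REQUIRED = ("DASHCLAW_BASE_URL", "DASHCLAW_API_KEY")

def missing_env(environ=os.environ):
    # Names of required variables that are unset or empty.
    return [name for name in REQUIRED if not environ.get(name)]
```

Call `missing_env()` before constructing the client and abort with a clear message if it returns anything.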
Add a governance node to your LangGraph graph
The governance node calls DashClaw guard before the research node runs. If the guard blocks, the research node skips execution.
main.py
import os

from dotenv import load_dotenv
from dashclaw import DashClaw
from langgraph.graph import StateGraph, END
from typing import TypedDict

load_dotenv()

class AgentState(TypedDict):
    topic: str
    research_result: str
    governance_decision: str
    action_id: str

claw = DashClaw(
    base_url=os.environ["DASHCLAW_BASE_URL"],
    api_key=os.environ["DASHCLAW_API_KEY"],
    agent_id="langgraph-research-agent",
)

def governance_node(state: AgentState) -> AgentState:
    """Check DashClaw guard before proceeding."""
    result = claw.guard({
        "action_type": "research",
        "declared_goal": f"Research topic: {state['topic']}",
        "risk_score": 30,
    })
    decision = result.get("decision", "allow")
    if decision == "block":
        return {**state, "governance_decision": "blocked"}
    action = claw.create_action(
        "research",
        f"Research topic: {state['topic']}",
        risk_score=30,
    )
    return {**state, "governance_decision": decision, "action_id": action["action_id"]}

def research_node(state: AgentState) -> AgentState:
    """Run the (simulated) research step; return early if the guard blocked."""
    if state["governance_decision"] == "blocked":
        return {**state, "research_result": ""}
    # Simulated LLM output -- no OPENAI_API_KEY required.
    result = f"Simulated research findings for: {state['topic']}"
    # Report the outcome back to DashClaw (keyword arguments shown here are
    # illustrative; check the SDK reference for the exact signature).
    claw.update_outcome(state["action_id"], success=True)
    return {**state, "research_result": result}

# Wire the graph
graph = StateGraph(AgentState)
graph.add_node("governance", governance_node)
graph.add_node("research", research_node)
graph.set_entry_point("governance")
graph.add_edge("governance", "research")
graph.add_edge("research", END)
app = graph.compile()

The governance node runs before your tool node. If the guard decision is 'block', the research node returns early. If 'allow', it proceeds and calls update_outcome when done.
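Instead of having the research node check the flag itself, LangGraph can route around a blocked run with a conditional edge. This is a sketch, not part of the example above: `route_after_governance` is a hypothetical helper, and the wiring comment assumes the same `graph` object.

```python
def route_after_governance(state: dict) -> str:
    # Send blocked runs straight to the end; allowed runs go to research.
    if state.get("governance_decision") == "blocked":
        return "end"
    return "research"

# Wiring sketch (replaces the unconditional governance -> research edge):
# graph.add_conditional_edges(
#     "governance", route_after_governance,
#     {"research": "research", "end": END},
# )
```

With this routing in place, the research node never runs for blocked actions, so it no longer needs its own early-return check.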
Run the governed LangGraph agent
Execute the example and watch the governance flow.
Terminal
python main.py
No OPENAI_API_KEY needed — the example simulates LLM output. Only the DashClaw SDK calls are real.
See the result in DashClaw
Open your DashClaw dashboard to confirm the action was recorded.
Go to /decisions — you should see your action in the ledger with action_type 'research', agent_id 'langgraph-research-agent', and status 'completed'.
Clone the full example
The complete runnable example is in the DashClaw repo.
Terminal
git clone https://github.com/ucsandman/DashClaw.git
cd DashClaw/examples/langgraph-governed
pip install -r requirements.txt
python main.py
For production LangChain integrations, the Python SDK also includes a DashClawCallbackHandler (sdk-python/dashclaw/integrations/langchain.py) that automatically governs all LLM calls.
What success looks like
Navigate to /decisions in your DashClaw instance. Your action should appear in the ledger within seconds of the agent run.
Governance as Code
Drop a guardrails.yml in your project root to enforce policies without code changes. DashClaw evaluates these rules at the guard step before any action executes.
guardrails.yml
version: 1
project: my-langgraph-agent
description: >
  Governance policy for a LangGraph research agent.
  High-risk external writes require approval.
  Low-risk reads are auto-allowed.

policies:
  - id: approve_external_writes
    description: Writing to external systems requires human approval
    applies_to:
      tools:
        - api.post
        - file.write
        - database.insert
    rule:
      require: approval

  - id: allow_research
    description: Read-only research is low risk
    applies_to:
      tools:
        - web.search
        - document.read
    rule:
      allow: true
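To make the policy semantics concrete, here is a toy evaluator for the rules above. It is not DashClaw's engine, just an illustration of how a tool name could map to a decision under this file; `evaluate_tool` and the returned strings are assumptions.

```python
# In-memory equivalent of the policies section of guardrails.yml above.
POLICIES = [
    {"id": "approve_external_writes",
     "tools": ["api.post", "file.write", "database.insert"],
     "rule": {"require": "approval"}},
    {"id": "allow_research",
     "tools": ["web.search", "document.read"],
     "rule": {"allow": True}},
]

def evaluate_tool(tool: str, policies=POLICIES) -> str:
    # First matching policy wins; unmatched tools fall through to a default.
    for policy in policies:
        if tool in policy["tools"]:
            rule = policy["rule"]
            if rule.get("require") == "approval":
                return "needs_approval"
            if rule.get("allow"):
                return "allow"
    return "default"
```

Under this sketch, `file.write` needs approval, `web.search` is auto-allowed, and anything unmatched falls back to the workspace default.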