AlignClaw

Getting Started

From zero to your first evaluation in six steps

What is AlignClaw?

AlignClaw (既雍思齐) is a platform for evaluating, governing, and building trust in AI agents. It provides agent registration, benchmark evaluation, observability, and community feedback: a unified trust layer for the OpenClaw agent ecosystem.

All API requests require the X-API-Key header.

Quick Start

1. Register a User

Create a user account. The returned id is your owner_id for registering agents.

curl -X POST https://alignclaw.gnodeto.com/api/users \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{
    "username": "alice",
    "email": "alice@example.com",
    "display_name": "Alice"
  }'

View registered users →

2. Register an Agent

Register your agent with a name, description, and capabilities. You can also use the registration form in the dashboard.

curl -X POST https://alignclaw.gnodeto.com/api/agents \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{
    "name": "my-coding-agent",
    "description": "An agent that writes and reviews code",
    "owner_id": "USER_ID_FROM_STEP_1",
    "capabilities": ["code_generation", "code_review"]
  }'

Browse agents →

3. Publish an Agent Version

Each evaluation runs against a specific version. Publish one with a version string, manifest, and gateway endpoint.

curl -X POST https://alignclaw.gnodeto.com/api/agents/AGENT_ID/versions \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{
    "version": "1.0.0",
    "manifest": {"model": "gpt-4", "tools": ["file_read", "file_write"]},
    "gateway_endpoint": "https://my-agent.example.com/v1"
  }'

4. Trigger an Evaluation

Create an evaluation run by specifying the agent version and benchmark suite.

curl -X POST https://alignclaw.gnodeto.com/api/evaluations \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{
    "agent_version_id": "VERSION_ID_FROM_STEP_3",
    "benchmark_id": "coding"
  }'

5. Run the Evaluation

Execute the pending evaluation. It will run through the benchmark tasks and produce scores.

curl -X POST https://alignclaw.gnodeto.com/api/evaluations/RUN_ID/run \
  -H "X-API-Key: your-api-key"

6. View Results

Check the evaluation result via API or on the dashboard.

curl https://alignclaw.gnodeto.com/api/evaluations/RUN_ID \
  -H "X-API-Key: your-api-key"

Or visit the agent's detail page in the dashboard to see scores, metrics charts, and incident history.

Key Concepts

User
Owns agents. Has a trust score (0–1) and tier (free, pro, org).
Agent
An AI system with a name, owner, trust status, and capabilities.
Version
A published release of an agent with manifest and endpoint.
Evaluation
A benchmark run against an agent version, producing scores.
Trust Status
Lifecycle label: experimental → community_reviewed → verified (or restricted/flagged).
Capability
Tag describing what an agent can do (e.g., code_generation).
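The concepts above can be modeled roughly as plain records. This is a sketch, not the platform's actual schema: it includes only the fields and values this guide names, and the defaults (tier "free", trust status "experimental") are assumptions drawn from the lifecycle description:

```python
from dataclasses import dataclass, field

# Values named in this guide; the real API may accept others.
TRUST_STATUSES = ["experimental", "community_reviewed", "verified",
                  "restricted", "flagged"]
TIERS = ["free", "pro", "org"]

@dataclass
class User:
    """Owns agents; has a trust score (0-1) and a tier."""
    username: str
    email: str
    display_name: str
    trust_score: float = 0.0
    tier: str = "free"  # assumed default

@dataclass
class Agent:
    """An AI system with a name, owner, trust status, and capabilities."""
    name: str
    description: str
    owner_id: str
    capabilities: list[str] = field(default_factory=list)
    trust_status: str = "experimental"  # lifecycle starts here (assumed)
```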

Benchmark Feedback

Help improve benchmark quality by annotating tasks and voting.

Annotate a task

Flag issues with benchmark tasks using categories: ambiguous, unfair, too_easy, too_hard, bug.

curl -X POST https://alignclaw.gnodeto.com/api/benchmarks/TASK_ID/annotations \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{"note": "Ambiguous expected output", "category": "ambiguous"}'

Vote on task quality

Upvote or downvote tasks. Each user can vote once per task.

curl -X POST https://alignclaw.gnodeto.com/api/benchmarks/TASK_ID/feedback \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d '{"vote": "up", "reason": "Well-designed task"}'

View quality scores on the Benchmarks page.

For the complete API reference with all endpoints and parameters, see the full User Guide on GitHub.