

JUDGMENT ONTOLOGY

A decade of tradecraft, formalized.

The operational layer that makes Sayari’s AI trustworthy. Built from eleven years of real-world tradecraft – encoding skills, failure modes, and judgment criteria into every AI decision.

ontology.evaluate
{
  "skill": "source_verification",
  "input": "No sanctions matches found",
  "traps_detected": [
    "confidence_inflation",
    "source_gap"
  ],
  "judgment": "FAIL",
  "rationale": "High confidence on thin evidence.
  No ownership traversal performed.",
  "remediation": "traverse_ownership_chain"
}

Our world model gave us intelligence. Our tradecraft is what we built to make sense of it.

The Judgment Ontology encodes tradecraft into Sayari’s AI – not as abstract rules, but as patterns drawn from thousands of real investigations and real outcomes.

Tradecraft produces judgment: knowing what to do next when the answer isn’t obvious.

300+
Hours of recorded expert investigations
66K+
Analyst judgments labeled and encoded
1K+
Written reports and findings
11
Years of real-world commercial intelligence

The Evaluation System

What Sayari’s agents learn from, and what their performance is measured against – encoding tradecraft as skills, failure modes, and judgment criteria.

PRINCIPLE 01

Reasoning matters. An AI that reaches the right answer through faulty reasoning will fail at scale – and you won’t be able to tell the difference.

PRINCIPLE 02

The Ontology constrains AI to reason like experienced analysts. A million queries, same quality – because the judgment is sound, not lucky.

judgment_evaluation
skills: [source_verification, ownership_traversal,
         jurisdiction_analysis, temporal_reasoning,
         network_analysis, confidence_calibration,
         lateral_creativity]

trap_patterns: 13 codified failure modes
labeled_decisions: 67,000+
scoring: GREEN / AMBER / RED
output: verdict + rationale + sources + next_steps
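A record with this shape could be sketched as follows. This is a minimal sketch, not Sayari's implementation: the class name, field names beyond those listed above, and the mapping from traps caught to a GREEN / AMBER / RED score are assumptions for illustration.

```python
from dataclasses import dataclass

# The seven skills listed above.
ONTOLOGY_SKILLS = {
    "source_verification", "ownership_traversal", "jurisdiction_analysis",
    "temporal_reasoning", "network_analysis", "confidence_calibration",
    "lateral_creativity",
}

@dataclass
class JudgmentEvaluation:
    """Hypothetical evaluation record: verdict + rationale + sources + next_steps."""
    skills_applied: list[str]
    traps_detected: list[str]
    rationale: str
    sources: list[str]
    next_steps: list[str]

    def __post_init__(self) -> None:
        unknown = set(self.skills_applied) - ONTOLOGY_SKILLS
        if unknown:
            raise ValueError(f"unknown skills: {unknown}")

    @property
    def score(self) -> str:
        # Assumed mapping: no traps caught -> GREEN, one -> AMBER, more -> RED.
        n = len(self.traps_detected)
        return "GREEN" if n == 0 else "AMBER" if n == 1 else "RED"
```

For example, the evaluation shown at the top of this page (two traps detected, remediation `traverse_ownership_chain`) would score RED under this assumed mapping.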

See the Difference

Same prompt. Two very different answers.

Generic AI
query: "Assess Meridian Trading LLC"

response: "Meridian Trading LLC is registered
in Delaware. No sanctions matches found.
The company appears to be a standard
trading entity with no significant
risk indicators."

confidence: 0.91
reasoning: none
sources_checked: 1
traps_caught: 0
Sayari’s Agents (Ontology-Guided)
query: "Assess Meridian Trading LLC"

ontology.evaluate: {
  skills_applied: [
    "source_verification",
    "ownership_traversal",
    "jurisdiction_analysis"
  ],
  traps_caught: [
    "confidence_inflation",   // 0.91 on 1 source
    "jurisdiction_blindness", // DE ≠ operations
    "ownership_opacity"       // 3-layer BVI chain
  ],
  reasoning: "visible",
  uncertainty: "calibrated",
  verdict: "ESCALATE"
}
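One of the traps caught above, `confidence_inflation`, can be illustrated with a toy check. The function name and thresholds here are hypothetical, not part of the Ontology; the point is the pattern: high stated confidence backed by too few sources is itself a failure signal.

```python
def detect_confidence_inflation(confidence: float, sources_checked: int,
                                threshold: float = 0.8,
                                min_sources: int = 2) -> bool:
    """Hypothetical trap check: flag high confidence built on thin sourcing."""
    return confidence >= threshold and sources_checked < min_sources

# The generic answer above claimed 0.91 confidence on a single source checked.
assert detect_confidence_inflation(0.91, 1)        # trap caught
assert not detect_confidence_inflation(0.91, 3)    # well-sourced, no trap
```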
Not a benchmark. A judgment system.


Most AI evaluation measures accuracy on generic tasks. The Judgment Ontology measures whether AI reasons the way an experienced analyst actually reasons.

Whether it checks sources or assumes them. Whether it flags what it doesn’t know or fills the gap with confidence.

Whether it catches the structure a sanctions evader built to be missed.

Not a leaderboard metric. A quality system for decisions that have consequences – and it improves continuously.

See the ontology in your domain.

Bring your evaluation criteria. We’ll show you how the Judgment Ontology adapts – and what your current AI stack is missing.