Verdict's Position
This Disclosure is supplemental to (and incorporated by reference into) the Terms of Service and the Privacy Policy. To the extent of any conflict, the Terms control.
What AI Features Are For
The Service may use machine-learning models, including Anthropic and OpenAI models, to:
- Summarize evidence batches and produce human-readable abstracts for review.
- Search and retrieve over evidence batches by natural-language query.
- Generate FRE 902(14) certification drafts, EU AI Act Article 12 logging summaries, and insurer schema mappings — as drafts, always subject to human review.
- Detect anomalies or policy violations in agent event streams and surface them for human triage (an illustrative alert shape follows this list).
- Assist with developer ergonomics — code completion, schema validation, error explanation.
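As a minimal illustration of the anomaly-detection capability above, the sketch below shows the kind of alert record a human triage queue might consume. All type and field names are hypothetical, not part of the Service's actual API.

```typescript
// Hypothetical alert shape for anomalies surfaced to human triage.
// Names and fields are illustrative only, not the Service's actual API.
interface AnomalyAlert {
  batchId: string;           // evidence batch the event stream belongs to
  detectedAt: string;        // ISO-8601 timestamp of detection
  rule: string;              // policy or heuristic that fired
  severity: "low" | "medium" | "high";
  summary: string;           // model-generated, human-readable description
  requiresHumanTriage: true; // alerts are surfaced, never auto-actioned
}

const alert: AnomalyAlert = {
  batchId: "batch-0042",
  detectedAt: "2025-06-01T09:30:00Z",
  rule: "agent-action-outside-declared-toolset",
  severity: "high",
  summary: "Agent attempted a write outside its declared toolset.",
  requiresHumanTriage: true,
};
```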
What AI Features Are NOT For
The Service's AI features are neither designed nor permitted to be used to:
- Provide legal advice, attorney work product, or anything that could reasonably be understood as the practice of law.
- Provide medical, accounting, tax, financial, investment, fiduciary, or other regulated professional advice.
- Make autonomous decisions with legal or similarly significant effects on a natural person without human review.
- Determine the admissibility of evidence in any forum — that is the role of judges, arbitrators, and tribunals.
- Determine coverage, loss, or liability decisions in insurance claims — that is the role of carriers and adjusters.
- Produce any output reasonably likely to be relied on in a court, regulatory, or insurance setting without human verification.
- Operate as a substitute for the licensed professional who must verify and authorize the underlying agent action.
Hallucination & Limitations
Large language models hallucinate. They produce text that is fluent, confident, and incorrect. Verdict has chosen models and prompting strategies to reduce this risk for our specific tasks, but we cannot and do not promise to eliminate it. Specific known limitations of the AI features in the Service include:
- Citations to statutes, rules, or cases produced by AI features must be verified against authoritative sources. AI is known to fabricate citations.
- Numerical summaries and aggregates derived by AI features must be verified against the underlying data.
- Cross-jurisdictional analyses (e.g., comparing a U.S. evidentiary rule to an EU AI Act provision) are first-pass aids and should be reviewed by counsel familiar with both regimes.
- AI features may reflect biases present in training data. Outputs about people, organizations, or groups should be treated with corresponding caution.
- AI features do not know facts that occurred after the model's training cutoff or that exist only in private data not provided to the model.
Human-Verification Requirement
Every output of an AI feature must be verified by a qualified human before it is relied on or transmitted to a third party. This is a contractual requirement under the Acceptable Use Policy § 5, and it is also what most professional-responsibility regimes require (e.g., ABA Model Rules 1.1 and 3.3 for attorneys).
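A minimal sketch, assuming a hypothetical customer integration, of what such a verification gate can look like in code; nothing below is part of the Service's API:

```typescript
// Sketch of a human-verification gate: an AI draft is never released
// downstream until a named human reviewer has signed off. Hypothetical
// types, not the Service's API.
interface AiDraft {
  id: string;
  content: string;
  reviewedBy?: string; // set only after human sign-off
  reviewedAt?: Date;
}

function releaseDraft(draft: AiDraft): string {
  if (!draft.reviewedBy || !draft.reviewedAt) {
    throw new Error(`Draft ${draft.id} has not been human-verified`);
  }
  return draft.content; // only verified content leaves the system
}
```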
Model Versioning & Changelog
Where an AI feature materially changes (a different base model, a materially different prompt program, materially different fine-tuning, or a materially different retrieval set), Verdict will publish a notice in the changelog at /changelog and, for Enterprise customers, send an email notice. We do not silently swap models for features that produce evidence-record summaries or any other output reasonably likely to be relied on by courts or insurers.
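For illustration only, a model-change notice might carry fields like these; this is not Verdict's published changelog schema:

```typescript
// Illustrative shape of a model-change notice; not Verdict's actual schema.
interface ModelChangeNotice {
  feature: string;          // e.g., "evidence-batch summarization"
  changeType: "base-model" | "prompt-program" | "fine-tuning" | "retrieval-set";
  previousVersion: string;  // identifier before the change
  newVersion: string;       // identifier after the change
  effectiveDate: string;    // ISO-8601 date the change ships
  enterpriseEmailSent: boolean; // Enterprise customers also receive email
}
```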
NIST AI RMF 1.0 Alignment
Verdict aligns its AI development and operation practices with the four functions of NIST AI RMF 1.0:
- Govern. A documented AI policy, internal review of new AI features, model-card requirements for shipped features, and an incident-response plan with AI-specific playbooks.
- Map. Each AI feature has a documented intended use, prohibited uses, affected stakeholders, and risk tier.
- Measure. Pre-launch evaluation against task-specific accuracy benchmarks, hallucination tests on representative cases, and disaggregated evaluation where applicable (a toy evaluation gate follows this list).
- Manage. Continuous monitoring, customer feedback loops, change-management gates, and a rollback procedure for regressions.
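As a toy illustration of the Measure function, the sketch below gates a feature launch on a known-answer test set; all names and the threshold are hypothetical, not Verdict's actual evaluation tooling:

```typescript
// Toy pre-launch evaluation gate in the spirit of the Measure function:
// run the feature against cases with known-good answers and fail the
// gate if accuracy falls below a threshold. Hypothetical names throughout.
interface EvalCase {
  input: string;
  expected: string;
}

async function passesLaunchGate(
  runFeature: (input: string) => Promise<string>,
  cases: EvalCase[],
  threshold = 0.95,
): Promise<boolean> {
  let correct = 0;
  for (const c of cases) {
    if ((await runFeature(c.input)).trim() === c.expected) correct++;
  }
  return correct / cases.length >= threshold;
}
```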
EU AI Act Posture
The Service's primary feature, sealing evidence records, is logging infrastructure that itself helps providers and deployers comply with Article 12 (record-keeping) of Regulation (EU) 2024/1689 (the EU AI Act). Where AI features of the Service are deployed in the EU and meet the definition of an AI system under Article 3, Verdict treats them as follows:
- Article 5 (prohibited practices). Verdict does not develop or deploy systems for prohibited practices and forbids customers from doing so via the AUP § 3.
- Annex III high-risk areas. Verdict's AI features are not designed for use as a safety component of a product or in the Annex III areas (administration of justice, biometric identification, education, employment, essential services, law enforcement, migration, critical infrastructure). Customers must not deploy AI features of the Service in those areas without prior written consent from Verdict and a deployment plan that meets the obligations of the Act.
- Article 50 transparency. Outputs of AI features are clearly marked as machine-generated where presented to end users (a minimal marking sketch follows this list). Verdict does not generate synthetic content in deepfake-style modalities (image, audio, video).
- General-purpose AI models. Verdict is a deployer of upstream foundation models (Anthropic, OpenAI). We rely on those providers' GPAI obligations under Articles 53–55, while implementing our own transparency, evaluation, and incident-response controls described above.
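A minimal sketch of the Article 50 marking practice described above; the names are illustrative, not the Service's actual rendering code:

```typescript
// Sketch of Article 50-style marking: every AI output shown to an end
// user carries an explicit machine-generated label. Illustrative only.
interface MarkedOutput {
  text: string;
  machineGenerated: true;
  model: string; // which upstream model produced the text
}

function markOutput(text: string, model: string): MarkedOutput {
  return { text, machineGenerated: true, model };
}

const shown = markOutput("Summary of evidence batch 42.", "upstream-model");
console.log(`[AI-generated by ${shown.model}] ${shown.text}`);
```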
Customer Obligations
If you use the Service's AI features in your own deployment:
- You remain the "provider" or "deployer" of your AI system. You are responsible for your obligations under the EU AI Act, NIST AI RMF, and other applicable frameworks.
- You will configure human-in-the-loop review so that no AI output reaches a third party who may rely on it before a human has verified it.
- You will disclose AI assistance where required by the rules of your forum or regulator (e.g., court orders on AI disclosure, agency rules).
- You will not bypass the safety filters or rate limits we publish.
AI Incident Response
If you observe a serious malfunction, a safety-affecting output, or a near-miss involving an AI feature of the Service, report it to ai-incident@verdict.systems with the batch ID, timestamp, prompt or query, observed output, and the harm you observed or anticipate. Verdict will acknowledge within 24 hours, investigate, and, where appropriate, issue a notice to affected customers and/or to relevant regulators (e.g., under EU AI Act Article 73 for high-risk systems, where applicable).
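For convenience, the fields requested above could be assembled as follows; the shape is illustrative, and a plain email carrying the same information is equally acceptable:

```typescript
// Illustrative incident-report payload mirroring the fields requested
// above; not a formal schema.
interface AiIncidentReport {
  batchId: string;        // evidence batch involved, if any
  timestamp: string;      // ISO-8601 time the incident was observed
  promptOrQuery: string;  // what was asked of the AI feature
  observedOutput: string; // what the AI feature returned
  harm: string;           // harm observed or anticipated
  reporterEmail: string;  // so Verdict can acknowledge within 24 hours
}
```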
Contact
AI feature questions: ai@verdict.systems.
AI incident reports: ai-incident@verdict.systems.
Legal and compliance: legal@verdict.systems.
Postal: Verdict Systems Inc. · Attn: Legal · Houston, Texas, USA