Services

Independent assessment. Practical evidence.

We test AI decision systems for bias and help organizations build the governance structures to manage them responsibly.

Core Service

Bias Testing for AI Decision Systems

We analyze AI outcomes across demographic groups and intersections to identify adverse impact — in hiring, lending, insurance, risk scoring, and any context where AI decisions affect people. Selection rates, impact ratios, and error-rate analysis with reproducible methodology and practical remediation guidance.

Includes AEDT compliance audits for automated hiring and promotion tools, structured for NYC Local Law 144 public disclosure requirements.

Statistical adverse impact analysis and documentation. Not legal advice; not a guarantee of regulatory outcomes.

Governance

AI Governance Readiness Review

A plain-English assessment of your AI governance posture — policies, roles, documentation, accountability structures. You receive a findings report, risk-ranked gap analysis, and a 90-day remediation roadmap aligned to the framework that matters most for your context.

What You Get

Evidence that holds up within the agreed framework.

Our audit reports are designed for the people who need to act on them — compliance officers, risk committees, and boards. Clear findings, reproducible methodology.

What an Audit Includes

  • Demographic group identification and sample size validation
  • Selection rate calculation per group
  • Impact ratio analysis with four-fifths rule assessment
  • Intersectional analysis (e.g., sex × race/ethnicity)
  • Small-sample exclusion documentation with rationale
  • Risk-ranked findings with remediation guidance
  • Complete methodology disclosure and reproducibility artifacts
  • Limitations and scope disclaimers
Sample Output — Gender Analysis (Illustrative)

Group     Applicants   Above-Threshold Rate   Impact Ratio   Result
Male      1,291        51.4%                  1.00           Pass
Female    1,209        48.5%                  0.94           Pass
Illustrative only. Example threshold: IR ≥ 0.80 (context-dependent). Thresholds and interpretation depend on sample sizes and applicable regulatory framework.
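
For readers who want the arithmetic, the sketch below reproduces the illustrative table. Python with pandas is an assumption here (any analysis stack works): the impact ratio is each group's rate divided by the highest group's rate, checked against the example 0.80 threshold.

    import pandas as pd

    # Illustrative figures from the sample table above; the 0.80
    # threshold is an example and is context-dependent.
    groups = pd.DataFrame({
        "group": ["Male", "Female"],
        "applicants": [1291, 1209],
        "above_threshold_rate": [0.514, 0.485],
    })

    # Impact ratio: each group's rate divided by the highest rate.
    groups["impact_ratio"] = (
        groups["above_threshold_rate"] / groups["above_threshold_rate"].max()
    )

    # Four-fifths rule check against the example threshold.
    groups["result"] = [
        "Pass" if ir >= 0.80 else "Review" for ir in groups["impact_ratio"]
    ]

    print(groups.round(2))
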
Anonymized Example

AI Hiring Screen — Adverse Impact Detection

System: AI-based candidate scoring model used to screen 2,500+ applicants across five job categories.

Finding: Single-attribute analysis showed all race/ethnicity groups above the 0.80 threshold except one (IR = 0.76). Intersectional analysis revealed compounding: the intersection of that group with female gender produced an IR of 0.69, well below the threshold and invisible to single-attribute testing.
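
This compounding effect is easy to reproduce on synthetic data. Here is a minimal sketch (Python/pandas; the group labels, selection rates, and seed are invented for illustration and are not the client's data):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)
    n = 2500

    # Hypothetical applicant records with invented selection rates:
    # group B is selected less often overall, and B x F least of all.
    df = pd.DataFrame({
        "race_ethnicity": rng.choice(["A", "B"], n),
        "sex": rng.choice(["F", "M"], n),
    })
    p = np.where(df["race_ethnicity"].eq("B") & df["sex"].eq("F"), 0.34,
        np.where(df["race_ethnicity"].eq("B"), 0.43, 0.50))
    df["selected"] = rng.random(n) < p

    def impact_ratios(data, by):
        # Selection rate per group, divided by the highest group's rate.
        rates = data.groupby(by)["selected"].mean()
        return (rates / rates.max()).round(2)

    print(impact_ratios(df, "race_ethnicity"))           # single attribute
    print(impact_ratios(df, ["race_ethnicity", "sex"]))  # intersectional

The single-attribute ratio averages over the intersection, which is why the deeper disparity only appears in the second view.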

Outcome: Board-ready report delivered in 12 business days with risk-ranked findings and specific recommendations for model recalibration and monitoring cadence.

Methodology

How we test.

Metrics We Use

  • Selection rate / above-threshold rate per group
  • Adverse impact ratio (four-fifths rule)
  • True positive / false positive rate parity (sketched after this list)
  • Intersectional group analysis
  • Sample size validation and small-group handling

Metrics are chosen with the client based on context, applicable policy, and the specific decision system under review.
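
As a concrete example of the error-rate metrics, the sketch below compares true positive and false positive rates across groups when ground-truth outcomes are available (Python/pandas; the toy records and column names are invented):

    import pandas as pd

    # Toy applicant records: "qualified" is the ground-truth outcome,
    # "selected" is the model's decision. Both columns are invented.
    df = pd.DataFrame({
        "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
        "qualified": [1, 1, 0, 0, 1, 1, 0, 0],
        "selected":  [1, 0, 1, 0, 1, 1, 0, 0],
    })

    # True positive rate: selection rate among the qualified.
    tpr = df[df["qualified"] == 1].groupby("group")["selected"].mean()
    # False positive rate: selection rate among the unqualified.
    fpr = df[df["qualified"] == 0].groupby("group")["selected"].mean()

    # Parity asks how far apart these rates sit across groups.
    print(pd.DataFrame({"TPR": tpr, "FPR": fpr}))
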

Frameworks We Align To

  • Singapore FEAT Principles (MAS)
  • AI Verify Testing Framework (IMDA)
  • NIST AI Risk Management Framework
  • ISO/IEC 42001 AI Management Systems
  • NYC Local Law 144 (AEDT)
  • EEOC Uniform Guidelines (Four-Fifths Rule)

Get Started

Ready to know where you stand?

We'll tell you if you're a fit and what data we'd need — then you decide.

hello@trustminerva.com