Design Partner Program · 5–10 spots

Six months of LQS Enterprise, free.
For the first 10 risk teams that say yes.

We're picking 5–10 design partners in regulated industries — banks, hospitals, LLM labs, public-sector AI buyers — for a 6-month free pilot of LQS Enterprise. You get unlimited cert generation, private-mode scoring in your VPC, and a direct line to our scorer team. We get a logo, a quote, and a case study.

Spots open · 8 of 10 remaining
The deal

Trade access for credibility. Both ways.

Fair, simple, written. No usage limits during the pilot. No "design partner" pricing tricks at the end. You're free to walk; we're free to keep your name + quote regardless.

You get · 6 months

Full LQS Enterprise tier

  • Unlimited signed certs — score every dataset you train on, at any scale.
  • Private-mode scoring — Docker image runs in your AWS / GCP / Azure tenancy. Your data never leaves your perimeter.
  • BYO-key signing — sign certs with your own Ed25519 key if you'd rather not chain to ours.
  • Custom calibration corpus for your domain (finance / clinical / legal / etc.) — we tune the scorer to your data distribution.
  • Direct line to the scorer team — Slack channel, weekly check-in, root-cause any score that doesn't match your intuition.
  • Methodology paper review — your model risk team reviews our methodology paper before we publish to arXiv. Your suggestions land in v3.2.
  • Custom dim or weight — if your auditors care about a dimension we don't yet score, we add it for the pilot.
Pilot value: conservatively $90k–$240k of enterprise tier + custom work. Free for 6 months. Continuation pricing locked at 50% off retail for year 2 if you stay.
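For context on the BYO-key bullet above: "offline verification" means an auditor can check a cert's signature with nothing but the public key and the signed payload — no call to labelsets.ai, no LabelSets key in the trust chain. A minimal sketch using the `cryptography` library; the cert fields shown are illustrative, not our actual schema:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical cert payload -- field names are illustrative only.
cert = {"dataset": "loans-2024q3", "lqs_score": 87, "dims": 19}
payload = json.dumps(cert, sort_keys=True).encode()

# BYO-key signing: the partner signs with their own Ed25519 key,
# so LabelSets never has to appear in the trust chain.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(payload)

# Offline verification: the auditor holds only the public key and
# the payload. verify() raises InvalidSignature on any tampering.
public_key = private_key.public_key()
try:
    public_key.verify(signature, payload)
    verified = True
except InvalidSignature:
    verified = False
```

Any byte of the payload changed after signing makes `verify()` raise, which is what lets an auditor file the cert as tamper-evident evidence.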
We ask · in writing

One logo, one quote, one case study

  • Logo on labelsets.ai — pseudonymized OK ("a top-3 US regional bank"). You veto specific phrasing.
  • One quote from your model risk lead, compliance officer, or data science lead. We draft it; you edit it; you approve it.
  • One case study — 1-page write-up of how your team used LQS in a model package. Hosted at labelsets.ai/case-studies/[you]. You keep redaction rights.
  • Quarterly call with our team — 30 min, candid, what's working and what isn't.
  • Optional: 1 conference talk co-presented if relevant ("we used LabelSets LQS for our SR 11-7 evidence package at FinAI").
Out of scope: we won't ask for your training data. We won't share your performance numbers without permission. We won't use your name in fundraising decks without permission. Standard MNDA covers all of it.
Who fits

Not for everyone. Specifically for these four.

If your team checks one of these boxes, you're our target. Cold emails outside this profile get a polite "thanks, but we've taken our 10."

Banking · model risk
SR 11-7 model risk teams at US banks
Citing third-party data quality in MRM packages. ECOA + OCC 2011-12 angle. Smaller community/regional banks especially welcome — less procurement bureaucracy.
Health · clinical AI
Health systems + medical-device OEMs
FDA 21 CFR Part 11 + HHS §1557 obligations. Subgroup-equity evidence on training data. Particularly a fit if you're building clinical decision support or radiology AI.
LLM lab
Foundation-model + fine-tuning teams
Need benchmark-clean training corpora. Want contamination_clean evidence per public eval. Especially a fit for safety-eval / red-team-clean training data audits.
Public sector · audit
Regulators, agencies, public-sector AI buyers
EU AI Act Art. 10 governance compliance. NIST AI RMF MEASURE 2.2/2.3 evidence packages. Vendor onboarding workflows for AI tools.
For the outbound

Email template that actually gets replied to.

If you're an internal champion at a target firm and your sourcing team needs to send the cold email — copy this. It targets model-risk leads at banks; adapt it for the other three segments.

Cold-email template · model risk · banks
Subject: SR 11-7 documentation for AI training data — quick question

Hi {{first name}},

You're listed as data steward on {{company}}'s recent model package — saw your name in the disclosure on {{platform/source}}.

Quick question: when your MRM team asks for "training data lineage" evidence, what's the artifact you currently file? In conversations with model-risk leads at five other US regionals, the most common answer has been "a spreadsheet plus a README" — which gets kicked back roughly 40% of the time.

We built LabelSets LQS specifically for this. It's a 19-dimension, cryptographically-signed quality rating for AI training data — designed to be cited directly in SR 11-7 evidence packages. Auditor verifies the Ed25519 signature offline, no LabelSets in the trust chain.

We're picking 5–10 design partners for a 6-month free pilot of the enterprise tier (unlimited signed certs, private-mode scoring in your VPC, custom calibration). In exchange we ask for one logo, one quote, and one case study — all subject to your veto on phrasing.

Methodology + 19-dim breakdown: labelsets.ai/lqs-methodology
Deal terms (in writing): labelsets.ai/pilot

Worth a 20-minute call?

— {{your name}}

P.S. If MRM evidence isn't your beat anymore, would appreciate a forward to whoever owns it now.
Targeting tip: LinkedIn search "Model Risk Manager" + bank name → filter to firms under 5,000 employees. A ~30% reply rate is realistic with that targeting.
Apply

Three fields, no demo, no sales call.

Three-field application. If you're a fit, we'll email back within 48 hours with a Calendly link. If you're not, we'll say so just as fast.

We won't add you to a marketing list. We won't share your email with anyone.