```
pip install labelsets
```

The Python SDK mirrors the hosted scoring API with one addition: local file scoring, no upload required. Drop it into a Jupyter notebook, a CLI, a CI pipeline, or a training loop. Every score produces the same signed cert format the marketplace uses: same crypto, same rating, same methodology.
The SDK ships with offline scoring (local files), hosted scoring (HF/Zenodo URLs), cert verification, and a CLI. No API key required for the free tier — 100 scores/month.
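The cert format and signature scheme aren't documented on this page, so here is a minimal stdlib-only sketch of what "same crypto, same cert format" could mean in practice: a JSON body signed over its canonical serialization. Everything here (`sign_cert`, `verify_cert`, HMAC-SHA256 as the scheme) is an assumption for illustration, not the shipped format.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: assumes a cert is a JSON body plus an HMAC-SHA256
# signature computed over its canonical (sorted-key, no-whitespace) form.
def sign_cert(body: dict, key: bytes) -> dict:
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    sig = hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_cert(cert: dict, key: bytes) -> bool:
    canonical = json.dumps(cert["body"], sort_keys=True, separators=(",", ":"))
    expected = hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison, as any real verifier would use.
    return hmac.compare_digest(expected, cert["sig"])

cert = sign_cert({"dataset": "data.parquet", "lqs": 87.4}, b"demo-key")
print(verify_cert(cert, b"demo-key"))  # True for an untampered cert
```

Verification failing on any mutated body is the property the marketplace badge would rely on.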
Node users can run `npx @labelsets/lqs-cli`. A Docker image, `labelsets/lqs:latest`, covers air-gapped tenancies.
The SDK is built to live wherever ML engineers already work. Below is what it looks like in a real notebook: copy the install above, paste the cells, and run them against your own parquet/JSONL file.
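The notebook cells themselves aren't reproduced on this page, and the package isn't on PyPI yet, so treat the following as pseudocode for the intended surface. The call names (`lqs.score`, `result.rating`, `lqs.verify`, the `offline=True` keyword) are guesses at the API, not the shipped interface; only `labelsets`/`lqs` and the local-file, offline-scoring behavior come from the page itself.

```
# Cell 1 — score a local file without uploading it
# (hypothetical API surface; names are illustrative)
import labelsets as lqs

result = lqs.score("train.jsonl", offline=True)

# Cell 2 — inspect the rating and check the signed cert
print(result.rating)
lqs.verify(result.cert)
```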
- Offline scoring: score local files with `--offline`.
- CLI: `lqs score ./data.parquet --out cert.json`. Use it in CI pipelines or cron jobs.
- Experiment tracking: `lqs.log_to_wandb(result)` or `lqs.log_to_mlflow(result)`. LQS becomes a first-class training metric on your existing dashboard.
- Contamination checks: `lqs.contamination(data)` returns a per-benchmark overlap rate. Worth the install price alone.
- CI integration: wire it into `.github/workflows/lqs.yml`. Every commit that touches a dataset file auto-scores and posts the cert hash as a PR comment with the embed badge.
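The `.github/workflows/lqs.yml` file is referenced but not shown on this page. Here is a hedged sketch of what it might contain, assuming the `pip install` and the `lqs score` CLI invocation quoted above; the trigger paths and the PR-comment step are assumptions, since the page doesn't name the action it uses.

```yaml
# Hypothetical sketch of .github/workflows/lqs.yml — the actual workflow
# isn't shown on this page. Assumes the CLI from the install above.
name: lqs
on:
  pull_request:
    paths: ["**/*.parquet", "**/*.jsonl"]  # assumed dataset-file trigger
jobs:
  score:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install labelsets
      - run: lqs score ./data.parquet --out cert.json
      # Posting the cert hash as a PR comment with the embed badge would
      # go here; the page doesn't specify which action performs it.
```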
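The page says `lqs.contamination(data)` returns a per-benchmark overlap rate but doesn't specify how it's computed. Here is a toy, stdlib-only illustration of one plausible method, 3-gram overlap between dataset rows and a benchmark, to make the returned number concrete. `overlap_rate` and the 3-gram choice are assumptions, not the scorer's actual algorithm.

```python
# Toy illustration of a per-benchmark overlap rate, the kind of number
# lqs.contamination() is described as returning. The real scorer's method
# is not documented on this page; this uses simple 3-gram overlap.
def ngrams(text: str, n: int = 3) -> set:
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def overlap_rate(dataset_rows, benchmark_rows, n: int = 3) -> float:
    """Fraction of dataset rows sharing at least one n-gram with the benchmark."""
    bench = set().union(*(ngrams(r, n) for r in benchmark_rows))
    hits = sum(1 for r in dataset_rows if ngrams(r, n) & bench)
    return hits / len(dataset_rows) if dataset_rows else 0.0

data = ["the quick brown fox jumps", "totally novel training sentence here"]
bench = ["a quick brown fox jumps over the lazy dog"]
print(overlap_rate(data, bench))  # 0.5: one of the two rows overlaps
```

A real contamination check would compare against each benchmark separately and likely normalize tokenization, but the shape of the output, one rate per benchmark, is the same.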
Public release in Q2 2026. Beta access goes to the first 100 ML engineers who sign up — we use the feedback to tune the scorer and the API surface. No drip campaign. One email when PyPI goes live.