Biometric identity verification for AI models at inference time. Zero false acceptances.
```shell
$ pip install fallrisk-itpuf
```

```
============================================================
 IT-PUF | Biometric Identity for AI Models
 Patent Application 63/982,893 | fallrisk.ai
============================================================
Loaded: Qwen/Qwen2.5-0.5B-Instruct
Loaded: HuggingFaceTB/SmolLM2-1.7B-Instruct
Loaded: TinyLlama/TinyLlama-1.1B-Chat-v1.0
Loaded: meta-llama/Llama-3.2-1B-Instruct
Loaded: google/gemma-2-2b-it
Loaded: facebook/opt-1.3b
Loaded: tiiuae/falcon-mamba-7b-instruct

Cross-comparing 7 anchors...

✅ FAR: 0/84
   Min separation: 0.0819 (815.9× ε)
   Closest pair:   Qwen2.5-0.5B ↔ SmolLM2-1.7B
─────────────────────────────────────────────────────────
Continuous monitoring & signed attestations: fallrisk.ai
```
SHA-256 verifies files. It cannot verify running inference. Once a model is loaded into GPU memory, file hashes are irrelevant — in-memory weight tampering, unauthorized LoRA injections, and model namespace reuse (LLMjacking) are invisible to every existing integrity check.
EU AI Act Article 15 requires continuous monitoring of high-risk AI systems, with a compliance deadline of August 2026. No current tool can prove that the model serving your API right now is the one you approved.
IT-PUF extracts a behavioral fingerprint — the δ-gene — from how a model's probability mass competes at the output layer. This geometric invariant is determined by training-induced weight geometry and cannot be predicted without running inference on the exact model.
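The δ-gene extraction itself is not specified here, but the kind of output-layer observable it measures can be illustrated. The sketch below computes a top-2 probability margin, one crude measure of how probability mass "competes" at a single decoding step; the function names and the margin itself are illustrative stand-ins, not the IT-PUF observable:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def top2_margin(logits):
    # Gap between the two most probable next tokens: a simple proxy
    # for how probability mass competes at the output layer.
    p = np.sort(softmax(logits))[::-1]
    return p[0] - p[1]

# One decoding step over a toy 32k-entry vocabulary.
logits = np.random.default_rng(42).normal(size=32_000)
print(top2_margin(logits))
```

The real observable is Fisher-protected and measured at two internal sites; the point of the sketch is only that such a quantity is a deterministic function of the weights, so it cannot be reproduced without running inference on the exact model.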
The commercial measurement engine executes a challenge-response protocol, sending curated prompts and measuring the Fisher-protected observable at two internal measurement sites. The resulting fingerprint vector is compared against an enrolled anchor using L2 distance. Accept or reject. No retraining, no model modification, no specialized hardware.
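The accept/reject step is a plain distance test. A minimal numpy sketch, assuming a 64-dimensional fingerprint and the ε = 1.00e-04 threshold shown in the demo anchor (`verify` and the synthetic vectors are illustrative, not the fallrisk-itpuf API):

```python
import numpy as np

EPSILON = 1.0e-4  # rejection threshold from the anchor contract

def verify(measured, anchor, eps=EPSILON):
    # Accept iff the fresh fingerprint lies within the epsilon ball
    # (L2 distance) around the enrolled anchor.
    dist = float(np.linalg.norm(np.asarray(measured) - np.asarray(anchor)))
    return dist <= eps, dist

anchor = np.full(64, 0.0672)   # enrolled 64-dim fingerprint (synthetic)
same   = anchor + 1e-6         # benign re-measurement noise
other  = anchor + 0.08         # a different model's geometry

print(verify(same, anchor)[0])   # True
print(verify(other, anchor)[0])  # False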
| Claim | Evidence | Status |
|---|---|---|
| Spoofing impossibility | Coq theorem T4 (NoSpoofing.v) — KL budget exhaustion at all scales | Proven |
| Architecture invariance | Transformer, Parallel Transformer, Mamba SSM — δ_norm within 8% | Validated |
| Quantization robustness | NF4 cross-family margin 6.8×. BF16↔FP16 ε = 1.003×10⁻⁴ | Validated |
| Multi-seed hardening | 4 seeds → 3.8× increase in spoofing cost. r_eff = 52 independent constraints. | Validated |
| Stiffness scales with params | S_min = 1.18 across 12 models, 3 families, 147× parameter range | Bounded |
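The multi-seed hardening row can be illustrated with a toy numpy example (all fingerprint values below are synthetic, not real measurements): concatenating the per-seed fingerprint vectors means a spoof must match every seed bank at once, since matching one seed zeroes only one term of the joint L2 distance.

```python
import numpy as np

DIM = 64  # per-seed fingerprint dimension, as in the demo anchors

def seed_fingerprint(seed, dim=DIM):
    # Stand-in for one seed's fingerprint vector; real values come
    # from the measurement engine, these are synthetic.
    return np.random.default_rng(seed).normal(0.065, 0.03, dim)

seeds = [42, 123, 456, 789]
genuine = np.concatenate([seed_fingerprint(s) for s in seeds])

# A spoof tuned to reproduce seed 42's bank exactly,
# but missing the other three seed banks.
spoof = np.concatenate([seed_fingerprint(42)] +
                       [seed_fingerprint(s + 1) for s in seeds[1:]])

per_seed = [float(np.linalg.norm(genuine[i*DIM:(i+1)*DIM] -
                                 spoof[i*DIM:(i+1)*DIM]))
            for i in range(len(seeds))]
combined = float(np.linalg.norm(genuine - spoof))

print(per_seed[0] == 0.0)        # matching one seed zeroes one term...
print(combined > max(per_seed))  # ...but the joint distance still rejects
```

Per-seed squared distances add in quadrature, which is the intuition behind spoofing cost growing with the number of seeds.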
```shell
# Install (numpy only, no GPU)
$ pip install fallrisk-itpuf

# Download demo anchors (7 models, 4 families, 2 arch types)
$ wget https://github.com/fallrisk-ai/IT-PUF/releases/download/v0.1.0/demo_anchors.tar.gz
$ tar xzf demo_anchors.tar.gz

# Cross-compare all anchors
$ itpuf audit --anchors ./demo_anchors/

# Inspect a single anchor
$ itpuf info --anchor ./demo_anchors/Qwen2.5-0.5B-Instruct_anchor.json
```
```
============================================================
 IT-PUF | Biometric Identity for AI Models
 Patent Application 63/982,893 | fallrisk.ai
============================================================
Model:        Qwen/Qwen2.5-0.5B-Instruct
Architecture: transformer (dual_site_standard)
Layers:       24
ε:            1.00e-04
Contract:     af95294c8e62ba88
Prompt bank:  8445007bfaa9b94a
Enrolled at:  2026-02-18T14:32:07+00:00
Seeds:        [42, 123, 456, 789]

Seed 42:  dim=64, mean=0.0672, std=0.0318, min=0.0098, max=0.1547
Seed 123: dim=64, mean=0.0658, std=0.0325, min=0.0087, max=0.1612
Seed 456: dim=64, mean=0.0681, std=0.0301, min=0.0112, max=0.1498
Seed 789: dim=64, mean=0.0644, std=0.0337, min=0.0076, max=0.1583
─────────────────────────────────────────────────────────
Continuous monitoring & signed attestations: fallrisk.ai
```
Or use the Python API directly:

```python
from pathlib import Path

from itpuf import Anchor, compute_far

anchors = [Anchor.load(f) for f in Path("demo_anchors").glob("*.json")]
report = compute_far(anchors)

print(f"FAR: {report['n_false_accepts']}/{report['n_pairs']}")
print(f"Min separation: {report['min_ratio']:.1f}× ε")
```
The theoretical foundation is described in the technical whitepaper:

> A. Coslett, "The δ-Gene: Inference-Time Physical Unclonable Functions from Architecture-Invariant Output Geometry." 2026.

PDF · Zenodo DOI · GitHub
The formal verification stack comprises 311 theorems across 16 Coq
files cited in the paper, with 820+ theorems across 53 files in the
broader research program. The Coq source compiles under Rocq 9.1 with
zero uses of Admitted and zero vacuous definitions.
Deploying models in regulated environments? The measurement engine provides enrollment, continuous heartbeat verification, and fleet-calibrated challenge banks optimized for your specific model population.
[email protected]