Sigilant Labs runs controlled benchmarks across candidate configurations (quantization, context length, batch size, and runtime parameters) and produces a recommendation along with the supporting artifacts needed for review and reproducibility.
Outputs include per-variant metrics, gate results, and exportable JSON/CSV suitable for internal sign‑off and iteration tracking.
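As a rough illustration of how an exported report can feed into review, here is a minimal Python sketch that loads a per-variant JSON export, keeps only the variants that clear every gate, and ranks them by throughput. The file name and field names (variants, gates, passed, tokens_per_sec, p95_latency_ms) are assumptions for illustration; the actual export schema may differ.

```python
import json

# Load a Sigilant-style run export (file name and schema are assumed).
with open("sigilant_run_export.json") as f:
    report = json.load(f)

# Keep only variants that passed every configured gate.
passing = [
    v for v in report["variants"]
    if all(g["passed"] for g in v["gates"])
]

# Rank surviving variants by throughput (tokens/sec assumed as the key metric).
passing.sort(key=lambda v: v["tokens_per_sec"], reverse=True)

for v in passing:
    print(v["name"], v["tokens_per_sec"], "tok/s,", v["p95_latency_ms"], "ms p95")
```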
Select a model artifact and specify a target hardware profile (e.g., CPU class or cloud instance type). Choose candidate quantizations and any constraints.
Sigilant evaluates variants under consistent conditions to reduce run-to-run variance and surface tradeoffs.
Inspect metrics and gates, then export artifacts (JSON/CSV) for documentation, sharing, and future comparisons.
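For iteration tracking, exported CSVs from two runs can be compared directly. The sketch below assumes each export has a variant column and a tokens_per_sec column; both names, and the file names, are hypothetical placeholders rather than a documented schema.

```python
import csv

def load_metrics(path, key="variant", metric="tokens_per_sec"):
    """Read a CSV export into {variant: metric}. Column names are assumed."""
    with open(path, newline="") as f:
        return {row[key]: float(row[metric]) for row in csv.DictReader(f)}

baseline = load_metrics("run_baseline.csv")    # earlier export
candidate = load_metrics("run_latest.csv")     # newer export

# Report the relative throughput change for every variant present in both runs.
for name in sorted(baseline.keys() & candidate.keys()):
    delta = (candidate[name] - baseline[name]) / baseline[name] * 100
    print(f"{name}: {delta:+.1f}% throughput vs. baseline")
```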
Is this a subscription? Not initially. We currently support prepaid credit packs. Subscription plans may be introduced later.
What consumes credits? Credits are consumed when a run is executed. Estimated consumption is shown before confirmation.
Do results vary? Yes. Performance depends on hardware, model, and workload. We provide controlled settings and report variance where applicable; a small illustration follows this FAQ.
How do I get access? Use the contact page to request access; we onboard accounts and provide console credentials.
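To make the variance reporting mentioned above concrete, here is a minimal sketch of how run-to-run spread can be summarized for one variant. The latency values are placeholder inputs for illustration only; in practice the samples come from repeated benchmark runs.

```python
import statistics

# Placeholder repeated latency measurements (ms) for one variant on one host.
latencies_ms = [112.4, 115.1, 110.8, 118.3, 113.0]

mean = statistics.mean(latencies_ms)
stdev = statistics.stdev(latencies_ms)      # sample standard deviation
cv_pct = stdev / mean * 100                 # coefficient of variation, in percent

print(f"mean={mean:.1f} ms  stdev={stdev:.1f} ms  CV={cv_pct:.1f}%")
```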