CLI Reference
The nextstat CLI provides all major operations as subcommands. Built from the ns-cli crate.
Commands
nextstat fit
Run Maximum Likelihood Estimation on a workspace.
nextstat fit --input workspace.json

nextstat hypotest
Asymptotic CLs hypothesis test at a given signal strength μ.
nextstat hypotest --input workspace.json --mu 1.0
nextstat hypotest --input workspace.json --mu 1.0 --expected-set
nextstat hypotest-toys
Toy-based CLs hypothesis test with parallel execution.
# CPU (all cores)
nextstat hypotest-toys --input workspace.json \
    --mu 1.0 --n-toys 10000 --seed 42 --threads 0

# NVIDIA GPU
nextstat hypotest-toys --input workspace.json \
    --mu 1.0 --n-toys 10000 --gpu cuda

# Apple Silicon GPU
nextstat hypotest-toys --input workspace.json \
    --mu 1.0 --n-toys 10000 --gpu metal
nextstat upper-limit
Compute the CLs upper limit by scanning the signal strength.
nextstat upper-limit --input workspace.json \
    --expected \
    --scan-start 0 --scan-stop 5 --scan-points 201
nextstat scan
GPU-accelerated profile likelihood scan. Shares a single GpuSession across all scan points with warm-start.
nextstat scan --input workspace.json \
    --start 0 --stop 5 --points 21 --gpu cuda
nextstat audit
Inspect a pyhf/HS3 workspace: channel, sample, and modifier counts plus unsupported features.
nextstat audit --input workspace.json
nextstat audit --input workspace.json --format json --output audit.json
nextstat export histfactory
Convert a pyhf workspace back to HistFactory XML + ROOT histogram files. Optional --python generates a driver script.
nextstat export histfactory --input workspace.json --out-dir export/
nextstat export histfactory --input workspace.json \
    --out-dir export/ --prefix meas --overwrite --python
nextstat import histfactory
Parse combination.xml + channel XMLs, read ROOT histograms, and produce a pyhf-style workspace.json.
nextstat import histfactory --input config/combination.xml \
    --output workspace.json
nextstat import trex-config
Best-effort migration of TRExFitter .config files. Supports ReadFrom: NTUP and ReadFrom: HIST.
nextstat import trex-config --input myfit.config \
    --output workspace.json
nextstat import trex-config --input myfit.config \
    --output workspace.json --analysis-yaml --coverage-json
ReadFrom: HIST (filter-only mode)
- Wraps an existing HistFactory export via HistoPath: or CombinationXml:.
- Region: blocks act as an include-list for channels; Variable/Binning are not required.
- Sample: blocks act as an include-list for samples; File is not required. Per-sample Regions: filters are respected.
- Empty channels are dropped automatically; explicitly selecting an empty channel via a Region: block is an error.
nextstat timeseries kalman-viz
Produce a plot-friendly JSON artifact with smoothed states, observations, marginal normal bands, and optional forecast.
nextstat timeseries kalman-viz --input kalman_1d.json
nextstat timeseries kalman-viz --input kalman_1d.json \
    --level 0.99 --forecast-steps 20
nextstat ranking
Compute nuisance parameter impacts (ranking plot data).
nextstat ranking --input workspace.json
nextstat ranking --input workspace.json --gpu cuda
nextstat report
Generate distributions, pulls, correlations, yields, and uncertainty ranking from a workspace.
nextstat report --input workspace.json --output report/
nextstat report --input workspace.json --output report/ \
    --blind --deterministic --render
nextstat validation-report
Generate a unified validation artifact (JSON + optional PDF) combining Apex2 results with workspace fingerprints.
nextstat validation-report \
    --apex2 tmp/apex2_master_report.json \
    --workspace workspace.json \
    --out validation_report.json \
    --pdf validation_report.pdf \
    --deterministic
nextstat preprocess
Run systematics preprocessing (353QH smoothing, shape/norm/overall pruning) driven by a declarative YAML config with content-hash caching.
nextstat preprocess --config preprocess.yaml \
    --input workspace.json --output workspace_smooth.json
nextstat survival
Fit parametric (Weibull, log-normal AFT) and semi-parametric (Cox PH) survival models.
nextstat survival fit --model cox --input data.json
nextstat survival predict --model weibull --input data.json
nextstat build-hists
Run the ntuple-to-workspace pipeline: fill histograms from ROOT TTrees and produce a pyhf JSON workspace.
nextstat build-hists --config analysis.yaml \
    --output workspace.json
nextstat run
Execute a full analysis pipeline from an Analysis Spec v0 YAML file (import → fit → scan → report).
nextstat run analysis_spec.yaml
nextstat unbinned-fit
MLE fit for an unbinned (event-level) model. Configuration is a JSON file following the unbinned_spec_v0 schema.
nextstat unbinned-fit --config unbinned.json
nextstat unbinned-fit --config unbinned.json --threads 0
PDFs (v0): gaussian, crystal_ball, double_crystal_ball, exponential, chebyshev, argus, voigtian, spline, histogram, histogram_from_tree, kde, kde_from_tree, product, flow, conditional_flow, dcr_surrogate. Yield modifiers: normsys (Code1), weightsys (Code0/Code4p).
nextstat unbinned-scan
Profile likelihood scan for an unbinned model. Requires model.poi in the spec.
nextstat unbinned-scan --config unbinned.json \
    --start 0 --stop 5 --points 21 --threads 0
nextstat unbinned-fit-toys
Generate Poisson toys for an unbinned model and fit each one.
nextstat unbinned-fit-toys --config unbinned.json \
    --n-toys 100 --seed 42 --threads 0

# Generate from MLE point instead of init
nextstat unbinned-fit-toys --config unbinned.json \
    --n-toys 100 --seed 42 --gen mle
nextstat unbinned-hypotest
Compute the asymptotic qμ (and q₀ when μ = 0 is within the parameter bounds) for an unbinned model.
nextstat unbinned-hypotest --config unbinned.json --mu 1.0
nextstat unbinned-hypotest --config unbinned.json --mu 1.0 --gpu cuda
nextstat unbinned-hypotest-toys
Toy-based CLs (q̃) for unbinned models. GPU-accelerated batch fitting with optional device-resident toy sampling.
nextstat unbinned-hypotest-toys --config unbinned.json \
    --mu 1.0 --n-toys 1000 --seed 42 --threads 0

# CUDA with device-resident sampling + sharding
nextstat unbinned-hypotest-toys --config unbinned.json \
    --mu 1.0 --n-toys 10000 --gpu cuda --gpu-sample-toys --gpu-shards 4

# Metal
nextstat unbinned-hypotest-toys --config unbinned.json \
    --mu 1.0 --n-toys 10000 --gpu metal
nextstat unbinned-merge-toys
Merge shard outputs from unbinned-fit-toys --shard into a single result. Validates consistent config across shards.
# CPU farm mode: split 10k toys across 4 nodes
nextstat unbinned-fit-toys --config spec.json --n-toys 10000 --shard 0/4 -o shard0.json
nextstat unbinned-fit-toys --config spec.json --n-toys 10000 --shard 1/4 -o shard1.json
nextstat unbinned-fit-toys --config spec.json --n-toys 10000 --shard 2/4 -o shard2.json
nextstat unbinned-fit-toys --config spec.json --n-toys 10000 --shard 3/4 -o shard3.json

# Merge
nextstat unbinned-merge-toys shard0.json shard1.json shard2.json shard3.json -o merged.json
nextstat viz render
Render a single JSON viz artifact to an image file without full report generation. Supports pulls, corr, and ranking.
nextstat viz render --kind pulls --input pulls.json --output pulls.png
nextstat viz render --kind corr --input corr.json --output corr.png \
    --corr-top-n 40 --dpi 220
nextstat version
Print the version string and exit.
nextstat version

nextstat-server (Separate Binary)
A self-hosted REST API for shared GPU inference. Built as a separate binary in crates/ns-server.
# Build
cargo build -p ns-server --release
cargo build -p ns-server --features cuda --release

# Run
nextstat-server --port 3742 --gpu cuda
nextstat-server --host 0.0.0.0 --port 3742 --threads 8
| Method | Path | Description |
|---|---|---|
| POST | /v1/fit | MLE fit (workspace → FitResult) |
| POST | /v1/ranking | Nuisance parameter ranking |
| POST | /v1/batch/fit | Batch fit (up to 100 workspaces) |
| POST | /v1/batch/toys | GPU-accelerated batch toy fitting |
| POST | /v1/models | Upload workspace to model cache |
| GET | /v1/models | List cached models |
| DELETE | /v1/models/:id | Evict model from cache |
| GET | /v1/health | Server health + cached_models count |
Models are cached by SHA-256 hash. Pass a model_id instead of the full workspace to skip re-parsing. LRU eviction kicks in at 64 models. See the Inference Server docs for the full reference.
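A minimal sketch of the model-cache workflow using curl, assuming the request bodies shown in the comments (the exact payload schemas, including the model_id field, are specified in the Inference Server docs):

# Upload a workspace to the model cache (assumed: the body is the raw workspace JSON)
curl -s -X POST http://localhost:3742/v1/models \
    -H 'Content-Type: application/json' \
    --data @workspace.json

# List cached models to pick up the SHA-256-derived id
curl -s http://localhost:3742/v1/models

# Fit against the cached model instead of re-sending the workspace
# (assumed body shape; consult the Inference Server docs for the exact schema)
curl -s -X POST http://localhost:3742/v1/fit \
    -H 'Content-Type: application/json' \
    -d '{"model_id": "<id-from-listing>"}'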
Input Format Auto-Detection
All commands accepting --input automatically detect the JSON format. Both pyhf and HS3 (HEP Statistics Serialization Standard v0.2) workspaces are supported natively.
# pyhf workspace (auto-detected)
nextstat fit --input workspace.json

# HS3 workspace from ROOT 6.37+ (auto-detected)
nextstat fit --input workspace-postFit_PTV.json
Detection is instant (prefix scan of the first ~2 KB). HS3 files are identified by the presence of "distributions" and "hs3_version" keys. All modifier types are supported: normfactor, normsys, histosys, staterror, shapesys, shapefactor, lumi. Unknown types are silently skipped (forward-compatible).
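To confirm by hand how a given file will be classified, a quick scan for the same keys the auto-detection uses is enough; this only mirrors the documented heuristic, the CLI performs the check internally:

# Mirrors the documented heuristic: HS3 files carry a "hs3_version" key near the top
if head -c 2048 workspace.json | grep -q '"hs3_version"'; then
    echo "HS3 workspace"
else
    echo "pyhf workspace"
fi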
Interpolation Defaults
- --interp-defaults root (default): Code4/Code4p smooth interpolation for NormSys/HistoSys.
- --interp-defaults pyhf: Code1/Code0 strict pyhf defaults (exponential/piecewise linear).
- HS3 inputs always use ROOT HistFactory defaults (Code1 for NormSys, Code0 for HistoSys), regardless of this flag.
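For example, to force the strict pyhf interpolation codes on a fit (the flag is listed under Common Flags below, so applying it to fit is assumed to be valid):

nextstat fit --input workspace.json --interp-defaults pyhf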
Common Flags
| Flag | Description |
|---|---|
| --input <path> | Path to workspace JSON file |
| --mu <float> | Signal strength for hypothesis test |
| --n-toys <int> | Number of toy experiments |
| --seed <int> | Random seed for reproducibility |
| --threads <int> | Thread count (0 = all cores, 1 = deterministic) |
| --gpu <cuda\|metal> | GPU backend (toy generation, scans, ranking) |
| --expected-set | Return expected band (±1σ, ±2σ) |
| --parity | Deterministic mode (Kahan summation, single-threaded, bit-exact) |
| --interp-defaults <root\|pyhf> | Interpolation code defaults (root=Code4/Code4p, pyhf=Code1/Code0) |
| --deterministic | Stable JSON key ordering in report outputs |
| --log-level <level> | Logging verbosity (error, warn, info, debug, trace) |
| --bundle <path> | Save reproducible run bundle (inputs + outputs + env) |
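As an illustration of how these flags compose, the sketch below runs a seeded, single-threaded (and therefore deterministic) toy hypothesis test and saves a reproducibility bundle; every flag comes from the table above, and their joint applicability to hypotest-toys is assumed (the bundle path is just an example):

nextstat hypotest-toys --input workspace.json \
    --mu 1.0 --n-toys 10000 --seed 42 --threads 1 \
    --log-level info --bundle runs/cls_mu1/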
