Quickstart
This guide takes you from zero to your first statistical fit in under 5 minutes. Make sure you have NextStat installed first — see the Installation guide.
Step 1: Get a sample workspace
A "workspace" is a JSON file that describes your statistical model: the observed data, signal and background expectations, and systematic uncertainties. NextStat uses the same JSON format as pyhf.
Let's create a minimal workspace with one signal region, one signal sample, and one background sample:
# Save this as workspace.json (or copy-paste into a file)
cat > workspace.json << 'EOF'
{
  "channels": [
    {
      "name": "singlechannel",
      "samples": [
        {
          "name": "signal",
          "data": [5.0, 10.0],
          "modifiers": [
            { "name": "mu", "type": "normfactor", "data": null }
          ]
        },
        {
          "name": "background",
          "data": [50.0, 60.0],
          "modifiers": [
            { "name": "uncorr_bkguncrt", "type": "shapesys", "data": [5.0, 12.0] }
          ]
        }
      ]
    }
  ],
  "observations": [
    { "name": "singlechannel", "data": [55.0, 65.0] }
  ],
  "measurements": [
    { "name": "Measurement", "config": { "poi": "mu", "parameters": [] } }
  ],
  "version": "1.0.0"
}
EOF

Step 2: Run your first fit (Python)
Open a Python shell or create a script called my_first_fit.py:
import json
import nextstat
# 1. Load the workspace JSON
with open("workspace.json") as f:
    workspace = json.load(f)
# 2. Build a model from the workspace
model = nextstat.from_pyhf(json.dumps(workspace))
# 3. Run a maximum-likelihood fit
result = nextstat.fit(model)
# 4. Print the results
poi_idx = model.poi_index()
print("Signal strength (mu):", result.bestfit[poi_idx])
print("Uncertainty: ", result.uncertainties[poi_idx])
print("All parameters: ", result.bestfit)
print("All uncertainties: ", result.uncertainties)

Run it:
python3 my_first_fit.py
Expected output (values may differ slightly):
Signal strength (mu): 0.9999...
Uncertainty:          0.3...
All parameters:       [0.999..., 1.0..., 0.99...]
All uncertainties:    [0.3..., 0.08..., 0.15...]
What just happened? NextStat found the parameter values that best describe your observed data. The signal strength mu ≈ 1.0 means the observed data is consistent with the signal+background hypothesis. The other parameters are nuisance parameters (systematic uncertainties) that were profiled during the fit.
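To make "maximum likelihood" concrete, here is a pure-Python sketch of the same idea, reduced to the first bin of the workspace (signal 5, background 50, observed 55) with the background fixed at its nominal value. This is an illustration only, not NextStat's actual implementation:

```python
import math

def nll(mu, s=5.0, b=50.0, n=55.0):
    """Poisson negative log-likelihood for one counting bin.

    Expected count is lambda = mu*s + b; the constant log(n!) term is dropped
    since it does not affect the location of the minimum.
    """
    lam = mu * s + b
    return lam - n * math.log(lam)

# Brute-force grid scan for the minimum (a real fitter uses gradient-based
# minimization, but the answer is the same).
grid = [i / 1000 for i in range(0, 3001)]
mu_hat = min(grid, key=nll)
print(f"mu_hat = {mu_hat:.3f}")  # 1.000, because mu*5 + 50 = 55 matches the data
```

The best-fit mu is exactly where the predicted count equals the observed count; with systematics included, NextStat additionally profiles the nuisance parameters while minimizing.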
Step 3: Test a hypothesis
A hypothesis test tells you whether the data is compatible with a given signal strength. The CLs method is the standard approach in particle physics:
import json
import nextstat
with open("workspace.json") as f:
    workspace = json.load(f)
model = nextstat.from_pyhf(json.dumps(workspace))
# Test the hypothesis mu=1.0 (signal exists at nominal strength)
result = nextstat.hypotest(model, mu_test=1.0)
print("CLs value: ", result.cls)
print("CLs+b value: ", result.clsb)
print("CLb value: ", result.clb)
print("Excluded (95%)?", "YES" if result.cls < 0.05 else "NO")

Expected output:
CLs value:      0.43...
CLs+b value:    0.24...
CLb value:      0.55...
Excluded (95%)? NO
What does this mean? CLs > 0.05 means we cannot exclude the signal hypothesis at the 95% confidence level. The signal is compatible with the data.
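These three numbers are related by a simple ratio: CLs = CLs+b / CLb. Dividing by CLb protects against excluding a signal the analysis has no real sensitivity to. Using the values from the example output above:

```python
# CLs = CLs+b / CLb (values taken from the example output above).
clsb = 0.24  # p-value under the signal+background hypothesis
clb = 0.55   # p-value under the background-only hypothesis

cls = clsb / clb
print(f"CLs = {cls:.3f}")  # 0.436, matching the 0.43... printed above
print("Excluded (95%)?", "YES" if cls < 0.05 else "NO")  # NO
```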
Step 4: Compute upper limits (Brazil band)
An upper limit scan finds the maximum signal strength that is still compatible with the data:
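Conceptually, the observed limit is the value of mu where the CLs curve crosses 0.05. Here is a minimal interpolation sketch using made-up scan values (illustration only; real tools, including the call below, compute CLs from the actual model at each scan point):

```python
# Hypothetical CLs values from a coarse scan over mu (illustration only).
mu_scan = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
cls     = [1.00, 0.44, 0.15, 0.06, 0.02, 0.01]

# Find the first pair of scan points that brackets CLs = 0.05, then
# linearly interpolate to estimate the crossing.
alpha = 0.05
for i in range(len(mu_scan) - 1):
    m0, c0 = mu_scan[i], cls[i]
    m1, c1 = mu_scan[i + 1], cls[i + 1]
    if c0 >= alpha > c1:
        limit = m0 + (c0 - alpha) / (c0 - c1) * (m1 - m0)
        break
print(f"Upper limit on mu ≈ {limit:.2f}")  # 3.25 for these made-up values
```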
import json
import nextstat
with open("workspace.json") as f:
    workspace = json.load(f)
model = nextstat.from_pyhf(json.dumps(workspace))
# Compute expected and observed upper limits
limits = nextstat.upper_limit(model)
print("Observed upper limit:", limits.observed)
print("Expected -2σ: ", limits.expected_minus2)
print("Expected -1σ: ", limits.expected_minus1)
print("Expected median: ", limits.expected)
print("Expected +1σ: ", limits.expected_plus1)
print("Expected +2σ: ", limits.expected_plus2)

Step 5: Use the command-line interface
NextStat also ships a CLI binary. All the same operations are available from the terminal:
# Fit a workspace
nextstat fit --input workspace.json

# Hypothesis test (asymptotic CLs)
nextstat hypotest --input workspace.json --mu 1.0

# Hypothesis test with expected bands
nextstat hypotest --input workspace.json --mu 1.0 --expected-set

# Upper limit scan (201 points from mu=0 to mu=5)
nextstat upper-limit --input workspace.json \
    --expected --scan-start 0 --scan-stop 5 --scan-points 201

# Toy-based hypothesis test (10k toys, all CPU cores)
nextstat hypotest-toys --input workspace.json \
    --mu 1.0 --n-toys 10000 --seed 42 --threads 0

# GPU-accelerated toys (NVIDIA)
nextstat hypotest-toys --input workspace.json \
    --mu 1.0 --n-toys 10000 --gpu cuda

# GPU-accelerated toys (Apple Silicon)
nextstat hypotest-toys --input workspace.json \
    --mu 1.0 --n-toys 10000 --gpu metal
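If you are scripting around the CLI, the same commands can be driven from Python. This sketch only builds and prints the invocation; uncomment the subprocess.run call once the nextstat binary is on your PATH:

```python
import shlex
import subprocess  # used once the run call below is uncommented

# Build the hypothesis-test invocation shown above as an argument list
# (no shell quoting pitfalls).
cmd = ["nextstat", "hypotest", "--input", "workspace.json", "--mu", "1.0"]
print(shlex.join(cmd))

# result = subprocess.run(cmd, capture_output=True, text=True, check=True)
# print(result.stdout)
```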
Step 6: Use from Rust (optional)
If you are a Rust developer, you can use NextStat as a library in your own project:
use ns_inference::mle::MaximumLikelihoodEstimator;
use ns_translate::pyhf::{HistFactoryModel, Workspace};
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Load the workspace
    let json = std::fs::read_to_string("workspace.json")?;
    let workspace: Workspace = serde_json::from_str(&json)?;

    // 2. Build the model
    let model = HistFactoryModel::from_workspace(&workspace)?;

    // 3. Fit
    let mle = MaximumLikelihoodEstimator::new();
    let result = mle.fit(&model)?;

    // 4. Print results
    println!("Best-fit parameters: {:?}", result.parameters);
    println!("NLL at minimum: {}", result.nll);
    Ok(())
}

Try it in your browser (no install needed)
Don't want to install anything yet? Try the WASM Playground — it runs NextStat entirely in your browser via WebAssembly. No server, no Python, no setup.
What to explore next
- Architecture — understand how NextStat is built
- Python API reference — all functions and classes
- Bayesian sampling (NUTS) — posterior inference with MCMC
- Regression & GLM — linear, logistic, Poisson models
- Survival analysis — Kaplan-Meier, Cox PH, parametric models
- GPU acceleration — CUDA and Metal for batch fitting
- CLI reference — all command-line options
