# Internal Functions Reference

UCSD Psychology

All `bossanova.internal` functions, organized by what they do rather than where they live. Functions are classified by a verb-prefix naming convention (see the Style Guide).

## Summary

| Group | Count |
| --- | --- |
| Builders | 29 |
| Computers | 84 |
| Fitters & Solvers | 11 |
| Parsers & Evaluators | 6 |
| Dispatchers & Resolvers | 9 |
| Generators & Simulators | 11 |
| Accessors & Extractors | 13 |
| Transformers | 15 |
| Guards | 7 |
| Formatters & Comparators | 4 |
| Math Primitives | 122 |
| **Total** | **311** |

## Builders

Construct complex objects: containers, matrices, grids, DataFrames.

| Function | Signature | Description | Module |
| --- | --- | --- | --- |
| `build_all_pairwise_matrix` | `(n_levels) -> np.ndarray` | Build all pairwise contrasts between EMM levels | `marginal` |
| `build_bracket_contrast_matrix` | `(expr, levels) -> tuple[np.ndarray, list[str]]` | Build contrast matrix and labels from bracket contrast expression | `marginal` |
| `build_cholesky_with_derivs` | `(theta_group, n_re, structure) -> tuple[np.ndarray, list[np.ndarray]]` | Build Cholesky factor L and its derivatives w.r.t. theta | `inference` |
| `build_contrast_matrix` | `(contrast_type, levels, normalize) -> np.ndarray` | Build a contrast matrix based on contrast type | `marginal` |
| `build_design_matrices` | `(spec, data) -> tuple[DesignResult, FormulaSpec]` | Build X and y matrices from a parsed formula spec | `formula` |
| `build_emm_reference_grid` | `(data, bundle, focal_var) -> np.ndarray` | Build reference grid X matrix for EMM computation | `infer` |
| `build_equation` | `(spec, bundle, formula_spec, explanations) -> MathDisplay` | Build a structural LaTeX equation from model containers | `rendering` |
| `build_family` | `(family_name, link_name) -> Family` | Create a Family object from family and link names | `family` |
| `build_helmert_matrix` | `(n_levels) -> np.ndarray` | Build Helmert contrasts (each level vs mean of previous levels) | `marginal` |
| `build_lambda_sparse` | `(theta, n_groups_list, re_structure, metadata) -> sp.csc_matrix` | Build sparse block-diagonal Lambda matrix from theta | `solvers` |
| `build_lambda_template` | `(n_groups_list, re_structure, metadata) -> LambdaTemplate` | Build a Lambda template for efficient repeated updates | `solvers` |
| `build_mixed_post_fit_state` | `(fit, bundle, data, stacklevel) -> tuple[VaryingState \| None, VaryingSpreadState \| None]` | Compute BLUPs, variance components, and emit convergence warnings | `fit` |
| `build_pairwise_matrix` | `(n_levels) -> np.ndarray` | Build (n-1) linearly independent pairwise contrasts | `marginal` |
| `build_poly_matrix` | `(n_levels, degree) -> np.ndarray` | Build orthogonal polynomial contrast matrix for EMMs | `marginal` |
| `build_predict_grid` | `(data, focal_var, response_col, grouping_factors, focal_values, n_points, varying_vars, at) -> pl.DataFrame` | Build a Cartesian-product prediction grid | `fit` |
| `build_random_effects` | `(group_ids_list, n_groups_list, group_names, random_names, re_structure, X_re, re_structures_list, group_levels_list, term_permutation) -> RandomEffectsInfo` | Build complete random effects specification | `design` |
| `build_random_effects_from_spec` | `(spec, data) -> RandomEffectsInfo \| None` | Build random effects design matrix from FormulaSpec | `formula` |
| `build_reference_design_matrix` | `(X_names, focal_var, levels, X_means, set_categoricals) -> np.ndarray` | Build design matrix for reference grid points | `design` |
| `build_reference_grid` | `(bundle, focal_vars, at, covariate_means) -> pl.DataFrame` | Construct reference grid for marginal effects evaluation | `marginal` |
| `build_reference_row` | `(X_names, focal_var, focal_level, X_means, set_categoricals) -> np.ndarray` | Build a single row of the reference design matrix | `design` |
| `build_rng` | `(seed) -> RNG` | Create RNG from seed (convenience function) | `rng` |
| `build_sequential_matrix` | `(n_levels) -> np.ndarray` | Build sequential (successive differences) contrasts | `marginal` |
| `build_slope_reference_matrix` | `(X_names, focal_var, X_means, delta) -> tuple[np.ndarray, np.ndarray]` | Build reference matrices for computing marginal slopes | `design` |
| `build_sum_to_zero_matrix` | `(n_levels) -> np.ndarray` | Build sum-to-zero contrasts (deviation coding) | `marginal` |
| `build_transform` | `(name) -> StatefulTransform` | Create a stateful transform instance by name | `transforms` |
| `build_treatment_matrix` | `(n_levels, ref_idx) -> np.ndarray` | Build treatment (Dunnett-style) contrasts against a reference level | `marginal` |
| `build_z_crossed` | `(group_ids_list, n_groups_list, X_re_list, layouts) -> sp.csc_matrix` | Build Z matrix for crossed random effects | `design` |
| `build_z_nested` | `(group_ids_list, n_groups_list, X_re_list) -> sp.csc_matrix` | Build Z matrix for nested random effects | `design` |
| `build_z_simple` | `(group_ids, n_groups, X_re, layout) -> sp.csc_matrix` | Build Z matrix for single grouping factor | `design` |
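Several builders above construct contrast matrices. As an illustration of the coding that `build_helmert_matrix` describes (each level vs. the mean of the previous levels), here is a minimal standalone NumPy sketch; `helmert_matrix` is a hypothetical stand-in, and the library's actual scaling and column orientation may differ:

```python
import numpy as np

def helmert_matrix(n_levels: int) -> np.ndarray:
    """Contrast j compares level j+1 against the mean of levels 1..j."""
    C = np.zeros((n_levels, n_levels - 1))
    for j in range(1, n_levels):
        C[:j, j - 1] = -1.0 / j  # mean of the j earlier levels
        C[j, j - 1] = 1.0        # the level being compared
    return C

# For 3 levels: column 0 is (L2 - L1), column 1 is (L3 - mean(L1, L2))
print(helmert_matrix(3))
```

Each column sums to zero, so every contrast is a pure comparison with no intercept component.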

## Computers

Pure numerical/statistical calculations.

| Function | Signature | Description | Module |
| --- | --- | --- | --- |
| `compute_agq_deviance` | `(theta, beta, X, Z, y, family, n_groups_list, re_structure, group_ids, nAGQ, metadata, prior_weights, pirls_max_iter, pirls_tol, eta_init, lambda_template, factor_cache) -> float` | Compute AGQ deviance for Stage 2 optimization | `solvers` |
| `compute_aic` | `(loglik, k) -> float` | Compute Akaike Information Criterion | `inference` |
| `compute_akaike_weights` | `(ic_values) -> np.ndarray` | Compute Akaike weights from information criterion values | `inference` |
| `compute_batch_size` | `(n_items, bytes_per_item, max_mem, min_batch, max_batch) -> int` | Compute optimal batch size for `jax.lax.map` | `batching` |
| `compute_bic` | `(loglik, k, n) -> float` | Compute Bayesian Information Criterion | `inference` |
| `compute_bootstrap_params` | `(spec, bundle, n_boot, seed, n_jobs) -> np.ndarray` | Generate bootstrap distribution of coefficient estimates | `infer` |
| `compute_bootstrap_pvalue` | `(observed, boot_samples, null, alternative) -> np.ndarray` | Compute bootstrap p-values | `infer` |
| `compute_cell_info` | `(residuals, data, factor_columns) -> CellInfo` | Compute cell-based variance information for Welch inference | `inference` |
| `compute_chi2_test` | `(L, coef, vcov) -> Chi2TestResult` | Compute Wald chi-square test for L @ β = 0 | `inference` |
| `compute_ci` | `(estimate, se, critical, alternative) -> tuple[np.ndarray, np.ndarray]` | Compute confidence interval bounds | `inference` |
| `compute_coefficient_inference` | `(coef, vcov, df, conf_level, null, alternative) -> InferenceResult` | Compute inference statistics for regression coefficients | `inference` |
| `compute_compound_bracket_contrasts` | `(bundle, fit, focal_var, contrast_expr, data, spec, effect_scale, resolved) -> MeeState` | Compute bracket contrasts for a compound focal variable | `marginal` |
| `compute_conditional_emm` | `(bundle, fit, focal_var, explore_formula, spec, varying_offsets, grouping_var, effect_scale, levels, at_overrides, set_categoricals) -> MeeState` | Compute per-group conditional EMMs incorporating intercept BLUPs | `marginal` |
| `compute_conditional_slopes` | `(bundle, fit, focal_var, explore_formula, spec, varying_offsets, grouping_var, effect_scale) -> MeeState` | Compute per-group conditional slopes incorporating BLUPs | `marginal` |
| `compute_contrast_variance` | `(L, vcov) -> np.ndarray` | Compute variance-covariance of linear contrasts L @ β | `inference` |
| `compute_contrasts` | `(emm, contrast_matrix) -> np.ndarray` | Apply contrast matrix to EMMs | `marginal` |
| `compute_cooks_distance` | `(residuals, hat, sigma, p) -> np.ndarray` | Compute Cook's distance for influence | `inference` |
| `compute_cr_vcov` | `(X, residuals, cluster_ids, XtX_inv, cr_type) -> NDArray[np.float64]` | Compute cluster-robust covariance matrix for Gaussian mixed models | `inference` |
| `compute_cv_metrics` | `(spec, bundle, k, seed, holdout_group_ids) -> 'CVState'` | Compute k-fold or leave-one-out cross-validation metrics | `infer` |
| `compute_deviance` | `(loglik) -> float` | Compute deviance from log-likelihood | `inference` |
| `compute_diagnostics` | `(model_type, spec, bundle, fit, coef_for_predict, varying_spread, cv, has_intercept) -> pl.DataFrame` | Compute model-level diagnostics as a single-row DataFrame | `fit` |
| `compute_emm` | `(bundle, fit, focal_var, explore_formula, levels, at_overrides, set_categoricals, spec, how, effect_scale) -> MeeState` | Compute estimated marginal means for a categorical focal variable | `marginal` |
| `compute_f_pvalue` | `(f_stat, df1, df2) -> float` | Compute p-value from F-statistic | `inference` |
| `compute_f_test` | `(L, coef, vcov, df_resid) -> FTestResult` | Compute F-test for linear hypothesis L @ β = 0 | `inference` |
| `compute_glm_cr_vcov` | `(X, residuals, irls_weights, cluster_ids, XtWX_inv, cr_type) -> NDArray[np.float64]` | Compute cluster-robust covariance matrix for non-Gaussian mixed models | `inference` |
| `compute_glm_hc_vcov` | `(X, residuals, irls_weights, XtWX_inv, hc_type) -> NDArray[np.float64]` | Compute heteroscedasticity-consistent covariance matrix for GLM | `inference` |
| `compute_gradient_richardson` | `(func, x, d, eps, r, v, zero_tol) -> np.ndarray` | Compute gradient using Richardson extrapolation | `differentiation` |
| `compute_hc_vcov` | `(X, residuals, XtX_inv, hc_type) -> NDArray[np.float64]` | Compute heteroscedasticity-consistent covariance matrix | `inference` |
| `compute_hessian_numerical` | `(func, x, step_size) -> np.ndarray` | Compute Hessian using central finite differences | `differentiation` |
| `compute_hessian_richardson` | `(func, x, d, eps, r, v, zero_tol) -> np.ndarray` | Compute Hessian using Richardson extrapolation with genD method | `differentiation` |
| `compute_inverse_variance_weights` | `(data, y_col, group_col, valid_mask) -> WeightInfo` | Compute inverse-variance weights from a factor column | `weights` |
| `compute_irls_quantities` | `(y, eta, family) -> tuple[np.ndarray, np.ndarray]` | Compute IRLS working weights and working response | `solvers` |
| `compute_jackknife_coefs` | `(spec, bundle) -> np.ndarray` | Compute leave-one-out jackknife coefficient estimates | `infer` |
| `compute_jacobian_numerical` | `(func, x, step_size) -> np.ndarray` | Compute Jacobian using central finite differences | `differentiation` |
| `compute_jacobian_richardson` | `(func, x, d, eps, r, v, zero_tol) -> np.ndarray` | Compute Jacobian using Richardson extrapolation | `differentiation` |
| `compute_joint_test` | `(fit, bundle, spec, terms, errors, data) -> JointTestState` | Compute joint hypothesis tests for model terms | `marginal` |
| `compute_leverage` | `(X, weights, XtWX_inv) -> np.ndarray` | Compute diagonal of hat matrix (leverage values) | `inference` |
| `compute_mc_iteration` | `(seed, dgp_fn, dgp_params, fit_fn) -> dict[str, Any] \| None` | Execute a single Monte Carlo iteration | `simulation` |
| `compute_mee_bootstrap` | `(spec, bundle, mee, data, conf_level, n_boot, ci_type, seed, null, alternative, save_resamples) -> 'tuple[MeeState, np.ndarray \| None]'` | Compute bootstrap inference for marginal effects | `infer` |
| `compute_mee_inference` | `(mee, vcov, df_resid, conf_level, null, alternative) -> MeeState` | Compute delta method inference for marginal effects | `marginal` |
| `compute_mee_inference_fallback` | `(mee, bundle, fit, data, df_resid, conf_level, null, alternative) -> 'MeeState'` | Compute inference for MEE without L_matrix (fallback path) | `marginal` |
| `compute_mee_permutation` | `(spec, bundle, fit, mee, data, conf_level, se_obs, n_perm, seed, null, alternative, save_resamples) -> 'tuple[MeeState, np.ndarray \| None]'` | Compute permutation-based inference for marginal effects | `infer` |
| `compute_mee_se` | `(mee, bundle, fit, data) -> np.ndarray` | Compute standard errors for MEE estimates (means or slopes) | `marginal` |
| `compute_metadata` | `(bundle) -> pl.DataFrame` | Compute model metadata as a single-row DataFrame | `fit` |
| `compute_mu_with_new_re` | `(bundle, fit, spec, rng) -> np.ndarray` | Compute conditional mean with newly sampled random effects | `simulation` |
| `compute_mvt_critical` | `(conf_level, corr, df, tol) -> float` | Compute multivariate-t critical value for simultaneous inference | `inference` |
| `compute_optimizer_diagnostics` | `(model_type, fit) -> pl.DataFrame` | Compute optimizer convergence diagnostics as a single-row DataFrame | `fit` |
| `compute_params_asymptotic` | `(spec, bundle, fit, conf_level, errors, data, null, alternative) -> 'InferenceState'` | Compute asymptotic (Wald) inference for model parameters | `infer` |
| `compute_params_bootstrap` | `(spec, bundle, fit, conf_level, n_boot, ci_type, seed, save_resamples, n_jobs, null, alternative) -> 'InferenceState'` | Compute bootstrap inference for parameters | `infer` |
| `compute_params_bootstrap_mixed` | `(spec, bundle, fit, conf_level, n_boot, ci_type, seed, save_resamples, n_jobs, null, alternative) -> 'InferenceState'` | Compute bootstrap inference for mixed model parameters | `infer` |
| `compute_params_cv_inference` | `(spec, bundle, data, conf_level, link_override, k, seed, holdout_group_ids) -> tuple['InferenceState', 'CVState']` | Compute CV-based parameter importance via ablation | `infer` |
| `compute_params_permutation` | `(spec, bundle, fit, conf_level, n_perm, seed, save_resamples, null, alternative) -> 'InferenceState'` | Compute permutation-based inference for model parameters | `infer` |
| `compute_pls_invariants` | `(X, y) -> PLSInvariants` | Pre-compute quantities that are constant during optimization | `solvers` |
| `compute_prediction_asymptotic` | `(pred, bundle, fit, spec, conf_level) -> 'PredictionState'` | Compute asymptotic inference for predictions via delta method | `infer` |
| `compute_prediction_bootstrap` | `(spec, bundle, pred, conf_level, n_boot, ci_type, seed) -> 'PredictionState'` | Compute bootstrap inference for predictions | `infer` |
| `compute_predictions_from_formula` | `(formula, data, spec, bundle, fit, formula_spec, pred_type, varying, allow_new_levels, n_points) -> 'PredictionState'` | Parse a predict formula, build the grid, compute predictions, and attach grid... | `fit` |
| `compute_profile_inference` | `(spec, bundle, fit, conf_level, n_steps, verbose, threshold) -> 'ProfileState'` | Compute profile likelihood CIs for variance components | `infer` |
| `compute_pvalue` | `(statistic, df, alternative) -> np.ndarray` | Compute p-values from test statistics | `inference` |
| `compute_r_squared` | `(y, residuals, n, p, has_intercept) -> tuple[float, float]` | Compute R-squared and adjusted R-squared from raw arrays | `fit` |
| `compute_satterthwaite_df` | `(vcov_beta, jacobian_vcov, hessian_deviance, min_df, max_df, tol) -> np.ndarray` | Compute Satterthwaite degrees of freedom for each fixed effect | `inference` |
| `compute_satterthwaite_emm_df` | `(bundle, fit, spec, L) -> np.ndarray` | Compute Satterthwaite denominator df for EMM contrast rows | `infer` |
| `compute_satterthwaite_summary_table` | `(beta, beta_names, vcov_beta, vcov_varpar, jac_list) -> dict[str, list]` | Compute full coefficient table with Satterthwaite df and p-values | `inference` |
| `compute_satterthwaite_t_test` | `(beta, se, df, conf_level) -> dict[str, np.ndarray]` | Compute t-statistics, p-values, and confidence intervals | `inference` |
| `compute_sd_jacobian` | `(theta, sigma, group_names, random_names, re_structure) -> tuple[np.ndarray, np.ndarray]` | Compute SDs and Jacobian of SDs w.r.t. varpar = [theta, sigma] | `inference` |
| `compute_se_from_vcov` | `(vcov) -> np.ndarray` | Compute standard errors from variance-covariance matrix | `inference` |
| `compute_sigma_se_wald` | `(model) -> float` | Compute Wald standard error for sigma | `inference` |
| `compute_simulation_inference` | `(simulations, conf_level) -> 'SimulationInferenceState'` | Compute inference for simulations | `infer` |
| `compute_slopes` | `(bundle, fit, focal_var, explore_formula, spec, effect_scale) -> MeeState` | Compute marginal slope for a continuous focal variable | `marginal` |
| `compute_slopes_crossed` | `(bundle, fit, focal_var, resolved, data, spec, formula_spec, effect_scale, delta_frac) -> MeeState` | Compute crossed slopes over the focal variable × condition grid | `marginal` |
| `compute_slopes_finite_diff` | `(bundle, fit, focal_var, explore_formula, spec, formula_spec, data, how, effect_scale, delta_frac) -> MeeState` | Compute marginal slopes via centered finite differences | `marginal` |
| `compute_sparse_cholesky` | `(A) -> SparseFactorization` | Factor a sparse symmetric positive definite matrix | `linalg` |
| `compute_studentized_residuals` | `(residuals, hat, sigma) -> np.ndarray` | Compute internally studentized (standardized) residuals | `inference` |
| `compute_t_critical` | `(conf_int, df, alternative) -> float \| np.ndarray` | Compute t-distribution critical value for confidence interval | `inference` |
| `compute_t_test` | `(L, coef, vcov, df) -> TTestResult` | Compute t-test for a single contrast L @ β = 0 | `inference` |
| `compute_tukey_critical` | `(conf_level, k, df) -> float` | Compute Tukey HSD critical value for pairwise comparisons | `inference` |
| `compute_varying_spread_state` | `(theta, sigma, re_meta) -> VaryingSpreadState` | Compute VaryingSpreadState (variance components) from theta parameters | `fit` |
| `compute_varying_state` | `(theta, u, re_meta, data) -> VaryingState` | Compute VaryingState (BLUPs) from fitted random effects parameters | `fit` |
| `compute_vcov_schur_sparse` | `(X, Z, Lambda, weights, sigma2) -> np.ndarray` | Compute variance-covariance matrix of fixed effects via Schur complement | `linalg` |
| `compute_vif` | `(X, X_names) -> pl.DataFrame` | Compute variance inflation factors | `inference` |
| `compute_wald_ci_varying` | `(theta, sigma, vcov_varpar, group_names, random_names, re_structure, conf_level) -> tuple[np.ndarray, np.ndarray]` | Compute Wald CIs for variance components on SD scale | `inference` |
| `compute_wald_statistic` | `(contrast_values, contrast_vcov) -> float` | Compute Wald statistic for testing L @ β = 0 | `inference` |
| `compute_welch_satterthwaite_df_per_coef` | `(X, cell_info) -> NDArray[np.float64]` | Compute per-coefficient Welch-Satterthwaite degrees of freedom | `inference` |
| `compute_wilson_ci` | `(p_hat, n, level) -> tuple[float, float]` | Wilson score confidence interval for a binomial proportion | `simulation` |
| `compute_z_critical` | `(conf_int, alternative) -> float` | Compute z-distribution critical value for confidence interval | `inference` |
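Many of these computers are small, pure formulas. For instance, the Akaike weights behind `compute_akaike_weights` can be sketched with the standard textbook formula (a generic implementation, not the library's code):

```python
import numpy as np

def akaike_weights(ic_values) -> np.ndarray:
    """w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = IC_i - min(IC)."""
    delta = np.asarray(ic_values, dtype=float) - np.min(ic_values)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# A model 2 IC units worse than the best gets e^{-1} of its weight
print(akaike_weights([100.0, 102.0, 110.0]))
```

Subtracting the minimum before exponentiating keeps the computation numerically stable for large IC values.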

## Fitters & Solvers

Solver entry points and linear algebra solver steps.

| Function | Signature | Description | Module |
| --- | --- | --- | --- |
| `fit_glm_irls` | `(spec, bundle, max_iter, tol) -> FitState` | Fit generalized linear model using Iteratively Reweighted Least Squares | `fit` |
| `fit_glm_irls` | `(y, X, family, weights, max_iter, tol) -> dict` | Fit GLM using IRLS algorithm | `solvers` |
| `fit_glmer_pirls` | `(spec, bundle, max_iter, max_outer_iter, tol, verbose, nAGQ, use_hessian) -> FitState` | Fit generalized linear mixed model using Penalized IRLS | `fit` |
| `fit_glmm_pirls` | `(X, Z, y, family, n_groups_list, re_structure, metadata, theta_init, prior_weights, max_outer_iter, pirls_max_iter, pirls_tol, verbose, two_stage, lambda_template, nAGQ, group_ids) -> dict` | Fit GLMM using PIRLS with outer optimization over theta | `solvers` |
| `fit_lmer_pls` | `(spec, bundle, max_iter, verbose) -> FitState` | Fit linear mixed-effects model using Penalized Least Squares | `fit` |
| `fit_model` | `(spec, bundle, solver, max_iter, max_outer_iter, tol, verbose, nAGQ, use_hessian) -> FitState` | Dispatch to appropriate fitter based on model specification | `fit` |
| `fit_ols_qr` | `(spec, bundle) -> FitState` | Fit ordinary or weighted least squares using QR decomposition | `fit` |
| `solve_atol` | `(scale, cond, n, safety) -> float` | Absolute tolerance for linear solve operations | `tolerances` |
| `solve_pls_sparse` | `(X, Z, Lambda, y, pls_invariants) -> dict` | Solve Penalized Least Squares system using Schur complement | `solvers` |
| `solve_rtol` | `(cond, safety) -> float` | Relative tolerance for linear solve operations | `tolerances` |
| `solve_weighted_pls_sparse` | `(X, Z, Lambda, z, weights, beta_fixed, ZL, ZL_dense, factor_S22, factor_S, pattern_template) -> dict` | Solve weighted Penalized Least Squares for GLMM | `solvers` |
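To make the IRLS loop behind the `fit_glm_irls` entries concrete, here is a self-contained textbook sketch for a logistic model. This is generic IRLS with a logit link, not the library's solver, which also handles prior weights, offsets, other families, and more careful convergence checks:

```python
import numpy as np

def irls_logistic(X, y, max_iter=25, tol=1e-8):
    """Textbook IRLS for logistic regression: repeatedly solve a weighted
    least-squares problem on the working response until beta stabilizes."""
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))   # inverse logit link
        w = mu * (1.0 - mu)               # IRLS working weights
        z = eta + (y - mu) / w            # working response
        XtW = X.T * w
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([0.0, 1.0, 0.0, 1.0, 1.0])
beta_hat = irls_logistic(X, y)
```

The mixed-model fitters wrap the same inner loop in an outer optimization over the variance parameters theta (PIRLS).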

## Parsers & Evaluators

String-to-structure conversion and formula AST evaluation.

| Function | Signature | Description | Module |
| --- | --- | --- | --- |
| `parse_conf_int` | `(conf_int) -> float` | Parse flexible confidence interval input to float | `inference` |
| `parse_design_column_name` | `(name) -> DesignColumnInfo` | Parse design matrix column name into components | `design` |
| `parse_explore_formula` | `(formula, model_terms) -> ExploreFormulaSpec` | Parse an explore formula string | `marginal` |
| `parse_fit_kwargs` | `(spec, kwargs, nAGQ) -> tuple[ModelSpec, str \| None, dict[str, object]]` | Validate and extract fitting parameters from `**kwargs` | `fit` |
| `parse_formula` | `(formula, data, factors, custom_contrasts) -> FormulaSpec` | Parse formula and detect categoricals from data | `formula` |
| `parse_predict_formula` | `(formula, data, response_col, grouping_factors, n_points) -> tuple[pl.DataFrame, list[str]]` | Parse an explore-style formula and build a prediction grid | `fit` |
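`parse_conf_int` accepts "flexible confidence interval input"; one plausible normalization is sketched below. The accepted forms are not documented here, so `parse_conf_level` and its behavior are assumptions about the idea, not the real implementation:

```python
def parse_conf_level(value) -> float:
    """Hypothetical normalizer: accept 0.95, 95, or '95%' and
    return a proportion in (0, 1)."""
    if isinstance(value, str):
        value = value.strip().rstrip("%")
    level = float(value)
    if level > 1.0:  # assume the caller gave a percentage
        level /= 100.0
    if not 0.0 < level < 1.0:
        raise ValueError(f"confidence level out of range: {value!r}")
    return level
```

Normalizing once at the boundary lets every downstream routine assume a proportion.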

## Dispatchers & Resolvers

Route to specialized implementations and resolve ambiguity.

| Function | Signature | Description | Module |
| --- | --- | --- | --- |
| `dispatch_infer` | `(how, conf_level, errors, null, alternative, last_op, spec, bundle, fit, data, link_override, mee, pred, simulations, varying_spread, is_mixed, n_boot, n_perm, ci_type, seed, n_jobs, save_resamples, k, n_steps, verbose, threshold, profile_auto, holdout_group_ids) -> InferResult` | Dispatch inference to the correct backend based on method and last operation | `infer` |
| `dispatch_marginal_computation` | `(parsed, bundle, fit, data, spec, formula_spec, varying_offsets, effect_scale, varying, how, inverse_transforms, by) -> MeeState` | Route a parsed explore formula to the appropriate marginal computation | `marginal` |
| `dispatch_mee_inference` | `(how, mee, spec, bundle, fit, data, conf_level, errors, null, alternative, n_boot, n_perm, ci_type, seed, save_resamples) -> 'tuple[MeeState, np.ndarray \| None]'` | Dispatch marginal effects inference to the appropriate method | `infer` |
| `dispatch_params_inference` | `(how, spec, bundle, fit, data, conf_level, errors, null, alternative, link_override, n_boot, n_perm, ci_type, seed, n_jobs, save_resamples, k) -> InferenceState` | Dispatch parameter inference to the appropriate method | `infer` |
| `dispatch_prediction_inference` | `(how, pred, spec, bundle, fit, conf_level, n_boot, ci_type, seed, k, holdout_group_ids) -> tuple[PredictionState, CVState \| None]` | Dispatch prediction inference to the appropriate method | `infer` |
| `resolve_condition_values` | `(cond, data) -> list \| None` | Resolve a `Condition` to concrete values or None | `fit` |
| `resolve_conditions` | `(conditions, bundle, data) -> ResolvedConditions` | Classify each Condition into the appropriate typed bucket | `marginal` |
| `resolve_sigma` | `(sigma) -> float` | Resolve optional sigma to a concrete float | `family` |
| `resolve_solver` | `(spec) -> str` | Select the appropriate solver for a model configuration | `fit` |
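At their core, the dispatchers are table lookups from a method name (`how`) to a backend implementation. A minimal sketch of the pattern, with invented backend names; the real `dispatch_params_inference` also threads model state and many tuning parameters through:

```python
from typing import Any, Callable

def _asymptotic_inference(**kwargs: Any) -> str:
    return "wald"   # placeholder backend

def _bootstrap_inference(**kwargs: Any) -> str:
    return "bootstrap"   # placeholder backend

# Registry mapping the user-facing `how` string to an implementation
_BACKENDS: dict[str, Callable[..., str]] = {
    "asymptotic": _asymptotic_inference,
    "bootstrap": _bootstrap_inference,
}

def dispatch_inference(how: str, **kwargs: Any) -> str:
    try:
        backend = _BACKENDS[how]
    except KeyError:
        raise ValueError(f"unknown inference method: {how!r}") from None
    return backend(**kwargs)
```

Centralizing the lookup gives one place to raise a clear error for an unsupported method and keeps each backend ignorant of the others.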

## Generators & Simulators

Synthetic data creation and multi-step workflows.

| Function | Signature | Description | Module |
| --- | --- | --- | --- |
| `generate_data_from_spec` | `(sim_spec, family, response_var) -> pl.DataFrame` | Generate a synthetic dataset from a simulation specification | `simulation` |
| `generate_glm_data` | `(n, beta, family, link, sigma, x_type, distributions, seed) -> tuple[pl.DataFrame, dict]` | Generate GLM data with known parameters | `simulation` |
| `generate_glmer_data` | `(n_obs, n_groups, beta, theta, family, re_structure, obs_per_group, distributions, seed) -> tuple[pl.DataFrame, dict]` | Generate GLMM data with known parameters | `simulation` |
| `generate_group_kfold_splits` | `(group_ids, k, seed) -> list[tuple[np.ndarray, np.ndarray]]` | Generate group-aware k-fold cross-validation indices | `infer` |
| `generate_kfold_splits` | `(n, k, seed) -> list[tuple[np.ndarray, np.ndarray]]` | Generate k-fold cross-validation train/test indices | `infer` |
| `generate_lm_data` | `(n, beta, sigma, x_type, distributions, seed) -> tuple[pl.DataFrame, dict]` | Generate linear model data with known parameters | `simulation` |
| `generate_lmer_data` | `(n_obs, n_groups, beta, theta, sigma, re_structure, obs_per_group, distributions, seed) -> tuple[pl.DataFrame, dict]` | Generate linear mixed model data with known parameters | `simulation` |
| `run_monte_carlo` | `(dgp_fn, dgp_params, fit_fn, n_sims, seed, n_jobs, verbose) -> MonteCarloResult` | Run a Monte Carlo simulation study | `simulation` |
| `run_power_analysis` | `(formula, family, response_var, power, n, seed, coef, sigma, var_specs) -> pl.DataFrame` | Run simulation-based power analysis for a model formula | `simulation` |
| `run_power_study` | `(sweep_grid, dgp_fn, fit_fn, n_sims, seed, alpha, ci_level, n_jobs, verbose) -> pl.DataFrame` | Run power analysis across a sweep grid | `simulation` |
| `simulate_responses_from_fit` | `(fit, bundle, spec, nsim, seed, varying) -> pl.DataFrame` | Simulate new responses from a fitted model | `simulation` |
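The data generators all follow the same recipe: draw predictors, form the linear predictor from known parameters, then add family-appropriate noise. A hedged NumPy sketch of the Gaussian case (the real `generate_lm_data` returns a Polars DataFrame and handles `x_type` and `distributions`; the argument handling here is invented):

```python
import numpy as np

def generate_lm(n: int, beta, sigma: float, seed=None):
    """Simulate y = X @ beta + Normal(0, sigma^2) with an intercept column."""
    rng = np.random.default_rng(seed)
    beta = np.asarray(beta, dtype=float)
    X = np.column_stack([np.ones(n), rng.normal(size=(n, beta.size - 1))])
    y = X @ beta + rng.normal(scale=sigma, size=n)
    return X, y

# Known truth (intercept 1, slope 2) should be recovered by OLS
X, y = generate_lm(500, beta=[1.0, 2.0], sigma=0.5, seed=42)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Returning the true parameters alongside the data (as the `dict` in the real signatures suggests) is what makes bias and coverage checks in the Monte Carlo helpers possible.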

## Accessors & Extractors

Retrieve configuration/data and pull pieces from structures.

| Function | Signature | Description | Module |
| --- | --- | --- | --- |
| `extract_base_term` | `(name) -> str` | Extract base term name from column name | `design` |
| `extract_categorical_variables` | `(X_names) -> set[str]` | Find all categorical base variable names from design matrix columns | `design` |
| `extract_ci_bound` | `(spline, zeta_target, lower_bound, upper_guess) -> float` | Extract CI bound by finding where spline equals target zeta | `inference` |
| `extract_factors_from_formula` | `(formula, data) -> list[str]` | Extract factor (categorical) column names from a model formula | `inference` |
| `extract_level_from_column` | `(name, focal_var) -> str \| None` | Extract level value for a specific focal variable from column name | `design` |
| `get_available_memory_gb` | `() -> float` | Query available system memory in GB | `batching` |
| `get_backend` | `() -> BackendName` | Get the current backend name | `backend` |
| `get_contrast_labels` | `(levels, contrast_type) -> list[str]` | Generate human-readable labels for contrasts | `marginal` |
| `get_display_digits` | `() -> int` | Get the number of significant figures for DataFrame display output | `config` |
| `get_ops` | `() -> 'ArrayOps'` | Get array operations for the current backend | `backend` |
| `get_singular_tolerance` | `() -> float` | Get the current singular tolerance for mixed models | `config` |
| `get_theta_lower_bounds` | `(n_theta, re_structure, metadata) -> list[float]` | Get lower bounds for theta parameters | `fit` |
| `get_valid_rows` | `(X) -> tuple[NDArray[np.bool_], NDArray, int]` | Identify valid (non-NA) rows in a design matrix | `predict` |

## Transformers

Transform, convert, or reshape data.

| Function | Signature | Description | Module |
| --- | --- | --- | --- |
| `apply_bracket_contrasts` | `(mee_state, expr) -> MeeState` | Apply bracket contrast expression to an EMM MeeState | `marginal` |
| `apply_bracket_contrasts_grouped` | `(mee_state, expr) -> MeeState` | Apply bracket contrasts within each condition group of a crossed MeeState | `marginal` |
| `apply_contrasts` | `(mee_state, contrast_type, fit, degree, ref_idx, level_ordering) -> MeeState` | Apply contrast matrix to marginal means/effects | `marginal` |
| `apply_contrasts_grouped` | `(mee_state, contrast_type, degree, ref_idx, level_ordering) -> MeeState` | Apply contrasts within each condition group of a crossed MeeState | `marginal` |
| `apply_link` | `(link, mu) -> 'np.ndarray'` | Apply link function by name: η = g(μ) | `family` |
| `apply_link_deriv` | `(link, mu) -> 'np.ndarray'` | Apply link function derivative by name: dη/dμ | `family` |
| `apply_link_inverse` | `(link, eta) -> 'np.ndarray'` | Apply inverse link function by name: μ = g⁻¹(η) | `family` |
| `apply_rhs_bracket_contrast` | `(mee_state, expr) -> MeeState` | Apply a bracket contrast on a RHS condition column | `marginal` |
| `apply_sqrt_weights` | `(X, Z, y, weights) -> tuple[np.ndarray, sp.csc_matrix, np.ndarray, np.ndarray \|...` | Apply sqrt(weights) transformation to design matrices and response | `solvers` |
| `convert_coding_to_hypothesis` | `(coding_matrix) -> NDArray[np.float64]` | Convert a coding matrix back to interpretable hypothesis contrasts | `design` |
| `convert_theta_ci_to_sd` | `(ci_theta, theta_opt, sigma_opt, group_names, random_names, re_structure) -> tuple[dict[str, tuple[float, float]], np.ndarray, np.ndar...` | Convert theta-scale CIs to SD-scale CIs | `inference` |
| `expand_double_verts` | `(formula) -> tuple[str, dict]` | Expand `\|\|` syntax into separate uncorrelated random effects terms | `formula` |
| `expand_nested_syntax` | `(formula) -> tuple[str, dict]` | Expand nested `/` syntax into separate crossed random effects terms | `formula` |
| `expand_sweep_grid` | `(base_n, base_coef, base_sigma, base_varying, power_config) -> list[dict[str, Any]]` | Full factorial grid from base DGP + power sweep overrides | `simulation` |
| `to_markdown` | `(df, path, caption) -> str` | Convert a Polars DataFrame to a markdown table, optionally saving to file | `rendering` |
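The contrast transformers reduce to two lines of linear algebra: contrast estimates are `L @ emm` and their covariance is `L @ vcov @ L.T`. A small worked sketch with made-up numbers (not the library's `MeeState` machinery):

```python
import numpy as np

# Hypothetical marginal means for three levels and their covariance
emm = np.array([2.0, 3.5, 5.0])
vcov = np.diag([0.04, 0.04, 0.04])

# Sequential (successive-differences) contrasts, one comparison per row
L = np.array([
    [-1.0, 1.0, 0.0],   # level 2 - level 1
    [0.0, -1.0, 1.0],   # level 3 - level 2
])

estimates = L @ emm                     # contrast estimates
ses = np.sqrt(np.diag(L @ vcov @ L.T))  # delta-method standard errors
```

Because the covariance propagates through the same `L`, any contrast matrix (pairwise, Helmert, polynomial) reuses this single code path.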

## Guards

Validation, predicates, and introspection.

| Function | Signature | Description | Module |
| --- | --- | --- | --- |
| `check_convergence` | `(fit, re_meta) -> list[ConvergenceMessage]` | Run convergence diagnostics on a fitted mixed model | `fit` |
| `detect_rank_deficiency` | `(X, X_names) -> RankInfo \| None` | Detect rank deficiency in a design matrix via pivoted QR | `linalg` |
| `detect_weight_type` | `(data, col) -> bool` | Check if a column is categorical (should use inverse-variance weights) | `weights` |
| `has_full_rank` | `(A) -> bool` | Check if matrix has full column rank | `tolerances` |
| `is_singular` | `(theta, tol) -> bool` | Check whether a mixed model fit is singular | `config` |
| `is_well_conditioned` | `(A, threshold) -> bool` | Check if matrix is well-conditioned for stable computation | `tolerances` |
| `validate_fit_method` | `(spec, method_str) -> ModelSpec` | Validate and apply a user-specified fitting method to a ModelSpec | `fit` |
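As a standalone illustration of what the rank guards catch (using NumPy's SVD-based rank rather than the library's pivoted-QR approach), a collinear column is detected by comparing numerical rank to column count:

```python
import numpy as np

def has_full_column_rank(A: np.ndarray) -> bool:
    """True iff the numerical rank of A equals its number of columns."""
    return np.linalg.matrix_rank(A) == A.shape[1]

x = np.arange(5, dtype=float)
X_ok = np.column_stack([np.ones(5), x])            # intercept + predictor
X_bad = np.column_stack([np.ones(5), x, 2.0 * x])  # scaled duplicate column
```

Catching this before the solver runs lets the library name the offending column instead of surfacing a cryptic singular-matrix error.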

## Formatters & Comparators

Display string production and model comparison.

| Function | Signature | Description | Module |
| --- | --- | --- | --- |
| `compare_aic` | `(models) -> pl.DataFrame` | Compare models by AIC with delta-AIC and Akaike weights | `compare` |
| `compare_bic` | `(models) -> pl.DataFrame` | Compare models by BIC with delta-BIC and Schwarz weights | `compare` |
| `format_convergence_warnings` | `(messages) -> str` | Format convergence messages for display as warning text | `convergence` |
| `format_pvalue_with_stars` | `(p_val) -> str` | Format p-value with R-style significance codes | `inference` |
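`format_pvalue_with_stars` refers to R's conventional significance codes; a minimal sketch follows, with the standard R thresholds assumed (the library's exact symbols and cutoffs may differ):

```python
def pvalue_stars(p: float) -> str:
    """R-style codes: '***' < 0.001, '**' < 0.01, '*' < 0.05, '.' < 0.1."""
    for threshold, code in [(0.001, "***"), (0.01, "**"), (0.05, "*"), (0.1, ".")]:
        if p < threshold:
            return code
    return ""
```

Iterating over thresholds in ascending order means the first match is always the strictest code that applies.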

## Math Primitives

Family, link, distribution, and coding functions (no verb prefix).

| Function | Signature | Description | Module |
| --- | --- | --- | --- |
| `adjust_pvalues` | `(pvalues, method) -> np.ndarray` | Adjust p-values for multiple comparisons | `inference` |
| `algorithm_comparison_atol` | `(scale, cond, safety) -> float` | Absolute tolerance for comparing different algorithms | `tolerances` |
| `algorithm_comparison_rtol` | `(cond, safety) -> float` | Relative tolerance for comparing different algorithms | `tolerances` |
| `array_to_coding_matrix` | `(contrasts, n_levels, normalize) -> NDArray[np.float64]` | Convert user-specified contrasts to a coding matrix for design matrices | `design` |
| `augment_data_with_diagnostics` | `(raw_data, fit, bundle) -> pl.DataFrame` | Augment raw data with diagnostic columns after fit | `fit` |
| `augment_spread_with_profile_ci` | `(spread, profile, conf_level) -> 'VaryingSpreadState'` | Augment variance components with profile likelihood confidence intervals | `infer` |
| `backend` | `(name) -> Iterator[None]` | Context manager for temporary backend switching | `backend` |
| `beta` | `(a, b) -> Distribution` | Beta distribution | `distributions` |
| `bias` | `(estimates, true_value) -> float` | Compute bias: E[beta_hat] - beta_true | `simulation` |
| `binomial` | `(n, p) -> Distribution` | Binomial distribution | `distributions` |
| `binomial_deviance` | `(y, mu) -> jnp.ndarray` | Binomial unit deviance: d(y, μ) = 2[y log(y/μ) + (1-y) log((1-y)/(1-μ))] | `family` |
| `binomial_dispersion` | `(y, mu, df_resid) -> float` | Dispersion parameter for binomial family | `family` |
| `binomial_initialize` | `(y, weights) -> jnp.ndarray` | Initialize μ for binomial family | `family` |
| `binomial_loglik` | `(y, mu) -> jnp.ndarray` | Binomial conditional log-likelihood (per observation) | `family` |
| `binomial_variance` | `(mu) -> jnp.ndarray` | Binomial variance function: V(μ) = μ(1-μ) | `family` |
| `chi2` | `(df) -> Distribution` | Chi-squared distribution | `distributions` |
| `clear_ops_cache` | `() -> None` | Clear the backend operations cache | `backend` |
| `cloglog_link` | `(mu) -> jnp.ndarray` | Complementary log-log link function: η = log(-log(1-μ)) | `family` |
| `cloglog_link_deriv` | `(mu) -> jnp.ndarray` | Cloglog link derivative: dη/dμ = 1/((1-μ) * (-log(1-μ))) | `family` |
| `cloglog_link_inverse` | `(eta) -> jnp.ndarray` | Cloglog inverse link: μ = 1 - exp(-exp(η)) | `family` |
| `combine_resolved` | `(a, b) -> ResolvedConditions` | Merge two ResolvedConditions, with b taking precedence on conflicts | `marginal` |
| `compose_contrast_matrix` | `(C, X_ref) -> np.ndarray` | Compose contrast matrix with prediction matrix | `marginal` |
| `coverage` | `(ci_lower, ci_upper, true_value) -> float` | Compute coverage probability | `simulation` |
| `dataframe_to_markdown` | `(df, caption) -> str` | Convert a Polars DataFrame to a pipe-delimited markdown table | `rendering` |
| `decomposition_atol` | `(A, safety) -> float` | Absolute tolerance for decomposition properties | `tolerances` |
| `delta_method_se` | `(X_pred, vcov) -> np.ndarray` | Compute standard errors for predictions via delta method | `inference` |
| `diagnose_convergence` | `(theta, theta_lower, group_names, random_names, re_structure, sigma, converged, boundary_adjusted, restarted, optimizer_message, singular_tol, corr_tol) -> list[ConvergenceMessage]` | Analyze model convergence state and generate diagnostic messages | `convergence` |
| `empirical_se` | `(estimates) -> float` | Compute empirical standard error (SD of estimates) | `simulation` |
| `equation_to_markdown` | `(equation, explanations, include_explanations) -> str` | Wrap a LaTeX equation in display math delimiters for Quarto | `rendering` |
| `execute_fit` | `(spec, bundle, data, raw_data, formula, custom_contrasts, weights_col, offset_col, missing, is_mixed, solver_override, fit_kwargs) -> FitResult` | Execute the full fit lifecycle: bundle rebuild → fit → post-fit state → diagn... | `fit` |
| `execute_simulate` | `(spec, bundle, fit, data, formula, is_mixed, n, nsim, seed, coef, sigma, varying, power, var_specs) -> SimulateResult` | Execute simulation: power analysis, post-fit sampling, or pre-fit generation | `simulation` |
| `exponential` | `(rate) -> Distribution` | Exponential distribution (rate parameterization) | `distributions` |
| `f_dist` | `(df1, df2) -> Distribution` | F distribution | `distributions` |
| `figure_to_html` | `(fig, dpi) -> str` | Convert matplotlib figure to base64-encoded HTML img tag | `distributions` |
| `fill_valid` | `(result, valid_mask, values) -> NDArray` | Fill valid positions in result array with computed values | `predict` |
| `fitted_atol` | `(X, y, cond, safety) -> float` | Absolute tolerance for fitted value comparisons | `tolerances` |
| `gamma` | `(shape, rate, scale) -> Distribution` | Gamma distribution | `distributions` |
| `gamma_deviance` | `(y, mu) -> jnp.ndarray` | Gamma unit deviance: d(y, μ) = 2[-log(y/μ) + (y - μ)/μ] | `family` |
| `gamma_dispersion` | `(y, mu, df_resid) -> float` | Estimate dispersion parameter for Gamma family | `family` |
| `gamma_initialize` | `(y, weights) -> jnp.ndarray` | Initialize μ for Gamma family | `family` |
| `gamma_loglik` | `(y, mu) -> jnp.ndarray` | Gamma conditional log-likelihood (per observation) | `family` |
| `gamma_variance` | `(mu) -> jnp.ndarray` | Gamma variance function: V(μ) = μ² | `family` |
| `gaussian_deviance` | `(y, mu) -> jnp.ndarray` | Gaussian unit deviance: d(y, μ) = (y - μ)² | `family` |
| `gaussian_dispersion` | `(y, mu, df_resid) -> float` | Estimate dispersion parameter for Gaussian family | `family` |
| `gaussian_initialize` | `(y, weights) -> jnp.ndarray` | Initialize μ for Gaussian family | `family` |
| `gaussian_loglik` | `(y, mu) -> jnp.ndarray` | Gaussian conditional log-likelihood (per observation) | `family` |
| `gaussian_variance` | `(mu) -> jnp.ndarray` | Gaussian variance function: V(μ) = 1 | `family` |
| `glm_score_atol` | `(X, y, cond, weights, iterative, safety) -> float` | Absolute tolerance for GLM score equation checks | `tolerances` |
| `glmm_deviance` | `(y, mu, family, logdet, sqrL, prior_weights) -> float` | Compute GLMM deviance via Laplace approximation | `solvers` |
| `glmm_deviance_objective` | `(theta, X, Z, y, family, n_groups_list, re_structure, metadata, prior_weights, pirls_max_iter, pirls_tol, verbose, lambda_template, factor_cache, pattern_template, pirls_result_cache) -> float` | Compute GLMM deviance for outer optimization | `solvers` |
| `hat_matrix_atol` | `(X, cond, safety) -> float` | Absolute tolerance for hat matrix property checks | `tolerances` |
| `helmert_coding` | `(levels) -> NDArray[np.float64]` | Build Helmert contrast matrix | `design` |
| `helmert_coding_labels` | `(levels) -> list[str]` | Get column labels for Helmert contrast | `design` |
| `identify_column_type` | `(name) -> Literal['intercept', 'continuous', 'categorical']` | Identify column type from name (simplified version) | `design` |
| `identity_link` | `(mu) -> jnp.ndarray` | Identity link function: η = μ | `family` |
| `identity_link_deriv` | `(mu) -> jnp.ndarray` | Identity link derivative: dη/dμ = 1 | `family` |
| `identity_link_inverse` | `(eta) -> jnp.ndarray` | Identity inverse link: μ = η | `family` |
| `inference_atol` | `(coef, safety) -> float` | Absolute tolerance for inference result comparisons | `tolerances` |
init_na_array(n, dtype) -> NDArrayCreate an array of NaN valuespredict
inverse_link(mu) -> jnp.ndarrayInverse link function: η = 1/μfamily
inverse_link_deriv(mu) -> jnp.ndarrayInverse link derivative: dη/dμ = -1/μ²family
inverse_link_inverse(eta) -> jnp.ndarrayInverse link inverse: μ = 1/ηfamily
lmm_deviance_sparse(theta, X, Z, y, n_groups_list, re_structure, method, lambda_template, pls_invariants, metadata, sqrtwts) -> floatCompute LMM deviance for optimizationsolvers
lock_backend() -> NoneLock the backend to prevent switching after model fittingbackend
log_link(mu) -> jnp.ndarrayLog link function: η = log(μ)family
log_link_deriv(mu) -> jnp.ndarrayLog link derivative: dη/dμ = 1/μfamily
log_link_inverse(eta) -> jnp.ndarrayLog inverse link: μ = exp(η)family
logit_link(mu) -> jnp.ndarrayLogit link function: η = log(μ/(1-μ))family
logit_link_deriv(mu) -> jnp.ndarrayLogit link derivative: dη/dμ = 1/(μ(1-μ))family
logit_link_inverse(eta) -> jnp.ndarrayLogit inverse link: μ = 1/(1 + exp(-η))family
mean_se(std_errors) -> floatCompute mean of standard errors across simulationssimulation
normal(mean, sd) -> DistributionNormal (Gaussian) distributiondistributions
optimize_theta(objective, theta0, lower, upper, rhobeg, rhoend, maxfun, verbose) -> dictOptimize theta using BOBYQA via NLOPTsolvers
orthogonality_atol(n, safety) -> floatAbsolute tolerance for orthogonality checkstolerances
per_factor_re_info(re_meta, group_names) -> tuple[str | list[str], list[str] | dict[str, list[str]]]Split global RE metadata into per-factor structures and namesfit
poisson(mu) -> DistributionPoisson distributiondistributions
poisson_deviance(y, mu) -> jnp.ndarrayPoisson unit deviance: d(y, μ) = 2[y log(y/μ) - (y - μ)]family
poisson_dispersion(y, mu, df_resid) -> floatDispersion parameter for Poisson familyfamily
poisson_initialize(y, weights) -> jnp.ndarrayInitialize μ for Poisson familyfamily
poisson_loglik(y, mu) -> jnp.ndarrayPoisson conditional log-likelihood (per observation)family
poisson_variance(mu) -> jnp.ndarrayPoisson variance function: V(μ) = μfamily
poly_coding(levels) -> NDArray[np.float64]Build orthogonal polynomial contrast matrixdesign
poly_coding_labels(levels) -> list[str]Get column labels for polynomial contrastdesign
probit_link(mu) -> jnp.ndarrayProbit link function: η = Φ⁻¹(μ)family
probit_link_deriv(mu) -> jnp.ndarrayProbit link derivative: dη/dμ = 1/φ(Φ⁻¹(μ))family
probit_link_inverse(eta) -> jnp.ndarrayProbit inverse link: μ = Φ(η)family
profile_likelihood(model, conf_level, threshold, n_steps, verbose) -> dictCompute profile likelihood confidence intervals for variance componentsinference
profile_theta_parameter(param_idx, param_name, param_opt, theta_opt, dev_opt, deviance_fn, lower_bounds, n_steps, threshold, verbose) -> dictProfile a single theta parameter bidirectionally from its MLEinference
qr_solve(X, y) -> QRSolveResultSolve least squares via pivoted QR decompositionlinalg
qr_solve_jax(X, y) -> QRSolveResultSolve least squares via pivoted QR decomposition (returns backend arrays)linalg
rejection_rate(p_values, alpha) -> floatCompute rejection rate (proportion of p-values < alpha)simulation
reset_backend() -> NoneReset backend state (for testing only)backend
residual_atol(X, y, cond, safety) -> floatAbsolute tolerance for residual orthogonality checkstolerances
rmse(estimates, true_value) -> floatCompute root mean squared errorsimulation
round_float_columns(df, digits) -> pl.DataFrameRound all Float64 columns to digits significant figuresrounding
round_sigfigs(x, n) -> np.ndarrayRound array values to n significant figuresrounding
sample_response(family, mu, sigma, rng) -> np.ndarraySample response values from a GLM family distributionfamily
satterthwaite_df_for_contrasts(L, vcov_beta, vcov_varpar, jac_list, min_df, max_df) -> np.ndarrayCompute Satterthwaite degrees of freedom for arbitrary contrastsinference
sequential_coding(levels) -> NDArray[np.float64]Build sequential (successive differences) contrast matrixdesign
sequential_coding_labels(levels) -> list[str]Get column labels for sequential contrastdesign
set_backend(name) -> NoneSet the backend to use for computationsbackend
set_display_digits(digits) -> NoneSet the number of significant figures for DataFrame display outputconfig
set_singular_tolerance(tol) -> NoneSet the global singular tolerance for mixed modelsconfig
sum_coding(levels, omit) -> NDArray[np.float64]Build sum (effects) contrast matrixdesign
sum_coding_labels(levels, omit) -> list[str]Get column labels for sum contrastdesign
svd_solve(X, y, rcond) -> SVDSolveResultSolve least squares via SVD (handles rank deficiency)linalg
svd_solve_jax(X, y, rcond) -> SVDSolveResultSolve least squares via SVD (returns backend arrays)linalg
t(df, loc, scale) -> DistributionStudent’s t distribution (location-scale parameterization)distributions
t_dist(df, mean, sd) -> DistributionStudent’s t distributiondistributions
tdist_deviance(y, mu) -> jnp.ndarrayPlaceholder - use tdist(df=...) factory to get proper functionfamily
tdist_dispersion(y, mu, df_resid) -> floatEstimate dispersion (scale) parameter for Student-t familyfamily
tdist_initialize(y, weights) -> jnp.ndarrayInitialize μ for Student-t familyfamily
tdist_loglik(y, mu) -> jnp.ndarrayPlaceholder - use tdist(df=...) factory to get proper functionfamily
tdist_robust_weights(y, mu, scale) -> jnp.ndarrayPlaceholder - use tdist(df=...) factory to get proper functionfamily
tdist_variance(mu) -> jnp.ndarrayStudent-t variance function: V(μ) = 1family
theta_to_cholesky_block(theta, dim) -> np.ndarrayConvert theta vector to lower-triangular Cholesky blocksolvers
theta_to_variance_components(theta, sigma, group_names, random_names, re_structure) -> tuple[list[str], list[float]]Convert theta parameters to named variance componentsvariance
treatment_coding(levels, reference) -> NDArray[np.float64]Build treatment (dummy) contrast matrixdesign
treatment_coding_labels(levels, reference) -> list[str]Get column labels for treatment contrastdesign
uniform(low, high) -> DistributionUniform distributiondistributions
update_lambda_from_template(template, theta) -> sp.csc_matrixUpdate Lambda matrix from template using new theta valuessolvers
write_text(text, path) -> NoneWrite text content to a file, creating parent directoriesrendering
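The link-function triples above (e.g. `logit_link`, `logit_link_inverse`, `logit_link_deriv`) satisfy two identities: the inverse link undoes the link, and the derivative matches the slope of the link. A minimal sketch of those identities for the logit triple, written in plain Python with `math` rather than the library's JAX-backed functions:

```python
import math

# Illustrative re-implementations of the logit formulas listed above
# (not the bossanova.internal functions themselves, which operate on
# jnp.ndarray inputs).
def logit_link(mu: float) -> float:
    # eta = log(mu / (1 - mu))
    return math.log(mu / (1.0 - mu))

def logit_link_inverse(eta: float) -> float:
    # mu = 1 / (1 + exp(-eta))
    return 1.0 / (1.0 + math.exp(-eta))

def logit_link_deriv(mu: float) -> float:
    # d eta / d mu = 1 / (mu * (1 - mu))
    return 1.0 / (mu * (1.0 - mu))

mu = 0.3
# Round trip: inverse(link(mu)) recovers mu
assert abs(logit_link_inverse(logit_link(mu)) - mu) < 1e-12
# Derivative matches a central finite difference of the link
numeric = (logit_link(mu + 1e-6) - logit_link(mu - 1e-6)) / 2e-6
assert abs(numeric - logit_link_deriv(mu)) < 1e-4
```

The same round-trip and finite-difference checks apply to the identity, log, inverse, and probit triples listed in the table.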