
Linear algebra solvers for least squares and sparse systems.

Classes:

| Name | Description |
| --- | --- |
| CHOLMODFactorization | Wrapper around CHOLMOD sparse Cholesky factorization. |
| QRSolveResult | Result container for QR solve. |
| SPLUFactorization | Wrapper around scipy.sparse.linalg.splu factorization. |
| SVDSolveResult | Result container for SVD solve. |
| SparseFactorization | Abstract base class for sparse matrix factorization. |

Functions:

| Name | Description |
| --- | --- |
| compute_sparse_cholesky | Factor a sparse symmetric positive definite matrix. |
| compute_vcov_schur_sparse | Compute variance-covariance matrix of fixed effects via Schur complement. |
| detect_rank_deficiency | Detect rank deficiency in a design matrix via pivoted QR. |
| qr_solve | Solve least squares via pivoted QR decomposition. |
| qr_solve_jax | Solve least squares via pivoted QR decomposition (returns backend arrays). |
| svd_solve | Solve least squares via SVD (handles rank deficiency). |
| svd_solve_jax | Solve least squares via SVD (returns backend arrays). |

Modules:

| Name | Description |
| --- | --- |
| qr | QR-based least squares solver with rank deficiency detection. |
| rank | Rank deficiency detection for design matrices. |
| schur | Schur complement variance-covariance computation for mixed models. |
| sparse | Sparse factorization and solve with CHOLMOD/splu backend abstraction. |
| svd | SVD-based least squares solver, robust to rank deficiency. |

Classes

CHOLMODFactorization

CHOLMODFactorization(factor: 'Any')

Bases: SparseFactorization

Wrapper around CHOLMOD sparse Cholesky factorization.

CHOLMOD provides optimized sparse Cholesky factorization for symmetric positive definite matrices.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| _factor | | The underlying CHOLMOD Factor object. |

Functions:

| Name | Description |
| --- | --- |
| cholesky_inplace | Update factorization with new values (same sparsity pattern). |
| logdet | Compute log-determinant via Cholesky. |
| solve | Solve A @ x = b using Cholesky factor. |

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| factor | Any | sksparse.cholmod.Factor object. | required |

Functions

cholesky_inplace
cholesky_inplace(A: sp.spmatrix) -> None

Update factorization with new values (same sparsity pattern).

Reuses the symbolic analysis (fill-reducing permutation) from the original factorization. Only performs numeric factorization with the new values. This is much faster than a full cholesky() call.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| A | spmatrix | Sparse SPD matrix with same sparsity pattern as original. | required |

logdet
logdet() -> float

Compute log-determinant via Cholesky.

For an SPD matrix A with Cholesky factor L (A = L @ L.T): log|A| = log(|L|²) = 2 * log|L| = 2 * sum(log(diag(L))).

CHOLMOD’s logdet() returns this directly.
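The identity can be checked with a dense NumPy Cholesky factor (an illustration only; CHOLMOD computes the sparse equivalent internally):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])                   # small SPD matrix
L = np.linalg.cholesky(A)                    # A = L @ L.T
logdet = 2.0 * np.sum(np.log(np.diag(L)))    # 2 * sum(log(diag(L)))
direct = np.linalg.slogdet(A)[1]             # log|A| computed directly
```

Both quantities agree to machine precision.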

Returns:

| Type | Description |
| --- | --- |
| float | Log-determinant of A. |

solve
solve(b: np.ndarray) -> np.ndarray

Solve A @ x = b using Cholesky factor.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| b | ndarray | Right-hand side. | required |

Returns:

| Type | Description |
| --- | --- |
| ndarray | Solution x. |

QRSolveResult

QRSolveResult(coef: Array, vcov: Array, sigma2: Array, fitted: Array, residuals: Array, df_resid: int, rank: int, R: Array, pivot: Array) -> None

Result container for QR solve.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| coef | Array | Coefficient estimates, shape (p,). NaN for dropped columns. |
| vcov | Array | Variance-covariance matrix, shape (p, p). NaN for dropped columns. |
| sigma2 | Array | Residual variance (scalar). |
| fitted | Array | Fitted values, shape (n,). |
| residuals | Array | Residuals, shape (n,). |
| df_resid | int | Residual degrees of freedom (n - rank). |
| rank | int | Effective rank of design matrix. |
| R | Array | R matrix from QR decomposition (upper triangular, permuted). |
| pivot | Array | Column permutation from pivoted QR, shape (p,). |

Attributes

R
R: Array
coef
coef: Array
df_resid
df_resid: int
fitted
fitted: Array
pivot
pivot: Array
rank
rank: int
residuals
residuals: Array
sigma2
sigma2: Array
vcov
vcov: Array

SPLUFactorization

SPLUFactorization(lu: 'Any', n: int)

Bases: SparseFactorization

Wrapper around scipy.sparse.linalg.splu factorization.

splu provides general sparse LU factorization. For SPD matrices, it’s less efficient than Cholesky but works in environments where CHOLMOD isn’t available (e.g., Pyodide).

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| _lu | | The underlying SuperLU object from splu. |
| _n | | Matrix dimension. |

Functions:

| Name | Description |
| --- | --- |
| cholesky_inplace | Update factorization with new values. |
| logdet | Compute log-determinant from LU decomposition. |
| solve | Solve A @ x = b using LU factorization. |

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| lu | Any | scipy.sparse.linalg.SuperLU object. | required |
| n | int | Matrix dimension. | required |

Functions

cholesky_inplace
cholesky_inplace(A: sp.spmatrix) -> None

Update factorization with new values.

Note: scipy.sparse.linalg.splu does not support in-place updates, so this performs a full re-factorization. No performance benefit in the NumPy/Pyodide backend, but provides API consistency.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| A | spmatrix | Sparse SPD matrix (sparsity pattern should match original). | required |

logdet
logdet() -> float

Compute log-determinant from LU decomposition.

For LU factorization with pivoting: A = P @ L @ U where P is permutation, L is lower triangular, U is upper triangular.

For SPD matrices, |A| > 0, so log|A| = log|L| + log|U|.

Since L has unit diagonal, log|L| = 0, giving log|A| = sum(log(|diag(U)|)).

Note: For general (non-SPD) matrices, the sign from permutation P and negative diagonal elements would need to be tracked. For SPD matrices used in mixed models, this is not needed.

Returns:

| Type | Description |
| --- | --- |
| float | Log-determinant of A. |

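The diagonal-of-U identity can be verified directly with SciPy's splu (a minimal sketch, independent of the wrapper class):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

A = sp.diags([4.0, 3.0, 2.0]).tocsc()        # SPD matrix in CSC format
lu = splu(A)
# L has unit diagonal, so only U's diagonal contributes to log|A|
logdet = float(np.sum(np.log(np.abs(lu.U.diagonal()))))
dense_logdet = np.linalg.slogdet(A.toarray())[1]
```

For this matrix both values equal log(24).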
solve
solve(b: np.ndarray) -> np.ndarray

Solve A @ x = b using LU factorization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| b | ndarray | Right-hand side (1D or 2D array). | required |

Returns:

| Type | Description |
| --- | --- |
| ndarray | Solution x. |

SVDSolveResult

SVDSolveResult(coef: Array, vcov: Array, sigma2: Array, fitted: Array, residuals: Array, df_resid: int, rank: int, s: Array) -> None

Result container for SVD solve.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| coef | Array | Coefficient estimates, shape (p,). |
| vcov | Array | Variance-covariance matrix, shape (p, p). |
| sigma2 | Array | Residual variance (scalar). |
| fitted | Array | Fitted values, shape (n,). |
| residuals | Array | Residuals, shape (n,). |
| df_resid | int | Residual degrees of freedom. |
| rank | int | Effective rank of design matrix. |
| s | Array | Singular values, shape (min(n, p),). |

Attributes

coef
coef: Array
df_resid
df_resid: int
fitted
fitted: Array
rank
rank: int
residuals
residuals: Array
s
s: Array
sigma2
sigma2: Array
vcov
vcov: Array

SparseFactorization

Bases: ABC

Abstract base class for sparse matrix factorization.

Wraps either CHOLMOD or splu to provide a unified interface.

Functions

cholesky_inplace
cholesky_inplace(A: sp.spmatrix) -> None

Update factorization with new matrix values (same sparsity pattern).

This enables efficient re-factorization when only the values change but the sparsity pattern remains constant. The expensive symbolic analysis is reused, only numeric factorization is performed.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| A | spmatrix | Sparse SPD matrix with same sparsity pattern as original. | required |

logdet
logdet() -> float

Compute log-determinant of the factored matrix.

Returns:

| Type | Description |
| --- | --- |
| float | Log-determinant of the factored matrix. |

solve
solve(b: np.ndarray) -> np.ndarray

Solve A @ x = b.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| b | ndarray | Right-hand side vector or matrix. | required |

Returns:

| Type | Description |
| --- | --- |
| ndarray | Solution x. |

Functions

compute_sparse_cholesky

compute_sparse_cholesky(A: sp.spmatrix) -> SparseFactorization

Factor a sparse symmetric positive definite matrix.

Uses CHOLMOD if available (fast specialized Cholesky) or splu as fallback (general LU, works in Pyodide and when sksparse is not installed).
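The CHOLMOD-first dispatch can be sketched as a try/except on the sksparse import (`spd_solver` is an illustrative name, not the library function; both backends end up exposing a callable that solves A @ x = b):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def spd_solver(A):
    """Return a callable solving A @ x = b, preferring CHOLMOD.

    Sketch of the dispatch only; `spd_solver` is a hypothetical
    helper, not the library API.
    """
    A = sp.csc_matrix(A)                      # both backends want CSC
    try:
        from sksparse.cholmod import cholesky
        return cholesky(A)                    # cholmod Factor is callable
    except ImportError:
        return splu(A).solve                  # general sparse LU fallback

solve = spd_solver(2.0 * sp.eye(3, format="csc"))
x = solve(np.ones(3))                         # each entry is 0.5
```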

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| A | spmatrix | Sparse SPD matrix in CSC format. Will be converted if needed. | required |

Returns:

| Type | Description |
| --- | --- |
| SparseFactorization | SparseFactorization object with solve() and logdet() methods. |

Examples:

>>> import numpy as np
>>> import scipy.sparse as sp
>>> A = sp.eye(100, format='csc') + sp.random(100, 100, density=0.1)
>>> A = A @ A.T  # Make SPD
>>> factor = compute_sparse_cholesky(A)
>>> b = np.ones(100)
>>> x = factor.solve(b)
>>> logdet = factor.logdet()

compute_vcov_schur_sparse

compute_vcov_schur_sparse(X: np.ndarray, Z: sp.spmatrix, Lambda: sp.spmatrix, weights: np.ndarray | None = None, sigma2: float = 1.0) -> np.ndarray

Compute variance-covariance matrix of fixed effects via Schur complement.

Uses the block matrix inversion formula to compute vcov(beta) without explicitly inverting the full augmented system. Handles both LMM (unweighted or pre-transformed) and GLMM (IRLS-weighted) cases.

For LMMs (unweighted or pre-transformed): vcov(beta) = sigma2 * (X'X - X'ZL S22^{-1} ZL'X)^{-1}

For GLMMs (with IRLS weights W): vcov(beta) = (X'WX - X'WZL S22^{-1} ZL'WX)^{-1}

where S22 = ZL'[W]ZL + I and ZL = Z @ Lambda.
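A dense NumPy sketch of the same formula (illustrative only; the library works with sparse ZL and a sparse factorization of S22, and `vcov_schur_dense` is a hypothetical helper):

```python
import numpy as np

def vcov_schur_dense(X, ZL, weights=None, sigma2=1.0):
    """Dense illustration of the Schur-complement vcov formula.

    ZL stands for Z @ Lambda; weights is the IRLS diagonal
    (None for the unweighted LMM case).
    """
    W = np.ones(X.shape[0]) if weights is None else np.asarray(weights)
    WZL = ZL * W[:, None]                     # W @ ZL without forming diag(W)
    S22 = ZL.T @ WZL + np.eye(ZL.shape[1])    # ZL' W ZL + I
    cross = X.T @ WZL                         # X' W ZL
    inner = X.T @ (X * W[:, None]) - cross @ np.linalg.solve(S22, cross.T)
    return sigma2 * np.linalg.inv(inner)

# Sanity check: with ZL = 0 the formula reduces to OLS, sigma2 * (X'X)^{-1}
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
V = vcov_schur_dense(X, np.zeros((3, 2)))
```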

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | ndarray | Fixed effects design matrix (n x p). | required |
| Z | spmatrix | Random effects design matrix (n x q, sparse). | required |
| Lambda | spmatrix | Cholesky-like factor (q x q, sparse). | required |
| weights | ndarray \| None | Diagonal weights for IRLS (n,). None for unweighted (LMM). | None |
| sigma2 | float | Residual variance scaling. For LMMs, pass the estimated sigma^2. For GLMMs, leave as 1.0 (no scaling). | 1.0 |

Returns:

| Type | Description |
| --- | --- |
| ndarray | Variance-covariance matrix (p x p). |

detect_rank_deficiency

detect_rank_deficiency(X: np.ndarray, X_names: tuple[str, ...]) -> RankInfo | None

Detect rank deficiency in a design matrix via pivoted QR.

Uses scipy’s pivoted QR decomposition to detect numerically zero columns. Applies R’s tolerance: eps * max(m, n) * |R[0,0]|.
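The tolerance rule can be sketched directly with SciPy's pivoted QR (a minimal illustration of the rank count, not the library's code):

```python
import numpy as np
from scipy.linalg import qr

X = np.array([[1.0, 2.0, 2.0],
              [1.0, 3.0, 3.0],
              [1.0, 4.0, 4.0]])               # third column duplicates the second
R, pivot = qr(X, mode="r", pivoting=True)     # pivoted QR, R factor only
tol = np.finfo(float).eps * max(X.shape) * abs(R[0, 0])   # R's tolerance
rank = int(np.sum(np.abs(np.diag(R)) > tol))  # columns surviving the tolerance
```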

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | ndarray | Design matrix of shape (n, p). | required |
| X_names | tuple[str, ...] | Column names of X (length p). | required |

Returns:

| Type | Description |
| --- | --- |
| RankInfo \| None | RankInfo if X is rank-deficient, None if full rank. |

Examples:

>>> import numpy as np
>>> X = np.array([[1, 2, 2], [1, 3, 3], [1, 4, 4]])
>>> info = detect_rank_deficiency(X, ("Intercept", "x1", "x2"))
>>> info is not None
True
>>> info.rank
2
>>> info.dropped_names
('x2',)

qr_solve

qr_solve(X: np.ndarray, y: np.ndarray) -> QRSolveResult

Solve least squares via pivoted QR decomposition.

Uses pivoted QR for numerical stability and rank detection. This matches R’s lm() behavior: rank-deficient columns get NaN coefficients.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | ndarray | Design matrix of shape (n, p). | required |
| y | ndarray | Response variable of shape (n,). | required |

Returns:

| Type | Description |
| --- | --- |
| QRSolveResult | QRSolveResult containing coef (NaN for dropped columns), vcov (NaN for dropped columns), sigma2, fitted, residuals, df_resid (n - rank), rank, R (upper triangular, permuted), and pivot (column permutation indices); see QRSolveResult for field details. |

Note: Uses pivoted Householder QR decomposition for rank detection. X[:, pivot] = Q @ R where Q is orthogonal and R is upper triangular.

For rank-deficient matrices, columns with |R_ii| < tol are dropped and their coefficients set to NaN (matching R’s lm() behavior).
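The full-rank path can be sketched with SciPy's pivoted QR (illustrative only, without the rank-deficiency handling described above):

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

X = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([1.0, 2.0, 3.0])
Q, R, pivot = qr(X, mode="economic", pivoting=True)   # X[:, pivot] = Q @ R
beta_pivoted = solve_triangular(R, Q.T @ y)           # back-solve R b = Q' y
beta = np.empty_like(beta_pivoted)
beta[pivot] = beta_pivoted                            # undo the column permutation
```

For this exactly-fitting example, beta is [-1.0, 1.0], matching the doctest below.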

Examples:

>>> X = np.array([[1, 2], [1, 3], [1, 4]])
>>> y = np.array([1.0, 2.0, 3.0])
>>> result = qr_solve(X, y)
>>> result.coef
array([-1.,  1.])
>>> # Rank-deficient case
>>> X = np.array([[1, 2, 2], [1, 3, 3], [1, 4, 4]])  # col 3 = col 2
>>> result = qr_solve(X, y)
>>> result.rank
2
>>> np.isnan(result.coef).sum()  # One coefficient is NaN
1

qr_solve_jax

qr_solve_jax(X: Array, y: Array) -> QRSolveResult

Solve least squares via pivoted QR decomposition (returns backend arrays).

Uses pivoted QR to detect and handle rank deficiency, matching R's lm() behavior for collinear predictors.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | Array | Design matrix of shape (n, p), as backend array. | required |
| y | Array | Response variable of shape (n,), as backend array. | required |

Returns:

| Type | Description |
| --- | --- |
| QRSolveResult | QRSolveResult with backend arrays. |

svd_solve

svd_solve(X: np.ndarray, y: np.ndarray, rcond: float = 1e-15) -> SVDSolveResult

Solve least squares via SVD (handles rank deficiency).

SVD is more robust to rank deficiency and multicollinearity.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | ndarray | Design matrix of shape (n, p). | required |
| y | ndarray | Response variable of shape (n,). | required |
| rcond | float | Relative condition number for rank determination. Singular values smaller than rcond * largest_singular_value are treated as zero. | 1e-15 |

Returns:

| Type | Description |
| --- | --- |
| SVDSolveResult | SVDSolveResult containing coef, vcov, sigma2, fitted, residuals, df_resid, rank, and s (singular values, shape (min(n, p),)); see SVDSolveResult for field details. |

Note: Uses Singular Value Decomposition: X = U S V^T. beta_hat = V S^{-1} U^T y (using pseudoinverse for rank deficiency). Var(beta_hat) = sigma^2 V S^{-2} V^T.

Handles rank deficiency by identifying small singular values (< rcond * max(S)) and treating them as zero.
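The note above amounts to the following NumPy sketch (`svd_lstsq` is an illustrative helper, not the library function, and it omits the vcov and residual-variance bookkeeping):

```python
import numpy as np

def svd_lstsq(X, y, rcond=1e-15):
    """Least squares via the SVD pseudoinverse: beta = V S^+ U' y."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    tol = rcond * s[0]                        # rcond * largest singular value
    s_inv = np.where(s > tol, 1.0 / s, 0.0)   # zero out tiny singular values
    beta = Vt.T @ (s_inv * (U.T @ y))
    return beta, int(np.sum(s > tol))

X = np.array([[1.0, 2.0, 2.0], [1.0, 3.0, 3.0], [1.0, 4.0, 4.0]])  # col 3 = col 2
y = np.array([1.0, 2.0, 3.0])
beta, rank = svd_lstsq(X, y)
```

Despite the rank deficiency, the fitted values X @ beta reproduce y exactly (the pseudoinverse picks the minimum-norm solution).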

Examples:

>>> # Rank deficient case
>>> X = np.array([[1, 2, 2], [1, 3, 3], [1, 4, 4]])  # col 3 = col 2
>>> y = np.array([1.0, 2.0, 3.0])
>>> result = svd_solve(X, y)
>>> result.rank  # Will be 2, not 3
2

svd_solve_jax

svd_solve_jax(X: Array, y: Array, rcond: float = 1e-15) -> SVDSolveResult

Solve least squares via SVD (returns backend arrays).

This is the internal API for use within bossanova where backend arrays are preferred. For user-facing code, use svd_solve() instead.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | Array | Design matrix of shape (n, p), as backend array. | required |
| y | Array | Response variable of shape (n,), as backend array. | required |
| rcond | float | Relative condition number for rank determination. | 1e-15 |

Returns:

| Type | Description |
| --- | --- |
| SVDSolveResult | SVDSolveResult with backend arrays. |

Modules

qr

QR-based least squares solver with rank deficiency detection.

Classes:

| Name | Description |
| --- | --- |
| QRSolveResult | Result container for QR solve. |

Functions:

| Name | Description |
| --- | --- |
| qr_solve | Solve least squares via pivoted QR decomposition. |
| qr_solve_jax | Solve least squares via pivoted QR decomposition (returns backend arrays). |


rank

Rank deficiency detection for design matrices.

Functions:

| Name | Description |
| --- | --- |
| detect_rank_deficiency | Detect rank deficiency in a design matrix via pivoted QR. |


schur

Schur complement variance-covariance computation for mixed models.

Functions:

| Name | Description |
| --- | --- |
| compute_vcov_schur_sparse | Compute variance-covariance matrix of fixed effects via Schur complement. |


sparse

Sparse factorization and solve with CHOLMOD/splu backend abstraction.

Classes:

| Name | Description |
| --- | --- |
| CHOLMODFactorization | Wrapper around CHOLMOD sparse Cholesky factorization. |
| SPLUFactorization | Wrapper around scipy.sparse.linalg.splu factorization. |
| SparseFactorization | Abstract base class for sparse matrix factorization. |

Functions:

| Name | Description |
| --- | --- |
| compute_sparse_cholesky | Factor a sparse symmetric positive definite matrix. |


svd

SVD-based least squares solver, robust to rank deficiency.

Classes:

| Name | Description |
| --- | --- |
| SVDSolveResult | Result container for SVD solve. |

Functions:

| Name | Description |
| --- | --- |
| svd_solve | Solve least squares via SVD (handles rank deficiency). |
| svd_solve_jax | Solve least squares via SVD (returns backend arrays). |
