root_pcp() implements the convex PCP algorithm "Square root principal component pursuit" as described in Zhang et al. (2021), outfitted with environmental health (EH)-specific extensions as described in Gibson et al. (2022).
Given an observed data matrix D, and regularization parameters lambda and mu, root_pcp() aims to find the best low-rank and sparse estimates L and S. The L matrix encodes latent patterns that govern the observed data. The S matrix captures any extreme events in the data unexplained by the underlying patterns in L.
Being convex, root_pcp() determines the rank r, or number of latent patterns in the data, autonomously during its optimization. As such, the user does not need to specify the desired rank r of the output L matrix, as is required by the non-convex PCP model rrmc().
Experimentally, root_pcp() has performed best on datasets governed by well-defined underlying patterns, i.e. those with quickly decaying singular values. Such structure is typical of imaging and video data, but uncommon for EH data. For observed data with a complex low-rank structure (slowly decaying singular values), like EH data, rrmc() may offer a better model estimate.
Three EH-specific extensions are currently supported by root_pcp():

1. The model can handle missing values in the input data matrix D;
2. The model can handle measurements that fall below the limit of detection (LOD), if LOD information is provided by the user; and
3. The model is equipped with an optional non-negativity constraint on the low-rank L matrix, ensuring that all output values in L are \(\geq 0\).
Usage
root_pcp(
  D,
  lambda = NULL,
  mu = NULL,
  LOD = -Inf,
  non_negative = TRUE,
  max_iter = 10000,
  verbose = FALSE
)
Arguments
- D

The input data matrix (can contain NA values). Note that PCP will converge much more quickly when D has been standardized in some way (e.g. scaling columns by their standard deviations, or column-wise min-max normalization).

- lambda, mu

(Optional) A pair of doubles, each in the range [0, Inf), regularizing S and L. lambda controls the sparsity of the output S matrix; larger values penalize non-zero entries in S more stringently, driving the recovery of sparser S matrices. mu adjusts the model's sensitivity to noise; larger values penalize errors between the predicted model and the observed data more severely. It is highly recommended that the user tune both of these parameters using grid_search_cv() for each unique data matrix D. By default, both lambda and mu are NULL, in which case the theoretically optimal values are used, calculated according to get_pcp_defaults(). (An example call illustrating these arguments follows this list.)

- LOD

(Optional) The limit of detection (LOD) data. Entries in D that satisfy D >= LOD are understood to be above the LOD; otherwise those entries are treated as below the LOD. LOD can be either:

- A double, implying a universal LOD common across all measurements in D;
- A vector of length ncol(D), signifying a column-specific LOD, where each entry in the LOD vector corresponds to the LOD for each column in D; or
- A matrix of dimension dim(D), indicating an observation-specific LOD, where each entry in the LOD matrix corresponds to the LOD for each entry in D.

By default, LOD = -Inf, indicating there are no known LODs for PCP to leverage.

- non_negative

(Optional) A logical indicating whether or not the non-negativity constraint should be used to constrain the output L matrix to have all entries \(\geq 0\). By default, non_negative = TRUE.

- max_iter

(Optional) An integer specifying the maximum number of iterations to allow PCP before giving up on meeting PCP's convergence criteria. By default, max_iter = 10000, suitable for most problems.

- verbose

(Optional) A logical indicating whether or not to print information in real time over the course of PCP's optimization. By default, verbose = FALSE.
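For orientation, here is a minimal sketch of a fully specified call; the lambda, mu, and LOD values below are hypothetical placeholders for illustration, not recommendations:

pcp_model <- root_pcp(
  D,                   # a column-standardized numeric data matrix
  lambda = 0.1,        # hypothetical value; tune with grid_search_cv()
  mu = 2,              # hypothetical value; tune with grid_search_cv()
  LOD = 0.05,          # hypothetical universal LOD for all measurements
  non_negative = TRUE, # constrain all entries of L to be >= 0
  max_iter = 10000,    # maximum iterations before giving up on convergence
  verbose = FALSE      # suppress real-time optimization info
)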
Value
A list containing:

- L: The rank-r low-rank matrix encoding the r-many latent patterns governing the observed input data matrix D. dim(L) will be the same as dim(D). To explicitly obtain the underlying patterns, L can be used as the input to any matrix factorization technique of choice, e.g. PCA, factor analysis, or non-negative matrix factorization (see the sketch after this list).
- S: The sparse matrix containing the rare outlying or extreme observations in D that are not explained by the underlying patterns in the corresponding L matrix. dim(S) will be the same as dim(D). Most entries in S are 0, while non-zero entries identify the extreme outlying observations in D.
- num_iter: The number of iterations taken to reach convergence. If num_iter == max_iter, then root_pcp() did not converge.
- objective: A vector containing the values of root_pcp()'s objective function over the course of optimization.
- converged: A boolean indicating whether the convergence criteria were met before max_iter was reached.
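For example, a minimal sketch of extracting patterns from the recovered L matrix with PCA via base R's prcomp(); pcp_model is assumed to be the output of a converged root_pcp() run:

# Factorize the recovered low-rank matrix to inspect its latent patterns
r <- matrix_rank(pcp_model$L, 5e-2)      # numerical rank, as in the Examples below
patterns <- prcomp(pcp_model$L)          # PCA on L, not on the noisy D
patterns$rotation[, 1:r, drop = FALSE]   # loadings for the r recovered patterns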
The objective function
root_pcp() optimizes the following objective function:
$$\min_{L, S} ||L||_* + \lambda ||S||_1 + \mu ||L + S - D||_F$$
The first term is the nuclear norm of the L
matrix, incentivizing L
to be
low-rank. The second term is the \(\ell_1\) norm of the S
matrix,
encouraging S
to be sparse. The third term is the Frobenius norm
applied to the model's noise, ensuring that the estimated low-rank and sparse
models L
and S
together have high fidelity to the observed data D
.
The objective is neither smooth nor differentiable; however, it is convex and separable. As such, it is optimized using the Alternating Direction Method of Multipliers (ADMM) algorithm (Boyd et al. 2011; Gao et al. 2020).
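For concreteness, the objective can be evaluated directly in base R (a minimal sketch; the nuclear norm is the sum of L's singular values):

# Evaluate the square root PCP objective for given L, S, D, lambda, and mu
root_pcp_objective <- function(L, S, D, lambda, mu) {
  nuclear_norm <- sum(svd(L)$d)     # ||L||_* : sum of singular values of L
  l1_norm <- sum(abs(S))            # ||S||_1 : sum of absolute entries of S
  frob_norm <- norm(L + S - D, "F") # ||L + S - D||_F : model-data discrepancy
  nuclear_norm + lambda * l1_norm + mu * frob_norm
}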
The lambda and mu parameters

lambda controls the sparsity of root_pcp()'s output S matrix; larger values of lambda penalize non-zero entries in S more stringently, driving the recovery of sparser S matrices. Therefore, if you a priori expect few outlying events in your model, you might expect a grid search to recover relatively larger lambda values, and vice-versa.

mu adjusts root_pcp()'s sensitivity to noise; larger values of mu penalize errors between the predicted model and the observed data (i.e. noise) more severely. Environmental data subject to higher noise levels therefore require a root_pcp() model equipped with smaller mu values (since higher noise means a greater discrepancy between the observed mixture and the true underlying low-rank and sparse model). In virtually noise-free settings (e.g. simulations), larger values of mu would be appropriate.
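In practice, such intuition can be checked with a grid search. Below is a minimal sketch, assuming grid_search_cv() accepts the data matrix, the PCP function to tune, and a grid of candidate (lambda, mu) settings; consult grid_search_cv()'s own documentation for its exact interface:

# Hypothetical tuning sketch over a small grid of (lambda, mu) pairs
param_grid <- expand.grid(
  lambda = c(0.1, 0.2, 0.3),
  mu = c(1, 3, 5)
)
search_results <- grid_search_cv(D, pcp_fn = root_pcp, grid = param_grid)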
The default values of lambda
and mu
offer theoretical guarantees
of optimal estimation performance, and stable recovery of L
and S
. By
"stable", we mean root_pcp()
's reconstruction error is, in the worst case,
proportional to the magnitude of the noise corrupting the observed data
(\(||Z||_F\)), often outperforming this upper bound.
Candès et al. (2011) obtained the guarantee for lambda, while Zhang et al. (2021) obtained the result for mu.
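These defaults can be inspected directly; a minimal sketch, assuming get_pcp_defaults() (referenced in the Arguments section above) returns a list with lambda and mu elements:

# Retrieve the theoretically optimal (lambda, mu) pair for a matrix D;
# root_pcp() falls back on these values when lambda and mu are NULL
default_params <- get_pcp_defaults(D)
default_params$lambda
default_params$mu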
Environmental health specific extensions
We refer interested readers to Gibson et al. (2022) for the complete details regarding the EH-specific extensions.
Missing value functionality: PCP assumes that the same data generating mechanisms govern both the missing and the observed entries in D. Because PCP primarily seeks accurate estimation of patterns rather than individual observations, this assumption is reasonable, though it may not be justified in some edge cases. Missing values in D are therefore reconstructed in the recovered low-rank L matrix according to the underlying patterns in L. There are three corollaries to keep in mind regarding the quality of recovered missing observations (a brief code sketch follows this list):
1. Recovery of missing entries in D relies on accurate estimation of L;
2. The fewer observations there are in D, the harder it is to accurately reconstruct L (therefore estimation of both unobserved and observed measurements in L degrades); and
3. Greater proportions of missingness in D artificially drive up the sparsity of the estimated S matrix. This is because it is not possible to recover a sparse event in S when the corresponding entry in D is unobserved. By definition, sparse events in S cannot be explained by the consistent patterns in L. Practically, if 20% of the entries in D are missing, then at least 20% of the entries in S will be 0.
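To illustrate, here is a minimal sketch using base R to corrupt an observed data matrix D with missing values before fitting; the 20% missingness rate and the (lambda, mu) pair are arbitrary placeholder choices:

# Randomly mask 20% of the entries of D as missing
D_missing <- D
D_missing[sample(length(D_missing), size = round(0.2 * length(D_missing)))] <- NA

# root_pcp() handles the NA entries natively, reconstructing them in L
pcp_missing <- root_pcp(D_missing, lambda = 0.1, mu = 2)
mean(pcp_missing$S == 0)  # at least 0.2, per corollary 3 above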
Handling measurements below the limit of detection: When equipped with LOD information, PCP treats all estimates of a below-LOD value as equally valid, so long as they fall between 0 and the LOD. Over the course of optimization, observations below the LOD are pushed into this known range \([0, LOD]\) using penalties from above and below: should a \(< LOD\) estimate fall \(< 0\), it is stringently penalized, since measured observations cannot be negative. On the other hand, if a \(< LOD\) estimate exceeds the LOD, it is also heavily penalized: less so than when \(< 0\), but more so than observations known to be above the LOD, because we have prior information that these observations must be below the LOD. Observations known to be above the LOD are penalized as usual, using the Frobenius norm in the above objective function.
Gibson et al. (2022) demonstrates that
in experimental settings with up to 50% of the data corrupted below the LOD,
PCP with the LOD extension boasts superior accuracy of recovered L
models
compared to PCA coupled with \(LOD / \sqrt{2}\) imputation. PCP even
outperforms PCA in low-noise scenarios with as much as 75% of the data
corrupted below the LOD. The few situations in which PCA bettered PCP were
those pathological cases in which D
was characterized by extreme noise and
huge proportions (i.e., 75%) of observations falling below the LOD.
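As a minimal sketch, assuming a universal LOD of 0.1 (an arbitrary placeholder value) and placeholder (lambda, mu) values, censored measurements can be coded as any value beneath the LOD (here -1) before fitting:

# Censor measurements falling below a universal LOD of 0.1
lod <- 0.1
D_lod <- D
D_lod[D_lod < lod] <- -1  # any entry < LOD is treated as below the LOD

# Supplying LOD lets root_pcp() push below-LOD estimates into [0, LOD]
pcp_lod <- root_pcp(D_lod, lambda = 0.1, mu = 2, LOD = lod)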
The non-negativity constraint on L
: To enhance interpretability of
PCP-rendered solutions, there is an optional non-negativity constraint
that can be imposed on the L
matrix to ensure all estimated values
within it are \(\geq 0\). This prevents researchers from having to deal
with negative observation values and questions surrounding their meaning
and utility. Non-negative L
models also allow for seamless use of methods
such as non-negative matrix factorization to extract non-negative patterns.
The non-negativity constraint is incorporated in the ADMM splitting technique
via the introduction of an additional optimization variable and corresponding
constraint.
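A minimal sketch verifying the constraint (the lambda and mu values are placeholders):

# With non_negative = TRUE (the default), every entry of L is >= 0,
# so L can feed directly into non-negative matrix factorization
pcp_nn <- root_pcp(D, lambda = 0.1, mu = 2, non_negative = TRUE)
min(pcp_nn$L)  # guaranteed to be >= 0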
References
Zhang, Junhui, Jingkai Yan, and John Wright. "Square root principal component pursuit: tuning-free noisy robust matrix recovery." Advances in Neural Information Processing Systems 34 (2021): 29464-29475.
Gibson, Elizabeth A., Junhui Zhang, Jingkai Yan, Lawrence Chillrud, Jaime Benavides, Yanelli Nunez, Julie B. Herbstman, Jeff Goldsmith, John Wright, and Marianthi-Anna Kioumourtzoglou. "Principal component pursuit for pattern identification in environmental mixtures." Environmental Health Perspectives 130, no. 11 (2022): 117008.
Boyd, Stephen, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. "Distributed optimization and statistical learning via the alternating direction method of multipliers." Foundations and Trends in Machine Learning 3, no. 1 (2011): 1-122.
Gao, Wenbo, Donald Goldfarb, and Frank E. Curtis. "ADMM for multiaffine constrained optimization." Optimization Methods and Software 35, no. 2 (2020): 257-303.
Candès, Emmanuel J., Xiaodong Li, Yi Ma, and John Wright. "Robust principal component analysis?." Journal of the ACM (JACM) 58, no. 3 (2011): 1-37.
Examples
#### -------Simple simulated PCP problem-------####
# First we will simulate a simple dataset with the sim_data() function.
# The dataset will be a 100x10 matrix comprised of:
# 1. A rank-2 component as the ground truth L matrix;
# 2. A ground truth sparse component S w/outliers along the diagonal; and
# 3. A dense Gaussian noise component
data <- sim_data(r = 2, sigma = 0.1)
# Best practice is to conduct a grid search with grid_search_cv() function,
# but we skip that here for brevity.
pcp_model <- root_pcp(data$D, lambda = 0.225, mu = 3.04)
data.frame(
"Estimated_L_rank" = matrix_rank(pcp_model$L, 5e-2),
"Observed_relative_error" = norm(data$L - data$D, "F") / norm(data$L, "F"),
"PCA_error" = norm(data$L - proj_rank_r(data$D, r = 2), "F") / norm(data$L, "F"),
"PCP_L_error" = norm(data$L - pcp_model$L, "F") / norm(data$L, "F"),
"PCP_S_error" = norm(data$S - pcp_model$S, "F") / norm(data$S, "F")
)
#> Estimated_L_rank Observed_relative_error PCA_error PCP_L_error PCP_S_error
#> 1 2 0.2298567 0.1040869 0.09485763 0.2453499