APML0 {APML0}    R Documentation
Fit linear, logistic and Cox models regularized with an L0, lasso (L1), elastic-net (L1 and L2), or net (L1 and Laplacian) penalty, and their adaptive forms, such as the adaptive lasso / elastic-net and the net adjusting for the signs of linked coefficients. The method solves the L0-penalty problem by simultaneously selecting the regularization parameter and the number of non-zero coefficients. This augmented and penalized minimization method provides an approximate solution to the L0-penalty problem, but runs as fast as an L1-regularized problem.
The package uses a one-step coordinate descent algorithm and runs extremely fast by taking the sparsity structure of the coefficients into account. It can handle very high-dimensional data.
APML0(x, y, family=c("gaussian", "binomial", "cox"),
      penalty=c("Lasso", "Enet", "Net"), Omega=NULL, alpha=1.0,
      lambda=NULL, nlambda=50, rlambda=NULL,
      wbeta=rep(1, ncol(x)), sgn=rep(1, ncol(x)),
      nfolds=1, foldid=NULL, inzero=TRUE, isd=FALSE, iysd=FALSE,
      keep.beta=FALSE, ifast=TRUE, thresh=1e-7, maxit=1e+5)
x: input matrix. Each row is an observation vector.

y: response variable. For family = "gaussian", y is a continuous vector. For family = "binomial", y is a binary vector. For family = "cox", y is a two-column matrix with columns named "time" and "status", where "status" is a binary event indicator.

family: type of outcome. Can be "gaussian", "binomial" or "cox".

penalty: penalty type. Can choose "Lasso", "Enet" (elastic net) or "Net" (L1 and Laplacian). The "Net" penalty is λ*{α*||β||_1+(1-α)/2*(β^{T}Lβ)}, where L is a Laplacian matrix calculated from Omega.

Omega: adjacency matrix with zero diagonal and non-negative off-diagonal entries, used for penalty = "Net" to calculate the Laplacian matrix L.

alpha: ratio between the L1 and Laplacian penalties for "Net", or between the L1 and L2 penalties for "Enet". Default is alpha = 1.0 (pure L1).

lambda: a user-supplied decreasing sequence of regularization values. If lambda = NULL (default), a sequence is computed based on nlambda and rlambda.

nlambda: number of lambda values in the computed sequence. Default is 50.

rlambda: fraction of the maximum lambda used to determine the smallest lambda in the sequence; determined automatically when rlambda = NULL.

wbeta: penalty weights used with the L1 penalty (adaptive L1), giving the term ∑_{j=1}^q w_j|β_j|. The default rep(1, ncol(x)) applies equal weight to all coefficients.

sgn: sign adjustment used with the Laplacian penalty (adaptive Laplacian). The default rep(1, ncol(x)) applies no adjustment.

nfolds: number of cross-validation folds. With nfolds = 1 (default), cross-validation is not performed.

foldid: an optional vector of values between 1 and nfolds identifying the fold to which each observation belongs.

inzero: logical flag for simultaneously selecting the number of non-zero coefficients with lambda. Default is inzero = TRUE.

isd: logical flag for outputting standardized coefficients. Default is isd = FALSE, which returns coefficients on the original scale.

iysd: logical flag for standardizing y prior to fitting (family = "gaussian" only). Default is iysd = FALSE.

keep.beta: logical flag for returning estimates for all lambda values. Default is keep.beta = FALSE.

ifast: logical flag for efficient calculation of risk-set updates for family = "cox". Default is ifast = TRUE.

thresh: convergence threshold for coordinate descent. Default value is 1e-7.

maxit: maximum number of iterations for coordinate descent. Default is 1e+5.
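The usage line above shows that a "Net" fit needs an adjacency matrix Omega with zero diagonal and non-negative off-diagonal entries. The sketch below builds an arbitrary illustrative adjacency (a chain graph linking neighboring predictors); the graph structure is an assumption for demonstration only, not something supplied by the package.

```r
### Net penalty: L1 plus a Laplacian built from an adjacency matrix ###
set.seed(1213)
N=100; p=30; p1=5
x=matrix(rnorm(N*p),N,p)
beta=rnorm(p1)
y=rnorm(N, x[,1:p1]%*%beta)

# Chain-graph adjacency: predictor j linked to predictor j+1
# (zero diagonal, non-negative off-diagonal, as Omega requires)
Omega=matrix(0,p,p)
for (j in 1:(p-1)) { Omega[j,j+1]=1; Omega[j+1,j]=1 }

# alpha balances the L1 term against the Laplacian smoothness term
fitn=APML0(x,y,penalty="Net",Omega=Omega,alpha=0.5,nlambda=10)
```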
A one-step coordinate descent algorithm is applied for each lambda. For family = "cox", ifast = TRUE adopts an efficient way to update the risk set, and the algorithm sometimes ends before all nlambda values of lambda have been evaluated. To evaluate small values of lambda, use ifast = FALSE. The two settings affect only the efficiency of the algorithm, not the estimates.
x is always standardized prior to fitting the model, and the estimates are returned on the original scale. For family = "gaussian", y is centered by removing its mean, so no intercept is output.

Cross-validation is used for tuning parameters. For inzero = TRUE, we further select the number of non-zero coefficients obtained from the regularized model at each lambda. This is motivated by formulating L0 variable selection in an augmented form, which shows significant improvement over the commonly used regularized methods without this technique.
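With cross-validation enabled, the tuned results can then be read from the returned components. A minimal sketch, assuming the component names documented under Value (lambda.min, Beta0):

```r
### Cross-validation with inzero selection of the number of non-zeros ###
set.seed(1213)
N=100; p=30; p1=5
x=matrix(rnorm(N*p),N,p)
y=rnorm(N, x[,1:p1]%*%rnorm(p1))

cvfit=APML0(x,y,penalty="Lasso",nlambda=10,nfolds=5,inzero=TRUE)
cvfit$lambda.min  # lambda giving the minimum cross-validation error
cvfit$Beta0       # coefficients after tuning the number of non-zeros
```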
An object with S3 class "APML0".
Beta: a sparse matrix of coefficients for each lambda, stored in class "dgCMatrix".

Beta0: coefficients after additionally tuning the number of non-zero coefficients; returned for inzero = TRUE.

fit: a data.frame containing lambda and the associated fit measures (with cross-validation, the cross-validation error).

fit0: a data.frame containing the fit measures after additionally tuning the number of non-zero coefficients; returned for inzero = TRUE.

lambda.min: value of lambda that gives the minimum cross-validation error.

lambda.opt: value of lambda selected after additionally tuning the number of non-zero coefficients; returned for inzero = TRUE.

penalty: penalty type.

adaptive: logical flags for the adaptive version (see above).

flag: convergence flag (for internal debugging).
The function may terminate early and return NULL.
Xiang Li, Shanghong Xie, Donglin Zeng and Yuanjia Wang
Maintainer: Xiang Li <xli256@its.jnj.com>, Shanghong Xie <sx2168@cumc.columbia.edu>
Li, X., Xie, S., Zeng, D., Wang, Y. (2017).
Efficient Method to Optimally Identify Important Biomarkers for Disease Outcomes with High-dimensional Data. Statistics in Medicine, accepted.
Boyd, S., Parikh, N., Chu, E., Peleato, B., & Eckstein, J. (2011).
Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), 1-122.
http://dl.acm.org/citation.cfm?id=2185816
Friedman, J., Hastie, T. and Tibshirani, R. (2010).
Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1), 1-22.
http://www.jstatsoft.org/v33/i01/
### Linear model ###
set.seed(1213)
N=100;p=30;p1=5
x=matrix(rnorm(N*p),N,p)
beta=rnorm(p1)
xb=x[,1:p1]%*%beta
y=rnorm(N,xb)

fiti=APML0(x,y,penalty="Lasso",nlambda=10) # Lasso
fiti2=APML0(x,y,penalty="Lasso",nlambda=10,nfolds=10) # Lasso with cross-validation
# attributes(fiti)

### Logistic model ###
set.seed(1213)
N=100;p=30;p1=5
x=matrix(rnorm(N*p),N,p)
beta=rnorm(p1)
xb=x[,1:p1]%*%beta
y=rbinom(n=N, size=1, prob=1.0/(1.0+exp(-xb)))

fiti=APML0(x,y,family="binomial",penalty="Lasso",nlambda=10) # Lasso
fiti2=APML0(x,y,family="binomial",penalty="Lasso",nlambda=10,nfolds=10) # Lasso with cross-validation
# attributes(fiti)

### Cox model ###
set.seed(1213)
N=100;p=30;p1=5
x=matrix(rnorm(N*p),N,p)
beta=rnorm(p1)
xb=x[,1:p1]%*%beta
ty=rexp(N,exp(xb))
tcens=rbinom(n=N,prob=.3,size=1) # censoring indicator
y=cbind(time=ty,status=1-tcens)

fiti=APML0(x,y,family="cox",penalty="Lasso",nlambda=10) # Lasso
fiti2=APML0(x,y,family="cox",penalty="Lasso",nlambda=10,nfolds=10) # Lasso with cross-validation
# attributes(fiti)