svd.als {softImpute} | R Documentation
Fit a low-rank SVD to a complete matrix by alternating orthogonal ridge regression. Special sparse-matrix classes are available for very large matrices, including "SparseplusLowRank" versions for row- and column-centered sparse matrices.
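The idea behind a sparse-plus-low-rank representation can be sketched outside the package: a centered sparse matrix never needs to be formed densely, because the centering is a rank-one correction that can be applied on the fly. The following NumPy/SciPy sketch is purely illustrative (the function name `centered_matvec` is invented here, not a softImpute class or method):

```python
import numpy as np
from scipy import sparse

# Illustrative sketch: row-centering X - alpha 1' densifies a sparse X,
# so instead keep X sparse, store the rank-one term alpha separately,
# and apply both pieces when multiplying by a vector.
rng = np.random.default_rng(1)
X = sparse.random(50, 30, density=0.1, random_state=1, format="csr")
alpha = np.asarray(X.mean(axis=1)).ravel()  # row means to subtract

def centered_matvec(v):
    # (X - alpha 1') v  ==  X v - alpha * sum(v)
    return X @ v - alpha * v.sum()

v = rng.standard_normal(30)
dense = X.toarray() - alpha[:, None]  # explicit version, for checking only
print(np.allclose(centered_matvec(v), dense @ v))  # True
```

Iterative SVD algorithms only touch the matrix through products like this, which is why the centered matrix can stay in sparse-plus-low-rank form throughout.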
svd.als(x, rank.max = 2, lambda = 0, thresh = 1e-05, maxit = 100, trace.it = FALSE, warm.start = NULL, final.svd = TRUE)
x: An m by n matrix. Large matrices can be in "sparseMatrix" format, as well as "SparseplusLowRank". The latter arise after centering sparse matrices, for example with biScale.

rank.max: The maximum rank for the solution. This is also the number of columns in the left and right matrices of orthogonal singular vectors. 'rank.max' should be no bigger than 'min(dim(x))'.

lambda: The regularization parameter.

thresh: Convergence threshold, measured as the relative change in the Frobenius norm between two successive estimates.

maxit: Maximum number of iterations.

trace.it: With trace.it=TRUE, convergence progress is reported.

warm.start: An svd object can be supplied as a warm start. If the solution requested has higher rank than the warm start, the additional subspace is initialized with random Gaussians (and then orthogonalized with respect to the rest).

final.svd: Although in theory this algorithm converges to the solution of a nuclear-norm regularized low-rank matrix approximation problem, with potentially some singular values equal to zero, in practice only near-zeros are achieved. This final step does one more iteration with lambda=0, followed by soft-thresholding of the singular values, setting the near-zero values exactly to zero.
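The warm-start rule for growing the rank can be sketched in NumPy (an illustration of the idea described for 'warm.start', not the package's own code; `grow_subspace` is a name invented here):

```python
import numpy as np

# Illustrative sketch: pad an orthonormal basis U (m x k) out to rank_new
# columns by appending random Gaussian columns, projecting out the
# existing span, then orthonormalizing the remainder.
def grow_subspace(U, rank_new, seed=0):
    m, k = U.shape
    G = np.random.default_rng(seed).standard_normal((m, rank_new - k))
    G -= U @ (U.T @ G)        # orthogonalize w.r.t. the warm-start subspace
    Q, _ = np.linalg.qr(G)    # orthonormalize the new directions
    return np.hstack([U, Q])

U0, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((40, 3)))
U1 = grow_subspace(U0, 5)
print(U1.shape)                           # (40, 5)
print(np.allclose(U1.T @ U1, np.eye(5)))  # True
```

Starting the new directions orthogonal to the warm start means the alternating iterations refine, rather than rediscover, the subspace already found at lower rank.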
This algorithm solves the problem

    min ||X - M||_F^2 + λ ||M||_*

subject to rank(M) ≤ r, where ||M||_* is the nuclear norm of M (the sum of its singular values). It achieves this by solving the related problem

    min ||X - AB'||_F^2 + (λ/2)(||A||_F^2 + ||B||_F^2)

subject to rank(A) = rank(B) ≤ r. The solution is a rank-restricted, soft-thresholded SVD of X.
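The closed-form solution named above is easy to state directly: compute the SVD, shrink every singular value by λ (flooring at zero), and keep at most r components. A minimal NumPy sketch of that solution (an illustration, not the package's implementation; `soft_thresholded_svd` is a name invented here):

```python
import numpy as np

# Illustrative sketch: the rank-restricted, soft-thresholded SVD keeps at
# most r components and shrinks each singular value by lam, flooring at 0.
def soft_thresholded_svd(X, r, lam):
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    d = np.maximum(d - lam, 0.0)
    return U[:, :r], d[:r], Vt[:r, :]

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 10))
U, d, Vt = soft_thresholded_svd(X, r=5, lam=1.0)
# d matches the pmax(svd(x)$d - lambda, 0) check used in the Examples
ref = np.maximum(np.linalg.svd(X, compute_uv=False) - 1.0, 0.0)[:5]
print(np.allclose(d, ref))  # True
```

This is the same comparison the Examples section makes in R, where `fit$d` from svd.als is checked against `pmax(svd(x)$d - lambda, 0)`.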
An svd object is returned, with components "u", "d", and "v".
u: An m by rank.max matrix with the left orthogonal singular vectors.

d: A vector of length rank.max of soft-thresholded singular values.

v: An n by rank.max matrix with the right orthogonal singular vectors.
Trevor Hastie, Rahul Mazumder
Maintainer: Trevor Hastie hastie@stanford.edu
Rahul Mazumder, Trevor Hastie and Rob Tibshirani (2010). Spectral Regularization Algorithms for Learning Large Incomplete Matrices. Journal of Machine Learning Research 11, 2287-2322.
http://www.stanford.edu/~hastie/Papers/mazumder10a.pdf
biScale, softImpute, Incomplete, lambda0, impute, complete
# create a matrix and run the algorithm
set.seed(101)
n=100
p=50
J=25
np=n*p
x=matrix(rnorm(n*J),n,J)%*%matrix(rnorm(J*p),J,p)+matrix(rnorm(np),n,p)/5
fit=svd.als(x,rank.max=25,lambda=50)
fit$d
pmax(svd(x)$d-50,0)
# now create a sparse matrix and do the same
nnz=trunc(np*.3)
inz=sample(seq(np),nnz,replace=FALSE)
i=row(x)[inz]
j=col(x)[inz]
x=rnorm(nnz)
xS=sparseMatrix(x=x,i=i,j=j)
fit2=svd.als(xS,rank.max=20,lambda=7)
fit2$d
pmax(svd(as.matrix(xS))$d-7,0)