GSoC 2019 Final Report
MoMA is an R package for multivariate analysis of large, high-dimensional, structured data sets. Using a novel regularized singular value decomposition (SVD), MoMA supports sparse, functional, and sparse-and-functional variants of several classical multivariate analysis techniques, including Principal Components Analysis (PCA), Linear Discriminant Analysis (LDA), Canonical Correlation Analysis (CCA), and Partial Least Squares (PLS). The MoMA framework was first discussed by Allen and Weylandt [1] in the context of PCA and later extended to other methods by Weylandt, Liao, and Allen (to appear).
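To give a rough sense of how a regularized SVD works, here is a minimal Python sketch of the rank-one case with l1 (sparsity) penalties: alternating power iterations with a soft-thresholding (proximal) step after each matrix multiplication. This is an illustration only, not MoMA's actual algorithm or API — the function names `soft_threshold` and `sparse_rank1_svd` are hypothetical, and the smoothness penalties, deflation, and exact solvers the package provides are omitted.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding: the proximal operator of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_rank1_svd(X, lam_u=0.0, lam_v=0.0, n_iter=100):
    """Rank-one SVD with optional l1 penalties on the singular vectors,
    via alternating soft-thresholded power iterations, warm-started from
    the unpenalized SVD. With lam_u = lam_v = 0 this is plain power iteration."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    v = Vt[0]                       # warm start: leading right singular vector
    u = np.zeros(X.shape[0])
    for _ in range(n_iter):
        u = soft_threshold(X @ v, lam_u)
        nu = np.linalg.norm(u)
        if nu == 0.0:               # penalty too large: everything zeroed out
            break
        u /= nu
        v = soft_threshold(X.T @ u, lam_v)
        nv = np.linalg.norm(v)
        if nv == 0.0:
            break
        v /= nv
    d = float(u @ X @ v)            # implied singular value
    return u, d, v
```

With the penalties set to zero this recovers the leading singular triple; with positive penalties, small entries of the singular vectors are driven exactly to zero, which is what makes the estimated components sparse and interpretable.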
This page describes work done by Luofeng (Luke) Liao as part of Google Summer of Code (GSoC) 2019.
Detailed documentation and worked examples are available at https://DataSlingers.github.io/MoMA/.
To install MoMA, run the following from an R console:

```r
library(devtools)
devtools::install_github("DataSlingers/MoMA@master")
```
The following Pull Requests were merged during GSoC 2019:
- Automated Code formatting (#33)
- Code coverage (#49)
- Add Additional Controls for Proximal Gradient Sub-Problem Solvers (#36)
- Refactor Fitting Algorithms (#37)
- R6 PCA wrappers (#42)
- Use Closures to Specify Sparsity and Smoothness Parameters (#48)
- Support LDA and CCA (#54)
- Extend the package to allow more penalty choices and multivariate methods (#52, #19)
Remaining work includes:

- More helper R6 methods to facilitate exploration of the results.
- Support for caching and frame smoothing in Shiny apps.
Luofeng Liao acknowledges support from the Google Summer of Code program during Summer 2018 and 2019. Michael Weylandt acknowledges support from the NSF Graduate Research Fellowship Program under grant number 1450681. Genevera Allen acknowledges support from NSF DMS-1554821, NSF NeuroNex-1707400, and NSF DMS-126405.
[1] G.I. Allen and M. Weylandt. "Sparse and Functional Principal Components Analysis." DSW 2019: Proceedings of the IEEE Data Science Workshop 2019, pp. 11-16. DOI:10.1109/DSW.2019.8755778