This repository is a TensorFlow implementation of "Local Low-rank Matrix Approximation" (LLORMA).
It implements two versions of LLORMA: Parallel LLORMA (ICML'13) and Global LLORMA (JMLR'16).

The batch size has been increased for performance. To reproduce the results reported in the original papers, set the batch size to 1.

The code is adapted from https://github.com/jnhwkim/PREA/tree/master/src/main/java/prea/recommender/llorma.

Folder description
- llorma_p: Parallel LLORMA (ICML'13)
- llorma_g: Global LLORMA (JMLR'16)

Dependencies
- Python 3.5
- see requirements.txt

How to run
python train.py
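
A minimal end-to-end sketch, assuming each of the two folders listed above contains its own train.py and that requirements.txt sits at the repository root (adjust paths to the actual layout):

```shell
# Install the dependencies listed in requirements.txt
# (assumed to be at the repository root)
pip install -r requirements.txt

# Train Parallel LLORMA (ICML'13);
# assumes train.py lives inside llorma_p/
cd llorma_p
python train.py

# Train Global LLORMA (JMLR'16);
# assumes train.py lives inside llorma_g/
cd ../llorma_g
python train.py
```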