This project trains Tabular Deep Learning models over 8 runs at once and stores the experimental results. It additionally uses wandb for experiment tracking.
For paper implementations, see the section "Papers and projects".
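The README only notes that wandb is used; below is a minimal, hypothetical sketch of how logging might be wired into a training run (the project name, metric keys, and dummy loop are assumptions, not the repository's exact code):

```python
import wandb

# Hypothetical sketch: project name and metric keys are assumptions,
# not the repository's actual logging calls.
run = wandb.init(project="tabular-deep-learning",
                 config={"model": "fttransformer", "data": "microsoft"})

for epoch in range(3):
    # Stand-ins for the real training/evaluation steps in train.py.
    train_loss = 1.0 / (epoch + 1)
    val_score = 0.5 + 0.1 * epoch
    run.log({"epoch": epoch, "train_loss": train_loss, "val_score": val_score})

run.finish()
```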
$cd Researh
$sh experiment.sh
Default run: python main.py --action train --model fttransformer --data microsoft --savepath output
Result information is saved to Output/model_name/data/default/info.json.
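main.py accepts the flags shown above; here is a rough sketch of how they could be parsed with argparse (only the flag names come from the command above; the choices and defaults are assumptions):

```python
import argparse

def parse_args():
    # Flags mirror the command shown above; choices/defaults here are assumptions.
    parser = argparse.ArgumentParser(description="Tabular deep learning experiments")
    parser.add_argument("--action", choices=["train", "infer"], default="train")
    parser.add_argument("--model", default="fttransformer")
    parser.add_argument("--data", default="microsoft")
    parser.add_argument("--savepath", default="output")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(args.action, args.model, args.data, args.savepath)
```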
We upload the datasets used in the paper with our train/val/test splits here. We do not impose additional restrictions on the original dataset licenses; the sources of the data are listed in the paper appendix.
You can download and extract the datasets with the following commands:
conda activate tdl
cd $Researh
wget "https://www.dropbox.com/s/o53umyg6mn3zhxy/data.tar.gz?dl=1" -O rtdl_data.tar.gz
tar -xzvf rtdl_data.tar.gz
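Once extracted, each dataset directory can be loaded with plain NumPy. A minimal sketch follows; the file names (N_train.npy, y_train.npy, and so on) assume an RTDL-style split layout and are not guaranteed by this README:

```python
from pathlib import Path
import numpy as np

# Assumed layout: Data/<dataset>/{N,y}_{train,val,test}.npy (RTDL-style splits).
data_dir = Path("Data/microsoft")

splits = {}
for part in ["train", "val", "test"]:
    num_path = data_dir / f"N_{part}.npy"    # numerical features
    label_path = data_dir / f"y_{part}.npy"  # targets
    if num_path.exists() and label_path.exists():
        splits[part] = (np.load(num_path), np.load(label_path))

for part, (X, y) in splits.items():
    print(part, X.shape, y.shape)
```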
File Structure
├── Data
│ ├── microsoft
│ │ └── ...
│ ├── yahoo
│ │ └── ...
│ └── etc..
├── Output
│ ├── ft-transformer
│ │ ├── microsoft
│ │ │ ├── default
│ │ │ └── ensemble
│ │ └── yahoo
│ │ └── etc..
│ └── resnet..
├── config.yaml "Model architecture parameters (see the example below the tree)"
├── experiment.sh
├── main.py
├── infer.py
├── train.py
├── model.py
├── utils.py
etc..
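As referenced in the tree above, config.yaml stores the model architecture parameters. A minimal sketch of reading it is shown here; the key names mentioned in the comment are assumptions about the file's contents:

```python
import yaml

# Load the architecture/training parameters from config.yaml.
# Hypothetical keys, e.g. d_token, n_blocks, attention_dropout -- the real file may differ.
with open("config.yaml") as f:
    config = yaml.safe_load(f)

print(config)
```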
Papers and projects

Name | Location | Comment |
---|---|---|
Revisiting Pretraining Objectives for Tabular Deep Learning | link | arXiv 2022 |
On Embeddings for Numerical Features in Tabular Deep Learning | link | arXiv 2022 |
@article{park2022tabular,
    title   = {Research on Tabular Deep Learning Model},
    author  = {Wongi Park},
    journal = {GitHub},
    url     = {https://github.com/kalelpark/DeepLearning-for-Tabular-Data},
    year    = {2022},
}