Test project for experimenting with different tools that automate AI pipelines
├── data
│ ├── external <- Data from third party sources.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ └── raw <- The original, immutable data dump.
│
├── docs <- Project documentation
│
├── metrics <- All tracking info related to a specific experiment (e.g. loss function values and the hyper-parameters used)
│
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks <- Jupyter notebooks. Naming convention is ...
│
├── pipelines <- DVC pipelines
│
├── src <- Source code for use in this project.
│
│
├── LICENSE
├── README.md <- The top-level README for developers using this project.
├── environment.yml <- The requirements file for reproducing the analysis environment, e.g.
generated with `conda env export > environment.yml`. Includes the following libs:
jupyter numpy=1.15.2 pandas=0.23.4 matplotlib=2.2.3 seaborn=0.9.0
scikit-learn=0.20.0 xgboost=0.80 pytorch fastai
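
A minimal sketch of what a DVC pipeline definition inside the pipelines folder could look like, tying together the data, models, and metrics directories above. Stage names, the script path, and file names are illustrative assumptions, not part of this project:

```yaml
# dvc.yaml — hypothetical training stage (paths are examples)
stages:
  train:
    cmd: python src/train.py          # assumed entry point in src/
    deps:
      - data/processed                # canonical modeling data
      - src/train.py
    outs:
      - models/model.pkl              # serialized model artifact
    metrics:
      - metrics/metrics.json:         # experiment tracking info
          cache: false                # keep metrics in git, not the DVC cache
```

With a file like this in place, `dvc repro` re-runs only the stages whose dependencies changed, and `dvc metrics show` prints the tracked metrics for the current experiment.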
Project based on the Quantyca data science base project template.