localAndGlobalJournal

Experiment code for LASA and GCR on univariate UCR datasets, accompanying the paper: Extracting Interpretable Local and Global Representations from Attention on Time Series. The experiments are based on TensorFlow and use seml to manage the different configurations.

For LASA see also: https://github.com/cslab-hub/LocalTSMHAInterpretability
For GCR see also: https://github.com/cslab-hub/GlobalTimeSeriesCoherenceMatrices

Description

These experiments analyse Transformer attention on univariate datasets from the UCR/UEA repository with LASA and GCR. Both methods rely on a symbolic approximation approach. LASA is a local abstraction technique that improves interpretability by reducing the complexity of the data. In these experiments, we analyse the performance and multiple XAI metrics of LASA to improve our understanding of the method and of Transformer attention itself. As a subvariant we introduce LASA-S, which tries to find shapelets in the abstracted data. GCR, on the other hand, is a global interpretation method which represents the data in a coherent multidimensional way, showing how each symbol affects each other symbol at a specific input position. We analyse the performance and multiple XAI metrics to further improve our understanding of GCR and Transformer attention, and we analyse the GCR's ability to approximate the task as well as the model. As subvariants we introduce the threshold-based GCR-T and the penalty-based GCR-P. The experiments are limited to univariate datasets with a maximal input length of 500, and each run per dataset is limited to 1 day of run-time.
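To illustrate the symbolic approximation idea underlying both methods, here is a minimal SAX-style sketch in pure Python. It is an illustrative stand-in, not the paper's implementation; segment count, alphabet size, and the example series are arbitrary choices.

```python
# Illustrative SAX-style symbolic approximation: z-normalize, reduce with
# PAA, then map each segment mean to a symbol. NOT the paper's code.
from statistics import NormalDist, mean, stdev

def sax(series, n_segments=4, n_bins=4):
    """Return a symbolic word for a univariate time series."""
    mu, sigma = mean(series), stdev(series)
    z = [(x - mu) / sigma for x in series]
    # Piecewise Aggregate Approximation: mean of each equal-width segment.
    seg_len = len(z) // n_segments
    paa = [mean(z[i * seg_len:(i + 1) * seg_len]) for i in range(n_segments)]
    # Breakpoints at equiprobable quantiles of the standard normal.
    breakpoints = [NormalDist().inv_cdf(k / n_bins) for k in range(1, n_bins)]
    symbols = "abcdefghijklmnop"[:n_bins]
    return "".join(symbols[sum(bp <= v for bp in breakpoints)] for v in paa)

word = sax([0.0, 0.1, 0.2, 1.5, 1.6, 1.4, -1.5, -1.6, -1.4, 0.0, 0.1, -0.1])
print(word)  # → "cdab"
```

Each symbol summarizes one segment of the normalized series, which is the kind of complexity reduction LASA's abstraction and GCR's symbol-level coherence build on.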

Files

mixModel.yaml - seml experiment configuration to train and evaluate the models
mixModelTrain.py - experiment runs for the different models
presults.yaml - seml experiment configuration to process all experiment results into one file
resultProcessing.py - experiment code for the presults.yaml config
modules - different modules for the different model types + helper code

Dependencies:

python==3.7.3
tensorflow-gpu==2.4.1
seaborn==0.10.1
scipy==1.7.3
scikit-learn==0.23.2
pandas==1.3.5
matplotlib==3.5.1
ipykernel==6.9.1

tensorflow_addons==0.14.0
tensorflow_probability==0.12.2
pyts==0.11.0
uea_ucr_datasets==0.1.2
dill==0.3.5.1
antropy==0.1.4
tslearn==0.5.2
sktime==0.9.0

We suggest the following installation:

1: conda create -n tsTransformer python==3.7.3 tensorflow-gpu==2.4.1 seaborn==0.10.1 scipy==1.7.3 scikit-learn==0.23.2 pandas==1.3.5 matplotlib==3.5.1 ipykernel==6.9.1

2: pip install seml==0.3.6 tensorflow_addons==0.14.0 tensorflow_probability==0.12.2 pyts==0.11.0 uea_ucr_datasets==0.1.2 dill==0.3.5.1 antropy==0.1.4 tslearn==0.5.2

sktime's dependency constraints are a bit awkward, but the package still works for our purpose, so we install it separately:
3: pip install sktime==0.9.0

How to run

  1. Set up seml with seml configure (yes, you need a MongoDB server for this, and yes, the results will be saved in a separate file; however, seml does a really good job of managing the parameter combinations in combination with Slurm).
  2. Configure the yaml file you want to run. You probably only need to change the maximal number of parallel experiments ('experiments_per_job' and 'max_simultaneous_jobs') and the memory and CPU usage ('mem' and 'cpus-per-task').
  3. Add and start the seml experiment. For example like this:
    1. seml mixModel add mixModel.yaml
    2. seml mixModel start
  4. Check with "seml mixModel status" until all your experiments are finished.
  5. Please find the results in the results folder. They include a dict which can be further processed with the code in resultProcessing.py and the separate presults.yaml file.
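For orientation, a seml configuration of the kind referred to in step 2 typically looks roughly like the sketch below. The keys mirror those named above; all values are placeholders, not the settings used in the paper, and the dataset names and the parameter block are hypothetical.

```yaml
seml:
  executable: mixModelTrain.py
  name: mixModel
  output_dir: logs
  project_root_dir: .

slurm:
  experiments_per_job: 1        # parallel experiments per Slurm job
  max_simultaneous_jobs: 4      # cap on concurrently running jobs
  sbatch_options:
    mem: 16G                    # adjust to your hardware
    cpus-per-task: 4

grid:
  dataset:                      # hypothetical parameter name
    type: choice
    options: [GunPoint, ECG200] # placeholder dataset names
```

Adjusting 'experiments_per_job', 'max_simultaneous_jobs', 'mem', and 'cpus-per-task' as described in step 2 is usually all that is needed before seml mixModel add / seml mixModel start.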

Cite and publications

This code represents the model used for the following publication: TODO

If you use or build upon this work, or if it helped you in any other way, please cite the linked publication.