Execute the following commands in the git repo:
conda env create --name tl --file environment.yml
conda activate tl
This repository contains the implementation of the following layers:
- Convolution
- Recurrent Convolution
- ConvLSTM
- ConvGRU (Fully gated and Type 2)
- DenseConv

These layers are implemented for image classification tasks. A more detailed description of each layer can be found below.
The implementation details can be found in baseconv.py. The class Conv2D is ready to use in a convolutional network. Objects of this class can be further configured through the config.py file: it can be adjusted whether batch normalization or dropout should be applied (along with the corresponding parameters) and which activation function should be used (possible values: "RELU", "LEAKY_RELU", "SIGMOID", "TANH"). Depending on the activation function, the weights are initialized with kaiming_uniform or xavier_uniform.
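The exact option names are defined in config.py; as a rough illustration, the relevant settings might look like the following sketch (all names here are assumptions, not the repository's actual identifiers):

```python
# Hypothetical excerpt from config.py -- all option names are illustrative.
USE_BATCH_NORM = True    # whether batch normalization is applied
USE_DROPOUT = False      # whether dropout is applied
DROPOUT_PROB = 0.5       # dropout probability, used if USE_DROPOUT is True
ACTIVATION = "RELU"      # one of "RELU", "LEAKY_RELU", "SIGMOID", "TANH"
```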
The class can be used as follows:
Conv2D(
    in_channels=in_channels,
    out_channels=out_channels,
    kernel_size=kernel_size,
    stride=stride,
    in_size=in_size
)
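The activation-dependent weight initialization mentioned above can be illustrated as follows; this is a minimal sketch assuming PyTorch, not the repository's actual code:

```python
import torch.nn as nn

def init_weights(conv: nn.Conv2d, activation: str) -> None:
    """Minimal sketch: initialize a convolution depending on the activation."""
    if activation in ("RELU", "LEAKY_RELU"):
        # Kaiming initialization is suited to (leaky) ReLU nonlinearities.
        nn.init.kaiming_uniform_(conv.weight, nonlinearity=activation.lower())
    else:  # "SIGMOID" or "TANH"
        # Xavier initialization keeps variance stable for saturating activations.
        nn.init.xavier_uniform_(conv.weight)
```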
A sample network containing a Conv2D layer is provided by the class SampleConvNet. The following command runs the corresponding unit tests, including training and testing of the sample network class SampleConvNet:
python -m unittest test_baseconv.py
The implementation is based on this research paper. Different from the original paper, local response normalization has been replaced by batch normalization. The implementation details can be found in recurrentconv.py in the class RecurrentConv. As mentioned above, batch normalization and dropout can be configured in the config.py file.
The class can be used as follows:
RecurrentConv(
    in_channels=in_channels,
    out_channels=out_channels,
    kernel_size=kernel_size,
    stride=stride,
    in_size=in_size
)
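Conceptually, a recurrent convolution applies a feed-forward convolution to the static input at every step and adds a recurrent convolution of the previous output, with batch normalization in place of the paper's local response normalization. The following is a minimal sketch of that idea (assuming PyTorch; for brevity it shares one batch normalization layer across steps, which may differ from the repository's code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentConvSketch(nn.Module):
    """Illustrative recurrent convolution; not the repository's RecurrentConv."""

    def __init__(self, in_channels: int, out_channels: int,
                 kernel_size: int, steps: int = 3):
        super().__init__()
        pad = kernel_size // 2  # odd kernel sizes preserve the spatial size
        self.ff_conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                                 padding=pad, bias=False)
        self.rec_conv = nn.Conv2d(out_channels, out_channels, kernel_size,
                                  padding=pad, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)  # replaces local response norm
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # First pass: feed-forward convolution of the (static) input only.
        h = F.relu(self.bn(self.ff_conv(x)))
        # Recurrent passes: re-inject the input and add the recurrent term.
        for _ in range(self.steps):
            h = F.relu(self.bn(self.ff_conv(x) + self.rec_conv(h)))
        return h
```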
A sample network containing a RecurrentConv layer is provided by the class SampleRecurrentConvNet. The following command runs the corresponding unit tests, including training and testing of the sample network class SampleRecurrentConvNet:
python -m unittest test_recurrentconv.py
The implementation is based on this research paper and inspired by this GitHub repository, but also considers the cell state. The implementation is intended for image classification tasks and is similar to the implementation of the Recurrent Convolution. Instead of feeding new samples of a time sequence into the ConvLSTM, we feed the initial sample and concatenate it with the hidden state. The forward input is assumed to have the shape (B, C, W, H). In the current implementation, only kernels of odd size are supported. Each convolution is followed by a dedicated batch normalization layer; for that reason, the bias is omitted. The weights are initialized with xavier_uniform. The implementation details can be found in convlstm.py in the class ConvLSTM.
The class can be used as follows:
ConvLSTM(
    in_channels=in_channels,
    hidden_channels=hidden_channels,
    kernel_size=kernel_size,
    stride=stride,
    in_size=in_size
)
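For reference, one step of a ConvLSTM cell along these lines (concatenating input and hidden state, convolving without bias, batch-normalizing, then splitting into the four gates) might look like the sketch below; the class and attribute names are assumptions, not the repository's actual code:

```python
import torch
import torch.nn as nn

class ConvLSTMCellSketch(nn.Module):
    """Illustrative ConvLSTM step; not the repository's ConvLSTM."""

    def __init__(self, in_channels: int, hidden_channels: int, kernel_size: int):
        super().__init__()
        pad = kernel_size // 2  # requires an odd kernel size
        # One bias-free convolution produces all four gates at once;
        # the bias is omitted because batch normalization follows.
        self.conv = nn.Conv2d(in_channels + hidden_channels, 4 * hidden_channels,
                              kernel_size, padding=pad, bias=False)
        self.bn = nn.BatchNorm2d(4 * hidden_channels)

    def forward(self, x, h, c):
        # Concatenate input and hidden state along the channel axis (B, C, W, H).
        gates = self.bn(self.conv(torch.cat([x, h], dim=1)))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c_next = f * c + i * torch.tanh(g)  # cell state update
        h_next = o * torch.tanh(c_next)     # new hidden state
        return h_next, c_next
```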
A sample network containing a ConvLSTM layer is provided by the class SampleConvLSTMNet. The following command runs the corresponding unit tests, including training and testing of the sample network class SampleConvLSTMNet:
python -m unittest test_convlstm.py
The implementation of the fully gated ConvGRU version is based on this research paper, and the Type 2 version is based on this research paper. The implementation is intended for image classification tasks and is similar to the implementations of the Recurrent Convolution and the ConvLSTM. Instead of feeding new samples of a time sequence into the ConvGRU, we feed the initial sample. The forward input is assumed to have the shape (B, C, W, H). In the current implementation, only kernels of odd size are supported. Each convolution is followed by a dedicated batch normalization layer; for that reason, the bias is omitted. The weights are initialized with xavier_uniform. The implementation details can be found in convgru.py in the class ConvGRU.
The class can be used as follows:
ConvGRU(
    in_channels=in_channels,
    hidden_channels=hidden_channels,
    kernel_size=kernel_size,
    stride=stride,
    in_size=in_size
)
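One step of the fully gated variant could be sketched as follows, again with bias-free convolutions followed by batch normalization; the class and attribute names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ConvGRUCellSketch(nn.Module):
    """Illustrative fully gated ConvGRU step; not the repository's ConvGRU."""

    def __init__(self, in_channels: int, hidden_channels: int, kernel_size: int):
        super().__init__()
        pad = kernel_size // 2  # requires an odd kernel size

        def conv():  # bias-free convolution over concatenated input and state
            return nn.Conv2d(in_channels + hidden_channels, hidden_channels,
                             kernel_size, padding=pad, bias=False)

        self.conv_z, self.conv_r, self.conv_n = conv(), conv(), conv()
        self.bn_z = nn.BatchNorm2d(hidden_channels)
        self.bn_r = nn.BatchNorm2d(hidden_channels)
        self.bn_n = nn.BatchNorm2d(hidden_channels)

    def forward(self, x, h):
        xh = torch.cat([x, h], dim=1)                  # (B, C, W, H) tensors
        z = torch.sigmoid(self.bn_z(self.conv_z(xh)))  # update gate
        r = torch.sigmoid(self.bn_r(self.conv_r(xh)))  # reset gate
        n = torch.tanh(self.bn_n(self.conv_n(torch.cat([x, r * h], dim=1))))
        return (1 - z) * h + z * n                     # new hidden state
```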
A sample network containing a ConvGRU layer is provided by the class SampleConvGRUNet. The following command runs the corresponding unit tests, including training and testing of the sample network class SampleConvGRUNet:
python -m unittest test_convgru.py
The implementation is mainly based on this GitHub repository (which was inspired by this research paper). The main differences are that the weights are initialized depending on the configured activation function and that dropout is optional. These options can be set in the config.py file. The implementation details can be found in denseconv.py in the classes Transition, DenseLayer, and DenseBlock.
The class DenseBlock can be used as follows:
DenseBlock(
    num_layers=num_layers,
    in_channels=in_channels
)
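In a dense block, each layer receives the concatenated feature maps of all preceding layers, so the channel count grows by the growth rate with every layer. The following is a minimal sketch of that connectivity (assuming a simple BN-ReLU-Conv layer and an illustrative growth_rate parameter; not the repository's actual DenseBlock):

```python
import torch
import torch.nn as nn

class DenseBlockSketch(nn.Module):
    """Illustrative dense connectivity; not the repository's DenseBlock."""

    def __init__(self, num_layers: int, in_channels: int, growth_rate: int = 12):
        super().__init__()
        # Layer i sees in_channels + i * growth_rate input channels.
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            )
            for i in range(num_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # Every layer receives the concatenation of all previous outputs.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```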
A sample network containing a DenseBlock layer is provided by the class SampleDenseNet. The following command runs the corresponding unit tests, including training and testing of the sample network class SampleDenseNet:
python -m unittest test_denseconv.py
Alternatively, all unit tests can be run at once with:

bash exec-tests.sh
Configurations regarding training and testing can be done in the config.py file.