Spatio-Temporal

FOLDER STRUCTURE

torchTS/
        spatiotemporal/
                        model/
                              model.py
                        cells/
                              DCGRU
                              DCLSTM
                              SpectralGraphCell
                              RegularCNN
                        encoder-decoder/   (can be imported across PDE, seq2seq, and SpatioTemporal if needed)
                              encoder
                              decoder
        data/
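
As a rough sketch of how this layout might be consumed (the import paths below are hypothetical until the package layout is finalized, and the encoder-decoder folder would need an importable name such as encoder_decoder):

# Hypothetical imports based on the proposed layout above.
from torchTS.spatiotemporal.model.model import spatiotemporal
from torchTS.spatiotemporal.cells import DCGRU, DCLSTM
from torchTS.spatiotemporal.encoder_decoder import encoder, decoder  # shared with PDE and seq2seq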

SPATIO-TEMPORAL MODEL HYPERPARAMETERS

cl_decay_steps: 2000 - decay parameter for curriculum learning (scheduled sampling)
filter_type: dual_random_walk - chooses among the filters used for graph convolution
horizon: 12 - the forecast length; 12 means forecasting the next 12 points (in our case, 1 hour)
input_dim: 2 - determined by the input; precisely speaking, the channel size
L1_decay: N/A
max_diffusion_step: 2 - corresponds to K in the diffusion convolution, i.e. the sum over K random-walk steps in either direction
num_nodes: 325 - number of nodes in the graph
num_rnn_layers: 2 - depth of the stacked RNN
output_dim: 1 - output dimension; 1 in this case, but it can be any size
rnn_units: 64 - number of hidden units; commonly larger than the input dimension
seq_len: the input sequence length, i.e. how many previous time steps are fed in
use_curriculum_learning - whether to use curriculum learning (scheduled sampling) to train the encoder-decoder network
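
For reference, a minimal sketch of how these hyperparameters could be collected into a single config. Values are taken from the list above; the exact keyword arguments the model will accept are still to be decided, and the seq_len value below is an assumption.

# Hypothetical config dict; key names follow the hyperparameter list above.
config = {
    "cl_decay_steps": 2000,
    "filter_type": "dual_random_walk",
    "horizon": 12,
    "input_dim": 2,
    "max_diffusion_step": 2,
    "num_nodes": 325,
    "num_rnn_layers": 2,
    "output_dim": 1,
    "rnn_units": 64,
    "seq_len": 12,                    # assumed input window; not fixed in the list above
    "use_curriculum_learning": True,
}
model = spatiotemporal(**config)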

'''
This is the model.py file in the spatiotemporal folder.
'''

import torch

import torchTS  # provides the TimeModel base class (exact import path TBD)
from encoder import Encoder
from decoder import Decoder


class spatiotemporal(torchTS.TimeModel):
    '''
    This class inherits from Kevin's base model. In my opinion, the base model
    should not implement nn.Module itself.
    '''
    
    def __init__(self,**kwargs):
        #Parameters 
        self.epochs = None 
        self.lstm_or_gru = None    # 'lstm' or 'gru'
        self.loss_function = None  # 'mae', 'mse', or 'mape'
        # Probabilistic / deterministic - maybe here, maybe there.

        #Attributes
 
        ## Data attributes 
        self.adj_mx = None     # N * N (graph) * Time adjacency
        self.separate = None   # N * P * Time (the actual time values to be predicted)

        ## Diffusion related attributes
 
        self.max_diffusion_step = None
        self.cl_decay_steps = None
        self.filter_type = None  # 'laplacian', 'random_walk', or 'dual_random_walk'

        ### Seq2Seq related attributes
        self.num_rnn_layers = None
        self.hidden_state_size = None
        self.nonlinearity = None  # 'tanh' or 'relu'

        ### An idea - RNN cell type
        #self.cell_type = 'DCGRU' or 'DCLSTM' or 'SpectralGCN' or 'CNN'

        self.use_gc_for_ru = None  # use graph convolution for the reset/update gates

        # Numerical representation of the filter sizes for each DCRNN cell (Keras-like).
        self.gconv_layers = [2, 3, 2]  # 2*2, then 3*3, etc.
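
    # Sketch of a possible curriculum-learning helper (not confirmed as the plan here):
    # DCRNN decays the teacher-forcing probability with an inverse sigmoid controlled
    # by cl_decay_steps; something similar could back use_curriculum_learning above.
    def _compute_sampling_threshold(self, batches_seen):
        import math  # local import to keep this sketch self-contained
        return self.cl_decay_steps / (
            self.cl_decay_steps + math.exp(batches_seen / self.cl_decay_steps)
        )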
        

    def _train(self, loader, device, optim):
        self.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)

            # Encoder/decoder are built from the chosen cell type (design sketch).
            enc = Encoder(x, y, self.cell_type)
            dec = Decoder(x, y, self.cell_type)

            optim.zero_grad()
            pred = self(x)
            loss = loss_fun(y, pred)  # loss_fun: chosen according to self.loss_function (still to be wired up)
            loss.backward()
            optim.step()

    def fit(self, train_loader, test_loader, device, optim, scheduler, n_epochs, verbose=1):
        # Scope for parallelizing.
        for epoch in range(n_epochs):
            self._train(train_loader, device, optim)
            train_loss = self._eval(train_loader, device)
            test_loss = self._eval(test_loader, device)
            scheduler.step()
            if verbose:
                print(f"epoch {epoch}: train loss {train_loss:.4f}, test loss {test_loss:.4f}")

    def predict(self, horizon, probabilistic=True):
        '''Takes the horizon as well as an uncertainty quantifier,
        e.g. a parameter like 'bayesian' or 'frequentist'.

        predict_next()
        predict_multi_step()  - isn't it of the same length as the decoder?
        predict_intervals()   - uncertainty bound paper
        '''


    def get_params(self):
        '''Getter for variables, private and non-private.'''

    def set_params(self):
        '''Setter for non-underscore variables.'''
   
    def _eval(self, loader, device):
        self.eval()
        loss = 0

        with torch.no_grad():
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                pred = self(x)
                loss += loss_fun(y, pred).item()

        return loss / len(loader.dataset)
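
A rough usage sketch for the API above (the DataLoader objects, optimizer choice, learning-rate schedule, and config are placeholders, not part of the spec):

# Hypothetical wiring of the model into a training run.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = spatiotemporal(**config).to(device)  # config as sketched in the hyperparameter section

optim = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optim, step_size=10, gamma=0.1)

model.fit(train_loader, test_loader, device, optim, scheduler, n_epochs=100)
forecast = model.predict(horizon=12, probabilistic=True)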