diff --git a/previews/PR182/.documenter-siteinfo.json b/previews/PR182/.documenter-siteinfo.json index 67fddd56..f95ede1c 100644 --- a/previews/PR182/.documenter-siteinfo.json +++ b/previews/PR182/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-12-18T10:41:47","documenter_version":"1.2.1"}} \ No newline at end of file +{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-12-18T11:58:00","documenter_version":"1.2.1"}} \ No newline at end of file diff --git a/previews/PR182/api/esn/index.html b/previews/PR182/api/esn/index.html index bd521356..4d96bb25 100644 --- a/previews/PR182/api/esn/index.html +++ b/previews/PR182/api/esn/index.html @@ -3,7 +3,7 @@ train_data = rand(10, 100) # 10 features, 100 time steps -esn = ESN(train_data, reservoir=RandSparseReservoir(200), washout=10)source

Variations

In addition to the standard ESN model, there are variations that allow for deeper customization of the underlying model. Currently, there are two available variations: Default and Hybrid. These variations provide different ways to configure the ESN. Here's the documentation for the variations:

ReservoirComputing.DefaultType
Default()

The Default struct specifies the use of the standard model in Echo State Networks (ESNs). It requires no parameters and is used when no specific variations or customizations of the ESN model are needed. This struct is ideal for straightforward applications where the default ESN settings are sufficient.

source
ReservoirComputing.HybridType
Hybrid(prior_model, u0, tspan, datasize)

Constructs a Hybrid variation of the Echo State Network (ESN), integrating a knowledge-based model (prior_model) with the ESN for advanced training and prediction in chaotic systems.

Parameters

  • prior_model: A knowledge-based model function for integration with ESNs.
  • u0: Initial conditions for the model.
  • tspan: Time span as a tuple, indicating the duration for model operation.
  • datasize: The size of the data to be processed.

Returns

  • A Hybrid struct instance representing the combined ESN and knowledge-based model.

This method is effective for chaotic processes as highlighted in [Pathak].

Reference: [Pathak]: Jaideep Pathak et al. "Hybrid Forecasting of Chaotic Processes: Using Machine Learning in Conjunction with a Knowledge-Based Model" (2018).

source

The Hybrid variation is the most complex option and offers additional customization. Note that more variations may be added in the future to provide even greater flexibility.
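
As a rough sketch of how a Hybrid variation can be wired into an ESN (the dummy_prior function, its (u0, tspan, tsteps) signature, and the variation keyword usage below are illustrative assumptions built around the constructor signature above, not a verbatim excerpt from the library):

using ReservoirComputing

# Placeholder for a knowledge-based model. A real prior would integrate an
# approximate ODE of the system and return its trajectory; here we only
# return an array of a plausible shape for illustration.
dummy_prior(u0, tspan, tsteps) = rand(length(u0), length(tsteps))

u0 = [1.0, 0.0, 0.0]
tspan = (0.0, 100.0)
datasize = 1000

hybrid = Hybrid(dummy_prior, u0, tspan, datasize)

# The Hybrid struct is then passed to the ESN constructor, e.g.
# esn = ESN(train_data, reservoir = RandSparseReservoir(300), variation = hybrid)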

Training

To train an ESN model, you can use the train function. It takes the ESN model, the target data, and an optional training method as input and returns a trained model. Here's the documentation for the train function:

ReservoirComputing.trainFunction
train(esn::AbstractEchoStateNetwork, target_data, training_method = StandardRidge(0.0))

Trains an Echo State Network (ESN) using the provided target data and a specified training method.

Parameters

  • esn::AbstractEchoStateNetwork: The ESN instance to be trained.
  • target_data: Supervised training data for the ESN.
  • training_method: The method for training the ESN (default: StandardRidge(0.0)).

Returns

  • The trained ESN model. The exact type and structure of the return value depend on the training_method and the specific ESN implementation.

using ReservoirComputing
+esn = ESN(train_data, reservoir=RandSparseReservoir(200), washout=10)
source

Variations

In addition to the standard ESN model, there are variations that allow for deeper customization of the underlying model. Currently, there are two available variations: Default and Hybrid. These variations provide different ways to configure the ESN. Here's the documentation for the variations:

ReservoirComputing.DefaultType
Default()

The Default struct specifies the use of the standard model in Echo State Networks (ESNs). It requires no parameters and is used when no specific variations or customizations of the ESN model are needed. This struct is ideal for straightforward applications where the default ESN settings are sufficient.

source
ReservoirComputing.HybridType
Hybrid(prior_model, u0, tspan, datasize)

Constructs a Hybrid variation of the Echo State Network (ESN), integrating a knowledge-based model (prior_model) with the ESN for advanced training and prediction in chaotic systems.

Parameters

  • prior_model: A knowledge-based model function for integration with ESNs.
  • u0: Initial conditions for the model.
  • tspan: Time span as a tuple, indicating the duration for model operation.
  • datasize: The size of the data to be processed.

Returns

  • A Hybrid struct instance representing the combined ESN and knowledge-based model.

This method is effective for chaotic processes as highlighted in [Pathak].

Reference: [Pathak]: Jaideep Pathak et al. "Hybrid Forecasting of Chaotic Processes: Using Machine Learning in Conjunction with a Knowledge-Based Model" (2018).

source

The Hybrid variation is the most complex option and offers additional customization. Note that more variations may be added in the future to provide even greater flexibility.

Training

To train an ESN model, you can use the train function. It takes the ESN model, the target data, and an optional training method as input and returns a trained model. Here's the documentation for the train function:

ReservoirComputing.trainFunction
train(esn::AbstractEchoStateNetwork, target_data, training_method = StandardRidge(0.0))

Trains an Echo State Network (ESN) using the provided target data and a specified training method.

Parameters

  • esn::AbstractEchoStateNetwork: The ESN instance to be trained.
  • target_data: Supervised training data for the ESN.
  • training_method: The method for training the ESN (default: StandardRidge(0.0)).

Returns

  • The trained ESN model. The exact type and structure of the return value depend on the training_method and the specific ESN implementation.

using ReservoirComputing
 
 # Initialize an ESN instance and target data
 esn = ESN(train_data, reservoir=RandSparseReservoir(200), washout=10)
@@ -13,4 +13,4 @@
 trained_esn = train(esn, target_data)
 
 # Train the ESN using a custom training method
-trained_esn = train(esn, target_data, StandardRidge(1.0))

Notes

  • When using a Hybrid variation, the function extends the state matrix with data from the physical model included in the variation.
  • The training is handled by a lower-level _train function which takes the new state matrix and performs the actual training using the specified training_method.
source

With these components and variations, you can configure and train ESN models for various time series and sequential data prediction tasks.

+trained_esn = train(esn, target_data, StandardRidge(1.0))

Notes

source

With these components and variations, you can configure and train ESN models for various time series and sequential data prediction tasks.

diff --git a/previews/PR182/api/esn_drivers/index.html b/previews/PR182/api/esn_drivers/index.html index 633dbeab..366183c9 100644 --- a/previews/PR182/api/esn_drivers/index.html +++ b/previews/PR182/api/esn_drivers/index.html @@ -1,9 +1,9 @@ ESN Drivers · ReservoirComputing.jl

ESN Drivers

ReservoirComputing.RNNType
RNN(activation_function, leaky_coefficient)
-RNN(;activation_function=tanh, leaky_coefficient=1.0)

Returns a Recurrent Neural Network (RNN) initializer for the Echo State Network (ESN).

Arguments

  • activation_function: The activation function used in the RNN.
  • leaky_coefficient: The leaky coefficient used in the RNN.

Keyword Arguments

  • activation_function: The activation function used in the RNN. Defaults to tanh.
  • leaky_coefficient: The leaky coefficient used in the RNN. Defaults to 1.0.

This function creates an RNN object with the specified activation function and leaky coefficient, which can be used as a reservoir driver in the ESN.

source
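
A minimal usage sketch, assuming the ESN constructor exposes a reservoir_driver keyword for passing the driver:

using ReservoirComputing

train_data = rand(3, 500)  # 3 features, 500 time steps

# RNN driver with a softer leak than the default
esn = ESN(train_data,
    reservoir = RandSparseReservoir(200),
    reservoir_driver = RNN(activation_function = tanh, leaky_coefficient = 0.9))
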
ReservoirComputing.MRNNType
MRNN(activation_function, leaky_coefficient, scaling_factor)
+RNN(;activation_function=tanh, leaky_coefficient=1.0)

Returns a Recurrent Neural Network (RNN) initializer for the Echo State Network (ESN).

Arguments

  • activation_function: The activation function used in the RNN.
  • leaky_coefficient: The leaky coefficient used in the RNN.

Keyword Arguments

  • activation_function: The activation function used in the RNN. Defaults to tanh.
  • leaky_coefficient: The leaky coefficient used in the RNN. Defaults to 1.0.

This function creates an RNN object with the specified activation function and leaky coefficient, which can be used as a reservoir driver in the ESN.

source
ReservoirComputing.MRNNType
MRNN(activation_function, leaky_coefficient, scaling_factor)
 MRNN(;activation_function=[tanh, sigmoid], leaky_coefficient=1.0, 
-    scaling_factor=fill(leaky_coefficient, length(activation_function)))

Returns a Multiple RNN (MRNN) initializer for the Echo State Network (ESN), introduced in [lun].

Arguments

  • activation_function: A vector of activation functions used in the MRNN.
  • leaky_coefficient: The leaky coefficient used in the MRNN.
  • scaling_factor: A vector of scaling factors for combining activation functions.

Keyword Arguments

  • activation_function: A vector of activation functions used in the MRNN. Defaults to [tanh, sigmoid].
  • leaky_coefficient: The leaky coefficient used in the MRNN. Defaults to 1.0.
  • scaling_factor: A vector of scaling factors for combining activation functions. Defaults to an array of the same size as activation_function with all elements set to leaky_coefficient.

This function creates an MRNN object with the specified activation functions, leaky coefficient, and scaling factors, which can be used as a reservoir driver in the ESN.

Reference: [lun]: Lun, Shu-Xian, et al. "A novel model of leaky integrator echo state network for time-series prediction." Neurocomputing 159 (2015): 58-66.

source
ReservoirComputing.GRUType
GRU(;activation_function=[NNlib.sigmoid, NNlib.sigmoid, tanh],
+    scaling_factor=fill(leaky_coefficient, length(activation_function)))

Returns a Multiple RNN (MRNN) initializer for the Echo State Network (ESN), introduced in [lun].

Arguments

  • activation_function: A vector of activation functions used in the MRNN.
  • leaky_coefficient: The leaky coefficient used in the MRNN.
  • scaling_factor: A vector of scaling factors for combining activation functions.

Keyword Arguments

  • activation_function: A vector of activation functions used in the MRNN. Defaults to [tanh, sigmoid].
  • leaky_coefficient: The leaky coefficient used in the MRNN. Defaults to 1.0.
  • scaling_factor: A vector of scaling factors for combining activation functions. Defaults to an array of the same size as activation_function with all elements set to leaky_coefficient.

This function creates an MRNN object with the specified activation functions, leaky coefficient, and scaling factors, which can be used as a reservoir driver in the ESN.

Reference: [lun]: Lun, Shu-Xian, et al. "A novel model of leaky integrator echo state network for time-series prediction." Neurocomputing 159 (2015): 58-66.

source
ReservoirComputing.GRUType
GRU(;activation_function=[NNlib.sigmoid, NNlib.sigmoid, tanh],
     inner_layer = fill(DenseLayer(), 2),
     reservoir = fill(RandSparseReservoir(), 2),
     bias = fill(DenseLayer(), 2),
-    variant = FullyGated())

Returns a Gated Recurrent Unit (GRU) reservoir driver for Echo State Networks (ESNs). This driver is based on the GRU architecture [Cho], which is designed to capture temporal dependencies in data and is commonly used in various machine learning applications.

Arguments

  • activation_function: An array of activation functions for the GRU layers. By default, it uses sigmoid activation functions for the update gate, reset gate, and tanh for the hidden state.
  • inner_layer: An array of inner layers used in the GRU architecture. By default, it uses two dense layers.
  • reservoir: An array of reservoir layers. By default, it uses two random sparse reservoirs.
  • bias: An array of bias layers for the GRU. By default, it uses two dense layers.
  • variant: The GRU variant to use. By default, it uses the "FullyGated" variant.

Returns

A GRUParams object containing the parameters needed for the GRU-based reservoir driver.

References: [Cho]: Cho, Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014).

source
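
A minimal usage sketch for the GRU driver, again assuming the reservoir_driver keyword of the ESN constructor:

using ReservoirComputing

train_data = rand(3, 500)

# GRU-driven reservoir with the fully gated variant and the default gates
esn = ESN(train_data,
    reservoir = RandSparseReservoir(100),
    reservoir_driver = GRU(variant = FullyGated()))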

The GRU driver also lets the user choose among the available variants:

ReservoirComputing.FullyGatedType
FullyGated()

Returns a Fully Gated Recurrent Unit (FullyGated) initializer for the Echo State Network (ESN).

This function creates a FullyGated object, which can be used as a reservoir driver in the ESN. The FullyGated variant is described in the literature reference [cho].

Returns

  • FullyGated: A FullyGated reservoir driver.

Reference

source

Please refer to the original papers for more detail about these architectures.

  • [lun] Lun, Shu-Xian, et al. "A novel model of leaky integrator echo state network for time-series prediction." Neurocomputing 159 (2015): 58-66.
  • [Cho] Cho, Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014).
  • [cho] Cho, Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014).
  • [Zhou] Zhou, Guo-Bing, et al. "Minimal gated unit for recurrent neural networks." International Journal of Automation and Computing 13.3 (2016): 226-234.
+ variant = FullyGated())

Returns a Gated Recurrent Unit (GRU) reservoir driver for Echo State Networks (ESNs). This driver is based on the GRU architecture [Cho], which is designed to capture temporal dependencies in data and is commonly used in various machine learning applications.

Arguments

Returns

A GRUParams object containing the parameters needed for the GRU-based reservoir driver.

References: [Cho]: Cho, Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014).

source

The GRU driver also lets the user choose among the available variants:

ReservoirComputing.FullyGatedType
FullyGated()

Returns a Fully Gated Recurrent Unit (FullyGated) initializer for the Echo State Network (ESN).

This function creates a FullyGated object, which can be used as a reservoir driver in the ESN. The FullyGated variant is described in the literature reference [cho].

Returns

  • FullyGated: A FullyGated reservoir driver.

Reference

source
ReservoirComputing.MinimalType
Minimal()

Returns a minimal GRU ESN initializer as described in [Zhou].

Reference: [Zhou]: Zhou, Guo-Bing, et al. "Minimal gated unit for recurrent neural networks." International Journal of Automation and Computing 13.3 (2016): 226-234.

source

Please refer to the original papers for more detail about these architectures.

diff --git a/previews/PR182/api/esn_layers/index.html b/previews/PR182/api/esn_layers/index.html index 05954f6f..11c360c0 100644 --- a/previews/PR182/api/esn_layers/index.html +++ b/previews/PR182/api/esn_layers/index.html @@ -1,13 +1,13 @@ ESN Layers · ReservoirComputing.jl

ESN Layers

Input Layers

ReservoirComputing.WeightedLayerType
WeightedInput(scaling)
-WeightedInput(;scaling=0.1)

Creates a WeightedInput layer initializer for Echo State Networks. This initializer generates a weighted input matrix with random non-zero elements distributed uniformly within the range [-scaling, scaling], following the approach in [Lu].

Parameters

  • scaling: The scaling factor for the weight distribution (default: 0.1).

Returns

  • A WeightedInput instance to be used for initializing the input layer of an ESN.

Reference: [Lu]: Lu, Zhixin, et al. "Reservoir observers: Model-free inference of unmeasured variables in chaotic systems." Chaos: An Interdisciplinary Journal of Nonlinear Science 27.4 (2017): 041102.

source
ReservoirComputing.DenseLayerType
DenseLayer(scaling)
-DenseLayer(;scaling=0.1)

Creates a DenseLayer initializer for Echo State Networks, generating a fully connected input layer. The layer is initialized with random weights uniformly distributed within [-scaling, scaling]. This scaling factor can be provided either as an argument or a keyword argument. The DenseLayer is the default input layer in ESN construction.

Parameters

  • scaling: The scaling factor for weight distribution (default: 0.1).

Returns

  • A DenseLayer instance for initializing the ESN's input layer.
source
ReservoirComputing.SparseLayerType
SparseLayer(scaling, sparsity)
+WeightedInput(;scaling=0.1)

Creates a WeightedInput layer initializer for Echo State Networks. This initializer generates a weighted input matrix with random non-zero elements distributed uniformly within the range [-scaling, scaling], following the approach in [Lu].

Parameters

  • scaling: The scaling factor for the weight distribution (default: 0.1).

Returns

  • A WeightedInput instance to be used for initializing the input layer of an ESN.

Reference: [Lu]: Lu, Zhixin, et al. "Reservoir observers: Model-free inference of unmeasured variables in chaotic systems." Chaos: An Interdisciplinary Journal of Nonlinear Science 27.4 (2017): 041102.

source
ReservoirComputing.DenseLayerType
DenseLayer(scaling)
+DenseLayer(;scaling=0.1)

Creates a DenseLayer initializer for Echo State Networks, generating a fully connected input layer. The layer is initialized with random weights uniformly distributed within [-scaling, scaling]. This scaling factor can be provided either as an argument or a keyword argument. The DenseLayer is the default input layer in ESN construction.

Parameters

  • scaling: The scaling factor for weight distribution (default: 0.1).

Returns

  • A DenseLayer instance for initializing the ESN's input layer.
source
ReservoirComputing.SparseLayerType
SparseLayer(scaling, sparsity)
 SparseLayer(scaling; sparsity=0.1)
-SparseLayer(;scaling=0.1, sparsity=0.1)

Creates a SparseLayer initializer for Echo State Networks, generating a sparse input layer. The layer is initialized with weights distributed within [-scaling, scaling] and a specified sparsity level. Both scaling and sparsity can be set as arguments or keyword arguments.

Parameters

  • scaling: Scaling factor for weight distribution (default: 0.1).
  • sparsity: Sparsity level of the layer (default: 0.1).

Returns

  • A SparseLayer instance for initializing ESN's input layer with sparse connections.
source
ReservoirComputing.InformedLayerType
InformedLayer(model_in_size; scaling=0.1, gamma=0.5)

Creates an InformedLayer initializer for Echo State Networks (ESNs) that generates a weighted input layer matrix. The matrix contains random non-zero elements drawn from the range [-scaling, scaling]. This initializer ensures that a fraction (gamma) of reservoir nodes are exclusively connected to the raw inputs, while the rest are connected to the outputs of a prior knowledge model, as described in [Pathak].

Arguments

  • model_in_size: The size of the prior knowledge model's output, which determines the number of columns in the input layer matrix.

Keyword Arguments

  • scaling: The absolute value of the weights (default: 0.1).
  • gamma: The fraction of reservoir nodes connected exclusively to raw inputs (default: 0.5).

Returns

  • An InformedLayer instance for initializing the ESN's input layer matrix.

Reference: [Pathak]: Jaideep Pathak et al. "Hybrid Forecasting of Chaotic Processes: Using Machine Learning in Conjunction with a Knowledge-Based Model" (2018).

source
ReservoirComputing.MinimumLayerType
MinimumLayer(weight, sampling)
+SparseLayer(;scaling=0.1, sparsity=0.1)

Creates a SparseLayer initializer for Echo State Networks, generating a sparse input layer. The layer is initialized with weights distributed within [-scaling, scaling] and a specified sparsity level. Both scaling and sparsity can be set as arguments or keyword arguments.

Parameters

  • scaling: Scaling factor for weight distribution (default: 0.1).
  • sparsity: Sparsity level of the layer (default: 0.1).

Returns

  • A SparseLayer instance for initializing ESN's input layer with sparse connections.
source
ReservoirComputing.InformedLayerType
InformedLayer(model_in_size; scaling=0.1, gamma=0.5)

Creates an InformedLayer initializer for Echo State Networks (ESNs) that generates a weighted input layer matrix. The matrix contains random non-zero elements drawn from the range [-scaling, scaling]. This initializer ensures that a fraction (gamma) of reservoir nodes are exclusively connected to the raw inputs, while the rest are connected to the outputs of a prior knowledge model, as described in [Pathak].

Arguments

  • model_in_size: The size of the prior knowledge model's output, which determines the number of columns in the input layer matrix.

Keyword Arguments

  • scaling: The absolute value of the weights (default: 0.1).
  • gamma: The fraction of reservoir nodes connected exclusively to raw inputs (default: 0.5).

Returns

  • An InformedLayer instance for initializing the ESN's input layer matrix.

Reference: [Pathak]: Jaideep Pathak et al. "Hybrid Forecasting of Chaotic Processes: Using Machine Learning in Conjunction with a Knowledge-Based Model" (2018).

source
ReservoirComputing.MinimumLayerType
MinimumLayer(weight, sampling)
 MinimumLayer(weight; sampling=BernoulliSample(0.5))
-MinimumLayer(;weight=0.1, sampling=BernoulliSample(0.5))

Creates a MinimumLayer initializer for Echo State Networks, generating a fully connected input layer. This layer has a uniform absolute weight value (weight) with the sign of each weight determined by the sampling method. This approach, as detailed in [Rodan1] and [Rodan2], allows for controlled weight distribution in the layer.

Parameters

  • weight: Absolute value of weights in the layer.
  • sampling: Method for determining the sign of weights (default: BernoulliSample(0.5)).

Returns

  • A MinimumLayer instance for initializing the ESN's input layer.

References: [Rodan1]: Rodan, Ali, and Peter Tino. "Minimum complexity echo state network." IEEE Transactions on Neural Networks 22.1 (2010): 131-144. [Rodan2]: Rodan, Ali, and Peter Tiňo. "Simple deterministically constructed cycle reservoirs with regular jumps." Neural Computation 24.7 (2012): 1822-1852.

source
ReservoirComputing.NullLayerType
NullLayer()

Creates a NullLayer initializer for Echo State Networks (ESNs) that generates a vector of zeros.

Returns

  • A NullLayer instance for initializing the ESN's input layer matrix.
source

The signs in the MinimumLayer are chosen based on the following methods:

ReservoirComputing.BernoulliSampleType
BernoulliSample(p)
-BernoulliSample(;p=0.5)

Creates a BernoulliSample constructor for the MinimumLayer. It uses a Bernoulli distribution to determine the sign of weights in the input layer. The parameter p sets the probability of a weight being positive, as per the Distributions package. This method of sign weight determination for input layers is based on the approach in [Rodan].

Parameters

  • p: Probability of a positive weight (default: 0.5).

Returns

  • A BernoulliSample instance for generating sign weights in MinimumLayer.

Reference: [Rodan]: Rodan, Ali, and Peter Tino. "Minimum complexity echo state network." IEEE Transactions on Neural Networks 22.1 (2010): 131-144.

source
ReservoirComputing.IrrationalSampleType
IrrationalSample(irrational, start)
-IrrationalSample(;irrational=pi, start=1)

Creates an IrrationalSample constructor for the MinimumLayer. It determines the sign of weights in the input layer based on the decimal expansion of an irrational number. The start parameter sets the starting point in the decimal sequence. The signs are assigned based on the thresholding of each decimal digit against 4.5, as described in [Rodan].

Parameters

  • irrational: An irrational number for weight sign determination (default: π).
  • start: Starting index in the decimal expansion (default: 1).

Returns

  • An IrrationalSample instance for generating sign weights in MinimumLayer.

Reference: [Rodan]: Rodan, Ali, and Peter Tiňo. "Simple deterministically constructed cycle reservoirs with regular jumps." Neural Computation 24.7 (2012): 1822-1852.

source

To derive the matrix one can call the following function:

ReservoirComputing.create_layerFunction
create_layer(input_layer::AbstractLayer, res_size, in_size)

Generates a matrix layer of size res_size x in_size, constructed according to the specifications of the input_layer.

Parameters

  • input_layer: An instance of AbstractLayer determining the layer construction.
  • res_size: The number of rows (reservoir size) for the layer.
  • in_size: The number of columns (input size) for the layer.

Returns

  • A matrix representing the constructed layer.
source
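
A short sketch of building input matrices directly with create_layer (the sizes are arbitrary illustration values):

using ReservoirComputing

res_size, in_size = 200, 3

# Dense input layer with weights in [-0.1, 0.1]
W_in_dense = create_layer(DenseLayer(0.1), res_size, in_size)

# Minimum-complexity input layer with Bernoulli-sampled signs
W_in_min = create_layer(MinimumLayer(0.1, BernoulliSample(0.5)), res_size, in_size)

size(W_in_dense)  # (200, 3)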

To create new input layers, it suffices to define a new struct containing the needed parameters of the new input layer. This struct will need to be an AbstractLayer, so the create_layer function can be dispatched over it. The workflow should follow this snippet:

#creation of the new struct for the layer
+MinimumLayer(;weight=0.1, sampling=BernoulliSample(0.5))

Creates a MinimumLayer initializer for Echo State Networks, generating a fully connected input layer. This layer has a uniform absolute weight value (weight) with the sign of each weight determined by the sampling method. This approach, as detailed in [Rodan1] and [Rodan2], allows for controlled weight distribution in the layer.

Parameters

  • weight: Absolute value of weights in the layer.
  • sampling: Method for determining the sign of weights (default: BernoulliSample(0.5)).

Returns

  • A MinimumLayer instance for initializing the ESN's input layer.

References: [Rodan1]: Rodan, Ali, and Peter Tino. "Minimum complexity echo state network." IEEE Transactions on Neural Networks 22.1 (2010): 131-144. [Rodan2]: Rodan, Ali, and Peter Tiňo. "Simple deterministically constructed cycle reservoirs with regular jumps." Neural Computation 24.7 (2012): 1822-1852.

source
ReservoirComputing.NullLayerType
NullLayer()

Creates a NullLayer initializer for Echo State Networks (ESNs) that generates a vector of zeros.

Returns

  • A NullLayer instance for initializing the ESN's input layer matrix.
source

The signs in the MinimumLayer are chosen based on the following methods:

ReservoirComputing.BernoulliSampleType
BernoulliSample(p)
+BernoulliSample(;p=0.5)

Creates a BernoulliSample constructor for the MinimumLayer. It uses a Bernoulli distribution to determine the sign of weights in the input layer. The parameter p sets the probability of a weight being positive, as per the Distributions package. This method of sign weight determination for input layers is based on the approach in [Rodan].

Parameters

  • p: Probability of a positive weight (default: 0.5).

Returns

  • A BernoulliSample instance for generating sign weights in MinimumLayer.

Reference: [Rodan]: Rodan, Ali, and Peter Tino. "Minimum complexity echo state network." IEEE Transactions on Neural Networks 22.1 (2010): 131-144.

source
ReservoirComputing.IrrationalSampleType
IrrationalSample(irrational, start)
+IrrationalSample(;irrational=pi, start=1)

Creates an IrrationalSample constructor for the MinimumLayer. It determines the sign of weights in the input layer based on the decimal expansion of an irrational number. The start parameter sets the starting point in the decimal sequence. The signs are assigned based on the thresholding of each decimal digit against 4.5, as described in [Rodan].

Parameters

  • irrational: An irrational number for weight sign determination (default: π).
  • start: Starting index in the decimal expansion (default: 1).

Returns

  • An IrrationalSample instance for generating sign weights in MinimumLayer.

Reference: [Rodan]: Rodan, Ali, and Peter Tiňo. "Simple deterministically constructed cycle reservoirs with regular jumps." Neural Computation 24.7 (2012): 1822-1852.

source

To derive the matrix one can call the following function:

ReservoirComputing.create_layerFunction
create_layer(input_layer::AbstractLayer, res_size, in_size)

Generates a matrix layer of size res_size x in_size, constructed according to the specifications of the input_layer.

Parameters

  • input_layer: An instance of AbstractLayer determining the layer construction.
  • res_size: The number of rows (reservoir size) for the layer.
  • in_size: The number of columns (input size) for the layer.

Returns

  • A matrix representing the constructed layer.
source

To create new input layers, it suffices to define a new struct containing the needed parameters of the new input layer. This struct will need to be an AbstractLayer, so the create_layer function can be dispatched over it. The workflow should follow this snippet:

#creation of the new struct for the layer
 struct MyNewLayer <: AbstractLayer
     #the layer params go here
 end
@@ -16,13 +16,13 @@
 function create_layer(input_layer::MyNewLayer, res_size, in_size)
     #the new algorithm to build the input layer goes here
 end
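
For instance, a hypothetical ConstantLayer (invented here purely for illustration) could be added as follows, assuming AbstractLayer and create_layer are accessible as in the snippet above:

using ReservoirComputing

# Input layer whose entries all share the same constant value
struct ConstantLayer{T} <: AbstractLayer
    value::T
end

# Extend the library function for the new layer type
function ReservoirComputing.create_layer(input_layer::ConstantLayer, res_size, in_size)
    return fill(input_layer.value, res_size, in_size)
end

W_in = create_layer(ConstantLayer(0.05), 200, 3)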

Reservoirs

ReservoirComputing.RandSparseReservoirType
RandSparseReservoir(res_size, radius, sparsity)
-RandSparseReservoir(res_size; radius=1.0, sparsity=0.1)

Returns a random sparse reservoir initializer, which generates a matrix of size res_size x res_size with the specified sparsity and scaled spectral radius according to radius. This type of reservoir initializer is commonly used in Echo State Networks (ESNs) for capturing complex temporal dependencies.

Arguments

  • res_size: The size of the reservoir matrix.
  • radius: The desired spectral radius of the reservoir. By default, it is set to 1.0.
  • sparsity: The sparsity level of the reservoir matrix, controlling the fraction of zero elements. By default, it is set to 0.1.

Returns

A RandSparseReservoir object that can be used as a reservoir initializer in ESN construction.

References

This type of reservoir initialization is a common choice in ESN construction for its ability to capture temporal dependencies in data. However, there is no specific reference associated with this function.

source
ReservoirComputing.PseudoSVDReservoirType
PseudoSVDReservoir(max_value, sparsity, sorted, reverse_sort)
-PseudoSVDReservoir(max_value, sparsity; sorted=true, reverse_sort=false)

Returns an initializer to build a sparse reservoir matrix with the given sparsity by using a pseudo-SVD approach as described in [yang].

Arguments

  • res_size: The size of the reservoir matrix.
  • max_value: The maximum absolute value of elements in the matrix.
  • sparsity: The desired sparsity level of the reservoir matrix.
  • sorted: A boolean indicating whether to sort the singular values before creating the diagonal matrix. By default, it is set to true.
  • reverse_sort: A boolean indicating whether to reverse the sorted singular values. By default, it is set to false.

Returns

A PseudoSVDReservoir object that can be used as a reservoir initializer in ESN construction.

References

This reservoir initialization method, based on a pseudo-SVD approach, is inspired by the work in [yang], which focuses on designing polynomial echo state networks for time series prediction.

source
ReservoirComputing.DelayLineReservoirType
DelayLineReservoir(res_size, weight)
-DelayLineReservoir(res_size; weight=0.1)

Returns a Delay Line Reservoir matrix constructor to obtain a deterministic reservoir as described in [Rodan2010].

Arguments

  • res_size::Int: The size of the reservoir.
  • weight::T: The weight determines the absolute value of all the connections in the reservoir.

Returns

A DelayLineReservoir object.

References

Rodan, Ali, and Peter Tino. "Minimum complexity echo state network." IEEE Transactions on Neural Networks 22.1 (2010): 131-144.

source
ReservoirComputing.DelayLineBackwardReservoirType
DelayLineBackwardReservoir(res_size, weight, fb_weight)
-DelayLineBackwardReservoir(res_size; weight=0.1, fb_weight=0.2)

Returns a Delay Line Reservoir constructor to create a matrix with backward connections as described in [Rodan2010]. The weight and fb_weight can be passed as either arguments or keyword arguments, and they determine the absolute values of the connections in the reservoir.

Arguments

  • res_size::Int: The size of the reservoir.
  • weight::T: The weight determines the absolute value of forward connections in the reservoir.
  • fb_weight::T: The fb_weight determines the absolute value of backward connections in the reservoir.

Returns

A DelayLineBackwardReservoir object.

References

Rodan, Ali, and Peter Tino. "Minimum complexity echo state network." IEEE Transactions on Neural Networks 22.1 (2010): 131-144.

source
ReservoirComputing.SimpleCycleReservoirType
SimpleCycleReservoir(res_size, weight)
-SimpleCycleReservoir(res_size; weight=0.1)

Returns a Simple Cycle Reservoir constructor to build a reservoir matrix as described in [Rodan2010]. The weight can be passed as an argument or a keyword argument, and it determines the absolute value of all the connections in the reservoir.

Arguments

  • res_size::Int: The size of the reservoir.
  • weight::T: The weight determines the absolute value of connections in the reservoir.

Returns

A SimpleCycleReservoir object.

References

Rodan, Ali, and Peter Tino. "Minimum complexity echo state network." IEEE Transactions on Neural Networks 22.1 (2010): 131-144.

source
ReservoirComputing.CycleJumpsReservoirType
CycleJumpsReservoir(res_size; cycle_weight=0.1, jump_weight=0.1, jump_size=3)
-CycleJumpsReservoir(res_size, cycle_weight, jump_weight, jump_size)

Return a Cycle Reservoir with Jumps constructor to create a reservoir matrix as described in [Rodan2012]. The cycle_weight, jump_weight, and jump_size can be passed as arguments or keyword arguments, and they determine the absolute values of connections in the reservoir. The jump_size determines the jumps between jump_weights.

Arguments

  • res_size::Int: The size of the reservoir.
  • cycle_weight::T: The weight of cycle connections.
  • jump_weight::T: The weight of jump connections.
  • jump_size::Int: The number of steps between jump connections.

Returns

A CycleJumpsReservoir object.

References

Rodan, Ali, and Peter Tiňo. "Simple deterministically constructed cycle reservoirs with regular jumps." Neural Computation 24.7 (2012): 1822-1852.

source
ReservoirComputing.NullReservoirType
NullReservoir()

Return a constructor for a matrix of zeros with dimensions res_size x res_size.

Arguments

  • None

Returns

A NullReservoir object.

References

  • None
source

As with the input layers, to actually build the reservoir matrix one can call the following function:

ReservoirComputing.create_reservoirFunction
create_reservoir(reservoir::AbstractReservoir, res_size)
-create_reservoir(reservoir, args...)

Given an AbstractReservoir constructor and the size of the reservoir (res_size), this function returns the corresponding reservoir matrix. Alternatively, it accepts a pre-generated matrix.

Arguments

  • reservoir: An AbstractReservoir object or constructor.
  • res_size: The size of the reservoir matrix.
  • matrix_type: The type of the resulting matrix. By default, it is set to Matrix{Float64}.

Returns

A matrix representing the reservoir, generated based on the properties of the specified reservoir object or constructor.

References

The choice of reservoir initialization is crucial in Echo State Networks (ESNs) for achieving effective temporal modeling. Specific references for reservoir initialization methods may vary based on the type of reservoir used, but the practice of initializing reservoirs for ESNs is widely documented in the ESN literature.

source
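
A short sketch of building reservoir matrices directly with create_reservoir:

using ReservoirComputing

res_size = 200

# Random sparse reservoir rescaled to a spectral radius of 1.2
W = create_reservoir(RandSparseReservoir(res_size, radius = 1.2, sparsity = 0.05), res_size)

# Deterministic simple cycle reservoir
W_cycle = create_reservoir(SimpleCycleReservoir(res_size, weight = 0.05), res_size)

size(W)  # (200, 200)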

To create a new reservoir, the procedure is similar to the one for the input layers. First, define a new struct subtyping AbstractReservoir that holds the reservoir parameters. Then, dispatching the create_reservoir function on this struct lets the model build the reservoir matrix. An example of the workflow is given in the following snippet:

#creation of the new struct for the reservoir
+RandSparseReservoir(res_size; radius=1.0, sparsity=0.1)

Returns a random sparse reservoir initializer, which generates a matrix of size res_size x res_size with the specified sparsity and scaled spectral radius according to radius. This type of reservoir initializer is commonly used in Echo State Networks (ESNs) for capturing complex temporal dependencies.

Arguments

Returns

A RandSparseReservoir object that can be used as a reservoir initializer in ESN construction.

References

This type of reservoir initialization is a common choice in ESN construction for its ability to capture temporal dependencies in data. However, there is no specific reference associated with this function.

source
ReservoirComputing.PseudoSVDReservoirType
PseudoSVDReservoir(max_value, sparsity, sorted, reverse_sort)
+PseudoSVDReservoir(max_value, sparsity; sorted=true, reverse_sort=false)

Returns an initializer to build a sparse reservoir matrix with the given sparsity by using a pseudo-SVD approach as described in [yang].

Arguments

  • res_size: The size of the reservoir matrix.
  • max_value: The maximum absolute value of elements in the matrix.
  • sparsity: The desired sparsity level of the reservoir matrix.
  • sorted: A boolean indicating whether to sort the singular values before creating the diagonal matrix. By default, it is set to true.
  • reverse_sort: A boolean indicating whether to reverse the sorted singular values. By default, it is set to false.

Returns

A PseudoSVDReservoir object that can be used as a reservoir initializer in ESN construction.

References

This reservoir initialization method, based on a pseudo-SVD approach, is inspired by the work in [yang], which focuses on designing polynomial echo state networks for time series prediction.

source
ReservoirComputing.DelayLineReservoirType
DelayLineReservoir(res_size, weight)
+DelayLineReservoir(res_size; weight=0.1)

Returns a Delay Line Reservoir matrix constructor to obtain a deterministic reservoir as described in [Rodan2010].

Arguments

  • res_size::Int: The size of the reservoir.
  • weight::T: The weight determines the absolute value of all the connections in the reservoir.

Returns

A DelayLineReservoir object.

References

Rodan, Ali, and Peter Tino. "Minimum complexity echo state network." IEEE Transactions on Neural Networks 22.1 (2010): 131-144.

source
ReservoirComputing.DelayLineBackwardReservoirType
DelayLineBackwardReservoir(res_size, weight, fb_weight)
+DelayLineBackwardReservoir(res_size; weight=0.1, fb_weight=0.2)

Returns a Delay Line Reservoir constructor to create a matrix with backward connections as described in [Rodan2010]. The weight and fb_weight can be passed as either arguments or keyword arguments, and they determine the absolute values of the connections in the reservoir.

Arguments

  • res_size::Int: The size of the reservoir.
  • weight::T: The weight determines the absolute value of forward connections in the reservoir.
  • fb_weight::T: The fb_weight determines the absolute value of backward connections in the reservoir.

Returns

A DelayLineBackwardReservoir object.

References

Rodan, Ali, and Peter Tino. "Minimum complexity echo state network." IEEE Transactions on Neural Networks 22.1 (2010): 131-144.

source
ReservoirComputing.SimpleCycleReservoirType
SimpleCycleReservoir(res_size, weight)
+SimpleCycleReservoir(res_size; weight=0.1)

Returns a Simple Cycle Reservoir constructor to build a reservoir matrix as described in [Rodan2010]. The weight can be passed as an argument or a keyword argument, and it determines the absolute value of all the connections in the reservoir.

Arguments

  • res_size::Int: The size of the reservoir.
  • weight::T: The weight determines the absolute value of connections in the reservoir.

Returns

A SimpleCycleReservoir object.

References

Rodan, Ali, and Peter Tino. "Minimum complexity echo state network." IEEE Transactions on Neural Networks 22.1 (2010): 131-144.

source
ReservoirComputing.CycleJumpsReservoirType
CycleJumpsReservoir(res_size; cycle_weight=0.1, jump_weight=0.1, jump_size=3)
+CycleJumpsReservoir(res_size, cycle_weight, jump_weight, jump_size)

Return a Cycle Reservoir with Jumps constructor to create a reservoir matrix as described in [Rodan2012]. The cycle_weight, jump_weight, and jump_size can be passed as arguments or keyword arguments, and they determine the absolute values of connections in the reservoir. The jump_size determines the jumps between jump_weights.

Arguments

  • res_size::Int: The size of the reservoir.
  • cycle_weight::T: The weight of cycle connections.
  • jump_weight::T: The weight of jump connections.
  • jump_size::Int: The number of steps between jump connections.

Returns

A CycleJumpsReservoir object.

References

Rodan, Ali, and Peter Tiňo. "Simple deterministically constructed cycle reservoirs with regular jumps." Neural Computation 24.7 (2012): 1822-1852.

source
ReservoirComputing.NullReservoirType
NullReservoir()

Return a constructor for a matrix of zeros with dimensions res_size x res_size.

Arguments

  • None

Returns

A NullReservoir object.

References

  • None
source

As with the input layers, to actually build the reservoir matrix one can call the following function:

ReservoirComputing.create_reservoirFunction
create_reservoir(reservoir::AbstractReservoir, res_size)
+create_reservoir(reservoir, args...)

Given an AbstractReservoir constructor and the size of the reservoir (res_size), this function returns the corresponding reservoir matrix. Alternatively, it accepts a pre-generated matrix.

Arguments

  • reservoir: An AbstractReservoir object or constructor.
  • res_size: The size of the reservoir matrix.
  • matrix_type: The type of the resulting matrix. By default, it is set to Matrix{Float64}.

Returns

A matrix representing the reservoir, generated based on the properties of the specified reservoir object or constructor.

References

The choice of reservoir initialization is crucial in Echo State Networks (ESNs) for achieving effective temporal modeling. Specific references for reservoir initialization methods may vary based on the type of reservoir used, but the practice of initializing reservoirs for ESNs is widely documented in the ESN literature.

source

To create a new reservoir, the procedure is similar to the one for the input layers. First, define a new struct subtyping AbstractReservoir that holds the reservoir parameters. Then, dispatching the create_reservoir function on this struct lets the model build the reservoir matrix. An example of the workflow is given in the following snippet:

#creation of the new struct for the reservoir
 struct MyNewReservoir <: AbstractReservoir
     #the reservoir params go here
 end
@@ -30,4 +30,4 @@
 #dispatch over the function to build the reservoir matrix
 function create_reservoir(reservoir::MyNewReservoir, res_size)
     #the new algorithm to build the reservoir matrix goes here
-end
+end
diff --git a/previews/PR182/api/predict/index.html b/previews/PR182/api/predict/index.html index 7582af20..5d81d45a 100644 --- a/previews/PR182/api/predict/index.html +++ b/previews/PR182/api/predict/index.html @@ -1,2 +1,2 @@ -Prediction Types · ReservoirComputing.jl

Prediction Types

ReservoirComputing.GenerativeType
Generative(prediction_len)

With this prediction methodology, the model runs autonomously: each prediction is fed back as the input for the next step. The only parameter needed is the number of prediction steps.

source
ReservoirComputing.PredictiveType
Predictive(prediction_data)

Given a set of labels as prediction_data, this method of prediction will return the corresponding labels in a standard Machine Learning fashion.

source
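
A minimal sketch contrasting the two prediction types (the data shapes and reliance on the default washout are illustrative assumptions):

using ReservoirComputing

# One-step-ahead setup on random data
data = rand(3, 501)
train_data = data[:, 1:500]
target_data = data[:, 2:501]

esn = ESN(train_data, reservoir = RandSparseReservoir(200))
wout = train(esn, target_data)

# Generative: feed each prediction back as input for 100 autonomous steps
generated = esn(Generative(100), wout)

# Predictive: map a given set of inputs to outputs, one step per input column
predicted = esn(Predictive(rand(3, 50)), wout)
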
+Prediction Types · ReservoirComputing.jl

Prediction Types

ReservoirComputing.GenerativeType
Generative(prediction_len)

With this prediction methodology, the model runs autonomously: each prediction is fed back as the input for the next step. The only parameter needed is the number of prediction steps.

source
ReservoirComputing.PredictiveType
Predictive(prediction_data)

Given a set of labels as prediction_data, this method of prediction will return the corresponding labels in a standard Machine Learning fashion.

source
diff --git a/previews/PR182/api/reca/index.html b/previews/PR182/api/reca/index.html index fa9f1f5b..c288a975 100644 --- a/previews/PR182/api/reca/index.html +++ b/previews/PR182/api/reca/index.html @@ -4,6 +4,6 @@ generations = 8, input_encoding=RandomMapping(), nla_type = NLADefault(), - states_type = StandardStates())

[1] Yilmaz, Ozgur. “Reservoir computing using cellular automata.” arXiv preprint arXiv:1410.0162 (2014).

[2] Nichele, Stefano, and Andreas Molund. “Deep reservoir computing using cellular automata.” arXiv preprint arXiv:1703.02806 (2017).

source

The input encodings are the equivalent of the input matrices of the ESNs. These are the available encodings:

ReservoirComputing.RandomMappingType
RandomMapping(permutations, expansion_size)
+    states_type = StandardStates())

[1] Yilmaz, Ozgur. “Reservoir computing using cellular automata.” arXiv preprint arXiv:1410.0162 (2014).

[2] Nichele, Stefano, and Andreas Molund. “Deep reservoir computing using cellular automata.” arXiv preprint arXiv:1703.02806 (2017).

source

The input encodings are the equivalent of the input matrices of the ESNs. These are the available encodings:

ReservoirComputing.RandomMappingType
RandomMapping(permutations, expansion_size)
 RandomMapping(permutations; expansion_size=40)
-RandomMapping(;permutations=8, expansion_size=40)

Random mapping of the input data directly in the reservoir. The expansion_size determines the dimension of each single reservoir, and permutations determines the number of total reservoirs that will be connected, each with a different mapping. The details of this implementation can be found in [1].

[1] Nichele, Stefano, and Andreas Molund. “Deep reservoir computing using cellular automata.” arXiv preprint arXiv:1703.02806 (2017).

source
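
A construction sketch, assuming CellularAutomata.jl's DCA for the underlying automaton and binary input features (both are illustrative assumptions):

using ReservoirComputing, CellularAutomata

input_data = Float64.(rand(Bool, 2, 100))   # binary features, 100 time steps

# Elementary cellular automaton rule 90 as the reservoir, expanded through
# 16 random mappings of 40 cells each
reca = RECA(input_data, DCA(90);
    generations = 8,
    input_encoding = RandomMapping(16, 40))

# Training and prediction then follow the usual workflow, e.g.
# output_layer = train(reca, target_data)
# prediction = reca(Predictive(input_data), output_layer)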

Training and prediction follow the same workflow as the ESN. Note that we have so far been unable to find any papers using these models with a Generative approach to prediction, so full support is given only to the Predictive method.

+RandomMapping(;permutations=8, expansion_size=40)

Random mapping of the input data directly in the reservoir. The expansion_size determines the dimension of each single reservoir, and permutations determines the number of total reservoirs that will be connected, each with a different mapping. The details of this implementation can be found in [1].

[1] Nichele, Stefano, and Andreas Molund. “Deep reservoir computing using cellular automata.” arXiv preprint arXiv:1703.02806 (2017).

source

Training and prediction follow the same workflow as the ESN. Note that we have so far been unable to find any papers using these models with a Generative approach to prediction, so full support is given only to the Predictive method.

diff --git a/previews/PR182/api/states/index.html b/previews/PR182/api/states/index.html index 44816516..00b43933 100644 --- a/previews/PR182/api/states/index.html +++ b/previews/PR182/api/states/index.html @@ -1,4 +1,4 @@ -States Modifications · ReservoirComputing.jl

States Modifications

Padding and Extension

ReservoirComputing.StandardStatesType
StandardStates()

When this struct is employed, the states of the reservoir are not modified. It represents the default behavior in scenarios where no specific state modification is required. This approach is ideal for applications where the inherent dynamics of the reservoir are sufficient, and no external manipulation of the states is necessary. It maintains the original state representation, ensuring that the reservoir's natural properties are preserved and utilized in computations.

source
ReservoirComputing.ExtendedStatesType
ExtendedStates()

The ExtendedStates struct is used to extend the reservoir states by vertically concatenating the input data (during training) and the prediction data (during the prediction phase). This method enriches the state representation by integrating external data, enhancing the model's capability to capture and utilize complex patterns in both training and prediction stages.

source
ReservoirComputing.PaddedStatesType
PaddedStates(padding)
-PaddedStates(;padding=1.0)

Creates an instance of the PaddedStates struct with specified padding value. This padding is typically set to 1.0 by default but can be customized. The states of the reservoir are padded by vertically concatenating this padding value, enhancing the dimensionality and potentially improving the performance of the reservoir computing model. This function is particularly useful in scenarios where adding a constant baseline to the states is necessary for the desired computational task.

source
ReservoirComputing.PaddedExtendedStatesType
PaddedExtendedStates(padding)
-PaddedExtendedStates(;padding=1.0)

Constructs a PaddedExtendedStates struct, which first extends the reservoir states with training or prediction data, then pads them with a specified value (defaulting to 1.0). This process is achieved through vertical concatenation, combining the padding value, data, and states. This function is particularly useful for enhancing the reservoir's state representation in more complex scenarios, where both extended contextual information and consistent baseline padding are crucial for the computational effectiveness of the reservoir computing model.

source

Non Linear Transformations

ReservoirComputing.NLADefaultType
NLADefault()

NLADefault represents the default non-linear algorithm option. When used, it leaves the input array unchanged. This option is suitable in cases where no non-linear transformation of the data is required, maintaining the original state of the input array for further processing. It's the go-to choice for preserving the raw data integrity within the computational pipeline of the reservoir computing model.

source
ReservoirComputing.NLAT1Type
NLAT1()

NLAT1 implements the T₁ transformation algorithm introduced in [Chattopadhyay] and [Pathak]. The T₁ algorithm selectively squares elements of the input array, specifically targeting every second row. This non-linear transformation enhances certain data characteristics, making it a valuable tool in analyzing chaotic systems and improving the performance of reservoir computing models. The T₁ transformation's uniqueness lies in its selective approach, allowing for a more nuanced manipulation of the input data.

References: [Chattopadhyay]: Chattopadhyay, Ashesh, et al. "Data-driven prediction of a multi-scale Lorenz 96 chaotic system using a hierarchy of deep learning methods: Reservoir computing, ANN, and RNN-LSTM." (2019). [Pathak]: Pathak, Jaideep, et al. "Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach." Physical review letters 120.2 (2018): 024102.

source
ReservoirComputing.NLAT2Type
NLAT2()

NLAT2 implements the T₂ transformation algorithm as defined in [Chattopadhyay]. This transformation algorithm modifies the reservoir states by multiplying each odd-indexed row (starting from the second row) with the product of its two preceding rows. This specific approach to non-linear transformation is useful for capturing and enhancing complex patterns in the data, particularly beneficial in the analysis of chaotic systems and in improving the dynamics within reservoir computing models.

Reference: [Chattopadhyay]: Chattopadhyay, Ashesh, et al. "Data-driven prediction of a multi-scale Lorenz 96 chaotic system using a hierarchy of deep learning methods: Reservoir computing, ANN, and RNN-LSTM." (2019).

source
ReservoirComputing.NLAT3Type
NLAT3()

The NLAT3 struct implements the T₃ transformation algorithm as detailed in [Chattopadhyay]. This algorithm modifies the reservoir's states by multiplying each odd-indexed row (beginning from the second row) with the product of the immediately preceding and the immediately following rows. T₃'s unique approach to data transformation makes it particularly useful for enhancing complex data patterns, thereby improving the modeling and analysis capabilities within reservoir computing, especially for chaotic and dynamic systems.

Reference: [Chattopadhyay]: Chattopadhyay, Ashesh, et al. "Data-driven prediction of a multi-scale Lorenz 96 chaotic system using a hierarchy of deep learning methods: Reservoir computing, ANN, and RNN-LSTM." (2019).

source
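
A usage sketch, assuming the ESN constructor accepts the states_type and nla_type keywords (mirroring the RECA constructor arguments shown elsewhere in these docs):

using ReservoirComputing

train_data = rand(3, 500)

# Pad-and-extend the states and apply the T₁ non-linear transform
esn = ESN(train_data,
    reservoir = RandSparseReservoir(200),
    states_type = PaddedExtendedStates(1.0),
    nla_type = NLAT1())
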
+States Modifications · ReservoirComputing.jl

States Modifications

Padding and Extension

ReservoirComputing.StandardStatesType
StandardStates()

When this struct is employed, the states of the reservoir are not modified. It represents the default behavior in scenarios where no specific state modification is required. This approach is ideal for applications where the inherent dynamics of the reservoir are sufficient, and no external manipulation of the states is necessary. It maintains the original state representation, ensuring that the reservoir's natural properties are preserved and utilized in computations.

source
ReservoirComputing.ExtendedStatesType
ExtendedStates()

The ExtendedStates struct is used to extend the reservoir states by vertically concatenating the input data (during training) and the prediction data (during the prediction phase). This method enriches the state representation by integrating external data, enhancing the model's capability to capture and utilize complex patterns in both training and prediction stages.

source
ReservoirComputing.PaddedStatesType
PaddedStates(padding)
+PaddedStates(;padding=1.0)

Creates an instance of the PaddedStates struct with specified padding value. This padding is typically set to 1.0 by default but can be customized. The states of the reservoir are padded by vertically concatenating this padding value, enhancing the dimensionality and potentially improving the performance of the reservoir computing model. This function is particularly useful in scenarios where adding a constant baseline to the states is necessary for the desired computational task.

source
ReservoirComputing.PaddedExtendedStatesType
PaddedExtendedStates(padding)
+PaddedExtendedStates(;padding=1.0)

Constructs a PaddedExtendedStates struct, which first extends the reservoir states with training or prediction data, then pads them with a specified value (defaulting to 1.0). This process is achieved through vertical concatenation, combining the padding value, data, and states. This function is particularly useful for enhancing the reservoir's state representation in more complex scenarios, where both extended contextual information and consistent baseline padding are crucial for the computational effectiveness of the reservoir computing model.

source

Non Linear Transformations

ReservoirComputing.NLADefaultType
NLADefault()

NLADefault represents the default non-linear algorithm option. When used, it leaves the input array unchanged. This option is suitable in cases where no non-linear transformation of the data is required, maintaining the original state of the input array for further processing. It's the go-to choice for preserving the raw data integrity within the computational pipeline of the reservoir computing model.

source
ReservoirComputing.NLAT1Type
NLAT1()

NLAT1 implements the T₁ transformation algorithm introduced in [Chattopadhyay] and [Pathak]. The T₁ algorithm selectively squares elements of the input array, specifically targeting every second row. This non-linear transformation enhances certain data characteristics, making it a valuable tool in analyzing chaotic systems and improving the performance of reservoir computing models. The T₁ transformation's uniqueness lies in its selective approach, allowing for a more nuanced manipulation of the input data.

References: [Chattopadhyay]: Chattopadhyay, Ashesh, et al. "Data-driven prediction of a multi-scale Lorenz 96 chaotic system using a hierarchy of deep learning methods: Reservoir computing, ANN, and RNN-LSTM." (2019). [Pathak]: Pathak, Jaideep, et al. "Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach." Physical review letters 120.2 (2018): 024102.

source
ReservoirComputing.NLAT2Type
NLAT2()

NLAT2 implements the T₂ transformation algorithm as defined in [Chattopadhyay]. This transformation algorithm modifies the reservoir states by multiplying each odd-indexed row (starting from the second row) with the product of its two preceding rows. This specific approach to non-linear transformation is useful for capturing and enhancing complex patterns in the data, particularly beneficial in the analysis of chaotic systems and in improving the dynamics within reservoir computing models.

Reference: [Chattopadhyay]: Chattopadhyay, Ashesh, et al. "Data-driven prediction of a multi-scale Lorenz 96 chaotic system using a hierarchy of deep learning methods: Reservoir computing, ANN, and RNN-LSTM." (2019).

source
ReservoirComputing.NLAT3Type
NLAT3()

The NLAT3 struct implements the T₃ transformation algorithm as detailed in [Chattopadhyay]. This algorithm modifies the reservoir's states by multiplying each odd-indexed row (beginning from the second row) with the product of the immediately preceding and the immediately following rows. T₃'s unique approach to data transformation makes it particularly useful for enhancing complex data patterns, thereby improving the modeling and analysis capabilities within reservoir computing, especially for chaotic and dynamic systems.

Reference: [Chattopadhyay]: Chattopadhyay, Ashesh, et al. "Data-driven prediction of a multi-scale Lorenz 96 chaotic system using a hierarchy of deep learning methods: Reservoir computing, ANN, and RNN-LSTM." (2019).

source
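As a usage sketch (the data and reservoir size below are arbitrary; the constructor and keyword follow the ESN examples elsewhere in this documentation), a transformation is selected through the nla_type keyword:

using ReservoirComputing

train_data = rand(3, 100)  # hypothetical data: 3 features, 100 time steps

# apply the T₁ transformation to the states before training
esn = ESN(train_data; reservoir = RandSparseReservoir(200), nla_type = NLAT1())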
diff --git a/previews/PR182/api/training/index.html b/previews/PR182/api/training/index.html index 59a0d41e..cf61f2cf 100644 --- a/previews/PR182/api/training/index.html +++ b/previews/PR182/api/training/index.html @@ -1,5 +1,5 @@ Training Algorithms · ReservoirComputing.jl

Training Algorithms

Linear Models

ReservoirComputing.StandardRidgeType
StandardRidge(regularization_coeff)
StandardRidge(;regularization_coeff=0.0)

Ridge regression training for all the models in the library. The regularization_coeff is the regularization coefficient; it can be passed as a positional argument or as a keyword argument.

source
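For illustration (the regularization value is arbitrary, and esn/target_data stand for a model and target data constructed as in the tutorials in this documentation), both forms build the same trainer:

using ReservoirComputing

ridge_positional = StandardRidge(0.1)
ridge_keyword = StandardRidge(; regularization_coeff = 0.1)

# output_layer = train(esn, target_data, ridge_keyword)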
ReservoirComputing.LinearModelType
LinearModel(;regression=LinearRegression, 
    solver=Analytical(), 
    regression_kwargs=(;))

Linear regression training based on MLJLinearModels for all the models in the library. All the parameters have to be passed into regression_kwargs, apart from the solver choice. MLJLinearModels.jl needs to be loaded (using MLJLinearModels) in order to use these models.

source

Gaussian Regression

Gaussian regression is currently unavailable in v0.9.

Support Vector Regression

Support Vector Regression is possible using a direct call to LIBSVM regression methods. Instead of a wrapper, please refer to the use of LIBSVM.AbstractSVR in the original library.

diff --git a/previews/PR182/assets/Manifest.toml b/previews/PR182/assets/Manifest.toml index 3d1229bb..227a9446 100644 --- a/previews/PR182/assets/Manifest.toml +++ b/previews/PR182/assets/Manifest.toml @@ -1668,7 +1668,7 @@ version = "1.3.0" [[deps.ReservoirComputing]] deps = ["Adapt", "CellularAutomata", "Distances", "Distributions", "LIBSVM", "LinearAlgebra", "MLJLinearModels", "NNlib", "Optim", "SparseArrays", "Statistics"] -path = "/var/lib/buildkite-agent/builds/gpuci-10/julialang/reservoircomputing-dot-jl" +path = "/var/lib/buildkite-agent/builds/gpuci-4/julialang/reservoircomputing-dot-jl" uuid = "7c2d2b1e-3dd4-11ea-355a-8f6a8116e294" version = "0.9.5" diff --git a/previews/PR182/esn_tutorials/change_layers/index.html b/previews/PR182/esn_tutorials/change_layers/index.html index 39a8b4cd..5675618a 100644 --- a/previews/PR182/esn_tutorials/change_layers/index.html +++ b/previews/PR182/esn_tutorials/change_layers/index.html @@ -48,4 +48,4 @@ output = esn(Predictive(testing_input), wout) println(msd(testing_target, output)) end
0.0036884815121632775
0.003480856411242415

As you can see, changing layers in ESN models is straightforward. Be sure to check the API documentation for a full list of reservoirs and layers.

Bibliography


diff --git a/previews/PR182/esn_tutorials/deep_esn/4ff88813.svg b/previews/PR182/esn_tutorials/deep_esn/4ff88813.svg deleted file mode 100644 index 5139eeeb..00000000
diff --git a/previews/PR182/esn_tutorials/deep_esn/fefd7129.svg b/previews/PR182/esn_tutorials/deep_esn/fefd7129.svg new file mode 100644 index 00000000..63b10984
diff --git a/previews/PR182/esn_tutorials/deep_esn/index.html b/previews/PR182/esn_tutorials/deep_esn/index.html index 6501580e..54ce839b
@@ -35,13 +35,13 @@ input_layer = DenseLayer(), reservoir_driver = RNN(), nla_type = NLADefault(), states_type = StandardStates())
ESN{Int64, Matrix{Float64}, Default, NLADefault, Vector{Matrix{Float64}}, RNN{typeof(NNlib.tanh_fast), Float64}, Vector{Matrix{Float64}}, Vector{Matrix{Float64}}, StandardStates, Int64, Matrix{Float64}}(399, [-9.903289241214775 -9.691839850827593 … 6.13395766702855 5.3690197189757916; -9.018088588651105 -8.473011482849701 … 1.9972030630363637 1.862732065250123; 29.81517777140388 29.93605690750639 … 29.48232612752881 28.165385874059083], Default(), NLADefault(), [[0.02762010683057789 0.04764261349735327 0.017395371241804056; -0.042676458360453445 0.08123080280468856 0.03139560459499102; … ; 0.0032314926354150425 -0.015760800266822764 -0.02758249003720563; -0.03798157895528054 0.04580368506052088 -0.058308318597779296], [0.04621067999698972 -0.028327978859429975 … 0.008703147366200481 0.04967449834077234; -0.0686999914410566 0.015063314442657155 … -0.037856555045654106 0.07941609905499489; … ; 0.048997721804941136 0.09349340362576472 … -0.05307958102408257 -0.02799023624208788; -0.016675968197485375 0.059155398288878724 … 0.03108408703898563 -0.030267732357133717], [0.06193087459172705 0.04631648292008389 … 0.05591691085175041 0.08526977132033298; 0.08839192677565266 -0.09358790860385453 … 0.06140105101559573 -0.07086081471174666; … ; -0.05031327140628599 -0.026126594983998205 … 0.08285011025229902 -0.04948882868299498; 0.055048021314812234 0.0265461832277753 … 0.05170758375528725 -0.013786461698181846]], RNN{typeof(NNlib.tanh_fast), Float64}(NNlib.tanh_fast, 1.0), [[-0.5329415267589448 0.3691458392935954 … 0.0 0.0; 0.0 -0.4970254804826495 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.004854371135685515; 0.0 0.0 … 0.0 0.0], [0.0 0.0 … 0.0 0.0; 0.0 0.0 … -0.6027875337841551 0.0; … ; 0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0], [0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0]], [[0.0; 0.0; … ; 0.0; 0.0;;], [0.0; 0.0; … ; 0.0; 0.0;;], [0.0; 0.0; … ; 0.0; 0.0;;]], StandardStates(), 0, [-0.1824628024838625 -0.6081886926116691 … -0.3329410325818346 -0.342914833536077; 0.5553998305675131 -0.3793757861472148 … -0.42418912041798024 -0.42163656548112205; … ; 0.0 0.0 … 0.7922423351943572 -0.7640423561283032; 0.0 0.0 … 0.6405350207385068 0.8009176695580681])

As you can see, different sizes can be chosen for the different reservoirs. The input layer and bias can also be given as vectors, but they must have the same length as the vector of reservoirs. If they are not passed as vectors, the single value provided is used for all the layers of the deep ESN.

In addition to using the provided functions for the construction of the layers, the user can also choose to build their own matrix, or array of matrices, and feed that into the ESN in the same way.
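A rough sketch of that option follows; all names, sizes, and values below are invented for illustration, and the only point is that pre-built matrices can take the place of the reservoir constructors:

using ReservoirComputing

input_data = rand(3, 500)  # hypothetical training input

custom_reservoirs = [rand(100, 100) .- 0.5,
                     rand(150, 150) .- 0.5]

esn_custom = ESN(input_data; reservoir = custom_reservoirs)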

The training and prediction follow the usual framework:

training_method = StandardRidge(0.0)
 output_layer = train(esn, target_data, training_method)
 
 output = esn(Generative(predict_len), output_layer)
3×1250 Matrix{Float64}:
  4.22418   3.83516   3.55645   3.37622  …  -13.2624  -14.1996  -14.7746
  1.98715   2.17848   2.42619   2.72578     -18.6453  -18.0758  -16.4874
 25.6569   24.481    23.3686   22.3199      -26.895   -30.4622  -33.8009

Plotting the results:

using Plots
 
 ts = 0.0:0.02:200.0
 lorenz_maxlyap = 0.9056
@@ -58,4 +58,4 @@
 plot(p1, p2, p3, plot_title = "Lorenz System Coordinates",
     layout = (3, 1), xtickfontsize = 12, ytickfontsize = 12, xguidefontsize = 15,
     yguidefontsize = 15,
    legendfontsize = 12, titlefontsize = 20)
Example block output

Note that there is a known bug at the moment with using WeightedLayer as the input layer with the deep ESN; we are in the process of investigating and solving it. In the current implementation, the leak coefficient must also be the same for all the reservoirs; lifting this restriction is something we are actively working on.

Documentation


diff --git a/previews/PR182/esn_tutorials/different_drivers/eb2b7672.svg b/previews/PR182/esn_tutorials/different_drivers/3fd873ac.svg similarity index 94% rename from previews/PR182/esn_tutorials/different_drivers/eb2b7672.svg rename to previews/PR182/esn_tutorials/different_drivers/3fd873ac.svg
diff --git a/previews/PR182/esn_tutorials/different_drivers/index.html b/previews/PR182/esn_tutorials/different_drivers/index.html index 2985aa50..81ab0968
@@ -92,7 +92,7 @@ linewidth = 2.5, xtickfontsize = 12, ytickfontsize = 12, size = (1080, 720))
Example block output

It is interesting to compare the GRU-driven ESN with the standard RNN-driven ESN. Using the same parameters defined before, this comparison can be carried out as follows:

using StatsBase
 
 esn_rnn = ESN(training_input;
     reservoir = RandSparseReservoir(res_size, radius = res_radius),
@@ -103,4 +103,4 @@
 
 println(msd(testing_target, output))
 println(msd(testing_target, output_rnn))
12.665328620306422
20.96408330872688
  • 1Lun, Shu-Xian, et al. "A novel model of leaky integrator echo state network for time-series prediction." Neurocomputing 159 (2015): 58-66.
  • 2Cho, Kyunghyun, et al. “Learning phrase representations using RNN encoder-decoder for statistical machine translation.” arXiv preprint arXiv:1406.1078 (2014).
  • 3Wang, Xinjie, Yaochu Jin, and Kuangrong Hao. "A Gated Recurrent Unit based Echo State Network." 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020.
  • 4Di Sarli, Daniele, Claudio Gallicchio, and Alessio Micheli. "Gated Echo State Networks: a preliminary study." 2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA). IEEE, 2020.
  • 5Dey, Rahul, and Fathi M. Salem. "Gate-variants of gated recurrent unit (GRU) neural networks." 2017 IEEE 60th international midwest symposium on circuits and systems (MWSCAS). IEEE, 2017.
  • 6Zhou, Guo-Bing, et al. "Minimal gated unit for recurrent neural networks." International Journal of Automation and Computing 13.3 (2016): 226-234.
  • 7Hübner, Uwe, Nimmi B. Abraham, and Carlos O. Weiss. "Dimensions and entropies of chaotic intensity pulsations in a single-mode far-infrared NH 3 laser." Physical Review A 40.11 (1989): 6354.
diff --git a/previews/PR182/esn_tutorials/different_training/index.html b/previews/PR182/esn_tutorials/different_training/index.html index eaa6140c..a64cdb2a 100644 --- a/previews/PR182/esn_tutorials/different_training/index.html +++ b/previews/PR182/esn_tutorials/different_training/index.html @@ -1,2 +1,2 @@ -Using Different Training Methods · ReservoirComputing.jl +Using Different Training Methods · ReservoirComputing.jl diff --git a/previews/PR182/esn_tutorials/hybrid/4d0ab748.svg b/previews/PR182/esn_tutorials/hybrid/7db1da80.svg similarity index 93% rename from previews/PR182/esn_tutorials/hybrid/4d0ab748.svg rename to previews/PR182/esn_tutorials/hybrid/7db1da80.svg index f2706704..f65e0cac 100644 --- a/previews/PR182/esn_tutorials/hybrid/4d0ab748.svg +++ b/previews/PR182/esn_tutorials/hybrid/7db1da80.svg @@ -1,90 +1,90 @@ - + - + - + - + - + - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + - + - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + - + - + diff --git a/previews/PR182/esn_tutorials/hybrid/index.html b/previews/PR182/esn_tutorials/hybrid/index.html index 691f4094..63b9b4a1 100644 --- a/previews/PR182/esn_tutorials/hybrid/index.html +++ b/previews/PR182/esn_tutorials/hybrid/index.html @@ -54,4 +54,4 @@ plot(p1, p2, p3, plot_title = "Lorenz System Coordinates", layout = (3, 1), xtickfontsize = 12, ytickfontsize = 12, xguidefontsize = 15, yguidefontsize = 15, - legendfontsize = 12, titlefontsize = 20)Example block output

Bibliography

  • 1Pathak, Jaideep, et al. "Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model." Chaos: An Interdisciplinary Journal of Nonlinear Science 28.4 (2018): 041101.
diff --git a/previews/PR182/esn_tutorials/lorenz_basic/3dffed0e.svg b/previews/PR182/esn_tutorials/lorenz_basic/e1571378.svg similarity index 94% rename from previews/PR182/esn_tutorials/lorenz_basic/3dffed0e.svg rename to previews/PR182/esn_tutorials/lorenz_basic/e1571378.svg
diff --git a/previews/PR182/esn_tutorials/lorenz_basic/index.html b/previews/PR182/esn_tutorials/lorenz_basic/index.html index ffe49a8f..2ea87262
@@ -103,4 +103,4 @@ plot(p1, p2, p3, plot_title = "Lorenz System Coordinates", layout = (3, 1), xtickfontsize = 12, ytickfontsize = 12, xguidefontsize = 15, yguidefontsize = 15, legendfontsize = 12, titlefontsize = 20)
Example block output

Bibliography

  • 1Pathak, Jaideep, et al. "Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data." Chaos: An Interdisciplinary Journal of Nonlinear Science 27.12 (2017): 121102.
  • 2Lukoševičius, Mantas. "A practical guide to applying echo state networks." Neural networks: Tricks of the trade. Springer, Berlin, Heidelberg, 2012. 659-686.
  • 3Lu, Zhixin, et al. "Reservoir observers: Model-free inference of unmeasured variables in chaotic systems." Chaos: An Interdisciplinary Journal of Nonlinear Science 27.4 (2017): 041102.
diff --git a/previews/PR182/general/different_training/index.html b/previews/PR182/general/different_training/index.html index 03645f81..96065d7d 100644 --- a/previews/PR182/general/different_training/index.html +++ b/previews/PR182/general/different_training/index.html @@ -4,4 +4,4 @@ regression::Any solver::Any regression_kwargs::Any -end

To call the ridge regression using the MLJLinearModels APIs, you can use LinearModel(;regression=LinearRegression). You can also choose a specific solver by calling, for example, LinearModel(regression=LinearRegression, solver=Analytical()). For all the available solvers, please refer to the MLJLinearModels documentation.

To change the regularization coefficient in the ridge example, for instance to lambda = 0.1, you need to pass it through regression_kwargs, like so: LinearModel(;regression=LinearRegression, solver=Analytical(), regression_kwargs=(;lambda=lambda)). The naming of the coefficients must follow the MLJLinearModels APIs, using lambda, gamma for LassoRegression and delta, lambda, gamma for HuberRegression. Again, please check the relevant documentation if in doubt. When using MLJLinearModels-based regressors, remember to specify using MLJLinearModels.
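For example (a sketch in which the lambda value is arbitrary, and esn/target_data stand for a model and target data built as in the tutorials):

using MLJLinearModels
using ReservoirComputing

training_method = LinearModel(; regression = LinearRegression,
    solver = Analytical(),
    regression_kwargs = (; lambda = 0.1))

# output_layer = train(esn, target_data, training_method)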

Support Vector Regression

Contrary to the LinearModels, no wrappers are needed for support vector regression. By using LIBSVM.jl, LIBSVM wrappers in Julia, it is possible to call both epsilonSVR() or nuSVR() directly in train(). For the full range of kernels provided and the parameters to call, we refer the user to the official documentation. Like before, if one intends to use LIBSVM regressors, it is necessary to specify using LIBSVM.
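A minimal sketch, assuming LIBSVM.jl's EpsilonSVR constructor with its default parameters (check the LIBSVM documentation for the exact constructor names and options) and an esn/target_data pair built as in the tutorials:

using LIBSVM
using ReservoirComputing

svr = EpsilonSVR()  # an AbstractSVR; NuSVR() is the other option

# output_layer = train(esn, target_data, svr)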


diff --git a/previews/PR182/general/predictive_generative/index.html b/previews/PR182/general/predictive_generative/index.html index 20a96b9f..ab7b6bf5 100644 --- a/previews/PR182/general/predictive_generative/index.html +++ b/previews/PR182/general/predictive_generative/index.html @@ -1,2 +1,2 @@ -Generative vs Predictive · ReservoirComputing.jl

Generative vs Predictive

The library provides two different methods for prediction, denoted as Predictive() and Generative(). These methods correspond to the two major applications of Reservoir Computing models found in the literature. This section aims to clarify the differences between these two methods before providing further details on their usage in the library.

Predictive

In the first method, users can utilize Reservoir Computing models in a manner similar to standard Machine Learning models. This involves using a set of features as input and a set of labels as outputs. In this case, both the feature and label sets can consist of vectors of different dimensions. Specifically, let's denote the feature set as $X=\{x_1,...,x_n\}$ where $x_i \in \mathbb{R}^{N}$, and the label set as $Y=\{y_1,...,y_n\}$ where $y_i \in \mathbb{R}^{M}$.

To make predictions using this method, you need to provide the feature set that you want to predict the labels for. For example, you can call Predictive(X) using the feature set $X$ as input. This method allows for both one-step-ahead and multi-step-ahead predictions.

Generative

The generative method provides a different approach to forecasting with Reservoir Computing models. It enables you to extend the forecasting capabilities of the model by allowing predicted results to be fed back into the model to generate the next prediction. This autonomy allows the model to make predictions without the need for a feature dataset as input.

To use the generative method, you only need to specify the number of time steps that you intend to forecast. For instance, you can call Generative(100) to generate predictions for the next one hundred time steps.

The key distinction between these methods lies in how predictions are made. The predictive method relies on input feature sets to make predictions, while the generative method allows for autonomous forecasting by feeding predicted results back into the model.
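A minimal end-to-end sketch of the two calls (the data, sizes, and train/test split below are arbitrary; construction and training follow the ESN tutorials in this documentation):

using ReservoirComputing

data = rand(3, 200)                       # hypothetical series: 3 features, 200 steps
train_x = data[:, 1:149]
target = data[:, 2:150]                   # one-step-ahead targets
test_x = data[:, 151:200]

esn = ESN(train_x; reservoir = RandSparseReservoir(100))
output_layer = train(esn, target, StandardRidge(0.0))

predicted_labels = esn(Predictive(test_x), output_layer)  # labels for the given features
forecast = esn(Generative(50), output_layer)              # 50 autonomous forecast steps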

diff --git a/previews/PR182/general/states_variation/index.html b/previews/PR182/general/states_variation/index.html index 8f6cc57b..39723490 100644 --- a/previews/PR182/general/states_variation/index.html +++ b/previews/PR182/general/states_variation/index.html @@ -2,4 +2,4 @@ Altering States · ReservoirComputing.jl

Altering States

In ReservoirComputing models, it's possible to perform alterations on the reservoir states during the training stage. These alterations can improve prediction results or replicate results found in the literature. Alterations are categorized into two possibilities: padding or extending the states, and applying non-linear algorithms to the states.

Padding and Extending States

Extending States

Extending the states involves appending the corresponding input values to the reservoir states. If $\textbf{x}(t)$ represents the reservoir state at time $t$ corresponding to the input $\textbf{u}(t)$, the extended state is represented as $[\textbf{x}(t); \textbf{u}(t)]$, where $[;]$ denotes vertical concatenation. This procedure is commonly used in Echo State Networks and is described in Jaeger's Scholarpedia. You can extend the states in every ReservoirComputing.jl model by using the states_type keyword argument and calling the ExtendedStates() method. No additional arguments are needed.

Padding States

Padding the states involves appending a constant value, such as 1.0, to each state. In the notation introduced earlier, padded states can be represented as $[\textbf{x}(t); 1.0]$. This approach is detailed in the seminal guide to Echo State Networks by Mantas Lukoševičius. To pad the states, you can use the states_type keyword argument and call the PaddedStates(padding) method, where padding represents the value to be concatenated to the states. By default, the padding value is set to 1.0, so most of the time, calling PaddedStates() will suffice.

Additionally, you can pad the extended states by using the PaddedExtendedStates(padding) method, which also has a default padding value of 1.0.

You can choose not to apply any of these changes to the states by calling StandardStates(), which is the default choice for the states.
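To make the notation concrete, here is a toy illustration of the resulting vectors (the numbers are invented; PaddedExtendedStates combines both operations, see the API documentation for the exact ordering):

x_t = [0.3, -0.7, 0.5]        # reservoir state x(t)
u_t = [1.2, 0.4]              # input u(t)

standard = x_t                # StandardStates()
extended = vcat(x_t, u_t)     # ExtendedStates():   [x(t); u(t)]
padded = vcat(x_t, 1.0)       # PaddedStates(1.0):  [x(t); 1.0]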

Non-Linear Algorithms

First introduced in [1] and expanded in [2], non-linear algorithms are nonlinear combinations of the columns of the matrix states. There are three such algorithms implemented in ReservoirComputing.jl, and you can choose which one to use with the nla_type keyword argument. The default value is set to NLADefault(), which means no non-linear algorithm is applied.

The available non-linear algorithms are:

  • NLAT1()
  • NLAT2()
  • NLAT3()

These algorithms perform specific operations on the reservoir states. To provide a better understanding of what they do, let $\textbf{x}_{i, j}$ be elements of the state matrix, with $i=1,...,T, \ j=1,...,N$ where $T$ is the length of the training data and $N$ is the reservoir size.

NLAT1

\[\tilde{\textbf{x}}_{i,j} = \textbf{x}_{i,j} \times \textbf{x}_{i,j} \ \ \text{if \textit{j} is odd} \\ \tilde{\textbf{x}}_{i,j} = \textbf{x}_{i,j} \ \ \text{if \textit{j} is even}\]

NLAT2

\[\tilde{\textbf{x}}_{i,j} = \textbf{x}_{i,j-1} \times \textbf{x}_{i,j-2} \ \ \text{if \textit{j} > 1 is odd} \\ \tilde{\textbf{x}}_{i,j} = \textbf{x}_{i,j} \ \ \text{if \textit{j} is 1 or even}\]

NLAT3

\[\tilde{\textbf{x}}_{i,j} = \textbf{x}_{i,j-1} \times \textbf{x}_{i,j+1} \ \ \text{if \textit{j} > 1 is odd} \\ \tilde{\textbf{x}}_{i,j} = \textbf{x}_{i,j} \ \ \text{if \textit{j} is 1 or even}\]
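For reference, a plain-Julia sketch of the three transformations above, assuming the state matrix is laid out with the index $j$ along the rows (the library's internal orientation and implementation may differ):

function nlat1_sketch(x)
    x_new = copy(x)
    for j in 1:2:size(x, 1)                  # odd rows are squared element-wise
        x_new[j, :] .= x[j, :] .^ 2
    end
    return x_new
end

function nlat2_sketch(x)
    x_new = copy(x)
    for j in 3:2:size(x, 1)                  # odd rows after the first
        x_new[j, :] .= x[j - 1, :] .* x[j - 2, :]
    end
    return x_new
end

function nlat3_sketch(x)
    x_new = copy(x)
    for j in 3:2:(size(x, 1) - 1)            # odd rows with both neighbours available
        x_new[j, :] .= x[j - 1, :] .* x[j + 1, :]
    end
    return x_new
end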

  • 1Pathak, Jaideep, et al. "Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data." Chaos: An Interdisciplinary Journal of Nonlinear Science 27.12 (2017): 121102.
  • 2Chattopadhyay, Ashesh, Pedram Hassanzadeh, and Devika Subramanian. "Data-driven predictions of a multiscale Lorenz 96 chaotic system using machine-learning methods: reservoir computing, artificial neural network, and long short-term memory network." Nonlinear Processes in Geophysics 27.3 (2020): 373-389.
diff --git a/previews/PR182/index.html b/previews/PR182/index.html index 8d854879..6c8d0c21 100644 --- a/previews/PR182/index.html +++ b/previews/PR182/index.html @@ -11,7 +11,7 @@ number = {288}, pages = {1--8}, url = {http://jmlr.org/papers/v23/22-0611.html} -}

Reproducibility

The documentation of this SciML package was built using these direct dependencies,
Status `/var/lib/buildkite-agent/builds/gpuci-10/julialang/reservoircomputing-dot-jl/docs/Project.toml`
+}

Reproducibility

The documentation of this SciML package was built using these direct dependencies,
Status `/var/lib/buildkite-agent/builds/gpuci-4/julialang/reservoircomputing-dot-jl/docs/Project.toml`
   [052768ef] CUDA v5.1.1
   [878138dc] CellularAutomata v0.0.2
   [0c46a032] DifferentialEquations v7.11.0
@@ -19,7 +19,7 @@
  [1dea7af3] OrdinaryDiffEq v6.59.3
   [91a5bcdd] Plots v1.39.0
   [31e2f376] PredefinedDynamicalSystems v1.2.0
-  [7c2d2b1e] ReservoirComputing v0.9.5 `/var/lib/buildkite-agent/builds/gpuci-10/julialang/reservoircomputing-dot-jl`
+  [7c2d2b1e] ReservoirComputing v0.9.5 `/var/lib/buildkite-agent/builds/gpuci-4/julialang/reservoircomputing-dot-jl`
   [2913bbd2] StatsBase v0.34.2
   [37e2e46d] LinearAlgebra
   [9a3f8284] Random
@@ -39,7 +39,7 @@
   JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01852978-cea0-41b9-93ac-ff3dc03e5dc5
   LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
   JULIA_PKG_SERVER =
-  JULIA_IMAGE_THREADS = 1
A more complete overview of all dependencies and their versions is also provided.
Status `/var/lib/buildkite-agent/builds/gpuci-10/julialang/reservoircomputing-dot-jl/docs/Manifest.toml`
+  JULIA_IMAGE_THREADS = 1
A more complete overview of all dependencies and their versions is also provided.
Status `/var/lib/buildkite-agent/builds/gpuci-4/julialang/reservoircomputing-dot-jl/docs/Manifest.toml`
   [47edcb42] ADTypes v0.2.5
   [a4c015fc] ANSIColoredPrinters v0.0.1
   [621f4979] AbstractFFTs v1.5.0
@@ -208,13 +208,13 @@
   [e6cf234a] RandomNumbers v1.5.3
   [3cdcf5f2] RecipesBase v1.3.4
   [01d81517] RecipesPipeline v0.6.12
- [731186ca] RecursiveArrayTools v2.38.10
+  [731186ca] RecursiveArrayTools v2.38.10
   [f2c3362d] RecursiveFactorization v0.2.21
   [189a3867] Reexport v1.2.2
   [2792f1a3] RegistryInstances v0.1.0
   [05181044] RelocatableFolders v1.0.1
   [ae029012] Requires v1.3.0
-  [7c2d2b1e] ReservoirComputing v0.9.5 `/var/lib/buildkite-agent/builds/gpuci-10/julialang/reservoircomputing-dot-jl`
+  [7c2d2b1e] ReservoirComputing v0.9.5 `/var/lib/buildkite-agent/builds/gpuci-4/julialang/reservoircomputing-dot-jl`
   [ae5879a3] ResettableStacks v1.1.1
   [79098fc4] Rmath v0.7.1
   [f2b01f46] Roots v2.0.22
@@ -410,4 +410,4 @@
   [8e850b90] libblastrampoline_jll v5.8.0+0
   [8e850ede] nghttp2_jll v1.52.0+1
   [3f19e933] p7zip_jll v17.4.0+0
Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated -m`

You can also download the manifest file and the project file.


diff --git a/previews/PR182/reca_tutorials/reca/index.html b/previews/PR182/reca_tutorials/reca/index.html index c90cbe6f..8f253eda 100644 --- a/previews/PR182/reca_tutorials/reca/index.html +++ b/previews/PR182/reca_tutorials/reca/index.html @@ -13,4 +13,4 @@ input_encoding = RandomMapping(16, 40))
RECA{Matrix{Float32}, CellularAutomata.DCA{Int64, Vector{Int64}, Int64}, RandomMaps{Int64, Int64, Int64, Matrix{Int64}, Int64}, Matrix{Float64}, StandardStates}(Float32[0.0 0.0 … 0.0 0.0; 1.0 1.0 … 0.0 0.0; 0.0 0.0 … 1.0 1.0; 0.0 0.0 … 0.0 0.0], CellularAutomata.DCA{Int64, Vector{Int64}, Int64}(90, [0, 1, 0, 1, 1, 0, 1, 0], 2, 1), RandomMaps{Int64, Int64, Int64, Matrix{Int64}, Int64}(16, 40, 16, [7 31 6 39; 23 4 9 19; … ; 5 35 8 18; 23 31 17 34], 10240, 640), NLADefault(), [0.0 0.0 … 0.0 1.0; 0.0 0.0 … 0.0 0.0; … ; 0.0 1.0 … 0.0 0.0; 0.0 0.0 … 1.0 0.0], StandardStates())

After this, the training can be performed with the chosen method.

output_layer = train(reca, output, StandardRidge(0.00001))
OutputLayer{StandardRidge{Float64}, LinearAlgebra.Adjoint{Float64, Matrix{Float64}}, Int64, Vector{Float32}}(StandardRidge{Float64}(1.0e-5), [0.002387664202289784 0.0027015434856900063 … 0.0022509868774862997 -0.0009707062602933361; 2.731616894796331e-6 -0.00499026855924859 … 0.0009682355787262944 -0.001328060548487713; -0.002053657966777859 0.0020966199745288305 … -0.0027308726440051856 0.0027137958582471706; 0.0 0.0 … 0.0 -0.0], 4, Float32[1.0, 0.0, 0.0, 0.0])

In this case, the prediction uses Predictive() with the input data equal to the training data. In addition, to test the 5 bit memory task, a conversion from Float to Bool is necessary (at the moment, we are aware of a bug that prevents passing boolean input data to the RECA models):

prediction = reca(Predictive(input), output_layer)
 final_pred = convert(AbstractArray{Float32}, prediction .> 0.5)
 
final_pred == output
true
  • 1Yilmaz, Ozgur. "Reservoir computing using cellular automata." arXiv preprint arXiv:1410.0162 (2014).
  • 2Margem, Mrwan, and Ozgür Yilmaz. "An experimental study on cellular automata reservoir in pathological sequence learning tasks." (2017).
  • 3Nichele, Stefano, and Andreas Molund. "Deep reservoir computing using cellular automata." arXiv preprint arXiv:1703.02806 (2017).
  • 4Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural computation 9.8 (1997): 1735-1780.