Commit
update not working file
giovastabile committed Nov 15, 2023
1 parent 4f93352 commit 8c7d586
124 changes: 3 additions & 121 deletions tutorials/tutorial-2.py
@@ -105,125 +105,7 @@
# We start by passing the matrices of the parameters and snapshots to the `Database()` class. Note that for now we create the ROM for the `vx` field only. We also instantiate the `POD` and `RBF` objects to have a benchmark ROM.

# In[5]:


db = Database(data.params, data.snapshots['vx'])
rom = ROM(db, POD(), RBF())
rom.fit();


# Three lines for a data-driven reduced order model, not bad!
#
# Just to have a visual check that everything is going well, we plot the approximation for new parameters in the range $[1, 80]$.

# In[6]:


new_params = np.random.uniform(size=2) * 79. + 1.

fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(16, 3))
for i, param in enumerate(new_params):
    ax[i].tricontourf(data.triang, rom.predict([param]))
    ax[i].set_title('Predicted snapshots at inlet velocity = {}'.format(param))


# We are now calculating the approximation error to see how close our reduced solution is to the full-order solution/simulation, using the **k-fold Cross-Validation** strategy. We pass the number of splits to the `ReducedOrderModel.kfold_cv_error(n_splits)` method, which operates as follows:
#
# 1. Split the dataset (parameters/snapshots) into $k$ groups/folds.
# 2. Use $k-1$ groups to calculate the reduced space and leave one group for testing.
# 3. Use the approximation/interpolation method to predict each snapshot in the testing group.
# 4. Calculate the error for each snapshot in the testing group by taking the difference between the predicted and the original snapshot.
# 5. Average the errors for predicting snapshots of the testing group/fold.
# 6. Repeat this procedure using different groups for testing and the remaining $k-1$ groups to calculate the reduced space.
# 7. In the end, we will have $k$ errors, one per group/fold, which we can average to obtain a single error value.
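# The procedure above can be sketched in plain NumPy. This is only a minimal illustration, not EZyRB's actual implementation: the hypothetical `predict` callable stands in for the whole reduction + approximation pipeline, here replaced by a trivial nearest-neighbor copy on toy data.

```python
import numpy as np

def kfold_cv_error(params, snapshots, n_splits, predict):
    """Minimal k-fold CV error sketch: `predict(train_p, train_s, test_p)`
    stands in for the reduction + approximation pipeline."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(params))
    folds = np.array_split(idx, n_splits)
    errors = []
    for k in range(n_splits):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_splits) if j != k])
        pred = predict(params[train], snapshots[train], params[test])
        # relative error, averaged over the snapshots of the fold
        err = (np.linalg.norm(pred - snapshots[test], axis=1)
               / np.linalg.norm(snapshots[test], axis=1))
        errors.append(err.mean())
    return np.array(errors)

# toy data: 1-D parameter, snapshots lying on a smooth manifold
p = np.linspace(1., 80., 50)
s = np.outer(p, np.ones(10)) + np.sin(p)[:, None]

def copy_nearest(train_p, train_s, test_p):
    # hypothetical stand-in for POD + RBF: copy the closest training snapshot
    closest = np.abs(train_p[None, :] - test_p[:, None]).argmin(axis=1)
    return train_s[closest]

errs = kfold_cv_error(p, s, n_splits=5, predict=copy_nearest)
print('Average error for each fold:', errs)
print('Average error =', errs.mean())
```

# The key point is step 6: each fold is used for testing exactly once, so every snapshot contributes to the final averaged error.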

# In[7]:


errors = rom.kfold_cv_error(n_splits=5)
print('Average error for each fold:')
for e in errors:
    print('  ', e)
print('\nAverage error = {}'.format(errors.mean()))


# Another strategy for calculating the approximation error is called **leave-one-out**, available through the `ReducedOrderModel.loo_error()` method. It is equivalent to setting the number of folds equal to the number of snapshots (in this case, `n_splits = 500`) and operates as follows:
# 1. Combine all the snapshots except one.
# 2. Calculate the reduced space.
# 3. Use the approximation/interpolation method to predict the removed snapshot.
# 4. Calculate the error by taking the difference between the predicted snapshot and the original removed one.
# 5. The error vector is obtained by repeating this procedure for each snapshot in the database.
#
# It is worth mentioning that this consumes more time: we have 500 snapshots, so the algorithm performs the space reduction and computes the approximation error 500 times. For this reason, we commented out the next line of code, in order to limit the computational effort needed to run this tutorial. Uncomment it only if you are a really brave person!
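# Leave-one-out is essentially the k-fold loop with one-element test folds. A minimal sketch (again with a hypothetical nearest-neighbor stand-in for the ROM, on toy data, not EZyRB's actual implementation):

```python
import numpy as np

def loo_error(params, snapshots, predict):
    """Leave-one-out sketch: hold out each snapshot in turn and
    measure the relative error of its prediction."""
    n = len(params)
    errors = np.empty(n)
    for i in range(n):
        train = np.delete(np.arange(n), i)          # all snapshots but one
        pred = predict(params[train], snapshots[train], params[i:i + 1])
        errors[i] = (np.linalg.norm(pred - snapshots[i:i + 1])
                     / np.linalg.norm(snapshots[i:i + 1]))
    return errors

# toy data and a hypothetical stand-in predictor
p = np.linspace(1., 80., 20)
s = np.outer(p, np.ones(5))

def copy_nearest(train_p, train_s, test_p):
    closest = np.abs(train_p[None, :] - test_p[:, None]).argmin(axis=1)
    return train_s[closest]

errs = loo_error(p, s, copy_nearest)
print(len(errs))  # one error per snapshot
```

# Note that the reduced space is rebuilt inside the loop at every iteration, which is exactly why the real method is expensive for 500 snapshots.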

# In[8]:


# errors = rom.loo_error()


# ### Comparison between different methods
#
# One of the advantages of the data-driven reduced order modeling is the modular nature of the method. Practically speaking, we need
# - a method for reducing the dimensionality of input snapshots;
# - a method for approximating the solution manifold;
#
# allowing in principle a large variety of combinations.
#
# The list of implemented **reduction methods** in EZyRB contains:
# - `POD`: *proper orthogonal decomposition*
# - `AE`: *autoencoder*
#
# while the list of implemented **approximation methods** contains:
# - `RBF`: *radial basis function interpolation*
# - `GPR`: *Gaussian process regression*
# - `KNeighborsRegressor`: *k-neighbors regression*
# - `RadiusNeighborsRegressor`: *radius neighbors regression*
# - `Linear`: *multidimensional linear interpolation*
#
# Moreover, new state-of-the-art methods will arrive, so we invite you to read the [documentation](https://mathlab.github.io/EZyRB/) for the complete list of all the possibilities!
#
# In the next cell, we create two dictionaries with the objects, so that we can easily test everything with simple `for` loops. **WARNING**: since several methods require the solution of an optimization problem (e.g. GPR, ANN, AE), the cell may take a few minutes to run.
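# Before running the full comparison, the two-role contract behind this modularity can be sketched in plain NumPy: the reduction step compresses snapshots into a few coordinates, the approximation step maps a parameter to those coordinates. This `SketchROM` class is only an illustration (POD via truncated SVD plus 1-D linear interpolation), not EZyRB's API.

```python
import numpy as np

class SketchROM:
    """Minimal sketch of the reduction + approximation contract:
    any pair of objects playing these two roles can be combined."""

    def __init__(self, rank=3):
        self.rank = rank

    def fit(self, params, snapshots):
        # reduction step: POD via truncated SVD of the snapshot matrix
        u, _, _ = np.linalg.svd(snapshots.T, full_matrices=False)
        self.modes = u[:, :self.rank]              # spatial modes
        self.params = params
        self.coeffs = self.modes.T @ snapshots.T   # reduced coordinates
        return self

    def predict(self, new_param):
        # approximation step: interpolate each reduced coordinate over the parameter
        c = np.array([np.interp(new_param, self.params, row)
                      for row in self.coeffs])
        return self.modes @ c                      # lift back to full order

params = np.linspace(1., 80., 40)
snapshots = np.stack([np.sin(0.1 * p + np.linspace(0, 1, 100)) for p in params])
rom = SketchROM(rank=3).fit(params, snapshots)
approx = rom.predict(20.5)
print(approx.shape)  # (100,)
```

# Swapping the SVD for an autoencoder, or the interpolation for a regression model, changes only one role and leaves the other untouched, which is the modularity exploited below.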

# In[9]:


reductions = {
    'POD': POD('svd', rank=10),
    'AE': AE([200, 100, 10], [10, 100, 200], nn.Tanh(), nn.Tanh(), 10),
}

approximations = {
    # 'Linear': Linear(),
    'RBF': RBF(),
    'GPR': GPR(),
    'KNeighbors': KNeighborsRegressor(),
    'RadiusNeighbors': RadiusNeighborsRegressor(),
    'ANN': ANN([20, 20], nn.Tanh(), 10),
}

header = '{:10s}'.format('')
for name in approximations:
    header += ' {:>15s}'.format(name)
print(header)

for redname, redclass in reductions.items():
    row = '{:10s}'.format(redname)
    for approxname, approxclass in approximations.items():
        rom = ROM(db, redclass, approxclass)
        rom.fit()
        row += ' {:15e}'.format(rom.kfold_cv_error(n_splits=5).mean())
    print(row)


# In a very compact way, we tested several frameworks (POD-RBF, POD-GPR, POD-NN, and so on), showing the accuracy reached by each of them.
#
# We can also note that the frameworks involving neural networks (`AE` and `ANN`) show a very poor precision. This is due to the limited number of epochs we imposed in the learning procedure. You can try increasing the number of epochs, as shown in the next cell, to obtain better results at the cost of a longer training phase.

# In[10]:


reductions['AE'] = AE([100, 10], [10, 100], nn.ReLU(), nn.ReLU(), 30000)
approximations['ANN'] = ANN([50, 10], nn.ReLU(), 30000)

rom = ROM(db, POD(), Linear())
rom.fit()
rom.predict(20.0)
