diff --git a/.DS_Store b/.DS_Store index 30636b7..a7cabee 100644 Binary files a/.DS_Store and b/.DS_Store differ diff --git a/LICENSE b/LICENSE index 98ae6e7..6804e2c 100644 --- a/LICENSE +++ b/LICENSE @@ -1,21 +1,441 @@ -MIT License - -Copyright (c) 2022 Ethan Pickering, George E.M. Karniadakis, Themistoklis Sapsis - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. +Copyright (c) 2022 Ethan Pickering, Stephen Guth, George EM Karniadakis, Themistoklis P. Sapsis + +Attribution-NonCommercial-ShareAlike 4.0 International + +======================================================================= + +Creative Commons Corporation ("Creative Commons") is not a law firm and +does not provide legal services or legal advice. Distribution of +Creative Commons public licenses does not create a lawyer-client or +other relationship. Creative Commons makes its licenses and related +information available on an "as-is" basis. 
Creative Commons gives no +warranties regarding its licenses, any material licensed under their +terms and conditions, or any related information. Creative Commons +disclaims all liability for damages resulting from their use to the +fullest extent possible. + +Using Creative Commons Public Licenses + +Creative Commons public licenses provide a standard set of terms and +conditions that creators and other rights holders may use to share +original works of authorship and other material subject to copyright +and certain other rights specified in the public license below. The +following considerations are for informational purposes only, are not +exhaustive, and do not form part of our licenses. + + Considerations for licensors: Our public licenses are + intended for use by those authorized to give the public + permission to use material in ways otherwise restricted by + copyright and certain other rights. Our licenses are + irrevocable. Licensors should read and understand the terms + and conditions of the license they choose before applying it. + Licensors should also secure all rights necessary before + applying our licenses so that the public can reuse the + material as expected. Licensors should clearly mark any + material not subject to the license. This includes other CC- + licensed material, or material used under an exception or + limitation to copyright. More considerations for licensors: + wiki.creativecommons.org/Considerations_for_licensors + + Considerations for the public: By using one of our public + licenses, a licensor grants the public permission to use the + licensed material under specified terms and conditions. If + the licensor's permission is not necessary for any reason--for + example, because of any applicable exception or limitation to + copyright--then that use is not regulated by the license. Our + licenses grant only permissions under copyright and certain + other rights that a licensor has authority to grant. 
Use of + the licensed material may still be restricted for other + reasons, including because others have copyright or other + rights in the material. A licensor may make special requests, + such as asking that all changes be marked or described. + Although not required by our licenses, you are encouraged to + respect those requests where reasonable. More considerations + for the public: + wiki.creativecommons.org/Considerations_for_licensees + +======================================================================= + +Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International +Public License + +By exercising the Licensed Rights (defined below), You accept and agree +to be bound by the terms and conditions of this Creative Commons +Attribution-NonCommercial-ShareAlike 4.0 International Public License +("Public License"). To the extent this Public License may be +interpreted as a contract, You are granted the Licensed Rights in +consideration of Your acceptance of these terms and conditions, and the +Licensor grants You such rights in consideration of benefits the +Licensor receives from making the Licensed Material available under +these terms and conditions. + + +Section 1 -- Definitions. + + a. Adapted Material means material subject to Copyright and Similar + Rights that is derived from or based upon the Licensed Material + and in which the Licensed Material is translated, altered, + arranged, transformed, or otherwise modified in a manner requiring + permission under the Copyright and Similar Rights held by the + Licensor. For purposes of this Public License, where the Licensed + Material is a musical work, performance, or sound recording, + Adapted Material is always produced where the Licensed Material is + synched in timed relation with a moving image. + + b. 
Adapter's License means the license You apply to Your Copyright + and Similar Rights in Your contributions to Adapted Material in + accordance with the terms and conditions of this Public License. + + c. BY-NC-SA Compatible License means a license listed at + creativecommons.org/compatiblelicenses, approved by Creative + Commons as essentially the equivalent of this Public License. + + d. Copyright and Similar Rights means copyright and/or similar rights + closely related to copyright including, without limitation, + performance, broadcast, sound recording, and Sui Generis Database + Rights, without regard to how the rights are labeled or + categorized. For purposes of this Public License, the rights + specified in Section 2(b)(1)-(2) are not Copyright and Similar + Rights. + + e. Effective Technological Measures means those measures that, in the + absence of proper authority, may not be circumvented under laws + fulfilling obligations under Article 11 of the WIPO Copyright + Treaty adopted on December 20, 1996, and/or similar international + agreements. + + f. Exceptions and Limitations means fair use, fair dealing, and/or + any other exception or limitation to Copyright and Similar Rights + that applies to Your use of the Licensed Material. + + g. License Elements means the license attributes listed in the name + of a Creative Commons Public License. The License Elements of this + Public License are Attribution, NonCommercial, and ShareAlike. + + h. Licensed Material means the artistic or literary work, database, + or other material to which the Licensor applied this Public + License. + + i. Licensed Rights means the rights granted to You subject to the + terms and conditions of this Public License, which are limited to + all Copyright and Similar Rights that apply to Your use of the + Licensed Material and that the Licensor has authority to license. + + j. Licensor means the individual(s) or entity(ies) granting rights + under this Public License. + + k. 
NonCommercial means not primarily intended for or directed towards + commercial advantage or monetary compensation. For purposes of + this Public License, the exchange of the Licensed Material for + other material subject to Copyright and Similar Rights by digital + file-sharing or similar means is NonCommercial provided there is + no payment of monetary compensation in connection with the + exchange. + + l. Share means to provide material to the public by any means or + process that requires permission under the Licensed Rights, such + as reproduction, public display, public performance, distribution, + dissemination, communication, or importation, and to make material + available to the public including in ways that members of the + public may access the material from a place and at a time + individually chosen by them. + + m. Sui Generis Database Rights means rights other than copyright + resulting from Directive 96/9/EC of the European Parliament and of + the Council of 11 March 1996 on the legal protection of databases, + as amended and/or succeeded, as well as other essentially + equivalent rights anywhere in the world. + + n. You means the individual or entity exercising the Licensed Rights + under this Public License. Your has a corresponding meaning. + + +Section 2 -- Scope. + + a. License grant. + + 1. Subject to the terms and conditions of this Public License, + the Licensor hereby grants You a worldwide, royalty-free, + non-sublicensable, non-exclusive, irrevocable license to + exercise the Licensed Rights in the Licensed Material to: + + a. reproduce and Share the Licensed Material, in whole or + in part, for NonCommercial purposes only; and + + b. produce, reproduce, and Share Adapted Material for + NonCommercial purposes only. + + 2. Exceptions and Limitations. For the avoidance of doubt, where + Exceptions and Limitations apply to Your use, this Public + License does not apply, and You do not need to comply with + its terms and conditions. + + 3. 
Term. The term of this Public License is specified in Section + 6(a). + + 4. Media and formats; technical modifications allowed. The + Licensor authorizes You to exercise the Licensed Rights in + all media and formats whether now known or hereafter created, + and to make technical modifications necessary to do so. The + Licensor waives and/or agrees not to assert any right or + authority to forbid You from making technical modifications + necessary to exercise the Licensed Rights, including + technical modifications necessary to circumvent Effective + Technological Measures. For purposes of this Public License, + simply making modifications authorized by this Section 2(a) + (4) never produces Adapted Material. + + 5. Downstream recipients. + + a. Offer from the Licensor -- Licensed Material. Every + recipient of the Licensed Material automatically + receives an offer from the Licensor to exercise the + Licensed Rights under the terms and conditions of this + Public License. + + b. Additional offer from the Licensor -- Adapted Material. + Every recipient of Adapted Material from You + automatically receives an offer from the Licensor to + exercise the Licensed Rights in the Adapted Material + under the conditions of the Adapter's License You apply. + + c. No downstream restrictions. You may not offer or impose + any additional or different terms or conditions on, or + apply any Effective Technological Measures to, the + Licensed Material if doing so restricts exercise of the + Licensed Rights by any recipient of the Licensed + Material. + + 6. No endorsement. Nothing in this Public License constitutes or + may be construed as permission to assert or imply that You + are, or that Your use of the Licensed Material is, connected + with, or sponsored, endorsed, or granted official status by, + the Licensor or others designated to receive attribution as + provided in Section 3(a)(1)(A)(i). + + b. Other rights. + + 1. 
Moral rights, such as the right of integrity, are not + licensed under this Public License, nor are publicity, + privacy, and/or other similar personality rights; however, to + the extent possible, the Licensor waives and/or agrees not to + assert any such rights held by the Licensor to the limited + extent necessary to allow You to exercise the Licensed + Rights, but not otherwise. + + 2. Patent and trademark rights are not licensed under this + Public License. + + 3. To the extent possible, the Licensor waives any right to + collect royalties from You for the exercise of the Licensed + Rights, whether directly or through a collecting society + under any voluntary or waivable statutory or compulsory + licensing scheme. In all other cases the Licensor expressly + reserves any right to collect such royalties, including when + the Licensed Material is used other than for NonCommercial + purposes. + + +Section 3 -- License Conditions. + +Your exercise of the Licensed Rights is expressly made subject to the +following conditions. + + a. Attribution. + + 1. If You Share the Licensed Material (including in modified + form), You must: + + a. retain the following if it is supplied by the Licensor + with the Licensed Material: + + i. identification of the creator(s) of the Licensed + Material and any others designated to receive + attribution, in any reasonable manner requested by + the Licensor (including by pseudonym if + designated); + + ii. a copyright notice; + + iii. a notice that refers to this Public License; + + iv. a notice that refers to the disclaimer of + warranties; + + v. a URI or hyperlink to the Licensed Material to the + extent reasonably practicable; + + b. indicate if You modified the Licensed Material and + retain an indication of any previous modifications; and + + c. indicate the Licensed Material is licensed under this + Public License, and include the text of, or the URI or + hyperlink to, this Public License. + + 2. 
You may satisfy the conditions in Section 3(a)(1) in any + reasonable manner based on the medium, means, and context in + which You Share the Licensed Material. For example, it may be + reasonable to satisfy the conditions by providing a URI or + hyperlink to a resource that includes the required + information. + 3. If requested by the Licensor, You must remove any of the + information required by Section 3(a)(1)(A) to the extent + reasonably practicable. + + b. ShareAlike. + + In addition to the conditions in Section 3(a), if You Share + Adapted Material You produce, the following conditions also apply. + + 1. The Adapter's License You apply must be a Creative Commons + license with the same License Elements, this version or + later, or a BY-NC-SA Compatible License. + + 2. You must include the text of, or the URI or hyperlink to, the + Adapter's License You apply. You may satisfy this condition + in any reasonable manner based on the medium, means, and + context in which You Share Adapted Material. + + 3. You may not offer or impose any additional or different terms + or conditions on, or apply any Effective Technological + Measures to, Adapted Material that restrict exercise of the + rights granted under the Adapter's License You apply. + + +Section 4 -- Sui Generis Database Rights. + +Where the Licensed Rights include Sui Generis Database Rights that +apply to Your use of the Licensed Material: + + a. for the avoidance of doubt, Section 2(a)(1) grants You the right + to extract, reuse, reproduce, and Share all or a substantial + portion of the contents of the database for NonCommercial purposes + only; + + b. if You include all or a substantial portion of the database + contents in a database in which You have Sui Generis Database + Rights, then the database in which You have Sui Generis Database + Rights (but not its individual contents) is Adapted Material, + including for purposes of Section 3(b); and + + c. 
You must comply with the conditions in Section 3(a) if You Share + all or a substantial portion of the contents of the database. + +For the avoidance of doubt, this Section 4 supplements and does not +replace Your obligations under this Public License where the Licensed +Rights include other Copyright and Similar Rights. + + +Section 5 -- Disclaimer of Warranties and Limitation of Liability. + + a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE + EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS + AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF + ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, + IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, + WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR + PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, + ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT + KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT + ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. + + b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE + TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, + NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, + INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, + COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR + USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN + ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR + DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR + IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. + + c. The disclaimer of warranties and limitation of liability provided + above shall be interpreted in a manner that, to the extent + possible, most closely approximates an absolute disclaimer and + waiver of all liability. + + +Section 6 -- Term and Termination. + + a. This Public License applies for the term of the Copyright and + Similar Rights licensed here. 
However, if You fail to comply with + this Public License, then Your rights under this Public License + terminate automatically. + + b. Where Your right to use the Licensed Material has terminated under + Section 6(a), it reinstates: + + 1. automatically as of the date the violation is cured, provided + it is cured within 30 days of Your discovery of the + violation; or + + 2. upon express reinstatement by the Licensor. + + For the avoidance of doubt, this Section 6(b) does not affect any + right the Licensor may have to seek remedies for Your violations + of this Public License. + + c. For the avoidance of doubt, the Licensor may also offer the + Licensed Material under separate terms or conditions or stop + distributing the Licensed Material at any time; however, doing so + will not terminate this Public License. + + d. Sections 1, 5, 6, 7, and 8 survive termination of this Public + License. + + +Section 7 -- Other Terms and Conditions. + + a. The Licensor shall not be bound by any additional or different + terms or conditions communicated by You unless expressly agreed. + + b. Any arrangements, understandings, or agreements regarding the + Licensed Material not stated herein are separate from and + independent of the terms and conditions of this Public License. + + +Section 8 -- Interpretation. + + a. For the avoidance of doubt, this Public License does not, and + shall not be interpreted to, reduce, limit, restrict, or impose + conditions on any use of the Licensed Material that could lawfully + be made without permission under this Public License. + + b. To the extent possible, if any provision of this Public License is + deemed unenforceable, it shall be automatically reformed to the + minimum extent necessary to make it enforceable. If the provision + cannot be reformed, it shall be severed from this Public License + without affecting the enforceability of the remaining terms and + conditions. + + c. 
No term or condition of this Public License will be waived and no + failure to comply consented to unless expressly agreed to by the + Licensor. + + d. Nothing in this Public License constitutes or may be interpreted + as a limitation upon, or waiver of, any privileges and immunities + that apply to the Licensor or You, including from the legal + processes of any jurisdiction or authority. + +======================================================================= + +Creative Commons is not a party to its public +licenses. Notwithstanding, Creative Commons may elect to apply one of +its public licenses to material it publishes and in those instances +will be considered the “Licensor.” The text of the Creative Commons +public licenses is dedicated to the public domain under the CC0 Public +Domain Dedication. Except for the limited purpose of indicating that +material is shared under a Creative Commons public license or as +otherwise permitted by the Creative Commons policies published at +creativecommons.org/policies, Creative Commons does not authorize the +use of the trademark "Creative Commons" or any other trademark or logo +of Creative Commons without its prior written consent including, +without limitation, in connection with any unauthorized modifications +to any of its public licenses or any other arrangements, +understandings, or agreements concerning use of licensed material. For +the avoidance of doubt, this paragraph does not form part of the +public licenses. + +Creative Commons may be contacted at creativecommons.org + + diff --git a/README.md b/README.md index afed846..a6818a4 100644 --- a/README.md +++ b/README.md @@ -6,18 +6,29 @@ Source code for Deep Neural Operator (DeepONet) Bayesian experimental design (i. Execute `pip install .` from the master directory (directory where setup.py is located). -This package requires the use of DeepONet from the DeepXDE library. 
While installing these requirements, it may be useful to comment the deepxde package and install separately. If using a verion of DeepXDE above v0.11.2, then DeepONet calls must be changed approprately, as stated by DeepXDEs instructions.done separately. +This package requires the use of DeepONet from the DeepXDE library. While installing these requirements, it may be useful to comment out the deepxde requirement and install it separately. If using a version of DeepXDE above v0.11.2, DeepONet calls must be changed appropriately, as stated in DeepXDE's instructions. + ## Demo -Two folders in the examples directory: SIR and NLS, provide the basis for computing any of the results from the paper. The SIR folder provides the simplest, yet complete demonstration of the algorithm. +Three folders in the examples directory, SIR, NLS, and LAMP, provide the basis for computing any of the results from the paper. The SIR folder provides the simplest, yet complete, demonstration of the algorithm. We recommend learning from the SIR implementation before applying the method to novel problems, as the other examples are computationally much more demanding. -SIR.py is setup to run in an IDE (e.g. Spyder) and dynamically plots various results. SIR_bash.py is set up to be called by the shell script, SIR_shell.sh, and will plot to the SIR directory. The advantage of the shell script is that Tensorflow slows in time, while the shell script reopens Python at each iteration ensuring that Tensorflow runs quickly. +SIR.py is set up to run in an IDE (e.g. Spyder) and dynamically plots various results; this is recommended for the best understanding of the code and process. SIR_bash.py is set up to be called by the shell script SIR_shell.sh and will plot to the SIR directory. The advantage of the shell script is that TensorFlow slows down over time, whereas the shell script reopens Python at each iteration, ensuring that TensorFlow runs quickly. +The MMT (NLS) code requires MATLAB to be called to solve the MMT equations. 
To streamline this, the shell scripts call MATLAB individually at each iteration, which also allows TensorFlow to restart. To recreate the various plots in the study, the shell script must be changed to reflect the parameters of each study. Given the number of independent experiments and conditions, computing all of them will take substantial computational time. -The NLS code requires matlab be called to solve the NLS equations. For smoother passing of the code, the shell script performs this, by individually calling matlab at each iteration. This also allows Tensforflow to restart. To recreate the various plots in the study, the shell script must be changed to reflect the parameters of each study. Considering the number of independent experiments and conditions, calculating all will take substantial computational time. +For MMT, several shell scripts are provided and labelled for the various figures. Running these scripts writes data to the results folder, from which the plotting file (and its associated blocks) creates a light version of each figure. For each case, data for one experiment (1 seed) is currently provided in the data folder at the paper link. +The LAMP code (located in the lamp folder) is divided into DNO and GP components. The DNO part is run via the bash script LAMP_10D_run.sh, which can take substantial time. The GP part is run by entering the Matlab_GP_Implementation subfolder and running RUN_ME.m + +## Data + +For some of the code, certain data sets or folders are needed; these are rather large and are provided at the following links. 
+ +The truth data file for the SIR model is located at: https://www.dropbox.com/s/defzt6usnn7m3ij/truth_data.mat?dl=0 +IC folder for the MMT solution is located at: https://www.dropbox.com/sh/izrs1n261heivr7/AAD4OBDm-QEnYGVbR4u25o_8a?dl=0 +The folder for the LAMP problem is located at: https://www.dropbox.com/sh/93u5ypxzhnxxql8/AADWY-CLBF-aK1hpEUjuR01Ba?dl=0 + ## References * [Discovering and forecasting extreme events via active learning in neural operators](https://arxiv.org/pdf/2204.02488.pdf) diff --git a/dnosearch/.DS_Store b/dnosearch/.DS_Store index aa6cfe4..af38565 100644 Binary files a/dnosearch/.DS_Store and b/dnosearch/.DS_Store differ diff --git a/dnosearch/examples/.DS_Store b/dnosearch/examples/.DS_Store index 616aba2..777d6ed 100644 Binary files a/dnosearch/examples/.DS_Store and b/dnosearch/examples/.DS_Store differ diff --git a/dnosearch/examples/intracycle/IntracycleData.mat b/dnosearch/examples/intracycle/IntracycleData.mat deleted file mode 100644 index 84d1b8a..0000000 Binary files a/dnosearch/examples/intracycle/IntracycleData.mat and /dev/null differ diff --git a/dnosearch/examples/intracycle/__pycache__/oscillator.cpython-38.pyc b/dnosearch/examples/intracycle/__pycache__/oscillator.cpython-38.pyc deleted file mode 100644 index c73fd3a..0000000 Binary files a/dnosearch/examples/intracycle/__pycache__/oscillator.cpython-38.pyc and /dev/null differ diff --git a/dnosearch/examples/intracycle/intracycle.py b/dnosearch/examples/intracycle/intracycle.py deleted file mode 100755 index 902809e..0000000 --- a/dnosearch/examples/intracycle/intracycle.py +++ /dev/null @@ -1,327 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -""" -Created on Sun Apr 10 11:36:27 2022 - -X% CDF Optimizer for the Intracycle Data - -@author: ethanpickering -""" - -# dnosearch Imports -import numpy as np -from dnosearch import (BlackBox, UniformInputs, DeepONet) -from oscillator import Noise - - -# DeepONet Imports -import deepxde as dde - -# Other Imports -import 
sys -import scipy -from scipy.interpolate import InterpolatedUnivariateSpline -import scipy.io as sio -import h5py -from scipy.interpolate import griddata -import matplotlib.pyplot as plt -from scipy.spatial import distance - - - -def map_def(theta): - d = sio.loadmat('./IntracycleData.mat') - eff = d['eff'] - amp = d['amplitude'] - phi = d['phi'] - eff = eff.reshape(462,1) - 0.15 - amp = amp.reshape(462,1) - phi = phi.reshape(462,1) - val = griddata(np.append(amp,phi,axis=1), eff, theta, method='linear', fill_value=0) - return val - -def main_init(seed,acq): - - #seed = 1 - dim = 2 - acquisition = acq - epochs = 1000 - b_layers = 3 - t_layers = 1 - neurons = 50 - init_method = 'lhs' - N = 2 - n_init = 3 - iter_num = 0 - n_keep = 1 - cdf_val = 0.75 - iters_max = 50 - - ndim = dim - udim = ndim - - branch_dim = ndim # This refers to rank in the underlying codes - domain = [ [0.25, 13], [0, 6.1] ] # Domain of amplitude and the phase - - inputs = UniformInputs(domain) - np.random.seed(seed) # Set the seet - - if iter_num == 0: - Theta = inputs.draw_samples(n_init, init_method) - if init_method == 'grd': - n_init = n_init**2 # If grd, we need to square the init values - - noise_var = 0 - my_map = BlackBox(map_def, noise_var=noise_var) - - # Creating the test points and the truth data - test_pts = 100 - Thetanew = inputs.draw_samples(test_pts, "grd") - Y_truth = map_def(Thetanew).reshape(test_pts**ndim,1) - # Calculate the truth CDF cut off - x_max = np.max(Y_truth) - x_min = np.min(Y_truth) - x_int = np.linspace(1.25*x_min,1.25*x_max,1024) - - sc = scipy.stats.gaussian_kde(Y_truth.reshape(test_pts**ndim,)) - y = sc.evaluate(x_int) - - # Lets create the CDF - dx = np.diff(x_int) - x_Y_truth = x_int - dx = np.append(dx,0) - area = dx*y - cdf = np.zeros(1024,) - for i in range(0,1024): - cdf[i] = np.sum(area[0:i]) - - temp = np.ones((1024,))*cdf - temp[temp > cdf_val] = 0 # VERY IMPORTANT - indice = np.argmax(temp) - mu_cut_Y_truth = x_Y_truth[indice] - Y_cdf = 
np.zeros(np.shape(Y_truth)) - Y_cdf = Y_truth.copy() - Y_cdf[Y_truth < mu_cut_Y_truth] = 0 - - - - # Determine the input signal, which must be discretized - nsteps = 50 # Choose the number of steps of the signal - - # DeepONet only needs a coarse version of the signal - # Thus, we can coarsen it for computational advantages - coarse = 1 # Lets keep it the same for now - - # Converts the Theta_u Parameters to Branch functions: U - def Theta_to_U(Theta,nsteps,coarse,rank): - U = np.zeros((np.shape(Theta)[0],nsteps)) - x = np.linspace(0,2*np.pi,nsteps) - print(Theta) - for j in range(0,np.shape(Theta)[0]): - U[j,:] = Theta[j,0]*np.sin(x+Theta[j,1]) / 13 - # For some reason... this was working when only the first index was used... which was the direct answer I believe - return U - - # Converts the Theta_z Parameters to Trunk Functions - def Theta_to_Z(Theta,rank): - if Theta.shape[1] == rank: - Z = np.ones((Theta.shape[0], 1)) - else: - Z = Theta[:,(2*rank):Theta.shape[1]] - return Z - - if iter_num == 0: - # Determine the training data - Us = Theta_to_U(Theta,nsteps,1,ndim) - Y = map_def(Theta).reshape(n_init,1) - - def DNO_Y_transform(x): - x_transform = x - return x_transform - - def DNO_Y_itransform(x_transform): - x = x_transform - return x - - # Set the Neural Operator Parameters - m = int(nsteps/coarse) # Number of sensor inputs - lr = 0.001 # Learning Rate - dim_x = 1 # Dimensionality of the operator values - activation = "relu" - branch = [neurons]*(b_layers+1) - branch[0] = m - trunk = [neurons]*(t_layers+1) - trunk[0] = dim_x - - net = dde.maps.OpNN( - branch, - trunk, - activation, - "Glorot normal", - use_bias=True, - stacked=False, - ) - - M = 1 # Number of snapshot ensembles - save_period = 1000 - - # bananas - model_dir = './model/' - save_str = 'coarse'+str(coarse)+'_InitMethod_'+init_method - base_dir = './data/' - save_path_data = 
base_dir+'Wave_Dim'+str(ndim)+'_'+save_str+'_'+acq+'_Seed'+str(seed)+'_Init'+str(n_init)+'_batch'+str(n_keep)+'_N'+str(N)+'_Iteration'+str(iter_num)+'.mat' - load_path_data = base_dir+'Wave_Dim'+str(ndim)+'_'+save_str+'_'+acq+'_Seed'+str(seed)+'_Init'+str(n_init)+'_batch'+str(n_keep)+'_N'+str(N)+'_Iteration'+str(iter_num-1)+'.mat' - - print(np.shape(Theta)) - print(np.shape(Y)) - - model_str = 'Rank'+str(np.shape(Theta)[1])+'_'+save_str+'_'+acq+'_Iter'+str(np.size(Y)-n_init+1) - - MSE_CDF = np.zeros((iters_max,)) - Means = np.zeros((test_pts**2,iters_max)) - Vars = np.zeros((test_pts**2,iters_max)) - Wxs = np.zeros((test_pts**2,iters_max)) - Acqs = np.zeros((test_pts**2,iters_max)) - Means_CDF = np.zeros((test_pts**2,iters_max)) - mu_cuts = np.zeros((iters_max,)) - x_ints = np.zeros((1024,iters_max)) - ys = np.zeros((1024,iters_max)) - cdfs = np.zeros((1024,iters_max)) - - for iters in range(0,iters_max): - model = DeepONet(Theta, nsteps, Theta_to_U, Theta_to_Z, Y, net, lr, epochs, N, model_dir, seed, save_period, model_str, coarse, udim, DNO_Y_transform, DNO_Y_itransform) - - - Mean_Val, Var_Val = model.predict(Thetanew) - - x_max = np.max(Mean_Val) - x_min = np.min(Mean_Val) - x_int = np.linspace(1.25*x_min,1.25*x_max,1024) - - sc = scipy.stats.gaussian_kde(Mean_Val.reshape(test_pts**ndim,)) - y = sc.evaluate(x_int) - - n_guess = n_keep # Typically restarts but will fix later mayeb - - # Lets create the CDF - dx = np.diff(x_int) - x_truth = x_int - dx = np.append(dx,0) - area = dx*y - cdf = np.zeros(1024,) - for i in range(0,1024): - cdf[i] = np.sum(area[0:i]) - - temp = np.ones((1024,))*cdf - temp[temp > cdf_val] = 0 # VERY IMPORTANT - indice = np.argmax(temp) - mu_cut = x_truth[indice] - - #acquisition = 'cdf' - beta = 1#10000**ndim - var_vals = Var_Val - mean_vals = Mean_Val - n_monte = test_pts**ndim - - if acquisition == "us": - scores = - var_vals.reshape(n_monte,) - elif acquisition == "uslw": - print('Not implemented') - #scores = - wx * 
var_vals.reshape(n_monte,) - elif acquisition == "bandit_us": - #gamma = np.max(mean_vals) / np.max(var_vals) # Dynamic gamma - gamma = 1 - scores = - (mean_vals.reshape(n_monte,) + gamma * var_vals.reshape(n_monte,)) - elif acquisition == "bandit_uslw": - print('Not implemented') - #scores = - (mean_vals.reshape(n_monte,) + np.max(mean_vals) / np.max(var_vals) / np.max(wx) * wx * var_vals.reshape(n_monte,)) - elif acquisition == "cdf": - mean_vals = mean_vals.reshape(n_monte,) - wx = np.zeros(np.shape(mean_vals)) - wx[mean_vals > mu_cut] = beta - wx[mean_vals <= mu_cut] = 0 - #scores = - wx * var_vals.reshape(n_monte,) - scores = -wx.reshape(n_monte,)*Var_Val.reshape(n_monte,) - - sorted_idxs = np.argsort(scores,axis = 0) - scores = scores.reshape(n_monte,) - - # New version where we impose the radius earlier - sorted_scores = scores[sorted_idxs[0:n_monte]] - sorted_x0 = Thetanew[sorted_idxs[0:n_monte], :] - n_counter = 0 - - x0_guess = np.zeros((n_guess,ndim)) - score_guess = np.zeros((n_guess,)) - - x0_guess[0,:] = sorted_x0[0,:] - score_guess[0] = sorted_scores[0] - - # Now we need to remove the optimal from consideration, and remove values within a radius of influence - max_domain_distance = np.sqrt((inputs.domain[1][1]-inputs.domain[0][0])**2*ndim) - r_val = 0.025*max_domain_distance - - for i in range(1,n_guess): - # Now remove the optimal value - sorted_x0 = np.delete(sorted_x0, 0, axis=0) - sorted_scores = np.delete(sorted_scores, 0) - distances = np.zeros((np.size(sorted_scores),)) - - for j in range(0,min(1000,np.size(sorted_scores))): # This is not perfect because it does not eliminate all points to be considered within the radius, but it should be more robust - distances[j] = distance.euclidean(x0_guess[i-1,:], sorted_x0[j,:]) - sorted_x0 = sorted_x0[distances > r_val,:] - sorted_scores = sorted_scores[distances > r_val] - x0_guess[i,:] = sorted_x0[0,:] - score_guess[i] = sorted_scores[0] - - scores_opt = score_guess - theta_opt = x0_guess - - Theta_opt 
= theta_opt.reshape(1,ndim) - Y_opt = map_def(Theta_opt).reshape(1,1) - - - plt.pcolor(Thetanew[:,0].reshape(test_pts,test_pts),Thetanew[:,1].reshape(test_pts,test_pts),Mean_Val.reshape(test_pts,test_pts)) - plt.title('Mean Model Prediction') - plt.colorbar() - plt.plot(Theta[:,0], Theta[:,1], 'wo') - plt.title('Iterations:'+str(np.size(Y)-n_init)) - plt.show() - plt.pcolor(Thetanew[:,0].reshape(test_pts,test_pts),Thetanew[:,1].reshape(test_pts,test_pts),wx.reshape(test_pts,test_pts)) - plt.title('CDF Cut') - plt.show() - plt.pcolor(Thetanew[:,0].reshape(test_pts,test_pts),Thetanew[:,1].reshape(test_pts,test_pts),Var_Val.reshape(test_pts,test_pts)/np.max(Var_Val)) - plt.title('Model Variance') - plt.plot(Theta[:,0], Theta[:,1], 'wo') - plt.show() - plt.pcolor(Thetanew[:,0].reshape(test_pts,test_pts),Thetanew[:,1].reshape(test_pts,test_pts),-scores.reshape(test_pts,test_pts)) - plt.title('Acquisition Function') - plt.plot(Theta_opt[0,0], Theta_opt[0,1], 'ro') - plt.show() - - - Y_NN_cdf = Mean_Val.copy() - Y_NN_cdf[Y_truth < mu_cut_Y_truth] = 0 - - MSE_CDF[iters] = np.sum((Y_cdf - Y_NN_cdf)**2) / np.sum(Y_truth > mu_cut_Y_truth) - print(MSE_CDF[-1]) - - Theta = np.append(Theta, Theta_opt, axis = 0) - Y = np.append(Y, Y_opt, axis = 0) - - Means[:,iters] = Mean_Val.reshape(test_pts**ndim,) - Vars[:,iters] = Var_Val.reshape(test_pts**ndim,) - Wxs[:,iters] = wx.reshape(test_pts**ndim,) - Acqs[:,iters] = -scores.reshape(test_pts**ndim,) - Means_CDF[:,iters] = Y_NN_cdf.reshape(test_pts**ndim,) - mu_cuts[iters] = mu_cut - x_ints[:,iters] = x_int.reshape(1024,) - ys[:,iters] = y.reshape(1024,) - cdfs[:,iters] = cdf.reshape(1024,) - - # plt.plot(np.log10(MSE_CDF)) - - sio.savemat('./data/Intracycle_Seed'+str(seed)+'_'+acquisition+'_'+str(cdf_val)+'_Opt_N'+str(N)+'.mat', - {'Means':Means, 'Vars':Vars, 'Wxs':Wxs, 'Acqs':Acqs, 'Means_CDF':Means_CDF, 'mu_cuts':mu_cuts, 'x_ints':x_ints, 'ys':ys, 'cdfs':cdfs, - 'MSE_CDF':MSE_CDF, 'Y':Y, 'Theta':Theta, 'Y_cdf':Y_cdf, 
'Y_truth':Y_truth, 'Y_NN_cdf':Y_NN_cdf, 'mu_cut_Y_truth':mu_cut_Y_truth}) - -main_init(int(sys.argv[1]), sys.argv[2]) \ No newline at end of file diff --git a/dnosearch/examples/intracycle/intracycle_US.py b/dnosearch/examples/intracycle/intracycle_US.py deleted file mode 100755 index d5972f0..0000000 --- a/dnosearch/examples/intracycle/intracycle_US.py +++ /dev/null @@ -1,170 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -""" -Created on Sun Mar 27 11:53:54 2022 -# -@author: ethanpickering -""" -# dnosearch Imports -import numpy as np -from dnosearch import (BlackBox, UniformInputs, DeepONet) -from oscillator import Noise - - -# DeepONet Imports -import deepxde as dde - -# Other Imports -import sys -import scipy -from scipy.interpolate import InterpolatedUnivariateSpline -import scipy.io as sio -import h5py -from scipy.interpolate import griddata -import matplotlib.pyplot as plt -from scipy.spatial import distance - - - -def map_def(theta): - d = sio.loadmat('./IntracycleData.mat') - eff = d['eff'] - amp = d['amplitude'] - phi = d['phi'] - eff = eff.reshape(462,1) - 0.15 - amp = amp.reshape(462,1) - phi = phi.reshape(462,1) - val = griddata(np.append(amp,phi,axis=1), eff, theta, method='linear', fill_value=0) - return val - -seed = 1 -dim = 2 -acq = 'US' -epochs = 1000 -b_layers = 3 -t_layers = 1 -neurons = 50 -init_method = 'lhs' -N = 2 -n_init = 3 -iter_num = 0 -n_keep = 1 -iters_max = 25 - - -ndim = dim -udim = ndim -branch_dim = ndim # This refers to rank in the underlying codes -domain = [ [0.25, 13], [0, 6.1] ] # Domain of amplitude and the phase - -inputs = UniformInputs(domain) -np.random.seed(seed) # Set the seet - -if iter_num == 0: - Theta = inputs.draw_samples(n_init, init_method) - if init_method == 'grd': - n_init = n_init**2 # If grd, we need to square the init values - -noise_var = 0 -my_map = BlackBox(map_def, noise_var=noise_var) - - -# Determine the input signal, which must be discretized -nsteps = 50 # Choose the number of steps of 
the signal - -# DeepONet only needs a coarse version of the signal -# Thus, we can coarsen it for computational advantages -coarse = 1 # Lets keep it the same for now - -# Converts the Theta_u Parameters to Branch functions: U -def Theta_to_U(Theta,nsteps,coarse,rank): - U = np.zeros((np.shape(Theta)[0],nsteps)) - x = np.linspace(0,2*np.pi,nsteps) - print(Theta) - for j in range(0,np.shape(Theta)[0]): - U[j,:] = Theta[j,0]*np.sin(x+Theta[j,1]) / 13 - # For some reason... this was working when only the first index was used... which was the direct answer I believe - return U - -# Converts the Theta_z Parameters to Trunk Functions -def Theta_to_Z(Theta,rank): - if Theta.shape[1] == rank: - Z = np.ones((Theta.shape[0], 1)) - else: - Z = Theta[:,(2*rank):Theta.shape[1]] - return Z - -if iter_num == 0: - # Determine the training data - Us = Theta_to_U(Theta,nsteps,1,ndim) - Y = map_def(Theta).reshape(n_init,1) - -def DNO_Y_transform(x): - x_transform = x - return x_transform - -def DNO_Y_itransform(x_transform): - x = x_transform - return x - - -# Set the Neural Operator Parameters -m = int(nsteps/coarse) # Number of sensor inputs -lr = 0.001 # Learning Rate -dim_x = 1 # Dimensionality of the operator values -activation = "relu" -branch = [neurons]*(b_layers+1) -branch[0] = m -trunk = [neurons]*(t_layers+1) -trunk[0] = dim_x - -net = dde.maps.OpNN( - branch, - trunk, - activation, - "Glorot normal", - use_bias=True, - stacked=False, -) - -save_period = 1000 - -model_dir = './model/' -save_str = 'coarse'+str(coarse)+'_InitMethod_'+init_method -base_dir = './data/' -save_path_data = base_dir+'Wave_Dim'+str(ndim)+'_'+save_str+'_'+acq+'_Seed'+str(seed)+'_Init'+str(n_init)+'_batch'+str(n_keep)+'_N'+str(N)+'_Iteration'+str(iter_num)+'.mat' -load_path_data = base_dir+'Wave_Dim'+str(ndim)+'_'+save_str+'_'+acq+'_Seed'+str(seed)+'_Init'+str(n_init)+'_batch'+str(n_keep)+'_N'+str(N)+'_Iteration'+str(iter_num-1)+'.mat' - -print(np.shape(Theta)) -print(np.shape(Y)) - -model_str = 
'Rank'+str(np.shape(Theta)[1])+'_'+save_str+'_'+acq+'_Iter'+str(np.size(Y)-n_init+1) - -for iters in range(0,iters_max): - model = DeepONet(Theta, nsteps, Theta_to_U, Theta_to_Z, Y, net, lr, epochs, N, model_dir, seed, save_period, model_str, coarse, udim, DNO_Y_transform, DNO_Y_itransform) - - test_pts = 100 - Thetanew = inputs.draw_samples(test_pts, "grd") - Mean_Val, Var_Val = model.predict(Thetanew) - - plt.pcolor(Thetanew[:,0].reshape(test_pts,test_pts),Thetanew[:,1].reshape(test_pts,test_pts),Mean_Val.reshape(test_pts,test_pts)) - plt.title('Mean Model Prediction') - plt.colorbar() - #plt.plot(Theta[:,0], Theta[:,1], 'o') - plt.title('Iterations:'+str(np.size(Y)-n_init)) - plt.show() - plt.pcolor(Thetanew[:,0].reshape(test_pts,test_pts),Thetanew[:,1].reshape(test_pts,test_pts),Var_Val.reshape(test_pts,test_pts)/np.max(Var_Val)) - plt.title('Model Variance') - plt.plot(Theta[:,0], Theta[:,1], 'o') - Theta_opt = Thetanew[np.argmax(Var_Val)] - plt.plot(Theta_opt[0], Theta_opt[1], 'o') - plt.show() - plt.draw() - - # We will simply impose the US sampling technique - Theta_opt = Thetanew[np.argmax(Var_Val)] - Theta_opt = Theta_opt.reshape(1,ndim) - #Us_opt = Theta_to_U(Theta_opt,nsteps,1,ndim) - Y_opt = map_def(Theta_opt).reshape(1,1) - - Theta = np.append(Theta, Theta_opt, axis = 0) - Y = np.append(Y, Y_opt, axis = 0) \ No newline at end of file diff --git a/dnosearch/examples/intracycle/intracycle_shell.sh b/dnosearch/examples/intracycle/intracycle_shell.sh deleted file mode 100644 index ca95046..0000000 --- a/dnosearch/examples/intracycle/intracycle_shell.sh +++ /dev/null @@ -1,14 +0,0 @@ -acq='cdf' -for seed in {1..10} -do - python3 intracycle.py $seed $acq -wait -done - - -acq='us' -for seed in {1..10} -do - python3 intracycle.py $seed $acq -wait -done diff --git a/dnosearch/examples/intracycle/model/checkpoint b/dnosearch/examples/intracycle/model/checkpoint deleted file mode 100644 index c313ad2..0000000 --- a/dnosearch/examples/intracycle/model/checkpoint 
+++ /dev/null @@ -1,2 +0,0 @@ -model_checkpoint_path: "N1seed3__model.ckpt-1000" -all_model_checkpoint_paths: "N1seed3__model.ckpt-1000" diff --git a/dnosearch/examples/intracycle/model/model/checkpoint b/dnosearch/examples/intracycle/model/model/checkpoint deleted file mode 100644 index 9687a69..0000000 --- a/dnosearch/examples/intracycle/model/model/checkpoint +++ /dev/null @@ -1,2 +0,0 @@ -model_checkpoint_path: "N1seed1_Rank2_coarse1_InitMethod_lhs_US_Iter1_model.ckpt-1000" -all_model_checkpoint_paths: "N1seed1_Rank2_coarse1_InitMethod_lhs_US_Iter1_model.ckpt-1000" diff --git a/dnosearch/examples/intracycle/oscillator.py b/dnosearch/examples/intracycle/oscillator.py deleted file mode 100644 index 8fa2b7a..0000000 --- a/dnosearch/examples/intracycle/oscillator.py +++ /dev/null @@ -1,121 +0,0 @@ -import numpy as np -from scipy.interpolate import interp1d - - -class Oscillator: - - def __init__(self, noise, tf, nsteps, u_init, - delta=1.5, alpha=1.0, beta=0.1, x1=0.5, x2=1.5): - self.delta = delta - self.alpha = alpha - self.beta = beta - self.x1 = x1 - self.x2 = x2 - self.noise = noise - self.tf = tf - self.nsteps = nsteps - self.u_init = u_init - - def rhs(self, u, t): - u0, u1 = u - f0 = u1 - f1 = -self.delta*u1 - self.f_nl(u0) + self.sample_noise(t) - f = [f0, f1] - return f - - def f_nl(self, u0): - if np.abs(u0) <= self.x1: - return self.alpha * u0 - elif np.abs(u0) >= self.x2: - return self.alpha * self.x1 * np.sign(u0) \ - + self.beta * (u0 - self.x2 * np.sign(u0))**3 - else: - return self.alpha * self.x1 * np.sign(u0) - - def solve(self, theta): - self.sample_noise = self.noise.get_sample_interp(theta) - time = np.linspace(0, self.tf, self.nsteps+1) - solver = ODESolver(self.rhs) - solver.set_ics(self.u_init) - u, t = solver.solve(time) - return u, t - - -class Noise: - - def __init__(self, domain, sigma=0.1, ell=4.0): - self.ti = domain[0] - self.tf = domain[1] - self.tl = domain[1] - domain[0] - self.R = self.get_covariance(sigma, ell) - self.lam, 
self.phi = self.kle(self.R) - - def get_covariance(self, sigma, ell): - m = 500 + 1 - self.t = np.linspace(self.ti, self.tf, m) - self.dt = self.tl/(m-1) - R = np.zeros([m, m]) - for i in range(m): - for j in range(m): - tau = self.t[j] - self.t[i] - R[i,j] = sigma*np.exp(-tau**2/(2*ell**2)) - return R*self.dt - - def kle(self, R): - lam, phi = np.linalg.eigh(R) - phi = phi/np.sqrt(self.dt) - idx = lam.argsort()[::-1] - lam = lam[idx] - phi = phi[:,idx] - return lam, phi - - def get_eigenvalues(self, trunc=None): - return self.lam[0:trunc] - - def get_eigenvectors(self, trunc=None): - return self.phi[:,0:trunc] - - def get_sample(self, xi): - nRV = np.asarray(xi).shape[0] - phi_trunc = self.phi[:,0:nRV] - lam_trunc = self.lam[0:nRV] - lam_sqrtm = np.diag(np.sqrt(lam_trunc)) - sample = np.dot(phi_trunc, np.dot(lam_sqrtm, xi)) - return sample - - def get_sample_interp(self, xi): - sample = self.get_sample(xi.ravel()) - sample_int = interp1d(self.t, sample, kind='cubic') - return sample_int - - -class ODESolver: - - def __init__(self, f): - self.f = lambda u, t: np.asarray(f(u, t), float) - - def set_ics(self, U0): - U0 = np.asarray(U0) - self.neq = U0.size - self.U0 = U0 - - def advance(self): - u, f, k, t = self.u, self.f, self.k, self.t - dt = t[k+1] - t[k] - K1 = dt*f(u[k], t[k]) - K2 = dt*f(u[k] + 0.5*K1, t[k] + 0.5*dt) - K3 = dt*f(u[k] + 0.5*K2, t[k] + 0.5*dt) - K4 = dt*f(u[k] + K3, t[k] + dt) - u_new = u[k] + (1/6.0)*(K1 + 2*K2 + 2*K3 + K4) - return u_new - - def solve(self, time): - self.t = np.asarray(time) - n = self.t.size - self.u = np.zeros((n,self.neq)) - self.u[0] = self.U0 - for k in range(n-1): - self.k = k - self.u[k+1] = self.advance() - return self.u[:k+2], self.t[:k+2] - diff --git a/dnosearch/examples/jet_control/JC_2P_1U_shell.sh b/dnosearch/examples/jet_control/JC_2P_1U_shell.sh deleted file mode 100644 index 8cc284d..0000000 --- a/dnosearch/examples/jet_control/JC_2P_1U_shell.sh +++ /dev/null @@ -1,22 +0,0 @@ -dim=2 -acq='US_LW' -n_init=100 
-epochs=1000 -b_layers=8 -t_layers=1 -neurons=300 -init_method='lhs' -N=2 - -seed_start=2 -seed_end=2 - -for ((seed=$seed_start;seed<=$seed_end;seed++)) -do - for iter_num in {0..50} - do - python3 ./jet_control_2DPhi_U1D_bash.py $seed $iter_num $dim $acq $n_init $epochs $b_layers $t_layers $neurons $init_method $N - done - wait -done - diff --git a/dnosearch/examples/jet_control/JC_2P_2U_shell.sh b/dnosearch/examples/jet_control/JC_2P_2U_shell.sh deleted file mode 100644 index 1330263..0000000 --- a/dnosearch/examples/jet_control/JC_2P_2U_shell.sh +++ /dev/null @@ -1,22 +0,0 @@ -dim=2 -acq='LCB_LW' -n_init=3 -epochs=1000 -b_layers=8 -t_layers=1 -neurons=300 -init_method='lhs' -N=2 - -seed_start=1 -seed_end=1 - -for ((seed=$seed_start;seed<=$seed_end;seed++)) -do - for iter_num in {0..50} - do - python3 ./jet_control_2DPhi_2DU_bash.py $seed $iter_num $dim $acq $n_init $epochs $b_layers $t_layers $neurons $init_method $N - done - wait -done - diff --git a/dnosearch/examples/jet_control/JC_2P_2Ua_shell.sh b/dnosearch/examples/jet_control/JC_2P_2Ua_shell.sh deleted file mode 100644 index c17c47e..0000000 --- a/dnosearch/examples/jet_control/JC_2P_2Ua_shell.sh +++ /dev/null @@ -1,22 +0,0 @@ -dim=2 -acq='LCB_LW' -n_init=3 -epochs=1000 -b_layers=8 -t_layers=1 -neurons=300 -init_method='lhs' -N=2 - -seed_start=1 -seed_end=1 - -for ((seed=$seed_start;seed<=$seed_end;seed++)) -do - for iter_num in {0..50} - do - python3 ./jet_control_2DPhi_2DUa_bash.py $seed $iter_num $dim $acq $n_init $epochs $b_layers $t_layers $neurons $init_method $N - done - wait -done - diff --git a/dnosearch/examples/jet_control/__pycache__/oscillator.cpython-38.pyc b/dnosearch/examples/jet_control/__pycache__/oscillator.cpython-38.pyc deleted file mode 100644 index 0b6c7f9..0000000 Binary files a/dnosearch/examples/jet_control/__pycache__/oscillator.cpython-38.pyc and /dev/null differ diff --git a/dnosearch/examples/jet_control/data/.DS_Store b/dnosearch/examples/jet_control/data/.DS_Store deleted 
file mode 100644 index 5008ddf..0000000 Binary files a/dnosearch/examples/jet_control/data/.DS_Store and /dev/null differ diff --git a/dnosearch/examples/jet_control/jet_control.py b/dnosearch/examples/jet_control/jet_control.py deleted file mode 100755 index be227a4..0000000 --- a/dnosearch/examples/jet_control/jet_control.py +++ /dev/null @@ -1,632 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -""" -Created on Fri Mar 11 12:06:32 2022 - Active learning via NNs of a 2D stochastic SIR Pandemic Model -@author: ethanpickering -""" - -# DNOSearch Imports -import numpy as np -from dnosearch import (BlackBox, GaussianInputs, DeepONet) -from oscillator import Noise - -# DeepONet Imports -import deepxde as dde - -# Other Imports -import sys -import scipy -from scipy.interpolate import InterpolatedUnivariateSpline -import scipy.io as sio -import h5py -import matplotlib.pyplot as plt -plt.rcParams.update({ - "text.usetex": False, - "font.family": "serif", - "font.serif": ["Times"]}) - -# Variables -iter_num = 0 # Iteration number -dim = 2 # Dimension of the stochastic excitation (infection rate) -acq = 'US_LW' # Acquisition type - currently only Likelihood-weighted uncertainty sampling -n_init = 3 # Initial data points -epochs = 1000 # Number of training epochs -b_layers = 8 # Branch Layers -t_layers = 1 # Trunk Layers -neurons = 300 # Number of neurons per layer -init_method = 'lhs' # How initial data are pulled -N = 2 # Number of DNO ensembles -seed = 3 # Seed for initial condition consistency - NOTE due to gradient descent of the DNO, the seed will not provide perfectly similar results, but will be analogous -iters_max = 15 # Iterations to perform - -print_plots = True - - -#%% The map we are defining here is -# Input = sum_i^N sin(x+phi_0+theta_i) -# Output = sum_i^N sin(x(end)+phi_0+theta_i) = sum_i^N sin(2*pi+phi_0+theta_i) - -# This I-O relationship means we are interested in an identity mapping -# of the last point in the input signal - -def 
map_def(Theta,phi_0,wavenumber): - f = np.zeros((np.shape(Theta)[0],1)) - #x = np.linspace(0,2*np.pi) + phi_0 - #Theta = Theta.reshape((np.shape(Theta)[0],1)) # This resize is not general - print(Theta) - for j in range(0,np.shape(Theta)[0]): - for i in range(0,np.shape(Theta)[1]): - # CHANGEED TO SQUARED - f[j] = f[j] + np.sin(wavenumber*(2*np.pi+phi_0+Theta[j,i]))**3 - return f - - -def main(seed,iter_num,dim,acq,n_init,epochs,b_layers,t_layers,neurons,init_method,N,iters_max,print_plots): - - T = 45 - dt = 0.1 - gamma = 0.25 - delta = 0 - N_people = 10*10**7 - I0 = 50 - - ndim = dim - udim = dim # The dimensionality of the U components of Theta - - np.random.seed(seed) - noise_var = 0 - my_map = BlackBox(map_def, noise_var=noise_var) - mean, cov = np.zeros(ndim), np.ones(ndim) - domain = [ [-6, 6] ] * ndim - inputs = GaussianInputs(domain, mean, cov) - - if iter_num == 0: - Theta = inputs.draw_samples(n_init, init_method) - - noise = Noise([0,1], sigma=0.1, ell=1) - - # Needed to determine U - nsteps = int(T/dt) - x_vals = np.linspace(0, 1, nsteps+1) - x_vals = x_vals[0:-1] - - # DeepONet only needs a coarse version of the signal - coarse = 4 - - # Create the X to U map, which is actually theta to U - multiplier = 3*10**-9 # Special for the map - - - def Theta_to_U(Theta,nsteps,coarse,udim): - U1 = noise.get_sample(np.transpose(Theta)) - - NN_grid = np.linspace(0,1,nsteps) - Noise_grid = np.linspace(0,1,np.shape(U1)[0]) - - U = np.zeros((np.shape(Theta)[0],nsteps)) - for i in range(0,np.shape(Theta)[0]): - interp_func = InterpolatedUnivariateSpline(Noise_grid, U1[:,i], k=1) - U[i,:] = interp_func(NN_grid) - - coarser_inds = np.linspace(0,nsteps-1,int(nsteps/coarse)).astype(int) - U = U[:,coarser_inds] - return U - - - def Theta_to_Z(Theta,udim): - if Theta.shape[1] == udim: - Z = np.ones((Theta.shape[0], 1)) - else: - Z = Theta[:,(udim+1):Theta.shape[1]] - return Z - - if iter_num == 0: - - # Determine the training data - Y = np.zeros((n_init,)) - Us = 
Theta_to_U(Theta,nsteps,1,2)+2.55 - Us = Us*multiplier - - for i in range(0,n_init): - I_temp = map_def(Us[i,:],gamma,delta,N_people,I0,T,dt, np.zeros(np.shape(Us[i,:]))) - Y[i] = I_temp[-1] - - Y = Y.reshape(n_init,1) - - - m = int(nsteps/coarse) #604*2 - lr = 0.001 - dim_x = 1 - activation = "relu" - branch = [neurons]*(b_layers+1) - branch[0] = m - trunk = [neurons]*(t_layers+1) - trunk[0] = dim_x - - net = dde.maps.OpNN( - branch, - trunk, - activation, - "Glorot normal", - use_bias=True, - stacked=False, - ) - save_period = 1000 - - # These functions are defined for normalizing, standardizing, or flattening internal to DeepONet - - def DNO_Y_transform(x): - x_transform = np.log10(x)/10 - 0.5 - return x_transform - - def DNO_Y_itransform(x_transform): - x = 10**((x_transform+0.5)*10) - return x - - # Keeping track of the metric - pys = np.zeros((iters_max,10000)) - log10_errors = np.zeros((iters_max,)) - - ########################################## - # Loop through iterations - ########################################## - - for iter_num in range(0,iters_max): - # Train the model - np.random.seed(np.size(Y)) - model_dir = './' - model_str = '' - model = DeepONet(Theta, nsteps, Theta_to_U, Theta_to_Z, Y, net, lr, epochs, N, model_dir, seed, save_period, model_str, coarse, udim, DNO_Y_transform, DNO_Y_itransform) - - # Pull a fine set of test_pts in the domain - test_pts = 75 - Theta_test = inputs.draw_samples(test_pts, "grd") - # Predict - Mean_Val, Var_Val = model.predict(Theta_test) - - # Determine bounds for evaluating the metric - x_max = np.max(Mean_Val) - x_min = np.min(Mean_Val) - x_int = np.linspace(x_min,x_max,10000) # Linearly spaced points - x_int_standard = np.linspace(0,10**8,10000) # Static for pt-wise comparisons - - # Create the weights/exploitation values - px = inputs.pdf(Theta_test) - sc = scipy.stats.gaussian_kde(Mean_Val.reshape(test_pts**2,), weights=px) # Fit a Gaussian KDE using px input weights - py = sc.evaluate(x_int) # Evaluate at x_int 
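The likelihood-weighting step in the deleted SIR script — fit a Gaussian KDE to the predicted means weighted by the input density `px`, then form weights `wx = px/py` — can be sketched in isolation. The toy mean function and Gaussian input density below are stand-ins for the trained DeepONet prediction and `inputs.pdf`; only the weighting logic mirrors the script:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm
from scipy.interpolate import InterpolatedUnivariateSpline

rng = np.random.default_rng(0)
theta = rng.normal(size=2000)        # stand-in for the Theta_test grid (1D here)
px = norm.pdf(theta)                 # input density p(theta)
mean_val = np.sin(theta)             # stand-in for the DeepONet mean prediction

# Output-density estimate p_y, weighted by the input density
sc = gaussian_kde(mean_val, weights=px)
x_int = np.linspace(mean_val.min(), mean_val.max(), 1000)
py = np.maximum(sc.evaluate(x_int), 1e-16)       # floor spuriously small values
py_interp = InterpolatedUnivariateSpline(x_int, py, k=1)

# Likelihood weights: large where inputs are common but outputs are rare,
# so the acquisition wx * Var favors dangerous, under-sampled regions
wx = px / py_interp(mean_val)
```

The acquisition in the script is then simply `wx * Var_Val`, i.e. \(w(\theta)\,\sigma^2(\theta)\).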
- py[py<10**-16] = 10**-16 # Eliminate spuriously small values (smaller than numerical precision) - py_standard = sc.evaluate(x_int_standard) # Evaluate for pt-wise comparisons - py_interp = InterpolatedUnivariateSpline(x_int, py, k=1) # Create interpolation function - - # Construct the weights - wx = px.reshape(test_pts**2,)/py_interp(Mean_Val).reshape(test_pts**2,) - wx = wx.reshape(test_pts**2,1) - - # Compute the acquisition values - ax = wx*Var_Val # This is simply w(\theta) \sigma^2(\theta) - note that x and \theta are used interchangeably - - # Find the optimal acquisition point - Theta_opt = Theta_test[np.argmax(ax),:] - Theta_opt = Theta_opt.reshape(1,2) - - # Calculate the associated U - U_opt = Theta_to_U(Theta_opt,nsteps,1,2)+2.55 - U_opt = U_opt*multiplier - U_opt = U_opt.reshape(np.size(U_opt),1) - - # Pass to the Map - I_temp = map_def(U_opt,gamma,delta,N_people,I0,T,dt,np.zeros(np.shape(U_opt))) - Y_opt = I_temp[-1] - Y_opt = Y_opt.reshape(1,1) - - # Append the value for the next step - Theta = np.append(Theta, Theta_opt, axis = 0) - Y = np.append(Y, Y_opt, axis = 0) - pys[iter_num,:] = py_standard - sio.savemat('SIR_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num)+'.mat', {'pys':pys, 'x_int_standard':x_int_standard, 'Theta':Theta, 'U_opt':U_opt, 'I_temp':I_temp, 'wx':wx, 'ax':ax, 'py':py, 'x_int':x_int, 'Y':Y, 'Mean_Val':Mean_Val, 'Var_Val':Var_Val, 'n_init':n_init, 'N':N, 'seed':seed, 'Theta_test':Theta_test}) - - if iter_num == 0: # Calculate the truth values - d = sio.loadmat('./truth_data_py.mat') - py_standard_truth = d['py_standard'] - py_standard_truth = py_standard_truth.reshape(10000,) - - log10_error = np.sum(np.abs(np.log10(py_standard[50:2750]) - np.log10(py_standard_truth[50:2750])))/(x_int_standard[2] -x_int_standard[1]) - log10_errors[iter_num] = log10_error - print('The log10 of the log-pdf error is: '+str(np.log10(log10_error))) - - if print_plots: - - fig = plt.figure() - gs = fig.add_gridspec(2, 3, hspace=0.15, wspace=0.3) - 
(ax1, ax2, ax3), (ax4, ax5, ax6) = gs.subplots()#(sharex='col', sharey='row') - fig.suptitle('2D Stochastic Pandemic Search, Iteration '+str(iter_num)) - ax1.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), Mean_Val.reshape(test_pts, test_pts)) - ax1.set_aspect('equal') - ax1.annotate('Mean Solution', - xy=(-3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax1.set_ylabel('$\theta_2$') - - ax2.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), Var_Val.reshape(test_pts, test_pts)) - ax2.plot(Theta[0:np.size(Y)-1,0], Theta[0:np.size(Y)-1,1], 'wo') - ax2.set_aspect('equal') - ax2.annotate('Variance', - xy=(-3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - - ax3.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), wx.reshape(test_pts, test_pts)) - ax3.set_aspect('equal') - ax3.annotate('Danger Scores', - xy=(-3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax3.set_ylabel('$\theta_2$') - #ax3.set_xlabel('$\theta_1$') - - ax4.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), ax.reshape(test_pts, test_pts)) - ax4.plot(Theta[-1,0], Theta[-1,1], 'ro') - ax4.set_aspect('equal') - ax4.annotate('Acquisition', - xy=(-3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax4.set_xlabel('$\theta_1$') - ax4.set_xlim([-6,6]) - ax4.set_ylim([-6,6]) - - ax5.semilogy(x_int_standard, py_standard_truth, label ='True PDF' ) - ax5.semilogy(x_int_standard, py_standard, label='NN Approx.') - ax5.set_xlim([0,2.75*10**7]) - 
ax5.set_ylim([10**-10,10**-6.75]) - ax5.legend(loc='lower left') - #ax5.annotate('Output PDFs', - #xy=(-3, 5), xycoords='data', - #xytext=(0.7, 0.95), textcoords='axes fraction', - #horizontalalignment='right', verticalalignment='top',color='white') - ax5.set_xlabel('New Infections') - - ax6.plot(np.linspace(0,iter_num,iter_num+1),np.log10(log10_errors[0:iter_num+1]), label='Error') - #ax6.annotate('Log10 of log-pdf Error', - #xy=(-3, 5), xycoords='data', - #xytext=(0.7, 0.95), textcoords='axes fraction', - #horizontalalignment='right', verticalalignment='top',color='white') - ax6.legend(loc='lower left') - ax6.set_xlabel('Iterations') - plt.show() - - sio.savemat('./data/SIR_Errors_Seed_'+str(seed)+'_N'+str(N)+'.mat', {'log10_errors':log10_errors}) - return - -# Call the function -main(seed,iter_num,dim,acq,n_init,epochs,b_layers,t_layers,neurons,init_method,N,iters_max,print_plots) - - - - -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -""" -Created on Tue May 3 21:13:25 2022 - -@author: ethanpickering -""" - -# Testing the squared version u(x)**2 and u(x)**3 - -# GPSearch Imports -import numpy as np -from dnosearch import (BlackBox, UniformInputs, DeepONet, custom_KDE, Oscillator) -from oscillator import Noise - -# DeepONet Imports -import deepxde as dde -from utils import mean_squared_error_outlier, safe_test, trim_to_65535 - -# Other Imports -import sys -import scipy -from scipy.interpolate import InterpolatedUnivariateSpline -import scipy.io as sio -import h5py -import matplotlib.pyplot as plt -from scipy.spatial import distance -import seaborn as sns -import math - -iter_num = 0 -dim = 1 -acq = 'US' -n_init = 2 # init conditions -epochs = 1000 # train 1000 -b_layers = 3 # branch -t_layers = 1 # trunk -neurons = 100 -init_method = 'lhs' # latin hypercube sampling -# originally N = 2 -N = 6 # add more models (8, 100, etc.) 
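`N` here is the number of independently trained ensemble members; the `DeepONet` wrapper's `predict` later returns the mean and variance across them, and the 'US' acquisition picks the point of largest ensemble variance. A minimal sketch of that reduction, with random stand-in predictions in place of trained networks:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_test = 6, 100                    # ensemble size and number of test points (illustrative)
preds = rng.normal(size=(N, n_test))  # stand-in for the N trained models' outputs

mean_val = preds.mean(axis=0)         # ensemble mean prediction
var_val = preds.var(axis=0)           # epistemic variance across members

# Uncertainty sampling ('US'): acquire where the members disagree most
theta_opt_idx = int(np.argmax(var_val))
```

Larger `N` (the comment's "8, 100, etc.") gives a less noisy variance estimate at the cost of proportionally more training.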
-upper_limit = 0 # dummy variable, need to remove later -n_keep = 1 # will be used later, can you take more than one sample at a time (batching) - -# iter_num = int(sys.argv[2]) -# dim = int(sys.argv[3]) -# acq = sys.argv[4] -# n_init = int(sys.argv[5]) -# epochs = int(sys.argv[6]) -# b_layers = int(sys.argv[7]) -# t_layers = int(sys.argv[8]) -# neurons = int(sys.argv[9]) -# init_method = sys.argv[10] -# N = int(sys.argv[11]) -# upper_limit = int(sys.argv[12]) -# n_keep = int(sys.argv[13]) -# norm_val = float(sys.argv[14]) - - - - -#%% - - -#def main(seed,iter_num,dim,acq,n_init,epochs,b_layers,t_layers,neurons,init_method,N,upper_limit,n_keep): -seed = 1 # independent experiments -dim = 1 -acq = 'US' -epochs = 1000 -b_layers = 3 -t_layers = 1 -neurons = 50 -init_method = 'lhs' -N = 6 -n_init = 2 -iter_num = 0 - -ndim = dim -rank = ndim # THIS NEEDS TO BE CHANGED TO SOMETHING USEFUL -mean, cov = np.zeros(ndim), np.ones(ndim) -domain = [ [0, 2*np.pi] ] * ndim - -inputs = UniformInputs(domain) -#Theta = inputs.draw_samples(100, "grd") -np.random.seed(seed) - -if iter_num == 0: - Theta = inputs.draw_samples(n_init, init_method) - -noise_var = 0 # can add in noise in the future; thing is though, don't call Blackbox -my_map = BlackBox(map_def, noise_var=noise_var) - -#Theta = inputs.draw_samples(50, "grd") -#noise = Noise([0,1], sigma=0.1, ell=0.5) - -# Need to determine U -nsteps = 50 # discretely define function (sin val) -# nsteps = 50 # original -#x_vals = np.linspace(0, 1, nsteps) -#x_vals = x_vals[0:-1] - -# DeepONet only needs a coarse version of the signal -coarse = 1 # Lets keep it the same for now - -#y = mvnpdf(X,mu,round(Sigma,2)); - -##!!! NOTE THE BOTH OF THESE NEED TO BE CHANGED INSIDE: Theta_to_U as well. 
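Per the docstring earlier in this file, the map's output is the input signal evaluated at its endpoint x = 2π, i.e. an identity map onto the signal's last sensor value. A quick numerical check of that claim, using the docstring's form of the signal (x over [0, 2π]) and the same defaults the script sets (phi_0 = −π/2, wavenumber = 1); the sample `theta` values are arbitrary:

```python
import numpy as np

phi_0, wavenumber, nsteps = -np.pi / 2, 1, 50
theta = np.array([[0.3], [1.7], [4.0]])      # arbitrary 1-D Theta samples, one row each

x = np.linspace(0, 2 * np.pi, nsteps)
U = np.sin(wavenumber * (x + phi_0 + theta))  # branch input signals, shape (3, nsteps)

# map_def-style output: the signal evaluated at x = 2*pi
y = np.sin(wavenumber * (2 * np.pi + phi_0 + theta[:, 0]))

# The output coincides with the last sensor value of each input signal
print(np.allclose(U[:, -1], y))
```

Note that `Theta_to_U` below additionally shifts `x` by `phi_0` before adding it again inside the sine, so the two `phi_0` offsets must be kept consistent, which is what the NOTE above warns about.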
-phi_0 = -np.pi/2 # original -wavenumber = 1 # cool to look at higher wave numbers, e.g., 4 - -def Theta_to_U(Theta,nsteps,coarse,rank): - U = np.zeros((np.shape(Theta)[0],nsteps)) - phi_0 = -np.pi/2 # original - wavenumber = 1 # 1 - x = np.linspace(0,2*np.pi,nsteps) + phi_0 - print(Theta) - for j in range(0,np.shape(Theta)[0]): - for i in range(0,np.shape(Theta)[1]): - # CHANGED TO SQUARED - #U[j,:] = U[j,:] + np.sin(wavenumber*(x+phi_0+Theta[j,i]))**3 - U[j,:] = U[j,:] + np.sin(wavenumber*(x+phi_0+Theta[j,i])) - # For some reason... this was working when only the first index was used... which was the direct answer I believe - return U - - -def Theta_to_X(Theta,rank): - if Theta.shape[1] == rank: - X = np.ones((Theta.shape[0], 1)) - else: - X = Theta[:,(2*rank):Theta.shape[1]] - return X - -if iter_num == 0: - # Determine the training data - Us = Theta_to_U(Theta,nsteps,1,ndim) - Y = map_def(Theta,phi_0,wavenumber).reshape(n_init,1) - - -def get_corrs(wavenumber,test_pts,Mean_Val,Var_Val): - """ - Only works for 1-dimension! 
Make sure that test_pts are divisible by wavenumber - INPUTS: wavenumber, test_pts, Mean_Val, Var_Val - OUTPUTS: m_corrs, v_corrs - """ - m_corrs = 0 - v_corrs = 0 - - for i in range(0,wavenumber-1): - for j in range (i+1,wavenumber): - m_corr = np.corrcoef(Mean_Val[i*test_pts//wavenumber:(i+1)*test_pts//wavenumber], - Mean_Val[j*test_pts//wavenumber:(j+1)*test_pts//wavenumber], - rowvar=False)[0,1] - v_corr = np.corrcoef(Var_Val[i*test_pts//wavenumber:(i+1)*test_pts//wavenumber], - Var_Val[j*test_pts//wavenumber:(j+1)*test_pts//wavenumber], - rowvar=False)[0,1] - m_corrs = m_corrs + m_corr - v_corrs = v_corrs + v_corr - - # Calculate how many total combinations - f = math.factorial - combs = f(wavenumber) // f(2) // f(wavenumber-2) - # For Python 3.8 and above, comb(wavenumber,2) should work - - m_corrs = m_corrs / (combs) - v_corrs = v_corrs / (combs) - return m_corrs, v_corrs - - -#b_layers = 5 -#t_layers = 1 - -m = int(nsteps/coarse) #604*2 -#neurons = 200 -#epochs = 1000 -lr = 0.001 -dim_x = 1 -activation = "relu" -branch = [neurons]*(b_layers+1) -branch[0] = m -trunk = [neurons]*(t_layers+1) -trunk[0] = dim_x - -net = dde.maps.OpNN( - branch, - trunk, - activation, - "Glorot normal", - use_bias=True, - stacked=False, -) - -M = 1 -save_period = 1000 - -# bananas -#model_dir = '/Users/ethanpickering/Documents/git/gpsearch_pickering/gpsearch/examples/sir/models/' -#model_dir = '/Users/ethanpickering/Documents/Wind/models/' -model_dir = '/Users/hchoi2/Documents/temp/models/' -#acq = 'lhs' -save_str = 'coarse'+str(coarse)+'_InitMethod_'+init_method #+'_upperlimit'+str(upper_limit) # This alters the string for the model saving -#base_dir = '/Users/ethanpickering/Dropbox (MIT)/Wind/Runs/' -base_dir = '/Users/hchoi2/Documents/temp/runs/' -#model_dir = scratch_or_research+'epickeri/MMT/Data/Rank'+str(rank)+'/models/' -#save_dir = scratch_or_research+'epickeri/MMT/Data/Rank'+str(rank)+'/DON_Search/' # Not Sure this is used anymore -#save_str = 
'coarse'+str(coarse)+'_lam'+str(lam)+'_BatchSize'+str(batch_size)+'_OptMethod_'+init_method+'_nguess'+str(n_guess)+'_'+objective # This alters the string for the model saving -save_path_data = base_dir+'Wave_Dim'+str(ndim)+'_'+save_str+'_'+acq+'_Seed'+str(seed)+'_Init'+str(n_init)+'_batch'+str(n_keep)+'_N'+str(N)+'_Iteration'+str(iter_num)+'.mat' -load_path_data = base_dir+'Wave_Dim'+str(ndim)+'_'+save_str+'_'+acq+'_Seed'+str(seed)+'_Init'+str(n_init)+'_batch'+str(n_keep)+'_N'+str(N)+'_Iteration'+str(iter_num-1)+'.mat' - - -# if iter_num > 0: -# d = sio.loadmat(load_path_data) -# Theta = d['Theta'] -# Y = d['Y'] - -print(np.shape(Theta)) -print(np.shape(Y)) - - -model_str = 'Rank'+str(np.shape(Theta)[1])+'_'+save_str+'_'+acq+'_Iter'+str(np.size(Y)-n_init+1) - -iters_max = 10 # measure in 1D case: MSE between true and neural net at each iteration for multiple wavenumbers and experiments (seeds=2 or 3) -# iteration wise = 25 -test_pts = 100 -Thetanew = inputs.draw_samples(test_pts, "grd") # theta tests -Y_true = map_def(Thetanew,phi_0,wavenumber).reshape(test_pts**ndim,1) -MSE = np.zeros((iters_max,1)) -mean_corr = np.zeros((iters_max,1)) -var_corr = np.zeros((iters_max,1)) - -# def get_corrs(df): -# col_correlations = pd.df.corr() -# col_correlations.loc[:, :] = np.tril(col_correlations, k=-1) -# cor_pairs = col_correlations.stack() -# return cor_pairs.to_dict() - -plot_dir = '/Users/hchoi2/Documents/temp/plots/cubed/' - -for iters in range(0,iters_max): - model = DeepONet(Theta, nsteps, Theta_to_U, Theta_to_X, Y, net, lr, epochs, N, M, model_dir, seed, save_period, model_str, coarse, rank) - - # Calculate the MSE - Mean_Val, Var_Val = model.predict(Thetanew) - MSE[iters] = np.mean((Y_true - Mean_Val)**2) - - # Wavenumber = 2, Testpts = 100, Dim = 1, N = 2: - # Compute the correlation coefficients of each period - # mean_corr[iters] = np.corrcoef(Mean_Val[0:test_pts//2],Mean_Val[test_pts//2:test_pts],rowvar=False)[0,1] - # var_corr[iters] = 
np.corrcoef(Var_Val[0:test_pts//2],Var_Val[test_pts//2:test_pts],rowvar=False)[0,1] - - if dim == 1: - # Plot predicted mean model and variance - plt.plot(Thetanew,Mean_Val) - plt.title('Mean Model Prediction') - plt.plot(Theta,Y,'o') - plt.title('Iterations:'+str(np.size(Y)-n_init)) - plt.xlabel(r'$\Theta$') - #plt.savefig(plot_dir+'mean_NSTEPS_'+str(nsteps)+'_dim_' + str(dim) + '_wavenumber_' + str(wavenumber) + '_N_' + str(N) + '_iter_' + str(iters) + '.png') - plt.show() - plt.plot(Y_true) - plt.title('True function testing') - plt.show() - - # Plot predicted variance - plt.plot(Thetanew,Var_Val/np.max(Var_Val), 'r') - plt.title('Model Variance') - plt.plot(Theta,np.zeros(np.size(Y)), 'o') - Theta_opt = Thetanew[np.argmax(Var_Val)] - plt.plot(Theta_opt, 1, 'o') - plt.xlabel(r'$\Theta$') - #plt.savefig(plot_dir+'var_NSTEPS_' + str(nsteps) + '_dim_' + str(dim) + '_wavenumber_' + str(wavenumber) + '_N_' + str(N) + '_iter_' + str(iters) + '.png') - plt.show() - - # Get the correlation coefficients - # mean_corr[iters] = get_corrs(wavenumber,test_pts,Mean_Val,Var_Val)[0] - # var_corr[iters] = get_corrs(wavenumber,test_pts,Mean_Val,Var_Val)[1] - - - if dim == 2: - plt.pcolor(Mean_Val.reshape((test_pts, test_pts))) - plt.title('Mean Model Prediction') - plt.title('Iterations:'+str(np.size(Y)-n_init)) - plt.xlabel(r'$\Theta_1$') - plt.ylabel(r'$\Theta_2$') - plt.colorbar() - #plt.savefig(plot_dir+'mean_dim_' + str(dim) + '_wavenumber_' + str(wavenumber) + '_N_' + str(N) + '_iter_' + str(iters) + '.png') - plt.show() - - # Plot predicted variance - #plt.pcolor(Var_Val/np.max(Var_Val)) - plt.pcolor(Var_Val.reshape((test_pts, test_pts))/np.max(Var_Val)) - plt.title('Model Variance') - plt.xlabel(r'$\Theta_1$') - plt.ylabel(r'$\Theta_2$') - plt.colorbar() - #plt.savefig(plot_dir+'var_dim_' + str(dim) + '_wavenumber_' + str(wavenumber) + '_N_' + str(N) + '_iter_' + str(iters) + '.png') - plt.show() - - # mean_corr[iters] = 
np.corrcoef(Mean_Val.reshape(test_pts,test_pts)[0:test_pts//2,0:test_pts//2].reshape(test_pts//2*test_pts//2,1),Mean_Val.reshape(test_pts,test_pts)[test_pts//2:test_pts,test_pts//2:test_pts].reshape(test_pts//2*test_pts//2,1),rowvar=False)[0,1] - # var_corr[iters] = np.corrcoef(Var_Val.reshape(test_pts,test_pts)[0:test_pts//2,0:test_pts//2].reshape(test_pts//2*test_pts//2,1),Var_Val.reshape(test_pts,test_pts)[test_pts//2:test_pts,test_pts//2:test_pts].reshape(test_pts//2*test_pts//2,1),rowvar=False)[0,1] - - # We will simply impose the US sampling technique - Theta_opt = Thetanew[np.argmax(Var_Val)] - Theta_opt = Theta_opt.reshape(1,ndim) - #Us_opt = Theta_to_U(Theta_opt,nsteps,1,ndim) - Y_opt = map_def(Theta_opt,phi_0,wavenumber).reshape(1,1) - Theta = np.append(Theta, Theta_opt, axis = 0) - Y = np.append(Y, Y_opt, axis = 0) - -# Plot the MSE vs. iterations -plt.semilogy(MSE) -plt.title('MSE') -plt.xlabel('Iterations') -plt.show() - -plt.plot(mean_corr) -plt.title('Mean correlation coefficient between ' + str(wavenumber) + ' periods') -plt.xlabel('Iterations') -plt.show() - -plt.plot(var_corr) -plt.title('Variance correlation coefficient between ' + str(wavenumber) + ' periods') -plt.xlabel('Iterations') -plt.show() - diff --git a/dnosearch/examples/jet_control/jet_control_2DPhi_2DU_bash.py b/dnosearch/examples/jet_control/jet_control_2DPhi_2DU_bash.py deleted file mode 100755 index 13beade..0000000 --- a/dnosearch/examples/jet_control/jet_control_2DPhi_2DU_bash.py +++ /dev/null @@ -1,262 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -""" -Created on Wed May 4 07:54:39 2022 - -@author: ethanpickering -""" - -# DNOSearch Imports -import numpy as np -from dnosearch import (BlackBox, UniformInputs, DeepONet) -from oscillator import Noise - -# DeepONet Imports -import deepxde as dde - -# Other Imports -import sys -import scipy -from scipy.interpolate import InterpolatedUnivariateSpline -import scipy.io as sio -import h5py -import matplotlib.pyplot as plt 
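The loop above implements plain uncertainty sampling: at each iteration the surrogate is retrained, the predictive variance is evaluated on a dense test grid, and the point of maximum variance is queried and appended to the training set. A minimal, self-contained sketch of that loop is below; the bootstrap-polynomial "ensemble" is a hypothetical stand-in for the DeepONet ensemble variance, not the code's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(theta):
    # Hypothetical black box standing in for map_def
    return np.sin(np.asarray(theta)).ravel()

def predict(theta_train, y_train, theta_test, n_models=5):
    # Toy ensemble: bootstrap-resampled cubic fits; their spread plays the
    # role of the DeepONet ensemble's predictive variance.
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(theta_train), len(theta_train))
        coef = np.polyfit(theta_train[idx], y_train[idx], 3)
        preds.append(np.polyval(coef, theta_test))
    preds = np.array(preds)
    return preds.mean(axis=0), preds.var(axis=0)

theta = rng.uniform(0, 2*np.pi, 4)          # initial design
y = f(theta)
theta_test = np.linspace(0, 2*np.pi, 200)   # dense test grid

for _ in range(10):
    mean, var = predict(theta, y, theta_test)
    theta_opt = theta_test[np.argmax(var)]  # US: acquire the max-variance point
    theta = np.append(theta, theta_opt)
    y = np.append(y, f(theta_opt))
```

After ten acquisitions the design holds the 4 initial points plus 10 queried ones; the deleted script does the same but also recomputes MSE against the true map at each step.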
-plt.rcParams.update({
-    "text.usetex": False,
-    "font.family": "serif",
-    "font.serif": ["Times"]})
-
-# Variables
-iter_num = int(sys.argv[2]) # Iteration number
-dim = int(sys.argv[3]) # Dimension of the stochastic excitation
-acq = sys.argv[4] # Acquisition type - currently only likelihood-weighted uncertainty sampling
-n_init = int(sys.argv[5]) # Initial data points
-epochs = int(sys.argv[6]) # Number of training epochs
-b_layers = int(sys.argv[7]) # Branch layers
-t_layers = int(sys.argv[8]) # Trunk layers
-neurons = int(sys.argv[9]) # Number of neurons per layer
-init_method = sys.argv[10] # How initial data are pulled
-N = int(sys.argv[11]) # Number of DNO ensembles
-
-print('Print plots is set to True')
-print_plots = True
-
-
-# The map we are defining here is
-# Input  = sum_i^N sin(x+phi_0+theta_i)
-# Output = sum_i^N sin(x(end)+phi_0+theta_i) = sum_i^N sin(2*pi+phi_0+theta_i)
-
-# This I-O relationship means we are interested in an identity mapping
-# of the last point in the input signal
-
-def map_def(U):
-    nsteps = 50
-    y = np.zeros(np.shape(U)[0])
-    for i in range(0,2):
-        y = y + U[:,i*nsteps-1]**20
-    return y
-
-
-def main(seed,iter_num,dim,acq,n_init,epochs,b_layers,t_layers,neurons,init_method,N,print_plots):
-    # The seed above is for initial-condition consistency - NOTE: due to the gradient descent of the DNO,
-    # the seed will not reproduce results exactly, but analogously
-
-    ndim = dim
-    udim = ndim # The dimensionality of the U components of Theta
-    mean, cov = np.zeros(ndim), np.ones(ndim)
-    domain = [ [0, 2*np.pi] ] * ndim
-
-    inputs = UniformInputs(domain)
-    np.random.seed(seed)
-
-    if iter_num == 0:
-        Theta = inputs.draw_samples(n_init, init_method)
-
-    # Need to determine U
-    nsteps = 50 # discretely define the function (sin values)
-
-    # DeepONet only needs a coarse version of the signal
-    coarse = 1 # Let's keep it the same for now
-
-    ##!!! NOTE: BOTH of these need to be changed inside Theta_to_U as well.
- phi_0 = -np.pi/2 # original - wavenumber = 1 # cool to look at higher wave numbers, e.g., 4 - - def Theta_to_U(Theta,nsteps,coarse,rank): - U = np.zeros((np.shape(Theta)[0],nsteps*2)) - phi_0 = -np.pi/2 # original - wavenumber = 1 # 1 - x = np.linspace(0,2*np.pi,nsteps) + phi_0 - print(Theta) - - for j in range(0,np.shape(Theta)[0]): - for i in range(0,2): - U[j,0+(i*nsteps):((i+1)*nsteps)] = np.sin(wavenumber*(x+phi_0+Theta[j,i])) - return U - - - def Theta_to_Z(Theta,rank): - if Theta.shape[1] == rank: - Z = np.ones((Theta.shape[0], 1)) - else: - Z = Theta[:,(2*rank):Theta.shape[1]] - return Z - - if iter_num == 0: - # Determine the training data - Us = Theta_to_U(Theta,nsteps,1,ndim) - Y = map_def(Us).reshape(n_init,1) - Y = Y.reshape(n_init,1) - - # Data Paths - save_path_data = './data/Jet_2DPhi_2DU_'+str(acq)+'_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num)+'.mat' - load_path_data = './data/Jet_2DPhi_2DU_'+str(acq)+'_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num-1)+'.mat' - - if iter_num > 0: # Will load in data - d = sio.loadmat(load_path_data) - Theta = d['Theta'] - Y = d['Y'] - - m = int(nsteps/coarse)*ndim #604*2 - lr = 0.001 - dim_x = 1 - activation = "relu" - branch = [neurons]*(b_layers+1) - branch[0] = m - trunk = [neurons]*(t_layers+1) - trunk[0] = dim_x - - net = dde.maps.OpNN( - branch, - trunk, - activation, - "Glorot normal", - use_bias=True, - stacked=False, - ) - save_period = 1000 - - # These functions are defined for normalizing, standardizing, or flatenining interal to DeepONet - def DNO_Y_transform(x): - x_transform = x/2 - return x_transform - - def DNO_Y_itransform(x_transform): - x = x_transform*2 - return x - - - # Train the model - np.random.seed(np.size(Y)) # Randomize the seed based on Y size for consistency - - # Where to save the DeepONet models - model_dir = './' - model_str = '' - model = DeepONet(Theta, nsteps, Theta_to_U, Theta_to_Z, Y, net, lr, epochs, N, model_dir, seed, save_period, model_str, coarse, udim, 
DNO_Y_transform, DNO_Y_itransform)
-
-    # Pull a fine set of test_pts in the domain
-    test_pts = 75
-    Theta_test = inputs.draw_samples(test_pts, "grd")
-    # Predict
-    Mean_Val, Var_Val = model.predict(Theta_test)
-
-    # Determine bounds for evaluating the metric
-    x_max = np.max(Mean_Val)
-    x_min = np.min(Mean_Val)
-    x_int = np.linspace(x_min,x_max,100) # Linearly spaced points
-    x_int_standard = np.linspace(0,ndim,100) # Static for pt-wise comparisons
-
-    # Create the weights/exploitation values
-    px = inputs.pdf(Theta_test)
-    sc = scipy.stats.gaussian_kde(Mean_Val.reshape(test_pts**2,), weights=px) # Fit a Gaussian KDE using px input weights
-    py = sc.evaluate(x_int) # Evaluate at x_int
-    py[py<10**-16] = 10**-16 # Eliminate spuriously small values (smaller than numerical precision)
-    py_standard = sc.evaluate(x_int_standard) # Evaluate for pt-wise comparisons
-    py_interp = InterpolatedUnivariateSpline(x_int, py, k=1) # Create an interpolation function
-
-    # Construct the weights
-    wx = px.reshape(test_pts**2,)/py_interp(Mean_Val).reshape(test_pts**2,)
-    wx = wx.reshape(test_pts**2,1)
-
-    # Compute the acquisition values
-    if acq == 'LCB_LW':
-        kappa = 1
-        ax = Mean_Val + kappa * wx * (Var_Val)**(1/2) * np.max(Mean_Val) / np.max(wx*(Var_Val)**(1/2))
-    elif acq == 'US_LW':
-        ax = wx*Var_Val # This is simply w(\theta) \sigma^2(\theta) - note that x and \theta are used interchangeably
-
-    # Find the optimal acquisition point
-    Theta_opt = Theta_test[np.argmax(ax),:]
-    Theta_opt = Theta_opt.reshape(1,udim)
-
-    # Calculate the associated U
-    U_opt = Theta_to_U(Theta_opt,nsteps,1,udim)
-    U_opt = U_opt.reshape(1,np.size(U_opt))
-
-    # Pass to the map
-    Y_opt = map_def(U_opt).reshape(1,1)
-    Y_opt = Y_opt.reshape(1,1)
-
-    # Append the value for the next step
-    Theta = np.append(Theta, Theta_opt, axis = 0)
-    Y = np.append(Y, Y_opt, axis = 0)
-    #pys[iter_num,:] = py_standard
-
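The weight construction above is the core of the likelihood-weighted acquisitions: the input density p(θ) is divided by a KDE estimate of the output density evaluated at the surrogate mean, so inputs mapping to rare outputs get large weights, w(θ) = p(θ)/p_y(μ(θ)). A minimal sketch, using a toy sine surrogate mean in place of the trained model's prediction:

```python
import numpy as np
import scipy.stats
from scipy.interpolate import InterpolatedUnivariateSpline

rng = np.random.default_rng(1)

theta = rng.uniform(0, 2*np.pi, 500)      # test points in the input domain
px = np.full(theta.shape, 1/(2*np.pi))    # uniform input density p(theta)
mu = np.sin(theta)                        # stand-in for the surrogate mean

# Output density p_y via a weighted Gaussian KDE, mirroring the script
sc = scipy.stats.gaussian_kde(mu, weights=px)
x_int = np.linspace(mu.min(), mu.max(), 100)
py = sc.evaluate(x_int)
py[py < 1e-16] = 1e-16                    # guard against numerical underflow
py_interp = InterpolatedUnivariateSpline(x_int, py, k=1)

# Likelihood weights: rare outputs (small p_y) receive large weight
wx = px / py_interp(mu)
```

US_LW then multiplies these weights by the ensemble variance, and LCB_LW folds them into the exploration term of an upper-confidence-style criterion, exactly as in the branch above.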
sio.savemat('./data/Jet_2DPhi_2DU_'+str(acq)+'_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num)+'.mat', {'py_standard':py_standard, 'x_int_standard':x_int_standard, 'Theta':Theta, 'U_opt':U_opt, 'wx':wx, 'ax':ax, 'py':py, 'x_int':x_int, 'Y':Y, 'Mean_Val':Mean_Val, 'Var_Val':Var_Val, 'n_init':n_init, 'N':N, 'seed':seed, 'Theta_test':Theta_test}) - - - if print_plots: - - fig = plt.figure() - gs = fig.add_gridspec(2, 2, hspace=0.2, wspace=0.1) - (ax1, ax2), (ax3, ax4) = gs.subplots()#(sharex='col', sharey='row') - fig.suptitle('2D U = U1;U2 Phi Jet Control Search, Iteration '+str(iter_num)) - ax1.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), Mean_Val.reshape(test_pts, test_pts)) - ax1.set_aspect('equal') - #ax1.colorbar() - ax1.annotate('Mean Solution', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax1.set_ylabel('$\theta_2$') - - ax2.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), Var_Val.reshape(test_pts, test_pts)) - ax2.plot(Theta[0:np.size(Y)-1,0], Theta[0:np.size(Y)-1,1], 'wo') - ax2.set_aspect('equal') - #ax2.colorbar() - ax2.annotate('Variance', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - - ax3.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), wx.reshape(test_pts, test_pts)) - ax3.set_aspect('equal') - #ax3.colorbar() - ax3.annotate('Danger Scores', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax3.set_ylabel('$\theta_2$') - #ax3.set_xlabel('$\theta_1$') - - ax4.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), ax.reshape(test_pts, test_pts)) - 
ax4.plot(Theta[-1,0], Theta[-1,1], 'ro') - #ax4.colorbar() - ax4.set_aspect('equal') - ax4.annotate('Acquisition', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax4.set_xlabel('$\theta_1$') - #ax4.set_xlim([-6,6]) - #ax4.set_ylim([-6,6]) - plt.draw() - - plt.savefig('./plots/Jet_2DPhi_2DU_'+str(acq)+'_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num)+'.jpg', dpi=150) - - sio.savemat(save_path_data, {'py_standard':py_standard,'x_int_standard':x_int_standard, 'Theta':Theta, 'U_opt':U_opt, - 'wx':wx, 'ax':ax, 'py':py, 'x_int':x_int, 'Y':Y, 'Mean_Val':Mean_Val, - 'Var_Val':Var_Val, 'n_init':n_init, 'N':N, 'seed':seed, 'Theta_test':Theta_test}) - - -if __name__ == "__main__": - main(int(sys.argv[1]),iter_num,dim,acq,n_init,epochs,b_layers,t_layers,neurons,init_method,N,print_plots) diff --git a/dnosearch/examples/jet_control/jet_control_2DPhi_2DUa_bash.py b/dnosearch/examples/jet_control/jet_control_2DPhi_2DUa_bash.py deleted file mode 100755 index 6bcc8c5..0000000 --- a/dnosearch/examples/jet_control/jet_control_2DPhi_2DUa_bash.py +++ /dev/null @@ -1,262 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -""" -Created on Wed May 4 07:54:39 2022 - -@author: ethanpickering -""" - -# DNOSearch Imports -import numpy as np -from dnosearch import (BlackBox, UniformInputs, DeepONet) -from oscillator import Noise - -# DeepONet Imports -import deepxde as dde - -# Other Imports -import sys -import scipy -from scipy.interpolate import InterpolatedUnivariateSpline -import scipy.io as sio -import h5py -import matplotlib.pyplot as plt -plt.rcParams.update({ - "text.usetex": False, - "font.family": "serif", - "font.serif": ["Times"]}) - -# Variables -iter_num = int(sys.argv[2]) # Iteration number -dim = int(sys.argv[3]) # Dimension of the stochastic excitation (infection rate) -acq = sys.argv[4] # Acquisition type - currently only Likelihood-weighted uncertatiny sampling 
-n_init = int(sys.argv[5]) # Initial data points
-epochs = int(sys.argv[6]) # Number of training epochs
-b_layers = int(sys.argv[7]) # Branch layers
-t_layers = int(sys.argv[8]) # Trunk layers
-neurons = int(sys.argv[9]) # Number of neurons per layer
-init_method = sys.argv[10] # How initial data are pulled
-N = int(sys.argv[11]) # Number of DNO ensembles
-
-print('Print plots is set to True')
-print_plots = True
-
-
-# The map we are defining here is
-# Input  = sum_i^N sin(x+phi_0+theta_i)
-# Output = sum_i^N sin(x(end)+phi_0+theta_i) = sum_i^N sin(2*pi+phi_0+theta_i)
-
-# This I-O relationship means we are interested in an identity mapping
-# of the last point in the input signal
-
-def map_def(U):
-    nsteps = 50
-    y = np.zeros(np.shape(U)[0])
-    for i in range(0,1):
-        y = y + (U[:,i*nsteps-1])**10
-    return y
-
-
-def main(seed,iter_num,dim,acq,n_init,epochs,b_layers,t_layers,neurons,init_method,N,print_plots):
-    # The seed above is for initial-condition consistency - NOTE: due to the gradient descent of the DNO,
-    # the seed will not reproduce results exactly, but analogously
-
-    ndim = dim
-    udim = ndim # The dimensionality of the U components of Theta
-    mean, cov = np.zeros(ndim), np.ones(ndim)
-    domain = [[0, 2*np.pi], [-0.5, 0.5]]
-
-    inputs = UniformInputs(domain)
-    np.random.seed(seed)
-
-    if iter_num == 0:
-        Theta = inputs.draw_samples(n_init, init_method)
-
-    # Need to determine U
-    nsteps = 50 # discretely define the function (sin values)
-
-    # DeepONet only needs a coarse version of the signal
-    coarse = 1 # Let's keep it the same for now
-
-    ##!!! NOTE: BOTH of these need to be changed inside Theta_to_U as well.
- phi_0 = -np.pi/2 # original - wavenumber = 1 # cool to look at higher wave numbers, e.g., 4 - - def Theta_to_U(Theta,nsteps,coarse,rank): - U = np.zeros((np.shape(Theta)[0],nsteps)) - phi_0 = -np.pi/2 # original - wavenumber = 1 # 1 - x = np.linspace(0,2*np.pi,nsteps) + phi_0 - print(Theta) - - for j in range(0,np.shape(Theta)[0]): - for i in range(0,1): - U[j,0+(i*nsteps):((i+1)*nsteps)] = Theta[j,i+1] + np.sin(wavenumber*(x+phi_0+Theta[j,i])) - return U - - - def Theta_to_Z(Theta,rank): - if Theta.shape[1] == rank: - Z = np.ones((Theta.shape[0], 1)) - else: - Z = Theta[:,(2*rank):Theta.shape[1]] - return Z - - if iter_num == 0: - # Determine the training data - Us = Theta_to_U(Theta,nsteps,1,ndim) - Y = map_def(Us).reshape(n_init,1) - Y = Y.reshape(n_init,1) - - # Data Paths - save_path_data = './data/Jet_2DPhi_2DUa_'+str(acq)+'_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num)+'.mat' - load_path_data = './data/Jet_2DPhi_2DUa_'+str(acq)+'_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num-1)+'.mat' - - if iter_num > 0: # Will load in data - d = sio.loadmat(load_path_data) - Theta = d['Theta'] - Y = d['Y'] - - m = int(nsteps/coarse) #604*2 - lr = 0.001 - dim_x = 1 - activation = "relu" - branch = [neurons]*(b_layers+1) - branch[0] = m - trunk = [neurons]*(t_layers+1) - trunk[0] = dim_x - - net = dde.maps.OpNN( - branch, - trunk, - activation, - "Glorot normal", - use_bias=True, - stacked=False, - ) - save_period = 1000 - - # These functions are defined for normalizing, standardizing, or flatenining interal to DeepONet - def DNO_Y_transform(x): - x_transform = x/60 - return x_transform - - def DNO_Y_itransform(x_transform): - x = x_transform*60 - return x - - - # Train the model - np.random.seed(np.size(Y)) # Randomize the seed based on Y size for consistency - - # Where to save the DeepONet models - model_dir = './' - model_str = '' - model = DeepONet(Theta, nsteps, Theta_to_U, Theta_to_Z, Y, net, lr, epochs, N, model_dir, seed, save_period, model_str, coarse, 
udim, DNO_Y_transform, DNO_Y_itransform)
-
-    # Pull a fine set of test_pts in the domain
-    test_pts = 75
-    Theta_test = inputs.draw_samples(test_pts, "grd")
-    # Predict
-    Mean_Val, Var_Val = model.predict(Theta_test)
-
-    # Determine bounds for evaluating the metric
-    x_max = np.max(Mean_Val)
-    x_min = np.min(Mean_Val)
-    x_int = np.linspace(x_min,x_max,100) # Linearly spaced points
-    x_int_standard = np.linspace(0,ndim,100) # Static for pt-wise comparisons
-
-    # Create the weights/exploitation values
-    px = inputs.pdf(Theta_test)
-    sc = scipy.stats.gaussian_kde(Mean_Val.reshape(test_pts**2,), weights=px) # Fit a Gaussian KDE using px input weights
-    py = sc.evaluate(x_int) # Evaluate at x_int
-    py[py<10**-16] = 10**-16 # Eliminate spuriously small values (smaller than numerical precision)
-    py_standard = sc.evaluate(x_int_standard) # Evaluate for pt-wise comparisons
-    py_interp = InterpolatedUnivariateSpline(x_int, py, k=1) # Create an interpolation function
-
-    # Construct the weights
-    wx = px.reshape(test_pts**2,)/py_interp(Mean_Val).reshape(test_pts**2,)
-    wx = wx.reshape(test_pts**2,1)
-
-    # Compute the acquisition values
-    if acq == 'LCB_LW':
-        kappa = 1
-        ax = Mean_Val + kappa * wx * (Var_Val)**(1/2) * np.max(Mean_Val) / np.max(wx*(Var_Val)**(1/2))
-    elif acq == 'US_LW':
-        ax = wx*Var_Val # This is simply w(\theta) \sigma^2(\theta) - note that x and \theta are used interchangeably
-
-    # Find the optimal acquisition point
-    Theta_opt = Theta_test[np.argmax(ax),:]
-    Theta_opt = Theta_opt.reshape(1,udim)
-
-    # Calculate the associated U
-    U_opt = Theta_to_U(Theta_opt,nsteps,1,udim)
-    U_opt = U_opt.reshape(1,np.size(U_opt))
-
-    # Pass to the map
-    Y_opt = map_def(U_opt).reshape(1,1)
-    Y_opt = Y_opt.reshape(1,1)
-
-    # Append the value for the next step
-    Theta = np.append(Theta, Theta_opt, axis = 0)
-    Y = np.append(Y, Y_opt, axis = 0)
-    #pys[iter_num,:] = py_standard
-
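This script's `DNO_Y_transform`/`DNO_Y_itransform` pair rescales targets by a factor of 60 before training so they stay O(1), and inverts the scaling at prediction time. The pair must compose to the identity; a quick round-trip check (the factor 60 is taken from the script, the array values are arbitrary):

```python
import numpy as np

SCALE = 60.0  # divisor used by this script's transform pair

def DNO_Y_transform(x):
    # Normalize targets into roughly O(1) range before training
    return x / SCALE

def DNO_Y_itransform(x_transform):
    # Invert the normalization at prediction time
    return x_transform * SCALE

y = np.array([-30.0, 0.0, 15.0, 60.0])
round_trip = DNO_Y_itransform(DNO_Y_transform(y))
```

The round trip recovers `y` exactly; the other two scripts use the same pattern with divisors 2 and 16, matched to the range of their respective `map_def` outputs.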
sio.savemat('./data/Jet_2DPhi_2DUa_'+str(acq)+'_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num)+'.mat', {'py_standard':py_standard, 'x_int_standard':x_int_standard, 'Theta':Theta, 'U_opt':U_opt, 'wx':wx, 'ax':ax, 'py':py, 'x_int':x_int, 'Y':Y, 'Mean_Val':Mean_Val, 'Var_Val':Var_Val, 'n_init':n_init, 'N':N, 'seed':seed, 'Theta_test':Theta_test}) - - - if print_plots: - - fig = plt.figure() - gs = fig.add_gridspec(2, 2, hspace=0.2, wspace=0.1) - (ax1, ax2), (ax3, ax4) = gs.subplots()#(sharex='col', sharey='row') - fig.suptitle('2D U = U1 Phi x a, Jet Control Search, Iteration '+str(iter_num)) - ax1.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), Mean_Val.reshape(test_pts, test_pts)) - #ax1.set_aspect('equal') - #ax1.colorbar() - ax1.annotate('Mean Solution', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax1.set_ylabel('$\theta_2$') - - ax2.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), Var_Val.reshape(test_pts, test_pts)) - ax2.plot(Theta[0:np.size(Y)-1,0], Theta[0:np.size(Y)-1,1], 'wo') - #ax2.set_aspect('equal') - #ax2.colorbar() - ax2.annotate('Variance', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - - ax3.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), wx.reshape(test_pts, test_pts)) - #ax3.set_aspect('equal') - #ax3.colorbar() - ax3.annotate('Danger Scores', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax3.set_ylabel('$\theta_2$') - #ax3.set_xlabel('$\theta_1$') - - ax4.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), ax.reshape(test_pts, test_pts)) - 
ax4.plot(Theta[-1,0], Theta[-1,1], 'ro') - #ax4.colorbar() - #ax4.set_aspect('equal') - ax4.annotate('Acquisition', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax4.set_xlabel('$\theta_1$') - #ax4.set_xlim([-6,6]) - #ax4.set_ylim([-6,6]) - plt.draw() - - plt.savefig('./plots/Jet_2DPhi_2DUa_'+str(acq)+'_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num)+'.jpg', dpi=150) - - sio.savemat(save_path_data, {'py_standard':py_standard,'x_int_standard':x_int_standard, 'Theta':Theta, 'U_opt':U_opt, - 'wx':wx, 'ax':ax, 'py':py, 'x_int':x_int, 'Y':Y, 'Mean_Val':Mean_Val, - 'Var_Val':Var_Val, 'n_init':n_init, 'N':N, 'seed':seed, 'Theta_test':Theta_test}) - - -if __name__ == "__main__": - main(int(sys.argv[1]),iter_num,dim,acq,n_init,epochs,b_layers,t_layers,neurons,init_method,N,print_plots) diff --git a/dnosearch/examples/jet_control/jet_control_2DPhi_U1D_bash.py b/dnosearch/examples/jet_control/jet_control_2DPhi_U1D_bash.py deleted file mode 100755 index 23d5bff..0000000 --- a/dnosearch/examples/jet_control/jet_control_2DPhi_U1D_bash.py +++ /dev/null @@ -1,258 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -""" -Created on Tue May 3 22:35:41 2022 - -@author: ethanpickering -""" - -# DNOSearch Imports -import numpy as np -from dnosearch import (BlackBox, UniformInputs, DeepONet) -from oscillator import Noise - -# DeepONet Imports -import deepxde as dde - -# Other Imports -import sys -import scipy -from scipy.interpolate import InterpolatedUnivariateSpline -import scipy.io as sio -import h5py -import matplotlib.pyplot as plt -plt.rcParams.update({ - "text.usetex": False, - "font.family": "serif", - "font.serif": ["Times"]}) - -# Variables -iter_num = int(sys.argv[2]) # Iteration number -dim = int(sys.argv[3]) # Dimension of the stochastic excitation (infection rate) -acq = sys.argv[4] # Acquisition type - currently only Likelihood-weighted uncertatiny sampling 
-n_init = int(sys.argv[5]) # Initial data points
-epochs = int(sys.argv[6]) # Number of training epochs
-b_layers = int(sys.argv[7]) # Branch layers
-t_layers = int(sys.argv[8]) # Trunk layers
-neurons = int(sys.argv[9]) # Number of neurons per layer
-init_method = sys.argv[10] # How initial data are pulled
-N = int(sys.argv[11]) # Number of DNO ensembles
-
-print('Print plots is set to True')
-print_plots = True
-
-
-# The map we are defining here is
-# Input  = sum_i^N sin(x+phi_0+theta_i)
-# Output = sum_i^N sin(x(end)+phi_0+theta_i) = sum_i^N sin(2*pi+phi_0+theta_i)
-
-# This I-O relationship means we are interested in an identity mapping
-# of the last point in the input signal
-
-def map_def(U):
-    y = U[:,-1]**4
-    return y
-
-
-def main(seed,iter_num,dim,acq,n_init,epochs,b_layers,t_layers,neurons,init_method,N,print_plots):
-    # The seed above is for initial-condition consistency - NOTE: due to the gradient descent of the DNO,
-    # the seed will not reproduce results exactly, but analogously
-
-    ndim = dim
-    udim = ndim # The dimensionality of the U components of Theta
-    mean, cov = np.zeros(ndim), np.ones(ndim)
-    domain = [ [0, 2*np.pi] ] * ndim
-
-    inputs = UniformInputs(domain)
-    np.random.seed(seed)
-
-    if iter_num == 0:
-        Theta = inputs.draw_samples(n_init, init_method)
-
-    # Need to determine U
-    nsteps = 50 # discretely define the function (sin values)
-
-    # DeepONet only needs a coarse version of the signal
-    coarse = 1 # Let's keep it the same for now
-
-    ##!!! NOTE: BOTH of these need to be changed inside Theta_to_U as well.
- phi_0 = -np.pi/2 # original - wavenumber = 1 # cool to look at higher wave numbers, e.g., 4 - - def Theta_to_U(Theta,nsteps,coarse,rank): - U = np.zeros((np.shape(Theta)[0],nsteps)) - phi_0 = -np.pi/2 # original - wavenumber = 1 # 1 - x = np.linspace(0,2*np.pi,nsteps) + phi_0 - print(Theta) - for j in range(0,np.shape(Theta)[0]): - for i in range(0,np.shape(Theta)[1]): - U[j,:] = U[j,:] + np.sin(wavenumber*(x+phi_0+Theta[j,i])) - return U - - - def Theta_to_Z(Theta,rank): - if Theta.shape[1] == rank: - Z = np.ones((Theta.shape[0], 1)) - else: - Z = Theta[:,(2*rank):Theta.shape[1]] - return Z - - if iter_num == 0: - # Determine the training data - Us = Theta_to_U(Theta,nsteps,1,ndim) - Y = map_def(Us).reshape(n_init,1) - Y = Y.reshape(n_init,1) - - # Data Paths - save_path_data = './data/Jet_2DPhi_'+str(acq)+'_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num)+'.mat' - load_path_data = './data/Jet_2DPhi_'+str(acq)+'_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num-1)+'.mat' - - if iter_num > 0: # Will load in data - d = sio.loadmat(load_path_data) - Theta = d['Theta'] - Y = d['Y'] - - m = int(nsteps/coarse) #604*2 - lr = 0.001 - dim_x = 1 - activation = "relu" - branch = [neurons]*(b_layers+1) - branch[0] = m - trunk = [neurons]*(t_layers+1) - trunk[0] = dim_x - - net = dde.maps.OpNN( - branch, - trunk, - activation, - "Glorot normal", - use_bias=True, - stacked=False, - ) - save_period = 1000 - - # These functions are defined for normalizing, standardizing, or flatenining interal to DeepONet - def DNO_Y_transform(x): - x_transform = x/16 - return x_transform - - def DNO_Y_itransform(x_transform): - x = x_transform*16 - return x - - - # Train the model - np.random.seed(np.size(Y)) # Randomize the seed based on Y size for consistency - - # Where to save the DeepONet models - model_dir = './' - model_str = '' - model = DeepONet(Theta, nsteps, Theta_to_U, Theta_to_Z, Y, net, lr, epochs, N, model_dir, seed, save_period, model_str, coarse, udim, DNO_Y_transform, 
DNO_Y_itransform)
-
-    # Pull a fine set of test_pts in the domain
-    test_pts = 75
-    Theta_test = inputs.draw_samples(test_pts, "grd")
-    # Predict
-    Mean_Val, Var_Val = model.predict(Theta_test)
-
-    # Determine bounds for evaluating the metric
-    x_max = np.max(Mean_Val)
-    x_min = np.min(Mean_Val)
-    x_int = np.linspace(x_min,x_max,100) # Linearly spaced points
-    x_int_standard = np.linspace(0,16,100) # Static for pt-wise comparisons
-
-    # Create the weights/exploitation values
-    px = inputs.pdf(Theta_test)
-    sc = scipy.stats.gaussian_kde(Mean_Val.reshape(test_pts**2,), weights=px) # Fit a Gaussian KDE using px input weights
-    py = sc.evaluate(x_int) # Evaluate at x_int
-    py[py<10**-16] = 10**-16 # Eliminate spuriously small values (smaller than numerical precision)
-    py_standard = sc.evaluate(x_int_standard) # Evaluate for pt-wise comparisons
-    py_interp = InterpolatedUnivariateSpline(x_int, py, k=1) # Create an interpolation function
-
-    # Construct the weights
-    wx = px.reshape(test_pts**2,)/py_interp(Mean_Val).reshape(test_pts**2,)
-    wx = wx.reshape(test_pts**2,1)
-
-    # Compute the acquisition values
-    if acq == 'LCB_LW':
-        kappa = 1
-        ax = Mean_Val + kappa * wx * (Var_Val)**(1/2) * np.max(Mean_Val) / np.max(wx*(Var_Val)**(1/2))
-    elif acq == 'US_LW':
-        ax = wx*Var_Val # This is simply w(\theta) \sigma^2(\theta) - note that x and \theta are used interchangeably
-
-    # Find the optimal acquisition point
-    Theta_opt = Theta_test[np.argmax(ax),:]
-    Theta_opt = Theta_opt.reshape(1,udim)
-
-    # Calculate the associated U
-    U_opt = Theta_to_U(Theta_opt,nsteps,1,udim)
-    U_opt = U_opt.reshape(1,np.size(U_opt))
-
-    # Pass to the map
-    Y_opt = map_def(U_opt).reshape(1,1)
-    Y_opt = Y_opt.reshape(1,1)
-
-    # Append the value for the next step
-    Theta = np.append(Theta, Theta_opt, axis = 0)
-    Y = np.append(Y, Y_opt, axis = 0)
-    #pys[iter_num,:] = py_standard
-    sio.savemat('./data/Jet_2DPhi_1DU_'+str(acq)+'_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num)+'.mat', 
{'py_standard':py_standard, 'x_int_standard':x_int_standard, 'Theta':Theta, 'U_opt':U_opt, 'wx':wx, 'ax':ax, 'py':py, 'x_int':x_int, 'Y':Y, 'Mean_Val':Mean_Val, 'Var_Val':Var_Val, 'n_init':n_init, 'N':N, 'seed':seed, 'Theta_test':Theta_test}) - - - if print_plots: - - fig = plt.figure() - gs = fig.add_gridspec(2, 2, hspace=0.2, wspace=0.1) - (ax1, ax2), (ax3, ax4) = gs.subplots()#(sharex='col', sharey='row') - fig.suptitle('2D U = U1+U2 Phi Jet Control Search, Iteration '+str(iter_num)) - ax1.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), Mean_Val.reshape(test_pts, test_pts)) - ax1.set_aspect('equal') - #ax1.colorbar() - ax1.annotate('Mean Solution', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax1.set_ylabel('$\theta_2$') - - ax2.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), Var_Val.reshape(test_pts, test_pts)) - ax2.plot(Theta[0:np.size(Y)-1,0], Theta[0:np.size(Y)-1,1], 'wo') - ax2.set_aspect('equal') - #ax2.colorbar() - ax2.annotate('Variance', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - - ax3.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), wx.reshape(test_pts, test_pts)) - ax3.set_aspect('equal') - #ax3.colorbar() - ax3.annotate('Danger Scores', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax3.set_ylabel('$\theta_2$') - #ax3.set_xlabel('$\theta_1$') - - ax4.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), ax.reshape(test_pts, test_pts)) - ax4.plot(Theta[-1,0], Theta[-1,1], 'ro') - #ax4.colorbar() - ax4.set_aspect('equal') - 
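Each iteration of these scripts persists `Theta` and `Y` (plus diagnostics) to a `.mat` file, and the next invocation with `iter_num > 0` reloads them via `sio.loadmat` before acquiring a new point. A minimal sketch of that save/reload chaining (the file path here is a throwaway temp location, not the scripts' `./data/` naming scheme):

```python
import os
import tempfile
import numpy as np
import scipy.io as sio

# Iteration k: save the accumulated design and observations
Theta = np.random.rand(5, 2)
Y = np.random.rand(5, 1)
path = os.path.join(tempfile.mkdtemp(), 'iter_0.mat')
sio.savemat(path, {'Theta': Theta, 'Y': Y})

# Iteration k+1: reload before appending the next acquisition point
d = sio.loadmat(path)
Theta_loaded = d['Theta']
Y_loaded = d['Y']
```

Note that `loadmat` always returns 2-D arrays, which is why the scripts can append to the reloaded `Theta` and `Y` with `np.append(..., axis=0)` without reshaping.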
ax4.annotate('Acquisition', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax4.set_xlabel('$\theta_1$') - #ax4.set_xlim([-6,6]) - #ax4.set_ylim([-6,6]) - plt.draw() - - plt.savefig('./plots/Jet_2DPhi_1DU_'+str(acq)+'_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num)+'.jpg', dpi=150) - - sio.savemat(save_path_data, {'py_standard':py_standard,'x_int_standard':x_int_standard, 'Theta':Theta, 'U_opt':U_opt, - 'wx':wx, 'ax':ax, 'py':py, 'x_int':x_int, 'Y':Y, 'Mean_Val':Mean_Val, - 'Var_Val':Var_Val, 'n_init':n_init, 'N':N, 'seed':seed, 'Theta_test':Theta_test}) - - -if __name__ == "__main__": - main(int(sys.argv[1]),iter_num,dim,acq,n_init,epochs,b_layers,t_layers,neurons,init_method,N,print_plots) diff --git a/dnosearch/examples/jet_control/jet_control_2D_phi.py b/dnosearch/examples/jet_control/jet_control_2D_phi.py deleted file mode 100755 index 968735e..0000000 --- a/dnosearch/examples/jet_control/jet_control_2D_phi.py +++ /dev/null @@ -1,293 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -""" -Created on Tue May 11 21:06:32 2022 - Active learning / Bayesian Optimization For Jet Control -@author: ethanpickering -""" - -# DNOSearch Imports -import numpy as np -from dnosearch import (BlackBox, UniformInputs, DeepONet) -from oscillator import Noise - -# DeepONet Imports -import deepxde as dde - -# Other Imports -import sys -import scipy -from scipy.interpolate import InterpolatedUnivariateSpline -import scipy.io as sio -import h5py -import matplotlib.pyplot as plt -plt.rcParams.update({ - "text.usetex": False, - "font.family": "serif", - "font.serif": ["Times"]}) - -# Variables -iter_num = 0 # Iteration number -dim = 2 # Dimension of the stochastic excitation (infection rate) -acq = 'LCB_LW' # Acquisition type - currently only Likelihood-weighted uncertainty sampling -n_init = 3 # Initial data points -epochs = 1000 # Number of training epochs -b_layers =
8 # Branch Layers -t_layers = 1 # Trunk Layers -neurons = 300 # Number of neurons per layer -init_method = 'lhs' # How initial data are pulled -N = 2 # Number of DNO ensembles -seed = 1 # Seed for initial condition consistency - NOTE due to gradient descent of the DNO, the seed will not provide perfectly similar results, but will be analogous -iters_max = 15 # Iterations to perform - -print_plots = True - - -#%% The map we are defining here is -# Input = sum_i^N sin(x+phi_0+theta_i) -# Output = sum_i^N sin(x(end)+phi_0+theta_i) = sum_i^N sin(2*pi+phi_0+theta_i) - -# This I-O relationship means we are interested in an identity mapping -# of the last point in the input signal - -def map_def(U): - y = U[:,-1]**10 - return y - -def main(seed,iter_num,dim,acq,n_init,epochs,b_layers,t_layers,neurons,init_method,N,iters_max,print_plots): - - - ndim = dim - udim = ndim # The dimensionality of the U components of Theta - domain = [ [0, 2*np.pi] ] * ndim - - inputs = UniformInputs(domain) - np.random.seed(seed) - - if iter_num == 0: - Theta = inputs.draw_samples(n_init, init_method) - - # Need to determine U - nsteps = 50 # discretely define function (sin val) - - # DeepONet only needs a coarse version of the signal - coarse = 1 # Let's keep it the same for now - - - ##!!! NOTE THAT BOTH OF THESE NEED TO BE CHANGED INSIDE: Theta_to_U as well. - phi_0 = -np.pi/2 # original - wavenumber = 1 # cool to look at higher wave numbers, e.g., 4 - - def Theta_to_U(Theta,nsteps,coarse,rank): - U = np.zeros((np.shape(Theta)[0],nsteps)) - phi_0 = -np.pi/2 # original - wavenumber = 1 # 1 - x = np.linspace(0,2*np.pi,nsteps) + phi_0 - print(Theta) - for j in range(0,np.shape(Theta)[0]): - for i in range(0,np.shape(Theta)[1]): - U[j,:] = U[j,:] + np.sin(wavenumber*(x+phi_0+Theta[j,i])) - return U - - # U = f(Theta); U is your control function/law - # u_i = a_i sin(2pit+phi) - - # Theta = [a_1, phi_1, a_2, phi_2 ..., ] - # Theta = [a_1, a_2, ..., phi_1, phi_2] - - # U = [u_1; u_2; ...]
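[Editor's note] The deleted example above parameterizes the control signal through `Theta_to_U` and scores it with `map_def`. A minimal standalone restatement of that pair (vectorized with broadcasting; behavior matches the deleted code, including its double `phi_0` offset):

```python
import numpy as np

def theta_to_u(theta, nsteps=50, phi_0=-np.pi / 2, wavenumber=1):
    """Superpose one sine per component of theta, sampled on [0, 2*pi].

    Mirrors the deleted Theta_to_U: phi_0 is folded into x and then
    added again inside the sine, exactly as in the original loop.
    """
    x = np.linspace(0, 2 * np.pi, nsteps) + phi_0
    # Broadcast to (n_samples, n_dim, nsteps) and sum over the dim axis.
    return np.sin(wavenumber * (x[None, None, :] + phi_0 + theta[:, :, None])).sum(axis=1)

def map_def(u):
    # Observable: last sample of the control signal, raised to the 10th power.
    return u[:, -1] ** 10
```

With `dim = 2`, the observable peaks at e.g. `theta = [pi/2, pi/2]`, where both sines reach -1 at the endpoint and `(-2)**10 = 1024`, which is consistent with the `/1024` normalization in `DNO_Y_transform` below.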
- - - # G(U(x)) = int( u(x)^2) dx - # G(U(x)) = int( u(x)^1) dx - # G(U(x)) = int( u(x)^3) dx - # G(U(x)) = int( u(x)^z) dx - - - def Theta_to_Z(Theta,rank): - if Theta.shape[1] == rank: - Z = np.ones((Theta.shape[0], 1)) - else: - Z = Theta[:,(2*rank):Theta.shape[1]] - return Z - - if iter_num == 0: - # Determine the training data - Us = Theta_to_U(Theta,nsteps,1,ndim) - Y = map_def(Us).reshape(n_init,1) - Y = Y.reshape(n_init,1) - - m = int(nsteps/coarse) #604*2 - lr = 0.001 - dim_x = 1 - activation = "relu" - branch = [neurons]*(b_layers+1) - branch[0] = m - trunk = [neurons]*(t_layers+1) - trunk[0] = dim_x - - net = dde.maps.OpNN( - branch, - trunk, - activation, - "Glorot normal", - use_bias=True, - stacked=False, - ) - save_period = 1000 - - # These functions are defined for normalizing, standardizing, or flattening internal to DeepONet - - def DNO_Y_transform(x): - x_transform = x/1024 - return x_transform - - def DNO_Y_itransform(x_transform): - x = x_transform*1024 - return x - - # Keeping track of the metric - pys = np.zeros((iters_max,10000)) - - ########################################## - # Loop through iterations - ########################################## - - for iter_num in range(0,iters_max): - # Train the model - np.random.seed(np.size(Y)) - model_dir = './' - model_str = '' - model = DeepONet(Theta, nsteps, Theta_to_U, Theta_to_Z, Y, net, lr, epochs, N, model_dir, seed, save_period, model_str, coarse, udim, DNO_Y_transform, DNO_Y_itransform) - - # Pull a fine set of test_pts in the domain - test_pts = 75 - Theta_test = inputs.draw_samples(test_pts, "grd") - # Predict - Mean_Val, Var_Val = model.predict(Theta_test) - - # Determine bounds for evaluating the metric - x_max = np.max(Mean_Val) - x_min = np.min(Mean_Val) - x_int = np.linspace(x_min,x_max,100) # Linearly space points - x_int_standard = np.linspace(0,1024,100) # Static for pt-wise comparisons - - # Create the weights/exploitation values - px = inputs.pdf(Theta_test) - sc =
scipy.stats.gaussian_kde(Mean_Val.reshape(test_pts**2,), weights=px) # Fit a Gaussian KDE using px input weights - py = sc.evaluate(x_int) # Evaluate at x_int - py[py<10**-16] = 10**-16 # Eliminate spuriously small values (smaller than numerical precision) - py_standard = sc.evaluate(x_int_standard) # Evaluate for pt-wise comparisons - py_interp = InterpolatedUnivariateSpline(x_int, py, k=1) # Create interpolation function - - # Construct the weights - wx = px.reshape(test_pts**2,)/py_interp(Mean_Val).reshape(test_pts**2,) - wx = wx.reshape(test_pts**2,1) - - # Compute the acquisition values - - if acq == 'LCB_LW': - kappa = np.max(Mean_Val) / np.max(wx*(Var_Val)**(1/2)) - kappa2 = 1 - ax = Mean_Val + kappa * wx * (Var_Val)**(1/2) * kappa2 - - elif acq == 'US_LW': - ax = wx*Var_Val # This is simply w(\theta) \sigma^2(\theta) - note that x and \theta are used interchangeably - - - # Find the optimal acquisition point - Theta_opt = Theta_test[np.argmax(ax),:] - Theta_opt = Theta_opt.reshape(1,udim) - - # Calculate the associated U - U_opt = Theta_to_U(Theta_opt,nsteps,1,udim) - U_opt = U_opt.reshape(1,np.size(U_opt)) - - # Pass to the Map - Y_opt = map_def(U_opt).reshape(1,1) - Y_opt = Y_opt.reshape(1,1) - - # Append the value for the next step - Theta = np.append(Theta, Theta_opt, axis = 0) - Y = np.append(Y, Y_opt, axis = 0) - #pys[iter_num,:] = py_standard - sio.savemat('./data/Jet_Control_Seed_'+str(seed)+'_N'+str(N)+'_iter_'+str(iter_num)+'.mat', {'pys':pys, 'x_int_standard':x_int_standard, 'Theta':Theta, 'U_opt':U_opt, 'wx':wx, 'ax':ax, 'py':py, 'x_int':x_int, 'Y':Y, 'Mean_Val':Mean_Val, 'Var_Val':Var_Val, 'n_init':n_init, 'N':N, 'seed':seed, 'Theta_test':Theta_test}) - - #if iter_num == 0: # Calculate the truth values - #d = sio.loadmat('./truth_data_py.mat') - #py_standard_truth = d['py_standard'] - #py_standard_truth = py_standard_truth.reshape(10000,) - - #log10_error = np.sum(np.abs(np.log10(py_standard[50:2750]) -
np.log10(py_standard_truth[50:2750])))/(x_int_standard[2] -x_int_standard[1]) - #log10_errors[iter_num] = log10_error - #print('The log10 of the log-pdf error is: '+str(np.log10(log10_error))) - - if print_plots: - - fig = plt.figure() - gs = fig.add_gridspec(2, 2, hspace=0.2, wspace=0.1) - (ax1, ax2), (ax3, ax4) = gs.subplots()#(sharex='col', sharey='row') - fig.suptitle('2D U = U1+U2 Phi Jet Control Search, Iteration '+str(iter_num)) - ax1.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), Mean_Val.reshape(test_pts, test_pts)) - ax1.set_aspect('equal') - ax1.annotate('Mean Solution', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax1.set_ylabel('$\theta_2$') - - ax2.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), Var_Val.reshape(test_pts, test_pts)) - ax2.plot(Theta[0:np.size(Y)-1,0], Theta[0:np.size(Y)-1,1], 'wo') - ax2.set_aspect('equal') - ax2.annotate('Variance', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - - ax3.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), wx.reshape(test_pts, test_pts)) - ax3.set_aspect('equal') - ax3.annotate('Danger Scores', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', verticalalignment='top',color='white') - #ax3.set_ylabel('$\theta_2$') - #ax3.set_xlabel('$\theta_1$') - - ax4.pcolor(Theta_test[:,0].reshape(test_pts, test_pts), Theta_test[:,1].reshape(test_pts, test_pts), ax.reshape(test_pts, test_pts)) - ax4.plot(Theta[-1,0], Theta[-1,1], 'ro') - ax4.set_aspect('equal') - ax4.annotate('Acquisition', - xy=(3, 5), xycoords='data', - xytext=(0.7, 0.95), textcoords='axes fraction', - horizontalalignment='right', 
verticalalignment='top',color='white') - #ax4.set_xlabel('$\theta_1$') - #ax4.set_xlim([-6,6]) - #ax4.set_ylim([-6,6]) - plt.show() - # ax5.semilogy(x_int_standard, py_standard_truth, label ='True PDF' ) - # ax5.semilogy(x_int_standard, py_standard, label='NN Approx.') - # ax5.set_xlim([0,2.75*10**7]) - # ax5.set_ylim([10**-10,10**-6.75]) - # ax5.legend(loc='lower left') - # #ax5.annotate('Output PDFs', - # #xy=(-3, 5), xycoords='data', - # #xytext=(0.7, 0.95), textcoords='axes fraction', - # #horizontalalignment='right', verticalalignment='top',color='white') - # ax5.set_xlabel('New Infections') - - # ax6.plot(np.linspace(0,iter_num,iter_num+1),np.log10(log10_errors[0:iter_num+1]), label='Error') - # #ax6.annotate('Log10 of log-pdf Error', - # #xy=(-3, 5), xycoords='data', - # #xytext=(0.7, 0.95), textcoords='axes fraction', - # #horizontalalignment='right', verticalalignment='top',color='white') - # ax6.legend(loc='lower left') - # ax6.set_xlabel('Iterations') - # plt.show() - - - #sio.savemat('./data/Jet_Control_Errors_Seed_'+str(seed)+'_N'+str(N)+'.mat', {'log10_errors':log10_errors}) - return - -# Call the function -main(seed,iter_num,dim,acq,n_init,epochs,b_layers,t_layers,neurons,init_method,N,iters_max,print_plots) - diff --git a/dnosearch/examples/jet_control/model/checkpoint b/dnosearch/examples/jet_control/model/checkpoint deleted file mode 100644 index 6d77e03..0000000 --- a/dnosearch/examples/jet_control/model/checkpoint +++ /dev/null @@ -1,2 +0,0 @@ -model_checkpoint_path: "N1seed1__model.ckpt-1000" -all_model_checkpoint_paths: "N1seed1__model.ckpt-1000" diff --git a/dnosearch/examples/jet_control/oscillator.py b/dnosearch/examples/jet_control/oscillator.py deleted file mode 100644 index 8fa2b7a..0000000 --- a/dnosearch/examples/jet_control/oscillator.py +++ /dev/null @@ -1,121 +0,0 @@ -import numpy as np -from scipy.interpolate import interp1d - - -class Oscillator: - - def __init__(self, noise, tf, nsteps, u_init, - delta=1.5, alpha=1.0, beta=0.1, 
x1=0.5, x2=1.5): - self.delta = delta - self.alpha = alpha - self.beta = beta - self.x1 = x1 - self.x2 = x2 - self.noise = noise - self.tf = tf - self.nsteps = nsteps - self.u_init = u_init - - def rhs(self, u, t): - u0, u1 = u - f0 = u1 - f1 = -self.delta*u1 - self.f_nl(u0) + self.sample_noise(t) - f = [f0, f1] - return f - - def f_nl(self, u0): - if np.abs(u0) <= self.x1: - return self.alpha * u0 - elif np.abs(u0) >= self.x2: - return self.alpha * self.x1 * np.sign(u0) \ - + self.beta * (u0 - self.x2 * np.sign(u0))**3 - else: - return self.alpha * self.x1 * np.sign(u0) - - def solve(self, theta): - self.sample_noise = self.noise.get_sample_interp(theta) - time = np.linspace(0, self.tf, self.nsteps+1) - solver = ODESolver(self.rhs) - solver.set_ics(self.u_init) - u, t = solver.solve(time) - return u, t - - -class Noise: - - def __init__(self, domain, sigma=0.1, ell=4.0): - self.ti = domain[0] - self.tf = domain[1] - self.tl = domain[1] - domain[0] - self.R = self.get_covariance(sigma, ell) - self.lam, self.phi = self.kle(self.R) - - def get_covariance(self, sigma, ell): - m = 500 + 1 - self.t = np.linspace(self.ti, self.tf, m) - self.dt = self.tl/(m-1) - R = np.zeros([m, m]) - for i in range(m): - for j in range(m): - tau = self.t[j] - self.t[i] - R[i,j] = sigma*np.exp(-tau**2/(2*ell**2)) - return R*self.dt - - def kle(self, R): - lam, phi = np.linalg.eigh(R) - phi = phi/np.sqrt(self.dt) - idx = lam.argsort()[::-1] - lam = lam[idx] - phi = phi[:,idx] - return lam, phi - - def get_eigenvalues(self, trunc=None): - return self.lam[0:trunc] - - def get_eigenvectors(self, trunc=None): - return self.phi[:,0:trunc] - - def get_sample(self, xi): - nRV = np.asarray(xi).shape[0] - phi_trunc = self.phi[:,0:nRV] - lam_trunc = self.lam[0:nRV] - lam_sqrtm = np.diag(np.sqrt(lam_trunc)) - sample = np.dot(phi_trunc, np.dot(lam_sqrtm, xi)) - return sample - - def get_sample_interp(self, xi): - sample = self.get_sample(xi.ravel()) - sample_int = interp1d(self.t, sample, 
kind='cubic') - return sample_int - - -class ODESolver: - - def __init__(self, f): - self.f = lambda u, t: np.asarray(f(u, t), float) - - def set_ics(self, U0): - U0 = np.asarray(U0) - self.neq = U0.size - self.U0 = U0 - - def advance(self): - u, f, k, t = self.u, self.f, self.k, self.t - dt = t[k+1] - t[k] - K1 = dt*f(u[k], t[k]) - K2 = dt*f(u[k] + 0.5*K1, t[k] + 0.5*dt) - K3 = dt*f(u[k] + 0.5*K2, t[k] + 0.5*dt) - K4 = dt*f(u[k] + K3, t[k] + dt) - u_new = u[k] + (1/6.0)*(K1 + 2*K2 + 2*K3 + K4) - return u_new - - def solve(self, time): - self.t = np.asarray(time) - n = self.t.size - self.u = np.zeros((n,self.neq)) - self.u[0] = self.U0 - for k in range(n-1): - self.k = k - self.u[k+1] = self.advance() - return self.u[:k+2], self.t[:k+2] - diff --git a/dnosearch/examples/jet_control/plots/.DS_Store b/dnosearch/examples/jet_control/plots/.DS_Store deleted file mode 100644 index 5008ddf..0000000 Binary files a/dnosearch/examples/jet_control/plots/.DS_Store and /dev/null differ diff --git a/dnosearch/examples/jet_control/model/.DS_Store b/dnosearch/examples/lamp/.DS_Store similarity index 71% rename from dnosearch/examples/jet_control/model/.DS_Store rename to dnosearch/examples/lamp/.DS_Store index 5008ddf..25dc6da 100644 Binary files a/dnosearch/examples/jet_control/model/.DS_Store and b/dnosearch/examples/lamp/.DS_Store differ diff --git a/dnosearch/examples/lamp/.LAMP_10D_run.sh.swp b/dnosearch/examples/lamp/.LAMP_10D_run.sh.swp new file mode 100644 index 0000000..eb5989c Binary files /dev/null and b/dnosearch/examples/lamp/.LAMP_10D_run.sh.swp differ diff --git a/dnosearch/examples/jet_control/.DS_Store b/dnosearch/examples/lamp/10d_second_relu_as_1/.DS_Store similarity index 96% rename from dnosearch/examples/jet_control/.DS_Store rename to dnosearch/examples/lamp/10d_second_relu_as_1/.DS_Store index fdf6679..70bbd10 100644 Binary files a/dnosearch/examples/jet_control/.DS_Store and b/dnosearch/examples/lamp/10d_second_relu_as_1/.DS_Store differ diff --git 
a/dnosearch/examples/lamp/10d_second_relu_as_1/model/N0seed1__r0_model.ckpt-1000.data-00000-of-00001 b/dnosearch/examples/lamp/10d_second_relu_as_1/model/N0seed1__r0_model.ckpt-1000.data-00000-of-00001 new file mode 100644 index 0000000..3f7d02b Binary files /dev/null and b/dnosearch/examples/lamp/10d_second_relu_as_1/model/N0seed1__r0_model.ckpt-1000.data-00000-of-00001 differ diff --git a/dnosearch/examples/lamp/10d_second_relu_as_1/model/N0seed1__r0_model.ckpt-1000.index b/dnosearch/examples/lamp/10d_second_relu_as_1/model/N0seed1__r0_model.ckpt-1000.index new file mode 100644 index 0000000..3dc6b5c Binary files /dev/null and b/dnosearch/examples/lamp/10d_second_relu_as_1/model/N0seed1__r0_model.ckpt-1000.index differ diff --git a/dnosearch/examples/lamp/10d_second_relu_as_1/model/checkpoint b/dnosearch/examples/lamp/10d_second_relu_as_1/model/checkpoint new file mode 100644 index 0000000..b53a88f --- /dev/null +++ b/dnosearch/examples/lamp/10d_second_relu_as_1/model/checkpoint @@ -0,0 +1,2 @@ +model_checkpoint_path: "N0seed1__r0_model.ckpt-1000" +all_model_checkpoint_paths: "N0seed1__r0_model.ckpt-1000" diff --git a/dnosearch/examples/lamp/LAMP_10D_run.sh b/dnosearch/examples/lamp/LAMP_10D_run.sh new file mode 100755 index 0000000..42c5743 --- /dev/null +++ b/dnosearch/examples/lamp/LAMP_10D_run.sh @@ -0,0 +1,30 @@ +dim=10 +n_init=3 +epochs=1000 +b_layers=5 +t_layers=1 +neurons=200 +init_method='pdf' +N=2 +sigma=0.15 +activation='relu' + +seed_start=1 +seed_end=10 + +for ((seed=$seed_start;seed<=$seed_end;seed++)) +do + for iter_num in {0..100} + do + acq='KUS_LW' + run_name='10d_second_relu_as_' + python3 ./main_lamp.py $seed $iter_num $dim $acq $n_init $epochs $b_layers $t_layers $neurons $init_method $N $run_name$seed $activation $sigma + + acq='RAND' + run_name='10d_second_relu_uniform_' + python3 ./main_lamp.py $seed $iter_num $dim $acq $n_init $epochs $b_layers $t_layers $neurons $init_method $N $run_name$seed $activation $sigma + done + wait +done + + 
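[Editor's note] The added `LAMP_10D_run.sh` sweeps seeds and iterations, handing 14 positional arguments to `main_lamp.py`. A hedged sketch of unpacking those positionals on the Python side; the argument order is taken from the shell invocation, but the key names and casts are assumptions, not `main_lamp.py`'s actual signature:

```python
# Order matches the invocation in LAMP_10D_run.sh; names and casts are assumed.
ARG_SPEC = [("seed", int), ("iter_num", int), ("dim", int), ("acq", str),
            ("n_init", int), ("epochs", int), ("b_layers", int),
            ("t_layers", int), ("neurons", int), ("init_method", str),
            ("N", int), ("run_name", str), ("activation", str), ("sigma", float)]

def parse_lamp_args(argv):
    """Cast the positional CLI arguments (argv[1:]) into a keyword dict."""
    if len(argv) - 1 != len(ARG_SPEC):
        raise ValueError(f"expected {len(ARG_SPEC)} arguments, got {len(argv) - 1}")
    return {name: cast(val) for (name, cast), val in zip(ARG_SPEC, argv[1:])}
```

Note that the script appends the seed to the run name (`$run_name$seed`), so `run_name` arrives already suffixed.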
diff --git a/dnosearch/examples/utils/.DS_Store b/dnosearch/examples/lamp/Matlab_GP_Implementation/.DS_Store similarity index 97% rename from dnosearch/examples/utils/.DS_Store rename to dnosearch/examples/lamp/Matlab_GP_Implementation/.DS_Store index db6d5fb..cfd17af 100644 Binary files a/dnosearch/examples/utils/.DS_Store and b/dnosearch/examples/lamp/Matlab_GP_Implementation/.DS_Store differ diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/Active_Search_Parameters.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/Active_Search_Parameters.m new file mode 100644 index 0000000..b9d705c --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/Active_Search_Parameters.m @@ -0,0 +1,76 @@ +classdef Active_Search_Parameters < handle + %ACTIVE_SEARCH_PARAMETERS Summary of this class goes here + % Detailed explanation goes here + + properties + n_dim_in = 3; + + n_init = 10; + n_iter = 50; + z_max = 4.5; + + nqb = 65; + q_min = -6.5; + q_max = 6.5; + + %true_model_noise_rule = 'none'; + true_model_noise_rule = 'full'; + + %mode_choice_rule = 'fixed-mode'; + mode_choice_rule = 'round-robin'; + + n_acq_restarts = 10; + + acq_rule = 'lw-us'; + %acq_rule = 'lw-kus'; + opt_rule = 'as'; + + q_plot = 1; + na = 65; + save_intermediate_plots = false; + nq_mc = 5e6; + n_grid_likelihood = 65; + q_pdf_rule = 'MC'; + true_q_pdf_rule = 'MC'; + + likelihood_alg = 'kde'; + + vid_profile = 'Motion JPEG AVI'; + video_path = ''; + video_frame_rate = 10; + draw_plots = true; + + acq_active_output_mode = 1; + + kl_bound_list = [2, 2.25, 2.5, 2.75, 3]; + kl_bound_list_vbm_upper = [1.1, 1.3, 1.5, 1.7, 1.9].*1e9; + kl_bound_list_vbm_lower = -[1.5, 1.7, 1.9, 2.1, 2.3].*1e9; + n_kl_bounds = 5; + + n_rr_rondel_size = 6; + + save_errors = true; + save_videos = false; + + initial_samples_rule = 'fixed-lhs'; + fixed_sigma_for_optimization = false; + + compute_mode_errors = true; + compute_surr_errors = true; + + overall_norm_factor = 1; + end + + methods + function as_par = 
Active_Search_Parameters() + %ACTIVE_SEARCH_PARAMETERS Construct an instance of this class + % Detailed explanation goes here + + as_par.n_kl_bounds = length(as_par.kl_bound_list); + + end + + + end +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/Analysis_Parameters.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/Analysis_Parameters.m new file mode 100644 index 0000000..1f2e5f7 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/Analysis_Parameters.m @@ -0,0 +1,101 @@ +classdef Analysis_Parameters < handle + %ANALYSIS_PARAMETERS Summary of this class goes here + % Detailed explanation goes here + + properties + plot_pos = [20,9,6.5,4.5]; + full_paper_pos = [0, 0, 7.5, 5.5]; + full_paper_size = [7.5, 5.5]; + half_paper_pos = [0, 0, 3.5, 2.5]; + half_paper_size = [3.5, 2.5]; + third_paper_pos = [0, 0, 2.5, 1.9]; + third_paper_size = [2.5, 1.9]; + quarter_paper_pos = [0, 0, 1.8, 1.3]; + quarter_paper_size = [1.8, 1.3]; + save_figs = true; + + fig_path = '/home/stevejon-computer/Dropbox (MIT)/Output/scsp/june_pix/'; + + %kl_data_path = '../../../Data/LAMP/sandlab_jonswap_kl/'; + kl_data_path = '../../../Data/LAMP/june_2/'; + kl25d_data_path_total = '../../../Data/LAMP/july_25d/'; + + mc_peaks_data_path = '../../../Data/LAMP/sandlab_jonswap_all_peaks/'; + mc_pdf_data_path = '../../../Data/LAMP/jonswap_pdf/'; + klmc_data_path = '../../../Data/LAMP/sandlab_kl_mc/'; + kl1d_data_path = '../../../Data/LAMP/june_1d/'; + + kl2d_data_path_phase = '../../../Data/LAMP/june_2d/run_1/'; + kl2d_data_path_shape = '../../../Data/LAMP/june_2d/run_2/'; + kl2d_data_path_alt = '../../../Data/LAMP/june_2d/run_3/'; + + kl2d_data_path_long_1 = '../../../Data/LAMP/sept_long/'; + kl2d_data_path_long_2 = '../../../Data/LAMP/oct_data/'; + + kl2d_data_path_set = '../../../Data/LAMP/oct_data/set/'; + + kl_scsp_data_path = '../../../Data/LAMP/oct_data/scsp_var/'; + kl_nov_bonus_path = '../../../Data/LAMP/nov_data/t_60_n_4_bonus/'; + kl_mar_bonus_path = 
'../../../Data/LAMP/mar_data_for_mf_as/'; + + data_path_klmc = '../../../Data/LAMP/nov_data/klmc_t_60_n_30/'; + data_path_ss = '../../../Data/LAMP/nov_data/steady-state/'; + + n_trials = 10000; + n_kl_trials = 2000; + n_modes = 25; + + n_samples_per_fit = 200; + + n_hist_resample = 5e5; + %n_hist_resample = 5e6; + + truncate_t_steps = false; + t_steps_kept = 169; + %max_fmodes = 20; + %max_klmodes = 12; + nt_save = 1; + + quantile_QQ = [0.9, 0.99, 0.999]; + n_q; + + z_max = 4; + + gpr_kernel_class = 'ardsquaredexponential'; + %gpr_kernel_class = 'ardmatern52'; + gpr_verbosity = 0; + + %gpr_resampling_strat = 'mean-only'; + %gpr_resampling_strat = 'normally-distributed'; + gpr_resampling_strat = 'vector-resample'; + + gpr_fixed_sigma = false; + gpr_initial_sigma = 1; + + peakfinding_threshold_strat = 'stdev-based'; + + n_hist_bins = 257; + + %kl_transformation_rule = 'full-mc'; + kl_transformation_rule = 'restricted-mc'; + %kl_transformation_rule = 'structured-sampling'; + + gpr_explicit_basis_class = 'constant'; + + default_mf_rho = 1; + + end + + methods + function a_par = Analysis_Parameters() + %ANALYSIS_PARAMETERS Construct an instance of this class + % Detailed explanation goes here + + a_par.n_q = length(a_par.quantile_QQ); + + end + + + end +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/GPR_List.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/GPR_List.m new file mode 100644 index 0000000..121606b --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/GPR_List.m @@ -0,0 +1,229 @@ +classdef GPR_List < handle + %GPR_LIST Summary of this class goes here + % Detailed explanation goes here + + properties + a_par; + exp_name + + scalar_gpr_list; + scalar_mode_list; + + vector_gpr_list; + vector_mode_list; + + n_inputs; + n_outputs; + basis_class; + + Y_rot_matrix; + g_mu_list; + + D_out; + V_out; + ts_mu; + overall_norm_factor; + + end + + methods + function g_obj = GPR_List( a_par, exp_name ) + %GPR_LIST Construct an 
instance of this class + % Detailed explanation goes here + g_obj.scalar_gpr_list = cell(1, 0); + g_obj.scalar_mode_list = zeros(0, 1); + g_obj.vector_gpr_list = cell(1, 0); + g_obj.vector_mode_list = zeros(0, 2); + + g_obj.a_par = a_par; + g_obj.exp_name = exp_name; + + g_obj.Y_rot_matrix = 1; + + end + + function [ outcode ] = set_kl(g_obj, D_list, V_list, ts_mu, beta) + g_obj.D_out = D_list; + g_obj.V_out = V_list; + g_obj.ts_mu = ts_mu; + g_obj.overall_norm_factor = beta; + + outcode = 1; + end + + function [ outcode ] = set_Y_rot_matrix(g_obj, R) + g_obj.Y_rot_matrix = R; + + outcode = 1; + end + + + function [ outcode ] = train(g_obj, xx_train, yy_train, vector_pair_list, rho_list) + + g_obj.n_inputs = size(xx_train, 2); + g_obj.n_outputs = size(yy_train, 2); + + JJ2 = vector_pair_list; + JJ1 = 1:g_obj.n_outputs; + for k = 1:size(JJ2, 1) + JJ1(JJ2(k, 1)) = nan; + JJ1(JJ2(k, 2)) = nan; + end + JJ1 = JJ1(~isnan(JJ1)); + + + + R = g_obj.Y_rot_matrix; + yy_train = yy_train*R; + + g_obj.g_mu_list = mean(yy_train, 1); + yy_train = yy_train - repmat(g_obj.g_mu_list, [size(yy_train, 1), 1]); + + for k1 = 1:size(JJ2, 1) + + cur_yy = yy_train(:, JJ2(k1, :)); + + test_sur = GPR_Separable(g_obj.a_par, g_obj.exp_name); + test_sur.set_kl(g_obj.D_out, g_obj.V_out, g_obj.ts_mu, ... 
+ g_obj.overall_norm_factor); + test_sur.n_outputs = 2; + %test_sur.gpr_kernel_class = 'ardsquaredexponential'; + test_sur.gpr_kernel_class = 'squaredexponential'; + test_sur.train(xx_train, cur_yy); + + + isard = 'no'; + switch isard + case 'yes' + t1 = test_sur.g_fit_list{1}.KernelInformation.KernelParameters; + t2 = test_sur.g_fit_list{2}.KernelInformation.KernelParameters; + rho = 1/10*sqrt(t2(2)); + s1 = test_sur.g_fit_list{1}.Sigma; + s2 = test_sur.g_fit_list{2}.Sigma; + + ln = g_obj.n_inputs; + tn = 3*g_obj.n_inputs; + theta = ones(1, tn + 3); + theta(1:ln) = t1(1)*ones(1, ln); + theta(ln + (1:ln)) = t2(1)*ones(1, ln); + theta(2*ln + (1:ln)) = (t1(1) + t2(1))/2*ones(1, ln); + %theta(1:tn) = ones(1, tn); + theta(tn+1) = sqrt(t1(2)); + theta(tn+2) = 1/10*sqrt(min(t1(2), t2(2))); + theta(tn+3) = sqrt(t2(2)); + case 'no' + t1 = test_sur.g_fit_list{1}.KernelInformation.KernelParameters; + t2 = test_sur.g_fit_list{2}.KernelInformation.KernelParameters; + rho = 1/10*sqrt(t2(2)); + s1 = test_sur.g_fit_list{1}.Sigma; + s2 = test_sur.g_fit_list{2}.Sigma; + + ln =1; + tn = 3; + theta = ones(1, tn + 3); + theta(1:ln) = t1(1)*ones(1, ln); + theta(ln + (1:ln)) = t2(1)*ones(1, ln); + theta(2*ln + (1:ln)) = (t1(1) + t2(1))/2*ones(1, ln); + %theta(1:tn) = ones(1, tn); + theta(tn+1) = sqrt(t1(2)); + theta(tn+2) = 1/10*sqrt(min(t1(2), t2(2))); + theta(tn+3) = sqrt(t2(2)); + end + + + vek_gpr = GPR_Vector_SoS(g_obj.a_par, g_obj.exp_name); + vek_gpr.set_kl(g_obj.D_out, g_obj.V_out, g_obj.ts_mu, ... 
+ g_obj.overall_norm_factor); + %vek_gpr.set_kernel('full-2d-sqdexp-ard', size(xx_train, 2), size(cur_yy, 2) ); + vek_gpr.set_kernel('full-2d-sqdexp', size(xx_train, 2), size(cur_yy, 2) ); + vek_gpr.set_theta0( theta ); + vek_gpr.set_sigma0( 1/2*(s1 + s2) ); + vek_gpr.rho = rho_list(k1); + vek_gpr.fit_type = 'fit-all'; + vek_gpr.basis_class = g_obj.a_par.gpr_explicit_basis_class; + vek_gpr.sig_mat_class = '2d-rho-correlated'; + vek_gpr.train(xx_train, cur_yy); + + + g_obj.add_vector_gpr(vek_gpr, JJ2(k1, :)); + end + + + for k1 = 1:length(JJ1) + cur_yy = yy_train(:, JJ1(k1)); + + test_sur = GPR_Separable(g_obj.a_par, g_obj.exp_name); + test_sur.set_kl(g_obj.D_out, g_obj.V_out, g_obj.ts_mu, ... + g_obj.overall_norm_factor); + test_sur.n_outputs = 1; + test_sur.gpr_kernel_class = 'ardsquaredexponential'; + test_sur.train(xx_train, cur_yy); + + g_obj.add_scalar_gpr(test_sur, JJ1(k1)); + end + + outcode = 1; + end + + + + function [ outcode ] = add_scalar_gpr(g_list, g_obj, mode) + + new_list = cell(length( g_list.scalar_gpr_list) + 1, 1); + for k = 1:length( g_list.scalar_gpr_list) + new_list{k} = g_list.scalar_gpr_list{k}; + end + new_list{length( g_list.scalar_gpr_list) + 1} = g_obj; + + %g_list.scalar_gpr_list = {g_list.scalar_gpr_list, g_obj}; + g_list.scalar_gpr_list = new_list; + g_list.scalar_mode_list = [g_list.scalar_mode_list; mode]; + + g_list.n_outputs = max(g_list.n_outputs, mode); + g_list.n_inputs = g_obj.n_inputs; + + outcode = 1; + end + + function [ outcode ] = add_vector_gpr(g_list, g_obj, mode_pair) + + new_list = cell(length( g_list.vector_gpr_list) + 1, 1); + for k = 1:length( g_list.vector_gpr_list) + new_list{k} = g_list.vector_gpr_list{k}; + end + new_list{length( g_list.vector_gpr_list) + 1} = g_obj; + + %g_list.vector_gpr_list = {g_list.scalar_gpr_list, g_obj}; + g_list.vector_gpr_list = new_list; + g_list.vector_mode_list = [g_list.vector_mode_list; mode_pair]; + + g_list.n_outputs = max(g_list.n_outputs, max(mode_pair(:))); + 
g_list.n_inputs = g_obj.n_inputs; + + outcode = 1; + end + + function [ yy_sample, yy_predict, yy_std ] = sample(g_list, xx_test) + yy_sample = zeros(size(xx_test, 1), g_list.n_outputs); + yy_predict = zeros(size(xx_test, 1), g_list.n_outputs); + yy_std = zeros(size(xx_test, 1), g_list.n_outputs); + + for k = 1:length(g_list.scalar_gpr_list) + [ cur_sample, cur_predict, cur_std ] = ... + g_list.scalar_gpr_list{k}.sample(xx_test); + yy_sample(:, g_list.scalar_mode_list(k)) = cur_sample; + yy_predict(:, g_list.scalar_mode_list(k)) = cur_predict; + yy_std(:, g_list.scalar_mode_list(k)) = cur_std; + end + + for k = 1:length(g_list.vector_gpr_list) + [ cur_sample, cur_predict, cur_std ] = ... + g_list.vector_gpr_list{k}.sample(xx_test); + yy_sample(:, g_list.vector_mode_list(k, :)) = cur_sample; + yy_predict(:, g_list.vector_mode_list(k, :)) = cur_predict; + yy_std(:, g_list.vector_mode_list(k, :)) = zeros(size(cur_predict)); + end + end + end +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/GPR_Separable.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/GPR_Separable.m new file mode 100644 index 0000000..db65cc0 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/GPR_Separable.m @@ -0,0 +1,159 @@ +classdef GPR_Separable < handle + %GPR_SEPARABLE Summary of this class goes here + % Detailed explanation goes here + + properties + a_par; + %n_modes; + n_outputs; + g_fit_list; + g_mu_list; + n_inputs; + gpr_kernel_class; + basis_class; + + D_out; + V_out; + ts_mu; + overall_norm_factor; + + Y_rot_matrix; + + exp_name; + end + + methods + function g_obj = GPR_Separable( a_par, exp_name ) + %GPR_SEPARABLE Construct an instance of this class + % Detailed explanation goes here + + g_obj.a_par = a_par; + %g_obj.n_modes = a_par.n_modes; + g_obj.n_outputs = a_par.n_modes; + g_obj.exp_name = exp_name; + g_obj.gpr_kernel_class = a_par.gpr_kernel_class; + + g_obj.basis_class = 'constant'; + + g_obj.Y_rot_matrix = 1; + + end + + function [ 
outcode ] = set_kl(g_obj, D_list, V_list, ts_mu, beta) + g_obj.D_out = D_list; + g_obj.V_out = V_list; + g_obj.ts_mu = ts_mu; + g_obj.overall_norm_factor = beta; + + outcode = 1; + end + + function [ outcode ] = set_Y_rot_matrix(g_obj, R) + g_obj.Y_rot_matrix = R; + + outcode = 1; + end + + function [ outcode ] = train(g_obj, xx_train, yy_train) + g_obj.n_inputs = size(xx_train, 2); + g_obj.n_outputs = min(g_obj.n_outputs, size(yy_train, 2)); + + R = g_obj.Y_rot_matrix; + yy_train = yy_train*R; + + g_obj.g_mu_list = mean(yy_train, 1); + yy_train = yy_train - repmat(g_obj.g_mu_list, [size(yy_train, 1), 1]); + g_obj.g_fit_list = cell(g_obj.n_outputs, 1); + + for k_mode = 1:g_obj.n_outputs + yy_train_cur = real(yy_train(:, k_mode)); + + if g_obj.a_par.gpr_fixed_sigma + g_obj.g_fit_list{k_mode} = fitrgp(xx_train, yy_train_cur, ... + 'Verbose', g_obj.a_par.gpr_verbosity, ... + 'KernelFunction', g_obj.gpr_kernel_class, ... + 'BasisFunction', g_obj.basis_class, ... + 'ConstantSigma', g_obj.a_par.gpr_fixed_sigma, ... + 'Sigma', g_obj.a_par.gpr_initial_sigma); + else + g_obj.g_fit_list{k_mode} = fitrgp(xx_train, yy_train_cur, ... + 'Verbose', g_obj.a_par.gpr_verbosity, ... + 'KernelFunction', g_obj.gpr_kernel_class, ... 
+ 'BasisFunction', g_obj.basis_class); + end + % 'DistanceMethod', 'accurate' + end + + outcode = 1; + end + + function [yy_predict, yy_std ] = predict(g_obj, xx_test) + + [yy_predict, yy_std ] = g_obj.predict_raw( xx_test ); + R = g_obj.Y_rot_matrix; + + yy_predict = yy_predict*transpose(R); + yy_std = yy_std*transpose(R); + + end + + function [ yy_sample, yy_predict, yy_std ] = sample(g_obj, xx_test) + + [yy_predict, yy_std ] = g_obj.predict_raw( xx_test ); + R = g_obj.Y_rot_matrix; + + n_samples = size(xx_test, 1); + %yy_sample = zeros(n_samples, g_obj.n_outputs); + rr = randn(n_samples, g_obj.n_outputs); + + yy_sample(:, :) = yy_predict(:, :) + rr.*yy_std(:, :); + + yy_predict = yy_predict*transpose(R); + yy_std = yy_std*transpose(R); + yy_sample = yy_sample*transpose(R); + end + + + function [yy_predict, yy_std ] = predict_raw(g_obj, xx_test) + % this avoids the postprocessing rotation step. Outsiders + % shouldn't call it + + n_samples = size(xx_test, 1); + yy_predict = zeros(n_samples, g_obj.n_outputs); + yy_std = zeros(n_samples, g_obj.n_outputs); + + for k_mode = 1:g_obj.n_outputs + [ ypred, ysd ] = g_obj.g_fit_list{k_mode}.predict(xx_test); + yy_predict(:, k_mode) = ypred + g_obj.g_mu_list(k_mode); + yy_std(:, k_mode) = ysd; + end + + + + end + + function [ outcode ] = save_to_text(g_obj, output_filebase) + + for k = 1:length(g_obj.g_fit_list) + cur_fit = g_obj.g_fit_list{k}; + pars = [cur_fit.Sigma, cur_fit.KernelInformation.KernelParameters', cur_fit.Beta']; + outputfilename = sprintf('%s_k_%d_parameters', output_filebase, k); + save(outputfilename, 'pars', '-ascii'); + end + outputfilename = sprintf('%s_g_mu_list', output_filebase); + zz = g_obj.g_mu_list; + save(outputfilename, 'zz', '-ascii'); + + outcode = 1; + + end + + function [ sigma_n ] = get_sigma_n_list(g_obj) + sigma_n = zeros(size(g_obj.g_fit_list)); + for k = 1:length(g_obj.g_fit_list) + sigma_n(k) = g_obj.g_fit_list{k}.Sigma; + end + end + end +end + diff --git 
a/dnosearch/examples/lamp/Matlab_GP_Implementation/GPR_Vector_SoS.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/GPR_Vector_SoS.m new file mode 100644 index 0000000..81c0f50 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/GPR_Vector_SoS.m @@ -0,0 +1,729 @@ +classdef GPR_Vector_SoS < handle + %GPR_VECTOR_SOS Vector-output GPR surrogate built from sums of separable + % kernels (input kernel times output-mode kernel), with explicit-basis corrections + + properties + a_par; + % n_modes; + g_fit; + g_mu; + n_inputs; + n_outputs; + + D_out; + V_out; + ts_mu; + overall_norm_factor; + + kernel_class; + basis_class; + sker; + mker; + kfcn; + theta0; + sigma0; + rho; + + fit_type; + mean_sample_alg; + sig_mat_class; + cov_sample_alg; + + K; + Kinv; + %H + Hinv; + K_L_chol; + H_L_chol; + alpha; + + exp_name; + verbosity; + end + + methods + function g_obj = GPR_Vector_SoS( a_par, exp_name ) + %GPR_VECTOR_SOS Construct an instance of this class + + g_obj.a_par = a_par; + %g_obj.n_modes = a_par.n_modes; + %g_obj.g_fit_list = cell(g_obj.n_modes, 1); + g_obj.exp_name = exp_name; + + g_obj.fit_type = 'none'; + + %g_obj.mean_sample_alg = 'direct'; + g_obj.mean_sample_alg = 'representer'; + g_obj.sig_mat_class = 'constant-diagonal'; + %g_obj.sig_mat_class = '2d-rho-correlated'; + %g_obj.cov_sample_alg = 'direct'; + g_obj.cov_sample_alg = 'mat-reimplement'; + + g_obj.basis_class = 'constant'; + + g_obj.verbosity = g_obj.a_par.gpr_verbosity; + g_obj.sigma0 = 1; + g_obj.rho = 0; + + end + + function [ outcode ] = set_kl(g_obj, D_list, V_list, ts_mu, beta) + g_obj.D_out = D_list; + g_obj.V_out = V_list; + g_obj.ts_mu = ts_mu; + g_obj.overall_norm_factor = beta; + + outcode = 1; + end + + function [ outcode ] = set_kernel(g_obj, kernel_class, n_inputs, n_outputs ) + + g_obj.n_inputs = n_inputs; + g_obj.n_outputs = n_outputs; + g_obj.kernel_class = kernel_class; + + switch g_obj.kernel_class + case 'smi-sqdexp' + + %g_obj.sker = @(XN,XM,theta) (exp(theta(2))^2)*...
+ % exp(-(pdist2(XN(:, 1:end-1),XM(:, 1:end-1)).^2)/(2*exp(theta(1))^2)); + g_obj.sker = @(XN,XM,theta) (theta(2)^2)*... + exp(-(pdist2(XN(:, 1:end-1),XM(:, 1:end-1), 'squaredeuclidean'))/(2*theta(1)^2)); + + g_obj.mker = @(XN,XM,theta) (XN(:,end) == XM(:,end)'); % transpose gives an N-by-M mode indicator + + g_obj.kfcn = @(XN,XM,theta) g_obj.sker(XN,XM,theta).*... + g_obj.mker(XN,XM,theta); + + % + % Choose an initial kernel parameter set + % N.B. size ought to depend on choice of kernel + % + + g_obj.theta0 = [1, 1]; + + case 'diag-sqdexp' + sker1 = @(XN,XM,theta) (theta(2)^2)*... + exp(-(pdist2(XN(:, 1:end-1),XM(:, 1:end-1), 'squaredeuclidean'))/(2*theta(1)^2)); + sker2 = @(XN,XM,theta) (theta(4)^2)*... + exp(-(pdist2(XN(:, 1:end-1),XM(:, 1:end-1), 'squaredeuclidean'))/(2*theta(3)^2)); + mker1 = @(XN,XM,theta) (XN(:,end) == 1).*(XM(:,end) == 1)'; + mker2 = @(XN,XM,theta) (XN(:,end) == 2).*(XM(:,end) == 2)'; + + g_obj.kfcn = @(XN,XM,theta) sker1(XN,XM,theta).*mker1(XN,XM,theta) + ... + sker2(XN,XM,theta).*mker2(XN,XM,theta); + + g_obj.theta0 = [1, 1, 1, 1]; + + case 'full-2d-sqdexp' + % 2x2 chol matrix: [A, 0; B, C]*[A, B; 0, C] + % = [A^2, AB; AB, B^2 + C^2] + % + % instead of keeping the amplitude factor with each + % separable kernel, we'll group the amplitudes with the + % matrix part + % + % Ordering: [1,1] length parameters + % [2,2] length parameters + % [1,2] length parameters + % matrix amplitudes [1, 0; 2, 3] + sker1 = @(XN,XM,theta) exp(-(pdist2(XN(:, 1:end-1),XM(:, 1:end-1), 'squaredeuclidean'))/(2*theta(1)^2)); + sker2 = @(XN,XM,theta) exp(-(pdist2(XN(:, 1:end-1),XM(:, 1:end-1), 'squaredeuclidean'))/(2*theta(2)^2)); + sker3 = @(XN,XM,theta) exp(-(pdist2(XN(:, 1:end-1),XM(:, 1:end-1), 'squaredeuclidean'))/(2*theta(3)^2)); + mker1 = @(XN,XM,theta) theta(4).^2.*... + (XN(:,end) == 1).*(XM(:,end) == 1)'; + mker2 = @(XN,XM,theta) (theta(5).^2 + theta(6).^2).*... + (XN(:,end) == 2).*(XM(:,end) == 2)'; + mker3 = @(XN,XM,theta) (theta(4).*theta(5)).*...
+ (XN(:,end) == 1).*(XM(:,end) == 2)'; + mker4 = @(XN,XM,theta) (theta(4).*theta(5)).*... + (XN(:,end) == 2).*(XM(:,end) == 1)'; + + g_obj.kfcn = @(XN,XM,theta) sker1(XN,XM,theta).*mker1(XN,XM,theta) + ... + sker2(XN,XM,theta).*mker2(XN,XM,theta) + ... + sker3(XN,XM,theta).*(mker3(XN,XM,theta) + mker4(XN,XM,theta)); + + g_obj.theta0 = [1, 1, 1, 1, 1, 1]; + + case 'full-2d-sqdexp-ard' + + ii1 = 1:n_inputs; + ii2 = n_inputs + (1:n_inputs); + ii3 = 2*n_inputs + (1:n_inputs); + + sker1 = @(XN,XM,theta) exp(-pdist2(XN(:, 1:end-1),XM(:, 1:end-1), 'seuclidean', abs(theta(ii1))).^2/2); + sker2 = @(XN,XM,theta) exp(-pdist2(XN(:, 1:end-1),XM(:, 1:end-1), 'seuclidean', abs(theta(ii2))).^2/2); + sker3 = @(XN,XM,theta) exp(-pdist2(XN(:, 1:end-1),XM(:, 1:end-1), 'seuclidean', abs(theta(ii3))).^2/2); + + %sker1 = @(XN,XM,theta) exp(-sum((XN(:, 1:end-1) - XM(:, 1:end-1)).^2./(2*theta(ii1).^2), 2)); + %sker2 = @(XN,XM,theta) exp(-sum((XN(:, 1:end-1) - XM(:, 1:end-1)).^2./(2*theta(ii2).^2), 2)); + %sker3 = @(XN,XM,theta) exp(-sum((XN(:, 1:end-1) - XM(:, 1:end-1)).^2./(2*theta(ii3).^2), 2)); + + i4 = 3*n_inputs+1; + i5 = 3*n_inputs+2; + i6 = 3*n_inputs+3; + mker1 = @(XN,XM,theta) theta(i4).^2.*... + (XN(:,end) == 1).*(XM(:,end) == 1)'; + mker2 = @(XN,XM,theta) (theta(i5).^2 + theta(i6).^2).*... + (XN(:,end) == 2).*(XM(:,end) == 2)'; + mker3 = @(XN,XM,theta) (theta(i4).*theta(i5)).*... + (XN(:,end) == 1).*(XM(:,end) == 2)'; + mker4 = @(XN,XM,theta) (theta(i4).*theta(i5)).*... + (XN(:,end) == 2).*(XM(:,end) == 1)'; + + g_obj.kfcn = @(XN,XM,theta) sker1(XN,XM,theta).*mker1(XN,XM,theta) + ... + sker2(XN,XM,theta).*mker2(XN,XM,theta) + ... + sker3(XN,XM,theta).*(mker3(XN,XM,theta) + mker4(XN,XM,theta)); + + g_obj.theta0 = ones(1, 3*n_inputs+3); + + otherwise + warning('%s not recognized!\n', g_obj.kernel_class) + end + + outcode = 1; + end + + + + function [ outcode ] = set_theta0(g_obj, theta0 ) + + if ~isequal(length(theta0), length(g_obj.theta0)) + warning('Length mismatch! 
Looking for %d, got %d!\n', ... + length(g_obj.theta0), length(theta0)); + end + + g_obj.theta0 = theta0; + + outcode = 1; + end + + + + function [ outcode ] = set_sigma0(g_obj, sigma0 ) + g_obj.sigma0 = sigma0; + outcode = 1; + end + + + + function [ xx_unroll, yy_unroll ] = ... + unroll_training_data(g_obj, xx_train, yy_train) + + n_records = size(yy_train, 1); + xx_unroll = zeros(n_records*g_obj.n_outputs, g_obj.n_inputs + 1); + yy_unroll = zeros(n_records*g_obj.n_outputs, 1); + + for k = 1:n_records + for j = 1:g_obj.n_outputs + index = (k-1)*g_obj.n_outputs + j; + xx_unroll(index, :) = [xx_train(k, :), j]; + yy_unroll(index) = yy_train(k, j); + end + end + + end + + function [ xx_unroll ] = ... + unroll_test_data(g_obj, xx_test) + + n_records = size(xx_test, 1); + xx_unroll = zeros(n_records*g_obj.n_outputs, g_obj.n_inputs + 1); + + for k = 1:n_records + for j = 1:g_obj.n_outputs + index = (k-1)*g_obj.n_outputs + j; + xx_unroll(index, :) = [xx_test(k, :), j]; + end + end + + end + + + + + + function [ outcode ] = train(g_obj, xx_train, yy_train) + + g_obj.g_mu = mean(yy_train, 1); + yy_trainc = yy_train - repmat(g_obj.g_mu, [size(yy_train, 1), 1]); + + [ xx_utrain, yy_utrain ] = g_obj.unroll_training_data(xx_train, yy_trainc); + + + fprintf('Beginning parameter optimization step: %s.\n', g_obj.fit_type); + + switch g_obj.fit_type + case 'fit-all' + g_obj.train_smi( xx_utrain, yy_utrain ); + + case 'no_opt' + g_obj.train_no( xx_utrain, yy_utrain ); + + case 'none' + warning('No fitting specified!\n'); + + otherwise + warning('%s not implemented!\n', g_obj.fit_type); + + end + + fprintf('Beginning matrix inversion step.\n'); + tic; + + %theta = g_obj.g_fit.ModelParameters.KernelParameters; % initial values + %theta = g_obj.g_fit.KernelInformation.KernelParameters; % trained values + + beta = g_obj.g_fit.Beta; + sigma = g_obj.g_fit.Sigma; + n_usamples = size(xx_utrain, 1); + + [ KK ] = g_obj.calc_K(); + g_obj.K = KK; + + % inverse kernel matrix + + KK_aug =
g_obj.K + sigma^2*eye(n_usamples); + g_obj.Kinv = inv(KK_aug); + + % Matlab internally saves the cholesky factor too + + [L,status] = chol(KK_aug,'lower'); + g_obj.K_L_chol = L; + + % inverse kernel matrix, explicit basis modification + + if ~isequal(g_obj.basis_class, 'none') + + %g_obj.H = ones(1, size(xx_utrain, 1)); + H = calc_H(g_obj); + HH = H*g_obj.Kinv*transpose(H); + g_obj.Hinv = inv(HH); + [Lh,status] = chol(HH,'lower'); + g_obj.H_L_chol = Lh; + + % + % This is just the same calculation that Matlab got for + % g_fit.Beta + % + + Y_raw = g_obj.g_fit.Y; + beta_hat = (H*g_obj.Kinv*H') \ (H*g_obj.Kinv*Y_raw); + + switch g_obj.basis_class + case 'constant' + fprintf('Matlab beta: %0.2f. Direct beta: %0.2f.\n', ... + beta, beta_hat); + case 'linear' + fprintf('Matlab beta:\n'); + disp(beta'); + fprintf('Direct beta:\n'); + disp(beta_hat'); + end + + % Calculate alpha explicitly, because I don't trust Matlab's + % internal calculation? for some reason? in the vector case + + Y = g_obj.g_fit.Y - H'*beta; + g_obj.alpha = transpose(L)\(L\Y); + + else + Y = g_obj.g_fit.Y; + g_obj.alpha = transpose(L)\(L\Y); + end + + + fprintf('Matrix inversion complete after %0.2f seconds.\n', toc); + + % + % Resub the training points to try to figure out the error + % correlations + % + + fprintf('Resampling training points for residual correlation study.\n'); + tic + + [ qq_pred_mu ] = g_obj.predict_mean(xx_train); + rr = yy_train - qq_pred_mu; + rho0 = corr(rr); + g_obj.rho = rho0(1, 2); + + fprintf('Resampling complete after %0.2f seconds.\n', toc); + + outcode = 1; + end + + + + function [ outcode ] = train_smi(g_obj, xx_train, yy_train) + + g_obj.g_fit = fitrgp(xx_train, yy_train, ... + 'Verbose', g_obj.verbosity, ... + 'KernelFunction', g_obj.kfcn, 'KernelParameters', g_obj.theta0, ... 
+ 'Sigma', g_obj.sigma0, 'BasisFunction', g_obj.basis_class); + + outcode = 1; + end + + + + function [ outcode ] = train_no(g_obj, xx_train, yy_train) + + g_obj.g_fit = fitrgp(xx_train, yy_train, ... + 'Verbose', g_obj.verbosity, ... + 'KernelFunction', g_obj.kfcn, 'KernelParameters', g_obj.theta0, ... + 'Sigma', g_obj.sigma0, 'FitMethod', 'none', ... + 'BasisFunction', g_obj.basis_class); + + % 'OptimizeHyperparamters', {'Beta'} + + outcode = 1; + end + + + + function [ yy_predict ] = predict_mean(g_obj, xx_test) + + beta = g_obj.g_fit.Beta; + + n_samples = size(xx_test, 1); + + yy_predict = zeros(n_samples, g_obj.n_outputs); + + %X = g_obj.g_fit.ActiveSetVectors; + Y = g_obj.g_fit.Y; + %H = calc_H(g_obj); + g_obj.Kinv; + aa = g_obj.alpha; + + + + for k = 1:n_samples + [ xx_utest_cur ] = g_obj.unroll_test_data(xx_test(k, :)); + %n_usamples = size(X, 1); + + % + % Build our kernel matrices, Kinv, Ks, and Kss + % + + [ Ks ] = g_obj.calc_Ks( xx_utest_cur ); + %[ Kss ] = g_obj.calc_Kss( xx_utest_cur ); + + % + % basis correction terms (for covariance) + % + + Hs = g_obj.calc_Hs(xx_utest_cur); + %Hs = ones(1, g_obj.n_outputs); + %R = Hs - g_obj.H*g_obj.Kinv*Ks; + + % + % Use our kernel matrices to calculate the mean of the + % posterior distributions + % + + switch g_obj.mean_sample_alg + case 'direct' + yy_direct = transpose(Ks)*g_obj.Kinv*(Y) + ... + Hs'*beta + g_obj.g_mu'; + + yy_predict(k, :) = yy_direct; + + case 'representer' + %[ Ka ] = g_obj.calc_Ka( xx_utest_cur ); + + yy_representer = (Ks')*aa + g_obj.g_mu' + ... + Hs'*beta; + + yy_predict(k, :) = yy_representer; + end + end + + end + + + + + + function [ yy_predict, yy_cov ] = predict(g_obj, xx_test) + % + % Matlab's built in predict() method does not allow for + % correlated uncertainties. This makes it sub-optimal for our + % purposes. 
Instead, we will implement the exact GP sampling + % procedure + % + % We use the representer theorem (which apparently better + % matches Matlab's built in predict() numerics). We also + % carefully adjust the mean and covariance using the explicit + % basis functions (again, b/c that's what Matlab does). + % + % Matlab probably uses the Nadaraya-Watson estimator? + % + % We haven't expanded to handle different explicit bases yet. + % We're assuming that the explicit prior on the basis functions + % isn't important (See R&W 2.42) + % + % yy_predict -- [n_samples x n_outputs] matrix of predicted means + % yy_cov -- [n_samples] cell array of [n_outputs x n_outputs] + % square covariance matrices + % + + %theta = g_obj.g_fit.KernelInformation.KernelParameters; + beta = g_obj.g_fit.Beta; + + n_samples = size(xx_test, 1); + + yy_predict = zeros(n_samples, g_obj.n_outputs); + yy_cov = cell(n_samples, 1); + + %X = g_obj.g_fit.X; % these are usually the same + X = g_obj.g_fit.ActiveSetVectors; + Y = g_obj.g_fit.Y; + g_obj.Kinv; + %aa = g_obj.g_fit.Alpha; + aa = g_obj.alpha; + H = g_obj.calc_H(); + + + for k = 1:n_samples + [ xx_utest_cur ] = g_obj.unroll_test_data(xx_test(k, :)); + %n_usamples = size(X, 1); + + % + % Build our kernel matrices, Kinv, Ks, and Kss + % + + [ Ks ] = g_obj.calc_Ks( xx_utest_cur ); + [ Kss ] = g_obj.calc_Kss( xx_utest_cur ); + + % + % basis correction terms (for covariance) + % + + Hs = g_obj.calc_Hs( xx_utest_cur ); + R = Hs - H*g_obj.Kinv*Ks; + + % + % Use our kernel matrices to calculate the mean of the + % posterior distributions + % + + switch g_obj.mean_sample_alg + case 'direct' + yy_direct = transpose(Ks)*g_obj.Kinv*(Y) + ... + Hs'*beta + g_obj.g_mu'; + + yy_predict(k, :) = yy_direct; + + case 'representer' + %[ Ka ] = g_obj.calc_Ka( xx_utest_cur ); + + yy_representer = (Ks')*aa + g_obj.g_mu' + ...
+ Hs'*beta; + + yy_predict(k, :) = yy_representer; + end + + % + % Calculate the covariance contribution from the explicit + % basis + % + + if ~isequal(g_obj.basis_class, 'none') + + basis_adj_alg = 'chol'; + switch basis_adj_alg + case 'direct' + %g_star = transpose(R) *inv(inv(B) + H*Kinv*transpose(H)) + g_star = transpose(R) * g_obj.Hinv * R; + + case 'chol' + LInvHXXnew = g_obj.H_L_chol \ R; + g_star = LInvHXXnew'*LInvHXXnew; + end + %g_star = g_star*eye(g_obj.n_outputs); + else + g_star = zeros(g_obj.n_outputs); + end + + %fprintf('Measurment error covariance algorithm: %s.\n', cov_sample_alg); + + switch g_obj.sig_mat_class + case 'constant-diagonal' + sig_star = g_obj.g_fit.Sigma^2*eye(g_obj.n_outputs); + + case '2d-rho-correlated' + s = g_obj.g_fit.Sigma; + r = g_obj.rho; + sig_star = [s.^2, r.*s.^2; r.*s.^2, s.^2]./(1 + abs(r)); + + + otherwise + warning('%s not recognized!\n', g_obj.sig_mat_class) + + end + + % + % Use our kernel matrices to calculate the covariance of + % the posterior distributions + % + + %fprintf('Posterior covariance algorithm: %s.\n', cov_sample_alg); + + switch g_obj.cov_sample_alg + case 'direct' + f_star = Kss - transpose(Ks)*g_obj.Kinv*Ks; + + case 'mat-reimplement' + % attempted reimplementation of + % predictExactWithCov() from CompactGPImpl.m + + LInvKXXnew = g_obj.K_L_chol \ (Ks); + f_star = Kss - (LInvKXXnew'*LInvKXXnew); + end + + covmat = f_star + g_star + sig_star; + + M = g_obj.n_outputs; + %covmat(1:M+1:M^2) = max(0,covmat(1:M+1:M^2) + g_obj.g_fit.Sigma^2); + covmat(1:M+1:M^2) = max(0,covmat(1:M+1:M^2)); + + + yy_cov{k} = (covmat + covmat')/2; + end + end + + function [ yy_sample, yy_predict, yy_cov ] = sample(g_obj, xx_test) + % + % draw independent samples from the vector GPR model, where the + % sampled outputs have the correct between mode covariance + % + % Convert uncertainty covariance matrix to residuals via + % Cholesky factorization of covariance matrix-- + % L*z ~ N(0, Sigma) when L^T*L = Sigma + % + + 
n_samples = size(xx_test, 1); + + if (g_obj.verbosity >= 1) + fprintf('Beginning vector sampling of %d distinct samples.\n', n_samples); + tic; + end + + [ yy_predict, yy_cov ] = g_obj.predict(xx_test); + + if (g_obj.verbosity >= 1) + fprintf('GPR prediction step complete after %0.2f seconds.\n', toc); + fprintf('Beginning correlated error calculations.\n'); + tic; + end + + yy_sample = zeros(n_samples, g_obj.n_outputs); + + for k = 1:n_samples + rr = randn(g_obj.n_outputs, 1); + Y_cov = yy_cov{k}; + [L, flag] = chol(Y_cov, 'lower'); + if (flag > 0) + warning('Non-positive-definite covariance at sample %d; using zero noise.\n', k); + L = zeros(g_obj.n_outputs); + end + + yy_sample(k, :) = yy_predict(k, :) + (L*rr)'; + end + + if (g_obj.verbosity >= 1) + fprintf('Correlated error calculations complete after %0.2f seconds.\n', toc); + end + end + + + % + % Helper functions for matrix building + % + + function [ KK ] = calc_K(g_obj) + xx_utrain = g_obj.g_fit.ActiveSetVectors; + theta = g_obj.g_fit.KernelInformation.KernelParameters; + %sigma = g_obj.g_fit.Sigma; + + n_usamples = size(xx_utrain, 1); + KK = zeros(n_usamples, n_usamples); + for k1 = 1:n_usamples + for k2 = k1:n_usamples + KK(k1, k2) = g_obj.kfcn(xx_utrain(k1, :), xx_utrain(k2, :), theta); + KK(k2, k1) = KK(k1, k2); + end + end + end + + + function [ Ks ] = calc_Ks(g_obj, xx_utest_cur) + X = g_obj.g_fit.ActiveSetVectors; + n_usamples = size(X, 1); + theta = g_obj.g_fit.KernelInformation.KernelParameters; + Ks = zeros(n_usamples, g_obj.n_outputs); + for k1 = 1:n_usamples + for k2 = 1:g_obj.n_outputs + Ks(k1, k2) = g_obj.kfcn(X(k1, :), xx_utest_cur(k2, :), theta); + end + end + end + + function [ Kss ] = calc_Kss(g_obj, xx_utest_cur) + theta = g_obj.g_fit.KernelInformation.KernelParameters; + Kss = zeros( g_obj.n_outputs, g_obj.n_outputs); + for k1 = 1:g_obj.n_outputs + for k2 = k1:g_obj.n_outputs + Kss(k1, k2) = g_obj.kfcn(xx_utest_cur(k1, :), xx_utest_cur(k2, :), theta); +
Kss(k2, k1) = Kss(k1, k2); + end + end + end + + function [ Ka ] = calc_Ka(g_obj, xx_utest_cur) + theta = g_obj.g_fit.KernelInformation.KernelParameters; + Ka = zeros(g_obj.n_outputs, g_obj.g_fit.ActiveSetSize); + A = g_obj.g_fit.ActiveSetVectors; + + for k1 = 1:g_obj.g_fit.ActiveSetSize + for k2 = 1:g_obj.n_outputs + Ka(k2, k1) = g_obj.kfcn(xx_utest_cur(k2, :), A(k1, :), theta); + %Ka(k2, k1) = g_obj.kfcn(A(k1, :), xx_utest_cur(k2, :), theta); + end + end + end + + + function [ H ] = calc_H(g_obj) + X = g_obj.g_fit.ActiveSetVectors; + switch g_obj.basis_class + case 'none' + H = ones(0, size(X, 1)); + case 'constant' + H = ones(1, size(X, 1)); + case 'linear' + H = [ones(1, size(X, 1)); X']; + otherwise + warning('%s not recognized!\n', g_obj.basis_class); + end + end + + function [ Hs ] = calc_Hs(g_obj, xx_utest_cur) + switch g_obj.basis_class + case 'none' + Hs = ones(0, size(xx_utest_cur, 1)); + case 'constant' + Hs = ones(1, size(xx_utest_cur, 1)); + case 'linear' + Hs = [ones(1, size(xx_utest_cur, 1)); xx_utest_cur']; + otherwise + warning('%s not recognized!\n', g_obj.basis_class); + end + end + + end +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/LAMP_Protocol.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/LAMP_Protocol.m new file mode 100644 index 0000000..f86dbff --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/LAMP_Protocol.m @@ -0,0 +1,262 @@ +classdef LAMP_Protocol < handle + %LAMP_PROTOCOL End-to-end wrapper: loads training/testing data, applies + % the KL transform, trains a GPR surrogate, and samples from it + + properties + a_par; + exp_name; + + aa_train; + aa_test; + + zz_train; + zz_test; + + qq_train; + qq_test; + + V_kl; + D_kl; + ts_mu; + overall_norm; + n_output_modes; + + rot_mat; + + gpr_obj; + + vector_pair_list; + rho_list; + + RR_res; + end + + methods + function p_obj = LAMP_Protocol(a_par) + p_obj.a_par = a_par; + p_obj.overall_norm = 1; + p_obj.n_output_modes = a_par.n_modes; + p_obj.vector_pair_list = []; + + p_obj.rot_mat = 1; + end + + + + function [ outcode ] =
load_training_data(p_obj, aa_train, zz_train) + p_obj.aa_train = aa_train; + + if p_obj.a_par.truncate_t_steps + K = p_obj.a_par.t_steps_kept; + L = size(zz_train, 1); + ii = (L - K + 1):L; + zz_train = zz_train(ii, :); + end + p_obj.zz_train = zz_train; + + outcode = 1; + end + + + + function [ outcode ] = load_testing_data(p_obj, aa_test, zz_test) + p_obj.aa_test = aa_test; + + if p_obj.a_par.truncate_t_steps + K = p_obj.a_par.t_steps_kept; + L = size(zz_test, 1); + ii = (L - K + 1):L; + zz_test = zz_test(ii, :); + end + p_obj.zz_test = zz_test; + + outcode = 1; + end + + + + function [ outcode ] = transform_data(p_obj) + + fprintf('Transform rule: %s.\n', p_obj.a_par.kl_transformation_rule); + switch p_obj.a_par.kl_transformation_rule + case 'full-mc' + warning('%s not implemented!\n', p_obj.a_par.kl_transformation_rule) + case 'restricted-mc' + [ V, D, ts ] = calc_kl_modes(p_obj.zz_test); + + p_obj.V_kl = V; + p_obj.D_kl = D; + p_obj.ts_mu = ts; + + [ p_obj.qq_train ] = kl_transform_ts(p_obj.a_par, p_obj.zz_train, ... + p_obj.V_kl, p_obj.D_kl, p_obj.ts_mu); + [ p_obj.qq_test ] = kl_transform_ts(p_obj.a_par, p_obj.zz_test, ... + p_obj.V_kl, p_obj.D_kl, p_obj.ts_mu); + + case 'structured-sampling' + [ V, D, ts ] = calc_kl_modes(p_obj.zz_train); + + p_obj.V_kl = V; + p_obj.D_kl = D; + p_obj.ts_mu = ts; + + [ p_obj.qq_train ] = kl_transform_ts(p_obj.a_par, p_obj.zz_train, ... + p_obj.V_kl, p_obj.D_kl, p_obj.ts_mu); + [ p_obj.qq_test ] = kl_transform_ts(p_obj.a_par, p_obj.zz_test, ... + p_obj.V_kl, p_obj.D_kl, p_obj.ts_mu); + + case 'no-transform' + p_obj.V_kl = 1; + p_obj.D_kl = 1; + p_obj.ts_mu = 0; + + p_obj.qq_train = p_obj.zz_train'; + p_obj.qq_test = p_obj.zz_test'; + + case 'fixed-transform' + [ p_obj.qq_train ] = kl_transform_ts(p_obj.a_par, p_obj.zz_train, ... + p_obj.V_kl, p_obj.D_kl, p_obj.ts_mu); + [ p_obj.qq_test ] = kl_transform_ts(p_obj.a_par, p_obj.zz_test, ... 
+ p_obj.V_kl, p_obj.D_kl, p_obj.ts_mu); + + end + + outcode = 1; + end + + + function [ outcode ] = train_gpr(p_obj) + + if isempty(p_obj.vector_pair_list) + sur = GPR_Separable(p_obj.a_par, p_obj.exp_name ); + sur.n_outputs = p_obj.n_output_modes; + sur.set_kl(p_obj.D_kl, p_obj.V_kl, p_obj.ts_mu, p_obj.overall_norm); + sur.basis_class = p_obj.a_par.gpr_explicit_basis_class; + + sur.set_Y_rot_matrix(p_obj.rot_mat); + sur.train(p_obj.aa_train, p_obj.qq_train); + p_obj.gpr_obj = sur; + + else + sur = GPR_List(p_obj.a_par, p_obj.exp_name ); + sur.n_outputs = p_obj.n_output_modes; + sur.set_kl(p_obj.D_kl, p_obj.V_kl, p_obj.ts_mu, p_obj.overall_norm); + sur.basis_class = p_obj.a_par.gpr_explicit_basis_class; + + sur.set_Y_rot_matrix(p_obj.rot_mat); + + sur.train(p_obj.aa_train, p_obj.qq_train, ... + p_obj.vector_pair_list, p_obj.rho_list); + p_obj.gpr_obj = sur; + + end + + outcode = 1; + + end + + function [ zz_sample, zz_mu, zz_std ] = sample( p_obj, aa) + + n_samples = size(aa, 1); + [ yprd, ysd ] = p_obj.gpr_obj.predict(aa); + + bb = randn(size(ysd)); + ysample = yprd + bb.*ysd; + + zz_sample = zeros(n_samples, size(p_obj.V_kl, 1)); + zz_mu = zeros(n_samples, size(p_obj.V_kl, 1)); + zz_var = zeros(n_samples, size(p_obj.V_kl, 1)); + %zz_list_mo = zeros(n_samples, size(V_out, 1)); + + M = p_obj.n_output_modes; + + for k_sample = 1:n_samples + zz_sample(k_sample, :) = ts_transform_kl( p_obj.a_par, ... + ysample(k_sample, :), p_obj.V_kl, p_obj.D_kl, p_obj.ts_mu ); + zz_mu(k_sample, :) = ts_transform_kl( p_obj.a_par, ... + yprd(k_sample, :), p_obj.V_kl, p_obj.D_kl, p_obj.ts_mu ); + zz_var(k_sample, :) = ysd(k_sample, :).^2*... 
+ (p_obj.V_kl(:, 1:M)'.^2.*p_obj.D_kl(1:M)); + end + + zz_std = sqrt(zz_var); + + end + + function [ outcode ] = plot_basis( p_obj ) + figure(21); + clf; + for k = 1:4 + subplot(2, 2, k); + hold on + title(sprintf('modes $%d$ \\& $%d$', 2*k-1, 2*k), 'Interpreter', 'Latex') + plot(p_obj.V_kl(:, 2*k - 1)) + plot(p_obj.V_kl(:, 2*k)) + end + + outcode = 1; + end + + function [ outcode ] = plot_surrogate( p_obj, k_mode ) + figure(22); + clf; + scatter3(p_obj.aa_train(:, 1), p_obj.aa_train(:, 2), p_obj.qq_train(:, k_mode)); + title('Training Data', 'Interpreter', 'Latex'); + + [ ~, qq_hat, ~] = p_obj.gpr_obj.sample(p_obj.aa_train); + + figure(23); + clf; + scatter3(p_obj.aa_train(:, 1), p_obj.aa_train(:, 2), qq_hat(:, k_mode)); + title('Resampled means', 'Interpreter', 'Latex'); + + + z_max = 4.5; + a_grid = linspace(-z_max, z_max, 65); + [aa1, aa2] = meshgrid(a_grid, a_grid); + aa_grid = [aa1(:), aa2(:), zeros(size(aa1(:)))]; + zz = p_obj.gpr_obj.predict(aa_grid); + zz_plot = reshape(zz(:, k_mode), size(aa1)); + + figure(24); + clf; + pcolor(aa1, aa2, zz_plot) + shading flat + xlabel('$\alpha_1$', 'Interpreter', 'Latex') + ylabel('$\alpha_2$', 'Interpreter', 'Latex') + title(sprintf('surrogate mode %d', k_mode), 'Interpreter', 'Latex') + colorbar() + + outcode = 1; + end + + function [ outcode ] = save_to_text( p_obj, output_path) + + output_filebase = sprintf('%s/%s', output_path, p_obj.exp_name); + + outputfilename = sprintf('%s_aa_train', output_filebase); + zz = p_obj.aa_train; + save(outputfilename, 'zz', '-ascii'); + outputfilename = sprintf('%s_qq_train', output_filebase); + zz = p_obj.qq_train; + save(outputfilename, 'zz', '-ascii'); + outputfilename = sprintf('%s_V_kl', output_filebase); + zz = p_obj.V_kl; + save(outputfilename, 'zz', '-ascii'); + outputfilename = sprintf('%s_D_kl', output_filebase); + zz = p_obj.D_kl; + save(outputfilename, 'zz', '-ascii'); + outputfilename = sprintf('%s_ts_mu', output_filebase); + zz = p_obj.ts_mu; + save(outputfilename, 
'zz', '-ascii'); + outputfilename = sprintf('%s_overall_norm', output_filebase); + zz = p_obj.overall_norm; + save(outputfilename, 'zz', '-ascii'); + + p_obj.gpr_obj.save_to_text(output_filebase); + + outcode = 1; + end + end +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/RUN_ME.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/RUN_ME.m new file mode 100644 index 0000000..397a715 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/RUN_ME.m @@ -0,0 +1,92 @@ +% Run this code to compute the GP LHS and GP US-LW-K results +% The results will be saved as: + +close('all'); +clearvars + + +a_par = Analysis_Parameters(); +fig_bas_path = './output/'; +if ~exist(fig_bas_path, 'dir') + mkdir(fig_bas_path); +end +a_par.n_modes = 12; +%a_par.kl_transformation_rule = 'structured-sampling'; +a_par.kl_transformation_rule = 'restricted-mc'; +a_par.gpr_verbosity = 1; +%a_par.gpr_kernel_class = 'squaredexponential'; +a_par.gpr_kernel_class = 'ardsquaredexponential'; +a_par.gpr_explicit_basis_class = 'none'; + +as_par = Active_Search_Parameters(); + + + + +path_as_dataset = '../LAMP_10D_Data/'; +filename_aa = sprintf('%skl-2d-10-40-design.txt', path_as_dataset); +aa10d = load(filename_aa); +filename_zz = sprintf('%skl-2d-10-40-vbmg.txt', path_as_dataset); +zz10d = load(filename_zz); + + +as_par.initial_samples_rule = 'random-sample'; +n_acq_restarts = 45; +as_par.n_init = 3; +as_par.n_dim_in = 10; +as_par.n_rr_rondel_size = 6; +as_par.n_iter = 100; +as_par.compute_mode_errors = false; +as_par.compute_surr_errors = false; +a_par.kl_transformation_rule = 'restricted-mc'; +as_par.n_grid_likelihood = 32; % needs to be small in big dimension!
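An aside on the likelihood grid configured above: `build_likelihood.m`, later in this diff, turns surrogate-mean outputs evaluated on such a grid into an output-pdf estimate via an importance-weighted histogram with linear interpolation and a small floor. A rough Python sketch of that construction (the function and variable names here are illustrative translations, not repo code; note that `np.interp` clamps at the edges where MATLAB's `interp1(..., 'extrap')` extrapolates):

```python
import numpy as np

def build_likelihood(f, aa_grid, ww, bins, eps0=1e-9):
    """Weighted-histogram output pdf, mirroring build_likelihood.m:
    histogram the surrogate means qq, weighted by the input density ww,
    then interpolate over the left bin edges and floor at eps0."""
    qq = np.asarray(f(aa_grid))[:, 0]          # first output mode only
    pp, _ = np.histogram(qq, bins=bins, weights=ww, density=True)
    edges = bins[:-1]                          # left edges, like bbq(1:end-1)
    def f_likelihood(q):
        return np.maximum(np.interp(q, edges, pp), eps0)
    return f_likelihood
```

The `eps0` floor keeps downstream log-likelihood ratios finite in bins the surrogate outputs never visit.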
+as_par.nq_mc = 5*10^4; +as_par.acq_rule = 'lw-kus'; + +% +% magic normalization constant calculated from Monte Carlo results +% +as_par.overall_norm_factor = 5.0435e+08; + +aa_data = aa10d(:, 1:as_par.n_dim_in); + +true_pq = 0; + +filename_pp = sprintf('%smc-vbm-hist.txt', path_as_dataset); +true_pz = load(filename_pp); +filename_bb = sprintf('%smc-vbm-bins.txt', path_as_dataset); +bbz = load(filename_bb); +as_par.nqb = length(bbz); + +% +% Iterate through active-search scenarios +% +n_repeats = 20; + + +as_par.opt_rule = 'uniform'; + +for jj = 1:n_repeats + a_par.fig_path = sprintf('%s%s-run-%d/', fig_bas_path, as_par.opt_rule ,jj); + as_par.video_path = a_par.fig_path; + if ~exist(a_par.fig_path, 'dir') + mkdir(a_par.fig_path); + end + + sj_as_precomputed(a_par, as_par, aa_data, zz10d, true_pz ) + +end + +as_par.opt_rule = 'as'; +as_par.acq_rule = 'lw-kus'; + +for jj = 1:n_repeats + a_par.fig_path = sprintf('%s%s-run-%d/', fig_bas_path, as_par.opt_rule ,jj); + as_par.video_path = a_par.fig_path; + if ~exist(a_par.fig_path, 'dir') + mkdir(a_par.fig_path); + end + + sj_as_precomputed(a_par, as_par, aa_data, zz10d, true_pz ) + +end \ No newline at end of file diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/build_likelihood.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/build_likelihood.m new file mode 100644 index 0000000..0ef3089 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/build_likelihood.m @@ -0,0 +1,17 @@ +function [ f_likelihood ] = build_likelihood(f, aa3_grid, ww3, bbq) +%BUILD_LIKELIHOOD Build an interpolated output-pdf (likelihood) estimate +% Currently, we build the likelihood transform using only the surrogate +% mean, and not the surrogate uncertainty. This makes a certain
This makes a certain +% importance-weighted histogram easier to build, but might lead to some +% issues + + %[ qq3, ~] = f(aa3_grid); + [ qq3 ] = f(aa3_grid); + qq3 = qq3(:, 1); + [ pp3 ] = weighted_histogram(qq3, ww3, bbq); + + eps0 = 1e-9; + + f_likelihood = @(q) max(interp1(bbq(1:(end-1)), pp3, q, 'linear', 'extrap'), eps0); +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/build_likelihood_function.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/build_likelihood_function.m new file mode 100644 index 0000000..674eca1 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/build_likelihood_function.m @@ -0,0 +1,62 @@ +function [ f_likelihood ] = build_likelihood_function(as_par, f_input, f_black_box, k_out) +%BUILD_LIKELIHOOD_FUNCTION Summary of this function goes here +% Detailed explanation goes here + + if (nargin == 3) + k_out = 1; + end + + + a3_grid = linspace(-as_par.z_max, as_par.z_max, as_par.n_grid_likelihood); + + switch as_par.n_dim_in + case 1 + aa3_grid = a3_grid'; + case 3 + [aa13, aa23, aa33] = meshgrid(a3_grid, a3_grid, a3_grid); + aa3_grid = [aa13(:), aa23(:), aa33(:)]; + case 6 + % + % real grids seem silly in high D + % + + aa3_grid = as_par.z_max*(1-2*lhsdesign(1e4, 6)); + otherwise + %warning('d=%d not handled!\n', as_par.n_dim_in) + aa3_grid = as_par.z_max*(1-2*lhsdesign(1e4, as_par.n_dim_in)); + end + + + [ qq3 ] = f_black_box(aa3_grid); + qq3 = qq3(:, k_out); + ww3 = f_input(aa3_grid); + + + + switch as_par.likelihood_alg + case 'weighted-histogram' + bbq = linspace(-as_par.q_max, as_par.q_max, as_par.nqb+1); + + [ pp3 ] = weighted_histogram(qq3, ww3, bbq); + + eps0 = 1e-9; + + f_likelihood = @(q) max(interp1(bbq(1:(end-1)), pp3, q, 'linear', 'extrap'), eps0); + + case 'kde' + + %a3_grid = linspace(-as_par.z_max, as_par.z_max, as_par.n_grid_likelihood); + %aa3_grid = a3_grid'; + %ww3 = f_input(aa3_grid); + + %[ qq3 ] = f_black_box(aa3_grid); + %qq3 = qq3(:, as_par.acq_active_output_mode); + + 
f_likelihood = @(q) kde_wrapper_function(qq3, ww3, q); + + otherwise + warning('%s not recognized!\n', as_par.likelihood_alg); + end + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/calc_kl_modes.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/calc_kl_modes.m new file mode 100644 index 0000000..f42d395 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/calc_kl_modes.m @@ -0,0 +1,26 @@ +function [ V_out , D_out, zz_mu ] = calc_kl_modes(zz) +%CALC_KL_MODES Summary of this function goes here +% Detailed explanation goes here + + + + sig_vmbg = std(zz(:)); % important for normalization! + mu_vbmg = mean(zz(:)); + + %ZZ_klmc_vbmg_normed = zz/sig_vmbg; + ZZ_klmc_vbmg_normed = zz; + + + n_exp = size(ZZ_klmc_vbmg_normed, 2); + zz_mu = mean(ZZ_klmc_vbmg_normed, 2); + zz_res = ZZ_klmc_vbmg_normed - repmat(zz_mu, [1, n_exp]); + + RR = (zz_res)*(zz_res') / n_exp; + + [V,D] = eig(RR, 'vector'); + + V_out = fliplr(V); + D_out = flipud(D); + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/calc_log_pdf_errors.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/calc_log_pdf_errors.m new file mode 100644 index 0000000..6f67f14 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/calc_log_pdf_errors.m @@ -0,0 +1,26 @@ +function [ log_mae, log_rmse ] = calc_log_pdf_errors(pz1, pz2, zz, trunc_level) +%CALC_LOG_PDF_ERRORS Summary of this function goes here +% Detailed explanation goes here + + if (trunc_level == 0) + trunc_level = 1e-13; + end + + pz1t = max(pz1, trunc_level); + pz2t = max(pz2, trunc_level); + + dz = zz(2) - zz(1); + + use_log10 = true; + + if use_log10 + dp = abs(log10(pz2t) - log10(pz1t)); + else + dp = abs(log(pz2t) - log(pz1t)); + end + + log_mae = dz*sum(dp); + log_rmse = dz*sqrt(sum(dp.^2)); + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/calc_spectra_transform.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/calc_spectra_transform.m new file mode 100644 
index 0000000..496e519 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/calc_spectra_transform.m @@ -0,0 +1,53 @@ +function [ RR ] = calc_spectra_transform(H1, H2, T_m, fixed_T) +%CALC_SPECTRA_TRANSFORM Summary of this function goes here +% Detailed explanation goes here + +% +% Spectra rebalancing +% + + fprintf('Calculating energy ratio between J1 and J2.\n'); + + %fixed_T = 32; + + H_s = H1; + j_par = JONSWAP_Parameters(); + j_par.update_significant_wave_height( H_s ); + j_par.update_modal_period( T_m ); % close to what I was using before? + amp_of_cosine = @(S, w, dw) sqrt(2*S(w).*dw); + + WW_kl = linspace(j_par.omega_min, j_par.omega_max, j_par.n_W)'; + dW = WW_kl(2) - WW_kl(1); + AA_kl = amp_of_cosine(j_par.S, WW_kl, dW); + + T_max_kl = fixed_T; + n_t_kl = 512; + TT_kl = linspace(0, T_max_kl, n_t_kl); + dt_kl = TT_kl(2) - TT_kl(1); + + [ V_1, D_1 ] = calc_direct_kl_modes(AA_kl, WW_kl, TT_kl); + + H_s = H2; + j_par = JONSWAP_Parameters(); + j_par.update_significant_wave_height( H_s ); + j_par.update_modal_period( T_m ); % close to what I was using before?
+ amp_of_cosine = @(S, w, dw) sqrt(2*S(w).*dw); + + WW_kl = linspace(j_par.omega_min, j_par.omega_max, j_par.n_W)'; + dW = WW_kl(2) - WW_kl(1); + AA_kl = amp_of_cosine(j_par.S, WW_kl, dW); + + T_max_kl = fixed_T; + n_t_kl = 512; + TT_kl = linspace(0, T_max_kl, n_t_kl); + dt_kl = TT_kl(2) - TT_kl(1); + + [ V_2, D_2 ] = calc_direct_kl_modes(AA_kl, WW_kl, TT_kl); + + + RR = real(sqrt(D_2./D_1)); + + %disp(RR(1:3)); + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/compare_wavegroup_histograms.asv b/dnosearch/examples/lamp/Matlab_GP_Implementation/compare_wavegroup_histograms.asv new file mode 100644 index 0000000..961996e --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/compare_wavegroup_histograms.asv @@ -0,0 +1,63 @@ +function [ outcode ] = compare_wavegroup_histograms( cur_protocol ) +%COMPARE_WAVEGROUP_HISTOGRAMS Summary of this function goes here +% Detailed explanation goes here + + a_par = cur_protocol.a_par; + + n_recoveries = size(cur_protocol.aa_test, 1); + + XX_raw = cell(n_recoveries, 1); + XX_cooked = cell(n_recoveries, 1); + + V_out = cur_protocol.gpr_obj.V_out; + lambda = cur_protocol.gpr_obj.D_out; %rescale by KL eigenweights + ts_mu = cur_protocol.gpr_obj.ts_mu; + beta = cur_protocol.gpr_obj.overall_norm_factor; + + for k = 1:n_recoveries + qq = cur_protocol.gpr_obj.predict(cur_protocol.aa_test(k, :)); + XX_cooked{k} = ts_transform_kl( a_par, qq, V_out, lambda, ts_mu )*... 
+ beta; + + XX_raw{k} = cur_protocol.zz_test(:, k)*beta; + end + + b_max = 7*beta; + bb = linspace(-b_max, b_max, 65); + bb_cen = (bb(2:end) + bb(1:end-1))/2; + + NN_raw = cell(n_recoveries, 1); + NN_cooked = cell(n_recoveries, 1); + + for k = 1:n_recoveries + NN_raw{k} = histcounts(XX_raw{k}, bb, 'Normalization', 'pdf'); + NN_cooked{k} = histcounts(XX_cooked{k}, bb, 'Normalization', 'pdf'); + end + + + figure(205); + clf; + for k = 1:min(n_recoveries, 9) + subplot(3, 3, k); + hold on + plot(bb_cen, NN_raw{k}); + plot(bb_cen, NN_cooked{k}); + + title(sprintf('wave %d', 400 + k), 'Interpreter', 'Latex'); + legend({'openFOAM', 'reconstruct'}, 'Interpreter', 'Latex'); + + set(gca, 'FontSize', 9); + end + + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); + + if a_par.save_figs + filename = sprintf('%swavegroup-histograms_%s', a_par.fig_path, cur_protocol.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/compare_wavegroup_histograms.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/compare_wavegroup_histograms.m new file mode 100644 index 0000000..10dd163 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/compare_wavegroup_histograms.m @@ -0,0 +1,72 @@ +function [ outcode ] = compare_wavegroup_histograms( cur_protocol ) +%COMPARE_WAVEGROUP_HISTOGRAMS Summary of this function goes here +% Detailed explanation goes here + + a_par = cur_protocol.a_par; + + n_recoveries = size(cur_protocol.aa_test, 1); + + XX_raw = cell(n_recoveries, 1); + XX_cooked = cell(n_recoveries, 1); + + V_out = cur_protocol.gpr_obj.V_out; + lambda = cur_protocol.gpr_obj.D_out; %rescale by KL eigenweights + ts_mu = cur_protocol.gpr_obj.ts_mu; + beta = cur_protocol.gpr_obj.overall_norm_factor; + + for k = 1:n_recoveries + qq = cur_protocol.gpr_obj.predict(cur_protocol.aa_test(k, :)); + XX_cooked{k} = 
ts_transform_kl( a_par, qq, V_out, lambda, ts_mu )*... + beta; + + XX_raw{k} = cur_protocol.zz_test(:, k)*beta; + end + + b_max = 4*beta; + bb = linspace(-b_max, b_max, 17); + bb_cen = (bb(2:end) + bb(1:end-1))/2; + + NN_raw = cell(n_recoveries, 1); + NN_cooked = cell(n_recoveries, 1); + + for k = 1:n_recoveries + NN_raw{k} = histcounts(XX_raw{k}, bb, 'Normalization','pdf'); + NN_cooked{k} = histcounts(XX_cooked{k}, bb, 'Normalization', 'pdf'); + end + + + figure(205); + clf; + for k = 1:min(n_recoveries, 9) + subplot(3, 3, k); + hold on + plot(bb_cen, NN_raw{k}); + plot(bb_cen, NN_cooked{k}); + + title(sprintf('wave %d', 400 + k), 'Interpreter', 'Latex'); + + if k == 1 + legend({'openFOAM', 'reconstruct'}, 'Interpreter', 'Latex'); + end + xlim([bb(1), bb(end)]); + + set(gca, 'FontSize', 9); + %set(gca, 'YScale', 'log'); too little data + + aa = gca; + set(gca, 'XTickLabel', aa.XTickLabel()) + set(gca, 'YTickLabel', aa.YTickLabel()) + end + + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); + + if a_par.save_figs + filename = sprintf('%swavegroup-histograms_%s', a_par.fig_path, cur_protocol.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + + outcode = 1; +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_extrema_pdf.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_extrema_pdf.m new file mode 100644 index 0000000..28be2cd --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_extrema_pdf.m @@ -0,0 +1,71 @@ +function [ maxf, minf, xx ] = compute_extrema_pdf( protocol ) +%COMPUTE_EXTREMA_PDF Summary of this function goes here +% Detailed explanation goes here + + + + a_par = protocol.a_par; + gpr_surrogate = protocol.gpr_obj; + + % + % sample from the surrogates + % + + n_samples = a_par.n_hist_resample; + %n_samples = 20; + + xx_test = randn(n_samples, gpr_surrogate.n_inputs); + 
+ + + switch a_par.gpr_resampling_strat + case 'normally-distributed' + [ yprd, ysd ] = gpr_surrogate.predict(xx_test); + + bb = randn(size(ysd)); + + yy_guess_nd = yprd + bb.*ysd; + + case 'vector-resample' + [ qq_sample, qq_pred_mu, ~ ] = gpr_surrogate.sample(xx_test); + + yy_guess_nd = qq_sample; + + case 'list-only' + [ qq_sample ] = gpr_surrogate.sample(xx_test); + + yy_guess_nd = qq_sample; + + end + + + + V_out = gpr_surrogate.V_out; + lambda = gpr_surrogate.D_out; %rescale by KL eigenweights + beta = gpr_surrogate.overall_norm_factor; % final rescaling + ts_mu = gpr_surrogate.ts_mu; + + zz_list_nd = zeros(n_samples, size(V_out, 1)); + + local_max_list = cell(n_samples, 1); + local_min_list = cell(n_samples, 1); + + for k_sample = 1:n_samples + zz_list_nd(k_sample, :) = ts_transform_kl( a_par, yy_guess_nd(k_sample, :), V_out, lambda, ts_mu ); + local_max_list{k_sample} = findpeaks(zz_list_nd(k_sample, :), 'MinPeakDistance', 15 ); + local_min_list{k_sample} = -findpeaks(-zz_list_nd(k_sample, :), 'MinPeakDistance', 15); + end + + local_max_total = cell2mat(local_max_list'); + local_min_total = cell2mat(local_min_list'); + + xx_zz = linspace(-10*beta, 10*beta, a_par.n_hist_bins); + xx = 1/2*(xx_zz(2:end) + xx_zz(1:end-1)); + + maxf = histcounts(local_max_total*beta, xx_zz, 'Normalization', 'pdf'); + minf = histcounts(local_min_total*beta, xx_zz, 'Normalization', 'pdf'); + + + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_histograms_from_gpr_protocol.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_histograms_from_gpr_protocol.m new file mode 100644 index 0000000..9dcf312 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_histograms_from_gpr_protocol.m @@ -0,0 +1,46 @@ +function [ pq, pz] = compute_histograms_from_gpr_protocol(a_par, as_par, cur_model_protocol) +%COMPUTE_HISTOGRAMS_FROM_GPR_PROTOCOL Summary of this function goes here +% Detailed explanation goes here + + bbq = 
linspace(-as_par.q_max, as_par.q_max, as_par.nqb+1); + + beta = cur_model_protocol.gpr_obj.overall_norm_factor; + %zstar = 7; + zstar = 10; + bbz = linspace(-zstar*beta, zstar*beta, as_par.nqb+1); + + + switch as_par.q_pdf_rule + case 'likelihood-transform' + %pq_list{k} = zz; + warning('This (%s) is bad these days!\n', as_par.q_pdf_rule) + case 'MC' + aa_q = randn(as_par.nq_mc, as_par.n_dim_in); + [ qq, yprd, ysd ] = cur_model_protocol.gpr_obj.sample(aa_q); + pq = histcounts(qq(:, as_par.q_plot), bbq, ... + 'Normalization', 'pdf'); + end + + % + % full pdf + % + + bb = randn(size(ysd)); + yy_guess_nd = yprd + bb.*ysd; + + V_out = cur_model_protocol.gpr_obj.V_out; + lambda = cur_model_protocol.gpr_obj.D_out; %rescale by KL eigenweights + beta = cur_model_protocol.gpr_obj.overall_norm_factor; % final rescaling + ts_mu = cur_model_protocol.gpr_obj.ts_mu; + + zz_list_nd = zeros(as_par.nq_mc, length(ts_mu)); + + for k_sample = 1:as_par.nq_mc + zz_list_nd(k_sample, :) = ts_transform_kl( a_par, yy_guess_nd(k_sample, :), V_out, lambda, ts_mu ); + end + + %zz_list_nd = zz_list_nd*beta; + + pz = histcounts(zz_list_nd(:), bbz, 'Normalization', 'pdf'); +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_histograms_from_steady_state.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_histograms_from_steady_state.m new file mode 100644 index 0000000..84c971e --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_histograms_from_steady_state.m @@ -0,0 +1,14 @@ +function [ pq, pz] = compute_histograms_from_steady_state(a_par, as_par, zz, beta, V_out) +%COMPUTE_HISTOGRAMS_FROM_GPR_PROTOCOL Summary of this function goes here +% Detailed explanation goes here + + bbq = linspace(-as_par.q_max, as_par.q_max, as_par.nqb+1); + + bbz = linspace(-7*beta, 7*beta, as_par.nqb+1); + + pz = histcounts(zz(:)*beta, bbz, 'Normalization', 'pdf'); + + pq = ones(1, length(bbq)-1); + +end + diff --git 
a/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_reconstruction_error.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_reconstruction_error.m new file mode 100644 index 0000000..6512aea --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_reconstruction_error.m @@ -0,0 +1,277 @@ +function [ rmse_list, frac_rmse_list, env_rmse_list, frac_env_rmse_list ] = ... + compute_reconstruction_error( cur_protocol ) +%COMPUTE_RECONSTRUCTION_ERROR Summary of this function goes here +% Detailed explanation goes here + + a_par = cur_protocol.a_par; + + n_recoveries = size(cur_protocol.aa_test, 1); + + XX_raw = cell(n_recoveries, 1); + XX_cooked = cell(n_recoveries, 1); + + V_out = cur_protocol.gpr_obj.V_out; + lambda = cur_protocol.gpr_obj.D_out; %rescale by KL eigenweights + ts_mu = cur_protocol.gpr_obj.ts_mu; + beta = cur_protocol.gpr_obj.overall_norm_factor; + + for k = 1:n_recoveries + qq = cur_protocol.gpr_obj.predict(cur_protocol.aa_test(k, :)); + XX_cooked{k} = ts_transform_kl( a_par, qq, V_out, lambda, ts_mu )*... 
+ beta; + + XX_raw{k} = cur_protocol.zz_test(:, k)*beta; + end + + + rmse_list = zeros(n_recoveries, 1); + frac_rmse_list = zeros(n_recoveries, 1); + + for k = 1:n_recoveries + rmse_list(k) = sqrt(mean((XX_cooked{k} - XX_raw{k}).^2)); + mean_energy = sqrt(mean((XX_raw{k}).^2)); + frac_rmse_list(k) = rmse_list(k)/mean_energy; + end + + ww_cooked = cell(n_recoveries, 1); + ww_raw = cell(n_recoveries, 1); + + nu_max = 0; + nu_min = inf; + + for k = 1:n_recoveries + ww_cooked{k} = fft(XX_cooked{k}); + ww_raw{k} = fft(XX_raw{k}); + + if max(abs(ww_cooked{k})) > nu_max + nu_max = max(abs(ww_cooked{k})); + end + + if min(abs(ww_cooked{k})) < nu_min + nu_min = min(abs(ww_cooked{k})); + end + end + + + + % + % look at Hilbert Transform envelope equivalence + % + + XX_cooked_hilbert_env = cell(n_recoveries, 1); + XX_raw_hilbert_env = cell(n_recoveries, 1); + + env_rmse_list = zeros(n_recoveries, 1); + frac_env_rmse_list = zeros(n_recoveries, 1); + + for k = 1:n_recoveries + cooked_Hilbert = hilbert(XX_cooked{k}); + raw_Hilbert = hilbert(XX_raw{k}); + + XX_cooked_hilbert_env{k} = abs(cooked_Hilbert); + XX_raw_hilbert_env{k} = abs(raw_Hilbert); + + env_rmse_list(k) = sqrt(mean((XX_cooked_hilbert_env{k} - XX_raw_hilbert_env{k}).^2)); + mean_energy = sqrt(mean((XX_raw_hilbert_env{k}).^2)); + frac_env_rmse_list(k) = env_rmse_list(k)/mean_energy; + end + + + + + + n_spread_recoveries = 7; + + n_reps = 1000; + XX_cooked_mean = cell(n_spread_recoveries, 1); + XX_cooked_std = cell(n_spread_recoveries, 1); + + for k = 1:n_spread_recoveries + zz_cooked = zeros(n_reps, length(ts_mu)); + for j = 1:n_reps + qq = cur_protocol.gpr_obj.sample(cur_protocol.aa_test(k, :)); + zz_cooked(j, :) = ts_transform_kl( a_par, qq, V_out, lambda, ts_mu )*...
+ beta; + end + + XX_cooked_mean{k} = mean(zz_cooked, 1); + XX_cooked_std{k} = std(zz_cooked, 0, 1); + end + + + + draw_plots = true; + + + if draw_plots + + % + % reconstruction comparison plot + % + + %zstar = 3; + lw = 1; + + + z_max = 4*cur_protocol.overall_norm; + + + TT_plot = linspace(0, 32, length(XX_raw{1})); + n_plot_recoveries = 7; + + dw = (2*pi)/32; + ww_plot = (0:1:(length(XX_raw{1})-1))*dw; + + figure(201); + clf; + for k = 1:n_plot_recoveries + subplot(3, 3, k); + hold on + plot(TT_plot, XX_cooked{k}, 'LineWidth', lw) + plot(TT_plot, XX_raw{k}, 'LineWidth', lw) + xlabel('$t$', 'Interpreter', 'Latex'); + ylabel('$F_x^{\mbox{tot}}$', 'Interpreter', 'Latex'); + if (k == 1) + legend({'reconstruct', 'openFOAM'}, 'Interpreter', 'Latex', 'Location', 'Northwest'); + end + title(sprintf('wave %d', 400 + k), 'Interpreter', 'Latex') + + xlim([0, max(TT_plot(:))]); + ylim([-z_max, z_max]); + end + + %subplot(2, 2, 2); + %legend({'LAMP', 'recon'}, 'Interpreter', 'Latex'); + + set(gca, 'FontSize', 9); + + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); + + if a_par.save_figs + filename = sprintf('%sreconstructed_time_series-%s', a_par.fig_path, cur_protocol.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + + + % + % reconstruction comparison plots with overlaid GP uncertainty + % + + + + figure(202); + clf; + for k = 1:n_spread_recoveries + subplot(3, 3, k); + hold on + + x2 = [TT_plot, fliplr(TT_plot) ]; + inBetween = [ XX_cooked_mean{k} + XX_cooked_std{k}, fliplr(XX_cooked_mean{k} - XX_cooked_std{k}) ]; + fill(x2', inBetween', 'cyan'); + h2 = plot(TT_plot, XX_cooked_mean{k}, 'LineWidth', lw, 'Color', 'blue'); + + h1 = plot(TT_plot, XX_raw{k}, 'LineWidth', lw, 'Color', 'Red'); + + if (k == 1) + legend([h1, h2], {'openFOAM', 'reconstruct'}, 'Interpreter', 'Latex', 'Location', 'Northwest'); + end + title(sprintf('wave %d', 400 + k), 
'Interpreter', 'Latex') + + xlim([0, max(TT_plot(:))]); + ylim([-z_max, z_max]); + aa = gca; + set(gca, 'YTickLabel', aa.YTickLabel()) + end + + set(gca, 'FontSize', 9); + + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); + + if a_par.save_figs + filename = sprintf('%sreconstructed_time_series_spread-%s', a_par.fig_path, cur_protocol.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + + % + % fourier space comparison plots + % + + n_plot_recoveries = 7; + + figure(203); + clf; + for k = 1:n_plot_recoveries + subplot(3, 3, k); + hold on + plot(ww_plot, abs(ww_cooked{k}), 'LineWidth', lw) + plot(ww_plot, abs(ww_raw{k}), 'LineWidth', lw) + xlabel('$\omega$', 'Interpreter', 'Latex'); + ylabel('$\nu$', 'Interpreter', 'Latex'); + if (k == 1) + legend({'reconstruct', 'openFOAM'}, 'Interpreter', 'Latex', 'Location', 'Northeast'); + end + title(sprintf('wave %d', 400 + k), 'Interpreter', 'Latex') + xlim([0, 4]); + %ylim([nu_min, nu_max]); + ylim([5e5, 1e9]); + set(gca, 'YScale', 'log'); + + aa = gca; + set(gca, 'YTickLabel', aa.YTickLabel()) + end + + set(gca, 'FontSize', 9); + + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); + + if a_par.save_figs + filename = sprintf('%sreconstructed_fourier-%s', a_par.fig_path, cur_protocol.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + + % + % Hilbert envelope comparison plots + % + + n_plot_recoveries = 7; + + figure(204); + clf; + for k = 1:n_plot_recoveries + subplot(3, 3, k); + hold on + plot(TT_plot, XX_cooked_hilbert_env{k}, 'LineWidth', lw) + plot(TT_plot, XX_raw_hilbert_env{k}, 'LineWidth', lw) + xlabel('$t$', 'Interpreter', 'Latex'); + ylabel('$|H(F_z)(t)|$', 'Interpreter', 'Latex'); + if (k == 1) + legend({'reconstruct', 'openFOAM'}, 'Interpreter', 'Latex', 
'Location', 'Northeast'); + end + title(sprintf('wave %d', 400 + k), 'Interpreter', 'Latex') + xlim([0, max(TT_plot(:))]); + ylim([0, z_max]); + + %aa = gca; + %set(gca, 'YTickLabel', aa.YTickLabel()) + end + + set(gca, 'FontSize', 9); + + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); + + if a_par.save_figs + filename = sprintf('%sreconstructed_hilbert_envelope-%s', a_par.fig_path, cur_protocol.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + end + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_true_model_histograms.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_true_model_histograms.m new file mode 100644 index 0000000..5f0a216 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/compute_true_model_histograms.m @@ -0,0 +1,20 @@ +function [true_pq, true_pz] = compute_true_model_histograms(true_model_as_par, true_model_protocol) +%COMPUTE_TRUE_MODEL_HISTOGRAMS Summary of this function goes here +% Detailed explanation goes here + + bbq = linspace(-true_model_as_par.q_max, true_model_as_par.q_max, true_model_as_par.nqb+1); + + true_model_as_par.nq_mc = 1e6; + switch true_model_as_par.true_q_pdf_rule + case 'likelihood-transform' + [ f_likelihood ] = build_likelihood(true_model_protocol.gpr_obj, aa3_grid, ww3, bbq); + true_pq = f_likelihood(qq_interval); + warning('this is bad!\n') + case 'MC' + aa_q = randn(true_model_as_par.nq_mc, 3); + [ qq, ~, ~ ] = true_model_protocol.gpr_obj.sample(aa_q); + true_pq = histcounts(qq(:, true_model_as_par.q_plot), bbq, 'Normalization', 'pdf'); + end + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_4d_mds_map.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_4d_mds_map.m new file mode 100644 index 0000000..ef67529 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_4d_mds_map.m @@ -0,0 
+1,70 @@ +function [ outcode ] = draw_4d_mds_map( f, zstar) +%DRAW_4D_MDS_MAP Summary of this function goes here +% Algorithm taken from NCSS Statistical Software description +% https://ncss-wpengine.netdna-ssl.com/wp-content/themes/ncss/pdf/Procedures/NCSS/Multidimensional_Scaling.pdf + + nx = 5; + x = linspace(-zstar, zstar, nx); + [x1, x2, x3, x4] = ndgrid(x, x, x, x); + + xx1 = x1(:); + xx2 = x2(:); + xx3 = x3(:); + xx4 = x4(:); + nl = length(xx1); + + zz = f(xx1, xx2, xx3, xx4); + + %D = zeros(nl, nl, nl, nl); + %for k1 = 1:(nl-1) + % for k2 = (k1+1):nl + % D(k1, k2) = sqrt((xx1(k1) - xx2(k2)).^2) + % end + %end + + xxs1 = reshape([xx1, xx2, xx3, xx4], [nx^4, 1, 4]); + xxs2 = reshape([xx1, xx2, xx3, xx4], [1, nx^4, 4]); + + xxt1 = repmat(xxs1, [1, nx^4, 1]); + xxt2 = repmat(xxs2, [nx^4, 1, 1]); + + D = sqrt(squeeze(sum((xxt1 - xxt2).^2, 3))); + + A = - 1/2*D.^2; + + Ar = mean(A, 1); + Ac = mean(A, 2); + Ad = mean(Ac, 1); + B = A - repmat(Ar, [nl, 1]) - repmat(Ac, [1, nl]) + Ad; % double centering adds back the grand mean + + [V, lambda] = eig(B); + lambda = diag(lambda); + + n1 = length(lambda); + n2 = length(lambda) - 1; + + V1 = V(:, n1); + V2 = V(:, n2); + + Vmax = max(max(abs(V1(:))), max(abs(V2(:)))); + + Vq = linspace(-Vmax, Vmax, 127); + [ V1q, V2q ] = meshgrid(Vq, Vq); + + %zzV = interp2(V1, V2, zz', L1, L2); + + F = scatteredInterpolant(V1, V2, zz, 'natural', 'nearest'); + zzV = F(V1q, V2q); + + %zzV = griddata(V1,V2,zz,V1q,V2q); + + figure(1); + clf; + pcolor(V1q, V2q, zzV) + shading flat + xlabel('$\lambda_1$', 'Interpreter', 'Latex') + ylabel('$\lambda_2$', 'Interpreter', 'Latex') + + outcode = 1; +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_combined_plots.asv b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_combined_plots.asv new file mode 100644 index 0000000..fa91f73 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_combined_plots.asv @@ -0,0 +1,206 @@ +a_par = Analysis_Parameters(); +a_par.fig_path = 
'../../../Output/LAMP_active_search/mar_pix_3/'; +if ~exist(a_par.fig_path, 'dir') + mkdir(a_par.fig_path); +end + +acq_rules = {'lw-us', 'lw-kus'}; +noise_rules = {'none', 'full'}; +noise_rules_names = {'noiseless', 'noisy'}; +bases = {'none', 'linear'}; +bases_names = {'no-basis', 'linear-basis'}; + +fig_bas_path = '../../../Output/LAMP_active_search/mar_pix_3/'; + +combined_qp_mae_list = cell(2, 2, 2); +combined_qp_rmse_list = cell(2, 2, 2); +combined_surr_mu_mae_list = cell(2, 2, 2); +combined_surr_mu_rmse_list = cell(2, 2, 2); +combined_qp_kl_div_forward_list = cell(2, 2, 2); +combined_qp_kl_div_backward_list = cell(2, 2, 2); +combined_qp_log_mae_list = cell(2, 2, 2); +combined_qp_log_rmse_list = cell(2, 2, 2); +legend_names = cell(2, 2, 2); + +for k3 = 1:length(bases) + for k1 = 1:length(acq_rules) + for k2 = 1:length(noise_rules) + run_name = sprintf('%s-%s-%s', ... + bases{k3}, noise_rules{k2}, ... + acq_rules{k1}); + filename = sprintf('%s%s/error_data.m', fig_bas_path, run_name); + cur_data = load(filename, '-mat'); + + combined_qp_mae_list{k1, k2, k3} = cur_data.qp_mae_list; + combined_qp_rmse_list{k1, k2, k3} = cur_data.qp_rmse_list; + combined_surr_mu_mae_list{k1, k2, k3} = cur_data.surr_mu_mae_list; + combined_surr_mu_rmse_list{k1, k2, k3} = cur_data.surr_mu_rmse_list; + combined_qp_kl_div_forward_list{k1, k2, k3} = cur_data.qp_kl_div_forward_list; + combined_qp_kl_div_backward_list{k1, k2, k3} = cur_data.qp_kl_div_backward_list; + combined_qp_log_mae_list{k1, k2, k3} = cur_data.qp_log_mae_list; + combined_qp_log_rmse_list{k1, k2, k3} = cur_data.qp_log_rmse_list; + + %legend_names{k1, k2, k3} = sprintf('%s-%s-%s', acq_rules{k1}, ... + % noise_rules_names{k2}, bases_names{k3}); + legend_names{k1, k2, k3} = sprintf('%s-%s', ... 
+ noise_rules_names{k2}, bases_names{k3}); + end + end +end + +run_name = '-uniform-noiseless'; +filename = sprintf('%s%s/error_data.m', fig_bas_path, run_name); +uniform_noiseless_data = load(filename, '-mat'); + + + +run_name = '-uniform-noisy'; +filename = sprintf('%s%s/error_data.m', fig_bas_path, run_name); +uniform_noisy_data = load(filename, '-mat'); + + + +NN_plot = 11:60; + +figure(101); +clf; +hold on +plot(NN_plot, uniform_noiseless_data.qp_mae_list, 'Color', 'Black', 'LineStyle','-'); +plot(NN_plot, uniform_noisy_data.qp_mae_list, 'Color', 'Black', 'LineStyle','-.'); +plot(NN_plot, combined_qp_mae_list{1, 2, 1}, 'Color', 'Red', 'LineStyle','-') +plot(NN_plot, combined_qp_mae_list{1, 1, 1}, 'Color', 'Red', 'LineStyle','-.') +%plot(NN_plot, combined_qp_mae_list{2, 2, 1}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_mae_list{2, 1, 1}, 'Color', 'Blue', 'LineStyle','-.') +%plot(NN_plot, combined_qp_mae_list{2, 2, 2}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_mae_list{2, 1, 2}, 'Color', 'Blue', 'LineStyle','-.') +set(gca, 'YScale', 'log'); +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\epsilon$', 'Interpreter', 'Latex'); +title('Mean Absolute Error (MAE)', 'Interpreter', 'Latex'); +%legend({'noisy', 'noiseless'}, 'Interpreter', 'Latex'); +legend({'uniform-noiseless', 'uniform-noisy', ... + legend_names{1, 2, 1}, legend_names{1, 1, 1}}, ... 
+ 'Interpreter', 'Latex'); + +set(gca, 'FontSize', 9); +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%smae-comparison-plot', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end + +figure(102); +clf; +hold on +plot(NN_plot, uniform_noiseless_data.qp_rmse_list, 'Color', 'Black', 'LineStyle','-'); +plot(NN_plot, uniform_noisy_data.qp_rmse_list, 'Color', 'Black', 'LineStyle','-.'); +plot(NN_plot, combined_qp_rmse_list{1, 2, 1}, 'Color', 'Red', 'LineStyle','-') +plot(NN_plot, combined_qp_rmse_list{1, 1, 1}, 'Color', 'Red', 'LineStyle','-.') +%plot(NN_plot, combined_qp_rmse_list{2, 2, 1}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_rmse_list{2, 1, 1}, 'Color', 'Blue', 'LineStyle','-.') +%plot(NN_plot, combined_qp_rmse_list{2, 2, 2}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_rmse_list{2, 1, 2}, 'Color', 'Blue', 'LineStyle','-.') +set(gca, 'YScale', 'log'); +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\epsilon$', 'Interpreter', 'Latex'); +title('Root Mean Square Error (RMSE)', 'Interpreter', 'Latex'); +%legend({'noisy', 'noiseless'}, 'Interpreter', 'Latex'); +legend({'uniform-noiseless', 'uniform-noisy', ... + legend_names{1, 2, 1}, legend_names{1, 1, 1}}, ... 
+ 'Interpreter', 'Latex'); + +set(gca, 'FontSize', 9); +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%srmse-comparison-plot', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end + + +figure(103); +clf; +hold on +plot(NN_plot, uniform_noiseless_data.surr_mu_mae_list, 'Color', 'Black', 'LineStyle','-'); +plot(NN_plot, uniform_noisy_data.surr_mu_mae_list, 'Color', 'Black', 'LineStyle','-.'); +plot(NN_plot, combined_surr_mu_mae_list{1, 2, 1}, 'Color', 'Red', 'LineStyle','-') +plot(NN_plot, combined_surr_mu_mae_list{1, 1, 1}, 'Color', 'Red', 'LineStyle','-.') +%plot(NN_plot, combined_surr_mu_mae_list{2, 2, 1}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_surr_mu_mae_list{2, 1, 1}, 'Color', 'Blue', 'LineStyle','-.') +%plot(NN_plot, combined_surr_mu_mae_list{1, 2, 2}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_surr_mu_mae_list{1, 1, 2}, 'Color', 'Blue', 'LineStyle','-.') +set(gca, 'YScale', 'log'); +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\epsilon$', 'Interpreter', 'Latex'); +title('Surrogate Mean Absolute Error (MAE)', 'Interpreter', 'Latex'); +%legend({'noisy', 'noiseless'}, 'Interpreter', 'Latex'); +legend({'uniform-noiseless', 'uniform-noisy', ... + legend_names{1, 2, 1}, legend_names{1, 1, 1}}, ... 
+ 'Interpreter', 'Latex'); + + + +figure(104); +clf; +hold on +plot(NN_plot, uniform_noiseless_data.qp_kl_div_forward_list(:, 5), 'Color', 'Black', 'LineStyle','-'); +plot(NN_plot, uniform_noisy_data.qp_kl_div_forward_list(:, 5), 'Color', 'Black', 'LineStyle','-.'); +plot(NN_plot, combined_qp_kl_div_forward_list{1, 2, 1}(:, 5), 'Color', 'Red', 'LineStyle','-') +plot(NN_plot, combined_qp_kl_div_forward_list{1, 1, 1}(:, 5), 'Color', 'Red', 'LineStyle','-.') +%plot(NN_plot, combined_qp_rmse_list{2, 2, 1}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_rmse_list{2, 1, 1}, 'Color', 'Blue', 'LineStyle','-.') +%plot(NN_plot, combined_qp_rmse_list{2, 2, 2}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_rmse_list{2, 1, 2}, 'Color', 'Blue', 'LineStyle','-.') +set(gca, 'YScale', 'log'); +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$D_{KL}$', 'Interpreter', 'Latex'); +title('Kullback-Leibler Divergence', 'Interpreter', 'Latex'); +%legend({'noisy', 'noiseless'}, 'Interpreter', 'Latex'); +legend({'uniform-noiseless', 'uniform-noisy', ... + legend_names{1, 2, 1}, legend_names{1, 1, 1}}, ...
+ 'Interpreter', 'Latex'); + +set(gca, 'FontSize', 9); +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%skl-forward-comparison-plot', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end + +figure(105); +clf; +hold on +plot(NN_plot, uniform_noiseless_data.qp_log_mae_list(:, 5), 'Color', 'Black', 'LineStyle','-'); +plot(NN_plot, uniform_noisy_data.qp_log_mae_list(:, 5), 'Color', 'Black', 'LineStyle','-.'); +plot(NN_plot, combined_qp_log_mae_list{1, 2, 1}(:, 5), 'Color', 'Red', 'LineStyle','-') +plot(NN_plot, combined_qp_log_mae_list{1, 1, 1}(:, 5), 'Color', 'Red', 'LineStyle','-.') +%plot(NN_plot, combined_qp_rmse_list{2, 2, 1}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_rmse_list{2, 1, 1}, 'Color', 'Blue', 'LineStyle','-.') +%plot(NN_plot, combined_qp_rmse_list{2, 2, 2}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_rmse_list{2, 1, 2}, 'Color', 'Blue', 'LineStyle','-.') +set(gca, 'YScale', 'log'); +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\epsilon_{\mbox{log}}$', 'Interpreter', 'Latex'); +title('Mean Absolute Error of log pdf', 'Interpreter', 'Latex'); +%legend({'noisy', 'noiseless'}, 'Interpreter', 'Latex'); +legend({'uniform-noiseless', 'uniform-noisy', ... + legend_names{1, 2, 1}, legend_names{1, 1, 1}}, ... 
+ 'Interpreter', 'Latex'); + +set(gca, 'FontSize', 9); +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%slog-mae-comparison-plot', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end \ No newline at end of file diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_combined_plots.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_combined_plots.m new file mode 100644 index 0000000..b8ff1cc --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_combined_plots.m @@ -0,0 +1,206 @@ +a_par = Analysis_Parameters(); +a_par.fig_path = '../../../Output/LAMP_active_search/mar_pix_3/'; +if ~exist(a_par.fig_path, 'dir') + mkdir(a_par.fig_path); +end + +acq_rules = {'lw-us', 'lw-kus'}; +noise_rules = {'none', 'full'}; +noise_rules_names = {'noiseless', 'noisy'}; +bases = {'none', 'linear'}; +bases_names = {'no-basis', 'linear-basis'}; + +fig_bas_path = '../../../Output/LAMP_active_search/mar_pix_3/'; + +combined_qp_mae_list = cell(2, 2, 2); +combined_qp_rmse_list = cell(2, 2, 2); +combined_surr_mu_mae_list = cell(2, 2, 2); +combined_surr_mu_rmse_list = cell(2, 2, 2); +combined_qp_kl_div_forward_list = cell(2, 2, 2); +combined_qp_kl_div_backward_list = cell(2, 2, 2); +combined_qp_log_mae_list = cell(2, 2, 2); +combined_qp_log_rmse_list = cell(2, 2, 2); +legend_names = cell(2, 2, 2); + +for k3 = 1:length(bases) + for k1 = 1:length(acq_rules) + for k2 = 1:length(noise_rules) + run_name = sprintf('%s-%s-%s', ... + bases{k3}, noise_rules{k2}, ...
+ acq_rules{k1}); + filename = sprintf('%s%s/error_data.m', fig_bas_path, run_name); + cur_data = load(filename, '-mat'); + + combined_qp_mae_list{k1, k2, k3} = cur_data.qp_mae_list; + combined_qp_rmse_list{k1, k2, k3} = cur_data.qp_rmse_list; + combined_surr_mu_mae_list{k1, k2, k3} = cur_data.surr_mu_mae_list; + combined_surr_mu_rmse_list{k1, k2, k3} = cur_data.surr_mu_rmse_list; + combined_qp_kl_div_forward_list{k1, k2, k3} = cur_data.qp_kl_div_forward_list; + combined_qp_kl_div_backward_list{k1, k2, k3} = cur_data.qp_kl_div_backward_list; + combined_qp_log_mae_list{k1, k2, k3} = cur_data.qp_log_mae_list; + combined_qp_log_rmse_list{k1, k2, k3} = cur_data.qp_log_rmse_list; + + %legend_names{k1, k2, k3} = sprintf('%s-%s-%s', acq_rules{k1}, ... + % noise_rules_names{k2}, bases_names{k3}); + legend_names{k1, k2, k3} = sprintf('%s-%s', ... + noise_rules_names{k2}, bases_names{k3}); + end + end +end + +run_name = '-uniform-noiseless'; +filename = sprintf('%s%s/error_data.m', fig_bas_path, run_name); +uniform_noiseless_data = load(filename, '-mat'); + + + +run_name = '-uniform-noisy'; +filename = sprintf('%s%s/error_data.m', fig_bas_path, run_name); +uniform_noisy_data = load(filename, '-mat'); + + + +NN_plot = 11:60; + +figure(101); +clf; +hold on +plot(NN_plot, uniform_noiseless_data.qp_mae_list, 'Color', 'Black', 'LineStyle','-'); +plot(NN_plot, uniform_noisy_data.qp_mae_list, 'Color', 'Black', 'LineStyle','-.'); +plot(NN_plot, combined_qp_mae_list{1, 2, 1}, 'Color', 'Red', 'LineStyle','-') +plot(NN_plot, combined_qp_mae_list{1, 1, 1}, 'Color', 'Red', 'LineStyle','-.') +%plot(NN_plot, combined_qp_mae_list{2, 2, 1}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_mae_list{2, 1, 1}, 'Color', 'Blue', 'LineStyle','-.') +%plot(NN_plot, combined_qp_mae_list{2, 2, 2}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_mae_list{2, 1, 2}, 'Color', 'Blue', 'LineStyle','-.') +set(gca, 'YScale', 'log'); +xlabel('$n$', 'Interpreter', 'Latex'); 
+ylabel('$\epsilon$', 'Interpreter', 'Latex'); +title('Mean Absolute Error (MAE)', 'Interpreter', 'Latex'); +%legend({'noisy', 'noiseless'}, 'Interpreter', 'Latex'); +legend({'uniform-noiseless', 'uniform-noisy', ... + legend_names{1, 2, 1}, legend_names{1, 1, 1}}, ... + 'Interpreter', 'Latex'); + +set(gca, 'FontSize', 9); +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%smae-comparison-plot', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end + +figure(102); +clf; +hold on +plot(NN_plot, uniform_noiseless_data.qp_rmse_list, 'Color', 'Black', 'LineStyle','-'); +plot(NN_plot, uniform_noisy_data.qp_rmse_list, 'Color', 'Black', 'LineStyle','-.'); +plot(NN_plot, combined_qp_rmse_list{1, 2, 1}, 'Color', 'Red', 'LineStyle','-') +plot(NN_plot, combined_qp_rmse_list{1, 1, 1}, 'Color', 'Red', 'LineStyle','-.') +%plot(NN_plot, combined_qp_rmse_list{2, 2, 1}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_rmse_list{2, 1, 1}, 'Color', 'Blue', 'LineStyle','-.') +%plot(NN_plot, combined_qp_rmse_list{2, 2, 2}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_rmse_list{2, 1, 2}, 'Color', 'Blue', 'LineStyle','-.') +set(gca, 'YScale', 'log'); +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\epsilon$', 'Interpreter', 'Latex'); +title('Root Mean Square Error (RMSE)', 'Interpreter', 'Latex'); +%legend({'noisy', 'noiseless'}, 'Interpreter', 'Latex'); +legend({'uniform-noiseless', 'uniform-noisy', ... + legend_names{1, 2, 1}, legend_names{1, 1, 1}}, ... 
+ 'Interpreter', 'Latex'); + +set(gca, 'FontSize', 9); +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%srmse-comparison-plot', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end + + +figure(103); +clf; +hold on +plot(NN_plot, uniform_noiseless_data.surr_mu_mae_list, 'Color', 'Black', 'LineStyle','-'); +plot(NN_plot, uniform_noisy_data.surr_mu_mae_list, 'Color', 'Black', 'LineStyle','-.'); +plot(NN_plot, combined_surr_mu_mae_list{1, 2, 1}, 'Color', 'Red', 'LineStyle','-') +plot(NN_plot, combined_surr_mu_mae_list{1, 1, 1}, 'Color', 'Red', 'LineStyle','-.') +%plot(NN_plot, combined_surr_mu_mae_list{2, 2, 1}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_surr_mu_mae_list{2, 1, 1}, 'Color', 'Blue', 'LineStyle','-.') +%plot(NN_plot, combined_surr_mu_mae_list{1, 2, 2}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_surr_mu_mae_list{1, 1, 2}, 'Color', 'Blue', 'LineStyle','-.') +set(gca, 'YScale', 'log'); +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\epsilon$', 'Interpreter', 'Latex'); +title('Surrogate Mean Absolute Error (MAE)', 'Interpreter', 'Latex'); +%legend({'noisy', 'noiseless'}, 'Interpreter', 'Latex'); +legend({'uniform-noiseless', 'uniform-noisy', ... + legend_names{1, 2, 1}, legend_names{1, 1, 1}}, ... 
+ 'Interpreter', 'Latex');
+
+
+
+figure(104);
+clf;
+hold on
+plot(NN_plot, uniform_noiseless_data.qp_kl_div_forward_list(:, 5), 'Color', 'Black', 'LineStyle','-');
+plot(NN_plot, uniform_noisy_data.qp_kl_div_forward_list(:, 5), 'Color', 'Black', 'LineStyle','-.');
+plot(NN_plot, combined_qp_kl_div_forward_list{1, 2, 1}(:, 5), 'Color', 'Red', 'LineStyle','-')
+plot(NN_plot, combined_qp_kl_div_forward_list{1, 1, 1}(:, 5), 'Color', 'Red', 'LineStyle','-.')
+%plot(NN_plot, combined_qp_rmse_list{2, 2, 1}, 'Color', 'Blue', 'LineStyle','-')
+%plot(NN_plot, combined_qp_rmse_list{2, 1, 1}, 'Color', 'Blue', 'LineStyle','-.')
+%plot(NN_plot, combined_qp_rmse_list{2, 2, 2}, 'Color', 'Blue', 'LineStyle','-')
+%plot(NN_plot, combined_qp_rmse_list{2, 1, 2}, 'Color', 'Blue', 'LineStyle','-.')
+set(gca, 'YScale', 'log');
+xlabel('$n$', 'Interpreter', 'Latex');
+ylabel('$D_{KL}$', 'Interpreter', 'Latex');
+title('Kullback-Leibler Divergence', 'Interpreter', 'Latex');
+%legend({'noisy', 'noiseless'}, 'Interpreter', 'Latex');
+legend({'uniform-noiseless', 'uniform-noisy', ...
+    legend_names{1, 2, 1}, legend_names{1, 1, 1}}, ...
+ 'Interpreter', 'Latex'); + +set(gca, 'FontSize', 9); +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%skl-forward-comparison-plot', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end + +figure(105); +clf; +hold on +plot(NN_plot, uniform_noiseless_data.qp_log_mae_list(:, 5), 'Color', 'Black', 'LineStyle','-'); +plot(NN_plot, uniform_noisy_data.qp_log_mae_list(:, 5), 'Color', 'Black', 'LineStyle','-.'); +plot(NN_plot, combined_qp_log_mae_list{1, 2, 1}(:, 5), 'Color', 'Red', 'LineStyle','-') +plot(NN_plot, combined_qp_log_mae_list{1, 1, 1}(:, 5), 'Color', 'Red', 'LineStyle','-.') +%plot(NN_plot, combined_qp_rmse_list{2, 2, 1}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_rmse_list{2, 1, 1}, 'Color', 'Blue', 'LineStyle','-.') +%plot(NN_plot, combined_qp_rmse_list{2, 2, 2}, 'Color', 'Blue', 'LineStyle','-') +%plot(NN_plot, combined_qp_rmse_list{2, 1, 2}, 'Color', 'Blue', 'LineStyle','-.') +set(gca, 'YScale', 'log'); +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\epsilon_{\mbox{log}}$', 'Interpreter', 'Latex'); +title('Mean Absolute Error of log pdf', 'Interpreter', 'Latex'); +%legend({'noisy', 'noiseless'}, 'Interpreter', 'Latex'); +legend({'uniform-noiseless', 'uniform-noisy', ... + legend_names{1, 2, 1}, legend_names{1, 1, 1}}, ... 
+ 'Interpreter', 'Latex'); + +set(gca, 'FontSize', 9); +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%slog-mae-comparison-plot', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end \ No newline at end of file diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_error_plots.asv b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_error_plots.asv new file mode 100644 index 0000000..92ffe45 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_error_plots.asv @@ -0,0 +1,378 @@ +function [ err_struct ] = draw_error_plots( a_par, as_par, protocol_list, ... + true_f_mean, true_pq, true_pz) +%DRAW_ERROR_PLOTS Summary of this function goes here +% Detailed explanation goes here + + tic; + fprintf('Beginning error plot calculations.\n') + + f_input = @(alpha) prod(1/sqrt(2*pi)*exp(-alpha.^2/2), 2); + + a3_grid = linspace(-as_par.z_max, as_par.z_max, as_par.n_grid_likelihood); + [aa13, aa23, aa33] = meshgrid(a3_grid, a3_grid, a3_grid); + aa3_grid = [aa13(:), aa23(:), aa33(:)]; + ww3 = f_input(aa3_grid); + dww3 = ww3./sum(ww3(:)); + + bbq = linspace(-as_par.q_max, as_par.q_max, as_par.nqb+1); + qq_interval = 1/2*(bbq(1:end-1) + bbq(2:end)); + + beta = protocol_list{1}.gpr_obj.overall_norm_factor; + bbz = linspace(-7*beta, 7*beta, as_par.nqb+1); + qqz_interval = 1/2*(bbz(1:end-1) + bbz(2:end)); + + pq_list = cell(length(protocol_list), 1); + pz_list = cell(length(protocol_list), 1); + + true_surr_mu = true_f_mean(aa3_grid); + surr_mu_mae_list = zeros(length(protocol_list), 1); + surr_mu_rmse_list = zeros(length(protocol_list), 1); + + + fprintf('Computing q pdf using rules: %s / %s.\n', ... 
+ as_par.q_pdf_rule, as_par.true_q_pdf_rule); + + fprintf('Calculating intermediate pdf errors --- %d total rounds.\n', length(protocol_list)); + for k = 1:length(protocol_list) + fprintf('Starting k=%d. (%0.2f seconds elapsed).\n', k, toc); + + + [ pq, pz] = compute_histograms_from_gpr_protocol(a_par, as_par, ... + protocol_list{k}); + + pq_list{k} = pq; + pz_list{k} = pz; + + % + % surrogate estimate + % + + [ cur_surr_mu, ~] = protocol_list{k}.gpr_obj.predict(aa3_grid); + delta = cur_surr_mu(:, as_par.q_plot) - true_surr_mu(:, as_par.q_plot); + surr_mu_mae_list(k) = sum(abs(delta).*dww3); + surr_mu_rmse_list(k) = sqrt(sum(delta.^2.*dww3)); + + end + + + + + NN_plot = (1:length(pq_list)) + as_par.n_init; + + % + % calculate errors for particular mode coeff + % + + if as_par.compute_mode_errors + qp_mae_list = zeros(length(pq_list), 1); + qp_rmse_list = zeros(length(pq_list), 1); + qp_log_mae_list = zeros(length(pq_list), as_par.n_kl_bounds); + qp_log_rmse_list = zeros(length(pq_list), as_par.n_kl_bounds); + qp_kl_div_forward_list = zeros(length(pq_list), as_par.n_kl_bounds); + qp_kl_div_backward_list = zeros(length(pq_list), as_par.n_kl_bounds); + + for k = 1:length(pq_list) + cur_qp = pq_list{k}; + + delta = cur_qp - true_pq; + qp_mae_list(k) = mean(abs(delta)); + qp_rmse_list(k) = sqrt(mean(delta.^2)); + + log_delta = log(cur_qp) - log(true_pq); + for j = 1:as_par.n_kl_bounds + kl_lim = as_par.kl_bound_list(j); + ii = find((qq_interval > -kl_lim) & (qq_interval < kl_lim)); + + qp_log_mae_list(k, j) = mean(abs(log_delta(ii))); + qp_log_rmse_list(k, j) = sqrt(mean(log_delta(ii).^2)); + + qp_kl_div_forward_list(k, j) = sum(true_pq(ii).*log(true_pq(ii)./cur_qp(ii))); + qp_kl_div_backward_list(k, j) = sum(cur_qp(ii).*log(cur_qp(ii)./true_pq(ii))); + end + end + end + + % + % calculate errors for total vbm + % + + pz_mae_list = zeros(length(pz_list), 1); + pz_rmse_list = zeros(length(pz_list), 1); + pz_log_mae_list = zeros(length(pz_list), as_par.n_kl_bounds); + 
pz_log_rmse_list = zeros(length(pz_list), as_par.n_kl_bounds);
+    pz_kl_div_forward_list = zeros(length(pz_list), as_par.n_kl_bounds);
+    pz_kl_div_backward_list = zeros(length(pz_list), as_par.n_kl_bounds);
+
+    for k = 1:length(pz_list)
+        cur_pz = pz_list{k};
+
+        delta = cur_pz - true_pz;
+        pz_mae_list(k) = mean(abs(delta));
+        pz_rmse_list(k) = sqrt(mean(delta.^2));
+
+        log_delta = log(cur_pz) - log(true_pz);
+        for j = 1:as_par.n_kl_bounds
+            kl_upper_lim = as_par.kl_bound_list_vbm_upper(j);
+            kl_lower_lim = as_par.kl_bound_list_vbm_lower(j);
+            ii = find((qqz_interval > kl_lower_lim) & (qqz_interval < kl_upper_lim));
+
+            pz_log_mae_list(k, j) = mean(abs(log_delta(ii)));
+            pz_log_rmse_list(k, j) = sqrt(mean(log_delta(ii).^2));
+
+            pz_kl_div_forward_list(k, j) = sum(true_pz(ii).*log(true_pz(ii)./cur_pz(ii)));
+            pz_kl_div_backward_list(k, j) = sum(cur_pz(ii).*log(cur_pz(ii)./true_pz(ii)));
+        end
+    end
+
+    fprintf('Plotting recovered q pdf and error metrics.\n');
+
+
+    if as_par.draw_plots
+        lkk = [5:5:length(protocol_list), length(protocol_list)+1];
+        CC = colormap(parula(length(protocol_list)));
+
+        figure(18);
+        clf;
+        hold on
+        hh = zeros(length(protocol_list)+1, 1);
+        names = cell(length(protocol_list)+1, 1);
+        for k = 1:length(protocol_list)
+            hh(k) = plot(qqz_interval, pz_list{k}, 'LineWidth', 1, 'Color', CC(k, :));
+            names{k} = sprintf('n = %d', k+as_par.n_init);
+        end
+        hh(length(protocol_list)+1) = plot(qqz_interval, true_pz, 'LineWidth', 3, 'Color', 'Black');
+        names{length(protocol_list)+1} = 'truth';
+        xlabel('$M_y$', 'Interpreter', 'Latex')
+        ylabel('$p_M(m_y)$', 'Interpreter', 'Latex');
+        legend(hh(lkk), names(lkk), 'Interpreter', 'Latex', 'Location', 'South');
+        set(gca, 'YScale', 'log')
+        title(sprintf('full VBM pdf'), 'Interpreter', 'Latex');
+        ylim([1e-16, 1e-9])
+        set(gca, 'FontSize', 9);
+        set(gcf,'units','inches','position', a_par.plot_pos);
+        set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', 
a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%svbm-pdf_total', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + + if as_par.compute_mode_errors + figure(13); + clf; + hold on + hh = zeros(length(protocol_list)+1, 1); + names = cell(length(protocol_list)+1, 1); + for k = 1:length(protocol_list) + hh(k) = plot(qq_interval, pq_list{k}, 'LineWidth', 1, 'Color', CC(k, :)); + names{k} = sprintf('n = %d', k+as_par.n_init); + end + hh(length(pq_list)+1) = plot(qq_interval, true_pq, 'LineWidth', 3, 'Color', 'Black'); + names{length(pq_list)+1} = 'truth'; + xlabel('$q_1$', 'Interpreter', 'Latex') + ylabel('$p_Q(q_1)$', 'Interpreter', 'Latex'); + legend(hh(lkk), names(lkk), 'Interpreter', 'Latex', 'Location', 'South'); + set(gca, 'YScale', 'log') + title(sprintf('q pdf, mode %d', as_par.q_plot), 'Interpreter', 'Latex'); + ylim([1e-7, 1e0]) + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sq-pdf_total_mode_%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + + figure(14); + clf; + hold on + plot(NN_plot, qp_mae_list); + plot(NN_plot, qp_rmse_list); + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend({'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('q-pdf error q=%d', as_par.q_plot), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sq-pdf-error_q=%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + + figure(15); + clf; + hold on + hh = zeros(2*as_par.n_kl_bounds, 1); + for j 
= 1:as_par.n_kl_bounds + hh(2*j-1) = plot(NN_plot, qp_log_mae_list(:, j), 'Color', 'Red'); + hh(2*j) = plot(NN_plot, qp_log_rmse_list(:, j), 'Color', 'Blue'); + end + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend(hh(1:2), {'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('q-pdf log error q=%d', as_par.q_plot), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sq-pdf-log_error_q=%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + + figure(16); + clf; + hold on + hh = zeros(2*as_par.n_kl_bounds, 1); + for j = 1:as_par.n_kl_bounds + hh(2*j-1) = plot(NN_plot, qp_kl_div_forward_list(:, j), 'Color', 'Red'); + hh(2*j) = plot(NN_plot, qp_kl_div_backward_list(:, j), 'Color', 'Blue'); + end + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$D_{KL}$', 'Interpreter', 'Latex'); + legend(hh(1:2), {'$D_{KL}(\mbox{true} || \mbox{model})$', '$D_{KL}(\mbox{model} || \mbox{true})$'}, ... 
+ 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('q-pdf KL divergence q=%d', as_par.q_plot), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sq-pdf-kl-div_q=%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + + figure(17); + clf; + hold on + plot(NN_plot, surr_mu_mae_list); + plot(NN_plot, surr_mu_rmse_list); + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend({'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('surrogate mean expected error q=%d', as_par.q_plot), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%ssurrogate-mean-expected-error_q=%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + end + + + + figure(19); + clf; + hold on + plot(NN_plot, pz_mae_list); + plot(NN_plot, pz_rmse_list); + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend({'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('VBM-pdf error'), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sVBM-pdf-error', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + + figure(20); + clf; + hold on + hh = zeros(2*as_par.n_kl_bounds, 1); + for j = 1:as_par.n_kl_bounds + hh(2*j-1) = plot(NN_plot, 
pz_log_mae_list(:, j), 'Color', 'Red'); + hh(2*j) = plot(NN_plot, pz_log_rmse_list(:, j), 'Color', 'Blue'); + end + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend(hh(1:2), {'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('VBM-pdf log error'), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sVBM-pdf-log_error', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + + figure(21); + clf; + hold on + hh = zeros(2*as_par.n_kl_bounds, 1); + for j = 1:as_par.n_kl_bounds + hh(2*j-1) = plot(NN_plot, pz_kl_div_forward_list(:, j), 'Color', 'Red'); + hh(2*j) = plot(NN_plot, pz_kl_div_backward_list(:, j), 'Color', 'Blue'); + end + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$D_{KL}$', 'Interpreter', 'Latex'); + legend(hh(1:2), {'$D_{KL}(\mbox{true} || \mbox{model})$', '$D_{KL}(\mbox{model} || \mbox{true})$'}, ... + 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('VBM-pdf KL divergence'), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sVBM-pdf-kl-div', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + end + + + + if as_par.save_errors + filename = sprintf('%s/error_data.m', a_par.fig_path); + save(filename, 'surr_mu_mae_list', 'surr_mu_rmse_list', ... + 'qp_mae_list', 'qp_rmse_list', 'qp_kl_div_forward_list', ... + 'qp_kl_div_backward_list', 'qp_log_mae_list', 'qp_log_rmse_list', ... + 'pz_mae_list', 'pz_rmse_list', 'pz_kl_div_forward_list', ... 
+ 'pz_kl_div_backward_list', 'pz_log_mae_list', 'pz_log_rmse_list'); + end + + err_struct = struct(); + err_struct.surr_mu_mae_list = surr_mu_mae_list; + err_struct.surr_mu_rmse_list = surr_mu_rmse_list; + err_struct.qp_mae_list = qp_mae_list; + err_struct.qp_rmse_list = qp_rmse_list; + err_struct.qp_kl_div_forward_list = qp_kl_div_forward_list; + err_struct.qp_kl_div_backward_list = qp_kl_div_backward_list; + err_struct.qp_log_mae_list = qp_log_mae_list; + err_struct.qp_log_rmse_list = qp_log_rmse_list; + err_struct.pz_mae_list = pz_mae_list; + err_struct.pz_rmse_list = pz_rmse_list; + err_struct.pz_kl_div_forward_list = pz_kl_div_forward_list; + err_struct.pz_kl_div_backward_list = pz_kl_div_backward_list; + err_struct.pz_log_mae_list = pz_log_mae_list; + err_struct.pz_log_rmse_list = pz_log_rmse_list; + + fprintf('Error calculation and plotting stuff done after %0.2f seconds.\n', toc); + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_error_plots.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_error_plots.m new file mode 100644 index 0000000..df185c8 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_error_plots.m @@ -0,0 +1,398 @@ +function [ err_struct ] = draw_error_plots( a_par, as_par, protocol_list, ... 
+ true_f_mean, true_pq, true_pz)
+%DRAW_ERROR_PLOTS Compute error metrics (MAE, RMSE, log-pdf error, and
+% KL divergence) of the recovered pdfs against the true pdfs for each
+% protocol in protocol_list, and plot the results.
+
+    tic;
+    fprintf('Beginning error plot calculations.\n')
+
+    f_input = @(alpha) prod(1/sqrt(2*pi)*exp(-alpha.^2/2), 2);
+
+% a3_grid = linspace(-as_par.z_max, as_par.z_max, as_par.n_grid_likelihood);
+% [aa13, aa23, aa33] = meshgrid(a3_grid, a3_grid, a3_grid);
+% aa3_grid = [aa13(:), aa23(:), aa33(:)];
+
+    aa3_grid = as_par.z_max*(1-2*lhsdesign(1e4, as_par.n_dim_in));
+    ww3 = f_input(aa3_grid);
+    dww3 = ww3./sum(ww3(:));
+
+    bbq = linspace(-as_par.q_max, as_par.q_max, as_par.nqb+1);
+    qq_interval = 1/2*(bbq(1:end-1) + bbq(2:end));
+
+    beta = protocol_list{1}.gpr_obj.overall_norm_factor;
+    bbz = linspace(-7*beta, 7*beta, as_par.nqb+1);
+    qqz_interval = 1/2*(bbz(1:end-1) + bbz(2:end));
+
+    pq_list = cell(length(protocol_list), 1);
+    pz_list = cell(length(protocol_list), 1);
+
+
+    fprintf('Computing q pdf using rules: %s / %s.\n', ...
+        as_par.q_pdf_rule, as_par.true_q_pdf_rule);
+
+    for k = 1:length(protocol_list)
+        fprintf('Starting k=%d. (%0.2f seconds elapsed).\n', k, toc);
+
+        [ pq, pz] = compute_histograms_from_gpr_protocol(a_par, as_par, ...
+            protocol_list{k});
+
+        pq_list{k} = pq;
+        pz_list{k} = pz;
+    end
+
+
+    surr_mu_mae_list = zeros(length(protocol_list), 1);
+    surr_mu_rmse_list = zeros(length(protocol_list), 1);
+
+    if as_par.compute_surr_errors
+        true_surr_mu = true_f_mean(aa3_grid);
+
+        fprintf('Calculating intermediate pdf errors --- %d total rounds.\n', length(protocol_list));
+        for k = 1:length(protocol_list)
+            fprintf('Starting k=%d. 
(%0.2f seconds elapsed).\n', k, toc); + + % + % surrogate estimate + % + + [ cur_surr_mu, ~] = protocol_list{k}.gpr_obj.predict(aa3_grid); + delta = cur_surr_mu(:, as_par.q_plot) - true_surr_mu(:, as_par.q_plot); + surr_mu_mae_list(k) = sum(abs(delta).*dww3); + surr_mu_rmse_list(k) = sqrt(sum(delta.^2.*dww3)); + + end + end + + + + + NN_plot = (1:length(pq_list)) + as_par.n_init; + + % + % calculate errors for particular mode coeff + % + + qp_mae_list = zeros(length(pq_list), 1); + qp_rmse_list = zeros(length(pq_list), 1); + qp_log_mae_list = zeros(length(pq_list), as_par.n_kl_bounds); + qp_log_rmse_list = zeros(length(pq_list), as_par.n_kl_bounds); + qp_kl_div_forward_list = zeros(length(pq_list), as_par.n_kl_bounds); + qp_kl_div_backward_list = zeros(length(pq_list), as_par.n_kl_bounds); + + if as_par.compute_mode_errors + + for k = 1:length(pq_list) + cur_qp = pq_list{k}; + + delta = cur_qp - true_pq; + qp_mae_list(k) = mean(abs(delta)); + qp_rmse_list(k) = sqrt(mean(delta.^2)); + + log_delta = log(cur_qp) - log(true_pq); + for j = 1:as_par.n_kl_bounds + kl_lim = as_par.kl_bound_list(j); + ii = find((qq_interval > -kl_lim) & (qq_interval < kl_lim)); + + qp_log_mae_list(k, j) = mean(abs(log_delta(ii))); + qp_log_rmse_list(k, j) = sqrt(mean(log_delta(ii).^2)); + + qp_kl_div_forward_list(k, j) = sum(true_pq(ii).*log(true_pq(ii)./cur_qp(ii))); + qp_kl_div_backward_list(k, j) = sum(cur_qp(ii).*log(cur_qp(ii)./true_pq(ii))); + end + end + end + + % + % calculate errors for total vbm + % + + pz_mae_list = zeros(length(pz_list), 1); + pz_rmse_list = zeros(length(pz_list), 1); + pz_log_mae_list = zeros(length(pz_list), as_par.n_kl_bounds); + pz_log_rmse_list = zeros(length(pz_list), as_par.n_kl_bounds); + pz_kl_div_forward_list = zeros(length(pz_list), as_par.n_kl_bounds); + pz_kl_div_backward_list = zeros(length(pz_list), as_par.n_kl_bounds); + pz_log_mae_trunc_list = zeros(length(pz_list), 1); + pz_log_mae_trunc_list2 = zeros(length(pz_list), 1); + + for k = 
1:length(pz_list) + cur_pz = pz_list{k}; + + delta = cur_pz - true_pz; + pz_mae_list(k) = mean(abs(delta)); + pz_rmse_list(k) = sqrt(mean(delta.^2)); + + log_delta = log(cur_pz) - log(true_pz); + for j = 1:as_par.n_kl_bounds + kl_upper_lim = as_par.kl_bound_list_vbm_upper(j); + kl_lower_lim = as_par.kl_bound_list_vbm_lower(j); + ii = find((qqz_interval > kl_lower_lim) & (qqz_interval < kl_upper_lim)); + + pz_log_mae_list(k, j) = mean(abs(log_delta(ii))); + pz_log_rmse_list(k, j) = sqrt(mean(log_delta(ii).^2)); + + pz_kl_div_forward_list(k, j) = sum(true_pz(ii).*log(true_pz(ii)./cur_pz(ii))); + pz_kl_div_backward_list(k, j) = sum(cur_pz(ii).*log(cur_pz(ii)./true_pz(ii))); + end + + bbz = linspace(-10*beta, 10*beta, as_par.nqb+1); + pz_log_mae_trunc_list(k) = calc_log_pdf_errors(true_pz, cur_pz, bbz, 1e-13); + pz_log_mae_trunc_list2(k) = calc_log_pdf_errors(true_pz, cur_pz, bbz, 1e-12); + end + + fprintf('Plotting recovered q pdf and error metrics.\n'); + + + if as_par.draw_plots + lkk = [5:5:length(protocol_list), length(protocol_list)+1]; + CC = colormap(parula(length(protocol_list))); + + figure(18); + clf; + hold on + hh = zeros(length(protocol_list)+1, 1); + names = cell(length(protocol_list)+1, 1); + for k = 1:length(protocol_list) + hh(k) = plot(qqz_interval, pz_list{k}, 'LineWidth', 1, 'Color', CC(k, :)); + names{k} = sprintf('n = %d', k+as_par.n_init); + end + hh(length(protocol_list)+1) = plot(qqz_interval, true_pz, 'LineWidth', 3, 'Color', 'Black'); + names{length(protocol_list)+1} = 'truth'; + xlabel('$M_y$', 'Interpreter', 'Latex') + ylabel('$p_M(m_y)$', 'Interpreter', 'Latex'); + legend(hh(lkk), names(lkk), 'Interpreter', 'Latex', 'Location', 'South'); + set(gca, 'YScale', 'log') + title(sprintf('full VBM pdf'), 'Interpreter', 'Latex'); + ylim([1e-16, 1e-9]) + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + 
if a_par.save_figs + filename = sprintf('%svbm-pdf_total', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + + if as_par.compute_mode_errors + figure(13); + clf; + hold on + hh = zeros(length(protocol_list)+1, 1); + names = cell(length(protocol_list)+1, 1); + for k = 1:length(protocol_list) + hh(k) = plot(qq_interval, pq_list{k}, 'LineWidth', 1, 'Color', CC(k, :)); + names{k} = sprintf('n = %d', k+as_par.n_init); + end + hh(length(pq_list)+1) = plot(qq_interval, true_pq, 'LineWidth', 3, 'Color', 'Black'); + names{length(pq_list)+1} = 'truth'; + xlabel('$q_1$', 'Interpreter', 'Latex') + ylabel('$p_Q(q_1)$', 'Interpreter', 'Latex'); + legend(hh(lkk), names(lkk), 'Interpreter', 'Latex', 'Location', 'South'); + set(gca, 'YScale', 'log') + title(sprintf('q pdf, mode %d', as_par.q_plot), 'Interpreter', 'Latex'); + ylim([1e-7, 1e0]) + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sq-pdf_total_mode_%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + + figure(14); + clf; + hold on + plot(NN_plot, qp_mae_list); + plot(NN_plot, qp_rmse_list); + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend({'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('q-pdf error q=%d', as_par.q_plot), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sq-pdf-error_q=%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + + figure(15); + clf; + hold on + hh = zeros(2*as_par.n_kl_bounds, 1); + for j = 1:as_par.n_kl_bounds + 
hh(2*j-1) = plot(NN_plot, qp_log_mae_list(:, j), 'Color', 'Red'); + hh(2*j) = plot(NN_plot, qp_log_rmse_list(:, j), 'Color', 'Blue'); + end + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend(hh(1:2), {'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('q-pdf log error q=%d', as_par.q_plot), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sq-pdf-log_error_q=%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + + figure(16); + clf; + hold on + hh = zeros(2*as_par.n_kl_bounds, 1); + for j = 1:as_par.n_kl_bounds + hh(2*j-1) = plot(NN_plot, qp_kl_div_forward_list(:, j), 'Color', 'Red'); + hh(2*j) = plot(NN_plot, qp_kl_div_backward_list(:, j), 'Color', 'Blue'); + end + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$D_{KL}$', 'Interpreter', 'Latex'); + legend(hh(1:2), {'$D_{KL}(\mbox{true} || \mbox{model})$', '$D_{KL}(\mbox{model} || \mbox{true})$'}, ... 
+ 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('q-pdf KL divergence q=%d', as_par.q_plot), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sq-pdf-kl-div_q=%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + end + + if as_par.compute_surr_errors + figure(17); + clf; + hold on + plot(NN_plot, surr_mu_mae_list); + plot(NN_plot, surr_mu_rmse_list); + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend({'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('surrogate mean expected error q=%d', as_par.q_plot), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%ssurrogate-mean-expected-error_q=%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + end + + + + figure(19); + clf; + hold on + plot(NN_plot, pz_mae_list); + plot(NN_plot, pz_rmse_list); + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend({'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('VBM-pdf error'), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sVBM-pdf-error', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + + figure(20); + clf; + hold on + hh = zeros(2*as_par.n_kl_bounds, 1); + for j = 
1:as_par.n_kl_bounds + hh(2*j-1) = plot(NN_plot, pz_log_mae_list(:, j), 'Color', 'Red'); + hh(2*j) = plot(NN_plot, pz_log_rmse_list(:, j), 'Color', 'Blue'); + end + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend(hh(1:2), {'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('VBM-pdf log error'), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sVBM-pdf-log_error', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + + figure(21); + clf; + hold on + hh = zeros(2*as_par.n_kl_bounds, 1); + for j = 1:as_par.n_kl_bounds + hh(2*j-1) = plot(NN_plot, pz_kl_div_forward_list(:, j), 'Color', 'Red'); + hh(2*j) = plot(NN_plot, pz_kl_div_backward_list(:, j), 'Color', 'Blue'); + end + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$D_{KL}$', 'Interpreter', 'Latex'); + legend(hh(1:2), {'$D_{KL}(\mbox{true} || \mbox{model})$', '$D_{KL}(\mbox{model} || \mbox{true})$'}, ... + 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('VBM-pdf KL divergence'), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sVBM-pdf-kl-div', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + end + + + + if as_par.save_errors + filename = sprintf('%s/error_data.m', a_par.fig_path); + save(filename, 'surr_mu_mae_list', 'surr_mu_rmse_list', ... + 'qp_mae_list', 'qp_rmse_list', 'qp_kl_div_forward_list', ... + 'qp_kl_div_backward_list', 'qp_log_mae_list', 'qp_log_rmse_list', ... + 'pz_mae_list', 'pz_rmse_list', 'pz_kl_div_forward_list', ... 
+ 'pz_kl_div_backward_list', 'pz_log_mae_list', 'pz_log_rmse_list'); + end + + err_struct = struct(); + err_struct.pz_list = pz_list; + err_struct.surr_mu_mae_list = surr_mu_mae_list; + err_struct.surr_mu_rmse_list = surr_mu_rmse_list; + err_struct.qp_mae_list = qp_mae_list; + err_struct.qp_rmse_list = qp_rmse_list; + err_struct.qp_kl_div_forward_list = qp_kl_div_forward_list; + err_struct.qp_kl_div_backward_list = qp_kl_div_backward_list; + err_struct.qp_log_mae_list = qp_log_mae_list; + err_struct.qp_log_rmse_list = qp_log_rmse_list; + err_struct.pz_mae_list = pz_mae_list; + err_struct.pz_rmse_list = pz_rmse_list; + err_struct.pz_kl_div_forward_list = pz_kl_div_forward_list; + err_struct.pz_kl_div_backward_list = pz_kl_div_backward_list; + err_struct.pz_log_mae_list = pz_log_mae_list; + err_struct.pz_log_rmse_list = pz_log_rmse_list; + err_struct.pz_log_mae_trunc_list = pz_log_mae_trunc_list; + err_struct.pz_log_mae_trunc_list2 = pz_log_mae_trunc_list2; + + fprintf('Error calculation and plotting stuff done after %0.2f seconds.\n', toc); + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_kl_eigendecay.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_kl_eigendecay.m new file mode 100644 index 0000000..b309cd0 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_kl_eigendecay.m @@ -0,0 +1,301 @@ +addpath '../common'; +addpath '../analysis'; +addpath '../trunk'; + +a_par = Analysis_Parameters(); +a_par.fig_path = '../../../Output/scsp/oct_pix_2/'; + +j_par = JONSWAP_Parameters(); + +% feed in one-sided spectrum +amp_of_cosine = @(S, w, dw) sqrt(2*S(w).*dw); + + + +TT = [20, 40, 60, 80, 100, 120]; + +D_raw_list = cell(length(TT), 1); +D_list = cell(length(TT), 1); +D_raw_cum_list = cell(length(TT), 1); +D_cum_list = cell(length(TT), 1); +D_remainder_list = cell(length(TT), 1); + +for kt = 1:length(TT) + + fprintf('Building KL basis.\n') + + WW_kl = linspace(j_par.omega_min, j_par.omega_max, j_par.n_W)'; + dW = WW_kl(2) - 
WW_kl(1); + AA_kl = amp_of_cosine(j_par.S, WW_kl, dW); + + T_max_kl = TT(kt); + n_t_kl = 1024; + TT_kl = linspace(0, T_max_kl, n_t_kl); + dt_kl = TT_kl(2) - TT_kl(1); + + [ V_kl, D_kl ] = calc_direct_kl_modes(AA_kl, WW_kl, TT_kl); + + %kl_struct = struct; + %kl_struct.T = TT_kl; + %kl_struct.modes = V_kl; + %kl_struct.variance = D_kl; + + D_raw_list{kt} = D_kl; + D_list{kt} = D_kl./max(D_kl); + D_raw_cum_list{kt} = cumsum(D_kl); + D_cum_list{kt} = D_raw_cum_list{kt}./max(D_raw_cum_list{kt}); + D_remainder_list{kt} = max(D_cum_list{kt}) - D_cum_list{kt}; +end + + + +CC = colormap(parula(length(TT))); + +names = cell(length(TT), 1); +for kt = 1:length(TT) + names{kt} = sprintf('$T=%d$', TT(kt)); +end + + +figure(1); +clf; +hold on +for kt = 1:length(TT) + plot(D_list{kt}, 'Color', CC(kt, :), 'LineWidth', 3); +end +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\lambda$', 'Interpreter', 'Latex'); +title('KL sea state eigenspectrum decay','Interpreter', 'Latex'); +set(gca, 'yscale', 'log'); +xlim([0, 50]); +ylim([1e-4, 1]) +legend(names, 'Interpreter', 'Latex', 'Location', 'Southwest'); + +set(gca, 'FontSize', 9); + +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%skl-spectrum-eigendecay', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end + + + +figure(2); +clf; +hold on +for kt = 1:length(TT) + plot(D_cum_list{kt}, 'Color', CC(kt, :), 'LineWidth', 3); +end +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\Sigma \lambda$', 'Interpreter', 'Latex'); +title('KL sea state cumulative energy','Interpreter', 'Latex'); +%set(gca, 'yscale', 'log'); +xlim([0, 50]); +legend(names, 'Interpreter', 'Latex', 'Location', 'Southeast'); + +set(gca, 'FontSize', 9); + +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', 
a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%skl-spectrum-total', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end + + +figure(3); +clf; +hold on +for kt = 1:length(TT) + plot(D_remainder_list{kt}, 'Color', CC(kt, :), 'LineWidth', 3); +end +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\Sigma \lambda$', 'Interpreter', 'Latex'); +title('KL sea state energy remainder','Interpreter', 'Latex'); +set(gca, 'yscale', 'log'); +xlim([0, 50]); +ylim([1e-4, 1]) +legend(names, 'Interpreter', 'Latex', 'Location', 'Southwest'); + +set(gca, 'FontSize', 9); + +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%skl-spectrum-remainder', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end + + + + + + +load_klmc_data(); +load_kl2d_long_data(); + +Do_raw_list = cell(5, 1); +Do_list = cell(5, 1); +Do_raw_cum_list = cell(5, 1); +Do_cum_list = cell(5, 1); +Do_remainder_list = cell(5, 1); + +[ ~ , D_out_1, ~ ] = calc_kl_modes(ZZ_klmc_vbmg); +[ ~ , D_out_2, ~ ] = calc_kl_modes(ZZ_list_long{8}); % six-80 +[ ~ , D_out_3, ~ ] = calc_kl_modes(ZZ_list_long{10}); % eight-80 +[ ~ , D_out_4, ~ ] = calc_kl_modes(ZZ_list_long{12}); % ten-80 +[ ~ , D_out_5, ~ ] = calc_kl_modes(ZZ_list_long{17}); % four-80 +%[ ~ , D_out_5, ~ ] = calc_kl_modes(ZZ_list_long{6}); % four-80 + +Do_raw_list{1} = D_out_1; +Do_raw_list{2} = D_out_2; +Do_raw_list{3} = D_out_3; +Do_raw_list{4} = D_out_4; +Do_raw_list{5} = D_out_5; + +for kt = 1:5 + Do_list{kt} = Do_raw_list{kt}./max(Do_raw_list{kt}); + Do_raw_cum_list{kt} = cumsum(Do_raw_list{kt}); + Do_cum_list{kt} = Do_raw_cum_list{kt}./max(Do_raw_cum_list{kt}); + Do_remainder_list{kt} = max(Do_cum_list{kt}) - Do_cum_list{kt}; +end + + + +CC = colormap(parula(4)); + +names_2 = cell(4, 1); +%names_2{1} = 'MC'; +names_2{1} = 'four-80'; +names_2{2} = 'six-80'; 
+names_2{3} = 'eight-80'; +names_2{4} = 'ten-80'; + +figure(11); +clf; +hold on +%plot(Do_list{1}, 'Color', 'Black', 'LineWidth', 3); +plot(Do_list{5}, 'Color', CC(1, :), 'LineWidth', 2); +plot(Do_list{2}, 'Color', CC(2, :), 'LineWidth', 2); +plot(Do_list{3}, 'Color', CC(3, :), 'LineWidth', 2); +plot(Do_list{4}, 'Color', CC(4, :), 'LineWidth', 2); +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\lambda$', 'Interpreter', 'Latex'); +title('KL VBM eigenspectrum decay','Interpreter', 'Latex'); +set(gca, 'yscale', 'log'); +xlim([0, 50]); +ylim([1e-4, 1]) +legend(names_2, 'Interpreter', 'Latex', 'Location', 'Southwest'); + +set(gca, 'FontSize', 9); + +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%skl-out-spectrum', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end + + + +figure(12); +clf; +hold on +%plot(Do_cum_list{1}, 'Color', 'Black', 'LineWidth', 3); +plot(Do_cum_list{5}, 'Color', CC(1, :), 'LineWidth', 2); +plot(Do_cum_list{2}, 'Color', CC(2, :), 'LineWidth', 2); +plot(Do_cum_list{3}, 'Color', CC(3, :), 'LineWidth', 2); +plot(Do_cum_list{4}, 'Color', CC(4, :), 'LineWidth', 2); +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\Sigma \lambda$', 'Interpreter', 'Latex'); +title('KL VBM cumulative energy','Interpreter', 'Latex'); +%set(gca, 'yscale', 'log'); +xlim([0, 50]); +legend(names_2, 'Interpreter', 'Latex', 'Location', 'Southeast'); + +set(gca, 'FontSize', 9); + +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%skl-out-spectrum-total', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end + + +figure(13); +clf; +hold on +%plot(Do_remainder_list{1}, 'Color', 'Black', 'LineWidth', 3); +plot(Do_remainder_list{5}, 
'Color', CC(1, :), 'LineWidth', 2); +plot(Do_remainder_list{2}, 'Color', CC(2, :), 'LineWidth', 2); +plot(Do_remainder_list{3}, 'Color', CC(3, :), 'LineWidth', 2); +plot(Do_remainder_list{4}, 'Color', CC(4, :), 'LineWidth', 2); +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\Sigma \lambda$', 'Interpreter', 'Latex'); +title('KL VBM energy remainder','Interpreter', 'Latex'); +set(gca, 'yscale', 'log'); +xlim([0, 50]); +ylim([1e-4, 1]) +legend(names_2, 'Interpreter', 'Latex', 'Location', 'Southwest'); + +set(gca, 'FontSize', 9); + +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%skl-out-spectrum-remainder', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end + + + +CC = colormap(parula(4)); + +names_3 = cell(5, 1); +names_3{1} = 'Sea State'; +names_3{2} = 'VBM $n=4$'; +names_3{3} = 'VBM $n=6$'; +names_3{4} = 'VBM $n=8$'; +names_3{5} = 'VBM $n=10$'; + +figure(21); +clf; +hold on +plot(D_list{4}, 'Color', 'Black', 'LineWidth', 3); +plot(Do_list{5}, 'Color', CC(1, :), 'LineWidth', 2); +plot(Do_list{2}, 'Color', CC(2, :), 'LineWidth', 2); +plot(Do_list{3}, 'Color', CC(3, :), 'LineWidth', 2); +plot(Do_list{4}, 'Color', CC(4, :), 'LineWidth', 2); +xlabel('$n$', 'Interpreter', 'Latex'); +ylabel('$\lambda$', 'Interpreter', 'Latex'); +title('KL eigenspectrum decay','Interpreter', 'Latex'); +set(gca, 'yscale', 'log'); +xlim([0, 65]); +ylim([1e-5, 1]) +legend(names_3, 'Interpreter', 'Latex', 'Location', 'Northeast'); + +set(gca, 'FontSize', 9); + +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + +if a_par.save_figs + filename = sprintf('%skl-combined-spectrum-eigendecay', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); +end \ No newline at end of file diff --git 
a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_movie_plots.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_movie_plots.m new file mode 100644 index 0000000..753bcbe --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_movie_plots.m @@ -0,0 +1,217 @@ +function [ outcode ] = draw_movie_plots( a_par, as_par, protocol_list, true_f_mean, true_pq) +%DRAW_MOVIE_PLOTS Draw evolving q-likelihood, surrogate-mean, and acquisition +% plots for each model in protocol_list and write each sequence to a video file. + + tic; + + %z_max = 4.5; + %n_init = 10; + + %q_plot = 1; + a_grid = linspace(-as_par.z_max, as_par.z_max, as_par.na); + [aa1, aa2] = meshgrid(a_grid, a_grid); + aa_grid = [aa1(:), aa2(:), zeros(size(aa1(:)))]; + + + f_input = @(alpha) prod(1/sqrt(2*pi)*exp(-alpha.^2/2), 2); + + a3_grid = linspace(-as_par.z_max, as_par.z_max, as_par.n_grid_likelihood); + [aa13, aa23, aa33] = meshgrid(a3_grid, a3_grid, a3_grid); + aa3_grid = [aa13(:), aa23(:), aa33(:)]; + ww3 = f_input(aa3_grid); + dww3 = ww3./sum(ww3(:)); + + %nqb= 65; + %q_max = 6.5; + bbq = linspace(-as_par.q_max, as_par.q_max, as_par.nqb+1); + + %qq_interval = linspace(-q_max, q_max, nqb); + qq_interval = 1/2*(bbq(1:end-1) + bbq(2:end)); + + %save_intermediate_plots = false; + + pq_list = cell(length(protocol_list), 1); + %nq_mc = 5e6; + %q_pdf_rule = 'MC'; + %true_q_pdf_rule = 'MC'; + fprintf('Computing q pdf using rules: %s / %s.\n', ... 
+ as_par.q_pdf_rule, as_par.true_q_pdf_rule); + + zz = true_f_mean(aa_grid); + zz_plot = reshape(zz(:, as_par.q_plot), size(aa1)); + max_surr_abs = max(abs(zz_plot(:))); + + true_surr_mu = true_f_mean(aa3_grid); + surr_mu_mae_list = zeros(length(protocol_list), 1); + surr_mu_rmse_list = zeros(length(protocol_list), 1); + + + + + if as_par.draw_plots + filename = sprintf('%ssurrogate-q-likelihood-evolution', as_par.video_path); + if isfile(filename) + delete(filename) + end + q_likelihood_vid_file = VideoWriter(filename, as_par.vid_profile); + q_likelihood_vid_file.FrameRate = as_par.video_frame_rate; + open(q_likelihood_vid_file); + + filename = sprintf('%ssurrogate-mean-evolution', as_par.video_path); + if isfile(filename) + delete(filename) + end + surr_mean_vid_file = VideoWriter(filename, as_par.vid_profile); + surr_mean_vid_file.FrameRate = as_par.video_frame_rate; + open(surr_mean_vid_file); + + filename = sprintf('%sacq-function-evolution', as_par.video_path); + if isfile(filename) + delete(filename) + end + lw_us_acq_vid_file = VideoWriter(filename, as_par.vid_profile); + lw_us_acq_vid_file.FrameRate = as_par.video_frame_rate; + open(lw_us_acq_vid_file); + end + + + fprintf('Drawing iterated reconstruction stuff --- %d total rounds.\n', length(protocol_list)); + for k = 1:length(protocol_list) + fprintf('Starting k=%d. (%0.2f seconds elapsed).\n', k, toc); + + cur_model_protocol = protocol_list{k}; + + %[ f_likelihood ] = build_likelihood(cur_model_protocol.gpr_obj, aa3_grid, ww3, bbq); + + f_blackbox = @(alpha) cur_model_protocol.gpr_obj.predict(alpha); + [ f_likelihood ] = build_likelihood_function(as_par, f_input, f_blackbox, ... + as_par.q_plot); + zz = f_likelihood(qq_interval); + + + switch as_par.q_pdf_rule + case 'likelihood-transform' + pq_list{k} = zz; + case 'MC' + aa_q = randn(as_par.nq_mc, 3); + [ qq, ~, ~ ] = cur_model_protocol.gpr_obj.sample(aa_q); + pq_list{k} = histcounts(qq(:, as_par.q_plot), bbq, ... 
+ 'Normalization', 'pdf'); + end + + + + [qq, ss] = cur_model_protocol.gpr_obj.predict(aa_grid); + zz = f_likelihood(qq(:, as_par.q_plot)); + + if as_par.draw_plots + zz_plot = reshape(zz(:, as_par.q_plot), size(aa1)); + figure(6); + clf; + pcolor(aa1, aa2, zz_plot) + shading flat + xlabel('$\alpha_1$', 'Interpreter', 'Latex') + ylabel('$\alpha_2$', 'Interpreter', 'Latex') + title(sprintf('q-likelihood mode %d', as_par.q_plot), 'Interpreter', 'Latex'); + colorbar(); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if as_par.save_intermediate_plots + filename = sprintf('%sq-likelihood_n_%d_q_%d', a_par.fig_path, k, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + + F = getframe(gcf); + writeVideo(q_likelihood_vid_file,F); + end + + + [ cur_surr_mu, ~] = cur_model_protocol.gpr_obj.predict(aa3_grid); + delta = cur_surr_mu(:, as_par.q_plot) - true_surr_mu(:, as_par.q_plot); + surr_mu_mae_list(k) = sum(abs(delta).*dww3); + surr_mu_rmse_list(k) = sqrt(sum(delta.^2.*dww3)); + + + + + + zz = f_input(aa_grid)./f_likelihood(qq(:, as_par.q_plot)).*ss.^2; + + if as_par.draw_plots + zz_plot = reshape(zz(:, as_par.q_plot), size(aa1)); + figure(9); + clf; + pcolor(aa1, aa2, zz_plot) + shading flat + xlabel('$\alpha_1$', 'Interpreter', 'Latex') + ylabel('$\alpha_2$', 'Interpreter', 'Latex') + title(sprintf('lw-us-direct %d', as_par.q_plot), 'Interpreter', 'Latex'); + colorbar(); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if as_par.save_intermediate_plots + filename = sprintf('%sacq-direct_n_%d_q_%d', a_par.fig_path, k, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + + F = getframe(gcf); + writeVideo(lw_us_acq_vid_file,F); + end + + + zz = 
protocol_list{k}.gpr_obj.predict(aa_grid); + + if as_par.draw_plots + zz_plot = reshape(zz(:, as_par.q_plot), size(aa1)); + figure(3); + clf; + pcolor(aa1, aa2, zz_plot) + shading flat + xlabel('$\alpha_1$', 'Interpreter', 'Latex') + ylabel('$\alpha_2$', 'Interpreter', 'Latex') + title(sprintf('recovered surrogate mode %d -- n = %d', as_par.q_plot, k+as_par.n_init), ... + 'Interpreter', 'Latex'); + caxis([-1.25*max_surr_abs, 1.25*max_surr_abs]); + colorbar(); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if as_par.save_intermediate_plots + filename = sprintf('%ssurrogate_n_%d_q_%d', a_par.fig_path, k, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + + F = getframe(gcf); + writeVideo(surr_mean_vid_file,F); + end + + end + + + if as_par.draw_plots + close(q_likelihood_vid_file); + close(surr_mean_vid_file); + close(lw_us_acq_vid_file); + end + + + %cur_model_protocol.plot_surrogate(1); + %true_model_protocol.plot_surrogate(1); + + + + fprintf('Movie plotting stuff done after %0.2f seconds.\n', toc); + + + outcode = 1; + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_plots_toy.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_plots_toy.m new file mode 100644 index 0000000..dfc3ab4 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_plots_toy.m @@ -0,0 +1,609 @@ +function [ outcode ] = draw_plots_toy( a_par, as_par, model_list, true_f, true_pq) +%DRAW_PLOTS_TOY Draw surrogate, likelihood, and acquisition plots for the 1-D +% toy problem, tracking q-pdf and surrogate-mean errors across model_list. + + tic; + + %z_max = 4.5; + %n_init = 10; + + %q_plot = 1; + a_grid = linspace(-as_par.z_max, as_par.z_max, as_par.na); + %[aa1, aa2] = meshgrid(a_grid, a_grid); + %aa_grid = [aa1(:), aa2(:), zeros(size(aa1(:)))]; + aa_grid = a_grid'; + + f_input = @(alpha) prod(1/sqrt(2*pi)*exp(-alpha.^2/2), 2); + + a3_grid = 
linspace(-as_par.z_max, as_par.z_max, as_par.n_grid_likelihood); + %[aa13, aa23, aa33] = meshgrid(a3_grid, a3_grid, a3_grid); + %aa3_grid = [aa13(:), aa23(:), aa33(:)]; + aa3_grid = a3_grid'; + ww3 = f_input(aa3_grid); + dww3 = ww3./sum(ww3(:)); + + %nqb= 65; + %q_max = 6.5; + bbq = linspace(as_par.q_min, as_par.q_max, as_par.nqb+1); + + %qq_interval = linspace(-q_max, q_max, nqb); + qq_interval = 1/2*(bbq(1:end-1) + bbq(2:end)); + + %save_intermediate_plots = false; + + pq_list = cell(length(model_list), 1); + %nq_mc = 5e6; + %q_pdf_rule = 'MC'; + %true_q_pdf_rule = 'MC'; + fprintf('Computing q pdf using rules: %s / %s.\n', ... + as_par.q_pdf_rule, as_par.true_q_pdf_rule); + + fprintf('Drawing true model stuff.\n'); + + + + true_f_noiseless = @(x) x.^1.5.*(x>0) + x.*(x<0).*(x>=-2) -2.*(x<-2); + true_surr_mu = true_f_noiseless(aa3_grid); + surr_mu_mae_list = zeros(length(model_list), 1); + surr_mu_rmse_list = zeros(length(model_list), 1); + + zz = true_f_noiseless(aa_grid); + zz_plot = reshape(zz, size(zz)); + + if as_par.draw_plots + figure(4); + clf; + %pcolor(aa1, aa2, zz_plot) + plot(aa3_grid, zz_plot, 'LineWidth', 3) + shading flat + xlabel('$x$', 'Interpreter', 'Latex') + ylabel('$\mu_y$', 'Interpreter', 'Latex') + title(sprintf('true function'), 'Interpreter', 'Latex'); + %colorbar(); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.third_paper_pos, 'PaperSize', a_par.third_paper_size); + + if a_par.save_figs + filename = sprintf('%ssurrogate_true_q_%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + end + + max_surr_abs = max(abs(zz_plot(:))); + + zz = f_input(aa_grid); + + + if as_par.draw_plots + zz_plot = reshape(zz, size(zz)); + figure(11); + clf; + %pcolor(aa1, aa2, zz_plot) + plot(aa_grid, zz_plot, 'LineWidth', 3); + shading flat + xlabel('$x$', 'Interpreter', 'Latex') + ylabel('$f_X(x)$', 'Interpreter', 'Latex') + 
title(sprintf('input distribution'), 'Interpreter', 'Latex'); + %colorbar(); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.third_paper_pos, 'PaperSize', a_par.third_paper_size); + + if a_par.save_figs + filename = sprintf('%sinput_density', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + end + + if as_par.save_videos + filename = sprintf('%ssurrogate-q-likelihood-evolution', as_par.video_path); + if isfile(filename) + delete(filename) + end + q_likelihood_vid_file = VideoWriter(filename, as_par.vid_profile); + q_likelihood_vid_file.FrameRate = as_par.video_frame_rate; + open(q_likelihood_vid_file); + + filename = sprintf('%ssurrogate-mean-evolution', as_par.video_path); + if isfile(filename) + delete(filename) + end + surr_mean_vid_file = VideoWriter(filename, as_par.vid_profile); + surr_mean_vid_file.FrameRate = as_par.video_frame_rate; + open(surr_mean_vid_file); + + filename = sprintf('%sacq-function-evolution', as_par.video_path); + if isfile(filename) + delete(filename) + end + lw_us_acq_vid_file = VideoWriter(filename, as_par.vid_profile); + lw_us_acq_vid_file.FrameRate = as_par.video_frame_rate; + open(lw_us_acq_vid_file); + + filename = sprintf('%slw-evolution', as_par.video_path); + if isfile(filename) + delete(filename) + end + lw_vid_file = VideoWriter(filename, as_par.vid_profile); + lw_vid_file.FrameRate = as_par.video_frame_rate; + open(lw_vid_file); + + filename = sprintf('%ssr-std-evolution', as_par.video_path); + if isfile(filename) + delete(filename) + end + surr_std_vid_file = VideoWriter(filename, as_par.vid_profile); + surr_std_vid_file.FrameRate = as_par.video_frame_rate; + open(surr_std_vid_file); + end + + + fprintf('Drawing iterated reconstruction stuff --- %d total rounds.\n', length(model_list)); + for k = 1:length(model_list) + fprintf('Starting k=%d. 
(%0.2f seconds elapsed).\n', k, toc); + + cur_model = model_list{k}; + + %[ f_likelihood ] = build_likelihood(@(alpha) cur_model.predict(alpha), ... + % aa3_grid, ww3, bbq); + [ f_likelihood ] = build_likelihood_function(as_par, f_input, true_f); + zz = f_likelihood(qq_interval); + + + switch as_par.q_pdf_rule + case 'likelihood-transform' + pq_list{k} = zz; + case 'MC' + aa_q = randn(as_par.nq_mc, 1); + [ mm, ss ] = cur_model.predict(aa_q); + qq = mm + ss.*randn(size(mm)); + pq_list{k} = histcounts(qq, bbq, ... + 'Normalization', 'pdf'); + end +% figure(10) +% clf; +% plot(qq_interval, zz, 'LineWidth', 3); +% xlabel('$q_1$', 'Interpreter', 'Latex') +% ylabel('$p_Q(q_1)$', 'Interpreter', 'Latex') +% set(gca, 'YScale', 'log') +% title(sprintf('q-likelihood %d', q_plot), 'Interpreter', 'Latex'); +% ylim([1e-7, 1e0]) +% set(gca, 'FontSize', 9); +% set(gcf,'units','inches','position', a_par.plot_pos); +% set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); +% +% if a_par.save_figs +% filename = sprintf('%sq-likelihood_transform_n_%d_q_%d', a_par.fig_path, k, q_plot); +% print(filename,'-dpdf'); +% savefig(filename); +% end + + + [qq, ss] = cur_model.predict(aa_grid); + zz = f_likelihood(qq); + + if as_par.draw_plots + zz_plot = reshape(zz, size(zz)); + figure(6); + clf; + %pcolor(aa1, aa2, zz_plot) + plot(qq, zz_plot) + set(gca, 'YScale', 'log'); + %shading flat + xlabel('$y$', 'Interpreter', 'Latex') + ylabel('$f_Y(y)$', 'Interpreter', 'Latex') + title(sprintf('output-likelihood'), 'Interpreter', 'Latex'); + %colorbar(); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if as_par.save_intermediate_plots + filename = sprintf('%sq-likelihood_n_%d', a_par.fig_path, k); + print(filename,'-dpdf'); + savefig(filename); + end + + if as_par.save_videos + F = getframe(gcf); + 
writeVideo(q_likelihood_vid_file,F); + end + end + + + [ cur_surr_mu, ~] = cur_model.predict(aa3_grid); + delta = cur_surr_mu - true_surr_mu; + surr_mu_mae_list(k) = sum(abs(delta).*dww3); + surr_mu_rmse_list(k) = sqrt(sum(delta.^2.*dww3)); + + if as_par.draw_plots + zz = f_input(aa_grid)./f_likelihood(qq); + zz_plot = reshape(zz, size(aa_grid)); + figure(8); + clf; + %pcolor(aa1, aa2, zz_plot) + plot(aa_grid, zz_plot) + %shading flat + xlabel('$x$', 'Interpreter', 'Latex') + ylabel('$w(x)$', 'Interpreter', 'Latex') + title(sprintf('likelihood ratio'), 'Interpreter', 'Latex'); + %colorbar(); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if as_par.save_intermediate_plots + filename = sprintf('%slikelihood-ratio_n_%d', a_par.fig_path, k); + print(filename,'-dpdf'); + savefig(filename); + end + + if as_par.save_videos + F = getframe(gcf); + writeVideo(lw_vid_file,F); + end + end + + + zz = f_input(aa_grid)./f_likelihood(qq).*ss.^2; + + if as_par.draw_plots + zz_plot = reshape(zz, size(zz)); + figure(9); + clf; + %pcolor(aa1, aa2, zz_plot) + plot(aa_grid, zz_plot) + %shading flat + xlabel('$x$', 'Interpreter', 'Latex') + ylabel('$a_{LW-US}$', 'Interpreter', 'Latex') + title(sprintf('lw-us-direct %d', as_par.q_plot), 'Interpreter', 'Latex'); + %colorbar(); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if as_par.save_intermediate_plots + filename = sprintf('%sacq-direct_n_%d', a_par.fig_path, k); + print(filename,'-dpdf'); + savefig(filename); + end + + if as_par.save_videos + F = getframe(gcf); + writeVideo(lw_us_acq_vid_file,F); + end + end + + +% ss_adj = ss - cur_model_protocol.gpr_obj.g_fit_list{1}.Sigma; +% zz = f_input(aa_grid)./f_likelihood(qq(:, 1)).*ss_adj; +% 
zz_plot = reshape(zz(:, q_plot), size(aa1)); +% figure(12); +% clf; +% pcolor(aa1, aa2, zz_plot) +% shading flat +% xlabel('$\alpha_1$', 'Interpreter', 'Latex') +% ylabel('$\alpha_2$', 'Interpreter', 'Latex') +% title(sprintf('lw-us-direct-adj %d', q_plot), 'Interpreter', 'Latex'); +% colorbar(); +% set(gca, 'FontSize', 9); +% set(gcf,'units','inches','position', a_par.plot_pos); +% set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); +% +% if a_par.save_intermediate_plots +% filename = sprintf('%sacq-direct-adj_n_%d_q_%d', a_par.fig_path, k, q_plot); +% print(filename,'-dpdf'); +% savefig(filename); +% end + + + +% zz_plot = reshape(ss(:, q_plot), size(aa1)); +% figure(7); +% clf; +% pcolor(aa1, aa2, zz_plot) +% shading flat +% xlabel('$\alpha_1$', 'Interpreter', 'Latex') +% ylabel('$\alpha_2$', 'Interpreter', 'Latex') +% title(sprintf('gpr uncertainty mode %d', q_plot), 'Interpreter', 'Latex'); +% colorbar(); +% set(gca, 'FontSize', 9); +% set(gcf,'units','inches','position', a_par.plot_pos); +% set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); +% +% if a_par.save_intermediate_plots +% filename = sprintf('%suncertainty_n_%d_q_%d', a_par.fig_path, k, q_plot); +% print(filename,'-dpdf'); +% savefig(filename); +% end + + +% f_blackbox = @(alpha) cur_model_protocol.gpr_obj.predict(alpha); +% f_acq = @(alpha) -f_acq_lw_us(alpha, f_input, f_likelihood, f_blackbox); +% zz = f_acq(aa_grid); +% zz_plot = reshape(zz(:, q_plot), size(aa1)); +% figure(5); +% clf; +% pcolor(aa1, aa2, zz_plot) +% shading flat +% xlabel('$\alpha_1$', 'Interpreter', 'Latex') +% ylabel('$\alpha_2$', 'Interpreter', 'Latex') +% title(sprintf('lw-us mode %d', q_plot), 'Interpreter', 'Latex'); +% colorbar(); +% set(gca, 'FontSize', 9); +% set(gcf,'units','inches','position', a_par.plot_pos); +% set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', 
a_par.half_paper_size); +% +% if a_par.save_figs +% filename = sprintf('%slw-us-acq_n_%d_q_%d', a_par.fig_path, k, q_plot); +% print(filename,'-dpdf'); +% savefig(filename); +% end + + + + [ zz, ss ] = model_list{k}.predict(aa_grid); + + if as_par.draw_plots + zz_plot = reshape(zz, size(zz)); + figure(3); + clf; + %pcolor(aa1, aa2, zz_plot) + plot(aa_grid, zz_plot); + shading flat + xlabel('$x$', 'Interpreter', 'Latex') + ylabel('$\mu_y$', 'Interpreter', 'Latex') + title(sprintf('recovered surrogate -- n = %d', k+as_par.n_init), ... + 'Interpreter', 'Latex'); + %caxis([-1.25*max_surr_abs, 1.25*max_surr_abs]); + %colorbar(); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if as_par.save_intermediate_plots + filename = sprintf('%ssurrogate_n_%d', a_par.fig_path, k); + print(filename,'-dpdf'); + savefig(filename); + end + + if as_par.save_videos + F = getframe(gcf); + writeVideo(surr_mean_vid_file,F); + end + + end + + + if as_par.draw_plots + zz_plot = reshape(ss, size(ss)); + figure(23); + clf; + %pcolor(aa1, aa2, zz_plot) + plot(aa_grid, zz_plot); + %shading flat + xlabel('$x$', 'Interpreter', 'Latex') + ylabel('$\sigma_y$', 'Interpreter', 'Latex') + title(sprintf('surrogate uncertainty -- n = %d', k+as_par.n_init), ... 
+ 'Interpreter', 'Latex'); + %caxis([-1.25*max_surr_abs, 1.25*max_surr_abs]); + %colorbar(); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if as_par.save_intermediate_plots + filename = sprintf('%ssur_uncertainty_n_%d', a_par.fig_path, k); + print(filename,'-dpdf'); + savefig(filename); + end + + if as_par.save_videos + F = getframe(gcf); + writeVideo(surr_std_vid_file,F); + end + end + + end + + + if as_par.save_videos + close(q_likelihood_vid_file); + close(surr_mean_vid_file); + close(lw_us_acq_vid_file); + close(lw_vid_file); + close(surr_std_vid_file); + end + + + + + %cur_model_protocol.plot_surrogate(1); + %true_model_protocol.plot_surrogate(1); + + fprintf('Plotting recovered q pdf and error metrics.\n'); + + + + + if as_par.draw_plots + lkk = [5:5:length(model_list), 51]; + CC = colormap(parula(length(pq_list))); + figure(13); + clf; + hold on + hh = zeros(length(pq_list)+1, 1); + names = cell(length(pq_list)+1, 1); + for k = 1:length(pq_list) + hh(k) = plot(qq_interval, pq_list{k}, 'LineWidth', 1, 'Color', CC(k, :)); + names{k} = sprintf('n = %d', k+as_par.n_init); + end + hh(max(lkk)) = plot(qq_interval, true_pq, 'LineWidth', 3, 'Color', 'Black'); + names{max(lkk)} = 'truth'; + xlabel('$y$', 'Interpreter', 'Latex') + ylabel('$p_Y(y)$', 'Interpreter', 'Latex'); + legend(hh(lkk), names(lkk), 'Interpreter', 'Latex', 'Location', 'Northeast'); + set(gca, 'YScale', 'log') + title(sprintf('y pdf'), 'Interpreter', 'Latex'); + ylim([1e-7, 1e0]) + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sq-pdf_total_mode_%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + end + + + + + 
%kl_bound_list = [2, 2.25, 2.5, 2.75, 3]; + + %n_kl_bounds = length(kl_bound_list); + qp_mae_list = zeros(length(pq_list), 1); + qp_rmse_list = zeros(length(pq_list), 1); + qp_log_mae_list = zeros(length(pq_list), as_par.n_kl_bounds); + qp_log_rmse_list = zeros(length(pq_list), as_par.n_kl_bounds); + qp_kl_div_forward_list = zeros(length(pq_list), as_par.n_kl_bounds); + qp_kl_div_backward_list = zeros(length(pq_list), as_par.n_kl_bounds); + NN_plot = (1:length(pq_list)) + as_par.n_init; + + for k = 1:length(pq_list) + cur_qp = pq_list{k}; + + delta = cur_qp - true_pq; + qp_mae_list(k) = mean(abs(delta)); + qp_rmse_list(k) = sqrt(mean(delta.^2)); + + log_delta = log(cur_qp) - log(true_pq); + for j = 1:as_par.n_kl_bounds + kl_lim = as_par.kl_bound_list(j); + ii = find((qq_interval > -kl_lim) & (qq_interval < kl_lim)); + + qp_log_mae_list(k, j) = mean(abs(log_delta(ii))); + qp_log_rmse_list(k, j) = sqrt(mean(log_delta(ii).^2)); + + qp_kl_div_forward_list(k, j) = sum(true_pq(ii).*log(true_pq(ii)./cur_qp(ii))); + qp_kl_div_backward_list(k, j) = sum(cur_qp(ii).*log(cur_qp(ii)./true_pq(ii))); + end + end + + + if as_par.draw_plots + figure(14); + clf; + hold on + plot(NN_plot, qp_mae_list); + plot(NN_plot, qp_rmse_list); + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend({'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('y-pdf error'), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sq-pdf-error_q=%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + end + + if as_par.draw_plots + figure(15); + clf; + hold on + hh = zeros(2*as_par.n_kl_bounds, 1); + for j = 1:as_par.n_kl_bounds + hh(2*j-1) = plot(NN_plot, qp_log_mae_list(:, j), 'Color', 
'Red'); + hh(2*j) = plot(NN_plot, qp_log_rmse_list(:, j), 'Color', 'Blue'); + end + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend(hh(1:2), {'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('y-pdf log error'), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sq-pdf-log_error_q=%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + end + + if as_par.draw_plots + figure(16); + clf; + hold on + hh = zeros(2*as_par.n_kl_bounds, 1); + for j = 1:as_par.n_kl_bounds + hh(2*j-1) = plot(NN_plot, qp_kl_div_forward_list(:, j), 'Color', 'Red'); + hh(2*j) = plot(NN_plot, qp_kl_div_backward_list(:, j), 'Color', 'Blue'); + end + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$D_{KL}$', 'Interpreter', 'Latex'); + legend(hh(1:2), {'$D_{KL}(\mbox{true} || \mbox{model})$', '$D_{KL}(\mbox{model} || \mbox{true})$'}, ... 
+ 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('pdf KL divergence'), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sq-pdf-kl-div_q=%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + end + + + if as_par.draw_plots + figure(17); + clf; + hold on + plot(NN_plot, surr_mu_mae_list); + plot(NN_plot, surr_mu_rmse_list); + xlabel('$n$', 'Interpreter', 'Latex') + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend({'MAE', 'RMSE'}, 'Interpreter', 'Latex') + set(gca, 'YScale', 'log') + title(sprintf('surrogate mean expected error'), 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%ssurrogate-mean-expected-error_q=%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + end + + + + if as_par.save_errors + filename = sprintf('%serror_data.mat', a_par.fig_path); + save(filename, 'qp_mae_list', 'qp_rmse_list', 'surr_mu_mae_list', ... + 'surr_mu_rmse_list', 'qp_kl_div_forward_list', 'qp_kl_div_backward_list', ... 
+ 'qp_log_mae_list', 'qp_log_rmse_list'); + end + + fprintf('Plotting done after %0.2f seconds.\n', toc); + + + outcode = 1; + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_recon_pdf.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_recon_pdf.m new file mode 100644 index 0000000..149ef41 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_recon_pdf.m @@ -0,0 +1,296 @@ +function [ XX, FF ] = draw_recon_pdf( protocol ) +%DRAW_RECON_PDF Resample the trained GPR surrogate, histogram the KL mode +% coefficients, and compare the reconstructed pdfs against the test data. + + a_par = protocol.a_par; + gpr_surrogate = protocol.gpr_obj; + yy_true = protocol.qq_test; + zz_true = protocol.zz_test; + + % + % sample from the surrogates + % + + n_samples = a_par.n_hist_resample; + + xx_test = randn(n_samples, gpr_surrogate.n_inputs); + + + switch a_par.gpr_resampling_strat + case 'normally-distributed' + [ yprd, ysd ] = gpr_surrogate.predict(xx_test); + + %bb = randn(n_samples, a_par.n_modes); + bb = randn(size(ysd)); + + yy_guess_mo = yprd; + yy_guess_nd = yprd + bb.*ysd; + + case 'vector-resample' + [ qq_sample, qq_pred_mu, ~ ] = gpr_surrogate.sample(xx_test); + + %yy_guess_mo = qq_pred_mu; + yy_guess_nd = qq_sample; + + case 'list-only' + [ qq_sample ] = gpr_surrogate.sample(xx_test); + + yy_guess_nd = qq_sample; + + end + + + + + % + % histogram the mode coefficients + % + + x_max = 4.5; + + xx_coeff_surrogate = linspace(-x_max, x_max, a_par.n_hist_bins); + + %PP_coeff_mo = zeros(a_par.n_modes, a_par.n_hist_bins-1); + PP_coeff_nd = zeros(protocol.n_output_modes, a_par.n_hist_bins-1); + + for k_mode = 1:protocol.n_output_modes + %PP_coeff_mo(k_mode, :) = histcounts(yy_guess_mo(:, k_mode), xx_coeff_surrogate, ... + % 'Normalization', 'pdf'); + PP_coeff_nd(k_mode, :) = histcounts(yy_guess_nd(:, k_mode), xx_coeff_surrogate, ... 
+ 'Normalization', 'pdf'); + end + + % + % histogram the 'true' coefficients + % + + n_bins_true = 65; + xx_coeff_true = linspace(-x_max, x_max, n_bins_true); + PP_coeff_true = zeros(a_par.n_modes, n_bins_true-1); + + for k_mode = 1:protocol.n_output_modes + PP_coeff_true(k_mode, :) = histcounts(yy_true(:, k_mode), xx_coeff_true, ... + 'Normalization', 'pdf'); + end + + % + % plot the mode coefficient histograms! + % + + n_plot = min(12, protocol.n_output_modes); + + names = cell(n_plot, 1); + for k_mode = 1:n_plot + names{k_mode} = sprintf('n=%d', k_mode); + end + + xx_plot_1 = 1/2*(xx_coeff_surrogate(2:end) + xx_coeff_surrogate(1:end-1)); + xx_plot_2 = 1/2*(xx_coeff_true(2:end) + xx_coeff_true(1:end-1)); + + CC = parula(n_plot); + + figure(8); + clf; + hold on + hh = zeros(n_plot, 1); + for k_mode = 1:n_plot + hh(k_mode) = plot(xx_plot_1, PP_coeff_nd(k_mode, :), 'Color', CC(k_mode, :)); + plot(xx_plot_2, PP_coeff_true(k_mode, :), 'Color', CC(k_mode, :), 'LineStyle', ':'); + end + xlabel('KL coefficient value -- $q_i$', 'Interpreter', 'Latex'); + ylabel('$f_Q(q)$', 'Interpreter', 'Latex'); + title(sprintf('resampled coefficient pdf -- %s', gpr_surrogate.exp_name), 'Interpreter', 'Latex'); + set(gca, 'YScale', 'log'); + legend(hh, names, 'Interpreter', 'Latex'); + grid on + set(gca, 'FontSize', 9); + + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%s%s-recon-modes-pdf-combined', a_par.fig_path, gpr_surrogate.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + +% figure(9); +% clf; +% for k_mode = 1:n_plot +% subplot(2, 3, k_mode) +% hold on +% plot(xx_plot_1, PP_coeff_mo(k_mode, :), 'Color', 'Blue', 'LineStyle', '-'); +% plot(xx_plot_1, PP_coeff_nd(k_mode, :), 'Color', 'Cyan', 'LineStyle', '--'); +% plot(xx_plot_2, PP_coeff_true(k_mode, :), 'Color', 'Red', 'LineStyle', ':'); +% 
xlabel(sprintf('KL coefficient value -- $q_%d$', k_mode), 'Interpreter', 'Latex'); +% ylabel(sprintf('$f_{Q_%d}(q_%d)$', k_mode, k_mode), 'Interpreter', 'Latex'); +% title(sprintf('mode $%d$ -- %s', k_mode, gpr_surrogate.exp_name), 'Interpreter', 'Latex'); +% set(gca, 'YScale', 'log'); +% legend('gpr-mean', 'gpr-spread', 'mc', 'Interpreter', 'Latex', 'Location', 'South'); +% %xlim([-1.2, 1.2]); +% grid on +% +% set(gca, 'FontSize', 9); +% +% end +% +% set(gcf,'units','inches','position', a_par.plot_pos); +% set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); +% +% if a_par.save_figs +% filename = sprintf('%s%s-recon-modes-pdf', a_par.fig_path, gpr_surrogate.exp_name); +% print(filename,'-dpdf'); +% savefig(filename); +% end + + + + figure(10); + clf; + for k_mode = 1:n_plot + subplot(3, 4, k_mode) + hold on + plot(xx_plot_1, PP_coeff_nd(k_mode, :), 'Color', 'Blue', 'LineStyle', '-', 'LineWidth', 2); + plot(xx_plot_2, PP_coeff_true(k_mode, :), 'Color', 'Black', 'LineStyle', '-', 'LineWidth', 2); + %xlabel(sprintf('KL coefficient value -- $q_%d$', k_mode), 'Interpreter', 'Latex'); + xlabel(sprintf('$q_{%d}$', k_mode), 'Interpreter', 'Latex'); + ylabel(sprintf('$f_{Q_%d}(q_{%d})$', k_mode, k_mode), 'Interpreter', 'Latex'); + %title(sprintf('mode $%d$ -- %s', k_mode, gpr_surrogate.exp_name), 'Interpreter', 'Latex'); + %title(sprintf('$n_{\\mbox{out}} = %d$', k_mode), 'Interpreter', 'Latex') + set(gca, 'YScale', 'log'); + %legend('gpr', 'mc', 'Interpreter', 'Latex', 'Location', 'South'); + xlim([-3.5, 3.5]); + ylim([1e-3, 1]); + grid on + + set(gca, 'FontSize', 9); + + end + + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); + + if a_par.save_figs + filename = sprintf('%s%s-recon-modes-pdf-no-mean', a_par.fig_path, gpr_surrogate.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + + % + % 
histogram the recovered time series + % + + V_out = gpr_surrogate.V_out; + lambda = gpr_surrogate.D_out; %rescale by KL eigenweights + beta = gpr_surrogate.overall_norm_factor; % final rescaling + ts_mu = gpr_surrogate.ts_mu; + + zz_list_nd = zeros(n_samples, size(V_out, 1)); + %zz_list_mo = zeros(n_samples, size(V_out, 1)); + + for k_sample = 1:n_samples + zz_list_nd(k_sample, :) = ts_transform_kl( a_par, yy_guess_nd(k_sample, :), V_out, lambda, ts_mu ); + %zz_list_mo(k_sample, :) = ts_transform_kl( a_par, yy_guess_mo(k_sample, :), V_out, lambda, ts_mu ); + end + + %zz_list_nd = zeros(n_samples, size(V_out, 1)); + %zz_list_mo = zeros(n_samples, size(V_out, 1)); + +% for k_sample = 1:n_samples +% zz_cur_nd = zeros(size(V_out, 1), 1); +% zz_cur_mo = zeros(size(V_out, 1), 1); +% for k_mode = 1:a_par.n_modes +% zz_cur_nd = zz_cur_nd + yy_guess_nd(k_sample, k_mode)*V_out(:, k_mode).*lambda(k_mode); +% zz_cur_mo = zz_cur_mo + yy_guess_mo(k_sample, k_mode)*V_out(:, k_mode).*lambda(k_mode); +% end +% +% zz_cur_nd = (zz_cur_nd + gpr_surrogate.ts_mu)*beta; +% zz_cur_mo = (zz_cur_mo + gpr_surrogate.ts_mu)*beta; % remove normalizations +% +% zz_list_nd(k_sample, :) = zz_cur_nd; +% zz_list_mo(k_sample, :) = zz_cur_mo; +% end + + + %xx_zz = linspace(-3e9, 3e9, a_par.n_hist_bins); + xx_zz = linspace(-10*beta, 10*beta, a_par.n_hist_bins); + + %PP_zz_mo = histcounts(zz_list_mo(:)*beta, xx_zz, 'Normalization', 'pdf'); + PP_zz_nd = histcounts(zz_list_nd(:)*beta, xx_zz, 'Normalization', 'pdf'); + + % + % histogram the true time series + % + + + PP_zz_true = histcounts(zz_true(:)*beta, xx_zz, 'Normalization', 'pdf'); + + % + % plot time series histogram + % + +% figure(11); +% clf; +% hold on +% xx_plot_1 = 1/2*(xx_zz(2:end) + xx_zz(1:end-1)); +% hold on +% plot(xx_plot_1, PP_zz_mo, 'Color', 'Blue', 'LineWidth', 3); +% plot(xx_plot_1, PP_zz_nd, 'Color', 'Cyan', 'LineWidth', 3); +% plot(xx_plot_1, PP_zz_true, 'Color', 'Red', 'LineStyle', ':', 'LineWidth', 3); +% xlabel('$z$', 
'Interpreter', 'Latex'); +% ylabel('$f_Z(z)$', 'Interpreter', 'Latex'); +% title(sprintf('resampled global vbm pdf -- %s', gpr_surrogate.exp_name), 'Interpreter', 'Latex'); +% %legend({'recovered', 'true'}, 'Interpreter', 'Latex'); +% legend('gpr-mean', 'gpr-spread', 'mc', 'Interpreter', 'Latex', 'Location', 'South'); +% set(gca, 'YScale', 'log'); +% xlim([-2e9, 2e9]); +% grid on +% +% set(gca, 'FontSize', 9); +% +% set(gcf,'units','inches','position', a_par.plot_pos); +% set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); +% +% if a_par.save_figs +% filename = sprintf('%s%s-recon-vbmg-pdf', a_par.fig_path, gpr_surrogate.exp_name); +% print(filename,'-dpdf'); +% savefig(filename); +% end + + + + figure(12); + clf; + hold on + xx_plot_1 = 1/2*(xx_zz(2:end) + xx_zz(1:end-1)); + hold on + plot(xx_plot_1, PP_zz_nd, 'Color', 'Blue', 'LineStyle', '-', 'LineWidth', 3); + plot(xx_plot_1, PP_zz_true, 'Color', 'Black', 'LineStyle', '-', 'LineWidth', 3); + xlabel('$z$', 'Interpreter', 'Latex'); + ylabel('$f_Z(z)$', 'Interpreter', 'Latex'); + title(sprintf('resampled global vbm pdf -- %s', gpr_surrogate.exp_name), 'Interpreter', 'Latex'); + %legend({'recovered', 'true'}, 'Interpreter', 'Latex'); + legend('gpr', 'mc', 'Interpreter', 'Latex', 'Location', 'South'); + set(gca, 'YScale', 'log'); + %xlim([-2e9, 2e9]); + %ylim([5e-14, 2e-9]) + grid on + + set(gca, 'FontSize', 9); + + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%s%s-recon-vbmg-pdf-no-mean', a_par.fig_path, gpr_surrogate.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + + + + XX = xx_plot_1; + FF = PP_zz_nd; + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_reconstruction_scatterplots.m 
b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_reconstruction_scatterplots.m new file mode 100644 index 0000000..541790d --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_reconstruction_scatterplots.m @@ -0,0 +1,188 @@ +function [ RR ] = draw_reconstruction_scatterplots( protocol ) +%DRAW_RECONSTRUCTION_SCATTERPLOTS Summary of this function goes here +% Detailed explanation goes here + + %if (nargin == 2) + a_par = protocol.a_par; + xx_test = protocol.aa_test; + yy_test = protocol.qq_test; + gpr_surrogate = protocol.gpr_obj; + %end + + + nyp_plot_1 = min(4, gpr_surrogate.n_outputs); + nyp_plot_2 = min(12, gpr_surrogate.n_outputs); + + fprintf('Scatterplot construction rule: %s.\n', a_par.gpr_resampling_strat ); + + switch a_par.gpr_resampling_strat + case 'normally-distributed' + [ qq_pred_mu, qq_pred_sigma ] = gpr_surrogate.predict(xx_test); + + bb = randn(size(qq_pred_mu)); + + rr = yy_test - qq_pred_mu; + err = bb.*qq_pred_sigma; + + case 'vector-resample' + %[ qq_pred_mu, ~ ] = gpr_surrogate.predict(xx_test); + [ qq_sample, qq_pred_mu, qq_pred_cov ] = gpr_surrogate.sample(xx_test); + rr = yy_test - qq_pred_mu; + err = qq_sample - qq_pred_mu; + + end + + + RR = zeros(nyp_plot_2, nyp_plot_2); + for k1 = 1:nyp_plot_2 + for k2 = 1:nyp_plot_2 + xx = rr(:, k1); + yy = rr(:, k2); + RR(k1, k2) = corr(xx, yy); + end + end + + + + + z_plot_max = 1; + + label_list = cell(nyp_plot_1, 1); + for k = 1:nyp_plot_1 + label_list{k} = sprintf('$q_{%d} - \\overline{q}_{%d}$', k, k); + end + + n_z = 65; + zz = linspace(-z_plot_max, z_plot_max, n_z); + HH = cell(nyp_plot_1, nyp_plot_1); + for k1 = 1:nyp_plot_1 + for k2 = k1:nyp_plot_1 + HH{k1, k2} = histcounts2(rr(:, k1), rr(:, k2), zz, zz); + end + end + + + + figure(3); + clf; + for k1 = 1:nyp_plot_1 + for k2 = (k1+1):nyp_plot_1 + subplot(nyp_plot_1, nyp_plot_1, k1 + (k2-1).*nyp_plot_1); + scatter(rr(:, k1), rr(:, k2), 1, '.'); + %title(sprintf('$q_{%d}$ vs $q_{%d}$', k1, k2), 'Interpreter', 
'Latex'); + xlabel(label_list{k1}, 'Interpreter', 'Latex'); + ylabel(label_list{k2}, 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + xlim([-z_plot_max, z_plot_max]); + ylim([-z_plot_max, z_plot_max]); + end + end + + for k1 = 1:nyp_plot_1 + subplot(nyp_plot_1, nyp_plot_1, k1 + (k1-1).*nyp_plot_1); + histogram(rr(:, k1), 'Normalization', 'pdf'); + xlim([-z_plot_max, z_plot_max]); + end + + + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); + + if a_par.save_figs + filename = sprintf('%s%s-scatter-residuals', a_par.fig_path, gpr_surrogate.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + + label_list = cell(nyp_plot_1, 1); + for k = 1:nyp_plot_1 + label_list{k} = sprintf('$\\hat{q}_%d - \\overline{q}_%d$', k, k); + end + + + + figure(14) + clf; + for k1 = 1:nyp_plot_1 + for k2 = k1:nyp_plot_1 + subplot(nyp_plot_1, nyp_plot_1, k1 + (k2-1).*nyp_plot_1); + %histogram2(rr(:, k1), rr(:, k2), zz, zz); + imagesc(zz, zz, HH{k1, k2}) + xlabel(label_list{k1}, 'Interpreter', 'Latex'); + ylabel(label_list{k2}, 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + xlim([-z_plot_max, z_plot_max]); + ylim([-z_plot_max, z_plot_max]); + end + end + + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); + + + if a_par.save_figs + filename = sprintf('%s%s-scatter-residuals-hist', a_par.fig_path, gpr_surrogate.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + + + figure(4); + clf; + for k1 = 1:nyp_plot_1 + for k2 = (k1+1):nyp_plot_1 + subplot(nyp_plot_1, nyp_plot_1, k1 + (k2-1).*nyp_plot_1); + scatter(err(:, k1), err(:, k2), 1, '.'); + %title(sprintf('$q_%d$ vs $q_%d$', k1, k2), 'Interpreter', 'Latex'); + xlabel(label_list{k1}, 'Interpreter', 'Latex'); + ylabel(label_list{k2}, 'Interpreter', 'Latex'); + set(gca, 'FontSize', 
9); + xlim([-z_plot_max, z_plot_max]); + ylim([-z_plot_max, z_plot_max]); + end + end + + for k1 = 1:nyp_plot_1 + subplot(nyp_plot_1, nyp_plot_1, k1 + (k1-1).*nyp_plot_1); + histogram(err(:, k1), 'Normalization', 'pdf'); + xlim([-z_plot_max, z_plot_max]); + end + + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); + + + if a_par.save_figs + filename = sprintf('%s%s-scatter-cross-err', a_par.fig_path, gpr_surrogate.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + + + + + + figure(13) + clf; + imagesc(RR.^2); + colorbar(); + title(sprintf('Residual point cloud $r^2$ -- %s', gpr_surrogate.exp_name), ... + 'Interpreter', 'Latex'); + xlabel('Output mode $1$', 'Interpreter', 'Latex'); + ylabel('Output mode $2$', 'Interpreter', 'Latex'); + + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + + if a_par.save_figs + filename = sprintf('%s%s-scatter-residuals-r', a_par.fig_path, gpr_surrogate.exp_name); + print(filename,'-dpdf'); + savefig(filename); + end + + + %outcode = 1; +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_sample_point_plots.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_sample_point_plots.m new file mode 100644 index 0000000..c94049a --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_sample_point_plots.m @@ -0,0 +1,54 @@ +function [ outcode ] = draw_sample_point_plots(a_par, as_par, aa_train) +%DRAW_SAMPLE_POINT_PLOTS Summary of this function goes here +% Detailed explanation goes here + +if as_par.draw_plots + fprintf('Drawing sample locations.\n'); + + sz = 25; + figure(1); + clf; + hold on + scatter(aa_train(1:as_par.n_init, 1) ,aa_train(1:as_par.n_init, 2), sz, 'red') + 
scatter(aa_train((as_par.n_init+1):(as_par.n_init+as_par.n_iter), 1), ... + aa_train((as_par.n_init+1):(as_par.n_init+as_par.n_iter), 2), sz, 'blue') + xlabel('$\alpha_1$', 'Interpreter', 'Latex') + ylabel('$\alpha_2$', 'Interpreter', 'Latex') + title('training samples', 'Interpreter', 'Latex') + legend({'initial', 'active'}, 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + if a_par.save_figs + filename = sprintf('%straining_data_locs_a12', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + + figure(2); + clf; + hold on + scatter(aa_train(1:as_par.n_init, 1) ,aa_train(1:as_par.n_init, 3), sz, 'red') + scatter(aa_train((as_par.n_init+1):(as_par.n_init+as_par.n_iter), 1), ... + aa_train((as_par.n_init+1):(as_par.n_init+as_par.n_iter), 3), sz, 'blue') + xlabel('$\alpha_1$', 'Interpreter', 'Latex') + ylabel('$\alpha_3$', 'Interpreter', 'Latex') + title('training samples', 'Interpreter', 'Latex') + legend({'initial', 'active'}, 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%straining_data_locs_a13', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + + fprintf('Done drawing sample locations.\n'); +end + +outcode = 1; + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_scsp_lots.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_scsp_lots.m new file mode 100644 index 0000000..2ee531b --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_scsp_lots.m @@ -0,0 +1,172 @@ +close('all'); + +a_par = Analysis_Parameters(); +a_par.fig_path = '../../../Output/scsp/jan_pix_2/'; +if ~exist(a_par.fig_path, 
'dir') + mkdir(a_par.fig_path); +end + + + +filepath = '/home/stevejon/Dropbox (MIT)/Data/LAMP/jan_scsp/'; + +filename = sprintf('%skl-2d-test-vbmg.txt', filepath); +load(filename); +vbmg = reshape(kl_2d_test_vbmg, [25, 25, 599])/1e9; +filename = sprintf('%skl-2d-test-zz.txt', filepath); +load(filename); +zz = reshape(kl_2d_test_zz, [25, 25, 1025]); +filename = sprintf('%skl-2d-test-tt.txt', filepath); +load(filename); + +vbmg_mu = squeeze(mean(vbmg, 1)); +vbmg_sig = squeeze(sqrt(var(vbmg, 1))); + +T_cut_start = -60; +T_cut_end = 0; +tt1 = kl_2d_test_tt; +mm = (tt1 > T_cut_start) & (tt1 < T_cut_end); +tt2 = tt1(mm); + + + +tt3 = linspace(-100, 100, 1025); +ttzp = -tt3 + 60; +ttvp = tt2+ 60*42.5/100; + +for j = 1:5 + + figure(3); + clf; + hold on + for k = 1:10 + plot(ttzp, squeeze(zz(k, j, :))) + end + xlabel('$t$', 'Interpreter', 'Latex'); + ylabel('$x$', 'Interpreter', 'Latex'); + title('sea elevation', 'Interpreter', 'Latex') + xlim([-25, 25]) + %ylim([-10, 10]); + + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.third_paper_pos, 'PaperSize', a_par.third_paper_size); + + filename = sprintf('%szz_ts_%d', a_par.fig_path , j); + print(filename,'-dpdf'); + savefig(filename) + + + + figure(2); + clf; + hold on + for k = 1:10 + plot(ttvp, squeeze(vbmg(k, j, :))) + end + xlabel('$t$', 'Interpreter', 'Latex'); + ylabel('$M_y$', 'Interpreter', 'Latex'); + title('VBM', 'Interpreter', 'Latex') + xlim([-25, 25]) + %ylim([-10, 10]); + + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.third_paper_pos, 'PaperSize', a_par.third_paper_size); + + filename = sprintf('%svbm_ts_%d', a_par.fig_path , j); + print(filename,'-dpdf'); + savefig(filename); + + + + figure(1); + clf; + hold on + x2 = [ttvp, fliplr(ttvp) ]; + inBetween = [ vbmg_mu(j, :) + vbmg_sig(j, :), fliplr(vbmg_mu(j, :) - vbmg_sig(j, :)) ]; + 
fill(x2, squeeze(inBetween), 'cyan'); + plot(ttvp, squeeze(vbmg_mu(j, :)), 'LineWidth', 3) + %xline(0) + %xline(-100) + xlabel('$t$', 'Interpreter', 'Latex'); + ylabel('$M_y$', 'Interpreter', 'Latex'); + title('VBM', 'Interpreter', 'Latex') + set(gca, 'FontSize', 14); + xlim([-25, 25]) + %ylim([-10, 10]); + + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.third_paper_pos, 'PaperSize', a_par.third_paper_size); + + filename = sprintf('%svbm_spread_%d', a_par.fig_path , j); + print(filename,'-dpdf'); + savefig(filename) +end + + + + + +figure(11); +clf +hold on + + + +for j = 1:3 + subplot(3, 3, (j-1)*3 + 1) + hold on + for k = 1:10 + plot(ttzp, squeeze(zz(k, j, :))) + end + xlabel('$t$', 'Interpreter', 'Latex'); + ylabel('$x$', 'Interpreter', 'Latex'); + title('sea elevation', 'Interpreter', 'Latex') + set(gca, 'FontSize', 9); + xlim([-25, 25]) + %ylim([-10, 10]); + + subplot(3, 3, (j-1)*3 + 2) + hold on + for k = 1:10 + plot(ttvp, squeeze(vbmg(k, j, :))) + end + xlabel('$t$', 'Interpreter', 'Latex'); + ylabel('$M_y$', 'Interpreter', 'Latex'); + title('VBM', 'Interpreter', 'Latex') + set(gca, 'FontSize', 9); + xlim([-25, 25]) + %ylim([-10, 10]); + + subplot(3, 3, (j-1)*3 + 3) + hold on + x2 = [ttvp, fliplr(ttvp) ]; + inBetween = [ vbmg_mu(j, :) + vbmg_sig(j, :), fliplr(vbmg_mu(j, :) - vbmg_sig(j, :)) ]; + fill(x2, squeeze(inBetween), 'cyan'); + plot(ttvp, squeeze(vbmg_mu(j, :)), 'LineWidth', 2) + %xline(0) + %xline(-100) + xlabel('$t$', 'Interpreter', 'Latex'); + ylabel('$M_y$', 'Interpreter', 'Latex'); + title('VBM', 'Interpreter', 'Latex') + set(gca, 'FontSize', 9); + xlim([-25, 25]) + %ylim([-10, 10]); + + +end + + + +figure(11) +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); + +filename = sprintf('%sscsp_demo', a_par.fig_path); +print(filename,'-dpdf'); 
+savefig(filename) + + + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_true_model_plots.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_true_model_plots.m new file mode 100644 index 0000000..2c8cbdc --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/draw_true_model_plots.m @@ -0,0 +1,66 @@ +function [ outcode ] = draw_true_model_plots(a_par, as_par, true_f_mean) +%DRAW_TRUE_MODEL_PLOTS Summary of this function goes here +% Detailed explanation goes here + + + fprintf('Drawing true model stuff.\n'); + tic + + f_input = @(alpha) prod(1/sqrt(2*pi)*exp(-alpha.^2/2), 2); + + a_grid = linspace(-as_par.z_max, as_par.z_max, as_par.na); + [aa1, aa2] = meshgrid(a_grid, a_grid); + aa_grid = [aa1(:), aa2(:), zeros(size(aa1(:)))]; + + zz = true_f_mean(aa_grid); + zz_plot = reshape(zz(:, as_par.q_plot), size(aa1)); + + + if as_par.draw_plots + figure(4); + clf; + pcolor(aa1, aa2, zz_plot) + shading flat + xlabel('$\alpha_1$', 'Interpreter', 'Latex') + ylabel('$\alpha_2$', 'Interpreter', 'Latex') + title(sprintf('true surrogate mode %d', as_par.q_plot), 'Interpreter', 'Latex'); + colorbar(); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%ssurrogate_true_q_%d', a_par.fig_path, as_par.q_plot); + print(filename,'-dpdf'); + savefig(filename); + end + end + + zz = f_input(aa_grid); + + if as_par.draw_plots + zz_plot = reshape(zz, size(aa1)); + figure(11); + clf; + pcolor(aa1, aa2, zz_plot) + shading flat + xlabel('$\alpha_1$', 'Interpreter', 'Latex') + ylabel('$\alpha_2$', 'Interpreter', 'Latex') + title(sprintf('input probability'), 'Interpreter', 'Latex'); + colorbar(); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', 
a_par.half_paper_size); + + if a_par.save_figs + filename = sprintf('%sinput_density', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + end + + fprintf('True model plotting stuff done after %0.2f seconds.\n', toc); + + outcode = 1; +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/ek_as_choose_next_wavegroup.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/ek_as_choose_next_wavegroup.m new file mode 100644 index 0000000..e0b312a --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/ek_as_choose_next_wavegroup.m @@ -0,0 +1,203 @@ +% % +% % Spectra rebalancing +% % +% +% fprintf('Calculating energy ratio between J1 and J2.\n'); +% +% fixed_T = 32; +% +% H_s = 5.5; +% T_m = 8; +% j_par = JONSWAP_Parameters(); +% j_par.update_significant_wave_height( H_s ); +% j_par.update_modal_period( T_m ); % close to what I was using before? +% amp_of_cosine = @(S, w, dw) sqrt(2*S(w).*dw); +% +% WW_kl = linspace(j_par.omega_min, j_par.omega_max, j_par.n_W)'; +% dW = WW_kl(2) - WW_kl(1); +% AA_kl = amp_of_cosine(j_par.S, WW_kl, dW); +% +% T_max_kl = fixed_T; +% n_t_kl = 512; +% TT_kl = linspace(0, T_max_kl, n_t_kl); +% dt_kl = TT_kl(2) - TT_kl(1); +% +% [ V_1, D_1 ] = calc_direct_kl_modes(AA_kl, WW_kl, TT_kl); +% +% H_s = 13; +% T_m = 8; +% j_par = JONSWAP_Parameters(); +% j_par.update_significant_wave_height( H_s ); +% j_par.update_modal_period( T_m ); % close to what I was using before? 
+% amp_of_cosine = @(S, w, dw) sqrt(2*S(w).*dw); +% +% WW_kl = linspace(j_par.omega_min, j_par.omega_max, j_par.n_W)'; +% dW = WW_kl(2) - WW_kl(1); +% AA_kl = amp_of_cosine(j_par.S, WW_kl, dW); +% +% T_max_kl = fixed_T; +% n_t_kl = 512; +% TT_kl = linspace(0, T_max_kl, n_t_kl); +% dt_kl = TT_kl(2) - TT_kl(1); +% +% [ V_2, D_2 ] = calc_direct_kl_modes(AA_kl, WW_kl, TT_kl); +% +% +% RR = real(sqrt(D_2./D_1)); +% +% disp(RR(1:3)); + +% +% adjust the training data +% it's all in j1 basis +% + +[ RR ] = ek_rebalance_spectra(); + +rr = RR(1:3)'; +aa_training_adj = aa_training./rr; + +% +% back to regularly scheduled GPR fitting +% + + + +%a_par.kl_transformation_rule = 'structured-sampling'; +a_par.kl_transformation_rule = 'restricted-mc'; + +cur_prot = LAMP_Protocol(a_par); +cur_prot.exp_name = 'openFoam--april-as'; +cur_prot.overall_norm = overall_norm_factor_f; +cur_prot.load_training_data(aa_training_adj, ff_training); +cur_prot.load_testing_data(aa_training_adj(1:26, :), ff_training(:, 1:26)); +cur_prot.transform_data(); +cur_prot.train_gpr(); + +cur_prot.plot_basis(); +cur_prot.plot_surrogate(1); +%[ p_foam_1.RR_res ] = draw_reconstruction_scatterplots( p_foam_1 ); +draw_recon_pdf( cur_prot ); +[ rmse_list_1, frac_rmse_list_1, env_rmse_list_1, env_frac_rmse_list_1 ] = ... + compute_reconstruction_error( cur_prot ); +compare_wavegroup_histograms( cur_prot ); + + +figure(61); +clf; +plot(cur_prot.D_kl./cur_prot.D_kl(1)); +set(gca, 'YScale', 'log') +xlim([1, 10]) +title('Output Mode spectral Decay', 'Interpreter', 'Latex'); + + + +as_par = Active_Search_Parameters(); +as_par.draw_plots = true; +as_par.opt_rule = 'as'; +as_par.n_acq_restarts = 100; +as_par.save_intermediate_plots = true; + +as_par.n_dim_in = 3; +as_par.q_min = -6; +as_par.q_max = 22; + +as_par.z_max = 5.5; +%as_par.z_max = 7.5; + +% +% Adjust this value depending on which mode we want to opt for! 
+% + +as_par.acq_active_output_mode = 3; + +% + +as_par.acq_rule = 'lw-kus'; +as_par.likelihood_alg = 'kde'; + +options = optimoptions('fmincon','Display','off'); + +as_par.video_path = a_par.fig_path; +if ~exist(a_par.fig_path, 'dir') + mkdir(a_par.fig_path); +end + + + + + +fprintf('Building acquisition function with rule: %s.\n', as_par.acq_rule); + +alpha_space = 'j2'; +switch alpha_space + case 'j1' + rr = RR(1:3)'; + f_input = @(alpha) prod(1/sqrt(2*pi)*exp(-(alpha./(rr)).^2/2), 2); + case 'j2' + f_input = @(alpha) prod(1/sqrt(2*pi)*exp(-alpha.^2/2), 2); +end + +f_blackbox = @(alpha) cur_prot.gpr_obj.predict(alpha); +[ f_likelihood ] = build_likelihood_function(as_par, f_input, f_blackbox); + + +sigma_n_list= cur_prot.gpr_obj.get_sigma_n_list(); +sigma2n = sigma_n_list(as_par.acq_active_output_mode).^2; +f_acq = @(alpha) -f_acq_lw_kus_multi_out(alpha, f_input, f_likelihood, ... + f_blackbox, sigma2n, as_par.acq_active_output_mode); + + + +% +% Choose next point +% + +fprintf('Evaluating acquisition function to choose next point.\n'); + +switch as_par.opt_rule + case 'uniform' + new_aa = as_par.z_max*(ones(1, as_par.n_dim_in) - 2*rand(1, as_par.n_dim_in)); + + case 'as' + A = [eye(as_par.n_dim_in); -eye(as_par.n_dim_in)]; + b = [as_par.z_max*ones(as_par.n_dim_in, 1); as_par.z_max*ones(as_par.n_dim_in, 1)]; + %ub = as_par.z_max*ones(1, 3); + + a_opt_list = zeros(as_par.n_acq_restarts, as_par.n_dim_in); + f_opt_list = zeros(as_par.n_acq_restarts, 1); + a0_list = as_par.z_max*(ones(as_par.n_acq_restarts, as_par.n_dim_in) - ... + 2*lhsdesign(as_par.n_acq_restarts, as_par.n_dim_in)); + + for j = 1:as_par.n_acq_restarts + fprintf('Restart round %d.\n', j); + a0 = a0_list(j, :); + %disp(f_acq(a0)); + %[x,fval,~,~] = fmincon(f_acq, a0, A, b); + + [x,fval,~,~] = fmincon(f_acq, a0, A, b, [], [], [], [], [], ... + options); + + %[x,fval,~,~] = fmincon(f_acq, a0, [], [], [], [], -ub, ub, ... 
+ % 'Display', 'off'); + + a_opt_list(j, :) = x; + f_opt_list(j) = fval; + end + + [~, i ] = min(f_opt_list); + + new_aa = a_opt_list(i, :); + %fprintf('Next point at alpha = (%0.2f, 0.2f, 0.2f).\n', new_aa(1), ... + % new_aa(2), new_aa(3)) + + otherwise + warning('%s not recognized\n', as_par.acq_rule); +end + + +fprintf('New acquired sample point (in J2 space):\n'); +disp(new_aa); + +fprintf('New acquired sample point (in J1 space):\n'); +disp(new_aa.*rr); \ No newline at end of file diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/ek_rebalance_spectra.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/ek_rebalance_spectra.m new file mode 100644 index 0000000..136bd16 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/ek_rebalance_spectra.m @@ -0,0 +1,55 @@ +function [ RR ] = ek_rebalance_spectra() +%EK_REBALANCE_SPECTRA Summary of this function goes here +% Detailed explanation goes here + +% +% Spectra rebalancing +% + + fprintf('Calculating energy ratio between J1 and J2.\n'); + + fixed_T = 32; + + H_s = 5.5; + T_m = 8; + j_par = JONSWAP_Parameters(); + j_par.update_significant_wave_height( H_s ); + j_par.update_modal_period( T_m ); % close to what I was using before? + amp_of_cosine = @(S, w, dw) sqrt(2*S(w).*dw); + + WW_kl = linspace(j_par.omega_min, j_par.omega_max, j_par.n_W)'; + dW = WW_kl(2) - WW_kl(1); + AA_kl = amp_of_cosine(j_par.S, WW_kl, dW); + + T_max_kl = fixed_T; + n_t_kl = 512; + TT_kl = linspace(0, T_max_kl, n_t_kl); + dt_kl = TT_kl(2) - TT_kl(1); + + [ V_1, D_1 ] = calc_direct_kl_modes(AA_kl, WW_kl, TT_kl); + + H_s = 13; + T_m = 8; + j_par = JONSWAP_Parameters(); + j_par.update_significant_wave_height( H_s ); + j_par.update_modal_period( T_m ); % close to what I was using before? 
+ amp_of_cosine = @(S, w, dw) sqrt(2*S(w).*dw); + + WW_kl = linspace(j_par.omega_min, j_par.omega_max, j_par.n_W)'; + dW = WW_kl(2) - WW_kl(1); + AA_kl = amp_of_cosine(j_par.S, WW_kl, dW); + + T_max_kl = fixed_T; + n_t_kl = 512; + TT_kl = linspace(0, T_max_kl, n_t_kl); + dt_kl = TT_kl(2) - TT_kl(1); + + [ V_2, D_2 ] = calc_direct_kl_modes(AA_kl, WW_kl, TT_kl); + + + RR = real(sqrt(D_2./D_1)); + + %disp(RR(1:3)); + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/ek_train_model.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/ek_train_model.m new file mode 100644 index 0000000..fd09166 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/ek_train_model.m @@ -0,0 +1,451 @@ +%ii = [1:6, 8:size(aa_training, 1)]; +%ii = [1:6, 8:20]; +ii = [1:size(aa_training, 1)]; + +% +% adjust the training data +% it's all in j1 basis +% + +target_spectrum = 'j2'; + +switch target_spectrum + case 'j1' + SS = ss_ff; + + aa_training_adj = aa_training; + aa_testing_adj = aa_testing; + + case 'j2' + [ RR ] = ek_rebalance_spectra(); + + rr = RR(1:3)'; + aa_training_adj = aa_training./rr; + aa_testing_adj = aa_testing./rr; + + SS = ss_table2(:, f_col_index); +end + + +p_foam_1 = LAMP_Protocol(a_par); +p_foam_1.exp_name = 'openFoam--april'; +p_foam_1.overall_norm = overall_norm_factor_f; +p_foam_1.load_training_data(aa_training_adj(ii, :), ff_training(:, ii)); +p_foam_1.load_testing_data(aa_testing_adj, ff_testing); +p_foam_1.transform_data(); +p_foam_1.train_gpr(); + +p_foam_1.plot_basis(); +p_foam_1.plot_surrogate(1); +%[ p_foam_1.RR_res ] = draw_reconstruction_scatterplots( p_foam_1 ); +[ XX, FF ] = draw_recon_pdf( p_foam_1 ); +%[ rmse_list_1, frac_rmse_list_1, env_rmse_list_1, env_frac_rmse_list_1 ] = ... 
+% compute_reconstruction_error( p_foam_1 ); +%compare_wavegroup_histograms( p_foam_1 ); + + +switch f_col_index + case 2 + f_name = '$F_x$'; + filename_tag = 'fx'; + case 3 + f_name = '$F_y$'; + filename_tag = 'fy'; + case 4 + f_name = '$F_z$'; + filename_tag = 'fz'; + case 5 + f_name = '$F_x^P$'; + filename_tag = 'fpx'; + case 6 + f_name = '$F_y^P$'; + filename_tag = 'fpy'; + case 7 + f_name = '$F_z^P$'; + filename_tag = 'fpz'; + case 8 + f_name = '$F_x^\nu$'; + filename_tag = 'fnx'; + case 9 + f_name = '$F_y^\nu$'; + filename_tag = 'fny'; + case 10 + f_name = '$F_z^\nu$'; + filename_tag = 'fnz'; +end + + + + + +figure(101); +clf; +hold on +plot(XX, FF, 'LineWidth', 3); +%[pf_ss, bb_ss] = histcounts(ss_ff, 'Normalization', 'pdf'); +[pf_ss, bb_ss] = histcounts(SS, 'Normalization', 'pdf'); +xx_ss = 1/2*(bb_ss(1:end-1) + bb_ss(2:end)); +maxP = max(pf_ss(:)); +plot(xx_ss, pf_ss, 'LineWidth', 3); +set(gca, 'YScale', 'log'); +%ylim([1e-10, 1e-6]) +ylim([5*maxP*10^-6, maxP*5]) +xlabel(f_name, 'Interpreter', 'Latex'); +ylabel('$f_F(f)$', 'Interpreter', 'Latex'); +title(sprintf('Comparison of steady state and GPR %s', f_name), 'Interpreter', 'Latex'); +legend({'GPR', 'MC'}, 'Location', 'best'); +grid on +set(gca, 'FontSize', 9); +%set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); +filename = sprintf('%spdf_hist_%s', a_par.fig_path, filename_tag); +print(filename,'-dpdf'); +savefig(filename); + + + + + + +figure(201); +clf +plot(p_foam_1.D_kl./p_foam_1.D_kl(1), 'LineWidth', 3); +xlim([0, 10]) +set(gca, 'YScale', 'log'); +set(gca, 'FontSize', 9); +%set(gcf,'units','inches','position', a_par.plot_pos); +title(sprintf('PCA eigenvalues -- %s', f_name), 'Interpreter', 'Latex'); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); +filename = sprintf('%sfx_output_pca_eignevalues_%s', a_par.fig_path, 
filename_tag); +print(filename,'-dpdf'); +savefig(filename); + + + +figure(202); +clf; +hold on +for k = 1:4 + subplot(2, 2, k); + hold on + plot(TT, p_foam_1.V_kl(:, 2*k-1), 'LineWidth', 1.5); + plot(TT, p_foam_1.V_kl(:, 2*k), 'LineWidth', 1.5); + xlim([0, max(TT)]) + set(gca, 'FontSize', 9); + %set(gcf,'units','inches','position', a_par.plot_pos); + title(sprintf('modes %d \\& %d', 2*k-1, 2*k), 'Interpreter', 'Latex'); +end +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); +filename = sprintf('%sfx_output_pca_eignemodes_%s', a_par.fig_path, filename_tag); +print(filename,'-dpdf'); +savefig(filename); + + + + +% +% TT = linspace(0, 32, 512); +% [zz, mu, ss] = p_foam_1.sample(aa_training(end, :)); +% figure(103); +% clf; +% hold on +% plot(TT, ff_training(:, end), 'LineWidth', 3, 'Color', 'Red'); +% plot(TT, zz, 'LineWidth', 3, 'Color', 'Blue'); +% plot(TT, mu+ss, 'LineWidth', 3, 'Color', 'Cyan', 'LineStyle', ':'); +% plot(TT, mu-ss, 'LineWidth', 3, 'Color', 'Cyan', 'LineStyle', ':'); +% legend('openFOAM', 'surrogate', 'Interpreter', 'Latex'); +% title(sprintf('Comparison of steady state and GPR %s', f_name), 'Interpreter', 'Latex'); +% set(gca, 'FontSize', 9); +% %set(gcf,'units','inches','position', a_par.plot_pos); +% set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); +% filename = sprintf('%sts_recovery_unblind%s', a_par.fig_path, filename_tag); +% print(filename,'-dpdf'); +% savefig(filename); +% +% figure(104); +% clf; +% hold on +% plot(TT, mu, 'LineWidth', 3, 'Color', 'Red'); +% plot(TT, ss, 'LineWidth', 3, 'Color', 'Blue'); + +% figure(102); +% clf; +% hold on +% plot(XX, FF, 'LineWidth', 3); +% %[pf_ss, xx_ss] = ksdensity(ss_ff); +% [pf_ss, xx_ss] = ksdensity(ss_table2(:, f_col_index)); +% plot(xx_ss, pf_ss, 'LineWidth', 3); +% set(gca, 'YScale', 'log'); +% %ylim([1e-10, 1e-6]) +% ylim([5*maxP*10^-6, maxP*5]) +% xlabel('$F_x$', 
'Interpreter', 'Latex'); +% ylabel('$f_F(f)$', 'Interpreter', 'Latex'); +% title(sprintf('Comparison of steady state and GPR %s', f_name), 'Interpreter', 'Latex'); +% legend({'GPR', 'MC'}) +% grid on +% set(gca, 'FontSize', 9); +% %set(gcf,'units','inches','position', a_par.plot_pos); +% set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); +% filename = sprintf('%spdf_kde_%s', a_par.fig_path, filename_tag); +% print(filename,'-dpdf'); +% savefig(filename); + + + +% +% +% p_foam_2 = LAMP_Protocol(a_par); +% p_foam_2.exp_name = 'openFoam--jan--recover-coeff'; +% p_foam_2.overall_norm = overall_norm_factor_f; +% p_foam_2.load_training_data(aa_training_recovered, ff_training); +% p_foam_2.load_testing_data(aa_testing_recovered, ff_testing); +% p_foam_2.transform_data(); +% p_foam_2.train_gpr(); +% +% p_foam_2.plot_basis(); +% p_foam_2.plot_surrogate(1); +% %[ p_foam_2.RR_res ] = draw_reconstruction_scatterplots( p_foam_2 ); +% %draw_recon_pdf( p_foam_2 ); +% [ rmse_list_2, frac_rmse_list_2, env_rmse_list_2, env_frac_rmse_list_2 ] = ... +% compute_reconstruction_error( p_foam_2 ); +% compare_wavegroup_histograms( p_foam_2 ); +% +% +% +% p_foam_3 = LAMP_Protocol(a_par); +% p_foam_3.exp_name = 'openFoam--jan--coeff2wavegroup'; +% p_foam_3.overall_norm = overall_norm_factor_z; +% p_foam_3.load_training_data(aa_training_recovered, zz_training); +% p_foam_3.load_testing_data(aa_testing_recovered, zz_testing); +% p_foam_3.transform_data(); +% p_foam_3.train_gpr(); +% +% p_foam_3.plot_basis(); +% p_foam_3.plot_surrogate(1); +% %[ p_foam_3.RR_res ] = draw_reconstruction_scatterplots( p_foam_3 ); +% %draw_recon_pdf( p_foam_3 ); +% [ rmse_list_3, frac_rmse_list_3, env_rmse_list_3, env_frac_rmse_list_3 ] = ... 
+% compute_reconstruction_error( p_foam_3 ); +% compare_wavegroup_histograms( p_foam_3 ); +% +% + + +% +% +% +% a1 = 1; +% a2 = 2; +% q1 = 1; +% +% figure(101); +% clf; +% scatter3(p_foam_1.aa_train(:, a1), p_foam_1.aa_train(:, a2), p_foam_1.qq_train(:, q1)); +% title(sprintf('$q_%d$ surrogate, resampled training points -- %s', q1, p_foam_1.exp_name), 'Interpreter', 'Latex'); +% xlabel(sprintf('$\\alpha_%d$', a1), 'Interpreter', 'Latex') +% ylabel(sprintf('$\\alpha_%d$', a2), 'Interpreter', 'Latex') +% zlabel(sprintf('$q_%d$', q1), 'Interpreter', 'Latex') +% +% +% [ ~, qq_hat, ~] = p_foam_1.gpr_obj.sample(p_foam_1.aa_train); +% +% figure(102); +% clf; +% scatter3(p_foam_1.aa_train(:, a1), p_foam_1.aa_train(:, a2), qq_hat(:, q1)); +% title(sprintf('$q_%d$ surrogate, resampled testing points -- %s', q1, p_foam_1.exp_name), 'Interpreter', 'Latex'); +% xlabel(sprintf('$\\alpha_%d$', a1), 'Interpreter', 'Latex') +% ylabel(sprintf('$\\alpha_%d$', a2), 'Interpreter', 'Latex') +% zlabel(sprintf('$q_%d$', q1), 'Interpreter', 'Latex') +% +% +% +% a1_list = [1, 1]; +% a2_list = [2, 3]; +% q1_list = [1, 2, 3]; +% +% ng = 65; +% a_grid = linspace(-4, 4, ng); +% [aa1, aa2] = meshgrid(a_grid, a_grid); +% +% LL = linspace(-4, 4, 10); +% +% for ka = 1:2 +% figure(110 + ka); +% clf; +% +% for kq = 1:length(q1_list) +% +% a1 = a1_list(ka); +% a2 = a2_list(ka); +% q1 = q1_list(kq); +% +% aa_grid = zeros(ng^2, 3); +% aa_grid(:, [a1, a2]) = [aa1(:), aa2(:)]; +% [ ~, qq_grid, ~] = p_foam_1.gpr_obj.sample(aa_grid); +% +% zz = reshape(qq_grid(:, q1), [ng, ng]); +% figure(103); +% clf; +% mesh(a_grid, a_grid, zz); +% title(sprintf('$q_%d$ surrogate, resampled grid -- %s', q1, p_foam_1.exp_name), 'Interpreter', 'Latex'); +% xlabel(sprintf('$\\alpha_%d$', a1), 'Interpreter', 'Latex') +% ylabel(sprintf('$\\alpha_%d$', a2), 'Interpreter', 'Latex') +% zlabel(sprintf('$q_%d$', q1), 'Interpreter', 'Latex') +% +% figure(104); +% clf; +% pcolor(a_grid, a_grid, zz); +% shading flat +% 
title(sprintf('$q_%d$ surrogate, resampled grid -- %s', q1, p_foam_1.exp_name), 'Interpreter', 'Latex'); +% xlabel(sprintf('$\\alpha_%d$', a1), 'Interpreter', 'Latex') +% ylabel(sprintf('$\\alpha_%d$', a2), 'Interpreter', 'Latex') +% %zlabel(sprintf('$q_%d$', q1), 'Interpreter', 'Latex') +% colorbar(); +% +% figure(110 + ka); +% subplot(2, 2, kq) +% hold on +% pcolor(a_grid, a_grid, zz); +% caxis([-3.5, 3.5]) +% contour(a_grid, a_grid, zz, LL, 'Color', 'Black'); +% shading flat +% title(sprintf('$q_%d$', q1), 'Interpreter', 'Latex'); +% xlabel(sprintf('$\\alpha_%d$', a1), 'Interpreter', 'Latex') +% ylabel(sprintf('$\\alpha_%d$', a2), 'Interpreter', 'Latex') +% end +% +% figure(110 + ka); +% set(gca, 'FontSize', 9); +% +% set(gcf,'units','inches','position', a_par.plot_pos); +% set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); +% +% +% filename = sprintf('%ssurrogate_model_%s_%d', a_par.fig_path, p_foam_1.exp_name, ka); +% print(filename,'-dpdf'); +% savefig(filename); +% +% end +% +% +% + + + +n_recoveries = 4; + +XX4_raw = cell(n_recoveries, 1); +XX4_cooked = cell(n_recoveries, 1); + +V_out = p_foam_1.gpr_obj.V_out; +lambda = p_foam_1.gpr_obj.D_out; %rescale by KL eigenweights +beta = p_foam_1.gpr_obj.overall_norm_factor; % final rescaling +ts_mu = p_foam_1.gpr_obj.ts_mu; + +for k = 1:n_recoveries + qq = p_foam_1.gpr_obj.predict(aa_testing(k, :)); + XX4_cooked{k} = ts_transform_kl( a_par, qq, V_out, lambda, ts_mu )*... + p_foam_1.gpr_obj.overall_norm_factor; + + XX4_raw{k} = ff_testing(:, k)*p_foam_1.gpr_obj.overall_norm_factor; +end + + + +n_reps = 1000; +XX4_cooked_mean = cell(n_recoveries, 1); +XX4_cooked_std = cell(n_recoveries, 1); + +for k = 1:n_recoveries + zz_cooked = zeros(n_reps, length(ts_mu)); + for j = 1:n_reps + qq = p_foam_1.gpr_obj.sample(aa_testing(k, :)); + zz_cooked(j, :) = ts_transform_kl( a_par, qq, V_out, lambda, ts_mu )*... 
+ p_foam_1.gpr_obj.overall_norm_factor; + end + + XX4_cooked_mean{k} = mean(zz_cooked, 1); + XX4_cooked_std{k} = std(zz_cooked, 0, 1); +end + + +zstar = 3; +lw = 1; + + +TT_plot = linspace(0, 32, length(XX4_raw{1})); + +figure(131); +clf; +for k = 1:n_recoveries + subplot(2, 2, k); + hold on + plot(TT_plot, XX4_raw{k}, 'LineWidth', lw) + plot(TT_plot, XX4_cooked{k}, 'LineWidth', lw) + xlabel('$t$', 'Interpreter', 'Latex'); + ylabel('$F_x^{\mbox{tot}}$', 'Interpreter', 'Latex'); + if (k == 1) + legend({'openFOAM', 'reconstruct'}, 'Interpreter', 'Latex', 'Location', 'Northwest'); + end + title(sprintf('openFOAM vs GP surrogate -- wave %d', 400 + k), 'Interpreter', 'Latex') + + xlim([0, max(TT_plot(:))]); + %ylim([-zstar, zstar]); + z_max = 5e6; + ylim([-z_max, z_max]); +end + +%subplot(2, 2, 2); +%legend({'LAMP', 'recon'}, 'Interpreter', 'Latex'); + +set(gca, 'FontSize', 9); + +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.full_paper_pos, 'PaperSize', a_par.full_paper_size); + +if a_par.save_figs + filename = sprintf('%sreconstructed_time_series_openFOAM-%s', a_par.fig_path, p_foam_1.exp_name); + print(filename,'-dpdf'); + savefig(filename); +end + + + +figure(132); +clf; +for k = 1:n_recoveries + subplot(2, 2, k); + hold on + + x2 = [TT_plot, fliplr(TT_plot) ]; + inBetween = [ XX4_cooked_mean{k} + XX4_cooked_std{k}, fliplr(XX4_cooked_mean{k} - XX4_cooked_std{k}) ]; + fill(x2', inBetween', 'cyan'); + h2 = plot(TT_plot, XX4_cooked_mean{k}, 'LineWidth', lw, 'Color', 'blue'); + + h1 = plot(TT_plot, XX4_raw{k}, 'LineWidth', lw, 'Color', 'Red'); + + if (k == 1) + legend([h1, h2], {'openFOAM', 'reconstruct'}, 'Interpreter', 'Latex', 'Location', 'Northwest'); + end + %title(sprintf('openFOAM vs GP surrogate -- wave %d', 400 + k), 'Interpreter', 'Latex') + title(sprintf('wave %d', 400 + k), 'Interpreter', 'Latex') + + xlim([0, max(TT_plot(:))]); + %ylim([-zstar, zstar]); + z_max = 5e6; + ylim([-z_max, z_max]); + 
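+ % Illustrative aside (not part of the original workflow): since
+ % ts_transform_kl is linear in the modal coefficients qq, the +/- 1 sigma
+ % band plotted here could also be formed from the GP's analytic predictive
+ % std instead of the n_reps Monte Carlo loop above. Sketch only -- assumes,
+ % as elsewhere in this repo, that predict() returns per-mode [mu, sigma]:
+ %
+ % [mu_q, sg_q] = p_foam_1.gpr_obj.predict(aa_testing(k, :));
+ % mu_t = ts_transform_kl( a_par, mu_q, V_out, lambda, ts_mu )*beta;
+ % sg_t = sqrt((V_out.^2)*(lambda(:).*sg_q(:).^2))*beta;
+ % % mu_t +/- sg_t would then replace XX4_cooked_mean{k} +/- XX4_cooked_std{k}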
aa = gca; + set(gca, 'YTickLabel', aa.YTickLabel()) + +end + +set(gca, 'FontSize', 9); + +set(gcf,'units','inches','position', a_par.plot_pos); +set(gcf,'PaperUnits', 'inches', 'PaperPosition', [0, 0, 4.5, 3.5], 'PaperSize', [4.5, 3.5]); + +if a_par.save_figs + filename = sprintf('%sreconstructed_time_series_openFOAM-%s_spread', a_par.fig_path, p_foam_1.exp_name); + print(filename,'-dpdf'); + savefig(filename); +end \ No newline at end of file diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/f_acq_lw_kus.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/f_acq_lw_kus.m new file mode 100644 index 0000000..ac6f15a --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/f_acq_lw_kus.m @@ -0,0 +1,17 @@ +function [ u ] = f_acq_lw_kus(alpha, f_input, f_likelihood, f_blackbox, sigma2n) +%F_ACQ_LW_KUS Likelihood-weighted uncertainty sampling acquisition, corrected for noise. +% Weights the predictive variance (minus noise variance sigma2n) by p(alpha)/p(mu). + + pa = f_input(alpha); + [ mu, std ] = f_blackbox(alpha); + + mu = mu(:); + sigma2adj = std(:).^2 - sigma2n; + eps0 = 1e-9; + + pq = f_likelihood( mu ); + w = pa./(pq + eps0); + u = sigma2adj.*w; + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/f_acq_lw_kus_multi_out.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/f_acq_lw_kus_multi_out.m new file mode 100644 index 0000000..32617bd --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/f_acq_lw_kus_multi_out.m @@ -0,0 +1,17 @@ +function [ u ] = f_acq_lw_kus_multi_out(alpha, f_input, f_likelihood, f_blackbox, sigma2n, k_out) +%F_ACQ_LW_KUS_MULTI_OUT Likelihood-weighted, noise-corrected acquisition for output mode k_out. +% As f_acq_lw_kus, but evaluated on the k_out-th output of the surrogate. + + pa = f_input(alpha); + [ mu, std ] = f_blackbox(alpha); + + mu = mu(:, k_out); + sigma2adj = std(:, k_out).^2 - sigma2n; + eps0 = 1e-9; + + pq = f_likelihood( mu ); + w = pa./(pq + eps0); + u = sigma2adj.*w; + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/f_acq_lw_us.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/f_acq_lw_us.m new file mode 100644 index 
0000000..1798849 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/f_acq_lw_us.m @@ -0,0 +1,17 @@ +function [ u ] = f_acq_lw_us(alpha, f_input, f_likelihood, f_blackbox) +%F_ACQ_LW_US Likelihood-weighted uncertainty sampling acquisition (first output mode). +% Weights the GP predictive variance by the ratio p(alpha)/(p(mu) + eps0). + + pa = f_input(alpha); + [ mu, std ] = f_blackbox(alpha); + + mu = mu(:, 1); + sigma2 = std(:, 1).^2; + eps0 = 1e-9; + + pq = f_likelihood( mu ); + w = pa./(pq + eps0); + u = sigma2.*w; + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/f_acq_lw_us_multi_out.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/f_acq_lw_us_multi_out.m new file mode 100644 index 0000000..2d41015 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/f_acq_lw_us_multi_out.m @@ -0,0 +1,17 @@ +function [ u ] = f_acq_lw_us_multi_out(alpha, f_input, f_likelihood, f_blackbox, k_out) +%F_ACQ_LW_US_MULTI_OUT Likelihood-weighted uncertainty sampling for output mode k_out. +% As f_acq_lw_us, but evaluated on the k_out-th output of the surrogate. + + pa = f_input(alpha); + [ mu, std ] = f_blackbox(alpha); + + mu = mu(:, k_out); + sigma2 = std(:, k_out).^2; + eps0 = 1e-9; + + pq = f_likelihood( mu ); + w = pa./(pq + eps0); + u = sigma2.*w; + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/kde_wrapper_function.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/kde_wrapper_function.m new file mode 100644 index 0000000..bd1c401 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/kde_wrapper_function.m @@ -0,0 +1,15 @@ +function [ff] = kde_wrapper_function(xx, ww, alpha) +%KDE_WRAPPER_FUNCTION Weighted kernel density estimate evaluated at alpha. +% Thin wrapper around ksdensity with sample weights ww. + + [a, ~] = ksdensity(xx, alpha, 'Weights', ww); + + %d = size(xx, 2); + %n = size(xx, 1); + %bw = std(xx).*(4/((d+2)*n)).^(1./(d+4)); % Silverman's rule + %a = mvksdensity(xx,alpha,... 
+ % 'Bandwidth',bw, 'Weights', ww); + + ff = a; +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/kl_transform_ts.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/kl_transform_ts.m new file mode 100644 index 0000000..e8ec556 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/kl_transform_ts.m @@ -0,0 +1,17 @@ +function [ QQ ] = kl_transform_ts(a_par, ZZ, V_basis, lambda, ts_mu) +%KL_TRANSFORM_TS Project the time series columns of ZZ onto the KL basis. +% Centers by ts_mu; QQ(k_exp,k_f) = V_basis(:,k_f)'*ZZ_norm(:,k_exp)/sqrt(lambda(k_f)). + + n_exp = size(ZZ, 2); + QQ = zeros(n_exp, a_par.n_modes ); + + ZZ_norm = ZZ - repmat(ts_mu, [1, n_exp]); + + for k_exp = 1:n_exp + for k_f = 1:a_par.n_modes + QQ(k_exp, k_f) = sum(V_basis(:, k_f).*ZZ_norm(:, k_exp))./sqrt(lambda(k_f)); + end + end + +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/load_data_4_as.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/load_data_4_as.m new file mode 100644 index 0000000..a55630f --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/load_data_4_as.m @@ -0,0 +1,72 @@ +% +% The Data! 
+% + +II = [3]; +TT = [60]; + +aa_list_set = cell(length(II), length(TT), 2); +ZZ_list_set = cell(length(II), length(TT), 2); + +filename_list = cell(length(II), length(TT), 2); +NN_list = cell(length(II), length(TT), 2); + +ps = '/research2/sguth/Work/data_4_as/'; +%ps = true_model_a_par.kl2d_data_path_set; + +for ki = 1:1 + for kt = 1:1 + for j = 1:2 + filename_list{ki, kt, 1} = sprintf('%skl-2d-%d-%d-', ps, II(ki), TT(kt)); + filename_list{ki, kt, 2} = sprintf('%skl-2d-%d-%d-test-', ps, II(ki), TT(kt)); + + NN_list{ki, kt, 1} = 1:625; + NN_list{ki, kt, 2} = 1:1300; + end + end +end + +for ki = 1:length(II) + for kt = 1:length(TT) + for j = 1:2 + cur_name = filename_list{ki, kt, j}; + + fprintf('Loading KL-2D LAMP statistics -- %s model.\n', cur_name); + + cur_II = NN_list{ki, kt, j}; + + summaryfilename = sprintf('%sdesign.txt', cur_name); + design = load(summaryfilename); + summaryfilename = sprintf('%sisgood.txt', cur_name); + isgood = load(summaryfilename); + summaryfilename = sprintf('%spitch.txt', cur_name); + pitch = load(summaryfilename); + summaryfilename = sprintf('%svbmg.txt', cur_name); + vbmg = load(summaryfilename); + summaryfilename = sprintf('%stt.txt', cur_name); + tt = load(summaryfilename); + + MM = cur_II(logical(isgood(cur_II))); + MM = MM(~(isnan(sum(vbmg(MM, :), 2)))); + + aa_vbmg_2d = design(MM, :); + ZZ_vbmg_2d = vbmg(MM, :); + + %PP_vbmg_2d = pitch(MM, :); + %PP_vbmg_2d = PP_vbmg_2d/pitch_norm_factor; + + aa_list_set{ki, kt, j} = aa_vbmg_2d; + ZZ_list_set{ki, kt, j} = ZZ_vbmg_2d'; + %PP_list_set{ki, kt, j} = PP_vbmg_2d'; + + clear aa_vbmg_2d ZZ_vbmg_2d PP_vbmg_2d + end + + end +end + +vbmg_norm_factor = std(ZZ_list_set{1, 1, 2}(:)); +ZZ_list_set{1, 1, 1} = ZZ_list_set{1, 1, 1} / vbmg_norm_factor; +ZZ_list_set{1, 1, 2} = ZZ_list_set{1, 1, 2} / vbmg_norm_factor; + + diff --git a/dnosearch/examples/intracycle/model/.DS_Store b/dnosearch/examples/lamp/Matlab_GP_Implementation/output/.DS_Store similarity index 91% rename from 
dnosearch/examples/intracycle/model/.DS_Store rename to dnosearch/examples/lamp/Matlab_GP_Implementation/output/.DS_Store index 5008ddf..3b38d57 100644 Binary files a/dnosearch/examples/intracycle/model/.DS_Store and b/dnosearch/examples/lamp/Matlab_GP_Implementation/output/.DS_Store differ diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/plot_as_errors.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/plot_as_errors.m new file mode 100644 index 0000000..d66f648 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/plot_as_errors.m @@ -0,0 +1,95 @@ +addpath('../analysis'); +a_par = Analysis_Parameters(); +a_par.fig_path = '../../../Output/LAMP_active_search/april_pix_1/'; + +data_base_path = '../../../Data/Active_Search/Round_1_error_metrics/'; + +run_name_list = cell(5, 1); +run_name_list{1} = 'lw-us-fixed-mode'; +run_name_list{2} = 'lw-kus-fixed-mode'; +run_name_list{3} = 'lw-us-round-robin'; +run_name_list{4} = 'lw-kus-round-robin'; +run_name_list{5} = 'uniform-fixed-mode'; + +sz1 = 16; +sz2 = 5; +nt = 75; +nb = 5; + +kl_bound_list = [2, 2.25, 2.5, 2.75, 3]; +kl_bound_list_vbm_upper = [1.1, 1.3, 1.5, 1.7, 1.9].*1e9; +kl_bound_list_vbm_lower = -[1.5, 1.7, 1.9, 2.1, 2.3].*1e9; + +error_array_list = cell(sz2, 1); + +for j = 1:sz2 + filename = sprintf('%serrs-%s.txt', data_base_path, run_name_list{j}); + cur_errs = load(filename, '-ascii'); + + error_array_list{j} = cur_errs; +end + +NN = 10 + (1:nt); + + +title_list = cell(46, 1); +title_list{1} = 'Surrogate MAE -- mode 1'; +title_list{2} = 'Surrogate RMSE -- mode 1'; +title_list{3} = 'Mode PDF MAE -- mode 1'; +title_list{4} = 'Mode PDF RMSE -- mode 1'; +for k = 5:9, title_list{k} = 'Mode PDF KL-div -- mode 1'; end +for k = 10:14, title_list{k} = 'Mode PDF reverse KL-div -- mode 1'; end +for k = 15:19, title_list{k} = 'Mode PDF log MAE -- mode 1'; end +for k = 20:24, title_list{k} = 'Mode PDF log RMSE -- mode 1'; end +title_list{25} = 'VBM PDF MAE'; +title_list{26} = 'VBM PDF RMSE'; 
+for k = 27:31, title_list{k} = 'VBM PDF KL-div'; end +for k = 32:36, title_list{k} = 'VBM PDF reverse KL-div'; end +for k = 37:41, title_list{k} = 'VBM PDF log MAE'; end +for k = 42:46, title_list{k} = 'VBM PDF log RMSE'; end + +color_list = cell(5, 1); +color_list{1} = 'Red'; +color_list{2} = 'Blue'; +color_list{3} = 'Magenta'; +color_list{4} = 'Cyan'; +color_list{5} = 'Black'; + +line_style_list = cell(5, 1); +line_style_list{1} = '-'; +line_style_list{2} = '-'; +line_style_list{3} = '-.'; +line_style_list{4} = '-.'; +line_style_list{5} = '-'; + +legend_names = cell(5, 1); +legend_names{1} = 'lw-us-fix'; +legend_names{2} = 'lw-kus-fix'; +legend_names{3} = 'lw-us-rr'; +legend_names{4} = 'lw-kus-rr'; +legend_names{5} = 'uniform'; + +for k = 1:46 + figure(k); + clf + hold on + for j = 1:sz2 + plot(NN, error_array_list{j}(k, :), 'Color', color_list{j}, ... + 'LineStyle', line_style_list{j}); + end + xlabel('$n$', 'Interpreter', 'Latex'); + ylabel('$\epsilon$', 'Interpreter', 'Latex'); + legend(legend_names, 'Interpreter', 'Latex', 'Location', 'best'); + title(title_list{k}, 'Interpreter', 'Latex'); + set(gca, 'YScale', 'log'); + xlim([10, max(NN)]); + + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + + filename = sprintf('%splot_%d', a_par.fig_path, k); + print(filename,'-dpdf'); + savefig(filename); + +end \ No newline at end of file diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/plot_raw_timeseries.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/plot_raw_timeseries.m new file mode 100644 index 0000000..8a248ea --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/plot_raw_timeseries.m @@ -0,0 +1,35 @@ +function [ outcode ] = plot_raw_timeseries( a_par, zz, tt) +%PLOT_RAW_TIMESERIES Summary of this function goes here +% Detailed explanation goes here + + + if (nargin == 2) + tt = 
linspace(1, size(zz, 1), size(zz, 1)); + end + + figure(31); + clf; + hold on + %k_off = 12*25; + k_off = 0; + for k = k_off + 1*(1:20) + plot(tt, zz(:, k)); + end + title('Sample time series'); + + figure(32); + clf; + hold on + plot(tt, var(zz, [], 2)); + title('time series variance'); + +% figure(33); +% clf; +% hold on +% plot(tt, mean(zz, 2)); +% title('time series mean'); + + + outcode = 1; +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/sample_from_protocol.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/sample_from_protocol.m new file mode 100644 index 0000000..790f003 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/sample_from_protocol.m @@ -0,0 +1,7 @@ +function [outputArg1,outputArg2] = sample_from_protocol(inputArg1,inputArg2) +%SAMPLE_FROM_PROTOCOL Summary of this function goes here +% Detailed explanation goes here +outputArg1 = inputArg1; +outputArg2 = inputArg2; +end + diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/scratchpad.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/scratchpad.m new file mode 100644 index 0000000..dce7d34 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/scratchpad.m @@ -0,0 +1,6 @@ +figure(101); +clf +hold on +for k = 1:10 + plot(ZZ_list_long{2}(:, k)) +end \ No newline at end of file diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_as_precomputed.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_as_precomputed.m new file mode 100644 index 0000000..7236b22 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_as_precomputed.m @@ -0,0 +1,157 @@ +function [ outcode ] = sj_as_precomputed(a_par, as_par, data_aa, data_zz, true_pz ) + + % + % Initialize stuff + % + + f_input = @(alpha) prod(1/sqrt(2*pi)*exp(-alpha.^2/2), 2); + + max_n_data = as_par.n_init + as_par.n_iter + 10; + %a_par.kl_transformation_rule = 'no-transform'; + + + aa_train = zeros(max_n_data, as_par.n_dim_in); + zz_train = 
zeros(max_n_data, size(data_zz, 2)); + + switch as_par.initial_samples_rule + case 'random-sample' + [~, ii] = datasample(1:size(data_aa, 1), as_par.n_init); + + aa_train(1:as_par.n_init, :) = data_aa(ii, :); + zz_train(1:as_par.n_init, :) = data_zz(ii, :); + + otherwise + warning('%s not recognized!\n', as_par.initial_samples_rule); + end + + protocol_list = cell(as_par.n_iter, 1); + + options = optimoptions('fmincon','Display','off'); + + % + % Main active search loop + % + + tic; + + for k = 1:as_par.n_iter + fprintf('Starting round k=%d.\n', k); + + switch as_par.mode_choice_rule + case 'fixed-mode' + cur_opt_mode = 1; + case 'round-robin' + cur_opt_mode = mod(k-1, as_par.n_rr_rondel_size)+1; + otherwise + warning('%s not recognized!\n', as_par.mode_choice_rule); + end + cur_aa_train = aa_train(1:(as_par.n_init+k-1), :); + cur_zz_train = zz_train(1:(as_par.n_init+k-1), :); + + cur_model_protocol = LAMP_Protocol(a_par); + cur_model_protocol.exp_name = sprintf('six-60'); + cur_model_protocol.overall_norm = as_par.overall_norm_factor; + cur_model_protocol.load_training_data(cur_aa_train, cur_zz_train'); + cur_model_protocol.load_testing_data(data_aa, data_zz'); + cur_model_protocol.transform_data(); + cur_model_protocol.train_gpr(); + +% cur_model_protocol.gpr_obj.V_out = true_model_protocol.gpr_obj.V_out; +% cur_model_protocol.gpr_obj.D_out = true_model_protocol.gpr_obj.D_out; +% cur_model_protocol.gpr_obj.overall_norm_factor = ... +% true_model_protocol.gpr_obj.overall_norm_factor; % final rescaling +% cur_model_protocol.gpr_obj.ts_mu = true_model_protocol.gpr_obj.ts_mu; + + % + % Build acquisition function + % + + fprintf('Building acquisition function with rule: %s.\n', as_par.acq_rule); + + f_blackbox = @(alpha) cur_model_protocol.gpr_obj.predict(alpha); + [ f_likelihood ] = build_likelihood_function(as_par, f_input, f_blackbox, ... 
+ cur_opt_mode); + + switch as_par.acq_rule + case 'lw-kus' + sigma_n_list= cur_model_protocol.gpr_obj.get_sigma_n_list(); + sigma2n = sigma_n_list(cur_opt_mode).^2; + f_acq = @(alpha) -f_acq_lw_kus_multi_out(alpha, f_input, ... + f_likelihood, f_blackbox, sigma2n, cur_opt_mode); + + case 'lw-us' + f_acq = @(alpha) -f_acq_lw_us_multi_out(alpha, f_input, ... + f_likelihood, f_blackbox, cur_opt_mode); + + case 'uniform' + f_acq = @(alpha) 1; + + otherwise + warning('%s not recognized\n', as_par.acq_rule); + end + + % + % Choose next point + % + + fprintf('Evaluating acquisition function to choose next point.\n'); + + switch as_par.opt_rule + case 'uniform' + %new_aa = as_par.z_max*(ones(1, as_par.n_dim_in) - 2*rand(1, as_par.n_dim_in)); + new_ii = ceil(size(data_aa, 1)*rand(1,1)); + new_aa = data_aa(new_ii, :); + new_zz = data_zz(new_ii, :); + + case 'as' + uu = f_acq(data_aa); + [~, new_ii] = min(uu); + new_aa = data_aa(new_ii, :); + new_zz = data_zz(new_ii, :); + + otherwise + warning('%s not recognized\n', as_par.acq_rule); + end + + % + % Evaluate next point + % + + + aa_train(as_par.n_init+k, :) = new_aa; + zz_train(as_par.n_init+k, :) = new_zz; + + protocol_list{k} = cur_model_protocol; + end + + fprintf('Main active search loop over after %0.2f seconds\n', toc); + + % + % Plots! + % + + fprintf('Starting plots!\n'); + + %draw_true_model_plots(a_par, as_par, true_f_mean); + %draw_movie_plots(a_par, as_par, protocol_list, true_f_mean, true_pq); + draw_sample_point_plots(a_par, as_par, aa_train); + + f_fake = @(a) 1; + [ err_struct ] = draw_error_plots( a_par, as_par, protocol_list, ... 
+ f_fake, 0, true_pz); + + err_filename = sprintf('%serr_struct.mat', a_par.fig_path); + save(err_filename, 'err_struct', '-mat'); + + + gamma = 8.9224e-10; + figure(101); + clf; + plot(1:length(err_struct.pz_log_mae_trunc_list), gamma*err_struct.pz_log_mae_trunc_list); + set(gca, 'YScale', 'log'); + + + + outcode = 1; + +end \ No newline at end of file diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_as_tester.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_as_tester.m new file mode 100644 index 0000000..e2b475a --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_as_tester.m @@ -0,0 +1,182 @@ +function [ outcode ] = sj_as_tester(a_par, as_par, true_f_sample, true_f_mean, ... + aa_fixed_initial, zz_fixed_initial, true_pq, true_pz, ... + true_testing_aa, true_testing_zz, true_model_protocol) + + % + % Initialize stuff + % + + f_input = @(alpha) prod(1/sqrt(2*pi)*exp(-alpha.^2/2), 2); + + max_n_data = as_par.n_init + as_par.n_iter + 10; + a_par.kl_transformation_rule = 'no-transform'; + + + aa_train = zeros(max_n_data, as_par.n_dim_in); + zz_train = zeros(max_n_data, a_par.n_modes); + + switch as_par.initial_samples_rule + case 'uniform' + aa_train(1:as_par.n_init, :) = as_par.z_max*... 
+ (ones(as_par.n_init, as_par.n_dim_in) - 2*rand(as_par.n_init, as_par.n_dim_in)); + + [yy] = true_f_sample(aa_train(1:as_par.n_init, :)); + zz_train(1:as_par.n_init, :) = yy; + case 'fixed-lhs' + aa_train(1:as_par.n_init, :) = aa_fixed_initial; + zz_train(1:as_par.n_init, :) = zz_fixed_initial; + otherwise + warning('%s not recognized!\n', as_par.initial_samples_rule); + end + + [yy] = true_f_sample(aa_train(1:as_par.n_init, :)); + zz_train(1:as_par.n_init, :) = yy; + + %a3_grid = linspace(-as_par.z_max, as_par.z_max, as_par.n_grid_likelihood); + %[aa13, aa23, aa33] = meshgrid(a3_grid, a3_grid, a3_grid); + %aa3_grid = [aa13(:), aa23(:), aa33(:)]; + %ww3 = f_input(aa3_grid); + + %bbq = linspace(-as_par.q_max, as_par.q_max, as_par.nqb+1); + + protocol_list = cell(as_par.n_iter, 1); + + options = optimoptions('fmincon','Display','off'); + + % + % Main active search loop + % + + tic; + + for k = 1:as_par.n_iter + fprintf('Starting round k=%d.\n', k); + + switch as_par.mode_choice_rule + case 'fixed-mode' + cur_opt_mode = 1; + case 'round-robin' + cur_opt_mode = mod(k-1, 6)+1; + otherwise + warning('%s not recognized!\n', as_par.mode_choice_rule); + end + cur_aa_train = aa_train(1:(as_par.n_init+k-1), :); + cur_zz_train = zz_train(1:(as_par.n_init+k-1), :); + + cur_model_protocol = LAMP_Protocol(a_par); + cur_model_protocol.exp_name = sprintf('three-60'); + cur_model_protocol.overall_norm = 1; + cur_model_protocol.load_training_data(cur_aa_train, cur_zz_train'); + cur_model_protocol.load_testing_data(true_testing_aa, true_testing_zz'); + cur_model_protocol.transform_data(); + cur_model_protocol.train_gpr(); + + cur_model_protocol.gpr_obj.V_out = true_model_protocol.gpr_obj.V_out; + cur_model_protocol.gpr_obj.D_out = true_model_protocol.gpr_obj.D_out; + cur_model_protocol.gpr_obj.overall_norm_factor = ... 
+            true_model_protocol.gpr_obj.overall_norm_factor; % final rescaling
+        cur_model_protocol.gpr_obj.ts_mu = true_model_protocol.gpr_obj.ts_mu;
+
+        %
+        % Build acquisition function
+        %
+
+        fprintf('Building acquisition function with rule: %s.\n', as_par.acq_rule);
+
+        f_blackbox = @(alpha) cur_model_protocol.gpr_obj.predict(alpha);
+        [ f_likelihood ] = build_likelihood_function(as_par, f_input, f_blackbox, ...
+            cur_opt_mode);
+
+        switch as_par.acq_rule
+            case 'lw-kus'
+                sigma_n_list = cur_model_protocol.gpr_obj.get_sigma_n_list();
+                sigma2n = sigma_n_list(cur_opt_mode).^2;
+                f_acq = @(alpha) -f_acq_lw_kus_multi_out(alpha, f_input, ...
+                    f_likelihood, f_blackbox, sigma2n, cur_opt_mode);
+
+            case 'lw-us'
+                f_acq = @(alpha) -f_acq_lw_us_multi_out(alpha, f_input, ...
+                    f_likelihood, f_blackbox, cur_opt_mode);
+
+            case 'uniform'
+                f_acq = @(alpha) 1;
+
+            otherwise
+                warning('%s not recognized\n', as_par.acq_rule);
+        end
+
+        %
+        % Choose next point
+        %
+
+        fprintf('Evaluating acquisition function to choose next point.\n');
+
+        switch as_par.opt_rule
+            case 'uniform'
+                new_aa = as_par.z_max*(ones(1, as_par.n_dim_in) - 2*rand(1, as_par.n_dim_in));
+
+            case 'as'
+                A = [eye(as_par.n_dim_in); -eye(as_par.n_dim_in)];
+                b = [as_par.z_max*ones(as_par.n_dim_in, 1); as_par.z_max*ones(as_par.n_dim_in, 1)];
+                %ub = as_par.z_max*ones(1, 3);
+
+                a_opt_list = zeros(as_par.n_acq_restarts, as_par.n_dim_in);
+                f_opt_list = zeros(as_par.n_acq_restarts, 1);
+                a0_list = as_par.z_max*(ones(as_par.n_acq_restarts, as_par.n_dim_in) - ...
+                    2*lhsdesign(as_par.n_acq_restarts, as_par.n_dim_in));
+
+                for j = 1:as_par.n_acq_restarts
+                    fprintf('%d-', j);
+                    a0 = a0_list(j, :);
+
+                    [x,fval,~,~] = fmincon(f_acq, a0, A, b, [], [], [], [], [], ...
+                        options);
+
+                    a_opt_list(j, :) = x;
+                    f_opt_list(j) = fval;
+                end
+
+                [~, i ] = min(f_opt_list);
+
+                new_aa = a_opt_list(i, :);
+                %fprintf('Next point at alpha = (%0.2f, %0.2f, %0.2f).\n', new_aa(1), ...
+                %    new_aa(2), new_aa(3))
+
+            otherwise
+                warning('%s not recognized\n', as_par.opt_rule);
+        end
+
+        %
+        % Evaluate next point
+        %
+
+        [yy] = true_f_sample(new_aa);
+        new_zz = yy;
+
+        aa_train(as_par.n_init+k, :) = new_aa;
+        zz_train(as_par.n_init+k, :) = new_zz;
+
+        protocol_list{k} = cur_model_protocol;
+    end
+
+    fprintf('Main active search loop over after %0.2f seconds\n', toc);
+
+    %
+    % Plots!
+    %
+
+    fprintf('Starting plots!\n');
+
+    %draw_true_model_plots(a_par, as_par, true_f_mean);
+    %draw_movie_plots(a_par, as_par, protocol_list, true_f_mean, true_pq);
+    draw_sample_point_plots(a_par, as_par, aa_train);
+    [ err_struct ] = draw_error_plots( a_par, as_par, protocol_list, ...
+        true_f_mean, true_pq, true_pz);
+
+    outcode = 1;
+
+end
\ No newline at end of file
diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_as_tester_toy.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_as_tester_toy.m
new file mode 100644
index 0000000..9eb74f0
--- /dev/null
+++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_as_tester_toy.m
@@ -0,0 +1,181 @@
+function [ outcode ] = sj_as_tester_toy(a_par, as_par, true_f, ...
+    aa_fixed_initial, true_pq)
+
+%
+% Initialize stuff
+%
+
+f_input = @(alpha) prod(1/sqrt(2*pi)*exp(-alpha.^2/2), 2);
+
+max_n_data = as_par.n_init + as_par.n_iter + 10;
+
+
+aa_train = zeros(max_n_data, as_par.n_dim_in);
+zz_train = zeros(max_n_data, a_par.n_modes);
+
+switch as_par.initial_samples_rule
+    case 'uniform'
+        aa_train(1:as_par.n_init, :) = as_par.z_max*...
+            (ones(as_par.n_init, as_par.n_dim_in) - 2*rand(as_par.n_init, as_par.n_dim_in));
+    case 'fixed-lhs'
+        aa_train(1:as_par.n_init, :) = aa_fixed_initial;
+    otherwise
+        warning('%s not recognized!\n', as_par.initial_samples_rule);
+end
+
+[yy] = true_f(aa_train(1:as_par.n_init, :));
+zz_train(1:as_par.n_init, :) = yy;
+
+
+model_list = cell(as_par.n_iter, 1);
+
+options = optimoptions('fmincon','Display','off');
+
+%
+% Main active search loop
+%
+
+tic;
+
+for k = 1:as_par.n_iter
+    fprintf('Starting round k=%d.\n', k);
+
+    cur_aa_train = aa_train(1:(as_par.n_init+k-1), :);
+    cur_zz_train = zz_train(1:(as_par.n_init+k-1), :);
+
+
+    switch as_par.fixed_sigma_for_optimization
+        case true
+            cur_gpr = fitrgp(cur_aa_train, cur_zz_train, ...
+                'BasisFunction', a_par.gpr_explicit_basis_class, ...
+                'Sigma', 1e-5, 'ConstantSigma', true, 'SigmaLowerBound', 1e-6);
+
+        case false
+            cur_gpr = fitrgp(cur_aa_train, cur_zz_train);
+    end
+
+
+    %
+    % Build acquisition function
+    %
+
+    fprintf('Building acquisition function with rule: %s.\n', as_par.acq_rule);
+
+    f_blackbox = @(alpha) cur_gpr.predict(alpha);
+    [ f_likelihood ] = build_likelihood_function(as_par, f_input, f_blackbox);
+
+    switch as_par.acq_rule
+        case 'lw-kus'
+            sigma_n = cur_gpr.Sigma;
+            sigma2n = sigma_n.^2;
+            f_acq = @(alpha) -f_acq_lw_kus(alpha, f_input, f_likelihood, f_blackbox, sigma2n);
+
+        case 'lw-us'
+            f_acq = @(alpha) -f_acq_lw_us(alpha, f_input, f_likelihood, f_blackbox);
+
+        otherwise
+            warning('%s not recognized\n', as_par.acq_rule);
+    end
+
+    %
+    % Choose next point
+    %
+
+    fprintf('Evaluating acquisition function to choose next point.\n');
+
+    switch as_par.opt_rule
+        case 'uniform'
+            new_aa = as_par.z_max*(ones(1, as_par.n_dim_in) - 2*rand(1, as_par.n_dim_in));
+
+        case 'as'
+            A = [eye(as_par.n_dim_in); -eye(as_par.n_dim_in)];
+            b = [as_par.z_max*ones(as_par.n_dim_in, 1); as_par.z_max*ones(as_par.n_dim_in, 1)];
+            %ub = as_par.z_max*ones(1, 3);
+
+            a_opt_list = zeros(as_par.n_acq_restarts, ...
as_par.n_dim_in);
+            f_opt_list = zeros(as_par.n_acq_restarts, 1);
+            a0_list = as_par.z_max*(ones(as_par.n_acq_restarts, as_par.n_dim_in) - ...
+                2*lhsdesign(as_par.n_acq_restarts, as_par.n_dim_in));
+
+            for j = 1:as_par.n_acq_restarts
+                fprintf('Restart round %d.\n', j);
+                a0 = a0_list(j, :);
+                %disp(f_acq(a0));
+                %[x,fval,~,~] = fmincon(f_acq, a0, A, b);
+
+                [x,fval,~,~] = fmincon(f_acq, a0, A, b, [], [], [], [], [], ...
+                    options);
+
+                %[x,fval,~,~] = fmincon(f_acq, a0, [], [], [], [], -ub, ub, ...
+                %    'Display', 'off');
+
+                a_opt_list(j, :) = x;
+                f_opt_list(j) = fval;
+            end
+
+            [~, i ] = min(f_opt_list);
+
+            new_aa = a_opt_list(i, :);
+            %fprintf('Next point at alpha = (%0.2f, %0.2f, %0.2f).\n', new_aa(1), ...
+            %    new_aa(2), new_aa(3))
+
+        otherwise
+            warning('%s not recognized\n', as_par.opt_rule);
+    end
+
+    %
+    % Evaluate next point
+    %
+
+    [yy] = true_f(new_aa);
+    new_zz = yy;
+
+    aa_train(as_par.n_init+k, :) = new_aa;
+    zz_train(as_par.n_init+k, :) = new_zz;
+
+    model_list{k} = cur_gpr;
+end
+
+fprintf('Main active search loop over after %0.2f seconds\n', toc);
+
+%
+% Plots!
+%
+
+fprintf('Starting plots!\n');
+
+draw_plots_toy(a_par, as_par, model_list, true_f, true_pq);
+
+if as_par.draw_plots
+    fprintf('Drawing sample locations.\n');
+
+    sz = 25;
+    figure(1);
+    clf;
+    hold on
+    scatter(aa_train(1:as_par.n_init, 1), zeros([as_par.n_init, 1]), sz, 'red')
+    scatter(aa_train((as_par.n_init+1):(as_par.n_init+as_par.n_iter), 1), ...
+ (1:as_par.n_iter), sz, 'blue'); + xlabel('$x$', 'Interpreter', 'Latex') + title('training samples', 'Interpreter', 'Latex') + legend({'initial', 'active'}, 'Interpreter', 'Latex'); + set(gca, 'FontSize', 9); + set(gcf,'units','inches','position', a_par.plot_pos); + set(gcf,'PaperUnits', 'inches', 'PaperPosition', a_par.half_paper_pos, 'PaperSize', a_par.half_paper_size); + if a_par.save_figs + filename = sprintf('%straining_data_locs_a12', a_par.fig_path); + print(filename,'-dpdf'); + savefig(filename); + end + +end + + + + + +outcode = 1; + +end \ No newline at end of file diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_create_text_gpr_obj.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_create_text_gpr_obj.m new file mode 100644 index 0000000..b513fe3 --- /dev/null +++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_create_text_gpr_obj.m @@ -0,0 +1,42 @@ +% cur_prot = LAMP_Protocol(a_par); +% cur_prot.exp_name = sprintf('two-60'); +% cur_prot.overall_norm = vbmg_norm_factor; +% cur_prot.load_training_data(aa_list_set{1, 4, 1}(:, :), ZZ_list_set{1, 4, 1}(:, :)); +% cur_prot.load_testing_data(aa_list_set{1, 4, 2}(:, :), ZZ_list_set{1, 4, 2}); +% cur_prot.transform_data(); +% cur_prot.train_gpr(); +% +% cur_prot.plot_basis(); +% cur_prot.plot_surrogate(1); +% [ cur_prot.RR_res ] = draw_reconstruction_scatterplots( cur_prot ); +% draw_recon_pdf( cur_prot ); +% +% save_path = '../../../Data/GPR/Two-60-no-basis'; +% if ~exist(save_path, 'dir') +% mkdir(save_path); +% end +% cur_prot.save_to_text(save_path); + + + + + +cur_prot = LAMP_Protocol(a_par); +cur_prot.exp_name = sprintf('three-60'); +cur_prot.overall_norm = vbmg_norm_factor; +cur_prot.load_training_data(aa_list_mar_bonus{1, 1}(:, :), ZZ_list_mar_bonus{1, 1}(:, :)); +%cur_prot.load_testing_data(aa_list_set{2, 4, 2}(:, :), ZZ_list_set{2, 4, 2}); +cur_prot.load_testing_data(aa_list_mar_bonus{1, 1}(:, :), ZZ_list_mar_bonus{1, 1}(:, :)); +cur_prot.transform_data(); 
+cur_prot.train_gpr();
+
+%cur_prot.plot_basis();
+%cur_prot.plot_surrogate(1);
+%[ cur_prot.RR_res ] = draw_reconstruction_scatterplots( cur_prot );
+%draw_recon_pdf( cur_prot );
+
+save_path = '../../../Data/GPR/Mar-three-60';
+if ~exist(save_path, 'dir')
+    mkdir(save_path);
+end
+cur_prot.save_to_text(save_path);
\ No newline at end of file
diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_export_pdf.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_export_pdf.m
new file mode 100644
index 0000000..f4b7626
--- /dev/null
+++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_export_pdf.m
@@ -0,0 +1,58 @@
+II = [2, 3, 4, 5, 6];
+TT = [20, 30, 40, 60, 80, 100, 120];
+
+pdf_save_path = '../../../Data/LAMP/vbm_pdf_for_fatigue/';
+if ~exist(pdf_save_path, 'dir')
+    mkdir(pdf_save_path);
+end
+
+
+p_set_list = cell(length(II), length(TT));
+
+XXs_list = cell(length(II), length(TT));
+FFs_list = cell(length(II), length(TT));
+
+for ki = 1:length(II)
+    for kt = 1:length(TT)
+        %II = randperm(size(aa_list_set{ki, kt}, 1), 300);
+
+        p_set_list{ki, kt} = LAMP_Protocol(a_par);
+        p_set_list{ki, kt}.exp_name = sprintf('%d-%d', II(ki), TT(kt));
+        p_set_list{ki, kt}.overall_norm = vbmg_norm_factor;
+        p_set_list{ki, kt}.load_training_data(aa_list_set{ki, kt, 1}(:, :), ZZ_list_set{ki, kt, 1}(:, :));
+        p_set_list{ki, kt}.load_testing_data(aa_list_set{ki, kt, 2}(:, :), ZZ_list_set{ki, kt, 2});
+        p_set_list{ki, kt}.transform_data();
+        p_set_list{ki, kt}.train_gpr();
+
+        %p_set_list{ki, kt}.plot_basis();
+        %p_set_list{ki, kt}.plot_surrogate(1);
+        %[ p_set_list{ki, kt}.RR_res ] = draw_reconstruction_scatterplots( p_set_list{ki, kt} );
+        [ XX, FF] = draw_recon_pdf( p_set_list{ki, kt} );
+        XXs_list{ki, kt} = XX;
+        FFs_list{ki, kt} = FF;
+
+        % '-ascii' writes plain text, matching the .txt extension;
+        % the default save format would be a binary MAT-file
+        xx_filename = sprintf('%s/xx_t_%dn_%d.txt', pdf_save_path, TT(kt), II(ki));
+        save(xx_filename, 'XX', '-ascii');
+        ff_filename = sprintf('%s/ff_t_%dn_%d.txt', pdf_save_path, TT(kt), II(ki));
+        save(ff_filename, 'FF', '-ascii');
+    end
+end
+
+
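The export loop above writes one empirical PDF per training case as a pair of text files (bin centers plus bin densities). For readers on the Python side of the repo, a rough NumPy equivalent of the `histcounts(..., 'Normalization', 'pdf')` step used here can be sketched as follows; the function name and bin choices are illustrative, not part of the codebase:

```python
import numpy as np

def pdf_on_bins(samples, edges):
    """Equivalent of MATLAB histcounts(...,'Normalization','pdf'):
    returns bin centers and per-bin probability densities."""
    pp, _ = np.histogram(samples, bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])  # same as xx_plot_1 in the MATLAB code
    return centers, pp

# demo on synthetic standard-normal samples
rng = np.random.default_rng(0)
zz = rng.normal(size=10_000)
edges = np.linspace(-5.0, 5.0, 101)
xx, pp = pdf_on_bins(zz, edges)

# with density=True the estimate integrates to ~1 over the binned range
total = np.sum(pp * np.diff(edges))
```

The resulting arrays can be written with `np.savetxt`, which pairs with the `np.loadtxt` calls in `lamp_helper_functions.py` that read these PDF files back.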
+xx_zz = linspace(-3e9, 3e9, a_par.n_hist_bins);
+PP_zz_klmc = histcounts(ZZ_nov_klmc(:)*vbmg_norm_factor, xx_zz, 'Normalization', 'pdf');
+PP_zz_ssmc = histcounts(ZZ_nov_ss(:)*vbmg_norm_factor, xx_zz, 'Normalization', 'pdf');
+xx_plot_1 = 1/2*(xx_zz(2:end) + xx_zz(1:end-1));
+
+
+% '-ascii' keeps these files plain text, as the .txt extension promises
+xx_filename = sprintf('%sxx_klmc.txt', pdf_save_path);
+save(xx_filename, 'xx_plot_1', '-ascii');
+ff_filename = sprintf('%sff_klmc.txt', pdf_save_path);
+save(ff_filename, 'PP_zz_klmc', '-ascii');
+
+xx_filename = sprintf('%sxx_ssmc.txt', pdf_save_path);
+save(xx_filename, 'xx_plot_1', '-ascii');
+ff_filename = sprintf('%sff_ssmc.txt', pdf_save_path);
+save(ff_filename, 'PP_zz_ssmc', '-ascii');
\ No newline at end of file
diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_test_vektor_gpr.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_test_vektor_gpr.m
new file mode 100644
index 0000000..9189ca6
--- /dev/null
+++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/sj_test_vektor_gpr.m
@@ -0,0 +1,112 @@
+% aa_train = aa_list_set{3, 3, 1};
+% aa_test = aa_list_set{3, 3, 2};
+%
+% zz_train = ZZ_list_set{3, 3, 1};
+% zz_test = ZZ_list_set{3, 3, 2};
+%
+% base_name = 'four-40';
+
+aa_train = aa_list_set{2, 4, 1};
+aa_test = aa_list_set{2, 4, 2};
+
+zz_train = ZZ_list_set{2, 4, 1};
+zz_test = ZZ_list_set{2, 4, 2};
+
+base_name = 'three-60';
+
+
+p_scalar = LAMP_Protocol(a_par);
+p_scalar.exp_name = sprintf('%s-scalar', base_name);
+p_scalar.overall_norm = vbmg_norm_factor;
+p_scalar.load_training_data(aa_train, zz_train);
+p_scalar.load_testing_data(aa_test, zz_test);
+p_scalar.transform_data();
+p_scalar.train_gpr();
+
+p_scalar.plot_basis();
+p_scalar.plot_surrogate(1);
+[ p_scalar.RR_res ] = draw_reconstruction_scatterplots( p_scalar );
+draw_recon_pdf( p_scalar );
+
+p_scalar.gpr_obj.g_fit_list{1}.KernelInformation.KernelParameters
+p_scalar.gpr_obj.g_fit_list{1}.Sigma
+p_scalar.gpr_obj.g_fit_list{1}.Beta
+
+
+
+
+p_vektor = LAMP_Protocol(a_par);
+p_vektor.exp_name = sprintf('%s-vektor', base_name);
+p_vektor.overall_norm = vbmg_norm_factor; +p_vektor.load_training_data(aa_train, zz_train); +p_vektor.load_testing_data(aa_test, zz_test); +p_vektor.transform_data(); +p_vektor.vector_pair_list = [1, 2]; +p_vektor.rho_list = [p_scalar.RR_res(1, 2)]; +p_vektor.train_gpr(); + +p_vektor.plot_basis(); +p_vektor.plot_surrogate(1); +[ p_vektor.RR_res ] = draw_reconstruction_scatterplots( p_vektor ); +a_par.n_hist_resample = 2000; +draw_recon_pdf( p_vektor ); + +p_vektor.gpr_obj.vector_gpr_list{1}.g_fit.KernelInformation.KernelParameters +p_vektor.gpr_obj.vector_gpr_list{1}.rho +p_vektor.gpr_obj.vector_gpr_list{1}.sigma0 + +% +% qq = zeros(100, 2); +% for k = 1:100 +% [ qq_sample, qq_pred_mu, qq_pred_cov ] = p_vektor.gpr_obj.sample(aa_test(1, :)); +% qq(k, :) = qq_sample(1:2); +% end +% +% [ qq_pred_mu, qq_pred_cov ] = p_vektor.gpr_obj.vector_gpr_list{1}.predict(aa_test(1, :)); +% +% figure(11); +% clf; +% scatter(qq(:, 1), qq(:, 2)); + + + + +% [rot_mat, ~] = eig(p_vektor.RR_res); +% +% +% p_vektor_rot = LAMP_Protocol(a_par); +% p_vektor_rot.exp_name = 'four-40-rot'; +% p_vektor_rot.overall_norm = vbmg_norm_factor; +% p_vektor_rot.load_training_data(aa_list_set{3, 3, 1}, ZZ_list_set{3, 3, 1}); +% p_vektor_rot.load_testing_data(aa_list_set{3, 3, 2}, ZZ_list_set{3, 3, 2}); +% p_vektor_rot.rot_mat = rot_mat; +% p_vektor_rot.transform_data(); +% p_vektor_rot.train_gpr(); +% +% p_vektor_rot.plot_basis(); +% p_vektor_rot.plot_surrogate(1); +% [ p_vektor_rot.RR_res ] = draw_reconstruction_scatterplots( p_vektor_rot ); +% draw_recon_pdf( p_vektor_rot ); +% +% p_vektor_rot.gpr_obj.g_fit_list{1}.KernelInformation.KernelParameters +% p_vektor_rot.gpr_obj.g_fit_list{1}.Sigma +% p_vektor_rot.gpr_obj.g_fit_list{1}.Beta + + +% +% p_vektor2 = LAMP_Protocol(a_par); +% p_vektor2.exp_name = 'four-80'; +% p_vektor2.overall_norm = vbmg_norm_factor; +% p_vektor2.load_training_data(aa_list_set{3, 5, 1}, ZZ_list_set{3, 5, 1}); +% p_vektor2.load_testing_data(aa_list_set{3, 5, 2}, ZZ_list_set{3, 
5, 2});
+% p_vektor2.transform_data();
+% p_vektor2.train_gpr();
+%
+% p_vektor2.plot_basis();
+% p_vektor2.plot_surrogate(1);
+% [ p_vektor2.RR_res ] = draw_reconstruction_scatterplots( p_vektor2 );
+% draw_recon_pdf( p_vektor2 );
+%
+% p_vektor2.gpr_obj.g_fit_list{1}.KernelInformation.KernelParameters
+% p_vektor2.gpr_obj.g_fit_list{1}.Sigma
+% p_vektor2.gpr_obj.g_fit_list{1}.Beta
\ No newline at end of file
diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/ts_transform_kl.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/ts_transform_kl.m
new file mode 100644
index 0000000..5bd9502
--- /dev/null
+++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/ts_transform_kl.m
@@ -0,0 +1,15 @@
+function [ zz ] = ts_transform_kl( a_par, qq, V_basis, lambda, ts_mu )
+%TS_TRANSFORM_KL Reconstruct a time series from its KL mode coefficients.
+%   Each coefficient in qq scales its basis vector by sqrt(lambda); the mean ts_mu is added back.
+
+
+    zz_cur = zeros(size(V_basis, 1), 1);
+
+    for k_mode = 1:size(qq, 2)
+        zz_cur = zz_cur + qq(:, k_mode)*V_basis(:, k_mode).*sqrt(lambda(k_mode));
+    end
+
+    zz = (zz_cur + ts_mu);
+
+end
+
diff --git a/dnosearch/examples/lamp/Matlab_GP_Implementation/weighted_histogram.m b/dnosearch/examples/lamp/Matlab_GP_Implementation/weighted_histogram.m
new file mode 100644
index 0000000..c8c5ae9
--- /dev/null
+++ b/dnosearch/examples/lamp/Matlab_GP_Implementation/weighted_histogram.m
@@ -0,0 +1,16 @@
+function [ pp ] = weighted_histogram(yy, ww, bb)
+%WEIGHTED_HISTOGRAM Histogram of samples yy with per-sample weights ww on bin edges bb.
+%   Samples outside the edges are lumped into the first bin; pp is normalized to sum to one.
+
+    ii = discretize(yy, bb);
+    ii(isnan(ii)) = 1;
+
+    pp = zeros(length(bb)-1, 1);
+    for j = 1:length(yy)
+        pp(ii(j)) = pp(ii(j)) + ww(j);
+    end
+
+    pp = pp./sum(pp(:));
+
+end
+
diff --git a/dnosearch/examples/lamp/__init__.py b/dnosearch/examples/lamp/__init__.py
new file mode 100644
index 0000000..890a009
--- /dev/null
+++ b/dnosearch/examples/lamp/__init__.py
@@ -0,0 +1,2 @@
+from .main_lamp import *
+from .lamp_helper_functions import *
diff --git
a/dnosearch/examples/lamp/__pycache__/lamp_helper_functions.cpython-38.pyc b/dnosearch/examples/lamp/__pycache__/lamp_helper_functions.cpython-38.pyc
new file mode 100644
index 0000000..d3a62f7
Binary files /dev/null and b/dnosearch/examples/lamp/__pycache__/lamp_helper_functions.cpython-38.pyc differ
diff --git a/dnosearch/examples/lamp/__pycache__/lamp_helper_functions.cpython-39.pyc b/dnosearch/examples/lamp/__pycache__/lamp_helper_functions.cpython-39.pyc
new file mode 100644
index 0000000..00726f8
Binary files /dev/null and b/dnosearch/examples/lamp/__pycache__/lamp_helper_functions.cpython-39.pyc differ
diff --git a/dnosearch/examples/lamp/lamp_helper_functions.py b/dnosearch/examples/lamp/lamp_helper_functions.py
new file mode 100644
index 0000000..7292f03
--- /dev/null
+++ b/dnosearch/examples/lamp/lamp_helper_functions.py
@@ -0,0 +1,544 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+Created on Mon Jul 25 16:47:11 2022
+
+@author: stevejon
+"""
+
+# DNOSearch Imports
+import numpy as np
+from dnosearch import (BlackBox, GaussianInputs, DeepONet)
+#from oscillator import Noise
+
+# DeepONet Imports
+import deepxde as dde
+
+# Other Imports
+import sys
+import scipy
+import scipy.stats  # scipy.stats.gaussian_kde below needs an explicit submodule import
+from scipy.interpolate import InterpolatedUnivariateSpline
+import scipy.io as sio
+import h5py
+import matplotlib.pyplot as plt
+plt.rcParams.update({
+    "text.usetex": False,
+    "font.family": "serif",
+    "font.serif": ["Times"]})
+
+# SJ imports
+import sklearn as sk
+import sklearn.decomposition  # sk.decomposition is not importable as a side effect of `import sklearn`
+import os
+
+
+
+#
+# Would be our link to the Matlab wrapper for LAMP, if we implement it
+#
+
+def map_def(alpha, ii, QQ, sample_strat='discrete-noiseless', sigma_n=0):
+    if sample_strat == 'discrete-noiseless' :
+        return QQ[ii, :]
+
+    if sample_strat == 'discrete-noisy' :
+        return QQ[ii, :] + np.random.randn()*sigma_n
+
+    print('Uh-oh, sample strategy {} not recognized!'.format(sample_strat))
+    return 1
+
+
+#####################
+# IO Helper Functions
+#####################
+
+def make_dirs(output_path, err_save_path, ...
model_dir, as_dir, fig_save_path, intermediate_data_dir):
+    #
+    # os.makedirs with exist_ok=True fails silently if a direktory already exists
+    #
+
+    dir_list = [output_path, err_save_path, model_dir,
+                '{}/model'.format(model_dir), # 'cause DeepONet has some built in pathing
+                as_dir, fig_save_path, intermediate_data_dir]
+
+    for cur_dir in dir_list :
+        os.makedirs(cur_dir, exist_ok=True)
+
+
+def load_wave_data(data_path, model_suffix):
+    #
+    # Load some precomputed LAMP data
+    #
+
+    wave_TT_filename = '{}TT{}.txt'.format(data_path, model_suffix)
+    wave_DD_filename = '{}DD{}.txt'.format(data_path, model_suffix)
+    wave_VV_filename = '{}VV{}.txt'.format(data_path, model_suffix)
+
+
+    wTT = np.loadtxt(wave_TT_filename)
+    wDD = np.loadtxt(wave_DD_filename)
+    wVV = np.loadtxt(wave_VV_filename)
+
+    return wTT, wDD, wVV
+
+def load_vbm_lhs_data(data_path, model_suffix, trim=True):
+    vbm_TT_lhs_filename = '{}kl-2d{}-tt.txt'.format(data_path, model_suffix)
+    vbm_zz_lhs_filename = '{}kl-2d{}-vbmg.txt'.format(data_path, model_suffix)
+    vbm_aa_lhs_filename = '{}kl-2d{}-design.txt'.format(data_path, model_suffix)
+
+    vTTlhs = np.loadtxt(vbm_TT_lhs_filename)
+    vZZlhs = np.loadtxt(vbm_zz_lhs_filename)
+    vAAlhs = np.loadtxt(vbm_aa_lhs_filename)
+
+    if trim :
+        vZZlhs = vZZlhs[0:625, :] # minor accounting error during LAMP problem design
+        vAAlhs = vAAlhs[0:625, :]
+
+    return vTTlhs, vZZlhs, vAAlhs
+
+def load_vbm_mc_data(data_path, model_suffix):
+    vbm_TT_mc_filename = '{}kl-2d{}-test-tt.txt'.format(data_path, model_suffix)
+    vbm_zz_mc_filename = '{}kl-2d{}-test-vbmg.txt'.format(data_path, model_suffix)
+    vbm_aa_mc_filename = \
'{}kl-2d{}-test-design.txt'.format(data_path, model_suffix) + + vTTmc = np.loadtxt(vbm_TT_mc_filename) + vZZmc = np.loadtxt(vbm_zz_mc_filename) + vAAmc = np.loadtxt(vbm_aa_mc_filename) + + return vTTmc, vZZmc, vAAmc + +def load_gpr_precomputed(gpr_pdf_path, ndim): + if (ndim > 6) : + qdim = 6 + else : + qdim = ndim + + qq_xx_filename = '{}{}-40-modes-bins.txt'.format(gpr_pdf_path, qdim) + qq_pp_filename = '{}{}-40-modes-hist.txt'.format(gpr_pdf_path, qdim) + + if ndim >= 6 : + # b/c the GPR VBM pdf is real bad for 6D, skip straight to the true MC + # data, equivalent to \inf D + mm_xx_filename = '{}mc-vbm-bins.txt'.format(gpr_pdf_path) + mm_pp_filename = '{}mc-vbm-hist.txt'.format(gpr_pdf_path) + else : + mm_xx_filename = '{}{}-40-vbm-bins.txt'.format(gpr_pdf_path, ndim) + mm_pp_filename = '{}{}-40-vbm-hist.txt'.format(gpr_pdf_path, ndim) + + qq_xx = np.loadtxt(qq_xx_filename) + qq_xx = 1/2*(qq_xx[0:-1] + qq_xx[1::]) # b/c big dumb I saved it wrong + qq_pp = np.loadtxt(qq_pp_filename) + mm_xx = np.loadtxt(mm_xx_filename) + mm_pp = np.loadtxt(mm_pp_filename) + + return qq_xx, qq_pp, mm_xx, mm_pp + +############ +# PCA Stuff +############ + +def project_onto_vector(x, v): + a = np.dot(x, v) / np.dot(v, v) + return a + + # + # PCA transform of VBM! + # + # sklearn doesn't automatically normalize the PCA components, so we do that + # by hand + # + # Actually, sklearn doesn't do PCA the same way I've been doing it, so I should + # use my other method. 
Probably smart PCA has various regularization stuff + # for statisticians that I don't want + # + +def pca_transform_z_2_q(vZZlhs, vZZmc, sklearn_pca_algo = False, n_q_modes=6) : + + n_lhs_data = vZZlhs.shape[0] + + if sklearn_pca_algo : + q_pca = sk.decomposition.PCA(n_components = n_q_modes) + q_pca.fit(vZZmc) + + #print(q_pca.explained_variance_ratio_) + #print(q_pca.singular_values_) + + q_lambda_mat = q_pca.get_covariance() + q_lambda_list = np.zeros([n_q_modes,]) + for k in range(0, n_q_modes): + q_lambda_list[k] = q_lambda_mat[k ,k] + + QQ_raw = q_pca.transform(vZZlhs) + QQ = np.zeros(QQ_raw.shape) + + for k in range(0, n_q_modes): + QQ[:, k] = QQ_raw[:, k] / np.sqrt( q_lambda_list[k]) + + else : + vv_var = np.var(vZZmc.ravel()) + vv_norm = vZZmc/np.sqrt(vv_var) + + CC = np.matmul(np.transpose(vv_norm), vv_norm) + CC = CC/n_lhs_data + w_vbm, v_vbm = np.linalg.eig(CC) + + QQ = np.zeros([n_lhs_data, n_q_modes]) + vZZlhs_norm = vZZlhs/np.sqrt(vv_var) + + for k in range(0, n_q_modes): + aa = project_onto_vector(vZZlhs_norm, v_vbm[:, k]) + QQ[:, k] = aa/np.sqrt(w_vbm[k]) + + return QQ, w_vbm, v_vbm, vv_var + + +####################### +# DNO Transform Things +####################### + + # + # These functions are defined for normalizing, standardizing, or flatenining interal to DeepONet + # + # Ethan sez: decimation_factor = 2 is good, but might even be too low + # + +def DNO_Y_transform(x, decimation_factor = 3): + x_transform = x/decimation_factor + return x_transform + +def DNO_Y_itransform(x_transform, decimation_factor = 3): + x = x_transform*decimation_factor + return x + +##################### +# Error calculations +##################### + +def calc_log_error(pq_true, pq_surr, dx=1, ii = range(50,2750), trunc_thresh= 1*10**-4): + # + # Can handle the long tail problem by truncating at finite x, or + # truncating at finite px. Currently, we do the latter + # + # This calcs log10-MAE, but we log10 it a second time for plotting? 
+    #
+    trunc_pq_true = np.maximum(pq_true, trunc_thresh)
+    trunc_pq_surr = np.maximum(pq_surr, trunc_thresh)
+    eps = np.sum(np.abs(np.log10(trunc_pq_surr) - np.log10(trunc_pq_true)))*(dx)
+    #eps = np.sum(np.abs(np.log10(pq_surr[ii]) - np.log10(pq_true[ii])))/(dx)
+    return eps
+
+
+#####################
+# Active Sampling Things
+#####################
+
+#
+# Peel out the Active Sampling calculation into a function, so that we
+# can call it on different sets of points for AS and for error calculations
+#
+
+def acq_calculation_rom(model_list, Theta_test, inputs, qq_xx=np.linspace(-10,10,10000),
+        input_rule='grd', sigma_n=0, numerical_eps=1*10**-16, acq_rule='US_LW',
+        cur_mode=0, as_target_quantity='mode-coefficient', n_q_modes=2,
+        v_vbm=1, w_vbm=1, vv_var=1):
+
+    test_pts = Theta_test.shape[0]
+    Mean_Val, Var_Val = acq_evaluate_rom(model_list, Theta_test,sigma_n=sigma_n,
+        cur_mode=cur_mode, as_target_quantity=as_target_quantity, n_q_modes=n_q_modes,
+        v_vbm=v_vbm, w_vbm=w_vbm, vv_var=vv_var)
+
+    # Determine Bounds for evaluating the metric
+    x_max = np.max(Mean_Val)
+    x_min = np.min(Mean_Val)
+    x_int = np.linspace(x_min,x_max,10000) # Linearly space points
+    #x_int_standard = np.linspace(-10,10,10000) # Static for pt-wise comparisons
+    x_int_standard = qq_xx
+
+    # Create the weights/exploitation values
+    if input_rule=='pdf' :
+        px = np.ones([Theta_test.shape[0],])
+    else :
+        px = inputs.pdf(Theta_test)
+
+    sc = scipy.stats.gaussian_kde(Mean_Val.reshape(test_pts,), weights=px) # Fit a Gaussian KDE using px input weights
+    py = sc.evaluate(x_int) # Evaluate at x_int
+    py[pyseed_start
-seed_end=1
-rank=1 # Set rank = 2, for 4D, 3 for 6D and 4 for 8D
-acq='US_LW'
-lam=-0.5
-batch_size=1
-n_init=3
-epochs=1000
-b_layers=5
-t_layers=1
-neurons=200
-n_guess=1
-init_method='lhs'
-model='DON' # Deep O Net
-objective='MaxAbsRe' #'MaxAbsRe'
-N=2 # Number of ensembles
-iters_max=100
-# Currently written to parallize seeds (i.e.
independent runs) - -for iter_num in $(seq 0 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - /Applications/MATLAB_R2020b.app/bin/matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done - -acq='US' -for iter_num in $(seq 0 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - /Applications/MATLAB_R2020b.app/bin/matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done - -acq='lhs' -for iter_num in $(seq 0 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls_lhs.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N $iters_max & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - /Applications/MATLAB_R2020b.app/bin/matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done - -acq='US' -model='GP' -for iter_num in $(seq 0 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls_gp.py $seed 
$iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - /Applications/MATLAB_R2020b.app/bin/matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done - -acq='US_LW' -model='GP' -for iter_num in $(seq 0 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls_gp.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - /Applications/MATLAB_R2020b.app/bin/matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done - -acq='lhs' -model='GP' -for iter_num in $(seq 0 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls_gp_lhs.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N $iters_max & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - /Applications/MATLAB_R2020b.app/bin/matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done diff --git a/dnosearch/examples/nls/Create_Figure1_Data_a_shell.sh b/dnosearch/examples/nls/Create_Figure3_Data_a_shell.sh similarity index 100% rename from dnosearch/examples/nls/Create_Figure1_Data_a_shell.sh rename to 
dnosearch/examples/nls/Create_Figure3_Data_a_shell.sh diff --git a/dnosearch/examples/nls/Create_Figure1_Data_abc_shell2.sh b/dnosearch/examples/nls/Create_Figure3_Data_abc_shell.sh similarity index 100% rename from dnosearch/examples/nls/Create_Figure1_Data_abc_shell2.sh rename to dnosearch/examples/nls/Create_Figure3_Data_abc_shell.sh diff --git a/dnosearch/examples/nls/Create_Figure1_Data_b_shell.sh b/dnosearch/examples/nls/Create_Figure3_Data_b_shell.sh similarity index 100% rename from dnosearch/examples/nls/Create_Figure1_Data_b_shell.sh rename to dnosearch/examples/nls/Create_Figure3_Data_b_shell.sh diff --git a/dnosearch/examples/nls/Create_Figure1_Data_c_shell.sh b/dnosearch/examples/nls/Create_Figure3_Data_c_shell.sh similarity index 100% rename from dnosearch/examples/nls/Create_Figure1_Data_c_shell.sh rename to dnosearch/examples/nls/Create_Figure3_Data_c_shell.sh diff --git a/dnosearch/examples/nls/Create_Figure1_Data_d_shell.sh b/dnosearch/examples/nls/Create_Figure3_Data_d_shell.sh similarity index 100% rename from dnosearch/examples/nls/Create_Figure1_Data_d_shell.sh rename to dnosearch/examples/nls/Create_Figure3_Data_d_shell.sh diff --git a/dnosearch/examples/nls/Create_Figure1_Data_e_N16_shell.sh b/dnosearch/examples/nls/Create_Figure3_Data_e_N16_shell.sh similarity index 100% rename from dnosearch/examples/nls/Create_Figure1_Data_e_N16_shell.sh rename to dnosearch/examples/nls/Create_Figure3_Data_e_N16_shell.sh diff --git a/dnosearch/examples/nls/Create_Figure1_Data_e_shell.sh b/dnosearch/examples/nls/Create_Figure3_Data_e_shell.sh similarity index 100% rename from dnosearch/examples/nls/Create_Figure1_Data_e_shell.sh rename to dnosearch/examples/nls/Create_Figure3_Data_e_shell.sh diff --git a/dnosearch/examples/nls/Create_Figure4_1_shell.sh b/dnosearch/examples/nls/Create_Figure4_1_shell.sh deleted file mode 100755 index 35d2514..0000000 --- a/dnosearch/examples/nls/Create_Figure4_1_shell.sh +++ /dev/null @@ -1,91 +0,0 @@ -# This shell script 
currently performs 1 experiement (seed 1) of all models for the 2D case. -# The script may be changed for additional experiments (seeds 2-10, etc) and the 4D, 6D, and 8D cases. -# CRITICAL DIFFERENCE: here we choose N=2 while the paper uses N=10, this can easily be changed below to replicate the paper, but as we show, why make life 5 times as hard?! -# MATLAB NOTE, ensure the path to MATLAB is correct for your system - -seed_start=1 # The start and end values give the number of experiments, these will run in parallel if seed_end>seed_start -seed_end=3 -rank=10 # Set rank = 2, for 4D, 3 for 6D and 4 for 8D -acq='US_LW' -lam=-0.5 -batch_size=50 -n_init=11 -epochs=1000 -b_layers=5 -t_layers=1 -neurons=200 -n_guess=1 -init_method='lhs' -model='DON' # Deep O Net -objective='MaxAbsRe' #'MaxAbsRe' -N=2 # Number of ensembles -iters_max=50 -# Currently written to parallize seeds (i.e. independent runs) - - -acq='US' -for iter_num in $(seq 0 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done - -iters_max=100 -b_layers=6 -for iter_num in $(seq 51 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; 
model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done -iters_max=50 - -acq='US_LW' -for iter_num in $(seq 0 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done - - -iters_max=100 -b_layers=6 -for iter_num in $(seq 51 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done -iters_max=50 - - diff --git a/dnosearch/examples/nls/Create_Figure4_2_shell.sh b/dnosearch/examples/nls/Create_Figure4_2_shell.sh deleted file mode 100755 index fe52a1d..0000000 --- a/dnosearch/examples/nls/Create_Figure4_2_shell.sh +++ /dev/null @@ -1,54 +0,0 @@ -# This shell script currently performs 1 experiement (seed 1) of all models for the 2D case. -# The script may be changed for additional experiments (seeds 2-10, etc) and the 4D, 6D, and 8D cases. -# CRITICAL DIFFERENCE: here we choose N=2 while the paper uses N=10, this can easily be changed below to replicate the paper, but as we show, why make life 5 times as hard?! 
-# MATLAB NOTE, ensure the path to MATLAB is correct for your system - -seed_start=1 # The start and end values give the number of experiments, these will run in parallel if seed_end>seed_start -seed_end=3 -rank=10 # Set rank = 2, for 4D, 3 for 6D and 4 for 8D -acq='US_LW' -lam=-0.5 -batch_size=50 -n_init=11 -epochs=1000 -b_layers=5 -t_layers=1 -neurons=200 -n_guess=1 -init_method='lhs' -model='DON' # Deep O Net -objective='MaxAbsRe' #'MaxAbsRe' -N=2 # Number of ensembles -iters_max=50 -# Currently written to parallize seeds (i.e. independent runs) - -acq='lhs' -for iter_num in $(seq 0 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls_lhs.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N 100 & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done - -iters_max=100 -b_layers=6 -for iter_num in $(seq 51 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls_lhs.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N 100 & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done diff --git a/dnosearch/examples/nls/Create_Figure4_3_shell.sh b/dnosearch/examples/nls/Create_Figure4_3_shell.sh deleted file mode 100755 index 484844e..0000000 --- a/dnosearch/examples/nls/Create_Figure4_3_shell.sh +++ /dev/null @@ -1,92 +0,0 @@ -# This shell 
script currently performs 1 experiement (seed 1) of all models for the 2D case. -# The script may be changed for additional experiments (seeds 2-10, etc) and the 4D, 6D, and 8D cases. -# CRITICAL DIFFERENCE: here we choose N=2 while the paper uses N=10, this can easily be changed below to replicate the paper, but as we show, why make life 5 times as hard?! -# MATLAB NOTE, ensure the path to MATLAB is correct for your system - -seed_start=1 # The start and end values give the number of experiments, these will run in parallel if seed_end>seed_start -seed_end=3 -rank=10 # Set rank = 2, for 4D, 3 for 6D and 4 for 8D -acq='US_LW' -lam=-0.5 -batch_size=50 -n_init=11 -epochs=1000 -b_layers=5 -t_layers=1 -neurons=200 -n_guess=1 -init_method='lhs' -model='DON' # Deep O Net -objective='MaxAbsRe' #'MaxAbsRe' -N=2 # Number of ensembles -iters_max=50 -# Currently written to parallize seeds (i.e. independent runs) - -acq='US' -init_method='pdf' -for iter_num in $(seq 0 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done - -iters_max=100 -b_layers=6 -for iter_num in $(seq 51 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; 
init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done -iters_max=50 - - - -acq='US_LW' -init_method='pdf' -for iter_num in $(seq 0 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done - -iters_max=100 -b_layers=6 -for iter_num in $(seq 51 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done -iters_max=50 - diff --git a/dnosearch/examples/nls/Create_Figure1_Data_4_ab_shell.sh b/dnosearch/examples/nls/Create_Figure4_Data_ab_shell.sh similarity index 100% rename from dnosearch/examples/nls/Create_Figure1_Data_4_ab_shell.sh rename to dnosearch/examples/nls/Create_Figure4_Data_ab_shell.sh diff --git a/dnosearch/examples/nls/IC/.DS_Store b/dnosearch/examples/nls/IC/.DS_Store deleted file mode 100644 index 5008ddf..0000000 Binary files a/dnosearch/examples/nls/IC/.DS_Store and /dev/null differ diff --git a/dnosearch/examples/nls/IC/Rank10_Xs1.mat b/dnosearch/examples/nls/IC/Rank10_Xs1.mat deleted file mode 100644 index e545bb4..0000000 Binary files 
a/dnosearch/examples/nls/IC/Rank10_Xs1.mat and /dev/null differ diff --git a/dnosearch/examples/nls/IC/Rank2_Xs1.mat b/dnosearch/examples/nls/IC/Rank2_Xs1.mat deleted file mode 100644 index 96ba3ad..0000000 Binary files a/dnosearch/examples/nls/IC/Rank2_Xs1.mat and /dev/null differ diff --git a/dnosearch/examples/nls/IC/Rank3_Xs1.mat b/dnosearch/examples/nls/IC/Rank3_Xs1.mat deleted file mode 100644 index 6e430fb..0000000 Binary files a/dnosearch/examples/nls/IC/Rank3_Xs1.mat and /dev/null differ diff --git a/dnosearch/examples/nls/IC/Rank4_Xs1.mat b/dnosearch/examples/nls/IC/Rank4_Xs1.mat deleted file mode 100644 index 89214ca..0000000 Binary files a/dnosearch/examples/nls/IC/Rank4_Xs1.mat and /dev/null differ diff --git a/dnosearch/examples/nls/__pycache__/complex_noise.cpython-38.pyc b/dnosearch/examples/nls/__pycache__/complex_noise.cpython-38.pyc index 20e4f2b..d29bdea 100644 Binary files a/dnosearch/examples/nls/__pycache__/complex_noise.cpython-38.pyc and b/dnosearch/examples/nls/__pycache__/complex_noise.cpython-38.pyc differ diff --git a/dnosearch/examples/nls/__pycache__/oscillator.cpython-38.pyc b/dnosearch/examples/nls/__pycache__/oscillator.cpython-38.pyc index a014eb1..976ca59 100644 Binary files a/dnosearch/examples/nls/__pycache__/oscillator.cpython-38.pyc and b/dnosearch/examples/nls/__pycache__/oscillator.cpython-38.pyc differ diff --git a/dnosearch/examples/nls/__pycache__/rbf.cpython-38.pyc b/dnosearch/examples/nls/__pycache__/rbf.cpython-38.pyc deleted file mode 100644 index d1235fa..0000000 Binary files a/dnosearch/examples/nls/__pycache__/rbf.cpython-38.pyc and /dev/null differ diff --git a/dnosearch/examples/nls/model/checkpoint b/dnosearch/examples/nls/model/checkpoint deleted file mode 100644 index f8f057b..0000000 --- a/dnosearch/examples/nls/model/checkpoint +++ /dev/null @@ -1,2 +0,0 @@ -model_checkpoint_path: "N1seed1_Rank10_DON_lhs_Seed1_N2_model.ckpt-1000" -all_model_checkpoint_paths: "N1seed1_Rank10_DON_lhs_Seed1_N2_model.ckpt-1000" 
diff --git a/dnosearch/examples/nls/nls.py b/dnosearch/examples/nls/nls.py index c824e26..94aae6f 100755 --- a/dnosearch/examples/nls/nls.py +++ b/dnosearch/examples/nls/nls.py @@ -12,7 +12,8 @@ from dnosearch import (BlackBox, GaussianInputs, DeepONet) from oscillator import Noise import deepxde as dde - +from KDEpy import FFTKDE + # NLS Import from complex_noise import Noise_MMT @@ -230,7 +231,12 @@ def DNO_Y_itransform(x_transform): # Create the weights/exploitation values px = inputs.pdf(Theta_test) - sc = scipy.stats.gaussian_kde(Mean_Val.reshape(n_monte,), weights=px) # Fit a guassian kde using px input weights + if rank > 9: + bw = 0.035 + sc = FFTKDE(bw=bw).fit(Mean_Val.reshape(n_monte,), px) # Special bandwidth for high-dimensional problems. + else: + sc = scipy.stats.gaussian_kde(Mean_Val.reshape(n_monte,), weights=px) # Fit a Gaussian kde using px input weights + py = sc.evaluate(x_int) # Evaluate at x_int py[py<10**-16] = 10**-16 # Eliminate spuriously small values (smaller than numerical precision) py_standard = sc.evaluate(x_int_standard) # Evaluate for pt-wise comparisons diff --git a/dnosearch/examples/nls/plotting_nls.py b/dnosearch/examples/nls/plotting_nls.py new file mode 100755 index 0000000..b631607 --- /dev/null +++ b/dnosearch/examples/nls/plotting_nls.py @@ -0,0 +1,367 @@ +#!/usr/bin/env python3 +# -*- coding: utf-8 -*- +""" +Created on Fri May 27 21:07:44 2022 + +@author: ethanpickering +""" + +from dnosearch import (GaussianInputs) +import scipy.io as sio +import numpy as np +import scipy +import matplotlib.pyplot as plt + + +print('This script is meant to be run in an IDE, but can be adjusted to print/save plots if desired.') +print('To plot, lite 3-seed results are provided in the data folder, and the directory currently points there.') +data_dir = '../../../../../data/nls/Lite_Results/results/' +print('To plot results run from this script on your computer, choose the following directory instead: ./results/') +#data_dir =
'./results/' + +# Plot a three-seed version of Figure 3a + +# Load in the truth data +def Calc_Truth_PDF(rank): + Theta_validation_file = '../../../../../data/nls/truth_data/Rank'+str(rank)+'_Xs1.mat' + if rank==1: + Y_validation_file = '../../../../../data/nls/truth_data/Rank'+str(rank)+'_lam-0.5_t50_1_1024_MaxAbsRe.mat' + else: + Y_validation_file = '../../../../../data/nls/truth_data/Rank'+str(rank)+'_lam-0.5_t50_1_100000_MaxAbsRe.mat' + + d = sio.loadmat(Theta_validation_file) + Theta = d['Xs'] + d = sio.loadmat(Y_validation_file) + Y = d['Ys'] + + # Define the input probabilities + ndim = rank*2 + mean, cov = np.zeros(ndim), np.ones(ndim) + domain = [ [-6, 6] ] * ndim + inputs = GaussianInputs(domain, mean, cov) + ptheta = inputs.pdf(Theta) + + # Calculate the resulting output distribution + sc = scipy.stats.gaussian_kde(Y.reshape(np.size(Y),), weights=ptheta) # Fit a Gaussian kde using px input weights + y_int = np.linspace(0,1,1000) + py = sc.evaluate(y_int) # Evaluate at x_int + py[py<10**-16] = 10**-16 # Eliminate spuriously small values (smaller than numerical precision) + + # If there is a desire to plot the true pdf + #plt.plot(y_int, np.log10(py)); plt.title('Truth PDF for '+str(ndim)+'D') + #plt.ylabel('PDF'); plt.xlabel('y') + #plt.show() + return y_int, py, ptheta + + +def PDF_Error(rank,acq,model,y_int,py,ptheta,seeds,iterations,N,batch,init): + Error = np.zeros((iterations,seeds)) + for j in range(1,seeds+1): + for i in range(1,iterations+1): + d = sio.loadmat(data_dir+'Rank'+str(rank)+'_'+model+'_'+acq+'_Seed'+str(j)+'_N'+str(N)+'_Batch_'+str(batch)+'_Init_'+init+'_Iteration'+str(i)+'.mat') + #print(d) + pya = d['py'] + #sca = scipy.stats.gaussian_kde(Ya.reshape(np.size(Ya),), weights=ptheta) # Fit a Gaussian kde using px input weights + #pya = sca.evaluate(y_int) # Evaluate at x_int + #pya[pya<10**-16] = 10**-16 # Eliminate spuriously small values (smaller than numerical precision) + py_diff =
np.abs(np.log10(pya.reshape(1000,))-np.log10(py.reshape(1000,))) + Error[i-1,j-1] = np.sum(py_diff[0:-1])*(y_int[1]-y_int[0]) + # plt.plot(y_int.reshape(1000,), np.log10(py.reshape(1000,))) + # plt.plot(y_int.reshape(1000,), np.log10(pya.reshape(1000,))) + # plt.title(str(i)) + # plt.show() + return Error + +#%% Figure 3a +rank=1 +seeds = 1 +iterations = 300 +N = 10 +batch = 1 +init = 'lhs' +y_int, py, ptheta = Calc_Truth_PDF(rank) + + +Error_GP_LHS = PDF_Error(rank,'lhs','GP',y_int,py,ptheta,seeds,iterations,N,batch,init) +Error_NN_LHS = PDF_Error(rank,'lhs','DON',y_int,py,ptheta,seeds,iterations,N,batch,init) + +Error_GP_US = PDF_Error(rank,'US','GP',y_int,py,ptheta,seeds,iterations,N,batch,init) +Error_NN_US = PDF_Error(rank,'US','DON',y_int,py,ptheta,seeds,iterations,N,batch,init) + +Error_GP_US_LW = PDF_Error(rank,'US_LW','GP',y_int,py,ptheta,seeds,iterations,N,batch,init) +Error_NN_US_LW = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,iterations,N,batch,init) + +plt.plot(np.median(np.log10(Error_NN_LHS),axis=1), 'r:', label='NN LHS'); +plt.plot(np.median(np.log10(Error_NN_US),axis=1), 'r--', label='NN US'); +plt.plot(np.median(np.log10(Error_NN_US_LW),axis=1), 'r',label='NN USLW' ); +plt.plot(np.median(np.log10(Error_GP_LHS),axis=1), 'b:', label='GP LHS'); +plt.plot(np.median(np.log10(Error_GP_US),axis=1), 'b--', label='GP US'); +plt.plot(np.median(np.log10(Error_GP_US_LW),axis=1), 'b',label='GP USLW'); +plt.title('Lite Version of Figure 3a (1 Seed, N=10)') +plt.legend() +plt.show() + +#%% Figure 3b +rank=2 +seeds = 1 +iterations = 300 +N = 10 +batch = 1 +init = 'lhs' +y_int, py, ptheta = Calc_Truth_PDF(rank) + + +Error_GP_LHS = PDF_Error(rank,'lhs','GP',y_int,py,ptheta,seeds,iterations,N,batch,init) +Error_NN_LHS = PDF_Error(rank,'lhs','DON',y_int,py,ptheta,seeds,iterations,N,batch,init) + +Error_GP_US = PDF_Error(rank,'US','GP',y_int,py,ptheta,seeds,iterations,N,batch,init) +Error_NN_US = 
PDF_Error(rank,'US','DON',y_int,py,ptheta,seeds,iterations,N,batch,init) + +Error_GP_US_LW = PDF_Error(rank,'US_LW','GP',y_int,py,ptheta,seeds,iterations,N,batch,init) +Error_NN_US_LW = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,177,N,batch,init) + +plt.plot(np.median(np.log10(Error_NN_LHS),axis=1), 'r:', label='NN LHS'); +plt.plot(np.median(np.log10(Error_NN_US),axis=1), 'r--', label='NN US'); +plt.plot(np.median(np.log10(Error_NN_US_LW),axis=1), 'r',label='NN US-LW' ); +plt.plot(np.median(np.log10(Error_GP_LHS),axis=1), 'b:', label='GP LHS'); +plt.plot(np.median(np.log10(Error_GP_US),axis=1), 'b--', label='GP US'); +plt.plot(np.median(np.log10(Error_GP_US_LW),axis=1), 'b',label='GP US-LW'); +plt.title('Lite Version of Figure 3b (1 Seed, N=10)') +plt.legend() +plt.show() + +#%% Figure 3c +rank=3 +seeds = 1 +iterations = 300 +N = 10 +batch = 1 +init = 'lhs' +y_int, py, ptheta = Calc_Truth_PDF(rank) + + +Error_GP_LHS = PDF_Error(rank,'lhs','GP',y_int,py,ptheta,seeds,iterations,N,batch,init) +Error_NN_LHS = PDF_Error(rank,'lhs','DON',y_int,py,ptheta,seeds,iterations,N,batch,init) + +Error_GP_US = PDF_Error(rank,'US','GP',y_int,py,ptheta,seeds,iterations,N,batch,init) +Error_NN_US = PDF_Error(rank,'US','DON',y_int,py,ptheta,seeds,iterations,N,batch,init) + +Error_GP_US_LW = PDF_Error(rank,'US_LW','GP',y_int,py,ptheta,seeds,iterations,N,batch,init) +Error_NN_US_LW = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,iterations,N,batch,init) + +plt.plot(np.median(np.log10(Error_NN_LHS),axis=1), 'r:', label='NN LHS'); +plt.plot(np.median(np.log10(Error_NN_US),axis=1), 'r--', label='NN US'); +plt.plot(np.median(np.log10(Error_NN_US_LW),axis=1), 'r',label='NN US-LW' ); +plt.plot(np.median(np.log10(Error_GP_LHS),axis=1), 'b:', label='GP LHS'); +plt.plot(np.median(np.log10(Error_GP_US),axis=1), 'b--', label='GP US'); +plt.plot(np.median(np.log10(Error_GP_US_LW),axis=1), 'b',label='GP US-LW'); +plt.title('Lite Version of Figure 3c (1 Seed, N=10)') +plt.legend() 
+plt.show() + +#%% Figure 3d +rank=4 +seeds = 1 +iterations = 300 +N = 10 +batch = 1 +init = 'lhs' +y_int, py, ptheta = Calc_Truth_PDF(rank) + + +Error_1 = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,300,N,1,init) +Error_5 = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,60,N,5,init) +Error_10 = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,30,N,10,init) +Error_25 = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,12,N,25,init) + +plt.plot(np.linspace(1,300,300),np.median(np.log10(Error_1),axis=1), 'b', label='1'); +plt.plot(np.linspace(1,300,60),np.median(np.log10(Error_5),axis=1), 'r', label='5'); +plt.plot(np.linspace(1,300,30),np.median(np.log10(Error_10),axis=1), 'k',label='10' ); +plt.plot(np.linspace(1,300,12),np.median(np.log10(Error_25),axis=1), 'g', label='25'); +plt.title('Lite Version of Figure 3d (1 Seed, N=10)') +plt.legend() +plt.show() + +#%% Figure 3e +rank=4 +seeds = 1 +iterations = 100 +batch = 50 +init = 'lhs' +y_int, py, ptheta = Calc_Truth_PDF(rank) + + +Error_2 = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,100,2,50,init) +Error_4 = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,100,4,50,init) +Error_8 = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,100,8,50,init) +Error_16 = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,100,16,50,init) + +plt.plot(np.median(np.log10(Error_2),axis=1), 'b', label='N=2'); +plt.plot(np.median(np.log10(Error_4),axis=1), 'r', label='N=4'); +plt.plot(np.median(np.log10(Error_8),axis=1), 'k',label='N=8' ); +plt.plot(np.median(np.log10(Error_16),axis=1), 'g', label='N=16'); +plt.title('Lite Version of Figure 3e (1 Seed, batch=50)') +plt.legend() +plt.show() + +#%% Figure 3f +print('Note: this figure requires more than one seed; only one seed is provided here, and several more runs are needed for useful results.') +rank=4 +seeds = 1 +iterations = 100 +batch = 50 +init = 'lhs' +y_int, py, ptheta = Calc_Truth_PDF(rank) + + +Error_2 =
PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,100,2,50,init) +Error_4 = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,100,4,50,init) +Error_8 = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,100,8,50,init) +Error_16 = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,100,16,50,init) + +plt.plot(np.var(np.log10(Error_2),axis=1), 'b', label='N=2'); +plt.plot(np.var(np.log10(Error_4),axis=1), 'r', label='N=4'); +plt.plot(np.var(np.log10(Error_8),axis=1), 'k',label='N=8' ); +plt.plot(np.var(np.log10(Error_16),axis=1), 'g', label='N=16'); +plt.title('Lite Version of Figure 3f (>1 Seed?, batch=50)') +plt.legend() +plt.show() + +#%% Figure 4a +print('This will only be representative with several independent experiments.') +rank=10 +seeds = 1 +iterations = 100 +batch = 50 +y_int, py, ptheta = Calc_Truth_PDF(rank) + +samples = np.linspace(50,50000,100) +samples_log = np.log10(samples) + +Error_lhs = PDF_Error(rank,'lhs','DON',y_int,py,ptheta,seeds,100,2,50,'lhs') +Error_NN_USLW_lhs = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,100,2,50,'lhs') +Error_NN_USLW_pdf = PDF_Error(rank,'US_LW','DON',y_int,py,ptheta,seeds,100,2,50,'pdf') +Error_NN_US_lhs = PDF_Error(rank,'US','DON',y_int,py,ptheta,seeds,100,2,50,'lhs') +Error_NN_US_pdf = PDF_Error(rank,'US','DON',y_int,py,ptheta,seeds,100,2,50,'pdf') + +plt.plot(samples,np.median(np.log10(Error_lhs),axis=1), 'ko', label='LHS'); +plt.plot(samples,np.median(np.log10(Error_NN_US_lhs),axis=1), 'go', label='US: lhs'); +plt.plot(samples,np.median(np.log10(Error_NN_US_pdf),axis=1), 'mo', label='US: pdf'); +plt.plot(samples,np.median(np.log10(Error_NN_USLW_lhs),axis=1), 'ro', label='US_LW: lhs'); +plt.plot(samples,np.median(np.log10(Error_NN_USLW_pdf),axis=1), 'bo', label='US_LW: pdf'); +plt.title('Lite Version of Figure 4a (1 Seed, batch=50)') +plt.legend() +plt.show() + +#%% Figure 4b +plt.plot(samples_log,np.median(np.log10(Error_lhs),axis=1), 'ko', label='LHS'); 
+plt.plot(samples_log,np.median(np.log10(Error_NN_US_lhs),axis=1), 'go', label='US: lhs'); +plt.plot(samples_log,np.median(np.log10(Error_NN_US_pdf),axis=1), 'mo', label='US: pdf'); +plt.plot(samples_log,np.median(np.log10(Error_NN_USLW_lhs),axis=1), 'ro', label='US_LW: lhs'); +plt.plot(samples_log,np.median(np.log10(Error_NN_USLW_pdf),axis=1), 'bo', label='US_LW: pdf'); +plt.title('Lite Version of Figure 4b (1 Seed, batch=50)') +plt.legend() +plt.show() + +#%% Figure 4c +from complex_noise import Noise_MMT +tf = 1 +rank=10 +noise = Noise_MMT([0, tf], rank) + +def Theta_to_U(Theta,nsteps,coarse,rank): + # We can also coarsen the steps; 512 is likely finer than needed for DeepONet + Theta = np.atleast_2d(Theta) + U = np.zeros((np.shape(Theta)[0],2*int(nsteps/coarse)),dtype=np.complex_) + + # Determine real and imaginary inds + dim = int(np.shape(Theta)[1]/2) + xr = Theta[:,0:(dim)] + xi = Theta[:,dim:dim*2] + x = xr + 1j*xi + Us = np.transpose(noise.get_sample(x)) + coarser_inds = np.linspace(0,nsteps-1,int(nsteps/coarse)).astype(int) + + real_inds = np.linspace(0,nsteps/coarse*2-2,int(nsteps/coarse)).astype(int) + imag_inds = np.linspace(1,nsteps/coarse*2-1,int(nsteps/coarse)).astype(int) + + U[:,real_inds] = np.real(Us[:,coarser_inds]) + U[:,imag_inds] = np.imag(Us[:,coarser_inds]) + return U + +j = 1 +i=100 +iterations = 100 +batch = 50 +acq = 'US_LW' +model = 'DON' +init='lhs' +N = 2 +d = sio.loadmat(data_dir+'Rank'+str(rank)+'_'+model+'_'+acq+'_Seed'+str(j)+'_N'+str(N)+'_Batch_'+str(batch)+'_Init_'+init+'_Iteration'+str(i)+'.mat') +Y = d['Y'] +Theta = d['Theta'] + + +x = np.linspace(0,1,128) +inds = np.linspace(0,254,128).astype(int) + + +Us_initial = Theta_to_U(Theta[0:10,:], 512, 4, rank) +for i in range(0,10): + alpha_val = Y[i]/np.max(Y[0:50]) + plt.plot(x, np.transpose(np.real(Us_initial[i,inds])), 'k', alpha=alpha_val[0], label=str(np.round(alpha_val[0],2))) +plt.legend() +plt.title('Figure 4c left') +plt.xlabel('x') +plt.ylabel('Re(u)') +plt.show() + +Us_sampled
= Theta_to_U(Theta[4800:4810,:], 512, 4, rank) + +for i in range(0,10): + alpha_val = Y[4800+i]/np.max(Y[4800:4810]) + plt.plot(x, np.transpose(np.real(Us_sampled[i,inds])), 'k', alpha=alpha_val[0], label=str(np.round(alpha_val[0],2))) +plt.legend() +plt.title('Figure 4c right') +plt.xlabel('x') +plt.ylabel('Re(u)') +plt.show() + + + + +#%% Figure 4d +j = 1 +i=100 +iterations = 100 +init='pdf' +d = sio.loadmat(data_dir+'Rank'+str(rank)+'_'+model+'_'+acq+'_Seed'+str(j)+'_N'+str(N)+'_Batch_'+str(batch)+'_Init_'+init+'_Iteration'+str(i)+'.mat') +Y = d['Y'] +Theta = d['Theta'] + + +x = np.linspace(0,1,128) +inds = np.linspace(0,254,128).astype(int) + +Us_initial = Theta_to_U(Theta[0:10,:], 512, 4, rank) +for i in range(0,10): + alpha_val = Y[i]/np.max(Y[0:50]) + plt.plot(x, np.transpose(np.real(Us_initial[i,inds])), 'k', alpha=alpha_val[0], label=str(np.round(alpha_val[0],2))) +plt.legend() +plt.title('Figure 4d left') +plt.xlabel('x') +plt.ylabel('Re(u)') +plt.show() + +Us_sampled = Theta_to_U(Theta[4800:4810,:], 512, 4, rank) + +for i in range(0,10): + alpha_val = Y[4800+i]/np.max(Y[4800:4810]) + plt.plot(x, np.transpose(np.real(Us_sampled[i,inds])), 'k', alpha=alpha_val[0], label=str(np.round(alpha_val[0],2))) +plt.legend() +plt.title('Figure 4d right') +plt.xlabel('x') +plt.ylabel('Re(u)') +plt.show() + + diff --git a/dnosearch/examples/nls/testing.sh b/dnosearch/examples/nls/testing.sh deleted file mode 100755 index 708e408..0000000 --- a/dnosearch/examples/nls/testing.sh +++ /dev/null @@ -1,34 +0,0 @@ -seed_start=1 # The start and end values give the number of experiments, these will run in parallel if seed_end>seed_start -seed_end=1 -rank=1 # Set rank = 2, for 4D, 3 for 6D and 4 for 8D -acq='US_LW' -lam=-0.5 -batch_size=1 -n_init=3 -epochs=1000 -b_layers=5 -t_layers=1 -neurons=200 -n_guess=1 -init_method='lhs' -model='DON' # Deep O Net -objective='MaxAbsRe' #'MaxAbsRe' -N=2 # Number of ensembles -iters_max=2 -# Currently written to parallize seeds (i.e. 
independent runs) - -acq='lhs' -model='GP' -for iter_num in $(seq 0 $iters_max) -do - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - python3 ./nls_gp_lhs.py $seed $iter_num $rank $acq $lam $batch_size $n_init $epochs $b_layers $t_layers $neurons $n_guess $init_method $model $N $iters_max & - done - wait - for ((seed=$seed_start;seed<=$seed_end;seed++)) - do - /Applications/MATLAB_R2020b.app/bin/matlab -nojvm -nodesktop -r "seed=$seed; iter_num=$iter_num; rank=$rank; acq='$acq'; lam=$lam; batch_size=$batch_size; n_guess=$n_guess; init_method='$init_method'; model='$model'; objective='$objective'; N=$N; mmt_search; exit" & - done - wait -done \ No newline at end of file diff --git a/dnosearch/examples/nls/IC/Rank1_Xs1.mat b/dnosearch/examples/nls/validation_files/Rank1_Xs1.mat similarity index 100% rename from dnosearch/examples/nls/IC/Rank1_Xs1.mat rename to dnosearch/examples/nls/validation_files/Rank1_Xs1.mat diff --git a/dnosearch/examples/sir/.DS_Store b/dnosearch/examples/sir/.DS_Store index acf641e..5cadc63 100644 Binary files a/dnosearch/examples/sir/.DS_Store and b/dnosearch/examples/sir/.DS_Store differ diff --git a/dnosearch/examples/sir/data/.DS_Store b/dnosearch/examples/sir/data/.DS_Store deleted file mode 100644 index 5008ddf..0000000 Binary files a/dnosearch/examples/sir/data/.DS_Store and /dev/null differ diff --git a/dnosearch/examples/sir/data/SIR_Errors_Seed_3_N2.mat b/dnosearch/examples/sir/data/SIR_Errors_Seed_3_N2.mat new file mode 100644 index 0000000..2ffdb23 Binary files /dev/null and b/dnosearch/examples/sir/data/SIR_Errors_Seed_3_N2.mat differ diff --git a/dnosearch/examples/sir/plots/.DS_Store b/dnosearch/examples/sir/plots/.DS_Store deleted file mode 100644 index 5008ddf..0000000 Binary files a/dnosearch/examples/sir/plots/.DS_Store and /dev/null differ
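Reviewer note on the nls.py hunk above: it replaces a single weighted scipy.stats.gaussian_kde fit with a rank-dependent branch, using KDEpy's FFTKDE (bw=0.035) when rank > 9. A minimal, self-contained sketch of the low-rank SciPy path on synthetic data (variable names mirror nls.py, but Mean_Val and px here are made-up stand-ins, not values from the actual model):

```python
import numpy as np
import scipy.stats

# Synthetic stand-ins for nls.py's Mean_Val (surrogate outputs) and
# px (input-pdf likelihood weights); real values come from the DeepONet.
rng = np.random.default_rng(0)
n_monte = 1000
mean_val = rng.normal(size=n_monte)
px = np.abs(rng.normal(size=n_monte))

x_int = np.linspace(-3, 3, 1000)

# Weighted Gaussian KDE, as in the rank <= 9 branch of the patch
sc = scipy.stats.gaussian_kde(mean_val.reshape(n_monte,), weights=px)
py = sc.evaluate(x_int)
py[py < 10**-16] = 10**-16  # clip spuriously small values, as in nls.py
```

For rank > 9 the patch instead calls `FFTKDE(bw=0.035).fit(Mean_Val.reshape(n_monte,), px).evaluate(...)` from KDEpy, trading SciPy's automatic bandwidth for a fixed one that behaves better on the high-dimensional cases.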