Flexibility Analysis #1398
Conversation
Thanks!
@@ -29,6 +30,7 @@ jsonschema
jupyter_contrib_nbextensions
snowballstemmer==1.2.1
addheader>=0.2.2
+highspy
Would SCIP be sufficient for these tests? We already have SCIP available (conditionally) for use in the Diagnostics tests, and we do try to keep our dependencies to a minimum.
SCIP would work, but it makes the tests way slower. I'm solving a bunch of LPs with changes only to parameters. The Pyomo interface to HiGHS is a persistent interface, and is much faster than the interface to SCIP for that type of workflow.
We might want to see if it makes sense to do something like @lbianchi-lbl did for SCIP then so that it does not need to be a dependency in requirements-dev.txt. If I recall, we basically have a check that sees if SCIP is available and otherwise skips the tests. We then just make sure we have it installed on the test server (via GHA setup) so the test will run on the server.
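The check-and-skip pattern being described can be sketched with the standard library alone (the executable lookup and test body here are illustrative, not the actual Diagnostics test code):

```python
import shutil
import unittest

# Detect the solver executable on PATH; if it is absent the test is
# skipped locally, but still runs on the test server where the GHA
# setup installs the solver.
scip_available = shutil.which("scip") is not None

class TestFlexibility(unittest.TestCase):
    @unittest.skipUnless(scip_available, "SCIP executable not found")
    def test_solver_dependent_case(self):
        # the actual solver-dependent assertions would go here
        self.assertTrue(scip_available)
```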
Sounds good to me. As long as it gets installed in GHA so the tests run, I'm happy.
Not sure if it applies, but would the other way around work? I.e., is it possible to use HiGHS instead of SCIP for our existing tests for which SCIP is currently being used? If so, it might make sense to consider having highspy as a dev dependency and just use it for testing in both cases (of course, this is purely on the side of streamlining dependencies and very much solver-domain-ignorant, so feel free to add why this wouldn't work).
I actually need both. SCIP also solves nonlinear problems, while HiGHS does not...
Ah, that makes sense then, thanks for clarifying!
omlt, nump_available = attempt_import("omlt")
if not tensorflow_available or not nump_available:
Suggested change:
-omlt, nump_available = attempt_import("omlt")
-if not tensorflow_available or not nump_available:
+omlt, omlt_available = attempt_import("omlt")
+if not tensorflow_available or not omlt_available:
Good catch! Thanks!
Here are some comments on the first file (I'll try to get through the rest tomorrow).
One general comment: could you add yourself to the top-level CODEOWNERS file as the maintainer for this code (or nominate someone)?
""" | ||
This module is only used to check dependencies for unit tests. | ||
""" | ||
import unittest |
Seeing as this is just for testing, should it be moved into the tests folder?
Great point. I'll move this.
""" | ||
This module is only used to check dependencies for unit tests. | ||
""" | ||
import unittest |
As above.
works for both APPSI solver interfaces and non-appsi solver interfaces.
"""
import pyomo.environ as pe
from pyomo.contrib import appsi
A minor comment (so it does not need to be fixed), but this seems like it would be better located in Pyomo (probably in pyomo.contrib with the appsi code, seeing as it relates to that).
Great point.
visibility=0,
):
    super().__init__(
        description=description,
Is the purpose of this just to change the default values for the implicit and implicit_domain arguments (whilst still leaving them as arguments)?
I don't think I changed any defaults. This class is just meant to be a base class for other, related config classes. I guess it is not really necessary. I think I just created it for type hinting.
visibility=visibility,
)
self.solver = self.declare(
    "solver", ConfigValue(default=pe.SolverFactory("ipopt"))
Should this also have a doc attribute, just in case someone tries to query the ConfigDict?
Absolutely. I'll add one.
implicit_domain=implicit_domain,
    visibility=visibility,
)
self.n_layers: int = self.declare(
These might be obvious to people with more experience with neural networks, but it might be good to have some short doc strings for these arguments for less experienced users.
config: ReluDRConfig = config()
if config.batch_size > n_samples:
    config.batch_size = n_samples
It might be good to log a message for the user here to let them know they set a batch size greater than the number of samples and thus this value was changed.
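A minimal sketch of the suggested message, using a hypothetical helper around the same clamping logic:

```python
import logging

logger = logging.getLogger(__name__)

def clamp_batch_size(batch_size, n_samples):
    # Hypothetical helper: warn the user before silently overriding
    # a batch size larger than the number of training samples.
    if batch_size > n_samples:
        logger.warning(
            "batch_size (%d) is greater than the number of samples (%d); "
            "reducing batch_size to %d",
            batch_size, n_samples, n_samples,
        )
        return n_samples
    return batch_size
```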
for _ in range(config.n_layers - 1):
    nn.add(keras.layers.Dense(config.n_nodes_per_layer, activation="relu"))
nn.add(keras.layers.Dense(len(outputs)))
if config.learning_rate is None:
I assume this is needed as keras does not accept None as a value for learning_rate (i.e., if you provide the argument, it must have a meaningful value). An inline comment might help to make sure no one tries to "fix" this later.
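The intent could also be made explicit with a small helper plus comment, along these lines (the helper name and default value are illustrative):

```python
def resolve_learning_rate(user_lr, default_lr=1e-3):
    # NOTE: keras optimizers require a concrete learning rate; passing
    # None raises an error, so a None from the config deliberately
    # falls back to a default before the optimizer is constructed.
    return default_lr if user_lr is None else user_lr
```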
training_output,
batch_size=config.batch_size,
epochs=config.epochs,
# verbose=0,
Leftover commented-out line; should this be removed?
# All rights reserved. Please see the files COPYRIGHT.md and LICENSE.md
# for full copyright and license information.
#################################################################################
from idaes.apps.flexibility_analysis import _check_dependencies
Should there be tests for the relu_dr as well?
@michaelbynum any update here?
@michaelbynum Something you might want to think about for the docs here: I have received a number of questions this week about this PR and how it is distinguished from robust optimization and PyROS (and thus when you would want to use one versus the other). It would probably be good to have some discussion of this somewhere in the docs to guide new users to the best tool for what they want to do. On the plus side, there is already some interest in tools like this.
@michaelbynum I'm moving this to the Aug release as the May release will be created next week.
@michaelbynum any update on this?
Apologies. I'll be pushing updates to address reviews early next week.
@michaelbynum I know you're busy... any news on this?
@michaelbynum, I'm taking this off the Aug release board as it doesn't appear work is progressing here.
Summary/Motivation:
This PR adds a module for formulating flexibility analysis problems.
Legal Acknowledgement
By contributing to this software project, I agree to the following terms and conditions for my contribution: