forked from aiidateam/aiida-core
Fix/docker build jq #1
Open
agoscinski wants to merge 82 commits into main from fix/docker-build-jq
Conversation
…aiidateam#6426) This is for consistency with `verdi user set-default`.
This was deprecated in `v2.0` and has been slated for removal since `v2.1`.
In an ironic turn of events, the `deprecated_command` decorator is itself deprecated. The current way of deprecating `verdi` commands is by passing the deprecation message in the `deprecated` argument in the `command` decorator when the command is declared. New functionality in `VerdiCommandGroup` then ensures that a deprecation message is printed when the command is invoked and the help text is updated accordingly.
…m#6464) The `verdi-autodocs` pre-commit hook automatically generates the reference documentation for `verdi`. It does so based on the available commands. The `verdi tui` command is added dynamically if and only if the optional dependency `trogon` is installed. Since having the dependency installed slows down the loading time of `verdi` significantly, many dev environments prefer not to install it. However, this leads to the `verdi-autodocs` hook always failing, as it removes the `verdi tui` section from the docs. The developer then has to manually reset the changes and run `git commit` with `--no-verify`. This is annoying enough that, as a workaround, the `verdi-autodocs` hook now skips the `verdi tui` command when generating the docs.
* Remove pin of mamba version
* Set `PIP_USER` as default
* Remove double install of `xz-utils`
The commands take one or more processes, but if no processes were provided, no output was shown whatsoever, which could confuse the user. An error is now shown if no processes are defined.
The `verdi process` commands `pause`, `play`, `kill`, `wait`, `show`, `status`, `report` and `call-root` had inconsistent help messages, so they have been unified.
Updates `peter-evans/commit-comment` from 1 to 3
- [Release notes](https://github.com/peter-evans/commit-comment/releases)
- [Commits](peter-evans/commit-comment@v1...v3)

Updates `conda-incubator/setup-miniconda` from 2 to 3
- [Release notes](https://github.com/conda-incubator/setup-miniconda/releases)
- [Changelog](https://github.com/conda-incubator/setup-miniconda/blob/main/CHANGELOG.md)
- [Commits](conda-incubator/setup-miniconda@v2...v3)

Updates `peter-evans/create-pull-request` from 3 to 6
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](peter-evans/create-pull-request@v3...v6)

Updates `peter-evans/create-or-update-comment` from 1 to 4
- [Release notes](https://github.com/peter-evans/create-or-update-comment/releases)
- [Commits](peter-evans/create-or-update-comment@v1...v4)
The `verdi profile setup` command was added to replace the deprecated command `verdi setup`. The new command dynamically generates a subcommand for each installed storage plugin. However, unlike `verdi setup`, the new command did not allow configuring the connection parameters for the RabbitMQ broker. These common options are now added to the subcommands. In addition, a new `--use-rabbitmq/--no-use-rabbitmq` option is added. This flag is on by default, to keep the old behavior of `verdi setup`. When toggled to `--no-use-rabbitmq`, the RabbitMQ configuration options are no longer required and are not prompted for. The profile is then configured without a broker.
Now that profiles can be created without defining a broker, a command is needed that can add a RabbitMQ connection configuration after the fact. The new command `verdi profile configure-rabbitmq` enables a broker for a profile if one wasn't already enabled, and allows configuring the connection parameters.
…6473) The current implementation of the `RabbitmqBroker.__str__()` method always prints both the version and the URL of the RabbitMQ server. However, the `get_rabbitmq_version()` method fails with a `ConnectionError` in case the RabbitMQ broker is not able to connect to the server. This issue would bubble up into the `verdi status` command, since this prints the string representation of the `RabbitmqBroker` in the message that reports the connection failure. At this point the `ConnectionError` is no longer caught, and hence the user is exposed to the full traceback. Here we adapt the `RabbitmqBroker.__str__()` method to catch the `ConnectionError` and return the URL with the message that the connection failed.
…team#6477) When the user is using `verdi presto` to create more than 11 profiles, the command will fail because `presto-10` already exists. This is due to the fact that the `get_default_presto_profile_name()` function sorts the existing indices as strings, which means `10` will precede `9` and hence the "last index" would be `9`, making the new index `10`, which already exists. Here we fix this issue by casting the extracted existing indices as integers, so the sorting works as intended.
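The bug above boils down to lexicographic versus numeric sorting. The sketch below illustrates the fix with a hypothetical helper mirroring `get_default_presto_profile_name()`; the function name and logic are illustrative, not the actual aiida-core implementation.

```python
def get_next_presto_index(profile_names):
    """Return the next free index for a ``presto-N`` profile name."""
    # Cast the extracted suffixes to int so that 10 sorts after 9; sorting
    # them as strings would put '10' before '9' and wrongly pick 10 as free.
    indices = [
        int(name.rsplit('-', 1)[1])
        for name in profile_names
        if name.startswith('presto-')
    ]
    return max(indices, default=0) + 1
```

For example, with profiles `presto` and `presto-1` through `presto-10`, string sorting yields `'9'` as the "last" index, while the integer version correctly returns 11 as the next free one.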
For many setup/configuration CLI commands, the `NON_INTERACTIVE` option is added to allow the user to run the command without being prompted for input, using the defaults instead. However, new users are often not aware of this option and will not understand some options as they are prompted. Even when a sensible default is offered by the prompt, users may still want to understand the option and be unsure whether the default works for them. Hence, it might be preferable for some commands to run non-interactively by default. Here we adapt the `NON_INTERACTIVE` option into a switch (`-n/-I` or `--non-interactive`/`--interactive`) that is `--interactive` by default.
The `configure-rabbitmq` command was introduced mainly to allow new users that set up their profile with `verdi presto` before setting up RabbitMQ to upgrade their profile to use the message broker. However, since the command is interactive, they would be prompted for each of the options when running the command without `-n`/`--non-interactive`. Users that don't understand these inputs will typically want the defaults, so switching the modus operandi of the command to be non-interactive will make life easier for these users. Users that _do_ want to set different values than the defaults will understand the options of the command and should be able to provide them directly or via the interactive mode.
Currently the `verdi profile configure-rabbitmq` command doesn't give any feedback to the user as to whether the provided options can successfully connect to the RabbitMQ server. Here we adapt the `detect_rabbitmq_config` function to accept the broker configuration as `**kwargs`, and use it to check whether the options provided to `configure-rabbitmq` can successfully connect to the RabbitMQ server. A "success" message is printed if we can connect to the server; otherwise a warning is printed and the user is asked for confirmation before proceeding to configure the broker. A `--force` flag is also added to skip the confirmation in case the command is unable to connect to the broker.
…m#6480) For the vast majority of use cases, users will have a default setup for RabbitMQ, so the default configuration will be adequate and they will not need the options in the command. On the flip side, showing the options by default can make the command harder to use, as users will pause to think about what value to pass. Since there is now the `verdi profile configure-rabbitmq` command that allows configuring or reconfiguring the RabbitMQ connection parameters for an existing profile, it is fine to remove these options from the profile setup. Advanced users that need to customize the connection parameters can resort to that separate command.
…up` (aiidateam#6481) This is to be consistent with the naming of the option for `verdi presto`.
…am#6465) The `aiida.engine.launch.submit` method just raised a vague `AssertionError` in case the runner did not have a communicator, which is the case if it was constructed without one, which in turn happens for profiles that do not configure a broker. Since profiles without brokers are now supported and users are bound to try to submit anyway, the error message should be clearer.
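A minimal sketch of the improved check: `submit` verifies the runner has a communicator before proceeding and raises a descriptive error for broker-less profiles instead of a bare `AssertionError`. The exception class name and message wording here are assumptions for illustration, not the exact aiida-core code.

```python
class MissingBrokerError(RuntimeError):
    """Raised when an operation requires a message broker that is not configured."""


def submit(runner):
    # Guard clause: a runner built for a broker-less profile has no communicator,
    # so submission to the daemon is impossible and we fail with a clear message.
    if runner.communicator is None:
        raise MissingBrokerError(
            'Cannot submit because the runner does not have a communicator: '
            'the profile was likely created without a broker. Use `run` instead, '
            'or configure RabbitMQ for this profile.'
        )
    # ... actual submission logic would follow here ...
```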
The recursive workchain in the code snippets cannot actually run and is confusing as an example for a user. It has been fixed by using a different name for the inner workchain. In the classes `NodeCaching` and `ProcessNodeCaching`, `is_valid_cache` is a property. To avoid rendering it with method parentheses, the `:attr:` Sphinx directive is used instead of `:meth:`.
This should cut down the CI time by at least 10 minutes for these tests.
…m#6456) Change the default coloring of the `pydata-sphinx-theme` to use the AiiDA primary colors.
…iidateam#6488) If an explicit profile name is specified with `-p/--profile-name`, it should be validated as early as possible, and the command should error if the profile already exists, before anything else is done. This prevents, for example, the creation of a PostgreSQL user and database that are then never cleaned up.
…am#6493) All `verdi` commands automatically have the `-v/--verbosity` option added. This option has a callback `set_log_level` that is invoked for each subcommand. The callback is supposed to call `configure_logging` to set up the logging configuration. Besides being unnecessary, calling it multiple times for each subcommand actually caused a bug: once the profile storage had been loaded (through the callback of the profile option), which called `configure_logging` with `with_orm=True` to ensure the `DbLogHandler` was properly configured, a subsequent call to `set_log_level` would invoke `configure_logging` with the default values (where `with_orm=False`), removing the `DbLogHandler`. This would result in process log messages not being persisted in the database. It manifested when running an AiiDA process through a script invoked through `verdi`, or any other CLI that uses the verbosity option provided by `aiida-core`. Since `set_log_level` only has to make sure that the logging is configured at least once, a guard is added to skip the configuration once the `aiida.common.log.CLI_ACTIVE` global has been set by a previous invocation.
The `verdi setup` and `verdi quicksetup` commands have been deprecated and replaced by `verdi profile setup` and `verdi presto`. The installation docs were heavily outdated and the flow was scattered. The biggest change is that there is now a "quick install guide" that relies on `verdi presto` to provide an install route that is foolproof and will work on almost any system in a minimal number of commands. Then there is the "complete installation guide" that provides all the details necessary to fully customize an installation.
Processes run through the daemon would no longer have their log messages attached to the database. This would result in `verdi process report` returning nothing. The problem is that the `start_worker` function would call `configure_logging` without setting `with_orm=True` and so the `DbLogHandler` would essentially be undone. An integration test is added as a regression test.
…tamp` (aiidateam#6489) The `get_process_state_change_timestamp` utility calls the method `get_global_variable` on the storage backend to get the timestamp of the latest process state change, which is typically stored in the `db_dbsetting` table. However, not all storage plugins implement this method, most notably the `core.sqlite_zip` plugin. Since this storage is read-only, the settings table is never used and requesting the timestamp of the last process state change does not make sense. Since this utility is used in `verdi process list`, the command would error because the `NotImplementedError` was not caught. The exception is now caught and `verdi process list` will show "never" as the last state change.
The logger adapter was recreated on each invocation of the `logger` property of the `ProcessNode`. It is now created once and assigned to the `_logger_adapter` attribute, so it can simply be returned on subsequent invocations. The initialization of the adapter cannot be done in the constructor, as that route is not taken if an existing node is loaded from the database. Finally, the `logger` property only creates and returns the adapter when the node is stored. Otherwise it simply returns the base logger instance. This is because the logger adapter only works for stored nodes: if it were instantiated while the node is unstored, it would not be regenerated once the node is stored, and so the `DbLogHandler` would never be able to persist log messages to the database.
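The caching described above can be sketched like this. The class and attribute names follow the text; everything else (constructor, extra payload) is a simplification for illustration.

```python
import logging


class ProcessNode:
    _base_logger = logging.getLogger('aiida.orm.ProcessNode')

    def __init__(self, is_stored=False):
        self.is_stored = is_stored

    @property
    def logger(self):
        # Unstored nodes get the plain logger: the adapter only makes sense
        # once the node has a database identity for the DbLogHandler to use.
        if not self.is_stored:
            return self._base_logger
        # Create the adapter lazily on first access after storing, then cache
        # it on the instance so subsequent accesses return the same object.
        if not hasattr(self, '_logger_adapter'):
            self._logger_adapter = logging.LoggerAdapter(self._base_logger, {'node': self})
        return self._logger_adapter
```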
…dateam#6525) Not all transport plugins implement the `gotocomputer` method. Instead of excepting, the remote working directory is now displayed.
For the `aiida-core-with-services` image where the services are part of the image, we cannot rely on the health check for the services provided by docker-build as is used for the `aiida-core-base` case. Instead, a simple sleep was added to the `aiida-prepare.sh` script that sets up the profile, to make sure the services are up before the profile is created. This solution is neither elegant nor robust. Here the sleep approach is replaced by `s6-notifyoncheck`. This hook allows blocking the startup from continuing until a script returns a 0 exit code. The script in question first calls `rabbitmq-diagnostics ping` to make sure the RabbitMQ server is even up, followed by a call to `rabbitmq-diagnostics check_running`. If the latter returns 0, it means RabbitMQ is up and running and the script returns 0 as well, which will trigger `s6-notifyoncheck` to continue with the rest of the startup. Note that `rabbitmq-diagnostics is_running` could not be used as that command sometimes returns 0 even if the service is not ready at all. Co-authored-by: Sebastiaan Huber <[email protected]>
The `psycopg` library is used by `sqlalchemy` to connect to the PostgreSQL server. So far, the `psycopg2` package was used, but the next generation v3 has already been out for a while. The release comes with a number of performance improvements according to the author. Although there is no clear timeline for dropping support of v2, we are not waiting and are switching to the new version now. When v2 was released, instead of publishing a new major version, it was released as a new library, changing the name from `psycopg` to `psycopg2`; for v3 the name switches back to `psycopg`. This means that users would still be able to run v2 and v3 alongside one another in a single Python environment, which supports the decision to move to v3. Note that `aiida-core` will not support both at the same time and from now on only supports v3. Interestingly, from the very early versions of AiiDA, the profile would contain the `database_engine` key. This is still an option in `verdi setup` and `verdi quicksetup`, except it is not really an option, as it is hardcoded to `postgresql_psycopg2`. So all `core.psql_dos` profiles out there contain:

    'main': {
        'storage': {
            'backend': 'core.psql_dos',
            'config': {
                'database_engine': 'postgresql_psycopg2',
                ...
            }
        },
    }

The value is not actually used, however, as the connection string for sqlalchemy's engine, where it _could_ be used, simply hardcodes this to `postgres://`, which is the default psycopg dialect and maps to `postgres+psycopg2://`. This value is now simply updated to `postgres+psycopg://` to target the new version of `psycopg`. Because it is hardcoded, a migration for the existing configs is not required and the attribute can be left as vestigial. Since `verdi setup` and `verdi quicksetup` are deprecated now anyway, the options don't have to be removed here.
The original `Scheduler` interface made the assumption that all implementations would interact with the scheduler through a command line interface invoked through a bash shell. However, this is not always the case. A prime example is the new FirecREST service, being developed by CSCS, which allows interacting with the scheduler through a REST API. Due to the assumptions of the `Scheduler` interface, it was difficult to implement it for this use case. The `Scheduler` interface is made more generic by removing the following (abstract) methods:

* `_get_joblist_command`
* `_parse_joblist_output`
* `_get_submit_command`
* `_parse_submit_output`
* `submit_from_script`
* `kill`
* `_get_kill_command`
* `_parse_kill_output`

They are replaced by three abstract methods:

* `submit_job`
* `get_jobs`
* `kill_job`

The new interface no longer makes an assumption about how a plugin implements these methods. The first should simply submit the job, given the location of the submission script on the remote computer. The second should return the status of the list of active jobs. The third should kill a job and return the result. Unfortunately, this change is backwards incompatible and will break existing scheduler plugins. To simplify the migration pathway, a subclass `BashCliScheduler` is added. This implements the new `Scheduler` interface while maintaining the old interface, which means the new class is a drop-in replacement of the old `Scheduler` class for existing plugins. The plugins that ship with `aiida-core` are all updated to subclass `BashCliScheduler`. Any existing plugins that subclassed from these plugins will therefore not be affected whatsoever by these changes.
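The reworked hierarchy can be sketched as below. The method names come from the description above, but the signatures and docstrings are assumptions for illustration, not the exact aiida-core definitions.

```python
from abc import ABC, abstractmethod


class Scheduler(ABC):
    """Generic scheduler interface: no assumption about shell commands."""

    @abstractmethod
    def submit_job(self, working_directory, filename):
        """Submit the job script at the given remote location and return a job id."""

    @abstractmethod
    def get_jobs(self, jobs=None):
        """Return the status of the list of active jobs."""

    @abstractmethod
    def kill_job(self, jobid):
        """Kill the job and return the result."""


class BashCliScheduler(Scheduler):
    """Drop-in base class keeping the old shell-command based behavior.

    A real implementation would build the commands via the legacy hooks
    (e.g. ``_get_submit_command``) and parse their stdout; elided here.
    """

    def submit_job(self, working_directory, filename):
        raise NotImplementedError('left to the legacy command/parse hooks')

    def get_jobs(self, jobs=None):
        raise NotImplementedError('left to the legacy command/parse hooks')

    def kill_job(self, jobid):
        raise NotImplementedError('left to the legacy command/parse hooks')
```

Plugins that subclassed the old CLI-based base class keep working by inheriting from `BashCliScheduler`, while REST-based implementations subclass `Scheduler` directly.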
…team#6527) The recently released v7.4.0 causes the build of the documentation to fail with warnings of the type: Failed to get a method signature for ...: unhashable type It also leads to issues with incremental builds where it fails if a build exists that was generated with an older version. Both these bugs were fixed in v7.4.1 and v7.4.2 respectively, but more warnings remain. Instead of waiting for these to be fixed, we are just pinning to a minor version here which should anyway be done for optional dev dependencies. There is no need to automatically update those to latest releases.
…am#6471) The `LocalTransport.put` method contained a typo and so would check for the `ignore_noexisting` argument instead of `ignore_nonexisting`. The typo is corrected, but for backwards compatibility the method continues to check for the misspelled version, emitting a deprecation warning.
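A generic sketch of such a compatibility shim: the corrected keyword is used internally, but the old misspelled one is still honoured with a `DeprecationWarning`. The function below is a standalone illustration of the pattern, not the actual `LocalTransport.put` code.

```python
import warnings


def put(source, destination, ignore_nonexisting=False, **kwargs):
    # Honour the old misspelled keyword for backwards compatibility, but warn.
    if 'ignore_noexisting' in kwargs:
        warnings.warn(
            'the keyword `ignore_noexisting` is deprecated, use `ignore_nonexisting` instead',
            DeprecationWarning,
        )
        ignore_nonexisting = kwargs.pop('ignore_noexisting')
    # ... the actual copy logic would use ignore_nonexisting from here on ...
    return ignore_nonexisting
```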
…team#6535) For a clean environment where the config directory had not yet been created, tab-completing `verdi presto` would result in an exception being thrown because the callable for the default of the `--profile-name` option would try to access the configuration. The exception is now caught and the callable simply returns the default profile name `presto` which should be correct since no profiles should exist if there is not even a configuration directory.
If the opening of the transport would fail in `verdi computer test` it would always report: Warning: 1 out of 0 tests failed Since opening the connection is the first test performed and its failure is dealt with separately, the message can simply be hardcoded to 1 out of 1 tests having failed.
The `pymatgen==2024.07.18` release stopped exposing `Molecule` in `pymatgen.core`, causing `mypy` to fail. The import is updated to reflect the actual origin of the definition.
…eam#6541) The Docker image is supposed to configure the profile such that warnings about using a development version of `aiida-core` and a modern version of RabbitMQ are silenced. The test was checking that `Warning` did not appear in the output of `verdi status`, however, this would result in false positives in case a deprecation warning would be printed due to downstream dependencies that we cannot necessarily control. The test is made more specific to check for a line starting with `Warning:` which should reduce the chance for false positives.
A table is added to the quick installation guide that gives a more succinct overview of the limitations of a profile created without a broker and with SQLite instead of PostgreSQL. This was already explained in more detail in text, but that was not complete and could be too dense for users, making them likely to skip it. The code is updated to provide a link to this documentation section whenever an error is displayed that a broker is not configured. This way users can try to understand why the functionality they are trying to use is not supported and how, if they really care about it, they can go about still installing and configuring RabbitMQ after the fact.
This admonition is not that useful since it still refers the user to an external website and the target information is not immediately obvious either. Besides, if the installed Python version is not supported, this will automatically be reported by the package manager.
The fact that the quick installation guide can lead to a profile that has certain limitations should be the first thing to be read.
This transport plugin subclasses the `SshTransport` plugin in order to remove all the configuration options. Instead, it parses the user's SSH config file using `paramiko.SSHConfig` when the transport is instantiated, to determine the connection parameters automatically. The advantage of this approach is that when configuring a `Computer` using this plugin, the user is not prompted with a bunch of options. Rather, if they can connect to the target machine using `ssh` directly, the transport will also work. What's more, when the user updates their SSH config, the transport automatically uses these changes the next time it is instantiated, as opposed to the `SshTransport` plugin, which stores the configuration in an `AuthInfo` in the database and is therefore static. The original implementation of this plugin looked into the `fabric` library. This library builds on top of `paramiko` and aims to make configuring SSH connections easier, just as this new plugin does. However, after a closer look, it seems that fabric does not add a lot of clever code when it comes to parsing the user's SSH config. It does implement some clever code for dealing with proxy jumps and commands, but the `SshTransport` already implements this as well. Therefore, it is not really justified to add `fabric` as a dependency; instead we opt to use `paramiko` to parse the config ourselves.
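To make the lookup idea concrete, here is a deliberately simplified, stdlib-only parser for an OpenSSH-style config. The real plugin uses `paramiko.SSHConfig` for this; the toy below only illustrates the first-match semantics of resolving a host block.

```python
def lookup(config_text, target):
    """Return the settings of the first ``Host`` block matching ``target``.

    Toy illustration only; real code should use ``paramiko.SSHConfig``,
    which also handles patterns, ``Match`` blocks and proxy directives.
    """
    settings = {}
    current_hosts = []
    for line in config_text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        key, _, value = line.partition(' ')
        if key.lower() == 'host':
            current_hosts = value.split()
        elif target in current_hosts and key.lower() not in settings:
            # As in OpenSSH, the first obtained value for an option wins.
            settings[key.lower()] = value
    return settings
```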
The function took a `Process` instance but only used it to fetch its associated node. Since a `Process` instance is far more difficult to mock, it made testing the function unnecessarily complicated. Since it only needs the process node, the signature is changed to accept the node instead of the process. This utility function is unlikely to be used in client code, justifying this technically backwards incompatible change.
For each process state change, the engine calls the utility function `aiida.engine.utils.set_process_state_change_timestamp`. This calls `set_global_variable` on the storage plugin to update the `process|state_change|.*` key in the settings table. This value is used in `verdi process list` to show when the last time a process changed its state, which serves as a proxy of daemon activity. When multiple processes would be running, this call would throw an exception for the `core.sqlite_dos` storage plugin. This is because SQLite does not support concurrent writes that touch the same page, which is the case when multiple writes are updating the same row. Since the updating of the timestamp is not crucial for AiiDA functioning properly, especially since it is because another process was trying to perform the same update, it is safe to ignore the failed update and simply log that as a warning.
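The fix amounts to treating the timestamp update as best effort. In the sketch below, `UnreachableStorage` stands in for whatever exception the storage plugin raises on a failed concurrent write; the exact exception type and function body are assumptions for illustration.

```python
import logging

LOGGER = logging.getLogger(__name__)


class UnreachableStorage(Exception):
    """Stand-in for the storage exception raised on a failed write."""


def set_process_state_change_timestamp(backend, key, timestamp):
    # The timestamp is only a proxy for daemon activity in `verdi process list`,
    # so a failed update (e.g. a concurrent SQLite write on the same row) is
    # logged as a warning instead of crashing the process.
    try:
        backend.set_global_variable(key, timestamp)
    except UnreachableStorage as exception:
        LOGGER.warning('Failed to write the process state change timestamp: %s', exception)
```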
…m#6550) The command needs to make sure the daemon of the profile is not running, so it instantiates the `DaemonClient`, but this raises for profiles that do not define a broker. Since the daemon cannot be started for broker-less profiles anyway, the command does not have to check in this case.
The `verdi storage version` command, in addition to printing the versions of the code's and storage's schemas, now also validates the storage. If the storage is corrupt or cannot be reached, the command returns exit code 3. If the storage and code schema versions are incompatible, exit code 4 is returned. This way the command serves as an alternative to `verdi storage migrate` for checking whether a profile needs to be migrated. The `verdi storage migrate` command needs to perform checks such as whether the daemon is running, and so is always going to be slower.
Processes that hit a certain exception were not being sealed. This would cause problems when trying to export them, since export only allows sealed nodes. The problem occurs when another exception is raised while handling the original exception. An example is when `Process.update_outputs` would raise a `ValueError` because an unstored node had been attached as an output. Since this method is called in `on_entered`, which is called when the process enters a new state, it would be called again when the process entered the excepted state. Since the process was already excepted, the rest of the state changes are cut short by `plumpy`. This would cause the process to never reach the final `TERMINATED` state, so the `on_terminated` method, which is where the process' node is sealed, would not be called. The solution is to check the current state in `on_entered` and, if it is `EXCEPTED`, simply return and no longer perform any updates on the node. This should prevent any other exceptions from being hit and ensure the process transitions properly to the final terminated state. The only update that is still performed is updating the process state on the process' node; otherwise it would not properly be shown as excepted.
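The guard can be sketched as follows. The class is a minimal stand-in for the real `Process` (which lives in `plumpy`/aiida-core); state names follow the text, everything else is illustrative.

```python
from enum import Enum


class ProcessState(Enum):
    RUNNING = 'running'
    EXCEPTED = 'excepted'


class Process:
    def __init__(self):
        self.state = ProcessState.RUNNING
        self.node_state = None
        self.output_updates = 0

    def on_entered(self, from_state):
        # Always record the process state on the node, so an excepted process
        # is still shown as excepted.
        self.node_state = self.state
        if self.state == ProcessState.EXCEPTED:
            # Skip all further node updates: any of them (e.g. attaching
            # outputs) could raise again and cut the transition to the final
            # TERMINATED state short, leaving the node unsealed.
            return
        self.update_outputs()

    def update_outputs(self):
        self.output_updates += 1  # stand-in for the real output attachment
```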
In e952d77 a bug in `verdi plugin list` was fixed where the conditional checking whether the plugin was a process class would always raise an `AttributeError` if the plugin was not a `Process` or a process function. As a result, the code would never reach the else-clause. The else-clause itself contained another bug, which was now revealed by the fix in the conditional: it would call the `get_description` classmethod of the plugin, but no pluginnable classes in AiiDA even define such a classmethod. The original author probably confused it with the instance method `get_description`, but the `verdi plugin list` command only deals with the class. The `get_description` call is replaced with reading `__doc__`, which returns the docstring of the class/function, or `None` if it is not defined. In the latter case, a default message is displayed saying that no description is available. Since the else-clause code was never reached before the recent fix and the `get_description` method was never officially supported by AiiDA's pluginnable interfaces, it is fine to just change this behavior.
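The replacement boils down to a docstring lookup with a fallback. The helper name below is hypothetical; the default message wording is also an assumption.

```python
def get_plugin_description(plugin):
    """Return the docstring of a plugin class, or a default message.

    ``__doc__`` is ``None`` for classes without a docstring, so the ``or``
    fallback covers both a missing and an empty description.
    """
    return plugin.__doc__ or 'No description available'
```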
…ects (aiidateam#6566) Apparently `mypy` was updated and is now complaining about an `ignore[assignment]` comment that was previously imposed by `mypy` itself (in src/aiida/orm/utils/serialize.py::L51). This commit removes the `ignore[assignment]`, and `ci-style / pre-commit (pull_request)` passes now.
This commit copies the behavior of `verdi group list`: by setting a filter, one can delete all matching groups at once.