Podman containers monitoring support #1985
Hi, could you please send us the Glances and Python Docker Client versions (pip freeze | grep docker)?
I have a similar problem using podman on CentOS Stream 9.
Same problem on MicroOS. Don't have pip installed, so no pip freeze output to share. Manually testing pulling data from the socket returns info without problems.
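For reference, "manually pulling data from the socket" can be done from the standard library alone. This is a minimal sketch, assuming Podman's Docker-compatible REST service is listening on a unix socket (the path below is a typical rootful default and is an assumption, not something confirmed in this thread):

```python
import http.client
import json
import socket

# Assumed socket path; rootless setups usually use
# /run/user/<uid>/podman/podman.sock instead.
PODMAN_SOCK = "/run/podman/podman.sock"

class UnixHTTPConnection(http.client.HTTPConnection):
    """An HTTPConnection that talks to a unix domain socket."""

    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

def list_containers(sock_path):
    """Query the Docker-compatible endpoint Podman exposes (API v1.40)."""
    conn = UnixHTTPConnection(sock_path)
    conn.request("GET", "/v1.40/containers/json?all=true")
    return json.load(conn.getresponse())

# Example (only works when the socket actually exists):
# print(list_containers(PODMAN_SOCK))
```

If the service is up, the call returns the same JSON container list a Docker daemon would.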
@daqqad I presume you also have podman, then shouldn't you be using that unix socket? Also, I did look into the API a bit: some fields seem to be receiving fixed values (IO & Network), while memory and CPU usage were fine. Getting the images list fails for some reason. Needs more research. Another alternative would be to make another plugin:
@RazCrimson I updated the service file and renamed the socket for compatibility, but I am using podman.
+1 for the @RazCrimson proposal. If the Podman API is not fully compliant with the Docker one, the best solution is to create a new Podman plugin for Glances. Under the hood, if Podman is detected, the Docker plugin should be disabled automatically.
Added in the Glances version 4 roadmap (because the Python Podman Lib is only available on Python 3). |
@nicolargo I think we should not automatically disable the docker plugin. If we do, we should at least provide an option to enable it. There might be edge cases where someone would want to monitor both docker and podman containers. 🙂
Absolutely. I've found the most natural way to get started with podman is to gradually migrate and test services in case of incompatibilities, so having the option to monitor both makes sense. Could a non-required recommendation be offered to disable docker when enabling podman, or vice versa?
Error during first test: containers/podman-py#179
Quick and dirty sandbox: Start some pod:
Execute the Python code (https://gist.github.com/nicolargo/b9e6bd079b07fe64cb8c9b798a219cb0):
Stop the pod:
Main issue: no information concerning the CPU / RAM / DiskIO via this lib, but it should be exposed by the API because the podman stats command line displays the following information:
Another way:
Return:
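One way to grab the same numbers without the library is to shell out to the CLI and parse its JSON output. A sketch under assumptions: the podman binary is on PATH, and the JSON key names (Name/CPU/MemUsage) are guesses that vary between podman versions, hence the fallbacks:

```python
import json
import subprocess

def parse_stats(raw):
    """Keep only the fields that map onto container-plugin columns.
    Key names are assumptions; podman has changed them across releases."""
    stats = []
    for entry in json.loads(raw):
        stats.append({
            "name": entry.get("Name") or entry.get("name"),
            "cpu": entry.get("CPU") or entry.get("cpu_percent"),
            "mem": entry.get("MemUsage") or entry.get("mem_usage"),
        })
    return stats

def podman_stats():
    """Run `podman stats` once (no streaming) and parse the JSON."""
    out = subprocess.run(
        ["podman", "stats", "--no-stream", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_stats(out)
```

This matches the check_container_stats_podman.py approach mentioned below, except that it asks the CLI for JSON instead of scraping the table output.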
Could have a look at this repository https://github.com/m-erhardt/check-container-stats/blob/master/check_container_stats_podman.py for a similar approach (but JSON format is better...).
Created a new branch for the ongoing development: https://github.com/nicolargo/glances/tree/issue1985
@nicolargo So would you prefer a new podman plugin, or making the current one support podman too? In the latter case, should we have a column to indicate the container engine from which a container was found?
API corrected after a restart of the Podman service.
Get images list:
Images and containers info:
I just pushed some changes in the dedicated branch: https://github.com/nicolargo/glances/tree/issue1985 But I am not satisfied with the current implementation, and some features still need to be implemented for the Podman containers. Here is the current status in the console UI (the container in the middle is a Docker container, the others are Podman). What we should do:
@RazCrimson any free time to contribute to this enhancement request?
Sure, I'll try taking a stab at this. |
@nicolargo I was thinking we could try moving towards a more typed style of Python with more structured data objects instead of dicts (maybe use dataclasses), as that would probably reduce a lot of errors due to missing keys or typos in keys, and would also make the code more readable. Currently, planning to use typing features that are available in … What are your thoughts on this?
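To illustrate the proposal above: with a dataclass, a typo in a field name fails loudly at construction time instead of silently producing a missing dict key later. A minimal sketch (the field names here are illustrative, not the actual Glances schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContainerStats:
    """Typed container stats instead of a free-form dict."""
    name: str
    engine: str                     # e.g. "docker" or "podman"
    cpu_percent: float = 0.0
    memory_usage: Optional[int] = None  # bytes; None if the engine hides it

# Construction checks the field names for us:
web = ContainerStats(name="web", engine="podman", cpu_percent=1.5)

# A typo such as ContainerStats(nmae="web", engine="podman") raises
# TypeError immediately, where stats["nmae"] would only fail on lookup.
```

Access is also attribute-based (web.cpu_percent), which editors and type checkers can verify statically.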
@RazCrimson any news?
Sorry for the late reply @nicolargo. I found what the issue is.
Thanks @RazCrimson! After this issue, the roadmap for version 3.4 will be frozen. Glances 3.4 is coming.
@RazCrimson On the develop branch, after an update of the Docker lib (docker-6.1.0), Glances is broken and does not start anymore. It gets back to normal when docker-6.0.1 is used. Be careful on the #1985 branch (traced in #2366).
Last but not least, the Glances Alpine image build failed (ERROR: Could not build wheels for cryptography, which is required to install pyproject.toml-based projects). See https://github.com/nicolargo/glances/actions/runs/4900769661/jobs/8751667592 Not sure this is related to the previous issue...
First problem solved (had to force stream=True in the stats() method). |
Second problem here again: https://github.com/nicolargo/glances/actions/runs/4901481540/jobs/8752798884 Perhaps the Rust compiler is not installed on the CI:
Strange because:
Will try: https://cryptography.io/en/latest/installation/#alpine Not better...
Not better after forcing an upgrade to the latest pip version: https://github.com/nicolargo/glances/actions/runs/4901847706/jobs/8753350639
@nicolargo I missed one patch that needed to be upstreamed to podman-py. That is the cause of the current issue. 😅 I've added a temporary patch to the
Perfect @RazCrimson! The temporary patch does the job. What do you think... do we have to wait for the two upstreams (docker/docker-py#3120 and containers/podman-py#266) to release Glances 3.4?
We could probably make a release now. We don't really depend on any feature in their newer releases, so I think we can go ahead and make a release.
Before the merge, I made a quick test using the … Note: on my system, I have one Docker container and two Podman pods. With the current develop branch:
With the #1985 branch:
Not sure this is an issue, but perhaps it's the time taken to grab the Podman stats? @RazCrimson What do you think?
Yep, it's probably podman. In my tests, just Docker: 0.0299. I had 2 podman containers and 2 docker containers. I'll look into optimising it. Removing the temp patch when those changes get upstreamed should hopefully reduce this time.
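Timings like the ones quoted above can be collected with a small decorator around each plugin's update call; a sketch (the plugin names and the idea of a last_duration attribute are illustrative, not Glances internals):

```python
import time
from functools import wraps

def timed(fn):
    """Record how long each call takes on the wrapped function itself."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_duration = time.perf_counter() - start
        return result
    return wrapper

@timed
def update_podman_stats():
    # Stand-in for the real per-engine stats grab.
    time.sleep(0.01)
    return {"containers": 2}

update_podman_stats()
print(f"podman update took {update_podman_stats.last_duration:.4f}s")
```

Comparing last_duration for the docker and podman update paths would show which engine dominates the refresh time.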
Ok, so we stay on the #1985 branch for the moment. We wait for the two upstreams:
After that, we can remove the patch: f7f4f38 and merge into develop. |
@nicolargo
2 docker + 2 podman containers, as above. Changes are pushed to
One last issue when running Glances in Webserver mode:
also reproduced with:
Quick analysis: the issue comes from the StartedAt type for Pods (the type is correct for Docker).
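A type mismatch like this can be absorbed with a small normalizer. A sketch, assuming (as the analysis above suggests) that one engine reports StartedAt as a unix epoch number while the other uses an ISO-8601 string; the exact formats are assumptions:

```python
from datetime import datetime, timezone

def normalize_started_at(value):
    """Return a timezone-aware datetime from either representation.

    - int/float: treated as a unix epoch timestamp (assumed Podman form)
    - str: treated as ISO-8601, with a trailing 'Z' mapped to UTC
      (assumed Docker form)
    """
    if isinstance(value, (int, float)):
        return datetime.fromtimestamp(value, tz=timezone.utc)
    return datetime.fromisoformat(value.replace("Z", "+00:00"))
```

Routing every engine's value through one helper keeps the rest of the plugin (uptime computation, JSON export) working on a single type.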
Also, the WebUI (VueJS) should be adapted. The documentation is not up-to-date (it talks about the Docker plugin instead of the Containers plugin).
…able, update the webui, update the documentation
Done in the last commit:
One last possible optimization: your patch to cache the Podman version (refreshed only every 5 minutes) does the job, but Glances is still slow to start (1 second longer than the previous version). Is it useful to get the Docker and Podman versions at all, since they are not displayed in any UI?
Makes sense. Let's drop the version calls in both docker and podman. I could make the changes in another 8 hrs, or if you are available now to get that done, go ahead. Edit: Another option would be to drop the version calls during plugin initialisation and perform them once every 5 minutes during the updates. What do you think of this?
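The "once every 5 minutes during the updates" option amounts to a TTL cache around the costly call; a sketch (class and names are illustrative, not the actual Glances patch):

```python
import time

class TTLCache:
    """Cache the result of a costly call (e.g. an engine version lookup)
    for `ttl` seconds, so startup and per-refresh cost stays flat."""

    def __init__(self, fetch, ttl=300):
        self.fetch = fetch        # zero-arg callable doing the real work
        self.ttl = ttl
        self._value = None
        self._stamp = 0.0

    def get(self):
        now = time.monotonic()
        if self._value is None or now - self._stamp > self.ttl:
            self._value = self.fetch()
            self._stamp = now
        return self._value

# Usage sketch: version = TTLCache(lambda: client.version(), ttl=300)
# then call version.get() inside each update instead of at init time.
```

The first get() inside an update pays the cost; the next 5 minutes of refreshes reuse the cached value, and plugin initialisation does no network call at all.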
@RazCrimson no need, I am currently working on it :)
Done in the latest commit. Everything should be tested now :)
@nicolargo Just one small UI inconsistency. In 5e7cb1d the commit message says … Do you want to remove the color for the
Only for
Cool, made the required changes in the latest commit to develop!
Is your feature request related to a problem? Please describe.
When using podman instead of docker, Glances fails with an exception.
Describe the solution you'd like
Glances should not fail to start. Podman containers should be visible in a similar manner to docker containers. Podman supports Docker API v1.40: https://docs.podman.io/en/latest/markdown/podman-system-service.1.html
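Because of that API compatibility, docker-based tooling can in principle be pointed at the podman socket through the DOCKER_HOST variable (as the reproduction steps below do). A sketch, where the rootless socket path is an assumption about the local setup:

```python
import os

# Typical rootless podman socket path (assumption; rootful setups use
# /run/podman/podman.sock). getuid() is Unix-only, hence the fallback.
uid = os.getuid() if hasattr(os, "getuid") else 1000
os.environ["DOCKER_HOST"] = f"unix:///run/user/{uid}/podman/podman.sock"

# A docker-py client started after this would pick the variable up, e.g.:
# import docker
# client = docker.from_env()
# print(client.containers.list())
```

This is exactly the scenario that triggered the exception reported here: the docker client connects, but some responses differ from what it expects.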
Describe alternatives you've considered
n/a
Additional context
n/a
Steps to reproduce
Set the DOCKER_HOST environment variable: