Added yml config for Green Metrics Tool #589
base: master
Conversation
This PR adds a yml config for the Green Metrics Tool, plus an associated test.
OK, I've had a quick phone call with Arne to check how best to run these tests. It's helpful to approach getting measurements in two stages:

1. Fast local iteration, to get familiar with the usage scenario syntax.
2. Precise measurements, run later on the hosted, calibrated GMT infrastructure.
With step one, you care more about fast iterations and getting familiar with a new syntax than having precise, accurate measurements, so installing a local version of the GMT system is justified: running test jobs locally is faster than pushing to a CI system each time to see a result.

Installing locally on my 2020 MacBook M1 was fairly straightforward when following these steps: https://docs.green-coding.io/docs/installation/installation-macos/

The only point of confusion was figuring out how to actually run a job once I had a local instance of the GMT system running. How DO you run a test? Once you have a running system, from the host machine, you run a script from inside the checked-out GMT project. A bit like this:

```shell
cd PATH_TO_GREEN_METRICS_TOOL
# (activate the virtual environment if you aren't already in it)
source ./venv/bin/activate
python3 runner.py --uri PATH/TO/YOUR/PROJECT/ --name NAME-OF-YOUR-PROJECT
```

The easiest way to check this part works is to follow the steps in the docs page linked below for the simple Stress Container One Core 5 Seconds scenario: https://docs.green-coding.io/docs/measuring/interacting-with-applications/

You'll then get a readout a bit like this.
Visiting the local instance of the GMT dashboard should show a reading that looks a bit like this. I'm working on a Mac, and I have a bajillion other apps running, so I'm not worried that the measurement is invalid here; I care that I'm able to run the test and see any output. The next comment shows me running this locally to check this repo.
OK, here's what I'm seeing now with the current usage_scenario.yml, which we haven't really worked on yet to make it fit the syntax. It's failing, but that's OK; we'll probably need to fail a few times until we have figured out the correct syntax for the usage scenario to run properly and, at a minimum, simulate a client hitting one of the Green Web greencheck APIs.

At this point we need to figure out the next steps; a sketch of one possible starting point is below.
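For orientation, here is a hypothetical minimal usage_scenario.yml in the shape the GMT docs describe. The service name, image, port, and endpoint path are all placeholder assumptions, not the final syntax we'll end up with:

```yaml
name: Greencheck API test (sketch)
author: Green Web Foundation
description: Simulate a client hitting the greencheck API

services:
  django:
    image: greenweb-app   # assumed image name

flow:
  - name: Check a domain
    container: django
    commands:
      - type: console
        # hypothetical endpoint; the real API path may differ
        command: curl --silent http://localhost:9000/greencheck/example.com
```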
We still have a build issue to resolve
Brief update: it looks like we're getting through to a build issue for Django. From what I can see, it's related to the .env parsing code in django-environ, which is expecting a database connection string. I'm able to reproduce the error in the Dockerfile now, so I'll fix that.
By defining them in environ.Env, we don't need to have the defaults set in the env file.
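For illustration, a minimal sketch of what that looks like with django-environ; the keys and default values here are assumptions, not the project's real settings:

```python
# settings.py (sketch)
import environ

# (cast, default) tuples: keys with defaults no longer need to appear in .env
env = environ.Env(
    DEBUG=(bool, False),
    DATABASE_URL=(str, "sqlite:///db.sqlite3"),
)

DEBUG = env("DEBUG")
# env.db() parses the connection string into Django's DATABASES format
DATABASES = {"default": env.db("DATABASE_URL")}
```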
I hope this fixes the failing test
OK, I have docker building, but I'm seeing an error when running it, which I think is related to the fact that the project directory I am using doesn't normally use docker. I'm seeing this:

Working from a totally fresh checkout should simulate how the hosted system would work anyway.
OK, I think I have this in better shape, but I'm seeing an error in the usage scenario that I don't see when running docker compose. I'm wondering if it's linked to a difference in how environment variables are passed into the container between GMT and docker compose. Here's the log output:

We see Django stuck in a failing loop. The output logs show this:
How does GMT handle paths and run the final docker command?

In the docker container we set:

```dockerfile
ENV PATH=$VIRTUAL_ENV/bin:$PATH \
    PYTHONPATH=/app
```

This sets up the PATH so that when we call gunicorn, our webserver, we don't need to use the full path to the binary. The final line of the Dockerfile looks like this:

```dockerfile
# Use the shell form of CMD, so we have access to our environment variables
# $GUNICORN_CMD_ARGS allows us to add additional arguments to the gunicorn command
CMD gunicorn greenweb.wsgi \
    --bind $GUNICORN_BIND_IP:$PORT \
    --config gunicorn.conf.py \
    $GUNICORN_CMD_ARGS
```

In this case, we are using environment variables laid out in the django service in our compose file:
```yaml
django:
  env_file:
    - path: ./.env.docker
  build:
    context: .
    dockerfile: Dockerfile
  container_name: greenweb-app
  image: greenweb-app
  expose:
    - 9000
  ports:
    - 9000:9000
```

@arne, any idea why this might be happening? This setup works when I run it with docker compose, yet when I run GMT, the output suggests a different shell, or that the variables are not available:
Any ideas how to resolve this?
Compose understands the ENV directive but the docker CLI client does not. When executing commands, GMT uses the docker CLI. Can you just change the command to use the actual full path? Something like the sketch below.
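A guess at what that full-path version looks like; the /app/.venv location is taken from the environment variables quoted later in this thread, so treat it as an assumption:

```dockerfile
# call gunicorn via its absolute path instead of relying on $PATH
CMD /app/.venv/bin/gunicorn greenweb.wsgi \
    --bind $GUNICORN_BIND_IP:$PORT \
    --config gunicorn.conf.py \
    $GUNICORN_CMD_ARGS
```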
Sure, I'll give it a go now.
Hmm... this didn't seem to resolve it.
I have one question here @arne:

Is there a link you know of that outlines which directives are and are not supported, and which version of docker is in use under the hood? I'm asking because, for the container to make it to the gunicorn stage, I think it must have been able to run other commands that relied on access to values defined in the ENV section, so it might just be the final command that works differently.

Ideally, I'd be able to shell into the generated container so I can see what is going on, but it's not obvious to me how. Alternatively, if these problems are related to the build stage, another approach might be to fetch the built image from an image registry. The published images are not on Docker Hub, but on Scaleway's image registry.
We have a debugging mode especially for that: just append the debug flag. Also, to speed up your iteration, you can use the development flags, and if you want even more debugging you can use the verbose output option.

My offer still stands: if you just can't get it running, I can also give it a look and see why the command is not working. I however want to encourage you to play more with the GMT ;) Feel free though to hit me up if you feel that your interest in learning is exhausted.
Thanks for these, @ArneTR. These are the commands I was running yesterday; I had discovered the debug mode already.

From what I can see, it looks like the build is happening inside a Kaniko container. I'm basing this on the code I see here in the runner:

```python
context, dockerfile = self.get_build_info(service)
print(f"Building {service['image']}")
self.__notes_helper.add_note({'note': f"Building {service['image']}", 'detail_name': '[NOTES]', 'timestamp': int(time.time_ns() / 1_000)})

# Make sure the context docker file exists and is not trying to escape some root. We don't need the returns
context_path = join_paths(self._folder, context, 'directory')
join_paths(context_path, dockerfile, 'file')

docker_build_command = ['docker', 'run', '--rm',
    '-v', f"{self._folder}:/workspace:ro",  # this is the folder where the usage_scenario is!
    '-v', f"{temp_dir}:/output",
    'gcr.io/kaniko-project/executor:latest',
    f"--dockerfile=/workspace/{context}/{dockerfile}",
    '--context', f'dir:///workspace/{context}',
    f"--destination={tmp_img_name}",
    f"--tar-path=/output/{tmp_img_name}.tar",
    '--no-push']

if self.__docker_params:
    docker_build_command[2:2] = self.__docker_params

print(' '.join(docker_build_command))

ps = subprocess.run(docker_build_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='UTF-8', check=False)

if ps.returncode != 0:
    print(f"Error: {ps.stderr} \n {ps.stdout}")
    raise OSError(f"Docker build failed\nStderr: {ps.stderr}\nStdout: {ps.stdout}")

# import the docker image locally
image_import_command = ['docker', 'load', '-q', '-i', f"{temp_dir}/{tmp_img_name}.tar"]
print(' '.join(image_import_command))
ps = subprocess.run(image_import_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='UTF-8', check=False)
```

This is from the runner code in the GMT repo. I'll try inspecting the running container some more to see if there is something I missed.
It just occurred to me: when I'm inspecting the container in the django build step, am I not actually inspecting the Kaniko container? The build step doesn't seem to be calling docker build from the host machine:

```python
docker_build_command = ['docker', 'run', '--rm',
    '-v', f"{self._folder}:/workspace:ro",  # this is the folder where the usage_scenario is!
    '-v', f"{temp_dir}:/output",
    'gcr.io/kaniko-project/executor:latest',
    f"--dockerfile=/workspace/{context}/{dockerfile}",
    '--context', f'dir:///workspace/{context}',
    f"--destination={tmp_img_name}",
    f"--tar-path=/output/{tmp_img_name}.tar",
    '--no-push']
```

So presumably, my final django build artefact is in the tarball that gets loaded next, right?

```python
# import the docker image locally
image_import_command = ['docker', 'load', '-q', '-i', f"{temp_dir}/{tmp_img_name}.tar"]
print(' '.join(image_import_command))
ps = subprocess.run(image_import_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='UTF-8', check=False)
```

If that's the case, it would explain why the container I inspected looked so different to the one I was expecting to see.
I think I'm finally able to inspect the django container to see how the build phase worked differently to how it works with docker compose.

The contents look more familiar to me now, at least.
I've been able to inspect the image in the tarball generated by the Kaniko container; the image carries the temporary name set by the runner (tmp_img_name in the code above). Running it without any environment variables passed in gives me an error. That's to be expected, as the final CMD line relies on environment variables being injected at run time.

When I run it with the same environment variables as listed in the compose file, the output is what I expect.
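Roughly, the inspection looked like this (a sketch: the tarball path and image name stand in for GMT's temporary values, and the env var values are the ones quoted elsewhere in this thread):

```shell
# load the image that Kaniko wrote to the output tarball
docker load -q -i "$TEMP_DIR/$TMP_IMG_NAME.tar"

# without env vars, the shell-form CMD fails: $GUNICORN_BIND_IP and $PORT are unset
docker run --rm "$TMP_IMG_NAME"

# with the env vars the compose file would provide, gunicorn boots as expected
docker run --rm \
  -e GUNICORN_BIND_IP=0.0.0.0 -e PORT=9000 \
  -e DATABASE_URL=mysql://deploy:deploy@db:3306/greencheck \
  "$TMP_IMG_NAME"
```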
Here's the version number reported by docker when I call it, and where it's installed.

I'll post some more when I look at it again in the afternoon.
You sure are traversing an interesting path with the GMT, one that even I, I think, have never taken :) Inspecting the tarball is something you should never need to do; when you use the GMT, the built image gets loaded onto the host system anyway. Maybe I have not really understood which step you are stuck on, so I am trying a more detailed breakdown of the debug options.
Kaniko is not something to worry about, as it is just a sandbox we put around the build process; it ensures reproducible builds in terms of benchmarking time and brings some security benefits. It will produce an image on the host system though. If, however, you expect the build process to somehow get information from the host system, that will be a problem, as this is exactly what we are trying to isolate. Everything needed for the container must be in the Dockerfile and the filesystem context.

From what I understand, you can pass the build step, but the container is not booting, correct? You can see the command that is executed by GMT in order to start the container in the CLI logs. When I run it, I see this:

```
...
State of container 'rabbitmq': running
Running docker run with: docker run -it -d --name django -v /Users/arne/Sites/admin-portal:/tmp/repo:ro --mount type=bind,source=/Users/arne/Sites/admin-portal,target=/app,readonly -e VIRTUAL_ENV=/app/.venv -e PYTHONPATH=/app -e PATH=/app/.venv/bin:$PATH -e PORT=9000 -e GUNICORN_BIND_IP=0.0.0.0 -e PYTHONDONTWRITEBYTECODE=1 -e PYTHONUNBUFFERED=1 -e DATABASE_URL=mysql://deploy:deploy@db:3306/greencheck -e DATABASE_URL_READ_ONLY=mysql://deploy:deploy@db:3306/greencheck -e RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672/ -e DJANGO_SETTINGS_MODULE=greenweb.settings.development --net greencheck-network greenwebapp_gmt_run_tmp
Setting up container: test-container
Resetting container
Creating container
Waiting for dependent container django
State of container 'django': exited
Waiting for 1 second
State of container 'django': exited
Waiting for 1 second
State of container 'django': exited
...
```

So I can reproduce that the container fails to boot. If you want to inspect the container, though, you can just copy the command from the GMT log and boot it yourself.
Note that you'll want to drop the -d flag so the container runs in the foreground.
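Here's a sketch of what that looks like, reusing the docker run command from the log above (the entrypoint override is my assumption about how you'd get an interactive shell; some env vars are trimmed for brevity):

```shell
docker run -it --name django \
  -v /Users/arne/Sites/admin-portal:/tmp/repo:ro \
  --mount type=bind,source=/Users/arne/Sites/admin-portal,target=/app,readonly \
  -e VIRTUAL_ENV=/app/.venv -e PYTHONPATH=/app -e PATH=/app/.venv/bin:$PATH \
  -e PORT=9000 -e GUNICORN_BIND_IP=0.0.0.0 \
  -e DATABASE_URL=mysql://deploy:deploy@db:3306/greencheck \
  --net greencheck-network \
  --entrypoint /bin/bash \
  greenwebapp_gmt_run_tmp
```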
I can then enter the container and see what is going on. Indeed, the gunicorn binary is not where the CMD expects it to be. The reason for that is that you are shadowing your filesystem by issuing a mount that differs from how you do it in the compose file. See:

```yaml
# compose.yml
volumes:
  - ./apps:/app/apps
  - ./greenweb:/app/greenweb
```

```yaml
# usage_scenario.yml
volumes:
  - .:/app
```

By mounting everything into /app, you shadow whatever the image put there at build time, including the virtualenv. To my understanding you do not need the mounts at all, since you copy everything into the image in the Dockerfile anyway. So in summary: drop the volumes from the usage_scenario.yml.
Then, for me at least, I can reach the flow and the test container starts. I am getting an error, though, which might be macOS related ... unsure. Might also be that the user in the container has the wrong access rights ...?

```
Error: Exception (<class 'RuntimeError'>): Process '['docker', 'exec', 'test-container', 'npm', 'test']' had bad returncode: 1. Stderr:
Exception during run: Error: EPERM: operation not permitted, scandir '/proc/1/map_files/400000-5bdd000'
    at Object.readdirSync (node:fs:1509:26)
    at GlobSync._readdir (/node_modules/glob/sync.js:288:46)
    at GlobSync._readdirInGlobStar (/node_modules/glob/sync.js:267:20)
    at GlobSync._readdir (/node_modules/glob/sync.js:276:17)
    at GlobSync._processReaddir (/node_modules/glob/sync.js:137:22)
    at GlobSync._process (/node_modules/glob/sync.js:132:10)
    at GlobSync._processGlobStar (/node_modules/glob/sync.js:380:10)
    at GlobSync._process (/node_modules/glob/sync.js:130:10)
    at GlobSync._processGlobStar (/node_modules/glob/sync.js:383:10)
    at GlobSync._process (/node_modules/glob/sync.js:130:10)
    at GlobSync._processGlobStar (/node_modules/glob/sync.js:383:10)
    at GlobSync._process (/node_modules/glob/sync.js:130:10)
    at GlobSync._processGlobStar (/node_modules/glob/sync.js:383:10)
    at GlobSync._process (/node_modules/glob/sync.js:130:10)
    at new GlobSync (/node_modules/glob/sync.js:45:10)
    at Function.globSync [as sync] (/node_modules/glob/sync.js:23:10)
    at lookupFiles (/node_modules/mocha/lib/cli/lookup-files.js:90:15)
    at /node_modules/mocha/lib/cli/collect-files.js:36:39
    at Array.reduce (<anonymous>)
    at module.exports (/node_modules/mocha/lib/cli/collect-files.js:34:26)
    at singleRun (/node_modules/mocha/lib/cli/run-helpers.js:120:17)
    at exports.runMocha (/node_modules/mocha/lib/cli/run-helpers.js:190:10)
    at exports.handler (/node_modules/mocha/lib/cli/run.js:370:11)
    at /node_modules/yargs/build/index.cjs:443:71 {
  errno: -1,
  code: 'EPERM',
  syscall: 'scandir',
  path: '/proc/1/map_files/400000-5bdd000'
}
```

I hope this helps and you can continue from here.
Ah, @ArneTR, I think the shadowing of mounts was indeed the source of the issue; it makes sense to me now why that would happen. I'm also able to get to the same point, and reproduce the error you see with mocha / node.js. I think that unblocks me 🎆 Thanks so much!
Using '**/*.spec.js' at the file root raises errors, as Mocha tries to read procfs and so on.
OK, I've got the tests running locally. I think this is in shape to run in the calibrated setup, to at least give us some indicative readings via the hosted GMT service.
TLS is terminated by our reverse proxy server, Caddy
These are the equivalent of `-it` on the command line in docker, but are not compatible with GMT.
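For context, I read this as referring to the compose keys below (my inference; the surrounding diff is trimmed):

```yaml
stdin_open: true   # the compose equivalent of docker run -i
tty: true          # the compose equivalent of docker run -t
```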
"description": "Integration tests for the Greencheck API, intended for use with the Green Metrics Tool from Green Code Berlin. (https://www.green-coding.io/projects/green-metrics-tool/)", | ||
"main": "index.js", | ||
"scripts": { | ||
"test": "mocha '/*.spec.js'" |
When run in the root directory, `**/*.spec.js` will try to read procfs on the default node container, which triggers the permissions error we saw.
```yaml
depends_on:
  - django
setup-commands:
  - cp /tmp/repo/green_metric_tests/greencheck_test.spec.js .
```
We needed to add /tmp/repo here to fetch this file from the checked-out repo on the GMT system.
This is a clone of pull #587, which I am able to make changes to as I work (I was unable to make changes to James's PR, as it's on his repo): #587