Refactor Dockerfile and pre-install rtt_ros2_integration and rtt_ros2_common_interfaces #3
Conversation
… and rtt_ros2_common_interfaces
… two stages: The first image stops at stage `orocos_toolchain` and is pushed to `orocos/ros2-ci:${DOCKER_TAG}`. Only the second build command adds the final stage, reusing the cached layers of the first build.
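A minimal sketch of the two build commands this describes, assuming the build context is the `ros2/ubuntu` directory mentioned below and that the final image name follows the same pattern (both are assumptions, not the exact hook script):

```bash
# First build stops at the orocos_toolchain stage and is pushed as
# orocos/ros2-ci:${DOCKER_TAG}.
docker build --target orocos_toolchain -t "orocos/ros2-ci:${DOCKER_TAG}" ros2/ubuntu
docker push "orocos/ros2-ci:${DOCKER_TAG}"

# The second build adds the final stage, reusing the cached layers above.
# The final image name here is an assumption.
docker build -t "orocos/ros2:${DOCKER_TAG}" ros2/ubuntu
```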
This PR supersedes my previous PR on preparing a Docker image with ROS 2 + Orocos toolchain + ROS 2 integration of Orocos, including rtt_ros2_common_interfaces. Both PRs are limited by the amount of resources needed to compile the typekits in rtt_ros2_common_interfaces, but it does the job. Furthermore, this PR is a cleaner version of the build process than the competing PR and better solves the Python issues that came up there. So I would go for this one. However, in its current form it does not compile for foxy, because rosdep unsuccessfully attempts to install ruby-facets. After that small fix, it will be good to go.
The apt package ruby-facets does not exist anymore.
Force-pushed from 8ad3398 to a6bb359.
Defining the key here overrides it for all other Ubuntu distros, too. So the override needs to have a default and only override it for focal.
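For illustration, a hedged sketch of such a rule in rosdep's yaml format; the key name and the focal resolution below are placeholders, not the actual fix from this PR:

```bash
# Hypothetical rosdep rule in prereqs.yaml: a wildcard default for Ubuntu
# plus a focal-specific override (key and package names are placeholders).
cat >> prereqs.yaml <<'EOF'
facets:
  ubuntu:
    '*': [ruby-facets]
    focal: [ruby]  # placeholder: the ruby-facets apt package is gone on focal
EOF
```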
If needed, the number of workers can still be limited using `docker build [...] --cpuset-cpus 0-1` to restrict the build containers to CPUs 0 and 1 only. `nproc` then returns 2, and the implicit default of colcon/catkin_tools/catkin_make_isolated is to build with two jobs, even if the build host has more than 2 CPUs.
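For example (the image tag and build context are assumptions):

```bash
# Restrict the build container to CPUs 0 and 1; nproc inside the container
# then reports 2, so tools defaulting to nproc build jobs use two jobs.
docker build --cpuset-cpus 0-1 -t orocos/ros2:foxy ros2/ubuntu
```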
Force-pushed from f16fd35 to bc20a9a.
… a single make job (`-j1`) for Docker Hub automated builds
Thanks for the alternative to PR #2. LGTM.
It checks OK for dashing, eloquent, and foxy. The checks at Docker Hub CI fail only because of a lack of resources, so the failure cross doesn't mean much.
I am going to merge this patch. Automated builds still fail, likely because of resource constraints, but we can build and push images manually and eventually switch to Travis or GitHub Actions in a follow-up.
This is an alternative to #2. The goal is to provide a Docker image with rtt_ros2_integration and rtt_ros2_common_interfaces preinstalled in an overlay workspace.
- Instead of adding a new `Dockerfile` in `ros2_integration/ubuntu`, as suggested in add new Docker image integrating ROS2 - tested eloquent #2, the existing `Dockerfile` in `ros2/ubuntu` is modified. In the end both images are about Orocos/ROS integration.
- Dependencies are not listed manually in the `Dockerfile` anymore. A custom yaml file `prereqs.yaml` lists Ruby dependencies which are not yet in the official rosdep database. Those are for orogen only, which at the moment is not installed in the `ros2` branch of orocos_toolchain when using cmake/colcon. This approach has been taken from the ros2 nightly builds at osrf/docker_images.
- Because `orocos_toolchain` is a separate stage in a multi-stage build (`docker build --target orocos_toolchain [...]`, as sketched above), one can still create an image with only the core toolchain packages preinstalled, without the overlay workspace.
- The build does not use `DESTDIR` at workspace level. For installing individual packages `DESTDIR` works fine, but at workspace level it requires "hacks" with potential side effects, like setting the `CMAKE_FIND_ROOT_PATH`, `PYTHONPATH` and `PKG_CONFIG_PATH` variables to the staged installation directory, such that packages can find the artifacts of others that have been built before but not yet installed to the final destination. The downside is that almost all commands to clone, install new system dependencies, build and install a workspace have to be combined in a single `RUN` command (see the sketch after this list), such that the resulting new layer only contains the new dependencies and the installed artifacts, but not sources or build artifacts.
- Packages are installed to `/opt/orocos/${ROS_DISTRO}`. Unfortunately, installing packages to `/opt/ros/${ROS_DISTRO}` with colcon is not officially supported and has negative side effects (https://answers.ros.org/question/314019/how-to-install-colcon-packages-into-system-workspace/): doing so overwrites the setup files generated by ament_cmake.
- The `orocos_entrypoint.sh` script takes care of sourcing both workspaces, `/opt/ros/${ROS_DISTRO}` and `/opt/orocos/${ROS_DISTRO}`, and also appends the necessary commands to `~/.bashrc` (see the sketch after this list). The latter is required for bash-specific extensions to work in the docker container, e.g. tab completion for the `ros2` command. Sourcing the `.bash` setup files in the entrypoint itself, before `bash` is executed as a new process, is useless (as done in osrf/docker_images).
- colcon is invoked with `--parallel-workers 2` because building typekits and transport libraries is quite memory and CPU intensive and freezes the system with the default number of build jobs, i.e. the number of CPU cores.
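As referenced in the `DESTDIR` item above, here is a hedged sketch of the kind of command chain that ends up in a single `RUN` instruction; the repository URL and the paths are assumptions for illustration:

```bash
# Clone, resolve dependencies, build and clean up in one shell chain, so the
# resulting Docker layer holds only new system dependencies and installed
# artifacts, not sources or build artifacts.
git clone https://github.com/orocos/rtt_ros2_integration.git src/rtt_ros2_integration \
 && rosdep install --from-paths src --ignore-src -y \
 && colcon build --install-base "/opt/orocos/${ROS_DISTRO}" --parallel-workers 2 \
 && rm -rf src build log
```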
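And a minimal sketch of what an entrypoint along the lines of `orocos_entrypoint.sh` could look like, assuming colcon-generated `setup.bash` files in both prefixes; the actual script in this PR may differ:

```bash
#!/bin/bash
set -e

# Source the ROS underlay and the Orocos overlay for the current process.
source "/opt/ros/${ROS_DISTRO}/setup.bash"
source "/opt/orocos/${ROS_DISTRO}/setup.bash"

# Also append the commands to ~/.bashrc: bash-specific extensions such as
# ros2 tab completion only work in interactive shells that source them.
if ! grep -qs "/opt/orocos/${ROS_DISTRO}/setup.bash" ~/.bashrc; then
    echo "source /opt/ros/${ROS_DISTRO}/setup.bash" >> ~/.bashrc
    echo "source /opt/orocos/${ROS_DISTRO}/setup.bash" >> ~/.bashrc
fi

exec "$@"
```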
I am not sure yet whether the final image (up to stage `overlay`) works for CI of rtt_ros2_integration and rtt_ros2_common_interfaces itself, if `/opt/orocos/${ROS_DISTRO}` already has installed versions of the same packages. There could be side effects due to mixing different versions of the packages, e.g. when testing pull requests. I think it is necessary to provide two versions of the image at https://hub.docker.com/repository/docker/orocos/ros2 for each ROS distro: one with `rtt_ros2_integration` preinstalled and one which stops at stage `orocos_toolchain`. I will test it with automated builds now.

If the approach taken here is accepted, it should also be applied to #1 for ROS 1 images.