
Build error when running up.sh #5

Open
salimfadhley opened this issue Nov 25, 2023 · 11 comments
@salimfadhley

For context:

I'm trying to get this running on an entirely clean docker-in-docker system.

Steps to reproduce:

  • git clone
  • up.sh

```
root@105f52fe9164:/hostroot/volume1/home/sal/software/PrometheusTube# ./up.sh
+ touch .secrets.env
+ DOCKER_DEFAULT_PLATFORM=linux/amd64
+ DOCKER_BUILDKIT=1
+ COMPOSE_DOCKER_CLI_BUILD=1
+ docker build -f Dockerfile.template -t gen .
[+] Building 0.6s (7/7) FINISHED                                      docker:default
 => [internal] load .dockerignore                                               0.1s
 => => transferring context: 2B                                                 0.0s
 => [internal] load build definition from Dockerfile.template                   0.0s
 => => transferring dockerfile: 181B                                            0.0s
 => [internal] load metadata for docker.io/library/python:latest                0.5s
 => [1/3] FROM docker.io/library/python:latest@sha256:31ceea009f42df76371a8fb94fa191f988a25847a228dbeac35b6f8d2518a6ef  0.0s
 => CACHED [2/3] WORKDIR /gen                                                   0.0s
 => CACHED [3/3] RUN pip3 install jinja2 pycryptodome                           0.0s
 => exporting to image                                                          0.0s
 => => exporting layers                                                         0.0s
 => => writing image sha256:c9973f2fff2a06893995c83599d77d86ebbc1b332684dc299d3c66cbc3db9ee7  0.0s
 => => naming to docker.io/library/gen                                          0.0s
++ pwd
+ docker run -v /hostroot/volume1/home/sal/software/PrometheusTube:/gen -t gen localhost
python3: can't open file '/gen/templates/generate-compose.py': [Errno 2] No such file or directory
root@105f52fe9164:/hostroot/volume1/home/sal/software/PrometheusTube#
```
@horahoradev
Owner

mount looks fine to me, idk what we're missing here

an alternative would be to manually run generate-compose.py on the host machine, but that's a pain

still working on usability issues so maybe I'll try to repro later

@salimfadhley
Author

Anything I can do to test this hypothesis?

Just to clarify - the system I am running on is kinda odd. It's an Asustor NAS which provides a very bare-bones host OS. All I can really do is spin up Docker and then exec into a more fully featured operating system. At the moment, all I'm running is a basic Ubuntu image with access to the Docker daemon and the root filesystem. I didn't mount devfs or anything fancy.

One really common use-case for self-hosters is to just run stuff in Portainer. In that set-up all we can really do is copy a docker-compose file into a UI and just run it, so the current script-based installation really limits how this thing can be run. It's also going to appeal only to self-hosters with a lot of time.

Is it possible that you could ship a pre-compiled docker-compose in the root of the project, that way people can copy it, change some variables and then quickly boot into the system?

@horahoradev
Owner

horahoradev commented Nov 25, 2023

Is portainer the docker-in-docker mechanism you're referring to?

in this circumstance I probably could. are you accessing the service from another location in your network, or is it on localhost?

getting rid of the templated docker-compose will take some work. I want to make this easier to run, but it's tricky ofc.

maybe I can ship all of the services in a single container, and publish the image... hmm...

@salimfadhley
Author

Portainer is just a dockerized GUI for managing docker. I'm not using it in this circumstance. It's what I'd like to use. It's a very common way for self-hosting apps. You just paste a docker-compose file into the GUI and it runs it.

I'm running docker-in-docker on the actual host. Here's what I did:

  • On the host, I built a Docker image containing Ubuntu, git and docker
  • Booted that image, and then mounted the host's root filesystem as /rootfs
  • Git clone, up.sh

The issue is that the host OS is really barebones. It includes the essential NAS stuff, some basic UNIX commands, Docker and not a whole lot else - so no Python 3. I take advantage of the fact that it can run Docker.

> maybe I can ship all of the services in a single container, and publish the image... hmm...

Oh no! That would be a mess. Why not have a dockerfile with multiple targets (supported since ages ago), and then a docker-compose file that references each of those targets.

@salimfadhley
Author

salimfadhley commented Nov 25, 2023

Just to be clear, in your dockerfile you can have:

```dockerfile
FROM --platform=linux/amd64 python:${PYTHON_VERSION}-slim-bullseye AS python_stuff
# ... python build instructions

FROM golang:latest AS go_stuff
# ... go build instructions
```

And then in your compose file you can reference go_stuff and python_stuff as the build targets for the locally built services:

```yaml
  python_service:
    platform: "linux/amd64"
    build:
      target: python_stuff
      context: .
      args:
        SOME_ENVIRONMENT_VARIABLE: 'Blah'
```

But it would be much better if people didn't need to build anything locally - if you have stuff already released on DockerHub it means people who are not running in build-friendly environments (i.e. me) can just docker-compose up and fetch down the latest released versions.
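To illustrate the "no local build" idea, a compose file that pulls prebuilt images could look something like this (the image names below are hypothetical placeholders - nothing like them is published yet):

```yaml
# Sketch only: these Docker Hub image names are made up for illustration.
services:
  python_service:
    image: horahoradev/prometheustube-python:latest
  go_service:
    image: horahoradev/prometheustube-go:latest
```

With published images, `docker-compose up` just pulls and runs; no toolchain is needed on the host at all.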

@horahoradev

This comment was marked as resolved.

@horahoradev

This comment was marked as resolved.

@horahoradev
Owner

horahoradev commented Nov 25, 2023

Give me a few days to rip things out and simplify the process, there's a lot going on. In the end, I should have a published docker-compose file in source control that people can just run. Tomorrow might be enough, we'll see.

@salimfadhley
Author

> I will publish the image, and anyone can just run the finished product. No one needs to build from source, they just pull the single image.

I don't think there's any benefit in having a "single image" for all of the containers that have to run. You can have as many targets as you want, plus if you are dealing with compiled languages you probably want to compile in a compilation image, and then copy the executable output to an execution image. The alternative would be a very bloated image that ships all the compiler and dev tooling.
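A minimal sketch of the compile-then-copy pattern described above, assuming a Go service whose entrypoint lives at ./cmd/server (the path, Go version, and runtime image are all illustrative):

```dockerfile
# Stage 1: compile in a full Go toolchain image.
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server ./cmd/server

# Stage 2: copy only the static binary into a tiny runtime image,
# leaving the compiler and dev tooling behind.
FROM gcr.io/distroless/static-debian12
COPY --from=builder /out/server /server
ENTRYPOINT ["/server"]
```

The final image contains just the binary, so it stays small even though the build stage pulls in the whole toolchain.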

@salimfadhley
Author

> 2. ship some weird systemd-in-docker solution with a single published docker image, which has all the right defaults, and people can just pull down and run

I'm curious about what special issues Prometheus-tube might have that cannot be dealt with by normal Docker-compose stuff.

Most projects make things easy by shipping a docker-compose.yaml and Dockerfile in the root directory of the project. It's a given that you usually have to customize the project a bit, for example because ports and storage locations are always different. Some self-hosters might already have a database up and running and might not want to spin up an extra one.

I notice that you compile the docker-compose file from a template, so couldn't you just pre-compile a bunch of them as part of your GitHub Actions tooling? You'd have a developer docker-compose file and a typical-user docker-compose file. Anybody wanting something more complex can hand-edit or recompile themselves.
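The build log above shows the generator image installing jinja2, so a CI job would presumably just render the template once per profile. A stdlib-only sketch of that idea, using `string.Template` as a stand-in for Jinja2 (the template text and profile names here are made up):

```python
from string import Template

# Stand-in compose template; the real one lives in templates/ and uses Jinja2.
COMPOSE_TEMPLATE = Template(
    "services:\n"
    "  web:\n"
    "    ports:\n"
    '      - "$port:80"\n'
)

def render_profiles(profiles):
    """Render one compose document per profile, as a CI job might."""
    return {name: COMPOSE_TEMPLATE.substitute(port=port)
            for name, port in profiles.items()}

# In CI, each value would be written out as docker-compose.<profile>.yml.
rendered = render_profiles({"developer": 8080, "user": 80})
print(rendered["developer"])
```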

> simplify the docker-compose templating stuff, ship a single compose file that accepts env var arguments for the origin

This would be great. And that's a really "normal" way of using Docker Compose. If you don't want to customize the project all that much, a docker-compose file should be all you need.
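For what it's worth, compose's built-in variable substitution could cover the origin without any templating step at all. A sketch (the service name, image name, and `ORIGIN` variable are illustrative, not the project's actual config):

```yaml
# ${ORIGIN:-...} falls back to a localhost default when the variable is
# not set in the environment or a .env file. Names here are illustrative.
services:
  frontend:
    image: horahoradev/prometheustube-frontend:latest
    environment:
      ORIGIN: ${ORIGIN:-http://localhost:8080}
```

Users then run `ORIGIN=https://tube.example.com docker-compose up`, or put the value in a `.env` file next to the compose file.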

@salimfadhley
Author

FYI, I've discovered a likely cause - user error:

When running Docker-in-Docker, bind mount sources refer to paths on the host system, not paths inside the inner container.
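Concretely: the helper container sees the repo under /hostroot, but the Docker daemon runs on the host and resolves `-v` source paths against the host filesystem, so the /hostroot prefix has to be stripped before mounting. A sketch (the prefix-stripping assumption matches the paths in the trace above):

```shell
# Path as seen inside the helper container (host root mounted at /hostroot).
inner_path=/hostroot/volume1/home/sal/software/PrometheusTube

# The daemon resolves mount sources on the host, so translate first.
host_path=${inner_path#/hostroot}
echo "$host_path"   # /volume1/home/sal/software/PrometheusTube

# Then the original command should find the scripts:
# docker run -v "$host_path:/gen" -t gen localhost
```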
