Single /unifi volume fails with external mongodb on first run #278
Comments
I've replicated this issue with the following compose file:

```yaml
version: "3.7"

services:
  unifi-mongo:
    image: mongo:3.6
    container_name: unifi-mongo
    networks:
      - unifi
    restart: unless-stopped
    volumes:
      - /home/user/docker/volumes/unifi-mongo/db:/data/db
      - /home/user/docker/volumes/unifi-mongo/dbcfg:/data/configdb

  unifi:
    image: jacobalberty/unifi
    init: true
    container_name: unifi
    depends_on:
      - unifi-mongo
    networks:
      - unifi
    restart: unless-stopped
    volumes:
      - /home/user/docker/volumes/unifi:/unifi
    env_file:
      - env/unifi.env
    ports:
      - "3478:3478/udp"
      - "6789:6789/tcp"
      - "8880:8880/tcp"
      - "8843:8843/tcp"
      - "8080:8080/tcp"
      - "8443:8443/tcp"
      - "10001:10001/udp"

networks:
  unifi:
```
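The `env/unifi.env` file referenced above isn't included in the report. For an external mongodb it would contain something along these lines, a minimal sketch using the variable names from the image's README (the values here are placeholders, not the reporter's actual configuration):

```sh
# env/unifi.env -- hypothetical values for illustration only
DB_URI=mongodb://unifi-mongo/unifi
STATDB_URI=mongodb://unifi-mongo/unifi_stat
DB_NAME=unifi
```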
I think this is related to #165
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Go away stalebot.
Watching this. I'll try to come take a closer look this weekend.
No worries. I haven't had much time to dive into the details of why it isn't working exactly. On my kubernetes cluster I just deal with it by restoring a site backup whenever the controller restarts.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Go away stalebot.
Was this fixed with PR #419?
There is an issue with `docker-entrypoint.sh` that does not properly handle a single volume mapped to `/unifi` (i.e. not `/unifi/data`, `/unifi/log`, etc. directly, but using a single `/unifi` to cover them all) while also defining the env vars needed to specify an externally hosted mongodb instance (or, more incidentally, `LOTSOFDEVICES`).

The values in `system.properties` are not properly set on first run, which results in a container-local mongo instance being spun up and used instead. An unsuspecting user (like me) would then proceed to set up the UniFi instance, and then, after reinstantiating the container (whereupon `system.properties` is configured properly this time to use the external instance), find none of the data is there.

The root cause is that the log and data directories are created after `system.properties` is updated. On first run (with a pristine `/unifi` volume being mounted), `confSet` fails because `/unifi/data` doesn't exist, resulting in a default `system.properties`. `/unifi/data` is then created and the controller started, but because we're using the defaults, the container-local mongodb instance is started and used. The next time the container is instantiated, `/unifi/data` exists, so `confSet` succeeds, causing the controller to point to the external instance as previously expected.
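To make the ordering problem concrete, here is a minimal sketch of the fix, not the actual entrypoint: `confSet` is a simplified stand-in for the script's helper, and the property keys mirror the controller's `system.properties` settings for an external mongodb. The env var names (`DB_URI`, `STATDB_URI`, `DB_NAME`) are taken from the image's README.

```bash
#!/usr/bin/env bash
# Sketch only: illustrates the ordering fix described above, not the
# real docker-entrypoint.sh.

BASEDIR=/unifi
DATADIR="${BASEDIR}/data"
LOGDIR="${BASEDIR}/log"

# Simplified stand-in for the entrypoint's confSet helper: upsert a
# key=value pair in system.properties. It can only succeed once the
# data directory exists.
confSet() {
  local key="$1" value="$2" file="${DATADIR}/system.properties"
  if grep -q "^${key}=" "${file}" 2>/dev/null; then
    sed -i "s|^${key}=.*|${key}=${value}|" "${file}"
  else
    echo "${key}=${value}" >>"${file}"
  fi
}

# The fix: create the directories (and an empty system.properties)
# BEFORE any confSet call. The buggy ordering ran confSet first, so on
# a pristine /unifi volume the writes were lost and the defaults (a
# container-local mongodb) were used instead.
mkdir -p "${DATADIR}" "${LOGDIR}"
touch "${DATADIR}/system.properties"

# With the directory in place, the external-mongo settings from the
# environment stick on the very first run.
if [ -n "${DB_URI}" ]; then
  confSet db.mongo.local false
  confSet db.mongo.uri "${DB_URI}"
  confSet statdb.mongo.uri "${STATDB_URI}"
  confSet unifi.db.name "${DB_NAME}"
fi
```

The essential change is just that `mkdir -p` runs before the first `confSet` call, so a first run against a pristine `/unifi` volume writes the external-mongo settings instead of silently falling back to the bundled instance.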