Any caveats when chaining two instances?
I plan to run pacoloco & apt-cacher-ng on a VPS that has reflector scheduled via cron.
Locally I run a second layer of both on my NAS inside docker.
The idea is that my local VMs & servers always have a single, optimal connection to the NAS, and the NAS in turn requests the same packages from my upstream VPS in the cloud.
Will there be any problems with this setup? Does it even make sense?
Honestly, running pacoloco on your VPS for the VPS itself doesn't make much sense unless you have many VMs on that VPS which use pacman and are updated regularly.
If you plan to use pacoloco as a mirror (that is, with a setup such as vps[pacoloco] <--> internet <--> nas), be ready to run into more than a few issues. At the moment there are some concurrent-download issues that still need to be solved.
Pacoloco makes sense when the machine to be updated has a fast connection to the machine running pacoloco but a slow connection to the internet (internet <--> pacoloco (local network) <--> local VMs/servers/other machines in your local network that use pacman).
If I have understood your setup correctly, you'd be better off running pacoloco on your NAS (possibly inside docker), so that your local VMs & servers can be updated faster.
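For what it's worth, if you do try the chained setup, a minimal sketch of the NAS-side pacoloco config could look something like the one below. The hostnames vps.example.com and nas.local, the port, and the paths are placeholders I made up for illustration, and it assumes the VPS instance exposes its cache under the usual /repo/<name> prefix; check the pacoloco README for the exact option names.

```yaml
# /etc/pacoloco.yaml on the NAS (hypothetical sketch; hostnames and paths are placeholders)
port: 9129
cache_dir: /var/cache/pacoloco
purge_files_after: 2592000   # drop packages that have not been requested for 30 days
repos:
  archlinux:
    # Point the NAS instance at the VPS instance instead of a public mirror;
    # vps.example.com stands in for your real VPS hostname.
    url: http://vps.example.com:9129/repo/archlinux

# Local VMs/servers would then point pacman at the NAS, e.g. in /etc/pacman.d/mirrorlist:
#   Server = http://nas.local:9129/repo/archlinux/$repo/os/$arch
```

On the VPS side, the pacoloco instance would point at a regular public mirror (or, presumably, at the reflector-generated /etc/pacman.d/mirrorlist if you use the mirrorlist option), so the chain is client -> NAS pacoloco -> VPS pacoloco -> mirror.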