Migrating nginxproxymanager docker container between hosts

I will surely forget in the upcoming months how I made this work. This is a quick post to remind myself; I hope it can be helpful for someone else as well.


Backstory

I have been putting together a bunch of services in my home network (still premature to call it a homelab) since I learned about nginxproxymanager several months ago. Unfortunately, I did not think through the hierarchy of services, or what hardware to run them on, when I began throwing stuff into my nginxproxymanager. It began as a thing of curiosity, and as a means of learning key docker concepts by practicing.

The hardware my nginxproxymanager container runs on is a node of a Pi 400 k3s cluster. It is not acting as a cluster at the moment, as I am still figuring out a more resilient storage model than keeping the cluster metadata on SD cards. Meanwhile, I threw my nginxproxymanager on one of the nodes.

It turned out to be a mistake. I put the Pi 400s in a custom enclosure that I designed and 3D printed at home. I need to replace the enclosure sometime soon, which means taking the cluster offline, which also means all the services I run on my home network will be offline for a while. Ughhh..

The short term solution was to move nginxproxymanager to another machine. nginxproxymanager is a great tool. There is no built-in support for exporting the entire installation and importing it into another instance, but a migration can be devised in a few steps.

Migrating nginxproxymanager

nginxproxymanager creates symbolic links for a few things. To be honest, I did not keep track of what they all were while I was debugging my issues. The issues all boiled down to broken symlink targets because (drumroll) I zipped my data folder on the Pi 400 host and moved it to a NAS share by copying it to my workstation and unpacking the files with the macOS unarchiver.
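A quick way to spot these before (or after) moving things around is to ask find to list the symlinks along with their targets. Assuming the data folders are in the current directory, something like:

# list every symlink under the two folders together with where it points
find nginxproxymanager letsencrypt -type l -ls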

There were some issues with Let's Encrypt SSL files. I was getting this error in the container log repeatedly:

nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/npm-10/fullchain.pem": PEM_read_bio_X509_AUX() failed (SSL: error:0909006C:PEM routines:get_name:no start line:Expecting: TRUSTED CERTIFICATE)

That fullchain.pem is one of the symlinks, and following the link went nowhere. I am not sure why; somehow I lost them while moving the files from my computer to the NAS share via Finder. I could see the cert file contents on my workstation via Preview, but they were missing on the new server. I still have no idea how a dangling symlink ends up returning a file handle to PEM_read_bio_X509_AUX(), since the error is not "file not found" but a complaint about the file contents. The error message threw me off a bit, but I have learned to expect minor overlooked details to be the real culprit, so I always check the unlikely issues first.
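If you hit the same error, checking the link targets and trying to parse the certificate by hand narrows it down quickly. Something along these lines, assuming the openssl CLI is available inside the image (container name and paths taken from my setup and the error above):

# show where the live symlinks point (normally somewhere under ../../archive/)
docker exec nginxproxymanager ls -l /etc/letsencrypt/live/npm-10/

# a healthy certificate prints its subject and validity dates, a broken link errors out
docker exec nginxproxymanager openssl x509 -noout -subject -dates -in /etc/letsencrypt/live/npm-10/fullchain.pem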

I have the nginxproxymanager and letsencrypt folders saved in my home directory on the server, which is one more reason why I do not like my current setup. I created an NFS share on my NAS for my Docker containers, and have been slowly migrating persistent folders over there.

The following commands will only work if you are using the SQLite3 database saved locally inside the container data folder. Basically, we just back up the container data with tar, rsync the tar files to the migration destination, and unpack them there. Done. Sample commands follow.
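One thing worth doing before creating the archives is stopping the container on the source, so the SQLite database is not copied mid-write. Assuming the compose project directory is the current one:

docker-compose stop nginxproxymanager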

On migration source host:

tar -czvf letsencrypt.tar.gz letsencrypt
tar -czvf nm.tar.gz nginxproxymanager
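tar keeps symlinks as symlinks by default; if you want to double-check that they made it into the archive, listing the contents shows them with a link arrow (this assumes GNU tar):

# symlink entries show up as "lrwxrwxrwx ... name -> target"
tar -tzvf letsencrypt.tar.gz | grep -- '->'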

On migration target host:

rsync --progress -ave ssh user@host:path/nm.tar.gz .
rsync --progress -ave ssh user@host:path/letsencrypt.tar.gz .
tar xzvf letsencrypt.tar.gz
tar xzvf nm.tar.gz
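After unpacking on the target, I would also confirm nothing dangles this time. With GNU find, -xtype l prints only broken symlinks, so no output is good news:

find nginxproxymanager letsencrypt -xtype l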

I built my nginxproxymanager container via docker-compose, so I just needed to transfer my docker-compose file to the other machine, massage it a bit for the NFS volumes, and start it up. It picked up the nginxproxymanager paths and started running without an issue.

My docker-compose.yaml looks like this:

version: "2.1"
services:
  nginxproxymanager:
    container_name: nginxproxymanager
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '81:81'
      - '8080:80'
      - '443:443'
    volumes:
      - nginxproxymanager_fs:/data
      - letsencrypt_fs:/etc/letsencrypt
volumes:
  nginxproxymanager_fs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=<IP of NAS>,nolock,soft,nfsvers=4,rw
      device: :<NFS volume path>/nginxproxymanager_fs
  letsencrypt_fs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=<IP of NAS>,nolock,soft,nfsvers=4,rw
      device: :<NFS volume path>/letsencrypt_fs
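With the data folders in place on the share, bringing it up on the new host is just the usual compose routine:

docker-compose up -d
docker-compose logs -f nginxproxymanager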

I am not sure if this is the proper way of mounting NAS shares in Docker. I wanted to mount a common path for all containers and then use it like mycommonshare/containername/data:/containerdata and so on, but I could not figure out how to mix a volume name with a path. Maybe it is not even a thing; I just have not read the Docker docs in detail yet. The setup above takes care of what I need for now. It is a bit tedious to write out the NFS setup in each docker-compose file, but it gives me the flexibility to mount the shares with different options if I ever need to.
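For reference, the "common path" idea would probably look like mounting the NFS share once on the Docker host and then bind-mounting sub-folders into each container, instead of defining a named NFS volume per compose file. A rough sketch, with /mnt/dockershare as a made-up mount point:

# mount the share once on the Docker host (hypothetical mount point)
sudo mount -t nfs -o nfsvers=4,soft,nolock <IP of NAS>:<NFS volume path> /mnt/dockershare

# then each docker-compose file can use plain bind mounts instead of named volumes:
#   volumes:
#     - /mnt/dockershare/nginxproxymanager_fs:/data
#     - /mnt/dockershare/letsencrypt_fs:/etc/letsencrypt

I have not switched to this yet, so treat it as a sketch rather than how my setup actually runs.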

I also had to set the share's permission mapping to "No mapping" (no squashing) on the NFS share. I have a Synology NAS, and their NFS documentation for DSM 7.x can be found here.
