After running a somewhat sporadic third backup of my data for a while, I decided to improve it and try to live up to the 3-2-1 rule.

Three copies of data, on two different types of media, where one copy is kept offsite.

While this can be done in numerous ways, with or without ZFS (alternatives: sanoid/syncoid, PBS, Borg, Restic, Kopia), I already run TrueNAS as my main NAS, so setting up a second TrueNAS box made things very easy.

My janky TrueNAS offsite box. USB drives are not recommended, but this is what I had lying around, so it will do for offsite. Janky, but hopefully it gets the job done.

Required preparations

I did some rearranging of my datasets, as my data sat in a few overly large datasets without any clear structure. I created some new datasets and started moving my data into them, to allow more granular control over what gets replicated and how. Another thing to consider is how the connection to/from the offsite machine is to be made - I settled on WireGuard and/or Tailscale for this.

Remember: if you move around or remove a lot of data, you also need to remove the old snapshots referencing it to actually free up space, as all the moved data stays allocated in them.
If you delete a file, the file is still there - referenced by a snapshot instead of by the filesystem.
If you move or change a file, a new file is actually created - while the original stays referenced in snapshots.
Hence, if you do a lot of moving/deleting/changing of files, snapshots will get large!
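
A quick way to see how much space old snapshots are still holding, and to prune them, is from the shell (dataset and snapshot names below are just examples):

# show how much space is used by snapshots vs. the live data
zfs list -o space tank/media/movies
# list the individual snapshots and how much each one holds
zfs list -t snapshot -r -o name,used,referenced tank/media/movies
# destroy a snapshot you no longer need to actually free the space
zfs destroy tank/media/movies@auto-2024-01-01_00-00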

SSH-connection setup in TrueNAS

First we have to set up an SSH connection between the machines - I will call them SOURCE and DESTINATION.

Create SSH Keypair on SOURCE

  • @Credentials/Backup Credentials/SSH Keypairs
    • Name: Descriptive name.
    • Click Generate Keypair
      • Note them down or come back to grab them here later.

Setup SSH-key authorization on DESTINATION

  • @Credentials/Local Users/root (can be another user with the right permissions)
    • Edit
      • Authorized Keys: Paste the public key from the SOURCE keypair
      • Enable Allow all sudo commands
      • Enable Allow all sudo commands with no password
        • These can and should probably be narrowed down, but I have not experimented with it yet.
    • Save

Create SSH connection on SOURCE

  • @Credentials/Backup Credentials/SSH Connection
    • Name: A descriptive name, e.g. “SSH to DESTINATION”
    • Setup Method: Manual
    • Host: DESTINATION IP
    • Port: The SSH port open towards the DESTINATION; edit this later if it changes.
    • Username: The user selected on the DESTINATION
    • Private Key: Select the keypair generated previously
    • Click Discover Remote Host (this requires the keys to be properly set up)
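
If Discover Remote Host fails, it can help to test the connection manually from a shell on SOURCE; a minimal check, assuming the key is saved locally and root is the DESTINATION user, looks like this:

# verify key-based login and that ZFS is reachable on the DESTINATION
ssh -i /path/to/source_private_key -p 22 root@DESTINATION_IP "zfs list"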

Setup Snapshot Tasks

Here you could back up a parent dataset with all its child datasets recursively, but the reason I rearranged and split up my data was to be able to choose a different schedule, retention time and replication for each dataset. So if I’ve got this layout:

tank
    tank/backups        
        tank/backups/photos
        tank/backups/machines
    tank/media          
        tank/media/downloads
        tank/media/movies   
        tank/media/series  

Then I can choose to have frequent snapshots of the “machines” dataset and less frequent ones of “photos” (it doesn’t change as often), and maybe replicate “movies” and “series” daily while not replicating “downloads” at all.
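
For reference, the same layout can also be created from a shell with plain zfs create commands - in TrueNAS you would normally do this in the GUI under Datasets, so treat this purely as an illustration of the structure above:

# parent datasets first, then the children
zfs create tank/backups
zfs create tank/backups/photos
zfs create tank/backups/machines
zfs create tank/media
zfs create tank/media/downloads
zfs create tank/media/movies
zfs create tank/media/series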

  • @Data Protection/Periodic Snapshot Tasks
  • Dataset: Choose which dataset to snapshot, e.g. tank/media/movies
  • Snapshot Lifetime: How long a snapshot is kept before it is deleted.
  • Recursive: Snapshot the parent dataset and all child datasets underneath it.
    • Exclude: Child datasets to exclude from the snapshot, e.g. if “tank/media” is chosen, exclude the “downloads” child dataset.
  • Naming Schema: I kept the default auto naming, customize if you’d like.
  • Schedule: How often the snapshots will be created - I recommend spreading the tasks out.
    • For example, if daily, have one task snapshot at 01:00, the next at 02:00, and so on.
  • Allow Taking Empty Snapshots: I use this for consistency - but it can create a lot of snapshot clutter.
Make this setup for any datasets you wish to replicate (honestly, you should have snapshots on all your datasets, even the ones you don’t replicate).
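
Under the hood a Periodic Snapshot Task boils down to ordinary ZFS snapshots, so a manual equivalent looks roughly like this (the exact snapshot names depend on the naming schema you picked, so these are just examples):

# recursive snapshot of a dataset and all its children
zfs snapshot -r tank/media@auto-2024-01-01_01-00
# verify the snapshots exist
zfs list -t snapshot -r tank/media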

Setup Replication Task

  • @Data Protection/Replication Tasks
  • Name: A descriptive name, e.g. “movies to backupNAS”
  • SSH Connection: The one created previously, e.g. SSH to DESTINATION
    • Direction: PUSH (or PULL if replicating in the other direction)
    • Stream Compression: Optional, I chose lz4 to not hamper my low-grade CPU.
    • Transport: SSH
    • Number of Retries: I decided on 20.
  • Source: e.g. tank/media/movies
  • Destination: e.g. vault/media/movies - this will be browsable through the SSH connection (select the parent dataset vault/media and add /movies to the path).
    • Check Include Dataset Properties
    • Recursive: (optional) to replicate all child datasets in the same job.
    • Use Sudo for ZFS Commands: (optional) Might be necessary if the DESTINATION user is not root.
    • Destination Dataset Read-only Policy: I selected SET to always keep the destination dataset read-only for safety.
    • Check Encryption - read more below (or Inherit Encryption if the destination is already encrypted)
    • Encryption Key Format: Passphrase // Hex - I selected Hex, generated a key and noted it down.
  • Periodic Snapshot Tasks: Pick the previously created snapshot task, e.g. tank/media/movies
    • Snapshot Retention Policy: I selected Same as Source, but this can be customized for longer/shorter retention.
    • Matching naming schema: To automatically use the same naming schema as the snapshot task.
    • Replication Schedule: Run Automatically (to run every time a snapshot is created)
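
For reference, the Replication Task is roughly a zfs send/receive over the SSH connection; a manual sketch might look like this (names are from the examples above, and TrueNAS handles properties, encryption and incrementals for you):

# full send of the first snapshot from SOURCE to DESTINATION over SSH
zfs send -p tank/media/movies@auto-2024-01-01_01-00 | ssh root@DESTINATION_IP zfs receive -u vault/media/movies
# later snapshots only need the incremental difference
zfs send -p -i @auto-2024-01-01_01-00 tank/media/movies@auto-2024-01-02_01-00 | ssh root@DESTINATION_IP zfs receive -u vault/media/movies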

Info on Destination Only Encryption:
When choosing the destination, don’t create the destination dataset on the DESTINATION beforehand; instead pick the parent dataset and manually append the name of the new dataset. In this example, pick the “vault/media” parent dataset and then write “/movies”.
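
If you prefer to generate the hex key yourself instead of letting the GUI do it, 32 random bytes written as 64 hex characters is the expected format; for example:

# generate a 256-bit key as 64 hex characters
openssl rand -hex 32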

When you’ve created your Replication Task, you can test it manually with the “Run Now” button under Data Protection/Replication Tasks. This requires a periodic snapshot to exist; if you’re just labbing, set the snapshot schedule to something close and wait for it.
Look through the logs if you encounter errors to see what went wrong - it’s usually SSH or incorrect paths.
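
After a successful run you can sanity-check the result from a shell on the DESTINATION (names from the example above):

# the replicated snapshots should show up under the destination dataset
zfs list -t snapshot -r vault/media/movies
# and with the read-only policy set to SET, the dataset should be read-only
zfs get readonly vault/media/movies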

Setup VPN

Tailscale

Before this you’ll need a Tailscale account set up and an auth key generated. Log in to your account, go to @Settings/Keys/Generate auth key and save this key.

Install TrueNAS App

  • @Application/Discover Apps/Tailscale
  • Install Tailscale on both machines, then open the Tailscale settings.
    • Auth Key: The previously created key.
    • Hostname: the hostname you wish each server to show up as.
    • Uncheck Userspace
    • Check Host Network to allow Tailscale to use the hosts network.

Make sure the app is running and connecting.
Back in the Tailscale admin console, click Machines and verify that your machines show up. Click the dots and Disable key expiry.
I’ve chosen to use ACLs rather than a “flat” full-access Tailscale network, so I have to add my machines and ports to the ACLs.

To be able to manage the remote server, make sure the web UI listens on all IPs.

  • @System Settings/General/GUI/Settings
  • Make sure the web UI listens on all IPs (so it is reachable over the Tailscale IP)
    • Web Interface IPv4 Address: 0.0.0.0

When it’s all set up, check the Tailscale IP of your remote host and edit the SSH connection on the SOURCE machine.

  • @Credentials/Backup Credentials/SSH Connection
    • Host: DESTINATION IP - set this to the Tailscale IP
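
If you have shell access to whatever is running the Tailscale client, the assigned address can also be looked up from the CLI (on TrueNAS the app runs containerized, so the Tailscale admin console is usually the easier place to find it):

# show this node's Tailscale IPv4 address
tailscale ip -4
# show all peers and their Tailscale IPs
tailscale status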

WireGuard - alternative solution

For this you need to open an external port on the SERVER-side network - the server side can be either SOURCE or DESTINATION, whichever fits your setup.

Create a WireGuard configuration - I’ll try to be brief; you’ll need to know what you’re doing here anyway.

Create key-pair on the SERVER machine:

wg genkey | tee privatekey | wg pubkey > publickey
cat privatekey publickey 
    oB72eXVT0GOnr5qPjSnDvL2/oBJQEEv2PktXQxKHuV8=
    qYh67BAr9itAFPFq+/idgcIFQPXvDLtVvzr0qEah3Gs=

wg0-server.conf example
add SERVER private key and CLIENT public key (created in the next step)

[Interface]
Address = 10.123.0.1/24
# SERVER privkey:
PrivateKey = oB72eXVT0GOnr5qPjSnDvL2/oBJQEEv2PktXQxKHuV8=
ListenPort = 51820

[Peer]
# CLIENT (Peer1) pubkey:
PublicKey = wc7watVQrLPICepcp4oE4jvezMsntrLKRiozzVZ3TTQ=
AllowedIPs = 10.123.0.2/32

PersistentKeepalive = 25

Create another key-pair on the CLIENT machine:

wg genkey | tee privatekey | wg pubkey > publickey
cat privatekey publickey 
    EM5pwJXVkfYl2pXAZghu11rAWOYuQsxykdchapkAZ0g=
    wc7watVQrLPICepcp4oE4jvezMsntrLKRiozzVZ3TTQ=

wg0-peer1.conf example
add CLIENT private key and SERVER public key

[Interface]
Address = 10.123.0.2/24
# CLIENT privkey:
PrivateKey = EM5pwJXVkfYl2pXAZghu11rAWOYuQsxykdchapkAZ0g=
ListenPort = 51820

[Peer]
# SERVER pubkey:
PublicKey = qYh67BAr9itAFPFq+/idgcIFQPXvDLtVvzr0qEah3Gs=
AllowedIPs = 10.123.0.1/32
Endpoint = sub.domain.tld:51820

PersistentKeepalive = 25
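
Once both configs are in place, you can bring the tunnel up by hand and verify it before wiring it into an init script (paths match the examples above):

# on the SERVER
wg-quick up /path/to/wg0-server.conf
# on the CLIENT
wg-quick up /path/to/wg0-peer1.conf
# on either side: check for a recent handshake and transfer counters
wg show
# from the CLIENT: verify connectivity to the SERVER's tunnel IP
ping 10.123.0.1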

Then create an Init/Shutdown script to automatically bring the connection up. @System Settings/Advanced/Init-Shutdown Scripts

  • Description: e.g. WireGuard Connect
  • Type: Command
  • Command: wg-quick up /path/to/wg0-peer1.conf (use wg0-server.conf on the SERVER)
  • When: Postinit
  • Timeout: 10

Do this on both machines with corresponding configuration files.

You might need to create an extra script to reconnect if your endpoint uses a dynamic IP, as the connection has to be re-established when the IP changes. This HERE can be used as a solution or for inspiration.
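
As a rough idea of what such a script can look like, here is a minimal sketch that simply re-applies the endpoint so WireGuard resolves the hostname again; the interface name (derived from the config filename), peer key and endpoint are the ones from the example configs and would need adjusting:

#!/bin/sh
# run periodically (e.g. from a cron job) on the CLIENT side
# re-setting the endpoint forces a fresh DNS lookup of the hostname
wg set wg0-peer1 peer "qYh67BAr9itAFPFq+/idgcIFQPXvDLtVvzr0qEah3Gs=" endpoint "sub.domain.tld:51820"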




Once you’ve done all the setup, don’t forget to edit the SSH connection’s IP. Then test it manually again under Data Protection/Replication Tasks and click Run Now.

Extra: manually replicate over netcat

This can be useful if you’d just like to clone a dataset to another pool, either for non-scheduled backups or migration.

Adjust pool/dataset names and IPs/ports as required.

# create the snapshot on the SOURCE machine
zfs snapshot pool/dataset@relocate
# start the receiving side on the DESTINATION machine first
nc -w 30 -l 8023 | mbuffer -q -s 1024k -m 1G | pv -rtab | zfs receive -vF pool/dataset
# then start the send on the SOURCE machine (-w 30 sets a 30 second connect/idle timeout)
zfs send pool/dataset@relocate | mbuffer -q -s 1024k -m 1G | pv -b | nc -w 30 XX.XX.XX.XX 8023
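
Note that netcat sends the stream unencrypted, so keep this to a trusted LAN. If the source dataset keeps changing during the transfer, you can catch up afterwards with an incremental send over the same kind of pipe (snapshot names are examples):

# on the DESTINATION, start listening again
nc -w 30 -l 8023 | mbuffer -q -s 1024k -m 1G | pv -rtab | zfs receive -vF pool/dataset
# on the SOURCE, take a second snapshot and send only the difference
zfs snapshot pool/dataset@relocate2
zfs send -i @relocate pool/dataset@relocate2 | mbuffer -q -s 1024k -m 1G | pv -b | nc -w 30 XX.XX.XX.XX 8023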
