Can’t access TrueNAS/FreeNAS over VPN

There was an issue accessing a TrueNAS device over the VPN. The VPN was assigning an IP address outside the network available to the TrueNAS host. In my case:

  1. The VPN-assigned IP address is in the range 172.16.0.0/24
  2. The network for TrueNAS is in the range 10.0.0.0/16
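
As a quick sanity check, Python's standard-library ipaddress module shows that a VPN client address (172.16.0.10 is a made-up example from the VPN range) is not part of the TrueNAS subnet, which is why return traffic has nowhere to go without a route:

```python
import ipaddress

vpn_client = ipaddress.ip_address("172.16.0.10")    # hypothetical address handed out by the VPN
truenas_net = ipaddress.ip_network("10.0.0.0/16")   # subnet the TrueNAS interface lives on

# Prints False: the VPN client is not on-link, so TrueNAS needs an explicit route
# back toward the VPN subnet before it can answer the request.
print(vpn_client in truenas_net)
```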

Since the VPN address is outside the CIDR block for the TrueNAS IP address subnet, TrueNAS has no route back to the VPN client and can’t respond to the incoming request. To fix this, add a Static Route for TrueNAS. To add a Static Route, expand the Network tab in the left-hand menu and select Static Routes.

The left main menu in TrueNAS core with the Network tab expanded and the Static Routes tab within Network selected

From the Static Routes screen, click Add in the top right of the new screen. After that the following form will appear:

  • Destination (integer) – Use the format A.B.C.D/E where E is the CIDR mask. In the example above it would be 172.16.0.0/24.
  • Gateway (integer) – Enter the IP address of the gateway. In the example above it would be 10.0.0.150 (150 is my gateway).
  • Description (string) – Notes or identifiers describing the route.
The form fields for adding a static route in TrueNAS

After the fields are populated correctly, click “Submit” and the VPN connections should now be able to reach the TrueNAS core device.
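
If you prefer to script the change, the same route can presumably be added through the TrueNAS v2.0 REST API with a tool like Python's requests. The endpoint and field names below are assumptions based on that API, so verify them against the API docs on your own system:

```python
import requests

TRUENAS = "https://truenas.local"   # hypothetical hostname of the TrueNAS box
API_KEY = "1-xxxx"                  # API key created under Settings > API Keys

# Endpoint and field names are assumptions based on the TrueNAS v2.0 REST API.
resp = requests.post(
    f"{TRUENAS}/api/v2.0/staticroute",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "destination": "172.16.0.0/24",   # the VPN subnet from the example above
        "gateway": "10.0.0.150",          # the local gateway
        "description": "Route back to VPN clients",
    },
    verify=False,  # only if TrueNAS is using a self-signed certificate
)
resp.raise_for_status()
print(resp.json())
```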

Home Lab – a lie I tell myself

Since March of 2020, I’ve been working on building out a homelab. Something about being inside a little more drove me to want to work with the computers at home. Normally that free time would be spent at community events or giving presentations. Something had to fill the void, and a homelab was it.

At first, the goal was simple: learn about different server technologies and edge computing by building a “data center” in a closet. The “data center” part is where most at-home sysadmins fall into a bottomless pit of self-hosted technologies, and I am no different. First it is a home media server, then a dashboard, then a database, then a data system, then a clustered set of systems, and suddenly there is a need for documentation for a lab you built yourself, because it has become too much to handle at once.

This will, hopefully, be the first of many posts about home labs written from personal and professional experience. Throughout the series there should be a showcase of how to use home labs for:

  • Home Media
  • Automation
  • Edge Computing
  • Archiving
  • Game Servers
  • Development Servers
  • Access and Control
  • Dynamic Public Cloud Integration
  • Redundancy and Disaster Recovery
  • and… more

The first and foremost discussion to have is price control. Enterprise server contracts can start at seven figures. If that is what you are looking for, then this is not the blog you seek. What this blog will focus on is how to keep a relatively low budget. Let’s see how far we can make the homelab budget go!

At this point, you may be wondering why the title is “Home Lab – a lie I tell myself”. When this journey started, this was a 1-2 tower server adventure. Over time it has exploded, both in scope and in price. The name “homelab” no longer describes the system I’ve built, not just in size but also because it is no longer at my home. Hopefully, this series can serve as both enablement and deterrent to an ever-expanding homelab.

Some helpful resources I used when I got started:

TrueNAS Azure Sync for Proxmox

Previously, we discussed TrueNAS NFS for Proxmox. Now that Proxmox is using TrueNAS for storage, a Cloud Sync Task can be used to copy the TrueNAS NFS share to Azure Blob Storage as a backup. The following steps are required:

  • Create Azure Blob Storage Account
  • Create TrueNAS Cloud Credentials
  • Create Cloud Sync Tasks

Create Azure Blob Storage Account

Create a storage account

Every storage account must belong to an Azure resource group. A resource group is a logical container for grouping your Azure services. When you create a storage account, you have the option to either create a new resource group, or use an existing resource group. This article shows how to create a new resource group.

A general-purpose v2 storage account provides access to all of the Azure Storage services: blobs, files, queues, tables, and disks. The steps outlined here create a general-purpose v2 storage account, but the steps to create any type of storage account are similar. For more information about types of storage accounts and other storage account settings, see Azure storage account overview.

Portal

To create a general-purpose v2 storage account in the Azure portal, follow these steps:

  1. On the Azure portal menu, select All services. In the list of resources, type Storage Accounts. As you begin typing, the list filters based on your input. Select Storage Accounts.
  2. On the Storage Accounts window that appears, choose Add.
  3. On the Basics tab, select the subscription in which to create the storage account.
  4. Under the Resource group field, select your desired resource group, or create a new resource group. For more information on Azure resource groups, see Azure Resource Manager overview.
  5. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
  6. Select a location for your storage account, or use the default location.
  7. Select a performance tier. The default tier is Standard.
  8. Set the Account kind field to Storage V2 (general-purpose v2).
  9. Specify how the storage account will be replicated. The default replication option is Read-access geo-redundant storage (RA-GRS). For more information about available replication options, see Azure Storage redundancy.
  10. Additional options are available on the Networking, Data protection, Advanced, and Tags tabs. To use Azure Data Lake Storage, choose the Advanced tab, and then set Hierarchical namespace to Enabled. For more information, see Azure Data Lake Storage Gen2 Introduction.
  11. Select Review + Create to review your storage account settings and create the account.
  12. Select Create.

The following image shows the settings on the Basics tab for a new storage account:

Screenshot showing how to create a storage account in the Azure portal
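
For anyone who would rather script this step, here is a minimal sketch using the Azure Python SDK (azure-identity and azure-mgmt-storage). The subscription, resource group, account name, and region are placeholders, not values from this post:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku

credential = DefaultAzureCredential()
client = StorageManagementClient(credential, "<subscription-id>")   # hypothetical subscription

# Create a general-purpose v2 (StorageV2) account with RA-GRS replication,
# mirroring the portal defaults described above.
poller = client.storage_accounts.begin_create(
    "homelab-rg",        # hypothetical resource group
    "truenasbackup01",   # must be globally unique, 3-24 lowercase letters and numbers
    StorageAccountCreateParameters(
        location="eastus",
        kind="StorageV2",
        sku=Sku(name="Standard_RAGRS"),
    ),
)
print(poller.result().provisioning_state)
```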

Create a container

To create a container in the Azure portal, follow these steps:

  1. Navigate to your new storage account in the Azure portal.
  2. In the left menu for the storage account, scroll to the Blob service section, then select Containers.
  3. Select the + Container button.
  4. Type a name for your new container. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character. For more information about container and blob names, see Naming and referencing containers, blobs, and metadata.
  5. Set the level of public access to the container. The default level is Private (no anonymous access).
  6. Select OK to create the container.

Screenshot showing how to create a container in the Azure portal
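
The container can also be created with the azure-storage-blob package; a short sketch follows, with the connection string shown only as a placeholder:

```python
from azure.storage.blob import BlobServiceClient

# Connection string copied from the storage account's "Access keys" blade (placeholder here).
conn_str = (
    "DefaultEndpointsProtocol=https;AccountName=<account>;"
    "AccountKey=<key>;EndpointSuffix=core.windows.net"
)
service = BlobServiceClient.from_connection_string(conn_str)

# Container names must be lowercase and may contain only letters, numbers, and dashes.
container = service.create_container("truenas-backup")
print(container.url)
```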

Create TrueNAS Cloud Credentials

To begin integrating TrueNAS with a Cloud Storage provider, register the account credentials on the system. After saving any credentials, a Cloud Sync Task allows sending or receiving data from that Cloud Storage Provider.

Saving a Cloud Storage Credential

Transferring data from TrueNAS to the Cloud requires saving Cloud Storage Provider credentials on the system.

It is recommended to have another browser tab open and logged in to the Cloud Storage Provider account you intend to link with TrueNAS. Some providers require additional information that is generated on the storage provider account page. For example, saving an Amazon S3 credential on TrueNAS could require logging in to the S3 account and generating an access key pair on the Security Credentials > Access Keys page.

To save cloud storage provider credentials, go to System > Cloud Credentials and click Add.

Using the Azure Portal, we can retrieve the storage account’s access keys.
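
With the storage account name and one of its access keys in hand, the credential can also be registered over the TrueNAS REST API. This is only a sketch; the provider name and attribute keys are assumptions based on the v2.0 API and should be checked against your version's documentation:

```python
import requests

TRUENAS = "https://truenas.local"   # hypothetical hostname
API_KEY = "1-xxxx"                  # TrueNAS API key

# Provider name and attribute keys are assumptions for the Azure Blob provider.
resp = requests.post(
    f"{TRUENAS}/api/v2.0/cloudsync/credentials",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "name": "azure-backup",
        "provider": "AZUREBLOB",
        "attributes": {
            "account": "truenasbackup01",           # storage account name
            "key": "<storage-account-access-key>",  # key from the Access keys blade
        },
    },
    verify=False,
)
resp.raise_for_status()
print(resp.json())
```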

Create Cloud Sync Tasks

TrueNAS can send, receive, or synchronize data with a Cloud Storage provider. Cloud Sync tasks allow for one-time transfers or recurring transfers on a schedule, and are an effective method to back up data to a remote location.

Go to Tasks > Cloud Sync Tasks and click Add.

Screenshot of the Add Cloud Sync Task form

Give the task a memorable Description and select an existing cloud Credential. TrueNAS connects to the chosen Cloud Storage Provider and shows the available storage locations. Decide if data is transferring to (PUSH) or from (PULL) the Cloud Storage location (Remote), then choose a Transfer Mode (SYNC, COPY, or MOVE; each is described in the Transfer settings below).

Next, control when the task runs by defining a Schedule. When a specific Schedule is required, choose Custom and use the Advanced Scheduler.

Unsetting Enabled makes the configuration available without allowing the Schedule to run the task. To manually activate a saved task, go to Tasks > Cloud Sync Tasks, expand the task entry, and click RUN NOW.

The remaining options allow tuning the task to your specific requirements.

Transfer

  • Description – Enter a description of the Cloud Sync Task.
  • Direction – PUSH sends data to cloud storage. PULL receives data from cloud storage. Changing the direction resets the Transfer Mode to COPY.
  • Transfer Mode – SYNC: Files on the destination are changed to match those on the source. If a file does not exist on the source, it is also deleted from the destination. COPY: Files from the source are copied to the destination. If files with the same names are present on the destination, they are overwritten. MOVE: After files are copied from the source to the destination, they are deleted from the source. Files with the same names on the destination are overwritten.
  • Directory/Files – Select the directories or files to be sent to the cloud for Push syncs, or the destination to be written for Pull syncs. Be cautious about the destination of Pull jobs to avoid overwriting existing files.

Remote

  • Credential – Select the cloud storage provider credentials from the list of available Cloud Credentials.

Control

  • Schedule – Select a schedule preset or choose Custom to open the advanced scheduler.
  • Enabled – Enable this Cloud Sync Task. Unset to disable this Cloud Sync Task without deleting it.

Advanced Options

  • Follow Symlinks – Follow symlinks and copy the items to which they link.
  • Pre-Script – Script to execute before running sync.
  • Post-Script – Script to execute after running sync.
  • Exclude – List of files and directories to exclude from sync. Separate entries by pressing Enter. See rclone filtering for more details about the --exclude option.

Advanced Remote Options

  • Remote Encryption – PUSH: Encrypt files before transfer and store the encrypted files on the remote system. Files are encrypted using the Encryption Password and Encryption Salt values. PULL: Decrypt files that are being stored on the remote system before the transfer. Transferring the encrypted files requires entering the same Encryption Password and Encryption Salt that was used to encrypt the files. Additional details about the encryption algorithm and key derivation are available in the rclone crypt File formats documentation.
  • Transfers – Number of simultaneous file transfers. Enter a number based on the available bandwidth and destination system performance. See rclone --transfers.
  • Bandwidth limit – A single bandwidth limit or bandwidth limit schedule in rclone format. Separate entries by pressing Enter. Example: 08:00,512 12:00,10MB 13:00,512 18:00,30MB 23:00,off. Units can be specified with the beginning letter: b, k (default), M, or G. See rclone --bwlimit.
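
For completeness, a Cloud Sync Task can likely be created programmatically as well. The sketch below maps the form fields described above onto a TrueNAS v2.0 REST API call; every field name here is an assumption to verify against your system's API docs:

```python
import requests

TRUENAS = "https://truenas.local"   # hypothetical hostname
API_KEY = "1-xxxx"

# Field names mirror the form: Description, Direction, Transfer Mode, Directory/Files,
# Credential, Schedule, and Enabled. Treat them as assumptions, not a guaranteed schema.
task = {
    "description": "Nightly push to Azure Blob",
    "direction": "PUSH",
    "transfer_mode": "COPY",
    "path": "/mnt/tank/proxmox",                           # hypothetical dataset to back up
    "credentials": 1,                                      # id of the Cloud Credential saved earlier
    "attributes": {"bucket": "truenas-backup", "folder": ""},
    "schedule": {"minute": "0", "hour": "2", "dom": "*", "month": "*", "dow": "*"},
    "enabled": True,
}

resp = requests.post(
    f"{TRUENAS}/api/v2.0/cloudsync",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=task,
    verify=False,
)
resp.raise_for_status()
print(resp.json())
```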

Scripting and Environment Variables

Advanced users can write scripts that run immediately before or after the Cloud Sync task. The Post-script field is only run when the Cloud Sync task successfully completes. You can pass a variety of task environment variables into the Pre- and Post- script fields:

  • CLOUD_SYNC_ID
  • CLOUD_SYNC_DESCRIPTION
  • CLOUD_SYNC_DIRECTION
  • CLOUD_SYNC_TRANSFER_MODE
  • CLOUD_SYNC_ENCRYPTION
  • CLOUD_SYNC_FILENAME_ENCRYPTION
  • CLOUD_SYNC_ENCRYPTION_PASSWORD
  • CLOUD_SYNC_ENCRYPTION_SALT
  • CLOUD_SYNC_SNAPSHOT

There are also provider-specific variables such as CLOUD_SYNC_CLIENT_ID, CLOUD_SYNC_TOKEN, or CLOUD_SYNC_CHUNK_SIZE.

Remote storage settings:

  • CLOUD_SYNC_BUCKET
  • CLOUD_SYNC_FOLDER

Local storage settings:

  • CLOUD_SYNC_PATH
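
As an illustration, a small hypothetical Post-Script (written here in Python, though a shell script works just as well) could log the outcome of each run using the variables above. The log path is a made-up example:

```python
#!/usr/bin/env python3
"""Hypothetical Cloud Sync Post-Script: append a one-line summary of the finished task."""
import os
from datetime import datetime

# TrueNAS exports these CLOUD_SYNC_* variables into the script's environment.
summary = "{} task={} direction={} mode={} path={}\n".format(
    datetime.now().isoformat(),
    os.environ.get("CLOUD_SYNC_DESCRIPTION", "?"),
    os.environ.get("CLOUD_SYNC_DIRECTION", "?"),
    os.environ.get("CLOUD_SYNC_TRANSFER_MODE", "?"),
    os.environ.get("CLOUD_SYNC_PATH", "?"),
)

with open("/mnt/tank/cloudsync.log", "a") as log:   # hypothetical dataset path
    log.write(summary)
```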

Testing Settings

Test the settings before saving by clicking DRY RUN. TrueNAS connects to the Cloud Storage Provider and simulates a file transfer. No data is actually sent or received. A dialog shows the test status and allows downloading the task logs.

Screenshot of a Cloud Sync Task dry run dialog

Cloud Sync Behavior

Saved tasks are activated according to their schedule or by clicking RUN NOW. An in-progress cloud sync must finish before another can begin. Stopping an in-progress task cancels the file transfer and requires starting the file transfer over.

To view logs about a running or the most recent run of a task, click the task status.

Cloud Sync Restore

To quickly create a new Cloud Sync that uses the same options but reverses the data transfer, expand an existing Cloud Sync and click RESTORE.

Screenshot of the Cloud Sync Task RESTORE option

Enter a new Description for this reversed task and define the path to a storage location for the transferred data.

The restored cloud sync is saved as another entry in Tasks > Cloud Sync Tasks.

TrueNAS NFS for Proxmox

While setting up the server closet at the office, the first thing set up was a TrueNAS Core server. If you are looking for a guide on installing TrueNAS Core, a good one can be found here. With it already in place, it can be used to create an NFS share.

One of the major benefits of NFS is being able to easily migrate a container from one environment to another. Because the container or VM is pulled over the network, any node can host it ad hoc, and migrating VMs or containers takes seconds.

TrueNAS NFS support

Creating a Network File System (NFS) share on TrueNAS gives the benefit of making lots of data easily available for anyone with share access. Depending on how the share is configured, users accessing the share can be restricted to read or write privileges. To create a new share, make sure a dataset is available with all the data for sharing.

Creating an NFS Share

Go to Sharing > Unix Shares (NFS) and click ADD.

Services NFS Add

Use the file browser to select the dataset to be shared. An optional Description can be entered to help identify the share. Clicking SUBMIT creates the share. At the time of creation, you can select ENABLE SERVICE to start the service now and have it start automatically after any reboots. If you wish to create the share but not immediately enable the service, select CANCEL.

Services NFS Add Service Enable
Services NFS Service Enable Success

NFS Share Settings

  • Path (file browser) – Type or browse to the full path to the pool or dataset to share. Click ADD to configure multiple paths.
  • Description (string) – Enter any notes or reminders about the share.
  • All dirs (checkbox) – Set to allow the client to mount any subdirectory within the Path. Leaving disabled only allows clients to mount the Path endpoint.
  • Quiet (checkbox) – Enabling inhibits some syslog diagnostics to avoid error messages. See exports(5) for examples. Disabling allows all syslog diagnostics, which can lead to additional cosmetic error messages.
  • Enabled (checkbox) – Enable this NFS share. Unset to disable this NFS share without deleting the configuration.
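
The same share can presumably be created through the TrueNAS REST API. The sketch below assumes the CORE-era v2.0 endpoint, where "paths" is a list because a share can export multiple paths; verify the field names before relying on them:

```python
import requests

TRUENAS = "https://truenas.local"   # hypothetical hostname
API_KEY = "1-xxxx"

# Endpoint and field names are assumptions based on the TrueNAS CORE v2.0 REST API.
share = {
    "paths": ["/mnt/tank/proxmox"],                # hypothetical dataset to export
    "comment": "Proxmox VM and container storage",
    "enabled": True,
}

resp = requests.post(
    f"{TRUENAS}/api/v2.0/sharing/nfs",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=share,
    verify=False,
)
resp.raise_for_status()
print(resp.json())
```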

To edit an existing NFS share, go to Sharing > Unix Shares (NFS) and click more_vert > Edit. The options available are identical to the share creation options.

Configure the NFS Service

To begin sharing the data, go to Services and click the NFS toggle. If you want NFS sharing to activate immediately after TrueNAS boots, set Start Automatically.

NFS service settings can be configured by clicking Configure.

Services NFS Options
  • Number of servers (integer) – Specify how many servers to create. Increase if NFS client responses are slow. Keep this less than or equal to the number of CPUs reported by sysctl -n kern.smp.cpus to limit CPU context switching.
  • Bind IP Addresses (drop-down) – Select IP addresses to listen to for NFS requests. Leave empty for NFS to listen to all available addresses.
  • Enable NFSv4 (checkbox) – Set to switch from NFSv3 to NFSv4.
  • NFSv3 ownership model for NFSv4 (checkbox) – Set when NFSv4 ACL support is needed without requiring the client and the server to sync users and groups.
  • Require Kerberos for NFSv4 (checkbox) – Set to force NFS shares to fail if the Kerberos ticket is unavailable.
  • Serve UDP NFS clients (checkbox) – Set if NFS clients need to use the User Datagram Protocol (UDP).
  • Allow non-root mount (checkbox) – Set only if required by the NFS client. Set to allow serving non-root mount requests.
  • Support >16 groups (checkbox) – Set when a user is a member of more than 16 groups. This assumes group membership is configured correctly on the NFS server.
  • Log mountd(8) requests (checkbox) – Set to log mountd syslog requests.
  • Log rpc.statd(8) and rpc.lockd(8) (checkbox) – Set to log rpc.statd and rpc.lockd syslog requests.
  • mountd(8) bind port (integer) – Enter a number to bind mountd only to that port.
  • rpc.statd(8) bind port (integer) – Enter a number to bind rpc.statd only to that port.
  • rpc.lockd(8) bind port (integer) – Enter a number to bind rpc.lockd only to that port.

Unless a specific setting is needed, it is recommended to use the default settings for the NFS service. When TrueNAS is already connected to Active Directory, setting NFSv4 and Require Kerberos for NFSv4 also requires a Kerberos Keytab.

Proxmox NFS storage pool

The NFS backend is based on the directory backend, so it shares most properties. The directory layout and the file naming conventions are the same. The main advantage is that you can directly configure the NFS server properties, so the backend can mount the share automatically. There is no need to modify /etc/fstab. The backend can also test if the server is online, and provides a method to query the server for exported shares.

How to add NFS Storage on Proxmox VE | LinuxHelp Tutorials

The backend supports all common storage properties, except the shared flag, which is always set. Additionally, the following properties are used to configure the NFS server:

server – Server IP or DNS name. To avoid DNS lookup delays, it is usually preferable to use an IP address instead of a DNS name – unless you have a very reliable DNS server, or list the server in the local /etc/hosts file.

export – NFS export path (as listed by pvesm nfsscan). You can also set NFS mount options:

path – The local mount point (defaults to /mnt/pve/<STORAGE_ID>/).

options – NFS mount options (see man nfs).
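
The storage pool can be added in the Proxmox web UI under Datacenter > Storage > Add > NFS, or it can be scripted. Below is a minimal sketch using the third-party proxmoxer package; the hostname, credentials, storage ID, server IP, and export path are all placeholders:

```python
from proxmoxer import ProxmoxAPI   # third-party "proxmoxer" package

# Hypothetical connection details; an API token can be used instead of a password.
proxmox = ProxmoxAPI("pve.local", user="root@pam", password="secret", verify_ssl=False)

# Register the TrueNAS NFS export as a storage pool, using the properties described above.
proxmox.storage.post(
    storage="truenas-nfs",         # STORAGE_ID; mounted at /mnt/pve/truenas-nfs by default
    type="nfs",
    server="10.0.0.20",            # hypothetical TrueNAS IP (prefer an IP over a DNS name)
    export="/mnt/tank/proxmox",    # export path as listed by pvesm nfsscan
    content="images,rootdir",      # VM disk images and container filesystems
    options="vers=3,soft",         # NFS mount options (see man nfs)
)
```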

How to setup an NFS Server and configure NFS Storage in Proxmox VE