Can’t access TrueNAS/FreeNAS over VPN

There was an issue accessing a TrueNAS device over the VPN. The VPN was assigning an IP address outside the network available to the TrueNAS host. In my case:

  1. VPN assigned IP address is in range 172.16.0.0/24
  2. Network for TrueNAS is in range 10.0.0.0/16
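To see why this combination fails, you can check whether the VPN-assigned address falls inside the subnet TrueNAS can reach. A minimal sketch using Python’s ipaddress module, with the addresses above (the specific client address is illustrative):

```python
import ipaddress

# The subnet TrueNAS lives on, and an address a VPN client might receive.
truenas_network = ipaddress.ip_network("10.0.0.0/16")
vpn_client = ipaddress.ip_address("172.16.0.10")  # illustrative client address

# Without a static route, TrueNAS has no path back to addresses outside
# its own subnet, so replies to this client are never delivered.
print(vpn_client in truenas_network)  # False -> a static route is needed
```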

Since the VPN address is outside the range of the CIDR block for the TrueNAS subnet, TrueNAS can’t respond to the incoming request. To fix this, add a Static Route for TrueNAS. To add a Static Route, expand the Network tab in the left-hand menu and select Static Routes.

The left main menu in TrueNAS core with the Network tab expanded and the Static Routes tab within Network selected

From the Static Routes screen, click Add in the top right. The following form will appear:

  • Destination (integer) – Use the format A.B.C.D/E, where E is the CIDR mask. In the example above, it would be 172.16.0.0/24.
  • Gateway (integer) – Enter the IP address of the gateway. In the example above, it would be 10.0.0.150 (150 is my gateway).
  • Description (string) – Notes or identifiers describing the route.
The form fields for adding a static route in TrueNAS

After the fields are populated correctly, click Submit, and VPN connections should now be able to reach the TrueNAS Core device.

Home Lab – Keeping Costs Down

Understand the Use Case


If you’re considering building a lab, chances are you’ve got a good idea of what you want to do with it. If not, then let me give you one piece of advice: THINK CAREFULLY BEFOREHAND ABOUT WHAT YOU WANT TO ACHIEVE. 10-GPU “super computers” can be great fun but are woefully unnecessary if you want a web development server. Likewise, a petabyte of redundant storage truly lives up to its name if you’re planning to use it to host a Doom multiplayer server. I know it sounds obvious, but it’s just too easy to get distracted by a great eBay deal and end up with a 900 W paperweight.

I can already hear you shouting, “EBAY HO, BUY ALL THE THINGS”, but let’s just slow down there for one second, soldier. This is a really, really good way to spend a whole lot of money on a stack of stuff that is worth more to a scrapyard than to your homelab. There’s a ton of enterprise gear out there ready for the taking, but do your due diligence or you won’t get the good stuff. Instead, you’ll end up with a server with only one Ethernet management port and no network card. It’s a bad scene when that happens, so let’s try to preclude it.

Common Use Cases


Most home lab users get started with very specific use cases. The starter use cases are usually:

  • Game Servers
  • Media Servers
  • Storage and Archiving
  • Web Hosting
  • Certification Study
  • Remote Access
  • Development Servers
  • Home Automation
  • Cryptocurrency and Other Electric Waste

It is important to understand that your homelab project is not the first of its kind. There are innumerable self-hosted game servers, media servers, and web sites. Before breaking out the credit card, use (at least) a Google search to find a similar project that describes its setup, so you can understand how the underlying hardware is being used. Understanding hardware is key to understanding what you will need instead of what you want or think you may need.

For example, a home media server doesn’t need a 24-port 10 GbE switch. It also doesn’t need an AMD Ryzen Threadripper 3990X (64 cores, 128 threads, 2.9 GHz). Both of those items add expense and power usage without providing a better media server. What matters most is overall disk space and making sure the disks are redundant (disk speed is not of the utmost importance). With this in mind, you can search for a single-server setup that can handle an appropriate amount of disk space for your needs.

Larger Use Cases


Larger use cases (or enterprise use cases) combine multiple use cases from above with new use cases, monitoring, multiple environments, additional access and control mechanisms, and more. An example of a larger use case is an edge computing solution for home automation combined with a home security system. Another could be a web crawler, archiver, and data processor that together form a custom search engine. A personal use case of my home lab is snapshotting/archiving documentation of older products and software so that if that documentation were no longer hosted by a third party, I’d still have access to it.

When diving into larger/enterprise use cases, you often run into a different level of concerns. This particular blog post won’t dive into such details; just understand there are additional costs that come with enterprise setups: backup power, layers of redundancy, DR and backup strategies, mitigation for natural and man-made disasters, and more. Each of these will increase cost and slowly transform a home lab into a datacenter.

Find a Deal


Keep in mind that this post is meant for someone who’s already started labbing, but wants to up their gear to do more and doesn’t know where to begin.

The vast majority of us started with an old PC or leftover parts from a previous upgrade, or maybe from that box your parents didn’t need anymore after they got a new machine. Maybe you volunteered to take it off their hands, cleaned it up, and began using what they left behind. Personally, this is exactly how I got my original NAS.

I’d be very surprised if you’re not already sitting on a pile of old parts in some way, shape, or form. If you weren’t the kind to collect parts, you probably wouldn’t be labbing. Even if not, if all you have is one PC, use it. These days we have VirtualBox, which does a fine job of running just about everything you might want to try out. It might be a bit slow, but you can get started while you wait for your tax return/birthday money/lottery winnings to get here.

The key point is that nothing about learning the basics of homelab setup requires enterprise hardware, except, of course, for learning how enterprise hardware itself is laid out. That has its merits, but most of it can still be learned from building your own PC. Coding, Linux, FreeBSD, Windows Server 2012 R2, containers, hypervisors, networking, storage: all of it can be done with a fairly recent laptop or desktop.

Understanding Hardware


Let’s discuss the different types of hardware options you’ll encounter; hopefully, this will save you from an expensive learning curve. You don’t want to end up with a server so old it doesn’t support virtualization. Older servers can be power hungry and scream every time you turn them on. These tips should help you tell the difference between a $150 paperweight and a $200 deal.

This issue is so important to me because I have watched those uninitiated in homelab quickly lose their enthusiasm after ending up with Pentium 4-era Xeons that are practically worthless. I point this out not to pound beginners into the ground, but to stress that if you don’t research, ask around, and verify what you’re getting, you can end up with worthless hardware without even knowing it. And, trust me, it’s not always easy to see when you might be headed down this path. I speak from experience.

Ask Questions


There are a number of guides on the internet to help with buying used/refurbished/old servers. Using your search engine of choice will lead you on many adventures. It cannot be stressed enough that you should understand your use case before you purchase a machine. Here is a list of questions to ask yourself:

  • What kind of connections does the motherboard provide for hard drives?
    • Does the server have a RAID card?
      • If the RAID card fails, how hard will it be to replace?
      • If a drive fails, how hard will it be to rebuild the RAID array?
      • What is the maximum memory supported by the RAID card?
    • Is this server primarily reading or writing data?
      • Is read or write performance the central focus of this server?
      • What level of redundancy is needed for this data?
      • Can this server use a NAS instead of local hard drives for the non-OS (or all) data?
    • Will this server need to “trust” the hard drives attached to it? (A server may be unable to read the temperature of a third-party hard drive and will assume it is overheating. The fans then go full blast, driving up the energy consumption and noise of the machine. This is a problem in servers like Dells, where a Dell-certified hard drive is expected.)
  • What are the network throughput needs of this project?
    • Is the network card fast enough for this project’s needs? Is the switch/router it is connected to fast enough for this project’s needs?
    • Does the card provide enough ports for the considered management setup?
    • Does it provide redundancy at the card or port level?
    • If the network card fails, how hard will it be to replace?
  • What are the memory needs for the project and what are the memory options provided by the motherboard?
    • Not a question, but a note – use ECC RAM. Servers are not personal-use computers; with multiple workloads running on them, ECC RAM can prevent a systemic crash that destroys all the workloads on the server.
    • Another note – don’t use DDR2 memory. It’s a power hog and getting harder and harder to replace.
    • Does the motherboard accept UDIMM, RDIMM, or LRDIMM, and in what configurations?
    • What RAM is currently available from other projects to reuse?
    • Are any processes or workloads memory intensive or is RAM general use?
  • What level of compute power is needed?
    • Does the motherboard for this project support the expected CPU?
    • Does the CPU support the RAM for this server?
    • Does the CPU support virtual machine passthrough (Intel VT-d or AMD-Vi)?
    • Are vendors readily stocking this CPU?

Places to purchase


The primary place to find “deals” on retired server equipment is eBay. eBay serves as a single point where recyclers, resellers, and refurbishers can sell IT equipment. In fact, most shops will have multiple “stores” that they use so they can sell from a single location with different storefronts. Some shops have a brand name that is its own web store. eBay is the place that I personally go to first when I am bored and want to look at stuff I will never buy.

There are a number of places to check that are not eBay. (It should be noted that this section is written from an American perspective; if you are searching elsewhere, this guide may not be perfectly applicable to buying in your region.)

Local Electronic Recyclers

Electronic recyclers are sometimes tasked with cleaning out old data centers. This leaves the recycler with enterprise servers and networking equipment that need to be sold. Some items are best picked up in person: renting moving equipment and hauling server racks to a house or office space from an electronic recycler can save thousands on such a purchase (from personal experience). Personally, I have built/purchased both my mobile testing platform and my server racks from a local electronics recycler. It’s as simple as setting up an appointment with the recycler and taking a tour of their warehouse. You may find more for sale in there than just the equipment for the project you are planning.

The major benefit of visiting an electronic recycler is that they may be willing to make a deal NOW. You are there, you have money, and they do not need to ship the product. This reduces their costs, and they can pass the savings on to you. However, make sure you can move and transport the items you buy. Server racks can weigh upwards of 400 lbs and may not fit standing up in a standard rental box truck. Make sure whatever you buy will fit not only in the room you purchased it for, but through the doorways leading to that room.

Government Surplus

A government surplus store sells items that are used, or purchased but unused and no longer needed; some also sell items that are past their use-by date. Additionally, there are government auctions for similar property where some amazing deals can be found. Note that these amazing deals are sought after by many personal and professional hobbyists, so don’t expect too much of an amazing deal.

Online Sales

Most of what can be said about online sales in this article has already been stated, and anything not said you likely know from your own online shopping experience. For the sake of being somewhat useful, here is a list sourced from the reddit homelab wiki buying guide:

Using the Cloud


The cloud can be utilized to keep costs down. You read that correctly: the cloud can be used to keep costs down. From a business perspective, it can be utilized to shift capital expenses to operational expenses. For a home lab, it can be used so that $10,000 in equipment cost can instead be spread out month to month over the course of years. As a Microsoft MVP for Azure, I have a good sense of when to use the public cloud versus when to invest in the private cloud. Hopefully, this section can provide a quick guide to when and where your project can benefit from either.

A thought that should be shared is that the entire integration with the public cloud can be dynamic, if you so choose. From the VPN components to the different offerings being consumed (unless there is a need for persistent state), the items can all be created on demand. That said, certain items require physical components and long-term contracts; if your project requires those, it may fall outside the definition of “home lab” being used here. Also, some items, like a VPN Gateway in Azure, may take a half hour to an hour to provision on demand. For a home lab, some pre-planning may be required due to those time constraints (in an enterprise environment, all of those items would be persistent).

For a home lab, the primary purpose is to own and house the equipment running your projects. That being the primary purpose does not mean there are no other benefits to using the public cloud in a hybrid scenario. The following are a few scenarios where using the public cloud could help reduce costs:

Scaling Out

Of the project options listed above, some could benefit from being able to scale out due to demand. Web servers, game servers, development servers, and more may have inconsistent demand. If your project involves an always-on game server and suddenly one thousand of your closest friends plan to play together one night, then there may be a need to scale out beyond the capacity of your home lab.

Assuming the project is set up for this scenario, hosting it in the public cloud may be as simple as changing a public DNS entry and uploading your server’s virtualization configuration to a public cloud provider. An example would be a Minecraft server running inside a container: it can be quickly uploaded to something like Azure Container Instances for the evening and cost a fistful of dollars. Compared to the thousands in hardware costs that would be needed for that one evening, the public cloud can provide the required infrastructure for a fraction of the cost.
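To make that concrete, below is a rough sketch of creating and tearing down such a container group with the Azure Python SDK (azure-identity and azure-mgmt-containerinstance are assumed to be installed). The resource group, names, image, and sizing are illustrative placeholders, not a recommendation:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ContainerPort, EnvironmentVariable,
    IpAddress, OperatingSystemTypes, Port, ResourceRequests,
    ResourceRequirements,
)

client = ContainerInstanceManagementClient(
    DefaultAzureCredential(), subscription_id="<subscription-id>")

group = ContainerGroup(
    location="eastus",
    os_type=OperatingSystemTypes.LINUX,
    restart_policy="OnFailure",
    ip_address=IpAddress(type="Public", ports=[Port(port=25565)]),
    containers=[Container(
        name="minecraft",
        image="itzg/minecraft-server",  # community image, for illustration
        environment_variables=[EnvironmentVariable(name="EULA", value="TRUE")],
        ports=[ContainerPort(port=25565)],
        resources=ResourceRequirements(
            requests=ResourceRequests(cpu=2.0, memory_in_gb=4.0)),
    )],
)

# Spin the server up for the evening...
client.container_groups.begin_create_or_update(
    "homelab-overflow-rg", "minecraft-tonight", group).result()

# ...and tear it down afterwards so the billing stops.
client.container_groups.begin_delete(
    "homelab-overflow-rg", "minecraft-tonight").result()
```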

Another example could be a public web server for a one-day conference. For roughly 360 days of the year, the site will receive one or two hits a day. When the week of the conference arrives, it may suddenly get hundreds or thousands of hits per day. Instead of running the site in the cloud the entire year, or spending enough capital to host the conference-week load in your lab, use the cloud for that week and the home lab (private cloud) for the rest of the year.

Disaster Recovery

Some of the key tenets of a proper disaster recovery protocol are a secondary location, offsite storage, or some other type of physical separation of the recovery environment. This acts as a hedge against a physical disaster in the private cloud region. This multi-locality is a primary tenet of any cloud hosting, but in the home lab scenario it is mostly not feasible. A public cloud offering can be a cheap disaster recovery option for the home lab. Encrypted backups of configuration and servers, paired with separately located media backups of sensitive data, can be combined to form a DR strategy for the home lab.

Not everything will be cheaply hosted in a public cloud for DR purposes. If the project is a media server or data lab, then the storage fees on any media not hosted within the lab may prove to be too costly. The DR scenario that the cloud can help with, on a home lab budget, is one where the underlying data is small enough to keep the running costs reasonable.
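As a small example of the cheap end of that spectrum, the following sketch ships an already-encrypted configuration backup to Azure Blob Storage using the azure-storage-blob package. The container, blob, and file paths are hypothetical, and the connection string is assumed to be in an environment variable:

```python
import os

from azure.storage.blob import BlobServiceClient

# Connect using a connection string stored outside the script.
service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"])

# Hypothetical container and blob names for a homelab DR bucket.
blob = service.get_blob_client(
    container="homelab-dr", blob="core-infra/config-2021-06-01.tar.gz.gpg")

# Upload a backup that was encrypted beforehand (e.g., with gpg).
with open("/backups/config-2021-06-01.tar.gz.gpg", "rb") as data:
    blob.upload_blob(data, overwrite=True)
```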

Core Infrastructure

The recovery strategy used in my personal lab starts with core infrastructure. First, the physical hosts are configured for virtualization, and then a mix of VMs and containers is deployed to bring up:

  • Certificate Authority (with root certs being loaded from backup physical media)
  • apt-mirror & docker-registry
  • DNS
  • LDAP
  • Data Systems
  • Web Servers

By using cloud offerings for some core infrastructure, both costs and restart time are minimized. For each of the following, an option could be:

  • CA – Let’s Encrypt
  • apt-mirror & docker-registry – use public free apt repos and docker hub
  • DNS – use the name servers provided by the registrar
  • LDAP – use Azure Active Directory where it can replace LDAP (don’t write me an essay about how AAD is not LDAP! I know it’s not, but for something like an internal website or GitLab it can make a suitable replacement.)

PAAS/SAAS

Some services in the public cloud can easily outscale a home-lab-configured version at a lower cost (even over the long run). Also, some offerings in the public cloud can make completing specific portions of the home lab project much faster. If you are building a machine learning setup, utilizing the dynamic compute capabilities of something like Azure Machine Learning to host notebooks or add compute power for the models will have a drastically lower price point than configuring the same in a home lab setup.

Another example could be adding a text messaging feature for two-factor authentication. Adding a Twilio messaging account will cost much less than trying to add an entire phone line. Similarly, using Office 365 or Zoho Mail could be cheaper than any self-hosted alternative. Moreover, the free tiers offered by GitHub are so fully featured now that self-hosting is purely for the hobby and not for any feature benefit.
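As an illustration of how little code the Twilio route takes, here is a minimal sketch using the twilio package; the account SID, auth token, and phone numbers are placeholders:

```python
from twilio.rest import Client

# Placeholders: real values come from the Twilio console.
client = Client("<account-sid>", "<auth-token>")

# Send a one-time 2FA code to a user.
client.messages.create(
    to="+15555550123",     # user's phone number (placeholder)
    from_="+15555550100",  # your Twilio number (placeholder)
    body="Your homelab login code is 428913",
)
```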

Home Lab – a lie I tell myself

Since March of 2020 I’ve been working on building out a homelab. Something about being inside a little more drove me to want to work with the computers at home. Normally that free time would be spent at community events or with presentations. Something had to fill the void and a homelab was it.

At first, the goal was simple: learn about different server technologies and edge computing by building a “data center” in a closet. The “data center” part of it is where most at-home sysadmins fall into a bottomless pit of self-hosted technologies, and I am no different. First it is a home media server, then a dashboard, then a database, then a data system, then a clustered set of systems, and then there is suddenly a need for documentation for the lab you built yourself as it becomes too much to handle at once.

This will, hopefully, be the first of many posts about home labs written from personal and professional experience. Throughout the series there should be a showcase of how to use home labs for:

  • Home Media
  • Automation
  • Edge Computing
  • Archiving
  • Game Servers
  • Development Servers
  • Access and Control
  • Dynamic Public Cloud Integration
  • Redundancy and Disaster Recovery
  • and… more

The first and foremost discussion to have is price control. Enterprise server contracts can start at seven figures; if that is what you are looking for, then this is not the blog you seek. What this blog will focus on is how to keep a relatively low budget. Let’s see how far we can make the homelab budget go!

At this point, you may be wondering why the title is “Home Lab – a lie I tell myself”. When this journey started, it was a one-or-two tower server adventure. Over time it has exploded, both in scope and in price. By now, the name “homelab” no longer describes the system I’ve built, not just in size but also because it is no longer at my home. Hopefully, this series can serve as both enablement and deterrent to an ever-expanding homelab.

Some helpful resources I used when I got started:

TrueNAS Azure Sync for Proxmox

Previously, we discussed TrueNAS NFS for Proxmox. Now that Proxmox is using TrueNAS for storage, a Cloud Sync Task can be used to copy the TrueNAS NFS data to Azure Blob Storage as a backup. The following steps are required:

  • Create Azure Blob Storage Account
  • Create TrueNAS Cloud Credentials
  • Create Cloud Sync Tasks

Create Azure Blob Storage Account

Create a storage account

Every storage account must belong to an Azure resource group. A resource group is a logical container for grouping your Azure services. When you create a storage account, you have the option to either create a new resource group, or use an existing resource group. This article shows how to create a new resource group.

A general-purpose v2 storage account provides access to all of the Azure Storage services: blobs, files, queues, tables, and disks. The steps outlined here create a general-purpose v2 storage account, but the steps to create any type of storage account are similar. For more information about types of storage accounts and other storage account settings, see Azure storage account overview.


To create a general-purpose v2 storage account in the Azure portal, follow these steps:

  1. On the Azure portal menu, select All services. In the list of resources, type Storage Accounts. As you begin typing, the list filters based on your input. Select Storage Accounts.
  2. On the Storage Accounts window that appears, choose Add.
  3. On the Basics tab, select the subscription in which to create the storage account.
  4. Under the Resource group field, select your desired resource group, or create a new resource group. For more information on Azure resource groups, see Azure Resource Manager overview.
  5. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
  6. Select a location for your storage account, or use the default location.
  7. Select a performance tier. The default tier is Standard.
  8. Set the Account kind field to Storage V2 (general-purpose v2).
  9. Specify how the storage account will be replicated. The default replication option is Read-access geo-redundant storage (RA-GRS). For more information about available replication options, see Azure Storage redundancy.
  10. Additional options are available on the Networking, Data protection, Advanced, and Tags tabs. To use Azure Data Lake Storage, choose the Advanced tab, and then set Hierarchical namespace to Enabled. For more information, see Azure Data Lake Storage Gen2 Introduction.
  11. Select Review + Create to review your storage account settings and create the account.
  12. Select Create.

The following image shows the settings on the Basics tab for a new storage account:

Screenshot showing how to create a storage account in the Azure portal

Create a container

To create a container in the Azure portal, follow these steps:

  1. Navigate to your new storage account in the Azure portal.
  2. In the left menu for the storage account, scroll to the Blob service section, then select Containers.
  3. Select the + Container button.
  4. Type a name for your new container. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character. For more information about container and blob names, see Naming and referencing containers, blobs, and metadata. (A quick name-validation sketch follows this list.)
  5. Set the level of public access to the container. The default level is Private (no anonymous access).
  6. Select OK to create the container.

Screenshot showing how to create a container in the Azure portal
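The naming rules above are easy to trip over when scripting, so a quick sanity check can save a failed deployment. A small sketch based on the rules as stated above (Azure enforces a few more edge cases, such as no trailing or consecutive dashes in container names):

```python
import re

# Storage account names: 3-24 characters, lowercase letters and numbers only.
ACCOUNT_NAME = re.compile(r"^[a-z0-9]{3,24}$")

# Container names: start with a letter or number, then letters, numbers, or
# dashes; Azure also limits the length to 3-63 characters.
CONTAINER_NAME = re.compile(r"^[a-z0-9][a-z0-9-]{2,62}$")

print(bool(ACCOUNT_NAME.match("homelabbackups")))    # True
print(bool(ACCOUNT_NAME.match("HomeLab_Backups")))   # False: case, underscore
print(bool(CONTAINER_NAME.match("truenas-sync")))    # True
```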

Create TrueNAS Cloud Credentials

To begin integrating TrueNAS with a Cloud Storage provider, register the account credentials on the system. After saving any credentials, a Cloud Sync Task allows sending or receiving data from that Cloud Storage Provider.

Saving a Cloud Storage Credential

Transferring data from TrueNAS to the Cloud requires saving Cloud Storage Provider credentials on the system.

It is recommended to have another browser tab open and logged in to the Cloud Storage Provider account you intend to link with TrueNAS. Some providers require additional information that is generated on the storage provider account page. For example, saving an Amazon S3 credential on TrueNAS could require logging in to the S3 account and generating an access key pair on the Security Credentials > Access Keys page.

To save cloud storage provider credentials, go to System > Cloud Credentials and click Add.

Using the Azure portal, we can retrieve our access keys.

Create Cloud Sync Tasks

TrueNAS can send, receive, or synchronize data with a Cloud Storage provider. Cloud Sync tasks allow for single time transfers or recurring transfers on a schedule, and are an effective method to back up data to a remote location.

Go to Tasks > Cloud Sync Tasks and click Add.


Give the task a memorable Description and select an existing cloud Credential. TrueNAS connects to the chosen Cloud Storage Provider and shows the available storage locations. Decide whether data is transferring to (PUSH) or from (PULL) the Cloud Storage location (Remote), then choose a Transfer Mode (described under Transfer below).

Next, control when the task runs by defining a Schedule. When a specific schedule is required, choose Custom and use the Advanced Scheduler.

Unsetting Enable makes the configuration available without allowing the Schedule to run the task. To manually activate a saved task, go to Tasks > Cloud Sync Tasks, expand the task, and click RUN NOW.

The remaining options allow tuning the task to your specific requirements.

Transfer

  • Description – Enter a description of the Cloud Sync Task.
  • Direction – PUSH sends data to cloud storage. PULL receives data from cloud storage. Changing the direction resets the Transfer Mode to COPY.
  • Transfer Mode – SYNC: Files on the destination are changed to match those on the source. If a file does not exist on the source, it is also deleted from the destination. COPY: Files from the source are copied to the destination. If files with the same names are present on the destination, they are overwritten. MOVE: After files are copied from the source to the destination, they are deleted from the source. Files with the same names on the destination are overwritten.
  • Directory/Files – Select the directories or files to be sent to the cloud for PUSH syncs, or the destination to be written for PULL syncs. Be cautious about the destination of PULL jobs to avoid overwriting existing files.
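To make the three modes concrete, here is a toy model of their effect expressed only in terms of which file names exist afterwards; each function returns (files left on the source, files on the destination). Real transfers operate on file contents via rclone, so this is purely illustrative:

```python
def sync(source, dest):
    # Destination is made to match the source; extra files are deleted.
    return set(source), set(source)

def copy(source, dest):
    # Destination keeps its extra files; matching names are overwritten.
    return set(source), set(dest) | set(source)

def move(source, dest):
    # Like COPY, but the source is emptied after the transfer.
    return set(), set(dest) | set(source)

src, dst = {"a.txt", "b.txt"}, {"b.txt", "old.txt"}
print(sync(src, dst))  # old.txt is deleted from the destination
print(copy(src, dst))  # old.txt survives; b.txt is overwritten
print(move(src, dst))  # the source ends up empty
```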

Remote

  • Credential – Select the cloud storage provider credentials from the list of available Cloud Credentials.

Control

  • Schedule – Select a schedule preset or choose Custom to open the advanced scheduler.
  • Enabled – Enable this Cloud Sync Task. Unset to disable this Cloud Sync Task without deleting it.

Advanced Options

  • Follow Symlinks – Follow symlinks and copy the items to which they link.
  • Pre-Script – Script to execute before running sync.
  • Post-Script – Script to execute after running sync.
  • Exclude – List of files and directories to exclude from sync. Separate entries by pressing Enter. See rclone filtering for more details about the --exclude option.

Advanced Remote Options

  • Remote Encryption – PUSH: Encrypt files before transfer and store the encrypted files on the remote system. Files are encrypted using the Encryption Password and Encryption Salt values. PULL: Decrypt files that are being stored on the remote system before the transfer. Transferring the encrypted files requires entering the same Encryption Password and Encryption Salt that was used to encrypt the files. Additional details about the encryption algorithm and key derivation are available in the rclone crypt File formats documentation.
  • Transfers – Number of simultaneous file transfers. Enter a number based on the available bandwidth and destination system performance. See rclone --transfers.
  • Bandwidth limit – A single bandwidth limit or bandwidth limit schedule in rclone format. Separate entries by pressing Enter. Example: 08:00,512 12:00,10MB 13:00,512 18:00,30MB 23:00,off. Units can be specified with the beginning letter: b, k (default), M, or G. See rclone --bwlimit.

Scripting and Environment Variables

Advanced users can write scripts that run immediately before or after the Cloud Sync task. The Post-script field is only run when the Cloud Sync task successfully completes. You can pass a variety of task environment variables into the Pre- and Post-script fields (a sketch follows the lists below):

  • CLOUD_SYNC_ID
  • CLOUD_SYNC_DESCRIPTION
  • CLOUD_SYNC_DIRECTION
  • CLOUD_SYNC_TRANSFER_MODE
  • CLOUD_SYNC_ENCRYPTION
  • CLOUD_SYNC_FILENAME_ENCRYPTION
  • CLOUD_SYNC_ENCRYPTION_PASSWORD
  • CLOUD_SYNC_ENCRYPTION_SALT
  • CLOUD_SYNC_SNAPSHOT

There are also provider-specific variables like CLOUD_SYNC_CLIENT_ID, CLOUD_SYNC_TOKEN, or CLOUD_SYNC_CHUNK_SIZE.

Remote storage settings:

  • CLOUD_SYNC_BUCKET
  • CLOUD_SYNC_FOLDER

Local storage settings:

  • CLOUD_SYNC_PATH
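As an illustration, here is a hypothetical Post-Script written in Python (saved as an executable file and invoked from the Post-Script field) that appends a one-line report built from these variables; the log path is an assumption:

```python
#!/usr/bin/env python3
import datetime
import os

# Append a one-line report for each completed Cloud Sync run.
with open("/var/log/cloudsync-report.log", "a") as log:
    log.write("{} id={} desc={} direction={} mode={} bucket={} path={}\n".format(
        datetime.datetime.now().isoformat(),
        os.environ.get("CLOUD_SYNC_ID"),
        os.environ.get("CLOUD_SYNC_DESCRIPTION"),
        os.environ.get("CLOUD_SYNC_DIRECTION"),
        os.environ.get("CLOUD_SYNC_TRANSFER_MODE"),
        os.environ.get("CLOUD_SYNC_BUCKET"),
        os.environ.get("CLOUD_SYNC_PATH"),
    ))
```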

Testing Settings

Test the settings before saving by clicking DRY RUN. TrueNAS connects to the Cloud Storage Provider and simulates a file transfer. No data is actually sent or received. A dialog shows the test status and allows downloading the task logs.


Cloud Sync Behavior

Saved tasks are activated according to their schedule or by clicking RUN NOW. An in-progress cloud sync must finish before another can begin. Stopping an in-progress task cancels the file transfer and requires starting the file transfer over.

To view logs about a running task, or about the most recent run of a task, click the task status.

Cloud Sync Restore

To quickly create a new Cloud Sync task that uses the same options but reverses the data transfer, expand an existing Cloud Sync task and click RESTORE.


Enter a new Description for this reversed task and define the path to a storage location for the transferred data.

The restored cloud sync is saved as another entry in Tasks > Cloud Sync Tasks.

TrueNAS NFS for Proxmox

While setting up the server closet at the office, the first thing set up was a TrueNAS Core server. If you are looking for a guide on installing TrueNAS Core, a good one can be found here. With it already in place, it can be used to create an NFS share.

One of the major benefits of NFS is being able to easily migrate a container from one environment to another. Because the container or VM is pulled over the network, any host can run it ad hoc, and migrating VMs or containers takes seconds.

TrueNAS NFS support

Creating a Network File System (NFS) share on TrueNAS gives the benefit of making lots of data easily available to anyone with share access. Depending on how the share is configured, users accessing the share can be restricted to read or write privileges. To create a new share, make sure a dataset is available with all the data for sharing.

Creating an NFS Share

Go to Sharing > Unix Shares (NFS) and click ADD.


Use the file browser to select the dataset to be shared. An optional Description can be entered to help identify the share. Clicking SUBMIT creates the share. At the time of creation, you can select ENABLE SERVICE to start the service and have it automatically start after any reboots. If you wish to create the share but not immediately enable it, select CANCEL.


NFS Share Settings

  • Path (file browser) – Type or browse to the full path to the pool or dataset to share. Click ADD to configure multiple paths.
  • Description (string) – Enter any notes or reminders about the share.
  • All dirs (checkbox) – Set to allow the client to mount any subdirectory within the Path. Leaving disabled only allows clients to mount the Path endpoint.
  • Quiet (checkbox) – Enabling inhibits some syslog diagnostics to avoid error messages. See exports(5) for examples. Disabling allows all syslog diagnostics, which can lead to additional cosmetic error messages.
  • Enabled (checkbox) – Enable this NFS share. Unset to disable this NFS share without deleting the configuration.

To edit an existing NFS share, go to Sharing > Unix Shares (NFS) and click the options icon (⋮) > Edit. The options available are identical to the share creation options.

Configure the NFS Service

To begin sharing the data, go to Services and click the NFS toggle. If you want NFS sharing to activate immediately after TrueNAS boots, set Start Automatically.

NFS service settings can be configured by clicking Configure.

  • Number of servers (integer) – Specify how many servers to create. Increase if NFS client responses are slow. Keep this less than or equal to the number of CPUs reported by sysctl -n kern.smp.cpus to limit CPU context switching.
  • Bind IP Addresses (drop-down) – Select IP addresses to listen to for NFS requests. Leave empty for NFS to listen to all available addresses.
  • Enable NFSv4 (checkbox) – Set to switch from NFSv3 to NFSv4.
  • NFSv3 ownership model for NFSv4 (checkbox) – Set when NFSv4 ACL support is needed without requiring the client and the server to sync users and groups.
  • Require Kerberos for NFSv4 (checkbox) – Set to force NFS shares to fail if the Kerberos ticket is unavailable.
  • Serve UDP NFS clients (checkbox) – Set if NFS clients need to use the User Datagram Protocol (UDP).
  • Allow non-root mount (checkbox) – Set only if required by the NFS client. Set to allow serving non-root mount requests.
  • Support >16 groups (checkbox) – Set when a user is a member of more than 16 groups. This assumes group membership is configured correctly on the NFS server.
  • Log mountd(8) requests (checkbox) – Set to log mountd syslog requests.
  • Log rpc.statd(8) and rpc.lockd(8) (checkbox) – Set to log rpc.statd and rpc.lockd syslog requests.
  • mountd(8) bind port (integer) – Enter a number to bind mountd only to that port.
  • rpc.statd(8) bind port (integer) – Enter a number to bind rpc.statd only to that port.
  • rpc.lockd(8) bind port (integer) – Enter a number to bind rpc.lockd only to that port.

Unless a specific setting is needed, it is recommended to use the default settings for the NFS service. When TrueNAS is already connected to Active Directory, setting NFSv4 and Require Kerberos for NFSv4 also requires a Kerberos Keytab.

Proxmox NFS storage pool

The NFS backend is based on the directory backend, so it shares most properties. The directory layout and the file naming conventions are the same. The main advantage is that you can directly configure the NFS server properties, so the backend can mount the share automatically. There is no need to modify /etc/fstab. The backend can also test if the server is online, and provides a method to query the server for exported shares.


The backend supports all common storage properties, except the shared flag, which is always set. Additionally, the following properties are used to configure the NFS server:

server – Server IP or DNS name. To avoid DNS lookup delays, it is usually preferable to use an IP address instead of a DNS name – unless you have a very reliable DNS server, or list the server in the local /etc/hosts file.

export – NFS export path (as listed by pvesm nfsscan).

You can also set NFS mount options:

path – The local mount point (defaults to /mnt/pve/<STORAGE_ID>/).

options – NFS mount options (see man nfs).
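Putting those properties together, here is a sketch of what the resulting entry in /etc/pve/storage.cfg might look like; the storage ID, server address, export path, and options are hypothetical values for a TrueNAS host:

```
nfs: truenas-nfs
        path /mnt/pve/truenas-nfs
        server 10.0.0.10
        export /mnt/tank/proxmox
        options vers=3,soft
        content images,rootdir
```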
