Can’t access TrueNAS/FreeNAS over VPN

There was an issue accessing a TrueNAS device over the VPN. The VPN was assigning an IP address outside the network available to the TrueNAS host. In my case:

  1. VPN assigned IP address is in range
  2. Network for TrueNAS is in range

Since the VPN address is outside the range of the CIDR block for the TrueNAS IP address subnet, TrueNAS can't respond to the incoming request. To fix this, add a Static Route for TrueNAS. To add a Static Route, expand the Network tab in the left-hand menu and select Static Routes.

The left main menu in TrueNAS core with the Network tab expanded and the Static Routes tab within Network selected

From the Static Routes screen, click Add in the top right of the new screen. After that the following form will appear:

Destination (integer) – Use the format A.B.C.D/E where E is the CIDR mask. In the example above it would be
Gateway (integer) – Enter the IP address of the gateway. In the example above it would be (150 is my gateway)
Description (string) – Notes or identifiers describing the route.
The form fields for adding a static route in TrueNAS

After the fields are populated correctly, click “Submit” and the VPN connections should now be able to reach the TrueNAS core device.

Home Lab – Keeping Costs Down

Understand the Use Case

If you're considering building a lab, chances are you've got a good idea of what you want to do with it. If not, then let me give you one piece of advice: THINK CAREFULLY BEFOREHAND ABOUT WHAT YOU WANT TO ACHIEVE. 10-GPU "supercomputers" can be great fun but are woefully unnecessary if you want a web development server. Likewise, a petabyte of redundant storage truly lives up to its name if you're planning to use it to host a Doom multiplayer server. I know it sounds obvious, but it's just too easy to get distracted by a great eBay deal and end up with a 900 W paperweight.

I can already hear you shouting, "EBAY HO, BUY ALL THE THINGS", but let's just slow down there for one second, soldier. This is a really, really good way to spend a whole lot of money on a stack of stuff that is worth more to a scrapyard than to your homelab. There's a ton of enterprise gear out there ready for the taking, but do your due diligence or you won't get the good stuff. Instead, you'll end up with a server with only one Ethernet management port and no network card. It's a bad scene when that happens, so let's try to preclude it.

Common Use Cases


Most home lab users get started for very specific use cases. The starter use cases are usually:

  • Game Servers
  • Media Servers
  • Storage and Archiving
  • Web Hosting
  • Certification Study
  • Remote Access
  • Development Servers
  • Home Automation
  • Cryptocurrency and other Electric Waste

It is important to understand that your homelab project is not the first of its kind. There are innumerable self-hosted game servers, media servers, and websites. Before breaking out the credit card, use (at least) a Google search to find a similar project that describes its setup so you can understand how the underlying hardware is being used. Understanding the hardware is key to understanding what you will need instead of what you want or think you may need.

For example, a home media server doesn't need a 24-port 10 GbE switch. It also doesn't need an AMD Ryzen Threadripper 3990X 64-core, 128-thread 2.9 GHz 7 nm sTRX4 CPU. Both of those items add expense and power usage without providing a better media server. What is most important is overall disk space and making sure the disks are redundant (disk speed is not of the utmost importance). With this in mind, you can search for a single-server setup that can handle an appropriate amount of disk space for your needs.

Larger Use Cases


Larger use cases (or enterprise use cases) will combine multiple use cases from above along with new use cases, monitoring, multiple environments, additional access & control mechanisms, and more. An example of a larger use case can be an edge computing solution for home automation combined with a home security system. Another use case could be a web crawler, archiver, and data processor to create a custom search engine. A personal use case of my home lab is snapshotting/archiving documentation of older products and software so that if said documentation were no longer hosted by a third party, I’d still have access to it.

When diving into larger/enterprise use cases you often run into a different level of concerns. This particular blog post won't dive into such details; just understand there are additional costs that come with enterprise setups: backup power, layers of redundancy, DR and backup strategies, mitigating natural and man-made disasters, and more. Each of these will increase cost and slowly transform a home lab into a datacenter.

Find a Deal

Buy nothing

Keep in mind that this post is meant for someone who’s already started labbing, but wants to up their gear to do more and doesn’t know where to begin.

The vast majority of us started with an old PC, leftover parts from a previous upgrade, or maybe that box your parents didn't need anymore after they got a new one. Maybe you volunteered to take it and clean it up for them and began using what they left behind. Personally, this is exactly how I got my original NAS.

I’d be very surprised if you’re not already sitting on a pile of old parts in some way, shape, or form. If you weren’t the kind to collect parts, you probably wouldn’t be labbing. Even if not, if all you have is one PC, use it. These days we have VirtualBox, which does a fine job of running just about everything you might want to try out. It might be a bit slow, but you can get started while you wait for your tax return/birthday money/lottery winnings to get here.

The key point is that nothing about learning the basics of homelab setup requires enterprise hardware, except, of course, for learning how enterprise hardware itself is laid out. That has its merits, but most of it can still be learned from building your own PC. Coding, Linux, FreeBSD, Win 2012 R2, containers, hypervisors, networking, storage; all of it can be done with a fairly recent laptop or desktop.

Understanding Hardware


Let’s discuss the different types of hardware options you’ll encounter; hopefully, this will save you from traveling an expensive learning curve. You don’t want to end up with a server that is so old it doesn’t support virtualization. Older servers can be power hungry and scream every time you turn them on. These tips will hopefully help you understand the difference between a $150 paperweight and a $200 deal.

This is such an important issue to me, because I have witnessed those uninitiated in homelab quickly lose their enthusiasm when they end up getting Pentium 4-era Xeons that are practically worthless. I point this out not to pound those who have just started with homelab builds into the ground, but to point out that if you don’t research, ask around, and make sure of what you’re getting, you could end up getting worthless hardware without even knowing it. And, trust me, it’s not always easy to see when you might be headed down this path. I speak from experience.

Ask Questions


There are a number of guides on the internet to help with buying used/refurbished/old servers. Using your search engine of choice will lead you on many adventures. It cannot be stressed enough that you should understand your use case before you purchase a machine. Here is a list of questions to ask yourself:

  • What kind of connections does the motherboard provide for hard drives?
    • Does the server have a RAID card?
      • If the RAID card fails, how hard will it be to replace?
      • If a drive fails, how hard will it be to rebuild the RAID array?
      • What is the maximum memory supported by the RAID card?
    • Is this server primarily reading or writing data?
      • Is read or write performance the central focus of this server?
      • What level of redundancy is needed for this data?
      • Can this server use a NAS instead of local hard drives for the non-OS (or all) data?
    • Will this server need to “trust” the hard drives attached to it? (A server may not be able to read the temperature of a hard drive and will assume it is overheating. The fans then go full blast, driving up the energy consumption and noise of the machine. This is a problem in servers like Dells, where there is an expectation of a Dell-certified hard drive.)
  • What are the network throughput needs of this project?
    • Is the network card fast enough for this project’s needs? Is the switch/router it is connected to fast enough for this project’s needs?
    • Does the card provide enough ports for the considered management setup?
    • Does it provide redundancy at the card or port level?
    • If the network card fails, how hard will it be to replace?
  • What are the memory needs for the project and what are the memory options provided by the motherboard?
    • Not a question, but a note – use ECC RAM. Servers are not personal-use computers, and with multiple workloads running on them, ECC RAM can prevent a systemic crash that destroys all the workloads on the server.
    • Another note – don't use DDR2 memory. It's a power hog and is getting harder and harder to replace.
    • Does the motherboard accept UDIMM, RDIMM, or LRDIMM, and in what configurations?
    • What RAM is currently available from other projects to reuse?
    • Are any processes or workloads memory intensive or is RAM general use?
  • What level of compute power is needed?
    • Does the motherboard for this project support the expected CPU?
    • Does the CPU support the RAM for this server?
    • Does the CPU support device passthrough for virtual machines (Intel VT-d or AMD-Vi)?
    • Are vendors readily stocking this CPU?

Places to purchase


The primary place to find "deals" on retired server equipment is eBay. eBay serves as a single point where recyclers, resellers, and refurbishers can sell IT equipment. In fact, most shops will run multiple eBay "stores" so they can sell from a single location under different storefronts. Some shops will have a brand name that is its own web store. eBay is the place that I personally go to first when I am bored and want to look at stuff I will never buy.

There are a number of different categories to check that are not eBay: (It should be noted that this section is written from an American perspective. If you're searching elsewhere, this guide may not be perfectly applicable to buying in your region.)

Local Electronic Recyclers

Electronic recyclers are sometimes tasked with cleaning out old data centers. This leaves the recycler with enterprise servers and networking equipment that need to be sold. Some items are best picked up in person. Renting moving equipment and hauling server racks from an electronic recycler to a house or office space yourself can save thousands on such a purchase (from personal experience). Personally, I have built/purchased both my mobile testing platform and my server racks from a local electronics recycler. It's as simple as setting up an appointment with the recycler and taking a tour of their warehouse. There may be more in there for purchase than just the equipment for the project being planned.

The major benefit of visiting an electronic recycler is that they may be willing to make a deal NOW. You are there, you have money, and they do not need to ship out the product. This can reduce their costs and, in turn, pass those savings on to you. However, make sure that you can move and transport the items that you bought. Server racks can weigh upwards of 400 lbs and may not fit standing up in a standard rental box truck. Make sure you can transport whatever you buy and that it will fit, not only in the room you purchased it for, but through the doorways to the room in question.

Government Surplus

A government surplus store sells items that are used, or purchased but unused and no longer needed. A surplus store may also sell items that are past their use-by date. Additionally, there are government auctions for similar property where some amazing deals can be found. Note that these amazing deals are sought after by many personal and professional hobbyists, so don't expect too much of an amazing deal.

Online Sales

All that can be said for online sales in this article has been stated. Anything not said should be known from your own online shopping experience. For the sake of being somewhat useful, here is a list sourced from the reddit homelab wiki buying guide:

Using the Cloud

A diagram of a private cloud and a public cloud connected to form a hybrid cloud

The cloud can be utilized to keep costs down. You read that last sentence correctly, it can be used to keep costs down. From a business perspective, it can be utilized to shift capital expenses to operational expenses. For a home lab, it can be used so that $10,000 in equipment cost can instead be spread out month to month over the course of years. As a Microsoft MVP for Azure, I have a good sense of when to use the public cloud vs when to invest in the private cloud. Hopefully, this section can provide a quick guide to when and where your project can benefit from either.

A thought that should be shared is that the entire integration with the public cloud can be dynamic, if you so choose. From the VPN components to the different offerings being consumed (unless there is a need for persistent state), the items can all be created on demand. This is said with the understanding that certain items require physical components and long term contracts. If your project requires those parameters, then the project may fall outside the definition of “home lab” being used here. Also, some items, like a VPN Gateway in Azure, may take a half hour to an hour to provision on demand. For a home lab, some pre-planning may be required due to those time constraints (as compared to an enterprise environment, where all those items will be persistent).

For a home lab, the primary purpose is to own and house the equipment running your projects. That being the primary purpose does not mean there are no other benefits to using the public cloud in a hybrid scenario. The following are a few scenarios where using the public cloud could help reduce costs:

Scaling Out

From the project options listed above, some could benefit from being able to scale out due to demand. Web servers, game servers, development servers, and more may have inconsistent demand. If your project involves an always-on game server and suddenly one thousand of your closest friends plan to play together one night, then there may be a need to scale out beyond the capacity of your home lab.

Assuming the project is set up for this scenario, hosting it in the public cloud may be as simple as changing a public DNS entry and uploading your virtualization configuration of the server to a public cloud provider. An example would be a Minecraft server running inside a container. It can be quickly uploaded to something like Azure Container Instances for the evening and cost a fistful of dollars. Compared to the thousands in hardware costs that would be needed for that one evening, the public cloud can provide the required infrastructure for a fraction of the cost.
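
Assuming a containerized setup like the one above, a rough sketch of that burst using the Azure SDK for Python might look like the following. The resource group, region, image, and sizing here are illustrative placeholders rather than values from this post, and it assumes recent azure-identity and azure-mgmt-containerinstance packages.

# Hypothetical sketch: run a containerized Minecraft server in Azure Container
# Instances for one evening, then delete the group when the party is over.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ContainerPort, EnvironmentVariable,
    IpAddress, Port, ResourceRequests, ResourceRequirements,
)

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
client = ContainerInstanceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

minecraft = Container(
    name="minecraft",
    image="itzg/minecraft-server",  # community image; accepts the EULA via an env var
    resources=ResourceRequirements(requests=ResourceRequests(cpu=2.0, memory_in_gb=4.0)),
    ports=[ContainerPort(port=25565)],
    environment_variables=[EnvironmentVariable(name="EULA", value="TRUE")],
)

group = ContainerGroup(
    location="eastus",
    os_type="Linux",
    containers=[minecraft],
    ip_address=IpAddress(type="Public", ports=[Port(protocol="TCP", port=25565)]),
)

# begin_create_or_update returns a poller; result() blocks until the group is running.
result = client.container_groups.begin_create_or_update("homelab-burst-rg", "minecraft-night", group).result()
print("Point the DNS entry at", result.ip_address.ip)

When the evening is over, a matching begin_delete call on the container group tears it down so the meter stops running.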

Another example could be a public web server for a one-day conference. For roughly 360 days out of the year, the site will receive one or two hits a day. When the week of the conference arrives, it may suddenly get hundreds or thousands of hits per day. Instead of running the site in the cloud the entire year, or spending enough capital to host the conference-week load in your lab, use the cloud for the week of the conference and the home lab (private cloud) for the rest of the year.

Disaster Recovery

Some of the key tenets of a proper disaster recovery protocol are a secondary location, offsite storage, or some other type of physical separation of the recovery environment. This acts as a hedge against a physical disaster in the private cloud region. This multi-locality is a primary tenet of any cloud hosting, but in the home lab scenario it is mostly not feasible. A public cloud offering can be a cheap disaster recovery option for the home lab. Encrypted backups of configuration and servers, paired with separately located media backups of sensitive data, can be combined to form a DR strategy for the home lab.

Not everything will be cheaply hosted in a public cloud for DR purposes. If the project is a media server or data lab, then the storage fees on any media not hosted within the lab may prove to be too costly. The DR scenario that the cloud can help with, on a home lab budget, is one where the underlying data is small enough that the running costs stay reasonable.
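
As a concrete illustration of that strategy, the sketch below pushes an already-encrypted backup archive to Azure Blob Storage as the offsite copy. The connection string, container, and file names are placeholders, and it assumes the azure-storage-blob package and an existing container.

# Push an encrypted backup archive produced by the lab's backup job to Azure
# Blob Storage. All names here are placeholders.
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "<storage-account-connection-string>"
service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client("homelab-dr")  # container created ahead of time

with open("proxmox-config-2021-01-01.tar.gz.gpg", "rb") as data:
    container.upload_blob("proxmox/proxmox-config-2021-01-01.tar.gz.gpg", data, overwrite=True)

Because only configuration and small, sensitive datasets go offsite, the monthly storage bill stays closer to pocket change than to a second homelab.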

Core Infrastructure

The recovery strategy used in my personal lab starts with core infrastructure. First, the physical hosts are configured for virtualization and then a mix of VMs and containers are deployed to start:

  • Certificate Authority (with root certs being loaded from backup physical media)
  • apt-mirror & docker-registry
  • DNS
  • LDAP
  • Data Systems
  • Web Servers

By using cloud offerings for some core infrastructure, both costs and restart time are minimized. For each of the following, an option could be:

  • CA – Let’s Encrypt
  • apt-mirror & docker-registry – use public free apt repos and docker hub
  • DNS – use the name servers provided by the registrar
  • LDAP – use Azure Active Directory where it can replace LDAP (don't write me an essay about how AAD is not LDAP! I know it's not, but for something like an internal website or GitLab it can make a suitable replacement.)


Some services in the public cloud can easily outscale a home-lab-configured version at a lower cost point (even over the long run). Also, some offerings in the public cloud can make the completion of specific portions of the home lab project much faster. If you are building a machine learning setup, utilizing the dynamic compute capabilities of something like Azure Machine Learning to host notebooks or add compute power for the models will have a drastically lower price point than configuring the same in a home lab setup.
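
As a sketch of that machine learning example, the snippet below provisions a scale-to-zero Azure Machine Learning compute cluster so GPU capacity only costs money while a model is actually training. The workspace config, cluster name, and VM size are placeholders, and it assumes the azureml-core (v1) SDK.

from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()  # reads the config.json downloaded from the portal

config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_NC6",            # single-GPU node; pick per workload
    min_nodes=0,                       # scales to zero when idle, so no standing cost
    max_nodes=2,
    idle_seconds_before_scaledown=900,
)
cluster = ComputeTarget.create(ws, "gpu-burst", config)
cluster.wait_for_completion(show_output=True)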

Another example could be adding a text messaging feature for two-factor authentication. Adding a Twilio messaging account will cost much less than trying to add an entire phone line. Similarly, using Office 365 or Zoho Mail could be cheaper than any self-hosted alternative. Moreover, the free tiers offered by GitHub are so fully featured now that self-hosting is purely for the hobby and not for any feature benefit.
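
To make the Twilio option concrete, sending a one-time code is only a few lines with the twilio package; the SID, token, and phone numbers below are placeholders.

from twilio.rest import Client

client = Client("<account-sid>", "<auth-token>")
client.messages.create(
    to="+15551234567",
    from_="+15557654321",
    body="Your homelab verification code is 482913",
)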

Home Lab – a lie I tell myself

Since March of 2020 I’ve been working on building out a homelab. Something about being inside a little more drove me to want to work with the computers at home. Normally that free time would be spent at community events or with presentations. Something had to fill the void and a homelab was it.

At first, the goal was simple: learn about different server technologies and edge computing by building a "data center" in a closet. The "data center" part of it is where most at-home sysadmins fall into a bottomless pit of self-hosted technologies, and I am no different. First it is a home media server, then a dashboard, then a database, then a data system, then a clustered set of systems, and then there is suddenly a need for documentation for a lab you built yourself as it becomes too much to handle at once.

This will, hopefully, be the first of many posts about home labs that is written from personal and professional experience. Throughout the series there should be a showcase of how to use home labs for:

  • Home Media
  • Automation
  • Edge Computing
  • Archiving
  • Game Servers
  • Development Servers
  • Access and Control
  • Dynamic Public Cloud Integration
  • Redundancy and Disaster Recovery
  • and… more

The first and foremost discussion to have is price control. Enterprise server contracts can start at seven figures. If this is what you are looking for, then this is not the blog you seek. What this blog will focus on is how to keep a relatively low budget. Let's see how far we can make the homelab budget go!

At this point, you may be wondering why the title is “Home Lab – a lie I tell myself”. When this journey started, this was a 1-2 tower server adventure. Over time this has exploded, both in scope and in price. At this point, the name “homelab” no longer describes the system I’ve built. Not just in size but also due to the fact that it is no longer at my home. Hopefully, this series can serve as both enablement and deterrent to an ever expanding homelab.

Some helpful resources I used when I got started:

Error: mkisofs not found in $PATH

Using the KVM Terraform provider, I ran into the following error – Error: mkisofs not found in $PATH. After hours of trying to install it on a Debian-based server, the realization dawned on me that the executable was missing from the CLIENT and not the SERVER.

If this error is currently blocking progress, be sure to INSTALL MKISOFS ON THE CLIENT and don’t worry about the server!

TrueNAS Azure Sync for Proxmox

Previously, we discussed TrueNAS NFS for Proxmox. Now that Proxmox is using TrueNAS for storage, a Cloud Sync Task can be used to copy the TrueNAS NFS data to Azure Blob Storage as a backup. The following steps are required:

  • Create Azure Blob Storage Account
  • Create TrueNAS Cloud Credentials
  • Create Cloud Sync Tasks

Create Azure Blob Storage Account

Create a storage account

Every storage account must belong to an Azure resource group. A resource group is a logical container for grouping your Azure services. When you create a storage account, you have the option to either create a new resource group, or use an existing resource group. This article shows how to create a new resource group.

A general-purpose v2 storage account provides access to all of the Azure Storage services: blobs, files, queues, tables, and disks. The steps outlined here create a general-purpose v2 storage account, but the steps to create any type of storage account are similar. For more information about types of storage accounts and other storage account settings, see Azure storage account overview.


To create a general-purpose v2 storage account in the Azure portal, follow these steps:

  1. On the Azure portal menu, select All services. In the list of resources, type Storage Accounts. As you begin typing, the list filters based on your input. Select Storage Accounts.
  2. On the Storage Accounts window that appears, choose Add.
  3. On the Basics tab, select the subscription in which to create the storage account.
  4. Under the Resource group field, select your desired resource group, or create a new resource group. For more information on Azure resource groups, see Azure Resource Manager overview.
  5. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
  6. Select a location for your storage account, or use the default location.
  7. Select a performance tier. The default tier is Standard.
  8. Set the Account kind field to Storage V2 (general-purpose v2).
  9. Specify how the storage account will be replicated. The default replication option is Read-access geo-redundant storage (RA-GRS). For more information about available replication options, see Azure Storage redundancy.
  10. Additional options are available on the Networking, Data protection, Advanced, and Tags tabs. To use Azure Data Lake Storage, choose the Advanced tab, and then set Hierarchical namespace to Enabled. For more information, see Azure Data Lake Storage Gen2 Introduction.
  11. Select Review + Create to review your storage account settings and create the account.
  12. Select Create.

The following image shows the settings on the Basics tab for a new storage account:

Screenshot showing how to create a storage account in the Azure portal
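
For anyone scripting this instead of clicking through the portal, the steps above roughly translate to the following Azure SDK for Python sketch. The resource group, account name, region, and SKU are placeholders, and it assumes azure-identity plus a recent azure-mgmt-storage package.

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

poller = client.storage_accounts.begin_create(
    "homelab-backup-rg",
    "homelabtruenasbackup",          # must be globally unique, 3-24 lowercase letters and numbers
    {
        "location": "eastus",
        "kind": "StorageV2",         # general-purpose v2
        "sku": {"name": "Standard_RAGRS"},
    },
)
print(poller.result().provisioning_state)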

Create a container

To create a container in the Azure portal, follow these steps:

  1. Navigate to your new storage account in the Azure portal.
  2. In the left menu for the storage account, scroll to the Blob service section, then select Containers.
  3. Select the + Container button.
  4. Type a name for your new container. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character. For more information about container and blob names, see Naming and referencing containers, blobs, and metadata.
  5. Set the level of public access to the container. The default level is Private (no anonymous access).
  6. Select OK to create the container.

Screenshot showing how to create a container in the Azure portal

Create TrueNAS Cloud Credentials

To begin integrating TrueNAS with a Cloud Storage provider, register the account credentials on the system. After saving any credentials, a Cloud Sync Task allows sending or receiving data from that Cloud Storage Provider.

Saving a Cloud Storage Credential

Transferring data from TrueNAS to the Cloud requires saving Cloud Storage Provider credentials on the system.

It is recommended to have another browser tab open and logged in to the Cloud Storage Provider account you intend to link with TrueNAS. Some providers require additional information that is generated on the storage provider account page. For example, saving an Amazon S3 credential on TrueNAS could require logging in to the S3 account and generating an access key pair on the Security Credentials > Access Keys page.

To save cloud storage provider credentials, go to System > Cloud Credentials and click Add.

Using the Azure Portal we can retrieve our access keys.

Create Cloud Sync Tasks

TrueNAS can send, receive, or synchronize data with a Cloud Storage provider. Cloud Sync tasks allow for single time transfers or recurring transfers on a schedule, and are an effective method to back up data to a remote location.

Go to Tasks > Cloud Sync Tasks and click Add.


Give the task a memorable Description and select an existing cloud Credential. TrueNAS connects to the chosen Cloud Storage Provider and shows the available storage locations. Decide if data is transferring to (PUSH) or from (PULL) the Cloud Storage location (Remote). Choose a Transfer Mode:

Next, control when the task runs by defining a Schedule. When a specific Schedule is required, choose Custom and use the Advanced Scheduler.

Unsetting Enable makes the configuration available without allowing the Schedule to run the task. To manually activate a saved task, go to Tasks > Cloud Sync Tasks, expand the task, and click RUN NOW.

The remaining options allow tuning the task to your specific requirements.


Description – Enter a description of the Cloud Sync Task.
Direction – PUSH sends data to cloud storage. PULL receives data from cloud storage. Changing the direction resets the Transfer Mode to COPY.
Transfer Mode – SYNC: Files on the destination are changed to match those on the source. If a file does not exist on the source, it is also deleted from the destination. COPY: Files from the source are copied to the destination. If files with the same names are present on the destination, they are overwritten. MOVE: After files are copied from the source to the destination, they are deleted from the source. Files with the same names on the destination are overwritten.
Directory/Files – Select the directories or files to be sent to the cloud for Push syncs, or the destination to be written for Pull syncs. Be cautious about the destination of Pull jobs to avoid overwriting existing files.
Credential – Select the cloud storage provider credentials from the list of available Cloud Credentials.
Schedule – Select a schedule preset or choose Custom to open the advanced scheduler.
Enabled – Enable this Cloud Sync Task. Unset to disable this Cloud Sync Task without deleting it.

Advanced Options

Follow Symlinks – Follow symlinks and copy the items to which they link.
Pre-Script – Script to execute before running sync.
Post-Script – Script to execute after running sync.
Exclude – List of files and directories to exclude from sync. Separate entries by pressing Enter. See rclone filtering for more details about the --exclude option.

Advanced Remote Options

Remote Encryption – PUSH: Encrypt files before transfer and store the encrypted files on the remote system. Files are encrypted using the Encryption Password and Encryption Salt values. PULL: Decrypt files that are being stored on the remote system before the transfer. Transferring the encrypted files requires entering the same Encryption Password and Encryption Salt that was used to encrypt the files. Additional details about the encryption algorithm and key derivation are available in the rclone crypt File formats documentation.
Transfers – Number of simultaneous file transfers. Enter a number based on the available bandwidth and destination system performance. See rclone --transfers.
Bandwidth limit – A single bandwidth limit or bandwidth limit schedule in rclone format. Separate entries by pressing Enter. Example: 08:00,512 12:00,10MB 13:00,512 18:00,30MB 23:00,off. Units can be specified with the beginning letter: b, k (default), M, or G. See rclone --bwlimit.

Scripting and Environment Variables

Advanced users can write scripts that run immediately before or after the Cloud Sync task. The Post-script field is only run when the Cloud Sync task successfully completes. You can pass a variety of task environment variables into the Pre- and Post- script fields:


There are also provider-specific variables like CLOUD_SYNC_CLIENT_ID, CLOUD_SYNC_TOKEN, or CLOUD_SYNC_CHUNK_SIZE.

Remote storage settings:


Local storage settings:


Testing Settings

Test the settings before saving by clicking DRY RUN. TrueNAS connects to the Cloud Storage Provider and simulates a file transfer. No data is actually sent or received. A dialog shows the test status and allows downloading the task logs.


Cloud Sync Behavior

Saved tasks are activated according to their schedule or by clicking RUN NOW. An in-progress cloud sync must finish before another can begin. Stopping an in-progress task cancels the file transfer and requires starting the file transfer over.

To view logs for a running task, or for the most recent run of a task, click the task status.

Cloud Sync Restore

To quickly create a new Cloud Sync that uses the same options but reverses the data transfer, expand an existing Cloud Sync and click RESTORE.


Enter a new Description for this reversed task and define the path to a storage location for the transferred data.

The restored cloud sync is saved as another entry in Tasks > Cloud Sync Tasks.

TrueNAS NFS for Proxmox

While setting up the server closet at the office, the first thing set up was a TrueNAS Core server. If you are looking for a guide on installing TrueNAS Core, a good one can be found here. With it already in place, it can be used to create an NFS share.

One of the major benefits of NFS is being able to easily migrate a container from one environment to another. By using NFS, the container or VM is pulled over the network, which allows any host to run it ad hoc. Migrating VMs or containers takes seconds.

TrueNAS NFS support

Creating a Network File System (NFS) share on TrueNAS gives the benefit of making lots of data easily available for anyone with share access. Depending on how the share is configured, users accessing the share can be restricted to read or write privileges. To create a new share, make sure a dataset is available with all the data for sharing.

Creating an NFS Share

Go to Sharing > Unix Shares (NFS) and click ADD.

Services NFS Add

Use the file browser to select the dataset to be shared. An optional Description can be entered to help identify the share. Clicking SUBMIT creates the share. At the time of creation, you can select ENABLE SERVICE for the service to start and to automatically start after any reboots. If you wish to create the share but not immediately enable it, select CANCEL.

Services NFS Add Service Enable
Services NFS Service Enable Success

NFS Share Settings

Path (file browser) – Type or browse to the full path to the pool or dataset to share. Click ADD to configure multiple paths.
Description (string) – Enter any notes or reminders about the share.
All dirs (checkbox) – Set to allow the client to mount any subdirectory within the Path. Leaving disabled only allows clients to mount the Path endpoint.
Quiet (checkbox) – Enabling inhibits some syslog diagnostics to avoid error messages. See exports(5) for examples. Disabling allows all syslog diagnostics, which can lead to additional cosmetic error messages.
Enabled (checkbox) – Enable this NFS share. Unset to disable this NFS share without deleting the configuration.

To edit an existing NFS share, go to Sharing > Unix Shares (NFS) and click more_vert > Edit. The options available are identical to the share creation options.

Configure the NFS Service

To begin sharing the data, go to Services and click the NFS toggle. If you want NFS sharing to activate immediately after TrueNAS boots, set Start Automatically.

NFS service settings can be configured by clicking Configure.

Services NFS Options
Number of servers (integer) – Specify how many servers to create. Increase if NFS client responses are slow. Keep this less than or equal to the number of CPUs reported by sysctl -n kern.smp.cpus to limit CPU context switching.
Bind IP Addresses (drop-down) – Select IP addresses to listen to for NFS requests. Leave empty for NFS to listen to all available addresses.
Enable NFSv4 (checkbox) – Set to switch from NFSv3 to NFSv4.
NFSv3 ownership model for NFSv4 (checkbox) – Set when NFSv4 ACL support is needed without requiring the client and the server to sync users and groups.
Require Kerberos for NFSv4 (checkbox) – Set to force NFS shares to fail if the Kerberos ticket is unavailable.
Serve UDP NFS clients (checkbox) – Set if NFS clients need to use the User Datagram Protocol (UDP).
Allow non-root mount (checkbox) – Set only if required by the NFS client. Set to allow serving non-root mount requests.
Support >16 groups (checkbox) – Set when a user is a member of more than 16 groups. This assumes group membership is configured correctly on the NFS server.
Log mountd(8) requests (checkbox) – Set to log mountd syslog requests.
Log rpc.statd(8) and rpc.lockd(8) (checkbox) – Set to log rpc.statd and rpc.lockd syslog requests.
mountd(8) bind port (integer) – Enter a number to bind mountd only to that port.
rpc.statd(8) bind port (integer) – Enter a number to bind rpc.statd only to that port.
rpc.lockd(8) bind port (integer) – Enter a number to bind rpc.lockd only to that port.

Unless a specific setting is needed, it is recommended to use the default settings for the NFS service. When TrueNAS is already connected to Active Directory, setting NFSv4 and Require Kerberos for NFSv4 also requires a Kerberos Keytab.

Proxmox NFS storage pool

The NFS backend is based on the directory backend, so it shares most properties. The directory layout and the file naming conventions are the same. The main advantage is that you can directly configure the NFS server properties, so the backend can mount the share automatically. There is no need to modify /etc/fstab. The backend can also test if the server is online, and provides a method to query the server for exported shares.


The backend supports all common storage properties, except the shared flag, which is always set. Additionally, the following properties are used to configure the NFS server:

server – Server IP or DNS name. To avoid DNS lookup delays, it is usually preferable to use an IP address instead of a DNS name – unless you have a very reliable DNS server, or list the server in the local /etc/hosts file.

export – NFS export path (as listed by pvesm nfsscan). You can also set NFS mount options:

path – The local mount point (defaults to /mnt/pve/<STORAGE_ID>/).

options – NFS mount options (see man nfs).


PostgreSQL Historical Log by Table

In my current project, there is a need for tracking data changes in the PostgreSQL tables. The end goal is: if a row changes, we copy the previous row before the change transaction completes and write it to a logging table. We will accomplish this in the following steps:

  •  Create a table LIKE the table that needs logging
  •  Create a trigger function for that table
  •  Apply that function as a trigger

Table Like

First we need an example table to get started with. For a simple example, let's use a basic address table.

create table address (
    address_id integer not null,
    type varchar(100),
    street1 varchar(120) not null,
    street2 varchar(120),
    street3 varchar(120),
    street4 varchar(120),
    city varchar(80),
    po_box_code varchar(20) not null,
    phone_number varchar(50),
    date_created timestamp with time zone not null default current_timestamp
);

This basic table has enough constraints to make a decent example. We need to create a copy of this table, one where the columns have the same names and types, but without all of the constraints. Luckily for us, PostgreSQL provides a feature for just such a situation. For this, we want to use the like_option of the CREATE TABLE statement. According to the latest documentation (PostgreSQL 12) at the time of writing this post:

The LIKE clause specifies a table from which the new table automatically copies all column names, their data types, and their not-null constraints.

Unlike INHERITS, the new table and original table are completely decoupled after creation is complete. Changes to the original table will not be applied to the new table, and it is not possible to include data of the new table in scans of the original table.

Also unlike INHERITS, columns and constraints copied by LIKE are not merged with similarly named columns and constraints. If the same name is specified explicitly or in another LIKE clause, an error is signaled.

The optional like_option clauses specify which additional properties of the original table to copy. Specifying INCLUDING copies the property, specifying EXCLUDING omits the property. EXCLUDING is the default. If multiple specifications are made for the same kind of object, the last one is used.

We will want to exclude all constraints so that when our trigger fires, it can write any data to the columns without worrying if those columns are valid. The resulting table definition looks like the following:

create table logging_address (
    like address excluding all,
    operation char(10) not null,
    date_operated timestamp with time zone not null default current_timestamp
);

Logging Function

Next, there needs to be a trigger that logs the data. To create a new trigger in PostgreSQL, you follow these steps:

    • First, create a trigger function using CREATE FUNCTION statement.
    • Second, bind the trigger function to a table by using CREATE TRIGGER statement.

A trigger function is similar to an ordinary function. However, a trigger function does not take any arguments and has a return value of type trigger. Inside this trigger function, insert the old data into the logging table. This makes the trigger function as follows:

create function address_trigger_function()
returns trigger as $$
begin
    insert into logging_address (address_id, type, street1, street2, street3, street4, city, po_box_code, phone_number, date_created, operation)
    values (old.address_id, old.type, old.street1, old.street2, old.street3, old.street4, old.city, old.po_box_code, old.phone_number, old.date_created, TG_OP);
    return old;
end;
$$ LANGUAGE plpgsql;

TG_OP is of data type text: a string of INSERT, UPDATE, DELETE, or TRUNCATE telling for which operation the trigger was fired.

Implementing the Trigger

As we said earlier, the second step is to bind the trigger function to a table using the CREATE TRIGGER statement. This part is fairly easy.

CREATE TRIGGER address_versioning_trigger
AFTER UPDATE OR DELETE ON address
FOR EACH ROW EXECUTE PROCEDURE address_trigger_function();

Now, whenever you update or delete a record in the address table, the previous version of the row is logged in the logging_address table.
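
A quick way to sanity-check the trigger is a small psycopg2 script that seeds a row, changes it, and reads the log back. The connection settings and sample values below are placeholders.

import psycopg2

conn = psycopg2.connect(host="localhost", dbname="homelab", user="postgres", password="postgres")
with conn, conn.cursor() as cur:
    # Seed a row, then change and delete it so the trigger captures the previous versions.
    cur.execute(
        "insert into address (address_id, street1, po_box_code) values (%s, %s, %s)",
        (1, "123 Server Closet Rd", "55555"),
    )
    cur.execute("update address set street1 = %s where address_id = %s", ("456 Rack Ln", 1))
    cur.execute("delete from address where address_id = %s", (1,))

    # Each UPDATE/DELETE should have written the prior row version to logging_address.
    cur.execute(
        "select operation, street1, date_operated from logging_address where address_id = %s",
        (1,),
    )
    for operation, street1, date_operated in cur.fetchall():
        print(operation, street1, date_operated)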

Using Cognitive Services: Custom Vision Service with Azure IoT Edge

This is a guide on how to use Cognitive Services: Custom Vision Service with Azure IoT Edge without having the Edge module host a web endpoint, instead using the built-in module-to-module communication. This post will break down the steps into four major sections:

  • Creating the Custom Vision Model
  • Creating the Edge Module in Python
  • Adding the model and custom code for Custom Vision
  • Deploy the Module

Creating the Custom Vision Model

To use the Custom Vision Service for image classification, you must first build a classifier model. In this guide, you’ll learn how to build a classifier through the Custom Vision website.


  • A valid Azure subscription. Create an account for free.
  • A set of images with which to train your classifier. See below for tips on choosing images.

Create Custom Vision resources in the Azure portal

To use Custom Vision Service, you will need to create Custom Vision Training and Prediction resources in the Azure portal. This will create both a Training and Prediction resource.

Create a new project

In your web browser, navigate to the Custom Vision web page and select Sign in. Sign in with the same account you used to sign into the Azure portal.

Image of the sign-in page

  1. To create your first project, select New Project. The Create new project dialog box will appear. The new project dialog box has fields for name, description, and domains.
  2. Enter a name and a description for the project. Then select a Resource Group. If your signed-in account is associated with an Azure account, the Resource Group dropdown will display all of your Azure Resource Groups that include a Custom Vision Service Resource.
  3. Select Classification under Project Types. Then, under Classification Types, choose either Multilabel or Multiclass, depending on your use case. Multilabel classification applies any number of your tags to an image (zero or more), while multiclass classification sorts images into single categories (every image you submit will be sorted into the most likely tag). You will be able to change the classification type later if you wish.
  4. Next, select one of the available domains. Each domain optimizes the classifier for specific types of images, as described in the following table. You will be able to change the domain later if you wish.
    Domain Purpose
    Generic Optimized for a broad range of image classification tasks. If none of the other domains are appropriate, or you are unsure of which domain to choose, select the Generic domain.
    Food Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the Food domain.
    Landmarks Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph. This domain works even if the landmark is slightly obstructed by people in front of it.
    Retail Optimized for images that are found in a shopping catalog or shopping website. If you want high precision classifying between dresses, pants, and shirts, use this domain.
    Compact domains Optimized for the constraints of real-time classification on mobile devices. The models generated by compact domains can be exported to run locally.
  5. Finally, select Create project.

Choose training images

As a minimum, we recommend you use at least 30 images per tag in the initial training set. You’ll also want to collect a few extra images to test your model once it is trained.

In order to train your model effectively, use images with visual variety. Select images that vary by:

  • camera angle
  • lighting
  • background
  • visual style
  • individual/grouped subject(s)
  • size
  • type

Additionally, make sure all of your training images meet the following criteria:

  • .jpg, .png, or .bmp format
  • no greater than 6MB in size (4MB for prediction images)
  • no less than 256 pixels on the shortest edge; any images shorter than this will be automatically scaled up by the Custom Vision Service
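
Before uploading, a short script can catch images that break these limits. The sketch below only relies on Pillow, and the folder path is a placeholder.

from pathlib import Path
from PIL import Image

MAX_TRAINING_BYTES = 6 * 1024 * 1024   # 6MB for training images (4MB for prediction)
MIN_EDGE_PIXELS = 256
ALLOWED_SUFFIXES = {".jpg", ".png", ".bmp"}

def check_images(folder: str) -> None:
    for path in sorted(Path(folder).iterdir()):
        if path.suffix.lower() not in ALLOWED_SUFFIXES:
            print(f"{path.name}: unsupported format")
            continue
        if path.stat().st_size > MAX_TRAINING_BYTES:
            print(f"{path.name}: larger than 6MB")
        with Image.open(path) as img:
            if min(img.size) < MIN_EDGE_PIXELS:
                print(f"{path.name}: shortest edge under {MIN_EDGE_PIXELS}px (will be scaled up)")

check_images("training_images/widget")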

Upload and tag images

In this section you will upload and manually tag images to help train the classifier.

  1. To add images, click the Add images button and then select Browse local files. Select Open to move to tagging. Your tag selection will be applied to the entire group of images you’ve selected to upload, so it is easier to upload images in separate groups according to their desired tags. You can also change the tags for individual images after they have been uploaded. The add images control is shown in the upper left, and as a button at bottom center.
  2. To create a tag, enter text in the My Tags field and press Enter. If the tag already exists, it will appear in a dropdown menu. In a multilabel project, you can add more than one tag to your images, but in a multiclass project you can add only one. To finish uploading the images, use the Upload [number] files button. Image of the tag and upload page
  3. Select Done once the images have been uploaded. The progress bar shows all tasks completed.

To upload another set of images, return to the top of this section and repeat the steps.

Train the classifier

To train the classifier, select the Train button. The classifier uses all of the current images to create a model that identifies the visual qualities of each tag.

The train button in the top right of the web page's header toolbar

The training process should only take a few minutes. During this time, information about the training process is displayed in the Performance tab.

The browser window with a training dialog in the main section

Custom Vision Service supports the following exports:

  • Tensorflow for Android.
  • CoreML for iOS11.
  • ONNX for Windows ML.
  • A Windows or Linux container. The container includes a Tensorflow model and service code to use the Custom Vision Service API.

Convert to a compact domain

To convert the domain of an existing classifier, use the following steps:

  1. From the Custom Vision page, select the Home icon to view a list of your projects. Image of the home icon and projects list
  2. Select a project, and then select the Gear icon in the upper right of the page. Image of the gear icon
  3. In the Domains section, select a compact domain. Select Save Changes to save the changes. Image of domains selection
  4. From the top of the page, select Train to retrain using the new domain.

Export your model

To export the model after retraining, use the following steps:

  1. Go to the Performance tab and select Export. Image of the export icon


    If the Export entry is not available, then the selected iteration does not use a compact domain. Use the Iterations section of this page to select an iteration that uses a compact domain, and then select Export.

  2. Select the export format, and then select Export to download the model.

Creating the Edge Module in Python

You can use Azure IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating an IoT Edge module that will be edited to use the exported Custom Vision model. In this tutorial, you learn how to:

  • Use Visual Studio Code to create an IoT Edge Python module.
  • Use Visual Studio Code and Docker to create a Docker image and publish it to your registry.

If you don’t have an Azure subscription, create a free account before you begin.


Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: Develop IoT Edge modules for Linux devices. By completing either of those tutorials, you should have the following prerequisites in place:

To develop an IoT Edge module in Python, install the following additional prerequisites on your development machine:

  • Python extension for Visual Studio Code.
  • Python.
  • Pip for installing Python packages (typically included with your Python installation).

Create a module project

The following steps create an IoT Edge Python module by using Visual Studio Code and the Azure IoT Tools.

Create a new project

Use the Python package cookiecutter to create a Python solution template that you can build on top of.

  1. In Visual Studio Code, select View > Terminal to open the VS Code integrated terminal.
  2. In the terminal, enter the following command to install (or update) cookiecutter, which you use to create the IoT Edge solution template:
    pip install --upgrade --user cookiecutter
  3. Select View > Command Palette to open the VS Code command palette.
  4. In the command palette, enter and run the command Azure: Sign in and follow the instructions to sign in your Azure account. If you’re already signed in, you can skip this step.

In the command palette, enter and run the command Azure IoT Edge: New IoT Edge solution. Follow the prompts and provide the following information to create your solution:

Field Value
Select folder Choose the location on your development machine for VS Code to create the solution files.
Provide a solution name Enter a descriptive name for your solution or accept the default EdgeSolution.
Select module template Choose Python Module.
Provide a module name Name your module PythonModule.
Provide Docker image repository for the module An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace localhost:5000 with the login server value from your Azure container registry. You can retrieve the login server from the Overview page of your container registry in the Azure portal.

The final image repository looks like <registry name>

Provide Docker image repository

Add your registry credentials

The environment file stores the credentials for your container repository and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your private images onto the IoT Edge device.

  1. In the VS Code explorer, open the .env file.
  2. Update the fields with the username and password values that you copied from your Azure container registry.
  3. Save the .env file.

Select your target architecture

Currently, Visual Studio Code can develop Python modules for Linux AMD64 and Linux ARM32v7 devices. You need to select which architecture you’re targeting with each solution, because the container is built and run differently for each architecture type. The default is Linux AMD64.

  1. Open the command palette and search for Azure IoT Edge: Set Default Target Platform for Edge Solution, or select the shortcut icon in the side bar at the bottom of the window.
  2. In the command palette, select the target architecture from the list of options. For this tutorial, we’re using an Ubuntu virtual machine as the IoT Edge device, so we will keep the default amd64.

Adding the model and custom code for Custom Vision
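
A minimal sketch of the edited module is shown below. It assumes the exported compact model files (model.pb and labels.txt) are copied into the module folder, that tensorflow, numpy, pillow, and azure-iot-device are added to requirements.txt, and that the deployment routes use an input named imageInput and an output named classificationOutput. Those route names, and the Placeholder:0/loss:0 tensor names used by older Custom Vision TensorFlow exports, are assumptions to verify against your own export.

import io
import json
import threading

import numpy as np
import tensorflow as tf
from PIL import Image
from azure.iot.device import IoTHubModuleClient, Message

# Load the exported frozen graph and labels once at module start-up.
graph_def = tf.compat.v1.GraphDef()
with open("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

with open("labels.txt") as f:
    labels = [line.strip() for line in f]

session = tf.compat.v1.Session(graph=graph)


def classify(image_bytes):
    """Run the Custom Vision model against raw image bytes."""
    image = Image.open(io.BytesIO(image_bytes)).convert("RGB").resize((224, 224))
    inputs = np.asarray(image, dtype=np.float32)[np.newaxis, ...]
    # "Placeholder:0" / "loss:0" match older Custom Vision exports; confirm against your model.
    scores = session.run("loss:0", {"Placeholder:0": inputs})[0]
    best = int(np.argmax(scores))
    return {"tag": labels[best], "probability": float(scores[best])}


client = IoTHubModuleClient.create_from_edge_environment()


def handle_message(message):
    # Only process messages routed to the assumed image input.
    if message.input_name != "imageInput":
        return
    result = classify(message.data)
    client.send_message_to_output(Message(json.dumps(result)), "classificationOutput")


client.on_message_received = handle_message

# Keep the module alive so the handler keeps receiving routed messages.
threading.Event().wait()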


Deploy the Module

Build and push your module

In the previous section, you created an IoT Edge solution and added the Custom Vision code to the PythonModule. Now you need to build the solution as a container image and push it to your container registry.

  1. Open the VS Code integrated terminal by selecting View > Terminal.
  2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the Access keys section of your registry in the Azure portal.
    docker login -u <ACR username> -p <ACR password> <ACR login server>
    You may receive a security warning recommending the use of --password-stdin. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the docker login reference.
  3. In the VS Code explorer, right-click the deployment.template.json file and select Build and Push IoT Edge Solution.

    The build and push command starts three operations. First, it creates a new folder in the solution called config that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs docker build to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs docker push to push the image repository to your container registry.

Deploy modules to device

Use the Visual Studio Code explorer and the Azure IoT Tools extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the deployment.json file in the config folder. All you need to do now is select a device to receive the deployment.

Make sure that your IoT Edge device is up and running.

  1. In the Visual Studio Code explorer, expand the Azure IoT Hub Devices section to see your list of IoT devices.
  2. Right-click the name of your IoT Edge device, then select Create Deployment for Single Device.
  3. Select the deployment.json file in the config folder and then click Select Edge Deployment Manifest. Do not use the deployment.template.json file.
  4. Click the refresh button. You should see the new PythonModule running along with the TempSensor module and the $edgeAgent and $edgeHub.




New Pluralsight Courses Released!

My new Pluralsight courses Cleaning and Preparing Data in Microsoft Azure and Architecting Xamarin.Forms Applications for Code Reuse were just released! Here are the synopses:

Cleaning and Preparing Data in Microsoft Azure


This course targets software developers and data scientists looking to understand the initial steps in a machine learning solution. The content will showcase methods and tools available using Microsoft Azure.


No data science project of merit has ever started with great data ready to plug into an algorithm. In this course, Cleaning and Preparing Data in Microsoft Azure, you’ll learn foundational knowledge of the steps required to utilize data in a machine learning project. First, you’ll discover different types of data and languages. Next, you’ll learn about managing large data sets and handling bad data. Finally, you’ll explore how to utilize Azure Notebooks. When you’re finished with this course, you’ll have the skills and knowledge of preparing data needed for use in Microsoft Azure. Software required: Microsoft Azure.

Architecting Xamarin.Forms Applications for Code Reuse


A well-architected application is flexible to changing business requirements. This course will teach you how to architect Xamarin.Forms applications in a way that promotes reusable patterns.


As business requirements change, so do solution assumptions. In this course, Architecting Xamarin.Forms Applications for Code Reuse, you’ll learn different architectural patterns in Xamarin.Forms. First, you’ll explore project structure and organization. Next, you’ll discover patterns and standards to promote code sharing. Finally, you’ll learn how to utilize dependency injection in Xamarin.Forms. When you’re finished with this course, you’ll have the skills and knowledge of architecting Xamarin.Forms projects needed to optimally promote code reuse.

gRPC C++ and Self Signed Certificates

Playing around with gRPC and a C++ server caused an issue that took longer to solve than it should have. Once the linker and other issues were solved, the following error kept appearing:

7562] Handshake failed with fatal error SSL_ERROR_SSL: error:100000c0:SSL routines:OPENSSL_internal:PEER_DID_NOT_RETURN_A_CERTIFICATE.

After searching, it led me to this file where the different enumeration values for the SSL handling could be set.

/** Server does not request client certificate. A client can present a self
    signed or signed certificate if it wishes to do so and it would be
    accepted. */
GRPC_SSL_DONT_REQUEST_CLIENT_CERTIFICATE,

/** Server requests client certificate but does not enforce that the client
    presents a certificate.

    If the client presents a certificate, the client authentication is left to
    the application based on the metadata like the certificate etc.

    The key cert pair should still be valid for the SSL connection to be
    established. */
GRPC_SSL_REQUEST_CLIENT_CERTIFICATE_BUT_DONT_VERIFY,

/** Server requests client certificate but does not enforce that the client
    presents a certificate.

    If the client presents a certificate, the client authentication is done by
    the grpc framework (the client needs to either present a signed cert or not
    present a certificate at all for a successful connection).

    The key cert pair should still be valid for the SSL connection to be
    established. */
GRPC_SSL_REQUEST_CLIENT_CERTIFICATE_AND_VERIFY,

/** Server requests client certificate and enforces that the client presents a
    certificate.

    If the client presents a certificate, the client authentication is left to
    the application based on the metadata like the certificate etc.

    The key cert pair should still be valid for the SSL connection to be
    established. */
GRPC_SSL_REQUEST_AND_REQUIRE_CLIENT_CERTIFICATE_BUT_DONT_VERIFY,

/** Server requests client certificate and enforces that the client presents a
    certificate.

    The certificate presented by the client is verified by the grpc framework
    (the client needs to present signed certs for a successful connection).

    The key cert pair should still be valid for the SSL connection to be
    established. */
GRPC_SSL_REQUEST_AND_REQUIRE_CLIENT_CERTIFICATE_AND_VERIFY,

That led me to find a more thorough breakdown of the use cases for each enumeration here.

  1. With GRPC_SSL_DONT_REQUEST_CLIENT_CERTIFICATE: Server does not request for a client certificate. So the client can choose to present a self-signed or a signed certificate or not present a certificate at all and all of these should be okay.
    With GRPC_SSL_REQUEST_CLIENT_CERTIFICATE_BUT_DONT_VERIFY: Server requests the client for a certificate but the signature enforcement is not done by grpc server framework but left to the app. The app can use metadata like the certificate hash to verify a certificate (essentially provides the server a
    way to verify self signed certificates, provided they have an out of band mechanism to register the certificate with the app)
  2. By “client authentication done by grpc framework”, I meant certificate signature verification is done using the ssl protocol itself by the grpc server framework (SSL_VERIFY_PEER option is being used in ssl options). The client has to provide a signed certificate which can be verified by the server (using the SSL roots file).
  3. “don’t request”/ “request”/ “require” / “verify”
    – Server has the option to either request or not-request for client cert.
    – Client can choose to either present a certificate or not.
    – Server can choose to verify the client certificate or not
    Each of these three options are independent of each other and contribute to multiple options presented.
    “require” for instance is the case server request for client cert, client has to present a certificate for the ssl handshake to continue but the server will not verify the client certificate using signature but can do so if needed based on certificate metadata.
    “verify” – SSL_VERIFY_PEER option is being used in ssl options and the client signature is verified/trusted by the server using the SSL roots file.
  4. All of the above pretty much expects that the private key and the public key files are all in order and the only question is whether they were self-signed or signed by a mutually trusted CA. If the public key and private keys don’t match up, then the connection fails.
  5. It is a typo. It should have been “The client needs to either present a signed cert or not present a
    certificate at all for a successful connection”
  6. grpc_auth_context has various properties of the peer like GRPC_X509_CN_PROPERTY_NAME, GRPC_X509_PEM_CERT_PROPERTY_NAME, GRPC_X509_SAN_PROPERTY_NAME that can be used.

Finally, that led me to understand that for self-signed certificates in development, GRPC_SSL_REQUEST_CLIENT_CERTIFICATE_BUT_DONT_VERIFY was the right enumeration.