TrueNAS Azure Sync for Proxmox

Previously, we discussed TrueNAS NFS for Proxmox. Now that Proxmox is using TrueNAS for storage, a Cloud Sync Task can be used to copy the TrueNAS NFS share to Azure Blob Storage as a backup. The following steps are required:

  • Create Azure Blob Storage Account
  • Create TrueNAS Cloud Credentials
  • Create Cloud Sync Tasks

Create Azure Blob Storage Account

Create a storage account

Every storage account must belong to an Azure resource group. A resource group is a logical container for grouping your Azure services. When you create a storage account, you have the option to either create a new resource group, or use an existing resource group. This article shows how to create a new resource group.

A general-purpose v2 storage account provides access to all of the Azure Storage services: blobs, files, queues, tables, and disks. The steps outlined here create a general-purpose v2 storage account, but the steps to create any type of storage account are similar. For more information about types of storage accounts and other storage account settings, see Azure storage account overview.

Portal

To create a general-purpose v2 storage account in the Azure portal, follow these steps:

  1. On the Azure portal menu, select All services. In the list of resources, type Storage Accounts. As you begin typing, the list filters based on your input. Select Storage Accounts.
  2. On the Storage Accounts window that appears, choose Add.
  3. On the Basics tab, select the subscription in which to create the storage account.
  4. Under the Resource group field, select your desired resource group, or create a new resource group. For more information on Azure resource groups, see Azure Resource Manager overview.
  5. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
  6. Select a location for your storage account, or use the default location.
  7. Select a performance tier. The default tier is Standard.
  8. Set the Account kind field to Storage V2 (general-purpose v2).
  9. Specify how the storage account will be replicated. The default replication option is Read-access geo-redundant storage (RA-GRS). For more information about available replication options, see Azure Storage redundancy.
  10. Additional options are available on the Networking, Data protection, Advanced, and Tags tabs. To use Azure Data Lake Storage, choose the Advanced tab, and then set Hierarchical namespace to Enabled. For more information, see Azure Data Lake Storage Gen2 Introduction.
  11. Select Review + Create to review your storage account settings and create the account.
  12. Select Create.

The following image shows the settings on the Basics tab for a new storage account:

Screenshot showing how to create a storage account in the Azure portal
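
If you prefer scripting these steps, the Azure CLI can create the resource group and the general-purpose v2 account in a couple of commands. This is a minimal sketch; the resource group name, account name, and location below are placeholders.

# Create a resource group to hold the storage account
az group create --name truenas-backup-rg --location eastus

# Create a general-purpose v2 account with RA-GRS replication
az storage account create \
    --name truenasbackup001 \
    --resource-group truenas-backup-rg \
    --location eastus \
    --sku Standard_RAGRS \
    --kind StorageV2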

Create a container

To create a container in the Azure portal, follow these steps:

  1. Navigate to your new storage account in the Azure portal.
  2. In the left menu for the storage account, scroll to the Blob service section, then select Containers.
  3. Select the + Container button.
  4. Type a name for your new container. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character. For more information about container and blob names, see Naming and referencing containers, blobs, and metadata.
  5. Set the level of public access to the container. The default level is Private (no anonymous access).
  6. Select OK to create the container.
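
The container can also be created from the Azure CLI. A sketch, with the container and account names carried over from the placeholder example above:

# Create a private container in the storage account
az storage container create \
    --name truenas-sync \
    --account-name truenasbackup001 \
    --public-access off \
    --auth-mode login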

Create TrueNAS Cloud Credentials

To begin integrating TrueNAS with a Cloud Storage provider, register the account credentials on the system. After saving any credentials, a Cloud Sync Task allows sending or receiving data from that Cloud Storage Provider.

Saving a Cloud Storage Credential

Transferring data from TrueNAS to the Cloud requires saving Cloud Storage Provider credentials on the system.

It is recommended to have another browser tab open and logged in to the Cloud Storage Provider account you intend to link with TrueNAS. Some providers require additional information that is generated on the storage provider account page. For example, saving an Amazon S3 credential on TrueNAS could require logging in to the S3 account and generating an access key pair on the Security Credentials > Access Keys page.

To save cloud storage provider credentials, go to System > Cloud Credentials and click Add.

Using the Azure portal, we can retrieve our access keys.
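
TrueNAS needs the storage account name and one of the two account keys when creating the Azure Blob credential. If you would rather pull the keys from the CLI than from the portal blade, a sketch using the placeholder names from earlier:

az storage account keys list \
    --resource-group truenas-backup-rg \
    --account-name truenasbackup001 \
    --output table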

Create Cloud Sync Tasks

TrueNAS can send, receive, or synchronize data with a Cloud Storage provider. Cloud Sync tasks allow for single time transfers or recurring transfers on a schedule, and are an effective method to back up data to a remote location.

Go to Tasks > Cloud Sync Tasks and click Add.


Give the task a memorable Description and select an existing cloud Credential. TrueNAS connects to the chosen Cloud Storage Provider and shows the available storage locations. Decide whether data is transferring to (PUSH) or from (PULL) the Cloud Storage location (Remote), then choose a Transfer Mode.

Next, control when the task runs by defining a Schedule. When a specific schedule is required, choose Custom and use the Advanced Scheduler.

Unsetting Enable makes the configuration available without allowing the Schedule to run the task. To manually activate a saved task, go to Tasks > Cloud Sync Tasks, expand the task, and click RUN NOW.

The remaining options allow tuning the task to your specific requirements.

Transfer

Description: Enter a description of the Cloud Sync Task.
Direction: PUSH sends data to cloud storage. PULL receives data from cloud storage. Changing the direction resets the Transfer Mode to COPY.
Transfer Mode: SYNC: Files on the destination are changed to match those on the source. If a file does not exist on the source, it is also deleted from the destination. COPY: Files from the source are copied to the destination. If files with the same names are present on the destination, they are overwritten. MOVE: After files are copied from the source to the destination, they are deleted from the source. Files with the same names on the destination are overwritten.
Directory/Files: Select the directories or files to be sent to the cloud for Push syncs, or the destination to be written for Pull syncs. Be cautious about the destination of Pull jobs to avoid overwriting existing files.
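
TrueNAS performs these transfers with rclone, so the three modes map loosely onto rclone subcommands. A conceptual sketch only; the azureblob: remote name and paths are hypothetical:

# COPY: copy source files to the destination; nothing is deleted
rclone copy /mnt/tank/proxmox azureblob:truenas-sync
# SYNC: make the destination identical to the source, deleting extra files
rclone sync /mnt/tank/proxmox azureblob:truenas-sync
# MOVE: copy to the destination, then delete the files from the source
rclone move /mnt/tank/proxmox azureblob:truenas-sync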

Remote

Credential: Select the cloud storage provider credentials from the list of available Cloud Credentials.

Control

Schedule: Select a schedule preset or choose Custom to open the advanced scheduler.
Enabled: Enable this Cloud Sync Task. Unset to disable this Cloud Sync Task without deleting it.

Advanced Options

Follow Symlinks: Follow symlinks and copy the items to which they link.
Pre-Script: Script to execute before running sync.
Post-Script: Script to execute after running sync.
Exclude: List of files and directories to exclude from sync. Separate entries by pressing Enter. See rclone filtering for more details about the --exclude option.

Advanced Remote Options

Remote Encryption: PUSH: Encrypt files before transfer and store the encrypted files on the remote system. Files are encrypted using the Encryption Password and Encryption Salt values. PULL: Decrypt files that are being stored on the remote system before the transfer. Transferring the encrypted files requires entering the same Encryption Password and Encryption Salt that was used to encrypt the files. Additional details about the encryption algorithm and key derivation are available in the rclone crypt File formats documentation.
Transfers: Number of simultaneous file transfers. Enter a number based on the available bandwidth and destination system performance. See rclone --transfers.
Bandwidth Limit: A single bandwidth limit or bandwidth limit schedule in rclone format. Separate entries by pressing Enter. Example: 08:00,512 12:00,10MB 13:00,512 18:00,30MB 23:00,off. Units can be specified with the beginning letter: b, k (default), M, or G. See rclone --bwlimit.

Scripting and Environment Variables

Advanced users can write scripts that run immediately before or after the Cloud Sync task. The Post-script field only runs when the Cloud Sync task completes successfully. You can use a variety of task environment variables in the Pre- and Post-script fields (a short example follows the variable lists below):

  • CLOUD_SYNC_ID
  • CLOUD_SYNC_DESCRIPTION
  • CLOUD_SYNC_DIRECTION
  • CLOUD_SYNC_TRANSFER_MODE
  • CLOUD_SYNC_ENCRYPTION
  • CLOUD_SYNC_FILENAME_ENCRYPTION
  • CLOUD_SYNC_ENCRYPTION_PASSWORD
  • CLOUD_SYNC_ENCRYPTION_SALT
  • CLOUD_SYNC_SNAPSHOT

There are also provider-specific variables such as CLOUD_SYNC_CLIENT_ID, CLOUD_SYNC_TOKEN, or CLOUD_SYNC_CHUNK_SIZE.

Remote storage settings:

  • CLOUD_SYNC_BUCKET
  • CLOUD_SYNC_FOLDER

Local storage settings:

  • CLOUD_SYNC_PATH
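
As a small example, the sketch below is a hypothetical Post-script that writes a one-line summary of the finished task to syslog using a few of the variables above:

#!/bin/sh
# Hypothetical post-sync script; the CLOUD_SYNC_* variables are populated by TrueNAS.
logger "cloud sync '${CLOUD_SYNC_DESCRIPTION}' (${CLOUD_SYNC_DIRECTION}/${CLOUD_SYNC_TRANSFER_MODE}) finished: ${CLOUD_SYNC_PATH} -> ${CLOUD_SYNC_BUCKET}/${CLOUD_SYNC_FOLDER}"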

Testing Settings

Test the settings before saving by clicking DRY RUN. TrueNAS connects to the Cloud Storage Provider and simulates a file transfer. No data is actually sent or received. A dialog shows the test status and allows downloading the task logs.


Cloud Sync Behavior

Saved tasks are activated according to their schedule or by clicking RUN NOW. An in-progress cloud sync must finish before another can begin. Stopping an in-progress task cancels the file transfer and requires starting the file transfer over.

To view the logs of a running task, or of its most recent run, click the task status.

Cloud Sync Restore

To quickly create a new Cloud Sync that uses the same options but reverses the data transfer, expand an existing Cloud Sync task and click RESTORE.


Enter a new Description for this reversed task and define the path to a storage location for the transferred data.

The restored cloud sync is saved as another entry in Tasks > Cloud Sync Tasks.

TrueNAS NFS for Proxmox

While setting up the server closet at the office, the first thing set up was a TrueNAS Core server. If you are looking for a guide on installing TrueNAS Core, a good one can be found here. With it already in place, it can be used to create an NFS share.

One of the major benefits of NFS is being able to easily migrate a container from one environment to another. Because the container or VM data is pulled over the network, any host can run it ad hoc, and migrating VMs or containers takes seconds.

TrueNAS NFS support

Creating a Network File System (NFS) share on TrueNAS gives the benefit of making lots of data easily available to anyone with share access. Depending on how the share is configured, users accessing the share can be restricted to read or write privileges. To create a new share, make sure a dataset is available with all the data for sharing.

Creating an NFS Share

Go to Sharing > Unix Shares (NFS) and click ADD.


Use the file browser to select the dataset to be shared. An optional Description can be entered to help identify the share. Clicking SUBMIT creates the share. At the time of creation, you can select ENABLE SERVICE for the service to start and to automatically start after any reboots. If you wish to create the share but not immediately enable it, select CANCEL.


NFS Share Settings

Path (file browser): Type or browse to the full path to the pool or dataset to share. Click ADD to configure multiple paths.
Description (string): Enter any notes or reminders about the share.
All dirs (checkbox): Set to allow the client to mount any subdirectory within the Path. Leaving disabled only allows clients to mount the Path endpoint.
Quiet (checkbox): Enabling inhibits some syslog diagnostics to avoid error messages. See exports(5) for examples. Disabling allows all syslog diagnostics, which can lead to additional cosmetic error messages.
Enabled (checkbox): Enable this NFS share. Unset to disable this NFS share without deleting the configuration.

To edit an existing NFS share, go to Sharing > Unix Shares (NFS), click the more_vert icon, and select Edit. The options available are identical to the share creation options.

Configure the NFS Service

To begin sharing the data, go to Services and click the NFS toggle. If you want NFS sharing to activate immediately after TrueNAS boots, set Start Automatically.

NFS service settings can be configured by clicking the Configure icon.

Number of servers (integer): Specify how many servers to create. Increase if NFS client responses are slow. Keep this less than or equal to the number of CPUs reported by sysctl -n kern.smp.cpus to limit CPU context switching.
Bind IP Addresses (drop-down): Select IP addresses to listen to for NFS requests. Leave empty for NFS to listen to all available addresses.
Enable NFSv4 (checkbox): Set to switch from NFSv3 to NFSv4.
NFSv3 ownership model for NFSv4 (checkbox): Set when NFSv4 ACL support is needed without requiring the client and the server to sync users and groups.
Require Kerberos for NFSv4 (checkbox): Set to force NFS shares to fail if the Kerberos ticket is unavailable.
Serve UDP NFS clients (checkbox): Set if NFS clients need to use the User Datagram Protocol (UDP).
Allow non-root mount (checkbox): Set only if required by the NFS client. Set to allow serving non-root mount requests.
Support >16 groups (checkbox): Set when a user is a member of more than 16 groups. This assumes group membership is configured correctly on the NFS server.
Log mountd(8) requests (checkbox): Set to log mountd syslog requests.
Log rpc.statd(8) and rpc.lockd(8) (checkbox): Set to log rpc.statd and rpc.lockd syslog requests.
mountd(8) bind port (integer): Enter a number to bind mountd only to that port.
rpc.statd(8) bind port (integer): Enter a number to bind rpc.statd only to that port.
rpc.lockd(8) bind port (integer): Enter a number to bind rpc.lockd only to that port.

Unless a specific setting is needed, it is recommended to use the default settings for the NFS service. When TrueNAS is already connected to Active Directory, setting NFSv4 and Require Kerberos for NFSv4 also requires a Kerberos Keytab.
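
With the share created and the NFS service running, a quick sanity check is to mount the export from a Linux client. The IP address and dataset path below are placeholders for your environment:

# Ask the TrueNAS server what it is exporting
showmount -e 192.168.1.50

# Test-mount the export and confirm it is reachable
mkdir -p /mnt/test
mount -t nfs 192.168.1.50:/mnt/tank/proxmox /mnt/test
df -h /mnt/test
umount /mnt/test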

Proxmox NFS storage pool

The NFS backend is based on the directory backend, so it shares most properties. The directory layout and the file naming conventions are the same. The main advantage is that you can directly configure the NFS server properties, so the backend can mount the share automatically. There is no need to modify /etc/fstab. The backend can also test if the server is online, and provides a method to query the server for exported shares.


The backend supports all common storage properties, except the shared flag, which is always set. Additionally, the following properties are used to configure the NFS server:

server – Server IP or DNS name. To avoid DNS lookup delays, it is usually preferable to use an IP address instead of a DNS name – unless you have a very reliable DNS server, or list the server in the local /etc/hosts file.

export – NFS export path (as listed by pvesm nfsscan).

Optionally, you can set NFS mount options:

path – The local mount point (defaults to /mnt/pve/<STORAGE_ID>/).

options – NFS mount options (see man nfs).
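
The same storage can also be added from the Proxmox shell with pvesm instead of the web UI. A sketch with placeholder values:

# List the exports offered by the TrueNAS server
pvesm nfsscan 192.168.1.50

# Add the export as a storage pool named "truenas-nfs"
pvesm add nfs truenas-nfs \
    --server 192.168.1.50 \
    --export /mnt/tank/proxmox \
    --content images,rootdir \
    --options vers=3,soft

# The equivalent entry ends up in /etc/pve/storage.cfg:
#   nfs: truenas-nfs
#       server 192.168.1.50
#       export /mnt/tank/proxmox
#       path /mnt/pve/truenas-nfs
#       content images,rootdir
#       options vers=3,soft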


PostgreSQL Historical Log by Table

In my current project, there is a need for tracking data changes in PostgreSQL tables. The end goal is that, if a row changes, we copy the previous row before the change transaction completes and write it to a logging table. We will accomplish this in the following steps:

  •  Create a table LIKE the table that needs logging
  •  Create a trigger function for the table
  •  Apply that function as a trigger

Table Like

First, we need an example table to get started with. For a simple example, let's use a basic address table.

create table address
(
address_id integer not null,
type varchar(100),
street1 varchar(120) not null,
street2 varchar(120),
street3 varchar(120),
street4 varchar(120),
city varchar(80),
po_box_code varchar(20) not null,
phone_number varchar(50),
date_created timestamp with time zone not null default current_timestamp
);


This basic table has enough constraints to make a decent example. We need to create a copy of this table, one where the columns have the same names and types, but without all of the constraints. Luckily for us, PostgreSQL provides a feature for just such a situation. For this, we want to use the like_option of the CREATE TABLE statement. According to the latest documentation (PostgreSQL 12) at the time of writing this post:

The LIKE clause specifies a table from which the new table automatically copies all column names, their data types, and their not-null constraints.

Unlike INHERITS, the new table and original table are completely decoupled after creation is complete. Changes to the original table will not be applied to the new table, and it is not possible to include data of the new table in scans of the original table.

Also unlike INHERITS, columns and constraints copied by LIKE are not merged with similarly named columns and constraints. If the same name is specified explicitly or in another LIKE clause, an error is signaled.

The optional like_option clauses specify which additional properties of the original table to copy. Specifying INCLUDING copies the property, specifying EXCLUDING omits the property. EXCLUDING is the default. If multiple specifications are made for the same kind of object, the last one is used.

We will want to exclude all constraints so that when our trigger fires, it can write any data to the columns without worrying if those columns are valid. The resulting table definition looks like the following:

create table logging_address
(
like address EXCLUDING CONSTRAINTS,
operation char(10) not null,
date_operated timestamp with time zone not null default current_timestamp
);


Logging Function

Next, there needs to be a trigger that logs the data. To create a new trigger in PostgreSQL, you follow these steps:

    • First, create a trigger function using the CREATE FUNCTION statement.
    • Second, bind the trigger function to a table by using the CREATE TRIGGER statement.

A trigger function is similar to an ordinary function. However, a trigger function does not take any argument and has a return value with the type of trigger. Inside this trigger function, insert the old data into the logging table. This makes the trigger function as follows:

create function address_trigger_function()
returns trigger as $$
begin
    -- Copy the previous version of the row into the logging table,
    -- recording which operation (UPDATE or DELETE) fired the trigger.
    insert into logging_address (address_id, type, street1, street2, street3, street4, city, po_box_code, phone_number, date_created, operation)
    values (old.address_id, old.type, old.street1, old.street2, old.street3, old.street4, old.city, old.po_box_code, old.phone_number, old.date_created, TG_OP);
    -- For DELETE, NEW is null; returning null from a BEFORE trigger would
    -- cancel the delete, so return OLD instead.
    if (TG_OP = 'DELETE') then
        return old;
    end if;
    return new;
end;
$$ language plpgsql;

TG_OP is of data type text: a string of INSERT, UPDATE, DELETE, or TRUNCATE telling which operation fired the trigger.

Implementing the Trigger

As noted earlier, the second step is to bind the trigger function to a table using the CREATE TRIGGER statement. This part is fairly easy.

CREATE TRIGGER address_versioning_trigger
BEFORE UPDATE OR DELETE ON address
FOR EACH ROW EXECUTE PROCEDURE address_trigger_function();


Now, whenever you update or delete a record in the address table, the previous version of the row is written to the logging_address table along with the operation that changed it.
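
To see the trigger in action, insert a row, change it, and check the logging table. A quick sketch using psql (the database name is a placeholder):

psql -d exampledb <<'SQL'
insert into address (address_id, street1, po_box_code)
values (1, '123 Main St', '43004');

update address set city = 'Columbus' where address_id = 1;

-- The pre-update version of the row is now in the logging table.
select address_id, city, operation, date_operated from logging_address;
SQL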

Using Cognitive Services: Custom Vision Service with Azure IoT Edge

This is a guide on how to use Cognitive Services: Custom Vision Service with Azure IoT Edge without having the Edge module host a web endpoint, instead using the built-in module-to-module communication. This post will break down the steps into four major sections:

  • Creating the Custom Vision Model
  • Creating the Edge Module in Python
  • Adding the model and custom code for Custom Vision
  • Deploy the Module

Creating the Custom Vision Model

To use the Custom Vision Service for image classification, you must first build a classifier model. In this guide, you’ll learn how to build a classifier through the Custom Vision website.

Prerequisites

  • A valid Azure subscription. Create an account for free.
  • A set of images with which to train your classifier. See below for tips on choosing images.

Create Custom Vision resources in the Azure portal

To use the Custom Vision Service, you will need to create Custom Vision Training and Prediction resources in the Azure portal.

Create a new project

In your web browser, navigate to the Custom Vision web page and select Sign in. Sign in with the same account you used to sign into the Azure portal.


  1. To create your first project, select New Project. The Create new project dialog box will appear, with fields for name, description, and domains.
  2. Enter a name and a description for the project. Then select a Resource Group. If your signed-in account is associated with an Azure account, the Resource Group dropdown will display all of your Azure Resource Groups that include a Custom Vision Service Resource.
  3. Select Classification under Project Types. Then, under Classification Types, choose either Multilabel or Multiclass, depending on your use case. Multilabel classification applies any number of your tags to an image (zero or more), while multiclass classification sorts images into single categories (every image you submit will be sorted into the most likely tag). You will be able to change the classification type later if you wish.
  4. Next, select one of the available domains. Each domain optimizes the classifier for specific types of images, as described in the following table. You will be able to change the domain later if you wish.
    Generic: Optimized for a broad range of image classification tasks. If none of the other domains are appropriate, or you are unsure of which domain to choose, select the Generic domain.
    Food: Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the Food domain.
    Landmarks: Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph. This domain works even if the landmark is slightly obstructed by people in front of it.
    Retail: Optimized for images that are found in a shopping catalog or shopping website. If you want high precision classifying between dresses, pants, and shirts, use this domain.
    Compact domains: Optimized for the constraints of real-time classification on mobile devices. The models generated by compact domains can be exported to run locally.
  5. Finally, select Create project.

Choose training images

At a minimum, we recommend using at least 30 images per tag in the initial training set. You'll also want to collect a few extra images to test your model once it is trained.

In order to train your model effectively, use images with visual variety. Select images that vary by:

  • camera angle
  • lighting
  • background
  • visual style
  • individual/grouped subject(s)
  • size
  • type

Additionally, make sure all of your training images meet the following criteria:

  • .jpg, .png, or .bmp format
  • no greater than 6MB in size (4MB for prediction images)
  • no less than 256 pixels on the shortest edge; any images shorter than this will be automatically scaled up by the Custom Vision Service

Upload and tag images

In this section you will upload and manually tag images to help train the classifier.

  1. To add images, click the Add images button and then select Browse local files. Select Open to move to tagging. Your tag selection will be applied to the entire group of images you’ve selected to upload, so it is easier to upload images in separate groups according to their desired tags. You can also change the tags for individual images after they have been uploaded.
  2. To create a tag, enter text in the My Tags field and press Enter. If the tag already exists, it will appear in a dropdown menu. In a multilabel project, you can add more than one tag to your images, but in a multiclass project you can add only one. To finish uploading the images, use the Upload [number] files button.
  3. Select Done once the images have been uploaded.

To upload another set of images, return to the top of this section and repeat the steps.

Train the classifier

To train the classifier, select the Train button. The classifier uses all of the current images to create a model that identifies the visual qualities of each tag.


The training process should only take a few minutes. During this time, information about the training process is displayed in the Performance tab.


To run the classifier locally on an IoT Edge device, the model must be exported. Custom Vision Service supports the following exports:

  • TensorFlow for Android.
  • CoreML for iOS 11.
  • ONNX for Windows ML.
  • A Windows or Linux container. The container includes a TensorFlow model and service code to use the Custom Vision Service API.

Convert to a compact domain

To convert the domain of an existing classifier, use the following steps:

  1. From the Custom Vision page, select the Home icon to view a list of your projects. You can also go to https://customvision.ai/projects to see your projects.
  2. Select a project, and then select the Gear icon in the upper right of the page.
  3. In the Domains section, select a compact domain. Select Save Changes to save the changes.
  4. From the top of the page, select Train to retrain using the new domain.

Export your model

To export the model after retraining, use the following steps:

  1. Go to the Performance tab and select Export.

    Tip

    If the Export entry is not available, then the selected iteration does not use a compact domain. Use the Iterations section of this page to select an iteration that uses a compact domain, and then select Export.

  2. Select the export format, and then select Export to download the model.

Creating the Edge Module in Python

You can use Azure IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating an IoT Edge module that will be edited to use the exported Custom Vision model. In this tutorial, you learn how to:

  • Use Visual Studio Code to create an IoT Edge Python module.
  • Use Visual Studio Code and Docker to create a Docker image and publish it to your registry.

If you don’t have an Azure subscription, create a free account before you begin.

Prerequisites

Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: Develop IoT Edge modules for Linux devices. Completing that tutorial puts the base prerequisites in place.

To develop an IoT Edge module in Python, install the following additional prerequisites on your development machine:

  • Python extension for Visual Studio Code.
  • Python.
  • Pip for installing Python packages (typically included with your Python installation).

Create a module project

The following steps create an IoT Edge Python module by using Visual Studio Code and the Azure IoT Tools.

Create a new project

Use the Python package cookiecutter to create a Python solution template that you can build on top of.

  1. In Visual Studio Code, select View > Terminal to open the VS Code integrated terminal.
  2. In the terminal, enter the following command to install (or update) cookiecutter, which you use to create the IoT Edge solution template:
    pip install --upgrade --user cookiecutter
  3. Select View > Command Palette to open the VS Code command palette.
  4. In the command palette, enter and run the command Azure: Sign in and follow the instructions to sign in to your Azure account. If you’re already signed in, you can skip this step.

In the command palette, enter and run the command Azure IoT Edge: New IoT Edge solution. Follow the prompts and provide the following information to create your solution:

Select folder: Choose the location on your development machine for VS Code to create the solution files.
Provide a solution name: Enter a descriptive name for your solution or accept the default EdgeSolution.
Select module template: Choose Python Module.
Provide a module name: Name your module PythonModule.
Provide Docker image repository for the module: An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace localhost:5000 with the login server value from your Azure container registry. You can retrieve the login server from the Overview page of your container registry in the Azure portal.

The final image repository looks like <registry name>.azurecr.io/pythonmodule.


Add your registry credentials

The environment file stores the credentials for your container repository and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your private images onto the IoT Edge device.

  1. In the VS Code explorer, open the .env file.
  2. Update the fields with the username and password values that you copied from your Azure container registry.
  3. Save the .env file.

Select your target architecture

Currently, Visual Studio Code can develop modules for Linux AMD64 and Linux ARM32v7 devices. You need to select which architecture you’re targeting with each solution, because the container is built and run differently for each architecture type. The default is Linux AMD64.

  1. Open the command palette and search for Azure IoT Edge: Set Default Target Platform for Edge Solution, or select the shortcut icon in the side bar at the bottom of the window.
  2. In the command palette, select the target architecture from the list of options. For this tutorial, we’re using an Ubuntu virtual machine as the IoT Edge device, so we’ll keep the default amd64.

Adding the model and custom code for Custom Vision
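
At a minimum, the model exported earlier has to end up inside the module folder so it is copied into the container image, and the module code then loads it to classify images received on its input queue, sending the results to an output instead of hosting a web endpoint. A rough sketch of pulling the export into the solution, assuming the download is named CustomVision.zip and the module is named PythonModule as above (the exported archive typically contains a frozen TensorFlow graph and a labels file):

# Unpack the exported model into the module folder so it is included in the image
unzip CustomVision.zip -d modules/PythonModule/model
ls modules/PythonModule/model
#   model.pb  labels.txt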

 

Deploy the Module

Build and push your module

In the previous sections, you created an IoT Edge solution and added the exported model and custom code to the PythonModule. Now you need to build the solution as a container image and push it to your container registry.

  1. Open the VS Code integrated terminal by selecting View > Terminal.
  2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the Access keys section of your registry in the Azure portal.
    docker login -u <ACR username> -p <ACR password> <ACR login server>
    
    You may receive a security warning recommending the use of --password-stdin. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the docker login reference.
  3. In the VS Code explorer, right-click the deployment.template.json file and select Build and Push IoT Edge solution.

    The build and push command starts three operations. First, it creates a new folder in the solution called config that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs docker build to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs docker push to push the image repository to your container registry.

Deploy modules to device

Use the Visual Studio Code explorer and the Azure IoT Tools extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the deployment.json file in the config folder. All you need to do now is select a device to receive the deployment.

Make sure that your IoT Edge device is up and running.
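
On the device itself, you can confirm the IoT Edge runtime is healthy and see which modules are currently running:

# Check the IoT Edge daemon and list the running modules
sudo systemctl status iotedge
sudo iotedge list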

  1. In the Visual Studio Code explorer, expand the Azure IoT Hub Devices section to see your list of IoT devices.
  2. Right-click the name of your IoT Edge device, then select Create Deployment for Single Device.
  3. Select the deployment.json file in the config folder and then click Select Edge Deployment Manifest. Do not use the deployment.template.json file.
  4. Click the refresh button. You should see the new PythonModule running along with the TempSensor module and the $edgeAgent and $edgeHub.

 

 

 

New Pluralsight Courses Released!

My new Pluralsight courses Cleaning and Preparing Data in Microsoft Azure and Architecting Xamarin.Forms Applications for Code Reuse were just released! Here are the synopses:

Cleaning and Preparing Data in Microsoft Azure

Abstract

This course targets software developers and data scientists looking to understand the initial steps in a machine learning solution. The content will showcase methods and tools available using Microsoft Azure.

Description

No data science project of merit has ever started with great data ready to plug into an algorithm. In this course, Cleaning and Preparing Data in Microsoft Azure, you’ll learn foundational knowledge of the steps required to utilize data in a machine learning project. First, you’ll discover different types of data and languages. Next, you’ll learn about managing large data sets and handling bad data. Finally, you’ll explore how to utilize Azure Notebooks. When you’re finished with this course, you’ll have the skills and knowledge of preparing data needed for use in Microsoft Azure. Software required: Microsoft Azure.

Architecting Xamarin.Forms Applications for Code Reuse

Abstract

A well-architected application is flexible to changing business requirements. This course will teach you how to architect Xamarin.Forms applications in a way that promotes reusable patterns.

Description

As business requirements change, so do solution assumptions. In this course, Architecting Xamarin.Forms Applications for Code Reuse, you’ll learn different architectural patterns in Xamarin.Forms. First, you’ll explore project structure and organization. Next, you’ll discover patterns and standards to promote code sharing. Finally, you’ll learn how to utilize dependency injection in Xamarin.Forms. When you’re finished with this course, you’ll have the skills and knowledge of architecting Xamarin.Forms projects needed to optimally promote code reuse.

gRPC C++ and Self Signed Certificates

Playing around with gRPC and a C++ server caused an issue that took longer to solve than it should have. Once the linker and other issues were resolved, the following error appeared:

7562 ssl_transport_security.cc:1238] Handshake failed with fatal error SSL_ERROR_SSL: error:100000c0:SSL routines:OPENSSL_internal:PEER_DID_NOT_RETURN_A_CERTIFICATE.

After searching, it led me to this file, where the different enumeration values for the SSL handling can be set.


/** Server does not request client certificate. A client can present a self
signed or signed certificates if it wishes to do so and they would be
accepted. */
GRPC_SSL_DONT_REQUEST_CLIENT_CERTIFICATE,
/** Server requests client certificate but does not enforce that the client
presents a certificate.

If the client presents a certificate, the client authentication is left to
the application based on the metadata like certificate etc.

The key cert pair should still be valid for the SSL connection to be
established. */
GRPC_SSL_REQUEST_CLIENT_CERTIFICATE_BUT_DONT_VERIFY,
/** Server requests client certificate but does not enforce that the client
presents a certificate.

If the client presents a certificate, the client authentication is done by
grpc framework (The client needs to either present a signed cert or skip no
certificate for a successful connection).

The key cert pair should still be valid for the SSL connection to be
established. */
GRPC_SSL_REQUEST_CLIENT_CERTIFICATE_AND_VERIFY,
/** Server requests client certificate but enforces that the client presents a
certificate.

If the client presents a certificate, the client authentication is left to
the application based on the metadata like certificate etc.

The key cert pair should still be valid for the SSL connection to be
established. */
GRPC_SSL_REQUEST_AND_REQUIRE_CLIENT_CERTIFICATE_BUT_DONT_VERIFY,
/** Server requests client certificate but enforces that the client presents a
certificate.

The certificate presented by the client is verified by grpc framework (The
client needs to present signed certs for a successful connection).

The key cert pair should still be valid for the SSL connection to be
established. */
GRPC_SSL_REQUEST_AND_REQUIRE_CLIENT_CERTIFICATE_AND_VERIFY

That led me to find a more thorough breakdown of the use cases for each enumeration here.

  1. With GRPC_SSL_DONT_REQUEST_CLIENT_CERTIFICATE: Server does not request for a client certificate. So the client can choose to present a self-signed or a signed certificate or not present a certificate at all and all of these should be okay.
    With GRPC_SSL_REQUEST_CLIENT_CERTIFICATE_BUT_DONT_VERIFY: Server requests the client for a certificate but the signature enforcement is not done by grpc server framework but left to the app. The app can use metadata like the certificate hash to verify a certificate (essentially provides the server a
    way to verify self signed certificates, provided they have an out of band mechanism to register the certificate with the app)
  2. By “client authentication done by grpc framework”, I meant certificate signature verification is done using the ssl protocol itself by the grpc server framework (SSL_VERIFY_PEER option is being used in ssl options). The client has to provide a signed certificate which can be verified by the server (using the SSL roots file).
  3. “don’t request”/ “request”/ “require” / “verify”
    – Server has the option to either request or not-request for client cert.
    – Client can choose to either present a certificate or not.
    – Server can choose to verify the client certificate or not
    Each of these three options are independent of each other and contribute to multiple options presented.
    “require” for instance is the case server request for client cert, client has to present a certificate for the ssl handshake to continue but the server will not verify the client certificate using signature but can do so if needed based on certificate metadata.
    “verify” – SSL_VERIFY_PEER option is being used in ssl options and the client signature is verified/trusted by the server using the SSL roots file.
  4. All of the above pretty much expected that the private key and the public key files were all in okay and the only question was whether they were self signed or signed by a mutually trusted CA. If the public key and private keys don’t match up then the connection fails.
  5. It is a typo. It should have been “The client needs to either present a signed cert or not present a
    certificate at all for a successful connection”
  6. grpc_auth_context has various properties of the peer like GRPC_X509_CN_PROPERTY_NAME, GRPC_X509_PEM_CERT_PROPERTY_NAME, GRPC_X509_SAN_PROPERTY_NAME that can be used.

Finally, that led me to understand that for self-signed certificates in development, GRPC_SSL_REQUEST_CLIENT_CERTIFICATE_BUT_DONT_VERIFY was the right enumeration.
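
With that mode, the server requests a certificate but will accept a self-signed one, so for local development the client can present a throwaway certificate generated with openssl (file names and subject are placeholders):

# One-shot self-signed client key and certificate for local testing
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout client.key -out client.crt \
    -days 365 -subj "/CN=dev-client"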

Azure IoT Hub – OpenSSL – Generate proof of possession

The Azure IoT documentation has guides on setting up certificates for production use. That documentation showcases how to properly set up certificate authorities to generate proof of possession. For development purposes, you may want to use self-signed certificates.

  1. Assuming the original key and cert were created with the following commands (Azure IoT Hub reports the certificate as unverified when you upload it):
# Create root key
openssl genrsa -out iotHubRoot.key 2048

# Create root cert
openssl req -new -x509 -key iotHubRoot.key -out iotHubRoot.cer -days 500
  2. Then generate the verification cert (pay attention to fill in the common name with the verification code):
# Create verification key and csr
openssl genrsa -out verification.key 2048
openssl req -new -key verification.key -out verification.csr

#It will prompt for cert fields. 
#IMPORTANT: The Common Name needs to be your Verification Code (generate and copy that from portal)

# Create verification pem
openssl x509 -req -in verification.csr -CA iotHubRoot.cer -CAkey iotHubRoot.key -CAcreateserial -out verification.pem -days 500 -sha256
  3. Upload the pem file to the portal to verify the certificate.

New Pluralsight Course Released!

My new Pluralsight course Sourcing Data in Microsoft Azure was just released! Here is the synopsis:

Abstract

This course targets software developers looking to source data from inside and outside of the cloud. The content will also showcase methods and tools available using Microsoft Azure.

Description

The cloud has nearly infinite compute power for processing. In this course, Sourcing Data in Microsoft Azure, you’ll learn foundational knowledge of data types, data policy, and finding data. First, you’ll learn how to register data sources with Azure Data Catalog. Next, you’ll discover how to extract, transform, and load data with Azure Data Factory. Finally, you’ll explore how to set up data processing with Azure HD Insight. When you’re finished with this course, you’ll have the skills and knowledge of the tools and processes needed to source data in Microsoft Azure. Software required: Microsoft Azure portal.

New Pluralsight Course Released!

My new Pluralsight course Deploying and Managing Models in Microsoft Azure was just released! Here is the synopsis:

Abstract

In this course, you’ll learn about how data science practitioners can utilize tools for managing the models they create. You’ll also see those tools showcased in Microsoft Azure.

Description

One of the most overlooked processes in data science is managing the life cycle of models. In this course, Deploying and Managing Models in Microsoft Azure, you’ll gain foundational knowledge of Azure Machine Learning. First, you’ll discover how to create and utilize Azure Machine Learning. Next, you’ll find out how to integrate with Azure DevOps. Finally, you’ll explore how to utilize them together to automate the deployment and management of models. When you’re finished with this course, you’ll have the skills and knowledge of model life cycle management needed to manage a machine learning project. Software required: Microsoft Azure.

Authoring for Pluralsight – Azure Machine Learning

Off to start another set of courses for Pluralsight:

  • Sourcing Data in Microsoft Azure
  • Deploying and Managing Models in Microsoft Azure
  • Cleaning and Preparing Data in Microsoft Azure

If you would like to check out any of my other courses, visit my author’s profile.

Sourcing Data in Microsoft Azure

This course is for people looking to move into the data sciences. They can have an existing background in development or IT.

This course will show how to find data in Microsoft Azure, how to move and change that data, and finally how to build workflows around that data.

This course assumes the developer has an understanding of basic computer terminology and the Azure portal.

Deploying and Managing Models in Microsoft Azure

This course is for data science practitioners who need to learn more about how to utilize tools for managing the models they create.

The audience will be taken through automation and DevOps to learn more about how to manage their workflows. Everything from versioning, automated deployments, automated hyper-parameter tuning, and more will be discussed.

This course assumes the data scientist has an understanding of machine learning and common terminology and integration in machine learning projects. The course also assumes the data scientist has knowledge of Azure and the Azure portal.

Cleaning and Preparing Data in Microsoft Azure

This course is for people looking to move into the data sciences. They can have an existing background in development or IT.

This course introduces the audience to the different data preparation steps involved with data projects. This course will show how to clean, transform, and wrangle the data needed for a data project.

This course assumes the developer has an understanding of basic computer terminology and the Azure portal.