Using Cognitive Services: Custom Vision Service with Azure IoT Edge

This is a guide on how to use Cognitive Services: Custom Vision Service with Azure IoT Edge without having the Edge module host a web endpoint, instead using the built-in module-to-module communication. This post breaks the process into four major sections:

  • Creating the Custom Vision Model
  • Creating the Edge Module in Python
  • Adding the model and custom code for Custom Vision
  • Deploying the Module

Creating the Custom Vision Model

To use the Custom Vision Service for image classification, you must first build a classifier model. In this guide, you’ll learn how to build a classifier through the Custom Vision website.

Prerequisites

  • A valid Azure subscription. Create an account for free.
  • A set of images with which to train your classifier. See below for tips on choosing images.

Create Custom Vision resources in the Azure portal

To use the Custom Vision Service, you will need to create Custom Vision resources in the Azure portal. Doing so provisions both a Training and a Prediction resource.

Create a new project

In your web browser, navigate to the Custom Vision web page and select Sign in. Sign in with the same account you used to sign into the Azure portal.


  1. To create your first project, select New Project. The Create new project dialog box will appear, with fields for name, description, and domains.
  2. Enter a name and a description for the project. Then select a Resource Group. If your signed-in account is associated with an Azure account, the Resource Group dropdown will display all of your Azure Resource Groups that include a Custom Vision Service Resource.
  3. Select Classification under Project Types. Then, under Classification Types, choose either Multilabel or Multiclass, depending on your use case. Multilabel classification applies any number of your tags to an image (zero or more), while multiclass classification sorts images into single categories (every image you submit will be sorted into the most likely tag). You will be able to change the classification type later if you wish.
  4. Next, select one of the available domains. Each domain optimizes the classifier for specific types of images, as described in the following table. You will be able to change the domain later if you wish.
    Domain: Purpose
    Generic: Optimized for a broad range of image classification tasks. If none of the other domains are appropriate, or you are unsure which domain to choose, select the Generic domain.
    Food: Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the Food domain.
    Landmarks: Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph, even if it is slightly obstructed by people in front of it.
    Retail: Optimized for images found in a shopping catalog or on a shopping website. If you want high-precision classification between dresses, pants, and shirts, use this domain.
    Compact domains: Optimized for the constraints of real-time classification on mobile devices. The models generated by compact domains can be exported to run locally.
  5. Finally, select Create project.

Choose training images

At a minimum, we recommend you use at least 30 images per tag in the initial training set. You’ll also want to collect a few extra images to test your model once it is trained.

To train your model effectively, use images with visual variety. Select images that vary by:

  • camera angle
  • lighting
  • background
  • visual style
  • individual/grouped subject(s)
  • size
  • type

Additionally, make sure all of your training images meet the following criteria (a quick local check is sketched after the list):

  • .jpg, .png, or .bmp format
  • no greater than 6MB in size (4MB for prediction images)
  • no less than 256 pixels on the shortest edge; any images shorter than this will be automatically scaled up by the Custom Vision Service
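
If you want to screen a folder of images against these criteria before uploading, a minimal local check might look like the following sketch; the function name and the use of Pillow here are my own, not part of the Custom Vision tooling:

import os
from PIL import Image

MAX_TRAINING_BYTES = 6 * 1024 * 1024  # 6MB limit for training images

def is_valid_training_image(path):
    # .jpg, .png, or .bmp format
    if not path.lower().endswith(('.jpg', '.png', '.bmp')):
        return False
    # no greater than 6MB in size
    if os.path.getsize(path) > MAX_TRAINING_BYTES:
        return False
    # no less than 256 pixels on the shortest edge
    with Image.open(path) as img:
        return min(img.size) >= 256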

Upload and tag images

In this section you will upload and manually tag images to help train the classifier.

  1. To add images, click the Add images button and then select Browse local files. Select Open to move to tagging. Your tag selection will be applied to the entire group of images you’ve selected to upload, so it is easier to upload images in separate groups according to their desired tags. You can also change the tags for individual images after they have been uploaded.
  2. To create a tag, enter text in the My Tags field and press Enter. If the tag already exists, it will appear in a dropdown menu. In a multilabel project, you can add more than one tag to your images, but in a multiclass project you can add only one. To finish uploading the images, use the Upload [number] files button.
  3. Select Done once the images have been uploaded.

To upload another set of images, return to the top of this section and repeat the steps.

Train the classifier

To train the classifier, select the Train button. The classifier uses all of the current images to create a model that identifies the visual qualities of each tag.


The training process should only take a few minutes. During this time, information about the training process is displayed in the Performance tab.


To run the model on an IoT Edge device, it must be exported. Custom Vision Service supports the following exports:

  • TensorFlow for Android.
  • CoreML for iOS 11.
  • ONNX for Windows ML.
  • A Windows or Linux container. The container includes a TensorFlow model and service code to use the Custom Vision Service API.

Convert to a compact domain

To convert the domain of an existing classifier, use the following steps:

  1. From the Custom Vision page, select the Home icon to view a list of your projects. You can also go to https://customvision.ai/projects to see your projects.
  2. Select a project, and then select the Gear icon in the upper right of the page.
  3. In the Domains section, select a compact domain, and then select Save Changes.
  4. From the top of the page, select Train to retrain using the new domain.

Export your model

To export the model after retraining, use the following steps:

  1. Go to the Performance tab and select Export.

    Tip

    If the Export entry is not available, then the selected iteration does not use a compact domain. Use the Iterations section of this page to select an iteration that uses a compact domain, and then select Export.

  2. Select the export format, and then select Export to download the model.
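
The TensorFlow export downloads as a zip archive containing a frozen graph (model.pb) and the tag names (labels.txt); those two files are what the custom code later in this post loads.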

Creating the Edge Module in Python

You can use Azure IoT Edge modules to deploy code that implements your business logic directly on your IoT Edge devices. This tutorial walks you through creating an IoT Edge module that will be edited to use the exported Custom Vision model. In this tutorial, you learn how to:

  • Use Visual Studio Code to create an IoT Edge Python module.
  • Use Visual Studio Code and Docker to create a Docker image and publish it to your registry.

If you don’t have an Azure subscription, create a free account before you begin.

Prerequisites

Before beginning this tutorial, you should have completed the previous tutorial to set up your development environment for Linux container development: Develop IoT Edge modules for Linux devices. By completing that tutorial, you should have the following prerequisites in place:

To develop an IoT Edge module in Python, install the following additional prerequisites on your development machine:

  • Python extension for Visual Studio Code.
  • Python.
  • Pip for installing Python packages (typically included with your Python installation).

Create a module project

The following steps create an IoT Edge Python module by using Visual Studio Code and the Azure IoT Tools.

Create a new project

Use the Python package cookiecutter to create a Python solution template that you can build on top of.

  1. In Visual Studio Code, select View > Terminal to open the VS Code integrated terminal.
  2. In the terminal, enter the following command to install (or update) cookiecutter, which you use to create the IoT Edge solution template:
    pip install --upgrade --user cookiecutter
  3. Select View > Command Palette to open the VS Code command palette.
  4. In the command palette, enter and run the command Azure: Sign in and follow the instructions to sign in to your Azure account. If you’re already signed in, you can skip this step.

In the command palette, enter and run the command Azure IoT Edge: New IoT Edge solution. Follow the prompts and provide the following information to create your solution:

Field: Value
Select folder: Choose the location on your development machine for VS Code to create the solution files.
Provide a solution name: Enter a descriptive name for your solution or accept the default EdgeSolution.
Select module template: Choose Python Module.
Provide a module name: Name your module PythonModule.
Provide Docker image repository for the module: An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace localhost:5000 with the login server value from your Azure container registry. You can retrieve the login server from the Overview page of your container registry in the Azure portal.

The final image repository looks like <registry name>.azurecr.io/pythonmodule.


Add your registry credentials

The environment file stores the credentials for your container repository and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your private images onto the IoT Edge device.

  1. In the VS Code explorer, open the .env file.
  2. Update the fields with the username and password values that you copied from your Azure container registry.
  3. Save the .env file.
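
For reference, the generated .env typically contains a username/password pair along these lines; the exact variable names come from the solution template, so treat this as a sketch rather than canonical contents:

CONTAINER_REGISTRY_USERNAME_<registry name>=<username from the Access keys page>
CONTAINER_REGISTRY_PASSWORD_<registry name>=<password from the Access keys page>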

Select your target architecture

Currently, Visual Studio Code can develop Python modules for Linux AMD64 and Linux ARM32v7 devices. You need to select which architecture you’re targeting with each solution, because the container is built and run differently for each architecture type. The default is Linux AMD64.

  1. Open the command palette and search for Azure IoT Edge: Set Default Target Platform for Edge Solution, or select the shortcut icon in the side bar at the bottom of the window.
  2. In the command palette, select the target architecture from the list of options. For this tutorial, we’re using an Ubuntu virtual machine as the IoT Edge device, so we’ll keep the default amd64.

Adding the model and custom code for Custom Vision

 
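The original post leaves this section blank, but the pieces are all in this article. A minimal sketch: copy the exported model.pb and labels.txt into the module folder next to main.py, add tensorflow, pillow, numpy, and opencv-python to requirements.txt, and edit main.py so the input callback classifies the incoming image instead of forwarding it. The snippet below assumes the prepare_image function and Categorizer class shown later in this post, the newer azure-iot-device SDK (the template of this era generated iothub_client code instead), and made-up output names, so adjust it to match your routes.

# main.py (sketch): classify images arriving over module-to-module communication.
import json
from io import BytesIO
from PIL import Image
from azure.iot.device import IoTHubModuleClient, Message

# prepare_image and Categorizer are defined later in this post.
labels = [line.strip() for line in open('labels.txt')]
categorizer = Categorizer('model.pb', labels)
client = IoTHubModuleClient.create_from_edge_environment()

def message_handler(message):
    # Messages routed to this module are assumed to carry raw image bytes.
    image = Image.open(BytesIO(message.data))
    scores = categorizer.score(prepare_image(image))
    result = [{'tag': s.category, 'probability': s.probability} for s in scores]
    # Publish the classification on an output so an edgeHub route can pick it up.
    client.send_message_to_output(Message(json.dumps(result)), 'classificationOutput')

client.on_message_received = message_handler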

Deploying the Module

Build and push your module

In the previous sections, you created an IoT Edge solution and added the Custom Vision model and scoring code to the PythonModule. Now you need to build the solution as a container image and push it to your container registry.

  1. Open the VS Code integrated terminal by selecting View > Terminal.
  2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the Access keys section of your registry in the Azure portal.
    docker login -u <ACR username> -p <ACR password> <ACR login server>
    
    You may receive a security warning recommending the use of --password-stdin. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the docker login reference.

  3. In the VS Code explorer, right-click the deployment.template.json file and select Build and Push IoT Edge solution.

    The build and push command starts three operations. First, it creates a new folder in the solution called config that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs docker build to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs docker push to push the container image to your container registry.

Deploy modules to device

Use the Visual Studio Code explorer and the Azure IoT Tools extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the deployment.json file in the config folder. All you need to do now is select a device to receive the deployment.

Make sure that your IoT Edge device is up and running.

  1. In the Visual Studio Code explorer, expand the Azure IoT Hub Devices section to see your list of IoT devices.
  2. Right-click the name of your IoT Edge device, then select Create Deployment for Single Device.
  3. Select the deployment.json file in the config folder and then click Select Edge Deployment Manifest. Do not use the deployment.template.json file.
  4. Click the refresh button. You should see the new PythonModule running along with the TempSensor module, $edgeAgent, and $edgeHub.
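
If you have shell access to the IoT Edge device, the runtime's CLI offers the same check; this assumes the iotedge tool installed with the IoT Edge runtime:

iotedge list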


New Pluralsight Courses

I’ve been busy and have not had a chance to post that I have new courses available on Pluralsight:

Developing Microsoft Azure Intelligent Edge Solutions

This course targets software developers who are looking to build edge solutions that can process data and make intelligent decisions. This course will showcase how to develop those solutions using Microsoft Azure.

Over time, what was once simply Internet of Things solutions has evolved into Edge solutions. In this course, Developing an Intelligent Edge in Microsoft Azure, you will learn foundational knowledge of edge computing, how it interacts with data and messaging systems, and how to utilize both with Microsoft Azure. First, you will learn the concepts of edge and internet of things computing. Next, you will discover how to process streaming data on hot, warm, and cold paths. Finally, you will explore how real-time and batch processing can be utilized in an edge solution. When you are finished with this course, you will have the skills and knowledge of edge and internet of things in Azure needed to architect your next edge solution. Software required: Microsoft Azure, .NET.

Building Your First Data Science Project in Microsoft Azure

This course targets software developers looking to build data science solutions that can utilize the power of the cloud. The content will also showcase how to create those solutions using Microsoft Azure.

The past five years have shown a boom in the data science field with advancements in hardware and cloud computing. In this course, Building Your First Data Science Project in Microsoft Azure, you will learn about data science and how to get started utilizing it in Microsoft Azure. First, you will learn about data science and the tools surrounding it. Next, you will discover how to create a development environment in Microsoft Azure. Finally, you will explore how to maintain and utilize that development environment. When you are finished with this course, you will have the skills and knowledge of data science needed to build your first data science project in Microsoft Azure. Software required: Microsoft Azure.

 

numpy/core/_multiarray_umath.cpython-35m-arm-linux-gnueabihf.so: undefined symbol: cblas_sgemm – Raspberry Pi

While working on a Raspberry Pi image that had previously been used by an electrical engineer to set up all of the hardware dependencies, an error occurred when trying to upgrade TensorFlow. TensorFlow was needed to run a model trained with Cognitive Services: Custom Vision Service. Importing NumPy then produced the following error:

numpy/core/_multiarray_umath.cpython-35m-arm-linux-gnueabihf.so: undefined symbol: cblas_sgemm

To remedy this, all of the installations of NumPy had to be uninstalled. The following commands were run:

  • apt-get remove python-numpy
  • apt-get remove python3-numpy
  • pip3 uninstall numpy

After all three of those commands completed, NumPy was reinstalled using the package provided for Raspbian:

apt-get install python3-numpy
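
To verify the reinstall, importing NumPy directly is a quick check; if the undefined-symbol error is gone, the import will simply print a version number:

python3 -c "import numpy; print(numpy.__version__)"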

Pluralsight Course Published – Designing an Intelligent Edge in Microsoft Azure

Designing an Intelligent Edge in Microsoft Azure was just published on Pluralsight! Check it out. Here is a synopsis of what’s in it:

This course targets software developers who are looking to integrate AI solutions in edge scenarios ranging from an edge data center down to secure microcontrollers. This course will showcase how to design solutions using Microsoft Azure.

Cloud computing has moved more and more out of the cloud and onto the edge. In this course, Designing an Intelligent Edge in Microsoft Azure, you will learn foundational knowledge of edge computing, its intersection with AI, and how to utilize both with Microsoft Azure. First, you will learn the concepts of edge computing. Next, you will discover how to create an edge solution utilizing Azure Stack, Azure Data Box Edge, and Azure IoT Edge. Finally, you will explore how to utilize off-the-shelf AI and build your own for Azure IoT Edge. When you are finished with this course, you will have the skills and knowledge of AI on the edge needed to architect your next edge solution. Software required: Microsoft Azure, .NET.

 

Multiple TensorFlow Graphs from Cognitive Services – Custom Vision Service

For one project, there was a need for multiple models within the same Python application. These models were trained using the Cognitive Services: Custom Vision Service. There are two steps to using an exported model:

  1. Prepare the image
  2. Classify the image

Prepare an image for prediction


from PIL import Image
import numpy as np
import cv2

# The input size expected by the exported model (224 for these Custom Vision exports)
network_input_size = 224

def convert_to_opencv(image):
    # RGB -> BGR conversion is performed as well.
    image = image.convert('RGB')
    r, g, b = np.array(image).T
    opencv_image = np.array([b, g, r]).transpose()
    return opencv_image

def crop_center(img, cropx, cropy):
    h, w = img.shape[:2]
    startx = w//2 - (cropx//2)
    starty = h//2 - (cropy//2)
    return img[starty:starty+cropy, startx:startx+cropx]

def resize_down_to_1600_max_dim(image):
    h, w = image.shape[:2]
    if h < 1600 and w < 1600:
        return image
    new_size = (1600 * w // h, 1600) if (h > w) else (1600, 1600 * h // w)
    return cv2.resize(image, new_size, interpolation=cv2.INTER_LINEAR)

def resize_to_256_square(image):
    return cv2.resize(image, (256, 256), interpolation=cv2.INTER_LINEAR)

def update_orientation(image):
    # Rotate/flip the image based on its EXIF orientation tag, if present.
    exif_orientation_tag = 0x0112
    if hasattr(image, '_getexif'):
        exif = image._getexif()
        if exif is not None and exif_orientation_tag in exif:
            orientation = exif.get(exif_orientation_tag, 1)
            # orientation is 1 based, shift to zero based and flip/transpose based on 0-based values
            orientation -= 1
            if orientation >= 4:
                image = image.transpose(Image.TRANSPOSE)
            if orientation in (2, 3, 6, 7):
                image = image.transpose(Image.FLIP_TOP_BOTTOM)
            if orientation in (1, 2, 5, 6):
                image = image.transpose(Image.FLIP_LEFT_RIGHT)
    return image

def prepare_image(image):
    # Update orientation based on EXIF tags, if the file has orientation info.
    image = update_orientation(image)
    # Convert to OpenCV format
    image = convert_to_opencv(image)
    # If the image has either w or h greater than 1600 we resize it down respecting
    # aspect ratio such that the largest dimension is 1600
    image = resize_down_to_1600_max_dim(image)
    # We next get the largest center square
    h, w = image.shape[:2]
    min_dim = min(w, h)
    max_square_image = crop_center(image, min_dim, min_dim)
    # Resize that square down to 256x256
    augmented_image = resize_to_256_square(max_square_image)
    # Crop the center down to the network's input size
    augmented_image = crop_center(augmented_image, network_input_size, network_input_size)
    return augmented_image

Classify the image

Running multiple models in Python is fairly simple: call tf.reset_default_graph() after saving the loaded session into memory, as the Categorizer below does.


import tensorflow as tf
import numpy as np

# The category name and probability percentage
class CategoryScore:
    def __init__(self, category, probability: float):
        self.category = category
        self.probability = probability

# The categorizer handles running TensorFlow models
class Categorizer:
    def __init__(self, model_file_path: str, label_map: list):
        self.label_map = label_map
        # Load the exported frozen graph into the default graph
        self.graph_def = tf.GraphDef()
        with tf.gfile.GFile(model_file_path, 'rb') as f:
            self.graph_def.ParseFromString(f.read())
        tf.import_graph_def(self.graph_def, name='')
        output_layer = 'loss:0'
        self.input_node = 'Placeholder:0'
        # The session keeps a reference to the graph it was created against
        self.sess = tf.Session()
        self.prob_tensor = self.sess.graph.get_tensor_by_name(output_layer)
        # Reset the default graph so the next Categorizer loads into a clean graph
        tf.reset_default_graph()

    def score(self, image):
        predictions, = self.sess.run(self.prob_tensor, {self.input_node: [image]})
        scores = []
        for label_index, p in enumerate(predictions):
            scores.append(CategoryScore(self.label_map[label_index], np.float64(np.round(p, 8))))
        return scores

After a Categorizer is created, just call score and it will return a CategoryScore for each label in the map.
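
As a quick illustration of the multiple-graphs point, two exports can be loaded side by side; the model file names and label lists here are hypothetical:

from PIL import Image

# Each Categorizer owns its own graph and session, so the two models coexist.
dog_model = Categorizer('dog_model.pb', ['beagle', 'corgi'])
cat_model = Categorizer('cat_model.pb', ['tabby', 'siamese'])

image = prepare_image(Image.open('test.jpg'))
for score in dog_model.score(image) + cat_model.score(image):
    print(score.category, score.probability)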

Upcoming Pluralsight Course – Designing an Intelligent Edge in Microsoft Azure

Off to start another course for Pluralsight. This time it’s Designing an Intelligent Edge in Microsoft Azure. If you would like to check out any of my other courses, visit my author’s profile. The new course will cover the following topics:

  • Edge –
    • Scenarios
    • Concerns
    • Architecture
  • Azure AI Pipelines – Overview with edge
  • Edge Pipelines –
    • Azure Stack
    • Azure Data Box Edge
    • Azure IoT Edge
  • Cognitive Services – Overview with Edge
  • Azure Databricks – Overview
  • Azure Machine Learning VMs
  • Project Brainwave

New Pluralsight Course Released!

My new Pluralsight course Microsoft Azure Cognitive Services: Speech to Text SDK was just released! Here is the synopsis:

Abstract

This course will teach you how to create applications using Cognitive Services: Speech to Text. With it, your applications are more accessible and easier to use with a natural user interface.

Description

Creating and integrating advanced artificial intelligence into any application is a monumental task for most developers. In this course, Microsoft Azure Cognitive Services: Speech to Text SDK, you will gain the ability to create applications with Cognitive Services: Speech to Text. First, you will learn how to use the C# SDK. Next, you will discover the extensibility and customization options. Finally, you will explore how to integrate with Azure Functions and batch processing. When you are finished with this course, you will have the skills and knowledge of Cognitive Services: Speech to Text needed to integrate advanced artificial intelligence into any application.

Speaking at DotNetSouth.Tech

I look forward to speaking on AI on the Edge at DotNetSouth.Tech. This year is the conference’s first, so check it out.

AI on the Edge

The next evolution in cloud computing is a smarter application not in the cloud. As the cloud has continued to evolve, the applications that utilize it have gained more and more of its capabilities. This presentation will show how to push logic and machine learning from the cloud to an edge application. Afterward, creating edge applications that utilize the intelligence of the cloud should become effortless.

 

Authoring for Pluralsight – Microsoft Azure Cognitive Services: Speech to Text SDK

I am creating a new course for Pluralsight titled Microsoft Azure Cognitive Services: Speech to Text SDK. If you would like to check out my other courses, they can be found in my author’s profile. Here is the breakdown for the course:

Audience Profile

This course targets software developers who are looking to get started with the Microsoft Azure Cognitive Services: Speech to Text API to build modern AI solutions using its simple REST interface and robust set of device SDKs.

Abstract

With AI becoming more and more ubiquitous, it is important to quickly and easily integrate with AI services. This course will show how to create modern applications using Microsoft Azure Cognitive Services: Speech to Text API and SDKs.

Prerequisites

This course assumes viewers are familiar with C# and understand REST APIs and JSON.