iotedge: error while loading shared libraries: libssl.so.1.0.2: cannot open shared object file: No such file or directory – Raspberry Pi

After installing Azure IoT Edge using the guide for Linux ARM32, the following error was presented: “iotedge: error while loading shared libraries: libssl.so.1.0.2: cannot open shared object file: No such file or directory”.

The fix was simple enough: just install libssl1.0.2 using the following command:

sudo apt-get install libssl1.0.2

Test by running the iotedge command:

iotedge


If that works successfully, restart the iotedge service:

sudo service iotedge restart

Verify that it is running by checking the service status:

sudo service iotedge status


Multiple TensorFlow Graphs from Cognitive Services – Custom Vision Service

For one project, there was a need for multiple models within the same Python application. These models were trained using the Cognitive Services: Custom Vision Service. There are two steps to using an exported model:

  1. Prepare the image
  2. Classify the image

Prepare an image for prediction


from PIL import Image
import numpy as np
import cv2

# Input size expected by the exported model (224 for current Custom Vision exports;
# adjust to match your model).
network_input_size = 224

def convert_to_opencv(image):
    # RGB -> BGR conversion is performed as well.
    image = image.convert('RGB')
    r, g, b = np.array(image).T
    opencv_image = np.array([b, g, r]).transpose()
    return opencv_image

def crop_center(img, cropx, cropy):
    h, w = img.shape[:2]
    startx = w//2 - (cropx//2)
    starty = h//2 - (cropy//2)
    return img[starty:starty+cropy, startx:startx+cropx]

def resize_down_to_1600_max_dim(image):
    h, w = image.shape[:2]
    if h < 1600 and w < 1600:
        return image
    new_size = (1600 * w // h, 1600) if (h > w) else (1600, 1600 * h // w)
    return cv2.resize(image, new_size, interpolation=cv2.INTER_LINEAR)

def resize_to_256_square(image):
    return cv2.resize(image, (256, 256), interpolation=cv2.INTER_LINEAR)

def update_orientation(image):
    exif_orientation_tag = 0x0112
    if hasattr(image, '_getexif'):
        exif = image._getexif()
        if exif is not None and exif_orientation_tag in exif:
            orientation = exif.get(exif_orientation_tag, 1)
            # Orientation is 1-based; shift to zero-based and flip/transpose based on the 0-based value.
            orientation -= 1
            if orientation >= 4:
                image = image.transpose(Image.TRANSPOSE)
            if orientation in (2, 3, 6, 7):
                image = image.transpose(Image.FLIP_TOP_BOTTOM)
            if orientation in (1, 2, 5, 6):
                image = image.transpose(Image.FLIP_LEFT_RIGHT)
    return image

def prepare_image(image):
    # Update orientation based on EXIF tags, if the file has orientation info.
    image = update_orientation(image)
    # Convert to OpenCV format.
    image = convert_to_opencv(image)
    # If the image has either w or h greater than 1600, resize it down respecting the
    # aspect ratio such that the largest dimension is 1600.
    image = resize_down_to_1600_max_dim(image)
    # Take the largest center square.
    h, w = image.shape[:2]
    min_dim = min(w, h)
    max_square_image = crop_center(image, min_dim, min_dim)
    # Resize that square down to 256x256, then crop to the network input size.
    augmented_image = resize_to_256_square(max_square_image)
    augmented_image = crop_center(augmented_image, network_input_size, network_input_size)
    return augmented_image
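A quick way to exercise prepare_image (the file name here is just a placeholder):

# Example usage with a placeholder file path.
image = Image.open('test_image.jpg')
prepared_image = prepare_image(image)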

Classify the image

Running multiple models in Python was fairly simple: call tf.reset_default_graph() after saving the loaded session into memory.


import tensorflow as tf
import numpy as np

# The category name and probability percentage
class CategoryScore:
    def __init__(self, category, probability: float):
        self.category = category
        self.probability = probability

# The categorizer handles running TensorFlow models
class Categorizer:
    def __init__(self, model_file_path: str, map: list):
        self.map = map
        # Load the exported graph definition and import it into the current default graph.
        self.graph_def = tf.GraphDef()
        with tf.gfile.GFile(model_file_path, 'rb') as f:
            self.graph_def.ParseFromString(f.read())
        tf.import_graph_def(self.graph_def, name='')
        output_layer = 'loss:0'
        self.input_node = 'Placeholder:0'
        # The session keeps a reference to the graph it was created against.
        self.sess = tf.Session()
        self.prob_tensor = self.sess.graph.get_tensor_by_name(output_layer)
        # Reset the default graph so the next Categorizer starts from a clean graph.
        tf.reset_default_graph()

    def score(self, image):
        predictions, = self.sess.run(self.prob_tensor, {self.input_node: [image]})
        scores = []
        for label_index, p in enumerate(predictions):
            category_score = CategoryScore(self.map[label_index], np.float64(np.round(p, 8)))
            scores.append(category_score)
        return scores

After the Categorizer is created, just call score and it will return scores paired with the labels in the map.
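For example, to run two exported models side by side against the same prepared image (the model paths and label lists below are placeholders), a minimal sketch looks like this:

# Hypothetical model files and label maps exported from Custom Vision.
dog_categorizer = Categorizer('dog_model/model.pb', ['not-in-room', 'in-room'])
door_categorizer = Categorizer('door_model/model.pb', ['door-closed', 'door-open'])

# prepared_image comes from the prepare_image example in the previous section.
# Each Categorizer holds its own session and graph, so both can score the same image.
for category_score in dog_categorizer.score(prepared_image):
    print(category_score.category, category_score.probability)
for category_score in door_categorizer.score(prepared_image):
    print(category_score.category, category_score.probability)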

Azure IoT Edge – YOLO, Stream Analytics Service, and Blob Storage

As a continuation of the Izon camera hack //TODO: link to previous article, I wanted to detect if my dog was using the doggy door in the main room. The approach was going to be simple at first: detect whether the dog is in the room or not. When that state changes, upload 10 seconds' worth of images to Azure to see if the dog used the door.

Image Classification

To detect the dog, the first and largest challenge in these types of tasks is getting enough images to train the model. For me, this meant saving images of the dog in a pre-aligned shot. This is easy enough to accomplish, since the room the images will be captured in should only have the dog moving in it. Because he is the only moving object, YOLO can be used to detect the positions of objects in the room, and those positions can then be checked for movement. If there is any movement, the images can be saved for later categorization. To accomplish this, there will be four modules:

  • Camera Module – Accesses the camera feeds to save the images
  • Object Detection Module – Uses YOLO to detect objects and their positions
  • Motion Detection Module – Uses Stream Analytics Service to detect whether object positions are changing
  • Image Storage Module – Uses Blob Storage to save and delete the images

(Module architecture diagram)

The Camera Module will send timestamped images to the Object Detection Module and the Image Storage Module. The Object Detection Module will then use YOLO to detect the objects and their positions in the image. Those detection results will be sent to the Motion Detection Module, which will use Stream Analytics Service to see whether motion was detected over the last ten seconds. If no motion was detected over the last ten seconds, the Motion Detection Module will send a delete command to the Image Storage Module to remove the images without motion from the store. The routing table will look like this:


"routes": {
"cameraToObjectDetection": "FROM /messages/modules/camera/outputs/imageOutput INTO BrokeredEndpoint(\"/modules/objectDetection/inputs/incomingImages\")",
"cameraToImageStorage": "FROM /messages/modules/camera/outputs/imageOutput INTO BrokeredEndpoint(\"/modules/imageStorage/inputs/incomingImages\")",
"objectDetectionToMotionDetection": "FROM /messages/modules/objectDetection/outputs/objectDetectionOutput INTO BrokeredEndpoint(\"/modules/motionDetection/inputs/incomingObjectDetection\")",
"motionDetectionToDeleteImage": "FROM /messages/modules/motionDetection/outputs/motionDetectionOutput INTO BrokeredEndpoint(\"/modules/imageStorage/inputs/deleteImages\")"
}


These modules will be broken up into their own articles for readability and searchability. If there is no link to a module article, it is because that article has not been completed or published yet.

Securing SSH in Azure

On a recent project I inherited an Azure IaaS setup that managed Linux VMs by connecting via SSH from public IPs. I figured while we did a vNet migration we might as well secure the SSH pipeline.

Disable SSH Arcfour and CBC Ciphers

Arcfour is compatible with the RC4 cipher, which has known problems with weak keys, so it should be avoided. See RFC 4253 for more information.

The SSH server located on the remote host also allows cipher block chaining (CBC) ciphers to be used to establish a Secure Shell (SSH) connection, leaving encrypted content vulnerable to a plaintext recovery attack. SSH is a cryptographic network protocol that allows encrypted connections between machines to be established. These connections can be used for remote login by an end user, or to encrypt network services. SSH leverages various encryption algorithms to make these connections, including ciphers that employ cipher block chaining.

The plaintext recovery attack can return up to 32 bits of plaintext with a probability of 2^-18, or 14 bits of plaintext with a probability of 2^-14. This exposure is caused by the way CBC ciphers verify the message authentication code (MAC) for a block. Each block’s MAC is created from a combination of an unencrypted sequence number and an encrypted section containing the packet length, padding length, payload, and padding. Because the length of the message is encrypted, the receiver of the packet needs to decrypt the first block of the message in order to obtain the length of the message and know how much data to read. As the location of the message length is static among all messages, the first four bytes will always be decrypted by a recipient. An attacker can take advantage of this by submitting an encrypted block, one byte at a time, directly to a waiting recipient. The recipient will automatically decrypt the first four bytes received, as the length is required to process the message’s MAC. Attacker-controlled bytes can then be submitted until a MAC error is encountered, which closes the connection. Note that because this attack causes the SSH connection to be closed, iterative attacks of this nature are difficult to carry out against a target system.

Establishing an SSH connection using CBC mode ciphers can result in the exposure of plaintext messages, which are derived from an encrypted SSH connection. Depending on the data being transmitted, an attacker may be able to recover session identifiers, passwords, and any other data passed between the client and server.

Disable Arcfour ciphers in the SSH configuration; these ciphers are now disabled by default in some OpenSSH installations. All CBC mode ciphers should also be disabled on the target SSH server. In place of CBC, SSH connections should be created using ciphers that use CTR (Counter) mode or GCM (Galois/Counter Mode), which are resistant to the plaintext recovery attack.

Disable SSH Weak MAC Algorithms

The SSH server is configured to allow cipher suites that include weak message authentication code (“MAC”) algorithms. Examples of weak MAC algorithms include MD5 and other known-weak hashes, and/or the use of 96-bit or shorter keys. The SSH protocol uses a MAC to ensure message integrity by hashing the encrypted message, and then sending both the message and the output of the MAC hash function to the recipient. The recipient then generates their hash of the message and related content and compares it to the received hash value. If the values match, there is a reasonable guarantee that the message is received “as is” and has not been tampered with in transit.

If the SSH server is configured to accept weak or otherwise vulnerable MAC algorithms, an attacker may be able to crack them in a reasonable timeframe. This has two potential effects:

  • The attacker may figure out the shared secret between the client and the server thereby allowing them to read sensitive data being exchanged. 
  • The attacker may be able to tamper with the data in-transit by injecting their own packets or modifying existing packet data sent within the SSH stream.

Disable all 96-bit HMAC algorithms, MD5-based HMAC algorithms, and all CBC mode ciphers configured for SSH on the server. The sshd_config file should only contain the following options as far as supported MAC algorithms are concerned:

  • hmac-sha2-512
  • hmac-sha2-512-etm@openssh.com
  • hmac-sha2-256
  • hmac-sha2-256-etm@openssh.com
  • hmac-ripemd160-etm@openssh.com
  • umac-128-etm@openssh.com
  • hmac-ripemd160
  • umac-128@openssh.com

In addition, all CBC mode ciphers should be replaced with their CTR mode counterparts.
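As a sketch, the relevant directives in /etc/ssh/sshd_config would look something like the following (the exact algorithm lists should be checked against what your OpenSSH version supports):

# Example sshd_config excerpt - verify against your OpenSSH version before applying
Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128-etm@openssh.com,umac-128@openssh.com

After updating the file, restart the SSH service (for example, sudo service ssh restart) so the new algorithm lists take effect.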

Testing

To test, run the following command:

nmap -sS -sV -p 22 --script ssh2-enum-algos [TARGET IP]
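In the script output, verify that no arcfour or CBC entries remain under encryption_algorithms and that no MD5-based or 96-bit entries remain under mac_algorithms.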

Upcoming Pluralsight Course – Designing an Intelligent Edge in Microsoft Azure

Off to start another course for Pluralsight. This time it’s Designing an Intelligent Edge in Microsoft Azure. If you would like to check out any of my other courses, visit my author’s profile. The new course will cover the following topics:

  • Edge –
    • Scenarios
    • Concerns
    • Architecture
  • Azure AI Pipelines – Overview with edge
  • Edge Pipelines –
    • Azure Stack
    • Azure Databox Edge
    • Azure IoT Edge
  • Cognitive Services – Overview with Edge
  • Azure Databricks – Overview
  • Azure Machine Learning VMs
  • Project Brainwave

Multi-Region Point-to-Site in Microsoft Azure (Windows Fix)

In a previous post, I showcased how to Create a Single Gateway, Multi-Region, VPN Architecture in Microsoft Azure. If testing with Windows didn’t work, it may be because Windows needs its route tables updated to know how to tunnel past the gateway into the different regions. Mac and Linux can use IKEv2 without adding additional routes.

A. Windows, by default, chooses IKEv2, so we need to add a route to the spoke VNet’s address space.


Suppose the spoke VNet address space is 10.2.0.0 255.255.0.0 and the client VPN interface IP is 172.16.100.130.

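On the Windows client, the route can be added from an elevated command prompt. A minimal example using the hypothetical addresses above:

route add 10.2.0.0 mask 255.255.0.0 172.16.100.130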

B. We also need to either use CMAK (Connection Manager Administration Kit) or manually create an SSTP VPN profile to Azure on the Windows client.

New Pluralsight Course Released!

My new Pluralsight course Microsoft Azure Cognitive Services: Speech to Text SDK was just released! Here is the synopsis:

Abstract

This course will teach you how to create applications using Cognitive Services: Speech to Text. With it, your applications are more accessible and easier to use with a natural user interface.

Description

Creating and integrating advanced artificial intelligence into any application is a monumental task for most developers. In this course, Microsoft Azure Cognitive Services: Speech to Text SDK, you will gain the ability to create applications with Cognitive Services: Speech to Text. First, you will learn how to use the C# SDK. Next, you will discover the extensibility and customization options. Finally, you will explore how to integrate with Azure Functions and batch processing. When you are finished with this course, you will have the skills and knowledge of Cognitive Services: Speech to Text needed to integrate advanced artificial intelligence into any application.

Creating a Single Gateway, Multi-Region, VPN Architecture in Microsoft Azure

The goal of this post is to showcase how to create a gateway for a multi-region VPN architecture in Microsoft Azure. We can start from a very basic use case with three regions:

  • One containing the VPN gateway all clients will connect through
  • Two other regions containing resources connected to the vNet gateway

There are two terms that will be used throughout this post:

  •  Hub – this refers to the central VPN Gateway that all other VPN Gateways will connect to.
  •  Spoke – this refers to an individual VPN Gateway that connects to the Hub

Planning

Since there will be a vNet for each region peered with the hub, address spacing should be taken into consideration before creating each Virtual Network in a region. From previous experience, it is best practice to structure addresses as follows:

Address – {shared}.{region_specific}.{subnet}.{instance}

  •  Shared – A common root address was picked for the first octet. This is the best place to avoid conflicts with networks outside of Azure that will connect to the Hub.
  •  Region Specific – Each region would get its own address for the second octet
  •  Subnet – Each subnet in the region would get an address for the third octet
  •  Instance – Finally each assigned IP address would fill the fourth octet
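As a hypothetical example of this scheme, with 10 as the shared first octet:

Hub region:     10.0.0.0/16
Spoke region 1: 10.1.0.0/16
Spoke region 2: 10.2.0.0/16
Subnet:         10.1.1.0/24   (spoke region 1, subnet 1)
Instance:       10.1.1.4      (spoke region 1, subnet 1, host 4)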

This does not account for third party integration and Site-to-Site integrations. Those require future planning and, as always in business, there is no way to properly plan for every variation.

Create the vNets

Once the planning phase is complete, we will create three Virtual Networks in three separate regions. Which Virtual Network is the Hub and which are the Spokes does not matter yet.

  1. Sign in to the Azure portal and select Create a resource. The New page opens.
  2. In the Search the marketplace field, enter virtual network and select Virtual network from the returned list. The Virtual network page opens.


  3. From the Select a deployment model list near the bottom of the page, select Resource Manager, and then select Create. The Create virtual network page opens.


  4. On the Create virtual network page, configure the VNet settings. When you fill in the fields, the red exclamation mark becomes a green check mark when the characters you enter in the field are validated. Some values are autofilled, which you can replace with your own values:
    • Name: Enter the name for your virtual network.
    • Address space: Enter the address space. If you have multiple address spaces to add, enter your first address space here. You can add additional address spaces later, after you create the VNet.
    • Subscription: Verify that the subscription listed is the correct one. You can change subscriptions by using the drop-down.
    • Resource group: Select an existing resource group, or create a new one by entering a name for your new resource group. If you’re creating a new group, name the resource group according to your planned configuration values. For more information about resource groups, see Azure Resource Manager overview.
    • Location: Select the location for your VNet. The location determines where the resources that you deploy to this VNet will live.
    • Subnet: Add the subnet Name and subnet Address range. You can add additional subnets later, after you create the VNet.
  5. Select Create.

Before creating a virtual network gateway for your virtual network, you first need to create the gateway subnet. The gateway subnet contains the IP addresses that are used by the virtual network gateway. If possible, it’s best to create a gateway subnet by using a CIDR block of /28 or /27 to provide enough IP addresses to accommodate future additional configuration requirements.

  1. In the Azure portal, select the Resource Manager virtual network for which you want to create a virtual network gateway.
  2. In the Settings section of your virtual network page, select Subnets to expand the Subnets page.
  3. On the Subnets page, select Gateway subnet to open the Add subnet page.


  4. The Name for your subnet is automatically filled in with the value GatewaySubnet. This value is required for Azure to recognize the subnet as the gateway subnet. Adjust the autofilled Address range values to match your configuration requirements, then select OK to create the subnet.

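If you prefer the command line, a roughly equivalent Azure CLI sketch looks like this (the resource group, VNet name, location, and address ranges are placeholders and should follow your addressing plan):

az group create --name MultiRegionRG --location eastus

az network vnet create --resource-group MultiRegionRG --name HubVNet --location eastus --address-prefix 10.0.0.0/16 --subnet-name Default --subnet-prefix 10.0.1.0/24

az network vnet subnet create --resource-group MultiRegionRG --vnet-name HubVNet --name GatewaySubnet --address-prefix 10.0.255.0/27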

Create Virtual Network Gateways

Once the Virtual Networks are created, we will create a Virtual Network Gateway for each of the Virtual Networks. Which Virtual Network Gateway is the Hub and which are the Spokes does not matter yet.

  1. Sign in to the Azure portal and select Create a resource. The New page opens.
  2. In the Search the marketplace field, enter virtual network gateway, and select Virtual network gateway from the search list.
  3. On the Virtual network gateway page, select Create to open the Create virtual network gateway page.


  4. On the Create virtual network gateway page, fill in the values for your virtual network gateway:
    • Name: Enter a name for the gateway object you’re creating. This name is different than the gateway subnet name.
    • Gateway type: Select VPN for VPN gateways.
    • VPN type: Select the VPN type that is specified for your configuration. Most configurations require a Route-based VPN type.
    • SKU: Select the gateway SKU from the dropdown. The SKUs listed in the dropdown depend on the VPN type you select. For more information about gateway SKUs, see Gateway SKUs.

      Only select Enable active-active mode if you’re creating an active-active gateway configuration. Otherwise, leave this setting unselected.

    • Location: You may need to scroll to see Location. Set Location to the location where your virtual network is located. For example, West US. If you don’t set the location to the region where your virtual network is located, it won’t appear in the drop-down list when you select a virtual network.
    • Virtual network: Choose the virtual network to which you want to add this gateway. Select Virtual network to open the Choose virtual network page and select the VNet. If you don’t see your VNet, make sure the Location field is set to the region in which your virtual network is located.
    • Gateway subnet address range: You’ll only see this setting if you didn’t previously create a gateway subnet for your virtual network. If you previously created a valid gateway subnet, this setting won’t appear.
    • Public IP address: This setting specifies the public IP address object that’s associated with the VPN gateway. The public IP address is dynamically assigned to this object when the VPN gateway is created. The VPN gateway currently supports only Dynamic public IP address allocation. However, dynamic allocation doesn’t mean that the IP address changes after it has been assigned to your VPN gateway. The only time the public IP address changes is when the gateway is deleted and re-created. It doesn’t change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway.
      • Leave Create new selected.
      • In the text box, enter a name for your public IP address.
    • Configure BGP ASN: Leave this setting unselected, unless your configuration specifically requires it. If you do require this setting, the default ASN is 65515, which you can change.
  5. Verify the settings and select Create to begin creating the VPN gateway. The settings are validated and you’ll see the Deploying Virtual network gateway tile on the dashboard. Creating a gateway can take up to 45 minutes. You may need to refresh your portal page to see the completed status.
  6. After you create the gateway, verify the IP address that’s been assigned to it by viewing the virtual network in the portal. The gateway appears as a connected device. You can select the connected device (your virtual network gateway) to view more information.
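A rough Azure CLI equivalent (again with placeholder names; --no-wait returns immediately because gateway creation can take up to 45 minutes):

az network public-ip create --resource-group MultiRegionRG --name HubGWIP --allocation-method Dynamic

az network vnet-gateway create --resource-group MultiRegionRG --name HubGW --location eastus --vnet HubVNet --public-ip-address HubGWIP --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 --no-wait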

Connecting the Gateways

With the Virtual Network Gateways created, it is time to connect the gateways. Starting with the Hub, connect the Hub to a Spoke. Then, connect that Spoke back to the Hub. Do this for each Spoke that is going to connect to the Hub.

  1. In the Azure portal, select All resources, enter virtual network gateway in the search box, and then navigate to the virtual network gateway for your VNet. For example, TestVNet1GW. Select it to open the Virtual network gateway page.


  2. Under Settings, select Connections, and then select Add to open the Add connection page.


  3. On the Add connection page, fill in the values for your connection:
    • Name: Enter a name for your connection. For example, TestVNet1toTestVNet4.
    • Connection type: Select VNet-to-VNet from the drop-down.
    • First virtual network gateway: This field value is automatically filled in because you’re creating this connection from the specified virtual network gateway.
    • Second virtual network gateway: This field is the virtual network gateway of the VNet that you want to create a connection to. Select Choose another virtual network gateway to open the Choose virtual network gateway page.
      • View the virtual network gateways that are listed on this page. Notice that only virtual network gateways that are in your subscription are listed. If you want to connect to a virtual network gateway that isn’t in your subscription, use PowerShell.
      • Select the virtual network gateway to which you want to connect.
      • Shared key (PSK): In this field, enter a shared key for your connection. You can generate or create this key yourself. In a site-to-site connection, the key you use is the same for your on-premises device and your virtual network gateway connection. The concept is similar here, except that rather than connecting to a VPN device, you’re connecting to another virtual network gateway.
  4. Select OK to save your changes.
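The same connections can be sketched with the Azure CLI, creating both directions between the Hub and one Spoke (gateway names and the shared key are placeholders, and the shared key must match on both connections):

az network vpn-connection create --resource-group MultiRegionRG --name HubToSpoke1 --vnet-gateway1 HubGW --vnet-gateway2 Spoke1GW --shared-key "YourSharedKey123"

az network vpn-connection create --resource-group MultiRegionRG --name Spoke1ToHub --vnet-gateway1 Spoke1GW --vnet-gateway2 HubGW --shared-key "YourSharedKey123"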

Verify your connections

Locate the virtual network gateway in the Azure portal. On the Virtual network gateway page, select Connections to view the Connections page for the virtual network gateway. After the connection is established, you’ll see the Status values change to Succeeded and Connected. Select a connection to open the Essentials page and view more information.


After verifying the connection was successful, the connection can be tested with a Point-to-Site connection or a Site-to-Site connection.

Speaking at DotNetSouth.Tech

I look forward to speaking on AI on the Edge at DotNetSouth.Tech. This year is the conference’s first year, so check it out.

AI on the Edge

The next evolution in cloud computing is a smarter application that does not run in the cloud. As the cloud has continued to evolve, the applications that utilize it have taken on more and more of its capabilities. This presentation will show how to push logic and machine learning from the cloud to an edge application. Afterward, creating edge applications that utilize the intelligence of the cloud should become effortless.