New Pluralsight Courses Released!

My new Pluralsight courses Cleaning and Preparing Data in Microsoft Azure and Architecting Xamarin.Forms Applications for Code Reuse were just released! Here are the synopses:

Cleaning and Preparing Data in Microsoft Azure

Abstract

This course targets software developers and data scientists looking to understand the initial steps in a machine learning solution. The content will showcase methods and tools available using Microsoft Azure.

Description

No data science project of merit has ever started with great data ready to plug into an algorithm. In this course, Cleaning and Preparing Data in Microsoft Azure, you’ll learn foundational knowledge of the steps required to utilize data in a machine learning project. First, you’ll discover different types of data and languages. Next, you’ll learn about managing large data sets and handling bad data. Finally, you’ll explore how to utilize Azure Notebooks. When you’re finished with this course, you’ll have the skills and knowledge of preparing data needed for use in Microsoft Azure. Software required: Microsoft Azure.

Architecting Xamarin.Forms Applications for Code Reuse

Abstract

A well-architected application is flexible to changing business requirements. This course will teach you how to architect Xamarin.Forms applications in a way that promotes reusable patterns.

Description

As business requirements change, so do solution assumptions. In this course, Architecting Xamarin.Forms Applications for Code Reuse, you’ll learn different architectural patterns in Xamarin.Forms. First, you’ll explore project structure and organization. Next, you’ll discover patterns and standards to promote code sharing. Finally, you’ll learn how to utilize dependency injection in Xamarin.Forms. When you’re finished with this course, you’ll have the skills and knowledge of architecting Xamarin.Forms projects needed to optimally promote code reuse.

Quicken Loans TechCon 2018

On September 20th at Cobb Center in Detroit, I will be presenting:

Alternative Device Interfaces and Machine Learning

Abstract

In this presentation, we will look at how users interface with machines without the use of touch. These different types of interaction have their benefits and pitfalls. To showcase the power of these user interactions, we will explore voice commands with mobile applications, Speech Recognition, and Computer Vision. After this presentation, attendees will have the knowledge to create applications that can utilize voice, video, and machine learning.

Description

Users use voice (Alexa, Cortana, Google Now) or video as a mode of interaction with applications. More than a fad, this is a natural interface for users and is becoming more and more common with the ever-decreasing size of hardware.

Different types of interaction have their benefits and pitfalls. To showcase the power of these user interactions, we will explore voice commands in two app types, UWP and Xamarin Forms (iOS and Android); Speech Recognition with Cognitive Services, verifying the speaker with the Speaker Recognition API; and Computer Vision with Cognitive Services, verifying a user with the Face API.

By utilizing UWP, Xamarin, and Cognitive Services, a device with the ultimate in customization for user interactions will be created. Come and see how!

Update – Techbash

UPDATE:

Another one of my talks was selected for Techbash: Alternative Device Interfaces and Machine Learning.

In this presentation, we will look at how users interface with machines without the use of touch. These different types of interaction have their benefits and pitfalls. To showcase the power of these user interactions, we will explore voice commands with mobile applications, Speech Recognition, and Computer Vision. After this presentation, attendees will have the knowledge to create applications that can utilize voice, video, and machine learning.

Users use voice (Alexa, Cortana, Google Now) or video as a mode of interaction with applications. More than a fad, this is a natural interface for users and is becoming more and more common with the ever-decreasing size of hardware.

Different types of interaction have their benefits and pitfalls. To showcase the power of these user interactions, we will explore voice commands in two app types, UWP and Xamarin Forms (iOS and Android); Speech Recognition with Cognitive Services, verifying the speaker with the Speaker Recognition API; and Computer Vision with Cognitive Services, verifying a user with the Face API.

By utilizing UWP, Xamarin, and Cognitive Services, a device with the ultimate in customization for user interactions will be created. Come and see how!

Original:

This year I will be presenting Enable IoT with Edge Computing and Machine Learning at TechBash. Here is the outline:

Being able to run compute cycles on local hardware is a practice predating silicon circuits. Mobile and Web technology has pushed computation away from local hardware and onto remote servers. As prices in the cloud have decreased, more and more of the remote servers have moved there. This technology cycle is coming full circle with pushing the computation that would be done in the cloud down to the client. The catalyst for the cycle completing is latency and cost. Running computations on local hardware softens the load in the cloud and reduces overall cost and architectural complexity.

The difference now is how the computational logic is sent to the device. Today, we rely on app stores and browsers to deliver the logic the client will use. Delivery mechanisms are evolving toward writing code once, running that logic in the cloud, and pushing the same logic to the client through your application to run directly on the device. In this presentation, we will look at how to accomplish this with existing Azure technologies and how to prepare for upcoming technologies that will run these workloads.


Home Control Flex Major Release

After nearly a year of hard work, the Home Control Flex application has finally reached a new release point. There have been major improvements in how Xamarin Forms and mobile features are used, along with significant changes to framework dependencies and the utilization of navigation pages.

The major problems in the previous version were poor usage of navigation pages, dependencies on old frameworks, and a lack of shared global resources. Together, these failures resulted in an unstable application that crashed on multiple pages. Fixes for those crashes were a slow rollout of shims and hacks to keep the previous decisions working.

The largest problem was the poor usage of navigation pages. For some reason, to implement a tabbed page with the tabs at the bottom on Android, the previous developers decided to use a single ContentPage, implement each tab as a ContentView inside that page, and swap those views out whenever a tab changed. This caused almost every major problem that could not be resolved as the app moved forward. To fix it, the BottomNavigationBarXF NuGet package was used. The base renderer was overridden to implement some custom functionality, but overall it was a clean integration, or at least as clean as such a big overhaul of the navigation system can be.
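To illustrate the difference (this is not the actual Home Control Flex code, which uses the BottomNavigationBarXF renderer, and the page names are hypothetical), the view-swapping approach versus a real TabbedPage looks roughly like this:

using Xamarin.Forms;

// Old approach (problematic): one ContentPage swapping ContentViews by hand,
// so Forms never sees real page transitions or lifecycle events.
public class ShellPage : ContentPage
{
    readonly ContentView _host = new ContentView();

    public ShellPage()
    {
        Content = _host;
    }

    // Called from custom "tab" buttons; this bypasses navigation entirely.
    public void ShowTab(View tabView) => _host.Content = tabView;
}

// New approach: a real TabbedPage, so each tab is a Page with its own lifecycle.
public class MainTabbedPage : TabbedPage
{
    public MainTabbedPage()
    {
        Children.Add(new HomePage { Title = "Home" });
        Children.Add(new DevicesPage { Title = "Devices" });
        Children.Add(new SettingsPage { Title = "Settings" });
    }
}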

Since the pages were being swapped out whenever a tab changed, the previous developers must have decided that instead of using navigation pages, they would keep swapping views out and maintain their own navigation stack. Without a NavigationPage in the app, the page lifecycle was completely off, and ObjectDisposedExceptions were being thrown by the Forms framework because the view lifecycle was not correctly managed. Xamarin Forms couldn’t track whether a view was going to be reused and would clean up disappeared views that were due to come back later. Once NavigationPage was used, this was no longer a problem.
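For reference, the fix is simply letting Forms own the stack: wrap the root page in a NavigationPage and push and pop pages instead of swapping views. A minimal sketch, with hypothetical page names:

using Xamarin.Forms;

public class App : Application
{
    public App()
    {
        // Forms now owns the navigation stack and the page lifecycle.
        MainPage = new NavigationPage(new HomePage());
    }
}

// From within HomePage (or anything with access to INavigation):
// await Navigation.PushAsync(new DeviceDetailPage());  // push onto the managed stack
// await Navigation.PopAsync();                         // pop back; Forms handles cleanup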

When I inherited the app, multiple frameworks were being used in the application. It had a JavaScript-like approach, where a framework might be brought in for some partial functionality or even a single method. Xamarin Forms Labs was the biggest offender. The previous developers had referenced it for one control and two converters. Once it was removed, the application was much more lightweight on disk. There was no noticeable performance gain at the time it was removed, but that was most likely due to the lack of utilization within the app and the fact that I had only been with the app for a month.

This app was riddled with copy-and-paste code reuse. Every page declared the same Style with the same name (the style for that page), and every page declared its own converter for inverting a boolean. All of these “shared” resources were moved to App.xaml for reuse by every page within the application.
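A minimal sketch of that consolidation in App.xaml; the namespace, resource keys, and the InverseBoolConverter class are hypothetical stand-ins for the app's actual resources:

<?xml version="1.0" encoding="utf-8" ?>
<Application xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:converters="clr-namespace:HomeControl.Converters"
             x:Class="HomeControl.App">
    <Application.Resources>
        <ResourceDictionary>
            <!-- Declared once here instead of being copied onto every page -->
            <converters:InverseBoolConverter x:Key="InverseBool" />
            <Style x:Key="PageLabelStyle" TargetType="Label">
                <Setter Property="FontSize" Value="16" />
                <Setter Property="TextColor" Value="Black" />
            </Style>
        </ResourceDictionary>
    </Application.Resources>
</Application>

Pages then consume these with {StaticResource InverseBool} and {StaticResource PageLabelStyle} instead of redeclaring them locally.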

After fixing the above issues, changing a variety of pages within the app, and adding a load of new features, the app should finally be a stable release with the market impact that was expected of its first release. I hope to continue to improve on the line of applications from Telular, including this app.

DevNet Create

I’m proud to be presenting Alternative Device Interfaces and Machine Learning at DevNet Create this year. With AI becoming more and more ubiquitous, it is important to note the effect on a user’s experience. This presentation is meant to show how to create modern applications using machine learning provided by a third party and showcase what some third parties provide.

In this presentation, we will look at how users interface with machines without the use of touch. These different types of interaction have their benefits and pitfalls. To showcase the power of these user interactions, we will explore voice commands with mobile applications, Speech Recognition, and Computer Vision. After this presentation, attendees will have the knowledge to create applications that can utilize voice, video, and machine learning.

Users use voice (Alexa, Cortana, Google Now) or video as a mode of interaction with applications. More than a fad, this is a natural interface for users and is becoming more and more common with the ever-decreasing size of hardware.

Different types of interaction have their benefits and pitfalls. To showcase the power of these user interactions, we will explore voice commands in two app types, UWP and Xamarin Forms (iOS and Android); Speech Recognition with Cognitive Services, verifying the speaker with the Speaker Recognition API; and Computer Vision with Cognitive Services, verifying a user with the Face API.

By utilizing UWP, Xamarin, and Cognitive Services, a device with the ultimate in customization for user interactions will be created. Come and see how!

Creating a Common Loading Page for Xamarin Forms

A common pitfall I see in Xamarin Forms is adding a loading indicator to every page. This is one of the problems that plagued the current Home Control Flex application. Instead of putting a loading icon on each page, you can create a base page that provides a loading overlay for all of them. You can do this using the ContentPropertyAttribute on your base page, as shown below.


<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="LoadingPage"
             x:Name="ContentPage">
    <ContentPage.Content>
        <AbsoluteLayout>
            <!-- The page's real content, supplied through the MainContent bindable property -->
            <ContentView Content="{Binding Source={x:Reference ContentPage}, Path=MainContent}"
                         HorizontalOptions="FillAndExpand" VerticalOptions="FillAndExpand"
                         AbsoluteLayout.LayoutBounds="0,0,1,1" AbsoluteLayout.LayoutFlags="All" />
            <!-- Your busy indicator (check out Syncfusion's busy indicator) -->
            <ContentView Content="{Binding Source={x:Reference ContentPage}, Path=LoadingView}"
                         IsVisible="{Binding Source={x:Reference ContentPage}, Path=IsBusy}"
                         AbsoluteLayout.LayoutBounds="0,0,1,1" AbsoluteLayout.LayoutFlags="All" />
        </AbsoluteLayout>
    </ContentPage.Content>
</ContentPage>


using Xamarin.Forms;
using Xamarin.Forms.Xaml;

[ContentProperty(nameof(MainContent))]
[XamlCompilation(XamlCompilationOptions.Compile)]
public partial class LoadingPage : ContentPage
{
    public LoadingPage()
    {
        InitializeComponent();
    }

    // The view that fills the page; because of the ContentProperty attribute,
    // anything placed inside a derived page's XAML lands here.
    public static readonly BindableProperty MainContentProperty =
        BindableProperty.Create(nameof(MainContent), typeof(View), typeof(LoadingPage));

    // The overlay shown while the page's IsBusy flag is true.
    public static readonly BindableProperty LoadingContentProperty =
        BindableProperty.Create(nameof(LoadingView), typeof(View), typeof(LoadingPage));

    public View LoadingView
    {
        get => (View) GetValue(LoadingContentProperty);
        set => SetValue(LoadingContentProperty, value);
    }

    public View MainContent
    {
        get => (View) GetValue(MainContentProperty);
        set => SetValue(MainContentProperty, value);
    }

    protected override void OnBindingContextChanged()
    {
        base.OnBindingContextChanged();
        if (MainContent == null)
        {
            return;
        }

        // Forward the page's binding context to the injected content so that
        // bindings inside MainContent resolve against the page's view model.
        SetInheritedBindingContext(MainContent, BindingContext);
    }
}
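
To use it, a page derives from LoadingPage instead of ContentPage, and whatever it declares as its content flows into MainContent thanks to the ContentProperty attribute. Here is a minimal sketch, assuming the local namespace maps to wherever LoadingPage lives, a code-behind of public partial class DevicesPage : LoadingPage, and a hypothetical IsLoading flag on the view model:

<?xml version="1.0" encoding="utf-8" ?>
<local:LoadingPage xmlns="http://xamarin.com/schemas/2014/forms"
                   xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
                   xmlns:local="clr-namespace:HomeControl.Pages"
                   x:Class="HomeControl.Pages.DevicesPage"
                   IsBusy="{Binding IsLoading}">
    <local:LoadingPage.LoadingView>
        <!-- Shown only while IsBusy is true; any busy indicator works here -->
        <ActivityIndicator IsRunning="True" HorizontalOptions="Center" VerticalOptions="Center" />
    </local:LoadingPage.LoadingView>

    <!-- Implicit content: becomes MainContent via the ContentProperty attribute -->
    <StackLayout Padding="20">
        <Label Text="Devices" />
    </StackLayout>
</local:LoadingPage>

Because IsBusy is a bindable property on every Page, toggling the view model's flag shows and hides the overlay without any per-page loading code.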