UPDATE:
Another one of my talks was selected for TechBash: Alternative Device Interfaces and Machine Learning.
In this presentation, we will look at how users interface with machines without the use of touch. These different types of interaction have their benefits and pitfalls. To showcase the power of these interactions, we will explore voice commands in mobile applications, speech recognition, and computer vision. After this presentation, attendees will have the knowledge to create applications that utilize voice, video, and machine learning.
Users already interact with applications through voice (Alexa, Cortana, Google Now) or video. More than a fad, these are natural interfaces, and they are becoming more and more common as hardware keeps shrinking.
Different types of interaction have their benefits and pitfalls. To showcase the power of these user interactions, we will explore:
- Voice commands in two app types, UWP and Xamarin Forms (iOS and Android) - see the sketch just after this list
- Speech recognition with Cognitive Services: verifying the speaker with the Speaker Recognition API
- Computer vision with Cognitive Services: verifying a user with the Face API (a sketch appears at the end of this section)
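As a taste of the voice-command piece, here is a minimal sketch using the UWP SpeechRecognizer constrained to a fixed list of commands. It has to run inside a UWP app with the Microphone capability declared in Package.appxmanifest; the class name and command phrases are illustrative, not from the talk itself.

```csharp
using System;
using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;

static class VoiceCommands
{
    // Listens for one of a fixed set of commands using the built-in
    // UWP speech recognizer. Requires the Microphone capability.
    public static async Task ListenAsync()
    {
        var recognizer = new SpeechRecognizer();

        // Constrain recognition to a small command vocabulary.
        recognizer.Constraints.Add(new SpeechRecognitionListConstraint(
            new[] { "lights on", "lights off", "status" }, "commands"));
        await recognizer.CompileConstraintsAsync();

        SpeechRecognitionResult result = await recognizer.RecognizeAsync();
        if (result.Status == SpeechRecognitionResultStatus.Success)
        {
            // result.Text holds the matched command phrase.
            Console.WriteLine($"Heard: {result.Text} ({result.Confidence})");
        }
    }
}
```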
By combining UWP, Xamarin, and Cognitive Services, we will build a device with highly customized user interactions. Come and see how!
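And as a preview of verifying a user with the Face API, below is a rough sketch that calls the REST verify endpoint directly. The subscription key, region, and face IDs are placeholders; in a real app, the face IDs would come from earlier calls to the /detect endpoint with your camera frames.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static class FaceVerifyDemo
{
    public static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // Placeholder subscription key from the Azure portal.
            http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-key>");

            // The two face IDs would come from prior calls to /face/v1.0/detect.
            var body = new StringContent(
                "{\"faceId1\":\"<id-from-detect>\",\"faceId2\":\"<id-from-detect>\"}",
                Encoding.UTF8, "application/json");

            // The region (westus here) must match where the resource was created.
            var response = await http.PostAsync(
                "https://westus.api.cognitive.microsoft.com/face/v1.0/verify", body);

            // The service answers with {"isIdentical": bool, "confidence": 0..1}.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}
```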
Original:
This year I will be presenting Enable IoT with Edge Computing and Machine Learning at TechBash. Here is the outline:
Running compute cycles on local hardware is a practice that predates silicon circuits. Mobile and web technology pushed computation away from local hardware and onto remote servers, and as cloud prices decreased, more and more of those remote servers moved into the cloud. This technology cycle is now coming full circle, pushing computation that would be done in the cloud back down to the client. The catalysts for completing the cycle are latency and cost: running computations on local hardware softens the load on the cloud and reduces overall cost and architectural complexity.
The difference now is how the computational logic is sent to the device. Today, we rely on app stores and browsers to deliver the logic the client will run. Delivery mechanisms are evolving toward writing code once, running that logic in the cloud, and also pushing it to the client through your application so it runs on the device. In this presentation, we will look at how to accomplish this with existing Azure technologies and how to prepare for upcoming technologies to run these workloads.
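To make the "push logic down to the device" idea concrete, here is a minimal sketch using the Azure IoT Hub device SDK (Microsoft.Azure.Devices.Client): the device sends telemetry to the hub, and the cloud can call back into the device through a direct method. The connection string, method name, and payloads are placeholders.

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class Device
{
    static async Task Main()
    {
        // Placeholder connection string; copy the real one from the Azure portal.
        var client = DeviceClient.CreateFromConnectionString(
            "HostName=my-hub.azure-devices.net;DeviceId=pi-01;SharedAccessKey=<key>",
            TransportType.Mqtt);

        // Direct methods let the cloud invoke logic on the device
        // (the "push computation down to the client" idea from the talk).
        await client.SetMethodHandlerAsync("runInference", (request, context) =>
        {
            Console.WriteLine($"Cloud requested: {request.DataAsJson}");
            var reply = Encoding.UTF8.GetBytes("{\"status\":\"ok\"}");
            return Task.FromResult(new MethodResponse(reply, 200));
        }, null);

        // Device-to-cloud telemetry (e.g., a sensor reading on a Raspberry Pi).
        var message = new Message(Encoding.UTF8.GetBytes("{\"temperature\":21.5}"));
        await client.SendEventAsync(message);

        Console.ReadLine(); // stay connected so direct-method calls can arrive
    }
}
```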
Tags: Azure, Cognitive Services, Dotnet Core, IoT Hub, Mobile, Raspberry Pi, Xamarin, Xamarin Forms
Last modified: July 23, 2018