Update Conference Prague

I have been selected to speak at Update Conference Prague, where I will be presenting two sessions:

  • Enable IoT with Edge Computing and Machine Learning
  • Virtual Reality and IoT – Interacting with the changing world

Enable IoT with Edge Computing and Machine Learning

Running compute cycles on local hardware is a practice that predates silicon circuits. Mobile and web technology pushed computation away from local hardware and onto remote servers, and as cloud prices have fallen, more and more of those remote workloads have moved into the cloud. The cycle is now coming full circle: computation that would have been done in the cloud is being pushed back down to the client. The catalysts for completing the cycle are latency and cost, because running computations on local hardware lightens the load on the cloud and reduces overall cost and architectural complexity.

The difference now is how the computational logic is delivered to the device. Today we rely on app stores and browsers to deliver the logic a client will run. Delivery mechanisms are evolving so that you can write the code once, run it in the cloud, and push that same logic to the client through your application so it runs on the device. In this presentation, we will look at how to accomplish this with existing Azure technologies and how to prepare for upcoming technologies that will run these workloads.
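To make the idea of pushing logic down to the device concrete, here is a minimal sketch of an Azure IoT Edge module written against the Microsoft.Azure.Devices.Client SDK. The input name "sensorInput", the output name "scoredOutput", and the ScoreLocally method are illustrative placeholders, not code from the talk:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class Program
{
    static async Task Main()
    {
        // The Edge runtime injects the connection settings, so the same module
        // can be deployed to any device without code changes.
        ModuleClient moduleClient = await ModuleClient.CreateFromEnvironmentAsync();
        await moduleClient.OpenAsync();

        // Messages routed to the "sensorInput" input are scored locally on the
        // device and the result is sent to the "scoredOutput" output.
        await moduleClient.SetInputMessageHandlerAsync("sensorInput", async (message, userContext) =>
        {
            string payload = Encoding.UTF8.GetString(message.GetBytes());
            string result = ScoreLocally(payload); // run the model on the device instead of in the cloud

            using (var outMessage = new Message(Encoding.UTF8.GetBytes(result)))
            {
                await moduleClient.SendEventAsync("scoredOutput", outMessage);
            }
            return MessageResponse.Completed;
        }, moduleClient);

        await Task.Delay(-1); // keep the module running
    }

    // Stand-in for whatever model or rules engine runs at the edge.
    static string ScoreLocally(string payload) => payload;
}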

Virtual Reality and IoT – Interacting with the changing world

Using IoT devices powered by Windows 10 IoT and Raspbian, we can collect data from the world around us. That data can be used to create interactive environments for mixed reality, augmented reality, or virtual reality. To move the captured data from the devices to the interactive environment, the data travels through Microsoft Azure. First, it is ingested by Azure IoT Hub, which provides the security, bi-directional communication, and ingestion rates the solution needs. From IoT Hub, the data moves directly to an Azure Service Bus Topic, which delivers each message to every Subscription listening for it. An Azure Web App subscribes to the Topic and forwards the data through a SignalR Hub to the client. For this demo, the client is a Unity application that creates a virtual reality simulation showcasing that data.
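As a rough sketch of the forwarding step, the Web App can register a handler on its Service Bus Topic subscription and relay each message to connected clients through an ASP.NET Core SignalR hub. The hub class, the "telemetry" method name, and the connection string, topic, and subscription values below are illustrative placeholders rather than the demo's exact code:

using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Azure.ServiceBus;

// Clients (the Unity app in this demo) connect to this hub to receive data.
public class TelemetryHub : Hub { }

// Hosted in the Web App; subscribes to the Topic and pushes messages to clients.
public class TelemetryForwarder
{
    private readonly SubscriptionClient _subscriptionClient;
    private readonly IHubContext<TelemetryHub> _hubContext;

    public TelemetryForwarder(IHubContext<TelemetryHub> hubContext)
    {
        _hubContext = hubContext;

        // Placeholder connection string, topic, and subscription names.
        _subscriptionClient = new SubscriptionClient(
            "<service-bus-connection-string>", "telemetry-topic", "unity-subscription");

        _subscriptionClient.RegisterMessageHandler(OnMessageAsync,
            new MessageHandlerOptions(OnErrorAsync) { AutoComplete = true });
    }

    private Task OnMessageAsync(Message message, CancellationToken token)
    {
        string payload = Encoding.UTF8.GetString(message.Body);

        // Broadcast to every connected client; "telemetry" is the client-side method name.
        return _hubContext.Clients.All.SendAsync("telemetry", payload, token);
    }

    private Task OnErrorAsync(ExceptionReceivedEventArgs args)
    {
        // Log and continue; a production handler would do more here.
        System.Console.WriteLine(args.Exception);
        return Task.CompletedTask;
    }
}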

Once this introduction is finished, each component of the technology stack should feel approachable. Before seeing the pieces come together, the individual technologies in this demonstration may not seem all that useful to a developer; combined, they become a powerful tool for sharing nearly unlimited amounts of incoming data across multiple channels.

MVP Renewal

Proudly, I will be entering my second year as a Microsoft MVP, again in the Microsoft Azure category. Moving forward, I plan to do a large amount of work and training with Azure IoT Edge and Azure ML, specifically on the Scry Unlimited and West World of Warcraft projects. To contact me about your project, please visit the contact page.

To start, on 7/2/2018 I will be presenting AI on the Edge at the Atlanta Intelligent Devices user group. After that, I will be speaking at events around the country and, hopefully, internationally. In addition to my usual talks on Mobile, Cloud, and Edge, I will be adding Machine Learning and Artificial Intelligence, focusing specifically on their integration with Edge and Mobile computing. If you are looking for a speaker, check out my speaker page and fill out the form.

Finally, I am still putting together events in Atlanta. If you would like to participate in any of the following events, just follow their links or message me on Twitter:

TechBash 2018

This year I will be presenting Enable IoT with Edge Computing and Machine Learning at TechBash. Here is the outline:

Running compute cycles on local hardware is a practice that predates silicon circuits. Mobile and web technology pushed computation away from local hardware and onto remote servers, and as cloud prices have fallen, more and more of those remote workloads have moved into the cloud. The cycle is now coming full circle: computation that would have been done in the cloud is being pushed back down to the client. The catalysts for completing the cycle are latency and cost, because running computations on local hardware lightens the load on the cloud and reduces overall cost and architectural complexity.

The difference now is how the computational logic is delivered to the device. Today we rely on app stores and browsers to deliver the logic a client will run. Delivery mechanisms are evolving so that you can write the code once, run it in the cloud, and push that same logic to the client through your application so it runs on the device. In this presentation, we will look at how to accomplish this with existing Azure technologies and how to prepare for upcoming technologies that will run these workloads.


IRIS Conference

April 14th is the Integrative Research and Ideas Symposium (IRIS) hosted by the UGA Graduate-Professional Student Association. I will be speaking on three separate topics at the event:

  • Virtual Reality and IoT – Interacting with the Changing World
  • Enable IoT with Edge Computing and Machine Learning
  • Alternative Device Interfaces and Machine Learning

More than that, though, I look forward to hearing about the innovations and research presented by the graduate students and professionals at UGA. Here is their synopsis of IRIS:

The UGA Graduate-Professional Student Association is proud to announce IRIS 2018, a unique and exciting opportunity for students and other researchers from throughout the UGA community. 

This initiative’s focus on community-building, cross-pollination of ideas, transferrable skills, and service will:

  • Provide an excellent opportunity to enhance research communication skills and present research to an interdisciplinary audience. 
  • Expose students to cutting-edge scholarship, industry professionals, and rich professional development opportunities.
  • Help attendees refine the content and language of their CVs and résumés through career workshops.
  • Encourage shared scholarship, research, and service.
  • Equip attendees with new knowledge and skills which can strengthen teaching, learning, and career outcomes. 
  • Empower attendees to translate skills and research interests into career competencies. 

CodeStock

I’m proud to be presenting Alternative Device Interfaces and Machine Learning at CodeStock this year. With AI becoming more and more ubiquitous, it is important to note its effect on the user’s experience. This presentation is meant to show how to create modern applications using machine learning provided by a third party, and to showcase what some of those third parties provide.

In this presentation, we will look at how users interface with machines without the use of touch. These different types of interaction each have their benefits and pitfalls. To showcase the power of these interactions, we will explore voice commands in mobile applications, speech recognition, and computer vision. After this presentation, attendees will have the knowledge to create applications that utilize voice, video, and machine learning.

Users already interact with applications through voice (Alexa, Cortana, Google Now) or video. More than a fad, this is a natural interface and is becoming increasingly common as hardware keeps shrinking.

Different types of interaction have their benefits and pitfalls. To showcase the power of these user interactions we will explore:

  • Voice commands with two app types: UWP and Xamarin Forms (iOS and Android).
  • Speech Recognition with Cognitive Services: verifying the speaker with the Speaker Recognition API.
  • Computer Vision with Cognitive Services: verifying a user with the Face API.
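As a rough illustration of the Face API verification step above, the sketch below calls the Cognitive Services REST endpoints directly with HttpClient: it detects a face in a captured frame to get a faceId, then verifies it against a previously captured faceId. The region, subscription key, file path, and faceId values are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class FaceVerificationDemo
{
    // Placeholder region and key; use the values from your Cognitive Services resource.
    const string Endpoint = "https://westus.api.cognitive.microsoft.com/face/v1.0";
    const string SubscriptionKey = "<face-api-key>";

    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SubscriptionKey);

            // 1. Detect a face in the captured frame to get a transient faceId.
            byte[] image = System.IO.File.ReadAllBytes("capture.jpg"); // placeholder path
            var imageContent = new ByteArrayContent(image);
            imageContent.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

            var detectResponse = await http.PostAsync($"{Endpoint}/detect", imageContent);
            string detectJson = await detectResponse.Content.ReadAsStringAsync();
            Console.WriteLine($"Detect result: {detectJson}"); // contains the faceId(s)

            // 2. Verify the detected face against a known faceId captured earlier.
            //    (The faceId values would be parsed from earlier detect responses.)
            string body = "{\"faceId1\":\"<newFaceId>\",\"faceId2\":\"<knownFaceId>\"}";
            var verifyContent = new StringContent(body, Encoding.UTF8, "application/json");

            var verifyResponse = await http.PostAsync($"{Endpoint}/verify", verifyContent);
            string verifyJson = await verifyResponse.Content.ReadAsStringAsync();

            // The response includes "isIdentical" and a confidence score.
            Console.WriteLine($"Verify result: {verifyJson}");
        }
    }
}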

By utilizing UWP, Xamarin, and Cognitive Services, we will create a device with the ultimate in customization for user interaction. Come and see how!

Azure IoT Edge – exec user process caused “exec format error”

If you are running Azure IoT Edge on a Raspberry Pi and an Edge module’s container logs show ‘exec user process caused “exec format error”’, you are most likely running a container built for a different architecture on the Raspberry Pi’s ARM processor. If the Dockerfile used to build the container starts with:

  • FROM microsoft/dotnet:2.0.0-runtime

or

  • FROM microsoft/dotnet:2.0.0-runtime-nanoserver-1709

then the line above should be changed to one of the following:

  • FROM microsoft/dotnet:2.0.5-runtime-stretch-arm32v7
  • FROM microsoft/dotnet:2.0-runtime-stretch-arm32v7
  • FROM microsoft/dotnet:2.0.5-runtime-deps-stretch-arm32v7
  • FROM microsoft/dotnet:2.0-runtime-deps-stretch-arm32v7