AI on the Edge
I look forward to speaking on AI on the Edge at DotNetSouth.Tech. This year is the conference’s first year so check it out.
I am happy to announce I will be speaking at the Orlando Code Camp again this year. I will be presenting AI on the Edge, a look into Microsoft’s Azure IoT Edge.
AI on the Edge
The next evolution in cloud computing is a smarter application that does not live in the cloud. As the cloud has evolved, the applications that utilize it have gained more and more of its capabilities. This presentation will show how to push logic and machine learning from the cloud down to an edge application. Afterward, creating edge applications that utilize the intelligence of the cloud should feel effortless.
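To make the idea concrete, here is a minimal sketch of what an Azure IoT Edge module can look like in C# with the Microsoft.Azure.Devices.Client SDK. The input/output names and the ScoreLocally stand-in are hypothetical; a real module would load an actual model:

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class Program
{
    static async Task Main()
    {
        // Connect to the local IoT Edge runtime using the module's environment settings.
        var client = await ModuleClient.CreateFromEnvironmentAsync(TransportType.Mqtt_Tcp_Only);
        await client.OpenAsync();

        // Route incoming telemetry through a local scoring step instead of the cloud.
        // "sensorInput" and "scoredOutput" are placeholder route names for this sketch.
        await client.SetInputMessageHandlerAsync("sensorInput", async (message, _) =>
        {
            var payload = Encoding.UTF8.GetString(message.GetBytes());
            var result = ScoreLocally(payload); // stand-in for the module's ML model

            // Forward only the inference result upstream.
            await client.SendEventAsync("scoredOutput",
                new Message(Encoding.UTF8.GetBytes(result)));
            return MessageResponse.Completed;
        }, null);

        await Task.Delay(-1); // keep the module alive
    }

    // Hypothetical local inference; returns a dummy score alongside the input.
    static string ScoreLocally(string payload) =>
        $"{{\"score\":0.9,\"input\":{payload}}}";
}
```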
I was once again accepted to speak at CodeMash. This year I will be presenting Alternative Device Interfaces and Machine Learning. If you would like to attend, tickets are on sale. Here is what will be covered:
In this presentation, we will look at how users interface with machines without the use of touch. These different types of interaction have their benefits and pitfalls. To showcase the power of these interactions, we will explore voice commands with mobile applications, Speech Recognition, and Computer Vision. After this presentation, attendees will have the knowledge to create applications that utilize voice, video, and machine learning.
I have been selected to speak at Update Conference Prague.
Being able to run compute cycles on local hardware is a practice predating silicon circuits. Mobile and web technology pushed computation away from local hardware and onto remote servers, and as cloud prices have decreased, more and more of those remote workloads have moved into the cloud. This technology cycle is now coming full circle, pushing computation that would have been done in the cloud back down to the client. The catalysts for completing the cycle are latency and cost: running computations on local hardware softens the load on the cloud and reduces overall cost and architectural complexity.
What has changed is how the computational logic reaches the device. Today, we rely on app stores and browsers to deliver the logic a client runs. Delivery mechanisms are evolving toward writing code once, running that logic in the cloud, and pushing the same logic through your application to run on the device itself. In this presentation, we will look at how to accomplish this with existing Azure technologies and how to prepare for the upcoming technologies that will run these workloads.
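As a rough illustration of pushing logic down to a device with existing Azure pieces, the sketch below uses the Microsoft.Azure.Devices service SDK to apply a module deployment to a single edge device. The connection string, device id, and deployment shape are placeholders, not a complete deployment manifest:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;

class DeployLogic
{
    // Push a module deployment to one edge device from your own application.
    static async Task PushModuleAsync()
    {
        var registry = RegistryManager.CreateFromConnectionString(
            "<iot-hub-connection-string>"); // placeholder

        var content = new ConfigurationContent
        {
            ModulesContent = new Dictionary<string, IDictionary<string, object>>
            {
                // $edgeAgent's desired properties tell the device which module
                // images to pull and run; the shape here is abbreviated.
                ["$edgeAgent"] = new Dictionary<string, object>
                {
                    ["properties.desired"] = new { /* runtime + module image settings */ }
                }
            }
        };

        await registry.ApplyConfigurationContentOnDeviceAsync("<edge-device-id>", content);
    }
}
```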
Using IoT devices, powered by Windows 10 IoT and Raspbian, we can collect data from the world surrounding us. That data can be used to create interactive environments for mixed, augmented, or virtual reality. To move the captured data from the devices to the interactive environment, the data travels through Microsoft’s Azure. First, it is ingested through Azure IoT Hub, which provides the security, bi-directional communication, and input rates needed for the solution. From the IoT Hub, the data moves directly to an Azure Service Bus Topic, which delivers it to every Subscription listening for it. An Azure Web App subscribes to the Topic and forwards the data through a SignalR Hub to the client. For this demo, the client is a Unity application that creates a virtual reality simulation showcasing the data.
After this introduction, utilizing each component of this technology stack should feel approachable. Seen in isolation, the technologies used in this demonstration may not seem useful to a developer; combined, they create a powerful tool to share nearly unlimited amounts of incoming data across multiple channels.
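For a feel of the middle of that pipeline, here is a small C# sketch of the Web App piece: a Service Bus Topic subscription whose messages are fanned out to connected clients through a SignalR hub. The hub, topic, and subscription names are made up for the example:

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Azure.ServiceBus;

// Hypothetical hub; connected Unity clients receive telemetry through it.
public class TelemetryHub : Hub { }

public class TopicForwarder
{
    private readonly IHubContext<TelemetryHub> _hub;
    private readonly SubscriptionClient _subscription;

    public TopicForwarder(IHubContext<TelemetryHub> hub)
    {
        _hub = hub;
        // Connection string, topic, and subscription names are placeholders.
        _subscription = new SubscriptionClient(
            "<service-bus-connection-string>", "telemetry", "web-app");
    }

    public void Start()
    {
        // Every message that lands on the subscription is pushed to all
        // connected clients over SignalR.
        _subscription.RegisterMessageHandler(
            async (message, token) =>
            {
                var json = Encoding.UTF8.GetString(message.Body);
                await _hub.Clients.All.SendAsync("telemetry", json);
            },
            new MessageHandlerOptions(args => Task.CompletedTask)
            {
                AutoComplete = true // settle messages once the handler returns
            });
    }
}
```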
This year I will be presenting Enable IoT with Edge Computing and Machine Learning at TechBash. Here is the outline:
Being able to run compute cycles on local hardware is a practice predating silicon circuits. Mobile and web technology pushed computation away from local hardware and onto remote servers, and as cloud prices have decreased, more and more of those remote workloads have moved into the cloud. This technology cycle is now coming full circle, pushing computation that would have been done in the cloud back down to the client. The catalysts for completing the cycle are latency and cost: running computations on local hardware softens the load on the cloud and reduces overall cost and architectural complexity.
What has changed is how the computational logic reaches the device. Today, we rely on app stores and browsers to deliver the logic a client runs. Delivery mechanisms are evolving toward writing code once, running that logic in the cloud, and pushing the same logic through your application to run on the device itself. In this presentation, we will look at how to accomplish this with existing Azure technologies and how to prepare for the upcoming technologies that will run these workloads.
At DevNet Create, I was interviewed by The Cube. Here’s the resulting interview:
April 14th is the Integrative Research and Ideas Symposium (IRIS) hosted by the UGA Graduate-Professional Student Association. I will be speaking on three separate topics at the event:
More than that, though, I look forward to hearing about the innovations and research presented by the graduate students and professionals at UGA. Here is their synopsis of IRIS:
The UGA Graduate-Professional Student Association is proud to announce IRIS 2018, a unique and exciting opportunity for students and other researchers from throughout the UGA community.
This initiative’s focus on community-building, cross-pollination of ideas, transferable skills, and service will:
This year will be the 4th annual Global Azure Bootcamp in Atlanta. If you don’t know about Global Azure Bootcamp, here is their snippet:
Welcome to Global Azure Bootcamp! All around the world user groups and communities want to learn about Azure and Cloud Computing!
On April 21, 2018, all communities will come together once again in the sixth great Global Azure Bootcamp event! Each user group will organize their own one day deep dive class on Azure the way they see fit and how it works for their members. The result is that thousands of people get to learn about Azure and join together online under the social hashtag #GlobalAzure!
Join hundreds of other organizers to help out and be part of the experience! Check out the different locations worldwide and if there is no location near you, why not organize one?
I will be leading off the IoT track after the keynote. All around, there is an amazing lineup of speakers, and it looks to be a great experience for both developers and decision makers. Seats are limited, so don’t wait to sign up.
I’m proud to be presenting Alternative Device Interfaces and Machine Learning at DevNet Create this year. With AI becoming more and more ubiquitous, it is important to note the effect on a user’s experience. This presentation is meant to show how to create modern applications using machine learning provided by a third party and showcase what some third parties provide.
In this presentation, we will look at how users interface with machines without the use of touch. These different types of interaction have their benefits and pitfalls. To showcase the power of these interactions, we will explore voice commands with mobile applications, Speech Recognition, and Computer Vision. After this presentation, attendees will have the knowledge to create applications that utilize voice, video, and machine learning.
Users now interact with applications through voice (Alexa, Cortana, Google Now) or video. More than a fad, this is a natural interface for users, and it is becoming more and more common with the ever-decreasing size of hardware.
Different types of interaction have their benefits and pitfalls. To showcase the power of these user interactions, we will explore:
- Voice commands with two app types: UWP and Xamarin Forms (iOS and Android)
- Speech Recognition with Cognitive Services: verifying the speaker with the Speaker Recognition API
- Computer Vision with Cognitive Services: verifying a user with the Face API
By utilizing UWP, Xamarin, and Cognitive Services, we will create a device with the ultimate in customization for user interaction. Come and see how!
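As a taste of the Cognitive Services piece, below is a small C# sketch that checks whether two photos show the same person using the Face API client SDK. The key, endpoint, and file paths are placeholders for this sketch:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;

class FaceVerification
{
    // Verify that two photos show the same person.
    static async Task<bool> IsSamePersonAsync(string photoA, string photoB)
    {
        var client = new FaceClient(new ApiKeyServiceClientCredentials("<face-api-key>"))
        {
            Endpoint = "https://<your-region>.api.cognitive.microsoft.com"
        };

        var idA = await DetectSingleFaceAsync(client, photoA);
        var idB = await DetectSingleFaceAsync(client, photoB);

        // The service returns a confidence score alongside the boolean verdict.
        var result = await client.Face.VerifyFaceToFaceAsync(idA, idB);
        return result.IsIdentical;
    }

    // Detect one face in an image and return its transient face id.
    static async Task<Guid> DetectSingleFaceAsync(FaceClient client, string path)
    {
        using (var image = File.OpenRead(path))
        {
            var faces = await client.Face.DetectWithStreamAsync(image, returnFaceId: true);
            return faces.First().FaceId.Value;
        }
    }
}
```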
I’m proud to be presenting Enable IoT with Edge Computing and Machine Learning at DevNexus this year. This is one of my newer talks. After doing a bit of work with Microsoft Azure’s gateway releases, I feel that this logic-delivery mechanism is going to become more and more pervasive. As a Microsoft MVP, I want to showcase what is coming from an AI and compute perspective.
Being able to run compute cycles on local hardware is a practice predating silicon circuits. Mobile and web technology pushed computation away from local hardware and onto remote servers, and as cloud prices have decreased, more and more of those remote workloads have moved into the cloud. This technology cycle is now coming full circle, pushing computation that would have been done in the cloud back down to the client. The catalysts for completing the cycle are latency and cost: running computations on local hardware softens the load on the cloud and reduces overall cost and architectural complexity.
What has changed is how the computational logic reaches the device. Today, we rely on app stores and browsers to deliver the logic a client runs. Delivery mechanisms are evolving toward writing code once, running that logic in the cloud, and pushing the same logic through your application to run on the device itself. In this presentation, we will look at how to accomplish this with existing Azure technologies and how to prepare for the upcoming technologies that will run these workloads.