April 14th is the Integrative Research and Ideas Symposium (IRIS) hosted by the UGA Graduate-Professional Student Association. I will be speaking on three separate topics at the event:
- Virtual Reality and IoT – Interacting with the Changing World
- Enable IoT with Edge Computing and Machine Learning
- Alternative Device Interfaces and Machine Learning
More than that, though, I look forward to hearing about the innovations and research presented by the graduate students and professionals at UGA. Here is their synopsis of IRIS:
The UGA Graduate-Professional Student Association is proud to announce IRIS 2018, a unique and exciting opportunity for students and other researchers from throughout the UGA community.
This initiative’s focus on community-building, cross-pollination of ideas, transferable skills, and service will:
- Provide an excellent opportunity to enhance research communication skills and present research to an interdisciplinary audience.
- Expose students to cutting-edge scholarship, industry professionals, and rich professional development opportunities.
- Help attendees refine the content and language of their CVs and résumés through career workshops.
- Encourage shared scholarship, research, and service.
- Equip attendees with new knowledge and skills which can strengthen teaching, learning, and career outcomes.
- Empower attendees to translate skills and research interests into career competencies.
I’m proud to be presenting Alternative Device Interfaces and Machine Learning at DevNet Create this year. With AI becoming increasingly ubiquitous, it is important to consider its effect on the user’s experience. This presentation shows how to create modern applications using machine learning provided by third parties, and showcases what some of those third parties offer.
In this presentation, we will look at how users interface with machines without the use of touch. Each mode of interaction has its benefits and pitfalls. To showcase their power, we will explore voice commands in mobile applications, speech recognition, and computer vision. After this presentation, attendees will know how to create applications that combine voice, video, and machine learning.
Users already speak to applications through voice assistants (Alexa, Cortana, Google Now) or interact with them through video. More than a fad, these are natural interfaces for users, and they are becoming more common as hardware keeps shrinking.
To showcase the power of these user interactions, we will explore:
- Voice commands in two app types: UWP and Xamarin.Forms (iOS and Android)
- Speech recognition with Cognitive Services: verifying the speaker with the Speaker Recognition API
- Computer vision with Cognitive Services: verifying a user with the Face API
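To give a flavor of the face-verification piece, here is a minimal sketch of the two Face API REST calls involved: detect a face to get a face ID, then verify two face IDs against each other. The region in the endpoint and the subscription key are placeholders I've assumed for illustration, not values from the talk.

```python
import json

# Placeholder endpoint and key -- the region and credentials are assumptions.
FACE_ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0"
SUBSCRIPTION_KEY = "<your-face-api-key>"  # placeholder, not a real key


def build_detect_request(image_url):
    """Build (url, headers, body) for POST /detect, which returns face IDs
    for the faces found in the image at image_url."""
    url = FACE_ENDPOINT + "/detect?returnFaceId=true"
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url}).encode("utf-8")
    return url, headers, body


def build_verify_request(face_id1, face_id2):
    """Build (url, headers, body) for POST /verify, which compares two
    previously detected face IDs and answers whether they are the same person."""
    url = FACE_ENDPOINT + "/verify"
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    }
    body = json.dumps({"faceId1": face_id1, "faceId2": face_id2}).encode("utf-8")
    return url, headers, body
```

With a real key, you would send these with any HTTP client, e.g. `requests.post(url, headers=headers, data=body)`, and read the face IDs (or the `isIdentical` verdict) out of the JSON response.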
By combining UWP, Xamarin, and Cognitive Services, we will build a device with deeply customizable user interactions. Come and see how!
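The speaker-verification step mentioned above can be sketched similarly. This follows my recollection of the preview-era Speaker Recognition REST API (`/spid/v1.0`): you POST the user's WAV audio against an already-enrolled verification profile, and the service answers Accept or Reject. The endpoint region, key, and profile ID are placeholders.

```python
# Assumed region and placeholder key -- adjust for your own subscription.
SPEAKER_ENDPOINT = "https://westus.api.cognitive.microsoft.com/spid/v1.0"
SPEAKER_KEY = "<your-speaker-recognition-key>"  # placeholder, not a real key


def build_speaker_verify_request(profile_id, wav_bytes):
    """Build (url, headers, body) for the verification call: the raw WAV
    recording is posted against an enrolled verification profile."""
    url = SPEAKER_ENDPOINT + "/verify?verificationProfileId=" + profile_id
    headers = {
        "Ocp-Apim-Subscription-Key": SPEAKER_KEY,
        "Content-Type": "application/octet-stream",  # raw WAV payload
    }
    return url, headers, wav_bytes


def is_accepted(response_json):
    """The service replies with e.g. {"result": "Accept", "confidence": "High"};
    treat anything other than "Accept" as a failed verification."""
    return response_json.get("result") == "Accept"
```

As with the Face API, a real application would send this request with an HTTP client and gate access on `is_accepted(...)` of the parsed JSON response.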