Advances in Speech Will Accelerate and Make Voice a Vital Interaction Paradigm

Speech will quickly become the predominant interaction modality for smart glasses in 2018. As Forrester aptly put it, “we’ll see increasing demand for developers that know how to build augmented reality and natural language processing (NLP) based experiences.”

Because many AR use cases involve complex, hands-on work, navigating the wearable interface with speech is ideal, keeping the worker completely hands-free. Being hands-free and accessing information heads-up remains one of the most compelling reasons why companies are investing in smart glasses today, and we expect this trend to continue.

Equally important on the consumer side, the emergence of smart assistants and natural language processing is making people more familiar with speech as an interaction modality. A quick glance at highlights from last week’s CES confirms the proliferation of speech- and voice-recognition devices, and there is more to come. Much as touch-based interactions have become second nature to everyone today, speech will follow the same pattern.

As we look into how AR is set to evolve in 2018 and beyond, we also see other modalities such as gesture, sensor triggers and object recognition emerging. For example, in mixed reality experiences where multiple objects can be placed in space, speech is an inefficient way to direct exactly where they should go; gesture is much more streamlined. Similarly, sensor triggers (e.g. from IoT-enabled sensors) and object recognition will further reduce the need to interact with on-screen content by sensing the user’s environment.
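To make this division of labor between modalities concrete, here is a minimal sketch of an input router for a wearable app. Every type and name below is hypothetical and not tied to any vendor SDK: speech carries discrete, hands-free commands; gesture carries spatial placement; and sensor triggers arrive with no explicit user interaction at all.

```kotlin
// Hypothetical input events a smart-glasses app might receive from
// different modalities. None of these types come from a real SDK.
sealed class InputEvent {
    data class SpeechCommand(val utterance: String) : InputEvent()
    data class Gesture(val kind: String, val x: Float, val y: Float) : InputEvent()
    data class SensorTrigger(val sensorId: String, val value: Double) : InputEvent()
}

class InputRouter {
    fun handle(event: InputEvent) = when (event) {
        // Speech suits discrete, hands-free commands ("next step", "show manual").
        is InputEvent.SpeechCommand ->
            println("Voice command: ${event.utterance}")
        // Gesture suits spatial tasks, e.g. placing an object in mixed reality.
        is InputEvent.Gesture ->
            println("Place object at (${event.x}, ${event.y}) via ${event.kind}")
        // Sensor triggers surface context without any explicit interaction.
        is InputEvent.SensorTrigger ->
            println("Sensor ${event.sensorId} fired with value ${event.value}")
    }
}

fun main() {
    val router = InputRouter()
    router.handle(InputEvent.SpeechCommand("next step"))
    router.handle(InputEvent.Gesture(kind = "air-tap", x = 0.4f, y = 0.7f))
    router.handle(InputEvent.SensorTrigger(sensorId = "torque-wrench-01", value = 42.0))
}
```

The point of the sketch is simply that each modality maps naturally to a different class of task, so a multimodal interface routes each event to the handler best suited to it rather than forcing everything through speech.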

View 2018 AR Prediction #5: Tech behemoths will drive the 3D content revolution >
