
Voice-command technology is advancing quickly. So is consumer demand.

Smart-home systems such as the Amazon Echo (powered by Alexa), Google Home and Apple’s upcoming HomePod are locked in an intense battle for a new kind of user space.

A big part of these devices’ appeal is the way they let users issue commands while speaking naturally. The goal: a frictionless experience in which users can issue myriad commands that are seamlessly understood, parsed and carried out quickly and efficiently.

[Image: Amazon, Apple and Google smart-home devices. From left: Google Home, Apple HomePod, Amazon Echo]

But from a designer’s perspective, creating these easy and intuitive user experiences is a real challenge. “Star Trek” makes it look easy. It isn’t. Inside these new smart-home digital assistants is a tightly woven fabric of complex technologies.

Audio connection

For the user, it all starts with a keyword or phrase, such as “Alexa” or “Hey, Siri.” Developers call this Keyword Spotting (KWS). Since a voice command can come at any time, the device has to listen constantly for the wake word that precedes its next command.
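To make the idea concrete, here’s a minimal sketch of a keyword-spotting loop in Python. It’s illustrative only: the wake_word_score function is a placeholder for whatever small classifier a real device would run, and the sample rate, window length and threshold are assumptions rather than anything a shipping product uses.

```python
import numpy as np

SAMPLE_RATE = 16_000   # 16 kHz mono audio, a common rate for speech
WINDOW_SEC = 1.0       # analyze the most recent second of audio
THRESHOLD = 0.8        # confidence above which we treat it as the wake word


def wake_word_score(window: np.ndarray) -> float:
    """Placeholder for a small keyword-spotting classifier (e.g. a tiny
    neural network). Returns the probability that `window` contains the
    wake word."""
    raise NotImplementedError


def listen_for_wake_word(audio_chunks):
    """Slide a one-second window over incoming audio chunks and yield the
    window whenever the classifier is confident it heard the wake word."""
    window = np.zeros(int(SAMPLE_RATE * WINDOW_SEC), dtype=np.float32)
    for chunk in audio_chunks:                 # e.g. 20 ms blocks from the mic driver
        window = np.concatenate([window[len(chunk):], chunk])
        if wake_word_score(window) > THRESHOLD:
            yield window                       # hand off to full speech recognition
```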

However, microphone technology makes that constant listening especially challenging. We humans hear in 360 degrees, and we’re used to being heard even when the listener’s back is turned. A single microphone, by contrast, picks up sound clearly only from a much narrower field.

To overcome this issue, most designers now place a series of microphones in a circular array. This array of mics, in turn, gets connected to a sophisticated Digital Signal Processing (DSP) system.

Voice-command processing

Intel’s smart-home team is at the forefront of designing what’s being called Far-Field Voice Algorithms. To improve the way smart-home devices listen, process and respond to voice commands, Intel engineers are using the latest in AI, machine learning and natural language processing.

For an example of how this works, consider the humble dance party. Let’s say you’re using Apple’s HomePod, which focuses on providing a great audio experience. However, at some point in the evening, you might want something other than loud music. Here’s where Intel’s latest technology comes in.

Intel knows that, even if Lady Gaga is playing at top volume, you might want to interrupt the music to order some Kung Pao chicken, dim the lights, change the song, or unlock the door. To ensure that your device is always ready, willing and able, the system uses “beamforming,” which combines the signals from the microphone array to focus on sound arriving from one direction. It first identifies your location, then steers the array’s pickup toward you.
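As a rough illustration of the principle (a textbook delay-and-sum beamformer, not Intel’s or Apple’s actual implementation), the Python sketch below time-aligns each microphone channel for a chosen direction so that speech from that direction reinforces itself while sound from everywhere else averages out. The array geometry, sample rate and far-field assumption are all simplifications.

```python
import numpy as np

SAMPLE_RATE = 16_000      # Hz
SPEED_OF_SOUND = 343.0    # m/s at room temperature


def delay_and_sum(channels: np.ndarray, mic_positions: np.ndarray,
                  direction: np.ndarray) -> np.ndarray:
    """Steer a microphone array toward `direction`.

    channels:      shape (num_mics, num_samples), one row per microphone
    mic_positions: shape (num_mics, 3), mic coordinates in meters
    direction:     unit vector pointing from the array toward the talker
    """
    # How much closer each mic is to the talker than the array center is.
    path_diff = mic_positions @ direction                         # meters
    # Mics closer to the talker hear the sound earlier, so delay them
    # until every channel lines up with the array center.
    delays = np.round(path_diff / SPEED_OF_SOUND * SAMPLE_RATE).astype(int)
    aligned = np.stack([np.roll(ch, d) for ch, d in zip(channels, delays)])
    # Speech from `direction` now adds constructively; sound from other
    # directions stays misaligned and averages toward zero.
    return aligned.mean(axis=0)
```

A circular six- or seven-mic array like the ones in today’s smart speakers would feed its channels and mic coordinates into a routine like this, with the steering direction supplied by a sound-source localizer.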

But beamforming and keyword spotting are pointless if your device can’t hear you in the first place. So the latest batch of smart-home controllers also includes something called Acoustic Echo Cancellation (AEC). AEC uses the speaker’s own output signal as a reference. It can artfully instruct the microphones to disregard “Born This Way” and instead keep an ear out for your next command.
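The classic building block behind echo cancellers is an adaptive filter. The sketch below uses a normalized least-mean-squares (NLMS) update, purely as an illustration of the idea rather than any vendor’s implementation: it learns how the speaker’s output shows up at the microphones and subtracts that estimate, leaving your voice behind.

```python
import numpy as np


def cancel_echo(mic: np.ndarray, playback: np.ndarray,
                taps: int = 256, mu: float = 0.5) -> np.ndarray:
    """Tiny NLMS echo canceller.

    mic:      samples the microphones picked up (music echo + your voice)
    playback: samples the speaker is playing (the known reference)
    Returns the mic signal with the estimated playback echo subtracted.
    """
    w = np.zeros(taps)                   # adaptive filter modeling the speaker-to-mic path
    out = np.zeros_like(mic, dtype=np.float64)
    for n in range(taps, len(mic)):
        x = playback[n - taps:n][::-1]   # most recent reference samples, newest first
        echo_estimate = w @ x
        err = mic[n] - echo_estimate     # what remains after removing the estimated echo
        out[n] = err
        w += mu * err * x / (x @ x + 1e-8)   # normalized LMS weight update
    return out
```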

This year, more than 35 million Americans will have used a voice-activated system at least once a month, according to research firm eMarketer. That’s a nearly 130% increase over last year. It’s also a clear sign that the market for voice-controlled smart-home functionality is real and growing fast.

Are your customers ready? If not, it's time to raise your voice!
