Amazon and other companies announced the Voice Interoperability Initiative, a new program designed to speed up the widespread adoption of voice-activated assistants.
Amazon says that the initiative is built around a belief that voice services should work alongside one another on a single device, and that voice-enabled products should be designed to support multiple simultaneous wake words. More than 30 companies are supporting the effort, including global brands like Amazon, Baidu, BMW, Bose, Cerence, ecobee, Harman, Logitech, Microsoft, Salesforce, Sonos, Sound United, Sony Audio Group, Spotify and Tencent; telecommunications operators like Free, Orange, SFR and Verizon; hardware solutions providers like Amlogic, InnoMedia, Intel, MediaTek, NXP Semiconductors, Qualcomm Technologies, Inc., SGW Global and Tonly; and systems integrators like CommScope, DiscVision, Libre, Linkplay, MyBox, Sagemcom, StreamUnlimited and Sugr. Notably absent from the initiative, however, are Alphabet Inc's Google Assistant, Apple Inc's Siri and Samsung Electronics' Bixby.
Software built by Google and Apple powers virtually all of the world’s smartphones, giving the two companies a captive audience thanks to the versions of Google Assistant and Apple’s Siri installed by default on new devices. Currently, users can’t make Alexa the default assistant on an iPhone, although that’s possible on Android handsets.
Amazon says that the Voice Interoperability Initiative is built around four priorities:
- Developing voice services that can work seamlessly with others, while protecting the privacy and security of customers
- Building voice-enabled devices that promote choice and flexibility through multiple, simultaneous wake words
- Releasing technologies and solutions that make it easier to integrate multiple voice services on a single product
- Accelerating machine learning and conversational AI research to improve the breadth, quality and interoperability of voice services
Companies participating in the Voice Interoperability Initiative will work with one another to ensure their customers have the freedom to interact with multiple voice services on a single device. Products that support multiple voice services will also support multiple simultaneous wake words, so customers can access each service simply by saying the corresponding wake word. Customers get to enjoy the skills and capabilities of each service, from Alexa and Cortana to Djingo, Einstein, and any number of emerging voice services.
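The wake-word routing described above can be sketched in a few lines. This is a hypothetical illustration, not code from any participating company: it assumes each voice service registers a wake word and a handler, and an utterance is routed to whichever service's wake word it begins with.

```python
# Minimal sketch of multi-wake-word dispatch on a single device.
# All class and handler names here are hypothetical, for illustration only.

class WakeWordDispatcher:
    """Routes an utterance to the voice service whose wake word it starts with."""

    def __init__(self):
        self.services = {}  # wake word -> handler callable

    def register(self, wake_word, handler):
        self.services[wake_word.lower()] = handler

    def dispatch(self, utterance):
        words = utterance.lower().split()
        if not words:
            return None
        handler = self.services.get(words[0])
        if handler is None:
            return None  # no registered service claims this wake word
        # Hand the remainder of the utterance to the matching service.
        return handler(" ".join(words[1:]))


dispatcher = WakeWordDispatcher()
dispatcher.register("alexa", lambda text: f"Alexa handles: {text}")
dispatcher.register("cortana", lambda text: f"Cortana handles: {text}")

# Each wake word reaches its own service, side by side on one device.
print(dispatcher.dispatch("Alexa play some music"))
print(dispatcher.dispatch("Cortana check my calendar"))
```

In a real product the wake-word detectors run continuously in hardware or low-level software rather than matching text, but the routing principle is the same: the spoken wake word selects which service receives the request.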
Alexa machine learning and speech science technology is designed to support multiple, simultaneous wake words. As a result, any device maker building with the Alexa Voice Service (AVS) can build differentiated products that feature Alexa alongside other voice services.
Still, device makers interested in supporting multiple, simultaneous wake words often face higher development costs and increased memory load on their devices. To address this, the Voice Interoperability Initiative will also include support from hardware providers like Amlogic, Intel, MediaTek, NXP Semiconductors and Qualcomm Technologies, Inc.; original design manufacturers (ODMs) like InnoMedia, Tonly and SGW Global; and systems integrators like CommScope, DiscVision, Libre, Linkplay, MyBox, Sagemcom, StreamUnlimited and Sugr. As part of the initiative, these companies will develop products and services that make it easier and more affordable for OEMs to support multiple wake words on their devices.
Companies involved in the initiative will work with researchers and universities to further accelerate machine learning and wake word technology, from developing algorithms that allow wake words to run on portable, low-power devices to improving the encryption and APIs that ensure voice recordings are routed securely to the right destination.