Yesterday, Amazon hosted its third annual Alexa Live event for its 900,000 developers and the countless users of Alexa devices. Last year’s Alexa Live brought many important changes that shaped the future of the Alexa ecosystem. Developers, device makers, startups, entrepreneurs, journalists and analysts tune in to Alexa Live to see what else Amazon has in store for the Alexa community.
One statistic released by Amazon caught my eye: one in four Alexa smart home interactions is now initiated by Alexa rather than by the customer. That number says a lot about where voice is heading, and Amazon reinforced it with its vision statement. Jeff Blankenburg, chief technology evangelist, described Alexa’s vision: to be “an ambient assistant that is proactive, personal and predictable, wherever customers need it.” Behind the scenes, Alexa is designed to be ambient, helping customers naturally rather than becoming the next distraction. The unique challenge with Alexa is to deliver and guide information to the user end to end without breaking that ambient experience. To do this, Amazon says it is working to make the Alexa ambient experience ubiquitous, multimodal, and smarter.
Make Alexa ubiquitous
Amazon announced new interactive, customer-engaging Alexa Presentation Language (APL) features: APL widgets and featured skill cards. APL widgets let customers interact with content on the home screen through easy-to-update views of skill content. Featured skill cards let developers place their skills on the Echo Show home screen alongside what is already displayed there. These are great features that enhance the multimodal experience for both users and developers. Users should be able to reach their most-used skills and discover new ones in a seamless interaction on the home screen.
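For a sense of what a widget-style APL document looks like, here is a minimal sketch assembled in Python. The top-level `type`, `version` and `mainTemplate` fields follow the published APL document format; the widget content itself is invented for illustration, and the exact schema Amazon requires for widgets may differ.

```python
import json

# Minimal sketch of an APL-style document for a home-screen widget.
# The document skeleton follows the APL document format; the Container
# and Text items below are illustrative, not a real widget template.
widget_doc = {
    "type": "APL",
    "version": "1.8",
    "mainTemplate": {
        "items": [
            {
                "type": "Container",
                "items": [
                    {"type": "Text", "text": "Today's workout: 20 min HIIT"},
                    {"type": "Text", "text": "Tap to open the skill"},
                ],
            }
        ]
    },
}

# Skills ship APL documents as JSON in their responses.
payload = json.dumps(widget_doc)
```

The appeal of this model is that a widget is just data: updating the home-screen view means re-rendering a document, not redeploying skill code.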
Amazon announced its Name-Free Interaction (NFI) Toolkit at last year’s Alexa Live event, and this year it has made some significant improvements. The toolkit helps surface developers’ skills by suggesting a skill based on a user’s request, even when the user doesn’t invoke it by name. Amazon says the toolkit has boosted traffic to eligible skills, in some cases doubling it.
The NFI Toolkit has a new feature that lets skills respond to popular discovery-driven Alexa utterances, such as “Alexa, tell me a story” or “Alexa, I need a workout.” It also adds custom skill suggestions, so frequent users are pointed to the skills they find most useful. One example Amazon gave was a customer asking, “Alexa, how did the Nasdaq do today?” and Alexa responding, “You have previously used the CNBC skill. Would you like to use it again?” I highlight this example because it makes skills personal and ubiquitous without being overwhelming.
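Under the hood, name-free interaction depends on a skill signaling that it can handle a given request; in the Alexa Skills Kit this is surfaced through `CanFulfillIntentRequest`. Here is a minimal sketch of the response shape a skill might return, built as a plain dict rather than with the ASK SDK; the slot details are omitted for brevity.

```python
# Sketch of a CanFulfillIntentRequest response. When Alexa is deciding
# which skill should answer a name-free utterance, it can ask candidate
# skills whether they can fulfill the intent; the skill answers with a
# canFulfillIntent verdict ("YES", "NO", or "MAYBE").
def can_fulfill_response(can_handle: bool) -> dict:
    answer = "YES" if can_handle else "NO"
    return {
        "version": "1.0",
        "response": {
            "canFulfillIntent": {
                "canFulfill": answer,
                # Per-slot verdicts would normally go here.
                "slots": {},
            }
        },
    }

resp = can_fulfill_response(True)
```

The key design point is that discovery is opt-in and evaluated per request: Alexa probes skills, then routes the utterance to the best match.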
Amazon is also expanding name-free interactions to support skill discovery in interactions that may span multiple skills. I think this feature is another great way to improve customer interaction and increase discoverability.
Another interactive feature Amazon added is Spotlight on Amazon Music. Amazon says artists can now connect directly with fans by uploading posts that promote new music. Amazon has also created Interactive Media skill components and Song Request skill components that shorten build times for radio, podcast and music providers and give users additional modes of interaction. Users will either love or hate these features: most primarily want to listen to music, and music is not necessarily an interactive activity.
Make Alexa multimodal
Amazon announced new Food Skills APIs that let developers quickly create food delivery and pickup experiences. One of the hardest parts of going out to eat is choosing where. Having local food offers and suggestions through Alexa should make that choice much easier for users, and should open new takeout and delivery channels for restaurants, stores, and delivery services.
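Amazon did not detail the Food Skills API surface at the event, so as a purely hypothetical sketch, a food-ordering skill’s intent handler might map slots to a spoken confirmation like this (`OrderFoodIntent`, `MenuItem`, and `Fulfillment` are invented names, not part of any published API):

```python
# Hypothetical handler for a food-ordering intent. Slot names and the
# intent shape are invented for illustration; a real skill would use
# the ASK SDK and whatever schema the Food Skills APIs define.
def handle_order(intent: dict) -> str:
    slots = intent.get("slots", {})
    item = slots.get("MenuItem", {}).get("value", "something to eat")
    method = slots.get("Fulfillment", {}).get("value", "pickup")
    return f"Okay, ordering {item} for {method}."

speech = handle_order({
    "name": "OrderFoodIntent",
    "slots": {
        "MenuItem": {"value": "a margherita pizza"},
        "Fulfillment": {"value": "delivery"},
    },
})
```

The point of a dedicated API here is that Amazon can standardize menus, fulfillment and payment so each restaurant doesn’t rebuild this plumbing.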
Amazon also introduced two new features that go hand in hand: event-based triggers and proactive suggestions. Developers can create proactive experiences that trigger skills when an event or activity occurs. Alexa routines have also been improved with custom tasks, which let users customize routines within skills. Amazon also added a feature that lets users send an experience that starts on an Alexa device to a connected smartphone. These features open up Alexa’s multimodal capabilities, and I think users will find Alexa becoming a crucial part of their day.
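Event-driven experiences like these build on the pattern behind Alexa’s Proactive Events API, where a skill posts a schema-conforming event that Alexa can surface to the user. Here is a sketch of such a payload; the `AMAZON.OrderStatus.Updated` schema name appears in Amazon’s published event list, but the field values below are illustrative.

```python
import datetime

# Sketch of a Proactive Events API payload. A skill backend posts this
# to Alexa when something happens (here, an order is delivered), and
# Alexa can then notify the user proactively.
def build_proactive_event(order_id: str) -> dict:
    now = datetime.datetime.utcnow()
    return {
        "timestamp": now.isoformat() + "Z",
        # Developer-supplied unique id for deduplication.
        "referenceId": f"order-{order_id}",
        "expiryTime": (now + datetime.timedelta(hours=1)).isoformat() + "Z",
        "event": {
            "name": "AMAZON.OrderStatus.Updated",
            # Payload fields are illustrative, not the exact schema.
            "payload": {"state": {"status": "ORDER_DELIVERED"}},
        },
        # Deliver to all users who granted the notification permission.
        "relevantAudience": {"type": "Multicast", "payload": {}},
    }

event = build_proactive_event("1234")
```

Because events carry an expiry time, stale notifications age out instead of interrupting the user hours later, which fits the ambient design goal.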
Amazon is also opening its Device Discovery feature to additional Alexa-enabled devices on the same network, allowing device manufacturers to integrate Device Discovery into other smart home devices and create a connected home. Amazon has also upgraded Alexa Guard to connect to smart security devices around the house, such as smoke, carbon monoxide and water leak detectors, which can then send notifications.
Make Alexa smarter
Amazon says skill engagement has doubled since it made Alexa Conversations generally available. It announced that Alexa Conversations is expanding to a public beta in German and all English locales, with a developer preview in Japan. It also announced Alexa Skill Components, which help developers build skills faster by plugging prebuilt skill code into existing voice models and code libraries.
Amazon is also making it easier for users to connect their accounts to a skill’s product or service, or to enroll, using voice-forward account linking and voice-forward consent. Amazon said it has upgraded the Alexa Design Guide, which codifies lessons learned from Amazon developers and the broader skill-building community.
Amazon has included other features that make it much easier to build skills and implement services and products in the Alexa ecosystem:
- Alexa Entities lets skills retrieve information from Alexa’s knowledge graph.
- Custom pronunciations lets developers add custom pronunciations to their skill models.
- A sample utterance recommendation engine uses grammar induction, sequence-to-sequence transformers, and data filtering to recommend utterances for a developer’s skills.
- A/B testing for skills lets developers run A/B tests and make data-driven launch decisions.
- A test generation tool helps developers create consolidated batch tests of skill capabilities.
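A/B testing for skills hinges on stable variant assignment: the same user must always land in the same variant for the results to be meaningful. Amazon has not described its internal mechanics, but a common technique, sketched below, is to hash the user ID together with an experiment name and bucket on the digest.

```python
import hashlib

# Deterministic A/B bucketing: hashing (experiment, user) gives every
# user a stable pseudo-random position in [0, 1), so assignment is
# consistent across sessions without storing any state.
def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map first 32 bits to [0, 1]
    return "A" if bucket < split else "B"

v1 = assign_variant("amzn1.ask.account.XYZ", "new-prompt")
v2 = assign_variant("amzn1.ask.account.XYZ", "new-prompt")
```

Salting the hash with the experiment name keeps assignments independent across experiments, so one test’s population doesn’t bias another’s.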
What’s great about these new features is that Amazon understands it doesn’t have to do all the work to make Alexa smart. It only needs to give developers the tools to implement intelligent interactions and experiences, and I think these announcements do exactly that.
Ambient computing is one of the hardest things to do, but I think it’s the most valuable in the long run. It could take another five to ten years of work to accomplish on a global scale.
Amazon’s Alexa Live event arguably brought more to the table than last year’s. Much of the work of creating a ubiquitous, multimodal, intelligent ambient experience lies in the hands of developers, device makers, entrepreneurs, and the Alexa community. To create an ambient experience, Amazon must create the tools and opportunities for these partners to do their part.
Amazon has created seamless interactions between skills and users with featured skill cards and APL widgets, and the NFI Toolkit gives skills more opportunities to be interactive and discoverable. With the Food Skills APIs, event-based triggers, and proactive suggestions, Amazon is weaving interactions between users and Alexa into more of people’s day. Amazon has succeeded in making skill-building easier and more accessible to developers, and I think the entire Alexa ecosystem, end to end, benefits.
Ambient computing is the prize, and from what I saw at Alexa Live, Amazon is bringing us closer to that reality. It’s a two-horse race with Google, and it looks like Amazon is currently in the lead.
Note: Moor Insights & Strategy co-op Jacob Freyman contributed to this article.
Moor Insights & Strategy, like all research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, including 8x8, Advanced Micro Devices, Amazon, Applied Micro, ARM, Aruba Networks, AT&T, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, Calix, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Digital Optics, Dreamchain, Echelon, Ericsson, Extreme Networks, Flex, Foxconn, Frame (now VMware), Fujitsu, Gen-Z Consortium, Glue Networks, GlobalFoundries, Google (Nest-Revolve), Google Cloud, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Ion VR, Inseego, Infosys, Intel, Interdigital, Jabil Circuit, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, MapBox, Marvell, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Mesosphere, Microsoft, Mojo Networks, National Instruments, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nuvia, ON Semiconductor, ONUG, OpenStack Foundation, Oracle, Panasas, Peraso, Pexip, Pixelworks, Plume Design, Poly, Portworx, Pure Storage, Qualcomm, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Resideo, Samsung Electronics, SAP, SAS, Scale Computing, Schneider Electric, Silver Peak, SONY, Springpath, Spirent, Splunk, Sprint, Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, TE Connectivity, Tenstorrent, Tobii Technology, T-Mobile, Twitter, Unity Technologies, UiPath, Verizon Communications, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zebra, Zededa and Zoho, which may be cited in blogs and research.