Recent developments in technology have focused more and more on user experience. Companies like Google, Amazon and Microsoft are doing a great deal of research in this area. The problem they are trying to solve is reducing the effort a user has to spend to complete an activity, and the friction of switching between activities. “The best interface is no interface!”, says the title of Golden Krishna’s popular book.
The fundamental question is: what is the easiest way to interact with our surroundings? The answer is simple: natural conversation – something computers still do not fully understand on their own. Projects like Siri, Google Now, Cortana and Amazon Alexa are all trying to build this kind of interface.
Alexa is particularly interesting in this category because Amazon has decided to open the platform to outside developers. Amazon describes Alexa as having “infinite capabilities”, which it calls “skills”.
What is an Alexa skill?
Alexa is an ever-expanding brain. It is the voice that powers millions of Amazon Echo devices and Fire TV sticks. Alexa ships with a basic set of capabilities by default, and the user can install more by choosing from the more than 12,000 skills available at the moment. Developers who want to put their service on the Alexa platform use the Alexa Skills Kit (ASK) to build these natural-language interfaces. For example, Starbucks built an Alexa skill that lets users place their order and manage their account by voice.
How does the Alexa Skills Kit work?
ASK is a collection of tools and APIs from Amazon that help developers and designers build skills for Alexa. Skills can be simple single-sentence commands or complex multi-sentence conversations in which Alexa needs to keep track of context as well. Categorized by functionality, there are three types of skills:
1. Smart Home skills
These skills use the Smart Home Skill API to control smart devices in the home, such as lights, fans, refrigerators and thermostats. A rough sketch of what the backing code might look like is shown below.
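For illustration, here is a minimal sketch of the Lambda function behind such a skill, assuming the v3 Smart Home Skill API directive format (`Alexa.PowerController` with `TurnOn`/`TurnOff`); the `switch_device` helper is hypothetical and stands in for whatever actually talks to the hardware.

```python
import uuid
from datetime import datetime

def switch_device(endpoint_id, power_on):
    """Hypothetical call into the device cloud; replace with real control logic."""
    pass

def lambda_handler(event, context):
    # Smart home skills receive "directives" rather than free-form utterances.
    directive = event["directive"]
    header = directive["header"]
    endpoint_id = directive["endpoint"]["endpointId"]

    if header["namespace"] == "Alexa.PowerController":
        power_on = header["name"] == "TurnOn"
        switch_device(endpoint_id, power_on)
        state = "ON" if power_on else "OFF"

        # Acknowledge the directive and report the resulting power state.
        return {
            "event": {
                "header": {
                    "namespace": "Alexa",
                    "name": "Response",
                    "payloadVersion": "3",
                    "messageId": str(uuid.uuid4()),
                    "correlationToken": header.get("correlationToken"),
                },
                "endpoint": {"endpointId": endpoint_id},
                "payload": {},
            },
            "context": {
                "properties": [{
                    "namespace": "Alexa.PowerController",
                    "name": "powerState",
                    "value": state,
                    "timeOfSample": datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
                    "uncertaintyInMilliseconds": 500,
                }]
            },
        }

    raise ValueError("Unsupported directive")
```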
2. Flash Briefing skills
These skills use the Flash Briefing Skill API, which defines the words users say to invoke a flash briefing. The content provider only needs to return its feed to Alexa in a pre-specified format so that Alexa can parse it and read it to the end user; a sketch of such a feed item appears below.
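To give an idea of that format, the sketch below builds one feed item using the field names from the Flash Briefing feed format (`uid`, `updateDate`, `titleText`, `mainText`, `redirectionUrl`); the headline text and URL are placeholders, and a real skill would serve this JSON from an HTTPS endpoint.

```python
import json
from datetime import datetime

def flash_briefing_item():
    """One item of a flash briefing feed; Alexa reads mainText aloud."""
    return {
        "uid": "urn:uid:1",  # unique id per feed item
        "updateDate": datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S.0Z"),
        "titleText": "Example headline",
        "mainText": "Text that Alexa reads aloud for this briefing item.",
        "redirectionUrl": "https://example.com/full-story",  # placeholder link
    }

if __name__ == "__main__":
    # A feed endpoint would return this JSON over HTTPS.
    print(json.dumps(flash_briefing_item(), indent=2))
```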
3. Custom skills
These skills can handle almost any type of request. They are considerably more complex to implement but give the developer a greater degree of control over the experience. Custom skills are built in two parts: first the developer specifies the words that invoke the skill, then the developer maps each “intent” a user can express to the function that should run when Alexa detects it. The parsing and identification of the intent is done by Alexa. The skill’s logic typically runs as a function on AWS Lambda, and Amazon even provides free AWS credits for Alexa developers! A minimal handler is sketched below.
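Here is a minimal sketch of a Lambda handler for a hypothetical custom skill. The intent name `OrderStatusIntent` and the reply text are invented for the example; the request/response envelope follows the custom skill JSON interface.

```python
def build_response(text, end_session=True):
    """Wrap plain-text speech in the response envelope Alexa expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]

    if request["type"] == "LaunchRequest":
        # "Alexa, open <invocation name>" with no specific intent yet.
        return build_response("Welcome! What would you like to do?", end_session=False)

    if request["type"] == "IntentRequest":
        intent_name = request["intent"]["name"]
        if intent_name == "OrderStatusIntent":  # hypothetical intent
            return build_response("Your order is on its way.")
        return build_response("Sorry, I didn't understand that.")

    # SessionEndedRequest and anything else: just close politely.
    return build_response("Goodbye.")
```

Alexa does the heavy lifting of speech recognition and intent matching; the handler only ever sees structured JSON and returns the text it wants spoken back.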
All of these skills let the user either ask questions to query the current state of the service or make requests that perform actions.
Publishing the skill to market
Amazon allows developers to publish their Alexa skills to the skills market. This is a three-step process: the developer first registers the skill, tests it with a tool such as the browser-based Alexa simulator Echosim.io, and then submits the skill for approval.
Power features
Tools like Segment already make it possible to aggregate user-behavior data and send it to various data warehouses for processing. Developers who want to take advantage of data science and the insights it generates can connect their Alexa skills to such data pipelines; a sketch of recording skill events this way follows below. Developers can also create games for Alexa! In fact, some of the most popular skills on Alexa right now are games like Jeopardy. These are built as custom skills that can handle multi-sentence conversations.
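As a rough sketch, a custom skill’s handler could record each intent as an event with Segment’s Python library (`pip install analytics-python`) before responding; the write key, event name and properties here are placeholders, not part of any official Alexa integration.

```python
import analytics  # Segment's Python library

# Placeholder write key; in practice this would come from configuration.
analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"

def track_intent(event):
    """Record which intent a user triggered so it lands in the data warehouse."""
    user_id = event["session"]["user"]["userId"]
    intent_name = event["request"]["intent"]["name"]
    analytics.track(user_id, "Alexa Intent Invoked", {"intent": intent_name})
```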
Amazon Alexa is an interesting platform and a pioneer in how the “zero UI” interfaces of the future might look. Opening it up to developers is a sure way to build momentum.
Photo courtesy of Pexels user Pixabay