AWS recently announced updates to Amazon Lex, a service for building conversational interfaces into any application using voice and text. The service now has an enhanced management console and new V2 APIs, including a continuous streaming capability. According to the AWS News Blog, developers can now build and manage bots with the new management console and APIs, which bring three main benefits:
- First, developers can add a new language to a bot at any time and manage all languages as a single resource throughout the design, test, and deployment lifecycle. Furthermore, the console allows them to move quickly between languages to compare and refine their conversations.
- Second, the new Lex console and V2 APIs provide a simple information architecture where the bot intents and slot types are scoped to a specific language.
- Third, developers can use additional builder productivity tools and capabilities for more flexibility and control over the bot design process.
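In the V2 information architecture described above, each language is a locale under the bot, and intents and slot types are created within that locale. A minimal sketch of adding a language with the `lexv2-models` client from boto3 (the bot ID and confidence threshold are placeholder assumptions, not values from the announcement):

```python
def build_locale_request(bot_id, bot_version, locale_id, confidence=0.40):
    """Assemble a CreateBotLocale payload that scopes a new language
    (locale) to an existing V2 bot; intents and slot types are then
    created within this locale."""
    return {
        "botId": bot_id,
        "botVersion": bot_version,
        "localeId": locale_id,  # e.g. "es-ES" to add Spanish
        "nluIntentConfidenceThreshold": confidence,
    }

# With AWS credentials configured (assumption), send the request:
# import boto3
# models = boto3.client("lexv2-models")
# models.create_bot_locale(**build_locale_request("EXAMPLEBOTID", "DRAFT", "es-ES"))
```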
The company also provides developers with a new streaming conversation API that lets them configure a bot to pause a conversation and handle interruptions directly. With the streaming capabilities, they can quickly enhance the abilities of virtual contact-center agents and smart assistants. The new Wait and Continue feature, for instance, which can put the conversation into a waiting state, is surfaced during slot elicitation. In a recent AWS Machine Learning Blog post, the authors wrote:
You can configure the slot to respond with a “Wait” message such as, “Sure, let me know when you’re ready” when a caller asks for more time to retrieve information. You can also configure the bot to continue the conversation with a “Continue” response based on defined cues such as “I’m ready for the policy ID. Go ahead.”
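In the V2 model-building API, Wait and Continue is configured on a slot's value elicitation settings through a `waitAndContinueSpecification`. A hedged sketch of building that structure (the message texts are illustrative, and the dict is a request fragment rather than a complete slot definition):

```python
def message_group(text):
    """Wrap a plain-text message in the Lex V2 ResponseSpecification shape."""
    return {
        "messageGroups": [
            {"message": {"plainTextMessage": {"value": text}}}
        ],
        "allowInterrupt": True,
    }

def wait_and_continue_spec(wait_text, continue_text):
    """Build a waitAndContinueSpecification for a slot's value elicitation,
    pausing the conversation until the caller signals they are ready."""
    return {
        "waitingResponse": message_group(wait_text),
        "continueResponse": message_group(continue_text),
        "active": True,
    }

# spec = wait_and_continue_spec(
#     "Sure, let me know when you're ready.",
#     "Great, go ahead.",
# )
```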
To build a bot, developers can go to the new Amazon Lex console and create one from scratch or start with an example. Subsequently, they can continue with the bot configuration, set up IAM permissions, add one or more languages, use the intent editor, create a version, and test it.
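Once a draft is built, one way to exercise the final testing step outside the console is the V2 runtime's RecognizeText operation. A minimal sketch, assuming configured credentials and placeholder bot, alias, and session identifiers:

```python
def recognize_text_request(bot_id, alias_id, locale_id, session_id, text):
    """Assemble the arguments for a Lex V2 RecognizeText runtime call,
    which sends one user utterance to a bot and returns its responses."""
    return {
        "botId": bot_id,
        "botAliasId": alias_id,
        "localeId": locale_id,
        "sessionId": session_id,
        "text": text,
    }

# With AWS credentials configured (assumption), send an utterance:
# import boto3
# runtime = boto3.client("lexv2-runtime")
# response = runtime.recognize_text(**recognize_text_request(
#     "EXAMPLEBOTID", "TSTALIASID", "en_US", "test-session-1",
#     "I want to check my policy"))
# for msg in response.get("messages", []):
#     print(msg["content"])
```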
Besides the console, developers can also build bots using the AWS Command Line Interface (CLI) or a set of APIs. They cannot yet integrate Amazon Connect contact flows with the V2 APIs or the new console; however, the company plans to provide this integration as part of its near-term roadmap.
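As a sketch of the programmatic route, the helper below lists the bots visible through a `lexv2-models` client, following `nextToken` pagination; the client is passed in, so credential setup (assumed) stays outside the function:

```python
def list_bot_names(client):
    """Return the names of all bots visible to a lexv2-models client,
    following nextToken pagination across ListBots pages."""
    names, token = [], None
    while True:
        kwargs = {"nextToken": token} if token else {}
        page = client.list_bots(**kwargs)
        names.extend(b["botName"] for b in page.get("botSummaries", []))
        token = page.get("nextToken")
        if not token:
            return names

# Usage (assumption: AWS credentials and region configured):
# import boto3
# print(list_bot_names(boto3.client("lexv2-models")))
```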
Holger Mueller, principal analyst and vice president at Constellation Research Inc., told InfoQ:
Addressing the multilingual ability of a bot is enormous to make availability and management easier – and it adds tremendously to developer velocity for conversational applications. It also lays the foundation for assistants to understand multiple languages based on situational requirements, so it is good to see AWS moving forward to the Lex V2 APIs.
AWS is not the only cloud provider with a service for building bots. Microsoft, for instance, provides the Bot Framework, which integrates with Azure Cognitive Services, as well as Virtual Agents on the Microsoft cloud. Furthermore, Google offers Dialogflow on its cloud platform.
The new Amazon Lex capabilities are currently available in all existing AWS Regions. Users of the service only pay for what they use, and the pricing details are available on the pricing page. Furthermore, AWS will continue to support all existing APIs and bots, while the newly announced features are only available in the new console and V2 APIs. More details on the service are available on the documentation landing page.