Overview

Because Alexa does not communicate through Microsoft Bot Services, there are some best practices that are unique to building skills for an Alexa-enabled bot.

Avoid Long-Running Processes

One of the biggest limitations with Alexa is that it will keep the line of communication open with AtBot for only up to 8 seconds without a response. This means that if your flow has a step that could take a while to complete, Alexa will time out, and any communication that flow tries to make back to the user will be ignored. Try to get back to the user with additional questions or a closing response within 8 seconds of the last question.

Always End Your Skill with a Send Reply

During the open 8-second window, you can send "Send Reply" actions to inform the user that you are doing something. These are called progressive responses, and they will keep the line of communication open with Alexa.

When your skill is done, provide a final "Send Reply" that includes a Signal Response JSON of { EndSkill: true }. This informs Alexa that the skill is over and that it can stop listening.
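
For example, the Signal Response on that closing reply could look like the following (a minimal sketch with standard JSON quoting; the spoken text of the reply is whatever closing message you choose):

    {
      "EndSkill": true
    }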


Limited Flow Action Support

Voice is a very different medium from Teams or the Web. We do plan for expanded display support with APL (Alexa Presentation Language), but for now the supported actions are as follows:

  • When an Intent is Used
  • When a Registered Intent is Used
  • Get Response from User
  • Get Choice Response from User (only supported on devices with screens)
  • Get Choice List Response from User (only supported on devices with screens)
  • Send Reply

Conditional Logic Based on Device

Get Choice and Get Choice List actions will fail when sent to Alexa devices that do not have a screen. To work around this, use the Alexa_SupportsAPL Bot Scope property that comes over in the trigger to branch your flow to the appropriate action type, as sketched below.
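
As a sketch, a Condition action in your flow could test that property with an expression along these lines (the exact path to the Bot Scope values in the trigger output is an assumption to verify against your own trigger body, and the value may arrive as a string rather than a boolean):

    @equals(triggerBody()?['Alexa_SupportsAPL'], 'true')

Route the true branch to the Get Choice or Get Choice List action and the false branch to a plain Get Response from User.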


QnA Maker Integration

In order to allow Alexa to integrate seamlessly with QnA Maker, AtBot will look for certain metadata values on answers in the knowledge base that help it support voice responses.

Metadata Value speak

When a plain text answer from QnA Maker is sent to Alexa, it is spoken to the user as-is. However, when an answer is a multi-turn selection with prompts, or includes markup that can't be read aloud, the value of the speak metadata is spoken instead.

When this is spoken for a multi-turn response, AtBot will include the context needed to keep the user within the multi-turn scope.
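
As an illustration, an answer whose text contains markup could carry a voice-friendly alternative in its speak metadata. In QnA Maker's REST representation the pair looks roughly like this (the answer text, URL, and wording are all hypothetical):

    {
      "answer": "See the [setup guide](https://example.com/setup) for details.",
      "metadata": [
        { "name": "speak", "value": "Check the setup guide on our website for details." }
      ]
    }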

Metadata Value EndSkill

By default, AtBot will tell Alexa to continue listening after an answer is sent to the user. In the case where the user says "Thanks" or "Goodbye" or some other closing statement, you want Alexa to stop listening immediately. Adding the metadata value EndSkill:true will have Alexa read the answer and then not take any further input.
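
Continuing the sketch above, a closing answer could carry the pair like this (the wording is hypothetical; note that QnA Maker normalizes metadata names and values to lowercase, so EndSkill is matched as endskill):

    {
      "answer": "Goodbye! Talk to you again soon.",
      "metadata": [
        { "name": "endskill", "value": "true" }
      ]
    }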