Introduction
As technology develops, our testers have become experienced in testing a variety of cutting-edge environments, from AR and VR apps to smart speakers such as the Google Home and the Amazon Echo (and Dot). As Echo sales continue to grow, reaching more and more people, we wanted to share a useful introductory testing guide to Alexa – the Echo’s inbuilt concierge.
We’re presenting a number of test considerations we think are relevant, and we’ve kept this guide as simple as possible so it remains applicable to the common features of Alexa’s functionality. It is worth noting that the test approach we use is not the same as that required for Alexa’s platform checks and certification.
Access
An initial consideration for any developer looking for testing is the delivery method of the application or product undergoing testing. With Alexa this can be achieved in two ways:
- Via the Amazon Alexa App
If the application has already been certified by Amazon against their guidelines, it can be downloaded and installed via the Amazon Alexa app. This requires both the client and the tester to have relevant account access.
- Via Echosim – Simulator
Alternatively, if an application is not yet at the state where it can pass Amazon’s platform checks, the browser-based Alexa simulator ‘Echosim’ can be used to mimic the test environment. This is ideal for developers looking to test their application.
Skills
Alexa’s main functionality comes from the user speaking to the host device and using keywords or ‘utterances’ to trigger events within certain Skills. A Skill is a voice-driven process that allows Alexa to perform a requested task. For example, if a user asks for the weather in a specific place, Alexa will tell them using the corresponding skill that the user has installed.
These skills can be browsed and enabled on the device through a companion app, accessible on iOS, Android or a web browser. Upon enabling a skill, the user is given instructions within the companion app on how to use it, including example words or utterances.
When testing a new app or skill, these words/utterances and their expected responses will need to be provided by the development team in order to verify that the triggers behave as expected and the full journey completes. Skills can also be enabled through the voice command “Alexa, enable [skill name] skill”, provided the user knows the exact name of the skill. A test for this voice-command installation should form part of the test suite for any Alexa application.
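Utterance/expected-response pairs like these lend themselves to a simple data-driven check. The sketch below is purely illustrative: `weather_skill` and its response strings are hypothetical stand-ins for the skill under test, and in practice the pairs would come from the development team.

```python
def weather_skill(utterance: str) -> str:
    """Hypothetical skill handler: maps recognised utterances to responses."""
    normalised = utterance.lower().strip()
    if "weather in london" in normalised:
        return "It is currently 18 degrees and cloudy in London."
    if "enable weather skill" in normalised:
        return "Weather skill enabled."
    return "Sorry, I don't know that one."

# Utterance -> expected-response pairs supplied by the development team.
test_cases = {
    "Alexa, what's the weather in London?":
        "It is currently 18 degrees and cloudy in London.",
    "Alexa, enable weather skill":
        "Weather skill enabled.",
}

# Run each utterance through the handler and compare against expectations.
for utterance, expected in test_cases.items():
    actual = weather_skill(utterance)
    assert actual == expected, f"{utterance!r}: got {actual!r}"
print("All utterance tests passed.")
```

Keeping the pairs in a data table like this makes it easy to add new phrasings as the skill's vocabulary grows, without touching the test logic itself.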
Smart assistant updates
Amazon’s smart assistant (Alexa) updates automatically, adding new features that help users with everyday tasks and delivering bug fixes. Previously, if Alexa did not know the answer to a question, it would say so and that would be the end of the conversation. Recent updates allow Alexa to suggest skills that can help answer that specific question. As a testing provider, we monitor these updates, as they will impact future test cases. In this scenario, tests should check whether the new skill you’re developing is recommended when related questions are asked before a user has installed the skill.
Security
When testing Alexa, some skills require security considerations to be built into the test process. For example, if a skill allows the unlocking or disarming of other devices such as alarms, front doors or even phones, is there security within the skill, such as PIN codes or passwords, to protect the user?
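As a rough sketch of what such a check might look like in a skill’s back-end logic (the skill, the stored PIN and the response phrases here are all hypothetical examples):

```python
STORED_PIN = "4921"  # in a real skill this would come from the user's account settings

def unlock_front_door(spoken_pin: str) -> str:
    """Only perform the sensitive action when the spoken PIN matches."""
    if spoken_pin == STORED_PIN:
        return "Front door unlocked."
    return "That PIN is incorrect. The door remains locked."
```

A test suite for this kind of skill should exercise both branches: the correct PIN, incorrect PINs, and edge cases such as empty or partial PINs, confirming the sensitive action never fires without a valid match.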
Functional tests
Any functional tests will primarily be based around the roles the skill is expected to play, verifying that the data returned by the skill, or the action it takes, is accurate and as expected. Testers look at the utterances that Alexa responds to, working specifically around the syntax of these utterances to ensure key user journeys can be triggered via a variety of phrases, both common and uncommon. Many of these areas will be identified during the scoping phase of a project, but some may be defined by the tester using exploratory and edge-case testing techniques.
With Alexa and other AI platforms in a constant state of innovation and update, here at Zoonou we’re continuing to monitor and adapt to these changes, creating an increasing number of tests for Alexa that are being added to our test framework regularly.
To speak to us more about Alexa and AI testing, please contact us at info@zoonou.com and one of our team will get back to you. To find out more about how Zoonou can help to advise on your testing strategy, please see our QA Advisory and Consultancy page. If you’d like to get in touch about anything else, please head over to our Contact page.