Custom skills are a great feature of Aimybox: they enable the voice assistant to perform any action right on the device from which the user speaks their voice commands.

Google Assistant Actions and Amazon Alexa skills don't provide such an ability, because a voice action or skill runs entirely in the cloud and doesn't have access to the local device's services.

For example, a custom skill could launch an activity on the device or perform some actions in the user's local network.

How to create a custom skill

To create a new custom skill you have to implement the CustomSkill interface and then add your implementation to the Config of your Aimybox instance.
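Below is a minimal wiring sketch, not a definitive setup: the exact registration point may differ between Aimybox SDK versions, and it assumes here that the dialog API component accepts a set of custom skills, that the speechToText and textToSpeech components are already created, and that DeepLinkSkill is the example skill shown at the end of this article.

val dialogApi = AimyboxDialogApi(
    apiKey = "your_aimybox_api_key",                    // placeholder key
    unitId = UUID.randomUUID().toString(),
    customSkills = linkedSetOf(DeepLinkSkill(context))  // register your skill here
)
val aimybox = Aimybox(Config.create(speechToText, textToSpeech, dialogApi))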

Custom skill lifecycle

The custom skill has the following lifecycle methods that are called by the Aimybox service.

onRequest

This method is called by the Aimybox service right after the user's speech has been recognised. A custom skill can add additional data to the Request before it is sent to the configured dialog API.

For example, your custom skill can add the user's current geolocation to help your weather forecast service find the right data, as sketched below.
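A minimal sketch of such an onRequest inside your CustomSkill implementation, assuming the request data is a Gson JsonObject (as in the example at the end of this article); the "lat"/"lon" fields and the lastKnownLocation property are hypothetical and only illustrate the idea.

override suspend fun onRequest(request: Request) {
    // "lat"/"lon" are fields your NLP engine would have to read on its side;
    // lastKnownLocation stands in for however your app obtains the device location
    request.data.addProperty("lat", lastKnownLocation.latitude)
    request.data.addProperty("lon", lastKnownLocation.longitude)
}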

canHandle

This method should return true if your custom skill can handle a particular Response from the dialog API.

As a rule, a custom skill looks only at the action field of the Response object to determine whether it can handle a particular response.
If the Aimybox service doesn't find any custom skill that can handle the response, it just executes the default action: it synthesises speech from the response and continues speech recognition if needed.
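For instance, a skill that reacts to a couple of related actions could check the action field like this; the action names are assumptions that would be defined on the NLP engine side.

// Sketch: the skill declares which actions it is responsible for
private val supportedActions = setOf("play_music", "pause_music")

override fun canHandle(response: Response) = response.action in supportedActions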

onResponse

This is the main method of the custom skill; it should perform the actual action for a particular dialog API Response.

An Aimybox instance is also passed to this method because the custom skill has to manage the state of Aimybox if it handles the response. This means that your custom skill should either synthesise speech through the speak() method or call the standby() method right after processing.

Note that you have to call the standby() method if your custom skill doesn't synthesise anything back to the user.

This method also receives callDefaultHandler as an argument. You can call this function if you don't want to synthesise a response yourself and would like to invoke Aimybox's default behaviour, as in the sketch below.
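A minimal sketch of this contract, where tryPerformAction is a hypothetical helper standing in for your own device-side logic (it is not part of the Aimybox API):

override suspend fun onResponse(
    response: Response,
    aimybox: Aimybox,
    callDefaultHandler: suspend (Response) -> Unit
) {
    val handled = tryPerformAction(response) // your own device-side logic
    if (handled) {
        aimybox.standby() // we said nothing, so we put Aimybox into standby ourselves
    } else {
        callDefaultHandler(response) // fall back to the default behaviour
    }
}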

Example

Here is a simple example of such a custom skill that performs some logic on the device:

import android.content.Context
import android.content.Intent
import android.net.Uri
import com.google.gson.JsonObject

// Aimybox, CustomSkill, Request and Response come from the Aimybox SDK

class DeepLinkSkill(private val context: Context) : CustomSkill {

    override fun canHandle(response: Response) = response.action == "deep_link"

    override suspend fun onRequest(request: Request) {
        // This field will be available in the request on the NLP engine side
        request.data.addProperty("package", context.packageName)
    }

    override suspend fun onResponse(
        response: Response,
        aimybox: Aimybox,
        callDefaultHandler: suspend (Response) -> Unit
    ) {
        val data = response.data as JsonObject // we know the NLP engine returns JSON
        val uri = data["deeplink"].asString    // the voice skill added this field to the response data

        context.startActivity(
            Intent(Intent.ACTION_VIEW, Uri.parse(uri))
                .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK) // required when context is not an Activity
        )
        aimybox.standby() // we don't speak anything, so we have to put Aimybox into standby
    }
}