A voice skill is a piece of your voice assistant's functionality. This can be a voice-driven weather forecast, a phone dialler or any other feature of your app or device.

Your Aimybox-driven voice assistant should have at least one voice skill to perform some useful actions.

Here we describe how to work with built-in voice skills and create your own.

Aimybox console

If you plan to add multiple voice skills to your assistant, it's better to use the Aimybox console, which enables you to mix multiple voice skills powered by different NLU engines in a single voice project.

Just sign in to the Aimybox console with your GitHub account and create a new project, giving it a name and choosing one of the available languages.

Aimybox machine learning NLP supports only a limited set of languages

Enable some ready-to-use voice skills from the internal marketplace or add your own custom skills powered by any NLU engine. Then train the project and copy your Aimybox API key from the project's settings.

Paste this API key into the Aimybox initialisation block in your code like this:

val dialogApi = AimyboxDialogApi("your Aimybox project API key", unitId)

You can explore the sample app to learn how to instantiate and use Aimybox in your project.

Skills marketplace

Here you can see a built-in collection of voice skills that can be added to your project. Just click the Add button next to the skills you would like to add and then click Train project in the top right corner.

Note that you have to re-train your project each time you add or remove voice skills.

Custom skills

You are not limited to Aimybox's built-in voice skills. Custom skills enable you to create your own voice features for your assistant.

Click the Create custom skill button. Here you have to set up your voice skill's settings.

Skill name - an arbitrary skill name
Skill samples - a collection of natural language phrases that activate your skill
Custom parameters - if your skill uses some dynamic data like passwords, you can configure it here
Skill source - configure the place where your skill's logic is hosted

Custom skill source

Your custom skill can be hosted anywhere in the cloud. For example, you can use the Aimylogic or Dialogflow NLP services to create and host your voice skill.

Once the user speaks a phrase to the assistant, Aimybox recognises it and sends the recognised text to the Aimybox API. There the phrase is matched, using machine learning algorithms, against the sample phrases you've provided in your custom skills. If the phrase matches a skill successfully, the Aimybox API picks this voice skill, and it handles the user's query. All following user queries then go to this voice skill until the skill ends the session.
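The routing described above can be sketched in plain Kotlin. The real Aimybox API uses machine learning to match phrases; here a naive word-overlap score stands in for it, and all names are illustrative rather than part of the SDK:

```kotlin
// Hypothetical sketch of skill matching and session stickiness.
data class Skill(val name: String, val samples: List<String>)

// Score a query against a sample by the share of shared words
// (a stand-in for the real machine learning matcher).
fun score(query: String, sample: String): Double {
    val q = query.lowercase().split(" ").toSet()
    val s = sample.lowercase().split(" ").toSet()
    return q.intersect(s).size.toDouble() / maxOf(q.size, s.size)
}

// Pick the skill whose samples best match the recognised text,
// ignoring matches below an arbitrary confidence threshold.
fun matchSkill(query: String, skills: List<Skill>): Skill? =
    skills.maxByOrNull { skill -> skill.samples.maxOf { score(query, it) } }
        ?.takeIf { skill -> skill.samples.maxOf { score(query, it) } > 0.3 }

val skills = listOf(
    Skill("weather", listOf("what is the weather today", "weather forecast")),
    Skill("dialler", listOf("call my mom", "dial a number"))
)

// Once a skill is picked, following queries stay with it until the session ends.
var activeSkill: Skill? = null
fun route(query: String): Skill? {
    activeSkill = activeSkill ?: matchSkill(query, skills)
    return activeSkill
}

println(route("what is the weather in london")?.name) // weather
println(route("and tomorrow")?.name)                  // still weather: the session is sticky
```

A real session would also be cleared when the skill signals that the conversation is over, so that the next query goes through matching again.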

Direct NLU

If your assistant requires only a single voice skill, it may be more convenient to eliminate the Aimybox API from the query chain and connect the assistant directly to the NLU engine that implements the voice feature for your project.

You can develop a voice skill that implements the Aimybox HTTP API and connect it directly to your Aimybox service in your application. In the Android SDK you can pass the endpoint URL of your voice skill right in the AimyboxDialogApi constructor.

Thus every user's request will go directly to the URL you've provided.

For example, you can develop a voice skill using an Aimylogic webhook.
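A skill connected this way only has to answer HTTP requests in the shape the Aimybox HTTP API expects. The field names below (query, text, question) are assumptions based on that API and should be checked against its reference; the handler itself is a plain function you would wire into any HTTP server, not framework code:

```kotlin
// Minimal sketch of a custom skill's webhook logic.
// Field names are assumed, not copied from the Aimybox API reference.
data class SkillRequest(val query: String, val unit: String)
data class SkillResponse(val text: String, val question: Boolean)

// question = true asks Aimybox to keep listening, so the user's
// next query comes back to this same skill.
fun handle(request: SkillRequest): SkillResponse =
    if ("weather" in request.query.lowercase())
        SkillResponse("It is sunny today.", question = false)
    else
        SkillResponse("I can only tell the weather. What would you like to know?", question = true)

println(handle(SkillRequest("what's the weather", "unit-1")).text)
```

With Direct NLU, every recognised phrase reaches this handler as-is, so the skill itself is responsible for deciding what it can and cannot answer.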

Other NLU engines

You can also connect your project directly to any other NLU engine you use. Here is the list of currently supported engines.

If the NLU engine you wish to use is missing, you are always welcome to implement it and contribute it to the upstream GitHub repository.