Speech-to-text plugin, leveraging iOS and Android's built-in recognition engines.

```shell
npm i --save nativescript-speech-recognition
```

NativeScript Speech Recognition


Demo screenshots: the plugin in action while recognizing Dutch 🇳🇱 and after recognizing American-English 🇺🇸.


From the command prompt go to your app's root folder and execute:

NativeScript 7+:

```shell
ns plugin add nativescript-speech-recognition
```

NativeScript < 7:

```shell
tns plugin add nativescript-speech-recognition@1.5.0
```


You'll need to test this on a real device as a Simulator/Emulator doesn't have speech recognition capabilities.



Depending on the OS version a speech engine may not be available.


```js
// require the plugin
var SpeechRecognition = require("nativescript-speech-recognition").SpeechRecognition;

// instantiate the plugin
var speechRecognition = new SpeechRecognition();

// check whether a speech engine is available
speechRecognition.available().then(
  function(available) {
    console.log(available ? "YES!" : "NO");
  }
);
```

```typescript
// import the plugin
import { SpeechRecognition } from "nativescript-speech-recognition";

class SomeClass {
  private speechRecognition = new SpeechRecognition();

  public checkAvailability(): void {
    this.speechRecognition.available().then(
      (available: boolean) => console.log(available ? "YES!" : "NO"),
      (err: string) => console.log(err)
    );
  }
}
```

You can let startListening handle permissions when needed, but if you want more control over when the permission popups are shown, you can use this function:

```typescript
this.speechRecognition.requestPermission().then((granted: boolean) => {
  console.log("Granted? " + granted);
});
```

On iOS this will trigger two prompts:

The first prompt requests consent to let Apple analyze the voice input. The user will see a consent screen which you can extend with your own message by adding a fragment like this to app/App_Resources/iOS/Info.plist:

```xml
<key>NSSpeechRecognitionUsageDescription</key>
<string>My custom recognition usage description. Overriding the default empty one in the plugin.</string>
```

The second prompt requests access to the microphone:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>My custom microphone usage description. Overriding the default empty one in the plugin.</string>
```


```typescript
// import the options
import { SpeechRecognitionTranscription } from "nativescript-speech-recognition";

this.speechRecognition.startListening(
  {
    // optional, uses the device locale by default
    locale: "en-US",
    // set to true to get results back continuously
    returnPartialResults: true,
    // this callback will be invoked repeatedly during recognition
    onResult: (transcription: SpeechRecognitionTranscription) => {
      console.log(`User said: ${transcription.text}`);
      console.log(`User finished?: ${transcription.finished}`);
    },
    onError: (error: string | number) => {
      // because of the way iOS and Android differ, this is either:
      // - iOS: A 'string', describing the issue.
      // - Android: A 'number', referencing an 'ERROR_*' constant from Android's 'SpeechRecognizer'.
      //   If that code is either 6 or 7 you may want to restart listening.
    }
  }
).then(
  (started: boolean) => { console.log(`started listening`) },
  (errorMessage: string) => { console.log(`Error: ${errorMessage}`); }
).catch((error: string | number) => {
  // same as the 'onError' handler, but this may not fire if the error occurs after
  // listening has successfully started (because that resolves the promise),
  // hence the 'onError' handler was created.
});
```
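Since iOS reports a string and Android a numeric code, the restart decision mentioned in the comments above can be isolated in a small helper. This is a hypothetical sketch, not part of the plugin's API; the function name is invented here, and the hard-coded codes 6 and 7 are taken from the note above:

```typescript
// Hypothetical helper (not part of the plugin): decide whether to restart
// listening after an error. iOS passes a string describing the issue,
// Android passes a numeric 'ERROR_*' code; per the note above, Android
// codes 6 and 7 are the ones worth a retry.
function shouldRestartListening(error: string | number): boolean {
  return typeof error === "number" && (error === 6 || error === 7);
}
```

You could call this from the `onError` callback and invoke `startListening` again whenever it returns true.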
Angular tip

If you're using this plugin in Angular, note that the onResult callback is not part of Angular's lifecycle. So either update the UI inside an ngZone run, or use ChangeDetectorRef to trigger change detection manually.
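Framework questions aside, collecting those repeated onResult callbacks is plain state handling. A minimal, hypothetical sketch; the `TranscriptionHolder` class is illustrative and not part of the plugin, and only the `text`/`finished` shape is taken from the example above:

```typescript
// Minimal shape of a transcription result, mirroring the fields used
// in the startListening example above.
interface Transcription {
  text: string;
  finished: boolean;
}

// Hypothetical accumulator: keeps the latest partial result and remembers
// the final text once a transcription arrives with finished === true.
class TranscriptionHolder {
  latest = "";
  final: string | null = null;

  // Pass this method as (or call it from) the 'onResult' callback.
  onResult(t: Transcription): void {
    this.latest = t.text;
    if (t.finished) {
      this.final = t.text;
    }
  }
}
```

With `returnPartialResults: true` the holder's `latest` field tracks the in-progress text, while `final` stays null until recognition finishes.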



You can stop the recognition engine by calling stopListening:

```typescript
this.speechRecognition.stopListening().then(
  () => { console.log(`stopped listening`) },
  (errorMessage: string) => { console.log(`Stop error: ${errorMessage}`); }
);
```

Demo app (Angular)

This plugin is part of the plugin showcase app I built using Angular.

Angular video tutorial

Rather watch a video? Check out this tutorial on YouTube.