NativeScript OCR


Optical Character Recognition - powered by Tesseract


tns plugin add nativescript-ocr


You'll need to add language files to help Tesseract recognize text in the images you feed it.

Download version 3.04.00 of the tessdata files here and add your required language to the app/tesseract/tessdata/ folder of your app.

Note that if your language has multiple files (like English: there are 9 files matching eng.*), copy all of those files to the folder.
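For example, a minimal sketch of the copy step (the download path is an assumption; use wherever you unpacked the 3.04.00 tessdata archive):

```shell
# Create the folder the plugin expects, relative to your project root
mkdir -p app/tesseract/tessdata

# Copy every file for your language(s) into it, e.g. all eng.* files:
# cp ~/Downloads/tessdata-3.04.00/eng.* app/tesseract/tessdata/
```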


iOS searches for the tessdata folder in app/App_Resources/iOS, but instead of duplicating the folder you can create a symbolic link:

cd app/App_Resources/iOS
ln -s ../../tesseract/tessdata




This is just a basic example using the default settings; look at the TypeScript code below for a more elaborate example.

var OCRPlugin = require("nativescript-ocr");
var ocr = new OCRPlugin.OCR();

ocr.retrieveText({
  image: myImage
}).then(
    function (result) {
      console.log("Result: " + result.text);
    },
    function (error) {
      console.log("Error: " + error);
    });

This example shows how to use all possible (but optional) options you can pass into retrieveText:

import { OCR, RetrieveTextResult } from "nativescript-ocr";
import { ImageSource } from "image-source";

export class MyOCRClass {
  private ocr: OCR;

  constructor() {
    this.ocr = new OCR();
  }

  doRecognize(): void {
    let img: ImageSource = new ImageSource();

    img.fromFile("~/samples/scanned.png").then((success: boolean) => {
      if (success) {
        this.ocr.retrieveText({
          image: img,
          whitelist: "ABCDEF", // you can include only certain characters in the result
          blacklist: "0123456789", // .. or you can exclude certain characters from the result
          onProgress: (percentage: number) => {
            console.log(`Decoding progress: ${percentage}%`);
          }
        }).then(
            (result: RetrieveTextResult) => {
              console.log(`Result: ${result.text}`);
            },
            (error: string) => {
              console.log(`Error: ${error}`);
            });
      }
    });
  }
}