Make sure to check out this demo app because it has almost all ML Kit features this plugin currently supports! Steps:
git clone https://github.com/EddyVerbruggen/nativescript-plugin-firebase
cd nativescript-plugin-firebase/src
npm run setupandinstall (you can skip through the plugin's y/n prompts; they're ignored in this case)
npm run demo-ng.ios (or .android)
During plugin installation you'll be asked whether or not you want to use ML Kit, and which of its features.
In case you're upgrading and you have the `firebase.nativescript.json` file in your project root, it's safest to rename it (so you can see what your old configuration was), then clean your platforms folder (`rm -rf platforms`) and build your app again. You will be prompted which Firebase features you'll want to use.
In case you want to use the camera to feed images to ML Kit, add these to your app resources' `AndroidManifest.xml`:
<uses-permission android:name="android.permission.CAMERA"/>
<uses-feature android:name="android.hardware.camera" android:required="false" />
<uses-feature android:name="android.hardware.camera.autofocus" android:required="false" />
In case you're using the camera on iOS, open `iOS/Info.plist` in your app resources folder and add this somewhere in the file (if it's not already there):
<key>NSCameraUsageDescription</key>
<string>Your reason here</string> <!-- better change this 😎 -->
In order to compile for iOS, the deployment target must be >= 9.0. Edit the file `build.xcconfig` and check that you have the following line (without this line the default target will be 8.0 and compilation will fail with "targeted OS version does not support use of thread local variables"):
IPHONEOS_DEPLOYMENT_TARGET = 9.0;
There are two ways of using ML Kit:
- On-device. These features have been enhanced to not only interpret still images, but also run against a live camera feed. Why? Because it's fr***ing cool!
- Cloud. The cloud has much larger and always up-to-date models, so results will be more accurate. Since this is a remote service, recognition speed depends heavily on the size of the images you send to the cloud.
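If you want the best of both worlds you could even pick the model at runtime. Here's a minimal sketch (the `recognizeTextPreferringCloud` helper is hypothetical; it combines the text recognition calls shown further below with the `tns-core-modules` connectivity module) that falls back to the bundled on-device model when there's no network connection:

```typescript
import { connectionType, getConnectionType } from "tns-core-modules/connectivity";
import { ImageSource } from "tns-core-modules/image-source";
const firebase = require("nativescript-plugin-firebase");

// Hypothetical helper: use the (larger, more accurate) cloud model when online,
// and fall back to the on-device model when there's no connection.
function recognizeTextPreferringCloud(imageSource: ImageSource): Promise<any> {
  const offline = getConnectionType() === connectionType.none;
  return offline
      ? firebase.mlkit.textrecognition.recognizeTextOnDevice({ image: imageSource })
      : firebase.mlkit.textrecognition.recognizeTextCloud({ image: imageSource });
}
```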
Optionally (but recommended) for Android, you can have the relevant ML model(s) downloaded to the device automatically after your app is installed from the Play Store. Add this to your `<resources>/Android/AndroidManifest.xml`:
<meta-data
android:name="com.google.firebase.ml.vision.DEPENDENCIES"
android:value="ocr,face,.." />
Replace `ocr,face,..` with whichever features you need. So if you only need Text recognition, use `"ocr"`, but if you want to perform Text recognition, Face detection, Barcode scanning, and Image labeling on-device, use `"ocr,face,barcode,label"`.
Note that (because of how iOS works) we bundle the models you've picked during plugin configuration with your app. So if you have a change of heart, re-run the configuration as explained at the top of this document.
To be able to use Cloud features you need to do two things:
- Enable the Cloud Vision API:
  - Open the Cloud Vision API in the Cloud Console API library.
  - Ensure that your Firebase project is selected in the menu at the top of the page.
  - If the API is not already enabled, click Enable.
- Upgrade to a Blaze plan:
  - Open the Firebase console.
  - Select your project.
  - In the bottom left, make sure you're on the Blaze plan, or hit the 'Upgrade' button.
Feature | On-device | Cloud |
---|---|---|
Text recognition | ✅ | ✅ |
Face detection | ✅ | |
Barcode scanning | ✅ | |
Image labeling | ✅ | ✅ |
Landmark recognition | | ✅ |
Custom model inference | | |
import { MLKitRecognizeTextResult } from "nativescript-plugin-firebase/mlkit/textrecognition";
const firebase = require("nativescript-plugin-firebase");
firebase.mlkit.textrecognition.recognizeTextOnDevice({
image: imageSource // a NativeScript Image or ImageSource, see the demo for examples
}).then((result: MLKitRecognizeTextResult) => { // just look at this type to see what else is returned
console.log(result.text ? result.text : "");
}).catch(errorMessage => console.log("ML Kit error: " + errorMessage));
var firebase = require("nativescript-plugin-firebase");
firebase.mlkit.textrecognition.recognizeTextOnDevice({
image: imageSource // a NativeScript Image or ImageSource, see the demo for examples
}).then(function(result) {
console.log(result.text ? result.text : "");
}).catch(function (errorMessage) { return console.log("ML Kit error: " + errorMessage); });
import { MLKitRecognizeTextResult } from "nativescript-plugin-firebase/mlkit/textrecognition";
const firebase = require("nativescript-plugin-firebase");
firebase.mlkit.textrecognition.recognizeTextCloud({
image: imageSource, // a NativeScript Image or ImageSource, see the demo for examples
})
.then((result: MLKitRecognizeTextResult) => console.log(result.text ? result.text : ""))
.catch(errorMessage => console.log("ML Kit error: " + errorMessage));
var firebase = require("nativescript-plugin-firebase");
firebase.mlkit.textrecognition.recognizeTextCloud({
image: imageSource // a NativeScript Image or ImageSource, see the demo for examples
}).then(function(result) {
console.log(result.text ? result.text : "");
}).catch(function (errorMessage) { return console.log("ML Kit error: " + errorMessage); });
The exact details of using the live camera view depend on whether or not you're using Angular / Vue.
You can use any view-related property you like as we're extending `ContentView`.
So things like `class`, `row`, `width`, `horizontalAlignment`, and `style` are all valid properties.
Plugin-specific are the optional properties `processEveryNthFrame`, `preferFrontCamera` (default `false`), `torchOn`, and `pause`, as well as the optional `scanResult` event.
You can set `processEveryNthFrame` to a higher value than the default (5) to put less strain on the device.
Especially 'Face detection' seems a bit more CPU intensive, but for 'Text recognition' the default is fine.
If you don't destroy the scanner page/modal but instead briefly want to hide it (but keep it alive),
you can pause the scanner with the `pause` property.
Look at the demo app to see how to wire up that `onTextRecognitionResult` function, and how to wire `torchOn` to a `Switch`.
Register a custom element like so in the component/module:
import { registerElement } from "nativescript-angular/element-registry";
registerElement("MLKitTextRecognition", () => require("nativescript-plugin-firebase/mlkit/textrecognition").MLKitTextRecognition);
Now you're able to use the registered element in the view:
<MLKitTextRecognition
class="my-class"
width="260"
height="380"
processEveryNthFrame="10"
preferFrontCamera="false"
[pause]="pause"
[torchOn]="torchOn"
(scanResult)="onTextRecognitionResult($event)">
</MLKitTextRecognition>
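The backing Angular component could look something like this minimal sketch (the component and its file names are made up for illustration; the assumption that the event's `value` property carries an `MLKitRecognizeTextResult` is based on the demo app):

```typescript
import { Component } from "@angular/core";
import { MLKitRecognizeTextResult } from "nativescript-plugin-firebase/mlkit/textrecognition";

@Component({
  selector: "TextRecognition",
  moduleId: module.id,
  templateUrl: "./text-recognition.component.html" // assumed file containing the template above
})
export class TextRecognitionComponent {
  pause: boolean = false;   // bound via [pause]
  torchOn: boolean = false; // bound via [torchOn]; wire it to a Switch if you like

  togglePause(): void {
    // briefly hide/resume the scanner without destroying it (see the 'pause' property above)
    this.pause = !this.pause;
  }

  onTextRecognitionResult(event: any): void {
    // Assumption (based on the demo app): the event's 'value' holds an MLKitRecognizeTextResult
    const result: MLKitRecognizeTextResult = event.value;
    console.log(result && result.text ? result.text : "");
  }
}
```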
Declare a namespace at the top of the embedding page, and use it anywhere on the page:
<Page xmlns:FirebaseMLKitTextRecognition="nativescript-plugin-firebase/mlkit/textrecognition">
<OtherTags/>
<FirebaseMLKitTextRecognition:MLKitTextRecognition
class="my-class"
width="260"
height="380"
processEveryNthFrame="3"
preferFrontCamera="false"
pause="{{ pause }}"
scanResult="onTextRecognitionResult" />
</Page>
Note that since NativeScript 4 the `Page` tag may actually be a `TabView`, but adding the namespace declaration to the `TabView` works just as well.
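For the plain XML flavor, a minimal code-behind sketch could look like the one below. The file name and the `navigatingTo` wiring are assumptions (the `Page` above would also need `navigatingTo="navigatingTo"`), and the `event.value` access is again based on the demo app:

```typescript
// Hypothetical code-behind (e.g. text-recognition-page.ts); assumes the Page above
// also declares navigatingTo="navigatingTo".
import { EventData, Observable } from "tns-core-modules/data/observable";
import { Page } from "tns-core-modules/ui/page";
import { MLKitRecognizeTextResult } from "nativescript-plugin-firebase/mlkit/textrecognition";

const viewModel = new Observable();
viewModel.set("pause", false); // bound via pause="{{ pause }}"

export function navigatingTo(args: EventData): void {
  (<Page>args.object).bindingContext = viewModel;
}

export function onTextRecognitionResult(event: any): void {
  // Assumption (based on the demo app): the event's 'value' holds an MLKitRecognizeTextResult
  const result: MLKitRecognizeTextResult = event.value;
  console.log(result && result.text ? result.text : "");
}
```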
import { MLKitDetectFacesOnDeviceResult } from "nativescript-plugin-firebase/mlkit/facedetection";
const firebase = require("nativescript-plugin-firebase");
firebase.mlkit.facedetection.detectFacesOnDevice({
image: imageSource, // a NativeScript Image or ImageSource, see the demo for examples
detectionMode: "accurate", // default "fast"
enableFaceTracking: true, // default false
minimumFaceSize: 0.25 // default 0.1 (which means the face must be at least 10% of the image)
})
.then((result: MLKitDetectFacesOnDeviceResult) => console.log(JSON.stringify(result.faces)))
.catch(errorMessage => console.log("ML Kit error: " + errorMessage));
The basics are explained above for 'Text recognition', so we're only showing the differences here.
import { registerElement } from "nativescript-angular/element-registry";
registerElement("MLKitFaceDetection", () => require("nativescript-plugin-firebase/mlkit/facedetection").MLKitFaceDetection);
<MLKitFaceDetection
width="260"
height="380"
detectionMode="accurate"
enableFaceTracking="true"
minimumFaceSize="0.2"
preferFrontCamera="true"
[torchOn]="torchOn"
(scanResult)="onFaceDetectionResult($event)">
</MLKitFaceDetection>
import { BarcodeFormat, MLKitScanBarcodesOnDeviceResult } from "nativescript-plugin-firebase/mlkit/barcodescanning";
const firebase = require("nativescript-plugin-firebase");
firebase.mlkit.barcodescanning.scanBarcodesOnDevice({
image: imageSource,
formats: [BarcodeFormat.QR_CODE, BarcodeFormat.CODABAR] // limit recognition to certain formats (faster), or leave out entirely for all formats (default)
})
.then((result: MLKitScanBarcodesOnDeviceResult) => console.log(JSON.stringify(result.barcodes)))
.catch(errorMessage => console.log("ML Kit error: " + errorMessage));
The basics are explained above for 'Text recognition', so we're only showing the differences here.
import { registerElement } from "nativescript-angular/element-registry";
registerElement("MLKitBarcodeScanner", () => require("nativescript-plugin-firebase/mlkit/barcodescanning").MLKitBarcodeScanner);
<MLKitBarcodeScanner
width="260"
height="380"
formats="QR_CODE, EAN_8, EAN_13"
preferFrontCamera="false"
[torchOn]="torchOn"
(scanResult)="onBarcodeScanningResult($event)">
</MLKitBarcodeScanner>
Note that `formats` is optional but recommended for better recognition performance. Supported types:
`CODE_128`, `CODE_39`, `CODE_93`, `CODABAR`, `DATA_MATRIX`, `EAN_13`, `EAN_8`, `ITF`, `QR_CODE`, `UPC_A`, `UPC_E`, `PDF417`, `AZTEC`.
import { MLKitImageLabelingOnDeviceResult } from "nativescript-plugin-firebase/mlkit/imagelabeling";
const firebase = require("nativescript-plugin-firebase");
firebase.mlkit.imagelabeling.labelImageOnDevice({
image: imageSource,
confidenceThreshold: 0.6 // this will only return labels with at least 0.6 (60%) confidence. Default 0.5.
})
.then((result: MLKitImageLabelingOnDeviceResult) => console.log(JSON.stringify(result.labels)))
.catch(errorMessage => console.log("ML Kit error: " + errorMessage));
import { MLKitImageLabelingCloudResult } from "nativescript-plugin-firebase/mlkit/imagelabeling";
const firebase = require("nativescript-plugin-firebase");
firebase.mlkit.imagelabeling.labelImageCloud({
image: imageSource,
modelType: "stable", // either "latest" or "stable" (default "stable")
maxResults: 5 // default 10
})
.then((result: MLKitImageLabelingCloudResult) => console.log(JSON.stringify(result.labels)))
.catch(errorMessage => console.log("ML Kit error: " + errorMessage));
The basics are explained above for 'Text recognition', so we're only showing the differences here.
import { registerElement } from "nativescript-angular/element-registry";
registerElement("MLKitImageLabeling", () => require("nativescript-plugin-firebase/mlkit/imagelabeling").MLKitImageLabeling);
<MLKitImageLabeling
width="260"
height="380"
confidenceThreshold="0.6"
preferFrontCamera="false"
[torchOn]="torchOn"
(scanResult)="onImageLabelingResult($event)">
</MLKitImageLabeling>
import { MLKitLandmarkRecognitionCloudResult } from "nativescript-plugin-firebase/mlkit/landmarkrecognition";
const firebase = require("nativescript-plugin-firebase");
firebase.mlkit.landmarkrecognition.recognizeLandmarksCloud({
image: imageSource,
modelType: "latest", // either "latest" or "stable" (default "stable")
maxResults: 8 // default 10
})
.then((result: MLKitLandmarkRecognitionCloudResult) => console.log(JSON.stringify(result.landmarks)))
.catch(errorMessage => console.log("ML Kit error: " + errorMessage));
Coming soon. See issue #702.