This sample records audio on a physical Android device and attempts to classify those recordings. The supported classification models include YAMNet and a custom speech command model trained with TensorFlow's Model Maker. These instructions walk you through building and running the demo on an Android device.
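Audio classifiers such as YAMNet operate on a mono waveform of float samples normalized to [-1.0, 1.0] (YAMNet specifically expects 16 kHz input). The sample app handles this in C#, but the core PCM-to-float conversion can be sketched in a few lines of plain Python; the function name here is hypothetical and not part of the sample:

```python
import array

def pcm16_to_float(pcm_bytes):
    """Convert raw 16-bit little-endian PCM audio to float samples in [-1.0, 1.0]."""
    samples = array.array("h")          # signed 16-bit integers
    samples.frombytes(pcm_bytes)
    return [s / 32768.0 for s in samples]

# Example: silence (0) followed by the maximum positive amplitude (32767)
raw = (0).to_bytes(2, "little", signed=True) + (32767).to_bytes(2, "little", signed=True)
waveform = pcm16_to_float(raw)
```

The resulting list of floats is the shape of input a waveform model consumes after resampling to the rate the model expects.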
The model files are downloaded via MSBuild scripts when you build and run the app, so you don't need to download the TFLite models into the project manually.
This application should be run on a physical Android device.
- The Visual Studio IDE. This sample has been tested with Visual Studio 2022 for Mac.
- A physical Android device running a minimum of Android 6.0 (API level 23) with developer mode enabled. The process of enabling developer mode may vary by device.
- Open Visual Studio. From the Welcome screen, select Open a local Visual Studio project, solution, or file.
- From the Open File or Project window that appears, navigate to and select the TensorFlowLiteExamples/AudioClassification solution. Click Open.
- With your Android device connected to your computer and developer mode enabled, click the black Run arrow in Visual Studio.
Downloading the models, extracting them, and placing them into the assets folder is managed automatically by the AudioClassification.csproj file.
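For reference, MSBuild provides a built-in `DownloadFile` task that is commonly used for this kind of build-time asset fetch. The sketch below illustrates the general pattern only; the target name, URL, and file names are placeholders, not the actual contents of AudioClassification.csproj:

```xml
<!-- Sketch only: the real target, URLs, and file names live in AudioClassification.csproj. -->
<Target Name="DownloadTFLiteModels" BeforeTargets="Build">
  <DownloadFile
      SourceUrl="https://example.com/path/to/yamnet.tflite"
      DestinationFolder="$(MSBuildProjectDirectory)\Assets"
      Condition="!Exists('$(MSBuildProjectDirectory)\Assets\yamnet.tflite')" />
</Target>
```

The `Condition` guard means the model is fetched only once; subsequent builds reuse the file already in the assets folder.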