[MUSIC PLAYING]
KHANH LEVIET: Hi, everyone.
I’m Khanh from the
TensorFlow team.
And today I’d like to show you
how to train a TensorFlow Lite
model that can recognize custom
images using your own data set.
And then my colleague
Hoi will show you
how to integrate the model into
an Android app using the new ML
Model Binding Plugin
in Android Studio.
In this example,
we’ll build an app
that can recognize five
different types of flowers,
daisy, rose,
sunflower, and so on.
Now, let’s get started.
We’ll start with training
our model in Google Colab.
Google Colab is an environment that allows you to use a GPU for free to quickly train your machine learning model.
By using Colab, you don’t
need to install any software
to your computer.
You only need your web browser.
You can go to this Colab
Notebook from this link.
This is also included in
the video description.
Let’s start by installing TensorFlow Lite Model Maker in Colab.
Model Maker is a tool that allows easy training of TensorFlow Lite models with no machine learning expertise required.
Now the installation
has finished.
Let’s import the
required Python package.
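For reference, the install and import cells look roughly like this (a sketch; the exact import paths vary a little between Model Maker versions):

```python
# Install TensorFlow Lite Model Maker into the Colab runtime
!pip install -q tflite-model-maker

# Imports used in the rest of the notebook
from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader
```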
Let’s download our training
images from TensorFlow
to our Colab machine.
You can also download
the training data
from this URL to your computer
to explore the training data
set.
You will find that it contains images of five different types of flowers, organized into one folder per type.
So for daisy, we have several hundred images, and likewise for dandelion, rose, and so on.
And if you have training
images that you want to use,
you can also upload
them to Colab.
Just go to the Files
tab right here,
click on the Upload
button, and then you
can upload your images.
Next, we load our images into Model Maker.
Here you can see that
I split the data set
into train data and test data.
The number 0.9 here means that we’ll use 90% of our data for training the model, and the remaining 10% to test its accuracy.
I’ll explain more about
that later in this video.
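The loading and splitting cells look roughly like this (a sketch; image_path here is an assumed name for wherever the flower photos were downloaded):

```python
# Load the labelled images, one folder per flower type,
# then hold out 10% of them for testing
data = DataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
```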
Now the loading
process has completed.
Here we have 3,670 images
in five categories, daisy,
dandelion, roses,
sunflowers, and tulips.
Now we have our
training data set.
Let’s train our TensorFlow Lite
model with the flower data set.
To train faster,
make sure that you
have enabled GPU in your Colab.
Go to Runtime,
Change Runtime Type,
and make sure that GPU
is already selected.
You can see that here
I use only the train
data, which is the 90%
portion of our data set
that I loaded earlier.
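The training cell itself is roughly a one-liner (a sketch):

```python
# Train an image classifier on the 90% training split
model = image_classifier.create(train_data)
```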
You can see that the model
is now being trained,
and the accuracy is improving.
Now, after about 90
seconds of training,
I have a model with 93%
accuracy, which is quite good.
You will probably see your
accuracy number a little bit
different from my
accuracy number,
because before training,
the model is initialized
with random weight values.
So your initial values are very likely to be different from my initial values, which leads to a slight difference in the final accuracy.
And next, let’s
evaluate our model.
You can see here that I used the test data, which is the remaining 10% of our data set that I left out earlier.
The reason why I did that
was to keep some data
that the model didn’t see during
training so that we can test
if the model can
generalize well to new data
that it has never seen before.
Our test accuracy is about
92%, which is a little bit
lower than our
training accuracy,
but still pretty good.
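The evaluation cell looks roughly like this (a sketch):

```python
# Measure accuracy on the held-out 10% the model never saw
loss, accuracy = model.evaluate(test_data)
```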
And finally, let’s export
our TensorFlow Lite model
to integrate it into
our Android app.
You can see that the export
process has finished,
and now I have the TensorFlow
Lite model in my Colab.
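The export cell is roughly this (a sketch; the export directory is up to you):

```python
# Write model.tflite into the chosen folder
model.export(export_dir='.')
```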
Let’s download the model
to our local computer.
And then I’ll share the model with Hoi, my colleague from the Android team.
HOI LAM: Thanks, Khanh.
I’m Hoi from the Android team.
Today, I’ll walk you
through the steps required
to integrate the TF Lite
model that Khanh has just
built into an Android app.
The first step is to set up.
Go to developer.android.com/studio/preview and select Download 4.1 Beta 1.
You have the option to
keep several versions
of Android Studio
installed at the same time.
If that’s what you want to do, please scroll down on this page, refer to the documentation section called Run Alongside Stable, and click on Install Both to find out more.
Once you have Android Studio Beta installed, go to the URL in the video description down below to download this codelab’s Android Studio project.
It contains the basic
CameraX functionalities which
we will use to feed the model.
Just like any good
cooking show, I’ve
already downloaded the
Android Studio Beta,
installed it, and also
downloaded the Android Studio
project that we’re
going to work on.
So let’s take a look.
So this is Android Studio
version 4.1 beta 1.
Let’s open up the project
and see what we have.
So let me just find
it, click Open.
And here, Android Studio is
opening the project itself.
It’s indexing all
the files, and this
might take a while
depending on what you
have set up on your machine.
So it looks like
everything is fine.
OK.
As we can see, we
have two modules here
in this project,
Start and Finish.
So Start is the project that we’re going to work on, and Finish is basically the finished project. So if you ever get stuck in this particular codelab, go to Finish and you’ll be able to see what the code should be.
So we’ll be working with
Start throughout this project.
And one of the
things that you can
do at the very beginning
of this project
is to just run
the Start project.
So just click here to run it.
So Gradle is building
and it’s now installing.
And this is what you should
see on your screen, which
is a preview of what
the camera sees.
And at the bottom, you have three fake labels that are printed right next to some random numbers.
And that’s the start of our codelab.
OK, let’s stop that and go back.
And one of the ways to
really easily navigate through this codelab is by viewing all the to-dos. So if you click on View, Tool Windows, TODO, you’ll see a list of the different items within the project that we need to do.
And one of the ways to easily see which ones are under the Start module is to go here, to Group By, and select Module. And here, the to-do items are grouped under Finish and Start.
Under Start, these
are the actual places
where we expect to insert code.
The first to-do actually has nothing to do with code; it’s about importing the actual model.
So let’s do that.
Let’s import the model that Khanh has created. OK, so the easiest way to import the model that Khanh has just produced is with the new tooling that you get in Android Studio 4.1.
To start, right click
on a module where
you want to put the
particular TF Lite model,
select New, select
Other, and here, we
have the TensorFlow Lite model.
Click on that and then
select the model location.
Open it up.
I’ve got it under
here, under Download.
So one thing you may have noticed is that I’ve actually changed the name of the model that I downloaded. Khanh named it model.tflite. If you import it without changing the name, then the class that is generated for you to use that model will be called Model.
And Model is an awfully common name, so in order to reduce confusion, always change the name of your model to something that is a little bit more meaningful, and you will see why further down in this particular screencast.
So now select that
model, click Open.
Here we have the path.
And what else have we got here?
So the other thing that this import dialog does is not just copy the TF Lite model, but actually insert the necessary settings to use the model.
So there are two things here. One is to enable a new build feature in Android Studio 4.1, ML Model Binding. In addition, we have some Gradle dependencies that we will need in order to use ML Model Binding. So they’re here, and they will be inserted automatically into your build.gradle file. So no more messing around with the settings file.
Another thing here is a checkbox: if you want to use GPU acceleration through TensorFlow Lite’s GPU delegate, you’ll need to tick this. We’ll do that later on in this particular screencast, so don’t worry about it for the time being.
So let’s click Finish.
So after you have
imported the model,
this is the first thing
that will greet you
in the new Android Studio 4.1.
You can see from a high level what it does. So for example, the input is an image of floats, and the output is five labels. And in this case, those are the five labels of flowers that Khanh has trained.
And if you go
slightly down here,
you will also see
the sample code
of how to actually use the
new ML binding functionality.
So you initialize an instance, create an image from either the camera or a bitmap, process the image, and you get the probability as a list.
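That sample code looks roughly like this (a sketch; FlowerModel is the class generated from our renamed model file, and context and bitmap stand for whatever your app has at hand):

```kotlin
import org.tensorflow.lite.support.image.TensorImage

// Instantiate the generated wrapper
val model = FlowerModel.newInstance(context)

// Wrap a Bitmap, from the camera or from a file
val image = TensorImage.fromBitmap(bitmap)

// Run inference and read out the labelled probabilities
val probability = model.process(image).probabilityAsCategoryList
```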
So here, we have an
image analyzer class,
where we are using the
CameraX functionality.
And for each frame that CameraX gets, it will basically feed it into the analyze method here.
So before we can use that, we will need to initialize the model itself.
So let’s do that.
Let’s create a private instance of the flower model that we have just imported and call it flowerModel. And the way to initialize it is like this. So just call FlowerModel.newInstance, and then feed in the context.
Here we go.
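On screen, that line is roughly this (a sketch; ctx is assumed to be the Context our analyzer class was constructed with):

```kotlin
// One model instance, reused for every camera frame
private val flowerModel = FlowerModel.newInstance(ctx)
```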
So you’ll notice that here we’re
basically using the same name
as the TF Lite model itself.
So name your TF Lite model carefully before you import it, and give it an intuitive name; it makes it much easier to find the model you should be using, as you will see further down in this particular screencast.
So now that we’ve got an instance of the model, what do we do with each of the frames? The first thing to do is to convert the image into a format that we can process, namely TensorFlow Lite’s TensorImage format.
So let’s create a variable, call it tfImage, and create an instance with TensorImage.fromBitmap.
So if you’ve done this before with the TensorFlow Lite Interpreter, you will have noticed that previously you might have needed to convert your image into a byte array directly. Here, you don’t actually need to do any of that. The input can be a bitmap.
And in this case, we have an ImageProxy from CameraX, so we need to convert that into a bitmap before feeding it in. So I’ve created a helper method here called toBitmap, which takes in an ImageProxy. And this is all you need to do.
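Put together, the conversion is roughly this one line (a sketch; toBitmap is the helper from the starter project):

```kotlin
// Convert CameraX's ImageProxy to a Bitmap, then wrap it as a TensorImage
val tfImage = TensorImage.fromBitmap(toBitmap(imageProxy))
```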
If, instead of using the camera as an input, you want to load an image that’s already on the phone, you can feed that bitmap in directly here.
Next, we’re going to feed
this particular image
into the model for processing.
So we will create an outputs object, and we’ll utilize the flower model that we initialized before to process the TensorImage. And what we get after we process an image is basically a list of Category objects, which include both the label and the probability of our detection.
So select that.
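So far, the step looks roughly like this (a sketch):

```kotlin
// Run inference; the result is a list of Category(label, score) objects
val outputs = flowerModel.process(tfImage).probabilityAsCategoryList
```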
And next, what we want to do is to sort it, so that the highest-probability result is at the top.
And then, we want to
limit it to, let’s say,
the top three items.
And how do we do that?
So there is this really helpful Kotlin feature that allows you to sort the list by specifying the attribute that you want to sort on. So what we do is, I go, .apply. And here, I’m going to say, sortByDescending.
Because what we want to
do is to sort the list,
so that the highest
probability is at the top,
and the lowest probability
is at the end of the list.
So let’s do that.
And here, we need to specify which field we want to sort by. So I type it followed by a dot, and here you can see the two fields. Of course, we’re not going to sort by label. Instead, we’re going to sort by the score, which is, in this case, the probability.
So here we go.
We have got the list of
the different categories
together with the probability.
And we have now
with this one step,
sorted it by the
probability itself.
The next thing I’m going to do is cut the list down in size, so that it only displays the top three results.
And the way to do that is to use another Kotlin feature, .take. And we can pass in the maximum number of results to display. So this is a constant that I’ve defined as being three.
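Chained together, the whole step reads roughly like this (a sketch; MAX_RESULT_DISPLAY is the name I’m assuming for that constant):

```kotlin
// Run inference, sort by score (highest first), keep the top results
val outputs = flowerModel.process(tfImage)
    .probabilityAsCategoryList.apply {
        sortByDescending { it.score }
    }.take(MAX_RESULT_DISPLAY)
```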
You can absolutely change
that to whatever you want.
In this case, it may
not be helping a lot,
because the list itself
is only five items.
We’re only training a model
with five different types
of flowers.
But you could imagine that if you have a model that has a hundred different categories, or maybe even a thousand, or tens of thousands, or hundreds of thousands, this could really help you cut down on the number of results that get passed on and processed.
So please do take the time to consider this, so that you can cut down the amount of processing that you need to do.
So after this, what I need to do is to convert those category results into data objects that my particular application will understand. So I’ve created a data class called Recognition, with a label and probability.
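As a sketch, that data class is as simple as:

```kotlin
// Simple value holder the UI layer understands
data class Recognition(val label: String, val probability: Float)
```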
So I just need to convert the results that I have, in this case the three most probable flowers, into those Recognition objects.
So let’s do that.
So it’s for output in outputs. And I’m going to go items. So this is the list of items that we’re going to return back to the UI. So items.add, Recognition, output.label, and then output.score.
And here we go.
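Assembled, the loop is roughly this (a sketch, reusing the Recognition class above):

```kotlin
// Map each Category result into the app's own Recognition objects
val items = mutableListOf<Recognition>()
for (output in outputs) {
    items.add(Recognition(output.label, output.score))
}
```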
One last thing before we can run the app is to take out this section, which I previously used to generate fake, placeholder results so that you can see the app running. So let’s comment that out. On a Mac, it’s Command and then the slash key to comment those particular blocks out.
So here we go.
We have our app.
Let’s see it running.
As you can see, when you look at the sunflower, the sunflower probability increases, so it is working as we expect.
So one last thing I’m going to do before I sign off is to show you how you can use the GPU accelerator. So to do that, the first thing we need to do is to go back to the build.gradle file.
And under here, under the to-do, we can see that, hey, we need to add the optional GPU dependency. So in the earlier model import dialog, you could just tick a box and get it.
But in this case, because
we didn’t tick that box,
we will need to manually
perform this step.
So super easy. Just type in implementation and then org.tensorflow:tensorflow-lite-gpu:2.2.0.
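The starter project’s build file is likely in the Groovy DSL; in Gradle’s Kotlin DSL the same line would read (a sketch):

```kotlin
dependencies {
    // Optional GPU delegate for TensorFlow Lite
    implementation("org.tensorflow:tensorflow-lite-gpu:2.2.0")
}
```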
And let’s press Sync.
Yay.
OK, it looks like it’s done it.
So this step basically imported
the TensorFlow Lite GPU
into your project.
And the next thing that
we’re going to do is use it.
So go to to-do 6. Here, we need to create an options object for the model itself and say, hey, we really want to use the GPU accelerator. So to do that, create a private variable and call it options. And this time, we’re using the Model class. So under the TensorFlow Lite support library, you need to choose this one, org.tensorflow.lite.support.model.Model.
So that’s the one that we want.
And then under that, there’s Options, and there’s a Builder for that Options class. The first thing you can do is to say setDevice. You can also see that there is another option to set the number of threads. So setDevice is the option to use different types of accelerators. In this case, we’re going to use the GPU; you can also choose NNAPI if you want. So setDevice, and then we go Model.Device.GPU.
And last but not least, we need to build. But just before that, I just want to highlight that other option that we have, setting the number of threads. So if instead you want to use multiple CPU threads to run your model, you can select that one and then type in, as an integer, how many threads you want to spin up.
And that’s all you need to do.
In this case, we’re going
to leave it with the GPU
and build this object.
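Typed out, the options object is roughly this (a sketch):

```kotlin
import org.tensorflow.lite.support.model.Model

// Ask the generated wrapper to run on the GPU delegate
private val options = Model.Options.Builder()
    .setDevice(Model.Device.GPU)
    .build()
```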
So at the moment, this options object is not used, and the way to use it is by just adding a comma and saying options. So there you have it: just two lines, or one line of code, and then a method change.
And now, you can use
the GPU to accelerate
the running of your model.
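So the earlier initialization now becomes roughly (a sketch):

```kotlin
// Pass the options in as a second argument
private val flowerModel = FlowerModel.newInstance(ctx, options)
```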
So let’s take a look.
Fantastic.
So the model is running again.
But this time round,
it has now got
the GPU acceleration enabled.
So to sum up, making your own model and running it on Android has never been easier. You can use the Model Maker that Khanh has shown you to train an image classifier, and the Android Studio tooling I’ve demonstrated to put it to work on Android.
Check out the video’s
description for more.
See you next time.
[MUSIC PLAYING]