TensorFlow Lite is an open source deep learning framework for on-device inference. To deploy a trained model on-device, we therefore need to convert the trained .pb model file to the .tflite format, which is a somewhat involved process.
Tensorflow:
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state of the art in machine learning and helps developers easily build and deploy machine-learning-powered applications.
Install TensorFlow and dependencies:
It is better to install TensorFlow and its dependencies in a Python virtual environment. You will also need a few additional packages, which you can install with the commands below.
pip install Cython
pip install contextlib2
pip install pillow
pip install lxml
pip install jupyter
pip install matplotlib
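To keep these packages isolated as suggested above, you can first create a virtual environment. A minimal sketch (the directory name tfod-env is arbitrary):

```shell
# Create an isolated Python environment (directory name "tfod-env" is arbitrary)
python3 -m venv tfod-env
# Activate it; subsequent pip/python commands now use this environment
. tfod-env/bin/activate
# Confirm pip now resolves inside the environment
which pip
```

Run the pip install commands above only after activating the environment.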
After that, clone the models repository with git clone https://github.com/tensorflow/models.git. If git is not installed on your system, you can install it with sudo apt-get install git.
Install COCO API:
If you are interested in using COCO evaluation metrics, download the cocoapi and copy the pycocotools subfolder to the models/research directory. To do so, use the following commands:
sudo apt-get install python3-dev
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
make
cp -r pycocotools <path_to_tensorflow>/models/research/
Protobuf Compilation:
First, install protobuf on your system using the following commands, run from the models/research directory.
wget -O protobuf.zip https://github.com/google/protobuf/releases/download/v3.0.0/protoc-3.0.0-linux-x86_64.zip
unzip protobuf.zip
After that, run the following command to compile the .proto files.
# From tensorflow/models/research/
./bin/protoc object_detection/protos/*.proto --python_out=.
As a result, a Python file is generated from each .proto file, as shown in the figure below.
Add Libraries to PYTHONPATH:
When running locally, the tensorflow/models/research/ and slim directories should be appended to PYTHONPATH. To do this, run the following command from tensorflow/models/research/:
# From tensorflow/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
Note: This command needs to be run in every new terminal you start. If you wish to avoid running it manually, you can add it as a new line at the end of your ~/.bashrc file, replacing `pwd` with the absolute path of tensorflow/models/research on your system.
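If you prefer to see what the export achieves from inside Python: entries on PYTHONPATH simply end up on sys.path, so both directories become importable. A sketch (the checkout path below is a placeholder, substitute your own):

```python
import os
import sys

# Placeholder path -- replace with where you cloned tensorflow/models
research = os.path.expanduser("~/tensorflow/models/research")

# Equivalent effect of `export PYTHONPATH=$PYTHONPATH:research:research/slim`
# for the current Python process only
for p in (research, os.path.join(research, "slim")):
    if p not in sys.path:
        sys.path.insert(0, p)
```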
Testing the installation:
Note that this doesn’t work for TensorFlow 2.x, because TensorFlow 2.0 removed the contrib module. Now run the test script:
python3 object_detection/builders/model_builder_test.py
Conversion to Intermediate model:
In this step you convert the trained model to an intermediate frozen graph using the script object_detection/export_tflite_ssd_graph.py with the parameters shown below. For this you need an inference_graph directory containing the following files.
python3 object_detection/export_tflite_ssd_graph.py \
  --pipeline_config_path /home/sgc/My-projets/testproject/inference_graph/pipeline.config \
  --trained_checkpoint_prefix /home/sgc/My-projets/testproject/inference_graph/model.ckpt \
  --output_directory /home/sgc/My-projets/testproject/inference_graph/my_tflite_mobile
After that you will get a my_tflite_mobile directory with the files shown in the figure below.
Final Conversion Step:
First, find the input tensor name and size using step 4 of our previous tutorial. Then install toco with pip3 install toco and run the following command.
toco --graph_def_file=/home/sgc/tflite_conver/inference_graph/tflite_mobile/tflite_graph.pb \
--output_file=/home/sgc/tflite_conver/inference_graph/tflite/tflite_mobile.tflite \
--input_shapes=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=FLOAT \
--allow_custom_ops
Finally, you will get the tflite_mobile.tflite file shown in the figure above.
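A quick way to sanity-check the result without loading TensorFlow: a valid .tflite file is a flatbuffer whose file identifier, the four bytes "TFL3", sits at byte offset 4. A small helper sketch (the function name is our own, not part of any library):

```python
def looks_like_tflite(path):
    """Cheap sanity check: a .tflite flatbuffer carries the
    file identifier b"TFL3" at byte offset 4."""
    with open(path, "rb") as f:
        header = f.read(8)
    return len(header) == 8 and header[4:8] == b"TFL3"
```

If this returns False for your output file, the toco conversion most likely failed or was truncated; rerun it and check the command's error output.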