quantization of hand_landmark.tflite #6082

@vewe-richard

Description

I downloaded hand_landmark.tflite from
https://storage.googleapis.com/mediapipe-assets/hand_landmark.tflite?generation=1666153735814956

But hand_landmark.tflite is float32.

I want to run hand_landmark.tflite on an NXP i.MX8 embedded board, but its NPU/GPU only supports int8 and int16.

Can you tell me how to quantize hand_landmark.tflite?

From the "Model conversion" guide, it seems the conversion needs a SavedModel; can I get one?


The following sections outline the process of evaluating and converting models for use with LiteRT.

Input model formats
You can use the converter with the following input model formats:

SavedModel: A TensorFlow model saved as a set of files on disk.
Keras model: A model created using the high-level Keras API.
Keras H5 format: A lightweight alternative to the SavedModel format supported by the Keras API.
Models built from concrete functions: A model created using the low-level TensorFlow API.
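A minimal sketch of post-training full-integer quantization with the TensorFlow Lite converter, assuming you can obtain the model in one of the supported source formats above (the released .tflite flatbuffer itself cannot be re-quantized directly). A tiny stand-in Keras model and random calibration data are used here purely for illustration; for the real hand-landmark model you would load its SavedModel/Keras source and feed real preprocessed images:

```python
import numpy as np
import tensorflow as tf

# Stand-in for the real model (assumption: you have the source model,
# not only the float32 .tflite file).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

def representative_dataset():
    # Calibration samples used to estimate activation ranges; in practice,
    # yield real preprocessed inputs for the hand-landmark model.
    for _ in range(100):
        yield [np.random.rand(1, 4).astype(np.float32)]

# from_saved_model("path/to/saved_model") works the same way if you have
# a SavedModel on disk instead of an in-memory Keras model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization so an int8/int16-only NPU can run it.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_quant = converter.convert()

# Sanity check: the quantized model now takes int8 inputs.
interpreter = tf.lite.Interpreter(model_content=tflite_quant)
interpreter.allocate_tensors()
input_dtype = interpreter.get_input_details()[0]["dtype"]
print(input_dtype)
```

Without a representative dataset, `Optimize.DEFAULT` only quantizes weights (dynamic-range quantization), which is usually not enough for int8-only accelerators; the calibration generator is what enables full-integer conversion.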

Metadata

Labels

platform:embedded-linux-arm — Issues related to Raspberry Pi, Coral Dev Board, Nvidia Jetson Nano, etc.
task:hand landmarker — Issues related to hand landmarker: identify and track hands and fingers
