Description
I downloaded hand_landmark.tflite from
https://storage.googleapis.com/mediapipe-assets/hand_landmark.tflite?generation=1666153735814956
However, hand_landmark.tflite is float32.
I want to run hand_landmark.tflite on an NXP i.MX 8 embedded board, but its NPU/GPU only supports int8 and int16.
Can you tell me how to quantize hand_landmark.tflite?
According to the "Model conversion" guide, the conversion needs a SavedModel — where can I get one?
The following sections outline the process of evaluating and converting models for use with LiteRT.
Input model formats
You can use the converter with the following input model formats:
- [SavedModel]: A TensorFlow model saved as a set of files on disk.
- [Keras model]: A model created using the high-level Keras API.
- [Keras H5 format]: A lightweight alternative to the SavedModel format supported by the Keras API.
- [Models built from concrete functions]: A model created using the low-level TensorFlow API.
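For reference, post-training full-integer quantization with `tf.lite.TFLiteConverter` looks roughly like the sketch below. Note that the converter cannot consume an existing `.tflite` file — it needs the original SavedModel or Keras model. Since the source model for hand_landmark.tflite is not published, a tiny placeholder Keras model is used here purely to make the sketch self-contained; the shapes and the random calibration data are assumptions, not the real hand-landmark pipeline.

```python
import numpy as np
import tensorflow as tf

# Placeholder model: in practice, load the SavedModel or Keras model that
# produced hand_landmark.tflite (not publicly available for this asset).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(63),  # e.g. 21 landmarks x (x, y, z)
])

def representative_dataset():
    # Calibration data: feed ~100 real preprocessed input images in
    # practice; random tensors are only a stand-in for this sketch.
    for _ in range(10):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization so every op can run on an int8-only NPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_quant_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_quant_model)
```

With random calibration data the quantization ranges will be wrong, so real images matching the model's expected preprocessing are essential for accuracy.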