Biswas__Sudi
Beginner

Modifying the intermediate layer activations

How can we modify the input activations of the intermediate layers? For example, if we want to use int4 input activations instead of FP32 or int8, how can we do that?

Gouveia__César
New Contributor I

Hi Sudi,

I think there isn't an easy way to do what you are looking for, since the OpenVINO converter (Model Optimizer) only offers FP16 (half floats) as a reduced precision. If you run the Model Optimizer with the -h flag you will find:

--data_type {FP16,FP32,half,float}
                        Data type for all intermediate tensors and weights. If
                        original model is in FP32 and --data_type=FP16 is
                        specified, all model weights and biases are quantized
                        to FP16.
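
For reference, an FP16 conversion would look something like the sketch below (model.onnx and the output directory are just placeholders; adjust for your framework and paths):

    # Convert a model to an FP16 IR: weights and biases are quantized to FP16
    python mo.py --input_model model.onnx --data_type FP16 --output_dir ./ir_fp16

There is no int4 option in that list, so activations at that precision would need a custom approach outside the converter.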

Hope it helps,

César.
