Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Inconsistent number of classes in segmentation visualization

fredf
Beginner

The visualization results indicate 0-255 classes, yet this custom NN segmenter was trained on a binary class problem!

The "common semantic" dataset looks right, because it is uploaded correctly by the Workbench (import/upload dataset), including this JSON file:

{
"label_map": {"0" : "no-cloud", "1" : "cloud"},
"background_label":"0",
"segmentation_colors":[[0, 0, 0], [255, 255, 255]]
}
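For reference, here is a minimal sketch of how this label map is meant to be applied: a binary segmentation mask should contain only the class indices 0 and 1, which are then coloured via segmentation_colors. The tiny mask array below is hypothetical, purely for illustration.

```python
import json
import numpy as np

# Dataset meta exactly as in the JSON file above
# (normally loaded from the uploaded file).
meta = json.loads("""
{
"label_map": {"0" : "no-cloud", "1" : "cloud"},
"background_label":"0",
"segmentation_colors":[[0, 0, 0], [255, 255, 255]]
}
""")

# A binary prediction mask should only contain class indices 0 and 1.
# This 2x2 mask is a made-up example.
mask = np.array([[0, 1],
                 [1, 0]], dtype=np.uint8)

# Map each class index to its RGB colour via segmentation_colors.
palette = np.array(meta["segmentation_colors"], dtype=np.uint8)
colored = palette[mask]  # shape (H, W, 3)

# Class names actually present in the mask -- for a binary model
# this should list only "no-cloud" and "cloud", never 256 classes.
present = [meta["label_map"][str(i)] for i in np.unique(mask)]
print(present)  # ['no-cloud', 'cloud']
```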

Once the project is created with the relevant custom NN and "visualize output" is applied to several images, more than 2 classes (from class#0 up to class#255) are listed in "model predictions"!

Could you please explain this problem?

1 Solution
Peh_Intel
Moderator

Hi fredf,


This thread will no longer be monitored since we have provided a solution and answer. If you need any additional information from Intel, please submit a new question.



Regards,

Peh



23 Replies
Peh_Intel
Moderator

Hi Benguigui__Michael,

 

You can find the details (formula) of the latency calculation in the code of the benchmark_app.py script.

 

If you need any additional information from Intel, it is recommended to submit a new question.

 

 

Regards,

Peh

 

Peh_Intel
Moderator

Hi fredf,


Alexander is currently out of office. Hence, I will try to answer your last question.


Based on the latency calculation in the code of the benchmark_app.py script, I would say that the latency calculation is more related to the number of inference requests when inferencing asynchronously.


Taking your first results as an example, the latency (3650.01 ms) is the total inference time for executing 2 infer requests asynchronously. To complete 36 iterations, the pair of requests must be executed 18 times.


Latency = Total execution time / (Number of iterations/Number of infer requests)

*Note: The exact value cannot be obtained from this calculation, because the reported latency is the median of the inference times across the infer requests.
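The formula can be sketched as follows. The iteration and request counts come from the example in this thread; the total execution time is purely hypothetical, and, as noted, benchmark_app reports a median latency, so this simple division will not reproduce its output exactly.

```python
# Illustrative latency calculation for asynchronous inference,
# following the formula:
#   Latency = Total execution time / (Number of iterations / Number of infer requests)

num_iterations = 36      # e.g. benchmark_app -niter 36
num_infer_requests = 2   # e.g. benchmark_app -nireq 2

# Number of times the pool of infer requests must be executed
# to complete all iterations.
executions = num_iterations / num_infer_requests  # 18

# Total execution time would come from the benchmark run;
# this value is made up for illustration.
total_execution_time_ms = 65700.0

latency_ms = total_execution_time_ms / executions
print(latency_ms)  # 3650.0
```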



Regards,

Peh

