The visualization results indicate classes 0-255, even though this custom NN segmenter was trained on binary classes!
The "common semantic" dataset looks correct, because it uploads successfully through the workbench (Import/Upload Dataset), including this JSON file:
{
"label_map": {"0" : "no-cloud", "1" : "cloud"},
"background_label":"0",
"segmentation_colors":[[0, 0, 0], [255, 255, 255]]
}
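For reference, here is a minimal sketch of how a `label_map`/`segmentation_colors` pair like the JSON above could be applied to colorize a binary mask. This is not the workbench's own code; the `colorize` helper and the sample mask are hypothetical, just to illustrate that only class indices 0 and 1 should ever appear.

```python
# Hedged sketch, not the DL Workbench implementation: colorize a binary
# segmentation mask using metadata shaped like the dataset JSON above.
import json

meta = json.loads("""
{
 "label_map": {"0": "no-cloud", "1": "cloud"},
 "background_label": "0",
 "segmentation_colors": [[0, 0, 0], [255, 255, 255]]
}
""")

def colorize(mask, colors):
    """Map each class index in a 2D mask to its RGB color."""
    return [[colors[v] for v in row] for row in mask]

mask = [[0, 1], [1, 0]]  # binary segmentation: only classes 0 and 1 expected
print(colorize(mask, meta["segmentation_colors"]))
# → [[[0, 0, 0], [255, 255, 255]], [[255, 255, 255], [0, 0, 0]]]
```

A class index outside the `segmentation_colors` list (e.g. 255) would raise an `IndexError` here, which is consistent with the expectation that a binary model should never predict classes 2-255.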
Once the project is created with the relevant custom NN and "Visualize Output" is applied to several images, more than 2 classes (from class #0 up to class #255) are listed under "Model Predictions"!
Could you please explain this problem?
Hi fredf,
This thread will no longer be monitored since we have provided a solution and an answer. If you need any additional information from Intel, please submit a new question.
Regards,
Peh
Hi Benguigui__Michael,
You can find the details (formula) of the latency calculation in the code of the benchmark_app.py script.
If you need any additional information from Intel, it is recommended to submit a new question.
Regards,
Peh
Hi fredf,
Alexander is currently out of office, so I will try to answer your last question.
Based on the latency calculation in the benchmark_app.py script, I would say that the latency is closely tied to the number of infer requests when inferencing asynchronously.
Taking your first results as an example, the latency (3650.01 ms) is the inference time for executing 2 infer requests asynchronously. To complete 36 iterations, 18 such executions are required.
Latency = Total execution time / (Number of iterations / Number of infer requests)
*Note: you cannot recover the exact value from this calculation, because the reported latency is the median of the inference times across the infer requests.
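The relationship above can be sketched as follows. This is an approximation, not the actual benchmark_app.py code, and the total execution time used in the example is back-derived from the reported latency purely for illustration.

```python
# Hedged sketch (not benchmark_app.py itself): approximate the reported
# latency of an asynchronous run, assuming
#   Latency = total execution time / (iterations / infer requests)
def approximate_latency_ms(total_time_ms: float,
                           iterations: int,
                           num_infer_requests: int) -> float:
    # Each asynchronous step runs num_infer_requests requests concurrently,
    # so iterations / num_infer_requests steps are needed in total.
    steps = iterations / num_infer_requests
    return total_time_ms / steps

# Example from the thread: 36 iterations with 2 infer requests = 18 steps.
total_ms = 3650.01 * 18  # hypothetical total execution time
print(approximate_latency_ms(total_ms, iterations=36, num_infer_requests=2))
# ≈ 3650.01 ms per step
```

As the note above says, the real tool reports a median over the per-request inference times, so this back-of-the-envelope value will not match the reported number exactly.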
Regards,
Peh