
A few questions about Media_Pipeline

sihanchen (Employee)

Hi,

I have some questions about the Media_Pipeline supported by habana_frameworks. First, I saw there is code for displaying images here: https://docs.habana.ai/en/latest/Media_Pipeline/Media_Reader_ReadVideoDatasetFromDir.html#example-1-use-readvideodatasetfromdir-by-providing-input-directory . In that section, the function `display_videos` calls `plt.show()`.

My first question: is it even possible to display that image from a Gaudi2 server without a display attached? As far as we know, we can only get into the Gaudi docker containers and run the code, and the container should not be able to display the image on our local laptop, right? How could that code be tested?

My second question: if we use habana_frameworks.mediapipe, can our video/image processing be significantly faster than on CPU only? I read through the documentation and saw pre/postprocessing HPU operators such as `resize` and `crop` that can be used for data augmentation when training ResNet. However, I did not see any rough benchmark data showing how much faster these operators are than their CPU-only counterparts (10%? 50%?). If they are significantly faster, I think it would be very attractive for users to also move pre/postprocessing logic onto Gaudi!

My third question: are there any other examples I can find that use this Media_Pipeline?

Thank you so much in advance for your help!

 

James_Edwards (Employee)

My first question: is it even possible to display that image from a Gaudi2 server without a display attached? . . . How could that code be tested?

If you don’t have a display set up on your head node, you can route the display to another host with the X Window System. Generally, this is done with a properly set DISPLAY environment variable. If you are interested in going that route, consult the X.org documentation.
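
As a quick sanity check before relying on plt.show(), you can test whether a display is actually reachable from inside the container (a minimal sketch; the printed messages are only illustrative):

```python
import os

# X forwarding only reaches the container if a DISPLAY is set, e.g. after
# `ssh -X` into the head node and passing the X socket through to docker.
if os.environ.get("DISPLAY"):
    print("X display reachable at", os.environ["DISPLAY"])
else:
    print("No DISPLAY set: plt.show() will not open a window here.")
```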

However, if you want to test the code from the command line, you can replace the plt.show() call with a call that saves the figure to a file. For example:

plt.savefig("combined.jpg", format="jpg", dpi="figure", pad_inches=0.1)

This will save a combined JPEG file that you can download and inspect after it is created.
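
If you want a drop-in headless version of the docs' display helper, something along these lines works. This is a sketch, not the exact Habana example: `save_videos`, its `frames` argument, and the grid layout are assumptions. It forces matplotlib's non-interactive Agg backend so no display is needed at all:

```python
import matplotlib

matplotlib.use("Agg")  # non-interactive backend: renders to files, no X display needed
import matplotlib.pyplot as plt


def save_videos(frames, out_path="combined.jpg", cols=4):
    # frames: a list of HxWxC uint8 numpy arrays (e.g. decoded video frames)
    rows = (len(frames) + cols - 1) // cols
    fig, axes = plt.subplots(rows, cols, squeeze=False,
                             figsize=(3 * cols, 3 * rows))
    flat = axes.ravel()
    for ax, frame in zip(flat, frames):
        ax.imshow(frame)
    for ax in flat:  # hide ticks everywhere, including unused grid cells
        ax.axis("off")
    fig.savefig(out_path, format="jpg", bbox_inches="tight", pad_inches=0.1)
    plt.close(fig)
```

You can then copy the saved file off the server with scp and view it locally.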

My second question: if we use habana_frameworks.mediapipe, can our video/image processing be significantly faster than on CPU only? . . . If they are significantly faster, I think it would be very attractive for users to also move pre/postprocessing logic onto Gaudi!

The operators supported by the Media Pipeline, and the devices they run on (CPU, HPU, or both), are documented on this page: https://docs.habana.ai/en/latest/Media_Pipeline/Operators.html#media-operators

Currently there are no published benchmarks for the individual operators.

Most operations should be significantly faster on an HPU than on a CPU. To get an idea of the performance impact the HPU has, look at the example for the Crop operator: https://docs.habana.ai/en/latest/Media_Pipeline/Media_Operator_Crop.html#using-crop

This example runs the crop pipeline on 'cpu', 'hpu', and 'mixed' devices. Comparing the performance of the mixed and CPU-only runs indicates how much faster the HPU is at that operation.
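
If you want numbers of your own, a simple timing harness around that example would look roughly like this (`run_pipeline` is a hypothetical stand-in for whatever builds and drains the pipe on a given device, following the Crop example from the docs):

```python
import time


def time_pipeline(run_pipeline, device, iters=10, warmup=2):
    # run_pipeline(device) is assumed to build and fully iterate the
    # media pipe on 'cpu', 'hpu', or 'mixed'.
    for _ in range(warmup):
        run_pipeline(device)  # exclude one-time graph/compile cost
    start = time.perf_counter()
    for _ in range(iters):
        run_pipeline(device)
    return (time.perf_counter() - start) / iters


# e.g. speedup = time_pipeline(run, "cpu") / time_pipeline(run, "mixed")
```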

My third question: are there any other examples I can find that use this Media_Pipeline?

Currently, the only examples for using the Media Pipeline are on the Intel Gaudi base documentation site: https://docs.habana.ai/en/latest/Media_Pipeline/index.html

There are examples of how to use each operator and instructions on how to order CPU and HPU operations.
