Hi all,
The OpenVINO toolkit 2018 R3 does not seem to support Conv3D.
This is the output of the mo_tf.py script:
[ ERROR ] List of operations that cannot be converted to IE IR:
[ ERROR ] Conv3D (3)
[ ERROR ] conv3d_16/convolution
[ ERROR ] conv3d_17/convolution
[ ERROR ] conv3d_18/convolution
[ ERROR ] MaxPool3D (2)
[ ERROR ] max_pooling3d_11/MaxPool3D
[ ERROR ] max_pooling3d_12/MaxPool3D
[ ERROR ] Part of the nodes was not translated to IE. Stopped.
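For reference, here is a minimal sketch of how the presence of these ops can be confirmed directly from the frozen graph before running the Model Optimizer (TF 1.x API; "model_frozen.pb" is a placeholder file name, not the actual model file):

```python
# Minimal sketch (TF 1.x): count op types in a frozen graph to confirm
# which 3D operations the Model Optimizer would have to convert.
# "model_frozen.pb" is a placeholder name, not the actual model file.
import collections
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("model_frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

op_counts = collections.Counter(node.op for node in graph_def.node)
for op in ("Conv3D", "MaxPool3D"):
    print(op, op_counts.get(op, 0))
```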
My CNN is full of Conv3D and MaxPool3D layers, so the "Offload Computations to TensorFlow" method explained in the guide doesn't fit my requirements, since my goal is precisely to speed up inference compared with TensorFlow.
Moreover, NVIDIA already provides its own inference optimization library (TensorRT), which correctly supports Conv3D, effectively forcing users onto their hardware to speed up inference.
So, when do you think the Conv3D operation will be supported? Is it on your roadmap or not?
Thanks,
Regards
This is something the team is working on, so it is a known gap. I'm not allowed to communicate a release date, though it should not be very long.
Can you specify what topology you are using? I would like to ensure that it is also covered in our tests.
Thanks
Yury
Good to hear from you!
I've used a simple VGG-like network.
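For illustration, a minimal sketch of what I mean by a simple VGG-like 3D network (the input shape, filter counts, and number of classes are placeholders, not my exact model):

```python
# Minimal sketch of a VGG-like 3D CNN built from Conv3D / MaxPooling3D blocks.
# Input shape, filter counts, and class count are illustrative placeholders.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv3D(32, (3, 3, 3), activation="relu", padding="same",
                  input_shape=(16, 112, 112, 3)),
    layers.Conv3D(32, (3, 3, 3), activation="relu", padding="same"),
    layers.MaxPooling3D(pool_size=(2, 2, 2)),   # appears as MaxPool3D in the graph
    layers.Conv3D(64, (3, 3, 3), activation="relu", padding="same"),
    layers.MaxPooling3D(pool_size=(2, 2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.summary()
```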
Regards,
Luca
I also need the Conv3D feature. Any update?