Are LSTM / GRU RNNs supported in the CV SDK? I haven't been able to find any samples yet; I think only CNNs are supported for now. Are there any plans to support RNNs in the next release?
Hi Nikos,
You're absolutely right - RNNs are not supported in the current release. Could you please provide more details about your use case? Are you going to use them for CV tasks? This information will help us prioritize further plans.
Best wishes,
Anna
Thank you for confirming the status of RNN support. Yes, this is mainly needed for typical CV tasks where we need to combine a CNN with an RNN. In some applications, for example, a CNN extracts convolutional features from different video frames and sends them to an LSTM (or bidirectional LSTM) or GRU for further analysis, such as activity recognition. In other cases we can use CNN+RNN on a single frame, as in many existing solutions that combine a CNN and an RNN for typical OCR tasks. Your existing release with CNN support is fantastic, and we look forward to future releases that start supporting RNNs.
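The frames-to-CNN-features-to-LSTM pipeline described above can be sketched in plain NumPy. This is a hypothetical illustration, not OpenVINO code: `extract_features` is a stand-in for a real CNN backbone (here just a random projection), and all shapes are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def extract_features(frame, W_proj):
    # Stand-in for CNN feature extraction: flatten the frame and project it.
    return W_proj @ frame.ravel()

def lstm_over_frames(frames, W_proj, W, U, b, hidden=8):
    # Run one LSTM layer over the per-frame "CNN" features.
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for frame in frames:
        x = extract_features(frame, W_proj)
        z = W @ x + U @ h + b            # stacked gate pre-activations
        i, f, o, g = np.split(z, 4)      # input, forget, output, candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    # Final hidden state; in an activity-recognition model this would feed
    # a classifier head.
    return h

# Toy shapes: 5 frames of 16x16, 12-dim features, 8 hidden units.
frames = rng.normal(size=(5, 16, 16))
W_proj = rng.normal(size=(12, 256)) * 0.1
W = rng.normal(size=(32, 12)) * 0.1
U = rng.normal(size=(32, 8)) * 0.1
b = np.zeros(32)
h = lstm_over_frames(frames, W_proj, W, U, b)
print(h.shape)  # (8,)
```

The same loop structure applies to the OCR case, with image column slices playing the role of frames.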
Hi Nikos,
Thanks for the positive feedback and the use-case explanation. I'll let you know when RNN support appears.
Best wishes,
Anna
Are LSTM / GRU RNNs supported on the Movidius Myriad X VPU (Movidius NCS 2)?
Hello Anna,
Just wondering if you could provide an update on the current status of LSTM support in OpenVINO.
Based on release notes:
Extends neural network support to include LSTM (long short-term memory) from ONNX*, TensorFlow* & MXNet* frameworks, & 3D convolutional-based networks in preview mode (CPU-only) to support additional, new use cases beyond computer vision.
Does this mean LSTM is in preview mode and the only supported device is the CPU? If I understand correctly, there is no LSTM support for GPU/NCS, correct?
Thanks,
Nikos
Anna B. (Intel) wrote: Hi Nikos,
Thanks for the positive feedback and the use-case explanation. I'll let you know when RNN support appears.
Best wishes,
Anna
Dear bohra, chandni,
According to the Supported Devices document, LSTM is supported on MYRIAD but GRU is not. RNN is also unsupported on VPU.
Hope it helps,
Thanks,
Shubha
Dear bohra, chandni
Where are you seeing that atan is supported on CPU?
If atan is an Activation layer, it's not one of the layers OpenVINO supports: see the IR Activation section. I see tanh support but not atanh.
And LSTM normally uses tanh, not arctanh.
As for your runtime error above, it's very hard to say what happened without looking at your IR and your Inference Engine code.
Hope it helps,
Shubha
Dear bohra, chandni,
You got me there. Thanks for finding the documentation proof of CPU support for atanh (via custom kernels). atanh in this case is not an Activation layer, though - it's just a regular layer. Do you have an example model with atanh which fails OpenVINO either at the Model Optimizer stage or the Inference Engine stage? If so, I'd be happy to reproduce the issue and file a bug. Please attach your model and inference program to this ticket as a *.zip. Or, if you'd prefer, you can PM the package to me privately.
Looking forward to receiving your stuff,
Shubha
Shubha R. (Intel) wrote: Dear bohra, chandni,
You got me there. Thanks for finding the documentation proof of CPU support for atanh (via custom kernels). atanh in this case is not an Activation layer, though - it's just a regular layer. Do you have an example model with atanh which fails OpenVINO either at the Model Optimizer stage or the Inference Engine stage? If so, I'd be happy to reproduce the issue and file a bug. Please attach your model and inference program to this ticket as a *.zip. Or, if you'd prefer, you can PM the package to me privately.
Looking forward to receiving your stuff,
Shubha
Dear Shubha R. ,
Thank you so much for fixing the memory corruption bug when loading the IR model with cpu_extension. However, I got the same issue as Mr. Bohra:
RuntimeError: Cannot detect right dims for nodes bidirectional_1/while/Const_4/Output_0/Data__const and bidirectional_1/while/clip_by_value_2
If you need to file a bug for the issue, I will send you my IR model and inference code. Thank you so much for your work on the program.
Best regards,
Minh Quyet
Dear Minh,
I got your PM and replied to it, also via PM. Sure, please send me your IR model and inference code demonstrating this bug.
Thanks!
Shubha
Dear bohra, chandni,
I have PM'd you; simply reply to this message with an attached *.zip file.
Looking forward to receiving your stuff -
Shubha
Dear bohra, chandni,
In OpenVINO, since models are converted to IR, yes, you can use different frameworks for multiple models, e.g. one originally from ONNX, another originally from Caffe, and the final one originally from TensorFlow. From OpenVINO's perspective it doesn't matter, as long as the models are converted to IR successfully.
I think the document you're referring to is the Supported Devices document, which lists supported layers per hardware target. However, there is also the Model Optimizer Supported Layers document.
If you cannot share your models, I definitely understand. In that case, however, there's only a limited amount I can do to help unless the error is super obvious.
Hope it helps,
Thanks,
Shubha