Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Convert MXNet, custom Operation

Luong__Khang
Beginner

Hello everyone,

 

I am trying to convert an MXNet model to OpenVINO IR, but I got this error: "Original exception message: Operation 'one_hot' not supported". I understand that the operation "one_hot" is not implemented for the MXNet front end (I don't know why the TensorFlow one is). Therefore, I tried to create my own custom operation using "mxnet.ndarray.one_hot" and "extgen.py new --mo-op", but I still don't know how to make it work. Please help me.

The operation looks like this:

{
  "op": "one_hot",
  "name": "one_hot0",
  "attrs": {
    "depth": "180380",
    "off_value": "0.0",
    "on_value": "1.0"
  },
  "inputs": [[1139, 0, 0]]
}
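
The skeleton I generated with "extgen.py new --mo-op" and have been trying to fill in looks roughly like this (just a sketch; the import paths follow the 2019.x Model Optimizer layout and the shape inference is only my guess):

import numpy as np

from mo.graph.graph import Graph, Node
from mo.ops.op import Op


class OneHotOp(Op):
    # name under which the operation is registered inside Model Optimizer
    op = 'OneHot'

    def __init__(self, graph: Graph, attrs: dict):
        mandatory_props = {
            'type': __class__.op,
            'op': __class__.op,
            'infer': OneHotOp.infer
        }
        super().__init__(graph, mandatory_props, attrs)

    def supported_attrs(self):
        # attributes that should be written into the generated IR
        return ['depth', 'on_value', 'off_value']

    @staticmethod
    def infer(node: Node):
        # output shape = input shape with an extra trailing axis of size 'depth'
        input_shape = node.in_node(0).shape
        node.out_node(0).shape = np.append(input_shape, int(node.depth))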

Thank you!

5 Replies
Luong__Khang
Beginner

Roy Allela (Intel) wrote:

Hi Luong,

Have you taken a look at this https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_customize_model_optimizer_Extending_MXNet_Model_Optimizer_with_New_Primitives.html ?

Regards

Roy

Hi Roy,

Thank you for your answer. I have already read that article, but I am not clear about the code in this function:

def extract(node):
    attrs = get_mxnet_layer_attrs(node.symbol_dict)
    node_attrs = {
        'feat_stride': attrs.float('feat_stride', 16)
    }
    # update the attributes of the node
    Op.get_op_class_by_name('Proposal').update_node_stat(node, node_attrs)  # <------ here goes the name ('Proposal') of the operation that was implemented before
    return __class__.enabled
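
If I adapt this pattern for one_hot, I guess it would be something like the following (again only a sketch: it assumes an operation class named 'OneHot' is already registered, that the import paths match my Model Optimizer version, and the attribute names are taken from the operation definition in my first post):

from mo.front.extractor import FrontExtractorOp
from mo.front.mxnet.extractors.utils import get_mxnet_layer_attrs
from mo.ops.op import Op


class OneHotFrontExtractor(FrontExtractorOp):
    op = 'one_hot'   # MXNet operator name to match in the symbol JSON
    enabled = True

    @staticmethod
    def extract(node):
        attrs = get_mxnet_layer_attrs(node.symbol_dict)
        node_attrs = {
            'depth': attrs.int('depth', None),
            'on_value': attrs.float('on_value', 1.0),
            'off_value': attrs.float('off_value', 0.0)
        }
        # attach the attributes and mark the node with the registered 'OneHot' op
        Op.get_op_class_by_name('OneHot').update_node_stat(node, node_attrs)
        return __class__.enabled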

 

But how could I return the result of the one_hot function (mxnet.ndarray.one_hot) from the MXNet framework?

Sincerely,

Luong

Shubha_R_Intel
Employee

Dear Luong, Khang

 Please take a look at the following IDZ post where I helped a customer construct an MXNet custom layer.

https://software.intel.com/en-us/forums/computer-vision/topic/816432

Thanks,

Shubha

Luong__Khang
Beginner

Dear Shubha,

I followed the post [1] and finally converted my model successfully, but based on [1], I still cannot run the model on Movidius devices because "Support for the Myriad (VPU) will be available by the end of 2019. Support for other devices is yet to be announced.".

Could you please give me some advice?

Thanks,

Luong

[1] https://github.com/david-drew/OpenVINO-Custom-Layers/blob/master/2019.r2.0/ReadMe.Linux.2019.r2.md

Shubha_R_Intel
Employee

Dear Luong, Khang,

Yes, custom layers right now are available only in "preview mode" and full functionality is not yet available. Are you writing a custom layer for MYRIAD in OpenCL? At what point are you getting the above error: during the Model Optimizer phase or during the Inference phase?

Thanks,

Shubha
