Luong__Khang
Beginner
129 Views

Convert MXNet, custom operation

Hello everyone,

 

I am trying to convert an MXNet model to OpenVINO IR, but I got this error: "Original exception message: Operation 'one_hot' not supported". I understand that the operation "one_hot" is not implemented in the MXNet frontend (I don't know why, since the TensorFlow one has it). Therefore, I tried to create my own custom operation using "mxnet.ndarray.one_hot" and "extgen.py new --mo-op", but I still don't know how to make it work. Please help me.

The operation is like this:

{
  "op": "one_hot",
  "name": "one_hot0",
  "attrs": {
    "depth": "180380",
    "off_value": "0.0",
    "on_value": "1.0"
  },
  "inputs": [[1139, 0, 0]]
}
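For reference, mxnet.ndarray.one_hot(indices, depth, on_value, off_value) expands an index vector into a depth-wide matrix. A minimal NumPy sketch of the same semantics (this is an illustration of what the op computes, not Model Optimizer code):

```python
import numpy as np

def one_hot(indices, depth, on_value=1.0, off_value=0.0):
    """Mimic mxnet.ndarray.one_hot: each index becomes a depth-wide row
    holding on_value at the index position and off_value everywhere else."""
    out = np.full((len(indices), depth), off_value, dtype=np.float32)
    out[np.arange(len(indices)), indices] = on_value
    return out

# Example: indices [0, 2] with depth 4
print(one_hot(np.array([0, 2]), depth=4))
```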

Thank you!

Roy_A_Intel
Employee

Luong__Khang
Beginner

Roy Allela. (Intel) wrote:

Hi Luong,

Have you taken a look at this https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_customize_model_optimizer_Extendin... ?

Regards

Roy

Hi Roy,

Thank you for your answer. I have already read that article, but I am still not clear about the code in this function:

def extract(node):
    attrs = get_mxnet_layer_attrs(node.symbol_dict)
    node_attrs = {
        'feat_stride': attrs.float('feat_stride', 16)
    }
    # update the attributes of the node
    Op.get_op_class_by_name('Proposal').update_node_stat(node, node_attrs)  # <------ here goes the name ('Proposal') of the operation that was implemented before
    return __class__.enabled
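One way to read that pattern: at conversion time the extractor only records the layer's attributes so they end up in the IR; it does not compute the op's output. A hypothetical adaptation for one_hot might just parse depth/on_value/off_value (the attribute names from the JSON above). Here get_mxnet_layer_attrs is stubbed with a plain dict so the sketch is self-contained, and the 'OneHot' name is an assumption:

```python
# Stand-in for get_mxnet_layer_attrs(node.symbol_dict): in Model Optimizer
# this comes from the parsed symbol JSON; here it is a plain dict.
symbol_attrs = {
    "depth": "180380",
    "off_value": "0.0",
    "on_value": "1.0",
}

def extract_one_hot_attrs(attrs):
    """Convert the string attributes from the MXNet symbol into typed values."""
    return {
        'depth': int(attrs['depth']),
        'on_value': float(attrs['on_value']),
        'off_value': float(attrs['off_value']),
    }

node_attrs = extract_one_hot_attrs(symbol_attrs)
# In a real extractor these would then be registered on the node, e.g.
# Op.get_op_class_by_name('OneHot').update_node_stat(node, node_attrs)
print(node_attrs)
```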

 

How could I return the result of MXNet's ndarray.one_hot function here?

Sincerely,

Luong

Shubha_R_Intel
Employee

Dear Luong, Khang

 Please take a look at the following IDZ post where I helped a customer construct an MXNet custom layer.

https://software.intel.com/en-us/forums/computer-vision/topic/816432

Thanks,

Shubha

Luong__Khang
Beginner

Dear Shubha,

I followed the post [1] and finally converted my model successfully, but based on [1], I still cannot run the model on Movidius devices because "Support for the Myriad (VPU) will be available by the end of 2019. Support for other devices is yet to be announced.".

Could you please give me some advice?

Thanks,

Luong

[1] https://github.com/david-drew/OpenVINO-Custom-Layers/blob/master/2019.r2.0/ReadMe.Linux.2019.r2.md

Shubha_R_Intel
Employee

Dear Luong, Khang,

Yes, custom layers are currently available only in "preview mode", and full functionality is not yet available. Are you writing a custom layer for MYRIAD in OpenCL? At what point are you getting the above error: during the Model Optimizer phase or during the Inference phase?

Thanks,

Shubha
