<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>RE: image translation model in Intel® Distribution of OpenVINO™ Toolkit</title>
    <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375890#M27049</link>
    <description>&lt;P&gt;Hi Peh,&lt;/P&gt;
&lt;P&gt;Thank you for the detailed explanation.&lt;/P&gt;
&lt;P&gt;After the installation succeeded, I ran the demo according to the documentation, which says only the '-ri' (reference images) argument is required and the other image arguments are optional.&lt;/P&gt;
&lt;P&gt;Running with only that argument produces the following error:&lt;/P&gt;
&lt;P&gt;=====================================&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="gb8_1-1649573398490.png" style="width: 572px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/28422i3C2FED80F987EE47/image-dimensions/572x246/is-moderation-mode/true?v=v2&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" width="572" height="246" role="button" title="gb8_1-1649573398490.png" alt="gb8_1-1649573398490.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;==========================================&lt;/P&gt;
&lt;P&gt;So I rearranged the arguments as '-ii imagefile -ri imagefile' (using the same image file for both, just for testing), and then the error message is as follows:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="gb8_0-1649573220550.png" style="width: 565px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/28421i4DDE3A9A55C92858/image-dimensions/565x494/is-moderation-mode/true?v=v2&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" width="565" height="494" role="button" title="gb8_0-1649573220550.png" alt="gb8_0-1649573220550.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;I am not familiar with image translation; I just want to introduce this model to people.&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Could you provide suitable images to make this demonstration work, if possible?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;gb8&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Sun, 10 Apr 2022 06:54:09 GMT</pubDate>
    <dc:creator>gb8</dc:creator>
    <dc:date>2022-04-10T06:54:09Z</dc:date>
    <item>
      <title>image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1374370#M27002</link>
      <description>&lt;P&gt;When using&amp;nbsp;'&lt;SPAN&gt;omz_converter&lt;/SPAN&gt;':&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp; &amp;nbsp;the segmentation model's&amp;nbsp;hrnet-v2-c1-segmentation.xml is created,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;but&amp;nbsp;the translation model's&amp;nbsp;cocosnet.xml is not.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The documentation says:&lt;/P&gt;
&lt;P&gt;&amp;nbsp; -m_trn TRANSLATION_MODEL : required&lt;/P&gt;
&lt;P&gt;&amp;nbsp; -m_seg SEGMENTATION_MODEL : optional&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;And running the demo without the translation model produces the following error:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;&amp;nbsp; error message: ipykernel_launcher.py: error: the following arguments are required: -m_trn/--translation_model, -o/--output_dir&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
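&lt;P&gt;As a side note, this "required arguments" message comes from Python's argparse, which validates required flags before any demo code runs. A minimal sketch (flag names taken from the error message above; the rest is illustrative, not the demo's actual code) reproduces the behavior:&lt;/P&gt;

```python
import argparse

# Minimal reproduction: argparse exits before any model code runs
# when required flags are missing (flag names from the demo's error).
parser = argparse.ArgumentParser()
parser.add_argument("-m_trn", "--translation_model", required=True)
parser.add_argument("-o", "--output_dir", required=True)

try:
    parser.parse_args([])  # no arguments supplied
except SystemExit:
    # argparse prints the "the following arguments are required: ..."
    # message and exits with status 2
    print("missing required arguments")
```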
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;gb8&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 05 Apr 2022 09:09:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1374370#M27002</guid>
      <dc:creator>gb8</dc:creator>
      <dc:date>2022-04-05T09:09:00Z</dc:date>
    </item>
    <item>
      <title>Re: image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1374428#M27008</link>
      <description>&lt;P&gt;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/226216"&gt;@gb8&lt;/a&gt;&amp;nbsp;what was the issue when using omz_converter?&lt;/P&gt;
&lt;P&gt;Note: when you want to get the models for a particular demo, it is convenient to pass the models.lst file from the demo folder to the omz_downloader and omz_converter tools.&lt;/P&gt;
&lt;P&gt;I've just tried the following under an OpenVINO 2022.1 Python environment:&lt;/P&gt;
&lt;P&gt;omz_downloader --list &amp;lt;omz_dir&amp;gt;/demos/image_translation_demo/python/models.lst -o &amp;lt;out_dir&amp;gt;&lt;/P&gt;
&lt;P&gt;omz_converter --list&amp;nbsp;&amp;lt;omz_dir&amp;gt;/demos/image_translation_demo/python/models.lst -o &amp;lt;out_dir&amp;gt; -d &amp;lt;out_dir&amp;gt; --precisions FP16&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;and the models listed in the models.lst file were downloaded and converted.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Remember, when you install openvino-dev into the Python environment where you plan to convert models to IR, you also need to specify the extra dependencies for the frameworks you are going to use (or simply list all frameworks supported by OpenVINO).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You can do so by running a command like this:&lt;/P&gt;
&lt;P&gt;pip install openvino==2022.1 openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet,paddle,kaldi]==2022.1&lt;/P&gt;
&lt;P&gt;This will install the OpenVINO runtime and tools, along with all required dependencies of the proper versions, to enable conversion to IR from all these frameworks.&lt;/P&gt;</description>
      <pubDate>Tue, 05 Apr 2022 14:26:08 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1374428#M27008</guid>
      <dc:creator>Vladimir_Dudnik</dc:creator>
      <dc:date>2022-04-05T14:26:08Z</dc:date>
    </item>
    <item>
      <title>Re: image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1374567#M27010</link>
      <description>&lt;P&gt;After the omz_downloader command, the folder structure is as follows:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2022-04-06 09_21_16-model_files.png" style="width: 400px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/28273i9839572644C82FA4/image-size/medium/is-moderation-mode/true?v=v2&amp;amp;px=400&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="2022-04-06 09_21_16-model_files.png" alt="2022-04-06 09_21_16-model_files.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt; &lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2022-04-06 09_20_32-ckpt.png" style="width: 400px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/28274iEC8C967E016C986C/image-size/medium/is-moderation-mode/true?v=v2&amp;amp;px=400&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="2022-04-06 09_20_32-ckpt.png" alt="2022-04-06 09_20_32-ckpt.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt; &lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2022-04-06 09_20_50-networks.png" style="width: 400px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/28275iAA3463206AD2BABC/image-size/medium/is-moderation-mode/true?v=v2&amp;amp;px=400&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="2022-04-06 09_20_50-networks.png" alt="2022-04-06 09_20_50-networks.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt; &lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2022-04-06 09_21_02-util.png" style="width: 400px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/28276iCD3DFDAFEF700C3A/image-size/medium/is-moderation-mode/true?v=v2&amp;amp;px=400&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="2022-04-06 09_21_02-util.png" alt="2022-04-06 09_21_02-util.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So the model download seems to have completed normally.&lt;/P&gt;
&lt;P&gt;The download command in VS Code is:&lt;/P&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;os.system("omz_downloader --list models.lst -o /home/pi/code/models --cache_dir /home/pi/code/models/cache")&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;I converted the models with the following command in VS Code:&lt;/P&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;os.system("omz_converter --list models.lst -d /home/pi/code/models -o /home/pi/code/models")&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;And the segmentation model was converted successfully, as I mentioned earlier.&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;Suggestion: a summary result message from the downloader and converter commands (success, failure, and more details) would be helpful.&lt;/DIV&gt;
&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;gb8&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Wed, 06 Apr 2022 00:39:38 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1374567#M27010</guid>
      <dc:creator>gb8</dc:creator>
      <dc:date>2022-04-06T00:39:38Z</dc:date>
    </item>
    <item>
      <title>RE: image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1374569#M27011</link>
      <description>&lt;P&gt;Note:&lt;/P&gt;
&lt;P&gt;&amp;nbsp; the above convert command returns '256'.&lt;/P&gt;
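&lt;P&gt;The '256' is the raw wait status that os.system() returns on POSIX: the child's exit code is stored in the high byte, so an exit code of 1 comes back as 256. A minimal sketch (the subprocess command below is a placeholder, not the real omz_converter invocation) shows how to decode it, and how subprocess.run exposes the exit code directly:&lt;/P&gt;

```python
import subprocess
import sys

# os.system() returns the POSIX wait status: the exit code shifted
# left by 8 bits, so a converter failing with exit code 1 appears
# as 256.
status = 256
print(status >> 8)  # decodes to exit code 1

# subprocess.run() exposes the exit code directly and can capture
# output, which makes converter failures easier to spot. The command
# here is a placeholder rather than the real omz_converter call.
result = subprocess.run(
    [sys.executable, "-c", "pass"],
    capture_output=True,
    text=True,
)
print(result.returncode)  # prints 0 on success
```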
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 06 Apr 2022 00:46:59 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1374569#M27011</guid>
      <dc:creator>gb8</dc:creator>
      <dc:date>2022-04-06T00:46:59Z</dc:date>
    </item>
    <item>
      <title>RE: image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375003#M27023</link>
      <description>&lt;P&gt;I am very sorry.&lt;/P&gt;
&lt;P&gt;I overlooked the output of the downloader and converter because it is so long and had always gone well for other examples.&lt;/P&gt;
&lt;P&gt;There is a conversion error for cocosnet, as follows:&lt;/P&gt;
&lt;P&gt;============================================&lt;/P&gt;
&lt;DIV class="lm-Widget p-Widget lm-Panel p-Panel jp-OutputArea-child"&gt;
&lt;DIV class="lm-Widget p-Widget jp-RenderedText jp-mod-trusted jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stdout"&gt;
&lt;PRE&gt;################|| Downloading cocosnet ||################

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/util/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/util/util.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/architecture.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/base_network.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/correspondence.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/generator.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/normalization.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/sync_batchnorm/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/sync_batchnorm/batchnorm.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/sync_batchnorm/replicate.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/sync_batchnorm/comm.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/ckpt/latest_net_Corr.pth from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/ckpt/latest_net_G.pth from the cache

========== Replacing text in /home/pi/code/models/public/cocosnet/model_files/models/networks/__init__.py
========== Replacing text in /home/pi/code/models/public/cocosnet/model_files/models/networks/correspondence.py
========== Replacing text in /home/pi/code/models/public/cocosnet/model_files/models/networks/correspondence.py
========== Replacing text in /home/pi/code/models/public/cocosnet/model_files/models/networks/generator.py
========== Replacing text in /home/pi/code/models/public/cocosnet/model_files/util/util.py

################|| Downloading hrnet-v2-c1-segmentation ||################

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/parallel/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/parallel/data_parallel.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/modules/batchnorm.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/modules/comm.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/modules/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/modules/replicate.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/hrnet.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/mobilenet.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/models.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/resnet.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/resnext.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/utils.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/ckpt/decoder_epoch_30.pth from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/ckpt/encoder_epoch_30.pth from the cache

========== Converting cocosnet to ONNX
Conversion to ONNX command: /home/pi/miniconda3/envs/ov/bin/python3.8 -- /home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-path=/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/models/public/cocosnet --model-path=/home/pi/code/models/public/cocosnet/model_files --model-name=Pix2PixModel --import-module=model '--input-shape=[1,151,256,256],[1,3,256,256],[1,151,256,256]' --output-file=/home/pi/code/models/public/cocosnet/cocosnet.onnx '--model-param=corr_weights=r"/home/pi/code/models/public/cocosnet/model_files/ckpt/latest_net_Corr.pth"' '--model-param=gen_weights=r"/home/pi/code/models/public/cocosnet/model_files/ckpt/latest_net_G.pth"' --input-names=input_seg_map,ref_image,ref_seg_map --output-names=exemplar_based_output

apex not found
Network [NoVGGCorrespondence] was created. Total number of parameters: 59.3 million. To see the architecture, do print(network).
Network [SPADEGenerator] was created. Total number of parameters: 96.7 million. To see the architecture, do print(network).
&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="lm-Widget p-Widget lm-Panel p-Panel jp-OutputArea-child"&gt;
&lt;DIV class="lm-Widget p-Widget jp-OutputPrompt jp-OutputArea-prompt"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="lm-Widget p-Widget jp-RenderedText jp-mod-trusted jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr"&gt;
&lt;PRE&gt;/home/pi/code/models/public/cocosnet/model_files/models/networks/correspondence.py:236: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  feature_height = int(image_height // self.opt.down)
/home/pi/code/models/public/cocosnet/model_files/models/networks/correspondence.py:237: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  feature_width = int(image_width // self.opt.down)
Traceback (most recent call last):
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py", line 168, in &amp;lt;module&amp;gt;
    main()
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py", line 163, in main
    convert_to_onnx(model, args.input_shapes, args.output_file, args.input_names, args.output_names, args.inputs_dtype,
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py", line 146, in convert_to_onnx
    torch.onnx.export(model, dummy_inputs, str(output_file), verbose=False, opset_version=opset_version,
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/__init__.py", line 225, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/utils.py", line 85, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/utils.py", line 632, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/utils.py", line 417, in _model_to_graph
    graph = _optimize_graph(graph, operator_export_type,
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/utils.py", line 203, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/__init__.py", line 263, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/utils.py", line 930, in _run_symbolic_function
    symbolic_fn = _find_symbolic_in_registry(domain, op_name, opset_version, operator_export_type)
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/utils.py", line 888, in _find_symbolic_in_registry
    return sym_registry.get_registered_op(op_name, domain, opset_version)
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/symbolic_registry.py", line 111, in get_registered_op
    raise RuntimeError(msg)
RuntimeError: Exporting the operator var to ONNX opset version 11 is not supported. Please open a bug to request ONNX export support for the missing operator.
&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="lm-Widget p-Widget lm-Panel p-Panel jp-OutputArea-child"&gt;
&lt;DIV class="lm-Widget p-Widget jp-OutputPrompt jp-OutputArea-prompt"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="lm-Widget p-Widget jp-RenderedText jp-mod-trusted jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stdout"&gt;
&lt;PRE&gt;========== Converting hrnet-v2-c1-segmentation to ONNX
Conversion to ONNX command: /home/pi/miniconda3/envs/ov/bin/python3.8 -- /home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-path=/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/models/public/hrnet-v2-c1-segmentation --model-path=/home/pi/code/models/public/hrnet-v2-c1-segmentation --model-name=HrnetV2C1 --import-module=model --input-shape=1,3,320,320 --output-file=/home/pi/code/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.onnx '--model-param=encoder_weights=r"/home/pi/code/models/public/hrnet-v2-c1-segmentation/ckpt/encoder_epoch_30.pth"' '--model-param=decoder_weights=r"/home/pi/code/models/public/hrnet-v2-c1-segmentation/ckpt/decoder_epoch_30.pth"' --input-names=data --output-names=prob

Loading weights for net_encoder
Loading weights for net_decoder
ONNX check passed successfully.

========== Converting hrnet-v2-c1-segmentation to IR (FP16)
Conversion command: /home/pi/miniconda3/envs/ov/bin/python3.8 -- /home/pi/miniconda3/envs/ov/bin/mo --framework=onnx --data_type=FP16 --output_dir=/home/pi/code/models/public/hrnet-v2-c1-segmentation/FP16 --model_name=hrnet-v2-c1-segmentation --input=data --reverse_input_channels '--mean_values=data[123.675,116.28,103.53]' '--scale_values=data[58.395,57.12,57.375]' --output=prob --input_model=/home/pi/code/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.onnx '--layout=data(NCHW)' '--input_shape=[1, 3, 320, 320]'

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/pi/code/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.onnx
	- Path for generated IR: 	/home/pi/code/models/public/hrnet-v2-c1-segmentation/FP16
	- IR output name: 	hrnet-v2-c1-segmentation
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	data
	- Output layers: 	prob
	- Input shapes: 	[1, 3, 320, 320]
	- Source layout: 	Not specified
	- Target layout: 	Not specified
	- Layout: 	data(NCHW)
	- Mean values: 	data[123.675,116.28,103.53]
	- Scale values: 	data[58.395,57.12,57.375]
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- User transformations: 	Not specified
	- Reverse input channels: 	True
	- Enable IR generation for fixed input shape: 	False
	- Use the transformations config file: 	None
Advanced parameters:
	- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: 	False
	- Force the usage of new Frontend of Model Optimizer for model conversion into IR: 	False
OpenVINO runtime found in: 	/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino
OpenVINO runtime version: 	2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 	2022.1.0-7019-cdb9bec7210-releases/2022/1
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/pi/code/models/public/hrnet-v2-c1-segmentation/FP16/hrnet-v2-c1-segmentation.xml
[ SUCCESS ] BIN file: /home/pi/code/models/public/hrnet-v2-c1-segmentation/FP16/hrnet-v2-c1-segmentation.bin
[ SUCCESS ] Total execution time: 1.95 seconds. 
[ SUCCESS ] Memory consumed: 595 MB. 
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at &lt;A href="https://docs.openvino.ai/" target="_blank" rel="noopener"&gt;https://docs.openvino.ai&lt;/A&gt;

========== Converting hrnet-v2-c1-segmentation to IR (FP32)
Conversion command: /home/pi/miniconda3/envs/ov/bin/python3.8 -- /home/pi/miniconda3/envs/ov/bin/mo --framework=onnx --data_type=FP32 --output_dir=/home/pi/code/models/public/hrnet-v2-c1-segmentation/FP32 --model_name=hrnet-v2-c1-segmentation --input=data --reverse_input_channels '--mean_values=data[123.675,116.28,103.53]' '--scale_values=data[58.395,57.12,57.375]' --output=prob --input_model=/home/pi/code/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.onnx '--layout=data(NCHW)' '--input_shape=[1, 3, 320, 320]' '--layout=data(NCHW)' '--input_shape=[1, 3, 320, 320]'

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/pi/code/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.onnx
	- Path for generated IR: 	/home/pi/code/models/public/hrnet-v2-c1-segmentation/FP32
	- IR output name: 	hrnet-v2-c1-segmentation
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	data
	- Output layers: 	prob
	- Input shapes: 	[1, 3, 320, 320]
	- Source layout: 	Not specified
	- Target layout: 	Not specified
	- Layout: 	data(NCHW)
	- Mean values: 	data[123.675,116.28,103.53]
	- Scale values: 	data[58.395,57.12,57.375]
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- User transformations: 	Not specified
	- Reverse input channels: 	True
	- Enable IR generation for fixed input shape: 	False
	- Use the transformations config file: 	None
Advanced parameters:
	- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: 	False
	- Force the usage of new Frontend of Model Optimizer for model conversion into IR: 	False
OpenVINO runtime found in: 	/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino
OpenVINO runtime version: 	2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 	2022.1.0-7019-cdb9bec7210-releases/2022/1
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/pi/code/models/public/hrnet-v2-c1-segmentation/FP32/hrnet-v2-c1-segmentation.xml
[ SUCCESS ] BIN file: /home/pi/code/models/public/hrnet-v2-c1-segmentation/FP32/hrnet-v2-c1-segmentation.bin
[ SUCCESS ] Total execution time: 1.35 seconds. 
[ SUCCESS ] Memory consumed: 599 MB. 
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at &lt;A href="https://docs.openvino.ai/" target="_blank" rel="noopener"&gt;https://docs.openvino.ai&lt;/A&gt;

FAILED:
cocosnet&lt;BR /&gt;======================================================================&lt;BR /&gt;&lt;BR /&gt;gb8&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 07 Apr 2022 00:47:40 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375003#M27023</guid>
      <dc:creator>gb8</dc:creator>
      <dc:date>2022-04-07T00:47:40Z</dc:date>
    </item>
    <item>
      <title>RE: image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375006#M27024</link>
      <description>&lt;DIV class="lm-Widget p-Widget lm-Panel p-Panel jp-OutputArea-child"&gt;
&lt;DIV class="lm-Widget p-Widget jp-RenderedText jp-mod-trusted jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stdout"&gt;
&lt;P&gt;I am very sorry.&lt;/P&gt;
&lt;P&gt;I overlooked the downloader and converter output because it is so long and had always gone well for other examples.&lt;/P&gt;
&lt;P&gt;There is a converter error for cocosnet, as follows:&lt;/P&gt;
&lt;P&gt;=================================================&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;################|| Downloading cocosnet ||################

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/util/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/util/util.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/architecture.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/base_network.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/correspondence.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/generator.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/normalization.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/sync_batchnorm/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/sync_batchnorm/batchnorm.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/sync_batchnorm/replicate.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/models/networks/sync_batchnorm/comm.py from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/ckpt/latest_net_Corr.pth from the cache

========== Retrieving /home/pi/code/models/public/cocosnet/model_files/ckpt/latest_net_G.pth from the cache

========== Replacing text in /home/pi/code/models/public/cocosnet/model_files/models/networks/__init__.py
========== Replacing text in /home/pi/code/models/public/cocosnet/model_files/models/networks/correspondence.py
========== Replacing text in /home/pi/code/models/public/cocosnet/model_files/models/networks/correspondence.py
========== Replacing text in /home/pi/code/models/public/cocosnet/model_files/models/networks/generator.py
========== Replacing text in /home/pi/code/models/public/cocosnet/model_files/util/util.py

################|| Downloading hrnet-v2-c1-segmentation ||################

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/parallel/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/parallel/data_parallel.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/modules/batchnorm.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/modules/comm.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/modules/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/lib/nn/modules/replicate.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/hrnet.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/mobilenet.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/models.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/resnet.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/resnext.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/models/utils.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/mit_semseg/__init__.py from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/ckpt/decoder_epoch_30.pth from the cache

========== Retrieving /home/pi/code/models/public/hrnet-v2-c1-segmentation/ckpt/encoder_epoch_30.pth from the cache

========== Converting cocosnet to ONNX
Conversion to ONNX command: /home/pi/miniconda3/envs/ov/bin/python3.8 -- /home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-path=/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/models/public/cocosnet --model-path=/home/pi/code/models/public/cocosnet/model_files --model-name=Pix2PixModel --import-module=model '--input-shape=[1,151,256,256],[1,3,256,256],[1,151,256,256]' --output-file=/home/pi/code/models/public/cocosnet/cocosnet.onnx '--model-param=corr_weights=r"/home/pi/code/models/public/cocosnet/model_files/ckpt/latest_net_Corr.pth"' '--model-param=gen_weights=r"/home/pi/code/models/public/cocosnet/model_files/ckpt/latest_net_G.pth"' --input-names=input_seg_map,ref_image,ref_seg_map --output-names=exemplar_based_output

apex not found
Network [NoVGGCorrespondence] was created. Total number of parameters: 59.3 million. To see the architecture, do print(network).
Network [SPADEGenerator] was created. Total number of parameters: 96.7 million. To see the architecture, do print(network).
&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="lm-Widget p-Widget lm-Panel p-Panel jp-OutputArea-child"&gt;
&lt;DIV class="lm-Widget p-Widget jp-OutputPrompt jp-OutputArea-prompt"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="lm-Widget p-Widget jp-RenderedText jp-mod-trusted jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stderr"&gt;
&lt;PRE&gt;/home/pi/code/models/public/cocosnet/model_files/models/networks/correspondence.py:236: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  feature_height = int(image_height // self.opt.down)
/home/pi/code/models/public/cocosnet/model_files/models/networks/correspondence.py:237: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  feature_width = int(image_width // self.opt.down)
Traceback (most recent call last):
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py", line 168, in &amp;lt;module&amp;gt;
    main()
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py", line 163, in main
    convert_to_onnx(model, args.input_shapes, args.output_file, args.input_names, args.output_names, args.inputs_dtype,
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py", line 146, in convert_to_onnx
    torch.onnx.export(model, dummy_inputs, str(output_file), verbose=False, opset_version=opset_version,
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/__init__.py", line 225, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/utils.py", line 85, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/utils.py", line 632, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/utils.py", line 417, in _model_to_graph
    graph = _optimize_graph(graph, operator_export_type,
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/utils.py", line 203, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/__init__.py", line 263, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/utils.py", line 930, in _run_symbolic_function
    symbolic_fn = _find_symbolic_in_registry(domain, op_name, opset_version, operator_export_type)
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/utils.py", line 888, in _find_symbolic_in_registry
    return sym_registry.get_registered_op(op_name, domain, opset_version)
  File "/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/torch/onnx/symbolic_registry.py", line 111, in get_registered_op
    raise RuntimeError(msg)
RuntimeError: Exporting the operator var to ONNX opset version 11 is not supported. Please open a bug to request ONNX export support for the missing operator.
&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;DIV class="lm-Widget p-Widget lm-Panel p-Panel jp-OutputArea-child"&gt;
&lt;DIV class="lm-Widget p-Widget jp-OutputPrompt jp-OutputArea-prompt"&gt;&amp;nbsp;&lt;/DIV&gt;
&lt;DIV class="lm-Widget p-Widget jp-RenderedText jp-mod-trusted jp-OutputArea-output" data-mime-type="application/vnd.jupyter.stdout"&gt;
&lt;PRE&gt;========== Converting hrnet-v2-c1-segmentation to ONNX
Conversion to ONNX command: /home/pi/miniconda3/envs/ov/bin/python3.8 -- /home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-path=/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino/model_zoo/models/public/hrnet-v2-c1-segmentation --model-path=/home/pi/code/models/public/hrnet-v2-c1-segmentation --model-name=HrnetV2C1 --import-module=model --input-shape=1,3,320,320 --output-file=/home/pi/code/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.onnx '--model-param=encoder_weights=r"/home/pi/code/models/public/hrnet-v2-c1-segmentation/ckpt/encoder_epoch_30.pth"' '--model-param=decoder_weights=r"/home/pi/code/models/public/hrnet-v2-c1-segmentation/ckpt/decoder_epoch_30.pth"' --input-names=data --output-names=prob

Loading weights for net_encoder
Loading weights for net_decoder
ONNX check passed successfully.

========== Converting hrnet-v2-c1-segmentation to IR (FP16)
Conversion command: /home/pi/miniconda3/envs/ov/bin/python3.8 -- /home/pi/miniconda3/envs/ov/bin/mo --framework=onnx --data_type=FP16 --output_dir=/home/pi/code/models/public/hrnet-v2-c1-segmentation/FP16 --model_name=hrnet-v2-c1-segmentation --input=data --reverse_input_channels '--mean_values=data[123.675,116.28,103.53]' '--scale_values=data[58.395,57.12,57.375]' --output=prob --input_model=/home/pi/code/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.onnx '--layout=data(NCHW)' '--input_shape=[1, 3, 320, 320]'

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/pi/code/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.onnx
	- Path for generated IR: 	/home/pi/code/models/public/hrnet-v2-c1-segmentation/FP16
	- IR output name: 	hrnet-v2-c1-segmentation
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	data
	- Output layers: 	prob
	- Input shapes: 	[1, 3, 320, 320]
	- Source layout: 	Not specified
	- Target layout: 	Not specified
	- Layout: 	data(NCHW)
	- Mean values: 	data[123.675,116.28,103.53]
	- Scale values: 	data[58.395,57.12,57.375]
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- User transformations: 	Not specified
	- Reverse input channels: 	True
	- Enable IR generation for fixed input shape: 	False
	- Use the transformations config file: 	None
Advanced parameters:
	- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: 	False
	- Force the usage of new Frontend of Model Optimizer for model conversion into IR: 	False
OpenVINO runtime found in: 	/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino
OpenVINO runtime version: 	2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 	2022.1.0-7019-cdb9bec7210-releases/2022/1
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/pi/code/models/public/hrnet-v2-c1-segmentation/FP16/hrnet-v2-c1-segmentation.xml
[ SUCCESS ] BIN file: /home/pi/code/models/public/hrnet-v2-c1-segmentation/FP16/hrnet-v2-c1-segmentation.bin
[ SUCCESS ] Total execution time: 1.95 seconds. 
[ SUCCESS ] Memory consumed: 595 MB. 
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at &lt;A href="https://docs.openvino.ai/" target="_blank" rel="noopener"&gt;https://docs.openvino.ai&lt;/A&gt;

========== Converting hrnet-v2-c1-segmentation to IR (FP32)
Conversion command: /home/pi/miniconda3/envs/ov/bin/python3.8 -- /home/pi/miniconda3/envs/ov/bin/mo --framework=onnx --data_type=FP32 --output_dir=/home/pi/code/models/public/hrnet-v2-c1-segmentation/FP32 --model_name=hrnet-v2-c1-segmentation --input=data --reverse_input_channels '--mean_values=data[123.675,116.28,103.53]' '--scale_values=data[58.395,57.12,57.375]' --output=prob --input_model=/home/pi/code/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.onnx '--layout=data(NCHW)' '--input_shape=[1, 3, 320, 320]' '--layout=data(NCHW)' '--input_shape=[1, 3, 320, 320]'

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/pi/code/models/public/hrnet-v2-c1-segmentation/hrnet-v2-c1-segmentation.onnx
	- Path for generated IR: 	/home/pi/code/models/public/hrnet-v2-c1-segmentation/FP32
	- IR output name: 	hrnet-v2-c1-segmentation
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	data
	- Output layers: 	prob
	- Input shapes: 	[1, 3, 320, 320]
	- Source layout: 	Not specified
	- Target layout: 	Not specified
	- Layout: 	data(NCHW)
	- Mean values: 	data[123.675,116.28,103.53]
	- Scale values: 	data[58.395,57.12,57.375]
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- User transformations: 	Not specified
	- Reverse input channels: 	True
	- Enable IR generation for fixed input shape: 	False
	- Use the transformations config file: 	None
Advanced parameters:
	- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: 	False
	- Force the usage of new Frontend of Model Optimizer for model conversion into IR: 	False
OpenVINO runtime found in: 	/home/pi/miniconda3/envs/ov/lib/python3.8/site-packages/openvino
OpenVINO runtime version: 	2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 	2022.1.0-7019-cdb9bec7210-releases/2022/1
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/pi/code/models/public/hrnet-v2-c1-segmentation/FP32/hrnet-v2-c1-segmentation.xml
[ SUCCESS ] BIN file: /home/pi/code/models/public/hrnet-v2-c1-segmentation/FP32/hrnet-v2-c1-segmentation.bin
[ SUCCESS ] Total execution time: 1.35 seconds. 
[ SUCCESS ] Memory consumed: 599 MB. 
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at &lt;A href="https://docs.openvino.ai/" target="_blank" rel="noopener"&gt;https://docs.openvino.ai&lt;/A&gt;

FAILED:
cocosnet&lt;BR /&gt;======================================================&lt;BR /&gt;gb8&lt;/PRE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;</description>
      <pubDate>Thu, 07 Apr 2022 00:50:08 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375006#M27024</guid>
      <dc:creator>gb8</dc:creator>
      <dc:date>2022-04-07T00:50:08Z</dc:date>
    </item>
    <item>
      <title>Re: image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375019#M27026</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi gb8,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Below are the results of converting the CocosNet model via Command Prompt. As you can see, the result messages (success, fail and info) are shown.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Fail.jpeg" style="width: 999px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/28331i4BC651982DC4129C/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="Fail.jpeg" alt="Fail.jpeg" /&gt;&lt;/span&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Success.jpeg" style="width: 873px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/28332i13FCF1C778F7FAB7/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="Success.jpeg" alt="Success.jpeg" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;I tried to convert the CocosNet model without installing the dependencies, and the conversion failed.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Please make sure you run the command below to install the dependencies:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;pip install openvino-dev[pytorch]==2022.1&lt;/SPAN&gt;&lt;/P&gt;
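As a minimal sketch of the install step above (the package name, extra, and version pin come from this thread; the quoting and the verification command are my additions):

```shell
# Pin openvino-dev and its PyTorch extra to the 2022.1 release.
# Quoting the requirement keeps the shell from globbing the square brackets.
pip install "openvino-dev[pytorch]==2022.1"

# Optional: confirm which version pip actually resolved.
pip show openvino-dev
```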
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Peh&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 07 Apr 2022 01:33:59 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375019#M27026</guid>
      <dc:creator>Peh_Intel</dc:creator>
      <dc:date>2022-04-07T01:33:59Z</dc:date>
    </item>
    <item>
      <title>Re: image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375030#M27027</link>
      <description>&lt;P&gt;Hi Peh.&lt;/P&gt;
&lt;P&gt;Your suggestion worked.&lt;/P&gt;
&lt;P&gt;Previously, I had installed using the following command:&lt;/P&gt;
&lt;P&gt;&amp;nbsp; - 'pip install openvino-dev[caffe,kaldi,mxnet,onnx,pytorch,tensorflow2]'&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now, using the command&amp;nbsp;'&lt;SPAN&gt;pip install openvino-dev[pytorch]==2022.1'&lt;/SPAN&gt;,&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;it succeeded.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Is there a difference between the two pip commands?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Thanks a lot.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;gb8&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 07 Apr 2022 02:33:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375030#M27027</guid>
      <dc:creator>gb8</dc:creator>
      <dc:date>2022-04-07T02:33:18Z</dc:date>
    </item>
    <item>
      <title>Re:image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375100#M27033</link>
      <description>&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Hi gb8,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Thanks for confirming you are running well now.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Both pip commands are actually the same, the only difference is just to specify the version. Installing OpenVINO™ from PyPI without specifying the version, it will install the latest version at that moment.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Peh&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 07 Apr 2022 06:51:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375100#M27033</guid>
      <dc:creator>Peh_Intel</dc:creator>
      <dc:date>2022-04-07T06:51:25Z</dc:date>
    </item>
    <item>
      <title>RE: Re:image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375890#M27049</link>
      <description>&lt;P&gt;Peh.&lt;/P&gt;
&lt;P&gt;Thank you for detailed explanation.&lt;/P&gt;
&lt;P&gt;After the installation succeeded, I ran the demo according to the documentation. It says only the '-ri' (reference images) argument is required, and the other image arguments are optional.&lt;/P&gt;
&lt;P&gt;Running with only that argument, it shows the following error:&lt;/P&gt;
&lt;P&gt;=====================================&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="gb8_1-1649573398490.png" style="width: 572px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/28422i3C2FED80F987EE47/image-dimensions/572x246/is-moderation-mode/true?v=v2&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" width="572" height="246" role="button" title="gb8_1-1649573398490.png" alt="gb8_1-1649573398490.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;==========================================&lt;/P&gt;
&lt;P&gt;So I rearranged the arguments as '-ii imagefile -ri imagefile' (the same image file, for testing), and then the error message is as follows:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="gb8_0-1649573220550.png" style="width: 565px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/28421i4DDE3A9A55C92858/image-dimensions/565x494/is-moderation-mode/true?v=v2&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" width="565" height="494" role="button" title="gb8_0-1649573220550.png" alt="gb8_0-1649573220550.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;I am not familiar with image translation; I just want to introduce this model to people.&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Can I get suitable images for this demonstration to work, if possible?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;gb8&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 10 Apr 2022 06:54:09 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1375890#M27049</guid>
      <dc:creator>gb8</dc:creator>
      <dc:date>2022-04-10T06:54:09Z</dc:date>
    </item>
    <item>
      <title>Re:image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1376071#M27055</link>
      <description>&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Hi gb8,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;There are two ways to run with this demo. You can refer to commands &lt;/SPAN&gt;&lt;A href="https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/image_translation_demo/python#running" rel="noopener noreferrer" target="_blank" style="font-size: 16px;"&gt;here&lt;/A&gt;&lt;SPAN style="font-size: 16px;"&gt;. &lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Besides, there are also some &lt;/SPAN&gt;&lt;A href="https://github.com/openvinotoolkit/open_model_zoo/pull/3455" rel="noopener noreferrer" target="_blank" style="font-size: 16px;"&gt;fixes&lt;/A&gt;&lt;SPAN style="font-size: 16px;"&gt; to the preprocessing for Segmentation and Translation models.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Please modify these two files (models.py and preprocessing.py) in the &lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;I style="font-size: 16px;"&gt;&amp;lt;omz&amp;gt;/demos/image_translation_demo/python/image_translation_demo&lt;/I&gt;&lt;SPAN style="font-size: 16px;"&gt; directory.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="https://github.com/openvinotoolkit/open_model_zoo/commit/d3ddd45e8455f12185c76efd9d0c38097d741a0d" rel="noopener noreferrer" target="_blank" style="font-size: 16px;"&gt;Here&lt;/A&gt;&lt;SPAN style="font-size: 16px;"&gt; are the changes that required to be made.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Peh&lt;/SPAN&gt;&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 11 Apr 2022 07:41:56 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1376071#M27055</guid>
      <dc:creator>Peh_Intel</dc:creator>
      <dc:date>2022-04-11T07:41:56Z</dc:date>
    </item>
    <item>
      <title>RE: Re:image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1376409#M27059</link>
      <description>&lt;P&gt;OK.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I see.&lt;/P&gt;
&lt;P&gt;I am not familiar with image translation as mentioned before,&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I would rather postpone the test until everything is fixed.&lt;/P&gt;
&lt;P&gt;I respect you for hard work if not impossible.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks a lot.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;gb8&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 12 Apr 2022 05:42:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1376409#M27059</guid>
      <dc:creator>gb8</dc:creator>
      <dc:date>2022-04-12T05:42:51Z</dc:date>
    </item>
    <item>
      <title>Re: image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1376709#M27071</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi gb8,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;I’ve validated that the demo works fine on my side after modifying these two files (models.py and preprocessing.py).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Command used:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;python image_translation_demo.py -m_trn cocosnet.xml -m_seg hrnet-v2-c1-segmentation.xml -ii horse.jpg -ri zebra.jpg -o output&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="example.JPG" style="width: 999px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/28507i5F7F7935B3CBDB02/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="example.JPG" alt="example.JPG" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Besides, I have attached the two modified files. You can replace these two files in the directory below:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;I&gt;&amp;lt;omz&amp;gt;/demos/image_translation_demo/python/image_translation_demo&lt;/I&gt;&lt;/P&gt;
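As a hedged sketch of the replacement step above (the OMZ variable, the backup copies, and the assumption that the attached files sit in the current directory are mine, not part of the original instructions):

```shell
# OMZ is assumed to point at your open_model_zoo checkout.
DEMO_DIR="$OMZ/demos/image_translation_demo/python/image_translation_demo"

# Keep backups of the originals before overwriting (my addition).
cp "$DEMO_DIR/models.py" "$DEMO_DIR/models.py.bak"
cp "$DEMO_DIR/preprocessing.py" "$DEMO_DIR/preprocessing.py.bak"

# Copy the attached, modified files over the originals.
cp models.py preprocessing.py "$DEMO_DIR/"
```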
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Peh&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 13 Apr 2022 03:16:31 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1376709#M27071</guid>
      <dc:creator>Peh_Intel</dc:creator>
      <dc:date>2022-04-13T03:16:31Z</dc:date>
    </item>
    <item>
      <title>Re: image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1376727#M27072</link>
      <description>&lt;P&gt;Yes. I tested it and everything is OK.&lt;/P&gt;
&lt;P&gt;Thank you.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;gb8&lt;/P&gt;</description>
      <pubDate>Wed, 13 Apr 2022 03:41:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1376727#M27072</guid>
      <dc:creator>gb8</dc:creator>
      <dc:date>2022-04-13T03:41:18Z</dc:date>
    </item>
    <item>
      <title>Re:image translation model</title>
      <link>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1377305#M27085</link>
      <description>&lt;P&gt;&lt;SPAN style="font-size: 16px;"&gt;Hi gb8,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: &amp;quot;Intel Clear&amp;quot;, sans-serif; font-size: 16px;"&gt;This thread will no longer be monitored since this issue has been resolved.&amp;nbsp;If you need any additional information from Intel, please submit a new question.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: &amp;quot;Intel Clear&amp;quot;, sans-serif; font-size: 16px;"&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: &amp;quot;Intel Clear&amp;quot;, sans-serif; font-size: 16px;"&gt;Peh&lt;/SPAN&gt;&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 15 Apr 2022 02:39:05 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/image-translation-model/m-p/1377305#M27085</guid>
      <dc:creator>Peh_Intel</dc:creator>
      <dc:date>2022-04-15T02:39:05Z</dc:date>
    </item>
  </channel>
</rss>