Intel® Distribution of OpenVINO™ Toolkit

Some questions about OpenVINO 2019 R1 FPGA version

zgjja
Beginner

1. I try to quantize my model to INT8 and run it on an Intel Arria 10 GX FPGA. According to the manual, I configure the configuration YML file and the definition YML file, but when I choose the device as hetero:fpga,cpu it fails with the following error message:

10:52:26 openvino.tools.calibration WARNING: /opt/intel/openvino_2019.1.094/python/python3.5/lib/python3.5/site-packages/openvino/tools/accuracy_checker/accuracy_checker/config/config_reader.py:279: UserWarning: Model "Stone_example" has no launchers
  warnings.warn('Model "{}" has no launchers'.format(model['name']))

Traceback (most recent call last):
  File "calibrate.py", line 20, in <module>
    with calibration.CommandLineProcessor.process() as config:
  File "/opt/intel/openvino_2019.1.094/python/python3.5/lib/python3.5/site-packages/openvino/tools/calibration/command_line_processor.py", line 44, in process
    updated_config = ConfigurationFilter.filter(merged_config, args.metric_name, args.metric_type, default_logger)
  File "/opt/intel/openvino_2019.1.094/python/python3.5/lib/python3.5/site-packages/openvino/tools/utils/configuration_filter.py", line 33, in filter
    raise ValueError("there are no models")
ValueError: there are no models

 

But when I try to quantize it for CPU, it works.

 

2. The second question is related to the one above: it seems that the OpenVINO 2019 quantization tool does not change the size of the .bin file after quantization, while version 2020 and above does decrease the .bin file size after quantization. Is this expected behavior of the 2019 version, or did I do something wrong?

 

My two YAML files:

    --config

models:
  - name: Stone_example
    launchers:
      - framework: dlsdk
        device: hetero:FPGA,CPU
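        # caffe_model expects the network definition (.prototxt) and
        # caffe_weights the trained weights (.caffemodel)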
        caffe_model: VGG_3_final_255.prototxt
        caffe_weights: VGG_3_final_255.caffemodel
        # model: FP32/VGG_3_final_255.xml 
        # weights: FP32/VGG_3_final_255.bin
        # launcher returns raw result, so it should be converted
        # to an appropriate representation with adapter
        adapter: classification
        # batch size
        batch: 1

    datasets:
        # uniquely distinguishable name for dataset
        # note that all other steps are specific for this dataset only
        # if you need to test topology on multiple datasets, you need to specify
        # every step explicitly for each dataset
      - name: stone_dataset
        # directory where input images are searched.
        # prefixed with directory specified in "-s/--source" option
        data_source: train_calibration/data
        # parameters for annotation conversion to a common annotation representation format.
        annotation_conversion:
          # specified which annotation converter will be used
          #  In order to do this you need to provide your own annotation converter,
          # i.e. implement BaseFormatConverter interface.
          # All annotation converters are stored in accuracy_checker/annotation_converters directory.
          converter: imagenet
          # converter specific parameters.
          # Full range available options you can find in accuracy_checker/annotation_converters/README.md
          # relative paths will be merged with "-s/--source" option
          annotation_file: train_calibration/annotation.txt
          labels_file: train_calibration/labels.txt

        metrics:
          - type: accuracy
            top_k: 1

    --definitions

 

launchers:
  - framework: dlsdk
    device: hetero:fpga,cpu
    mo_params:
      data_type: FP32
      input_shape: "(1, 3, 224, 224)"
datasets:
  - name: stone_dataset
    data_source: /home/ivip/zjq/stone/train_calibration/data
    annotation: /home/ivip/zjq/stone/annotations/stone_calibration.pickle
    dataset_meta: /home/ivip/zjq/stone/annotations/stone_calibration.json
    preprocessing:
      - type: crop
        size: 224
    metrics:
      - name: accuracy @ top1
        type: accuracy
        top_k: 1
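
For reference, these two files are passed to calibrate.py roughly as follows (illustrative only: -c and -d are the configuration and definitions flags, -s is the source directory mentioned in the comments above, -td the target device, and the file names and source path are placeholders):

    python3 calibrate.py \
        -c config.yml \
        -d definitions.yml \
        -s <source_directory> \
        -td CPU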

 Thanks for reading.

7 Replies
Zulkifli_Intel
Moderator

Hello Zgjja,

 

Thank you for reaching out to us.

 

We are currently looking into this issue and we will get back to you soon.

 

Sincerely,

Zulkifli

 

Zulkifli_Intel
Moderator

 

Try to specify the bitstream directory (-b) in the POT command, and also pass -td hetero:FPGA,CPU on the command line rather than setting the device in the configuration file.
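
For example, the command would look something along these lines (the configuration file names and the bitstream directory are placeholders; choose the bitstream directory that matches your board):

    python3 calibrate.py \
        -c config.yml \
        -d definitions.yml \
        -td HETERO:FPGA,CPU \
        -b <path_to_bitstream_directory>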

 

Sincerely,

Zulkifli

 

zgjja
Beginner

Thanks for your answer. I finally solved this by rewriting the configuration YAML file.

By the way, I noticed that Intel provides a tool called Intel DLA to develop CNN primitives, but the download links are not available. Do you know the reason? If the software is truly unavailable, is there some way for me to read the original OpenCL source code for the bitstreams provided by OpenVINO?

I tried browsing the OpenVINO GitHub repository, and it seems that even the source code of the DLIA plugin is not there.

Zulkifli_Intel
Moderator
zgjja
Beginner

Yes, I did check the online course link you mentioned above, and that is why I asked. The course introduces the "DLA suite", which seems to have been introduced in 2017, yet the download links are not available. So it is not a problem with OpenCL; it is a problem with DLA.

Thanks.

Zulkifli_Intel
Moderator

Hello,

 

The DLA source code is available under license, and here is the discussion related to this issue. As additional information, FPGA is not available in open source.

 

Sincerely,

Zulkifli


Zulkifli_Intel
Moderator

Hi,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Sincerely,

Zulkifli

