intel_edge_aibox : b7664aae-xxxx-xxxx-xxxx-8a5637e2xxxx user@NUC13:~/edge_aibox$ user@NUC13:~/edge_aibox$ ./edgesoftware install [sudo] password for user: Please enter the Product Key. The Product Key is contained in the email you received from Intel confirming your download: b7664aae-xxxx-xxxx-xxxx-8a5637e2xxxx Starting the setup... ESB CLI version: 2024.1 Target OS: Ubuntu 22.04 Python version: 3.10.12 Checking Internet connection Connected to the Internet Validating package product key Successfully validated Product Key Checking for prerequisites All dependencies met -------------------SYSTEM INFO-------------------- Package Name: Intel Edge AI Box 3.1 Product Name: Intel(R) Client Systems NUC13ANHi5 CPU SKU: 13th Gen Intel(R) CoreT i5-1340P Memory Size: 16 GB Operating System: Ubuntu 22.04.4 LTS Kernel Version: 6.5.0-25-generic Accelerator: None CPU Utilization: 0.6% Available Disk Space: 382 GB Starting installation Downloading modules... Downloading component esb_common ZIP file for module 5e8c4742e02f17002a2a6976 already exists. Validating it... Module validation passed for 5e8c4742e02f17002a2a6976 Skipping download... Downloading component bmra_base ZIP file for module 652c9b5bd229cd102cfb088e already exists. Validating it... Module validation passed for 652c9b5bd229cd102cfb088e Skipping download... Downloading component bmra_container_base ZIP file for module 652c9ae1d229cd102cfac095 already exists. Validating it... Module validation passed for 652c9ae1d229cd102cfac095 Skipping download... Downloading component bmra_container_base_devel ZIP file for module 652c9b44d229cd102cfafa28 already exists. Validating it... Module validation passed for 652c9b44d229cd102cfafa28 Skipping download... Downloading component bmra_container_dlstreamer ZIP file for module 652c9b29d229cd102cfadd68 already exists. Validating it... Module validation passed for 652c9b29d229cd102cfadd68 Skipping download... Downloading component bmra_container_opencv_ffmpeg ZIP file for module 652c9b13d229cd102cfacf01 already exists. Validating it... Module validation passed for 652c9b13d229cd102cfacf01 Skipping download... Downloading component modules/Aibox_Test_Module ZIP file for module 652c9411d229cd102cfa954a already exists. Validating it... Module validation passed for 652c9411d229cd102cfa954a Skipping download... Downloading component ESDQ ZIP file for module 64cb337f89c314c29867e05c already exists. Validating it... Module validation passed for 64cb337f89c314c29867e05c Skipping download... Downloading modules completed... Installing shared module 'esb_common' Unzipping the shared module 'esb_common'... 
running install running bdist_egg running egg_info writing esb_common.egg-info/PKG-INFO writing dependency_links to esb_common.egg-info/dependency_links.txt writing top-level names to esb_common.egg-info/top_level.txt reading manifest file 'esb_common.egg-info/SOURCES.txt' writing manifest file 'esb_common.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib warning: install_lib: 'build/lib' does not exist -- no Python modules to install creating build/bdist.linux-x86_64/egg creating build/bdist.linux-x86_64/egg/EGG-INFO copying esb_common.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO copying esb_common.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying esb_common.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying esb_common.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO zip_safe flag not set; analyzing archive contents... creating 'dist/esb_common-0.1-py3.10.egg' and adding 'build/bdist.linux-x86_64/egg' to it removing 'build/bdist.linux-x86_64/egg' (and everything under it) Processing esb_common-0.1-py3.10.egg Removing /usr/local/lib/python3.10/dist-packages/esb_common-0.1-py3.10.egg Copying esb_common-0.1-py3.10.egg to /usr/local/lib/python3.10/dist-packages esb-common 0.1 is already the active version in easy-install.pth Installed /usr/local/lib/python3.10/dist-packages/esb_common-0.1-py3.10.egg Processing dependencies for esb-common==0.1 Finished processing dependencies for esb-common==0.1 Successfully installed shared module 'esb_common'. Modules to be installed by package are ['bmra_base', 'bmra_container_base', 'bmra_container_base_devel', 'bmra_container_dlstreamer', 'bmra_container_opencv_ffmpeg', 'modules/Aibox_Test_Module', 'ESDQ'] bmra_base is already installed. Type YES to reinstall or NO to skip installation. NO Installing bmra_container_base [+] Building 1.1s (9/9) FINISHED docker:default => [internal] load build definition from Dockerfile.test-gpu 0.1s => => transferring dockerfile: 312B 0.0s => [internal] load metadata for docker.io/library/aibox-base:3.1 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [1/4] FROM docker.io/library/aibox-base:3.1 0.1s => [internal] load build context 0.0s => => transferring context: 1.40kB 0.0s => [2/4] WORKDIR /home/aibox 0.0s => [3/4] COPY --chown=aibox:aibox test_gpu_entry.sh . 
0.1s => [4/4] RUN chmod +x test_gpu_entry.sh 0.4s => exporting to image 0.2s => => exporting layers 0.2s => => writing image sha256:ef6d23d4bc4c41b8299ce23ebaf10990f27c8bec314f3c12884e9f4d2a777f09 0.0s => => naming to docker.io/library/test-gpu:latest 0.0s xauth: /home/user/.Xauthority not writable, changes will be ignored ---------------------Check glxinfo ------------------- Authorization required, but no authorization protocol specified Error: unable to open display :0 Authorization required, but no authorization protocol specified Error: unable to open display :0 glxinfo: Failed ---------------------Check clinfo ------------------- Device Name Intel(R) Graphics [0xa7a0] Device Vendor Intel(R) Corporation Device Vendor ID 0x8086 Device Version OpenCL 3.0 NEO Device UUID 8680a0a7-0400-0000-0002-000000000000 Valid Device LUID No Device LUID 20ce-42deff7f0000 Device Node Mask 0 Device Numeric Version 0xc00000 (3.0.0) Device OpenCL C Version OpenCL C 1.2 Device OpenCL C all versions OpenCL C 0x400000 (1.0.0) Device OpenCL C features __opencl_c_int64 0xc00000 (3.0.0) Device Type GPU Device Profile FULL_PROFILE Device Available Yes Device Partition (core) Unified memory for Host and Device Yes Device enqueue capabilities (n/a) Device Extensions cl_khr_byte_addressable_store cl_khr_device_uuid cl_khr_fp16 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_intel_command_queue_families cl_intel_subgroups cl_intel_required_subgroup_size cl_intel_subgroups_short cl_khr_spir cl_intel_accelerator cl_intel_driver_diagnostics cl_khr_priority_hints cl_khr_throttle_hints cl_khr_create_command_queue cl_intel_subgroups_char cl_intel_subgroups_long cl_khr_il_program cl_intel_mem_force_host_memory cl_khr_subgroup_extended_types cl_khr_subgroup_non_uniform_vote cl_khr_subgroup_ballot cl_khr_subgroup_non_uniform_arithmetic cl_khr_subgroup_shuffle cl_khr_subgroup_shuffle_relative cl_khr_subgroup_clustered_reduce cl_intel_device_attribute_query cl_khr_suggested_local_work_size cl_intel_split_work_group_barrier cl_intel_spirv_media_block_io cl_intel_spirv_subgroups cl_khr_spirv_no_integer_wrap_decoration cl_intel_unified_shared_memory cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_ext_float_atomics cl_intel_planar_yuv cl_intel_packed_yuv cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_image2d_from_buffer cl_khr_depth_images cl_khr_3d_image_writes cl_intel_media_block_io cl_intel_subgroup_local_block_io cl_khr_integer_dot_product cl_khr_gl_sharing cl_khr_gl_depth_images cl_khr_gl_event cl_khr_gl_msaa_sharing cl_intel_va_api_media_sharing cl_intel_sharing_format_query cl_khr_pci_bus_info Device Extensions with Version cl_khr_byte_addressable_store 0x400000 (1.0.0) clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) 
Success [INTEL] Device Name Intel(R) Graphics [0xa7a0] Device Name Intel(R) Graphics [0xa7a0] Device Name Intel(R) Graphics [0xa7a0] Device Name Intel(R) Graphics [0xa7a0] Device Name Intel(R) Graphics [0xa7a0] Device Name Intel(R) Graphics [0xa7a0] Device Name Intel(R) Graphics [0xa7a0] ---------------------Check vulkaninfo ----------------- Authorization required, but no authorization protocol specified Authorization required, but no authorization protocol specified Authorization required, but no authorization protocol specified Authorization required, but no authorization protocol specified Authorization required, but no authorization protocol specified XCB failed to connect to the X server due to error:1. ERROR at ./vulkaninfo/vulkaninfo.h:836: AppCreateXcbSurface failed to establish connection Authorization required, but no authorization protocol specified Authorization required, but no authorization protocol specified Authorization required, but no authorization protocol specified Authorization required, but no authorization protocol specified Authorization required, but no authorization protocol specified XCB failed to connect to the X server due to error:1. ERROR at ./vulkaninfo/vulkaninfo.h:836: AppCreateXcbSurface failed to establish connection vulkaninfo: Failed ---------------------Check vainfo -------------------- Authorization required, but no authorization protocol specified error: can't connect to X server! libva info: VA-API version 1.20.0 libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so libva info: Found init function __vaDriverInit_1_19 libva info: va_openDriver() returns 0 vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 23.3.1 () Authorization required, but no authorization protocol specified error: can't connect to X server! libva info: VA-API version 1.20.0 libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so libva info: Found init function __vaDriverInit_1_19 libva info: va_openDriver() returns 0 vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 23.3.1 () Failed: Get vulkaninfo: Failed. Get glxinfo: Failed. Failed to install bmra_container_base took 8.36 seconds [+] Building 0.6s (9/9) FINISHED docker:default => [internal] load build definition from Dockerfile.test-openvino-dev 0.0s => => transferring dockerfile: 360B 0.0s => [internal] load metadata for docker.io/library/aibox-base-devel:3.1 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [1/4] FROM docker.io/library/aibox-base-devel:3.1 0.1s => [internal] load build context 0.0s => => transferring context: 1.24kB 0.0s => [2/4] WORKDIR /home/aibox 0.1s => [3/4] COPY --chown=aibox:aibox test_openvino_dev_entry.sh . 0.1s => [4/4] RUN chmod +x test_openvino_dev_entry.sh 0.3s => exporting to image 0.1s => => exporting layers 0.1s => => writing image sha256:c1f8a7e8c9d1ff022c7255c076c1ff775b8a39be56102a70838ab1ded902692d 0.0s => => naming to docker.io/library/test-openvino-dev:latest 0.0s xauth: /home/user/.Xauthority not writable, changes will be ignored [setupvars.sh] OpenVINO environment initialized ################|| Downloading mtcnn-p ||################ ========== Downloading /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/mtcnn-p.prototxt ... 100%, 2 KB, 5746 KB/s, 0 seconds passed ========== Downloading /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/mtcnn-p.caffemodel ... 
100%, 27 KB, 70 KB/s, 0 seconds passed ========== Replacing text in /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/mtcnn-p.prototxt ========== Replacing text in /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/mtcnn-p.prototxt ################|| Downloading mtcnn-r ||################ ========== Downloading /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-r/mtcnn-r.prototxt ... 100%, 3 KB, 11574 KB/s, 0 seconds passed ========== Downloading /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-r/mtcnn-r.caffemodel ... 100%, 398 KB, 407 KB/s, 0 seconds passed ################|| Downloading mtcnn-o ||################ ========== Downloading /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-o/mtcnn-o.prototxt ... 100%, 3 KB, 18722 KB/s, 0 seconds passed ========== Downloading /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-o/mtcnn-o.caffemodel ... 100%, 1521 KB, 1019 KB/s, 1 seconds passed ========== Converting mtcnn-p to IR (FP16) Conversion command: /home/aibox/venv_openvino_2023/bin/python3 -- /home/aibox/venv_openvino_2023/bin/mo --framework=caffe --output_dir=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/FP16 --model_name=mtcnn-p --input=data '--mean_values=data[127.5,127.5,127.5]' '--scale_values=data[128.0]' --output=conv4-2,prob1 --input_model=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/mtcnn-p.caffemodel --input_proto=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/mtcnn-p.prototxt '--layout=data(NCWH)' '--input_shape=[1, 3, 720, 1280]' --compress_to_fp16=True [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. 
[ SUCCESS ] XML file: /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/FP16/mtcnn-p.xml [ SUCCESS ] BIN file: /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/FP16/mtcnn-p.bin ========== Converting mtcnn-p to IR (FP32) Conversion command: /home/aibox/venv_openvino_2023/bin/python3 -- /home/aibox/venv_openvino_2023/bin/mo --framework=caffe --output_dir=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/FP32 --model_name=mtcnn-p --input=data '--mean_values=data[127.5,127.5,127.5]' '--scale_values=data[128.0]' --output=conv4-2,prob1 --input_model=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/mtcnn-p.caffemodel --input_proto=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/mtcnn-p.prototxt '--layout=data(NCWH)' '--input_shape=[1, 3, 720, 1280]' --compress_to_fp16=True '--layout=data(NCWH)' '--input_shape=[1, 3, 720, 1280]' --compress_to_fp16=False [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. [ SUCCESS ] XML file: /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/FP32/mtcnn-p.xml [ SUCCESS ] BIN file: /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-p/FP32/mtcnn-p.bin ========== Converting mtcnn-r to IR (FP16) Conversion command: /home/aibox/venv_openvino_2023/bin/python3 -- /home/aibox/venv_openvino_2023/bin/mo --framework=caffe --output_dir=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-r/FP16 --model_name=mtcnn-r --input=data '--mean_values=data[127.5,127.5,127.5]' '--scale_values=data[128.0]' --output=conv5-2,prob1 --input_model=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-r/mtcnn-r.caffemodel --input_proto=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-r/mtcnn-r.prototxt '--layout=data(NCWH)' '--input_shape=[1, 3, 24, 24]' --compress_to_fp16=True [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. 
[ SUCCESS ] XML file: /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-r/FP16/mtcnn-r.xml [ SUCCESS ] BIN file: /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-r/FP16/mtcnn-r.bin ========== Converting mtcnn-r to IR (FP32) Conversion command: /home/aibox/venv_openvino_2023/bin/python3 -- /home/aibox/venv_openvino_2023/bin/mo --framework=caffe --output_dir=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-r/FP32 --model_name=mtcnn-r --input=data '--mean_values=data[127.5,127.5,127.5]' '--scale_values=data[128.0]' --output=conv5-2,prob1 --input_model=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-r/mtcnn-r.caffemodel --input_proto=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-r/mtcnn-r.prototxt '--layout=data(NCWH)' '--input_shape=[1, 3, 24, 24]' --compress_to_fp16=True '--layout=data(NCWH)' '--input_shape=[1, 3, 24, 24]' --compress_to_fp16=False [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. [ SUCCESS ] XML file: /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-r/FP32/mtcnn-r.xml [ SUCCESS ] BIN file: /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-r/FP32/mtcnn-r.bin ========== Converting mtcnn-o to IR (FP16) Conversion command: /home/aibox/venv_openvino_2023/bin/python3 -- /home/aibox/venv_openvino_2023/bin/mo --framework=caffe --output_dir=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-o/FP16 --model_name=mtcnn-o --input=data '--mean_values=data[127.5,127.5,127.5]' '--scale_values=data[128.0]' --output=conv6-2,conv6-3,prob1 --input_model=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-o/mtcnn-o.caffemodel --input_proto=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-o/mtcnn-o.prototxt '--layout=data(NCWH)' '--input_shape=[1, 3, 48, 48]' --compress_to_fp16=True [ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False. Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. 
[ SUCCESS ] XML file: /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-o/FP16/mtcnn-o.xml [ SUCCESS ] BIN file: /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-o/FP16/mtcnn-o.bin ========== Converting mtcnn-o to IR (FP32) Conversion command: /home/aibox/venv_openvino_2023/bin/python3 -- /home/aibox/venv_openvino_2023/bin/mo --framework=caffe --output_dir=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-o/FP32 --model_name=mtcnn-o --input=data '--mean_values=data[127.5,127.5,127.5]' '--scale_values=data[128.0]' --output=conv6-2,conv6-3,prob1 --input_model=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-o/mtcnn-o.caffemodel --input_proto=/home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-o/mtcnn-o.prototxt '--layout=data(NCWH)' '--input_shape=[1, 3, 48, 48]' --compress_to_fp16=True '--layout=data(NCWH)' '--input_shape=[1, 3, 48, 48]' --compress_to_fp16=False [ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11. Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html [ SUCCESS ] Generated IR version 11 model. [ SUCCESS ] XML file: /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-o/FP32/mtcnn-o.xml [ SUCCESS ] BIN file: /home/aibox/data/openvino_dev_output_aibox-base-devel/models/public/mtcnn/mtcnn-o/FP32/mtcnn-o.bin Passed bmra_container_base_devel is already installed. Type YES to reinstall or NO to skip installation. NO [+] Building 0.6s (9/9) FINISHED docker:default => [internal] load build definition from Dockerfile.test-dlstreamer 0.0s => => transferring dockerfile: 352B 0.0s => [internal] load metadata for docker.io/library/aibox-dlstreamer:3.1 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [1/4] FROM docker.io/library/aibox-dlstreamer:3.1 0.1s => [internal] load build context 0.0s => => transferring context: 4.34kB 0.0s => [2/4] WORKDIR /home/aibox 0.0s => [3/4] COPY --chown=aibox:aibox test_dlstreamer_entry.sh . 0.0s => [4/4] RUN chmod +x test_dlstreamer_entry.sh 0.3s => exporting to image 0.1s => => exporting layers 0.1s => => writing image sha256:926f85d81fb760f6fe3c05795fe62b345db1b3893257e4f46d0bb3d425180188 0.0s => => naming to docker.io/library/test-dlstreamer:latest 0.0s xauth: /home/user/.Xauthority not writable, changes will be ignored [setupvars.sh] OpenVINO environment initialized [setupvars.sh] GStreamer 1.20 framework initialized [setupvars.sh] GStreamer 1.20 plugins path initialized [setupvars.sh] Intel(R) DL Streamer environment initialized (gst-plugin-scanner:25): GStreamer-WARNING **: 21:06:17.581: Failed to load plugin '/opt/intel/dlstreamer/lib/gstreamer-1.0/libgstdlstreamer_sycl.so': libsvml.so: cannot open shared object file: No such file or directory ** (gst-plugin-scanner:25): CRITICAL **: 21:06:17.700: pygobject initialization failed ** (gst-plugin-scanner:25): CRITICAL **: 21:06:17.948: pygobject initialization failed Setting pipeline to PAUSED ... Authorization required, but no authorization protocol specified Authorization required, but no authorization protocol specified Pipeline is PREROLLING ... 
Got context from element 'vaapiencodeh264-0': gst.vaapi.Display=context, gst.vaapi.Display=(GstVaapiDisplay)"\(GstVaapiDisplayDRM\)\ vaapidisplaydrm1", gst.vaapi.Display.GObject=(GstObject)"\(GstVaapiDisplayDRM\)\ vaapidisplaydrm1"; Redistribute latency... Redistribute latency... Redistribute latency... Redistribute latency... Pipeline is PREROLLED ... Setting pipeline to PLAYING ... Redistribute latency... New clock: GstSystemClock FpsCounter(last 1.02sec): total=91.17 fps, number-streams=1, per-stream=91.17 fps FpsCounter(average 1.02sec): total=91.16 fps, number-streams=1, per-stream=91.16 fps FpsCounter(last 1.02sec): total=90.25 fps, number-streams=1, per-stream=90.25 fps FpsCounter(average 2.04sec): total=90.71 fps, number-streams=1, per-stream=90.71 fps FpsCounter(last 1.00sec): total=90.83 fps, number-streams=1, per-stream=90.83 fps FpsCounter(average 3.04sec): total=90.75 fps, number-streams=1, per-stream=90.75 fps FpsCounter(last 1.01sec): total=89.73 fps, number-streams=1, per-stream=89.73 fps FpsCounter(average 4.06sec): total=90.49 fps, number-streams=1, per-stream=90.49 fps FpsCounter(last 0.10sec): total=99.02 fps, number-streams=1, per-stream=99.02 fps FpsCounter(overall 4.16sec): total=90.70 fps, number-streams=1, per-stream=90.70 fps Got EOS from element "pipeline0". EOS received - stopping pipeline... Execution ended after 0:00:04.163971083 Setting pipeline to NULL ... Freeing pipeline ... Analyzing file:///home/aibox/data/dlstreamer_output_aibox-dlstreamer/output_person-vehicle-bike-detection-2004.mp4 Authorization required, but no authorization protocol specified Authorization required, but no authorization protocol specified Done discovering file:///home/aibox/data/dlstreamer_output_aibox-dlstreamer/output_person-vehicle-bike-detection-2004.mp4 Properties: Duration: 0:00:30.160000000 Seekable: yes Live: no container #0: Quicktime video #1: H.264 (High Profile) Stream ID: 3ac6348660ef537697d5bf4e6f2d419bb48ef1f189eb0781843f5b01e9a3d824/001 Width: 768 Height: 432 Depth: 24 Frame rate: 25/2 Pixel aspect ratio: 1/1 Interlaced: false Bitrate: 272129 Max bitrate: 1673900 272129 9: video #1: H.264 (High Profile) Passed bmra_container_dlstreamer is already installed. Type YES to reinstall or NO to skip installation. 
NO Installing bmra_container_opencv_ffmpeg [+] Building 625.5s (18/18) FINISHED docker:default => [internal] load build definition from Dockerfile.opencv-ffmpeg 0.0s => => transferring dockerfile: 1.06kB 0.0s => [internal] load metadata for docker.io/library/aibox-base:3.1 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [internal] load build context 0.0s => => transferring context: 5.26kB 0.0s => CACHED [ 1/13] FROM docker.io/library/aibox-base:3.1 0.0s => [ 2/13] RUN apt-get update -y; apt-get -y --no-install-recommends install sudo net-tools 3.4s => [ 3/13] RUN mkdir -p /opt/intel/nep 0.5s => [ 4/13] COPY install_ffmpeg.sh /opt/intel/nep/ 0.1s => [ 5/13] WORKDIR /opt/intel/nep 0.0s => [ 6/13] RUN chmod +x install_ffmpeg.sh 0.3s => [ 7/13] RUN ./install_ffmpeg.sh default 193.3s => [ 8/13] RUN mkdir -p /opt/intel/nep 0.3s => [ 9/13] COPY install_opencv.sh /opt/intel/nep/ 0.0s => [10/13] WORKDIR /opt/intel/nep 0.0s => [11/13] RUN chmod +x install_opencv.sh 0.4s => [12/13] RUN ./install_opencv.sh 4.8.0 423.5s => [13/13] WORKDIR /home/aibox 0.0s => exporting to image 3.4s => => exporting layers 3.4s => => writing image sha256:63d0db79f9f8c53097e0b1c577b8096bdbe9f9985cb525a55c816f8daf17e652 0.0s => => naming to docker.io/library/aibox-opencv-ffmpeg:latest 0.0s [+] Building 0.8s (9/9) FINISHED docker:default => [internal] load build definition from Dockerfile.test-opencv 0.0s => => transferring dockerfile: 342B 0.0s => [internal] load metadata for docker.io/library/aibox-opencv-ffmpeg:3.1 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [1/4] FROM docker.io/library/aibox-opencv-ffmpeg:3.1 0.2s => [internal] load build context 0.0s => => transferring context: 1.09kB 0.0s => [2/4] WORKDIR /home/aibox 0.1s => [3/4] COPY --chown=aibox:aibox test_opencv_entry.sh . 0.0s => [4/4] RUN chmod +x test_opencv_entry.sh 0.3s => exporting to image 0.1s => => exporting layers 0.1s => => writing image sha256:f203ba6590f761ed76d8b0c99661ed2a7a960d35f317e684fab03acc479e1b1d 0.0s => => naming to docker.io/library/test-opencv:latest 0.0s xauth: /home/user/.Xauthority not writable, changes will be ignored Cloning into 'opencv_extra'... remote: Enumerating objects: 8246, done. remote: Counting objects: 100% (8246/8246), done. remote: Compressing objects: 100% (6243/6243), done. remote: Total 8246 (delta 1140), reused 7997 (delta 1098), pack-reused 0 Receiving objects: 100% (8246/8246), 466.23 MiB | 9.86 MiB/s, done. Resolving deltas: 100% (1140/1140), done. Updating files: 100% (7936/7936), done. [setupvars.sh] OpenVINO environment initialized TEST: Skip tests with tags: 'mem_6gb', 'verylong' CTEST_FULL_OUTPUT OpenCV version: 4.8.0 OpenCV VCS version: 4.8.0-dirty Build type: Release Compiler: /usr/bin/c++ (ver 11.4.0) Parallel framework: pthreads (nthreads=16) CPU features: SSE SSE2 SSE3 SSSE3 SSE4.1 POPCNT SSE4.2 *FP16 *AVX *AVX2 *AVX512-SKX? 
Intel(R) IPP version: ippIP AVX2 (l9) 2021.8 (-) Feb 20 2023 Intel(R) IPP features code: 0x8000 OpenCL Platforms: Intel(R) OpenCL Graphics iGPU: Intel(R) Graphics [0xa7a0] (OpenCL 3.0 NEO ) Current OpenCL device: Type = iGPU Name = Intel(R) Graphics [0xa7a0] Version = OpenCL 3.0 NEO Driver version = 23.26.26690.36 Address bits = 64 Compute units = 80 Max work group size = 512 Local memory size = 64 KB Max memory allocation size = 3 GB 1023 MB 1016 KB Double support = No Half support = Yes Host unified memory = Yes Device extensions: cl_khr_byte_addressable_store cl_khr_device_uuid cl_khr_fp16 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_intel_command_queue_families cl_intel_subgroups cl_intel_required_subgroup_size cl_intel_subgroups_short cl_khr_spir cl_intel_accelerator cl_intel_driver_diagnostics cl_khr_priority_hints cl_khr_throttle_hints cl_khr_create_command_queue cl_intel_subgroups_char cl_intel_subgroups_long cl_khr_il_program cl_intel_mem_force_host_memory cl_khr_subgroup_extended_types cl_khr_subgroup_non_uniform_vote cl_khr_subgroup_ballot cl_khr_subgroup_non_uniform_arithmetic cl_khr_subgroup_shuffle cl_khr_subgroup_shuffle_relative cl_khr_subgroup_clustered_reduce cl_intel_device_attribute_query cl_khr_suggested_local_work_size cl_intel_split_work_group_barrier cl_intel_spirv_media_block_io cl_intel_spirv_subgroups cl_khr_spirv_no_integer_wrap_decoration cl_intel_unified_shared_memory cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_ext_float_atomics cl_intel_planar_yuv cl_intel_packed_yuv cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_image2d_from_buffer cl_khr_depth_images cl_khr_3d_image_writes cl_intel_media_block_io cl_intel_subgroup_local_block_io cl_khr_integer_dot_product cl_khr_gl_sharing cl_khr_gl_depth_images cl_khr_gl_event cl_khr_gl_msaa_sharing cl_intel_va_api_media_sharing cl_intel_sharing_format_query cl_khr_pci_bus_info Has AMD Blas = No Has AMD Fft = No Preferred vector width char = 16 Preferred vector width short = 8 Preferred vector width int = 4 Preferred vector width long = 1 Preferred vector width float = 1 Preferred vector width double = 0 Preferred vector width half = 8 [==========] Running 155 tests from 10 test cases. [----------] Global test environment set-up. [----------] 1 test from OCL_HOGFixture_HOG [ RUN ] OCL_HOGFixture_HOG.HOG [ WARN:0@0.025] global filesystem.cpp:489 getCacheDirectory Using world accessible cache directory. 
This may be not secure: /var/tmp/ [ PERFSTAT ] (samples=100 mean=21.87 median=21.63 min=20.97 stddev=0.89 (4.1%)) [ OK ] OCL_HOGFixture_HOG.HOG (2535 ms) [----------] 1 test from OCL_HOGFixture_HOG (2535 ms total) [----------] 27 tests from OCL_Cascade_Image_MinSize_CascadeClassifier [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/0, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt.xml", "cv/shared/lena.png", 30) [ PERFSTAT ] (samples=100 mean=8.85 median=9.13 min=6.39 stddev=0.81 (9.2%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/0 (1257 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/1, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt.xml", "cv/shared/lena.png", 64) [ PERFSTAT ] (samples=13 mean=2.72 median=2.72 min=2.70 stddev=0.01 (0.5%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/1 (51 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/2, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt.xml", "cv/shared/lena.png", 90) [ PERFSTAT ] (samples=100 mean=1.78 median=1.77 min=1.71 stddev=0.11 (6.3%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/2 (199 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/3, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt.xml", "cv/cascadeandhog/images/bttf301.png", 30) [ PERFSTAT ] (samples=100 mean=8.57 median=8.81 min=6.18 stddev=0.66 (7.7%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/3 (871 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/4, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt.xml", "cv/cascadeandhog/images/bttf301.png", 64) [ PERFSTAT ] (samples=100 mean=3.11 median=3.14 min=2.38 stddev=0.14 (4.4%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/4 (323 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/5, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt.xml", "cv/cascadeandhog/images/bttf301.png", 90) [ PERFSTAT ] (samples=100 mean=1.59 median=1.58 min=1.37 stddev=0.16 (9.8%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/5 (173 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/6, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt.xml", "cv/cascadeandhog/images/class57.png", 30) [ PERFSTAT ] (samples=100 mean=49.18 median=48.93 min=46.46 stddev=1.86 (3.8%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/6 (4999 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/7, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt.xml", "cv/cascadeandhog/images/class57.png", 64) [ PERFSTAT ] (samples=59 mean=10.23 median=10.30 min=9.16 stddev=0.30 (3.0%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/7 (624 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/8, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt.xml", "cv/cascadeandhog/images/class57.png", 90) [ PERFSTAT ] (samples=11 mean=5.72 median=5.67 min=5.65 stddev=0.17 (3.0%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/8 (81 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/9, where GetParam() = 
("cv/cascadeandhog/cascades/haarcascade_frontalface_alt2.xml", "cv/shared/lena.png", 30) [ PERFSTAT ] (samples=81 mean=7.41 median=7.50 min=6.24 stddev=0.22 (3.0%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/9 (806 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/10, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt2.xml", "cv/shared/lena.png", 64) [ PERFSTAT ] (samples=11 mean=3.01 median=2.99 min=2.95 stddev=0.09 (2.9%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/10 (46 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/11, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt2.xml", "cv/shared/lena.png", 90) [ PERFSTAT ] (samples=100 mean=1.81 median=1.77 min=1.73 stddev=0.11 (6.1%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/11 (196 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/12, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt2.xml", "cv/cascadeandhog/images/bttf301.png", 30) [ PERFSTAT ] (samples=10 mean=7.62 median=7.53 min=7.47 stddev=0.19 (2.5%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/12 (85 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/13, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt2.xml", "cv/cascadeandhog/images/bttf301.png", 64) [ PERFSTAT ] (samples=63 mean=2.46 median=2.45 min=2.32 stddev=0.07 (2.9%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/13 (166 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/14, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt2.xml", "cv/cascadeandhog/images/bttf301.png", 90) [ PERFSTAT ] (samples=100 mean=1.47 median=1.47 min=1.35 stddev=0.06 (4.0%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/14 (157 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/15, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt2.xml", "cv/cascadeandhog/images/class57.png", 30) [ PERFSTAT ] (samples=100 mean=48.12 median=47.03 min=46.63 stddev=1.89 (3.9%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/15 (4877 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/16, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt2.xml", "cv/cascadeandhog/images/class57.png", 64) [ PERFSTAT ] (samples=10 mean=10.23 median=10.14 min=10.08 stddev=0.18 (1.7%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/16 (118 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/17, where GetParam() = ("cv/cascadeandhog/cascades/haarcascade_frontalface_alt2.xml", "cv/cascadeandhog/images/class57.png", 90) [ PERFSTAT ] (samples=13 mean=5.59 median=5.64 min=5.08 stddev=0.16 (2.9%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/17 (89 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/18, where GetParam() = ("cv/cascadeandhog/cascades/lbpcascade_frontalface.xml", "cv/shared/lena.png", 30) [ PERFSTAT ] (samples=100 mean=5.52 median=5.49 min=4.94 stddev=0.29 (5.3%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/18 (818 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/19, where GetParam() = ("cv/cascadeandhog/cascades/lbpcascade_frontalface.xml", 
"cv/shared/lena.png", 64) [ PERFSTAT ] (samples=100 mean=2.42 median=2.38 min=2.27 stddev=0.11 (4.6%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/19 (250 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/20, where GetParam() = ("cv/cascadeandhog/cascades/lbpcascade_frontalface.xml", "cv/shared/lena.png", 90) [ PERFSTAT ] (samples=100 mean=1.72 median=1.76 min=1.57 stddev=0.10 (5.9%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/20 (180 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/21, where GetParam() = ("cv/cascadeandhog/cascades/lbpcascade_frontalface.xml", "cv/cascadeandhog/images/bttf301.png", 30) [ PERFSTAT ] (samples=10 mean=5.22 median=5.19 min=5.17 stddev=0.09 (1.6%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/21 (55 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/22, where GetParam() = ("cv/cascadeandhog/cascades/lbpcascade_frontalface.xml", "cv/cascadeandhog/images/bttf301.png", 64) [ PERFSTAT ] (samples=11 mean=2.28 median=2.26 min=2.24 stddev=0.07 (3.0%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/22 (28 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/23, where GetParam() = ("cv/cascadeandhog/cascades/lbpcascade_frontalface.xml", "cv/cascadeandhog/images/bttf301.png", 90) [ PERFSTAT ] (samples=14 mean=1.32 median=1.33 min=1.23 stddev=0.04 (2.9%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/23 (21 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/24, where GetParam() = ("cv/cascadeandhog/cascades/lbpcascade_frontalface.xml", "cv/cascadeandhog/images/class57.png", 30) [ PERFSTAT ] (samples=100 mean=27.71 median=27.21 min=26.95 stddev=1.08 (3.9%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/24 (2820 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/25, where GetParam() = ("cv/cascadeandhog/cascades/lbpcascade_frontalface.xml", "cv/cascadeandhog/images/class57.png", 64) [ PERFSTAT ] (samples=13 mean=8.50 median=8.49 min=8.42 stddev=0.07 (0.8%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/25 (122 ms) [ RUN ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/26, where GetParam() = ("cv/cascadeandhog/cascades/lbpcascade_frontalface.xml", "cv/cascadeandhog/images/class57.png", 90) [ PERFSTAT ] (samples=10 mean=5.29 median=5.27 min=5.23 stddev=0.09 (1.7%)) [ OK ] OCL_Cascade_Image_MinSize_CascadeClassifier.CascadeClassifier/26 (62 ms) [----------] 27 tests from OCL_Cascade_Image_MinSize_CascadeClassifier (19474 ms total) [----------] 2 tests from EstimateAruco_ArucoFirst [ RUN ] EstimateAruco_ArucoFirst.ArucoFirst/0, where GetParam() = (false, -1) [ PERFSTAT ] (samples=100 mean=6.70 median=6.66 min=5.40 stddev=0.50 (7.5%)) [ OK ] EstimateAruco_ArucoFirst.ArucoFirst/0 (687 ms) [ RUN ] EstimateAruco_ArucoFirst.ArucoFirst/1, where GetParam() = (true, -1) [ PERFSTAT ] (samples=100 mean=4.82 median=4.82 min=4.26 stddev=0.30 (6.3%)) [ OK ] EstimateAruco_ArucoFirst.ArucoFirst/1 (495 ms) [----------] 2 tests from EstimateAruco_ArucoFirst (1182 ms total) [----------] 2 tests from EstimateAruco_ArucoSecond [ RUN ] EstimateAruco_ArucoSecond.ArucoSecond/0, where GetParam() = (false, -1) [ PERFSTAT ] (samples=100 mean=23.86 median=23.75 min=18.75 stddev=2.78 (11.6%)) [ OK ] EstimateAruco_ArucoSecond.ArucoSecond/0 (2469 ms) [ RUN ] 
EstimateAruco_ArucoSecond.ArucoSecond/1, where GetParam() = (true, -1) [ PERFSTAT ] (samples=100 mean=18.61 median=17.76 min=15.38 stddev=2.35 (12.6%)) [ OK ] EstimateAruco_ArucoSecond.ArucoSecond/1 (1923 ms) [----------] 2 tests from EstimateAruco_ArucoSecond (4392 ms total) [----------] 15 tests from EstimateLargeAruco_ArucoFHD [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/0, where GetParam() = (0 0 0, (1440, 1)) [ PERFSTAT ] (samples=100 mean=5.42 median=5.64 min=3.62 stddev=0.84 (15.5%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/0 (556 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/1, where GetParam() = (0 0 0, (480, 3)) [ PERFSTAT ] (samples=100 mean=5.78 median=5.34 min=4.74 stddev=1.02 (17.6%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/1 (604 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/2, where GetParam() = (0 0 0, (144, 10)) [ PERFSTAT ] (samples=100 mean=13.26 median=13.10 min=9.89 stddev=1.39 (10.5%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/2 (1378 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/3, where GetParam() = (1 0 32, (1440, 1)) [ PERFSTAT ] (samples=100 mean=5.34 median=5.30 min=3.98 stddev=0.89 (16.7%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/3 (552 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/4, where GetParam() = (1 0 32, (480, 3)) [ PERFSTAT ] (samples=100 mean=5.83 median=5.34 min=4.68 stddev=1.02 (17.6%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/4 (607 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/5, where GetParam() = (1 0 32, (144, 10)) [ PERFSTAT ] (samples=100 mean=10.05 median=10.10 min=8.09 stddev=1.23 (12.3%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/5 (1035 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/6, where GetParam() = (1 0.015 32, (1440, 1)) [ PERFSTAT ] (samples=100 mean=2.59 median=2.71 min=1.70 stddev=0.43 (16.6%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/6 (269 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/7, where GetParam() = (1 0.015 32, (480, 3)) [ PERFSTAT ] (samples=100 mean=3.39 median=3.41 min=2.19 stddev=0.49 (14.5%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/7 (351 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/8, where GetParam() = (1 0.015 32, (144, 10)) [ PERFSTAT ] (samples=100 mean=6.53 median=6.47 min=5.59 stddev=0.53 (8.2%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/8 (677 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/9, where GetParam() = (1 0 16, (1440, 1)) [ PERFSTAT ] (samples=100 mean=4.79 median=4.54 min=3.77 stddev=0.78 (16.3%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/9 (496 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/10, where GetParam() = (1 0 16, (480, 3)) [ PERFSTAT ] (samples=100 mean=6.40 median=5.95 min=4.88 stddev=1.09 (17.0%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/10 (663 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/11, where GetParam() = (1 0 16, (144, 10)) [ PERFSTAT ] (samples=100 mean=10.30 median=10.12 min=8.67 stddev=1.12 (10.9%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/11 (1063 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/12, where GetParam() = (1 0.0069 16, (1440, 1)) [ PERFSTAT ] (samples=100 mean=2.28 median=2.22 min=1.64 stddev=0.46 (20.3%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/12 (244 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/13, where GetParam() = (1 0.0069 16, (480, 3)) [ PERFSTAT ] (samples=100 mean=3.21 median=3.13 min=2.54 stddev=0.42 (13.1%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/13 (332 ms) [ RUN ] EstimateLargeAruco_ArucoFHD.ArucoFHD/14, where GetParam() = (1 0.0069 16, (144, 10)) [ PERFSTAT ] 
(samples=100 mean=6.98 median=6.89 min=6.13 stddev=0.52 (7.5%)) [ OK ] EstimateLargeAruco_ArucoFHD.ArucoFHD/14 (720 ms) [----------] 15 tests from EstimateLargeAruco_ArucoFHD (9548 ms total) [----------] 6 tests from Perf_Barcode_multi [ RUN ] Perf_Barcode_multi.detect/0, where GetParam() = ("4_barcodes.jpg", 2041x2722) [ PERFSTAT ] (samples=13 mean=7.41 median=7.34 min=7.18 stddev=0.19 (2.5%)) [ OK ] Perf_Barcode_multi.detect/0 (109 ms) [ RUN ] Perf_Barcode_multi.detect/1, where GetParam() = ("4_barcodes.jpg", 1361x1815) [ PERFSTAT ] (samples=75 mean=4.95 median=4.96 min=4.72 stddev=0.13 (2.6%)) [ OK ] Perf_Barcode_multi.detect/1 (391 ms) [ RUN ] Perf_Barcode_multi.detect/2, where GetParam() = ("4_barcodes.jpg", 680x907) [ PERFSTAT ] (samples=10 mean=3.72 median=3.72 min=3.65 stddev=0.04 (1.1%)) [ OK ] Perf_Barcode_multi.detect/2 (48 ms) [ RUN ] Perf_Barcode_multi.detect_decode/0, where GetParam() = ("4_barcodes.jpg", 2041x2722) [ PERFSTAT ] (samples=10 mean=11.27 median=11.31 min=10.74 stddev=0.30 (2.6%)) [ OK ] Perf_Barcode_multi.detect_decode/0 (125 ms) [ RUN ] Perf_Barcode_multi.detect_decode/1, where GetParam() = ("4_barcodes.jpg", 1361x1815) [ PERFSTAT ] (samples=10 mean=7.64 median=7.59 min=7.51 stddev=0.15 (2.0%)) [ OK ] Perf_Barcode_multi.detect_decode/1 (86 ms) [ RUN ] Perf_Barcode_multi.detect_decode/2, where GetParam() = ("4_barcodes.jpg", 680x907) [ PERFSTAT ] (samples=10 mean=6.26 median=6.25 min=6.19 stddev=0.05 (0.8%)) [ OK ] Perf_Barcode_multi.detect_decode/2 (74 ms) [----------] 6 tests from Perf_Barcode_multi (833 ms total) [----------] 18 tests from Perf_Barcode_single [ RUN ] Perf_Barcode_single.detect/0, where GetParam() = ("book.jpg", 480x360) [ PERFSTAT ] (samples=13 mean=1.51 median=1.51 min=1.47 stddev=0.02 (1.2%)) [ OK ] Perf_Barcode_single.detect/0 (21 ms) [ RUN ] Perf_Barcode_single.detect/1, where GetParam() = ("book.jpg", 640x480) [ PERFSTAT ] (samples=10 mean=2.42 median=2.42 min=2.38 stddev=0.04 (1.5%)) [ OK ] Perf_Barcode_single.detect/1 (26 ms) [ RUN ] Perf_Barcode_single.detect/2, where GetParam() = ("book.jpg", 800x600) [ PERFSTAT ] (samples=10 mean=3.44 median=3.46 min=3.29 stddev=0.08 (2.4%)) [ OK ] Perf_Barcode_single.detect/2 (35 ms) [ RUN ] Perf_Barcode_single.detect/3, where GetParam() = ("bottle_1.jpg", 480x360) [ PERFSTAT ] (samples=11 mean=1.67 median=1.66 min=1.62 stddev=0.05 (2.9%)) [ OK ] Perf_Barcode_single.detect/3 (21 ms) [ RUN ] Perf_Barcode_single.detect/4, where GetParam() = ("bottle_1.jpg", 640x480) [ PERFSTAT ] (samples=10 mean=2.55 median=2.55 min=2.53 stddev=0.02 (0.8%)) [ OK ] Perf_Barcode_single.detect/4 (28 ms) [ RUN ] Perf_Barcode_single.detect/5, where GetParam() = ("bottle_1.jpg", 800x600) [ PERFSTAT ] (samples=10 mean=3.60 median=3.61 min=3.49 stddev=0.06 (1.6%)) [ OK ] Perf_Barcode_single.detect/5 (38 ms) [ RUN ] Perf_Barcode_single.detect/6, where GetParam() = ("bottle_2.jpg", 480x360) [ PERFSTAT ] (samples=10 mean=1.87 median=1.86 min=1.81 stddev=0.05 (2.7%)) [ OK ] Perf_Barcode_single.detect/6 (20 ms) [ RUN ] Perf_Barcode_single.detect/7, where GetParam() = ("bottle_2.jpg", 640x480) [ PERFSTAT ] (samples=10 mean=2.81 median=2.80 min=2.77 stddev=0.04 (1.5%)) [ OK ] Perf_Barcode_single.detect/7 (30 ms) [ RUN ] Perf_Barcode_single.detect/8, where GetParam() = ("bottle_2.jpg", 800x600) [ PERFSTAT ] (samples=10 mean=3.85 median=3.85 min=3.81 stddev=0.02 (0.6%)) [ OK ] Perf_Barcode_single.detect/8 (40 ms) [ RUN ] Perf_Barcode_single.detect_decode/0, where GetParam() = ("book.jpg", 480x360) [ PERFSTAT ] (samples=10 mean=2.61 
median=2.58 min=2.55 stddev=0.07 (2.5%)) [ OK ] Perf_Barcode_single.detect_decode/0 (28 ms) [ RUN ] Perf_Barcode_single.detect_decode/1, where GetParam() = ("book.jpg", 640x480) [ PERFSTAT ] (samples=10 mean=2.92 median=2.92 min=2.86 stddev=0.04 (1.2%)) [ OK ] Perf_Barcode_single.detect_decode/1 (30 ms) [ RUN ] Perf_Barcode_single.detect_decode/2, where GetParam() = ("book.jpg", 800x600) [ PERFSTAT ] (samples=50 mean=4.08 median=4.06 min=3.98 stddev=0.12 (3.0%)) [ OK ] Perf_Barcode_single.detect_decode/2 (211 ms) [ RUN ] Perf_Barcode_single.detect_decode/3, where GetParam() = ("bottle_1.jpg", 480x360) [ PERFSTAT ] (samples=10 mean=2.11 median=2.08 min=2.07 stddev=0.06 (2.7%)) [ OK ] Perf_Barcode_single.detect_decode/3 (23 ms) [ RUN ] Perf_Barcode_single.detect_decode/4, where GetParam() = ("bottle_1.jpg", 640x480) [ PERFSTAT ] (samples=10 mean=2.87 median=2.86 min=2.82 stddev=0.04 (1.3%)) [ OK ] Perf_Barcode_single.detect_decode/4 (31 ms) [ RUN ] Perf_Barcode_single.detect_decode/5, where GetParam() = ("bottle_1.jpg", 800x600) [ PERFSTAT ] (samples=10 mean=3.90 median=3.88 min=3.80 stddev=0.07 (1.7%)) [ OK ] Perf_Barcode_single.detect_decode/5 (41 ms) [ RUN ] Perf_Barcode_single.detect_decode/6, where GetParam() = ("bottle_2.jpg", 480x360) [ PERFSTAT ] (samples=10 mean=2.72 median=2.70 min=2.67 stddev=0.06 (2.2%)) [ OK ] Perf_Barcode_single.detect_decode/6 (29 ms) [ RUN ] Perf_Barcode_single.detect_decode/7, where GetParam() = ("bottle_2.jpg", 640x480) [ PERFSTAT ] (samples=10 mean=3.72 median=3.71 min=3.69 stddev=0.03 (0.9%)) [ OK ] Perf_Barcode_single.detect_decode/7 (39 ms) [ RUN ] Perf_Barcode_single.detect_decode/8, where GetParam() = ("bottle_2.jpg", 800x600) [ PERFSTAT ] (samples=10 mean=4.87 median=4.87 min=4.82 stddev=0.03 (0.5%)) [ OK ] Perf_Barcode_single.detect_decode/8 (50 ms) [----------] 18 tests from Perf_Barcode_single (741 ms total) [----------] 28 tests from Perf_Objdetect_QRCode [ RUN ] Perf_Objdetect_QRCode.detect/0, where GetParam() = "version_1_down.jpg" [ PERFSTAT ] (samples=10 mean=6.90 median=6.87 min=6.83 stddev=0.07 (1.0%)) [ OK ] Perf_Objdetect_QRCode.detect/0 (70 ms) [ RUN ] Perf_Objdetect_QRCode.detect/1, where GetParam() = "version_1_left.jpg" [ PERFSTAT ] (samples=10 mean=6.33 median=6.30 min=6.24 stddev=0.09 (1.5%)) [ OK ] Perf_Objdetect_QRCode.detect/1 (64 ms) [ RUN ] Perf_Objdetect_QRCode.detect/2, where GetParam() = "version_1_right.jpg" [ PERFSTAT ] (samples=10 mean=5.90 median=5.90 min=5.86 stddev=0.03 (0.6%)) [ OK ] Perf_Objdetect_QRCode.detect/2 (60 ms) [ RUN ] Perf_Objdetect_QRCode.detect/3, where GetParam() = "version_1_up.jpg" [ PERFSTAT ] (samples=10 mean=5.69 median=5.70 min=5.63 stddev=0.03 (0.6%)) [ OK ] Perf_Objdetect_QRCode.detect/3 (58 ms) [ RUN ] Perf_Objdetect_QRCode.detect/4, where GetParam() = "version_1_top.jpg" [ PERFSTAT ] (samples=10 mean=6.50 median=6.51 min=6.47 stddev=0.02 (0.3%)) [ OK ] Perf_Objdetect_QRCode.detect/4 (66 ms) [ RUN ] Perf_Objdetect_QRCode.detect/5, where GetParam() = "version_5_down.jpg" [ PERFSTAT ] (samples=13 mean=5.98 median=5.98 min=5.90 stddev=0.04 (0.7%)) [ OK ] Perf_Objdetect_QRCode.detect/5 (81 ms) [ RUN ] Perf_Objdetect_QRCode.detect/6, where GetParam() = "version_5_left.jpg" [ PERFSTAT ] (samples=13 mean=5.91 median=5.90 min=5.85 stddev=0.03 (0.6%)) [ OK ] Perf_Objdetect_QRCode.detect/6 (79 ms) [ RUN ] Perf_Objdetect_QRCode.detect/7, where GetParam() = "version_5_up.jpg" [ PERFSTAT ] (samples=13 mean=5.75 median=5.75 min=5.70 stddev=0.03 (0.5%)) [ OK ] Perf_Objdetect_QRCode.detect/7 (78 ms) [ RUN ] 
Perf_Objdetect_QRCode.detect/8, where GetParam() = "version_5_top.jpg" [ PERFSTAT ] (samples=10 mean=5.69 median=5.67 min=5.56 stddev=0.10 (1.8%)) [ OK ] Perf_Objdetect_QRCode.detect/8 (58 ms) [ RUN ] Perf_Objdetect_QRCode.detect/9, where GetParam() = "russian.jpg" [ PERFSTAT ] (samples=10 mean=6.26 median=6.23 min=6.20 stddev=0.06 (1.0%)) [ OK ] Perf_Objdetect_QRCode.detect/9 (63 ms) [ RUN ] Perf_Objdetect_QRCode.detect/10, where GetParam() = "kanji.jpg" [ PERFSTAT ] (samples=10 mean=6.05 median=6.05 min=6.03 stddev=0.01 (0.2%)) [ OK ] Perf_Objdetect_QRCode.detect/10 (62 ms) [ RUN ] Perf_Objdetect_QRCode.detect/11, where GetParam() = "link_github_ocv.jpg" [ PERFSTAT ] (samples=10 mean=6.12 median=6.13 min=6.03 stddev=0.04 (0.6%)) [ OK ] Perf_Objdetect_QRCode.detect/11 (62 ms) [ RUN ] Perf_Objdetect_QRCode.detect/12, where GetParam() = "link_ocv.jpg" [ PERFSTAT ] (samples=10 mean=6.01 median=6.01 min=5.98 stddev=0.02 (0.4%)) [ OK ] Perf_Objdetect_QRCode.detect/12 (61 ms) [ RUN ] Perf_Objdetect_QRCode.detect/13, where GetParam() = "link_wiki_cv.jpg" [ PERFSTAT ] (samples=10 mean=6.21 median=6.22 min=6.16 stddev=0.03 (0.5%)) [ OK ] Perf_Objdetect_QRCode.detect/13 (63 ms) [ RUN ] Perf_Objdetect_QRCode.decode/0, where GetParam() = "version_1_down.jpg" [ PERFSTAT ] (samples=10 mean=3.92 median=3.92 min=3.87 stddev=0.05 (1.2%)) [ OK ] Perf_Objdetect_QRCode.decode/0 (47 ms) [ RUN ] Perf_Objdetect_QRCode.decode/1, where GetParam() = "version_1_left.jpg" [ PERFSTAT ] (samples=10 mean=3.92 median=3.91 min=3.88 stddev=0.03 (0.7%)) [ OK ] Perf_Objdetect_QRCode.decode/1 (46 ms) [ RUN ] Perf_Objdetect_QRCode.decode/2, where GetParam() = "version_1_right.jpg" [ PERFSTAT ] (samples=10 mean=3.89 median=3.88 min=3.86 stddev=0.02 (0.6%)) [ OK ] Perf_Objdetect_QRCode.decode/2 (46 ms) [ RUN ] Perf_Objdetect_QRCode.decode/3, where GetParam() = "version_1_up.jpg" [ PERFSTAT ] (samples=14 mean=3.93 median=3.89 min=3.84 stddev=0.12 (3.0%)) [ OK ] Perf_Objdetect_QRCode.decode/3 (62 ms) [ RUN ] Perf_Objdetect_QRCode.decode/4, where GetParam() = "version_1_top.jpg" [ PERFSTAT ] (samples=10 mean=3.93 median=3.91 min=3.89 stddev=0.07 (1.8%)) [ OK ] Perf_Objdetect_QRCode.decode/4 (47 ms) [ RUN ] Perf_Objdetect_QRCode.decode/5, where GetParam() = "version_5_down.jpg" [ PERFSTAT ] (samples=10 mean=4.44 median=4.42 min=4.39 stddev=0.07 (1.6%)) [ OK ] Perf_Objdetect_QRCode.decode/5 (51 ms) [ RUN ] Perf_Objdetect_QRCode.decode/6, where GetParam() = "version_5_left.jpg" [ PERFSTAT ] (samples=10 mean=4.43 median=4.42 min=4.40 stddev=0.03 (0.6%)) [ OK ] Perf_Objdetect_QRCode.decode/6 (51 ms) [ RUN ] Perf_Objdetect_QRCode.decode/7, where GetParam() = "version_5_up.jpg" [ PERFSTAT ] (samples=10 mean=4.43 median=4.42 min=4.39 stddev=0.03 (0.7%)) [ OK ] Perf_Objdetect_QRCode.decode/7 (51 ms) [ RUN ] Perf_Objdetect_QRCode.decode/8, where GetParam() = "version_5_top.jpg" [ PERFSTAT ] (samples=10 mean=4.41 median=4.40 min=4.37 stddev=0.03 (0.7%)) [ OK ] Perf_Objdetect_QRCode.decode/8 (51 ms) [ RUN ] Perf_Objdetect_QRCode.decode/9, where GetParam() = "russian.jpg" [ PERFSTAT ] (samples=10 mean=4.46 median=4.46 min=4.43 stddev=0.03 (0.6%)) [ OK ] Perf_Objdetect_QRCode.decode/9 (51 ms) [ RUN ] Perf_Objdetect_QRCode.decode/10, where GetParam() = "kanji.jpg" [ PERFSTAT ] (samples=10 mean=4.21 median=4.21 min=4.18 stddev=0.04 (0.9%)) [ OK ] Perf_Objdetect_QRCode.decode/10 (49 ms) [ RUN ] Perf_Objdetect_QRCode.decode/11, where GetParam() = "link_github_ocv.jpg" [ PERFSTAT ] (samples=38 mean=4.24 median=4.21 min=4.15 stddev=0.10 (2.4%)) [ OK 
] Perf_Objdetect_QRCode.decode/11 (177 ms) [ RUN ] Perf_Objdetect_QRCode.decode/12, where GetParam() = "link_ocv.jpg" [ PERFSTAT ] (samples=10 mean=4.24 median=4.23 min=4.20 stddev=0.03 (0.8%)) [ OK ] Perf_Objdetect_QRCode.decode/12 (49 ms) [ RUN ] Perf_Objdetect_QRCode.decode/13, where GetParam() = "link_wiki_cv.jpg" [ PERFSTAT ] (samples=10 mean=4.33 median=4.33 min=4.29 stddev=0.03 (0.8%)) [ OK ] Perf_Objdetect_QRCode.decode/13 (51 ms) [----------] 28 tests from Perf_Objdetect_QRCode (1754 ms total) [----------] 32 tests from Perf_Objdetect_QRCode_Multi [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/0, where GetParam() = ("2_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=100 mean=19.60 median=19.17 min=18.19 stddev=1.28 (6.5%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/0 (2098 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/1, where GetParam() = ("2_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=17.44 median=16.61 min=15.47 stddev=1.89 (10.8%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/1 (1815 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/2, where GetParam() = ("3_close_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=100 mean=10.66 median=10.25 min=10.07 stddev=0.58 (5.5%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/2 (1105 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/3, where GetParam() = ("3_close_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=6.14 median=5.48 min=4.37 stddev=1.36 (22.2%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/3 (646 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/4, where GetParam() = ("3_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=13 mean=17.77 median=17.82 min=16.75 stddev=0.47 (2.6%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/4 (238 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/5, where GetParam() = ("3_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=7.03 median=7.01 min=6.25 stddev=0.38 (5.5%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/5 (711 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/6, where GetParam() = ("4_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=11 mean=12.20 median=12.00 min=11.80 stddev=0.36 (3.0%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/6 (136 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/7, where GetParam() = ("4_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=4.94 median=4.68 min=4.11 stddev=0.78 (15.8%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/7 (520 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/8, where GetParam() = ("5_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=100 mean=16.87 median=16.03 min=15.64 stddev=1.90 (11.3%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/8 (1806 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/9, where GetParam() = ("5_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=9.36 median=9.12 min=8.11 stddev=0.97 (10.3%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/9 (975 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/10, where GetParam() = ("6_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=10 mean=24.10 median=24.18 min=23.31 stddev=0.47 (2.0%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/10 (243 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/11, where GetParam() = ("6_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=14.92 median=14.15 min=12.22 stddev=2.54 (17.1%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/11 (1593 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/12, where GetParam() 
= ("7_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=100 mean=33.45 median=30.49 min=28.46 stddev=6.59 (19.7%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/12 (3517 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/13, where GetParam() = ("7_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=13.42 median=13.02 min=11.88 stddev=1.16 (8.7%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/13 (1413 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/14, where GetParam() = ("8_close_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=25 mean=78.55 median=78.44 min=75.51 stddev=1.57 (2.0%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/14 (2015 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.detectMulti/15, where GetParam() = ("8_close_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=29 mean=53.89 median=53.35 min=52.45 stddev=1.61 (3.0%)) [ OK ] Perf_Objdetect_QRCode_Multi.detectMulti/15 (1599 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/0, where GetParam() = ("2_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=10 mean=9.04 median=9.01 min=8.93 stddev=0.12 (1.3%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/0 (113 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/1, where GetParam() = ("2_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=9.29 median=9.07 min=8.83 stddev=0.72 (7.8%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/1 (1032 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/2, where GetParam() = ("3_close_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=12 mean=4.27 median=4.23 min=4.19 stddev=0.12 (2.9%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/2 (65 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/3, where GetParam() = ("3_close_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=7.00 median=6.64 min=4.27 stddev=2.25 (32.2%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/3 (741 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/4, where GetParam() = ("3_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=100 mean=9.95 median=8.08 min=7.57 stddev=3.01 (30.2%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/4 (1096 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/5, where GetParam() = ("3_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=8.22 median=7.78 min=7.50 stddev=1.33 (16.2%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/5 (919 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/6, where GetParam() = ("4_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=100 mean=10.17 median=11.10 min=4.73 stddev=1.96 (19.2%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/6 (1049 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/7, where GetParam() = ("4_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=9.95 median=10.99 min=4.96 stddev=1.75 (17.6%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/7 (1014 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/8, where GetParam() = ("5_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=10 mean=11.52 median=11.55 min=10.98 stddev=0.27 (2.3%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/8 (136 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/9, where GetParam() = ("5_qrcodes.png", "aruco_based") [ SKIP ] 5_qrcodes.png is disabled sample for method aruco_based [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/9 (2 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/10, where GetParam() = ("6_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=100 mean=22.17 median=23.40 min=15.39 stddev=2.64 (11.9%)) [ OK ] 
Perf_Objdetect_QRCode_Multi.decodeMulti/10 (2266 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/11, where GetParam() = ("6_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=22.66 median=23.44 min=15.64 stddev=2.03 (8.9%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/11 (2299 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/12, where GetParam() = ("7_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=14 mean=22.82 median=22.99 min=20.95 stddev=0.68 (3.0%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/12 (359 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/13, where GetParam() = ("7_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=23.76 median=22.50 min=15.38 stddev=4.01 (16.9%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/13 (2510 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/14, where GetParam() = ("8_close_qrcodes.png", "contours_based") [ PERFSTAT ] (samples=100 mean=62.86 median=64.46 min=43.80 stddev=5.53 (8.8%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/14 (6432 ms) [ RUN ] Perf_Objdetect_QRCode_Multi.decodeMulti/15, where GetParam() = ("8_close_qrcodes.png", "aruco_based") [ PERFSTAT ] (samples=100 mean=61.92 median=63.26 min=44.60 stddev=4.95 (8.0%)) [ OK ] Perf_Objdetect_QRCode_Multi.decodeMulti/15 (6312 ms) [----------] 32 tests from Perf_Objdetect_QRCode_Multi (46776 ms total) [----------] 24 tests from Perf_Objdetect_Not_QRCode [ RUN ] Perf_Objdetect_Not_QRCode.detect/0, where GetParam() = ("zero", 640x480) [ PERFSTAT ] (samples=13 mean=4.12 median=4.10 min=4.05 stddev=0.06 (1.6%)) [ OK ] Perf_Objdetect_Not_QRCode.detect/0 (55 ms) [ RUN ] Perf_Objdetect_Not_QRCode.detect/1, where GetParam() = ("zero", 1280x720) [ PERFSTAT ] (samples=10 mean=15.85 median=15.79 min=15.66 stddev=0.19 (1.2%)) [ OK ] Perf_Objdetect_Not_QRCode.detect/1 (158 ms) [ RUN ] Perf_Objdetect_Not_QRCode.detect/2, where GetParam() = ("zero", 1920x1080) [ PERFSTAT ] (samples=10 mean=28.90 median=28.93 min=28.70 stddev=0.13 (0.5%)) [ OK ] Perf_Objdetect_Not_QRCode.detect/2 (289 ms) [ RUN ] Perf_Objdetect_Not_QRCode.detect/3, where GetParam() = ("zero", 3840x2160) [ PERFSTAT ] (samples=10 mean=105.07 median=103.92 min=103.42 stddev=2.96 (2.8%)) [ OK ] Perf_Objdetect_Not_QRCode.detect/3 (1051 ms) [ RUN ] Perf_Objdetect_Not_QRCode.detect/4, where GetParam() = ("random", 640x480) [ PERFSTAT ] (samples=10 mean=4.12 median=4.09 min=4.02 stddev=0.12 (2.8%)) [ OK ] Perf_Objdetect_Not_QRCode.detect/4 (41 ms) [ RUN ] Perf_Objdetect_Not_QRCode.detect/5, where GetParam() = ("random", 1280x720) [ PERFSTAT ] (samples=10 mean=15.85 median=15.79 min=15.65 stddev=0.23 (1.4%)) [ OK ] Perf_Objdetect_Not_QRCode.detect/5 (159 ms) [ RUN ] Perf_Objdetect_Not_QRCode.detect/6, where GetParam() = ("random", 1920x1080) [ PERFSTAT ] (samples=10 mean=28.93 median=28.91 min=28.65 stddev=0.18 (0.6%)) [ OK ] Perf_Objdetect_Not_QRCode.detect/6 (290 ms) [ RUN ] Perf_Objdetect_Not_QRCode.detect/7, where GetParam() = ("random", 3840x2160) [ PERFSTAT ] (samples=13 mean=104.78 median=104.28 min=103.33 stddev=1.76 (1.7%)) [ OK ] Perf_Objdetect_Not_QRCode.detect/7 (1399 ms) [ RUN ] Perf_Objdetect_Not_QRCode.detect/8, where GetParam() = ("chessboard", 640x480) [ PERFSTAT ] (samples=10 mean=257.32 median=257.09 min=255.91 stddev=1.10 (0.4%)) [ OK ] Perf_Objdetect_Not_QRCode.detect/8 (2574 ms) [ RUN ] Perf_Objdetect_Not_QRCode.detect/9, where GetParam() = ("chessboard", 1280x720) [ PERFSTAT ] (samples=10 mean=18.58 median=18.60 min=18.34 stddev=0.20 (1.1%)) [ OK ] Perf_Objdetect_Not_QRCode.detect/9 
(187 ms) [ RUN ] Perf_Objdetect_Not_QRCode.detect/10, where GetParam() = ("chessboard", 1920x1080) [ PERFSTAT ] (samples=10 mean=35.10 median=35.02 min=34.84 stddev=0.21 (0.6%)) [ OK ] Perf_Objdetect_Not_QRCode.detect/10 (355 ms) [ RUN ] Perf_Objdetect_Not_QRCode.detect/11, where GetParam() = ("chessboard", 3840x2160) [ PERFSTAT ] (samples=10 mean=127.98 median=127.81 min=127.18 stddev=0.77 (0.6%)) [ OK ] Perf_Objdetect_Not_QRCode.detect/11 (1293 ms) [ RUN ] Perf_Objdetect_Not_QRCode.decode/0, where GetParam() = ("zero", 640x480) [ PERFSTAT ] (samples=10 mean=6.70 median=6.69 min=6.65 stddev=0.04 (0.6%)) [ OK ] Perf_Objdetect_Not_QRCode.decode/0 (68 ms) [ RUN ] Perf_Objdetect_Not_QRCode.decode/1, where GetParam() = ("zero", 1280x720) [ PERFSTAT ] (samples=10 mean=14.03 median=13.92 min=13.80 stddev=0.32 (2.3%)) [ OK ] Perf_Objdetect_Not_QRCode.decode/1 (140 ms) [ RUN ] Perf_Objdetect_Not_QRCode.decode/2, where GetParam() = ("zero", 1920x1080) [ PERFSTAT ] (samples=10 mean=27.81 median=27.71 min=27.57 stddev=0.28 (1.0%)) [ OK ] Perf_Objdetect_Not_QRCode.decode/2 (279 ms) [ RUN ] Perf_Objdetect_Not_QRCode.decode/3, where GetParam() = ("zero", 3840x2160) [ PERFSTAT ] (samples=10 mean=108.94 median=108.42 min=108.06 stddev=1.13 (1.0%)) [ OK ] Perf_Objdetect_Not_QRCode.decode/3 (1090 ms) [ RUN ] Perf_Objdetect_Not_QRCode.decode/4, where GetParam() = ("random", 640x480) [ PERFSTAT ] (samples=10 mean=6.70 median=6.68 min=6.62 stddev=0.11 (1.6%)) [ OK ] Perf_Objdetect_Not_QRCode.decode/4 (67 ms) [ RUN ] Perf_Objdetect_Not_QRCode.decode/5, where GetParam() = ("random", 1280x720) [ PERFSTAT ] (samples=50 mean=13.98 median=13.90 min=13.78 stddev=0.39 (2.8%)) [ OK ] Perf_Objdetect_Not_QRCode.decode/5 (751 ms) [ RUN ] Perf_Objdetect_Not_QRCode.decode/6, where GetParam() = ("random", 1920x1080) [ PERFSTAT ] (samples=10 mean=27.71 median=27.73 min=27.50 stddev=0.14 (0.5%)) [ OK ] Perf_Objdetect_Not_QRCode.decode/6 (278 ms) [ RUN ] Perf_Objdetect_Not_QRCode.decode/7, where GetParam() = ("random", 3840x2160) [ PERFSTAT ] (samples=16 mean=110.32 median=108.96 min=107.82 stddev=3.28 (3.0%)) [ OK ] Perf_Objdetect_Not_QRCode.decode/7 (1787 ms) [ RUN ] Perf_Objdetect_Not_QRCode.decode/8, where GetParam() = ("chessboard", 640x480) [ PERFSTAT ] (samples=10 mean=7.45 median=7.43 min=7.35 stddev=0.12 (1.6%)) [ OK ] Perf_Objdetect_Not_QRCode.decode/8 (76 ms) [ RUN ] Perf_Objdetect_Not_QRCode.decode/9, where GetParam() = ("chessboard", 1280x720) [ PERFSTAT ] (samples=10 mean=14.22 median=14.18 min=14.04 stddev=0.14 (1.0%)) [ OK ] Perf_Objdetect_Not_QRCode.decode/9 (143 ms) [ RUN ] Perf_Objdetect_Not_QRCode.decode/10, where GetParam() = ("chessboard", 1920x1080) [ PERFSTAT ] (samples=11 mean=27.54 median=27.27 min=27.12 stddev=0.81 (2.9%)) [ OK ] Perf_Objdetect_Not_QRCode.decode/10 (307 ms) [ RUN ] Perf_Objdetect_Not_QRCode.decode/11, where GetParam() = ("chessboard", 3840x2160) [ PERFSTAT ] (samples=10 mean=104.30 median=103.99 min=103.59 stddev=0.76 (0.7%)) [ OK ] Perf_Objdetect_Not_QRCode.decode/11 (1057 ms) [----------] 24 tests from Perf_Objdetect_Not_QRCode (13896 ms total) [----------] Global test environment tear-down [ SKIPSTAT ] 1 tests skipped [ SKIPSTAT ] TAG='skip_other' skip 1 tests [==========] 155 tests from 10 test cases ran. (101131 ms total) [ PASSED ] 155 tests. 594:[ PASSED ] 155 tests. 
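The 155 passing cases above are OpenCV's objdetect performance suite (Google Test based) running inside the aibox-opencv-ffmpeg container built earlier in this log. If you want to re-run just the QR-code benchmarks by hand, a sketch along the following lines should work; the image tag matches the build output in this log, but the opencv_perf_objdetect binary being on PATH and the OPENCV_TEST_DATA_PATH location are assumptions that may differ in your image.

# Hypothetical re-run of the QR-code perf tests inside the OpenCV container.
# Image tag taken from this log; test-data path is an assumption.
docker run --rm aibox-opencv-ffmpeg:3.1 bash -c '
  export OPENCV_TEST_DATA_PATH=/home/aibox/opencv_extra/testdata
  opencv_perf_objdetect --gtest_filter="Perf_Objdetect_QRCode*.*" --perf_min_samples=10
'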
Passed [+] Building 0.6s (9/9) FINISHED docker:default => [internal] load build definition from Dockerfile.test-ffmpeg 0.0s => => transferring dockerfile: 342B 0.0s => [internal] load metadata for docker.io/library/aibox-opencv-ffmpeg:3.1 0.0s => [internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [1/4] FROM docker.io/library/aibox-opencv-ffmpeg:3.1 0.0s => CACHED [2/4] WORKDIR /home/aibox 0.0s => [internal] load build context 0.0s => => transferring context: 2.61kB 0.0s => [3/4] COPY --chown=aibox:aibox test_ffmpeg_entry.sh . 0.1s => [4/4] RUN chmod +x test_ffmpeg_entry.sh 0.3s => exporting to image 0.1s => => exporting layers 0.1s => => writing image sha256:2b1f1b243f474549fec01b0d9ee642a41f8b0e51a406cd146a13942c88e02570 0.0s => => naming to docker.io/library/test-ffmpeg:latest 0.0s xauth: /home/user/.Xauthority not writable, changes will be ignored ffmpeg version N-111327-gecbbe8c0a0 Copyright (c) 2000-2023 the FFmpeg developers built with gcc 11 (Ubuntu 11.4.0-1ubuntu1~22.04) configuration: --prefix=/usr/local --enable-shared --enable-vaapi --enable-libvpl libavutil 58. 13.101 / 58. 13.101 libavcodec 60. 21.100 / 60. 21.100 libavformat 60. 9.100 / 60. 9.100 libavdevice 60. 2.100 / 60. 2.100 libavfilter 9. 8.102 / 9. 8.102 libswscale 7. 3.100 / 7. 3.100 libswresample 4. 11.100 / 4. 11.100 V....D av1_qsv AV1 video (Intel Quick Sync Video acceleration) (codec av1) V....D h264_qsv H264 video (Intel Quick Sync Video acceleration) (codec h264) V....D hevc_qsv HEVC video (Intel Quick Sync Video acceleration) (codec hevc) V....D mjpeg_qsv MJPEG video (Intel Quick Sync Video acceleration) (codec mjpeg) V....D mpeg2_qsv MPEG2VIDEO video (Intel Quick Sync Video acceleration) (codec mpeg2video) V....D vc1_qsv VC1 video (Intel Quick Sync Video acceleration) (codec vc1) V....D vp8_qsv VP8 video (Intel Quick Sync Video acceleration) (codec vp8) V....D vp9_qsv VP9 video (Intel Quick Sync Video acceleration) (codec vp9) ffmpeg version N-111327-gecbbe8c0a0 Copyright (c) 2000-2023 the FFmpeg developers built with gcc 11 (Ubuntu 11.4.0-1ubuntu1~22.04) configuration: --prefix=/usr/local --enable-shared --enable-vaapi --enable-libvpl libavutil 58. 13.101 / 58. 13.101 libavcodec 60. 21.100 / 60. 21.100 libavformat 60. 9.100 / 60. 9.100 libavdevice 60. 2.100 / 60. 2.100 libavfilter 9. 8.102 / 9. 8.102 libswscale 7. 3.100 / 7. 3.100 libswresample 4. 11.100 / 4. 11.100 V..... av1_qsv AV1 (Intel Quick Sync Video acceleration) (codec av1) V....D av1_vaapi AV1 (VAAPI) (codec av1) V..... h264_qsv H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (Intel Quick Sync Video acceleration) (codec h264) V....D h264_vaapi H.264/AVC (VAAPI) (codec h264) V..... hevc_qsv HEVC (Intel Quick Sync Video acceleration) (codec hevc) V....D hevc_vaapi H.265/HEVC (VAAPI) (codec hevc) V..... mjpeg_qsv MJPEG (Intel Quick Sync Video acceleration) (codec mjpeg) V....D mjpeg_vaapi MJPEG (VAAPI) (codec mjpeg) V..... mpeg2_qsv MPEG-2 video (Intel Quick Sync Video acceleration) (codec mpeg2video) V....D mpeg2_vaapi MPEG-2 (VAAPI) (codec mpeg2video) V....D vp8_vaapi VP8 (VAAPI) (codec vp8) V....D vp9_vaapi VP9 (VAAPI) (codec vp9) V..... 
vp9_qsv VP9 video (Intel Quick Sync Video acceleration) (codec vp9) % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 2745k 100 2745k 0 0 886k 0 0:00:03 0:00:03 --:--:-- 887k ffmpeg version N-111327-gecbbe8c0a0 Copyright (c) 2000-2023 the FFmpeg developers built with gcc 11 (Ubuntu 11.4.0-1ubuntu1~22.04) configuration: --prefix=/usr/local --enable-shared --enable-vaapi --enable-libvpl libavutil 58. 13.101 / 58. 13.101 libavcodec 60. 21.100 / 60. 21.100 libavformat 60. 9.100 / 60. 9.100 libavdevice 60. 2.100 / 60. 2.100 libavfilter 9. 8.102 / 9. 8.102 libswscale 7. 3.100 / 7. 3.100 libswresample 4. 11.100 / 4. 11.100 [AVHWDeviceContext @ 0x62049d962ac0] libva: VA-API version 1.20.0 [AVHWDeviceContext @ 0x62049d962ac0] libva: User requested driver 'iHD' [AVHWDeviceContext @ 0x62049d962ac0] libva: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so [AVHWDeviceContext @ 0x62049d962ac0] libva: Found init function __vaDriverInit_1_19 [AVHWDeviceContext @ 0x62049d962ac0] libva: va_openDriver() returns 0 [AVHWDeviceContext @ 0x62049d962ac0] Initialised VAAPI connection: version 1.20 [AVHWDeviceContext @ 0x62049d962ac0] VAAPI driver: Intel iHD driver for Intel(R) Gen Graphics - 23.3.1 (). [AVHWDeviceContext @ 0x62049d962ac0] Driver not found in known nonstandard list, using standard behaviour. [AVHWDeviceContext @ 0x62049d9624c0] Use Intel(R) oneVPL to create MFX session, API version is 2.9, the required implementation version is 1.3 libva info: VA-API version 1.20.0 libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so libva info: Found init function __vaDriverInit_1_19 libva info: va_openDriver() returns 0 libva info: VA-API version 1.20.0 libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so libva info: Found init function __vaDriverInit_1_19 libva info: va_openDriver() returns 0 [AVHWDeviceContext @ 0x62049d9624c0] Initialize MFX session: implementation version is 2.9 [h264 @ 0x62049e27d900] Reinit context to 768x432, pix_fmt: yuv420p Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/aibox/data/ffmpeg_data_aibox-opencv-ffmpeg/input_car-detection.mp4': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: mp42mp41 creation_time : 2018-04-19T22:46:00.000000Z Duration: 00:00:30.16, start: 0.000000, bitrate: 745 kb/s Stream #0:0[0x1](eng): Video: h264 (Baseline), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, smpte170m, progressive, left), 768x432, 614 kb/s, 12.50 fps, 12.50 tbr, 25k tbn (default) Metadata: creation_time : 2018-04-19T22:46:00.000000Z handler_name : ?Mainconcept Video Media Handler vendor_id : [0][0][0][0] encoder : AVC Coding Stream #0:1[0x2](eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 125 kb/s (default) Metadata: creation_time : 2018-04-19T22:46:00.000000Z handler_name : #Mainconcept MP4 Sound Media Handler vendor_id : [0][0][0][0] [out#0/rawvideo @ 0x62049e298ec0] Creating output stream from unlabeled output of complex filtergraph 0. This overrides automatic video mapping. [vost#0:0/rawvideo @ 0x62049ebec940] Created video stream from complex filtergraph 0:[null:default] [vost#0:0/rawvideo @ 0x62049ebec940] [out#0/rawvideo @ 0x62049e298ec0] No explicit maps, mapping streams automatically... Stream mapping: Stream #0:0 (h264_qsv) -> null:default null:default -> Stream #0:0 (rawvideo) Press [q] to stop, [?] for help [AVHWDeviceContext @ 0x75c9ac008e00] VAAPI driver: Intel iHD driver for Intel(R) Gen Graphics - 23.3.1 (). 
[AVHWDeviceContext @ 0x75c9ac008e00] Driver not found in known nonstandard list, using standard behaviour. [h264_qsv @ 0x62049e2a59c0] Decoder: output is video memory surface [h264_qsv @ 0x62049e2a59c0] Use Intel(R) oneVPL to create MFX session with the specified MFX loader [AVHWDeviceContext @ 0x75c9ac01e3c0] VAAPI driver: Intel iHD driver for Intel(R) Gen Graphics - 23.3.1 (). [AVHWDeviceContext @ 0x75c9ac01e3c0] Driver not found in known nonstandard list, using standard behaviour. [h264_qsv @ 0x62049e2a59c0] Decoder: output is video memory surface [h264_qsv @ 0x62049e2a59c0] Use Intel(R) oneVPL to create MFX session with the specified MFX loader [AVHWFramesContext @ 0x75c9ac01c740] Use Intel(R) oneVPL to create MFX session, API version is 2.9, the required implementation version is 2.9 [AVHWFramesContext @ 0x75c9ac01c740] Initialize MFX session: implementation version is 2.9 [graph 0 input from stream 0:0 @ 0x62049ec1d200] w:768 h:432 pixfmt:nv12 tb:1/25000 fr:25/2 sar:0/1 [auto_scale_0 @ 0x62049e292a40] w:iw h:ih flags:'' interl:0 [format @ 0x62049e89f600] auto-inserting filter 'auto_scale_0' between the filter 'Parsed_null_0' and the filter 'format' [auto_scale_0 @ 0x62049e292a40] w:768 h:432 fmt:nv12 sar:0/1 -> w:768 h:432 fmt:yuv420p sar:0/1 flags:0x00000004 Last message repeated 3 times Output #0, rawvideo, to '/home/aibox/data/ffmpeg_output_aibox-opencv-ffmpeg/ffmpeg_decode_output_car-detection.yuv': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: mp42mp41 encoder : Lavf60.9.100 Stream #0:0: Video: rawvideo, 1 reference frame (I420 / 0x30323449), yuv420p(tv, smpte170m, progressive, left), 768x432 (0x0), q=2-31, 49766 kb/s, 12.50 fps, 12.50 tbn Metadata: encoder : Lavc60.21.100 rawvideo [in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 0x62049e27c280] EOF while reading input= -0.0kbits/s speed=N/A [in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 0x62049e27c280] Terminating demuxer thread [vist#0:0/h264 @ 0x62049e299580] Decoder thread received EOF packet [h264_qsv @ 0x62049e2a59c0] A decode call did not consume any data: expect more data at input (-10) Last message repeated 2 times [vist#0:0/h264 @ 0x62049e299580] Decoder returned EOF, finishing [vist#0:0/h264 @ 0x62049e299580] Terminating decoder thread No more output streams to write to, finishing. 
[out#0/rawvideo @ 0x62049e298ec0] All streams finished [out#0/rawvideo @ 0x62049e298ec0] Terminating muxer thread [AVIOContext @ 0x62049e297d80] Statistics: 187619328 bytes written, 0 seeks, 716 writeouts [out#0/rawvideo @ 0x62049e298ec0] Output file #0 (/home/aibox/data/ffmpeg_output_aibox-opencv-ffmpeg/ffmpeg_decode_output_car-detection.yuv): [out#0/rawvideo @ 0x62049e298ec0] Output stream #0:0 (video): 377 frames encoded; 377 packets muxed (187619328 bytes); [out#0/rawvideo @ 0x62049e298ec0] Total: 377 packets (187619328 bytes) muxed [out#0/rawvideo @ 0x62049e298ec0] video:183222kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000% frame= 377 fps=0.0 q=-0.0 Lsize= 183222kB time=00:00:30.08 bitrate=49898.8kbits/s speed=93.8x [in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 0x62049e27c280] Input file #0 (/home/aibox/data/ffmpeg_data_aibox-opencv-ffmpeg/input_car-detection.mp4): [in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 0x62049e27c280] Input stream #0:0 (video): 377 packets read (2318299 bytes); 377 frames decoded; 0 decode errors; [in#0/mov,mp4,m4a,3gp,3g2,mj2 @ 0x62049e27c280] Total: 377 packets (2318299 bytes) demuxed [AVIOContext @ 0x62049e285000] Statistics: 2811553 bytes read, 0 seeks ffmpeg version N-111327-gecbbe8c0a0 Copyright (c) 2000-2023 the FFmpeg developers built with gcc 11 (Ubuntu 11.4.0-1ubuntu1~22.04) configuration: --prefix=/usr/local --enable-shared --enable-vaapi --enable-libvpl libavutil 58. 13.101 / 58. 13.101 libavcodec 60. 21.100 / 60. 21.100 libavformat 60. 9.100 / 60. 9.100 libavdevice 60. 2.100 / 60. 2.100 libavfilter 9. 8.102 / 9. 8.102 libswscale 7. 3.100 / 7. 3.100 libswresample 4. 11.100 / 4. 11.100 [AVHWDeviceContext @ 0x5ca79a0fec00] libva: VA-API version 1.20.0 [AVHWDeviceContext @ 0x5ca79a0fec00] libva: User requested driver 'iHD' [AVHWDeviceContext @ 0x5ca79a0fec00] libva: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so [AVHWDeviceContext @ 0x5ca79a0fec00] libva: Found init function __vaDriverInit_1_19 [AVHWDeviceContext @ 0x5ca79a0fec00] libva: va_openDriver() returns 0 [AVHWDeviceContext @ 0x5ca79a0fec00] Initialised VAAPI connection: version 1.20 [AVHWDeviceContext @ 0x5ca79a0fec00] VAAPI driver: Intel iHD driver for Intel(R) Gen Graphics - 23.3.1 (). [AVHWDeviceContext @ 0x5ca79a0fec00] Driver not found in known nonstandard list, using standard behaviour. [AVHWDeviceContext @ 0x5ca79a0fe600] Use Intel(R) oneVPL to create MFX session, API version is 2.9, the required implementation version is 1.3 libva info: VA-API version 1.20.0 libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so libva info: Found init function __vaDriverInit_1_19 libva info: va_openDriver() returns 0 libva info: VA-API version 1.20.0 libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so libva info: Found init function __vaDriverInit_1_19 libva info: va_openDriver() returns 0 [AVHWDeviceContext @ 0x5ca79a0fe600] Initialize MFX session: implementation version is 2.9 [rawvideo @ 0x5ca79aa17300] Estimating duration from bitrate, this may be inaccurate Input #0, rawvideo, from '/home/aibox/data/ffmpeg_output_aibox-opencv-ffmpeg/ffmpeg_decode_output_car-detection.yuv': Duration: 00:00:12.57, start: 0.000000, bitrate: 119439 kb/s Stream #0:0: Video: rawvideo, 1 reference frame (I420 / 0x30323449), yuv420p, 768x432, 119439 kb/s, 30 tbr, 30 tbn [out#0/h264 @ 0x5ca79b075f80] No explicit maps, mapping streams automatically... 
[vost#0:0/h264_qsv @ 0x5ca79b078200] Created video stream from input stream 0:0 Stream mapping: Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (h264_qsv)) Press [q] to stop, [?] for help [graph 0 input from stream 0:0 @ 0x5ca79b07f840] w:768 h:432 pixfmt:yuv420p tb:1/30 fr:30/1 sar:0/1 [auto_scale_0 @ 0x5ca79b07f440] w:iw h:ih flags:'' interl:0 [Parsed_format_0 @ 0x5ca79b076600] auto-inserting filter 'auto_scale_0' between the filter 'graph 0 input from stream 0:0' and the filter 'Parsed_format_0' [auto_scale_0 @ 0x5ca79b07f440] w:768 h:432 fmt:yuv420p sar:0/1 -> w:768 h:432 fmt:nv12 sar:0/1 flags:0x00000004 [AVHWDeviceContext @ 0x5ca79b07ffc0] VAAPI driver: Intel iHD driver for Intel(R) Gen Graphics - 23.3.1 (). [AVHWDeviceContext @ 0x5ca79b07ffc0] Driver not found in known nonstandard list, using standard behaviour. [AVHWFramesContext @ 0x5ca79b079840] Use Intel(R) oneVPL to create MFX session, API version is 2.9, the required implementation version is 2.9 [AVHWFramesContext @ 0x5ca79b079840] Initialize MFX session: implementation version is 2.9 [h264_qsv @ 0x5ca79b0786c0] Using input frames context (format qsv) with h264_qsv encoder. [h264_qsv @ 0x5ca79b0786c0] Encoder: input is video memory surface [h264_qsv @ 0x5ca79b0786c0] Use Intel(R) oneVPL to create MFX session with the specified MFX loader [h264_qsv @ 0x5ca79b0786c0] Using the constant bitrate (CBR) ratecontrol method [h264_qsv @ 0x5ca79b0786c0] profile: avc main; level: 30 [h264_qsv @ 0x5ca79b0786c0] GopPicSize: 30; GopRefDist: 3; GopOptFlag: closed; IdrInterval: 0 [h264_qsv @ 0x5ca79b0786c0] TargetUsage: 4; RateControlMethod: CBR [h264_qsv @ 0x5ca79b0786c0] BufferSizeInKB: 500; InitialDelayInKB: 250; TargetKbps: 2000; MaxKbps: 2000; BRCParamMultiplier: 1 [h264_qsv @ 0x5ca79b0786c0] NumSlice: 1; NumRefFrame: 2 [h264_qsv @ 0x5ca79b0786c0] RateDistortionOpt: OFF [h264_qsv @ 0x5ca79b0786c0] RecoveryPointSEI: OFF [h264_qsv @ 0x5ca79b0786c0] VDENC: OFF [h264_qsv @ 0x5ca79b0786c0] Entropy coding: CABAC; MaxDecFrameBuffering: 2 [h264_qsv @ 0x5ca79b0786c0] NalHrdConformance: ON; SingleSeiNalUnit: ON; VuiVclHrdParameters: OFF VuiNalHrdParameters: ON [h264_qsv @ 0x5ca79b0786c0] FrameRateExtD: 1; FrameRateExtN: 30 [h264_qsv @ 0x5ca79b0786c0] IntRefType: 0; IntRefCycleSize: 0; IntRefQPDelta: 0 [h264_qsv @ 0x5ca79b0786c0] MaxFrameSize: 0; MaxSliceSize: 0 [h264_qsv @ 0x5ca79b0786c0] BitrateLimit: ON; MBBRC: OFF; ExtBRC: OFF [h264_qsv @ 0x5ca79b0786c0] Trellis: auto [h264_qsv @ 0x5ca79b0786c0] RepeatPPS: OFF; NumMbPerSlice: 0; LookAheadDS: 2x [h264_qsv @ 0x5ca79b0786c0] AdaptiveI: OFF; AdaptiveB: OFF; BRefType:off [h264_qsv @ 0x5ca79b0786c0] MinQPI: 0; MaxQPI: 0; MinQPP: 0; MaxQPP: 0; MinQPB: 0; MaxQPB: 0 [h264_qsv @ 0x5ca79b0786c0] DisableDeblockingIdc: 0 [h264_qsv @ 0x5ca79b0786c0] SkipFrame: no_skip [h264_qsv @ 0x5ca79b0786c0] PRefType: default [h264_qsv @ 0x5ca79b0786c0] TransformSkip: unknown [h264_qsv @ 0x5ca79b0786c0] IntRefCycleDist: 0 [h264_qsv @ 0x5ca79b0786c0] LowDelayBRC: OFF [h264_qsv @ 0x5ca79b0786c0] MaxFrameSizeI: 0; MaxFrameSizeP: 0 [h264_qsv @ 0x5ca79b0786c0] ScenarioInfo: 0 Output #0, h264, to '/home/aibox/data/ffmpeg_output_aibox-opencv-ffmpeg/ffmpeg_encode_output_car-detection.264': Metadata: encoder : Lavf60.9.100 Stream #0:0: Video: h264, 1 reference frame, qsv(tv, progressive), 768x432 (0x0), q=2-31, 2000 kb/s, 30 fps, 30 tbn Metadata: encoder : Lavc60.21.100 h264_qsv Side data: cpb: bitrate max/min/avg: 2000000/0/2000000 buffer size: 0 vbv_delay: N/A [in#0/rawvideo @ 0x5ca79aa17180] EOF while reading input2:22.77 
bitrate= -0.0kbits/s speed=N/A
[in#0/rawvideo @ 0x5ca79aa17180] Terminating demuxer thread
[vist#0:0/rawvideo @ 0x5ca79b076a40] Decoder thread received EOF packet
[vist#0:0/rawvideo @ 0x5ca79b076a40] Decoder returned EOF, finishing
[vist#0:0/rawvideo @ 0x5ca79b076a40] Terminating decoder thread
No more output streams to write to, finishing.
[out#0/h264 @ 0x5ca79b075f80] All streams finished
[out#0/h264 @ 0x5ca79b075f80] Terminating muxer thread
[AVIOContext @ 0x5ca79b07a780] Statistics: 3104212 bytes written, 0 seeks, 12 writeouts
[out#0/h264 @ 0x5ca79b075f80] Output file #0 (/home/aibox/data/ffmpeg_output_aibox-opencv-ffmpeg/ffmpeg_encode_output_car-detection.264):
[out#0/h264 @ 0x5ca79b075f80] Output stream #0:0 (video): 377 frames encoded; 377 packets muxed (3104212 bytes);
[out#0/h264 @ 0x5ca79b075f80] Total: 377 packets (3104212 bytes) muxed
[out#0/h264 @ 0x5ca79b075f80] video:3031kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
frame= 377 fps=0.0 q=9.0 Lsize= 3031kB time=00:00:12.50 bitrate=1986.7kbits/s speed= 29x
[in#0/rawvideo @ 0x5ca79aa17180] Input file #0 (/home/aibox/data/ffmpeg_output_aibox-opencv-ffmpeg/ffmpeg_decode_output_car-detection.yuv):
[in#0/rawvideo @ 0x5ca79aa17180] Input stream #0:0 (video): 377 packets read (187619328 bytes); 377 frames decoded; 0 decode errors;
[in#0/rawvideo @ 0x5ca79aa17180] Total: 377 packets (187619328 bytes) demuxed
[AVIOContext @ 0x5ca79aa20380] Statistics: 187619328 bytes read, 0 seeks
***Current files md5:
8dccdfb5226e94732d83343cbaf25e8c ffmpeg_decode_output_car-detection.yuv
***Current files:
total 186256
-rw-r--r-- 1 aibox aibox 187619328 Mar 8 13:19 ffmpeg_decode_output_car-detection.yuv
-rw-r--r-- 1 aibox aibox 3104212 Mar 8 13:19 ffmpeg_encode_output_car-detection.264
Passed
Successfully installed bmra_container_opencv_ffmpeg took 13 minutes 6.74 seconds
Verified Intel Edge AI Box Qualification Suite [..................................................] 100%
modules/Aibox_Test_Module is already installed. Type YES to reinstall or NO to skip installation.
Skipping re-installation for modules/Aibox_Test_Module
Module Path =========> /home/user/edge_aibox/Intel_Edge_AI_Box_3.1/ESDQ
Installing ESDQ
Module Path =========> /home/user/edge_aibox/Intel_Edge_AI_Box_3.1/ESDQ
Installation Started, It may take few minutes Please Wait...
Command =======> mkdir -p /home/user/esdq/modules
Successfully installed ESDQ !!
Clean up function
Successfully installed ESDQ took 0.00 seconds
[sudo] password for user:
Installation of package complete
+--------------------------+------------------------------+---------+
| Id                       | Module                       | Status  |
+--------------------------+------------------------------+---------+
| 652c9b5bd229cd102cfb088e | bmra base                    | SUCCESS |
| 652c9ae1d229cd102cfac095 | bmra container base          | FAILED  |
| 652c9b44d229cd102cfafa28 | bmra container base devel    | SUCCESS |
| 652c9b29d229cd102cfadd68 | bmra container dlstreamer    | SUCCESS |
| 652c9b13d229cd102cfacf01 | bmra container opencv ffmpeg | SUCCESS |
| 652c9411d229cd102cfa954a | modules/Aibox Test Module    | SUCCESS |
| 64cb337f89c314c29867e05c | ESDQ                         | SUCCESS |
+--------------------------+------------------------------+---------+
user@NUC13:~/edge_aibox$
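The summary above reports bmra container base as FAILED while the other modules installed cleanly. Before re-running the installer it is worth pulling up the CLI's own view of the installation; the sketch below assumes the standard edgesoftware subcommands (list, log) are available in this CLI version and uses the image tags shown earlier in this log.

# Hypothetical post-install check, run from the package directory.
./edgesoftware list                     # show modules the CLI believes are installed
./edgesoftware log                      # review the installation log for the failed module
docker images | grep -E 'aibox|test-'   # confirm the container images built above are present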