Hi everyone, I'm relatively new to the OpenVINO/NCS2 community. I'm trying to benchmark a model on the CPU, GPU, and NCS2, and I'm a bit confused about how the GPU device setting works. My setup is an Nvidia 1080 Ti and an Nvidia 960, together with an Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz, which has Intel® UHD Graphics 630. When I run with the GPU device and compare an OpenVINO-optimized model against an optimized TensorFlow protobuf file, I get throughput of roughly 200 vs 25k images/sec, which makes me wonder whether I've set everything up correctly. Running clinfo gives the following (trimmed to the platform and device summary):
Number of platforms: 2

Platform Name: Intel(R) OpenCL HD Graphics
Platform Vendor: Intel(R) Corporation
Platform Version: OpenCL 2.1
  Device Name: Intel(R) Gen9 HD Graphics NEO
  Device Version: OpenCL 2.1 NEO
  Driver Version: 19.16.12873
  Device Type: GPU
  Max compute units: 24
  Max clock frequency: 1200MHz
  Global memory size: 13215203328 (12.31GiB)
  Unified memory for Host and Device: Yes
  Half-precision Floating-point support: (cl_khr_fp16)

Platform Name: NVIDIA CUDA
Platform Vendor: NVIDIA Corporation
Platform Version: OpenCL 1.2 CUDA 9.0.368
  Device Name: GeForce GTX 1080 Ti
  Device Version: OpenCL 1.2 CUDA
  Driver Version: 384.130
  Device Type: GPU
  Compute Capability (NV): 6.1
  Max compute units: 28
  Max clock frequency: 1582MHz
  Global memory size: 11715084288 (10.91GiB)
  Half-precision Floating-point support: (n/a)

  Device Name: GeForce GTX 960
  Device Version: OpenCL 1.2 CUDA
  Driver Version: 384.130
  Device Type: GPU
  Compute Capability (NV): 5.2
  Max compute units: 8
  Max clock frequency: 1177MHz
  Global memory size: 4232183808 (3.942GiB)
  Half-precision Floating-point support: (n/a)

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...): No platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...): No platform
  clCreateContext(NULL, ...) [default]: No platform
  clCreateContext(NULL, ...) [other]: Success [INTEL]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU): No platform

NOTE: your OpenCL library only supports OpenCL 2.0, but some installed platforms support OpenCL 2.1. Programs using 2.1 features may crash or behave unexpectedly
and running lspci | grep VGA gives
01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)
02:00.0 VGA compatible controller: NVIDIA Corporation Device 1b06 (rev a1)
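To cross-check what OpenCL itself reports from Python, this is a small script I could run (a rough sketch; it assumes the pyopencl package, which is not part of the setup described above):

# List every OpenCL platform and device visible to the process (sketch, needs pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    print("Platform:", platform.name, "-", platform.version)
    for device in platform.get_devices():
        print("  Device:", device.name, "| type:", cl.device_type.to_string(device.type))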
I've read on other forum posts that a multi-GPU system might need some additional configuration to make sure the Intel Graphics is detected by OpenCL. Does everything look right here, or is there further room for optimization? Is there any detailed documentation on making the GPU device work properly in a system with multiple graphics cards? Thanks in advance.
Best,
Erik
Hello ErikR,
Thanks for the question.
From an OpenCL on Intel Graphics perspective...
Assuming that output comes from the standard clinfo tool, Intel Graphics is being detected correctly through the OpenCL API.
Unfortunately we can't provide support for non-Intel products, but if you observe a case where an Intel software/hardware stack conflicts with a specification, or where device targeting is confusing or untenable, we want to know about it. We appreciate you sharing your experience.
Can you share how you installed the OpenCL runtime for Intel Graphics, and where you obtained it?
On OpenVINO...
You've already provided some of the standard information we ask for in forum posts, but can you share the rest of the following?
- Please let us know what processor, operating system, graphics driver version, and tool version you are using.
- Please state the steps to reproduce the issue as precisely as possible.
- If you are using command-line tools, please provide the full command line.
- If code is involved, create a small "reproducer" sample and attach it to the message.
Here, tool version is the version of OpenVINO.
Can you share a source code example showing how you are targeting Intel Graphics with OpenVINO?
Can you share the protobufs and the OpenVINO models? Can you clarify what you are trying to compare for performance, where each run is executing, and with what engine stacks?
Also, please don't share any privileged source or data. Thanks!
-MichaelC
Hi MichaelC,
Thanks for the response.
I've installed OpenCL following the directions in the best response of this forum post here.
I then installed clinfo through
sudo apt install clinfo
I'm using Ubuntu 16.04.5 LTS. Since I installed based on that forum post, the graphics driver version should be 18.43. My OpenVINO version is 2019.1.133.
A short snippet of how I'm attempting to target the Intel GPU is:
import time
from openvino.inference_engine import IENetwork, IEPlugin

# model_xml, model_bin, inputs_count and images are defined earlier in my script.
load_time = time.time()
plugin = IEPlugin(device='GPU')
net = IENetwork(model=model_xml, weights=model_bin)
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
net.batch_size = inputs_count
exec_net = plugin.load(network=net)
del net
out_node_name = 'layer_out_1/MatMul'
print(time.time() - load_time)

# Average inference time over repeated runs.
n_time_inference = 100
total_time = 0
for i in range(n_time_inference):
    t1 = time.time()
    res = exec_net.infer(inputs={input_blob: images})
    y_pred = res[out_node_name]
    t2 = time.time()
    total_time += t2 - t1

avg_time = total_time / n_time_inference
print(avg_time)
print(len(images) / avg_time)
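For the CPU and NCS2 runs I only change the plugin target; a minimal sketch of that is below (my understanding is that 'MYRIAD' is the device name for the NCS2 in this release, and that the 'GPU' plugin targets the Intel integrated graphics, not the Nvidia cards):

# Same benchmarking harness, different target device (sketch).
plugin_cpu = IEPlugin(device='CPU')      # host CPU cores
plugin_ncs2 = IEPlugin(device='MYRIAD')  # NCS2 stick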
I unfortunately cannot share the protobufs and OpenVINO models, and I'm unsure how to report back on which engine stacks are being used. Sorry about that. When testing the protobuf file to compare performance, I'm using
t1 = time.time()
predictions = sess.run(out_tensor, {'import/layer_in_1:0': x_test_1})
t2 = time.time()
to compare the times. Thanks again for your help.
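If it helps, I could also average the TensorFlow call over the same 100 iterations as the OpenVINO loop, with a warm-up run first (a rough sketch reusing the session and tensor names from the snippet above):

# Average the TensorFlow timing over the same number of iterations (sketch).
# sess, out_tensor and x_test_1 are the same objects as in the snippet above.
n_time_inference = 100
_ = sess.run(out_tensor, {'import/layer_in_1:0': x_test_1})  # warm-up run
total_time = 0
for i in range(n_time_inference):
    t1 = time.time()
    predictions = sess.run(out_tensor, {'import/layer_in_1:0': x_test_1})
    total_time += time.time() - t1
avg_time = total_time / n_time_inference
print(avg_time)
print(len(x_test_1) / avg_time)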
Best,
Erik
