I read the article.
It mentioned that 2nd-generation instructions such as AVX512_VNNI are optimized for neural networks,
so I ran one of the INT8 models from IntelAI.
Here is my environment:
- CPU info
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
CPU MHz: 1838.080
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
I expected the neural network to run with the 2nd-generation instructions (AVX512_VNNI),
but the log shows that the following optimized instructions are used:
AVX512F, AVX2, FMA
Is the docker image an optimized build for running neural networks?
How can I tell whether AVX512_VNNI is actually being used?
How can I compile the code provided by IntelAI with the 2nd-generation Intel instructions?
Which docker image should I use to run the program?
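For reference, here is a minimal sketch of how I can check hardware support myself, assuming a Linux host where /proc/cpuinfo is available. This only confirms that the CPU advertises the flag; whether the framework's MKL-DNN backend actually dispatches VNNI kernels is a separate question (the verbose logging noted in the comments is one way I understand it can be observed):

```python
import os

def cpu_flags(cpuinfo_text):
    """Return the set of CPU feature flags from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The flags line looks like "flags\t\t: fpu vme de ..."
            return set(line.split(":", 1)[1].split())
    return set()

# On a Linux host, check the real CPU. This only proves hardware support,
# not that TensorFlow / MKL-DNN actually uses VNNI at run time.
if os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        print("avx512_vnni supported:", "avx512_vnni" in cpu_flags(f.read()))

# To see which kernels MKL-DNN selects at run time, set MKLDNN_VERBOSE=1
# (DNNL_VERBOSE=1 on newer oneDNN builds) before launching the model and
# look for "vnni" in the per-primitive log lines it prints.
```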
Thanks in advance
Thank you for reaching out.
Please post your question on this community, where you can get assistance from the proper team.
Intel Customer Support Technician
A Contingent Worker at Intel