Intel® Optimized AI Frameworks
Receive community support for questions related to PyTorch* and TensorFlow* frameworks.

Running int8 model on Intel-Optimized-Tensorflow

CHung
Novice

I read the following article:

https://www.intel.ai/accelerating-tensorflow-inference-with-intel-deep-learning-boost-on-2nd-gen-intel-xeon-scalable-processors/#gs.rzwuy9

It mentions that 2nd Gen Intel Xeon Scalable processors add instructions such as AVX512_VNNI that are optimized for neural-network inference.

I ran one of the INT8 models from the IntelAI models repository:

https://github.com/IntelAI/models/tree/master/benchmarks

Here is my environment

- Docker: docker.io/intelaipg/intel-optimized-tensorflow:latest

- CPU info

Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              96
On-line CPU(s) list: 0-95
Thread(s) per core:  2
Core(s) per socket:  24
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               85
Model name:          Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Stepping:            7
CPU MHz:             1838.080
BogoMIPS:            5000.00
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            36608K
NUMA node0 CPU(s):   0-23,48-71
NUMA node1 CPU(s):   24-47,72-95
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke avx512_vnni
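For reference, the avx512_vnni flag above confirms that the hardware itself supports VNNI. A minimal Python sketch for checking a flags string (on a live Linux system the same flags can be read from /proc/cpuinfo; the sample string below is copied from the lscpu output and truncated for brevity):

```python
# Check whether a CPU flags string (as reported by lscpu or
# /proc/cpuinfo) contains a given feature flag.

def has_flag(flags_line: str, flag: str) -> bool:
    """Return True if `flag` appears in a space-separated CPU flags string."""
    return flag in flags_line.split()

# Tail of the Flags line from the lscpu output above.
flags = "avx512f avx512dq avx512cd avx512bw avx512vl avx512_vnni"
print(has_flag(flags, "avx512_vnni"))  # True -> hardware supports VNNI
```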

I expected the neural network to run with the 2nd Gen instructions (AVX512_VNNI), but TensorFlow reports that only the following optimized instructions are used:

AVX512F, AVX2, FMA

Is this Docker image the optimized version for running neural networks?

How can I find out whether AVX512_VNNI is being used?

How can I compile the code provided by IntelAI with the 2nd Gen Intel instructions?

Which Docker image should I use to run the program?
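One way I have tried to inspect this (a sketch, assuming the TensorFlow build uses MKL-DNN/oneDNN under the hood): enabling the library's verbose mode before importing TensorFlow makes it print, for each primitive it executes, which instruction-set implementation was dispatched, e.g. strings containing "vnni".

```python
import os

# MKL-DNN/oneDNN verbose mode: the env variable must be set BEFORE the
# library is loaded, i.e. before `import tensorflow`.
os.environ["MKLDNN_VERBOSE"] = "1"   # older MKL-DNN builds
os.environ["DNNL_VERBOSE"] = "1"     # newer oneDNN builds

# import tensorflow as tf
# ...run the INT8 benchmark as usual; verbose lines mentioning an ISA
# such as "avx512_core_vnni" would indicate VNNI kernels were used.
print(os.environ["DNNL_VERBOSE"])
```

The exact ISA string printed depends on the MKL-DNN/oneDNN version bundled with the image, so treat the grep pattern as an assumption.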

Thanks in advance

1 Reply
Jing_Xu
Employee

Duplicate thread. Please refer to https://software.intel.com/en-us/forums/intel-optimized-ai-frameworks/topic/843478#comment-1951298
