The complexity of AI models and the size of those models are both growing faster than the compute resources and memory capacity available on a single device. AI model complexity now doubles roughly every 3.5 months, or about 10X per year, driving a corresponding surge in demand for AI computing capability. Memory requirements are rising as well, as the number of parameters, or weights, per model keeps increasing. The Intel® Stratix® 10 NX FPGA is Intel’s first AI-optimized FPGA, developed to let customers scale their designs with increasing AI complexity while continuing to deliver real-time results. The Intel Stratix 10 NX FPGA fabric includes a new type of AI-optimized tensor arithmetic block called the AI Tensor Block.
These AI Tensor Blocks are tuned for the matrix-matrix and vector-matrix multiplications common in AI computation, and they contain dense arrays of the lower-precision multipliers typically used for AI model arithmetic. The smaller multipliers can also be aggregated to construct larger-precision multipliers. Each AI Tensor Block contains three dot-product units, each with ten multipliers and ten accumulators, for a total of 30 multipliers and 30 accumulators per block. The multipliers’ base precisions are INT8 and INT4; a shared-exponent mode additionally supports the Block Floating Point 16 (Block FP16) and Block Floating Point 12 (Block FP12) numerical formats. Multiple AI Tensor Blocks can be cascaded together to support larger vector calculations.
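To give a flavor of the shared-exponent arithmetic described above, here is a minimal numerical sketch in Python. It is not Intel code and makes simplifying assumptions: each block of values shares one exponent, each value keeps an 8-bit signed integer mantissa (standing in for Block FP16), and a dot product is computed as a pure integer multiply-accumulate followed by a single floating-point rescale, which is the essence of what a dot-product unit in the AI Tensor Block does in hardware.

```python
import numpy as np

def quantize_block_fp(x, mant_bits=8):
    """Quantize a block of floats to signed integer mantissas with one shared scale.

    Illustrative only: the shared exponent is chosen from the largest
    magnitude in the block, and mantissas are clipped to the signed range.
    """
    max_abs = float(np.max(np.abs(x)))
    exp = 0 if max_abs == 0 else int(np.ceil(np.log2(max_abs)))
    # Scale so the largest value maps near the top of the mantissa range
    scale = 2.0 ** (exp - (mant_bits - 1))
    lo, hi = -(2 ** (mant_bits - 1)), 2 ** (mant_bits - 1) - 1
    mants = np.clip(np.round(x / scale), lo, hi).astype(np.int32)
    return mants, scale

def block_fp_dot(a, b, mant_bits=8):
    """Dot product in shared-exponent form: integer MAC, then one rescale."""
    ma, sa = quantize_block_fp(a, mant_bits)
    mb, sb = quantize_block_fp(b, mant_bits)
    # Integer multiply-accumulate is all the per-lane hardware needs;
    # the shared exponents are applied once at the end.
    return int(np.dot(ma, mb)) * sa * sb

# Ten lanes, matching the width of one dot-product unit in the block
a = np.ones(10)
b = np.ones(10)
print(block_fp_dot(a, b))  # close to the exact dot product of 10.0
```

The design point this sketch illustrates is why shared-exponent formats are cheap in hardware: all ten lanes run plain INT8 multiply-accumulate, and only one exponent adjustment per block is needed, rather than per-element floating-point alignment.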
A new White Paper titled “Pushing AI Boundaries with Scalable Compute-Focused FPGAs” covers the new features and performance capabilities of the Intel Stratix 10 NX FPGA. Download the White Paper for the full details.
If you’d like to see the Intel Stratix 10 NX FPGA in action, please check out the recent blog “WaveNet Neural Network runs on Intel® Stratix® 10 NX FPGA, synthesizes 256 16 kHz audio streams in real time.”
Notices & Disclaimers
Intel technologies may require enabled hardware, software or service activation.
No product or component can be absolutely secure.
Your costs and results may vary.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.