Intel’s Flexible AI App Development: Models, Optimizations, and Runtimes
05-20-2024
Developers working on AI applications for Intel platforms have flexibility in their choices to make ...
Intel and Microsoft Collaborate to Optimize DirectML for Intel® Arc™ Graphics Solutions
11-15-2023
Speed up generative AI workloads using DirectML and Intel Arc GPUs with the latest driver.
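As one common way to tap DirectML from application code, here is a minimal sketch that selects ONNX Runtime's DirectML execution provider, which dispatches inference to the GPU (including Intel Arc) through DirectML; the model path is a placeholder, and the driver-level optimizations the post announces apply underneath this API.

```python
import onnxruntime as ort

# Selecting the DirectML execution provider routes inference through
# DirectML to the available GPU, e.g. an Intel Arc card on Windows.
session = ort.InferenceSession(
    "model.onnx",                       # placeholder ONNX model path
    providers=["DmlExecutionProvider"],
)
```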
Intel® AI Analytics Toolkit 2023.2 Now Available
08-21-2023
Learn what new optimizations and features have been added to your AI software tools and frameworks.
OpenVINO™ Execution Provider + Model Caching = Better First Inference Latency for your ONNX Models
07-01-2022
Developers can now leverage model caching through the OpenVINO™ Execution Provider for ONNX Runtime.
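To illustrate the idea, here is a minimal sketch of enabling model caching through the OpenVINO Execution Provider in ONNX Runtime's Python API; the model path and cache directory are placeholders, and the exact provider options accepted can vary by onnxruntime-openvino release.

```python
import onnxruntime as ort

# Pointing the OpenVINO Execution Provider at a cache directory lets it
# serialize the compiled model on the first run and reload it on later
# runs, which is what improves first-inference latency.
session = ort.InferenceSession(
    "model.onnx",                       # placeholder ONNX model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"cache_dir": "./ov_cache"}],
)
```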
Easily Optimize Deep Learning with 8-Bit Quantization
03-08-2022
Discover how to use the Neural Network Compression Framework of the OpenVINO™ toolkit for 8-bit quantization.
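As a rough sketch of that workflow, NNCF's post-training quantization API (newer than the config-driven flow the 2022 post likely walks through) quantizes an OpenVINO model against a small calibration set; the model path and the randomly generated calibration data below are placeholders.

```python
import numpy as np
import nncf
import openvino as ov

model = ov.Core().read_model("model.xml")   # placeholder OpenVINO IR path

# Placeholder calibration data: in practice, iterate a few hundred real,
# preprocessed samples; nncf.Dataset wraps any iterable plus an optional
# transform that maps each item to model inputs.
data_items = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(300)]
calibration_dataset = nncf.Dataset(data_items, lambda x: x)

# Run post-training quantization to 8 bits and save the result as IR.
quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "model_int8.xml")
```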
Quantizing ONNX Models using Intel® Neural Compressor
02-01-2022
In this tutorial, we will show step-by-step how to quantize ONNX models with Intel® Neural Compressor.
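For flavor, here is a minimal sketch of static post-training quantization with Intel Neural Compressor's 2.x Python API (the tutorial itself may use an earlier, YAML-driven interface); the model path is a placeholder, and the built-in dummy dataset stands in for real calibration data.

```python
from neural_compressor import PostTrainingQuantConfig, quantization
from neural_compressor.data import DataLoader, Datasets

# Placeholder calibration data: a dummy dataset shaped like the model
# input; real workflows feed representative preprocessed samples.
dataset = Datasets("onnxrt_qlinearops")["dummy"](shape=(1, 3, 224, 224))
calib_loader = DataLoader(framework="onnxruntime", dataset=dataset)

# Statically quantize the FP32 ONNX model to INT8 and save the result.
q_model = quantization.fit(
    model="model.onnx",                 # placeholder ONNX model path
    conf=PostTrainingQuantConfig(approach="static"),
    calib_dataloader=calib_loader,
)
q_model.save("model_int8.onnx")
```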