Tag: "ONNX" in "Artificial Intelligence (AI)"

785 Discussions
Latest Tagged

Intel’s Flexible AI App Development: Models, Optimizations, and Runtimes

Developers working on AI applications for Intel platforms have flexibility in their choices to make ...
0 Kudos
0 Comments

Intel and Microsoft Collaborate to Optimize DirectML for Intel® Arc™ Graphics Solutions

Speed up generative AI workloads with DirectML and Intel Arc GPUs using the latest driver
0 Kudos
0 Comments
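For context, a minimal sketch of what targeting a DirectML-capable GPU such as Intel Arc through ONNX Runtime can look like; it assumes the onnxruntime-directml package is installed, and the model path and dummy input are placeholders:

    # Run an ONNX model on a DirectML-capable GPU (e.g. Intel Arc) via the
    # DirectML execution provider, falling back to CPU if it is unavailable.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",  # placeholder model
        providers=[("DmlExecutionProvider", {"device_id": 0}), "CPUExecutionProvider"],
    )

    # Build a dummy feed from the model's first input (dynamic dims -> 1).
    inp = session.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    feed = {inp.name: np.random.rand(*shape).astype(np.float32)}

    print(session.run(None, feed)[0].shape)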

Intel® AI Analytics Toolkit 2023.2 Now Available

Learn what new optimizations and features have been added to your AI software tools and frameworks.
0 Kudos
0 Comments

OpenVINO™ Execution Provider + Model Caching = Better First Inference Latency for your ONNX Models

Developers can now leverage model caching through the OpenVINO™ Execution Provider for ONNX Runtime
0 Kudos
0 Comments
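A minimal sketch of enabling that cache, assuming the onnxruntime-openvino package and that the provider accepts the device_type and cache_dir options; the model path and cache folder are placeholders:

    # Point the OpenVINO Execution Provider at a local cache folder so the
    # compiled model blob is reused on later runs, improving first-inference
    # latency after the initial session.
    import onnxruntime as ort

    ov_options = {
        "device_type": "GPU_FP32",  # e.g. "CPU_FP32" for CPU targets
        "cache_dir": "./ov_cache",  # compiled blobs are written/read here
    }

    session = ort.InferenceSession(
        "model.onnx",  # placeholder model
        providers=[("OpenVINOExecutionProvider", ov_options)],
    )
    print(session.get_providers())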

Easily Optimize Deep Learning with 8-Bit Quantization

Discover how to use the Neural Network Compression Framework of the OpenVINO™ toolkit for 8-bit qua...
1 Kudos
0 Comments
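A minimal sketch of NNCF post-training 8-bit quantization on an OpenVINO IR model, using nncf.quantize() and nncf.Dataset(); the model path, input shape, and random calibration samples are placeholders for a real dataset:

    # 8-bit post-training quantization of an FP32 OpenVINO IR with NNCF.
    import numpy as np
    import nncf
    import openvino.runtime as ov

    core = ov.Core()
    model = core.read_model("model.xml")  # placeholder FP32 IR

    # A few hundred representative samples are typically enough for calibration;
    # random data stands in for a real dataset here.
    samples = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(300)]
    calib_dataset = nncf.Dataset(samples)  # samples are fed to the model as-is

    quantized_model = nncf.quantize(model, calib_dataset)
    ov.serialize(quantized_model, "model_int8.xml")  # save the INT8 IR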

Quantizing ONNX Models using Intel® Neural Compressor

In this tutorial, we will show step-by-step how to quantize ONNX models with Intel® Neural Compresso...
2 Kudos
0 Comments
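A minimal sketch of post-training static quantization of an ONNX model with the Intel® Neural Compressor 2.x API; the model path, graph input name, and the toy calibration dataloader are placeholders:

    # Post-training static quantization of an ONNX model with Intel Neural Compressor.
    import numpy as np
    from neural_compressor import PostTrainingQuantConfig
    from neural_compressor.quantization import fit

    class CalibDataLoader:
        """Toy calibration loader: yields (input, label) pairs."""
        def __init__(self, n=100):
            self.batch_size = 1
            self.samples = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(n)]

        def __iter__(self):
            for x in self.samples:
                yield {"input": x}, 0  # "input" is the graph's input name (assumed)

    q_model = fit(
        model="model.onnx",  # placeholder FP32 ONNX model
        conf=PostTrainingQuantConfig(approach="static"),
        calib_dataloader=CalibDataLoader(),
    )
    q_model.save("model_int8.onnx")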