Intel Communities

Recent Blog Posts

OpenVINO™ Execution Provider + Model Caching = Better First Inference Latency for your ONNX Models

Developers can now leverage model caching through the OpenVINO™ Execution Provider for ONNX Runtime.
0 Kudos · 0 Replies · 80 Views
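
For context, a minimal sketch of what enabling this typically looks like in Python, assuming an ONNX Runtime build that includes the OpenVINO Execution Provider; the model path and cache directory are placeholder values, and the cache_dir provider option should be verified against the documentation for your ONNX Runtime version:

import onnxruntime as ort

# Point the OpenVINO Execution Provider at a cache directory so compiled
# model blobs can be reused across sessions. "model.onnx" and "./ov_cache"
# are hypothetical placeholders, not values from the post.
session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"cache_dir": "./ov_cache"}],
)

A later session that points at the same cache_dir can skip recompilation of the model, which is what improves first-inference latency.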

AttentionLite: Towards Efficient Self-Attention Models for Vision

Intel Labs has created a novel framework for producing a class of parameter- and compute-efficient models called AttentionLite...
0 Kudos · 0 Replies · 378 Views

NEMO: A Novel Multi-Objective Optimization Method for AI Challenges

Neuroevolution-Enhanced Multi-Objective Optimization (NEMO) for Mixed-Precision Quantization delivers state-of-the-art comput...
0 Kudos · 0 Replies · 398 Views