AI Playground: Experience the Latest GenAI Software on AI PCs Powered by Intel® Arc™ Graphics
05-07-2025
AI Playground: An open-source app for GenAI on AI PCs powered by Intel® Arc™ Graphics
Get Your Innovation to Go with Innovation Select Videos
11-04-2024
Catch up on the latest Intel Innovation developer and technical content with demos, tech talks and m...
Boost the Performance of AI/ML Applications using Intel® VTune™ Profiler
10-23-2024
Enhance the performance of Python*- and OpenVINO™-based AI/ML workloads using Intel® VTune™ Profiler
Generative AI as a Life Saver: CerebraAI Helps Detect Strokes More Quickly and Precisely
10-21-2024
With the help of CerebraAI’s generative AI software, the detection and treatment of strokes can be si...
Migrating Your AI Solutions to Intel
06-23-2024
What does it take to move an AI solution from a non-Intel platform to an Intel platform? How much is...
Efficient Inference and Training of Large Neural Network Models
09-13-2023
oneAPI DevSummit for AI 2023: Efficient Inference and Training of Large Neural Network Models
How Computer Vision and AI are Transforming Retail Technology
09-12-2023
Many retailers operating brick-and-mortar stores are focused on creating convenient, frictionless sh...
Accelerate Workloads with OpenVINO and oneDNN
07-13-2023
OpenVINO utilizes oneDNN GPU kernels for discrete GPUs to accelerate compute-intensive workloads
Compressing the Transformer: Optimization of DistilBERT with the Intel® Neural Compressor
06-14-2023
A neural network, the aptly named, biologically-inspired programming paradigm, enables the processin...
Numenta and Intel Accelerate Inference 20x on Large Language Models with Intel® Xeon® CPU Max Series
04-06-2023
Numenta demonstrates new NLP capabilities for customers on Intel® Xeon® CPU Max Series processors
AI and HPC DevSummit 2022 Keynote: AI Software and Hardware Acceleration
02-07-2023
Intel’s AI software and tools, paired with its hardware architectures, are improving AI application performance
Responsible AI: The Future of AI Security and Privacy
12-15-2022
Intel Innovation 2022: Jason Martin, Principal Engineer leading Intel Labs Secure Intelligence Team
CUMULATIVE_THROUGHPUT Enables Full-Speed AI Inferencing with Multiple Devices
11-28-2022
OpenVINO™ has enabled automatic selection of the most suitable target device for AI inferencing
Empowering Red Hat* OpenShift* Data Science Platform with Intel AI
10-04-2022
Use the Intel® AI Analytics Toolkit on the Red Hat OpenShift Data Science (RHODS) platform to speed up many parts of the AI workflow
YOLOv5 Model INT8 Quantization based on OpenVINO™ 2022.1 POT API
09-20-2022
How to use the OpenVINO™ 2022.1 Post-training Optimization Tool (POT) API for YOLOv5 model INT8 quantization
An Example of Deploying the YOLOv7 Pre-trained Model Based on the OpenVINO™ 2022.1 C++ API
09-20-2022
How to deploy the YOLOv7 official pre-trained model based on the OpenVINO™ 2022.1 tool suite.
Hybrid AI Inferencing managed with Microsoft Azure Arc-Enabled Kubernetes
08-02-2022
Azure Arc-Enabled Kubernetes enables centralized management of heterogeneous and geographically separ...
Accelerating AI applications on Windows Subsystem for Linux with Intel’s iGPU and OpenVINO™ toolkit
07-20-2022
Configure Windows system to get the most out of Intel® Integrated Graphics Processing Unit (iGPU)
Optimize Inference with Intel CPU Technology
07-19-2022
Enjoy improved inferencing and lower overall total cost of ownership (TCO) across an integrated AI platf...
Deploy AI Inference with OpenVINO™ and Kubernetes
07-19-2022
In this blog, you will learn how to use key features of the OpenVINO™ Operator for Kubernetes
OpenVINO™ Execution Provider + Model Caching = Better First Inference Latency for your ONNX Models
07-01-2022
Developers can now leverage model caching through the OpenVINO™ Execution Provider for ONNX Runtime
Witness the power of Intel® iGPU with Azure IoT Edge for Linux on Windows (EFLOW) & OpenVINO™ Toolkit
05-09-2022
In this blog you will learn how to:
Set up and deploy your application or Docker container in Linux ...
The Rise of Ethical Facial Recognition
04-14-2022
How ethical facial recognition can identify security threats in real time with Oosto (formerly AnyVi...
Accelerating Media Analytics with Intel® DL Streamer featuring OpenVINO with Intel® RealSense
03-29-2022
Optimize, tune, and run comprehensive AI inference with OpenVINO Toolkit.
Easily Optimize Deep Learning with 8-Bit Quantization
03-08-2022
Discover how to use the Neural Network Compression Framework of the OpenVINO™ toolkit for 8-bit qua...