- Migrating Your AI Solutions to Intel (06-23-2024): What does it take to move an AI solution from a non-Intel platform to an Intel platform? How much is...
- Efficient Inference and Training of Large Neural Network Models (09-13-2023): A session from the oneAPI DevSummit for AI 2023.
- How Computer Vision and AI are Transforming Retail Technology (09-12-2023): Many retailers operating brick-and-mortar stores are focused on creating convenient, frictionless sh...
- Accelerate Workloads with OpenVINO and oneDNN (07-13-2023): OpenVINO utilizes oneDNN GPU kernels for discrete GPUs to accelerate compute-intensive workloads.
- Compressing the Transformer: Optimization of DistilBERT with the Intel® Neural Compressor (06-14-2023): A neural network, an aptly named, biologically inspired programming paradigm, enables the processin...
- Numenta and Intel Accelerate Inference 20x on Large Language Models with Intel® Xeon® CPU Max Series (04-06-2023): Numenta demonstrates new NLP capabilities for customers on Intel® Xeon® CPU Max Series processors.
- AI and HPC DevSummit 2022 Keynote: AI Software and Hardware Acceleration (02-07-2023): Intel’s AI software and tools, paired with its hardware architectures, are improving AI application performance.
- Responsible AI: The Future of AI Security and Privacy (12-15-2022): From Intel Innovation 2022, featuring Jason Martin, Principal Engineer leading the Intel Labs Secure Intelligence Team.
- CUMULATIVE_THROUGHPUT Enables Full-Speed AI Inferencing with Multiple Devices (11-28-2022): OpenVINO™ has enabled automatic selection of the most suitable target device for AI inferencing.
- Empowering Red Hat* OpenShift* Data Science Platform with Intel AI (10-04-2022): Use the Intel® AI Analytics Toolkit on the Red Hat OpenShift Data Science (RHODS) platform to speed up many parts of the AI workflow.
- YOLOv5 Model INT8 Quantization based on OpenVINO™ 2022.1 POT API (09-20-2022): How to use the OpenVINO™ 2022.1 Post-training Optimization Tool (POT) API for YOLOv5 model INT8 quantization.
- The Example of Deploying YOLOv7 Pre-trained Model Based on the OpenVINO™ 2022.1 C++ API (09-20-2022): How to deploy the YOLOv7 official pre-trained model based on the OpenVINO™ 2022.1 tool suite.
- Hybrid AI Inferencing managed with Microsoft Azure Arc-Enabled Kubernetes (08-02-2022): Azure Arc-Enabled Kubernetes enables centralized management of heterogeneous and geographically separ...
- Accelerating AI applications on Windows Subsystem for Linux with Intel’s iGPU and OpenVINO™ toolkit (07-20-2022): Configure your Windows system to get the most out of the Intel® Integrated Graphics Processing Unit (iGPU).
- Optimize Inference with Intel CPU Technology (07-19-2022): Enjoy improved inferencing and a lower overall total cost of ownership (TCO) across an integrated AI platf...
- Deploy AI Inference with OpenVINO™ and Kubernetes (07-19-2022): In this blog, you will learn how to use key features of the OpenVINO™ Operator for Kubernetes.
- OpenVINO™ Execution Provider + Model Caching = Better First Inference Latency for your ONNX Models (07-01-2022): Developers can now leverage model caching through the OpenVINO™ Execution Provider for ONNX Runtime.
- Witness the power of Intel® iGPU with Azure IoT Edge for Linux on Windows (EFLOW) & OpenVINO™ Toolkit (05-09-2022): In this blog you will learn how to set up and deploy your application or Docker container in Linux ...
- The Rise of Ethical Facial Recognition (04-14-2022): How ethical facial recognition can identify security threats in real time with Oosto (formerly AnyVi...
- Accelerating Media Analytics with Intel® DL Streamer featuring OpenVINO with Intel® RealSense (03-29-2022): Optimize, tune, and run comprehensive AI inference with the OpenVINO Toolkit.
- Easily Optimize Deep Learning with 8-Bit Quantization (03-08-2022): Discover how to use the Neural Network Compression Framework of the OpenVINO™ toolkit for 8-bit qua...
- Simplified Deployments with OpenVINO™ Model Server and TensorFlow Serving (01-21-2022): Learn how to perform inference on JPEG images using the gRPC API in OpenVINO Model Server.
- Optimizing Dental Insurance and Value-Based Payments with Intel® OpenVINO (12-13-2021): The oral healthcare industry can leverage AI to drive precision dentistry to better solve problems.
- How to Accelerate Deep Reinforcement Learning Training (12-10-2021): Speeding up inference with the Intel® OpenVINO™ toolkit can save time in training a robotics simulatio...
- Load Balancing OpenVINO™ Model Server Deployments with Red Hat OpenShift (10-13-2021): Learn how to deploy AI inference-as-a-service and scale to hundreds or thousands of nodes using Open...