Greetings All,
Intel® Distribution of OpenVINO™ toolkit 2019 R2 is available for download!
Executive summary:
- Added the Deep Learning Workbench, a profiler for neural network topologies and layers. DL Workbench provides the following features:
- Visualization of key metrics such as latency, throughput, and performance counters
- Easy configuration for inference experiments including INT8 calibration
- Accuracy check and automatic detection of optimal performance settings
- Added new non-vision topologies (GNMT, BERT, TDNN-LSTM (NNet3), ESPNet, etc.) to enable machine translation, natural language processing, and speech use cases
- Introduced new Inference Engine Core APIs. The Core API automates direct mapping to devices and provides a Query API for configuration and metrics to help determine the best deployment platform
- Added Multi-Device inference with automatic load balancing across available devices for higher throughput
- Serialized the Intermediate Representation in FP16 to work uniformly across all platforms, reducing model size by 2x compared to FP32 and improving device memory utilization and model portability
- Enabled OpenCL custom layer support for Intel® Neural Compute Stick 2 in preview mode
- Enabled new binary distribution via YUM*, APT*, and Docker Hub*
- Added support for Windows* Server 2016
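To illustrate the new Core API and Multi-Device plugin mentioned above, here is a minimal sketch using the toolkit's Python bindings. The model file names are placeholders, and the exact class/method names may vary slightly between releases, so treat this as an outline rather than a definitive recipe:

```python
# Hedged sketch of the Inference Engine Core API introduced in 2019 R2.
# Guard the import so the sketch is inspectable even without the toolkit.
try:
    from openvino.inference_engine import IECore, IENetwork
except ImportError:
    IECore = IENetwork = None  # toolkit not installed in this environment

def run_on_multi_device(model_xml="model.xml", model_bin="model.bin"):
    """Load a network once and let the MULTI plugin balance requests.

    model_xml / model_bin are placeholder paths to an IR model.
    """
    ie = IECore()
    # Query API: enumerate the devices the Core API can dispatch to
    print("Available devices:", ie.available_devices)
    net = IENetwork(model=model_xml, weights=model_bin)
    # "MULTI:CPU,GPU" requests automatic load balancing across both devices
    exec_net = ie.load_network(network=net, device_name="MULTI:CPU,GPU")
    return exec_net
```

With the Core API there is no per-device plugin setup: the same `IECore` instance serves all targets, and swapping `device_name` (e.g. `"CPU"`, `"MYRIAD"`, or a `MULTI:` list) is the only change needed to retarget inference.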
Please read more details in the Release Notes: https://software.intel.com/en-us/articles/OpenVINO-RelNotes