
Intel Introduces "Throughput Mode" in the Intel Distribution of OpenVINO Toolkit

Enterprise & IT | Mar 7, 2019

The latest release of the Intel Distribution of OpenVINO Toolkit includes a CPU “throughput” mode, which is said to accelerate deep learning inference.

One of the biggest challenges in AI is delivering high-performance deep learning inference that runs at real-world scale on existing infrastructure. The Intel Distribution of OpenVINO toolkit (the name stands for Open Visual Inference and Neural Network Optimization) is a developer tool suite that accelerates high-performance deep learning inference deployments.

Latency, the execution time of a single inference, is critical for real-time services. Typically, approaches to minimizing latency focus on the performance of single inference requests, limiting parallelism to the individual input instance. This often means that real-time inference applications cannot take advantage of the computational efficiencies that batching (combining many input images into one request to achieve optimal throughput) provides, since large batch sizes come with a latency penalty. To address this gap, the latest release of the Intel Distribution of OpenVINO Toolkit includes a CPU “throughput” mode.
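To make the batching trade-off concrete, here is a toy calculation. The batch timings are illustrative numbers I made up for the example, not Intel's measurements:

```python
def throughput_and_latency(batch_size, batch_time_s):
    """Return (images/sec, per-request latency in seconds) for one batch.

    Every image in the batch waits for the whole batch to finish,
    so the latency seen by each request equals the batch time.
    """
    return batch_size / batch_time_s, batch_time_s

# Hypothetical timings: a larger batch amortizes weight reuse, so the
# per-image cost drops, but each request waits for the whole batch.
single = throughput_and_latency(1, 0.005)    # ~200 img/s at 5 ms latency
batched = throughput_and_latency(32, 0.040)  # ~800 img/s at 40 ms latency
```

The batched configuration moves four times as many images per second, but every request now waits eight times longer, which is exactly the penalty real-time services cannot absorb.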

According to Intel, the new mode allows efficient parallel execution of multiple inference requests against the same CNN, greatly improving throughput. In addition to the reuse of filter weights in convolutional operations (also available with batching), the finer execution granularity of the new mode further improves cache utilization. In “throughput” mode, CPU cores are distributed evenly among the parallel inference requests, following the general “parallelize the outermost loop first” rule of thumb. This also reduces scheduling and synchronization overhead compared with a latency-oriented approach, in which every CNN operation is parallelized internally across the full number of CPU cores.
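The core-distribution idea can be sketched as follows. This is only an illustration of the “parallelize the outermost loop first” rule, not OpenVINO's actual scheduler; the function name and layout are my own:

```python
def distribute_cores(num_cores, num_requests):
    """Evenly split CPU core IDs among parallel inference requests.

    Each request gets a contiguous slice of cores to run its copy of
    the network on, instead of every CNN operation spanning all cores,
    which cuts cross-core scheduling and synchronization overhead.
    """
    base, extra = divmod(num_cores, num_requests)
    groups, start = [], 0
    for i in range(num_requests):
        size = base + (1 if i < extra else 0)  # spread any remainder
        groups.append(list(range(start, start + size)))
        start += size
    return groups

# Example: 8 cores shared by 4 parallel inference requests
# -> [[0, 1], [2, 3], [4, 5], [6, 7]]
```

With each request pinned to its own small group, the only synchronization happens inside a group, rather than across all cores for every layer.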

Intel says that the speedup from the new mode is particularly strong on high-end servers, but also significant on other Intel architecture-based systems.

Topology         Dual-Socket Intel Xeon         Intel Core i7-8700K
                 Platinum 8180 Processor        Processor
mobilenet-v2     2.0x                           1.2x
densenet-121     2.6x                           1.2x
yolo-v3          3.0x                           1.3x
se-resnext50     6.7x                           1.6x

Together with the general threading refactoring also introduced in the R5 release, these performance improvements do not require tuning OMP_NUM_THREADS, KMP_AFFINITY, or other machine-specific settings; they are realized with the “out of the box” Intel Distribution of OpenVINO toolkit configuration.

To simplify benchmarking, the Intel Distribution of OpenVINO toolkit includes a dedicated Benchmark App that lets you vary the number of inference requests running in parallel from the command line. The rule of thumb is to test up to the number of CPU cores in your machine. In addition to the number of inference requests, you can also vary the batch size from the command line to find the throughput sweet spot.
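One way to follow that rule of thumb systematically is to enumerate the configurations worth benchmarking. The sketch below only builds the sweep grid; the actual timings would come from the toolkit's Benchmark App, and the batch-size candidates here are an assumption of mine, not a recommendation from Intel:

```python
import os

def sweep_configs(num_cores, batch_sizes=(1, 2, 4, 8)):
    """Build (num_inference_requests, batch_size) pairs to benchmark.

    Per the rule of thumb, the number of parallel inference requests
    is swept from 1 up to the machine's CPU core count; each pair
    would then be timed to locate the throughput sweet spot.
    """
    return [(nireq, batch)
            for nireq in range(1, num_cores + 1)
            for batch in batch_sizes]

# On a 4-core machine this yields 16 candidate configurations.
configs = sweep_configs(os.cpu_count() or 4)
```

Each pair would be passed to the Benchmark App in turn, keeping whichever configuration sustains the highest images per second within an acceptable latency budget.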

Tags: deep learning, Artificial Intelligence, Intel