
Intel Open Sources the nGraph Compiler for Deep Learning Systems

Enterprise & IT | Apr 19, 2018

Intel's nGraph Compiler, a framework-neutral deep neural network (DNN) model compiler, is now open source. It supports multiple deep learning frameworks while optimizing models for multiple hardware targets.

"Finding the right technology for AI solutions can be daunting for companies, and it's our goal to make it as easy as possible. With the nGraph Compiler, data scientists can create deep learning models without having to think about how that model needs to be adjusted across different frameworks, and its open source nature means getting access to the tools they need, quickly and easily," said Arjun Bansal, VP, AI Software, Intel.

With nGraph, data scientists can focus on data science rather than worrying about how to adapt their DNN models to train and run efficiently on different devices.

Currently, the nGraph Compiler supports three deep learning compute devices and six third-party deep learning frameworks: TensorFlow, MXNet, neon, PyTorch, CNTK and Caffe2. Users can run these frameworks on several devices: Intel Architecture (x86, Intel Xeon and Xeon Phi), GPU (NVIDIA cuDNN), and Intel Nervana Neural Network Processor (NNP).

When Deep Learning (DL) frameworks first emerged as the vehicle for running training and inference models, they were designed around kernels optimized for a particular device. As a result, many device details were being exposed in the model definitions, complicating the adaptability and portability of DL models to other, or more advanced, devices.

With the traditional approach, moving a model to an upgraded device is a tedious process for the algorithm developer. Enabling a model to run on a different framework is also problematic: the developer must separate the essence of the model from the performance adjustments made for the original device, translate it into similar ops in the new framework, and finally make the necessary changes for the preferred device configuration on the new framework.
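The idea behind a framework-neutral representation can be sketched in a few lines. Note this is an illustrative toy, not nGraph's actual API: the model is described once as a device-agnostic graph, and device details (here, hypothetical kernel names) are applied only in a separate lowering step.

```python
# Toy sketch of a framework-neutral intermediate representation (IR).
# NOT nGraph's real API -- the op and kernel names below are illustrative.
from dataclasses import dataclass, field


@dataclass
class Node:
    op: str                      # generic op name, e.g. "matmul" or "relu"
    inputs: list = field(default_factory=list)


def lower(graph, backend):
    """Map each generic op to a backend-specific kernel name."""
    kernels = {
        "cpu": {"matmul": "mkldnn_gemm", "relu": "mkldnn_relu"},
        "gpu": {"matmul": "cudnn_gemm", "relu": "cudnn_relu"},
    }[backend]
    return [kernels[n.op] for n in graph]


# The same model definition targets either device; no device details
# leak into the graph itself.
graph = [Node("matmul"), Node("relu")]
print(lower(graph, "cpu"))   # ['mkldnn_gemm', 'mkldnn_relu']
print(lower(graph, "gpu"))   # ['cudnn_gemm', 'cudnn_relu']
```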

Intel designed the nGraph library to reduce these kinds of engineering complexities. While optimized kernels for DL primitives are provided through the project and via libraries like Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN), there are also several compiler-inspired ways in which performance can be further optimized.
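One class of compiler-inspired optimization the article alludes to is op fusion at the graph level. The sketch below is again illustrative rather than nGraph's real pass machinery: a matmul followed by an add is collapsed into a single hypothetical fused kernel, avoiding a round-trip of the intermediate tensor through memory.

```python
# Illustrative graph-rewrite pass, not nGraph's actual implementation.
# Fuses a "matmul" immediately followed by an "add" into one fused op.
def fuse_matmul_add(ops):
    out, i = [], 0
    while i < len(ops):
        if i + 1 < len(ops) and ops[i] == "matmul" and ops[i + 1] == "add":
            out.append("gemm_bias")   # one fused kernel instead of two
            i += 2
        else:
            out.append(ops[i])
            i += 1
    return out


print(fuse_matmul_add(["matmul", "add", "relu"]))  # ['gemm_bias', 'relu']
```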

Tags: deep learning, Intel