Tuesday, October 10, 2017
 AMD, Intel, ARM, IBM and Others Support the Open Neural Network Exchange Format for AI

AMD, ARM, Huawei, IBM and Intel have announced their support for the Open Neural Network Exchange (ONNX) format, which was co-developed by Microsoft and Facebook in order to reduce friction for developing and deploying AI.

Introduced last month, ONNX is a standard for representing deep learning models that allows them to be transferred between frameworks such as PyTorch, Caffe2, and Microsoft's Cognitive Toolkit. It is the first step toward an open ecosystem in which AI developers can move easily between tools and choose the combination that works best for them.
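
As an illustration of that transfer, the following is a minimal sketch, assuming PyTorch and the onnx Python package, of exporting a model to the ONNX format and verifying the result; the toy network, input shape, and file name are hypothetical and not part of the announcement.

    import torch
    import torch.nn as nn
    import onnx

    # A toy network standing in for a real model (hypothetical example).
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    model.eval()

    # An example input fixes the shapes recorded in the exported graph.
    dummy_input = torch.randn(1, 4)

    # Serialize the model to the framework-neutral ONNX format.
    torch.onnx.export(model, dummy_input, "model.onnx")

    # Reload the file and check that it is a well-formed ONNX model.
    onnx.checker.check_model(onnx.load("model.onnx"))

Any framework or tool with an ONNX importer can then load the resulting file, which is the kind of interoperability the format is meant to provide.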

Standardization is good for both the compute industry and for developers because it enables a level of interoperability between various products and frameworks, while streamlining the path from development to production.

By joining the project, Intel plans to further expand the choices developers have, both through frameworks powered by the Intel Nervana Graph library and through deployment with the company's Deep Learning Deployment Toolkit.

Intel plans to enable users to convert ONNX models to and from Intel Nervana Graph models, giving users an even broader choice of deep learning toolkits.
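
The converter itself is not described in the announcement, but converters like it read and write the ONNX protobuf graph; the sketch below, again assuming the onnx Python package and the file name from the example above, simply inspects that framework-neutral graph.

    import onnx

    # Load a previously exported model (file name is an assumption).
    model = onnx.load("model.onnx")

    # The graph is a list of framework-neutral operator nodes; importers
    # and exporters translate to and from this representation.
    for node in model.graph.node:
        print(node.op_type, list(node.input), "->", list(node.output))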

Arm is already working to accelerate Caffe2 on its Cortex-A CPUs and on Arm Mali GPU-based devices, which currently run the Facebook application.

 