Tuesday, October 17, 2017
Intel Advances Artificial Intelligence With Nervana Neural Network Processor


Intel CEO Brian M. Krzanich today spoke at the WSJDLive global technology conference about cognitive and artificial intelligence (AI) technology, and announced Intel's silicon for neural network processing, the Intel Nervana Neural Network Processor (NNP).

Coming before the end of this year, the Intel Nervana Neural Network Processor (NNP), formerly known as "Lake Crest," promises to revolutionize AI computing across myriad industries. Intel says that by using Intel Nervana technology, companies will be able to develop new classes of AI applications that maximize the amount of data processed and enable their customers to find greater insights. Examples include earlier diagnosis in healthcare, more efficient targeted advertising for social media, accelerated learning in autonomous vehicles, and better weather predictions.

Matrix multiplication and convolution are two of the key primitives at the heart of deep learning. These computations differ from general-purpose workloads in that the operations and data movements are largely known a priori. For this reason, the Intel Nervana NNP does not have a standard cache hierarchy; on-chip memory is managed directly by software. Better memory management enables the chip to achieve high utilization of the massive amount of compute on each die, which translates into faster training times for deep learning models.
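The idea can be made concrete with a short sketch. The Python/NumPy code below is an illustration only: the tile size, the buffer names and the "scratchpad" framing are assumptions for exposition, not Intel's actual hardware design. It shows how a matrix multiply can be tiled so that every data movement is planned by software in advance rather than left to a hardware cache.

# Illustrative sketch only: a software-tiled matrix multiply in NumPy.
# Tile size and the "on-chip buffer" comments are hypothetical; they stand
# in for the idea that data movement in such workloads is known ahead of
# time and can be scheduled by software rather than by a hardware cache.
import numpy as np

def tiled_matmul(a: np.ndarray, b: np.ndarray, tile: int = 64) -> np.ndarray:
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # Each tile load below is the kind of transfer a compiler or
                # runtime could plan explicitly for this workload.
                a_tile = a[i:i + tile, p:p + tile]   # "load" into on-chip buffer
                b_tile = b[p:p + tile, j:j + tile]
                c[i:i + tile, j:j + tile] += a_tile @ b_tile
    return c

if __name__ == "__main__":
    x = np.random.rand(256, 256).astype(np.float32)
    y = np.random.rand(256, 256).astype(np.float32)
    assert np.allclose(tiled_matmul(x, y), x @ y, atol=1e-3)

Because the loop structure fixes which tiles are needed and when, the corresponding transfers can be scheduled explicitly, which is the kind of software-managed memory handling the article describes.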

Designed with high-speed on- and off-chip interconnects, the Intel Nervana NNP enables massive bi-directional data transfer. A stated design goal was to achieve true model parallelism, in which neural network parameters are distributed across multiple chips. This makes multiple chips act as one large virtual chip that can accommodate larger models.
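As a rough illustration of what model parallelism means here, the sketch below shards one layer's weight matrix column-wise across several hypothetical "devices" and gathers the partial results. The function names, the column-wise split and the use of plain Python lists in place of separate chips are all assumptions for illustration; on real hardware each shard would live on its own chip and the gather would travel over the interconnect described above.

# Illustrative sketch only: column-wise model parallelism for one linear layer.
import numpy as np

def split_linear_layer(weights: np.ndarray, num_devices: int):
    """Shard a weight matrix column-wise across hypothetical devices."""
    return np.array_split(weights, num_devices, axis=1)

def model_parallel_forward(x: np.ndarray, shards) -> np.ndarray:
    # Each "device" computes its slice of the output independently...
    partial_outputs = [x @ w_shard for w_shard in shards]
    # ...and the partial results are gathered back into the full activation.
    return np.concatenate(partial_outputs, axis=1)

if __name__ == "__main__":
    x = np.random.rand(8, 512).astype(np.float32)
    w = np.random.rand(512, 1024).astype(np.float32)
    shards = split_linear_layer(w, num_devices=4)
    assert np.allclose(model_parallel_forward(x, shards), x @ w, atol=1e-3)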

Neural network computations on a single chip are largely constrained by power and memory bandwidth. To achieve higher degrees of throughput for neural network workloads, Intel has invented a new numeric format called Flexpoint. Flexpoint allows scalar computations to be implemented as fixed-point multiplications and additions while allowing for large dynamic range using a shared exponent. Since each circuit is smaller, this results in a vast increase in parallelism on a die while simultaneously decreasing power per computation.
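A shared exponent can be sketched in a few lines of Python. The encoding below is a simplified illustration in the spirit of the description above, not Intel's published Flexpoint specification; the 16-bit mantissa width and the rounding and clipping choices are assumptions.

# Illustrative sketch only: a shared-exponent fixed-point encoding in the
# spirit of Flexpoint. Mantissa width and rounding are assumptions.
import numpy as np

def encode_shared_exponent(x: np.ndarray, mantissa_bits: int = 16):
    """Encode a tensor as integer mantissas plus one shared exponent."""
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return np.zeros(x.shape, dtype=np.int32), 0
    # Pick a single exponent so the largest value fits the mantissa range.
    exponent = int(np.ceil(np.log2(max_abs))) - (mantissa_bits - 1)
    lo, hi = -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1
    mantissas = np.clip(np.round(x / 2.0 ** exponent), lo, hi).astype(np.int32)
    return mantissas, exponent

def decode_shared_exponent(mantissas: np.ndarray, exponent: int) -> np.ndarray:
    return mantissas.astype(np.float32) * 2.0 ** exponent

if __name__ == "__main__":
    t = np.random.randn(4, 4).astype(np.float32)
    m, e = encode_shared_exponent(t)
    print(np.max(np.abs(decode_shared_exponent(m, e) - t)))  # small error

Because every element of the tensor shares one exponent, arithmetic on the tensor reduces to integer (fixed-point) multiplies and adds on the mantissas, which is the source of the smaller circuits and higher parallelism the article refers to.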

Intel says it has multiple generations of Intel Nervana NNP products in the pipeline that will deliver higher performance and enable new levels of scalability for AI models. The company's goal, set last year, is to achieve 100 times greater AI performance by 2020.

Intel is also investing in frontier technologies that will be needed for other large-scale computing applications of the future. Among these technologies, the company is achieving research breakthroughs in neuromorphic and quantum computing.

Neuromorphic chips are inspired by the human brain and will help computers make decisions based on patterns and associations. Intel recently announced a self-learning neuromorphic test chip, which uses data to learn and make inferences, gets smarter over time, and does not need to be trained in the traditional way.

Quantum computers have the potential to be powerful machines that harness the unique capabilities of a large number of qubits (quantum bits), as opposed to binary bits, to perform exponentially more calculations in parallel. This would enable quantum computers to tackle problems conventional computers can't handle, such as simulating nature to advance research in chemistry, materials science and molecular modeling, for example creating a room-temperature superconductor or discovering new drugs.

Last week, Intel announced a 17-qubit superconducting test chip delivered to QuTech, its quantum research partner in the Netherlands. The delivery of this chip demonstrates the fast progress Intel and QuTech are making in researching and developing a working quantum computing system. In fact, Intel expects to deliver a 49-qubit chip by the end of this year.


