So, a group of volunteers set out to solve this problem on their own, using a homegrown solution that stressed performance over all else.
In internal tests, Huang said, CNTK has proved more efficient than four other popular computational toolkits that developers use to create deep learning models for things like speech and image recognition, thanks to its better communication capabilities.
Those types of performance gains are important in the fast-moving field of deep learning, because some of the biggest deep learning tasks can take weeks to finish.
Over the past few years, the field of deep learning has exploded as more researchers have started running machine learning algorithms using deep neural networks, which are systems that are inspired by the biological processes of the human brain. Many researchers see deep learning as a very promising approach for making artificial intelligence better.
Those gains have allowed researchers to create systems that can accurately recognize and even translate conversations, as well as ones that can recognize images and even answer questions about them.
Internally, Microsoft is using CNTK on a set of powerful computers that use graphics processing units, or GPUs.
Although GPUs were designed for computer graphics, researchers have found that they are also ideal for running the kinds of algorithms behind recent major advances in technology that can speak, hear and understand speech, and recognize images and movements.
Chris Basoglu, a principal development manager at Microsoft who also worked on the toolkit, said one of the advantages of CNTK is that it can be used by anyone from a researcher on a limited budget, with a single computer, to someone who has the ability to create their own large cluster of GPU-based computers. The researchers say it can scale across more GPU-based machines than other publicly available toolkits, providing a key advantage for users who want to do large-scale experiments or calculations.
Huang said it was important for his team to be able to address Microsoft’s internal needs with a tool like CNTK, but they also want to provide the same resources to other researchers who are making similar advances in deep learning.
That’s why they decided to make the tools available via open source licenses to other researchers and developers.
Starting Monday, CNTK will be available via an open-source license to anyone who wants to use it.
Bringing research into consumer products
Microsoft chief executive officer Satya Nadella is trying to overhaul the company's research arm and the way it works with the rest of the company. The goal is to quickly identify technology with the most potential and get it into customers' hands before a competitor replicates it.
To break down the walls between its research group and the rest of the company, Microsoft reassigned about half of its more than 1,000 research staff in September 2014 to a new group called MSR Next. Its focus is on projects with greater impact on the company rather than pure research. Meanwhile, the other half of Microsoft Research is being pushed to find more significant ways to contribute to the company's products.
Skype Translator is an example of how the company tries to bring research into products. It uses speech recognition and artificial intelligence to translate live conversations into another language.
Besides Skype, other services that have benefited from the recent transformation include cloud productivity tools in Office, faster and more power-efficient servers running Bing, and the augmented-reality headset HoloLens. The latest to come out of this initiative is a new feature for Cortana. Microsoft plans to release an update to the digital assistant on Monday that relies on work from the corporate research group. It will give Cortana the ability to scan e-mails for tasks the user has agreed to accomplish and automatically set reminders to do them.
Google is not out of the AI game, either. Researchers and developers on the search engine and Gmail teams share many of the same tools, including the company's open-source AI framework TensorFlow. That kind of close collaboration has helped produce new features, including Smart Reply, which suggests e-mail responses based on the content of a message. The feature, released in November 2015, was based on about a year of AI research at Google.