The move will allow Wave Computing to expand from AI training in data centers to AI inference for embedded systems.
In an interview with EE Times, Wave Computing's CEO Derek Meyer said that "our company's strategy has always been to push our dataflow fabric out to the edge." The acquisition should give Wave Computing an opportunity to infuse its scalable dataflow technology with MIPS's RISC processors while plugging it into MIPS's ecosystem.
MIPS has lost its once-respectable position in the processor-core IP market, although the MIPS architecture is still supported by companies including Microchip, Mobileye (Intel), MediaTek, and Japan's Denso.
Wave Computing's ties to MIPS run deep. CEO Derek Meyer was once a MIPS vice president of sales and marketing. In addition, Mike Uhler, Wave Computing's vice president of operations, was MIPS's CTO, and Darren Jones, MIPS's former head of engineering, is now Wave Computing's VP of engineering.
The combined company will operate under the name Wave Computing, but it will keep the MIPS brand and corporate identity. MIPS will continue to license MIPS IP solutions.
Wave Computing is expected to deliver its high-speed machine learning solution for data center-based training to a first set of customers later this month. The company claims the system delivers up to a 1,000x performance improvement in neural network training.
Wave's compute appliance leverages the company's patented dataflow architecture, which eliminates the need for a central processing unit (CPU) or graphics processing unit (GPU), removing the performance and scalability bottlenecks typical of traditional deep learning solutions.
The Wave compute appliance comes in a 3U "plug & play" form factor that easily fits into existing data center environments. The scalable compute appliance initially supports TensorFlow, and can support a range of frameworks including the Microsoft Cognitive Toolkit (CNTK), MXNet and more. Wave Computing provides all supporting software, tools and dataflow agent libraries.
The company plans to soon unveil a roadmap for a common AI platform built on MIPS cores and Wave's dataflow processing unit (DPU).
Wave Computing envisions this common AI platform spanning the data center to the edge. How AI-ready MIPS cores, once launched, will stack up against AI-enabled cores from rivals such as Arm and Imagination is anyone's guess.
Google has also developed its own Tensor Processing Unit (TPU), an AI accelerator ASIC initially aimed at machine learning inference. The company has since moved to a second-generation TPU geared more toward training.
For its part, Nvidia sees much of its AI revenue coming from the training side. But it is also promoting Xavier, an SoC that combines CPU and GPU cores with other hardware accelerators, designed to power the company's AI self-driving platform inside automated vehicles.