
Arm’s Project Trillium brings machine learning to mobile edge

Arm announces Project Trillium

Arm introduced a new processor platform, called Project Trillium, as part of an initiative to bring machine learning to mobile edge devices. The platform includes two new processors, the Arm Machine Learning (ML) and Object Detection (OD) processors, which the company said will enable trillions of operations per second on mobile devices.

Machine learning has become a buzzword in the semiconductor industry and beyond. The basic idea behind machine learning is to enable machines to learn from the data they are fed, using specialized algorithms. Tech giants like Amazon, Google, Qualcomm and various startups are working to harness the power of machine learning, developing their own A.I. chips in the process.

“The rapid acceleration of artificial intelligence into edge devices is placing increased requirements for innovation to address compute while maintaining a power efficient footprint. To meet this demand, Arm is announcing its new ML platform, Project Trillium,” said Rene Haas, president, IP products group, Arm.

“New devices will require the high-performance ML and AI capabilities these new processors deliver. Combined with the high degree of flexibility and scalability that our platform provides, our partners can push the boundaries of what will be possible across a broad range of devices.”

According to the company, the ML processor is made for low-power machine learning workloads. It can reportedly deliver over 4.6 trillion operations per second for mobile computing, with an efficiency of over three trillion operations per second per watt (TOPs/W).
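Taken together, those two figures imply a power budget in the low single watts. The back-of-the-envelope calculation below is only a rough sketch, and assumes the claimed peak throughput and efficiency are reached at the same time.

```python
# Rough estimate from Arm's quoted figures; assumes peak throughput
# and peak efficiency occur simultaneously, which may not hold in practice.
peak_tops = 4.6               # trillion operations per second (claimed peak)
efficiency_tops_per_w = 3.0   # trillion operations per second per watt

implied_power_w = peak_tops / efficiency_tops_per_w
print(f"Implied power at peak: {implied_power_w:.2f} W")  # ~1.53 W
```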

The OD processor, on the other hand, was made to identify people and objects. It provides real-time detection in full HD at 60 frames per second, scanning every frame and returning a list of detected objects along with their locations within the scene. It can be combined with Arm Cortex CPUs, Arm Mali GPUs or the Arm ML processor.
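The per-frame output described above is essentially a list of detections with positions. The sketch below shows how an application might consume such a list; the Detection structure and field names are hypothetical illustrations, not an Arm API.

```python
# Hypothetical detection record for illustration only; not Arm's actual interface.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str          # e.g. "person"
    x: int              # bounding-box top-left corner, pixels
    y: int
    width: int
    height: int
    confidence: float   # 0.0 to 1.0

def people_in_frame(detections: List[Detection], min_confidence: float = 0.5) -> List[Detection]:
    """Keep only confident 'person' detections from one full-HD frame."""
    return [d for d in detections
            if d.label == "person" and d.confidence >= min_confidence]

# At 60 frames per second, an application would call people_in_frame()
# on each frame's detection list as it arrives.
```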

Neural network (NN) software is another component of the project, intended to bridge the gap between NN frameworks such as TensorFlow, Caffe and Android NN and Arm Cortex CPUs and Arm Mali GPUs. According to Arm, this is achieved when the software is used alongside the Arm Compute Library and CMSIS-NN.
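Conceptually, that software layer takes a model trained in one of those frameworks and hands its work to whichever compute backend the device offers. The sketch below illustrates only that routing idea; the backend names and helper functions are hypothetical stand-ins and do not reflect Arm's actual software interfaces.

```python
# Illustrative only: routing a framework-trained model to a compute backend.
# Backend names and functions are hypothetical, not Arm's API.
SUPPORTED_BACKENDS = ["ml_processor", "mali_gpu", "cortex_cpu"]  # ordered by preference

def select_backend(available: list) -> str:
    """Pick the most capable backend present on the device."""
    for backend in SUPPORTED_BACKENDS:
        if backend in available:
            return backend
    raise RuntimeError("no supported compute backend found")

def run_inference(model_path: str, available: list) -> None:
    backend = select_backend(available)
    # A real stack would lower the graph to Arm Compute Library kernels
    # (Cortex CPUs, Mali GPUs) or CMSIS-NN kernels at this point.
    print(f"Running {model_path} on {backend}")

run_inference("mobilenet.tflite", ["cortex_cpu", "mali_gpu"])
```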

The company noted the new suite of Arm ML IP will be available for early preview in April 2018, with general availability in mid-2018.

ABOUT AUTHOR

Nathan Cranford
Nathan Cranford joined RCR Wireless News as a Technology Writer in 2017. Prior to his current position, he served as a content producer for GateHouse Media, and as a freelance science and tech reporter. His work has been published by a myriad of news outlets, including COEUS Magazine, dailyRx News, The Oklahoma Daily, Texas Writers Journal and VETTA Magazine. Nathan earned a bachelor’s from the University of Oklahoma in 2013. He lives in Austin, Texas.