Machine learning, limited by lags, speeds up

Photo provided by Sai Manoj, assistant professor in the ECE department at Mason.

Machine learning has come a long way, as you likely notice in your everyday experiences: it’s at work when Google finishes your search term for you, or when your navigation app knows that when you get in your car in the morning you’re probably heading to school or work, and shows you a traffic report.

Convenient, indeed, but Sai Manoj, an assistant professor in the George Mason University College of Engineering and Computing’s Department of Electrical and Computer Engineering, says that while we may be impressed with machine learning’s advancements, there is still plenty of room for improvement. With a grant from the National Science Foundation, Manoj hopes to give machine learning a kick in its metaphorical pants.

Manoj says we expect mere milliseconds of delay when interacting with our smartwatches, phones, or similar devices, so we notice even the tiniest hiccups. “When we’re talking about our phones, for example, we’re considering so many computations that we’re talking in terms of gigaflops (a unit of computing speed equal to one billion floating-point operations per second), and the software cannot handle that,” says Manoj.

“The main thing that we are lacking is efficiency: you need accelerators,” he says. “Our human brain, for example, needs hardly a few hundred millijoules of energy for basic things, whereas machine learning on our systems needs exponentially more energy.”

Typically, machine learning processes information by retrieving data from storage memory whenever that information is needed, which introduces inefficiencies. Manoj’s project instead designs look-up tables so that data is stored in the same place where computation occurs, an approach called in-memory computing. And instead of each look-up table serving a single, discrete purpose, his proposed look-up tables do multiple things.
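
To picture the difference, consider a minimal Python sketch of the look-up-table principle, an illustration only and not Manoj’s actual hardware design: a function’s outputs are computed once and stored, so at run time a “computation” becomes a single read from the place where the answer already lives.

    # Illustrative sketch only: a look-up table turns computation into a memory read.
    # Here a 4-bit multiply is precomputed in software; in-memory computing does the
    # equivalent in hardware, so data never travels to a separate processor.

    def build_multiply_lut(bits=4):
        """Precompute a * b for every pair of 4-bit operands."""
        size = 1 << bits
        return [[a * b for b in range(size)] for a in range(size)]

    LUT = build_multiply_lut()

    def multiply(a, b):
        # No arithmetic at run time: the result is fetched where it is stored.
        return LUT[a][b]

    print(multiply(9, 7))  # 63, served straight from the table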

Manoj says, “You need a lot of programming cycles to update memory or reprogram it. We have two-stage multiplexers (devices that choose among several inputs and forward the selected one to a single output), which reduce the time to get data to what we call one ‘clock cycle,’ which is not achievable using traditional look-up tables.” Another novel part of his project is reconfiguring the connections between those tables, which speeds up their interactions compared with traditional approaches.
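
The one-clock-cycle claim is easier to see with a small sketch of a two-stage multiplexer tree, again an illustration under assumed details rather than the project’s real circuit: two levels of 2:1 multiplexers pick one of four hypothetical look-up-table outputs using two select bits, and because the whole tree is combinational logic in hardware, the selection settles within a single clock cycle.

    # Illustrative sketch: a two-stage tree of 2:1 multiplexers selects one of
    # four look-up-table outputs. In hardware the tree is pure combinational
    # logic, so the chosen value is available within one clock cycle.

    def mux2(a, b, sel):
        """A 2:1 multiplexer: forward input b when sel is 1, otherwise input a."""
        return b if sel else a

    def two_stage_mux(inputs, sel0, sel1):
        # Stage 1: two multiplexers narrow four candidates down to two.
        upper = mux2(inputs[0], inputs[1], sel0)
        lower = mux2(inputs[2], inputs[3], sel0)
        # Stage 2: a final multiplexer picks the output.
        return mux2(upper, lower, sel1)

    lut_outputs = [0xA, 0xB, 0xC, 0xD]  # hypothetical outputs of four look-up tables
    print(hex(two_stage_mux(lut_outputs, sel0=1, sel1=0)))  # 0xb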

Manoj leverages the flexibility of field-programmable gate arrays (FPGAs), integrated circuits that can be configured by an end user after purchase, and combines it with the benefits of an ASIC, a computer chip custom-designed for a specific purpose, to create “low overhead” and programmable on-chip interconnections that allow for speedy interactions.
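
The reconfigurable interconnections can be pictured the same way. The sketch below, which assumes a made-up routing list and table names, shows how rewiring which table feeds which changes the dataflow without touching the tables themselves, loosely like reprogramming an FPGA’s fabric.

    # Illustrative sketch: connections between look-up tables modeled as a
    # programmable route. Changing the route rewires the dataflow while the
    # tables stay untouched, loosely like reconfiguring an FPGA's wiring.

    tables = {
        "square": [(x * x) % 16 for x in range(16)],   # hypothetical 4-bit LUTs
        "double": [(2 * x) % 16 for x in range(16)],
    }
    route = ["square", "double"]  # current wiring: square feeds double

    def evaluate(x):
        for name in route:   # follow the wiring, one table at a time
            x = tables[name][x]
        return x

    print(evaluate(3))  # 3 -> 9 -> 2
    route.reverse()     # reprogram the interconnect: double now feeds square
    print(evaluate(3))  # 3 -> 6 -> 4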

His accelerator is designed not only for machine learning applications but also to handle other complex applications we use daily, making them more efficient without compromising performance.

And so, Manoj is putting the pedal to the metal, so to speak.