Published March 9th, 2022
Narayan Srinivasa is the Director of Machine Intelligence Research Programs at Intel Labs.
I was recently honored with the opportunity to deliver the keynote address at the 2022 Energy Consequences of Information Workshop, an annual workshop funded by the Air Force Research Laboratory (AFRL), the Air Force Office of Scientific Research (AFOSR), and the Department of Energy (DOE), held February 16-18, 2022. My talk, "On Solving Hard Optimization Problems in an Energy-Efficient Way," was originally intended as an invited talk but was bumped up to keynote status a couple of weeks before the event. I suspect this change was fueled not by my prowess as a speaker, but by the topic's relevance to solving hard problems in our increasingly data-driven world. I'd like to share a summary of the keynote in this blog.
We face unprecedented challenges in processing the vast amounts of data we need to take us from one day to the next. Among the top challenges is the energy-intensive nature of artificial intelligence (AI), which has become universal and necessary at all levels of decision-making.
Businesses, institutions, and governments are tasked with solving a myriad of complex optimization problems, e.g., shipping route determination, disease prediction, circuit diagnostics, 5G MIMO deployments, and information security. What distinguishes these problems is that no known algorithm solves them in polynomial time; they belong to the class of "non-deterministic polynomial (NP) problems" (Table 1).
Table 1: The four types of optimization problems in computer science.
In the keynote, I discussed solving NP-Hard problems in an efficient way. These are the toughest to solve because (1) they cannot, as far as we know, be solved in polynomial time, and (2) their solutions may or may not be verifiable in polynomial time. More importantly, the space, time, and precision they demand grow exponentially with problem size, quickly exceeding the capability of traditional computing.
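To make that exponential growth concrete, consider a brute-force solver for max-cut, a classic NP-Hard problem: every node added to the graph doubles the number of candidate partitions. The sketch below is a minimal illustration in Python; the toy graph and problem sizes are my own assumptions, not examples from the keynote.

```python
from itertools import product

def brute_force_max_cut(edges, n_nodes):
    """Exhaustively search all 2^n partitions of the nodes into two sets
    and return the partition that cuts the most edges."""
    best_cut, best_assignment = -1, None
    for assignment in product([0, 1], repeat=n_nodes):  # 2^n candidates
        cut = sum(1 for u, v in edges if assignment[u] != assignment[v])
        if cut > best_cut:
            best_cut, best_assignment = cut, assignment
    return best_cut, best_assignment

# Toy 5-node graph (hypothetical example): 2^5 = 32 candidates here,
# but a 50-node graph already has 2^50 (about 10^15) candidates.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
print(brute_force_max_cut(edges, 5))
```

Exact search like this hits a wall almost immediately, which is why heuristic and hardware-accelerated approaches are needed.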
How Do We Solve NP-Hard Problems?
The top four approaches currently being explored to solve NP-Hard problems are quantum computing (QC), photonic computing (PC), analog computing (AC), and neuromorphic computing (NC). Neuromorphic computing is the approach Intel is focusing on.
NC takes its cues from how the biological brain functions to solve problems. More specifically, it involves the implementation of algorithms based on the spiking activity of neurons.
Spiking neural networks (SNNs), novel models that simulate natural learning by dynamically re-mapping neural networks, are used in neuromorphic computing to make decisions in response to learned patterns over time. Neuromorphic processors leverage these asynchronous, event-based SNNs to achieve orders of magnitude gains in power and performance over conventional architectures.
At Intel, we are exploring stochastic spiking neural networks (SSNNs) on our second-generation Loihi chip, which transmits data through patterns of pulses (or spikes) between neurons. Loihi 2 offers greater synaptic density and neuro-core capacity, among several other upgrades, and its neurons operate 5,000x faster than their biological counterparts.
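To give a flavor of how a stochastic spiking dynamic can attack such a problem, here is a plain-NumPy sketch of the principle; it is not Intel's implementation and does not use the Loihi or Lava APIs. Each binary "neuron" represents one side of a max-cut partition; a neuron "spikes" (flips sides) with a noisy probability that favors cutting more edges, and the noise is annealed so the network settles into a good partition. All parameter values here are illustrative assumptions.

```python
import numpy as np

def stochastic_spiking_max_cut(edges, n_nodes, steps=2000, seed=0):
    """Toy stochastic spiking dynamic for max-cut.

    Each neuron's state (0/1) is its side of the cut. A neuron spikes
    (flips sides) with a probability that grows when flipping would cut
    more edges; injected noise mimics stochastic firing and decays over
    time like an annealing temperature.
    """
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, n_nodes)          # random initial partition
    neighbors = [[] for _ in range(n_nodes)]
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)

    def cut_size(s):
        return sum(1 for u, v in edges if s[u] != s[v])

    best, best_cut = state.copy(), cut_size(state)
    for t in range(steps):
        temp = max(0.05, 1.0 - t / steps)        # noise schedule (assumed)
        i = rng.integers(n_nodes)                # pick a neuron at random
        # Net gain in cut edges if neuron i flips sides.
        gain = sum(1 if state[i] == state[j] else -1 for j in neighbors[i])
        # Stochastic "spike": flip with a sigmoid probability of the gain.
        if rng.random() < 1.0 / (1.0 + np.exp(-gain / temp)):
            state[i] ^= 1
        c = cut_size(state)
        if c > best_cut:
            best, best_cut = state.copy(), c
    return best_cut, best

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
print(stochastic_spiking_max_cut(edges, 5))
```

On neuromorphic hardware, these flip decisions would be expressed as sparse, asynchronous spike events rather than clocked matrix arithmetic, which is one intuition for the energy-to-solution gains summarized in Figure 1.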
Figure 1 shows some examples of problem-solving tests that we have conducted using this technology.
Figure 1. Three examples of the optimization problems for which NC-based solvers are superior to CPU-based current state-of-the-art (CSOA) are summarized in the first column. The second and third columns show the energy to solution and time to solution advantages of NC over CSOA running on CPU.
Comparing Neuromorphic Computing (NC) to Other Approaches
When it comes to solving NP-Hard problems, NC offers the best scalability, energy to solution, SWaP (size, weight, and power), and performance on binary and mixed-variable problems. It is not only highly scalable but also fully programmable, with fast convergence. Most importantly, it is significantly more energy-efficient than any of the other approaches mentioned above, especially QC and PC. That said, NC tends to lag, at least for the time being, in time to solution. More testing across a variety of problems is needed to fully understand NC's strengths and to help mitigate its deficits.
QC offers a super-fast time to solution; however, the limited number of qubits constrains its scalability. Additionally, QC's size and energy requirements make it difficult to apply in real-world applications where SWaP resources are limited.
PC is both fast and scalable but requires large, complex optical setups (i.e., it is not SWaP amenable). Implementing on the order of N optical delay lines is challenging in terms of both their number and their precision, but doing so preserves the quantum entanglement of optical pulses, enabling fast search. Digital feedback using ADC/FPGA/DAC adds programmability but is energy-intensive and loses the quantum entanglement. In addition, phase stability across the whole cable is susceptible to external perturbations, leading to poor performance.
AC is a super-fast, compact solution, but its scalability to large problems remains unknown. Analog connections have low precision, and parasitics and variability can be difficult to control and validate.
An overall comparison of the pros and cons of all four approaches is visualized in Figure 2.
Figure 2. The comparison is made across six variables. TTS – time to solution; ETS – energy to solution; SWaP – size, weight, and power. QC – quantum computing; PC – photonic computing; AC – analog computing; NC – neuromorphic computing.
In summary, neuromorphic computing appears to be the most scalable and energy-efficient option across several optimization problems, while also being the most SWaP compatible. The other three approaches are at a nascent stage of maturity but are worth monitoring to see whether they can overcome some of their limitations. Ultimately there may not be a single winner, but exciting research is exploring all four of these areas to help solve some of today's hardest problems in a fast and efficient way.