
Hyperspectral compute-in-memory architecture boosts efficiency of AI


As the size of AI models increases exponentially, traditional electronic systems are struggling to meet the compute demand, highlighting the importance of optical technologies in data centres (image: kwarkot/Shutterstock)

NTT Research, a Silicon Valley start-up, has introduced a hyperspectral compute-in-memory architecture that combines space division and frequency division multiplexing to boost computational efficiency and throughput.

In contrast to many existing optical interconnect architectures, the system integrates frequency division multiplexing, so each pixel can process multiple frequency signals concurrently. The optics handle parallel data processing energy-efficiently, while the electronics provide programmability.

Meeting the computing demand of AI

Advances in artificial intelligence (AI) have transformed industries. As AI models grow exponentially in size, traditional electronic systems are struggling to meet the compute demand because of their scaling limitations. A single computational task now requires a large network of disaggregated electronic chips, which highlights the importance of optical technologies in data centres: they complement electrical systems by enhancing data transfer.

Optical interconnect technology is progressing to integrate more closely with electronic chips, driven by the need for greater bandwidth capacities. However, as further increases in serial communication speed become harder to achieve, strategies such as space division multiplexing and frequency division multiplexing are being explored to reach larger aggregate bandwidths.
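As a back-of-the-envelope illustration of why these strategies matter, the short sketch below multiplies out the capacity of a parallel optical link; the lane counts and rates are hypothetical, chosen only to show how the spatial and frequency dimensions multiply.

```python
# Toy calculation: aggregate bandwidth of a parallel optical link.
# All figures are hypothetical and for illustration only.
serial_rate_gbps = 100   # per-lane serial rate, hard to push much higher
spatial_lanes = 8        # space division multiplexing: parallel lanes/fibres
wavelengths = 16         # frequency division multiplexing: channels per lane

aggregate_tbps = serial_rate_gbps * spatial_lanes * wavelengths / 1000
print(f"Aggregate bandwidth: {aggregate_tbps:.1f} Tb/s")  # 12.8 Tb/s
# Both multiplexing dimensions scale capacity without raising the serial rate.
```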

Additionally, even within a single electronic chip, researchers are examining ways to lower power consumption related to data transfer in traditional von Neumann architectures by considering alternatives like compute-in-memory (or in-memory computing) architectures. By performing simple computations like multiplication and addition directly within the memory units, this approach eliminates the need to repeatedly load entire sets of raw data, thereby minimising data bottlenecks caused by separating memory and processing units.
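To make the idea concrete, here is a minimal sketch (ours, not NTT Research's implementation) of an idealised analog crossbar, a textbook compute-in-memory primitive: the weight matrix stays resident as stored conductances, inputs arrive as voltages, and each output is a summed current, so multiply and add happen where the data lives.

```python
import numpy as np

# Minimal model of compute-in-memory: an idealised analog crossbar.
# The weights stay resident in the array; only inputs and outputs move.
class Crossbar:
    def __init__(self, weights):
        # Programme the weights once, like writing conductances into memory cells.
        self.g = np.asarray(weights, dtype=float)

    def mvm(self, v):
        # Applying input "voltages" yields output "currents" i = G @ v:
        # multiplication happens at every cell, addition along each output line,
        # with no round trip of the weight matrix through a separate processor.
        return self.g @ np.asarray(v, dtype=float)

xbar = Crossbar([[0.2, 0.5],
                 [0.8, 0.1]])
print(xbar.mvm([1.0, 2.0]))  # -> [1.2 1.0]
```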

Re-evaluating the role of optics in computing

The evolution of modern data centres into hybrid opto-electronic computing machines is leading physicists to reevaluate the role of optics in performing computational tasks, especially linear operations like matrix-vector multiplication (MVM). Recent proposals have highlighted the energy efficiency of various optical MVM systems. Particularly promising are three-dimensional (3D) optical systems that use scalable free-space optics, though many still primarily rely on space division multiplexing, leaving the frequency dimension largely untapped. 
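For reference, MVM maps an input vector x to an output vector y = Wx. In a generic free-space scheme (sketched here in outline, not as this paper's exact layout), x is encoded in the intensities of an array of light beams, each weight W_ij is a modulator pixel's transmission, and a lens performs the summation by focusing each weighted row onto a single detector:

```latex
% Matrix-vector multiplication, the core linear operation:
% light fans out the input, pixels apply the weights, optics do the sum.
y_i = \sum_{j=1}^{n} W_{ij}\, x_j, \qquad i = 1, \dots, m
```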

Integrating frequency division multiplexing inspired by hyperspectral imaging

NTT Research’s new system uses a two-dimensional spatial light modulator (SLM) as a programmable optical memory, enabling spatial parallel operations. This setup allows for energy-efficient parallel data processing by optics, while electronics enhance programmability. 

Given that space multiplexing alone does not match the density of electronic systems, the architecture also integrates frequency division multiplexing, inspired by hyperspectral imaging and advanced optical fibre communication technologies. This addition allows each pixel to handle multiple frequency signals concurrently. 
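A toy numerical model of that combination follows; it assumes (our reading, not necessarily the paper's exact scheme) that each SLM pixel applies its stored weight to all k wavelength channels passing through it, so one programmed frame executes k matrix-vector products in parallel, one per frequency.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, k = 4, 6, 8               # outputs, inputs, frequency channels (illustrative)
W = rng.uniform(0, 1, (m, n))   # weights held on the SLM (the "optical memory")
X = rng.uniform(0, 1, (n, k))   # one input vector per wavelength channel

# Each pixel's transmission W[i, j] multiplies every frequency channel alike,
# so the k channels share one programmed frame and yield k MVMs at once:
Y = W @ X                       # Y[:, f] = W @ X[:, f] for each frequency f

# Equivalent channel-by-channel computation, for comparison:
Y_check = np.stack([W @ X[:, f] for f in range(k)], axis=1)
assert np.allclose(Y, Y_check)
print(Y.shape)                  # (4, 8): m outputs for each of k frequencies
```

In this picture the SLM acts as the compute-in-memory element: the weights are written once and reused across every frequency channel, while only inputs and results move.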

Although NTT Research is a start-up, the architecture was developed at its Physics & Informatics (PHI) Lab, and its parent company, NTT of Tokyo, Japan, holds over 1,600 patents.

Comparing this early-stage optical computing technology with mature digital electronics poses challenges, but NTT Research cautiously estimates that its system might achieve 100 PetaOPS (peta-operations per second) with a power efficiency near 2 W/PetaOPS, roughly 200 W at full throughput, significantly outperforming contemporary electronic GPUs.

The research was recently published in Optica.
