The evolution of imaging: From DSLRs to computational cameras
Imaging technology has come a long way since its inception two centuries ago. The advent of digital technology marked a turning point, allowing photons to be converted into electrons through optoelectronic sensors. This paved the way for computational imaging, gradually leading to the decline of many traditional standalone cameras as miniature cameras became ubiquitous in smartphones.
Computational imaging branches into two distinct areas: computational photography and computer vision. Computational photography leverages digital computation to capture and process images, while computer vision creates digital systems capable of interpreting and analysing visual data, much like the human visual system. These technologies have not only improved image quality but also unlocked new functionalities, including human and object recognition, 3D mapping, and feature extraction.
The rise of computational imaging
According to industry intelligence firm Yole Group, the computational imaging market reached $68bn in 2022. One of the most significant trends in computational imaging is miniaturisation, driven by the demand for ultra-compact cameras capable of delivering high-resolution, high-dynamic-range images.
According to Emilie Viasnoff, Head of Optical Solutions at Synopsys: “Image quality now relies more than ever on high computing performance tied to miniaturised optics and sensors, rather than on standalone and bulky but aberration-free optics. This new trend for computational imaging can be used for computational photography and computer vision. Miniaturised cameras that deliver high-resolution, high-dynamic-range images are a key driver of the computational imaging market. Because next-generation camera modules are ultra-miniaturised, their computing performance must restore the quality of the signal through post-processing.”
To achieve this miniaturisation, computational power is harnessed for post-processing, which has made artificial intelligence (AI) and image fusion two of the most prominent developments in the field.
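To give a sense of what image fusion means in post-processing, here is a minimal, illustrative sketch of exposure fusion in NumPy: two differently exposed frames of the same scene are blended with per-pixel weights that favour well-exposed values. The function name and the Gaussian weighting scheme are assumptions for illustration, not any vendor's actual pipeline.

```python
import numpy as np

def exposure_fuse(frames):
    """Blend differently exposed frames of the same scene.

    Each frame is an HxW float array in [0, 1]. Pixels are weighted
    by how well exposed they are (closeness to mid-grey), so darker
    frames contribute the highlights and brighter frames the shadows.
    """
    stack = np.stack(frames).astype(np.float64)          # (N, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)        # normalise per pixel
    return (weights * stack).sum(axis=0)

# Two synthetic exposures of the same gradient scene
under = np.linspace(0.0, 0.4, 256).reshape(1, -1)   # underexposed frame
over = np.linspace(0.3, 1.0, 256).reshape(1, -1)    # overexposed frame
fused = exposure_fuse([under, over])
```

Because the result is a per-pixel convex combination, the fused image always stays within the range spanned by the input exposures, while retaining detail that neither single exposure captures across the whole scene.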
Gordon Cooper, Product Manager for AI/ML Processor Products at Synopsys, explains: “The capability for high-performance computing opens the door to implement AI networks to improve image quality. Adding AI can improve low light performance, upscale image resolutions, recover image quality from cheaper lenses, and more. This trend toward computational imaging and AI will disrupt the imaging pipeline and require newer, broader design tools in the future – which will allow companies to break out of the imaging pipeline design silos.”
Applications across industries
Computational imaging has applications across various sectors, including consumer electronics, smart manufacturing, agriculture, healthcare, transportation, sports, retail, safety, and surveillance. In most cases, camera modules are integrated with companion chips as embedded systems. This integration necessitates a focus on critical factors such as power consumption, storage capacity, and latency.
An imaging system is a complex ecosystem consisting of several intricate components and advanced software, and while these subparts work together seamlessly, they can be designed by different teams using disparate tools. The real challenge arises during assembly and calibration when test engineers must manually validate various aspects of the system. However, by taking a holistic, system-level view of the imaging pipeline and incorporating AI algorithms at multiple stages, optical and electronic engineers could enhance computational imaging systems.
Leveraging AI algorithms for enhanced imaging
One advantage of breaking out of the design silos is the ability to harness AI algorithms across the entire imaging pipeline. This approach could alleviate hardware constraints and optimise performance.
Says Viasnoff: “Today’s AI-enabled miniaturised imaging systems offer tremendous functionality with computationally improved contrast, colour, sharpness, depth of focus, high dynamic range, and high motion accuracy, close to high-end traditional digital cameras. As an example, in the Apple iPhone 14 Pro, the image quality results from the three main cameras with complementary optical properties tied to an A16 chip that embeds a CPU, GPU, an image processor, and a neural engine. In addition to functionality benefits, today’s miniaturised imaging systems offer cost, weight, and packaging size advantages together with the possibility to interpret the content to make decisions, over traditional standalone optical systems.
Moreover, the continuing evolution of AI-based technologies in computational imaging – such as the development of neural networks – is revealing their potential to supplement or even replace traditional ISPs. This evolution will support more complex features like advanced denoising, low-light enhancement, blur reduction, and wide dynamic ranges to further improve image quality.”
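For context on what a “traditional ISP” chains together, the sketch below mocks up three fixed-function stages (white balance, naive denoising, gamma encoding) in NumPy. The stage names, parameter values, and box-filter denoiser are simplified stand-ins for illustration, not an actual ISP implementation; learned models of the kind Viasnoff describes would supplement or replace stages like these.

```python
import numpy as np

def white_balance(img, gains=(1.8, 1.0, 1.6)):
    """Scale R, G, B channels to correct for the scene illuminant."""
    return np.clip(img * np.asarray(gains), 0.0, 1.0)

def denoise(img, k=3):
    """Naive k x k box filter as a stand-in for ISP noise reduction."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def gamma_encode(img, gamma=2.2):
    """Map linear sensor values to display-referred values."""
    return img ** (1.0 / gamma)

def isp_pipeline(raw_rgb):
    """Chain the fixed-function stages, as a traditional ISP does."""
    return gamma_encode(denoise(white_balance(raw_rgb)))

rng = np.random.default_rng(0)
noisy = np.clip(0.25 + rng.normal(0, 0.05, (8, 8, 3)), 0.0, 1.0)
processed = isp_pipeline(noisy)
```

The point of the sketch is the architecture: each stage is a hand-tuned, fixed transform, which is exactly where a trained neural network could be swapped in for harder cases such as low-light denoising or blur reduction.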
As with any new technology, the benefits come with the challenges of integrating today’s miniaturised and digitised imaging systems. These are often related to balancing performance with power and area, where power can include thermal issues and area directly impacts cost. Cooper says: “Adding AI to imaging systems – which brings many advantages – aggravates the performance, power, area (PPA) challenges. Implementing convolutional neural networks (CNNs) for vision requires a significant amount of computation and therefore a significant amount of data movement (of both sensor data and coefficients), which, in turn, increases power consumption. Every imaging application will need to balance the benefits of adding AI with the cost of adding the resources for those benefits.”
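Cooper's point about computation and data movement can be made concrete with back-of-the-envelope arithmetic for a single convolutional layer. The model below assumes stride 1, "same" padding, and 8-bit tensors; the layer shape is an illustrative mid-network example, not a measurement of any real NPU.

```python
def conv2d_cost(h, w, c_in, c_out, k, bytes_per=1):
    """Rough MAC count and data traffic for one conv layer
    (stride 1, 'same' padding, 8-bit tensors by default)."""
    macs = h * w * c_out * c_in * k * k
    activations = (h * w * c_in + h * w * c_out) * bytes_per  # read in, write out
    weights = c_out * c_in * k * k * bytes_per
    return macs, activations + weights

# One 3x3 layer with 64 input/output channels on a 1080p feature map
macs, traffic = conv2d_cost(h=1080, w=1920, c_in=64, c_out=64, k=3)
print(f"{macs / 1e9:.1f} GMACs, {traffic / 1e6:.1f} MB moved")
# → 76.4 GMACs, 265.5 MB moved
```

Even this single layer demands tens of billions of multiply-accumulates and hundreds of megabytes of data movement per frame, which is why every watt and square millimetre spent on AI has to be justified by the image-quality benefit it buys.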
Paving the way for computational imaging
To help overcome such challenges and realise the potential of computational imaging, a system-level analysis of imaging systems is essential. Synopsys offers a comprehensive suite of solutions that cater to the entire imaging pipeline. Cooper says: “For companies implementing imaging systems, there are three key benefits of partnering with Synopsys. The first is our range of software tools – from EDA tools, to lens design, to system-level analysis, to analysing neural network performance, and more, we can help design, analyse and implement imaging systems-on-chip (SoCs). Second, Synopsys has a broad range of licensable IP that allows companies to prioritise their designers’ efforts on differentiating technology while licensing standard IP blocks from a trusted provider. Licensing Synopsys’s ARC NPX6 Neural Processing Unit IP speeds time to market by avoiding the need to design a custom neural network engine and the software tools to support that engine.”
The third way Synopsys can help customers, says Viasnoff, is its market expertise: “Want to add an imaging system in an automotive application, for example? Synopsys has helped many companies in this area and can share best practices and point out potential pitfalls.”
A bright future
As computational imaging continues to evolve, more powerful and affordable imaging systems are likely to emerge, catering to diverse domains from assisted driving systems to mixed-reality applications. The inclusion of AI in the imaging pipeline will also likely accelerate. Cooper concludes: “Convolutional Neural Networks – the standard for vision applications for 10 years – are being challenged by transformers. Transformers, the neural networks that generative AI is based on, improve accuracy but at the cost of even more computations and data movement. As more imaging pipelines include AI, there is a greater opportunity to break down the silos between lens design and processing of the data in the ISP or neural network engine. This will require more integrated toolchains and system-level analysis tools.”
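The cost gap Cooper describes between CNNs and transformers can likewise be sketched with a rough MAC count for one self-attention layer. The quadratic term in the token count is what makes transformers more expensive as image resolution (and hence patch count) grows; the layer shapes below are illustrative ViT-style assumptions, not figures for any specific model.

```python
def attention_macs(n_tokens, d_model):
    """Rough MACs for one self-attention layer: the Q, K, V and output
    projections plus the two n x n attention matmuls, whose cost is
    quadratic in the number of tokens."""
    projections = 4 * n_tokens * d_model * d_model
    attention = 2 * n_tokens * n_tokens * d_model
    return projections + attention

# Quadrupling the token count (e.g. finer image patches) raises the
# projection cost 4x but the attention term 16x
small = attention_macs(n_tokens=196, d_model=768)   # 14x14 patch grid
large = attention_macs(n_tokens=784, d_model=768)   # 28x28 patch grid
```

Under these assumptions the larger patch grid costs roughly 5x the MACs of the smaller one rather than 4x, because the quadratic attention term starts to dominate: the kind of compute-versus-accuracy trade-off that system-level analysis tools are meant to expose early in the design.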