Immervision’s new ultra-wide-angle camera module has been designed from the ground up to address the challenging requirements of operating safely in low-light conditions, where other sensors are inefficient or power hungry. While working with clients to develop computer vision cameras for situational analysis when flying at dusk or dawn, the company saw a need for inexpensive, high-performing, interoperable UAS (Uncrewed Aircraft Systems) technologies with low size and weight for commercial and defense applications, explains Patrice Roulet Fontani, Vice President, Technology and Co-Founder.
“We also saw increased demand for commercial drones and autonomous navigation systems in low-light environments such as mining and building inspection.”
The main advantage of vision-based autonomous navigation is the ability to access areas that are more challenging for a human to navigate because of limited visibility or the lack of a GPS signal. For example, drones are often used for search and rescue, to inspect utilities such as power lines, and to examine critical infrastructure such as tunnels and railway tracks. Some of these tasks, such as inspecting the underside of a bridge, involve low-light conditions even when the sun is at its brightest.
Total situational awareness
Designed to satisfy the demanding requirements of total situational awareness (or 360° surround machine perception) in low-light conditions, this plug-and-play solution gives access to Immervision’s advanced wide-angle camera module and image processing software. Its compact design fits easily into different types of UAVs, UAS, and drones, while its light weight keeps the impact on power consumption small. The rugged design is well suited to a wide range of applications, including robots, land vehicles, and water vessels.
Fontani: “In order to navigate properly, most drones are equipped with navigation cameras that create a full 360° vision bubble to precisely detect and avoid obstacles; this is what we call situational awareness. Being ‘aware’ in all situations requires high-precision input for navigation systems in a variety of challenging outdoor and indoor environments, as well as fast input so the drone can fly safely at a decent speed. The main challenge is to deliver high-quality pixels that allow the flight controller to detect small objects (electric wires, small tree branches) far enough in advance to avoid a collision. In short, achieving total situational awareness (or 360° surround machine perception) while operating in low-light conditions requires our advanced, fast, lightweight wide-angle camera module and image processing software.”
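As a rough illustration of the detection-range constraint Fontani describes, the sketch below estimates how far away a thin obstacle such as a wire still covers at least one pixel. It assumes a simplified equidistant projection with uniform angular sampling; the image width used is an assumption rather than a vendor specification, and a panomorph profile deliberately redistributes pixel density across the field, so real performance differs.

```python
import math

def max_detection_distance(object_size_m: float,
                           horizontal_pixels: int,
                           fov_degrees: float,
                           min_pixels: float = 1.0) -> float:
    """Distance at which the object still spans `min_pixels` on the sensor,
    assuming uniform angular sampling across the field of view."""
    pixels_per_degree = horizontal_pixels / fov_degrees
    required_angle_deg = min_pixels / pixels_per_degree
    # Small-angle approximation: angular size (rad) ~ size / distance.
    return object_size_m / math.radians(required_angle_deg)

# A 5 mm wire, an assumed 2592-pixel-wide image, and a 190-degree FoV.
distance = max_detection_distance(0.005, horizontal_pixels=2592, fov_degrees=190.0)
print(f"A 5 mm wire spans one pixel out to roughly {distance:.1f} m")  # ~3.9 m
```

In practice a detector needs the object to cover more than a single pixel, which is one reason a non-uniform distortion profile that concentrates pixels where obstacles are expected can extend the usable detection range.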
Maximizing light sensitivity and optimizing image quality
The camera module includes the lens assembly, a 5MP sensor, and a MIPI interface, and weighs only 4.7 grams. Unlike a typical fisheye lens, this camera module uses a panomorph lens that combines an ultra-wide-angle field of view (190° FoV) with a unique distortion profile and a large aperture (F/1.8). It is crafted to maximize the light sensitivity of each pixel and to optimize image quality from edge to edge, for use by human operators and/or artificial intelligence (AI) and machine learning (ML) systems.
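To get a feel for what the F/1.8 aperture buys in low light, recall that, to first order, image-plane illuminance scales with 1/(F-number)². The short sketch below compares the stated F/1.8 against a hypothetical F/2.8 reference lens; the reference value is chosen purely for illustration.

```python
def relative_illuminance(f_number: float, reference_f_number: float) -> float:
    """Image-plane illuminance ratio versus a reference lens.

    First-order model: illuminance scales with 1/(F-number)^2. Ignores
    transmission losses and off-axis relative-illumination falloff.
    """
    return (reference_f_number / f_number) ** 2

# Compare the stated F/1.8 aperture with a hypothetical F/2.8 lens.
print(f"{relative_illuminance(1.8, 2.8):.1f}x the light of an F/2.8 lens")  # ~2.4x
```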
Fontani explains that Immervision’s team of multidisciplinary scientists, optical designers, and image processing engineers spent thousands of hours developing a unique type of camera lens, called Panomorph. With a Panomorph design, users can focus on certain regions of the image and optimize quality where it matters most, an approach called Smart Pixel Management. He explains that this is where Immervision’s technology differs most from classic fisheye lenses, and that customers are already benefiting from its development and deployment in numerous applications.
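Immervision does not publish its distortion profiles, so the sketch below is only a toy illustration of the underlying idea: compared with a classic equidistant fisheye, where image height grows linearly with field angle, a non-linear profile can spend more image height, and therefore more pixels, on an angular zone of interest. The mapping, constants, and weighting factor here are all invented for illustration and are not Immervision’s design.

```python
THETA_MAX_DEG = 95.0   # half of a 190-degree field of view
IMAGE_RADIUS = 1.0     # normalized image-circle radius

def equidistant(theta_deg: float) -> float:
    """Classic fisheye mapping: image height grows linearly with field angle."""
    return IMAGE_RADIUS * theta_deg / THETA_MAX_DEG

def toy_panomorph(theta_deg: float, k: float = 0.6) -> float:
    """Toy non-linear mapping (illustrative only): trades resolution in the
    centre for extra resolution toward the periphery while keeping the same
    image-circle radius."""
    x = theta_deg / THETA_MAX_DEG
    return IMAGE_RADIUS * ((1.0 - k) * x + k * x * x)

def local_resolution(mapping, theta_deg: float, step: float = 0.1) -> float:
    """Slope of the mapping, a proxy for pixels per degree at this field angle."""
    return (mapping(theta_deg + step) - mapping(theta_deg - step)) / (2.0 * step)

for theta in (10.0, 45.0, 90.0):
    print(f"{theta:>4.0f} deg  equidistant {local_resolution(equidistant, theta):.4f}"
          f"  toy panomorph {local_resolution(toy_panomorph, theta):.4f}")
```

Running the comparison shows the toy profile sampling the periphery more densely than the centre, which is the kind of trade a designer can tune so that image quality is highest in the regions that matter for the application.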
“For this drone module, the lens is designed to ensure the ideal concentration of light per pixel, providing the best image quality in low-light conditions across the complete field of view. Our lens design team has focused its attention on the lens F-number and relative illumination to maximize the quantity of light reaching each pixel of the image sensor. To complement the lens, we have selected a quality sensor with a large pixel size, which contributes to the light sensitivity of the camera module.”
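The quote treats low-light signal per pixel as the product of two first-order factors: how much light the lens delivers (driven by the F-number and relative illumination) and how much of it each pixel collects (driven by pixel area). A rough comparison along those lines, with invented pixel pitches since the article does not give datasheet values, could look like this:

```python
def relative_low_light_signal(f_number: float, pixel_pitch_um: float,
                              ref_f_number: float, ref_pixel_pitch_um: float) -> float:
    """First-order per-pixel signal ratio: proportional to pixel area and to
    1/(F-number)^2. Ignores quantum efficiency, lens transmission, and
    relative-illumination falloff toward the edge of the field."""
    aperture_gain = (ref_f_number / f_number) ** 2
    pixel_area_gain = (pixel_pitch_um / ref_pixel_pitch_um) ** 2
    return aperture_gain * pixel_area_gain

# Illustrative only: an F/1.8 lens with assumed 3.0 um pixels versus a
# hypothetical F/2.4 module with 2.0 um pixels.
print(f"{relative_low_light_signal(1.8, 3.0, 2.4, 2.0):.1f}x signal per pixel")  # ~4.0x
```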