StarVision Tech — Image Quality of Image Sensors: Common Misconceptions Corrected

We rely on image sensors far more than most people realize. They are used in cars to help us avoid collisions, in building surveillance to prevent intrusions, and on production lines to inspect product quality. Interestingly, image sensors are often classified by very simple metrics such as pixel size and resolution, yet choosing the right sensor for a given application is far more complex than that.


Resolution

We rely on sensors to detect hazards or find defects in products, so the image quality they deliver is critical. System designers and end users often assume that higher resolution (i.e., more pixels in an image) automatically means better image quality, but this is not always the case. While higher resolution preserves sharp edges and fine detail, aiding object recognition, there are other factors to consider. Higher resolution can adversely affect key parameters such as capture speed/frame rate, sensor size, and sensor power consumption. It also affects a number of other system factors: larger images require more bandwidth, more storage space, and more processing power. If higher resolution must be achieved, reducing the pixel size can keep lens and camera dimensions within cost and size targets while still increasing the pixel count.
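To see how quickly these system costs grow with resolution, here is a back-of-the-envelope calculation in Python. The resolutions, bit depth, and frame rate are illustrative assumptions, not figures from any particular sensor:

```python
# Rough impact of resolution on bandwidth and storage.
# All numbers below are illustrative assumptions.

def raw_data_rate_mbit_s(width, height, bits_per_pixel, fps):
    """Raw (uncompressed) sensor output in megabits per second."""
    return width * height * bits_per_pixel * fps / 1e6

for name, (w, h) in {"2 MP (1080p)": (1920, 1080),
                     "8 MP (4K)": (3840, 2160)}.items():
    rate = raw_data_rate_mbit_s(w, h, bits_per_pixel=12, fps=30)
    print(f"{name}: {rate:,.0f} Mbit/s raw, "
          f"{rate * 3600 / 8 / 1000:,.0f} GB per hour")
```

Quadrupling the pixel count quadruples the raw data rate, and every downstream link, buffer, and processor has to keep up.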


People often assume they need as many pixels as possible, without considering the impact of that decision on cost and system performance. Whenever a new project starts, a comprehensive requirements analysis should come first, taking into account the product's end use, core parameters, and constraints such as the physical size and power budget of the lens and camera body. This ensures the sensor actually suits your application, rather than prematurely narrowing your options based on resolution alone.

Figure 1: Resolution before, using a 1/1.5-inch 5.4-megapixel 3 µm discrete-diode sensor

Figure 2: Resolution after, using a 1/1.8-inch 8.3-megapixel 2.1 µm super-exposure sensor


Power Supply

Image sensor performance also depends heavily on other system components that may not sit in the optical path, or even be part of the sensor, and are therefore easy to overlook. Designers may have made compromises in these less conspicuous areas, such as the power supply design. Such compromises reduce image quality, because electrical noise from power supply components can cause a range of image defects, from subtle artifacts to obvious flaws that every viewer will notice, even if they cannot identify the cause.
In essence, an image sensor is a photon counter. In low-light conditions there are fewer photons, so any "noise" in the system is more apparent in the image. Voltage spikes or transients from the power supply can produce defects in the final image output. Although the sensor is designed to tolerate supply-voltage fluctuations within a specified range, deviations outside that range degrade image quality. Power supply quality is therefore a crucial factor in camera system design.
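As a rough illustration of why supply noise matters most in low light, the following Python sketch models a hypothetical sensor as a Poisson photon counter, with supply ripple coupling into the readout as a row-synchronous offset. The amplitudes, period, and frame size are all made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 480, 640
mean_photons = 20   # assumed low-light level: only ~20 photons per pixel

# Ideal "photon counter": arrivals follow Poisson statistics (shot noise).
clean = rng.poisson(mean_photons, size=(rows, cols)).astype(float)

# Hypothetical supply ripple coupling into the readout as a
# row-synchronous offset; amplitude and period are illustrative.
ripple = 2.0 * np.sin(2 * np.pi * np.arange(rows) / 37.0)
noisy = clean + ripple[:, None]

# At ~20 photons/pixel the ripple is ~10% of the signal and shows up as
# horizontal banding; at ~2000 photons it would be ~0.1% and invisible.
print("shot-noise std:", round(clean.std(), 2))
print("ripple peak-to-peak vs shot noise:",
      round(np.ptp(ripple) / clean.std(), 2))
```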


Noise Sources

Only an ideal device would measure illumination without any error or deviation; in reality, the circuitry on the sensor chip is subject to various noise sources that affect the signal level of each pixel and, in turn, the pixels in the final image. Readout noise is generally well controlled in the latest sensors, but another noise source, dark signal non-uniformity (DSNU), poses a greater challenge.

DSNU shows up when images are taken in complete darkness: since the scene is completely dark, there should be no signal at all, yet thermally generated electrons are still collected and counted as if they came from incident light, so the image is not perfectly black. If every pixel behaved identically, you could simply subtract that offset, just as you can darken an entire photo when editing it. Problems arise because this dark signal is not evenly distributed across the pixel array; DSNU measures that pixel-to-pixel variation, and the problem worsens as sensor temperature rises.

Because of this temperature dependence, a sensor may test well in an air-conditioned laboratory yet perform unsatisfactorily in a hot environment. Hot nighttime conditions are especially challenging for controlling DSNU: with little valid signal present, this noise source becomes all the more apparent. To address this, any candidate sensor should be measured across the temperature range and lighting conditions in which the system will normally operate. If you select an image sensor based solely on room-temperature testing, you may encounter unexpected behavior when temperatures rise.
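As a rough sketch of how DSNU is separated from temporal noise in practice, here is a minimal Python example. The frame sizes and noise levels are made-up numbers; the frame-averaging approach follows the spirit of measurement standards such as EMVA 1288:

```python
import numpy as np

def dsnu(dark_frames):
    """Estimate DSNU from a stack of frames captured in total darkness.

    Averaging many dark frames suppresses temporal (readout) noise and
    leaves the fixed spatial pattern; DSNU is the spatial standard
    deviation of that averaged frame.
    """
    frames = np.asarray(dark_frames, dtype=float)  # shape (N, rows, cols)
    mean_dark = frames.mean(axis=0)                # per-pixel temporal mean
    return mean_dark.std()                         # spatial non-uniformity

# Synthetic check with made-up numbers: a fixed per-pixel offset pattern
# buried under larger temporal noise. In a real test, repeat this at each
# operating temperature, since DSNU typically grows as the sensor heats up.
rng = np.random.default_rng(1)
pattern = rng.normal(0.0, 0.5, size=(480, 640))            # fixed pattern
stack = pattern + rng.normal(0.0, 2.0, size=(64, 480, 640))
print(f"estimated DSNU: {dsnu(stack):.2f} DN (true pattern std: 0.50 DN)")
```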


Signal-to-Noise Ratio (SNR)

Signal-to-noise ratio (SNR) is defined as the ratio of signal power to noise power. Even fairly strong noise has little visible impact on an image if the SNR is high. It is like an error on a restaurant bill: if you only ordered a cup of coffee, an extra $3 is easy to spot, but if you are dining with a large group and the bill runs into the hundreds of dollars, you probably won't notice the extra charge because it is such a small fraction of the total, even though it is $3 in both cases. Likewise, if the signal comes from thousands of photons, a few stray photons will go unnoticed.
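For the shot-noise-limited case this can be made concrete: with a mean signal of N photons, the Poisson noise is sqrt(N), so SNR = N / sqrt(N) = sqrt(N) and grows with the square root of the signal. A minimal Python illustration (the photon counts are arbitrary):

```python
import math

def shot_limited_snr_db(mean_photons):
    """SNR in decibels when photon shot noise dominates.

    Photon arrivals are Poisson, so the noise standard deviation is
    sqrt(N) for a mean signal of N photons: SNR = sqrt(N).
    """
    return 20 * math.log10(math.sqrt(mean_photons))

# The restaurant-bill effect: the same few stray photons matter less
# and less as the signal grows (values are illustrative).
for n in (10, 1_000, 100_000):
    print(f"{n:>7} photons -> SNR {shot_limited_snr_db(n):5.1f} dB")
```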


Returning to the image sensor: if your image contains both bright and dark areas, you will see more noise in certain regions. Surprisingly, the worst noise may appear not in the dark parts of the image but in the mid-tone areas, because design constraints remain in the transition from low light to bright light. This is hard to explain without technical detail, but bicycle gears make a useful analogy. A 10-speed bike has one gear optimized for low speed, one for top speed, and many in between. Now suppose the bike has only a low, a middle, and a top gear: you have a suitable gear for slow (low light), medium (mid light), and fast (bright light) riding, but the transitions from low to middle and from middle to top are uncomfortable, and on certain stretches of the ride you will miss those absent gears.



Some manufacturers use average signal-to-noise ratio as the headline metric for an image sensor, deliberately selecting performance statistics from regions where the SNR is good and implying that these figures represent overall image quality under all lighting conditions. This is like the bicycle manufacturer above quoting the average gear ratio of the 3-speed bike as if it described a 10-speed: the middle gear is roughly the average of all three, but there are large gaps in the low-to-middle and middle-to-top transitions where none of the three gears is ideal. Designers must be aware of this and not be misled by "average" SNR figures. The solution is to test the sensor under all of the lighting conditions the application requires and measure the SNR across the entire range, to see whether you are affected by a "missing gear".
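One place such a "missing gear" can appear is in sensors that stitch multiple exposures or gains together: the SNR dips at each transition. The Python sketch below uses a hypothetical two-exposure scheme with made-up full-well and exposure-ratio numbers, assuming shot-noise-limited behavior, to show why an average over the whole range hides the dip:

```python
import numpy as np

# Hypothetical two-exposure HDR scheme: above full well, the sensor
# switches to an exposure 16x shorter. All numbers are illustrative.
full_well = 10_000                 # electrons (assumed)
exposure_ratio = 16                # long/short exposure ratio (assumed)
scene = np.logspace(1, 6, 200)     # sweep of scene intensities

captured = np.where(scene <= full_well, scene, scene / exposure_ratio)
captured = np.minimum(captured, full_well)
snr_db = 20 * np.log10(np.sqrt(captured))   # shot-noise-limited SNR

i = np.searchsorted(scene, full_well)       # index of the switch point
print(f"SNR just below the switch: {snr_db[i - 1]:.1f} dB")
print(f"SNR just above the switch: {snr_db[i + 1]:.1f} dB  <- the dip")
print(f"sweep average:             {snr_db.mean():.1f} dB  (hides the dip)")
```

Sweeping the illumination on the bench, rather than trusting a single averaged figure, is what reveals dips like this.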



In short, if image quality is critical to your image sensor application, there are potential pitfalls you must avoid. Assumptions about resolution and noise effects must be verified through testing to ensure there are no surprises in the final system design.

