Thermal Imaging Scenes with Wide Temperature Differences

What is a good infrared or thermal image? It is one that presents high contrast while still revealing the smallest temperature differences. Infrared cameras can do this, but only within a defined temperature range.

The principle: an introduction

For scenes around room temperature, for example, the operator will set the camera to a typical range of -20 °C to +50 °C. Objects with temperatures above this range, the brightest or hottest parts of the scene, will appear saturated; those below it will generally appear noisy.

If, for instance, the object temperature of interest is +100 °C, a range of +20 °C to +120 °C must be selected. The camera will then present a good image of the +100 °C object, but the contrast of the room-temperature objects in this image will not be as good as in the first, -20 °C to +50 °C range.

Combining the two images is the logical next step: let the camera 'take' one image in the first, room-temperature range and a second image in the higher temperature range. Merging these two images in a smart way should produce a superior picture that encompasses the best parts of both. This is what superframing is about.

The problem and the application

The matter gets more complicated when dealing with extreme temperatures: a man standing by a fire on a cold winter night is a typical example. The brightest or hottest parts of the image will be saturated, while at the same time the darkest or coldest parts of the scene will appear black, or noisy. When an object appears saturated or noisy, two undesirable things happen: image details are lost, and temperature measurements in that part of the scene are no longer valid.

Advanced infrared imaging and measurement often require acquiring images or image sequences of scenes featuring a very wide range of temperatures. The saturation problem is particularly disturbing in research and development applications that call for imaging or high-speed digital video of scenes with very large temperature differences, such as engine monitoring, a rocket launch or an explosion. The problem, which is particularly acute in the midwave infrared waveband, can be addressed with superframing.

Will reducing exposure (integration) time do it?

The sensitivity of an infrared camera, meaning the smallest temperature difference it can detect, can be controlled by varying its exposure time or, as it is called for infrared imaging systems, its integration time: the time the infrared detector inside the camera is exposed in order to produce a single frame.

Operating the infrared camera at a longer integration time raises sensitivity, but at the same time it restricts the range of temperatures that can be measured: hot objects become so bright that they exceed the camera's set temperature range. If a scene or sequence contains extreme temperature differences that need to be measured simultaneously, the integration time must be reduced considerably.
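
The trade-off can be seen in a simplified form by assuming an ideal photon detector whose noise is dominated by photon shot noise (an idealisation for illustration, not a model of any particular camera). The collected signal $S$ and its signal-to-noise ratio then scale with the integration time $t_{\mathrm{int}}$ as

$$S = \phi\, t_{\mathrm{int}}, \qquad \mathrm{SNR} = \frac{S}{\sqrt{S}} = \sqrt{\phi\, t_{\mathrm{int}}},$$

where $\phi$ is the photo-generated signal rate. Quadrupling the integration time doubles the signal-to-noise ratio, and with it the ability to resolve small temperature differences, but the detector saturates as soon as $S$ reaches its well capacity, which caps the hottest object that can still be measured.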

But this reduction will, in turn, cost the ability to measure the cooler parts of the scene, which now fall below the set temperature range and take on a black or noisy appearance, as shown in images 1 to 4 below.

1. 1.0 ms / 20-75 °C
2. 0.25 ms / 65-135 °C
3. 0.05 ms / 130-230 °C
4. 0.01 ms / 220-380 °C

Welding, a sequence with extreme temperatures: the shorter the integration (exposure) time, the higher the temperature range and the larger the black or noisy areas.

Is there a single exposure time able to fully encompass a scene’s variations and accurately measure every object in it, hot and cold alike? No, but there is another option.

The solution: Superframing

Superframing means taking a set of typically four images (subframes) of the scene at progressively shorter exposure times in very rapid succession, then repeating this cycle. The subframes from each cycle are merged into a single superframe that combines the best features of all four; this merging process is called collapsing. The superframe image generated by the collapsing algorithm is thus both high in contrast and wide in temperature range. The algorithm itself is quite simple: if a pixel in the first subframe is saturated, the algorithm selects the corresponding pixel from the next subframe. If that pixel is satisfactory, the algorithm stops; if not, it checks the corresponding pixel in the next subframe, and so on. All pixel values are converted into temperature or radiance units for the final superframe image.
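
As a minimal sketch of the collapsing idea, not FLIR's actual implementation: the Python/NumPy function below assumes the subframes arrive as raw detector counts ordered from longest to shortest integration time, treats counts above an assumed threshold as saturated, and divides counts by integration time as a crude stand-in for the full conversion into radiance or temperature units.

```python
import numpy as np

def collapse(subframes, integration_times, saturation_level=15000):
    """Collapse one cycle of subframes into a single superframe.

    subframes:         2-D arrays of raw counts, longest exposure first.
    integration_times: matching integration times in seconds.
    saturation_level:  counts above this are treated as saturated
                       (an assumed threshold for a 14-bit detector).
    """
    # Start from the shortest exposure, assumed never to saturate, and
    # scale counts by integration time as a stand-in for a radiometric
    # (counts -> radiance or temperature) conversion.
    superframe = subframes[-1] / integration_times[-1]

    # Walk back from the second-shortest to the longest exposure,
    # overwriting every unsaturated pixel. Because the longest exposure
    # is processed last, each pixel ends up with the value from the most
    # sensitive subframe in which it is still below saturation.
    for frame, t_int in zip(subframes[-2::-1], integration_times[-2::-1]):
        superframe = np.where(frame < saturation_level,
                              frame / t_int, superframe)

    return superframe
```

For the welding sequence above, this would be called with the four frames and their integration times in the order 1.0 ms, 0.25 ms, 0.05 ms, 0.01 ms; pixels that saturate even at 0.01 ms simply keep their shortest-exposure values.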

Figures 1, 2 and 3 illustrate the superframing technique with two images of a Beechcraft King Air twin-propeller airplane, taken at integration times of 2 milliseconds and 30 microseconds. The images were taken with a FLIR Systems ThermaCAM Phoenix, a very high performance midwave infrared (MWIR) camera system, running at 90 frames per second at its full frame size of 640 x 512 pixels. The two images are separated by a short interval (about 40 milliseconds), so the scene changes very little; the propeller movement is barely perceptible.
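
At 90 frames per second the frame period is about 11 milliseconds, so the quoted 40-millisecond gap corresponds to roughly four frame times; this is consistent with a four-subframe superframing cycle, although the source does not state the cycle length used here.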

The 2-millisecond image gives excellent contrast in nearly every portion of the scene except the aircraft’s exhaust system, which is so hot that it saturates that part of the image (Figure 1).

Figure 1: The 2-millisecond image; the exhaust system is saturated.

Conversely, the 30-microsecond image shows the exhaust system very clearly without saturation, but the rest of the scene is too cold to be seen clearly above the system noise floor (Figure 2).

Figure 2: The 30-microsecond image; the exhaust system is clear, but the cooler parts of the scene are lost below the noise floor.

Combining these two images with the right algorithm yields imagery that is both high in contrast and wide in temperature range (Figure 3).

Figure 3: The combined image; a picture both high in contrast and wide in temperature range.

Technology

Superframing has some technological preconditions, which have fortunately been met in the commercial marketplace. One is the emergence of commercially available infrared cameras with large detector arrays, such as 320 x 256 or 640 x 512 pixels, that can deliver the high frame rates needed to build superframes. The other is the availability of computers able to process the enormous amount of data (64 MB/s) coming from the camera at those frame rates.
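
As a rough check on that figure, assuming each pixel is stored as 2 bytes: 640 x 512 pixels x 2 bytes x 90 frames per second comes to roughly 59 MB/s, the same order as the quoted 64 MB/s.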

This technique has already been implemented in commercially available infrared imaging hardware and software: FLIR Systems’ ThermaCAM Phoenix infrared camera and its RTools image processing software offer an integrated superframing function.

Summary

Superframing dramatically extends the range of scene brightness an infrared imaging system can capture while maintaining thermal contrast, even at low temperatures.

The technique consists of varying the exposure, or integration, time of the camera from frame to frame in a cyclic manner and collapsing the resulting subframes into single superframes with greatly extended temperature ranges, making it possible to visualise scenes featuring extreme temperature differences.

Many thanks to Austin Richards, Ph.D. (austin.richards@flir.com) of FLIR Systems-Indigo Operations, US, and Kjell Lindström (kjell.lindstrom@flir.se) of FLIR Systems Sweden for providing valuable input for this article.
