CN113936017A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN113936017A
CN113936017A
Authority
CN
China
Prior art keywords
image
infrared
contour
visible light
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111258480.8A
Other languages
Chinese (zh)
Inventor
陈炜
池国泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rockchip Electronics Co Ltd
Original Assignee
Rockchip Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rockchip Electronics Co Ltd filed Critical Rockchip Electronics Co Ltd
Priority to CN202111258480.8A priority Critical patent/CN113936017A/en
Publication of CN113936017A publication Critical patent/CN113936017A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

An image processing method and device, the method comprising: acquiring an infrared image of a thermal target; acquiring a visible light image of the thermal target; acquiring an infrared contour image of the thermal target according to the infrared image; and acquiring a fusion image according to the infrared contour image and the visible light image. The fusion image obtained by the method has better quality.

Description

Image processing method and device
Technical Field
The invention relates to the field of digital image processing, in particular to an image processing method and device.
Background
Image sensors operating in different spectral bands often have complementary characteristics. An infrared image sensor images according to differences in the infrared radiation of objects, which reflects their thermal radiation characteristics. Because it captures infrared radiation information, its detection performance for thermal targets such as pedestrians and running vehicles is good: a thermal target appears highlighted in the infrared image and is easy for the human eye to find. A visible light image sensor images according to the different abilities of objects to reflect visible light, which reflects the visible light reflection characteristics of object surfaces, so the detail texture information of a visible light image is usually rich. An infrared image generally has low contrast and resolution and lacks detail information, while a visible light image generally has high contrast and resolution and rich detail information such as edge texture, but cannot highlight the thermal target. Because of these respective limitations, a single visible light or infrared sensor technology cannot meet the requirements of increasingly demanding application scenarios.
Because the infrared image and the visible light image have good complementary characteristics, practice shows that in many scenes, effectively fusing the images acquired by the two sensors fully retains the advantages of each sensor, overcomes their respective shortcomings, yields a comprehensive and accurate image description of the scene, makes full use of the information, and improves the accuracy and reliability of system analysis and decision-making.
However, the existing infrared image and visible light image fusion technology still needs to be improved.
Disclosure of Invention
The invention aims to provide an image processing method and an image processing device to improve the fusion technology of an infrared image and a visible light image.
In order to solve the above technical problem, a technical solution of the present invention provides an image processing method, including: acquiring an infrared image of a thermal target; acquiring a visible light image of the thermal target; acquiring an infrared contour image of the thermal target according to the infrared image; and acquiring a fusion image according to the infrared contour image and the visible light image.
Optionally, the infrared image includes a plurality of infrared pixel points; the method for acquiring the infrared contour image of the thermal target according to the infrared image comprises the following steps: and acquiring contour pixel values of all infrared pixel points in the infrared image, wherein the contour pixel values of the infrared pixel points form the infrared contour image.
Optionally, the method for obtaining contour pixel values of all infrared pixel points of the infrared image includes: and repeating the process of obtaining the contour pixel value of any infrared pixel point for a plurality of times until the contour pixel values of all infrared pixel points of the infrared image are obtained.
Optionally, the method for obtaining the contour pixel value of any infrared pixel point includes: acquiring a contour extraction window by taking any infrared pixel point as a central point, wherein the contour extraction window comprises the central point and a plurality of infrared pixel points surrounding the central point; acquiring a second-largest pixel value and a second-smallest pixel value in a plurality of infrared pixel points in the contour extraction window; and acquiring a contour pixel value according to the difference value of the secondary large pixel value and the secondary small pixel value.
Optionally, the visible light image is a grayscale image or a color image; the visible light image comprises a plurality of visible light pixel points.
Optionally, the visible light image is a grayscale image; the method for acquiring the fusion image according to the infrared contour image and the visible light image comprises the following steps: the fusion image comprises a plurality of fusion pixel points which are in one-to-one correspondence with the infrared contour image pixel points and the visible light image pixel points, and the pixel value of each fusion pixel point is the larger pixel value of the pixel values of the corresponding infrared contour image pixel points and the visible light image pixel points.
Optionally, the visible light image comprises a color image; the method for acquiring the fusion image according to the infrared contour image and the visible light image comprises the following steps: converting the chromaticity space of the color image to obtain a visible light image with separated brightness and chroma; fusing the infrared outline image and the visible light image with separated brightness and chroma to obtain a fused image with separated brightness and chroma; and converting the fused brightness and chroma separated image into a color image to obtain a fused image.
Optionally, the method for obtaining the fused image with separated brightness and chroma by fusing the infrared contour image with the visible light image with separated brightness and chroma includes: the method comprises the steps of obtaining a fused brightness-chroma separated image according to an infrared contour image and a visible light image with separated brightness and chroma, wherein the fused brightness-chroma separated image comprises a plurality of fusion pixel points, the fusion pixel points are in one-to-one correspondence with the infrared contour image pixel points and the visible light image pixel points with separated brightness and chroma, and the brightness values of the fusion pixel points are larger values of the pixel values of the corresponding infrared contour image pixel points and the brightness values of the visible light image with separated brightness and chroma.
Optionally, the types of the chrominance space where the luminance and the chrominance are separated include: YUV, YCbCr or HSI.
Optionally, the method for obtaining the fused image with separated brightness and chroma by fusing the infrared contour image with the visible light image with separated brightness and chroma further includes: and adjusting the pixel values of the contour pixel points to convert the pixel values of the contour pixel points into adjusted pixel values.
Optionally, the types of the chrominance space where the luminance and the chrominance are separated include: CIELab.
Optionally, a ratio of the pixel value of the contour pixel point to the adjustment pixel value is: 255:100.
Optionally, before acquiring the fusion image according to the infrared contour image and the visible light image, the method further includes: performing contrast enhancement processing on the infrared contour image.
Optionally, before acquiring the infrared contour image of the thermal target according to the infrared image, the method further includes: respectively carrying out image registration on the infrared image and the visible light image; and carrying out noise reduction processing on the infrared image.
Correspondingly, the technical scheme of the invention also provides an image processing device, which comprises: the infrared image sensor is used for acquiring an infrared image of the thermal target; the visible light image sensor is used for acquiring a visible light image of the thermal target; the infrared contour processor is used for acquiring an infrared contour image of the thermal target according to the infrared image; and the image fusion processor is used for acquiring a fusion image according to the infrared contour image and the visible light image.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
according to the image processing method, the contour of the hot target in the infrared image is extracted to obtain the infrared contour image, and then the infrared contour image is fused with the visible light image to obtain the fusion image. The fusion image acquired by the method can have the contour of the infrared image of the hot target, so that the hot target is highlighted, and can keep the detail information within the hot target, so that the image details are rich. A better image fusion effect can thus be obtained.
Further, the method for acquiring the infrared contour image comprises the following steps: and acquiring a contour extraction window by taking any infrared pixel point as a central point, acquiring a secondary large pixel value and a secondary small pixel value in the infrared pixel point in the contour extraction window, and acquiring a contour pixel value according to a difference value of the secondary large pixel value and the secondary small pixel value. The method is less influenced by noise, and the extracted hot target has smooth, coherent, bright and moderate width, thereby being very suitable for highlighting the requirement of the hot target in the infrared and visible light fusion.
Furthermore, the calculation amount for acquiring the contour pixel value and fusing the image pixel value or the brightness is small, so that real-time image output can be realized on the mobile equipment.
Drawings
Fig. 1 to 7 are flowcharts of an image processing method in an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an image processing apparatus in an embodiment of the present invention.
Detailed Description
As described in the background, the existing infrared image and visible light image fusion technology still needs to be improved.
Specifically, in the existing infrared image and visible light image fusion technology, details of a thermal target in a fused image are covered by a highlighted infrared image, so that detailed information of the thermal target recorded in the visible light image is difficult to show in the fused image.
Secondly, existing image contour extraction techniques, such as the Sobel, Laplacian and Canny operators, suffer from discontinuous extracted contours, high sensitivity to noise, and contours that are too narrow, so they are not suitable for the requirement of highlighting the hot target in infrared and visible light fusion.
Thirdly, some image contour extraction techniques based on deep learning, neural networks and image understanding are computationally expensive, and are difficult to implement in mobile devices and application scenarios with strict real-time requirements, such as cameras for assisted driving of automobiles and telescopes for observing nocturnal wildlife.
In order to solve the above problems, the technical solution of the present invention provides an image processing method and apparatus, in which an infrared contour image is obtained by first extracting a contour of a thermal target in the infrared image, and then the infrared contour image is fused with a visible light image to obtain a fused image. The fused image acquired by the method can have the outline of the infrared image of the hot target, so that the hot target is highlighted; and the detail information in the hot target can be kept, so that the image details are rich. Better image fusion effect can be obtained.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Fig. 1 to 7 are flowcharts of an image processing method according to an embodiment of the present invention.
Referring to fig. 1, the method of image processing includes:
step S10: acquiring an infrared image of a thermal target;
step S20: acquiring a visible light image of the thermal target;
step S30: acquiring an infrared contour image of the thermal target according to the infrared image;
step S40: and acquiring a fusion image according to the infrared contour image and the visible light image.
The image processing method comprises the steps of firstly extracting the outline of a hot target in an infrared image to obtain an infrared outline image, and then fusing the infrared outline image and a visible light image to obtain a fused image. The fused image acquired by the method can have the outline of the infrared image of the hot target, so that the hot target is highlighted; and the detail information in the hot target can be kept, so that the image details are rich. Better image fusion effect can be obtained.
Next, each step will be explained by analysis.
With continued reference to fig. 1, step S10 is executed: an infrared image IR of the thermal target is acquired.
The infrared image IR comprises a number of infrared pixels IR (x, y).
In the present embodiment, the infrared image has no color component. The infrared image is a white-hot image or a black-hot image: in a white-hot image, objects at higher temperatures appear brighter and objects at lower temperatures appear darker; in a black-hot image, objects at higher temperatures appear darker and objects at lower temperatures appear brighter.
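The two conventions differ only in polarity. As a minimal illustration (not a step required by the patent), an 8-bit black-hot image can be converted to the white-hot convention by inverting every pixel; the function name is an illustrative assumption:

```python
def black_hot_to_white_hot(img):
    """Invert each 8-bit pixel of a black-hot image (list of rows).

    Hot objects, dark in the black-hot convention, become bright.
    """
    return [[255 - p for p in row] for row in img]
```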
In this embodiment, an infrared image of the thermal target is acquired by an infrared image sensor. The infrared image sensor is sensitive to infrared rays; the thermal target radiates infrared rays, which are received by the infrared image sensor to form an infrared image. The infrared rays received by the infrared image sensor are generally in the medium-wave infrared band of 3–5 μm or the long-wave infrared band of 8–12 μm.
In nature, all objects radiate infrared rays, and the thermal target is any object capable of radiating infrared rays; therefore, by using an infrared image sensor to measure the infrared difference between the target and the background, infrared images formed by different thermal radiation can be obtained.
In this embodiment, the device of the infrared image sensor includes an infrared camera.
With continued reference to fig. 1, step S20 is executed: a visible light image of the thermal target is acquired.
The visible light image is a gray level image or a color image; the visible light image comprises a plurality of visible light pixel points.
In this embodiment, a visible light image of the thermal target is acquired by a visible light image sensor. The visible light image sensor senses light in a visible light wave band, and receives the visible light reflected by the surface of the object to form a visible light image. The wave band range of visible light is 400 nm-780 nm.
In this embodiment, the device of the visible light image sensor includes a visible light camera.
With continuing reference to fig. 1, in this embodiment, after acquiring the infrared image and the visible light image, the method further includes:
step S50 is executed: and respectively carrying out image registration on the infrared image and the visible light image.
Image registration is performed on the infrared image and the visible light image respectively, so that the two images cover the same scene area, are captured at the same time, and have the same image size, where the same image size means the same number of horizontal pixels and the same number of vertical pixels. This prepares the images for the subsequent fusion processing.
In this embodiment, the pixel value range of the infrared image pixel point after registration is 0-255, and the pixel value range of the visible image pixel point after registration is 0-255.
Step S60 is executed: and carrying out noise reduction processing on the infrared image.
And carrying out noise reduction treatment on the infrared image, so that the subsequently extracted infrared contour image is smoother and more coherent.
In this embodiment, the method for performing noise reduction processing on the infrared image includes noise reduction technologies such as gaussian filtering, median filtering, or bilateral filtering.
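A minimal pure-Python sketch of the median-filtering option mentioned above; the function name and the border handling are illustrative assumptions, and a real system would typically use an optimized library routine:

```python
def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2-D image given as a list of rows.

    Border pixels are copied through unchanged for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy; borders keep original values
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]  # median of the 9 window values
    return out
```

An isolated bright noise pixel in an otherwise uniform region is removed by this filter, which is why the subsequently extracted contour becomes smoother and more coherent.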
With continued reference to fig. 1, step S30 is executed: acquiring an infrared contour image edge of the thermal target according to the infrared image.
The infrared contour image edge is subsequently fused with the visible light image to obtain a fusion image.
In this embodiment, an infrared contour image of the thermal target is acquired from the infrared image by an infrared contour processor.
Referring to fig. 2, in the present embodiment, the method for obtaining an infrared contour image of the thermal target according to the infrared image includes:
step S301: and acquiring contour pixel values of all infrared pixel points in the infrared image, wherein the contour pixel values of the infrared pixel points form the infrared contour image.
In this embodiment, the method for obtaining contour pixel values of all infrared pixel points of the infrared image includes: and repeating the process of obtaining the contour pixel value of any infrared pixel point for a plurality of times until the contour pixel values of all infrared pixel points of the infrared image are obtained.
Referring to fig. 3, in the present embodiment, the method for obtaining the contour pixel value of any infrared pixel includes:
step S3011: acquiring a contour extraction window by taking any infrared pixel point as a central point, wherein the contour extraction window comprises the central point and a plurality of infrared pixel points surrounding the central point;
step S3012: acquiring a second-largest pixel value and a second-smallest pixel value in a plurality of infrared pixel points in the contour extraction window;
step S3013: and acquiring a contour pixel value according to the difference value of the secondary large pixel value and the secondary small pixel value.
Acquiring the contour pixel value as the difference between the second-largest and second-smallest pixel values in the contour extraction window has two advantages. On one hand, the width of the extracted contour can be controlled by the size of the contour extraction window, so the contour width of the infrared contour image is controllable. On the other hand, the maximum and minimum pixel values in the contour extraction window are often caused by noise and dead pixels, so using the second-largest and second-smallest pixel values reduces their interference; the extracted contour of the thermal target is smoother, more uniform and more coherent, which is well suited to highlighting the thermal target in infrared and visible light fusion.
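Steps S3011–S3013 can be sketched for a single pixel as follows; this is a pure-Python illustration with assumed names (the patent does not prescribe an implementation), using a square (2·radius+1)² window clipped at the image border:

```python
def contour_pixel_value(img, x, y, radius=1):
    """Contour value at (x, y): second-largest minus second-smallest
    pixel value in the extraction window centered on (x, y).

    img is a list of rows; radius=1 gives a 3x3 window.
    """
    h, w = len(img), len(img[0])
    values = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:  # clip window at the border
                values.append(img[yy][xx])
    values.sort()
    value1 = values[-2]  # second-largest pixel value in the window
    value2 = values[1]   # second-smallest pixel value in the window
    return value1 - value2
```

Note that a single bright noise pixel inside the window only becomes the maximum, which is discarded, so it does not produce a spurious contour response.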
Next, each step will be described.
With continued reference to fig. 3, step S3011 is executed: and acquiring a contour extraction window by taking any infrared pixel point IR (x, y) as a central point, wherein the contour extraction window comprises the central point and a plurality of infrared pixel points surrounding the central point.
The contour extraction window is a centrosymmetric graph with any infrared pixel point as a central point, so that the contour pixel values of the contour pixel points extracted in the contour extraction window are uniform, and the accuracy is high.
In this embodiment, the larger the contour extraction window, the wider the extracted contour, and a wider contour highlights the position of the thermal target more clearly; the smaller the contour extraction window, the narrower the extracted contour and the smaller the amount of computation. The contour width is measured in pixels. The minimum contour extraction window is 3x3 pixels.
Fig. 4 to 6 are schematic views of the contour extraction window.
Referring to fig. 4, fig. 4 is a schematic diagram of a contour extraction window with a size of 3 × 3 pixels, where P1 is a central point to be extracted, and W1 is a plurality of infrared pixels surrounding the central point.
Referring to fig. 5, fig. 5 is a schematic diagram of a contour extraction window with a size of 5 × 5 pixels, where P2 is a central point to be extracted, and W2 is a plurality of infrared pixels surrounding the central point.
Referring to fig. 6, fig. 6 is a schematic diagram of a contour extraction window with a size of 21 pixels, where P3 is a central point to be extracted, and W3 is a plurality of infrared pixels surrounding the central point.
The size of the contour extraction window can be selected according to the resolution of the infrared image, and a larger contour extraction window, for example, 5x5 pixels, can be used for the infrared image with high resolution, as shown in fig. 5; a smaller contour extraction window, e.g., 3x3 pixels, may be used for lower resolution infrared images, as shown in fig. 4. A contour extraction window of suitable size can extract sufficiently striking contours with moderate computational effort and comfortable visual effect, and in general, a 3x3 or 5x5 contour extraction window can cover most usage requirements.
With continued reference to fig. 3, step S3012 is executed: acquiring the second-largest pixel value, value1, and the second-smallest pixel value, value2, among the infrared pixel points in the contour extraction window.
The second-largest pixel value value1 is the second-largest pixel value among the pixel points within the contour extraction window, which include the central point and the infrared pixel points surrounding it; the second-smallest pixel value value2 is the second-smallest pixel value among the same pixel points.
The maximum and minimum pixel values in the contour extraction window are often caused by noise and dead pixels. Using the second-largest pixel value value1 eliminates the influence of the maximum pixel value in the window, and using the second-smallest pixel value value2 eliminates the influence of the minimum pixel value, so the impact of noise and dead pixels is reduced and the image quality of the infrared contour image is improved.
With continued reference to fig. 3, step S3013 is executed: acquiring the contour pixel value edge(x, y) = value1 − value2 of the contour pixel point according to the difference between the second-largest pixel value and the second-smallest pixel value.
Acquiring the contour pixel value edge(x, y) of a contour pixel point from the difference between the second-largest pixel value value1 and the second-smallest pixel value value2 has two advantages. On one hand, the width of the extracted contour can be controlled by the size of the contour extraction window, so the contour width of the infrared contour image is controllable: for a boundary with an abrupt change in brightness, a 3x3 contour extraction window extracts a contour 2 pixels wide, and a 5x5 contour extraction window extracts a contour 4 pixels wide. On the other hand, computing the difference between the second-largest and second-smallest pixel values in each window further reduces the interference of noise and dead pixels, so the extracted contour is smoother, more uniform and more continuous; at the same time, the amount of computation is small, making real-time image output easy to achieve on mobile devices.
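Step S301 (repeating the window computation for every pixel) can be sketched as follows; this is an illustrative pure-Python version with assumed names, which also demonstrates the 2-pixel contour width that a 3x3 window produces at an abrupt brightness boundary:

```python
def extract_contour_image(img, radius=1):
    """Build the contour image edge(x, y) for every pixel of img.

    Each value is the second-largest minus the second-smallest pixel
    value in the (2*radius+1)^2 window, clipped at the image border
    and clamped to the 0-255 range.
    """
    h, w = len(img), len(img[0])
    edge = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            values = []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        values.append(img[yy][xx])
            values.sort()
            edge[y][x] = min(255, max(0, values[-2] - values[1]))
    return edge
```

Applied to an image with a vertical step from 0 to 255, the 3x3 window responds on exactly the two columns adjacent to the step, giving a contour 2 pixels wide as described above.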
With continued reference to fig. 1, after acquiring the infrared contour image of the thermal target, the method further includes performing step S70: and carrying out contrast enhancement processing on the infrared contour image.
The contrast enhancement processing increases the contrast of the infrared contour image, making the white parts whiter and the black parts blacker. The contour of the infrared contour image therefore becomes more striking, and the contour in the fusion image formed after the infrared contour image is subsequently fused with the visible light image becomes clearer.
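The patent does not specify a particular enhancement method for step S70; one simple realization of "whiter whites, blacker blacks" is a linear stretch about mid-gray, sketched below with an illustrative gain of 2.0:

```python
def enhance_contrast(img, gain=2.0, mid=128):
    """Linearly stretch pixel values about mid-gray, clamped to 0-255.

    gain > 1 pushes bright pixels brighter and dark pixels darker.
    """
    h, w = len(img), len(img[0])
    return [[min(255, max(0, int(round((img[y][x] - mid) * gain + mid))))
             for x in range(w)] for y in range(h)]
```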
In other embodiments, the infrared contour image may not be subjected to contrast enhancement processing.
With continued reference to fig. 1, after performing contrast enhancement processing on the infrared contour image, step S40 is executed: acquiring a fusion image I_fusion according to the infrared contour image and the visible light image.
In this embodiment, the infrared contour image and the visible light image are processed by the image fusion processor to obtain the fusion image I_fusion.
In one embodiment, the visible light image is a grayscale image gray. The method for acquiring the fusion image I_fusion according to the infrared contour image edge and the visible light image includes: the fusion image I_fusion comprises a plurality of fusion pixel points I_fusion(x, y) in one-to-one correspondence with the infrared contour image pixel points edge(x, y) and the visible light image pixel points gray(x, y); the pixel value of each fusion pixel point I_fusion(x, y) is the larger of the pixel values of the corresponding infrared contour image pixel point edge(x, y) and visible light image pixel point gray(x, y), that is, I_fusion(x, y) = max(edge(x, y), gray(x, y)).
When the visible light image is a grayscale image, the infrared contour image pixel points and the visible light image pixel points can be fused directly. The fusion process compares, pixel coordinate by pixel coordinate, the pixel values of the infrared contour image and the visible light grayscale image at that coordinate, and takes the larger pixel value as the pixel value of the fusion image at that coordinate, until the pixel points at all coordinates (x, y) have been fused, thereby obtaining the pixel values of all fusion pixel points and generating the fusion image.
Taking the larger pixel value as the pixel value of the fusion image at each coordinate has the following effect: at the contour positions of the thermal target, the pixel value of the infrared contour image pixel point is larger and is retained after fusion with the visible light image, so the infrared contour of the thermal target is highlighted in the fusion image; at non-contour positions, the pixel value of the infrared contour image pixel point is small and the pixel value of the visible light image is retained after fusion, so the details of the visible light image are preserved in the fusion image. Moreover, the amount of computation is small and the fusion effect is natural.
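The grayscale fusion rule I_fusion(x, y) = max(edge(x, y), gray(x, y)) can be sketched as follows (pure Python, illustrative names, images of equal size assumed per the registration step):

```python
def fuse_gray(edge, gray):
    """Fuse a contour image and a grayscale visible image of equal size
    by taking the per-pixel maximum."""
    h, w = len(edge), len(edge[0])
    return [[max(edge[y][x], gray[y][x]) for x in range(w)]
            for y in range(h)]
```

At contour positions the large edge value wins and highlights the thermal target; everywhere else the visible-light gray value passes through unchanged.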
Referring to fig. 7, in another embodiment, the visible light image includes a color image; the method for acquiring the fusion image according to the infrared contour image and the visible light image comprises the following steps:
step S401: converting the chromaticity space of the color image to obtain a visible light image with separated brightness and chroma;
step S402: fusing the infrared outline image and the visible light image with separated brightness and chroma to obtain a fused image with separated brightness and chroma;
step S403: and converting the fused brightness and chroma separated images into color images to obtain fused images.
Each pixel of a color image is usually represented by three components: red (R), green (G) and blue (B). The infrared contour image is extracted from an infrared image, which is a white-hot or black-hot image with 256 gray scales, so the pixel values of the infrared contour image pixel points range from 0 to 255. Therefore, when the visible light image is a color image, the color image and the infrared contour image lie in different chromaticity spaces and cannot be fused directly; a chromaticity space conversion is needed.
Next, each step will be described.
With continued reference to fig. 7, step S401 is executed: and converting the chromaticity space of the color image to obtain a visible light image with separated brightness and chromaticity.
In one embodiment, the types of the luminance and chrominance separated chrominance spaces include: YUV, YCbCr or HSI.
In another embodiment, the types of the luminance and chroma separated chrominance spaces include: CIELab.
With continued reference to fig. 7, step S402 is executed: and fusing the infrared outline image and the visible light image with separated brightness and chroma to obtain a fused image with separated brightness and chroma.
In one embodiment, the types of the luminance and chrominance separated chrominance spaces include: YUV, YCbCr, or HSI, where in YUV chrominance space, Y is the luminance component, and U and V are the color components; in the YCbCr chrominance space, Y is the luminance component, Cb and Cr are the color components; in the HSI chrominance space, I is the luminance component and H and S are the color components.
The value range of the Y and I brightness components is 0-255.
The method for fusing the infrared contour image and the visible light image with separated brightness and chroma to obtain the fused image with separated brightness and chroma comprises the following steps: acquiring a fused brightness-and-chroma-separated image according to the infrared contour image edge and the visible light image with separated brightness and chroma, wherein the fused image comprises a plurality of fusion pixel points in one-to-one correspondence with the infrared contour image pixel points and the visible light image pixel points. The brightness value Y_fusion(x, y) of each fusion pixel point is the larger of the pixel value edge(x, y) of the corresponding infrared contour image pixel point and the brightness value Y(x, y) of the visible light image with separated brightness and chroma, that is, Y_fusion(x, y) = max(edge(x, y), Y(x, y)).
Because the Y and I brightness components range from 0 to 255, they can be fused directly with the contour pixel values of the infrared image. The fusion process compares, coordinate by coordinate, the pixel value of the infrared contour image with the brightness component of the visible light image with separated brightness and chroma, and takes the larger of the two as the brightness value of the fused image at that coordinate. This is repeated until the pixel points of all coordinates (x, y) have been fused, yielding the brightness values of the fused image pixel points, while the color components are kept unchanged, so as to generate the fused image with separated brightness and chroma.
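The luminance-only fusion described above can be sketched as follows. This is an illustrative sketch, not the patent's code; it assumes a YUV-ordered array with the Y plane in channel 0, and the names `fuse_yuv`, `edge`, and `yuv` are invented for the example.

```python
import numpy as np

def fuse_yuv(edge: np.ndarray, yuv: np.ndarray) -> np.ndarray:
    """Fuse the infrared contour image into the Y (luminance) plane only;
    the color components U and V are left unchanged."""
    fused = yuv.copy()
    fused[..., 0] = np.maximum(edge, yuv[..., 0])  # Y_fusion = max(edge, Y)
    return fused

# 1x2 toy image: Y plane [50, 200], U plane [10, 20], V plane [30, 40]
yuv = np.array([[[50, 10, 30], [200, 20, 40]]], dtype=np.uint8)
edge = np.array([[180, 60]], dtype=np.uint8)
out = fuse_yuv(edge, yuv)
```

The fused Y plane becomes `[180, 200]` (contour wins at the first pixel, visible luminance at the second), while the U and V planes are untouched, so the colors of the visible-light image carry over unchanged.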
In this embodiment, the visible light image is color space-converted into a YUV color space. The conversion formula is as follows:
(The RGB-to-YUV conversion formula is given as an image in the source publication and is not reproduced here.)
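Since the patent's conversion formula appears only as an image, the following sketch uses the standard full-range BT.601 RGB-to-YUV coefficients as an assumption; the patent may use a different variant, and the names `M` and `rgb_to_yuv` are invented for the example.

```python
import numpy as np

# BT.601 full-range RGB -> YUV coefficient matrix (an assumption for
# illustration; the patent's exact coefficients are not reproduced in text).
M = np.array([[ 0.299,  0.587,  0.114],
              [-0.147, -0.289,  0.436],
              [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) float RGB image to YUV (luminance first)."""
    return rgb @ M.T

# A white pixel maps to full luminance and zero chroma.
yuv = rgb_to_yuv(np.array([[[255.0, 255.0, 255.0]]]))
```

With these coefficients, white (255, 255, 255) yields Y = 255 and U = V = 0, which is the expected behavior of a brightness-and-chroma-separated space.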
in another embodiment, the types of the luminance and chroma separated chrominance spaces include: CIELab.
In the CIELab chromaticity space, L is the luminance component and a and b are the color components. The value range of the brightness component L is 0-100.
The pixel values of the infrared contour image range from 0 to 255. Therefore, when the infrared contour image is fused with the visible light image with separated brightness and chroma, the value range of the contour pixel values must first be adjusted to match the value range of the CIELab brightness component L, so that the fusion is performed at the same numerical scale.
The method for fusing the infrared contour image and the visible light image with separated brightness and chroma to obtain the fused image with separated brightness and chroma further comprises: adjusting the pixel values of the contour pixel points to convert them into adjusted pixel values.
In this embodiment, the ratio of the contour pixel value to the adjusted pixel value is 255:100.
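The 255:100 range adjustment can be sketched as a simple rescaling. This is an illustrative sketch only; the function name `adjust_contour_for_lab` is invented for the example.

```python
import numpy as np

def adjust_contour_for_lab(edge: np.ndarray) -> np.ndarray:
    """Rescale contour pixel values from [0, 255] to [0, 100], matching
    the range of the CIELab L component (the 255:100 ratio in the text)."""
    return edge.astype(np.float32) * (100.0 / 255.0)

adjusted = adjust_contour_for_lab(np.array([0, 51, 255], dtype=np.uint8))
```

After the adjustment the contour values 0, 51 and 255 become approximately 0, 20 and 100, so the max-fusion with the L component operates at the same scale.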
With continued reference to fig. 7, step S403 is performed: converting the fused brightness-and-chroma-separated image into a color image to obtain the fused image RGB_fusion.
The fused image RGB_fusion is a color image. The fused image obtained by the method thus carries the infrared contour of the thermal target, so that the thermal target is highlighted, while the detail information within the thermal target is retained, so that the image details are rich. A better image fusion effect can therefore be obtained.
In this embodiment, Y_fusion and the color components U and V are converted back to the color chromaticity space of the color image; the conversion formula is:
(The YUV-to-RGB inverse conversion formula is given as an image in the source publication and is not reproduced here.)
fig. 8 is a schematic structural diagram of an image processing apparatus in an embodiment of the present invention.
Referring to fig. 8, the image processing apparatus includes:
an infrared image sensor 100 for acquiring an infrared image of the thermal target;
a visible light image sensor 200 for acquiring a visible light image of the thermal target;
an infrared contour processor 300 for acquiring an infrared contour image of the thermal target according to the infrared image;
and the image fusion processor 400 is configured to obtain a fusion image according to the infrared contour image and the visible light image.
The image processing apparatus produces a fused image that carries the infrared contour of the thermal target, so that the thermal target is highlighted, while retaining the detail information within the thermal target, so that the image details are rich and a better image fusion effect is obtained.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (15)

1. An image processing method, comprising:
acquiring an infrared image of a thermal target;
acquiring a visible light image of the thermal target;
acquiring an infrared outline image of the thermal target according to the infrared image;
and acquiring a fusion image according to the infrared contour image and the visible light image.
2. The image processing method of claim 1, wherein the infrared image comprises a plurality of infrared pixel points; the method for acquiring the infrared contour image of the thermal target according to the infrared image comprises the following steps: and acquiring contour pixel values of all infrared pixel points in the infrared image, wherein the contour pixel values of the infrared pixel points form the infrared contour image.
3. The image processing method of claim 2, wherein the method of obtaining contour pixel values of all infrared pixels of the infrared image comprises: and repeating the process of obtaining the contour pixel value of any infrared pixel point for a plurality of times until the contour pixel values of all infrared pixel points of the infrared image are obtained.
4. The image processing method of claim 3, wherein the method of obtaining the contour pixel value of any one of the infrared pixel points comprises: acquiring a contour extraction window by taking any infrared pixel point as a central point, wherein the contour extraction window comprises the central point and a plurality of infrared pixel points surrounding the central point; acquiring a second-largest pixel value and a second-smallest pixel value in a plurality of infrared pixel points in the contour extraction window; and acquiring a contour pixel value according to the difference value of the secondary large pixel value and the secondary small pixel value.
5. The image processing method according to claim 2, wherein the visible light image is a grayscale image or a color image; the visible light image comprises a plurality of visible light pixel points.
6. The image processing method according to claim 5, wherein the visible light image is a grayscale image; the method for acquiring the fusion image according to the infrared contour image and the visible light image comprises the following steps: the fusion image comprises a plurality of fusion pixel points which are in one-to-one correspondence with the infrared contour image pixel points and the visible light image pixel points, and the pixel value of each fusion pixel point is the larger pixel value of the pixel values of the corresponding infrared contour image pixel points and the visible light image pixel points.
7. The image processing method according to claim 5, wherein the visible light image includes a color image; the method for acquiring the fusion image according to the infrared contour image and the visible light image comprises the following steps: converting the chromaticity space of the color image to obtain a visible light image with separated brightness and chroma; fusing the infrared outline image and the visible light image with separated brightness and chroma to obtain a fused image with separated brightness and chroma; and converting the fused brightness and chroma separated image into a color image to obtain a fused image.
8. The image processing method of claim 7, wherein the fusing the infrared profile image with the luminance and chrominance separated visible light image to obtain the fused luminance and chrominance separated image comprises: the method comprises the steps of obtaining a fused brightness-chroma separated image according to an infrared contour image and a visible light image with separated brightness and chroma, wherein the fused brightness-chroma separated image comprises a plurality of fusion pixel points, the fusion pixel points are in one-to-one correspondence with the infrared contour image pixel points and the visible light image pixel points with separated brightness and chroma, and the brightness values of the fusion pixel points are larger values of the pixel values of the corresponding infrared contour image pixel points and the brightness values of the visible light image with separated brightness and chroma.
9. The image processing method of claim 8, wherein the type of the chrominance space in which the luminance is separated from the chrominance comprises: YUV, YCbCr or HSI.
10. The image processing method of claim 8, wherein the method of fusing the infrared profile image with the luminance and chrominance separated visible light image to obtain the fused luminance and chrominance separated image further comprises: and adjusting the pixel values of the contour pixel points to convert the pixel values of the contour pixel points into adjusted pixel values.
11. The image processing method of claim 10, wherein the type of the chrominance space in which the luminance is separated from the chrominance comprises: CIELab.
12. The image processing method of claim 11, wherein the ratio of the contour pixel value to the adjustment pixel value is: 255:100.
13. The image processing method according to claim 1, further comprising, before acquiring the fused image from the infrared profile image and the visible light image: and carrying out contrast enhancement processing on the infrared contour image.
14. The image processing method of claim 1, prior to acquiring an infrared profile image of a thermal target from the infrared image, further comprising: respectively carrying out image registration on the infrared image and the visible light image; and carrying out noise reduction processing on the infrared image.
15. An apparatus for image processing, comprising:
the infrared image sensor is used for acquiring an infrared image of the thermal target;
the visible light image sensor is used for acquiring a visible light image of the thermal target;
the infrared contour processor is used for acquiring an infrared contour image of the thermal target according to the infrared image;
and the image fusion processor is used for acquiring a fusion image according to the infrared contour image and the visible light image.
CN202111258480.8A 2021-10-27 2021-10-27 Image processing method and device Pending CN113936017A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111258480.8A CN113936017A (en) 2021-10-27 2021-10-27 Image processing method and device


Publications (1)

Publication Number Publication Date
CN113936017A true CN113936017A (en) 2022-01-14


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820506A (en) * 2022-04-22 2022-07-29 岚图汽车科技有限公司 Defect detection method and device for hot stamping part, electronic device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination