CN110298812B - Image fusion processing method and device - Google Patents

Publication number
CN110298812B
CN110298812B (application CN201910554868.9A)
Authority
CN
China
Prior art keywords
brightness
component
pixel point
color
image
Prior art date
Legal status
Active
Application number
CN201910554868.9A
Other languages
Chinese (zh)
Other versions
CN110298812A (en)
Inventor
王松
张东
俞克强
胡鑫杰
魏贺
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201910554868.9A
Publication of CN110298812A
Application granted
Publication of CN110298812B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/10048 Infrared image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The application discloses a method and a device for image fusion processing, wherein the method comprises the following steps: acquiring a color image and a black-and-white image in the same scene, decomposing the color image to obtain a characteristic parameter, and acquiring a second brightness component corresponding to the black-and-white image, wherein the characteristic parameter comprises a first brightness component and a first color component; performing first fusion on the first brightness component and the second brightness component to obtain a third brightness component; adjusting the first color component based on the first brightness component and the third brightness component to obtain a second color component; and performing second fusion on the third brightness component and the second color component to obtain a fused image. The technical problem of poor quality of the fused image obtained in the prior art is solved.

Description

Image fusion processing method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for image fusion processing.
Background
With the development of electronic technology, cameras are widely used in daily life. Their imaging modes mainly include visible-light imaging and infrared imaging. Visible-light imaging forms an image from the reflection of visible light and has advantages such as rich, natural color and high resolution, but when there is no external light source or the ambient light is weak, the quality of the resulting image is poor. Infrared imaging forms a black-and-white image from the temperature distribution of an object's surface. To obtain a high-quality image, the image formed by visible light and the image formed by infrared light are therefore commonly fused.
At present, when a visible-light image and an infrared image are fused, the color-difference value of each pixel point in the visible-light image is adjusted mainly according to the luminance ratio between that pixel point and the corresponding pixel point in the black-and-white image, yielding the color-difference value of each pixel point in the fused image; the luminance value of each pixel point in the black-and-white image is then fused with the color-difference value of the corresponding pixel point to obtain the fused image. Because the color-difference values in the prior art are derived directly from this luminance ratio, the color information of some pixel points is easily over-stretched, the colors of the fused image become distorted, and the quality of the resulting fused image is poor.
Disclosure of Invention
The application provides a method and a device for image fusion processing, which are used for solving the technical problem that the quality of a fused image obtained in the prior art is poor.
In a first aspect, an embodiment of the present application provides an image fusion processing method, where the method includes: the method comprises the steps that electronic equipment obtains a color image and a black-and-white image in the same scene, decomposes the color image to obtain a characteristic parameter and obtains a second brightness component corresponding to the black-and-white image, wherein the characteristic parameter comprises a first brightness component and a first color component; then, carrying out first fusion on the first brightness component and the second brightness component to obtain a third brightness component; adjusting the first color component based on the first brightness component and the third brightness component to obtain a second color component; and finally, carrying out second fusion on the third brightness component and the second color component to obtain a fused image.
In the embodiment provided by the application, after the electronic device fuses the luminance components of the color image and the black-and-white image, it adjusts the color component of the color image according to the luminance component of the color image and the fused luminance component to obtain the color component of the fused image, so that the influence of the fused luminance component on the color component is avoided and the quality of the fused image is improved.
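The four steps of the first aspect can be sketched end to end. The Python fragment below is a minimal illustration under stated assumptions: float arrays, a 50/50 luminance average standing in for the first fusion, and a plain luminance-ratio stretch standing in for the color adjustment. All function and variable names are ours, not the patent's.

```python
import numpy as np

def fuse_images(first_luma, first_color, second_luma):
    """Sketch of the flow: luminance fusion, color adjustment, recombination.

    The 50/50 average and the simple ratio stretch are placeholders for the
    patent's more elaborate fusion and adjustment variants.
    """
    # Step 2: first fusion of the two luminance components -> third luminance.
    third_luma = 0.5 * first_luma + 0.5 * second_luma
    # Step 3: adjust the first color component using the per-pixel luminance ratio.
    stretch = third_luma / np.maximum(first_luma, 1e-6)
    second_color = first_color * stretch
    # Step 4: second fusion -- recombine luminance and color into one image.
    return np.stack([third_luma, second_color], axis=-1)

y1 = np.array([[100.0, 50.0]])   # first luminance component (color image)
c1 = np.array([[10.0, 20.0]])    # first color component (one chroma plane)
y2 = np.array([[140.0, 70.0]])   # second luminance component (black-and-white image)
fused = fuse_images(y1, c1, y2)
```

Here each output pixel carries the fused luminance and a chroma value scaled by how much that pixel brightened or darkened.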
Optionally, decomposing the color image to obtain feature parameters includes:
judging whether a color channel and a brightness channel in data corresponding to each pixel point in the color image are in a separated state or not;
if not, performing format conversion on the data to enable the color channel and the brightness channel in the converted data to be in a separated state;
decomposing the data corresponding to each pixel point to obtain the brightness and color information corresponding to each pixel point;
and extracting the brightness of all pixel points in the color image to obtain the first brightness component, and extracting the color information of all pixel points in the color image to obtain the first color component.
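The decomposition steps above can be illustrated with a small sketch. The patent covers format conversion generically; here we assume an RGB input and show only one concrete conversion (BT.601 RGB to YUV) before splitting the separated channels into the two components.

```python
import numpy as np

SEPARATED = {"YUV", "HSV"}  # formats whose luminance and color channels are separate

def decompose(image, fmt):
    """Return (first_luma, first_color) for an H x W x 3 float image.

    Assumption-based sketch: only RGB -> YUV (BT.601) is implemented.
    """
    if fmt not in SEPARATED:                       # step 1: channels not separated
        r, g, b = image[..., 0], image[..., 1], image[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b      # luminance channel
        u = -0.147 * r - 0.289 * g + 0.436 * b     # chrominance channels
        v = 0.615 * r - 0.515 * g - 0.100 * b
        image = np.stack([y, u, v], axis=-1)       # step 2: converted data
    # steps 3-4: split the per-pixel data into luminance and color components
    first_luma = image[..., 0]
    first_color = image[..., 1:]
    return first_luma, first_color

rgb = np.array([[[255.0, 255.0, 255.0], [0.0, 0.0, 0.0]]])  # white and black pixels
luma, color = decompose(rgb, "RGB")
```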
Optionally, performing first fusion on the first luminance component and the second luminance component to obtain a third luminance component, including:
fusing the brightness corresponding to each pixel point in the first brightness component with the brightness corresponding to the pixel point at the same position in the second brightness component to obtain fused brightness corresponding to each pixel point at the same position;
and obtaining the third brightness component based on the fused brightness corresponding to all the pixel points in the first brightness component.
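A minimal sketch of this same-position luminance fusion, assuming a single global blending weight `w` (the patent does not fix one; per-pixel weights are equally possible):

```python
import numpy as np

def first_fusion(first_luma, second_luma, w=0.5):
    """Fuse same-position luminances into the third luminance component."""
    # The two images must align pixel-for-pixel, as required of the inputs.
    assert first_luma.shape == second_luma.shape
    return w * first_luma + (1.0 - w) * second_luma

y_color = np.array([[80.0, 120.0]])   # luminance from the color image
y_mono  = np.array([[160.0, 40.0]])   # luminance from the black-and-white image
y_fused = first_fusion(y_color, y_mono)
```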
Optionally, adjusting the first color component based on the first luminance component and the third luminance component to obtain a second color component, including:
removing noise in the first and third luminance components;
comparing the brightness corresponding to each pixel point in the denoised first brightness component with the brightness of the pixel point at the same position in the denoised third brightness component to obtain a stretching coefficient corresponding to each pixel point at the same position;
and adjusting the color information of each pixel point in the first color component based on the corresponding stretch coefficient of each pixel point in the first brightness component to obtain the second color component.
In the scheme provided by the embodiment of the application, noise in the first brightness component and the third brightness component is removed, the brightness of the same position in the denoised first brightness component and the denoised third brightness component is compared to obtain the stretching coefficient corresponding to each pixel point at the same position, and then the color information is adjusted based on the stretching coefficient corresponding to each pixel point. Therefore, according to the scheme of the embodiment of the application, by removing noise interference in the first luminance component and the third luminance component, the problem that color information corresponding to part of pixel points is excessively adjusted to cause color distortion of the fused image is avoided.
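The denoise-compare-adjust sequence can be sketched as follows. A small box filter stands in for the denoising step (the patent names filtering and downsampling generically, not a specific filter), and the stretch coefficient is the per-pixel ratio of the denoised third luminance to the denoised first luminance:

```python
import numpy as np

def box_blur(x, k=3):
    """Tiny box filter used as a stand-in denoiser (assumption)."""
    pad = k // 2
    padded = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def adjust_color(first_luma, third_luma, first_color):
    """Denoise both luminance components, form per-pixel stretch
    coefficients, and scale the color component by them."""
    y1 = box_blur(first_luma)
    y3 = box_blur(third_luma)
    stretch = y3 / np.maximum(y1, 1e-6)   # per-pixel comparison
    return first_color * stretch          # second color component

y1 = np.full((4, 4), 50.0)
y3 = np.full((4, 4), 100.0)
c1 = np.full((4, 4), 8.0)
c2 = adjust_color(y1, y3, c1)
```

Blurring before taking the ratio keeps a single noisy pixel from producing an extreme stretch coefficient, which is exactly the distortion the scheme sets out to avoid.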
Optionally, removing noise in the first luminance component and the third luminance component includes:
performing filtering processing on the first luminance component and the third luminance component based on an image filtering algorithm; and/or
And respectively carrying out downsampling processing on the brightness in the first brightness component and the third brightness component based on a preset sampling window.
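The downsampling variant can be sketched as average pooling over the preset sampling window; a window of 2 is an assumed default, not a value from the patent:

```python
import numpy as np

def downsample(x, window=2):
    """Average-pool with a preset sampling window.

    Reducing resolution averages out pixel-level noise in the luminance
    component before stretch coefficients are computed.
    """
    h, w = x.shape
    h2, w2 = h // window, w // window
    # Crop to a multiple of the window, then average each window x window block.
    return x[:h2 * window, :w2 * window].reshape(h2, window, w2, window).mean(axis=(1, 3))

y = np.array([[10.0, 14.0, 20.0, 24.0],
              [12.0, 16.0, 22.0, 26.0]])
small = downsample(y)
```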
Optionally, before the adjusting the color information of each pixel point in the first color component based on the corresponding stretch coefficient of each pixel point in the first luminance component to obtain the second color component, the method further includes:
determining a threshold value of a stretching coefficient corresponding to each pixel point in the first brightness component based on a preset relation between the brightness and the threshold value of the stretching coefficient;
judging whether the first luminance component contains any pixel point whose stretch coefficient is larger than the threshold of the stretch coefficient corresponding to that pixel point;
if yes, adjusting the stretching coefficient corresponding to any pixel point to be the same as the threshold value of the stretching coefficient corresponding to the pixel point.
In the scheme provided by the embodiment of the application, when it is determined that the stretching coefficient corresponding to any pixel point in the first brightness component exceeds the corresponding threshold value based on the preset relationship between the brightness and the threshold value of the stretching coefficient, the stretching coefficient corresponding to any pixel point is adjusted to be the same as the corresponding threshold value, and the problem that the color information corresponding to the pixel point is excessively adjusted due to the fact that the stretching coefficient corresponding to any pixel point is too large, and the color of the fused image is distorted is avoided.
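Clamping a stretch coefficient to its luminance-dependent threshold can be sketched as below. The patent does not specify the preset luminance-to-threshold relation, so the linear `threshold_of` used here is purely an assumption:

```python
import numpy as np

def clamp_stretch(stretch, first_luma, threshold_of=lambda y: 1.0 + y / 255.0):
    """Clamp each pixel's stretch coefficient to a luminance-dependent cap.

    `threshold_of` stands in for the patent's preset relation between
    luminance and the stretch-coefficient threshold (assumed form).
    """
    caps = threshold_of(first_luma)
    # Any coefficient above its cap is set equal to the cap.
    return np.minimum(stretch, caps)

y1 = np.array([[0.0, 255.0]])   # a dark pixel and a bright pixel
s  = np.array([[3.0, 1.5]])     # raw stretch coefficients
clamped = clamp_stretch(s, y1)
```

The dark pixel's coefficient (3.0) exceeds its cap and is clamped, while the bright pixel's coefficient passes through unchanged.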
Optionally, if the luminances in the first luminance component and the third luminance component were respectively downsampled based on a preset sampling window, then after the stretch coefficient corresponding to any pixel point is adjusted to equal the threshold of the stretch coefficient corresponding to that pixel point, the method further includes:
and performing upsampling processing on the adjusted stretching coefficient corresponding to each pixel point in the first brightness component to obtain the stretching coefficient corresponding to each pixel point in the first brightness component.
Optionally, adjusting color information of each pixel point in the first color component based on the stretch coefficient corresponding to the pixel point in the first luminance component to obtain the second color component, including:
determining the corresponding saturation of each pixel point based on the corresponding brightness of each pixel point in the color image after being denoised and the preset relation between the saturation and the brightness of each pixel point;
and adjusting the color information of each pixel point based on the corresponding stretching coefficient and the saturation of each pixel point to obtain the second color component.
In the scheme provided by the embodiment of the application, the saturation corresponding to each pixel point is determined based on the brightness corresponding to each pixel point in the color image after the denoising and the preset relation between the saturation and the brightness of each pixel point, and the color information is adjusted based on the saturation of each pixel point and the corresponding stretching coefficient, so that the color information corresponding to each pixel point is prevented from being adjusted by using the same saturation for all the pixel points of the color image, and the color distortion corresponding to partial pixel points is caused.
Optionally, determining the saturation corresponding to each pixel point based on the brightness corresponding to each pixel point in the color image after denoising and a preset relationship between the saturation and the brightness of each pixel point includes:
determining a first difference value between a preset dark area saturation and a preset bright area saturation, a second difference value between the minimum value of the brightness in the bright area and the brightness corresponding to each pixel point in the third brightness component, and a third difference value between the minimum value of the brightness in the bright area and the maximum value of the brightness in the dark area;
determining a first ratio between the second difference and the third difference based on a preset clipping function, and determining a first product between the first ratio and the first difference;
and adding the first product and the preset bright area saturation to obtain the saturation corresponding to each pixel point.
Optionally, adjusting the color information of each pixel point based on the corresponding stretch coefficient and the saturation of each pixel point includes:
determining a second ratio between the brightness value of the third brightness component corresponding to each pixel point and the brightness value of the corresponding first brightness component, and taking the second ratio as the stretching coefficient;
determining a second product between the stretching coefficient and the saturation, and determining a fourth difference value between the color information of the first color component corresponding to each pixel point and a preset value;
and multiplying the second product by the fourth difference to obtain the color information of each pixel point after adjustment.
In a second aspect, an apparatus for image fusion processing provided in an embodiment of the present application includes:
the decomposition unit is used for acquiring a color image and a black-and-white image in the same scene, decomposing the color image to obtain a characteristic parameter and acquiring a second brightness component corresponding to the black-and-white image, wherein the characteristic parameter comprises a first brightness component and a first color component;
the first fusion unit is used for carrying out first fusion on the first brightness component and the second brightness component to obtain a third brightness component;
the processing unit is used for adjusting the first color component based on the first brightness component and the third brightness component to obtain a second color component;
and the second fusion unit is used for carrying out second fusion on the third brightness component and the second color component to obtain a fused image.
Optionally, the decomposition unit is specifically configured to:
judging whether a color channel and a brightness channel in data corresponding to each pixel point in the color image are in a separated state or not;
if the color channel and the brightness channel in the data are not in a separated state, performing format conversion on the data so as to enable the color channel and the brightness channel in the converted data to be in a separated state;
decomposing the data corresponding to each pixel point to obtain the brightness and color information corresponding to each pixel point;
and extracting the brightness of all pixel points in the color image to obtain the first brightness component, and extracting the color information of all pixel points in the color image to obtain the first color component.
Optionally, the first fusion unit is specifically configured to:
fusing the brightness corresponding to each pixel point in the first brightness component with the brightness corresponding to the pixel point at the same position in the second brightness component to obtain fused brightness corresponding to each pixel point at the same position;
and obtaining the third brightness component based on the fused brightness corresponding to all the pixel points in the first brightness component.
Optionally, the processing unit comprises: the device comprises a denoising unit, a comparison unit and an adjusting unit;
the denoising unit is configured to remove noise in the first luminance component and the third luminance component;
the comparison unit is used for comparing the brightness corresponding to each pixel point in the denoised first brightness component with the brightness of the pixel point at the same position in the denoised third brightness component to obtain the stretching coefficient corresponding to each pixel point at the same position;
and the adjusting unit is used for adjusting the color information of each pixel point in the first color component based on the corresponding stretch coefficient of each pixel point in the first brightness component to obtain the second color component.
Optionally, the denoising unit is specifically configured to:
performing filtering processing on the first luminance component and the third luminance component based on an image filtering algorithm; and/or
And respectively carrying out downsampling processing on the brightness in the first brightness component and the third brightness component based on a preset sampling window.
Optionally, the apparatus further comprises: a determination unit and a judgment unit;
the determining unit is configured to determine, based on a preset relationship between luminance and a threshold of a stretch coefficient, a threshold of a stretch coefficient corresponding to each pixel point in the first luminance component;
the judging unit is used for judging whether the stretching coefficient corresponding to any pixel point is larger than the threshold value of the stretching coefficient corresponding to the pixel point in the first brightness component;
the processing unit is further configured to adjust the stretch coefficient corresponding to any pixel point to be the same as the threshold of the stretch coefficient corresponding to the pixel point if the stretch coefficient corresponding to the pixel point is greater than the threshold of the stretch coefficient corresponding to the pixel point.
Optionally, the processing unit is further configured to:
and performing upsampling processing on the adjusted stretching coefficient corresponding to each pixel point in the first brightness component to obtain the stretching coefficient corresponding to each pixel point in the first brightness component.
Optionally, the determining unit is further configured to determine a saturation corresponding to each pixel point in the color image based on the corresponding brightness after each pixel point in the color image is denoised and a preset relationship between the saturation and the brightness of each pixel point;
the processing unit is specifically configured to adjust the color information of each pixel point based on the corresponding stretch coefficient and the saturation of each pixel point, so as to obtain the second color component.
Optionally, the determining unit is specifically configured to:
determining a first difference value between a preset dark area saturation and a preset bright area saturation, a second difference value between the minimum value of the brightness in the bright area and the brightness corresponding to each pixel point in the third brightness component, and a third difference value between the minimum value of the brightness in the bright area and the maximum value of the brightness in the dark area;
determining a first ratio between the second difference and the third difference based on a preset clipping function, and determining a first product between the first ratio and the first difference;
and adding the first product and the preset bright area saturation to obtain the saturation corresponding to each pixel point.
Optionally, the processing unit is specifically configured to:
determining a second ratio between the brightness value of the third brightness component corresponding to each pixel point and the brightness value of the corresponding first brightness component, and taking the second ratio as the stretching coefficient;
determining a second product between the stretching coefficient and the saturation, and determining a fourth difference value between the color information of the first color component corresponding to each pixel point and a preset value;
and multiplying the second product by the fourth difference to obtain the color information of each pixel point after adjustment.
In a third aspect, the present application provides an electronic device, comprising:
a memory for storing instructions for execution by at least one processor;
a processor for executing instructions stored in a memory to perform the method of the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon computer instructions which, when run on a computer, cause the computer to perform the method of the first aspect.
Drawings
Fig. 1 is a flowchart of a method for image fusion processing according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a position of any adjacent pixel point in a color image according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an image fusion processing apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a processing unit according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an apparatus for image fusion processing according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The application provides a method and a device for image fusion processing, which are used for solving the technical problem that the quality of a fused image obtained in the prior art is poor.
In order to solve the above technical problem, the general idea of the solution is as follows:
in the technical solution of the embodiment of the application, a color image may be decomposed to obtain a first luminance component and a first color component, and a second luminance component corresponding to a black-and-white image is obtained, then the first luminance component and the second luminance component are subjected to first fusion to obtain a third luminance component, then the first color component is adjusted based on the first luminance component and the third luminance component to obtain a second color component, and finally the third luminance component and the second color component are subjected to second fusion to obtain a fused image. Therefore, the color component of the color image can be adjusted through the fused brightness component and the brightness component of the color image, the influence of the fused brightness component on the color component is avoided, and the quality of the fused image is improved.
In the solutions provided in the embodiments of the present application, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to better understand the technical solutions, the technical solutions of the present application are described in detail below with reference to the drawings and specific embodiments, and it should be understood that the specific features in the embodiments and examples of the present application are detailed descriptions of the technical solutions of the present application, and are not limitations of the technical solutions of the present application, and the technical features in the embodiments and examples of the present application may be combined with each other without conflict.
The term "and/or" herein merely describes an association between related objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The method for image fusion processing provided by the embodiments of the present application is described in further detail below with reference to the drawings in the specification, and a specific implementation manner of the method may include the following steps (a flow of the method is shown in fig. 1):
step 101, an electronic device acquires a color image and a black-and-white image in the same scene, decomposes the color image to obtain a characteristic parameter, and acquires a second luminance component corresponding to the black-and-white image, wherein the characteristic parameter includes a first luminance component and a first color component.
The color image and the black-and-white image in the same scene acquired by the electronic device should have the same number of pixel points, and after the color image and the black-and-white image in the same scene are acquired, characteristic parameters, such as a luminance component and a color component, corresponding to each pixel point in the color image need to be acquired, and a luminance component corresponding to each pixel point in the black-and-white image also needs to be acquired.
In image processing, the data corresponding to each pixel point in a color image may take a number of formats. In some formats the color channel and the luminance channel are in a separated state: the color channel carries color information such as chromaticity, hue, or saturation, and the luminance channel carries luminance information; examples include YUV and HSV. In other formats the luminance channel and the color channel are not separated, for example RGB and the print color model CMYK (Cyan, Magenta, Yellow, Black). Therefore, when acquiring the characteristic parameter corresponding to each pixel point in the color image, the electronic device needs to determine the data format of the pixel points; if the data format does not separate the color channel from the luminance channel, it performs a format conversion on the pixel data in the color image so as to obtain the required characteristic parameters.
Further, because the characteristic parameters related to the technical solution provided in the embodiment of the present application include a luminance component and a color component, decomposing the color image to obtain the characteristic parameters includes: judging whether a color channel and a brightness channel in data corresponding to each pixel point in the color image are in a separated state or not; if the color channel and the brightness channel in the data are not in a separated state, performing format conversion on the data so as to enable the color channel and the brightness channel in the converted data to be in a separated state; decomposing data corresponding to each pixel point to obtain brightness and color information corresponding to each pixel point; the method comprises the steps of extracting the brightness of all pixel points in a color image to obtain a first brightness component, and extracting the color information of all pixel points in the color image to obtain a first color component.
Specifically, before decomposing the data corresponding to each pixel point in the color image, the electronic device first judges whether the color channel and the luminance channel in that data are in a separated state. If not, it accesses the value of each data component pixel by pixel according to a preset conversion relation between data formats and separates the luminance channel from the color channel; it then decomposes the converted data corresponding to each pixel point to obtain the luminance and color information of that pixel point, obtains the first luminance component from the luminances of all the pixel points in the color image, and obtains the first color component from the color information of all the pixel points. If the channels are already separated, the data corresponding to each pixel point is decomposed directly to obtain the first luminance component and the first color component.
In the execution process of this step, since the black-and-white image only has a luminance component, after the electronic device acquires the black-and-white image, the acquired data corresponding to each pixel point is the luminance corresponding to each pixel point, and then the second luminance component is obtained based on the luminances corresponding to all the pixel points.
And 102, the electronic equipment performs first fusion on the first brightness component and the second brightness component to obtain a third brightness component.
Specifically, after obtaining the first luminance component and the second luminance component, the electronic device fuses the luminance corresponding to each pixel point in the first luminance component with the luminance of the pixel point at the same position in the second luminance component to obtain a fused luminance for each position. This fusion can be performed in various ways; two preferred modes are described below:
Mode 1: perform wavelet decomposition on the first brightness component and the second brightness component to obtain high-frequency and low-frequency components, fuse the low-frequency components with a weighted-average algorithm and the high-frequency components with an absolute-maximum method, and reconstruct to obtain the fused third brightness component.
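Mode 1 can be sketched with a single-level Haar transform — a minimal stand-in for whichever wavelet an implementation actually uses; the function names and the 0.5 low-frequency weight are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def haar_dwt2(x):
    # Single-level 2-D Haar decomposition of an even-sized luminance plane.
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 4,   # low-frequency approximation
            (a + b - c - d) / 4,   # horizontal detail
            (a - b + c - d) / 4,   # vertical detail
            (a - b - c + d) / 4)   # diagonal detail

def haar_idwt2(cA, cH, cV, cD):
    # Exact inverse of haar_dwt2.
    x = np.empty((2 * cA.shape[0], 2 * cA.shape[1]))
    x[0::2, 0::2] = cA + cH + cV + cD
    x[0::2, 1::2] = cA + cH - cV - cD
    x[1::2, 0::2] = cA - cH + cV - cD
    x[1::2, 1::2] = cA - cH - cV + cD
    return x

def fuse_mode1(y_color, y_mono, w=0.5):
    # Low frequencies: weighted average; high frequencies: absolute maximum.
    A1, H1, V1, D1 = haar_dwt2(y_color)
    A2, H2, V2, D2 = haar_dwt2(y_mono)
    absmax = lambda p, q: np.where(np.abs(p) >= np.abs(q), p, q)
    return haar_idwt2(w * A1 + (1 - w) * A2,
                      absmax(H1, H2), absmax(V1, V2), absmax(D1, D2))
```

Fusing an image with itself returns the image unchanged, which is a quick sanity check on the decomposition/reconstruction pair.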
Mode 2: define a weight for the brightness of each pixel point in the first brightness component and the second brightness component, perform a weighted average of the brightnesses of the pixel points at the same position in the first brightness component and the second brightness component based on those weights to obtain the fused brightness for each position, and then obtain the fused third brightness component based on the fused brightnesses of all the pixel points.
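Mode 2 reduces to a per-pixel weighted average. A minimal sketch, assuming the weight map has already been defined (a constant 0.5 map is used in the test only for illustration):

```python
import numpy as np

def fuse_mode2(y_color, y_mono, weights):
    # weights[i, j] is the weight given to the color image's luminance at (i, j);
    # the black-and-white image's luminance receives the complementary weight.
    weights = np.asarray(weights, dtype=float)
    return weights * np.asarray(y_color, dtype=float) \
        + (1.0 - weights) * np.asarray(y_mono, dtype=float)
```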
Then, the electronic device obtains a third brightness component based on the fused brightness corresponding to all the pixel points in the first brightness component.
Step 103, the electronic device adjusts the first color component based on the first luminance component and the third luminance component to obtain a second color component.
After obtaining the fused third luminance component, the electronic device adjusts the first color component based on the first luminance component and the third luminance component to obtain a second color component, including: removing noise in the first luminance component and the third luminance component; comparing the brightness corresponding to each pixel point in the denoised first brightness component with the brightness of the pixel point at the same position in the denoised third brightness component to obtain the stretching coefficient corresponding to each pixel point at the same position; and adjusting the color information of each pixel point in the first color component based on the corresponding stretching coefficient of each pixel point in the first brightness component to obtain a second color component.
Specifically, noise between adjacent pixel points can cause abrupt changes in the brightness of some pixel points in the first brightness component and the third brightness component, which in turn distorts the color of the fused image. The noise in the first luminance component and the third luminance component therefore needs to be removed. This can be done in various ways; several preferred modes are given below.
Mode 1, the electronic device performs filtering processing on the first luminance component and the third luminance component based on an image filtering algorithm, for example, the image filtering algorithm includes mean filtering, median filtering, or gaussian filtering.
Mode 2, the electronic device performs downsampling processing on the luminances in the first luminance component and the third luminance component respectively based on a preset sampling window. It is to be understood that the window in which the electronic device downsamples the first luma component and the third luma component is the same; the electronic device may perform downsampling processing on the luminances in the first luminance component and the third luminance component at least once based on a preset sampling window, where the preset sampling windows of the downsampling processing each time may be the same or different, and are not limited herein.
Mode 3, the electronic device first performs filtering processing on the first luminance component and the third luminance component based on an image filtering algorithm, and then performs downsampling processing on the luminances in the two filtered components based on a preset sampling window.
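The three denoising modes above can be sketched as follows; the 3x3 mean filter and the 2x2 averaging window are illustrative choices, since the patent leaves the filter kernel and sampling window unspecified:

```python
import numpy as np

def mean_filter(y, k=3):
    # Mode 1: simple k x k mean filtering with edge replication.
    y = np.asarray(y, dtype=float)
    pad = k // 2
    yp = np.pad(y, pad, mode='edge')
    out = np.zeros_like(y)
    h, w = y.shape
    for dy in range(k):
        for dx in range(k):
            out += yp[dy:dy + h, dx:dx + w]
    return out / (k * k)

def downsample(y, win=2):
    # Mode 2: average over non-overlapping win x win sampling windows.
    y = np.asarray(y, dtype=float)
    h, w = (y.shape[0] // win) * win, (y.shape[1] // win) * win
    return y[:h, :w].reshape(h // win, win, w // win, win).mean(axis=(1, 3))

def denoise_mode3(y):
    # Mode 3: filter first, then downsample.
    return downsample(mean_filter(y))
```

The same window must be used for both the first and the third luminance component so that the two denoised maps stay pixel-aligned.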
Then, after denoising the first luminance component and the third luminance component, the electronic device compares the luminance corresponding to each pixel point in the denoised first luminance component with the luminance of the pixel point at the same position in the denoised third luminance component to obtain the stretching coefficient corresponding to each pixel point, and adjusts the color information corresponding to each pixel point based on that stretching coefficient to obtain the second color component.
Specifically, the electronic device adjusts color information corresponding to each pixel point in the first color component based on the stretch coefficient corresponding to the brightness of each pixel point in the first brightness component to obtain a second color component, including:
determining the corresponding saturation of each pixel point based on the corresponding brightness of each pixel point in the color image after denoising and the preset relation between the saturation and the brightness of each pixel point;
and adjusting the color information of each pixel point based on the stretching coefficient and the saturation corresponding to the brightness of each pixel point to obtain a second color component.
Specifically, the electronic device determines a first difference between a preset dark-area saturation and a preset bright-area saturation, a second difference between the minimum value of brightness in the bright area and the brightness corresponding to each pixel point in the third brightness component, and a third difference between the minimum value of brightness in the bright area and the maximum value of brightness in the dark area; then, based on a preset clipping function, it determines a first ratio between the second difference and the third difference and a first product between the clipped first ratio and the first difference; finally, it adds the first product to the preset bright-area saturation to obtain the saturation corresponding to each pixel point.
That is, the electronic device can obtain the saturation corresponding to each pixel point according to the following formula:
sat = sat_bright + clip((bright_th - y_vis) / (bright_th - dark_th)) * (sat_dark - sat_bright)

wherein sat represents the saturation corresponding to each pixel point; sat_bright represents the preset bright-area saturation; sat_dark represents the preset dark-area saturation; bright_th represents the minimum value of luminance in the bright area; dark_th represents the maximum value of luminance in the dark area; y_vis represents the luminance corresponding to any pixel point in the third luminance component; and clip() is a clipping function.
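The saturation computation described above can be transcribed in a few lines of pure Python; the clamp of the ratio to [0, 1] is the assumed behavior of the clipping function:

```python
def clip01(x):
    # Assumed behavior of the clipping function: clamp to [0, 1].
    return min(max(x, 0.0), 1.0)

def pixel_saturation(y_vis, sat_bright, sat_dark, bright_th, dark_th):
    # sat = sat_bright + clip((bright_th - y_vis) / (bright_th - dark_th))
    #       * (sat_dark - sat_bright)
    ratio = (bright_th - y_vis) / float(bright_th - dark_th)
    return sat_bright + clip01(ratio) * (sat_dark - sat_bright)
```

Pixels at or above the bright-area threshold get the bright-area saturation, pixels at or below the dark-area threshold get the dark-area saturation, and pixels in between are interpolated linearly.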
Further, after determining the saturation corresponding to each pixel point, the electronic device determines a second ratio between the brightness value of the third brightness component corresponding to each pixel point and the brightness value of the corresponding first brightness component, and takes the second ratio as a stretching coefficient; then, determining a second product between the stretching coefficient and the saturation, and determining a fourth difference value between the color information of the first color component corresponding to each pixel point and a preset value; and multiplying the second product by the fourth difference to obtain the color information of each pixel point after adjustment.
That is, the electronic device may adjust the color information of each pixel point according to the following formula:
u = (y_v / y_f) * sat * (u' - c)

wherein u represents the color information corresponding to any pixel point in the second color component; y_v represents the luminance corresponding to any pixel point in the third luminance component; y_f represents the luminance corresponding to any pixel point in the first luminance component; u' represents the color information corresponding to any pixel point in the first color component; sat is the saturation determined above; and c is the preset value.
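A literal transcription of that adjustment; the chroma offset c is written as a parameter because the patent only calls it "a preset value" — the default of 128, typical for 8-bit chroma planes, is purely an assumption:

```python
def adjust_chroma(u_prime, y_f, y_v, sat, c=128.0):
    # Stretch coefficient: fused (third) luminance over original (first) luminance.
    stretch = y_v / float(y_f)
    # Second product (stretch * sat) times fourth difference (u' - c).
    return stretch * sat * (u_prime - c)
```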
Step 104: the electronic device performs a second fusion on the third brightness component and the second color component to obtain a fused image.
In the embodiment provided by the application, after the electronic device fuses the brightness components of the color image and the black-and-white image, the color component of the color image is adjusted according to the brightness component of the color image and the fused brightness component to obtain the color component of the fused image.
Further, to avoid color distortion of the fused image caused by excessive adjustment of the color information corresponding to a pixel point, before the electronic device adjusts the color information of each pixel point in the first color component based on the corresponding stretch coefficient of each pixel point in the first brightness component to obtain the second color component, the method further includes:
determining a threshold value of a stretching coefficient corresponding to each pixel point in the first brightness component based on a preset relation between the brightness and the threshold value of the stretching coefficient;
judging whether there is any pixel point in the first brightness component whose stretching coefficient is greater than the threshold of the stretching coefficient corresponding to that pixel point;
if yes, adjusting the stretching coefficient corresponding to any pixel point to be the same as the threshold value of the stretching coefficient corresponding to the pixel point.
Specifically, the correspondence between the preset brightness and the threshold of the stretch coefficient is stored locally on the electronic device or in a database. After the electronic device obtains the stretch coefficient corresponding to each pixel point in the first brightness component, it determines the threshold of the stretch coefficient for each pixel point based on the brightness corresponding to that pixel point in the first brightness component, then judges whether the stretch coefficient corresponding to each pixel point is greater than that threshold, and if so, adjusts the stretch coefficient to be the same as the threshold.
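The clamping step amounts to a per-pixel minimum. A sketch, assuming the threshold map has already been looked up from the brightness-to-threshold relation:

```python
import numpy as np

def clamp_stretch(stretch_map, threshold_map):
    # Any stretch coefficient above its per-pixel threshold is pulled down
    # to that threshold; coefficients at or below it are left unchanged.
    return np.minimum(np.asarray(stretch_map, dtype=float),
                      np.asarray(threshold_map, dtype=float))
```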
Further, if the electronic device performs downsampling processing on the luminances in the first luminance component and the third luminance component based on a preset sampling window, in order to obtain a high-resolution fused image, after the electronic device adjusts the stretch coefficient corresponding to any pixel point to be the same as the threshold of the stretch coefficient corresponding to the pixel point, the method further includes:
and performing upsampling processing on the adjusted stretching coefficient corresponding to each pixel point in the first brightness component to obtain the stretching coefficient corresponding to each pixel point in the first brightness component.
Specifically, there are various methods for the electronic device to perform upsampling processing on the adjusted stretch coefficient corresponding to each pixel point in the first luminance component, for example, methods such as nearest neighbor interpolation, bilinear interpolation, mean value interpolation, or median interpolation. In order to facilitate understanding of the above processes of performing upsampling on the adjusted stretch coefficient corresponding to each pixel point in the first luminance component, the following process of performing upsampling is described in detail by taking a nearest neighbor interpolation method as an example.
For example, as shown in fig. 2, A, B, C, D is a position diagram of any adjacent pixel points in the color image, where the coordinates of the four corners of the area are (i, j), (i+1, j), (i, j+1), and (i+1, j+1), respectively; the upsampling process inserts new pixel points within this area. Specifically, let the coordinates of a newly inserted pixel point be (i + u, j + v), where u and v are fractions greater than zero and less than 1. If u < 0.5 and v < 0.5, the new pixel point falls in the area of pixel point A, and the stretching coefficient of pixel point A is assigned to it; similarly, if 0.5 < u < 1 and v < 0.5, the new pixel point falls in the area of pixel point B, and the stretching coefficient of pixel point B is assigned to it; if u < 0.5 and 0.5 < v < 1, the new pixel point falls in the area of pixel point C, and the stretching coefficient of pixel point C is assigned to it; if 0.5 < u < 1 and 0.5 < v < 1, the new pixel point falls in the area of pixel point D, and the stretching coefficient of pixel point D is assigned to it. That is, the stretching coefficient of the nearest adjacent pixel point is assigned to the newly inserted pixel point, which realizes the upsampling processing within the area.
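The nearest-neighbour rule described above can be sketched in a few lines; an integer scale factor is assumed for simplicity:

```python
def nn_upsample(coeff, scale=2):
    # Each inserted pixel copies the stretch coefficient of the nearest
    # original pixel, i.e. original index = new index // scale.
    rows, cols = len(coeff), len(coeff[0])
    return [[coeff[i // scale][j // scale] for j in range(cols * scale)]
            for i in range(rows * scale)]
```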
An embodiment of the present application provides an apparatus for image fusion processing, and referring to fig. 3, the apparatus includes:
the decomposition unit 301 is configured to obtain a color image and a black-and-white image in the same scene, decompose the color image to obtain a characteristic parameter, and obtain a second luminance component corresponding to the black-and-white image, where the characteristic parameter includes a first luminance component and a first color component;
a first fusion unit 302, configured to perform first fusion on the first luminance component and the second luminance component to obtain a third luminance component;
a processing unit 303, configured to adjust the first color component based on the first luminance component and the third luminance component to obtain a second color component;
a second fusing unit 304, configured to perform second fusion on the third luminance component and the second color component to obtain a fused image.
Optionally, the decomposition unit 301 is specifically configured to:
judging whether a color channel and a brightness channel in data corresponding to each pixel point in the color image are in a separated state or not;
if the color channel and the brightness channel in the data are not in a separated state, performing format conversion on the data so as to enable the color channel and the brightness channel in the converted data to be in a separated state;
decomposing the data corresponding to each pixel point to obtain the brightness and color information corresponding to each pixel point;
and extracting the brightness of all pixel points in the color image to obtain the first brightness component, and extracting the color information of all pixel points in the color image to obtain the first color component.
Optionally, the first fusion unit 302 is specifically configured to:
fusing the brightness corresponding to each pixel point in the first brightness component with the brightness corresponding to the pixel point at the same position in the second brightness component to obtain fused brightness corresponding to each pixel point at the same position;
and obtaining the third brightness component based on the fused brightness corresponding to all the pixel points in the first brightness component.
Optionally, referring to fig. 4, the processing unit 303 includes: denoising section 401, comparing section 402, and adjusting section 403;
the denoising unit 401 is configured to remove noise in the first luminance component and the third luminance component;
the comparing unit 402 is configured to compare the brightness corresponding to each pixel point in the denoised first brightness component with the brightness of a pixel point at the same position in the denoised third brightness component, so as to obtain a stretch coefficient corresponding to each pixel point at the same position;
the adjusting unit 403 is configured to adjust color information of each pixel in the first color component based on a stretch coefficient corresponding to each pixel in the first luminance component, so as to obtain the second color component.
Optionally, the denoising unit 401 is specifically configured to:
performing filtering processing on the first luminance component and the third luminance component based on an image filtering algorithm; and/or
And respectively carrying out downsampling processing on the brightness in the first brightness component and the third brightness component based on a preset sampling window.
Optionally, referring to fig. 5, the apparatus further comprises: a determination unit 501 and a judgment unit 502;
the determining unit 501 is configured to determine, based on a preset relationship between luminance and a threshold of a stretch coefficient, a threshold of a stretch coefficient corresponding to each pixel point in the first luminance component;
the determining unit 502 is configured to determine whether a stretching coefficient corresponding to any pixel in the first luminance component is greater than a threshold of the stretching coefficient corresponding to the pixel;
the processing unit 303 is further configured to adjust the stretch coefficient corresponding to any pixel point to be the same as the threshold of the stretch coefficient corresponding to the pixel point if the stretch coefficient corresponding to the pixel point is greater than the threshold of the stretch coefficient corresponding to the pixel point.
Optionally, the processing unit 303 is further configured to:
and performing upsampling processing on the adjusted stretching coefficient corresponding to each pixel point in the first brightness component to obtain the stretching coefficient corresponding to each pixel point in the first brightness component.
Optionally, the determining unit 501 is further configured to determine a saturation corresponding to each pixel point in the color image based on the corresponding brightness after each pixel point in the color image is denoised and a preset relationship between the saturation and the brightness of each pixel point;
the processing unit 303 is specifically configured to adjust the color information of each pixel point based on the corresponding stretch coefficient and the saturation of each pixel point, so as to obtain the second color component.
Optionally, the determining unit 501 is specifically configured to:
determining a first difference value between a preset dark area saturation and a preset bright area saturation, a second difference value between the minimum value of the brightness in the bright area and the brightness corresponding to each pixel point in the third brightness component, and a third difference value between the minimum value of the brightness in the bright area and the maximum value of the brightness in the dark area;
determining a first ratio between the second difference and the third difference based on a preset clipping function, and determining a first product between the first ratio and the first difference;
and adding the first product and the preset bright area saturation to obtain the saturation corresponding to each pixel point.
Optionally, the processing unit 303 is specifically configured to:
determining a second ratio between the brightness value of the third brightness component corresponding to each pixel point and the brightness value of the corresponding first brightness component, and taking the second ratio as the stretching coefficient;
determining a second product between the stretching coefficient and the saturation, and determining a fourth difference value between the color information of the first color component corresponding to each pixel point and a preset value;
and multiplying the second product by the fourth difference to obtain the color information of each pixel point after adjustment.
The present application provides an electronic device, see fig. 6, comprising:
a memory 601 for storing instructions for execution by at least one processor;
a processor 602 for executing instructions stored in the memory to perform the steps described in fig. 1.
The present application provides a computer readable storage medium having stored thereon computer instructions, which, when executed on a computer, cause the computer to perform the steps described in fig. 1.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (11)

1. A method of image fusion processing, comprising:
acquiring a color image and a black-and-white image in the same scene, decomposing the color image to obtain a characteristic parameter, and acquiring a second brightness component corresponding to the black-and-white image, wherein the characteristic parameter comprises a first brightness component and a first color component;
performing first fusion on the first brightness component and the second brightness component to obtain a third brightness component;
removing noise in the first and third luminance components;
comparing the brightness corresponding to each pixel point in the denoised first brightness component with the brightness of the pixel point at the same position in the denoised third brightness component to obtain a stretching coefficient corresponding to each pixel point at the same position;
determining the corresponding saturation of each pixel point based on the corresponding brightness of each pixel point in the color image after being denoised and the preset relation between the saturation and the brightness of each pixel point;
adjusting the color information of each pixel point in the first color component based on the corresponding stretching coefficient and the saturation of each pixel point to obtain a second color component;
and performing second fusion on the third brightness component and the second color component to obtain a fused image.
2. The method of claim 1, wherein decomposing the color image into feature parameters comprises:
judging whether a color channel and a brightness channel in data corresponding to each pixel point in the color image are in a separated state or not;
if not, performing format conversion on the data to enable the color channel and the brightness channel in the converted data to be in a separated state;
decomposing the data corresponding to each pixel point to obtain the brightness and color information corresponding to each pixel point;
and extracting the brightness of all pixel points in the color image to obtain the first brightness component, and extracting the color information of all pixel points in the color image to obtain the first color component.
3. The method of claim 1, wherein first fusing the first luma component with the second luma component to obtain a third luma component comprises:
fusing the brightness corresponding to each pixel point in the first brightness component with the brightness corresponding to the pixel point at the same position in the second brightness component to obtain fused brightness corresponding to each pixel point at the same position;
and obtaining the third brightness component based on the fused brightness corresponding to all the pixel points in the first brightness component.
4. The method of claim 1, wherein removing noise in the first luma component and the third luma component comprises:
performing filtering processing on the first luminance component and the third luminance component based on an image filtering algorithm; and/or
And respectively carrying out downsampling processing on the brightness in the first brightness component and the third brightness component based on a preset sampling window.
5. The method as claimed in claim 4, wherein before adjusting the color information of each pixel in the first color component based on the corresponding stretch coefficient of each pixel in the first luminance component to obtain the second color component, the method further comprises:
determining a threshold value of a stretching coefficient corresponding to each pixel point in the first brightness component based on a preset relation between the brightness and the threshold value of the stretching coefficient;
judging whether there is any pixel point in the first brightness component whose stretching coefficient is greater than the threshold of the stretching coefficient corresponding to that pixel point;
if yes, adjusting the stretching coefficient corresponding to any pixel point to be the same as the threshold value of the stretching coefficient corresponding to the pixel point.
6. The method of claim 5, wherein if the luminance in the first luminance component and the luminance in the third luminance component are downsampled based on a preset sampling window, and the stretch coefficient corresponding to any pixel is adjusted to be the same as the threshold of the stretch coefficient corresponding to the pixel, the method further comprises:
and performing upsampling processing on the adjusted stretching coefficient corresponding to each pixel point in the first brightness component to obtain the stretching coefficient corresponding to each pixel point in the first brightness component.
7. The method of claim 1, wherein determining the saturation corresponding to each pixel point in the color image based on the corresponding brightness after denoising each pixel point and a preset relationship between the saturation and the brightness of each pixel point comprises:
determining a first difference value between a preset dark area saturation and a preset bright area saturation, a second difference value between the minimum value of the brightness in the bright area and the brightness corresponding to each pixel point in the third brightness component, and a third difference value between the minimum value of the brightness in the bright area and the maximum value of the brightness in the dark area;
determining a first ratio between the second difference and the third difference based on a preset clipping function, and determining a first product between the first ratio and the first difference;
and adding the first product and the preset bright area saturation to obtain the saturation corresponding to each pixel point.
8. The method of claim 7, wherein adjusting the color information of each pixel point based on the corresponding stretch coefficient and the saturation of each pixel point comprises:
determining a second ratio between the brightness value of the third brightness component corresponding to each pixel point and the brightness value of the corresponding first brightness component, and taking the second ratio as the stretching coefficient;
determining a second product between the stretching coefficient and the saturation, and a fourth difference value between the color information of the first color component corresponding to each pixel point and a preset value;
and multiplying the second product by the fourth difference to obtain the color information of each pixel point after adjustment.
9. An apparatus for image fusion processing, comprising:
the decomposition unit is used for acquiring a color image and a black-and-white image in the same scene, decomposing the color image to obtain a characteristic parameter and acquiring a second brightness component corresponding to the black-and-white image, wherein the characteristic parameter comprises a first brightness component and a first color component;
the first fusion unit is used for carrying out first fusion on the first brightness component and the second brightness component to obtain a third brightness component;
a processing unit for removing noise in the first and third luminance components; comparing the brightness corresponding to each pixel point in the denoised first brightness component with the brightness of the pixel point at the same position in the denoised third brightness component to obtain the stretching coefficient corresponding to each pixel point at the same position;
the determining unit is used for determining the saturation corresponding to each pixel point based on the brightness corresponding to each pixel point in the color image after denoising and the preset relation between the saturation and the brightness of each pixel point;
the processing unit is further configured to adjust color information of each pixel point in the first color component based on the corresponding stretch coefficient and the saturation of each pixel point to obtain a second color component;
and the second fusion unit is used for carrying out second fusion on the third brightness component and the second color component to obtain a fused image.
10. An electronic device, characterized in that the electronic device comprises:
a memory for storing instructions for execution by at least one processor;
a processor for executing instructions stored in a memory to perform the method of any one of claims 1-8.
11. A computer-readable storage medium having stored thereon computer instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-8.
CN201910554868.9A 2019-06-25 2019-06-25 Image fusion processing method and device Active CN110298812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910554868.9A CN110298812B (en) 2019-06-25 2019-06-25 Image fusion processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910554868.9A CN110298812B (en) 2019-06-25 2019-06-25 Image fusion processing method and device

Publications (2)

Publication Number Publication Date
CN110298812A CN110298812A (en) 2019-10-01
CN110298812B true CN110298812B (en) 2021-08-27

Family

ID=68028832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910554868.9A Active CN110298812B (en) 2019-06-25 2019-06-25 Image fusion processing method and device

Country Status (1)

Country Link
CN (1) CN110298812B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111147857B (en) * 2019-12-06 2023-01-20 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112200757A (en) * 2020-09-29 2021-01-08 北京灵汐科技有限公司 Image processing method, image processing device, computer equipment and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023129A (en) * 2016-05-26 2016-10-12 西安工业大学 Infrared and visible light image fused automobile anti-blooming video image processing method
CN106447641A (en) * 2016-08-29 2017-02-22 努比亚技术有限公司 Image generation device and method
CN106550227B (en) * 2016-10-27 2019-02-22 成都西纬科技有限公司 A kind of image saturation method of adjustment and device
CN106454014B (en) * 2016-11-04 2019-03-08 安徽超远信息技术有限公司 A kind of method and device improving backlight scene vehicle snapshot picture quality
CN106780392B (en) * 2016-12-27 2020-10-02 浙江大华技术股份有限公司 Image fusion method and device
US11017501B2 (en) * 2016-12-28 2021-05-25 Huawei Technologies Co., Ltd. Demosaicing method and apparatus
CN107292860B (en) * 2017-07-26 2020-04-28 武汉鸿瑞达信息技术有限公司 Image processing method and device
CN107566747B (en) * 2017-09-22 2020-02-14 浙江大华技术股份有限公司 Image brightness enhancement method and device
CN108717691B (en) * 2018-06-06 2022-04-15 成都西纬科技有限公司 Image fusion method and device, electronic equipment and medium

Also Published As

Publication number Publication date
CN110298812A (en) 2019-10-01

Similar Documents

Publication Publication Date Title
CN110246108B (en) Image processing method, device and computer readable storage medium
KR101621614B1 (en) Method and apparatus for enhancing digital image, and apparatus for image processing using the same
US8774503B2 (en) Method for color feature extraction
US10565742B1 (en) Image processing method and apparatus
US20130064448A1 (en) Image chroma noise reduction
CN106846276B (en) Image enhancement method and device
US9165346B2 (en) Method and apparatus for reducing image noise
CN107507144B (en) Skin color enhancement processing method and device and image processing device
US20110019912A1 (en) Detecting And Correcting Peteye
JP2001126075A (en) Method and device for picture processing, and recording medium
KR20070090224A (en) Method of electronic color image saturation processing
Lee et al. Color image enhancement using histogram equalization method without changing hue and saturation
Pei et al. Effective image haze removal using dark channel prior and post-processing
CN110298812B (en) Image fusion processing method and device
US8121401B2 (en) Method for reducing enhancement of artifacts and noise in image color enhancement
CN111626967A (en) Image enhancement method, image enhancement device, computer device and readable storage medium
Liu et al. Single image haze removal via depth-based contrast stretching transform
CN114037641A (en) Low-illumination image enhancement method, device, equipment and medium
US20040228522A1 (en) Color image processing method and apparatus
WO2012153661A1 (en) Image correction device, image correction display device, image correction method, program, and recording medium
CN115578294B (en) Image enhancement method, device, equipment and storage medium
CN111598794A (en) Image imaging method and device for removing underwater overlapping condition
WO2010128683A1 (en) Blue sky color detection technique
CN116109511A (en) Method and system for infrared image edge enhancement
WO2012099013A1 (en) Image correction device, image correction display device, image correction method, program, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant