CN112651899A - Image processing method and device, electronic device and storage medium - Google Patents

Image processing method and device, electronic device and storage medium

Info

Publication number
CN112651899A
CN112651899A
Authority
CN
China
Prior art keywords
image
brightness
value
weight
determining
Prior art date
Legal status
Pending
Application number
CN202110057189.8A
Other languages
Chinese (zh)
Inventor
黄道
Current Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Original Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Pinecone Electronic Co Ltd filed Critical Beijing Xiaomi Pinecone Electronic Co Ltd
Priority claimed from CN202110057189.8A
Publication of CN112651899A

Classifications

    • G06T5/94
    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T5/00 Image enhancement or restoration
                    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10004 Still image; Photographic image
                        • G06T2207/10024 Color image
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20212 Image combination
                            • G06T2207/20221 Image fusion; Image merging

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: determining a first highlight area in a first image obtained by multi-frame exposure fusion, and calculating a first brightness mean value of the first highlight area; adjusting the brightness of an underexposed image among the multiple exposure frames fused into the first image to the first brightness mean value, generating a second image; determining a weight map of the first image based on the first brightness mean value, and performing brightness fusion on the second image and the first image based on the weight map of the first image, generating a third image; and superimposing color channel information onto the third image and outputting the result. This improves the rendering of captured images and suits the fields of image capture and video synthesis, thereby improving the display effect of images and video and the user experience.

Description

Image processing method and device, electronic device and storage medium
Technical Field
The present disclosure relates to image transformation technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, to increase the brightness of a captured image, multiple images with different exposures are typically acquired, and the multiple frames with different exposures are fused according to corresponding weights to generate a fused image with better dynamic range, color, noise, and detail, improving the quality of the captured image and the user experience. However, the fused image may exhibit a halo phenomenon, caused by ineffective, large-magnitude gradients present in the multi-frame exposure input images.
Disclosure of Invention
The disclosure provides an image processing method and device, an electronic device and a storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including:
determining a first highlight area in a multi-frame exposure fused first image, and calculating a first brightness mean value of the first highlight area; wherein the first image has a halo phenomenon;
adjusting the brightness of an underexposed image in the multi-frame exposure images fused into the first image to be the first brightness mean value, and generating a second image;
determining a weight map of the first image based on the first brightness mean value, and performing brightness fusion on the second image and the first image based on the weight map of the first image to generate a third image;
and superimposing color channel information onto the third image and outputting the result.
Preferably, the adjusting the brightness of the underexposed image in the multi-frame exposed images fused into the first image to the first brightness mean value to generate the second image includes:
determining a second highlight area in the underexposed image, calculating a second brightness mean value of the second highlight area, and calculating the difference value between the first brightness mean value and the second brightness mean value;
and adding the difference value to the brightness of the underexposed image to obtain the brightness values of the second image.
Preferably, determining a weight map of the first image based on the first luminance mean value includes:
determining a weight map of the second image according to the first brightness mean value and the brightness value of the pixel point of the second image;
and determining the weight map of the first image according to the weight map of the second image.
Preferably, the determining the weight map of the second image according to the first brightness mean value and the brightness values of the pixel points of the second image includes: when the second brightness value of a pixel point of the second image is greater than or equal to the first brightness mean value, the weight of the second brightness value of that pixel point is 1; when the second brightness value is smaller than the first brightness mean value, calculating the ratio of the square of the difference between the second brightness value of the pixel point and the first brightness mean value to a set constant, and setting the weight of the second brightness value of the pixel point to e raised to the negative of that ratio; the weights of the second brightness values of all the pixel points of the second image form the weight map of the second image;
determining the weight map of the first image from the weight map of the second image comprises:
the weight of the first brightness value of a designated pixel point in the weight map of the first image is 1 minus the weight of the second brightness value of the corresponding pixel point in the weight map of the second image.
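Written compactly (a transcription of the rule just stated, not notation from the original; here Y_2(p) is the second brightness value at pixel point p, mu_1 the first brightness mean value, and sigma the set constant):

```latex
W_2(p) =
\begin{cases}
1, & Y_2(p) \ge \mu_1 \\[4pt]
\exp\!\left( - \dfrac{\left( Y_2(p) - \mu_1 \right)^2}{\sigma} \right), & Y_2(p) < \mu_1
\end{cases}
\qquad\qquad
W_1(p) = 1 - W_2(p)
```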
Preferably, the luminance fusing the second image and the first image based on the weight map of the first image to generate a third image includes:
and weighting the first image and the second image according to the weight map of the first image and the weight value of the second image to obtain the third image.
Preferably, the determining the first highlight region in the multi-frame exposure fused first image includes:
determining the brightness weight value of the pixel points of the underexposed image, determining the area formed by the pixel points with the brightness weight value larger than a set threshold value in the underexposed image as the second highlight area, and determining the area corresponding to the second highlight area in the first image as the first highlight area.
Preferably, before determining the brightness weight value of the pixel point of the underexposed image, the method further includes:
and performing resolution reduction processing on the underexposed image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
a determining unit configured to determine a first highlight region in the multi-frame exposure-fused first image; wherein the first image has a halo phenomenon;
the calculating unit is used for calculating a first brightness mean value of the first highlight area;
the first generation unit is used for adjusting the brightness of an underexposed image in a plurality of frames of exposed images fused into the first image into the first brightness mean value to generate a second image;
the second generation unit is used for determining a weight map of the first image based on the first brightness mean value, and performing brightness fusion on the second image and the first image based on the weight map of the first image to generate a third image;
and the output unit is used for superimposing color channel information onto the third image and outputting the result.
Preferably, the first generating unit is further configured to:
determining a second highlight area in the underexposed image, and calculating a second brightness mean value of the second highlight area and a difference value of the first brightness mean value and the second brightness mean value;
and adding the difference value to the brightness of the underexposed image to be used as a brightness value of the second image.
Preferably, the second generating unit is further configured to:
determining a weight map of the second image according to the first brightness mean value and the brightness value of the pixel point of the second image;
and determining the weight map of the first image according to the weight map of the second image.
Preferably, the second generating unit is further configured to:
when the second brightness value of a pixel point of the second image is greater than or equal to the first brightness mean value, the weight of the second brightness value of that pixel point is 1; when the second brightness value is smaller than the first brightness mean value, calculating the ratio of the square of the difference between the second brightness value of the pixel point and the first brightness mean value to a set constant, and setting the weight of the second brightness value of the pixel point to e raised to the negative of that ratio; the weights of the second brightness values of all the pixel points of the second image form the weight map of the second image;
determining the weight map of the first image from the weight map of the second image comprises:
the weight of the first brightness value of a designated pixel point in the weight map of the first image is 1 minus the weight of the second brightness value of the corresponding pixel point in the weight map of the second image.
Preferably, the second generating unit is further configured to:
and weighting the first image and the second image according to the weight map of the first image and the weight value of the second image to obtain the third image.
Preferably, the determining unit is further configured to:
determining the brightness weight value of the pixel points of the underexposed image, determining the area formed by the pixel points with the brightness weight value larger than a set threshold value in the underexposed image as the second highlight area, and determining the area corresponding to the second highlight area in the first image as the first highlight area.
Preferably, the apparatus further comprises:
and the resolution reducing unit is used for performing resolution reducing processing on the underexposed image before determining the brightness weight value of the pixel point of the underexposed image.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device including: a processor and a memory for storing processor-executable instructions, wherein the processor is configured to perform the steps of the image processing method described above when the executable instructions in the memory are invoked.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the steps of the image processing method described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the embodiment of the present disclosure, the highlight area of the underexposed image used in image fusion is determined, the brightness values of the underexposed image are adjusted, and the brightness-adjusted underexposed image is fused with the fused image again, so that the halo phenomenon of the fused image is greatly reduced and its display effect is improved. The embodiment of the present disclosure improves the rendering of captured images and is well suited to the fields of photography and video synthesis, thereby improving the display effect of images and video and the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic flow chart illustrating an image processing method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating an image processing method according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of two-frame image composition;
fig. 4 is a schematic diagram illustrating a composition structure of an image processing apparatus according to an embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device shown in an embodiment of the disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of devices and apparatus consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure, and as shown in fig. 1, the image processing method according to the embodiment of the present disclosure includes the following steps:
s11, determining a first highlight area in the multi-frame exposure fused first image, and calculating a first brightness mean value of the first highlight area.
Wherein the first image has a halo phenomenon.
In the embodiment of the present disclosure, the multi-frame exposure fused first image is an image obtained by fusing two or more images with different exposures, the differently exposed images including underexposed or overexposed images; the multiple frames are fused according to weight values derived from the brightness of the differently exposed images to generate the fused image.
In the embodiment of the present disclosure, image fusion is performed layer by layer on Laplacian pyramids, and the number of images to be fused may be two or more. First, each of the multiple frames is decomposed into Laplacian pyramid layers. Specifically:
A Laplacian pyramid of luminance is constructed for each of the multiple exposure frames, where constructing the Laplacian pyramid of an image includes:
down-sampling each image to obtain its Gaussian pyramid layers. Specifically, Gaussian blur is first applied to the preceding (finer) layer of the image, and the even rows and columns of the blurred image are then deleted, yielding the Gaussian pyramid layer of the current level. Performing this Gaussian pyramid processing a set number of times in sequence yields the set number of Gaussian pyramid layers for each image. The number of layers can be chosen according to the accuracy requirements of the image processing; for example, an image may be decomposed into 4, 8, or 16 Gaussian pyramid layers.
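As an illustration, a minimal sketch of this Gaussian pyramid step in Python with OpenCV and NumPy (the function name, the 5x5 kernel size, and the 0-based indexing convention for "even" rows and columns are assumptions, not specified by the text):

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    """Gaussian pyramid as described above: blur the finer layer,
    then delete its even rows and columns to get the next layer."""
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        blurred = cv2.GaussianBlur(pyr[-1], (5, 5), 0)
        pyr.append(blurred[1::2, 1::2])  # keep odd-indexed rows/cols
    return pyr
```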
The current Laplacian pyramid layer of each image is then obtained as follows:
up-sample the Gaussian pyramid layer one level above the current layer of each image, and apply Gaussian blur to the up-sampled layer to obtain a first Gaussian pyramid layer; subtract the first Gaussian pyramid layer from the current Gaussian pyramid layer of the image to obtain the current Laplacian pyramid layer; the Laplacian pyramid layers acquired layer by layer are combined to form the Laplacian pyramid corresponding to each image. That is, each Laplacian pyramid layer is the difference between the source layer and the same layer after it has been reduced and then enlarged, i.e., up-sampled. The up-sampling process here includes: mapping the pixel at position (x, y) of the image to be up-sampled to position (2x+1, 2y+1) of the target image, and filling the positions of the target image that receive no pixel with 0.
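Continuing the sketch above, the zero-fill up-sampling and the Laplacian layers could look as follows (single-channel input and the 4x gain after blurring, which compensates for the inserted zeros, are assumptions; the text does not state a gain):

```python
def upsample(img, target_shape):
    """Map pixel (x, y) to (2x+1, 2y+1) of the target, fill the
    remaining positions with 0, then Gaussian-blur the result."""
    up = np.zeros(target_shape, dtype=np.float32)
    h, w = img.shape[:2]
    up[1:2 * h:2, 1:2 * w:2] = img
    return cv2.GaussianBlur(up, (5, 5), 0) * 4.0

def laplacian_pyramid(img, levels):
    """Each layer is the Gaussian layer minus its reduced-then-enlarged
    version; the coarsest Gaussian layer is kept as the base."""
    gauss = gaussian_pyramid(img, levels)
    lap = [gauss[i] - upsample(gauss[i + 1], gauss[i].shape)
           for i in range(levels - 1)]
    lap.append(gauss[-1])
    return lap
```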
In the embodiment of the present disclosure, after the Laplacian pyramid layers are determined for each frame of the multi-frame images, weights are determined for each layer of the Laplacian pyramid; these weights form a weight pyramid. Obtaining the weight pyramid corresponding to each image specifically includes:
acquiring the original weight map corresponding to each image, and down-sampling each original weight map to obtain the second-layer weight map of each image; then down-sampling the second-layer weight map of each image to obtain its third-layer weight map, and so on, until weight maps in one-to-one correspondence with the layers of each image's Laplacian pyramid are obtained.
In the embodiment of the present disclosure, fusion weights are thus constructed for the Laplacian pyramid layers of the multi-frame exposure images. When the images are fused, they are fused layer by layer according to the fusion weights: each Laplacian pyramid layer is multiplied by the weight map of the corresponding layer, the products are summed across frames, and the fused layers are then superimposed to generate the fused image.
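A sketch of the layer-by-layer fusion and collapse, reusing the helper functions from the sketches above (the per-pixel weight normalisation is added here for numerical safety and is an assumption; the text simply multiplies and superimposes):

```python
def fuse_pyramids(images, weight_maps, levels):
    """Multiply each Laplacian layer by the weight layer of the same
    level, sum across frames, then collapse the fused pyramid."""
    lap_pyrs = [laplacian_pyramid(img, levels) for img in images]
    w_pyrs = [gaussian_pyramid(w, levels) for w in weight_maps]

    fused = []
    for lvl in range(levels):
        num = sum(l[lvl] * w[lvl] for l, w in zip(lap_pyrs, w_pyrs))
        den = sum(w[lvl] for w in w_pyrs) + 1e-6
        fused.append(num / den)

    out = fused[-1]                      # start from the coarsest layer
    for lvl in range(levels - 2, -1, -1):
        out = upsample(out, fused[lvl].shape) + fused[lvl]
    return out                           # e.g. Y_halo in the example below
```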
In an embodiment of the present disclosure, determining the first highlight area in the multi-frame exposure fused first image includes: determining the underexposed image fused into the first image. High-brightness areas such as the sky carry relatively high brightness values even in the underexposed image, and these areas must be handled appropriately to avoid the halo phenomenon in the fused image.
In the embodiment of the present disclosure, when determining the second highlight area or the first highlight area, the region determination need not be performed on the entire high-pixel image. The image to be processed, such as the aforementioned underexposed image, may first be reduced in resolution, for example to one tenth, one twentieth, or one hundredth of the original, or to 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/512, or 1/1024. Taking reduction to 1/4 as an example, one of each group of 4 originally adjacent pixels may be selected at random to form the low-resolution image, or the even pixel rows and columns of the image may be removed. After the resolution is reduced, the extent of the highlight area can still be determined; the precision is slightly worse, but the amount of computation is greatly reduced, improving the efficiency with which the embodiment of the present disclosure determines the highlight area.
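For instance, reduction to 1/4 by either of the two routes just mentioned could be sketched like this (illustrative only; the random variant picks one offset for the whole image, a simplification of the per-block random choice):

```python
import numpy as np

def reduce_resolution(img, rng=None):
    """Reduce to 1/4 of the pixels: either remove the even rows and
    columns, or keep one pixel of each 2x2 neighbourhood at random."""
    if rng is None:
        return img[1::2, 1::2]           # drop even rows and columns
    dy, dx = rng.integers(0, 2, size=2)  # random member of the 2x2 block
    return img[dy::2, dx::2]
```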
After the first highlight area is determined, the brightness mean of the first highlight area, i.e., the first brightness mean value, is calculated. An arithmetic mean, a weighted mean, or a similar statistic of the brightness values of all the pixels within the first highlight area may be taken as the first brightness mean value.
S12, adjusting the brightness of the underexposed image in the multi-frame exposure images fused into the first image to the first brightness mean value, and generating a second image.
In the embodiment of the present disclosure, after the brightness mean value of the first highlight area is determined, the brightness of the underexposed image is adjusted toward the first brightness mean value. When adjusting the brightness of the underexposed image, the gradient information of the brightness-adjusted image must remain unchanged. Therefore, in the embodiment of the present disclosure, adjusting the brightness of the underexposed image in the multiple exposure frames fused into the first image to the first brightness mean value includes:
determining the second highlight area in the underexposed image, i.e., the area formed by the pixel points whose weight values exceed the set threshold; calculating the second brightness mean value of the second highlight area and the difference value between the first brightness mean value and the second brightness mean value; and adding the difference value directly to the brightness of the underexposed image as the brightness values of the second image. In this way the brightness of the underexposed image is adjusted while its gradient information is preserved, since adding a constant leaves gradients unchanged.
S13, determining a weight map of the first image based on the first brightness mean value, and performing brightness fusion on the second image and the first image based on the weight map of the first image to generate a third image.
In an embodiment of the present disclosure, the determining a weight map of the first image based on the first luminance mean includes:
determining a weight map of the second image according to the first brightness mean value and the brightness value of the pixel point of the second image;
and determining the weight map of the first image according to the weight map of the second image.
Determining the weight map of the second image according to the first brightness mean value and the brightness values of the pixel points of the second image includes: when the second brightness value of a pixel point of the second image is greater than or equal to the first brightness mean value, the weight of the second brightness value of that pixel point is 1; when the second brightness value is smaller than the first brightness mean value, calculating the ratio of the square of the difference between the second brightness value of the pixel point and the first brightness mean value to a set constant, and setting the weight of the second brightness value of the pixel point to e raised to the negative of that ratio; the weights of the second brightness values of all the pixel points of the second image form the weight map of the second image;
determining the weight map of the first image from the weight map of the second image comprises:
the weight of the first brightness value of a designated pixel point in the weight map of the first image is 1 minus the weight of the second brightness value of the corresponding pixel point in the weight map of the second image.
Correspondingly, performing brightness fusion on the second image and the first image based on the weight map of the first image to generate the third image includes: weighting the first image and the second image according to the weight map of the first image and the weight values of the second image to obtain the third image.
S14, superimposing color channel information onto the third image and outputting the result.
In the embodiment of the present disclosure, after the corresponding color channel information, such as the UV values, is superimposed onto the third image obtained by fusing the first image and the second image, an RGB image is generated, output, and displayed. The technical solution of the embodiment of the present disclosure is applicable to scenes such as still-image and video capture.
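As a sketch of this output step (the variable names are illustrative; the U and V planes are assumed to be carried over from the reference frame and already upsampled to full resolution, since plane layouts and value ranges vary by pipeline):

```python
import cv2
import numpy as np

# y_dehalo: fused luma plane (float); u_plane, v_plane: uint8 chroma planes
yuv = cv2.merge([np.clip(y_dehalo, 0, 255).astype(np.uint8),
                 u_plane,
                 v_plane])
rgb = cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB)   # normal RGB image for display
```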
The essence of the technical solution of the embodiments of the present disclosure is further clarified below through a specific example.
Fig. 2 is a schematic flow chart of an image processing method shown in an embodiment of the present disclosure. As shown in Fig. 2, in this example three input frames are used for image fusion: an underexposed image ev-, a normally exposed image ev0, and an overexposed image ev+. It should be noted that the three-frame case merely illustrates the principle; the same logic applies to image fusion whenever the number of input frames is greater than or equal to 2. Here ev denotes the exposure value of an image, which is determined by the lens aperture and the shutter speed. The image processing method of the embodiment of the present disclosure includes the following steps:
Step 1. Input the image frames ev-, ev0, ev+, whose luminance maps are Y0, Y1, Y2. In the embodiment of the present disclosure, only the luminance information Y of each image is considered; the UV channel color information is not.
Step 2. Construct the Laplacian pyramids pyrY0, pyrY1, pyrY2 of the luminance Y of the images, calculate the original weights W0, W1, W2 of the three input frames according to the weight curve, and construct the weight pyramids pyrW0, pyrW1, pyrW2 based on them. For the specific construction, refer to the description of the foregoing embodiments.
Fig. 3 is a schematic diagram of two-frame image synthesis. As shown in Fig. 3, the leftmost images are the input originals; Laplacian pyramid decomposition is applied to the images to be processed (see the white frame and the region outside it, second from the left); correspondingly, a Gaussian pyramid of the weight maps is constructed (see the white frame and the region outside it, third from the left); the weight pyramids and the Laplacian pyramids are then fused to obtain the result image. In this example, the three-frame case is processed in exactly the same manner as the two-frame case shown in Fig. 3. The fusion multiplies the brightness values of the pixel points in the frames by their corresponding weight values and superimposes the results, generating the fused image.
Step 3. Fuse the information of the 3 frames layer by layer according to the decomposed pyramids to obtain a fused pyramid, then restore the synthesized image by up-sampling and superimposing the Laplacian information layer by layer. The synthesized image may exhibit a halo phenomenon and is denoted Y_halo.
Step 4. The resolution of the original image is typically as high as 12 to 27 Mpixels, and high-brightness regions such as the sky are identified faster at a small resolution. In the embodiment of the present disclosure, the high-resolution image to be processed may therefore be reduced, for example to 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/512, or 1/1024 of its pixels, chosen according to the required processing accuracy and efficiency, and the following operations may be performed on the small-resolution image. According to the pyramid fusion principle, the effective information of high-brightness regions such as the sky generally comes from ev-, i.e., the weights of the ev- sky region are large; by setting a threshold, the regions of ev- whose weight values exceed the threshold are determined to be high-brightness regions Ω such as the sky. In this step, for each pixel point of ev-, if the weight value determined for it during fusion exceeds the threshold, the pixel point is determined to be a highlight pixel point, and the region formed by these highlight pixel points is the highlight region Ω.
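In code, the detection of Ω could look like this (a sketch reusing reduce_resolution from the earlier sketch; W0 is the ev- original weight map from Step 2, and the threshold value is an assumption to be tuned per scene):

```python
threshold = 0.6                      # assumed value, not from the text
W0_small = reduce_resolution(W0)     # ev- weight map at small resolution
omega = W0_small > threshold         # boolean mask of the region Ω
```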
Step 5. Since ev- and the composite map Y_halo are consistent in the sky area, compute the mean values mean_Y0 and mean_halo of ev- and Y_halo over the highlight region Ω, and the difference of the means, mean_diff = mean_halo - mean_Y0. The brightness mean here may be an arithmetic mean, a weighted mean, or similar.
Step 6. Adjust the sky brightness mean of ev- to be consistent with the brightness mean of the Y_halo highlight region while ensuring that the gradient information of the brightened ev- is unchanged; the brightened ev- is denoted Y_lighten = Y0 + mean_diff.
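Steps 5 and 6 then reduce to a few array operations (a sketch continuing the code above; the arithmetic mean is used here, which is one of the options the text allows):

```python
Y0_small = reduce_resolution(Y0)         # ev- luma at the same small size
Yh_small = reduce_resolution(Y_halo)     # composite luma, small size

mean_Y0 = float(Y0_small[omega].mean())  # mean of ev- over Ω
mean_halo = float(Yh_small[omega].mean())
mean_diff = mean_halo - mean_Y0

Y_lighten = Y0 + mean_diff               # constant shift preserves gradients
```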
Step 7. Construct a Gaussian weight curve with mean_halo as the mean: Y_lut(x) = exp(-(x - mean_halo) × (x - mean_halo) / sigma), where x is the brightness value of the current pixel of the image to be synthesized with Y_halo, in this example the brightened ev-; sigma is an empirical value and can be set to 2 × 36 × 36; exp() denotes exponentiation with the constant e as the base.
Step 8. Fuse Y_lighten and Y_halo. For each pixel coordinate of the images to be fused, the corresponding weight values are computed from the brightness values as:
W4[index] = 1, if Y_lighten[index] >= mean_halo
W4[index] = Y_lut(Y_lighten[index]) = exp(-(Y_lighten[index] - mean_halo)^2 / sigma), if Y_lighten[index] < mean_halo
W3[index] = 1 - W4[index]
where index is the pixel coordinate, W4 is the weight of the brightness value of the pixel in Y_lighten, and W3 is the weight of the brightness value of the pixel at the same coordinate in Y_halo.
The two frames are then fused according to these weights to obtain the de-haloed image Y_dehalo:
Y_dehalo[index] = Y_halo[index] × W3[index] + Y_lighten[index] × W4[index], applied where Y_halo[index] > Y_lighten[index]
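Steps 7 and 8 combined, as a sketch (np.where in place of per-pixel branching is an implementation choice; the saturation of W4 at 1 for pixels at or above mean_halo follows the piecewise rule stated in the disclosure):

```python
import numpy as np

sigma = 2.0 * 36.0 * 36.0            # empirical value from Step 7

# Y_lut curve centred on mean_halo, saturated at 1 at and above the mean
W4 = np.where(Y_lighten >= mean_halo,
              1.0,
              np.exp(-(Y_lighten - mean_halo) ** 2 / sigma))
W3 = 1.0 - W4

# blend only where the composite is brighter than the lifted ev-,
# i.e. where a halo can appear; elsewhere keep Y_halo unchanged
Y_dehalo = np.where(Y_halo > Y_lighten,
                    W3 * Y_halo + W4 * Y_lighten,
                    Y_halo)
```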
Step 9. Output the halo-removed image Y_dehalo, and superimpose the color channel information to display a normal RGB image.
According to the embodiments of the present disclosure, the highlight area of the underexposed image used in image fusion is determined, the brightness values of the underexposed image are adjusted, and the brightness-adjusted underexposed image is fused with the fused image again, so that the halo phenomenon of the fused image is greatly reduced and its display effect is improved. The embodiments improve the rendering of captured images and are well suited to the fields of photography and video synthesis, thereby improving the display effect of images and video and the user experience.
Fig. 4 is a schematic diagram of a composition structure of an image processing apparatus according to an embodiment of the present disclosure, and as shown in fig. 4, the image processing apparatus according to the embodiment of the present disclosure includes:
a determination unit 40 for determining a first highlight region in the multi-frame exposure-fused first image; wherein the first image has a halo phenomenon;
a calculating unit 41, configured to calculate a first luminance mean value of the first highlight region;
a first generating unit 42, configured to adjust the brightness of an underexposed image in the multiple frames of exposed images fused into the first image to the first brightness mean value, and generate a second image;
a second generating unit 43, configured to determine a weight map of the first image based on the first luminance mean, perform luminance fusion on the second image and the first image based on the weight map of the first image, and generate a third image;
and an output unit 44, configured to superimpose color channel information onto the third image and output the result.
As an implementation manner, the first generating unit 42 is further configured to:
determining a second highlight area in the underexposed image, and calculating a second brightness mean value of the second highlight area and a difference value of the first brightness mean value and the second brightness mean value;
and adding the difference value to the brightness of the underexposed image to be used as a brightness value of the second image.
As an implementation manner, the second generating unit 43 is further configured to:
determining a weight map of the second image according to the first brightness mean value and the brightness value of the pixel point of the second image;
and determining the weight map of the first image according to the weight map of the second image.
As an implementation manner, the second generating unit 43 is further configured to:
when the second brightness value of a pixel point of the second image is greater than or equal to the first brightness mean value, the weight of the second brightness value of that pixel point is 1; when the second brightness value is smaller than the first brightness mean value, calculating the ratio of the square of the difference between the second brightness value of the pixel point and the first brightness mean value to a set constant, and setting the weight of the second brightness value of the pixel point to e raised to the negative of that ratio; the weights of the second brightness values of all the pixel points of the second image form the weight map of the second image;
determining the weight map of the first image from the weight map of the second image comprises:
the weight of the first brightness value of a designated pixel point in the weight map of the first image is 1 minus the weight of the second brightness value of the corresponding pixel point in the weight map of the second image.
As an implementation manner, the second generating unit 43 is further configured to:
and weighting the first image and the second image according to the weight map of the first image and the weight value of the second image to obtain the third image.
As an implementation manner, the determining unit 40 is further configured to:
determining the brightness weight value of the pixel points of the underexposed image, determining the area formed by the pixel points with the brightness weight value larger than a set threshold value in the underexposed image as the second highlight area, and determining the area corresponding to the second highlight area in the first image as the first highlight area.
As one implementation, on the basis of the image processing apparatus shown in fig. 4, the image processing apparatus of the embodiment of the present disclosure further includes:
a resolution reduction unit (not shown in fig. 4) for performing resolution reduction processing on the under-exposed image.
In an exemplary embodiment, the determination unit 40, the calculation unit 41, the first generation unit 42, the second generation unit 43, the output unit 44, and the like may be implemented by one or more central processing units (CPUs), graphics processing units (GPUs), baseband processors (BPs), application-specific integrated circuits (ASICs), digital signal processors (DSPs), programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontroller units (MCUs), microprocessors, or other electronic elements, for performing the aforementioned image processing method.
With regard to the image processing apparatus in the above embodiment, the specific manner in which each module and unit performs operations has been described in detail in the embodiments related to the method, and will not be described again here.
FIG. 5 is a block diagram illustrating an electronic device 800 according to an example embodiment, where, as shown in FIG. 5, the electronic device 800 supports multiple screen outputs, and the electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, images, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. Power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as Wi-Fi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the steps of the image processing of the above-described embodiments.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the electronic device 800 to perform the steps of the image processing method of the above-described embodiments is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The embodiments of the present disclosure also provide a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device can perform the image processing method of the foregoing embodiments, the method including:
determining a first highlight area in a multi-frame exposure fused first image, and calculating a first brightness mean value of the first highlight area; wherein the first image has a halo phenomenon;
adjusting the brightness of an underexposed image in the multi-frame exposure images fused into the first image to be the first brightness mean value, and generating a second image;
determining a weight map of the first image based on the first brightness mean value, and performing brightness fusion on the second image and the first image based on the weight map of the first image to generate a third image;
and superimposing color channel information onto the third image and outputting the result.
Preferably, the adjusting the brightness of the underexposed image in the multi-frame exposed images fused into the first image to the first brightness mean value to generate the second image includes:
determining a second highlight area in the underexposed image, and calculating a second brightness mean value of the second highlight area and a difference value of the first brightness mean value and the second brightness mean value;
and adding the difference value to the brightness of the underexposed image to be used as a brightness value of the second image.
Preferably, determining a weight map of the first image based on the first luminance mean value includes:
determining a weight map of the second image according to the first brightness mean value and the brightness value of the pixel point of the second image;
and determining the weight map of the first image according to the weight map of the second image.
Preferably, the determining the weight map of the second image according to the first brightness mean value and the brightness values of the pixel points of the second image includes: when the second brightness value of a pixel point of the second image is greater than or equal to the first brightness mean value, the weight of the second brightness value of that pixel point is 1; when the second brightness value is smaller than the first brightness mean value, calculating the ratio of the square of the difference between the second brightness value of the pixel point and the first brightness mean value to a set constant, and setting the weight of the second brightness value of the pixel point to e raised to the negative of that ratio; the weights of the second brightness values of all the pixel points of the second image form the weight map of the second image;
determining the weight map of the first image from the weight map of the second image comprises:
the weight of the first brightness value of a designated pixel point in the weight map of the first image is 1 minus the weight of the second brightness value of the corresponding pixel point in the weight map of the second image.
Preferably, the luminance fusing the second image and the first image based on the weight map of the first image to generate a third image includes:
and weighting the first image and the second image according to the weight map of the first image and the weight value of the second image to obtain the third image.
Preferably, the determining the first highlight region in the multi-frame exposure fused first image includes:
determining the brightness weight value of the pixel points of the underexposed image, determining the area formed by the pixel points with the brightness weight value larger than a set threshold value in the underexposed image as the second highlight area, and determining the area corresponding to the second highlight area in the first image as the first highlight area.
Preferably, before determining the brightness weight value of the pixel point of the underexposed image, the method further includes:
and performing resolution reduction processing on the underexposed image.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (16)

1. An image processing method, characterized in that the method comprises:
determining a first highlight area in a multi-frame exposure fused first image, and calculating a first brightness mean value of the first highlight area; wherein the first image has a halo phenomenon;
adjusting the brightness of an underexposed image in the multi-frame exposure images fused into the first image to be the first brightness mean value, and generating a second image;
determining a weight map of the first image based on the first brightness mean value, and performing brightness fusion on the second image and the first image based on the weight map of the first image to generate a third image;
and superimposing color channel information onto the third image and outputting the result.
2. The method according to claim 1, wherein the adjusting the brightness of the underexposed image in the multi-frame exposure images fused into the first image to the first brightness mean value to generate a second image comprises:
determining a second highlight area in the underexposed image, and calculating a second brightness mean value of the second highlight area and a difference value of the first brightness mean value and the second brightness mean value;
and adding the difference value to the brightness of the underexposed image to be used as a brightness value of the second image.
3. The method of claim 1, wherein determining the weight map for the first image based on the first luminance mean comprises:
determining a weight map of the second image according to the first brightness mean value and the brightness value of the pixel point of the second image;
and determining the weight map of the first image according to the weight map of the second image.
4. The method of claim 3,
determining the weight map of the second image according to the first brightness mean value and the brightness values of the pixel points of the second image includes: when the second brightness value of a pixel point of the second image is greater than or equal to the first brightness mean value, the weight of the second brightness value of that pixel point is 1; when the second brightness value is smaller than the first brightness mean value, calculating the ratio of the square of the difference between the second brightness value of the pixel point and the first brightness mean value to a set constant, and setting the weight of the second brightness value of the pixel point to e raised to the negative of that ratio; the weights of the second brightness values of all the pixel points of the second image form the weight map of the second image;
determining the weight map of the first image from the weight map of the second image comprises:
the weight of the first brightness value of a designated pixel point in the weight map of the first image is 1 minus the weight of the second brightness value of the corresponding pixel point in the weight map of the second image.
5. The method of claim 4, wherein the luma fusing the second image with the first image based on the weight map of the first image to generate a third image comprises:
and weighting the first image and the second image according to the weight map of the first image and the weight value of the second image to obtain the third image.
6. The method of claim 2, wherein determining the first highlight region in the multi-frame exposure-fused first image comprises:
determining the brightness weight value of the pixel points of the underexposed image, determining the area formed by the pixel points with the brightness weight value larger than a set threshold value in the underexposed image as the second highlight area, and determining the area corresponding to the second highlight area in the first image as the first highlight area.
7. The method of claim 6, wherein prior to determining the luminance weight values for the pixels of the under-exposed image, the method further comprises:
and performing resolution reduction processing on the underexposed image.
8. An image processing apparatus, characterized in that the apparatus comprises:
a determining unit configured to determine a first highlight region in the multi-frame exposure-fused first image; wherein the first image has a halo phenomenon;
the calculating unit is used for calculating a first brightness mean value of the first highlight area;
the first generation unit is used for adjusting the brightness of an underexposed image in a plurality of frames of exposed images fused into the first image into the first brightness mean value to generate a second image;
the second generation unit is used for determining a weight map of the first image based on the first brightness mean value, and performing brightness fusion on the second image and the first image based on the weight map of the first image to generate a third image;
and the output unit is used for superimposing color channel information onto the third image and outputting the result.
9. The apparatus of claim 8, wherein the first generating unit is further configured to:
determining a second highlight area in the underexposed image, and calculating a second brightness mean value of the second highlight area and a difference value of the first brightness mean value and the second brightness mean value;
and adding the difference value to the brightness of the underexposed image to be used as a brightness value of the second image.
10. The apparatus of claim 8, wherein the second generating unit is further configured to:
determining a weight map of the second image according to the first brightness mean value and the brightness value of the pixel point of the second image;
and determining the weight map of the first image according to the weight map of the second image.
11. The apparatus of claim 10, wherein the second generating unit is further configured to:
when the second brightness value of a pixel point of the second image is greater than or equal to the first brightness mean value, the weight of the second brightness value of that pixel point is 1; when the second brightness value is smaller than the first brightness mean value, calculating the ratio of the square of the difference between the second brightness value of the pixel point and the first brightness mean value to a set constant, and setting the weight of the second brightness value of the pixel point to e raised to the negative of that ratio; the weights of the second brightness values of all the pixel points of the second image form the weight map of the second image;
determining the weight map of the first image from the weight map of the second image comprises:
the weight of a first brightness value of a designated pixel point in the first image weight map is 1 minus the weight of a second brightness value corresponding to the designated pixel value in the weight map of the second image.
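In formula form: for a pixel brightness L2 and highlight mean m, w2 = 1 when L2 >= m, and w2 = e^(-(L2 - m)^2 / C) otherwise, with the first image's weight w1 = 1 - w2. A sketch with an assumed value for the constant C, which the claim leaves open:

```python
import numpy as np

def weight_maps(second_y: np.ndarray, first_mean: float,
                c: float = 1800.0) -> tuple[np.ndarray, np.ndarray]:
    """Weight maps per claim 11; the constant c is an assumed free parameter."""
    diff_sq = (second_y - first_mean) ** 2
    w_second = np.where(second_y >= first_mean,    # at/above the mean: weight 1
                        1.0,
                        np.exp(-diff_sq / c))      # below: e^(-(L2 - m)^2 / C)
    w_first = 1.0 - w_second                       # first image: the complement
    return w_first, w_second
```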
12. The apparatus of claim 11, wherein the second generating unit is further configured to:
weight the first image and the second image according to the weight map of the first image and the weight values of the second image to obtain the third image.
13. The apparatus of claim 9, wherein the determining unit is further configured to:
determine brightness weight values for the pixels of the underexposed image, determine a region formed by the pixels of the underexposed image whose brightness weight values are greater than a set threshold as the second highlight region, and determine the region of the first image corresponding to the second highlight region as the first highlight region.
14. The apparatus of claim 13, further comprising:
a resolution-reducing unit configured to perform resolution-reduction processing on the underexposed image before the brightness weight values of the pixels of the underexposed image are determined.
15. An electronic device, comprising: a processor and a memory for storing processor-executable instructions, wherein the processor is configured to perform the steps of the image processing method of any one of claims 1 to 7 when the executable instructions in the memory are invoked.
16. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the steps of the image processing method of any one of claims 1 to 7.
CN202110057189.8A 2021-01-15 2021-01-15 Image processing method and device, electronic device and storage medium Pending CN112651899A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110057189.8A CN112651899A (en) 2021-01-15 2021-01-15 Image processing method and device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN112651899A 2021-04-13

Family

ID=75368374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110057189.8A Pending CN112651899A (en) 2021-01-15 2021-01-15 Image processing method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112651899A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8885976B1 * 2013-06-20 2014-11-11 Cyberlink Corp. Systems and methods for performing image fusion
CN106127718A * 2016-06-17 2016-11-16 National University of Defense Technology Multi-exposure image fusion method based on wavelet transform
WO2019072190A1 * 2017-10-12 2019-04-18 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, electronic apparatus, and computer readable storage medium
WO2019183813A1 * 2018-03-27 2019-10-03 Huawei Technologies Co., Ltd. Image capture method and device
CN111418201A * 2018-03-27 2020-07-14 Huawei Technologies Co., Ltd. Shooting method and equipment
CN110087003A * 2019-04-30 2019-08-02 Shenzhen China Star Optoelectronics Technology Co., Ltd. Multi-exposure image fusion method
CN110611750A * 2019-10-31 2019-12-24 Beijing Megvii Technology Co., Ltd. Night scene high dynamic range image generation method and device and electronic equipment
CN111770282A * 2020-06-28 2020-10-13 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, computer readable medium and terminal equipment
CN112150399A * 2020-09-27 2020-12-29 Arm Technology (China) Co., Ltd. Image enhancement method based on wide dynamic range and electronic equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUA SHAO ET AL.: "Halo-Free Multi-Exposure Image Fusion Based on Sparse Representation of Gradient Features", Applied Sciences, vol. 8, no. 9, 3 September 2018 (2018-09-03), pages 1-18 *
TOM MERTENS ET AL.: "Exposure Fusion", 15th Pacific Conference on Computer Graphics and Applications (PG'07), 4 December 2007 (2007-12-04), pages 1-9 *
WANG SHUQING; LI YEWEI: "Multi-exposure image fusion based on brightness consistency", Journal of Hubei University of Technology, no. 01, 15 February 2018 (2018-02-15), pages 61-64 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191994A (en) * 2021-04-26 2021-07-30 Beijing Xiaomi Mobile Software Co., Ltd. Image processing method, device and storage medium
CN113191994B (en) * 2021-04-26 2023-11-21 Beijing Xiaomi Mobile Software Co., Ltd. Image processing method, device and storage medium

Similar Documents

Publication Title
CN110493538B (en) Image processing method, image processing device, storage medium and electronic equipment
CN109345485B (en) Image enhancement method and device, electronic equipment and storage medium
CN109118430B (en) Super-resolution image reconstruction method and device, electronic equipment and storage medium
CN110958401B (en) Super night scene image color correction method and device and electronic equipment
CN112614064B (en) Image processing method, device, electronic equipment and storage medium
CN112634160A (en) Photographing method and device, terminal and storage medium
CN112651918A (en) Image processing method and device, electronic device and storage medium
CN113747067B (en) Photographing method, photographing device, electronic equipment and storage medium
CN113160038B (en) Image style migration method and device, electronic equipment and storage medium
CN110876014B (en) Image processing method and device, electronic device and storage medium
KR102082365B1 (en) Method for image processing and an electronic device thereof
CN113160039B (en) Image style migration method and device, electronic equipment and storage medium
CN113177890B (en) Image processing method and device, electronic equipment and storage medium
CN107730443B (en) Image processing method and device and user equipment
CN112651899A (en) Image processing method and device, electronic device and storage medium
CN113660425A (en) Image processing method and device, electronic equipment and readable storage medium
CN107451972B (en) Image enhancement method, device and computer readable storage medium
CN111275641A (en) Image processing method and device, electronic equipment and storage medium
CN113256785B (en) Image processing method, apparatus, device and medium
CN114240792A (en) Image exposure fusion method and device and storage medium
CN113676674A (en) Image processing method and device, electronic equipment and readable storage medium
CN112950503A (en) Training sample generation method and device and truth value image generation method and device
CN112200745A (en) Method and device for processing remote sensing image, electronic equipment and storage medium
CN113191994B (en) Image processing method, device and storage medium
CN117455782A (en) Image enhancement method, image enhancement device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination