CN111028190A - Image processing method, image processing device, storage medium and electronic equipment - Google Patents

Image processing method, image processing device, storage medium and electronic equipment

Info

Publication number
CN111028190A
CN111028190A
Authority
CN
China
Prior art keywords
image
weight
weight map
region
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911253881.7A
Other languages
Chinese (zh)
Inventor
贾玉虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911253881.7A priority Critical patent/CN111028190A/en
Publication of CN111028190A publication Critical patent/CN111028190A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Abstract

Embodiments of the present application disclose an image processing method, an image processing device, a storage medium, and an electronic device. A first image and a second image of the same shooting scene are acquired; the first image and the second image are each divided into a plurality of regions; a weight value for each region is calculated from the brightness information of that region, yielding a first weight map corresponding to the first image and a second weight map corresponding to the second image, where each weight map contains the weight value of every pixel; and the second image is fused with the first image based on the first weight map and the second weight map. By processing the image region by region, the scheme obtains a local fusion weight for each region, and the fusion operation applies local fusion processing according to each region's local weight, thereby improving the image fusion effect.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
With the continuous development of intelligent terminal technology, electronic devices (such as smartphones and tablet computers) are becoming more and more widespread. Most electronic devices have built-in cameras, and with the growing processing power of mobile terminals and advances in camera technology, users have increasingly high expectations for the quality of captured images.
To capture images with a better effect, image synthesis algorithms such as HDR (High Dynamic Range) synthesis or multi-frame noise reduction are used to improve the quality of the output image. These algorithms rely on an image fusion step that fuses multiple frames of images into the final synthesized image, but conventional image fusion algorithms have a poor fusion effect.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, a storage medium and an electronic device, which can improve the image fusion effect.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a first image and a second image of the same shooting scene;
dividing the first image and the second image into a plurality of regions, respectively;
calculating a weight value corresponding to each region according to the brightness information of each region to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image, wherein the weight maps comprise the weight value corresponding to each pixel point;
and fusing the second image and the first image based on the first weight map and the second weight map.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the image acquisition module is used for acquiring a first image and a second image of the same shooting scene;
an image dividing module for dividing the first image and the second image into a plurality of regions, respectively;
the weight calculation module is used for calculating a weight value corresponding to each region according to the brightness information of each region so as to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image, wherein the weight maps comprise the weight value corresponding to each pixel point;
and the image fusion module is used for fusing the second image and the first image based on the first weight map and the second weight map.
In a third aspect, embodiments of the present application provide a storage medium having a computer program stored thereon, which, when run on a computer, causes the computer to perform an image processing method as provided in any of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory has a computer program, and the processor is configured to execute the image processing method according to any embodiment of the present application by calling the computer program.
According to the scheme provided by the embodiments of the present application, a first image and a second image of the same shooting scene are acquired, and the first image and the second image are each divided into a plurality of regions. For each region in the first image and the second image, a weight value is calculated from the region's brightness information, yielding a first weight map corresponding to the first image and a second weight map corresponding to the second image, where each weight map contains the weight value of every pixel in the image. Finally, the second image and the first image are fused according to the first weight map and the second weight map. By processing the image region by region, the scheme obtains a local fusion weight for each region; during the fusion operation, local fusion processing is performed according to each region's local weight, thereby improving the image fusion effect.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a first image processing method according to an embodiment of the present application.
Fig. 2 is a schematic image partition diagram of an image processing method according to an embodiment of the present application.
Fig. 3 is a weight diagram illustrating an image processing method according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of an image processing circuit of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The embodiment of the present application provides an image processing method, and an execution subject of the image processing method may be the image processing apparatus provided in the embodiment of the present application, or an electronic device integrated with the image processing apparatus, where the image processing apparatus may be implemented in a hardware or software manner. The electronic device may be a smart phone, a tablet computer, a palm computer, a notebook computer, or a desktop computer.
Referring to fig. 1, fig. 1 is a first flowchart illustrating an image processing method according to an embodiment of the present disclosure. The specific flow of the image processing method provided by the embodiment of the application can be as follows:
101. a first image and a second image of the same shooting scene are acquired.
The scheme of the embodiment of the application can be applied to various scenes needing image fusion, such as HDR image shooting, HDR video recording, multi-frame noise reduction processing and the like. The first image and the second image are obtained by shooting the same scene.
For example, the electronic device may receive a first image and a second image, which are transmitted by other terminals and obtained by shooting the same shooting scene. Or, the electronic device starts the camera to shoot a shooting scene in a shooting mode to obtain an image to be processed, and determines a first image and a second image from the image to be processed. The following describes a scheme of an embodiment of the present application in detail, taking HDR image capture as an example.
In some embodiments, the electronic device may continuously expose the shooting scene, and acquire more images than the number of image frames required for HDR synthesis as the images to be processed.
For example, in a shooting mode, the electronic device starts the camera to shoot a shooting scene and obtains multiple frames of images to be processed. One frame with a good shooting effect is determined from these as the reference image and recorded as the first image; the other frames are the images to be fused and are recorded as second images. For example, the frame with the highest definition may be selected as the reference image, or the frame with the best exposure effect. There is only one frame of first image, while there may be one or more frames of second image.
In some embodiments, the multiple frames of images to be processed may have different exposure parameters; in other embodiments, they may have the same exposure parameters. For example, when acquiring the images to be processed, the electronic device determines the exposure parameters for a normal exposure according to the camera's automatic metering system, then adjusts those parameters to increase the degree of exposure before shooting, e.g. adding 1 EV (Exposure Value, a measure of exposure amount), for instance by extending the exposure time. The specific number of second images may be set according to actual needs, which is not limited in this application.
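The 1 EV adjustment above can be made concrete: each +1 EV doubles the exposure, here realized by doubling exposure time with aperture and ISO held fixed. A minimal sketch, with the helper name and the offsets chosen purely for illustration:

```python
def bracketed_exposure_times(base_time_s, ev_offsets):
    """Exposure time for each EV offset relative to the metered normal exposure.

    +1 EV doubles the exposure time (aperture and ISO fixed), as in the
    bracketing step described above; -1 EV halves it.
    """
    return [base_time_s * (2.0 ** ev) for ev in ev_offsets]
```

For example, starting from a metered 1 s exposure, offsets of 0, +1, and +2 EV yield exposure times of 1 s, 2 s, and 4 s.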
After multiple frames of images to be processed of the same shooting scene are obtained, one frame of image to be processed is determined from the images to be processed and serves as a reference image for image fusion, the reference image is recorded as a first image, and other images to be processed except the reference image are recorded as second images. In some embodiments, sharpness detection is performed on multiple frames of images to be processed, for example, sharpness of the images is detected through edge information, gradient information, and the like, and one frame of image with the highest sharpness is used as a reference image.
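The sharpness-based reference selection described above can be sketched with a simple gradient-energy measure. The Laplacian-variance score below is one common stand-in for the "edge information, gradient information" the text mentions; the function names are illustrative, not from the patent:

```python
import numpy as np

def sharpness(gray):
    # Variance of a 4-neighbour Laplacian response: sharper images, with more
    # edge/gradient energy, score higher than blurred ones.
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def pick_reference(frames):
    # Index of the sharpest frame; the remaining frames become second images.
    return int(np.argmax([sharpness(f.astype(np.float64)) for f in frames]))
```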
102. The first image and the second image are divided into a plurality of regions, respectively.
After the first image and the second image are obtained, the first image and the second image are respectively subjected to region division processing, and one frame of image is divided into a plurality of regions. For example, the image may be divided into a plurality of rectangular regions in a regular division manner. For another example, a contour line in the image is recognized, and the image is divided into a plurality of irregular regions according to the contour line.
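The regular rectangular division described above might look like the following sketch, which splits an image into an N × M grid of array slices (the helper name is an assumption, not from the patent):

```python
import numpy as np

def region_slices(h, w, m, n):
    """Divide an h-by-w image into an n (rows) by m (columns) grid.

    Returns, for each grid cell, the (row_slice, col_slice) pair that selects
    that region from the image array. Region boundaries are spread as evenly
    as the image size allows.
    """
    ys = np.linspace(0, h, n + 1, dtype=int)   # row boundaries
    xs = np.linspace(0, w, m + 1, dtype=int)   # column boundaries
    return [[(slice(ys[i], ys[i + 1]), slice(xs[j], xs[j + 1]))
             for j in range(m)] for i in range(n)]
```

Every pixel falls in exactly one region, so region-wise statistics (such as the average brightness used in the next step) cover the whole image without overlap.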
For example, suppose the electronic device acquires six frames of images to be processed, labeled A, B, C, D, E, F. Image A has the best exposure effect, so image A is used as the reference image and the remaining images B, C, D, E, F are used as second images. Taking the division of image A and image B as an example, please refer to fig. 2, which is a schematic diagram of image partitioning in the image processing method of the embodiment of the present application. Image A and image B are each divided into M × N regions, where M and N may be set according to the resolution of the images to be processed, the desired image processing efficiency, and so on.
103. According to the brightness information of each area, calculating a weight value corresponding to each area to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image, wherein the weight maps include the weight value corresponding to each pixel point.
For each region in the first image and the second image, a corresponding weight value is calculated. Taking the first region of the first row in image A as an example, the brightness information of the region is obtained; assuming the region contains a × b pixels in total, the brightness information comprises the brightness value of each of those a × b pixels. The average brightness value of the region is calculated from the brightness values of all the pixels, and the average is then normalized to a value in [0, 1]. Next, the weight w corresponding to the region is calculated according to the following formula:
w = exp(-(x - 0.5)^2 / (2δ^2))
wherein x is the normalized average brightness value of the region, and δ is an adjustable parameter whose value lies in a preset range; for example, in some embodiments δ may range from 0.05 to 0.5. The weight w calculated by this formula lies in [0, 1].
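The text fixes only the inputs x and δ and the [0, 1] range of w. One common weighting consistent with those constraints is a Gaussian centred at mid-brightness (the "well-exposedness" weight familiar from exposure-fusion work); the sketch below assumes that form and a δ of 0.2, both of which are assumptions rather than values fixed by the patent:

```python
import numpy as np

def region_weight(region_luma, delta=0.2):
    """Weight for one region from its per-pixel luminance values (0-255 scale).

    Normalises the region's average brightness to [0, 1], then applies a
    Gaussian centred at 0.5: well-exposed (mid-brightness) regions get weights
    near 1, while very dark or very bright regions get weights near 0.
    """
    x = float(region_luma.mean()) / 255.0          # normalised average brightness
    return float(np.exp(-((x - 0.5) ** 2) / (2.0 * delta ** 2)))
```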
After the weight value of each region is obtained, the weight map of the image is built from the region weights: the weight value of a region is assigned to every pixel in that region, so the weight map of the image contains a weight value for each pixel. Referring to fig. 3, fig. 3 is a schematic view of a weight map in the image processing method of an embodiment of the present disclosure, where W_11, W_12, …, W_1M, W_21, W_22, …, W_2M, …, W_N1, W_N2, …, W_NM are the weight values corresponding to the respective regions of image A.
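Expanding the grid of region weights into the per-pixel weight map of fig. 3 can be sketched as follows (the grid layout and helper name are illustrative):

```python
import numpy as np

def weight_map(weights, h, w):
    """Expand an N-by-M grid of region weights into an h-by-w per-pixel map.

    Every pixel in a region receives that region's weight value, matching the
    description above: the weight map holds one weight per pixel.
    """
    weights = np.asarray(weights, dtype=np.float64)
    n, m = weights.shape
    ys = np.linspace(0, h, n + 1, dtype=int)
    xs = np.linspace(0, w, m + 1, dtype=int)
    out = np.empty((h, w))
    for i in range(n):
        for j in range(m):
            out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = weights[i, j]
    return out
```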
104. And fusing the second image and the first image based on the first weight map and the second weight map.
After the first weight map and the second weight map are obtained, the second image and the first image are fused as follows: for each region, the second image and the first image are fused according to the corresponding weight values. For example, the first region of the first row in image A is fused with the first region of the first row in image B. Assume the first image has weight W_A and pixel value C_A, and the second image (a single frame) has weight W_B and pixel value C_B; then for each pixel, the fused pixel value is W_A·C_A + W_B·C_B. If the second image has multiple frames, the weighted summation follows the same principle.
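The per-pixel weighted sum above, generalized to any number of frames, might be sketched as below. The normalisation by the weight sum is an added safeguard for weight maps that do not sum to 1 per pixel; when they do sum to 1, it reduces to the plain W_A·C_A + W_B·C_B form in the text:

```python
import numpy as np

def fuse(images, weight_maps, eps=1e-8):
    """Per-pixel weighted fusion of K frames.

    out = sum_k(W_k * C_k) / sum_k(W_k), where W_k is the per-pixel weight map
    of frame k and C_k its pixel values; eps guards against division by zero.
    """
    imgs = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    wts = np.stack([np.asarray(w, dtype=np.float64) for w in weight_maps])
    return (wts * imgs).sum(axis=0) / (wts.sum(axis=0) + eps)
```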
Because the image fusion is performed region by region, obvious differences may appear at the boundaries between regions. To avoid this, a de-blocking process is applied after the fusion operation so that the boundaries between regions transition naturally.
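The patent does not fix the de-blocking method. One simple stand-in is to blur the per-pixel weight map before fusing, so weights transition gradually across region borders instead of jumping; the naive box filter below is an assumed illustration of that idea:

```python
import numpy as np

def smooth_weight_map(wm, radius=2):
    """Box-blur a per-pixel weight map to soften region boundaries.

    Each output pixel is the mean of the weights in a (2*radius+1)-sized window
    around it (clipped at the image edges), so the hard steps at region borders
    become gradual ramps.
    """
    h, w = wm.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            y0, y1 = max(0, i - radius), min(h, i + radius + 1)
            x0, x1 = max(0, j - radius), min(w, j + radius + 1)
            out[i, j] = wm[y0:y1, x0:x1].mean()
    return out
```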
In particular implementation, the present application is not limited by the execution sequence of the described steps, and some steps may be performed in other sequences or simultaneously without conflict.
As can be seen from the above, the image processing method provided in the embodiments of the present application acquires a first image and a second image of the same shooting scene and divides each into a plurality of regions. For each region in the first image and the second image, a weight value is calculated according to the brightness information, yielding a first weight map corresponding to the first image and a second weight map corresponding to the second image, where each weight map contains the weight value of every pixel in the image. Finally, the second image and the first image are fused according to the first weight map and the second weight map. By processing the image region by region, the scheme obtains a local fusion weight for each region; during the fusion operation, local fusion processing is performed according to each region's local weight, thereby improving the image fusion effect.
In some embodiments, acquiring the first image and the second image of the same photographic scene may include: acquiring multiple frames of images to be processed of the same shooting scene, wherein the multiple frames of images to be processed have different exposure parameters; determining a first image serving as a reference image from the plurality of frames of images to be processed, and taking other images except the first image as second images; and carrying out image alignment processing on the second image relative to the first image.
In order to avoid ghost images which may occur during superposition and fusion of multiple frames of images, after the first image and the second image are determined, the second image and the first image are aligned according to a preset image alignment algorithm, and the aligned first image and the aligned second image are used as objects for fusion processing in the embodiment of the application, so that the fusion effect can be further improved. Here, the image alignment is mainly based on the alignment of features, for example, feature points in two frames of images are matched, and the second image is aligned with the first image by performing affine transformation, perspective transformation, or other processing on the second image based on the matched feature points.
In some embodiments, before the fusing the second image with the first image based on the first weight map and the second weight map, the method further comprises: determining a first target area with the brightness larger than a preset brightness threshold value from a plurality of first areas of the first image; reducing the weight value corresponding to the first target area; correcting the first weight map based on the reduced weight value of the first target region.
In this embodiment, after the first weight map and the second weight map are obtained through calculation, and before the image is subjected to fusion processing, special regions in the image are identified, and the weights of the special regions are subjected to fine adjustment processing. For example, for a first image as a reference image, it is detected whether the image includes an overexposed region, where the overexposed region is a region whose luminance is greater than a preset luminance threshold, for example, when the luminance value range of the image is 0 to 255, the preset luminance threshold is 235, and when it is detected that the luminance of a region is greater than the value, the region may be determined to be the overexposed region. The average brightness of the pixel points in one region can be used as the brightness corresponding to the region. After the weight value of the first target region is reduced, the first weight map is corrected according to the reduced weight value.
During image fusion, for the same region, the pixel values of the multiple frames are combined by weighting according to their different weight values; the weight values corresponding to overexposed regions are therefore reduced to prevent those regions from having an excessive influence on the final fused image.
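The overexposure correction above might be sketched as follows, using the 0-255 scale and 235 threshold given as examples in the text and the fixed 0.05 step mentioned as an example adjustment amount; both are otherwise tunable:

```python
import numpy as np

def correct_overexposed(weights, region_means, thresh=235.0, step=0.05):
    """Lower the fusion weight of over-exposed regions of the reference image.

    A region whose mean luminance exceeds `thresh` is treated as overexposed
    and has its weight reduced by `step`, clipped so it stays within [0, 1].
    `weights` and `region_means` are matching N-by-M grids.
    """
    weights = np.asarray(weights, dtype=np.float64).copy()
    mask = np.asarray(region_means, dtype=np.float64) > thresh
    weights[mask] = np.clip(weights[mask] - step, 0.0, 1.0)
    return weights
```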
Or, in some embodiments, before the fusing the second image and the first image based on the first weight map and the second weight map, the method further includes: determining a first target area with the brightness larger than a preset brightness threshold value from a plurality of first areas of the first image; reducing the weight value corresponding to the first target area; determining a fifth target area corresponding to the first target area in the second image, and increasing a weight value corresponding to the fifth target area; and correcting the first weight map based on the weight value of the first target area after the weight value is reduced, and correcting the second weight map according to the weight value of the fifth target area after the weight value is increased.
On the basis of the above embodiment, while the weight value corresponding to the overexposed region in the first image is reduced, the weight value of the region corresponding to the overexposed region in the second image is increased, so that the fused image has better brightness distribution.
Or, in some embodiments, before the fusing the second image and the first image based on the first weight map and the second weight map, the method further includes: determining a first target area with the brightness larger than a preset brightness threshold value from a plurality of first areas of the first image and the second image; reducing the weight value corresponding to the first target area; correcting the first and second weight maps based on the reduced weight value of the first target region.
In this embodiment, overexposed regions in both the first image and the second image may be detected, and the weight values of these regions may be reduced.
In the above three weight-correction schemes, the adjustment amount of the first target region's weight value may be proportional to the difference between the brightness of the first target region and the preset brightness threshold. The adjustment amount of the fifth target region's weight value may balance that of the first target region; for example, if the weight value of the first target region is decreased by 0.05, the weight value of the fifth target region may be increased by 0.05.
In some embodiments, before the fusing of the second image with the first image based on the first weight map and the second weight map, the method further includes: determining, according to the second image and the first image, a second target area with motion characteristics among the plurality of first areas of the first image; determining a third target area in the second image corresponding to the second target area; increasing the weight value corresponding to the second target area and decreasing the weight value corresponding to the third target area; and correcting the first weight map based on the increased weight value of the second target area, and correcting the second weight map according to the decreased weight value of the third target area.
Objects in the scene may move during shooting. Although relevant processing has already been performed in the image alignment operation to avoid ghosts in the fused image, motion features in the image are additionally detected here, and the fusion weight values of the regions where they appear are adjusted to further eliminate the influence of the motion features on the image fusion and avoid ghosting in the fused image. During shooting, the movement of the same object places it at different positions in different frames, and the differences this position offset produces in the superimposed images are the motion features.
For example, brightness alignment is performed between a second image and the first image, and the brightness-aligned second image is superimposed on the first image; for each area, image subtraction is applied to the superimposed first and second images to obtain the corresponding image difference; areas whose image difference exceeds a preset difference threshold are taken as second target areas with motion characteristics. The brightness of the first and second images is aligned first so that brightness differences do not distort the subtraction result; the two frames are then superimposed and subtracted. When a region contains no motion feature, the difference between the first and second images in that region is very small, so if the image difference obtained by subtracting the superimposed images exceeds the preset threshold, it can be concluded that a moving object was present in that region during shooting, and the region is taken as a second target region. The third target area corresponding to the second target area is determined in the second image at the same time; the weight value of the second target region is then increased and the weight value of the third target region decreased.
In some embodiments, if the difference between the image difference and the preset difference is too large, the weight value of the second target area may be increased to 1, and the weight value of the third target area may be decreased to 0. I.e. no fusion is performed for regions where motion features are present, only the first image is used.
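The region-wise motion test described above might be sketched as follows, assuming the second image has already been geometrically and brightness aligned with the reference (the alignment steps described earlier); the per-region mean absolute difference stands in for the image subtraction, and the threshold value is an assumed tunable parameter:

```python
import numpy as np

def motion_regions(ref, second, rows, cols, thresh=10.0):
    """Flag grid regions whose frame difference indicates a moving object.

    `ref` and `second` are aligned grayscale frames of the same shape. The
    image is split into a rows-by-cols grid, and each region's mean absolute
    difference is compared against `thresh`; regions above it are flagged as
    second target areas with motion characteristics.
    """
    diff = np.abs(ref.astype(np.float64) - second.astype(np.float64))
    h, w = diff.shape
    ys = np.linspace(0, h, rows + 1, dtype=int)
    xs = np.linspace(0, w, cols + 1, dtype=int)
    return [[bool(diff[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean() > thresh)
             for j in range(cols)] for i in range(rows)]
```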
In some embodiments, before the fusing the second image with the first image based on the first weight map and the second weight map, the method further comprises: identifying a fourth target area containing preset image characteristics in the first image and the second image; adjusting the weight value corresponding to the fourth target area in the first weight map to 1, and adjusting the weight value corresponding to the fourth target area in the second weight map to 0. In this embodiment, the preset image features may be face features, portrait features and/or shooting subject features; the fourth target region may be a face region, a portrait region, or a photographing subject region.
In this embodiment, in order to make some feature regions in the finally output image, for example, a face region, a portrait region, or a shooting subject have a good display effect, region protection may be performed on these regions during fusion, that is, the output images of these regions only use the first image and do not use the second image. Therefore, the object can be achieved by adjusting the weight value corresponding to the fourth target area in the first weight map to 1 and the weight value corresponding to the fourth target area in the second weight map to 0.
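The region protection described here, forcing the reference image's weight to 1 and the second image's weight to 0 inside a protected (e.g. face) region, can be sketched as follows (the boolean-mask interface is an illustrative choice):

```python
import numpy as np

def protect_region(w1, w2, region_mask):
    """Region protection for a fourth target area.

    Inside the protected region (`region_mask` True), the output uses only the
    first image: its weight is forced to 1 and the second image's to 0.
    Weights elsewhere are left unchanged.
    """
    w1 = np.asarray(w1, dtype=np.float64).copy()
    w2 = np.asarray(w2, dtype=np.float64).copy()
    w1[region_mask] = 1.0
    w2[region_mask] = 0.0
    return w1, w2
```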
Based on the image processing method provided by the embodiment, on the basis of the image sub-region fusion, the corresponding weight values of the regions with some characteristics are finely adjusted based on the region characteristics, so that the image fusion effect is further improved.
An image processing apparatus is also provided in an embodiment. Referring to fig. 4, fig. 4 is a schematic structural diagram of an image processing apparatus 300 according to an embodiment of the present disclosure. The image processing apparatus 300 is applied to an electronic device, and the image processing apparatus 300 includes an image obtaining module 301, an image dividing module 302, a weight calculating module 303, and an image fusing module 304, as follows:
an image obtaining module 301, configured to obtain a first image and a second image of the same shooting scene;
an image dividing module 302, configured to divide the first image and the second image into a plurality of regions, respectively;
a weight calculating module 303, configured to calculate a weight value corresponding to each region according to brightness information of each region, so as to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image, where the weight maps include a weight value corresponding to each pixel point;
an image fusion module 304, configured to fuse the second image with the first image based on the first weight map and the second weight map.
In some embodiments, the weight calculation module 303 is further configured to: calculating the average brightness of pixel points in each region according to the brightness information of each region and carrying out normalization processing on the average brightness; and calculating a weight value corresponding to each region according to the average brightness after the corresponding normalization processing so as to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image.
In some embodiments, the weight calculation module 303 is further configured to:
determining a first target area with the brightness larger than a preset brightness threshold value from a plurality of first areas of the first image; reducing the weight value corresponding to the first target area; correcting the first weight map based on the reduced weight value of the first target region.
In some embodiments, the weight calculation module 303 is further configured to:
determining a second target area with motion characteristics in a plurality of first areas of the first image according to the second image and the first image; determining a third target area in the second image corresponding to the second target area; increasing the weight value corresponding to the second target area and decreasing the weight value corresponding to the third target area; and correcting the first weight map based on the increased weight value of the second target area, and correcting the second weight map according to the decreased weight value of the third target area.
In some embodiments, the weight calculation module 303 is further configured to:
performing brightness alignment processing on a second image and the first image, and overlapping the second image after brightness alignment with the first image; for each area, carrying out image subtraction processing on the superposed first image and second image to obtain corresponding image difference; and taking the corresponding area with the image difference degree larger than the preset difference degree as a second target area with motion characteristics.
In some embodiments, the weight calculation module 303 is further configured to: identifying a fourth target area containing preset image characteristics in the first image and the second image; adjusting the weight value corresponding to the fourth target area in the first weight map to 1, and adjusting the weight value corresponding to the fourth target area in the second weight map to 0.
In some embodiments, the preset image features are human face features, human image features and/or shooting subject features; the fourth target area is a face area, a portrait area or a shooting subject area.
In some embodiments, the image acquisition module 301 is further configured to: acquiring multiple frames of images to be processed of the same shooting scene, wherein the multiple frames of images to be processed have different exposure parameters; determining a first image serving as a reference image from the plurality of frames of images to be processed, and taking other images except the first image as second images; and carrying out image alignment processing on the second image relative to the first image.
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
It should be noted that the image processing apparatus provided in the embodiment of the present application and the image processing method in the foregoing embodiment belong to the same concept, and any method provided in the embodiment of the image processing method may be executed on the image processing apparatus, and a specific implementation process thereof is described in detail in the embodiment of the image processing method, and is not described herein again.
As can be seen from the above, the image processing apparatus provided in the embodiment of the present application acquires a first image and a second image of the same shooting scene and divides each of them into a plurality of regions. For each region of the first image and the second image, a weight value is calculated according to the brightness information of the region, so as to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image, where each weight map contains a weight value for every pixel point in the image. Finally, the second image is fused with the first image according to the first weight map and the second weight map. By processing the image region by region, the scheme obtains a local fusion weight for each region; during the fusion operation, local fusion is performed according to these per-region weights, thereby improving the image fusion effect.
The embodiment of the application further provides an electronic device, and the electronic device can be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 400 may include a camera module 401, a memory 402, a processor 403, a touch display 404, a speaker 405, a microphone 406, and the like.
The camera module 401 may include image processing circuitry, which may be implemented using hardware and/or software components, and may include various processing units that define an Image Signal Processing (ISP) pipeline. The image processing circuit may include at least: a camera, an Image Signal Processor (ISP processor), control logic, an image memory, and a display. The camera may comprise one or more lenses and an image sensor. The image sensor may include an array of color filters (e.g., Bayer filters). The image sensor may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the image signal processor.
The image signal processor may process the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the image signal processor may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision. The raw image data can be stored in an image memory after being processed by an image signal processor. The image signal processor may also receive image data from an image memory.
The image Memory may be part of a Memory device, a storage device, or a separate dedicated Memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
When image data is received from the image memory, the image signal processor may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to an image memory for additional processing before being displayed. The image signal processor may also receive processed data from the image memory and perform image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the image signal processor may also be sent to an image memory, and the display may read image data from the image memory. In one embodiment, the image memory may be configured to implement one or more frame buffers.
The statistical data determined by the image signal processor may be sent to the control logic. For example, the statistical data may include statistical information of the image sensor such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, lens shading correction, and the like.
The control logic may include a processor and/or microcontroller that executes one or more routines (e.g., firmware). One or more routines may determine camera control parameters and ISP control parameters based on the received statistics. For example, the control parameters of the camera may include camera flash control parameters, control parameters of the lens (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), etc.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image processing circuit in the present embodiment. For ease of explanation, only aspects of image processing techniques related to embodiments of the present invention are shown.
For example, the image processing circuitry may include: a camera, an image signal processor, control logic, an image memory, and a display. The camera may include one or more lenses and an image sensor. In some embodiments, the camera may be either a telephoto camera or a wide-angle camera.
And the image collected by the camera is transmitted to an image signal processor for processing. After the image signal processor processes the image, statistical data of the image (such as brightness of the image, contrast value of the image, color of the image, etc.) may be sent to the control logic. The control logic device can determine the control parameters of the camera according to the statistical data, so that the camera can carry out operations such as automatic focusing and automatic exposure according to the control parameters. The image can be stored in the image memory after being processed by the image signal processor. The image signal processor may also read the image stored in the image memory for processing. In addition, the image can be directly sent to a display for displaying after being processed by the image signal processor. The display may also read the image in the image memory for display.
In addition, not shown in the figure, the electronic device may further include a CPU and a power supply module. The CPU is connected with the logic controller, the image signal processor, the image memory and the display, and is used for realizing global control. The power supply module is used for supplying power to each module.
The memory 402 stores applications containing executable code. The application programs may constitute various functional modules. The processor 403 executes various functional applications and data processing by running an application program stored in the memory 402.
The processor 403 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing an application program stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the electronic device.
The touch display screen 404 may be used to receive user touch control operations for the electronic device. Speaker 405 may play audio signals. The microphone 406 may be used to pick up sound signals.
In this embodiment, the processor 403 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 403 runs the application programs stored in the memory 402, so as to execute:
acquiring a first image and a second image of the same shooting scene;
dividing the first image and the second image into a plurality of regions, respectively;
calculating a weight value corresponding to each region according to the brightness information of each region to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image, wherein the weight maps comprise the weight value corresponding to each pixel point;
and fusing the second image and the first image based on the first weight map and the second weight map.
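The fusion step above can be sketched as a per-pixel weighted sum over aligned grayscale images. This is an illustrative assumption about how the two weight maps are combined (the patent does not fix a formula); the normalization by the weight sum and all names are choices made here:

```python
def fuse(first_img, second_img, first_w, second_w):
    """Per-pixel weighted fusion of two aligned grayscale images.

    A minimal sketch: once the weight maps carry one weight per pixel,
    fusing the second image with the first reduces to a normalized
    weighted sum. The normalization step is an assumption.
    """
    h, w = len(first_img), len(first_img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            w1, w2 = first_w[y][x], second_w[y][x]
            total = w1 + w2
            if total == 0:  # degenerate case: fall back to the reference image
                out[y][x] = first_img[y][x]
            else:
                out[y][x] = (w1 * first_img[y][x] + w2 * second_img[y][x]) / total
    return out
```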
In some embodiments, when calculating the weight value corresponding to each region according to the brightness information of each region to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image, the processor 403 performs:
calculating the average brightness of pixel points in each region according to the brightness information of each region and carrying out normalization processing on the average brightness;
and calculating a weight value corresponding to each region according to the average brightness after the corresponding normalization processing so as to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image.
In some embodiments, before fusing the second image with the first image based on the first weight map and the second weight map, processor 403 further performs:
determining a first target area with the brightness larger than a preset brightness threshold value from a plurality of first areas of the first image;
reducing the weight value corresponding to the first target area;
correcting the first weight map based on the reduced weight value of the first target region.
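The three steps above (find over-bright first target areas, reduce their weight, correct the first weight map) could be sketched per region as follows; the threshold and reduction factor are illustrative values, not taken from the patent:

```python
def suppress_highlights(region_means, region_weights, threshold=230.0, factor=0.5):
    """Reduce the fusion weight of over-bright regions of the first image.

    region_means: average brightness per region of the first image.
    region_weights: the first weight map's per-region weights.
    threshold / factor are assumed example values.
    """
    corrected = []
    for mean, weight in zip(region_means, region_weights):
        if mean > threshold:                   # first target area: too bright
            corrected.append(weight * factor)  # scale its weight down
        else:
            corrected.append(weight)           # leave other regions unchanged
    return corrected
```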
In some embodiments, before fusing the second image with the first image based on the first weight map and the second weight map, processor 403 further performs: determining a second target area with motion characteristics in a plurality of first areas of the first image according to the second image and the first image; determining a third target area in the second image corresponding to the second target area; increasing the weight value corresponding to the second target area and decreasing the weight value corresponding to the third target area; and correcting the first weight map based on the increased weight value of the second target area, and correcting the second weight map according to the decreased weight value of the third target area.
In some embodiments, when determining the second target area with motion characteristics in the plurality of first areas of the first image according to the second image and the first image, the processor 403 performs:
performing brightness alignment processing on a second image and the first image, and overlapping the second image after brightness alignment with the first image;
for each area, carrying out image subtraction processing on the superposed first image and second image to obtain corresponding image difference;
and taking the corresponding area with the image difference degree larger than the preset difference degree as a second target area with motion characteristics.
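The motion-detection steps above can be sketched as follows. Brightness alignment is modeled here as a single multiplicative gain applied to the second image, and the image difference degree as the per-region mean absolute difference; both modeling choices and all names are assumptions for illustration only:

```python
def find_motion_regions(first_img, second_img, region_boxes, gain=1.0, threshold=10.0):
    """Flag regions whose brightness-aligned difference suggests motion.

    first_img / second_img: 2-D lists of pixel luminances.
    region_boxes: list of (y0, y1, x0, x1) half-open region bounds.
    Returns the indices of second target areas (regions with motion).
    """
    moving = []
    for i, (y0, y1, x0, x1) in enumerate(region_boxes):
        diff_sum, count = 0.0, 0
        for y in range(y0, y1):
            for x in range(x0, x1):
                aligned = second_img[y][x] * gain      # brightness-aligned pixel
                diff_sum += abs(first_img[y][x] - aligned)
                count += 1
        if diff_sum / count > threshold:               # image difference too large
            moving.append(i)                           # second target area
    return moving
```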
In some embodiments, before fusing the second image with the first image based on the first weight map and the second weight map, processor 403 further performs:
identifying a fourth target area containing preset image characteristics in the first image and the second image;
adjusting the weight value corresponding to the fourth target area in the first weight map to 1, and adjusting the weight value corresponding to the fourth target area in the second weight map to 0.
In some embodiments, when acquiring the first image and the second image of the same shooting scene, processor 403 performs:
acquiring multiple frames of images to be processed of the same shooting scene, wherein the multiple frames of images to be processed have different exposure parameters;
determining a first image serving as a reference image from the plurality of frames of images to be processed, and taking other images except the first image as second images;
and carrying out image alignment processing on the second image relative to the first image.
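The reference-selection step above is not specified in detail; one plausible rule, shown as an assumed sketch, is to take the frame whose exposure value is closest to the normal exposure (0 EV) as the first image and treat the remaining frames as second images. All names and the selection rule are assumptions:

```python
def pick_reference(frames_with_ev):
    """Pick the reference (first) image from a burst of bracketed frames.

    frames_with_ev: list of (frame, exposure_value) pairs with different
    exposure parameters. Returns (first_image, list_of_second_images).
    """
    first = min(frames_with_ev, key=lambda fe: abs(fe[1]))  # closest to 0 EV
    seconds = [f for f, _ in frames_with_ev if (f, _) is not first and f is not first[0]]
    return first[0], seconds
```

In a full pipeline each second image would then be geometrically aligned to the reference, e.g. with a feature-based or ECC-style registration step, before the region-wise fusion described above.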
As can be seen from the above, embodiments of the present application provide an electronic device that acquires a first image and a second image of the same shooting scene and divides each of them into a plurality of regions. For each region of the first image and the second image, a weight value is calculated according to the brightness information of the region, so as to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image, where each weight map contains a weight value for every pixel point in the image. Finally, the second image is fused with the first image according to the first weight map and the second weight map. By processing the image region by region, the scheme obtains a local fusion weight for each region; during the fusion operation, local fusion is performed according to these per-region weights, thereby improving the image fusion effect.
An embodiment of the present application further provides a storage medium, where a computer program is stored in the storage medium, and when the computer program runs on a computer, the computer executes the image processing method according to any of the above embodiments.
It should be noted that, all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, which may include, but is not limited to: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Furthermore, the terms "first", "second", and "third", etc. in this application are used to distinguish different objects, and are not used to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules listed, but rather, some embodiments may include other steps or modules not listed or inherent to such process, method, article, or apparatus.
The image processing method, the image processing apparatus, the storage medium, and the electronic device provided in the embodiments of the present application are described in detail above. The principle and the implementation of the present application are explained herein by applying specific examples, and the above description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (11)

1. An image processing method, comprising:
acquiring a first image and a second image of the same shooting scene;
dividing the first image and the second image into a plurality of regions, respectively;
calculating a weight value corresponding to each region according to the brightness information of each region to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image, wherein the weight maps comprise the weight value corresponding to each pixel point;
and fusing the second image and the first image based on the first weight map and the second weight map.
2. The method as claimed in claim 1, wherein the calculating a weight value corresponding to each region according to the luminance information of each region to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image comprises:
calculating the average brightness of pixel points in each region according to the brightness information of each region and carrying out normalization processing on the average brightness;
and calculating a weight value corresponding to each region according to the average brightness after the corresponding normalization processing so as to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image.
3. The image processing method according to claim 1, wherein before the fusing the second image with the first image based on the first weight map and the second weight map, the method further comprises:
determining a first target area with the brightness larger than a preset brightness threshold value from a plurality of first areas of the first image;
reducing the weight value corresponding to the first target area;
correcting the first weight map based on the reduced weight value of the first target region.
4. The image processing method according to claim 1, wherein before the fusing the second image with the first image based on the first weight map and the second weight map, the method further comprises:
determining a second target area with motion characteristics in a plurality of first areas of the first image according to the second image and the first image;
determining a third target area in the second image corresponding to the second target area;
increasing the weight value corresponding to the second target area and decreasing the weight value corresponding to the third target area;
and correcting the first weight map based on the increased weight value of the second target area, and correcting the second weight map according to the decreased weight value of the third target area.
5. The image processing method according to claim 4, wherein the determining a second target area with motion characteristics in a plurality of first areas of the first image according to the second image and the first image comprises:
performing brightness alignment processing on a second image and the first image, and overlapping the second image after brightness alignment with the first image;
for each area, carrying out image subtraction processing on the superposed first image and second image to obtain corresponding image difference;
and taking the corresponding area with the image difference degree larger than the preset difference degree as a second target area with motion characteristics.
6. The image processing method according to claim 1, wherein before the fusing the second image with the first image based on the first weight map and the second weight map, the method further comprises:
identifying a fourth target area containing preset image characteristics in the first image and the second image;
adjusting the weight value corresponding to the fourth target area in the first weight map to 1, and adjusting the weight value corresponding to the fourth target area in the second weight map to 0.
7. The image processing method according to claim 6, wherein the preset image features are face features, portrait features and/or shooting subject features; the fourth target area is a face area, a portrait area or a shooting subject area.
8. The image processing method of any one of claims 1 to 7, wherein the acquiring the first image and the second image of the same photographic scene comprises:
acquiring multiple frames of images to be processed of the same shooting scene, wherein the multiple frames of images to be processed have different exposure parameters;
determining a first image serving as a reference image from the plurality of frames of images to be processed, and taking other images except the first image as second images;
and carrying out image alignment processing on the second image relative to the first image.
9. An image processing apparatus characterized by comprising:
the image acquisition module is used for acquiring a first image and a second image of the same shooting scene;
an image dividing module for dividing the first image and the second image into a plurality of regions, respectively;
the weight calculation module is used for calculating a weight value corresponding to each region according to the brightness information of each region so as to obtain a first weight map corresponding to the first image and a second weight map corresponding to the second image, wherein the weight maps comprise the weight value corresponding to each pixel point;
and the image fusion module is used for fusing the second image and the first image based on the first weight map and the second weight map.
10. A storage medium having stored thereon a computer program, characterized in that, when the computer program runs on a computer, it causes the computer to execute the image processing method according to any one of claims 1 to 8.
11. An electronic device comprising a processor and a memory, the memory storing a computer program, wherein the processor is configured to execute the image processing method according to any one of claims 1 to 8 by calling the computer program.
CN201911253881.7A 2019-12-09 2019-12-09 Image processing method, image processing device, storage medium and electronic equipment Pending CN111028190A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911253881.7A CN111028190A (en) 2019-12-09 2019-12-09 Image processing method, image processing device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN111028190A true CN111028190A (en) 2020-04-17

Family

ID=70208391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911253881.7A Pending CN111028190A (en) 2019-12-09 2019-12-09 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111028190A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563552A (en) * 2020-05-06 2020-08-21 浙江大华技术股份有限公司 Image fusion method and related equipment and device
CN112132769A (en) * 2020-08-04 2020-12-25 绍兴埃瓦科技有限公司 Image fusion method and device and computer equipment
CN112188175A (en) * 2020-08-25 2021-01-05 北京旷视科技有限公司 Photographing apparatus and image processing method
CN112308985A (en) * 2020-11-03 2021-02-02 豪威科技(武汉)有限公司 Vehicle-mounted image splicing method, system and device
CN112995467A (en) * 2021-02-05 2021-06-18 深圳传音控股股份有限公司 Image processing method, mobile terminal and storage medium
CN113077533A (en) * 2021-03-19 2021-07-06 浙江大华技术股份有限公司 Image fusion method and device and computer storage medium
CN113313661A (en) * 2021-05-26 2021-08-27 Oppo广东移动通信有限公司 Image fusion method and device, electronic equipment and computer readable storage medium
CN113610861A (en) * 2021-06-21 2021-11-05 重庆海尔制冷电器有限公司 Method for processing food material image in refrigeration equipment, refrigeration equipment and readable storage medium
WO2021223094A1 (en) * 2020-05-06 2021-11-11 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for reducing noise, and computer usable medium storing software for implementing the method
CN113781370A (en) * 2021-08-19 2021-12-10 北京旷视科技有限公司 Image enhancement method and device and electronic equipment
CN113888452A (en) * 2021-06-23 2022-01-04 荣耀终端有限公司 Image fusion method, electronic device, storage medium, and computer program product
WO2022027878A1 (en) * 2020-08-04 2022-02-10 深圳市精锋医疗科技有限公司 Image processing method for endoscope
CN114554050A (en) * 2022-02-08 2022-05-27 维沃移动通信有限公司 Image processing method, device and equipment
CN114630053A (en) * 2020-12-11 2022-06-14 青岛海信移动通信技术股份有限公司 HDR image display method and display equipment
WO2022151852A1 (en) * 2021-01-18 2022-07-21 Oppo广东移动通信有限公司 Image processing method, apparatus, and system, electronic device, and storage medium
CN116801047A (en) * 2023-08-17 2023-09-22 深圳市艾科维达科技有限公司 Weight normalization-based set top box image processing module and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108174118A (en) * 2018-01-04 2018-06-15 珠海格力电器股份有限公司 Image processing method, device and electronic equipment
CN109741288A (en) * 2019-01-04 2019-05-10 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110022469A (en) * 2019-04-09 2019-07-16 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110136071A (en) * 2018-02-02 2019-08-16 杭州海康威视数字技术股份有限公司 A kind of image processing method, device, electronic equipment and storage medium
CN110213502A (en) * 2019-06-28 2019-09-06 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Chunmeng: "HDR Ghost Removal Algorithm Based on the Proportional Relationship of Photometric Values" *


Similar Documents

Publication Title
CN111028190A (en) Image processing method, image processing device, storage medium and electronic equipment
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN110445988B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110062160B (en) Image processing method and device
CN107948519B (en) Image processing method, device and equipment
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110602467B (en) Image noise reduction method and device, storage medium and electronic equipment
CN108989700B (en) Imaging control method, imaging control device, electronic device, and computer-readable storage medium
CN108335279B (en) Image fusion and HDR imaging
CN108683862B (en) Imaging control method, imaging control device, electronic equipment and computer-readable storage medium
EP3609177B1 (en) Control method, control apparatus, imaging device, and electronic device
RU2562918C2 (en) Shooting device, shooting system and control over shooting device
CN110072052B (en) Image processing method and device based on multi-frame image and electronic equipment
CN110381263B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110445989B (en) Image processing method, image processing device, storage medium and electronic equipment
JP2021530911A (en) Night view photography methods, devices, electronic devices and storage media
CN110191291B (en) Image processing method and device based on multi-frame images
CN110198417A (en) Image processing method, device, storage medium and electronic equipment
WO2015014286A1 (en) Method and apparatus for generating high dynamic range image
CN107509044B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN110266954B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110430370B (en) Image processing method, image processing device, storage medium and electronic equipment
CN106791451B (en) Photographing method of intelligent terminal
CN110717871A (en) Image processing method, image processing device, storage medium and electronic equipment
CN110740266B (en) Image frame selection method and device, storage medium and electronic equipment

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination