CN113992861B - Image processing method and image processing device - Google Patents


Info

Publication number
CN113992861B
CN113992861B (application CN202010731900.9A)
Authority
CN
China
Prior art keywords
image
frame
exposure
target area
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010731900.9A
Other languages
Chinese (zh)
Other versions
CN113992861A (en)
Inventor
许越
赵会斌
陆艳青
王进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rainbow Software Co ltd
Original Assignee
Rainbow Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rainbow Software Co ltd filed Critical Rainbow Software Co ltd
Priority to CN202010731900.9A
Priority to PCT/CN2021/093188 (published as WO2022021999A1)
Publication of CN113992861A
Application granted
Publication of CN113992861B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/741: Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N23/76: Circuitry for compensating brightness variation in the scene by influencing the image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method and an image processing device. The image processing method comprises the following steps: acquiring single-exposure multi-frame images according to a first exposure parameter, wherein the single-exposure multi-frame images share the same exposure parameter and each contains a target area; and synthesizing the single-exposure multi-frame images to obtain a first image. The image processing method solves the technical problems of severe detail loss, ghosting, and insufficient dynamic range in the captured target image.

Description

Image processing method and image processing device
Technical Field
The present invention relates to computer vision, and more particularly, to an image processing method and an image processing apparatus.
Background
In recent years, the shooting performance of electronic devices has improved and mobile-phone photography has become widespread, and with it the demands placed on phone cameras have grown. Limited by the size and quality of portable electronic devices, however, it is difficult for a user to obtain, for example, a picture of a clear and bright moon when shooting it at a long distance; the captured image commonly suffers from blurring, heavy noise, loss of detail, and insufficient dynamic range.
To address these problems, the prior art usually captures the same scene under different exposure settings and fuses the resulting frames into a single wide-dynamic-range image, which preserves image detail at different brightness levels. However, during multi-exposure multi-frame synthesis, failures to align the frames, moving objects in the scene, and similar causes can produce severe detail loss and ghosting in the generated image.
Disclosure of Invention
Embodiments of the invention provide an image processing method and an image processing device that at least solve the technical problems of severe detail loss, ghosting, and insufficient dynamic range in a captured target image.
According to one aspect of the embodiments of the present invention, an image processing method is provided, including: acquiring single-exposure multi-frame images according to a first exposure parameter, wherein the single-exposure multi-frame images share the same exposure parameter and each contains a target area; and synthesizing the single-exposure multi-frame images to obtain a first image.
Optionally, the first exposure parameter is determined by metering the target area in a preview image, or is selected within a preset range.
Optionally, the method further includes: performing image enhancement on the target area of the first image according to a pre-trained enhancement model to obtain a second image containing a clear target area; and fusing the first image and the second image to obtain a third image.
Optionally, determining the first exposure parameter by metering the target area in the preview image includes: performing feature detection in the preview image to obtain the target area; metering the target area to obtain its average brightness; and determining the first exposure parameter from the average brightness of the target area.
Optionally, synthesizing the single-exposure multi-frame images to obtain the first image includes: performing multi-frame fusion on the single-exposure multi-frame images using a multi-frame super-resolution technique to obtain a fourth image; and performing single-frame dynamic-range expansion on the fourth image to obtain the first image.
Optionally, performing multi-frame fusion on the single-exposure multi-frame images using the multi-frame super-resolution technique to obtain the fourth image includes: selecting the sharpest frame from the single-exposure multi-frame images as a reference frame; aligning the remaining frames to the reference frame based on features to obtain aligned multi-frame images; and performing weighted fusion of the aligned images based on spatial-domain information to obtain the fourth image.
Optionally, performing multi-frame fusion on the single-exposure multi-frame images using the multi-frame super-resolution technique to obtain the fourth image may alternatively include:
sorting the single-exposure multi-frame images by sharpness and selecting the first frame as the reference frame;
aligning the second frame to the reference frame based on features, fusing the two with spatial-domain weighting, and updating the fused image to be the new reference frame;
and repeating feature alignment, spatial-domain weighted fusion, and reference-frame updating in turn until the final frame has been aligned and fused, to obtain the fourth image.
Optionally, performing single-frame dynamic-range expansion on the fourth image to obtain the first image includes: constructing a logarithmic-luminance pyramid from the fourth image to obtain the ambient luminance at different scales; combining the ambient luminance at the different scales, reconstructing and mapping the fourth image layer by layer with downsampling according to the logarithmic-luminance pyramid to obtain the logarithmic reflectance of the object surface at each pixel position; and mapping the logarithmic surface reflectance into the image value range using local mean and mean-square-error information to obtain the first image.
Optionally, performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain the second image containing a clear target area includes: training with a constructed sample training set to obtain the pre-trained enhancement model; and performing image enhancement on the target area of the first image according to that model to obtain the second image containing a clear target area.
Optionally, training with the constructed sample training set to obtain the pre-trained enhancement model includes: acquiring a first-quality sample set collected by the electronic device and a second-quality sample set at the same location, where the image quality of the second-quality sample set is higher than that of the first-quality sample set; grouping the first-quality and second-quality sample sets by the device's magnification factors, and aligning the grouped sample sets by features to obtain a first sample set; and sending the first sample set to an AI training engine to obtain the pre-trained enhancement model.
Optionally, the second-quality sample set comprises high-quality image samples collected by high-definition electronic equipment or high-quality image samples stored in a storage device.
Optionally, the grouped sample sets are aligned by features, where the features include features of the image groups in the first-quality and second-quality sample sets, or auxiliary calibration-point features around the image groups.
According to another aspect of the embodiments of the present invention, an image processing apparatus is also provided, including: an acquisition unit, configured to acquire single-exposure multi-frame images according to a first exposure parameter, wherein the single-exposure multi-frame images share the same exposure parameter and each contains the target area; and a first fusion unit, configured to synthesize the single-exposure multi-frame images to obtain a first image.
Optionally, the image processing apparatus further includes a detection unit, configured to determine the first exposure parameter by metering the target area in a preview image or to select the first exposure parameter within a preset range.
Optionally, the image processing apparatus further includes: an enhancement unit, configured to perform image enhancement on the target area of the first image according to a pre-trained enhancement model to obtain a second image containing a clear target area; and a second fusion unit, configured to fuse the first image and the second image to obtain a third image.
Optionally, the detection unit includes: the detection module is used for carrying out feature detection in the preview image to obtain a target area; the light measuring module is used for measuring light of the target area and obtaining average brightness of the target area; and the determining module is used for determining a first exposure parameter according to the average brightness of the target area.
Optionally, the first fusion unit includes: the fusion module is used for carrying out multi-frame fusion on the single-exposure multi-frame images by using a multi-frame image super-resolution technology to obtain a fourth image; and the expansion module is used for carrying out single-frame dynamic range expansion on the fourth image to obtain the first image.
Optionally, the fusion module further includes: a reference sub-module, configured to select the sharpest frame from the single-exposure multi-frame images as the reference frame; an alignment sub-module, configured to align the remaining frames to the reference frame based on features to obtain aligned multi-frame images; and a fusion sub-module, configured to perform weighted fusion of the aligned images based on spatial-domain information to obtain the fourth image.
Optionally, the fusion module may instead include: a first processing sub-module, configured to sort the single-exposure multi-frame images by sharpness and select the first frame as the reference frame; a second processing sub-module, configured to align the second frame to the reference frame based on features, fuse the two with spatial-domain weighting, and update the fused image to be the new reference frame; and a third processing sub-module, configured to repeat feature alignment, spatial-domain weighted fusion, and reference-frame updating in turn until the final frame has been aligned and fused, to obtain the fourth image.
Optionally, the expansion module is configured to construct a logarithmic-luminance pyramid from the fourth image to obtain the ambient luminance at different scales; combine the ambient luminance at the different scales and reconstruct and map the fourth image layer by layer with downsampling according to the logarithmic-luminance pyramid to obtain the logarithmic reflectance of the object surface at each pixel position; and map the logarithmic surface reflectance into the image value range using local mean and mean-square-error information to obtain the first image.
Optionally, the enhancement unit includes: the construction module is used for training by utilizing the constructed sample training set to obtain a pre-trained enhancement model; and the enhancement module is used for carrying out image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain a second image containing a clear target area.
Optionally, the construction module further includes: a fourth processing sub-module, configured to acquire a first-quality sample set collected by the electronic device and a second-quality sample set at the same location, where the image quality of the second-quality sample set is higher than that of the first-quality sample set; a fifth processing sub-module, configured to group the first-quality and second-quality sample sets by the device's magnification factors and align the grouped sample sets by features to obtain a first sample set; and a sixth processing sub-module, configured to send the first sample set to the AI training engine to obtain the pre-trained enhancement model.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein the program performs any one of the image processing methods described above.
According to another aspect of the embodiments of the present invention, there is also provided a processor for running a program; wherein the program executes any one of the image processing methods described above.
In the embodiments of the present invention, the following steps are performed: acquiring single-exposure multi-frame images according to a first exposure parameter, wherein the single-exposure multi-frame images share the same exposure parameter and each contains a target area; and synthesizing the single-exposure multi-frame images to obtain a first image. Multi-frame fusion and image enhancement can then be applied to the single-exposure multi-frame images to optimize image quality and obtain a clearer, brighter image containing the target area, thereby solving the technical problems of severe detail loss, ghosting, and insufficient dynamic range in the captured target image.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a flow chart of an alternative image processing method according to an embodiment of the invention;
FIG. 2 is a flow chart of an alternative single exposure multi-frame synthesis method according to an embodiment of the invention;
FIG. 3 is a flow chart of an alternative image processing method according to an embodiment of the invention;
FIG. 4 is a flow chart of an alternative image enhancement method according to an embodiment of the present invention;
FIG. 5 is a block diagram of an alternative image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the invention; all other embodiments obtained by those skilled in the art based on these embodiments without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description, the claims, and the above figures are used to distinguish similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that terms so used are interchangeable where appropriate, so that the embodiments of the invention described herein can be implemented in orders other than those illustrated or described here. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus.
Embodiments of the invention can be applied to electronic devices having a camera unit; such devices may include smartphones, tablet computers, desktop computers, personal digital assistants (PDAs), portable multimedia players (PMPs), cameras, and the like.
A flowchart of an alternative image processing method according to an embodiment of the present invention is described below. It should be noted that the steps illustrated in the flowcharts may be performed in a computer system, for example as a set of computer-executable instructions, and that although a logical order is shown, in some cases the steps may be performed in an order different from the one illustrated here.
Referring to fig. 1, a flowchart of an alternative image processing method according to an embodiment of the present invention is shown. In this embodiment, shooting the moon is taken as an example. As shown in fig. 1, the image processing method includes the following steps:
S12, acquiring single-exposure multi-frame images according to a first exposure parameter; wherein the single-exposure multi-frame images share the same exposure parameter and each contains the target area;
as an example, after determining the first exposure parameter of the photographed moon image, the electronic device may photograph the moon through its image capturing unit a plurality of times according to the first exposure parameter, and collect a single-exposure multi-frame image, wherein the single-exposure multi-frame image is a moon image having the same exposure parameter. The multi-frame image is not strictly continuous, and can be physically and completely continuously shot, or can be acquired at intervals of several frames, and the total shooting time is controlled within a certain range.
S14, synthesizing the single-exposure multi-frame images to obtain a first image;
in the embodiment of the invention, through the steps, multi-frame fusion and image enhancement processing can be carried out on multi-frame moon images with single exposure so as to optimize the image quality and obtain clearer and brighter moon images, thereby solving the technical problems of serious detail loss, ghost and insufficient dynamic range of the photographed moon images.
In an alternative embodiment, the image processing method may further include step S10 for determining the first exposure parameter; specifically, the first exposure parameter may be determined by metering the target area in the preview image, or selected within a preset range.
In the moon-shooting scenario, determining the first exposure parameter by metering the target area in the preview image includes: performing feature detection in the preview image to obtain the moon region; metering the moon region to obtain its average brightness; and determining the first exposure parameter from the average brightness of the moon region. Alternatively, a person skilled in the art may preset a range of exposure parameters according to the characteristics of the photographed object, the background environment it is in, and similar factors, and select the first exposure parameter within that range.
In addition, in the embodiment of the invention, feature detection is performed on the preview image of the electronic device to obtain the moon region; the device may detect the moon through a preset detection algorithm, and the particular detection algorithm is not limited. Correct selection of the single exposure depends mainly on accurate recognition of the moon region and accurate measurement of its brightness; likewise, the device can meter the moon region in the preview interface through a preset photometry algorithm to obtain its average brightness. A mapping between moon-region brightness and exposure parameters is established in advance, so once the average brightness of the moon region in the preview image has been measured, the device can determine its exposure parameter in the current state, i.e., the first exposure parameter, from that brightness and the pre-established mapping.
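The metering step can be sketched as follows. This is a minimal illustration, not the patent's actual algorithm: the Rec. 601 luma weights, the mid-grey target of 118/255, and the log2 EV mapping are assumptions standing in for the pre-established brightness-to-exposure mapping, and `estimate_exposure`, `preview_rgb`, and `target_mask` are hypothetical names.

```python
import numpy as np

def estimate_exposure(preview_rgb, target_mask, base_ev=0.0):
    """Meter the detected target region of a preview frame and map its
    average luminance to an exposure-value shift (illustrative mapping)."""
    # Rec. 601 luma as a stand-in for the photometry step
    luma = (0.299 * preview_rgb[..., 0]
            + 0.587 * preview_rgb[..., 1]
            + 0.114 * preview_rgb[..., 2])
    avg = float(luma[target_mask].mean())
    # Assumed mapping: aim the region at mid-grey 118/255;
    # each needed doubling of brightness is one EV step up
    return base_ev + float(np.log2(118.0 / max(avg, 1e-6)))
```

A bright region such as the moon against a dark sky yields a negative shift, i.e., underexposure that preserves highlight detail.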
Step S14 is specifically described below. Referring to fig. 2, a flow chart of an alternative single exposure multi-frame synthesis method according to an embodiment of the invention is shown.
In the embodiment of the present invention, step S14, synthesizing the single-exposure multi-frame images to obtain the first image, may include:
S140: performing multi-frame fusion on the single-exposure multi-frame images using a multi-frame super-resolution technique to obtain a fourth image;
S142: performing single-frame dynamic-range expansion on the fourth image to obtain the first image.
Optionally, in an embodiment of the present invention, performing multi-frame fusion on the single-exposure multi-frame images using a multi-frame super-resolution technique to obtain the fourth image includes: selecting the sharpest frame from the single-exposure multi-frame images as the reference frame; aligning the remaining frames to the reference frame based on features to obtain aligned multi-frame images; and performing weighted fusion of the aligned images based on spatial-domain information to obtain the fourth image. Normally, the multi-frame color RGB images a user finally obtains from an electronic device are produced by running a demosaicing algorithm on the Bayer data collected by the sensor; this essentially upsamples undersampled RGB data to full-size RGB data, which inevitably introduces false information, lowering resolution and increasing noise. The multi-frame super-resolution fusion technique instead works directly on the Bayer data: it selects the sharpest of the single-exposure frames as the reference and aligns the remaining frames to it. Because small random displacements exist between frames at capture time, the real channel information of the slightly displaced remaining frames can fill in the channel information missing from the undersampled Bayer data at each position of the reference frame. Weighted fusion of the aligned frames based on spatial-domain information then yields the fourth image, a full-size (or even larger) RGB image, improving the resolution of the fused single-exposure result and providing some denoising as well.
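A rough sketch of the fixed-reference fusion described above, under loud assumptions: it works on single-channel float frames rather than Bayer data, uses Laplacian variance as the sharpness measure, phase correlation (integer shifts only) in place of the patent's feature-based alignment, and a per-pixel Gaussian agreement weight as the spatial-domain fusion weight. All function names are illustrative.

```python
import numpy as np

def laplacian_var(img):
    # variance of a 4-neighbour Laplacian as a sharpness proxy
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return float(lap.var())

def align_to(frame, ref):
    # integer-shift alignment via phase correlation, a stand-in
    # for the patent's feature-based alignment
    num = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.fft.ifft2(num / (np.abs(num) + 1e-9))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    h, w = frame.shape
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return np.roll(frame, (dy, dx), axis=(0, 1))

def fuse_fixed_reference(frames, sigma=10.0):
    """Sharpest frame is the reference; the rest are aligned to it and
    blended with weights that fall off where they disagree with it."""
    idx = int(np.argmax([laplacian_var(f) for f in frames]))
    ref = frames[idx].astype(np.float64)
    acc, wacc = ref.copy(), np.ones_like(ref)
    for i, f in enumerate(frames):
        if i == idx:
            continue
        a = align_to(f.astype(np.float64), ref)
        w = np.exp(-((a - ref) ** 2) / (2.0 * sigma ** 2))
        acc += w * a
        wacc += w
    return acc / wacc
```

The agreement weight suppresses pixels that still differ from the reference after alignment, which is what gives the fusion its mild denoising and ghost-rejection behaviour.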
Optionally, in an embodiment of the present invention, the multi-frame fusion of the single-exposure multi-frame images may adopt a simultaneous align-and-fuse scheme, which specifically includes: sorting the single-exposure multi-frame images by sharpness and selecting the first frame as the reference frame; aligning the second frame to the reference frame based on features, fusing the two with spatial-domain weighting, and updating the fused image to be the new reference frame; and repeating feature alignment, spatial-domain weighted fusion, and reference-frame updating in turn until the final frame has been aligned and fused, yielding the fourth image. During the fusion of each input frame, the fusion weight may be computed against the first frame, or against the updated reference frame. Because this align-while-fusing method does not fix the reference frame, errors from the previous stage can be cleared each time the reference is replaced, preventing errors from propagating and being progressively amplified, and so ensuring processing accuracy.
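The rolling-reference update at the heart of the align-while-fusing variant can be illustrated on its own (alignment omitted, frames assumed pre-registered); the sharpness ordering and per-pixel agreement weighting are the same assumptions as before, and the names are hypothetical.

```python
import numpy as np

def laplacian_var(img):
    # variance of a 4-neighbour Laplacian as a sharpness proxy
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return float(lap.var())

def fuse_progressive(frames, sigma=10.0):
    """Sort by sharpness, then fold each next frame into a rolling
    reference: weight by per-pixel agreement, fuse, and promote the
    fused result to be the new reference, so earlier-stage errors are
    diluted rather than carried forward against a fixed frame."""
    ordered = sorted(frames, key=laplacian_var, reverse=True)
    ref = ordered[0].astype(np.float64)
    for f in ordered[1:]:
        f = f.astype(np.float64)
        w = np.exp(-((f - ref) ** 2) / (2.0 * sigma ** 2))
        ref = (ref + w * f) / (1.0 + w)  # fused image becomes the reference
    return ref
```

Pixels that disagree strongly with the current reference receive near-zero weight, so an outlier frame barely perturbs the running result.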
Optionally, in an embodiment of the present invention, performing single-frame dynamic-range expansion on the fourth image to obtain the first image includes: constructing a logarithmic-luminance pyramid from the fourth image to obtain the ambient luminance at different scales; combining the ambient luminance at the different scales, reconstructing and mapping the fourth image layer by layer with downsampling according to the logarithmic-luminance pyramid to obtain the logarithmic reflectance of the object surface at each pixel position; and mapping the logarithmic surface reflectance into the image value range using local mean and mean-square-error information to obtain the first image. According to retinex (retina-and-cortex) theory, the image the human eye perceives derives from the product of the ambient illumination and the reflectance of object surfaces; removing the illumination component while keeping the reflectance component recovers the original information of the object and thus enhances the image. Working in the logarithmic domain turns that product into a sum, so the ambient-illumination component can be subtracted out cleanly while the object-reflectance component is retained, achieving the visual-enhancement effect.
A logarithmic-luminance pyramid is constructed from the multi-frame-fused fourth image, and the ambient luminance is approximated by convolving the fourth image with Gaussian kernels at different scales, yielding the ambient luminance at each scale. On the premise of removing the illumination component and keeping the reflectance component to restore the object's original information, the fourth image is reconstructed and mapped layer by layer with downsampling according to the pyramid, combined with the multi-scale ambient luminance, to obtain the logarithmic reflectance of the object surface at each pixel position. The logarithmic reflectance is then mapped from the log domain back to the normal image value range using local mean and mean-square-error information, giving the final brightness-adjusted image, i.e., the first image. In another embodiment, for an extremely dark scene, before the single-frame dynamic expansion the fourth image is first linearly brightened to obtain a fifth image; the fourth and fifth images are fused with a Laplacian pyramid, and the result then undergoes the single-frame dynamic-range expansion. Since the fourth and fifth images participating in the fusion are both derived from the same frame, no alignment is needed and no ghosting can occur. The method can therefore brighten the dark parts of the fused single frame in normal or extremely dark scenes, improve local contrast, and adjust saturation, while avoiding the ghosting or synthesis anomalies that multi-frame dynamic-range expansion might cause.
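The single-frame expansion can be sketched as a multi-scale retinex in the log domain. This is an approximation of the description above: a box blur stands in for the Gaussian kernels, a fixed set of scales stands in for the pyramid levels, and global mean/standard-deviation statistics stand in for the patent's local mean and mean-square-error mapping; `expand_dynamic_range` is a hypothetical name.

```python
import numpy as np

def box_blur(img, k):
    # crude blur standing in for the Gaussian kernel that approximates
    # the ambient luminance at one pyramid scale
    out = img.astype(np.float64)
    for axis in (0, 1):
        out = sum(np.roll(out, d, axis=axis)
                  for d in range(-k, k + 1)) / (2 * k + 1)
    return out

def expand_dynamic_range(img, scales=(2, 4, 8)):
    """Estimate ambient luminance at several scales, subtract it in the
    log domain to get a log-reflectance map, then remap to [0, 255]
    with mean/std statistics (global here, local in the patent)."""
    log_img = np.log1p(img.astype(np.float64))
    refl = np.zeros_like(log_img)
    for s in scales:
        # log(image) - log(ambient): the product becomes a difference
        refl += (log_img - np.log1p(box_blur(img, s))) / len(scales)
    mu, sd = refl.mean(), refl.std() + 1e-9
    # map roughly +/- 3 sigma of log-reflectance onto the full 8-bit range
    return np.clip(((refl - mu) / (3.0 * sd) + 0.5) * 255.0, 0.0, 255.0)
```

The subtraction in the log domain is exactly the product-to-sum conversion described above; the final mean/std remapping plays the role of the local-statistics mapping back to the image value interval.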
Referring to fig. 3, a flowchart of an alternative image processing method according to an embodiment of the present invention is shown. For the image obtained by fusing the single-exposure multi-frame images, more detail of the target can be recovered by the following method; compared with fig. 1, the image processing method further includes the steps of:
S16, performing image enhancement on the target area of the first image according to a pre-trained enhancement model to obtain a second image containing a clear target area;
S18, fusing the first image and the second image to obtain a third image.
Step S16 is specifically described below. Referring to fig. 4, a flowchart of an alternative image enhancement method according to an embodiment of the present invention is shown.
In the embodiment of the present invention, step S16, that is, performing image enhancement on the target area of the first image according to the pre-trained enhancement model, obtains a second image including a clear target area, includes:
S162, training with the constructed sample training set to obtain a pre-trained enhancement model;
S164, performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain a second image containing a clear target area.
Optionally, in an embodiment of the present invention, obtaining the pre-trained enhancement model by training with the constructed sample training set includes: acquiring a first quality sample set collected by an electronic device, and acquiring a second quality sample set at the same position, wherein the image quality of the second quality sample set is higher than that of the first quality sample set; grouping the first quality sample set and the second quality sample set according to the different magnification factors of the device, and aligning the grouped sample sets according to features to obtain a first sample set; and sending the first sample set to an AI training engine to obtain the pre-trained enhancement model.
Optionally, in an embodiment of the present invention, acquiring the second quality sample set at the same position includes: high-quality image samples collected by a high-definition electronic device, or high-quality image samples stored in a storage device.
Optionally, in an embodiment of the present invention, the grouped sample sets are aligned according to features, where the features include the color, brightness, and angle of the image groups in the first quality sample set and the second quality sample set, or auxiliary calibration-point features around the image groups.
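The pairing-and-grouping step above can be sketched as follows. This is a hypothetical illustration only — the sample dictionaries and their `position`, `zoom`, and `image` keys are assumptions for the sketch, not the patent's data format.

```python
from collections import defaultdict

def build_training_pairs(first_quality, second_quality):
    """Pair low- and high-quality samples captured at the same position and
    magnification, then group the aligned (low, high) pairs by magnification
    factor, as done before sending the set to the training engine."""
    by_key = defaultdict(dict)
    for s in first_quality:
        by_key[(s['position'], s['zoom'])]['low'] = s['image']
    for s in second_quality:
        by_key[(s['position'], s['zoom'])]['high'] = s['image']
    groups = defaultdict(list)
    for (pos, zoom), pair in by_key.items():
        if 'low' in pair and 'high' in pair:   # keep only complete pairs
            groups[zoom].append((pair['low'], pair['high']))
    return dict(groups)
```

A feature-based alignment step (color, brightness, angle, or calibration points, as the text describes) would normally run on each pair before training; it is omitted here for brevity.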
Referring to fig. 5, a block diagram of an alternative image processing apparatus according to an embodiment of the present invention is shown. As shown in fig. 5, the image processing apparatus 4 includes:
an acquiring unit 42, configured to acquire single-exposure multi-frame images according to the first exposure parameter; wherein the single-exposure multi-frame images are images captured with the same exposure parameters, each containing the target area;
the first fusion unit 44 is configured to perform a synthesis process on the single-exposure multi-frame image to obtain a first image.
The image processing apparatus 4 may further include:
an enhancement unit 46, configured to perform image enhancement on the first image according to the pre-trained enhancement model, so as to obtain a second image containing a clear target area;
And a second fusion unit 48, configured to perform fusion processing on the first image and the second image, and obtain a third image.
In an alternative embodiment, the image processing apparatus 4 may further comprise a detection unit 40 for determining the first exposure parameter, in particular, the first exposure parameter may be determined by photometry of the target area in the preview image or may be selected within a preset range.
Alternatively, in an embodiment of the present invention, the detection unit 40 may include: a detection module 400, configured to perform feature detection in the preview image to obtain the target area; a photometry module 402, configured to meter the target area to obtain its average brightness; and a determining module 404, configured to determine the first exposure parameter according to the average brightness of the target area. Alternatively, a person skilled in the art may preset a range of exposure parameters according to factors such as the characteristics of the photographed object and the background environment in which the object is located, and select the first exposure parameter within that range.
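A minimal sketch of how metering the target area might yield a first exposure parameter follows. The mid-gray target level, base exposure time, and clamp range are illustrative assumptions, not values from the patent.

```python
def first_exposure_from_metering(preview, target_box, base_exposure=1 / 100,
                                 target_level=118.0):
    """Meter the target area of the preview image (average brightness) and
    scale a base exposure time so the target lands near a mid-gray level.
    `target_box` is (top, left, height, width); `preview` is a 2-D list of
    8-bit luminance values."""
    t, l, h, w = target_box
    region = [row[l:l + w] for row in preview[t:t + h]]
    avg = sum(sum(row) for row in region) / (h * w)
    ratio = target_level / max(avg, 1.0)       # brighten dark targets, darken bright ones
    exposure = base_exposure * ratio
    return min(max(exposure, 1 / 8000), 1 / 4)  # clamp to a plausible shutter range
```

For a bright target such as the moon, the measured average is high, so the returned exposure is short, which is how target-area metering avoids overexposing the subject.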
Alternatively, in an embodiment of the present invention, the first fusing unit 44 may include:
the fusion module 442 is configured to perform multi-frame fusion on the single-exposure multi-frame image by using a multi-frame image super-resolution technology, so as to obtain a fourth image;
And the expansion module 444 is configured to perform single-frame dynamic range expansion on the fourth image to obtain the first image.
Alternatively, in one embodiment of the present invention, the fusion module 442 includes: a reference sub-module 4422, configured to select, from the single-exposure multi-frame images, the frame with the highest definition as a reference frame; an alignment sub-module 4424, configured to align the remaining frames of the single-exposure multi-frame images to the reference frame based on features, obtaining aligned multi-frame images; and a fusion sub-module 4426, configured to perform weighted fusion on the aligned multi-frame images based on spatial-domain information to obtain a fourth image. In general, the multi-frame color RGB images a user finally obtains from an electronic device are produced by applying a demosaicing algorithm to the Bayer data collected by the sensor; demosaicing essentially upsamples undersampled color data to full-size RGB data, which inevitably introduces false information, resulting in lower resolution and higher noise. The multi-frame super-resolution fusion technique instead operates on the Bayer data directly: the frame with the highest definition is selected from the single-exposure multi-frame images as the reference frame, and the remaining frames are aligned to it. Because random displacements exist between frames when the single-exposure multi-frame images are acquired, the channel information missing from the undersampled Bayer data at each position of the reference frame can be supplemented by the real channel information of the remaining, slightly displaced frames. The aligned multi-frame images are then weighted and fused based on spatial-domain information to obtain the fourth image, a full-size (or even larger) RGB image, so that the resolution of single-exposure multi-frame fusion is improved and a certain denoising effect is achieved.
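The fixed-reference fusion path can be sketched as follows. This is a simplified grayscale illustration using translation-only phase-correlation alignment — the patent works on Bayer data with feature-based alignment, so the alignment method, the similarity weight, and the function names here are assumptions.

```python
import numpy as np

def sharpness(img):
    """Variance of a Laplacian response as a simple definition (sharpness) score."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def align_shift(ref, frame):
    """Estimate an integer (dy, dx) translation by FFT phase correlation."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy if dy <= h // 2 else dy - h, dx if dx <= w // 2 else dx - w)

def fuse_fixed_reference(frames):
    """Pick the sharpest frame as the reference, align the remaining frames
    to it, and fuse with weights that favor pixels agreeing with the reference."""
    frames = [f.astype(np.float64) for f in frames]
    ref = max(frames, key=sharpness)
    acc, wsum = ref.copy(), np.ones_like(ref)
    for f in frames:
        if f is ref:
            continue
        dy, dx = align_shift(ref, f)
        aligned = np.roll(f, (dy, dx), axis=(0, 1))
        w = 1.0 / (1.0 + np.abs(aligned - ref))  # down-weight mismatched pixels
        acc += w * aligned
        wsum += w
    return acc / wsum
```

The small random inter-frame displacements the text relies on are exactly what `align_shift` estimates here; in the Bayer-domain method those displacements supply the missing color-channel samples rather than just being compensated away.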
Optionally, in one embodiment of the present invention, the fusion module 442 includes: a first processing sub-module 4423, configured to sort the single-exposure multi-frame images by definition and select the first frame as the reference frame; a second processing sub-module 4425, configured to fuse the reference frame into the second frame based on feature alignment and spatial-domain information weighting, and update the fused image to be the new reference frame; and a third processing sub-module 4427, configured to repeat the feature alignment, spatial-domain weighted fusion, and reference-frame update in sequence until the final frame has completed feature alignment and weighted fusion, obtaining the fourth image. During the fusion of each input frame of the single-exposure multi-frame images, the fusion weight may be computed from each input frame and the first frame, or from each input frame and the updated reference frame. Because this align-and-fuse approach does not fix the reference frame, the error of a previous stage can be cleared as the reference frame is updated, preventing errors from being propagated and gradually amplified and thereby ensuring processing accuracy.
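The progressive reference-update variant can be sketched as follows. The sharpness measure, the agreement weight, and the running-average update rule are illustrative assumptions chosen for the sketch.

```python
import numpy as np

def fuse_progressive(frames):
    """Sort frames by sharpness, then fold each subsequent frame into an
    evolving reference. Because the reference is replaced after every fusion
    step, an error introduced at one step is attenuated by later updates
    instead of being propagated and amplified."""
    def sharpness(img):
        gy, gx = np.gradient(img)
        return (gy ** 2 + gx ** 2).mean()
    ordered = sorted((f.astype(np.float64) for f in frames),
                     key=sharpness, reverse=True)
    ref = ordered[0]
    for k, frame in enumerate(ordered[1:], start=2):
        w = 1.0 / (1.0 + np.abs(frame - ref))      # agreement-based fusion weight
        blended = w * frame + (1.0 - w) * ref      # pull frame toward the reference
        ref = ((k - 1) * ref + blended) / k        # updated reference replaces the old one
    return ref
```

Computing `w` against the updated `ref` (rather than always against the first frame) is the "non-fixed reference" choice the text describes; the alternative weight against the first frame would be a one-line change.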
Optionally, in an embodiment of the present invention, the expansion module 444 is further configured to: construct a logarithmic brightness pyramid from the fourth image to obtain the ambient brightness at different scales; combine the ambient brightness at the different scales, reconstructing and mapping the fourth image by layer-by-layer downsampling along the logarithmic brightness pyramid to obtain the logarithmic reflectance of the object surface at each pixel position; and map the logarithmic reflectance of the object surface to the image value range using local mean and mean-square-error information to obtain the first image. According to Retinex (retina-and-cortex) theory, the image perceived by the human eye is the product of the ambient illumination and the reflectance of the object surface; completely removing the ambient illumination component from the image while keeping the reflection component recovers the original information of the object, achieving the image enhancement effect. In the logarithmic domain this product relation becomes an additive one, so once the ambient illumination component is estimated, subtracting it entirely from the log-image leaves the object's reflection component, achieving the visual enhancement effect.
A logarithmic brightness pyramid is constructed from the input fourth image after multi-frame fusion, and the ambient brightness is approximated by convolving the fourth image with Gaussian kernels at different scales, thereby obtaining the ambient brightness at each scale. Because completely removing the ambient illumination component from the image leaves the reflection component, which restores the original information of the object, the fourth image is reconstructed and mapped by layer-by-layer downsampling along the logarithmic brightness pyramid, combining the ambient brightness at the different scales, to obtain the logarithmic reflectance of the object surface at each pixel position. The logarithmic reflectance of the object surface is then mapped from the logarithmic domain back to the normal image value range using local mean and mean-square-error information, giving the final brightness-adjusted image, that is, the first image. As another example, in another embodiment, for an extremely dark scene, before the single-frame dynamic range expansion is applied to the fourth image, the fourth image is first linearly brightened to obtain a fifth image, the fourth image and the fifth image are fused with a Laplacian pyramid, and the result then undergoes single-frame dynamic range expansion. Since the fourth image and the fifth image participating in the fusion are both derived from the same frame, no alignment is required and no ghosting exists. The method can therefore brighten the dark regions of the fused single frame, improve local contrast, and adjust saturation in normal scenes or extremely dark environments, while avoiding the ghosting or synthesis anomalies that multi-frame dynamic range expansion may cause.
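The extremely-dark-scene path (linear brightening plus Laplacian pyramid fusion of the same frame) can be sketched as follows. The gain, the pyramid depth, the box/nearest resampling, and the keep-stronger-detail rule are simplifying assumptions for illustration.

```python
import numpy as np

def downsample(img):
    """2x box downsample (stand-in for the usual Gaussian reduce)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """2x nearest-neighbour upsample (stand-in for the usual expand)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr, cur = [], img
    for _ in range(levels - 1):
        small = downsample(cur)
        pyr.append(cur - upsample(small))    # band-pass detail at this level
        cur = small
    pyr.append(cur)                          # low-frequency residual
    return pyr

def fuse_dark_scene(fourth, gain=2.0, levels=3):
    """Linearly brighten the fourth image into a fifth image, then fuse the
    two Laplacian pyramids. Both inputs come from the same frame, so no
    alignment is needed and no ghosting can occur."""
    fourth = fourth.astype(np.float64)
    fifth = np.clip(fourth * gain, 0, 255)   # linear brightening
    pa = laplacian_pyramid(fourth, levels)
    pb = laplacian_pyramid(fifth, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)   # keep stronger detail
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2.0)             # average the base levels
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = upsample(out) + lvl
    return np.clip(out, 0, 255)
```

The single-frame dynamic range expansion described earlier would then be applied to this fused result.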
Alternatively, in an embodiment of the present invention, the enhancing unit 46 may include:
a constructing module 462 for training with the constructed sample training set to obtain a pre-trained enhancement model;
and an enhancement module 464 for performing image enhancement on the first image according to the pre-trained enhancement model to obtain a second image containing a clear target region.
Alternatively, in an embodiment of the present invention, the constructing module 462 may be configured to: acquire a first quality sample set collected by an electronic device, and acquire a second quality sample set at the same position, wherein the image quality of the second quality sample set is higher than that of the first quality sample set; group the first quality sample set and the second quality sample set according to the different magnification factors of the device, and align the grouped sample sets according to features to obtain a first sample set; and send the first sample set to an AI training engine to obtain the pre-trained enhancement model.
Optionally, in an embodiment of the present invention, acquiring the second quality sample set at the same position includes: high-quality image samples collected by a high-definition electronic device, or high-quality image samples stored in a storage device.
Optionally, in an embodiment of the present invention, the grouped sample sets are aligned according to features, where the features include the color, brightness, and angle of the image groups in the first quality sample set and the second quality sample set, or auxiliary calibration-point features around the image groups.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein the program performs any one of the image processing methods described above.
According to another aspect of the embodiments of the present invention, there is also provided a processor for running a program; wherein the program executes any one of the image processing methods described above.
It should be noted that the image processing method and the image processing device are not limited to shooting the moon; they are also suitable for photographing distant objects that contrast clearly with their background, such as the sun, lighthouses, stars, and illumination lamps on buildings, and can solve the technical problems of serious detail loss, ghosting, and insufficient dynamic range in captured images.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology content may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage media include: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present invention and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present invention, which are intended to be comprehended within the scope of the present invention.

Claims (20)

1. An image processing method for an electronic device having an image capturing unit, the method comprising:
acquiring a single-exposure multi-frame image according to a first exposure parameter, wherein the single-exposure multi-frame image is an image with the same exposure parameter and containing a target area;
synthesizing the single-exposure multi-frame image to obtain a first image;
the synthesizing processing is performed on the single-exposure multi-frame image to obtain a first image, including:
performing multi-frame fusion on the single-exposure multi-frame image by using a multi-frame image super-resolution technology to obtain a fourth image;
performing single-frame dynamic range expansion on the fourth image to obtain the first image;
the performing single-frame dynamic range expansion on the fourth image, and obtaining the first image includes: constructing a logarithmic brightness pyramid according to the fourth image to obtain the ambient light brightness of different scales; combining the ambient light brightness of different scales, carrying out layer-by-layer downsampling reconstruction and mapping on the fourth image according to the logarithmic brightness pyramid, and obtaining the logarithmic reflection quantity of the object surface at each pixel position; and mapping the object surface logarithmic reflection quantity to an image numerical value interval by utilizing the local mean value and the mean square error information to obtain the first image.
2. The method according to claim 1, characterized in that the first exposure parameter is determined by photometry of the target area in the preview image or is selected within a preset range.
3. The method according to claim 1, wherein the method further comprises:
performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain a second image containing a clear target area;
and carrying out fusion processing on the first image and the second image to obtain a third image.
4. The method of claim 2, wherein determining the first exposure parameter by metering light to the target area in the preview image comprises:
performing feature detection in the preview image to obtain the target area;
performing photometry on the target area to obtain average brightness of the target area;
and determining the first exposure parameter according to the average brightness of the target area.
5. The method of claim 1, wherein the multi-frame fusion of the single-exposure multi-frame images using multi-frame image super-resolution techniques to obtain a fourth image comprises:
selecting a frame with the highest definition from the single-exposure multi-frame images as a reference frame;
the rest frames of the single-exposure multi-frame image are aligned to the reference frame based on the characteristics, and an aligned multi-frame image is obtained;
and carrying out weighted fusion on the aligned multi-frame images based on spatial information to obtain the fourth image.
6. The method of claim 1, wherein the multi-frame fusion of the single-exposure multi-frame images using multi-frame image super-resolution techniques to obtain a fourth image, further comprising:
sequencing the single-exposure multi-frame images according to definition, and selecting a first frame image as a reference frame;
the reference frame is weighted and fused to a second frame image based on feature alignment and spatial information, and the fused image is updated to be the reference frame;
and sequentially executing the processes of the feature alignment, the spatial information weighted fusion and the updating of the reference frame until the final frame of image completes the feature alignment and the spatial information weighted fusion, and obtaining the fourth image.
7. A method according to claim 3, wherein said image enhancing the target area of the first image according to the pre-trained enhancement model to obtain a second image comprising a clear target area comprises: training by using the constructed sample training set to obtain the pre-trained enhancement model; and carrying out image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain the second image containing the clear target area.
8. The method of claim 7, wherein training with the constructed sample training set to obtain the pre-trained enhancement model comprises:
acquiring a first quality sample set acquired by electronic equipment, and acquiring a second quality sample set at the same position, wherein the image quality of the second quality sample set is higher than that of the first quality sample set;
grouping the first quality sample set and the second quality sample set according to different amplification factors of the electronic equipment, and aligning the grouped sample sets according to characteristics to obtain a first sample set;
and sending the first sample set to an AI training engine to obtain the pre-trained enhancement model.
9. The method of claim 8, wherein the second set of quality samples comprises: high quality image samples collected by high definition electronics or high quality image samples stored in a storage device.
10. The method of claim 8, wherein the aligning the grouped sample sets is in accordance with a feature, wherein the feature comprises: the features of the image set or the auxiliary calibration point features around the image set in the first and second quality sample sets.
11. An image processing apparatus, comprising:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring a single-exposure multi-frame image according to a first exposure parameter, wherein the single-exposure multi-frame image is an image with the same exposure parameter and containing a target area;
the first fusion unit is used for synthesizing the single-exposure multi-frame images to obtain a first image;
wherein the first fusion unit comprises:
the fusion module is used for carrying out multi-frame fusion on the single-exposure multi-frame images by using a multi-frame image super-resolution technology to obtain a fourth image;
the expansion module is used for carrying out single-frame dynamic range expansion on the fourth image to obtain the first image;
wherein performing single-frame dynamic range expansion on the fourth image to obtain the first image includes: constructing a logarithmic brightness pyramid according to the fourth image to obtain the ambient light brightness of different scales; combining the ambient light brightness of different scales, carrying out layer-by-layer downsampling reconstruction and mapping on the fourth image according to the logarithmic brightness pyramid, and obtaining the logarithmic reflection quantity of the object surface at each pixel position; and mapping the object surface logarithmic reflection quantity to an image numerical value interval by utilizing the local mean value and the mean square error information to obtain the first image.
12. The apparatus according to claim 11, wherein the image processing apparatus further comprises a detection unit for determining the first exposure parameter by metering light to a target area in a preview image or selecting the first exposure parameter within a preset range.
13. The apparatus of claim 11, wherein the apparatus further comprises:
the enhancement unit is used for carrying out image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain a second image containing a clear target area;
and the second fusion unit is used for carrying out fusion processing on the first image and the second image to obtain a third image.
14. The apparatus of claim 12, wherein the detection unit comprises:
the detection module is used for carrying out feature detection in the preview image to obtain the target area;
the light measuring module is used for measuring light of the target area and obtaining average brightness of the target area; and the determining module is used for determining the first exposure parameter according to the average brightness of the target area.
15. The apparatus of claim 11, wherein the fusion module further comprises:
a reference sub-module, configured to select a frame with the highest definition from the single-exposure multi-frame images as a reference frame;
an alignment sub-module, configured to align the rest frames of the single-exposure multi-frame image to the reference frame based on features, and obtain an aligned multi-frame image;
and the fusion sub-module is used for carrying out weighted fusion on the aligned multi-frame images based on airspace information to obtain the fourth image.
16. The apparatus of claim 11, wherein the fusion module further comprises:
the first processing sub-module is used for sequencing the single-exposure multi-frame images according to the definition and selecting a first frame image as a reference frame;
the second processing sub-module is used for carrying out weighted fusion on the reference frame based on characteristic alignment and spatial information to a second frame image, and updating the fused image into the reference frame;
and the third processing sub-module is used for sequentially executing the processes of the feature alignment, the spatial information weighted fusion and the updating of the reference frame until the final frame of image completes the feature alignment and the spatial information weighted fusion, and obtaining the fourth image.
17. The apparatus of claim 13, wherein the enhancement unit comprises:
the construction module is used for training by utilizing the constructed sample training set to obtain the pre-trained enhancement model;
and the enhancement module is used for carrying out image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain the second image containing the clear target area.
18. The apparatus of claim 17, wherein the build module further comprises:
a fourth processing sub-module, configured to obtain a first quality sample set collected by an electronic device, and obtain a second quality sample set at the same location, where the image quality of the second quality sample set is higher than that of the first quality sample set;
a fifth processing sub-module, configured to group the first quality sample set and the second quality sample set according to different amplification factors of the electronic device, and align the grouped sample sets according to features, so as to obtain a first sample set;
and a sixth processing sub-module, configured to send the first sample set to an AI training engine to obtain the pre-trained enhancement model.
19. A storage medium comprising a stored program, wherein the program performs the image processing method of any one of claims 1 to 10.
20. A processor for executing a program, wherein the program when executed performs the image processing method of any one of claims 1 to 10.
CN202010731900.9A 2020-07-27 2020-07-27 Image processing method and image processing device Active CN113992861B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010731900.9A CN113992861B (en) 2020-07-27 2020-07-27 Image processing method and image processing device
PCT/CN2021/093188 WO2022021999A1 (en) 2020-07-27 2021-05-12 Image processing method and image processing apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010731900.9A CN113992861B (en) 2020-07-27 2020-07-27 Image processing method and image processing device

Publications (2)

Publication Number Publication Date
CN113992861A CN113992861A (en) 2022-01-28
CN113992861B true CN113992861B (en) 2023-07-21

Family

ID=79731474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010731900.9A Active CN113992861B (en) 2020-07-27 2020-07-27 Image processing method and image processing device

Country Status (2)

Country Link
CN (1) CN113992861B (en)
WO (1) WO2022021999A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114630050A (en) * 2022-03-25 2022-06-14 展讯半导体(南京)有限公司 Photographing method, device, medium and terminal equipment
CN115118859A (en) * 2022-06-27 2022-09-27 联想(北京)有限公司 Electronic device and processing method
CN116245741B (en) * 2022-06-28 2023-11-17 荣耀终端有限公司 Image processing method and related device
CN115439384A (en) * 2022-09-05 2022-12-06 中国科学院长春光学精密机械与物理研究所 Ghost-free multi-exposure image fusion method and device
CN115661437B (en) * 2022-10-20 2024-01-26 陕西学前师范学院 Image processing system and method
CN115409754B (en) * 2022-11-02 2023-03-24 深圳深知未来智能有限公司 Multi-exposure image fusion method and system based on image area validity
CN115908190B (en) * 2022-12-08 2023-10-13 南京图格医疗科技有限公司 Method and system for enhancing image quality of video image
CN116342449B (en) * 2023-03-29 2024-01-16 银河航天(北京)网络技术有限公司 Image enhancement method, device and storage medium
CN116630220B (en) * 2023-07-25 2023-11-21 江苏美克医学技术有限公司 Fluorescent image depth-of-field fusion imaging method, device and storage medium
CN117615257B (en) * 2024-01-18 2024-04-05 常州微亿智造科技有限公司 Imaging method, device, medium and equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2017017585A (en) * 2015-07-02 2017-01-19 オリンパス株式会社 Imaging apparatus and image processing method
CN106357968A (en) * 2015-07-13 2017-01-25 奥林巴斯株式会社 Image pickup apparatus and image processing method
CN106657714A (en) * 2016-12-30 2017-05-10 杭州当虹科技有限公司 Method for improving viewing experience of high dynamic range video
CN110248098A (en) * 2019-06-28 2019-09-17 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110691199A (en) * 2019-10-10 2020-01-14 厦门美图之家科技有限公司 Face automatic exposure method and device, shooting equipment and storage medium

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
DE102008034979B4 (en) * 2008-07-25 2011-07-07 EADS Deutschland GmbH, 85521 Method and device for generating error-reduced high-resolution and contrast-enhanced images
JP5544764B2 (en) * 2009-06-09 2014-07-09 ソニー株式会社 Image processing apparatus and method, and program
CN107566739B (en) * 2017-10-18 2019-12-06 维沃移动通信有限公司 photographing method and mobile terminal
US11070743B2 (en) * 2018-03-27 2021-07-20 Huawei Technologies Co., Ltd. Photographing using night shot mode processing and user interface
US20190335077A1 (en) * 2018-04-25 2019-10-31 Ocusell, LLC Systems and methods for image capture and processing
CN110572584B (en) * 2019-08-26 2021-05-07 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
JP2017017585A (en) * 2015-07-02 2017-01-19 オリンパス株式会社 Imaging apparatus and image processing method
CN106357968A (en) * 2015-07-13 2017-01-25 奥林巴斯株式会社 Image pickup apparatus and image processing method
CN106657714A (en) * 2016-12-30 2017-05-10 杭州当虹科技有限公司 Method for improving viewing experience of high dynamic range video
CN110248098A (en) * 2019-06-28 2019-09-17 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110691199A (en) * 2019-10-10 2020-01-14 厦门美图之家科技有限公司 Face automatic exposure method and device, shooting equipment and storage medium

Also Published As

Publication number Publication date
CN113992861A (en) 2022-01-28
WO2022021999A1 (en) 2022-02-03

Similar Documents

Publication Publication Date Title
CN113992861B (en) Image processing method and image processing device
CN108898567B (en) Image noise reduction method, device and system
US11055827B2 (en) Image processing apparatus and method
CN108694705B (en) Multi-frame image registration and fusion denoising method
US8964060B2 (en) Determining an image capture payload burst structure based on a metering image capture sweep
EP1924966B1 (en) Adaptive exposure control
CN106713755B (en) Panoramic image processing method and device
CN111669514B (en) High dynamic range imaging method and apparatus
WO2014099284A1 (en) Determining exposure times using split paxels
KR20090111136A (en) Method and Apparatus of Selecting Best Image
KR20130013288A (en) High dynamic range image creation apparatus of removaling ghost blur by using multi exposure fusion and method of the same
CN108024054A (en) Image processing method, device and equipment
US20140168474A1 (en) Determining an Image Capture Payload Burst Structure
An et al. Single-shot high dynamic range imaging via deep convolutional neural network
CN113259594A (en) Image processing method and device, computer readable storage medium and terminal
US11763430B2 (en) Correcting dust and scratch artifacts in digital images
JP2022179514A (en) Control apparatus, imaging apparatus, control method, and program
CN111127367A (en) Method, device and system for processing face image
CN114549373A (en) HDR image generation method and device, electronic equipment and readable storage medium
Vien et al. Single-shot high dynamic range imaging via multiscale convolutional neural network
CN116017172A (en) Raw domain image noise reduction method and device, camera and terminal
CN111080543B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113658068A (en) Deep learning-based denoising enhancement system and method for CMOS camera
Van Vo et al. High dynamic range video synthesis using superpixel-based illuminance-invariant motion estimation
CN113256523A (en) Image processing method and apparatus, medium, and computer device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310052 19th Floor, No. 392, Binxing Road, Changhe Street, Binjiang District, Hangzhou City, Zhejiang Province (Hongsoft Building)

Applicant after: Rainbow Software Co.,Ltd.

Address before: 310012 22nd and 23rd Floors of Building A, Paradise Software Park, No. 3 Xidoumen Road, Xihu District, Hangzhou City, Zhejiang Province

Applicant before: Rainbow Software Co.,Ltd.

GR01 Patent grant