WO2022021999A1 - Image processing method and image processing apparatus - Google Patents

Image processing method and image processing apparatus

Info

Publication number: WO2022021999A1
Authority: WO (WIPO/PCT)
Application number: PCT/CN2021/093188
Prior art keywords: image, frame, target area, fusion, sample set
Other languages: French (fr), Chinese (zh)
Inventors: 许越, 赵会斌, 陆艳青
Original assignee: 虹软科技股份有限公司
Application filed by 虹软科技股份有限公司; published as WO2022021999A1 (en)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 23/741: Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N 23/76: Circuitry for compensating brightness variation in the scene by influencing the image signals

Definitions

  • the present invention relates to computer vision technology, and in particular, to an image processing method and an image processing device.
  • to address these problems, existing techniques usually shoot the same scene under different exposure modes and fuse the resulting multiple frames into a single wide-dynamic-range image. However, during the synthesis of the multi-exposure frames, misaligned images and moving objects cause the generated image to suffer from serious loss of detail and ghosting.
  • Embodiments of the present invention provide an image processing method and an image processing device, so as to at least solve the technical problems of serious loss of details, ghost images and insufficient dynamic range in the captured target image.
  • an image processing method, comprising: acquiring single-exposure multi-frame images according to a first exposure parameter, wherein the single-exposure multi-frame images are images that contain a target area and share the same exposure parameter; and synthesizing the single-exposure multi-frame images to obtain a first image.
  • the first exposure parameter is determined by metering the target area in the preview image, or the first exposure parameter is selected within a preset range.
  • image enhancement is performed on the target area of the first image according to the pre-trained enhancement model to obtain a second image including a clear target area; the first image and the second image are fused to obtain a third image.
  • metering the target area in the preview image to determine the first exposure parameter includes: performing feature detection in the preview image to obtain the target area; metering the target area to obtain the average brightness of the target area; and determining the first exposure parameter according to the average brightness of the target area.
  • synthesizing the single-exposure multi-frame images to obtain the first image further comprises: performing multi-frame fusion on the single-exposure multi-frame images using a multi-frame image super-resolution technology to obtain a fourth image; and performing single-frame dynamic range expansion on the fourth image to obtain the first image.
  • using the multi-frame image super-resolution technology to perform multi-frame fusion on the single-exposure multi-frame images to obtain the fourth image comprises: selecting the frame with the highest definition among the single-exposure multi-frame images as a reference frame; aligning the remaining frames of the single-exposure multi-frame images to the reference frame based on features to obtain aligned multi-frame images; and performing weighted fusion of the aligned multi-frame images based on spatial information to obtain the fourth image.
  • performing multi-frame fusion on the single-exposure multi-frame images using the multi-frame image super-resolution technology to obtain the fourth image further comprises:
  • sorting the single-exposure multi-frame images by definition and selecting the first frame image as the reference frame;
  • aligning the reference frame to the second frame image based on features, performing weighted fusion based on spatial information, and updating the fused image as the reference frame;
  • performing the processes of feature-based alignment, weighted fusion based on spatial information, and updating of the reference frame in sequence, until the last frame image completes the feature-based alignment and the weighted fusion based on spatial information, to obtain the fourth image.
  • performing single-frame dynamic range expansion on the fourth image to obtain the first image includes: constructing a logarithmic brightness pyramid according to the fourth image to obtain ambient light brightness at different scales; combining the ambient light brightness at different scales, performing layer-by-layer downsampling, reconstruction and mapping on the fourth image according to the logarithmic brightness pyramid to obtain the logarithmic reflection amount of the object surface at each pixel position; and mapping the logarithmic reflection amount of the object surface to the image value range using local mean and mean square error information to obtain the first image.
  • performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain a second image containing a clear target area includes: training with the constructed sample training set to obtain the pre-trained enhancement model; and performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain the second image containing a clear target area.
  • training with the constructed sample training set to obtain the pre-trained enhancement model includes: obtaining a first quality sample set collected by the electronic device and obtaining a second quality sample set at the same location, wherein the image quality of the second quality sample set is higher than that of the first quality sample set; grouping the first quality sample set and the second quality sample set according to the different magnifications of the device, and aligning the grouped sample sets according to features to obtain a first sample set; and feeding the first sample set into the AI training engine to obtain the pre-trained enhancement model.
  • the second quality sample set includes: high-quality image samples collected by a high-definition electronic device or high-quality image samples stored in a storage device.
  • the grouped sample sets are aligned according to features, wherein the features include: features of the image groups in the first quality sample set and the second quality sample set or features of auxiliary calibration points around the image groups.
  • an image processing apparatus, including: an acquisition unit configured to acquire single-exposure multi-frame images according to a first exposure parameter, where the single-exposure multi-frame images are images that contain a target area and share the same exposure parameters; and a first fusion unit configured to synthesize the single-exposure multi-frame images to obtain a first image.
  • the image processing apparatus further includes a detection unit configured to determine the first exposure parameter or select the first exposure parameter within a preset range by metering the target area in the preview image.
  • the image processing apparatus further includes: an enhancement unit configured to perform image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain a second image containing a clear target; and a second fusion unit configured to fuse the first image and the second image to obtain a third image.
  • the detection unit includes: a detection module configured to perform feature detection in the preview image to obtain the target area; a photometric module configured to meter the target area to obtain the average brightness of the target area; and a determination module configured to determine the first exposure parameter according to the average brightness of the target area.
  • the first fusion unit includes: a fusion module configured to perform multi-frame fusion on the single-exposure multi-frame images using the multi-frame image super-resolution technology to obtain a fourth image; and an expansion module configured to perform single-frame dynamic range expansion on the fourth image to obtain the first image.
  • the fusion module further includes: a reference sub-module configured to select the frame with the highest definition among the single-exposure multi-frame images as a reference frame; an alignment sub-module configured to align the remaining frames of the single-exposure multi-frame images to the reference frame based on features to obtain aligned multi-frame images; and a fusion sub-module configured to perform weighted fusion of the aligned multi-frame images based on spatial information to obtain the fourth image.
  • the fusion module further includes: a first processing sub-module configured to sort the single-exposure multi-frame images by definition and select the first frame image as the reference frame; a second processing sub-module configured to align the reference frame to the second frame image based on features, perform weighted fusion based on spatial information, and update the fused image as the reference frame; and a third processing sub-module configured to perform the processes of feature-based alignment, weighted fusion based on spatial information and updating of the reference frame in sequence, until the last frame image completes the feature-based alignment and the weighted fusion based on spatial information, to obtain the fourth image.
  • the expansion module is further configured to: construct a logarithmic brightness pyramid according to the fourth image to obtain ambient light brightness at different scales; combining the ambient light brightness at different scales, perform layer-by-layer downsampling, reconstruction and mapping on the fourth image according to the logarithmic brightness pyramid to obtain the logarithmic reflection amount of the object surface at each pixel position; and map the logarithmic reflection amount of the object surface to the image value range using local mean and mean square error information to obtain the first image.
  • the enhancement unit includes: a building module configured to train with the constructed sample training set to obtain the pre-trained enhancement model; and an enhancement module configured to perform image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain a second image containing a clear target area.
  • the building module further includes: a fourth processing sub-module configured to obtain a first quality sample set collected by the electronic device and obtain a second quality sample set at the same location, where the image quality of the second quality sample set is higher than that of the first quality sample set; a fifth processing sub-module configured to group the first quality sample set and the second quality sample set according to the different magnifications of the device and align the grouped sample sets according to features to obtain a first sample set; and a sixth processing sub-module configured to feed the first sample set into the AI training engine to obtain the pre-trained enhancement model.
  • a storage medium is further provided, and the storage medium includes a stored program, wherein the program executes any one of the above image processing methods.
  • a processor is further provided, and the processor is used for running a program; wherein, any one of the above image processing methods is executed when the program is running.
  • in the embodiments of the present invention, the following steps are performed: acquiring single-exposure multi-frame images according to a first exposure parameter, the single-exposure multi-frame images being images that contain the target area and share the same exposure parameter; and synthesizing the single-exposure multi-frame images to obtain a first image.
  • Multi-frame fusion and image enhancement processing can be performed on the multi-frame images of a single exposure to optimize image quality and obtain a clearer and brighter image containing the target area, which solves the technical problems of serious loss of detail, ghosting and insufficient dynamic range in the captured target image.
  • FIG. 1 is a flowchart of an optional image processing method according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of an optional single-exposure multi-frame synthesis method according to an embodiment of the present invention
  • FIG. 3 is a flowchart of an optional image processing method according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of an optional image enhancement method according to an embodiment of the present invention.
  • FIG. 5 is a structural block diagram of an optional image processing apparatus according to an embodiment of the present invention.
  • the embodiments of the present invention may be applied to electronic devices having a camera unit, and the electronic devices may include: smart phones, tablet computers, desktop computers, personal digital assistants (PDAs), portable multimedia players (PMPs), cameras, and the like.
  • As shown in FIG. 1, it is a flowchart of an optional image processing method according to an embodiment of the present invention. In the following embodiment, shooting the moon is taken as an example for description.
  • the image processing method includes the following steps:
  • the electronic device may capture multiple images of the moon through its camera unit according to the first exposure parameter, thereby collecting multiple frames of images with a single exposure, wherein the single-exposure multi-frame images are moon images with the same exposure parameters.
  • the multi-frame images are not required to be strictly continuous: they may be captured completely continuously, or acquired at intervals of several frames, as long as the total shooting time is controlled within a certain range.
  • multi-frame fusion and image enhancement processing can be performed on the single-exposure multi-frame moon images to optimize image quality, obtain a clearer and brighter moon image, and solve the problems of serious loss of detail, ghosting and insufficient dynamic range appearing in the captured moon image.
  • the above-mentioned image processing method may further include step S10 for determining the first exposure parameter; specifically, the first exposure parameter may be determined by metering the target area in the preview image, or the first exposure parameter may be selected within a preset range.
  • determining the first exposure parameter by metering the target area in the preview image includes: performing feature detection in the preview image to obtain the moon area; metering the moon area to obtain the average brightness of the moon area; and determining the first exposure parameter according to the average brightness of the moon area.
  • those skilled in the art may preset a range of exposure parameters according to factors such as characteristics of the shooting target and the background environment where the target is located, and select the first exposure parameter within the range.
  • feature detection is performed on the preview image of the electronic device to obtain the moon area in the preview image.
  • the electronic device can detect the moon in the preview image through a preset detection algorithm.
  • the present invention does not limit the algorithm used to detect the moon area.
  • the accurate selection of a single exposure is mainly based on the accurate identification of the moon area and the accurate detection of the brightness.
  • the electronic device can meter the moon area in the preview interface through the preset metering algorithm to obtain the average brightness of the moon area.
  • the invention does not limit the metering algorithm.
  • the present application also pre-establishes a mapping relationship between the brightness of the moon area and the exposure parameters. After obtaining the average brightness of the moon area in the preview image, the electronic device can determine, according to the measured average brightness of the moon area and the pre-established mapping relationship, the exposure parameter of the device in the current state, that is, the first exposure parameter.
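The metering-and-mapping step above lends itself to a short illustration. The following Python sketch assumes a grayscale preview frame, a moon bounding box from the feature detector, and a hypothetical brightness-to-exposure lookup table; none of these concrete values or names come from the patent.

```python
# Minimal sketch of the metering step, assuming a grayscale preview frame, a moon
# bounding box from the feature detector, and a hypothetical brightness-to-exposure
# lookup table. The concrete values and names below are illustrative only.
import numpy as np

# Hypothetical mapping: upper bound of average moon-region brightness -> (exposure time s, ISO).
BRIGHTNESS_TO_EXPOSURE = [
    (40,  (1 / 50,  800)),   # dim moon region: longer exposure, higher ISO
    (90,  (1 / 125, 400)),
    (160, (1 / 250, 200)),
    (255, (1 / 500, 100)),   # bright moon region: shorter exposure, lower ISO
]

def first_exposure_parameter(preview_gray: np.ndarray, moon_box: tuple) -> tuple:
    """Meter the detected moon region and look up the first exposure parameter."""
    x, y, w, h = moon_box                      # region returned by the moon detector
    region = preview_gray[y:y + h, x:x + w]
    avg_brightness = float(region.mean())      # average brightness of the moon region
    for upper_bound, exposure in BRIGHTNESS_TO_EXPOSURE:
        if avg_brightness <= upper_bound:
            return exposure
    return BRIGHTNESS_TO_EXPOSURE[-1][1]
```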
  • Step S14 will be described in detail below.
  • As shown in FIG. 2, it is a flowchart of an optional single-exposure multi-frame synthesis method according to an embodiment of the present invention.
  • step S14, that is, synthesizing the single-exposure multi-frame images to obtain the first image, may include:
  • S140 Perform multi-frame fusion on a single-exposed multi-frame image using a multi-frame image super-resolution technology to obtain a fourth image;
  • S142 Perform single-frame dynamic range expansion on the fourth image to obtain the first image.
  • performing multi-frame fusion on the single-exposure multi-frame images using the multi-frame image super-resolution technology to obtain the fourth image includes: selecting the frame with the highest definition among the single-exposure multi-frame images as the reference frame; aligning the remaining frames of the single-exposure multi-frame images to the reference frame based on features to obtain aligned multi-frame images; and performing weighted fusion of the aligned multi-frame images based on spatial information to obtain the fourth image.
  • the multi-frame color RGB images that the user finally obtains with the electronic device are produced from the Bayer data collected by the sensor through a demosaicing algorithm, which essentially up-samples the under-sampled RGB data to full-size RGB data; this inevitably introduces false information, resulting in lower resolution and higher noise.
  • the multi-frame fusion based on super-resolution technology works on the Bayer data: the frame with the highest definition in the single-exposure multi-frame images is selected as the reference frame, and the remaining frames are aligned to it. When multiple frames are collected with a single exposure, there is random displacement between frames; the real channel information of the remaining frames, which sit at slightly displaced positions relative to the reference frame, can therefore supplement the corresponding channels missing from the under-sampled Bayer data. The aligned multi-frame images are then weighted and fused based on spatial information to obtain the fourth image, a full-size or even larger RGB image, thereby improving the resolution of the single-exposure multi-frame fusion while also providing a certain denoising effect.
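A minimal sketch of this fixed-reference multi-frame fusion is given below. It is an assumption-laden simplification: it works on grayscale frames rather than Bayer data, uses Laplacian variance as the sharpness measure, ORB features with a RANSAC homography for the feature-based alignment, and a plain average as the spatially weighted fusion.

```python
# Simplified sketch of the fixed-reference multi-frame fusion. Grayscale frames stand in
# for Bayer data; sharpness measure, feature matcher and fusion weights are assumptions.
import cv2
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    # Variance of the Laplacian is a common proxy for definition.
    return cv2.Laplacian(frame, cv2.CV_64F).var()

def align_to_reference(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    # Feature-based alignment of one frame to the reference frame.
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(frame, None)
    k2, d2 = orb.detectAndCompute(reference, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(frame, H, (reference.shape[1], reference.shape[0]))

def fuse_single_exposure(frames: list) -> np.ndarray:
    # Select the frame with the highest definition as the reference frame.
    reference = max(frames, key=sharpness)
    aligned = [reference.astype(np.float32)]
    for f in frames:
        if f is reference:
            continue
        aligned.append(align_to_reference(f, reference).astype(np.float32))
    # Weighted fusion based on spatial information (equal weights here).
    return np.clip(np.mean(aligned, axis=0), 0, 255).astype(np.uint8)
```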
  • alternatively, multi-frame fusion may be performed on the single-exposure multi-frame images using the multi-frame image super-resolution technology to obtain the fourth image by fusing while aligning, which specifically includes:
  • the single-exposure multi-frame images are sorted by definition, and the first frame image is selected as the reference frame; the reference frame is aligned to the second frame image based on features and weighted-fused based on spatial information, and the fused image is updated as the reference frame; the processes of feature-based alignment, weighted fusion based on spatial information and updating of the reference frame are performed in sequence, until the last frame image completes the feature-based alignment and the weighted fusion based on spatial information, to obtain the fourth image.
  • the fusion weight may be calculated from each input frame and the first frame image, or from each input frame and the updated reference frame. Since this align-while-fusing multi-frame fusion method does not fix the reference frame, the error of the previous stage is cleared when the reference frame is replaced during fusion, which avoids the propagation and stage-by-stage amplification of errors and ensures processing accuracy.
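The align-while-fusing variant can be sketched as follows. It reuses the sharpness() and align_to_reference() helpers from the previous sketch, and the running-average weighting (computed against the updated reference frame) is an illustrative choice, not the patent's prescribed weights.

```python
# Sketch of the align-while-fusing variant: frames are sorted by sharpness, the current
# reference is aligned and fused into the next frame, and the fused result replaces the
# reference. Reuses sharpness() and align_to_reference() from the previous sketch; the
# running-average weights are an assumption.
import numpy as np

def fuse_progressively(frames: list) -> np.ndarray:
    ordered = sorted(frames, key=sharpness, reverse=True)   # sharpest frame first
    reference = ordered[0].astype(np.float32)
    for i, nxt in enumerate(ordered[1:], start=1):
        # Align the current reference to the next frame, then fuse and update the reference.
        aligned_ref = align_to_reference(reference.astype(np.uint8), nxt).astype(np.float32)
        weight = i / (i + 1)                                 # reference already carries i frames
        reference = weight * aligned_ref + (1 - weight) * nxt.astype(np.float32)
    return np.clip(reference, 0, 255).astype(np.uint8)
```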
  • performing single-frame dynamic range expansion on the fourth image to obtain the first image includes: constructing a logarithmic brightness pyramid according to the fourth image to obtain ambient light brightness at different scales; combining the ambient light brightness at different scales, performing layer-by-layer downsampling, reconstruction and mapping on the fourth image according to the logarithmic brightness pyramid to obtain the logarithmic reflection amount of the object surface at each pixel position; and mapping the logarithmic reflection amount of the object surface to the image value range using local mean and mean square error information to obtain the first image.
  • the image perceived by the human eye is the product of the ambient illumination and the surface reflectance of the object. If the ambient illumination component is removed from the image and the reflection component is retained, the original information of the object can be restored, which achieves an image enhancement effect. If the image is considered in the logarithmic domain, the above product relationship becomes an addition-subtraction relationship, so the ambient light component can be subtracted from the image while the object reflection component is retained, enhancing the visual effect of the image.
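Written out, the image-formation model assumed here is the following; the notation is chosen for illustration and is not the patent's.

```latex
% Retinex-style image formation model assumed above (notation chosen for illustration):
I(x, y) = L(x, y) \cdot R(x, y)                      % perceived image = illumination * reflectance
\log I(x, y) = \log L(x, y) + \log R(x, y)           % the product becomes a sum in the log domain
\log R(x, y) = \log I(x, y) - \log \hat{L}(x, y)     % subtract an illumination estimate \hat{L},
                                                     % e.g. a Gaussian-smoothed copy of I per scale
```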
  • specifically, a logarithmic brightness pyramid is constructed, and the fourth image is convolved with Gaussian kernels at different scales to approximate the ambient light brightness, so as to obtain the ambient light brightness at different scales; the ambient light component is removed and the reflection component is retained to restore the original information of the object. Combining the ambient light brightness at different scales, the fourth image is downsampled, reconstructed and mapped layer by layer according to the logarithmic brightness pyramid, and the logarithmic reflection amount of the object surface at each pixel position is obtained. Using the local mean and mean square error information, the logarithmic reflection amount of the object surface is mapped from the logarithmic domain to the normal image value range to obtain the final brightness-adjusted image, that is, the first image.
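A rough sketch of this single-frame dynamic range expansion is shown below. Gaussian blurs of the log luminance at a few scales stand in for the logarithmic brightness pyramid, their average is removed as the ambient light estimate, and the log reflectance is stretched back to the 8-bit range using local mean and standard deviation; the scales, window sizes and the final mapping are assumptions, not the patent's exact procedure.

```python
# Rough sketch of the single-frame dynamic range expansion described above; all concrete
# scales, window sizes and the output mapping are illustrative assumptions.
import cv2
import numpy as np

def expand_dynamic_range(image: np.ndarray, scales=(15, 61, 201)) -> np.ndarray:
    img = image.astype(np.float32) + 1.0                 # avoid log(0)
    log_img = np.log(img)
    # Approximate ambient light brightness at different scales with Gaussian kernels.
    log_ambient = np.zeros_like(log_img)
    for k in scales:
        log_ambient += cv2.GaussianBlur(log_img, (k, k), 0) / len(scales)
    # Logarithmic reflection amount of the object surface at each pixel position.
    log_reflect = log_img - log_ambient
    # Map to the image value range using local mean and mean-square-error information.
    mean = cv2.blur(log_reflect, (31, 31))
    sq_mean = cv2.blur(log_reflect * log_reflect, (31, 31))
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 1e-6))
    lo, hi = mean - 2.0 * std, mean + 2.0 * std
    out = (log_reflect - lo) / np.maximum(hi - lo, 1e-6)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```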
  • optionally, the fourth image is first linearly brightened to obtain a fifth image; the fourth image and the fifth image are fused by a Laplacian pyramid, and the result is then subjected to single-frame dynamic range expansion. Since the fourth image and the fifth image actually participating in the fusion are both derived from the same frame (the fourth image), alignment is not required and no ghost image appears. In this way, whether in normal scenes or extremely dark environments, the fused single-frame image can have its dark parts brightened, its local contrast improved and its saturation adjusted, while avoiding the ghosting or synthesis anomalies that multi-frame dynamic range expansion may introduce.
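The Laplacian-pyramid fusion of the fourth image with its linearly brightened copy can be sketched as follows; the brightening gain, the pyramid depth and the equal per-level weights are illustrative assumptions.

```python
# Sketch of fusing the fourth image with its linearly brightened copy (the fifth image)
# through a Laplacian pyramid. Gain, depth and per-level weights are assumptions; both
# inputs come from the same frame, so no alignment step is needed.
import cv2
import numpy as np

def laplacian_pyramid(img: np.ndarray, levels: int):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
          for i in range(levels)]
    lp.append(gp[-1])                                           # coarsest level
    return lp

def fuse_with_brightened(fourth: np.ndarray, gain: float = 1.8, levels: int = 4) -> np.ndarray:
    fifth = np.clip(fourth.astype(np.float32) * gain, 0, 255)   # linearly brightened copy
    lp_a = laplacian_pyramid(fourth, levels)
    lp_b = laplacian_pyramid(fifth, levels)
    fused = [0.5 * a + 0.5 * b for a, b in zip(lp_a, lp_b)]     # equal weights per level
    out = fused[-1]
    for lvl in reversed(fused[:-1]):                            # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=(lvl.shape[1], lvl.shape[0])) + lvl
    return np.clip(out, 0, 255).astype(np.uint8)
```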
  • As shown in FIG. 3, it is a flowchart of an optional image processing method according to an embodiment of the present invention. Optionally, the image processing method further includes the following steps: performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain a second image containing a clear target area; and fusing the first image and the second image to obtain a third image.
  • Step S16 will be described in detail below.
  • As shown in FIG. 4, it is a flowchart of an optional image enhancement method according to an embodiment of the present invention.
  • step S16, that is, performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain a second image containing a clear target area, includes:
  • S164 Perform image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain a second image including a clear target area.
  • training with the constructed sample training set to obtain the pre-trained enhancement model includes: acquiring a first quality sample set collected by the electronic device and acquiring a second quality sample set at the same location, wherein the image quality of the second quality sample set is higher than that of the first quality sample set; grouping the first quality sample set and the second quality sample set according to the different magnifications of the device, and aligning the grouped sample sets according to features to obtain a first sample set; and feeding the first sample set into the AI training engine to obtain the pre-trained enhancement model.
  • acquiring the second quality sample set at the same location includes: high-quality image samples collected by a high-definition electronic device or high-quality image samples stored in a storage device.
  • the grouped sample sets are aligned according to features, wherein the features include features such as the color, brightness and angle of the image groups in the first quality sample set and the second quality sample set, or features of auxiliary calibration points around the image groups.
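The construction of the paired training set can be sketched as below: device samples (the first quality sample set) and high-quality samples taken at the same location (the second quality sample set) are matched by scene and magnification and feature-aligned pair by pair. The file-naming scheme and the reuse of align_to_reference() from the earlier sketch are assumptions; the AI training engine itself is not shown.

```python
# Sketch of building the paired training set; the directory layout, the file-naming scheme
# and the reuse of align_to_reference() from the earlier sketch are assumptions.
import glob
import os
import cv2

def build_training_pairs(low_dir: str, high_dir: str):
    pairs = []
    # Hypothetical layout: "<scene>_x<magnification>.png" in both directories.
    for low_path in sorted(glob.glob(os.path.join(low_dir, "*_x*.png"))):
        name = os.path.basename(low_path)              # e.g. "scene01_x10.png"
        high_path = os.path.join(high_dir, name)       # same scene, same magnification
        if not os.path.exists(high_path):
            continue
        low = cv2.imread(low_path, cv2.IMREAD_GRAYSCALE)
        high = cv2.imread(high_path, cv2.IMREAD_GRAYSCALE)
        aligned_high = align_to_reference(high, low)   # align the high-quality sample to the device sample
        pairs.append((low, aligned_high))
    return pairs                                        # first sample set for the training engine
```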
  • As shown in FIG. 5, it is a structural block diagram of an optional image processing apparatus according to an embodiment of the present invention.
  • the image processing device 4 includes:
  • the acquiring unit 42 is configured to acquire single-exposure multi-frame images according to the first exposure parameter, wherein the single-exposure multi-frame images are images that contain the target area and share the same exposure parameter;
  • the first fusion unit 44 is configured to synthesize the single-exposure multi-frame images to obtain a first image.
  • the image processing device 4 may also include:
  • the enhancement unit 46 is configured to perform image enhancement on the first image according to the pre-trained enhancement model to obtain a second image containing a clear target;
  • the second fusion unit 48 is configured to perform fusion processing on the first image and the second image to obtain a third image.
  • the above-mentioned image processing device 4 may further include a detection unit 40 configured to determine the first exposure parameter; specifically, the first exposure parameter may be determined by metering the target area in the preview image, or the first exposure parameter may be selected within a preset range.
  • the detection unit 40 may include: a detection module 400 configured to perform feature detection in the preview image to obtain the target area; a photometric module 402 configured to meter the target area to obtain the average brightness of the target area; and a determining module 404 configured to determine the first exposure parameter according to the average brightness of the target area.
  • those skilled in the art may preset a range of exposure parameters according to factors such as characteristics of the shooting target and the background environment where the target is located, and select the first exposure parameter within the range.
  • the first fusion unit 44 may include:
  • the fusion module 442 is configured to perform multi-frame fusion on the single-exposed multi-frame images using the multi-frame image super-resolution technology to obtain a fourth image;
  • the expansion module 444 is configured to perform single-frame dynamic range expansion on the fourth image to obtain the first image.
  • the fusion module 442 includes: a reference sub-module 4422 configured to select the frame with the highest definition among the single-exposure multi-frame images as the reference frame; an alignment sub-module 4424 configured to align the remaining frames of the single-exposure multi-frame images to the reference frame based on features to obtain aligned multi-frame images; and a fusion sub-module 4426 configured to perform weighted fusion of the aligned multi-frame images based on spatial information to obtain the fourth image.
  • the multi-frame color RGB images that the user finally obtains with the electronic device are produced from the Bayer data collected by the sensor through a demosaicing algorithm. This essentially up-samples the under-sampled RGB data to full-size RGB data, which inevitably introduces false information, resulting in lower resolution and higher noise.
  • the multi-frame fusion based on super-resolution technology works on the Bayer data: the frame with the highest definition in the single-exposure multi-frame images is selected as the reference frame, and the remaining frames are aligned to it. When multiple frames are collected with a single exposure, there is random displacement between frames; the real channel information of the remaining frames, which sit at slightly displaced positions relative to the reference frame, can supplement the corresponding channel information missing from the under-sampled Bayer data. The aligned multi-frame images are then weighted and fused based on spatial information to obtain the fourth image, a full-size or even larger RGB image, thereby improving the resolution of the single-exposure multi-frame fusion while also providing a certain denoising effect.
  • the fusion module 442 includes: a first processing sub-module 4423 configured to sort the single-exposure multi-frame images by definition and select the first frame image as the reference frame; a second processing sub-module 4425 configured to align the reference frame to the second frame image based on features, perform weighted fusion based on spatial information, and update the fused image as the reference frame; and a third processing sub-module 4427 configured to perform the processes of feature-based alignment, weighted fusion based on spatial information and updating of the reference frame in sequence, until the last frame image completes the feature-based alignment and the weighted fusion based on spatial information, to obtain the fourth image.
  • the fusion weight may be calculated from each input frame and the first frame image, or from each input frame and the updated reference frame. Since this align-while-fusing multi-frame fusion apparatus does not fix the reference frame, the error of the previous stage is cleared when the reference frame is updated during fusion, which avoids the propagation and stage-by-stage amplification of errors and ensures processing accuracy.
  • the expansion module 444 is further configured to: construct a logarithmic brightness pyramid according to the fourth image to obtain ambient light brightness at different scales; combining the ambient light brightness at different scales, downsample, reconstruct and map the fourth image layer by layer according to the logarithmic brightness pyramid to obtain the logarithmic reflection amount of the object surface at each pixel position; and map the logarithmic reflection amount of the object surface to the image value range using local mean and mean square error information to obtain the first image.
  • the human eye perceives the actual image from the product of the ambient illumination and the surface reflection of the object.
  • the ambient illumination component is completely removed in the image and the reflection component is retained to restore the original information of the object, which can achieve the image enhancement effect. If the image is based on the logarithmic domain, the above product relationship can be converted into an addition-subtraction relationship. After obtaining the ambient illumination component, the reflection component of the object can be retained by completely subtracting the ambient illumination component from the image, thereby achieving the effect of enhancing the image vision.
  • a logarithmic brightness pyramid is constructed, and the fourth image is convolved with Gaussian kernels at different scales to approximate the ambient light brightness, so as to obtain the ambient light brightness of different scales;
  • the ambient light component is removed and the reflection component is retained to restore the original information of the object.
  • Combining the ambient light brightness at different scales, the fourth image is downsampled, reconstructed and mapped layer by layer according to the logarithmic brightness pyramid, and the logarithmic reflection amount of the object surface at each pixel position is obtained. Using the local mean and mean square error information, the logarithmic reflection amount of the object surface is mapped from the logarithmic domain to the normal image value range to obtain the final brightness-adjusted image, that is, the first image.
  • the fourth image may first be linearly brightened to obtain a fifth image; the fourth image and the fifth image are fused by a Laplacian pyramid, and the result is then subjected to single-frame dynamic range expansion. Since the fourth image and the fifth image actually participating in the fusion are derived from the same frame, alignment is not required and no ghost image appears. In this way, whether in normal scenes or extremely dark environments, the fused single-frame image can have its dark parts brightened, its local contrast improved and its saturation adjusted, while avoiding the ghosting or synthesis anomalies that multi-frame dynamic range expansion may introduce.
  • the enhancement unit 46 may include:
  • a building module 462 configured to train with the constructed sample training set to obtain the pre-trained enhancement model;
  • the enhancement module 464 is configured to perform image enhancement on the first image according to the pre-trained enhancement model to obtain a second image including a clear target area.
  • the building module 462 may be configured to: acquire a first quality sample set collected by the electronic device and acquire a second quality sample set at the same location, where the image quality of the second quality sample set is higher than that of the first quality sample set; group the first quality sample set and the second quality sample set according to the different magnifications of the device, and align the grouped sample sets according to features to obtain a first sample set; and feed the first sample set into the AI training engine to obtain the pre-trained enhancement model.
  • acquiring the second quality sample set at the same location includes: high-quality image samples collected by a high-definition electronic device or high-quality image samples stored in a storage device.
  • the grouped sample sets are aligned according to features, wherein the features include features such as the color, brightness and angle of the image groups in the first quality sample set and the second quality sample set, or features of auxiliary calibration points around the image groups.
  • a storage medium is further provided, and the storage medium includes a stored program, wherein the program executes any one of the above image processing methods.
  • a processor is further provided, and the processor is used for running a program; wherein, any one of the above image processing methods is executed when the program is running.
  • the image processing method or image processing device is not limited to photographing the moon; it is also suitable for photographing distant objects that contrast clearly with their backgrounds, such as the sun, lighthouses, stars, lights on buildings, and the like, and in each case can solve the technical problems of severe loss of detail, ghosting and insufficient dynamic range in captured images.
  • the disclosed technical content can be implemented in other ways.
  • the device embodiments described above are only illustrative. For example, the division of the units may be a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the technical solution of the present invention is essentially or the part that contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium , including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disk.

Abstract

Disclosed are an image processing method and an image processing apparatus. The image processing method comprises: acquiring multiple frames of images under a single exposure according to a first exposure parameter, wherein the multiple frames of images under the single exposure are images that contain target regions and have the same exposure parameter; and synthesizing the multiple frames of images under the single exposure to obtain a first image. According to the image processing method, the present invention solves the technical problems of serious detail loss, a ghost image and an insufficient dynamic range in a photographed target image.

Description

Image processing method and image processing device
This application claims priority to the Chinese patent application No. 202010731900.9, filed with the Chinese Patent Office on July 27, 2020 and entitled "Image processing method and image processing device", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to computer vision technology, and in particular, to an image processing method and an image processing device.
Background Art
In recent years, with the improvement of the shooting performance of electronic devices and the popularization of mobile phone photography, people's requirements for mobile phone photography have increased. However, limited by the size and quality of portable electronic devices, it is difficult for users to obtain a clear and bright photo of, for example, the distant moon; the captured image usually suffers from blur, heavy noise, loss of detail and insufficient dynamic range.
To address these problems, existing techniques usually shoot the same scene under different exposure modes and fuse the resulting multiple frames into a single wide-dynamic-range image. This method can preserve image details at different brightness levels; however, during the synthesis of the multi-exposure frames, misaligned images and moving objects cause the generated image to suffer from serious loss of detail and ghosting.
SUMMARY OF THE INVENTION
Embodiments of the present invention provide an image processing method and an image processing device, so as to at least solve the technical problems of serious loss of detail, ghosting and insufficient dynamic range in the captured target image.
According to one aspect of the embodiments of the present invention, an image processing method is provided, comprising: acquiring single-exposure multi-frame images according to a first exposure parameter, wherein the single-exposure multi-frame images are images that contain a target area and share the same exposure parameter; and synthesizing the single-exposure multi-frame images to obtain a first image.
Optionally, the first exposure parameter is determined by metering the target area in a preview image, or the first exposure parameter is selected within a preset range.
Optionally, image enhancement is performed on the target area of the first image according to a pre-trained enhancement model to obtain a second image containing a clear target area; and the first image and the second image are fused to obtain a third image.
Optionally, metering the target area in the preview image to determine the first exposure parameter includes: performing feature detection in the preview image to obtain the target area; metering the target area to obtain the average brightness of the target area; and determining the first exposure parameter according to the average brightness of the target area.
Optionally, synthesizing the single-exposure multi-frame images to obtain the first image further includes: performing multi-frame fusion on the single-exposure multi-frame images using a multi-frame image super-resolution technology to obtain a fourth image; and performing single-frame dynamic range expansion on the fourth image to obtain the first image.
Optionally, performing multi-frame fusion on the single-exposure multi-frame images using the multi-frame image super-resolution technology to obtain the fourth image includes: selecting the frame with the highest definition among the single-exposure multi-frame images as a reference frame; aligning the remaining frames of the single-exposure multi-frame images to the reference frame based on features to obtain aligned multi-frame images; and performing weighted fusion of the aligned multi-frame images based on spatial information to obtain the fourth image.
Optionally, performing multi-frame fusion on the single-exposure multi-frame images using the multi-frame image super-resolution technology to obtain the fourth image further includes:
sorting the single-exposure multi-frame images by definition and selecting the first frame image as the reference frame;
aligning the reference frame to the second frame image based on features, performing weighted fusion based on spatial information, and updating the fused image as the reference frame;
performing the processes of feature-based alignment, weighted fusion based on spatial information and updating of the reference frame in sequence, until the last frame image completes the feature-based alignment and the weighted fusion based on spatial information, to obtain the fourth image.
Optionally, performing single-frame dynamic range expansion on the fourth image to obtain the first image includes: constructing a logarithmic brightness pyramid according to the fourth image to obtain ambient light brightness at different scales; combining the ambient light brightness at different scales, performing layer-by-layer downsampling, reconstruction and mapping on the fourth image according to the logarithmic brightness pyramid to obtain the logarithmic reflection amount of the object surface at each pixel position; and mapping the logarithmic reflection amount of the object surface to the image value range using local mean and mean square error information to obtain the first image.
Optionally, performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain the second image containing a clear target area includes: training with the constructed sample training set to obtain the pre-trained enhancement model; and performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain the second image containing a clear target area.
Optionally, training with the constructed sample training set to obtain the pre-trained enhancement model includes: acquiring a first quality sample set collected by the electronic device and acquiring a second quality sample set at the same location, wherein the image quality of the second quality sample set is higher than that of the first quality sample set; grouping the first quality sample set and the second quality sample set according to the different magnifications of the device, and aligning the grouped sample sets according to features to obtain a first sample set; and feeding the first sample set into the AI training engine to obtain the pre-trained enhancement model.
Optionally, the second quality sample set includes: high-quality image samples collected by a high-definition electronic device, or high-quality image samples stored in a storage device.
Optionally, the grouped sample sets are aligned according to features, wherein the features include: features of the image groups in the first quality sample set and the second quality sample set, or features of auxiliary calibration points around the image groups.
According to another aspect of the embodiments of the present invention, an image processing apparatus is further provided, including: an acquisition unit configured to acquire single-exposure multi-frame images according to a first exposure parameter, the single-exposure multi-frame images being images that contain a target area and share the same exposure parameter; and a first fusion unit configured to synthesize the single-exposure multi-frame images to obtain a first image.
Optionally, the image processing apparatus further includes a detection unit configured to determine the first exposure parameter by metering the target area in a preview image, or to select the first exposure parameter within a preset range.
Optionally, the image processing apparatus further includes: an enhancement unit configured to perform image enhancement on the target area of the first image according to a pre-trained enhancement model to obtain a second image containing a clear target; and a second fusion unit configured to fuse the first image and the second image to obtain a third image.
Optionally, the detection unit includes: a detection module configured to perform feature detection in the preview image to obtain the target area; a photometric module configured to meter the target area to obtain the average brightness of the target area; and a determination module configured to determine the first exposure parameter according to the average brightness of the target area.
Optionally, the first fusion unit includes: a fusion module configured to perform multi-frame fusion on the single-exposure multi-frame images using a multi-frame image super-resolution technology to obtain a fourth image; and an expansion module configured to perform single-frame dynamic range expansion on the fourth image to obtain the first image.
Optionally, the fusion module further includes: a reference sub-module configured to select the frame with the highest definition among the single-exposure multi-frame images as a reference frame; an alignment sub-module configured to align the remaining frames of the single-exposure multi-frame images to the reference frame based on features to obtain aligned multi-frame images; and a fusion sub-module configured to perform weighted fusion of the aligned multi-frame images based on spatial information to obtain the fourth image.
Optionally, the fusion module further includes: a first processing sub-module configured to sort the single-exposure multi-frame images by definition and select the first frame image as the reference frame; a second processing sub-module configured to align the reference frame to the second frame image based on features, perform weighted fusion based on spatial information, and update the fused image as the reference frame; and a third processing sub-module configured to perform the processes of feature-based alignment, weighted fusion based on spatial information and updating of the reference frame in sequence, until the last frame image completes the feature-based alignment and the weighted fusion based on spatial information, to obtain the fourth image.
Optionally, the expansion module is further configured to: construct a logarithmic brightness pyramid according to the fourth image to obtain ambient light brightness at different scales; combining the ambient light brightness at different scales, perform layer-by-layer downsampling, reconstruction and mapping on the fourth image according to the logarithmic brightness pyramid to obtain the logarithmic reflection amount of the object surface at each pixel position; and map the logarithmic reflection amount of the object surface to the image value range using local mean and mean square error information to obtain the first image.
Optionally, the enhancement unit includes: a building module configured to train with the constructed sample training set to obtain the pre-trained enhancement model; and an enhancement module configured to perform image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain a second image containing a clear target area.
Optionally, the building module further includes: a fourth processing sub-module configured to acquire a first quality sample set collected by the electronic device and acquire a second quality sample set at the same location, the image quality of the second quality sample set being higher than that of the first quality sample set; a fifth processing sub-module configured to group the first quality sample set and the second quality sample set according to the different magnifications of the device and align the grouped sample sets according to features to obtain a first sample set; and a sixth processing sub-module configured to feed the first sample set into the AI training engine to obtain the pre-trained enhancement model.
According to another aspect of the embodiments of the present invention, a storage medium is further provided, the storage medium including a stored program, wherein the program executes any one of the above image processing methods.
According to another aspect of the embodiments of the present invention, a processor is further provided, the processor being configured to run a program, wherein any one of the above image processing methods is executed when the program runs.
在本发明实施例中,通过执行以下步骤:依据第一曝光参数获取单一曝光的多帧图像,单一曝光的多帧图像为包含目标区域的具有相同曝光参数的图像;对单一曝光的多帧图像进行合成处理,获得第一图像。可以对具有单一曝光的多帧图像进行多帧融合和图像增强处理,以优化图像质量,获得更清晰和明亮的包含目标区域的图像,解决了拍摄的目标图像出现细节丢失严重、鬼影和动态范围不足的技术问题。In the embodiment of the present invention, the following steps are performed: obtaining a single exposure multi-frame image according to a first exposure parameter, and the single-exposure multi-frame image is an image with the same exposure parameter including the target area; A composite process is performed to obtain a first image. Multi-frame fusion and image enhancement processing can be performed on multi-frame images with a single exposure to optimize image quality and obtain a clearer and brighter image containing the target area, which solves the problem of serious loss of details, ghosting and motion in the captured target image. Technical issues with insufficient scope.
Description of the drawings
The accompanying drawings described here are provided for a further understanding of the present invention and constitute a part of this application. The exemplary embodiments of the present invention and their descriptions are used to explain the present invention and do not unduly limit it. In the drawings:
FIG. 1 is a flowchart of an optional image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an optional single-exposure multi-frame synthesis method according to an embodiment of the present invention;
FIG. 3 is a flowchart of an optional image processing method according to an embodiment of the present invention;
FIG. 4 is a flowchart of an optional image enhancement method according to an embodiment of the present invention;
FIG. 5 is a structural block diagram of an optional image processing apparatus according to an embodiment of the present invention.
Detailed description of the embodiments
To help those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second" and the like in the description, claims and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that the order so used may be interchanged where appropriate, so that the embodiments of the invention described here can be practiced in orders other than those illustrated or described. Furthermore, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such a process, method, product or device.
The embodiments of the present invention may be applied to electronic devices having a camera unit, including smart phones, tablet computers, desktop computers, personal digital assistants (PDAs), portable multimedia players (PMPs), cameras and the like.
A flowchart of an optional image processing method according to an embodiment of the present invention is described below. It should be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system, such as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one given here.
Referring to FIG. 1, which is a flowchart of an optional image processing method according to an embodiment of the present invention. In this embodiment, photographing the moon is taken as an example. As shown in FIG. 1, the image processing method includes the following steps:
S12: acquiring single-exposure multi-frame images according to a first exposure parameter, where the single-exposure multi-frame images are images that contain the target area and share the same exposure parameter.
As an example, after the first exposure parameter for the moon image is determined, the electronic device may photograph the moon several times through its camera unit according to the first exposure parameter, collecting single-exposure multi-frame images, that is, moon images with the same exposure parameter. It should be noted that the multi-frame images are not required to be strictly consecutive: they may be captured in a physically continuous burst, or frames may be acquired several frames apart, as long as the total capture time is kept within a certain range.
S14: synthesizing the single-exposure multi-frame images to obtain a first image.
In the embodiment of the present invention, through the above steps, multi-frame fusion and image enhancement can be applied to the single-exposure multi-frame moon images to optimize image quality and obtain a clearer and brighter moon image, solving the technical problems of severe loss of detail, ghosting and insufficient dynamic range in the captured moon image.
In an optional embodiment, the image processing method may further include step S10 of determining the first exposure parameter. Specifically, the first exposure parameter may be determined by metering the target area in the preview image, or it may be selected within a preset range.
In the context of photographing the moon, determining the first exposure parameter by metering the target area in the preview image includes: performing feature detection in the preview image to obtain the moon area; metering the moon area to obtain the average brightness of the moon area; and determining the first exposure parameter according to the average brightness of the moon area. Alternatively, those skilled in the art may preset a range of exposure parameters according to factors such as the characteristics of the shooting target and the background environment in which the target is located, and select the first exposure parameter within that range.
In addition, in the embodiment of the present invention, feature detection is performed on the preview image of the electronic device to obtain the moon area in the preview image. The electronic device may detect the moon in the preview image through a preset detection algorithm; the present invention does not limit the algorithm used to detect the moon area. The accurate choice of the single exposure depends mainly on accurate identification of the moon area and accurate detection of its brightness. Likewise, the electronic device may meter the moon area in the preview interface through a preset metering algorithm to obtain the average brightness of the moon area; the present invention does not limit the metering algorithm. Furthermore, the present application also pre-establishes a mapping between the brightness of the moon area and the exposure parameters. After obtaining the average brightness of the moon area in the preview image, the electronic device can determine its exposure parameter in the current state, that is, the first exposure parameter, from the measured average brightness of the moon area and the pre-established mapping.
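As a rough illustration of this last step, the sketch below maps a metered target-region brightness to an exposure setting through a small lookup table. The region mask, the table values and the (shutter time, ISO) pairs are assumptions made for the example and are not taken from this disclosure.

```python
import numpy as np

# Hypothetical mapping from average target brightness (0-255 luma) to (shutter time s, ISO).
BRIGHTNESS_TO_EXPOSURE = [
    (40.0,  (1 / 125, 400)),   # dim target: longer exposure, higher ISO
    (80.0,  (1 / 250, 200)),
    (120.0, (1 / 500, 100)),
    (255.0, (1 / 1000, 100)),  # very bright target: shortest exposure
]

def meter_target(preview_luma: np.ndarray, target_mask: np.ndarray) -> float:
    """Average luma of the detected target (e.g. moon) region in the preview."""
    return float(preview_luma[target_mask > 0].mean())

def select_exposure(avg_brightness: float):
    """Return the first (shutter, ISO) pair whose brightness bound covers the metered value."""
    for upper_bound, exposure in BRIGHTNESS_TO_EXPOSURE:
        if avg_brightness <= upper_bound:
            return exposure
    return BRIGHTNESS_TO_EXPOSURE[-1][1]
```

In practice such a table would be calibrated per device; the values above only show the shape of the lookup.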
Step S14 is described in detail below. Referring to FIG. 2, which is a flowchart of an optional single-exposure multi-frame synthesis method according to an embodiment of the present invention.
In the embodiment of the present invention, step S14, synthesizing the single-exposure multi-frame images to obtain the first image, may include:
S140: performing multi-frame fusion on the single-exposure multi-frame images using a multi-frame image super-resolution technique to obtain a fourth image;
S142: performing single-frame dynamic range expansion on the fourth image to obtain the first image.
Optionally, in an embodiment of the present invention, performing multi-frame fusion on the single-exposure multi-frame images using the multi-frame image super-resolution technique to obtain the fourth image includes: selecting the sharpest frame among the single-exposure multi-frame images as a reference frame; aligning the remaining frames to the reference frame based on features to obtain aligned multi-frame images; and performing weighted fusion of the aligned multi-frame images based on spatial-domain information to obtain the fourth image. Usually, the multi-frame color RGB images finally obtained by the user with an electronic device are produced from the Bayer data collected by the sensor through a demosaicing algorithm, which essentially up-samples under-sampled RGB data to full-size RGB data; this inevitably introduces spurious information, leading to lower resolution and higher noise. The multi-frame super-resolution fusion technique works on the Bayer data instead: the sharpest frame of the single-exposure multi-frame images is selected as the reference frame and the remaining frames are aligned to it. When single-exposure multi-frame images are captured, there is random displacement between frames; the real channel information carried by the remaining, slightly displaced frames at the channel positions of the reference frame can supplement the corresponding channel information missing from the under-sampled Bayer data. The aligned multi-frame images are then fused with spatial-domain weighting to obtain the fourth image, yielding a final full-size or even higher-resolution RGB image. This improves the resolution of single-exposure multi-frame fusion and also provides a certain denoising effect.
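The sketch below illustrates the fixed-reference idea in a simplified form on single-channel frames that are already in the image domain, whereas the technique described above works on Bayer data. The Laplacian-variance sharpness score, the translation-only ECC alignment and the Gaussian agreement weights are assumptions chosen for brevity, not the actual algorithm of this disclosure.

```python
import cv2
import numpy as np

def sharpness(img: np.ndarray) -> float:
    # Variance of the Laplacian as a simple sharpness score.
    return cv2.Laplacian(img, cv2.CV_64F).var()

def align_to(ref: np.ndarray, mov: np.ndarray) -> np.ndarray:
    # Translation-only ECC alignment; a real pipeline would use richer feature-based alignment.
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    _, warp = cv2.findTransformECC(ref, mov, warp, cv2.MOTION_TRANSLATION, criteria, None, 5)
    h, w = ref.shape
    return cv2.warpAffine(mov, warp, (w, h), flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

def fuse_fixed_reference(frames):
    """frames: same-size single-channel uint8 images captured with identical exposure."""
    frames = [f.astype(np.float32) for f in frames]
    ref = max(frames, key=sharpness)                      # sharpest frame as reference
    aligned = [ref] + [align_to(ref, f) for f in frames if f is not ref]
    # Spatial weights: down-weight pixels that disagree with the reference,
    # which suppresses misaligned or moving content.
    weights = [np.exp(-((a - ref) ** 2) / (2.0 * 10.0 ** 2)) for a in aligned]
    fused = sum(w * a for w, a in zip(weights, aligned)) / (sum(weights) + 1e-6)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

Weighting each aligned frame by its local agreement with the reference is one simple way to realise spatial-domain weighted fusion without letting residual misalignment leak into the result.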
Optionally, in an embodiment of the present invention, the multi-frame fusion of the single-exposure multi-frame images using the multi-frame image super-resolution technique to obtain the fourth image may be performed in an align-while-fusing manner, which specifically includes: sorting the single-exposure multi-frame images by sharpness and selecting the first frame as the reference frame; aligning the reference frame to the second frame based on features, fusing it into the second frame with spatial-domain weighting, and updating the fused image as the reference frame; and repeating the feature-based alignment, the spatial-domain weighted fusion and the reference-frame updating in sequence until the last frame has been aligned and fused, obtaining the fourth image. During the fusion of each input frame of the single-exposure multi-frame images, the fusion weight may be computed between each input frame and the first frame, or between each input frame and the updated reference frame. Because this align-while-fusing multi-frame fusion method does not use a fixed reference frame, the error of the previous stage is cleared when the reference frame is replaced during fusion, which avoids error propagation and stage-by-stage amplification and ensures processing accuracy.
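A minimal sketch of this align-while-fusing flow follows. It assumes some alignment routine align_fn(ref, mov) that returns mov aligned to ref (the align_to helper from the previous sketch would do) and uses a fixed blending weight in place of the spatial-domain weighting described above.

```python
import numpy as np

def gradient_sharpness(img: np.ndarray) -> float:
    # Mean squared gradient magnitude as a simple sharpness score.
    gy, gx = np.gradient(img.astype(np.float32))
    return float((gx ** 2 + gy ** 2).mean())

def fuse_progressive(frames, align_fn, blend: float = 0.5):
    """Fold the frames together one by one, updating the reference after every step."""
    ordered = sorted(frames, key=gradient_sharpness, reverse=True)  # sharpest first
    reference = ordered[0].astype(np.float32)
    for nxt in ordered[1:]:
        nxt = nxt.astype(np.float32)
        # Align the current reference to the next frame, fuse the two, and let
        # the fused result become the new reference (no fixed reference frame).
        aligned_ref = align_fn(nxt, reference)
        reference = blend * aligned_ref + (1.0 - blend) * nxt
    return np.clip(reference, 0, 255).astype(np.uint8)
```

Because the reference is replaced after every fusion step, an alignment error made early on does not persist into later steps, which is the point made above about avoiding error propagation and stage-by-stage amplification.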
Optionally, in an embodiment of the present invention, performing single-frame dynamic range expansion on the fourth image to obtain the first image includes: constructing a logarithmic luminance pyramid from the fourth image to obtain ambient light luminance at different scales; combining the ambient light luminance at the different scales and performing layer-by-layer down-sampling, reconstruction and mapping of the fourth image according to the logarithmic luminance pyramid to obtain the logarithmic surface reflectance of the object at each pixel position; and mapping the logarithmic surface reflectance to the image value range using local mean and mean-square-deviation information to obtain the first image. According to the Retinex (retina-and-cortex) theory, the image perceived by the human eye is the product of the ambient illumination and the reflection from object surfaces; completely removing the ambient illumination component from the image while retaining the reflection component restores the original information of the object and achieves image enhancement. In the logarithmic domain this product relationship becomes an additive one, so the ambient illumination component can be removed entirely from the image while the object reflection component is retained, enhancing the visual quality of the image.
From the input multi-frame-fused fourth image, a logarithmic luminance pyramid is constructed, and the ambient light luminance is approximated by convolving the fourth image with Gaussian kernels at different scales, thereby obtaining the ambient light luminance at each scale. On the principle of completely removing the ambient illumination component from the image while retaining the reflection component to restore the original information of the object, the fourth image is down-sampled, reconstructed and mapped layer by layer according to the logarithmic luminance pyramid in combination with the ambient light luminance at the different scales, obtaining the logarithmic surface reflectance at each pixel position. Using local mean and mean-square-deviation information, the logarithmic surface reflectance is mapped from the logarithmic domain back to the normal image value range to obtain the final brightness-adjusted image, that is, the first image. In another embodiment, for extremely dark scenes, before the single-frame dynamic range expansion the fourth image is first linearly brightened to obtain a fifth image, the fourth and fifth images are fused with a Laplacian pyramid, and the single-frame dynamic range expansion is then performed. Because the fourth and fifth images actually involved in the fusion are derived from the same fourth-image frame, no alignment is needed and no ghosting occurs. Thus, whether in a normal scene or an extremely dark environment, the dark regions of the fused single frame can be brightened, its local contrast improved and its saturation adjusted, while avoiding the ghosting or synthesis artifacts that multi-frame dynamic range expansion might introduce.
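By way of illustration only, the sketch below approximates the multi-scale log-luminance idea with Gaussian blurs of the log luminance at several scales and remaps the resulting log reflectance with simple statistics. The scale set, the use of global rather than local mean and standard-deviation statistics, and the output mapping are all assumptions made for the example.

```python
import cv2
import numpy as np

def expand_dynamic_range(luma_u8: np.ndarray, sigmas=(15, 80, 250)) -> np.ndarray:
    """Single-frame, Retinex-style dynamic range expansion on a luminance channel."""
    log_l = np.log(luma_u8.astype(np.float32) + 1.0)      # log luminance, avoid log(0)
    reflect = np.zeros_like(log_l)
    for sigma in sigmas:
        # Ambient illumination at this scale, approximated by a Gaussian blur of
        # the log luminance (one level of the multi-scale log-luminance estimate).
        illum = cv2.GaussianBlur(log_l, (0, 0), sigma)
        reflect += log_l - illum                           # log surface reflectance at this scale
    reflect /= len(sigmas)
    # Map the log reflectance back into the 8-bit image value range using
    # mean / standard-deviation statistics.
    mean, std = float(reflect.mean()), float(reflect.std()) + 1e-6
    out = (reflect - mean) / (3.0 * std) * 127.5 + 127.5
    return np.clip(out, 0, 255).astype(np.uint8)
```

For the very-dark-scene variant described above, the same routine would simply be applied after the linearly brightened copy has been Laplacian-pyramid fused with the original frame.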
Referring to FIG. 3, which is a flowchart of an optional image processing method according to an embodiment of the present invention. For the fused image obtained from the single-exposure multi-frame fusion, this method can recover more detail of the target image. Compared with FIG. 1, the image processing method further includes the following steps:
S16: performing image enhancement on the target area of the first image according to a pre-trained enhancement model to obtain a second image containing a clear target area;
S18: performing fusion processing on the first image and the second image to obtain a third image.
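As a minimal illustration of a fusion like step S18, the sketch below feathers a mask of the target area and blends the enhanced second image into the first image inside that mask. The availability of a target-area mask and the feather width are assumptions made for the example, not details taken from this disclosure.

```python
import cv2
import numpy as np

def fuse_enhanced_region(first: np.ndarray, second: np.ndarray,
                         target_mask: np.ndarray, feather: int = 21) -> np.ndarray:
    """first/second: same-size uint8 images; target_mask: uint8, 255 inside the target area.
    feather must be an odd, positive kernel size."""
    soft = cv2.GaussianBlur(target_mask.astype(np.float32) / 255.0, (feather, feather), 0)
    if first.ndim == 3:                      # broadcast the mask over colour channels
        soft = soft[..., None]
    fused = soft * second.astype(np.float32) + (1.0 - soft) * first.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

Feathering the mask avoids a visible seam at the boundary between the enhanced target area and the rest of the image.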
Step S16 is described in detail below. Referring to FIG. 4, which is a flowchart of an optional image enhancement method according to an embodiment of the present invention.
In the embodiment of the present invention, step S16, performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain the second image containing a clear target area, includes:
S162: training with a constructed sample training set to obtain the pre-trained enhancement model;
S164: performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain the second image containing a clear target area.
Optionally, in the embodiment of the present invention, training with the constructed sample training set to obtain the pre-trained enhancement model includes: obtaining a first-quality sample set collected by the electronic device and a second-quality sample set of the same location, where the image quality of the second-quality sample set is higher than that of the first-quality sample set; grouping the first-quality sample set and the second-quality sample set according to the different magnifications of the device, and aligning the grouped sample sets based on features to obtain a first sample set; and feeding the first sample set into an AI training engine to obtain the pre-trained enhancement model.
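The sketch below shows one way such a paired training set might be organised before being handed to a training engine: pair device captures with higher-quality references of the same scene, group the pairs by zoom factor, and align each pair. The file-naming scheme, the align_pair helper and the train call are hypothetical stand-ins for the feature-based alignment and the AI training engine mentioned above.

```python
from collections import defaultdict
from pathlib import Path

def build_training_groups(low_dir: str, high_dir: str):
    """Pair low/high-quality captures by filename and group them by zoom suffix,
    assuming names such as 'scene01_x5.png' where 'x5' encodes the magnification."""
    groups = defaultdict(list)
    for low_path in sorted(Path(low_dir).glob("*.png")):
        high_path = Path(high_dir) / low_path.name
        if not high_path.exists():
            continue                                  # no reference capture for this scene
        zoom = low_path.stem.split("_")[-1]           # e.g. 'x5'
        groups[zoom].append((low_path, high_path))
    return groups

# Hypothetical downstream use:
# for zoom, pairs in build_training_groups("device_shots", "reference_shots").items():
#     aligned = [align_pair(lo, hi) for lo, hi in pairs]   # feature-based alignment (assumed helper)
#     model = train(aligned)                               # stand-in for the AI training engine
```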
Optionally, in the embodiment of the present invention, the second-quality sample set of the same location includes high-quality image samples collected by a high-definition electronic device or high-quality image samples stored in a storage device.
Optionally, in the embodiment of the present invention, the grouped sample sets are aligned based on features, where the features include features such as the color, brightness and angle of the image groups in the first-quality sample set and the second-quality sample set, or auxiliary calibration-point features around the image groups.
Referring to FIG. 5, which is a structural block diagram of an optional image processing apparatus according to an embodiment of the present invention. As shown in FIG. 5, the image processing apparatus 4 includes:
an acquisition unit 42, configured to acquire single-exposure multi-frame images according to the first exposure parameter, the single-exposure multi-frame images being images that contain the target area and share the same exposure parameter; and
a first fusion unit 44, configured to synthesize the single-exposure multi-frame images to obtain a first image.
The image processing apparatus 4 may further include:
an enhancement unit 46, configured to perform image enhancement on the first image according to a pre-trained enhancement model to obtain a second image containing a clear target; and
a second fusion unit 48, configured to perform fusion processing on the first image and the second image to obtain a third image.
In an optional embodiment, the image processing apparatus 4 may further include a detection unit 40, configured to determine the first exposure parameter; specifically, the first exposure parameter may be determined by metering the target area in the preview image, or selected within a preset range.
Optionally, in the embodiment of the present invention, the detection unit 40 may include: a detection module 400, configured to perform feature detection in the preview image to obtain the target area; a metering module 402, configured to meter the target area to obtain the average brightness of the target area; and a determination module 404, configured to determine the first exposure parameter according to the average brightness of the target area. Alternatively, those skilled in the art may preset a range of exposure parameters according to factors such as the characteristics of the shooting target and the background environment in which the target is located, and select the first exposure parameter within that range.
Optionally, in the embodiment of the present invention, the first fusion unit 44 may include:
a fusion module 442, configured to perform multi-frame fusion on the single-exposure multi-frame images using a multi-frame image super-resolution technique to obtain a fourth image; and
an expansion module 444, configured to perform single-frame dynamic range expansion on the fourth image to obtain the first image.
Optionally, in an embodiment of the present invention, the fusion module 442 includes: a reference sub-module 4422, configured to select the sharpest frame among the single-exposure multi-frame images as a reference frame; an alignment sub-module 4424, configured to align the remaining frames of the single-exposure multi-frame images to the reference frame based on features to obtain aligned multi-frame images; and a fusion sub-module 4426, configured to perform weighted fusion of the aligned multi-frame images based on spatial-domain information to obtain the fourth image. Usually, the multi-frame color RGB images finally obtained by the user with an electronic device are produced from the Bayer data collected by the sensor through a demosaicing algorithm, which essentially up-samples under-sampled RGB data to full-size RGB data; this inevitably introduces spurious information, leading to lower resolution and higher noise. The multi-frame super-resolution fusion technique works on the Bayer data instead: the sharpest frame of the single-exposure multi-frame images is selected as the reference frame and the remaining frames are aligned to it. When single-exposure multi-frame images are captured, there is random displacement between frames; the real channel information carried by the remaining, slightly displaced frames at the channel positions of the reference frame can supplement the corresponding channel information missing from the under-sampled Bayer data. The aligned multi-frame images are then fused with spatial-domain weighting to obtain the fourth image, yielding a final full-size or even higher-resolution RGB image. This improves the resolution of single-exposure multi-frame fusion and also provides a certain denoising effect.
Optionally, in an embodiment of the present invention, the fusion module 442 includes: a first processing sub-module 4423, configured to sort the single-exposure multi-frame images by sharpness and select the first frame as a reference frame; a second processing sub-module 4425, configured to align the reference frame to the second frame based on features, fuse it into the second frame with spatial-domain weighting, and update the fused image as the reference frame; and a third processing sub-module 4427, configured to perform the feature-based alignment, the spatial-domain weighted fusion and the reference-frame updating in sequence until the last frame has completed the feature-based alignment and the spatial-domain weighted fusion, obtaining the fourth image. During the fusion of each input frame of the single-exposure multi-frame images, the fusion weight may be computed between each input frame and the first frame, or between each input frame and the updated reference frame. Because this align-while-fusing arrangement does not use a fixed reference frame, the error of the previous stage is cleared in the process of updating the reference frame, which avoids error propagation and stage-by-stage amplification and ensures processing accuracy.
Optionally, in an embodiment of the present invention, the expansion module 444 is further configured to construct a logarithmic luminance pyramid from the fourth image to obtain ambient light luminance at different scales; to combine the ambient light luminance at the different scales and perform layer-by-layer down-sampling, reconstruction and mapping of the fourth image according to the logarithmic luminance pyramid to obtain the logarithmic surface reflectance of the object at each pixel position; and to map the logarithmic surface reflectance to the image value range using local mean and mean-square-deviation information to obtain the first image. According to the Retinex (retina-and-cortex) theory, the image perceived by the human eye is the product of the ambient illumination and the reflection from object surfaces; completely removing the ambient illumination component from the image while retaining the reflection component restores the original information of the object and achieves image enhancement. In the logarithmic domain this product relationship becomes an additive one, so after the ambient illumination component is obtained it can be subtracted entirely from the image while the object reflection component is retained, enhancing the visual quality of the image.
From the input multi-frame-fused fourth image, a logarithmic luminance pyramid is constructed, and the ambient light luminance is approximated by convolving the fourth image with Gaussian kernels at different scales, thereby obtaining the ambient light luminance at each scale. On the principle of completely removing the ambient illumination component from the image while retaining the reflection component to restore the original information of the object, the fourth image is down-sampled, reconstructed and mapped layer by layer according to the logarithmic luminance pyramid in combination with the ambient light luminance at the different scales, obtaining the logarithmic surface reflectance at each pixel position. Using local mean and mean-square-deviation information, the logarithmic surface reflectance is mapped from the logarithmic domain back to the normal image value range to obtain the final brightness-adjusted image, that is, the first image. In another embodiment, for extremely dark scenes, before the single-frame dynamic range expansion the fourth image is first linearly brightened to obtain a fifth image, the fourth and fifth images are fused with a Laplacian pyramid, and the single-frame dynamic range expansion is then performed. Because the fourth and fifth images actually involved in the fusion are derived from the same frame, no alignment is needed and no ghosting occurs. Thus, whether in a normal scene or an extremely dark environment, the dark regions of the fused single frame can be brightened, its local contrast improved and its saturation adjusted, while avoiding the ghosting or synthesis artifacts that multi-frame dynamic range expansion might introduce.
Optionally, in the embodiment of the present invention, the enhancement unit 46 may include:
a construction module 462, configured to train with a constructed sample training set to obtain the pre-trained enhancement model; and
an enhancement module 464, configured to perform image enhancement on the first image according to the pre-trained enhancement model to obtain the second image containing a clear target area.
Optionally, in the embodiment of the present invention, the construction module 462 may be configured to: obtain a first-quality sample set collected by the electronic device and a second-quality sample set of the same location, the image quality of the second-quality sample set being higher than that of the first-quality sample set; group the first-quality sample set and the second-quality sample set according to the different magnifications of the device and align the grouped sample sets based on features to obtain a first sample set; and feed the first sample set into an AI training engine to obtain the pre-trained enhancement model.
Optionally, in the embodiment of the present invention, the second-quality sample set of the same location includes high-quality image samples collected by a high-definition electronic device or high-quality image samples stored in a storage device.
Optionally, in the embodiment of the present invention, the grouped sample sets are aligned based on features, where the features include features such as the color, brightness and angle of the image groups in the first-quality sample set and the second-quality sample set, or auxiliary calibration-point features around the image groups.
According to another aspect of the embodiments of the present invention, a storage medium is further provided. The storage medium includes a stored program, and the program, when executed, performs any one of the image processing methods described above.
According to another aspect of the embodiments of the present invention, a processor is further provided. The processor is configured to run a program, and the program, when running, performs any one of the image processing methods described above.
It should be noted that the above image processing method and image processing apparatus are not limited to photographing the moon; they are also applicable to photographing distant objects that contrast clearly with their background, such as the sun, lighthouses, stars or lights on buildings, and in each case they solve the technical problems of severe loss of detail, ghosting and insufficient dynamic range in the captured images.
The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis. For parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are only illustrative. For example, the division of the units may be a division by logical function, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disc.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.

Claims (24)

1. An image processing method for an electronic device having a camera unit, the method comprising:
    acquiring single-exposure multi-frame images according to a first exposure parameter, the single-exposure multi-frame images being images that contain a target area and share the same exposure parameter;
    synthesizing the single-exposure multi-frame images to obtain a first image.
2. The method according to claim 1, wherein the first exposure parameter is determined by metering the target area in a preview image, or the first exposure parameter is selected within a preset range.
3. The method according to claim 1, wherein the method further comprises:
    performing image enhancement on the target area of the first image according to a pre-trained enhancement model to obtain a second image containing a clear target area;
    performing fusion processing on the first image and the second image to obtain a third image.
4. The method according to claim 2, wherein metering the target area in the preview image to determine the first exposure parameter comprises:
    performing feature detection in the preview image to obtain the target area;
    metering the target area to obtain an average brightness of the target area;
    determining the first exposure parameter according to the average brightness of the target area.
5. The method according to claim 1, wherein synthesizing the single-exposure multi-frame images to obtain the first image further comprises:
    performing multi-frame fusion on the single-exposure multi-frame images using a multi-frame image super-resolution technique to obtain a fourth image;
    performing single-frame dynamic range expansion on the fourth image to obtain the first image.
6. The method according to claim 5, wherein performing multi-frame fusion on the single-exposure multi-frame images using the multi-frame image super-resolution technique to obtain the fourth image comprises:
    selecting the sharpest frame among the single-exposure multi-frame images as a reference frame;
    aligning the remaining frames of the single-exposure multi-frame images to the reference frame based on features to obtain aligned multi-frame images;
    performing weighted fusion of the aligned multi-frame images based on spatial-domain information to obtain the fourth image.
7. The method according to claim 5, wherein performing multi-frame fusion on the single-exposure multi-frame images using the multi-frame image super-resolution technique to obtain the fourth image further comprises:
    sorting the single-exposure multi-frame images by sharpness, and selecting the first frame as a reference frame;
    aligning the reference frame to the second frame based on features, fusing it into the second frame with spatial-domain weighting, and updating the fused image as the reference frame;
    performing the feature-based alignment, the spatial-domain weighted fusion and the updating of the reference frame in sequence until the last frame has completed the feature-based alignment and the spatial-domain weighted fusion, to obtain the fourth image.
8. The image processing method according to claim 5, wherein performing single-frame dynamic range expansion on the fourth image to obtain the first image comprises:
    constructing a logarithmic luminance pyramid from the fourth image to obtain ambient light luminance at different scales;
    combining the ambient light luminance at the different scales, and performing layer-by-layer down-sampling, reconstruction and mapping of the fourth image according to the logarithmic luminance pyramid to obtain a logarithmic surface reflectance of the object at each pixel position;
    mapping the logarithmic surface reflectance to an image value range using local mean and mean-square-deviation information to obtain the first image.
9. The method according to claim 3, wherein performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain the second image containing the clear target area comprises: training with a constructed sample training set to obtain the pre-trained enhancement model; and performing image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain the second image containing the clear target area.
10. The method according to claim 9, wherein training with the constructed sample training set to obtain the pre-trained enhancement model comprises:
    obtaining a first-quality sample set collected by an electronic device, and obtaining a second-quality sample set of the same location, the image quality of the second-quality sample set being higher than that of the first-quality sample set;
    grouping the first-quality sample set and the second-quality sample set according to different magnifications of the device, and aligning the grouped sample sets based on features to obtain a first sample set;
    feeding the first sample set into an AI training engine to obtain the pre-trained enhancement model.
11. The method according to claim 10, wherein the second-quality sample set comprises: high-quality image samples collected by a high-definition electronic device, or high-quality image samples stored in a storage device.
12. The method according to claim 10, wherein the grouped sample sets are aligned based on features, and the features comprise: features of the image groups in the first-quality sample set and the second-quality sample set, or auxiliary calibration-point features around the image groups.
13. An image processing apparatus, comprising:
    an acquisition unit, configured to acquire single-exposure multi-frame images according to the first exposure parameter, the single-exposure multi-frame images being images that contain a target area and share the same exposure parameter;
    a first fusion unit, configured to synthesize the single-exposure multi-frame images to obtain a first image.
14. The apparatus according to claim 13, wherein the image processing apparatus further comprises a detection unit, configured to determine the first exposure parameter by metering the target area in a preview image, or to select the first exposure parameter within a preset range.
15. The apparatus according to claim 13, wherein the apparatus further comprises:
    an enhancement unit, configured to perform image enhancement on the target area of the first image according to a pre-trained enhancement model to obtain a second image containing a clear target area;
    a second fusion unit, configured to perform fusion processing on the first image and the second image to obtain a third image.
16. The apparatus according to claim 14, wherein the detection unit comprises:
    a detection module, configured to perform feature detection in the preview image to obtain the target area;
    a metering module, configured to meter the target area to obtain an average brightness of the target area;
    a determination module, configured to determine the first exposure parameter according to the average brightness of the target area.
17. The apparatus according to claim 13, wherein the first fusion unit comprises:
    a fusion module, configured to perform multi-frame fusion on the single-exposure multi-frame images using a multi-frame image super-resolution technique to obtain a fourth image;
    an expansion module, configured to perform single-frame dynamic range expansion on the fourth image to obtain the first image.
18. The apparatus according to claim 17, wherein the fusion module further comprises:
    a reference sub-module, configured to select the sharpest frame among the single-exposure multi-frame images as a reference frame;
    an alignment sub-module, configured to align the remaining frames of the single-exposure multi-frame images to the reference frame based on features to obtain aligned multi-frame images;
    a fusion sub-module, configured to perform weighted fusion of the aligned multi-frame images based on spatial-domain information to obtain the fourth image.
19. The apparatus according to claim 17, wherein the fusion module further comprises:
    a first processing sub-module, configured to sort the single-exposure multi-frame images by sharpness, and select the first frame as a reference frame;
    a second processing sub-module, configured to align the reference frame to the second frame based on features, fuse it into the second frame with spatial-domain weighting, and update the fused image as the reference frame;
    a third processing sub-module, configured to perform the feature-based alignment, the spatial-domain weighted fusion and the updating of the reference frame in sequence until the last frame has completed the feature-based alignment and the spatial-domain weighted fusion, to obtain the fourth image.
20. The apparatus according to claim 17, wherein the expansion module is further configured to: construct a logarithmic luminance pyramid from the fourth image to obtain ambient light luminance at different scales; combine the ambient light luminance at the different scales, and perform layer-by-layer down-sampling, reconstruction and mapping of the fourth image according to the logarithmic luminance pyramid to obtain a logarithmic surface reflectance of the object at each pixel position; and map the logarithmic surface reflectance to an image value range using local mean and mean-square-deviation information to obtain the first image.
21. The apparatus according to claim 15, wherein the enhancement unit comprises:
    a construction module, configured to train with a constructed sample training set to obtain the pre-trained enhancement model;
    an enhancement module, configured to perform image enhancement on the target area of the first image according to the pre-trained enhancement model to obtain the second image containing the clear target area.
22. The apparatus according to claim 21, wherein the construction module further comprises:
    a fourth processing sub-module, configured to obtain a first-quality sample set collected by an electronic device, and obtain a second-quality sample set of the same location, the image quality of the second-quality sample set being higher than that of the first-quality sample set;
    a fifth processing sub-module, configured to group the first-quality sample set and the second-quality sample set according to different magnifications of the device, and align the grouped sample sets based on features to obtain a first sample set;
    a sixth processing sub-module, configured to feed the first sample set into an AI training engine to obtain the pre-trained enhancement model.
23. A storage medium, wherein the storage medium comprises a stored program, and the program, when executed, performs the image processing method according to any one of claims 1 to 12.
24. A processor, wherein the processor is configured to run a program, and the program, when running, performs the image processing method according to any one of claims 1 to 12.
PCT/CN2021/093188 2020-07-27 2021-05-12 Image processing method and image processing apparatus WO2022021999A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010731900.9 2020-07-27
CN202010731900.9A CN113992861B (en) 2020-07-27 2020-07-27 Image processing method and image processing device

Publications (1)

Publication Number Publication Date
WO2022021999A1 true WO2022021999A1 (en) 2022-02-03

Family

ID=79731474

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/093188 WO2022021999A1 (en) 2020-07-27 2021-05-12 Image processing method and image processing apparatus

Country Status (2)

Country Link
CN (1) CN113992861B (en)
WO (1) WO2022021999A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115118859A (en) * 2022-06-27 2022-09-27 联想(北京)有限公司 Electronic device and processing method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102008034979B4 (en) * 2008-07-25 2011-07-07 EADS Deutschland GmbH, 85521 Method and device for generating error-reduced high-resolution and contrast-enhanced images
JP5544764B2 (en) * 2009-06-09 2014-07-09 ソニー株式会社 Image processing apparatus and method, and program
JP6495122B2 (en) * 2015-07-02 2019-04-03 オリンパス株式会社 Imaging apparatus and image processing method
JP6495126B2 (en) * 2015-07-13 2019-04-03 オリンパス株式会社 Imaging apparatus and image processing method
CN106657714B (en) * 2016-12-30 2020-05-15 杭州当虹科技股份有限公司 Method for improving high dynamic range video watching experience
CN107566739B (en) * 2017-10-18 2019-12-06 维沃移动通信有限公司 photographing method and mobile terminal
CN110691199A (en) * 2019-10-10 2020-01-14 厦门美图之家科技有限公司 Face automatic exposure method and device, shooting equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111147739A (en) * 2018-03-27 2020-05-12 华为技术有限公司 Photographing method, photographing device and mobile terminal
US20190335077A1 (en) * 2018-04-25 2019-10-31 Ocusell, LLC Systems and methods for image capture and processing
CN110248098A (en) * 2019-06-28 2019-09-17 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110572584A (en) * 2019-08-26 2019-12-13 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114630050A (en) * 2022-03-25 2022-06-14 展讯半导体(南京)有限公司 Photographing method, device, medium and terminal equipment
CN116245741A (en) * 2022-06-28 2023-06-09 荣耀终端有限公司 Image processing method and related device
CN116245741B (en) * 2022-06-28 2023-11-17 荣耀终端有限公司 Image processing method and related device
CN115661437B (en) * 2022-10-20 2024-01-26 陕西学前师范学院 Image processing system and method
CN115661437A (en) * 2022-10-20 2023-01-31 陕西学前师范学院 Image processing system and method
CN115409754B (en) * 2022-11-02 2023-03-24 深圳深知未来智能有限公司 Multi-exposure image fusion method and system based on image area validity
CN115409754A (en) * 2022-11-02 2022-11-29 深圳深知未来智能有限公司 Multi-exposure image fusion method and system based on image area validity
CN115908190B (en) * 2022-12-08 2023-10-13 南京图格医疗科技有限公司 Method and system for enhancing image quality of video image
CN115908190A (en) * 2022-12-08 2023-04-04 南京图格医疗科技有限公司 Method and system for enhancing image quality of video image
CN116342449A (en) * 2023-03-29 2023-06-27 银河航天(北京)网络技术有限公司 Image enhancement method, device and storage medium
CN116342449B (en) * 2023-03-29 2024-01-16 银河航天(北京)网络技术有限公司 Image enhancement method, device and storage medium
CN116630220A (en) * 2023-07-25 2023-08-22 江苏美克医学技术有限公司 Fluorescent image depth-of-field fusion imaging method, device and storage medium
CN116630220B (en) * 2023-07-25 2023-11-21 江苏美克医学技术有限公司 Fluorescent image depth-of-field fusion imaging method, device and storage medium
CN117615257A (en) * 2024-01-18 2024-02-27 常州微亿智造科技有限公司 Imaging method, device, medium and equipment
CN117615257B (en) * 2024-01-18 2024-04-05 常州微亿智造科技有限公司 Imaging method, device, medium and equipment

Also Published As

Publication number Publication date
CN113992861A (en) 2022-01-28
CN113992861B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
WO2022021999A1 (en) Image processing method and image processing apparatus
WO2020192483A1 (en) Image display method and device
CN108898567B (en) Image noise reduction method, device and system
Li et al. Selectively detail-enhanced fusion of differently exposed images with moving objects
JP6159298B2 (en) Method for detecting and removing ghost artifacts in HDR image processing using multi-scale normalized cross-correlation
CN107220931B (en) High dynamic range image reconstruction method based on gray level mapping
KR101643607B1 (en) Method and apparatus for generating of image data
Tursun et al. An objective deghosting quality metric for HDR images
CN109754377B (en) Multi-exposure image fusion method
CN107948500A (en) Image processing method and device
CN108259770B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108024058B (en) Image blurs processing method, device, mobile terminal and storage medium
CN111028165B (en) High-dynamic image recovery method for resisting camera shake based on RAW data
Várkonyi-Kóczy et al. Gradient-based synthesized multiple exposure time color HDR image
CN111028276A (en) Image alignment method and device, storage medium and electronic equipment
CN112785534A (en) Ghost-removing multi-exposure image fusion method in dynamic scene
US20220343470A1 (en) Correcting Dust and Scratch Artifacts in Digital Images
CN113034417A (en) Image enhancement system and image enhancement method based on generation countermeasure network
CN115641391A (en) Infrared image colorizing method based on dense residual error and double-flow attention
CN115883755A (en) Multi-exposure image fusion method under multi-type scene
Liu et al. Joint hdr denoising and fusion: A real-world mobile hdr image dataset
CN108401109B (en) Image acquisition method and device, storage medium and electronic equipment
CN107454328B (en) Image processing method, device, computer readable storage medium and computer equipment
CN113628134A (en) Image noise reduction method and device, electronic equipment and storage medium
CN116017172A (en) Raw domain image noise reduction method and device, camera and terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21849579

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21849579

Country of ref document: EP

Kind code of ref document: A1