CN115049572A - Image processing method, image processing device, electronic equipment and computer readable storage medium - Google Patents
- Publication number
- CN115049572A CN115049572A CN202210740741.8A CN202210740741A CN115049572A CN 115049572 A CN115049572 A CN 115049572A CN 202210740741 A CN202210740741 A CN 202210740741A CN 115049572 A CN115049572 A CN 115049572A
- Authority
- CN
- China
- Prior art keywords
- image
- brightness
- motion
- exposure
- motion area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Facsimile Image Signal Circuits (AREA)
Abstract
The application relates to an image processing method, an apparatus, an electronic device, a storage medium, and a computer program product. The method comprises the following steps: acquiring a first image and a second image, where the exposure of the first image is less than that of the second image; enhancing the brightness of the first image to obtain a third image; and fusing the first image, the second image, and the third image to obtain a target image. The method improves the accuracy of image processing.
Description
Technical Field
The present application relates to the field of image technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of image technology, electronic devices capture multiple images and fuse them in order to obtain a clearer image.
However, images obtained by fusing multiple frames often contain ghosting artifacts, so the image processing is not accurate.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, electronic equipment, a computer readable storage medium and a computer program product, which can improve the accuracy of image processing and obtain a clearer target image.
In a first aspect, the present application provides an image processing method. The method comprises the following steps:
acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
enhancing the brightness of the first image to obtain a third image;
and fusing the first image, the second image and the third image to obtain a target image.
In a second aspect, the present application further provides an image processing apparatus. The device comprises:
the acquisition module is used for acquiring a first image and a second image; the exposure of the first image is smaller than the exposure of the second image;
the brightness improving module is used for enhancing the brightness of the first image to obtain a third image;
and the fusion module is used for fusing the first image, the second image and the third image to obtain a target image.
In a third aspect, the present application further provides an electronic device. The electronic device comprises a memory and a processor, the memory stores a computer program, and the processor realizes the following steps when executing the computer program:
acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
enhancing the brightness of the first image to obtain a third image;
and fusing the first image, the second image and the third image to obtain a target image.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
enhancing the brightness of the first image to obtain a third image;
and fusing the first image, the second image and the third image to obtain a target image.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
enhancing the brightness of the first image to obtain a third image;
and fusing the first image, the second image and the third image to obtain a target image.
With the image processing method, image processing apparatus, electronic device, computer-readable storage medium, and computer program product, a first image and a second image are acquired, the exposure of the first image being less than that of the second image. The brightness of the first image is enhanced to obtain a third image, which is bright and free of ghosting in overexposed areas. The first, second, and third images are then fused to obtain a ghost-free target image. This avoids the motion ghosting that appears in overexposed areas after fusion when the image with the larger exposure is used as the reference frame, and thereby improves the accuracy of image processing.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flow diagram of a method of image processing in one embodiment;
FIG. 2 is a schematic diagram of a first image in one embodiment;
FIG. 3 is a diagram of a second image in one embodiment;
FIG. 4 is a schematic illustration of a third image in one embodiment;
FIG. 5 is a flow diagram illustrating the calculation of an optical flow field in one embodiment;
FIG. 6 is a diagram of motion compensation in one embodiment;
FIG. 7 is a flow diagram of obtaining a fourth image in one embodiment;
FIG. 8 is a flow diagram of image acquisition in one embodiment;
FIG. 9 is a flow diagram of image processing in one embodiment;
FIG. 10 is a schematic illustration of an image obtained by a conventional method in one embodiment;
FIG. 11 is a diagram illustrating a target image obtained by the image processing method in one embodiment;
FIG. 12 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
FIG. 13 is a diagram illustrating the internal architecture of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, an image processing method is provided. The method is described as applied to an electronic device, which may be a terminal; it is understood that the method may also be applied to a server, or to a system comprising a terminal and a server, implemented through interaction between the two. The terminal may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, Internet-of-Things device, or portable wearable device. The Internet-of-Things device may be a smart speaker, smart television, smart air conditioner, smart vehicle-mounted device, or the like; the portable wearable device may be a smart watch, smart bracelet, head-mounted device, or the like. The server may be implemented as a stand-alone server or as a server cluster consisting of multiple servers.
In this embodiment, the method includes the following steps 102 to 106.
Step 102: acquire a first image and a second image; the exposure amount of the first image is smaller than that of the second image.
The exposure amount is the integral over time t of the illuminance E of the light received by a surface element of the object: exposure amount = illuminance × exposure duration. The illuminance is determined by the aperture, and the exposure duration is controlled by the shutter; the exposure amount is therefore jointly controlled by the aperture and the shutter.
The exposure of the first image is less than the exposure of the second image, i.e. the first image is a dark frame or an underexposed image relative to the second image. Optionally, the first image is an underexposed image (EV-) and the second image is a normally exposed image (EV0).
Step 104: enhance the brightness of the first image to obtain a third image.
The third image is the luminance-enhanced first image.
Optionally, the electronic device obtains target brightness, and adjusts the brightness of the first image to the target brightness to obtain a third image; the target brightness is greater than the brightness of the first image.
Fig. 2 is a schematic diagram of the first image and fig. 3 of the second image. The exposure of the first image is smaller than that of the second image, so the first image is darker, while a ghost exists in the moving area of the second image. Fig. 4 is a schematic diagram of the third image, which is bright and free of ghosting.
Step 106: fuse the first image, the second image, and the third image to obtain a target image.
In an alternative embodiment, the electronic device may use a high dynamic range imaging fusion algorithm to fuse the first image, the second image, and the third image to obtain the target image.
In another optional implementation, the electronic device uses the third image as a reference frame, aligns the first image, the second image, and the third image, and then fuses the first image, the second image, and the third image to obtain a target image. The reference frame is an image frame used for performing alignment processing with other image frames in the image fusion process.
In other embodiments, the electronic device may also fuse the first image, the second image, and the third image in other manners, which is not limited herein.
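As a rough illustration of the first alternative, the well-exposedness-weighted blend below is a minimal stand-in for a full HDR fusion algorithm (such as Mertens-style exposure fusion); the Gaussian weighting around mid-grey and the `sigma` value are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def exposure_fusion(images, sigma=0.2):
    """Naive per-pixel exposure fusion: weight each frame by how close its
    pixels are to mid-grey, then blend. A simplified stand-in for a real
    HDR fusion algorithm; sigma is an assumed smoothing parameter."""
    stack = np.stack([im.astype(np.float32) / 255.0 for im in images])
    # Well-exposedness weight: highest near 0.5, low for under/over-exposed pixels.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-6
    weights /= weights.sum(axis=0, keepdims=True)
    fused = (weights * stack).sum(axis=0)
    return (fused * 255.0).astype(np.uint8)
```

With one dark, one mid, and one bright frame, the well-exposed mid frame dominates the result, which is the intended behaviour of exposure fusion.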
The image processing method acquires a first image and a second image, the exposure of the first image being less than that of the second image, and enhances the brightness of the first image to obtain a third image, which is bright and free of ghosting in overexposed areas. The first, second, and third images are then fused to obtain a ghost-free target image. This avoids the motion ghosting that appears in overexposed areas after fusion when the image with the larger exposure is used as the reference frame, and thereby improves the accuracy of image processing.
In one embodiment, fusing the first image, the second image and the third image to obtain a target image, includes: taking the third image as a reference frame, and fusing the second image and the third image to obtain a fourth image; and fusing the fourth image and the first image to obtain a target image.
In an optional implementation manner, the electronic device uses the third image as a reference frame, performs alignment processing on pixels of the second image and the third image, and averages the pixels after the alignment processing, thereby obtaining a fourth image.
In another optional implementation, the electronic device uses the third image as a reference frame, and fuses the motion region of the second image and the motion region of the third image, and fuses the non-motion region of the second image and the non-motion region of the third image, so as to obtain the fourth image.
In other embodiments, the electronic device may obtain the fourth image in other manners, which is not limited herein.
The electronic device uses an HDR (High Dynamic Range) imaging fusion algorithm to fuse the fourth image and the first image, obtaining a target image with a high dynamic range.
In this embodiment, the electronic device uses the third image as a reference frame, fuses the second image and the third image to obtain a fourth image, and then fuses the fourth image and the first image to obtain a target image from which a ghost is removed, so that the accuracy of image processing is improved.
In one embodiment, taking the third image as a reference frame, and fusing the second image and the third image to obtain a fourth image, includes: determining a motion area and a non-motion area in the second image and a motion area in the third image by taking the third image as a reference frame; fusing the motion area of the second image and the motion area of the third image to obtain a fused motion area; and obtaining a fourth image based on the fused motion area and the non-motion area of the second image.
Optionally, with the third image as a reference frame, the electronic device may determine the motion region in the second image and the motion region in the third image, and take the remaining areas of each image as their non-motion regions. It then fuses the motion region of the second image with the motion region of the third image to obtain a fused motion region, and splices the fused motion region with the non-motion region of the second image to obtain the fourth image.
In an alternative embodiment, the moving and non-moving areas in the second image and the moving area in the third image may be determined based on the optical flow field between the second image and the third image.
It can be understood that regions where an optical flow field exists between the second image and the third image are motion regions, and regions without optical flow, i.e. without motion, are non-motion regions.
In another optional implementation, the electronic device may compare pixel differences between the second image and the third image, and take the region composed of pixels whose difference exceeds a preset threshold as the motion region, thereby determining the motion and non-motion regions in the second image and the motion region in the third image. Further, after obtaining this motion region, the electronic device may apply a morphological operation, such as dilation or erosion, to obtain a refined motion region, and determine the motion and non-motion regions in the second image and the motion region in the third image based on the refined region.
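The pixel-difference approach can be sketched in NumPy as follows; the threshold value and the pure-NumPy 3×3 dilation are illustrative assumptions standing in for a real morphological operation (e.g. an OpenCV dilation with a structuring element):

```python
import numpy as np

def motion_mask(img_a, img_b, thresh=25):
    """Pixels whose absolute difference exceeds thresh are marked as motion.
    thresh is an assumed value, not specified by the patent."""
    diff = np.abs(img_a.astype(np.int16) - img_b.astype(np.int16))
    return diff > thresh

def dilate3x3(mask):
    """Morphological dilation with a 3x3 square element, via shifted maxima."""
    h, w = mask.shape
    padded = np.pad(mask, 1, mode="constant")
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= padded[dy:dy + h, dx:dx + w]
    return out
```

Dilating the raw difference mask closes small holes so that the motion region covers the whole moving object rather than isolated pixels.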
In other embodiments, the electronic device may determine the motion and non-motion regions in the second image and the motion region in the third image in other manners, which are not limited herein.
Optionally, after splicing the fused motion region with the non-motion region of the second image, the electronic device smooths the transition at the boundary between the two to obtain the fourth image.
Optionally, the electronic device fills the pixels of the non-motion region of the second image into the contour of the non-motion region of the third image to obtain a fused non-motion region. It can be understood that the contour of the non-motion region of the second image is the same as that of the third image; the fused non-motion region therefore consists of the second image's non-motion pixels inside that shared contour.
In the embodiment, a third image is taken as a reference frame, and a motion area and a non-motion area in the second image and a motion area in the third image are determined; fusing the motion area of the second image and the motion area of the third image to obtain a fused motion area; then, based on fusing the moving area and the non-moving area of the second image, an accurate fourth image can be obtained.
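Splicing the fused motion region with the non-motion region of the second image amounts to a masked selection. The sketch below assumes the fused motion region and the boolean motion mask have already been computed, and omits the boundary smoothing mentioned above:

```python
import numpy as np

def compose_fourth(second, fused_motion, motion_mask):
    """Take fused_motion pixels inside the motion mask and second-image
    pixels elsewhere. motion_mask is a boolean (H, W) array."""
    # Broadcast the 2-D mask over the channel axis for colour images.
    mask = motion_mask[..., None] if second.ndim == 3 else motion_mask
    return np.where(mask, fused_motion, second)
```

In practice a feathered (soft) mask would replace the hard `np.where` selection to realise the smooth transition at the region boundary.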
In one embodiment, determining the motion area and the non-motion area in the second image and the motion area in the third image by using the third image as a reference frame comprises: determining an optical flow field between the second image and the third image by taking the third image as a reference frame; based on the optical flow field, a motion region and a non-motion region in the second image and a motion region in the third image are determined.
The optical flow field is the two-dimensional (2D) instantaneous velocity field formed by all pixel points in the image; it encodes the magnitude and direction of object motion.
Optionally, with the third image as a reference frame, the electronic device determines the optical flow field between the second image and the third image based on the DIS (Dense Inverse Search) optical flow method; based on this field, the motion and non-motion regions in the second image and the motion region in the third image can be determined. The optical flow field represents the motion displacement of objects.
As shown in fig. 5, the electronic device determines an optical flow field between the second image and the third image based on the DIS optical flow method with the third image as a reference frame. In fig. 5, the arrows of the human arms indicate the direction of the optical flow field, the lengths of the arrows indicate the amplitude of the optical flow field, and the areas without the arrows are non-motion areas.
In this embodiment, the electronic device determines an optical flow field between the second image and the third image by using the third image as a reference frame, and then can more accurately determine a motion region and a non-motion region in the second image and a motion region in the third image based on the optical flow field.
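Given a dense flow field such as one produced by an off-the-shelf DIS optical flow implementation (OpenCV ships one), the split into motion and non-motion regions reduces to thresholding the flow magnitude. The flow array below is synthetic and the `eps` threshold is an assumed value:

```python
import numpy as np

def split_by_flow(flow, eps=0.5):
    """Classify pixels of an (H, W, 2) dense flow field as motion or
    non-motion: pixels whose flow magnitude exceeds eps are motion."""
    magnitude = np.linalg.norm(flow, axis=2)
    motion = magnitude > eps
    return motion, ~motion
```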
In one embodiment, fusing the motion region of the second image and the motion region of the third image to obtain a fused motion region, includes: performing motion compensation on pixels in a motion area of the second image based on the amplitude and the direction of the optical flow field to obtain pixels after motion compensation; and filling the pixels after motion compensation into the contour of the motion area of the third image to obtain a fusion motion area.
The amplitude of the optical flow field represents the magnitude of the motion of the object, and the direction of the optical flow field represents the direction of the motion of the object. The contour refers to a shape formed by edge pixels of a certain area. Illustratively, the contour of the hand is the shape of the hand edge pixels.
Optionally, the electronic device may perform motion compensation on pixels in a motion region of the second image based on the amplitude and the direction of the optical flow field to obtain motion-compensated pixels; and removing pixels in the contour of the motion area of the third image, and filling the pixels after motion compensation into the contour of the motion area of the third image to obtain a fused motion area. The pixels of the non-moving area in the second image are not displaced.
As shown in fig. 6, the electronic device performs motion compensation on the pixels in the motion region of the second image based on the magnitude and direction of the optical flow field. As can be seen from fig. 6, the motion area of the second image contains a ghost, which is removed in the motion-compensated pixels.
The optical flow field has two components: an optical flow field in the X direction and an optical flow field in the Y direction. The electronic device may compute the two components separately and perform motion compensation of the pixels in the motion region of the second image in the X and Y directions accordingly.
In this embodiment, the electronic device performs motion compensation on the pixels in the motion region of the second image based on the amplitude and direction of the optical flow field to obtain motion-compensated pixels, and then fills the motion-compensated pixels into the contour of the motion region of the third image, so as to obtain a more accurate fusion motion region.
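A minimal nearest-neighbour warp using the per-pixel X and Y flow components described above; a real implementation would use sub-pixel (e.g. bilinear) interpolation, and the function name and rounding scheme here are our assumptions:

```python
import numpy as np

def warp_with_flow(img, flow):
    """Motion-compensate img with an (H, W, 2) flow field:
    output[y, x] = img[y + flow_y, x + flow_x], rounded to the nearest
    pixel and clamped to the image border."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    map_x = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    map_y = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    return img[map_y, map_x]
```

Applying this inside the motion region shifts the moving object's pixels to the reference-frame position, which is what removes the ghost before the regions are fused.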
In one embodiment, as shown in fig. 7, the electronic device determines a motion region and a non-motion region of the second image and a motion region and a non-motion region of the third image based on an optical flow field between the second image and the third image; performing motion compensation on pixels in a motion area of the second image based on the amplitude and the direction of the optical flow field to obtain a motion-compensated second image; filling the pixels after motion compensation in the second image after motion compensation into the outline of the motion area of the third image to obtain a fusion motion area; and obtaining a fourth image based on the fused motion area and the non-motion area of the second image.
In one embodiment, enhancing the brightness of the first image to obtain the third image comprises: enhancing the brightness of the first image to the target brightness to obtain a third image; the difference between the target brightness and the brightness of the second image is less than a preset brightness threshold.
The preset brightness threshold may be set as desired; for example, it may be 0.01 or 0. When the threshold is 0, the target brightness equals the brightness of the second image.
In an alternative embodiment, the electronic device obtains the target brightness, and enhances the brightness of the first image to the target brightness to obtain the third image.
In another alternative embodiment, the electronic device obtains a brightness gain value, and multiplies each pixel in the first image by the brightness gain value to enhance the brightness of the first image to a target brightness, so as to obtain a third image.
In other embodiments, the electronic device may also enhance the brightness of the first image to the target brightness in other manners, which are not limited herein.
In this embodiment, the brightness of the first image is enhanced to the target brightness, so that a third image can be obtained, and the difference between the target brightness and the brightness of the second image is smaller than the preset brightness threshold, so that image fusion can be performed more accurately by using the enhanced brightness third image as a reference frame, so as to obtain the target image.
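Applying the brightness gain value to every pixel, as in the second alternative above, can be sketched as a global multiply with clipping at the 8-bit ceiling (the function name and uint8 assumption are ours):

```python
import numpy as np

def brighten(img, gain):
    """Multiply every pixel by gain to raise the image to the target
    brightness, saturating at 255 for 8-bit input."""
    return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)
```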
In one embodiment, obtaining a luminance gain value comprises: acquiring a first exposure time length for shooting a first image and a first gain value of an image sensor, and acquiring a second exposure time length for shooting a second image and a second gain value of the image sensor; a brightness gain value is determined based on the first exposure duration, the first gain value, the second exposure duration, and the second gain value.
Optionally, the electronic device acquires the exposure duration EV-_shutter used to capture the first image and the corresponding sensor gain EV-_sensorgain, as well as the exposure duration EV0_shutter used to capture the second image and the corresponding sensor gain EV0_sensorgain. It multiplies EV-_shutter by EV-_sensorgain to obtain a first product, multiplies EV0_shutter by EV0_sensorgain to obtain a second product, and divides the second product by the first to obtain the brightness gain value: Gain = (EV0_shutter × EV0_sensorgain) / (EV-_shutter × EV-_sensorgain). Since the first image is underexposed relative to the second, this gain is greater than 1.
In another alternative embodiment, the electronic device may further add the first gain value to the first exposure time length, add the second gain value to the second exposure time length, and divide the two sums to obtain the brightness gain value.
In other embodiments, the electronic device may also calculate the brightness gain value in other manners, which is not limited herein.
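The multiplicative relationship above is a one-liner in code; the sketch below assumes shutter durations in microseconds and analog sensor gains as floats (parameter names are ours):

```python
def luminance_gain(ev0_shutter_us, ev0_sensor_gain,
                   ev_minus_shutter_us, ev_minus_sensor_gain):
    """Gain that brings the underexposed (EV-) frame up to the brightness
    of the normally exposed (EV0) frame: ratio of the exposure products."""
    return (ev0_shutter_us * ev0_sensor_gain) / \
           (ev_minus_shutter_us * ev_minus_sensor_gain)
```

For example, an EV0 frame shot at 40 ms with gain 2.0 versus an EV- frame at 10 ms with gain 1.0 yields a brightness gain of 8.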
In one embodiment, acquiring a second image comprises: acquiring a plurality of original RAW images; and performing spatial domain fusion on the plurality of original RAW images to obtain a second image. Wherein the brightness of the original RAW image is the same as the brightness of the second image.
The original RAW image is RAW image data that has not been processed.
Optionally, the electronic device sets a first exposure parameter and a second exposure parameter on the image sensor, obtains a first image EV- by exposure with the first exposure parameter, and obtains a plurality of original RAW images EV0 by exposure with the second exposure parameter; the exposure duration in the first exposure parameter is less than that in the second. The plurality of original RAW images are processed by a normally configured image signal processor into a plurality of YUV images, which are fused in the spatial domain to obtain the second image. The second image is a YUV image, and de-ghosting is performed during the spatial-domain fusion.
Optionally, the electronic device processes the first image through a specially configured image signal processor, and enhances the brightness of the first image to obtain a third image; the first image is a RAW image, and the third image is a YUV image.
YUV is a color coding method: Y represents luminance (Luma), i.e. the gray-scale value, while U and V represent chrominance (Chroma), describing the color and saturation and specifying the color of a pixel.
In this embodiment, a plurality of original RAW images are acquired; and performing spatial domain fusion on the plurality of original RAW images to obtain a second image without ghosting.
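As a rough sketch of the spatial-domain multi-frame fusion, equally exposed frames can be averaged to suppress noise; this simplification omits the alignment and de-ghosting the patent performs during fusion:

```python
import numpy as np

def temporal_average(frames):
    """Fuse equally exposed frames by per-pixel averaging — a simplified
    stand-in for the patent's spatial-domain multi-frame fusion."""
    return np.mean(np.stack(frames).astype(np.float32), axis=0).astype(np.uint8)
```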
In one embodiment, as shown in fig. 8, the electronic device sets different exposure parameters in the image sensor and acquires a plurality of original RAW images and a RAW first image; the brightness of the RAW first image is less than that of the original RAW images. ISP_1 (an Image Signal Processor) processes the multi-frame original RAW images to obtain the YUV second image and processes the RAW first image to obtain the YUV first image, while ISP_2 processes the RAW first image to obtain the brightness-enhanced YUV third image.
In one embodiment, as shown in fig. 9, the electronic device acquires a plurality of RAW images and a first image of the RAW by the image sensor; processing a plurality of original RAW images and a plurality of first images of RAW through image signals to respectively obtain a YUV second image, a YUV first image and a YUV third image; calculating an optical flow field and motion compensation based on the YUV second image and the YUV third image by taking the YUV third image as a reference frame, and fusing to obtain a fourth image; and performing HDR fusion on the fourth image and the YUV first image to obtain a high-dynamic target image.
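The fig. 9 pipeline can be summarised in a toy end-to-end sketch: brighten the EV- frame to the EV0 level, build a motion mask against the EV0 frame, prefer the brightened (ghost-free) frame inside the mask, then blend with the original EV- frame. The thresholds, blend weights, and function names are all illustrative assumptions; a real implementation would use optical-flow alignment and proper HDR fusion as described above:

```python
import numpy as np

def hdr_pipeline(ev_minus, ev0, gain):
    """Toy version of the fig. 9 flow for grayscale uint8 frames."""
    third = np.clip(ev_minus.astype(np.float32) * gain, 0, 255)   # third image
    diff = np.abs(ev0.astype(np.float32) - third)
    mask = diff > 25                                              # crude motion region
    fourth = np.where(mask, third, ev0.astype(np.float32))        # fourth image
    target = 0.5 * fourth + 0.5 * ev_minus.astype(np.float32)     # naive HDR blend
    return np.clip(target, 0, 255).astype(np.uint8)
```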
As shown in fig. 10, the motion area in the image obtained by the conventional method still contains ghosting. As shown in fig. 11, the electronic device acquires a first image and a second image, the exposure of the first image being less than that of the second image; enhances the brightness of the first image to obtain a third image; and, with the third image as the reference frame, fuses the first, second, and third images to obtain a target image in which ghosting is eliminated, yielding a clearer image.
It should be understood that although the steps in the flowcharts of the above embodiments are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present application further provides an image processing apparatus for implementing the above image processing method. The solution provided by the apparatus is similar to that described in the above method embodiments, so for specific limitations in the one or more image processing apparatus embodiments below, reference may be made to the limitations on the image processing method above; details are not repeated here.
In one embodiment, as shown in fig. 12, there is provided an image processing apparatus including: an obtaining module 1202, a brightness increasing module 1204, and a fusing module 1206, wherein:
An obtaining module 1202, configured to obtain a first image and a second image, where the exposure of the first image is smaller than that of the second image.
A brightness increasing module 1204, configured to enhance the brightness of the first image to obtain a third image.
A fusion module 1206, configured to fuse the first image, the second image, and the third image to obtain a target image.
The above image processing apparatus acquires a first image and a second image, the exposure of the first image being smaller than that of the second image, and enhances the brightness of the first image to obtain a third image; the third image has high brightness yet is free of ghosting in overexposed areas. The first, second, and third images are then fused to obtain a ghost-free target image. This avoids the motion ghosting that appears in overexposed areas after fusion when a high-exposure image is used as the reference frame, and thereby improves the accuracy of image processing.
In an embodiment, the fusion module 1206 is further configured to fuse the second image and the third image by using the third image as a reference frame to obtain a fourth image; and fusing the fourth image and the first image to obtain a target image.
In one embodiment, the fusion module 1206 is further configured to determine an optical flow field between the second image and the third image by using the third image as a reference frame; based on the optical flow field, a motion region and a non-motion region in the second image and a motion region in the third image are determined.
In one embodiment, the fusion module 1206 is further configured to determine a motion region and a non-motion region in the second image and a motion region in the third image with the third image as a reference frame; fusing the motion area of the second image and the motion area of the third image to obtain a fused motion area; and obtaining a fourth image based on the fused motion area and the non-motion area of the second image.
In an embodiment, the fusion module 1206 is further configured to perform motion compensation on pixels in a motion area of the second image based on the amplitude and the direction of the optical flow field, so as to obtain motion-compensated pixels; and filling the pixels after motion compensation into the contour of the motion area of the third image to obtain a fusion motion area.
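A toy sketch of this compensation step, under the assumption of an integer-valued flow field; the function name and the decision to leave source pixels untouched are illustrative choices, not the patented implementation:

```python
import numpy as np

# Toy illustration of motion compensation: each motion-area pixel of
# the second image is shifted by the optical flow's amplitude/direction
# and filled into the corresponding position inside the third image's
# motion contour.
def compensate(second, flow, mask):
    """second: (H, W) image; flow: (H, W, 2) integer (dy, dx) offsets;
    mask: boolean motion area of the second image."""
    out = second.copy()
    h, w = second.shape
    for y, x in zip(*np.nonzero(mask)):
        dy, dx = flow[y, x]
        ty, tx = y + dy, x + dx
        if 0 <= ty < h and 0 <= tx < w:
            out[ty, tx] = second[y, x]  # fill the compensated pixel
    return out

img = np.zeros((3, 3))
img[0, 0] = 1.0                          # moving pixel at (0, 0)
flow = np.zeros((3, 3, 2), dtype=int)
flow[0, 0] = (1, 1)                      # it moved one step down-right
warped = compensate(img, flow, img > 0.5)
```

A production implementation would use a dense sub-pixel flow field (e.g., a Farnebäck-style estimator) with interpolation rather than integer offsets.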
In an embodiment, the brightness increasing module 1204 is further configured to increase the brightness of the first image to a target brightness to obtain a third image; the difference between the target brightness and the brightness of the second image is less than a preset brightness threshold.
In one embodiment, the brightness increasing module 1204 is further configured to obtain a brightness gain value; and multiplying each pixel in the first image by the brightness gain value to obtain a third image of the target brightness.
In one embodiment, the brightness increasing module 1204 is further configured to obtain a first exposure duration for capturing the first image and a first gain value of the image sensor, and obtain a second exposure duration for capturing the second image and a second gain value of the image sensor; a brightness gain value is determined based on the first exposure duration, the first gain value, the second exposure duration, and the second gain value.
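One plausible reading of this gain computation, assuming total exposure scales with the product of exposure duration and sensor gain (the patent does not spell out the formula):

```python
import numpy as np

# Assumed formula: total exposure scales with (duration x sensor gain),
# so the gain that matches the first image's brightness to the second's
# is the ratio of the two products.
def brightness_gain(t1, g1, t2, g2):
    return (t2 * g2) / (t1 * g1)

def enhance(first, gain):
    # Multiply every pixel by the gain, clipping to the valid range.
    return np.clip(first * gain, 0.0, 1.0)

gain = brightness_gain(t1=0.01, g1=1.0, t2=0.04, g2=2.0)   # -> 8.0
third = enhance(np.array([0.05, 0.1, 0.2]), gain)
```

The clipping step is where the third image's overexposed areas saturate — which is precisely why it serves as a ghost-free reference rather than as a source of highlight detail.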
In one embodiment, the obtaining module 1202 is further configured to obtain a plurality of original RAW images and perform spatial-domain fusion on them to obtain the second image.
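A minimal stand-in for the spatial-domain fusion step, approximated here by per-pixel averaging; the actual fusion operator is not specified in this embodiment:

```python
import numpy as np

# Per-pixel averaging of several RAW frames as a stand-in for
# spatial-domain fusion; averaging suppresses temporal noise while
# keeping the overall brightness of a single frame, consistent with
# the second image having the same luminance as the original frames.
def fuse_raw(frames):
    return np.stack(frames, axis=0).mean(axis=0)

rng = np.random.default_rng(0)
clean = np.full((8, 8), 0.5)
frames = [clean + rng.normal(0.0, 0.05, clean.shape) for _ in range(8)]
second = fuse_raw(frames)
```

Averaging N frames reduces independent noise by roughly a factor of √N without changing mean brightness, matching the statement in the next paragraph that the original RAW images and the second image share the same luminance.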
In one embodiment, the luminance of the original RAW image and the luminance of the second image are the same.
The respective modules in the image processing apparatus described above may be implemented wholly or partially in software, hardware, or a combination thereof. Each module may be embedded in, or independent of, a processor of the electronic device in hardware form, or stored in a memory of the electronic device in software form, so that the processor can call and execute the operations corresponding to the module.
In one embodiment, an electronic device is provided, which may be a terminal whose internal structure is shown in fig. 13. The electronic device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected via a system bus, while the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the electronic device provides computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program. The input/output interface of the electronic device is used for exchanging information between the processor and external devices. The communication interface of the electronic device is used for wired or wireless communication with external terminals; wireless communication may be implemented through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements an image processing method. The display unit of the electronic device forms a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display or an electronic ink display. The input device of the electronic device may be a touch layer covering the display screen, a key, trackball, or touchpad provided on the housing of the electronic device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the structure shown in fig. 13 is a block diagram of only a portion of the structure relevant to the present application, and does not constitute a limitation on the electronic device to which the present application is applied, and a particular electronic device may include more or less components than those shown in the drawings, or combine certain components, or have a different arrangement of components.
An embodiment of the present application also provides one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform an image processing method.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, displayed data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the relevant laws and regulations and standards of the relevant country and region.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.
Claims (14)
1. An image processing method, comprising:
acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
enhancing the brightness of the first image to obtain a third image;
and fusing the first image, the second image and the third image to obtain a target image.
2. The method of claim 1, wherein fusing the first image, the second image, and the third image to obtain a target image comprises:
taking the third image as a reference frame, and fusing the second image and the third image to obtain a fourth image;
and fusing the fourth image and the first image to obtain a target image.
3. The method according to claim 2, wherein the fusing the second image and the third image with the third image as a reference frame to obtain a fourth image comprises:
determining a motion area and a non-motion area in the second image and a motion area in the third image by taking the third image as a reference frame;
fusing the motion area of the second image and the motion area of the third image to obtain a fused motion area;
and obtaining a fourth image based on the fused motion area and the non-motion area of the second image.
4. The method according to claim 3, wherein the determining the motion area and the non-motion area in the second image and the motion area in the third image by using the third image as a reference frame comprises:
determining an optical flow field between the second image and the third image by taking the third image as a reference frame;
based on the optical flow field, motion and non-motion regions in the second image and motion regions in the third image are determined.
5. The method according to claim 4, wherein the fusing the motion region of the second image and the motion region of the third image to obtain a fused motion region comprises:
performing motion compensation on pixels in the motion area of the second image based on the amplitude and the direction of the optical flow field to obtain pixels after motion compensation;
and filling the pixels after the motion compensation into the contour of the motion area of the third image to obtain a fusion motion area.
6. The method according to any one of claims 1 to 5, wherein the enhancing the brightness of the first image to obtain a third image comprises:
enhancing the brightness of the first image to a target brightness to obtain a third image; the difference between the target brightness and the brightness of the second image is less than a preset brightness threshold.
7. The method of claim 6, wherein the enhancing the brightness of the first image to a target brightness to obtain a third image comprises:
acquiring a brightness gain value;
and multiplying each pixel in the first image by the brightness gain value to obtain a third image of the target brightness.
8. The method of claim 7, wherein obtaining the luminance gain value comprises:
acquiring a first exposure time length for shooting the first image and a first gain value of an image sensor, and acquiring a second exposure time length for shooting the second image and a second gain value of the image sensor;
determining a brightness gain value based on the first exposure duration, the first gain value, the second exposure duration, and the second gain value.
9. The method of any of claims 1 to 5, wherein acquiring the second image comprises:
acquiring a plurality of original RAW images;
and performing spatial domain fusion on the plurality of original RAW images to obtain a second image.
10. The method of claim 9, wherein the original RAW image and the second image have the same brightness.
11. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring a first image and a second image; the exposure of the first image is less than the exposure of the second image;
the brightness improving module is used for enhancing the brightness of the first image to obtain a third image;
and the fusion module is used for fusing the first image, the second image and the third image to obtain a target image.
12. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the image processing method according to any one of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 10.
14. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 10 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210740741.8A CN115049572A (en) | 2022-06-28 | 2022-06-28 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115049572A true CN115049572A (en) | 2022-09-13 |
Family
ID=83163203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210740741.8A Pending CN115049572A (en) | 2022-06-28 | 2022-06-28 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115049572A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117278865A (en) * | 2023-11-16 | 2023-12-22 | 荣耀终端有限公司 | Image processing method and related device |
2022-06-28: application CN202210740741.8A filed (CN); published as CN115049572A, status Pending.
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108898567B (en) | Image noise reduction method, device and system | |
CN108335279B (en) | Image fusion and HDR imaging | |
CN110062176B (en) | Method and device for generating video, electronic equipment and computer readable storage medium | |
US20180109711A1 (en) | Method and device for overexposed photography | |
CN110062157B (en) | Method and device for rendering image, electronic equipment and computer readable storage medium | |
CN113706414A (en) | Training method of video optimization model and electronic equipment | |
CN114862735A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN115049572A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
Lapray et al. | Hardware-based smart camera for recovering high dynamic range video from multiple exposures | |
CN114092562A (en) | Noise model calibration method, image denoising method, device, equipment and medium | |
WO2024067461A1 (en) | Image processing method and apparatus, and computer device and storage medium | |
CN111340722A (en) | Image processing method, processing device, terminal device and readable storage medium | |
CN113393391B (en) | Image enhancement method, image enhancement device, electronic apparatus, and storage medium | |
CN115272155A (en) | Image synthesis method, image synthesis device, computer equipment and storage medium | |
CN115439386A (en) | Image fusion method and device, electronic equipment and storage medium | |
CN113902639A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN110599437A (en) | Method and apparatus for processing video | |
CN115423823A (en) | Image processing method and device | |
CN117541525A (en) | Image processing method, apparatus, electronic device, and computer-readable storage medium | |
CN118055335A (en) | Reference frame determining method, apparatus, electronic device, and computer-readable storage medium | |
Fu et al. | A Multi-resolution Image Fusion Algorithm Based on Multi-factor Weights. | |
CN118505546A (en) | Video processing method, device, computer equipment and storage medium | |
CN117876237A (en) | Image processing method, apparatus, electronic device, and computer-readable storage medium | |
CN118102120A (en) | Image processing method, apparatus, electronic device, and computer-readable storage medium | |
CN118015102A (en) | Image processing method, apparatus, electronic device, and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | |