CN110599433B - Double-exposure image fusion method based on dynamic scene - Google Patents


Info

Publication number
CN110599433B
CN110599433B (application CN201910693162.0A)
Authority
CN
China
Prior art keywords
image
exposure
exposure image
pyramid
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910693162.0A
Other languages
Chinese (zh)
Other versions
CN110599433A (en)
Inventor
吴雨祥
邵晓鹏
李英
王文超
吴昌辉
王子
王星量
董磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201910693162.0A
Publication of CN110599433A
Application granted
Publication of CN110599433B

Classifications

    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/136: Segmentation; edge detection involving thresholding
    • G06T2207/10144: Image acquisition modality: varying exposure
    • G06T2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
    • G06T2207/20221: Image fusion; image merging

Abstract

The invention discloses a double-exposure image fusion method based on a dynamic scene. The method acquires a first exposure image and a second exposure image; processes the two images according to an adaptive threshold to obtain a binary image containing the motion areas of the first exposure image and the second exposure image; performs brightness balance processing on the two images according to Retinex theory to correspondingly obtain a first brightness balance image and a second brightness balance image; combines the first exposure image and the second exposure image with the motion area respectively to correspondingly obtain a first basic weight image and a second basic weight image; and obtains the fused image according to a pyramid image fusion algorithm with adaptive detail enhancement. The adaptive threshold allows the motion area of the image to be detected accurately, the brightness balance algorithm equalizes the brightness of the two frames, and the pyramid-based adaptive enhancement method displays the information of the underexposed and overexposed areas clearly.

Description

Double-exposure image fusion method based on dynamic scene
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a double-exposure image fusion method based on a dynamic scene.
Background
The dynamic range is typically the ratio of the luminance of the brightest part of a scene to that of its darkest part. When pictures are taken with ordinary devices such as cameras or mobile phones, the dynamic range of the resulting picture is far below that of the real scene. Referring to FIGS. 1a-1b, FIG. 1a is a low dynamic range (Low Dynamic Range, LDR) image, whose dynamic range is typically around 10^2; FIG. 1b is a high dynamic range (High Dynamic Range, HDR) scene image, whose dynamic range can reach 10^6. Consequently, when an image is photographed, darker or brighter areas of the real scene saturate in the captured image, appearing fully black or fully white (commonly called underexposure and overexposure), which loses image information and severely degrades image quality. Although specialized acquisition devices able to capture HDR data directly have appeared in recent years, the dynamic range they capture is still below that of a real scene, and they are too expensive to popularize. High dynamic range imaging technology therefore arose to bridge the gap between the dynamic range of the real scene and that of the photographed image, and to better capture the details of the real scene.
The main principle of high dynamic range imaging is to acquire scene information over different brightness ranges by varying the camera's exposure time, then merge that information so that the photograph approaches the real scene as observed by the human eye. There are two capture strategies: hardware-based single-exposure capture, and sequential multi-exposure capture at different times. Hardware-based single-exposure capture brackets the exposures simultaneously on a single imaging sensor, which sacrifices spatial resolution, and the dynamic range it captures falls far short of what the human visual system perceives. Sequential multi-exposure fusion at different times is therefore an important research topic in the field of HDR imaging. This technique controls the luminous flux of scene brightness information entering the camera through the shutter time, shoots a multi-exposure image sequence containing detail from different brightness ranges of the scene, and fuses that information into an HDR image. Real scenes often contain clearly moving objects during shooting; a scene with moving objects is called a dynamic scene. If the scene changes during shooting, the changed region appears blurred or semi-transparent in the fused image, which is commonly called "ghosting"; see FIGS. 2a-2c, where FIG. 2a is a high-exposure image, FIG. 2b is a low-exposure image, and FIG. 2c is the fused image exhibiting "ghosting".
Traditional "ghost" removal and image fusion methods all require shooting three or more multi-exposure images for de-ghosted fusion. This shooting mode greatly reduces the temporal resolution of image capture and seriously affects the quality of the fused image; images fused this way also suffer from blotches and artifacts caused by unclear details in underexposed and overexposed regions and by inconsistent brightness between the source images.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a double-exposure image fusion method based on a dynamic scene. The technical problems to be solved by the invention are realized by the following technical scheme:
a double exposure image fusion method based on dynamic scene includes:
acquiring a first exposure image and a second exposure image, wherein the exposure degrees of the first exposure image and the second exposure image are different;
processing the first exposure image and the second exposure image according to an adaptive threshold to obtain a binary image, wherein the binary image comprises a motion area of the first exposure image and a motion area of the second exposure image;
performing brightness balance processing on the first exposure image and the second exposure image according to the Retinex theory to correspondingly obtain a first brightness balance image and a second brightness balance image;
combining the first exposure image and the second exposure image with the motion area respectively to correspondingly obtain a first basic weight image and a second basic weight image;
and processing the first brightness balance image, the second brightness balance image, the first basic weight image and the second basic weight image according to a pyramid image fusion algorithm with the self-adaptive detail enhancement to obtain fused images.
In one embodiment of the present invention, processing the first exposure image and the second exposure image according to an adaptive threshold to obtain a binary image includes:
respectively carrying out histogram equalization processing on the first exposure image and the second exposure image to obtain a third exposure image and a fourth exposure image correspondingly, wherein the brightness of the third exposure image is the same as that of the fourth exposure image;
calculating pixel difference values between the third exposure image and the fourth exposure image by using a frame difference method to obtain a difference image;
performing threshold segmentation processing on the differential image according to the self-adaptive threshold to obtain an initial binary image;
performing morphological dilation and erosion treatment on the initial binary image to obtain the binary image comprising a motion region.
In one embodiment of the present invention, performing a threshold segmentation process on the differential image according to an adaptive threshold to obtain an initial binary image includes:
obtaining a threshold array according to the percentage of the total pixels of the differential image occupied by the static areas of the first exposure image and the second exposure image, the number of pixels occupied by the width of the differential image, and the number of pixels occupied by its height;
obtaining the self-adaptive threshold according to the threshold array;
and carrying out threshold segmentation processing on the differential image according to the self-adaptive threshold value to obtain an initial binary image.
In one embodiment of the present invention, the threshold array is:
$$\Theta = \Big\{\, t \;\Big|\; \sum_{i=0}^{t} n_i \ge P \cdot H \cdot W \,\Big\}$$

wherein Θ is the threshold array, $n_i$ is the number of pixels of the differential image whose gray value is i, P is the percentage of the total number of pixels of the differential image occupied by the static areas of the first exposure image and the second exposure image, H and W are the numbers of pixels occupied by the height and the width of the differential image, and t is an integer between 0 and 255.
In one embodiment of the present invention, the calculation formula of the adaptive threshold is:
$$T = \begin{cases} \min(\Theta), & \min(\Theta) \ge mt \\ \text{no motion region}, & \min(\Theta) < mt \end{cases}$$

wherein T is the adaptive threshold, min(·) takes the minimum value of the threshold array, and mt is a predetermined threshold.
In one embodiment of the present invention, performing a brightness balance process on the first exposure image and the second exposure image according to Retinex theory correspondingly obtains a first brightness balance image and a second brightness balance image, including:
Respectively filtering illuminance components of the first exposure image and the second exposure image by using a Retinex theory, and correspondingly obtaining a fifth exposure image and a sixth exposure image;
and respectively processing the fifth exposure image and the sixth exposure image according to the brightness mapping model to correspondingly obtain a first brightness balance image and a second brightness balance image.
In one embodiment of the present invention, combining the first exposure image and the second exposure image with the motion region respectively corresponds to obtain a first basic weight image and a second basic weight image, including:
obtaining a first initial weight image according to the image contrast and the exposure moderation degree of the first exposure image;
obtaining a second initial weight image according to the image contrast and the exposure moderation degree of the second exposure image;
combining the first initial weight image and the binary image and carrying out normalization treatment to obtain a first basic weight image;
and combining the second initial weight image with the binary image and carrying out normalization processing to obtain a second basic weight image.
In one embodiment of the present invention, processing the first luminance balance image, the second luminance balance image, the first basis weight image, and the second basis weight image according to a pyramid image fusion algorithm with adaptive detail enhancement to obtain a fused image includes:
Respectively carrying out Laplacian pyramid transformation on the first brightness balance image and the second brightness balance image to correspondingly obtain a first Laplacian pyramid and a second Laplacian pyramid;
respectively carrying out Gaussian pyramid transformation on the first basic weight image and the second basic weight image to obtain a first weight graph Gaussian pyramid and a second weight graph Gaussian pyramid;
and obtaining a fused image according to the first Laplacian pyramid, the second Laplacian pyramid, the first weight map Gaussian pyramid and the second weight map Gaussian pyramid.
In one embodiment of the present invention, obtaining a fused image according to the first laplacian pyramid, the second laplacian pyramid, the first weight map gaussian pyramid, and the second weight map gaussian pyramid includes:
obtaining a fused Laplacian pyramid according to the first Laplacian pyramid, the second Laplacian pyramid, the first weight map Gaussian pyramid and the second weight map Gaussian pyramid;
obtaining an enhanced Laplacian pyramid according to the gain coefficient matrix and the fused Laplacian pyramid;
And performing inverse transformation on the enhanced Laplacian pyramid to obtain a fused image.
In one embodiment of the present invention, the calculation formula of the gain coefficient is:
$$G_d(x,y) = g_L + (g_H - g_L)\left(\frac{D-d}{D}\right)^{\gamma}\big(1 - \bar N_d(x,y)\big)$$

wherein G is the gain coefficient matrix, $g_L$ is the smallest gain coefficient, $g_H$ is the largest gain coefficient, D is the layer number of the highest layer of the fused Laplacian pyramid, d is the layer index of the fused Laplacian pyramid, γ is an adjustable parameter, and $\bar N_d(x,y)$ is the noise visibility at the (x, y) coordinate position in the layer-d fused Laplacian pyramid.
The invention has the beneficial effects that:
the double exposure image fusion method based on the dynamic scene can accurately detect the motion area of the image through the self-adaptive threshold value, balance the brightness of two frames of images through the brightness balance algorithm, and enable the underexposure and overexposure area information in the image to be clearly displayed through the pyramid-based self-adaptive enhancement method.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIGS. 1 a-1 b are effect contrast diagrams of a high dynamic range scene imaging diagram and a scene diagram captured directly by a common camera according to an embodiment of the present invention;
FIGS. 2 a-2 c are schematic diagrams illustrating a phenomenon of "ghosting" generated by fusing a high exposure image and a low exposure image according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a dual exposure image fusion method based on a dynamic scene provided by an embodiment of the invention;
FIG. 4 is a schematic flow chart of another dynamic scene-based dual-exposure image fusion method according to an embodiment of the present invention;
FIGS. 5 a-5 b are graphs comparing effects of a low exposure image and a histogram equalization processed image according to an embodiment of the present invention;
FIGS. 6 a-6 b are graphs comparing effects of a high exposure image and a histogram equalization processed image according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a differential image provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of a binary image according to an embodiment of the present invention;
FIGS. 9 a-9 b are schematic illustrations of a balanced image provided by embodiments of the present invention;
FIG. 10 is a schematic illustration of a fused image provided by an embodiment of the present invention;
FIGS. 11a to 11g are effect comparison graphs of images processed by a dynamic-scene-based double-exposure image fusion method according to an embodiment of the present invention and by the methods of Sen et al and Ma K et al;
FIGS. 12a to 12g are effect comparison graphs of images processed by another dynamic-scene-based double-exposure image fusion method according to an embodiment of the present invention and by the methods of Sen et al and Ma K et al;
FIGS. 13a to 13g are effect comparison graphs of images processed by still another dynamic-scene-based double-exposure image fusion method according to an embodiment of the present invention and by the methods of Sen et al and Ma K et al.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
Example 1
At present, Pece and Kautz have proposed a method based on bitmap movement detection (Bitmap Movement Detection, BMD). Its principle rests on binary images obtained by median-threshold segmentation: the resulting brightness distribution is unaffected by exposure level, so motion pixels can be detected from differences in the binarized brightness distributions of images with different exposures. For an input multi-exposure image sequence, the median threshold bitmap of each image is first computed. All bitmaps are then combined with an exclusive-OR operation to obtain the motion bitmap of the input sequence; a pixel value of 1 in the motion bitmap indicates that a moving object may exist at that position in the scene. The motion bitmap obtained this way often contains noise, which causes incorrect detections, so a morphological open operation (erosion followed by dilation) is used to eliminate the noise in the binary image and produce the final motion bitmap. This method accurately detects motion regions for multi-exposure images that conform to the median-threshold segmentation rule, or for motions of small displacement; but when the motion amplitude of an object is large, and especially when the brightness difference between the two exposure frames is too great, its "ghost" detection accuracy drops, producing severe "ghosting" in the fused image.
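For concreteness, the following is a minimal sketch of the median-threshold-bitmap detection described above, including the morphological open; the function names and the 5×5 kernel size are illustrative choices, not taken from Pece and Kautz or from this patent:

```python
import cv2
import numpy as np

def median_threshold_bitmap(gray):
    # Binarize the grayscale image against its own median gray level.
    return (gray > np.median(gray)).astype(np.uint8)

def bmd_motion_bitmap(gray_a, gray_b):
    # XOR the two bitmaps, then morphologically open (erode, then
    # dilate) to suppress isolated noise pixels.
    xor = cv2.bitwise_xor(median_threshold_bitmap(gray_a),
                          median_threshold_bitmap(gray_b))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(xor, cv2.MORPH_OPEN, kernel)
```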
In addition, Wang et al proposed a motion detection method based on the inter-frame difference, which detects motion regions from inter-frame difference values. If two images are differenced directly, however, even still regions differ greatly because of the different exposure levels, so the multi-exposure sequence must first be intensity-aligned. The method aligns image brightness using the change of the mean of L in the image's Lab space. After brightness is aligned to a uniform level, a reference image is selected and the other frames are subtracted from it to obtain the difference regions of the two images; a threshold is then chosen, and a difference point above the threshold is treated as a motion region while one below it is treated as a static region. The thresholded binary image still contains isolated points and noise, so the morphological dilation-erosion operation is applied to obtain the difference image of the motion region. The basic weight map is multiplied by this motion-difference template to obtain the final basic weight map, and a traditional multi-resolution pyramid fusion algorithm yields a final fused image free of "ghosts". This method removes "ghosts" effectively for many multi-exposure sequences, but choosing a proper threshold is very difficult. Its authors proposed an adaptive threshold selection algorithm determined from the brightness difference of the two exposure frames and the difference between pixel gray values and the median, but that threshold criterion can still flag static regions in a truly static scene, reducing the fusion quality of the images. Moreover, its weight-map computation needs more than three input images to reach the intended effect, which greatly reduces the temporal resolution of the images.
Therefore, based on the above reasons, the present embodiment provides a dual-exposure image fusion method based on a dynamic scene, please refer to fig. 3, fig. 3 is a schematic flow chart of the dual-exposure image fusion method based on a dynamic scene, and the dual-exposure image fusion method based on a dynamic scene provided in the embodiment of the present invention includes:
step 1, acquiring a first exposure image and a second exposure image, wherein the exposure degrees of the first exposure image and the second exposure image are different;
step 2, processing the first exposure image and the second exposure image according to the self-adaptive threshold value to obtain a binary image, wherein the binary image comprises the motion areas of the first exposure image and the second exposure image;
step 3, carrying out brightness balance processing on the first exposure image and the second exposure image according to the Retinex theory to correspondingly obtain a first brightness balance image and a second brightness balance image;
step 4, respectively combining the first exposure image and the second exposure image with the motion area to obtain a first basic weight image and a second basic weight image correspondingly;
and step 5, processing the first brightness balance image, the second brightness balance image, the first basic weight image and the second basic weight image according to a pyramid image fusion algorithm with the self-adaptive detail enhancement to obtain a fused image.
In this embodiment, for ease of understanding, the fusion of a first exposure image and a second exposure image is taken as the example. The first exposure image and the second exposure image are two images with different exposure levels: for instance, the first may be the high-exposure image and the second the low-exposure image, or the first the low-exposure image and the second the high-exposure image. Note that "low exposure" and "high exposure" in this embodiment describe the relative exposure of the two images, not particular numerical values.
The first exposure image and the second exposure image are examined with the adaptive threshold; after the motion area is detected, a binary image of the two exposures is obtained. The adaptive threshold can accurately separate the motion area from the static area in the first and second exposure images, so the binary image containing the motion area is obtained accurately. The first and second exposure images then undergo brightness balance processing based on Retinex theory, yielding the first brightness balance image and the second brightness balance image. Combining the first exposure image with the motion area gives the first basic weight image, and combining the second exposure image with the motion area gives the second basic weight image. Finally, the first and second brightness balance images and the first and second basic weight images are fused with the Laplacian fusion algorithm to obtain the fused image. The double-exposure fusion method of this embodiment thus achieves effective ghost removal and image fusion for two frames with different exposure levels; the result effectively recovers the detail of overexposed and underexposed regions; the brightness balance processing avoids the blotch and artifact phenomena of traditional fused images; and the two-frame fusion strategy greatly improves the temporal resolution of image capture, laying a solid foundation for implementing the double-exposure fusion method on a hardware platform and acquiring high-dynamic-range real-time video.
Example two
Referring to fig. 4, fig. 4 is a flow chart of another dual exposure image fusion method based on dynamic scene according to an embodiment of the present invention. The embodiment specifically describes a dynamic scene-based double-exposure image fusion method in the first embodiment on the basis of the above embodiment.
Based on the foregoing embodiments, step 2 in the first embodiment may specifically include:
step 201, respectively performing histogram equalization processing on the first exposure image and the second exposure image to obtain a third exposure image and a fourth exposure image correspondingly;
First, because the exposure levels of the first and second exposure images differ, their brightness is inconsistent. The two images are therefore preprocessed so that, after the difference processing, static and dynamic areas can be screened out correctly: histogram equalization is applied to both images so that their brightness becomes consistent. After histogram equalization the first exposure image becomes the third exposure image and the second exposure image becomes the fourth exposure image, and the third and fourth exposure images have the same brightness. The histogram equalization formula can be expressed as:
$$I'_k = \mathrm{Histeq}(I_k) \qquad (1)$$

wherein $I_k$ is an original image with its own exposure level and k is the index of the exposure image; in this embodiment k takes 1 or 2, with 1 corresponding to the first exposure image and 2 to the second exposure image. Histeq(·) is the histogram equalization transform, and $I'_k$ is the corresponding histogram-equalized image.
For example, referring to fig. 5a to 5b and 6a to 6b, fig. 5a is a low exposure image, fig. 5b is a low exposure image after histogram equalization, fig. 6a is a high exposure image corresponding to fig. 5a, and fig. 6b is a high exposure image after histogram equalization, so that it can be seen that the low exposure image and the high exposure image become uniform in brightness after histogram equalization.
Step 202, calculating pixel difference values between a third exposure image and a fourth exposure image by using a frame difference method to obtain a difference image;
in this embodiment, a frame difference method is used to perform a difference process on the third exposure image and the fourth exposure image, and a pixel difference value between the third exposure image and the fourth exposure image is calculated, so as to obtain a difference image, where a calculation formula of the frame difference method is as follows:
$$\Delta I' = \big|\, I'_1 - I'_2 \,\big| \qquad (2)$$

wherein ΔI' is the differential image, $I'_1$ is the third exposure image, and $I'_2$ is the fourth exposure image.
For example, referring to fig. 7, fig. 7 is a differential image obtained according to fig. 5b and 6 b.
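As a concrete illustration of steps 201-202, the two operations reduce to two library calls. A minimal sketch of formulas (1) and (2) using OpenCV (the library choice is ours; the patent does not prescribe one):

```python
import cv2

def difference_image(i1, i2):
    # Formula (1): histogram-equalize each 8-bit grayscale exposure.
    i1_eq = cv2.equalizeHist(i1)
    i2_eq = cv2.equalizeHist(i2)
    # Formula (2): absolute pixel difference of the equalized pair.
    return cv2.absdiff(i1_eq, i2_eq)
```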
And 203, performing threshold segmentation processing on the differential image according to the self-adaptive threshold value to obtain an initial binary image.
After obtaining the difference image, a proper threshold value is required to be selected to perform threshold segmentation processing on the difference image, the embodiment performs threshold segmentation processing on the difference image by using an adaptive threshold value to obtain an initial binary image, and a part higher than the adaptive threshold value is regarded as a motion area, and a part lower than the adaptive threshold value is regarded as a static area. The binary image is calculated as follows:
$$M(x,y) = \begin{cases} 1, & \Delta I'(x,y) > T \\ 0, & \Delta I'(x,y) \le T \end{cases} \qquad (3)$$

wherein M is the binary image after threshold segmentation and T is the adaptive threshold.
In order to better perform threshold segmentation processing on the differential image, the embodiment provides a method for determining an adaptive threshold, which comprises the following steps:
Step 2031, obtaining a threshold array according to the percentage of the total pixels of the differential image occupied by the static areas of the first exposure image and the second exposure image, the number of pixels occupied by the width of the differential image, and the number of pixels occupied by its height;
Observing the histogram of the differential image shows that its pixels are mostly concentrated near the 0 gray value; that is, most pixel differences between the third and fourth exposure images are close to 0, and those pixels correspond to the static area. Assuming the percentage of static-area pixels in the image's total number of pixels is fixed and equal to P, pixel counts are accumulated starting from the pixels with gray value 0 of the differential image until the cumulative count reaches the critical value of the static area's total, which yields the threshold array. The threshold array is calculated as:
$$\Theta = \Big\{\, t \;\Big|\; \sum_{i=0}^{t} n_i \ge P \cdot H \cdot W \,\Big\} \qquad (4)$$

wherein Θ is the threshold array, $n_i$ is the number of pixels of the differential image whose gray value is i, P is the percentage of the total number of pixels of the differential image occupied by the static areas of the first exposure image and the second exposure image, H and W are the numbers of pixels occupied by the height and the width of the differential image, and t is any threshold satisfying the condition; t may, for example, take any integer between 0 and 255.
In practice, the minimum of the threshold array is the required threshold; but if that minimum is too close to gray value 0, the probability that the differential image contains a motion region is small. This embodiment therefore sets another threshold so that motion-region pixels are not detected in a static image. The adaptive threshold is calculated as:
$$T = \begin{cases} \min(\Theta), & \min(\Theta) \ge mt \\ \text{no motion region}, & \min(\Theta) < mt \end{cases} \qquad (5)$$

wherein T is the adaptive threshold, min(·) takes the minimum value of the threshold array, and mt is a predetermined threshold.
The predetermined threshold mt is the further threshold this embodiment provides for separating static-area pixels; it may generally be taken as 18. That is, when the gray value of the computed adaptive threshold is smaller than 18, the two frames of the first and second exposure images are considered to contain no moving area. The threshold can be adjusted for different scene conditions. Once the adaptive threshold is determined, the differential image can be threshold-segmented with formula (3) to obtain the initial binary image.
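A sketch of the adaptive threshold of formulas (3)-(5), under the reconstruction above; the static-pixel percentage p_static = 0.95 is an assumed value for P, which the patent does not state, while mt = 18 follows the text:

```python
import numpy as np

def adaptive_threshold(diff, p_static=0.95, mt=18):
    # Formula (4): gray levels t whose cumulative histogram reaches
    # P*H*W; the smallest such t is the candidate threshold.
    h, w = diff.shape
    hist = np.bincount(diff.ravel(), minlength=256)
    candidates = np.flatnonzero(np.cumsum(hist) >= p_static * h * w)
    t = int(candidates.min())
    # Formula (5): below mt the frame pair is treated as static.
    return t if t >= mt else None

def segment_motion(diff, t):
    # Formula (3): 1 where the difference exceeds the threshold.
    if t is None:
        return np.zeros(diff.shape, dtype=np.uint8)
    return (diff > t).astype(np.uint8)
```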
Step 204, performing morphological dilation-erosion treatment on the initial binary image to obtain a binary image comprising a motion area;
The initial binary image obtained in step 203 still exhibits holes and isolated noise points, so morphological dilation-erosion treatment must also be applied: isolated noise regions and hole regions are eroded away, while pixels at the edge of the motion area are dilated so that the determined motion area completely contains the motion pixels. The dilation-erosion is calculated as:
$$M' = (M \ominus B_1) \oplus B_2 \qquad (6)$$

wherein ⊖ denotes the erosion operation and ⊕ denotes the dilation operation; $B_1$ is the erosion filter template, for example a circle with a radius of 4 pixels; $B_2$ is the dilation filter template, for example a circle with a radius of 20 pixels; and M' is the binary image, including the motion region, after the dilation-erosion operation.
For example, referring to fig. 8, fig. 8 is a binary image including a motion region after performing an expansion etching operation on the differential image of fig. 7.
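A minimal sketch of the dilation-erosion of formula (6); the circular templates use the example radii of 4 and 20 pixels from the text (kernel side length = 2r + 1):

```python
import cv2

def refine_mask(m):
    # Formula (6): erode with a radius-4 disk B1, then dilate with a
    # radius-20 disk B2.
    b1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    b2 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (41, 41))
    return cv2.dilate(cv2.erode(m, b1), b2)
```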
Based on the foregoing embodiments, the embodiment specifically describes step 3 in the first embodiment, where step 3 may specifically include:
Step 301, filtering illuminance components of the first exposure image and the second exposure image by using a Retinex theory, and correspondingly obtaining a fifth exposure image and a sixth exposure image;
The Retinex image enhancement algorithm holds that an image consists of an illumination component and a reflection component: the illumination component reflects the overall brightness of the image, while the reflection component reflects the original appearance of the image scene. Based on this, the present embodiment restores the original information of the first and second exposure images by filtering out their illuminance component information and retaining their reflection component information. The Retinex model can be expressed as:
$$S_k(x,y) = L_k(x,y)\cdot R_k(x,y) \qquad (7)$$

wherein $S_k$ is the k-th original image (the original images to be fused in the sequence); in this embodiment $S_1$ is the first exposure image and $S_2$ is the second exposure image; (x, y) is the coordinate position of the pixel, $L_k$ denotes the illumination component of the k-th original image, and $R_k$ is the reflection component of the k-th original image. For a grayscale image, in the single-scale case, the reflection component of the k-th original image can be expressed as:

$$R'_k(x,y) = \log S_k(x,y) - \log\big[F(x,y) * S_k(x,y)\big] \qquad (8)$$

wherein $R'_k$ is the reflection component of the k-th original image and F(x, y) is a center-surround function, typically a Gaussian function, which can be expressed as:
$$F(x,y) = K\, e^{-(x^2+y^2)/\sigma^2} \qquad (9)$$

wherein K is a normalization factor and σ is the standard deviation.
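A sketch of the single-scale Retinex reflectance of formulas (8)-(9); the Gaussian surround is realized as a blur, and sigma = 80 is an assumed value, since the patent's typical value for σ did not survive extraction:

```python
import cv2
import numpy as np

def ssr_reflectance(img, sigma=80.0):
    # Formula (8): log(S) minus log of the surround F*S of formula (9).
    s = img.astype(np.float64) + 1.0  # +1 avoids log(0)
    surround = cv2.GaussianBlur(s, (0, 0), sigma)
    return np.log(s) - np.log(surround)
```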
Step 302, respectively processing the fifth exposure image and the sixth exposure image according to the brightness mapping model to obtain a first brightness balance image and a second brightness balance image;
Directly applying an exponential transform to the reflection component $R'_k$ stretches the dynamic range of the image but cannot guarantee that the brightness of the fifth and sixth exposure images falls in the same range, so it cannot balance their brightness. A certain brightness mapping range is therefore defined for the fifth and sixth exposure images to balance their brightness. First, the maximum and the minimum of the mapping for the fifth and sixth exposure images are, respectively:
$$\mathrm{Max} = \mathrm{ave} + \alpha V, \qquad \mathrm{Min} = \mathrm{ave} - \beta V \qquad (10)$$

wherein ave is the mean of the reflection component $R'_k$, α and β are adjustable parameters, usually taken as α = β = 3, and V is the larger of the variances of the fifth exposure image and the sixth exposure image. The specific calculation formulas of ave and V are:
$$\mathrm{ave} = \mathrm{Mean}\big(R'_k\big), \qquad V = \max\big(\mathrm{Var}(R'_1),\, \mathrm{Var}(R'_2)\big) \qquad (11)$$

wherein Mean(·) is the matrix mean, Var(·) is the matrix variance, and max(·) is the maximum of the array. The brightness mapping model is therefore:
$$\hat S_k(x,y) = \frac{R'_k(x,y) - \mathrm{Min}}{\mathrm{Max} - \mathrm{Min}} \qquad (12)$$

wherein $\hat S_k$ is the result after brightness balance of the k-th original image; in this embodiment $\hat S_1$ is the first brightness balance image, obtained from the fifth exposure image after brightness balance treatment, and $\hat S_2$ is the second brightness balance image, obtained from the sixth exposure image after brightness balance treatment.
For example, please refer to fig. 9 a-9 b, wherein fig. 9a is a balanced image corresponding to fig. 5a after the brightness balancing process, and fig. 9b is a balanced image corresponding to fig. 6a after the brightness balancing process.
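A sketch of the brightness mapping of formulas (10)-(12) as reconstructed above, with the suggested α = β = 3; the clipping to [0, 1] is an assumption:

```python
import numpy as np

def balance_pair(r1, r2, alpha=3.0, beta=3.0):
    # Formula (11): V is the larger of the two reflectance variances.
    v = max(r1.var(), r2.var())
    balanced = []
    for r in (r1, r2):
        # Formula (10): mapping window around the mean of R'_k.
        hi = r.mean() + alpha * v
        lo = r.mean() - beta * v
        # Formula (12): map the reflectance into the window and clip.
        balanced.append(np.clip((r - lo) / (hi - lo), 0.0, 1.0))
    return balanced  # first and second brightness balance images
```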
Based on the foregoing embodiments, the embodiment specifically describes step 4 in the first embodiment, where step 4 specifically may include:
step 401, respectively combining the first exposure image and the second exposure image with the motion area to obtain a first basic weight image and a second basic weight image;
step 4011, obtaining a first initial weight image according to the image contrast and the exposure moderation degree of the first exposure image, and obtaining a second initial weight image according to the image contrast and the exposure moderation degree of the second exposure image;
For single-channel grayscale images, the saturation information of the images need not be considered; the initial weight image is therefore obtained by multiplying the following two factors:
$$W_{k,x,y} = \big(C_{k,x,y}\big)^{w_c}\cdot\big(E_{k,x,y}\big)^{w_e} \qquad (13)$$

wherein $C_{k,x,y}$ is the image contrast of the k-th original image $I_k$ at coordinate (x, y), $E_{k,x,y}$ is the exposure moderation of $I_k$ at (x, y), $w_c$ and $w_e$ are weight exponents, generally $w_c = w_e = 1$, and $W_{k,x,y}$ is the pixel value of the k-th initial weight image at coordinate (x, y); $W_{1,x,y}$ is then the pixel value of the first initial weight image at (x, y) and $W_{2,x,y}$ that of the second initial weight image.
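A sketch of formula (13); the concrete contrast measure (Laplacian magnitude) and the Gaussian well-exposedness curve with sigma_e = 0.2 are customary Mertens-style choices, assumed here because the patent does not define C and E explicitly:

```python
import cv2
import numpy as np

def initial_weight(img, w_c=1.0, w_e=1.0, sigma_e=0.2):
    # Formula (13): contrast times exposure moderation, each raised
    # to its weight exponent (w_c = w_e = 1 per the text).
    f = img.astype(np.float64) / 255.0
    contrast = np.abs(cv2.Laplacian(f, cv2.CV_64F))               # C_k
    exposedness = np.exp(-((f - 0.5) ** 2) / (2 * sigma_e ** 2))  # E_k
    return (contrast ** w_c) * (exposedness ** w_e) + 1e-12
```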
Step 4012, combining the first initial weight image and the motion area and performing normalization processing to obtain a first basic weight image, and combining the second initial weight image and the motion area and performing normalization processing to obtain a second basic weight image;
So that the weight image has no abrupt change at the boundary of the "ghost" region, the Gaussian-filtered M' is taken as the final motion region mask. The initial weight images must therefore be combined with the binary image containing the motion region, after which normalization yields the basic weight images. The combination of an initial weight image with the binary image containing the motion region is calculated as:
$$W'_{k,x,y} = \begin{cases} W_{k,x,y}, & k = \mathrm{ref} \\ W_{k,x,y}\,\big(1 - \tilde M_{x,y}\big), & k \ne \mathrm{ref} \end{cases} \qquad (14)$$

wherein $\tilde M$ is the Gaussian-filtered mask M', $W'_{k,x,y}$ is the pixel value at coordinate (x, y) of the k-th initial weight image combined with the binary image, so that $W'_{1,x,y}$ is that of the first initial weight image and $W'_{2,x,y}$ that of the second; ref is the index of the reference image, in this embodiment 1 or 2, and the motion state of the fused image should stay consistent with that of the selected reference image. $W'_{k,x,y}$ is then normalized as follows:
$$\bar W_{k,x,y} = \frac{W'_{k,x,y}}{\sum_{j=1}^{2} W'_{j,x,y}} \qquad (15)$$

wherein $\bar W_{k,x,y}$ is the pixel value of the k-th basic weight map at coordinate (x, y); $\bar W_{1,x,y}$ is the pixel value of the first basic weight image at (x, y), and $\bar W_{2,x,y}$ is that of the second basic weight image.
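A sketch of formulas (14)-(15) as reconstructed above; the 21×21 Gaussian used to soften the mask is an assumed size:

```python
import cv2
import numpy as np

def basic_weights(w1, w2, mask, ref=1):
    # Gaussian-filter the binary mask M' to get a soft motion mask.
    soft = cv2.GaussianBlur(mask.astype(np.float64), (21, 21), 0)
    # Formula (14): suppress the non-reference weight in motion areas.
    w1c = w1 if ref == 1 else w1 * (1.0 - soft)
    w2c = w2 if ref == 2 else w2 * (1.0 - soft)
    # Formula (15): normalize so the two maps sum to one per pixel.
    total = w1c + w2c + 1e-12
    return w1c / total, w2c / total
```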
Based on the foregoing embodiments, the embodiment specifically describes step 5 in the first embodiment, where step 5 specifically may include:
step 5.1, carrying out Laplacian pyramid transformation on the first brightness balance image to obtain a first Laplacian pyramid, and carrying out Laplacian pyramid transformation on the second brightness balance image to obtain a second Laplacian pyramid;
step 5.2, carrying out Gaussian pyramid transformation on the first basic weight image to obtain a first weight image Gaussian pyramid, and carrying out Gaussian pyramid transformation on the second basic weight image to obtain a second weight image Gaussian pyramid;
Step 5.3, obtaining a fused image according to the first Laplacian pyramid, the second Laplacian pyramid, the first weight map Gaussian pyramid and the second weight map Gaussian pyramid;
and 5.31, obtaining a fused Laplacian pyramid according to the first Laplacian pyramid, the second Laplacian pyramid, the first weight map Gaussian pyramid and the second weight map Gaussian pyramid, wherein the calculation formula of the Laplacian pyramid is as follows:
$$L\{F\}^d_{x,y} = \sum_{k=1}^{2} G\{\bar W_k\}^d_{x,y}\cdot L\{\hat S_k\}^d_{x,y} \qquad (16)$$

wherein $L\{\hat S_k\}^d_{x,y}$ is the pixel value at coordinate (x, y) of the d-th layer of the Laplacian pyramid corresponding to the k-th brightness balance image, $G\{\bar W_k\}^d_{x,y}$ is the pixel value at coordinate (x, y) of the d-th layer of the Gaussian pyramid corresponding to the k-th weight image, and $L\{F\}^d$ is the fused Laplacian pyramid.
Step 5.32, obtaining an enhanced Laplacian pyramid according to the gain coefficient matrix and the fused Laplacian pyramid;
Because of the fusion of the Laplacian pyramids and the excessive filtering of the illumination component, the fused image obtained in step 5.31 loses part of its detail information. To address this, the present embodiment restores the detail information of the image by applying detail enhancement to the fused Laplacian pyramid while also reducing the noise information of the image. Since the human visual system is more sensitive to the high-frequency information of the image, the gain coefficient should decrease as the layer number of the Laplacian pyramid increases. The gain coefficient is calculated as:
$$G_d(x,y) = g_L + (g_H - g_L)\left(\frac{D-d}{D}\right)^{\gamma}\big(1 - \bar N_d(x,y)\big) \qquad (17)$$

wherein G is the gain coefficient matrix, $g_L$ is the smallest gain coefficient, $g_H$ is the largest gain coefficient, and D is the layer number of the highest layer of the fused Laplacian pyramid; this example takes $D = \log_2(\min(H,W)) - \log_2(\min(H,W)/2)$. d is the layer index of the fused Laplacian pyramid, γ is an adjustable parameter, for example 0.5, and $\bar N_d(x,y)$ is the noise visibility at the (x, y) coordinate position in the d-th layer of the fused Laplacian pyramid, calculated as:
$$\bar N_d(x,y) = \frac{1}{1 + \omega\, e_d(x,y)} \qquad (18)$$

wherein $e_d(x,y)$ is the local image entropy at the (x, y) coordinate position of the d-th layer of the fused Laplacian pyramid; the local image may, for example, be a 3×3 neighborhood. ω is an adjustable parameter, generally taken as 1. In general, a local patch with more detail information has lower noise visibility, and a local patch with less detail information has higher local noise visibility.
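A sketch of the noise visibility of formula (18) as reconstructed above, with a brute-force 3×3 local entropy; the 8-bit quantization used for the entropy estimate is an assumption:

```python
import numpy as np

def noise_visibility(layer, omega=1.0):
    # Quantize the layer to 8-bit bins for the entropy estimate
    # (an assumption; the patent does not specify the binning).
    lo, hi = layer.min(), layer.max()
    q = ((layer - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)
    h, w = q.shape
    pad = np.pad(q, 1, mode='edge')
    vis = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            _, counts = np.unique(pad[y:y + 3, x:x + 3],
                                  return_counts=True)
            p = counts / counts.sum()
            e = -(p * np.log2(p)).sum()          # local entropy e_d(x, y)
            vis[y, x] = 1.0 / (1.0 + omega * e)  # formula (18)
    return vis
```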
Finally, multiplying the gain coefficient matrix by the Laplacian pyramid of the corresponding pixel point of the corresponding layer to obtain an enhanced Laplacian pyramid, wherein the calculation formula of the enhanced Laplacian pyramid is as follows:
$$L\{F\}'^{\,d}_{x,y} = G_d(x,y)\cdot L\{F\}^d_{x,y} \qquad (19)$$

wherein $L\{F\}'^{\,d}_{x,y}$ is the pixel value at the (x, y) position of the d-th layer of the enhanced Laplacian pyramid.
Step 5.33, performing inverse transformation on the enhanced Laplacian pyramid to obtain the fused image.
For example, referring to fig. 10, fig. 10 is a fused image of fig. 5a and 6 a.
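Putting the enhancement together: a sketch applying the gain of formula (17), as reconstructed above with illustrative bounds g_l = 1.0 and g_h = 1.5, to each fused layer per formula (19):

```python
def enhance_pyramid(fused, vis, g_l=1.0, g_h=1.5, gamma=0.5):
    # fused: list of fused Laplacian layers (numpy arrays); vis:
    # matching noise-visibility maps. g_l and g_h are illustrative.
    big_d = max(len(fused) - 1, 1)
    enhanced = []
    for d, (layer, n) in enumerate(zip(fused, vis)):
        # Formula (17): gain falls with layer index and with noise
        # visibility, bounded below by g_l and above by g_h.
        g = g_l + (g_h - g_l) * ((big_d - d) / big_d) ** gamma * (1.0 - n)
        enhanced.append(g * layer)  # formula (19)
    return enhanced
```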
The invention achieves effective "ghost" removal and image fusion for two frames of exposure images; the processing result effectively recovers the detail information of the overexposed and underexposed areas of the image. Meanwhile, the brightness balance algorithm of this embodiment avoids the blotch and artifact phenomena of traditional fused images, and the two-frame exposure fusion method provided here greatly improves the temporal resolution of image capture, laying a solid foundation for implementing the method of this embodiment on a hardware platform and acquiring high-dynamic-range real-time video.
To better illustrate the dynamic-scene double-exposure fusion method provided by the invention, this embodiment compares it with the algorithms of Sen et al and Ma K et al. Referring to FIGS. 11a-11g: FIGS. 11a and 11b are the low-exposure and high-exposure images shot by the camera, and FIG. 11c is the direct fusion result of the traditional pyramid algorithm. FIG. 11c shows severe blotches and artifacts at the tree trunks and fences; these are shadows cast on the trunks by occluding leaves, but the traditional pyramid algorithm's output severely distorts the scene information. To solve this, the two frames must be brightness-balanced and the balanced images fused directly; FIG. 11d shows the fusion after brightness balance, where the blotch artifacts disappear but artifacts arise in the region where the person walks, seriously degrading the fusion quality. FIGS. 11e and 11f are the results of Sen et al and Ma K et al on the two captured exposure frames, and FIG. 11g is the result after removing ghosts with the double-exposure fusion method of the invention. The result of Sen et al removes ghosts well but loses the detail of leaves and branches, such as the boxed region in FIG. 11e, where the detail of the underexposed region is almost entirely lost. The result of Ma K et al achieves good overall contrast and detail, but in the boxed region of FIG. 11f and at the railing, the detail contrast is weak and overexposure remains. FIG. 11g shows that the proposed fusion method avoids these shortcomings: it displays the detail of every part of the image, recovers the detail of the overexposed and underexposed regions well, and renders detailed areas clearly.
In addition, this embodiment provides another comparison between the dynamic-scene double-exposure fusion method and the methods of Sen et al and Ma K et al. Referring to FIGS. 12a-12g: FIGS. 12a and 12b are the low-exposure and high-exposure images of a playground scene shot by the camera, and FIG. 12c is the direct fusion result of the traditional pyramid algorithm; severe blotches and artifacts appear at the leaves and stairs because the brightness difference between the two frames is too large, seriously distorting the scene information. FIG. 12d shows the fusion after brightness balance: the blotch artifacts at the stairs and leaves disappear, but a semi-transparent "ghost" appears where the person and the ball move. FIGS. 12e and 12f are the results of Sen et al and Ma K et al on the two captured frames, and FIG. 12g is the result of the double-exposure fusion method of the invention. In FIG. 12e, the result of Sen et al shows almost no detail in the rostrum region. In FIG. 12f, the result of Ma K et al achieves good overall contrast, but the detail contrast is weak in the boxed region. FIG. 12g shows that the method displays the detail of every part of the image; it restores the rostrum's detail especially well, and the trees behind the rostrum are fully visible.
This embodiment further provides a comparison between the dynamic-scene double-exposure fusion method and the methods of Sen et al and Ma K et al. Referring to FIGS. 13a-13g: FIGS. 13a and 13b are the low-exposure and high-exposure images of a pine-tree scene shot by the camera, and FIG. 13c is the direct fusion result of the traditional pyramid algorithm; blotches and artifacts persist, with severe artifacts in the region where the person moves. FIG. 13d is the fusion after brightness balance; it contains large overexposed and underexposed areas, and the contour of the moving person is disordered. FIGS. 13e and 13f are the results of Sen et al and Ma K et al on the two captured frames. FIG. 13e shows that the result of Sen et al cannot recover the detail of the overexposed and underexposed regions; FIG. 13f shows that the result of Ma K et al exhibits severe artifacts at the leaves in the boxed region, seriously affecting the image's overall appearance. FIG. 13g shows that the result of the double-exposure fusion method of the invention renders the leaves naturally while clearly displaying the detail of every part of the image, with higher contrast in the overexposed and underexposed regions.
In the description of the present invention, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Further, one skilled in the art can engage and combine the different embodiments or examples described in this specification.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (8)

1. The double-exposure image fusion method based on the dynamic scene is characterized by comprising the following steps of:
acquiring a first exposure image and a second exposure image, wherein the exposure degrees of the first exposure image and the second exposure image are different;
respectively carrying out histogram equalization processing on the first exposure image and the second exposure image to correspondingly obtain a third exposure image and a fourth exposure image with the same brightness;
calculating pixel difference values between the third exposure image and the fourth exposure image by using a frame difference method to obtain a difference image;
obtaining a threshold array according to the percentage of the total pixels of the differential image occupied by the static areas of the first exposure image and the second exposure image, the number of pixels occupied by the width of the differential image, and the number of pixels occupied by its height; obtaining an adaptive threshold according to the threshold array;
Performing threshold segmentation processing on the differential image according to the self-adaptive threshold to obtain an initial binary image;
performing morphological dilation-erosion treatment on the initial binary image to obtain a binary image comprising a motion area;
filtering illuminance components in the first exposure image and the second exposure image according to the Retinex theory, and correspondingly obtaining a first brightness balance image and a second brightness balance image according to the first exposure image and the second exposure image after filtering the illuminance components;
combining the first exposure image and the second exposure image with the motion area respectively to obtain a first basic weight image and a second basic weight image;
and processing the first brightness balance image, the second brightness balance image, the first basic weight image and the second basic weight image according to a pyramid image fusion algorithm with the self-adaptive detail enhancement to obtain fused images.
2. The double-exposure image fusion method according to claim 1, wherein the threshold array is:
[threshold-array formula rendered only as an image in the source]
wherein [the array symbol, rendered only as an image in the source] denotes the threshold array with elements n_i, P is the percentage of the total number of pixels of the difference image occupied by the static regions of the first exposure image and the second exposure image, t is an integer between 0 and 255, H is the number of pixels across the height of the difference image, and W is the number of pixels across the width of the difference image.
3. The double-exposure image fusion method according to claim 2, wherein the adaptive threshold is calculated as:
[adaptive-threshold formula rendered only as an image in the source]
wherein T is the adaptive threshold, min(·) is the minimum value of the threshold array, and mt is a predetermined threshold.
4. The double-exposure image fusion method according to claim 1, wherein filtering out the illuminance components of the first exposure image and the second exposure image according to Retinex theory, and obtaining the first brightness-balanced image and the second brightness-balanced image from the filtered images, comprises:
filtering out the illuminance components of the first exposure image and the second exposure image respectively by using Retinex theory to obtain a fifth exposure image and a sixth exposure image; and
processing the fifth exposure image and the sixth exposure image respectively according to a brightness mapping model to obtain the first brightness-balanced image and the second brightness-balanced image.
5. The double-exposure image fusion method according to claim 1, wherein combining the first exposure image and the second exposure image with the motion region, respectively, to obtain the first basic weight image and the second basic weight image comprises:
obtaining a first initial weight image according to the image contrast and the well-exposedness of the first exposure image;
obtaining a second initial weight image according to the image contrast and the well-exposedness of the second exposure image;
combining the first initial weight image with the binary image and normalizing the result to obtain the first basic weight image; and
combining the second initial weight image with the binary image and normalizing the result to obtain the second basic weight image.
6. The double-exposure image fusion method according to claim 1, wherein processing the first brightness-balanced image, the second brightness-balanced image, the first basic weight image and the second basic weight image with the adaptive-detail-enhancement pyramid image fusion algorithm to obtain the fused image comprises:
applying the Laplacian pyramid transform to the first brightness-balanced image and the second brightness-balanced image respectively to obtain a first Laplacian pyramid and a second Laplacian pyramid;
applying the Gaussian pyramid transform to the first basic weight image and the second basic weight image respectively to obtain a first weight-map Gaussian pyramid and a second weight-map Gaussian pyramid; and
obtaining the fused image from the first Laplacian pyramid, the second Laplacian pyramid, the first weight-map Gaussian pyramid and the second weight-map Gaussian pyramid.
7. The double-exposure image fusion method according to claim 6, wherein obtaining the fused image from the first Laplacian pyramid, the second Laplacian pyramid, the first weight-map Gaussian pyramid and the second weight-map Gaussian pyramid comprises:
obtaining a fused Laplacian pyramid from the first Laplacian pyramid, the second Laplacian pyramid, the first weight-map Gaussian pyramid and the second weight-map Gaussian pyramid;
obtaining an enhanced Laplacian pyramid from a gain coefficient matrix and the fused Laplacian pyramid; and
applying the inverse transform to the enhanced Laplacian pyramid to obtain the fused image.
8. The double-exposure image fusion method according to claim 7, wherein the gain coefficient is calculated as:
[gain-coefficient formula rendered only as an image in the source]
wherein G is the gain coefficient matrix, G_L is the minimum gain coefficient, G_H is the maximum gain coefficient, D is the index of the highest layer of the fused Laplacian pyramid, d is the layer index within the fused Laplacian pyramid, γ is an adjustable parameter, and V_d(x, y) [the symbol is rendered only as an image in the source] is the noise visibility at coordinate (x, y) in layer d of the fused Laplacian pyramid.
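
The sketches below, added for illustration, walk the claimed pipeline through in Python with OpenCV and NumPy. They are one plausible reading of claims 1-8, not the patented implementation: the claim formulas survive only as images in the source, so every concrete expression, function name, and default parameter here (p_static, mt, kernel sizes, and so on) is an assumption. First, the motion-detection steps of claims 1-3: histogram equalization, frame differencing, a histogram-based threshold array, the adaptive threshold, and morphological dilation and erosion.

import cv2
import numpy as np

def motion_mask(img1, img2, p_static=0.9, mt=15):
    """Binary mask of the motion region between two 8-bit BGR exposures.

    The threshold array below (distance between the cumulative difference
    histogram and the expected static-pixel count P*H*W) is an assumed
    reconstruction of the image-only formula in claim 2.
    """
    # Histogram equalization brings the two exposures to comparable
    # brightness before differencing (claim 1).
    eq1 = cv2.equalizeHist(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY))
    eq2 = cv2.equalizeHist(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY))
    diff = cv2.absdiff(eq1, eq2)              # frame-difference image
    h, w = diff.shape

    # Threshold array over t = 0..255 (claim 2).
    hist = np.bincount(diff.ravel(), minlength=256)
    n = np.abs(np.cumsum(hist) - p_static * h * w)

    # Adaptive threshold: position of the array minimum, floored by the
    # predetermined threshold mt (claim 3).
    T = max(int(np.argmin(n)), mt)

    mask = (diff > T).astype(np.uint8) * 255  # initial binary image
    # Morphological dilation followed by erosion (closing) fills holes
    # and consolidates the motion region (claim 1).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)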
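
Claim 4 removes the illuminance component with Retinex theory and then applies a brightness mapping model. The claim spells out neither operator, so the single-scale Retinex sketch below, with a Gaussian-smoothed illuminance estimate removed in the log domain and a min-max stretch standing in for the unspecified mapping model, is an assumed reading.

import cv2
import numpy as np

def luminance_balance(img, sigma=80):
    """Single-scale-Retinex stand-in for the claim-4 luminance balance."""
    f = img.astype(np.float32) + 1.0               # avoid log(0)
    illuminance = cv2.GaussianBlur(f, (0, 0), sigma)
    reflectance = np.log(f) - np.log(illuminance)  # illuminance filtered out
    # Placeholder for the patent's brightness mapping model: stretch the
    # reflectance back to a displayable 8-bit range.
    return cv2.normalize(reflectance, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)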
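
Claim 5 builds a per-exposure weight map from image contrast and well-exposedness, gates it with the binary motion mask, and normalizes across the two exposures. The claim fixes neither measure, so the choices below (Laplacian magnitude for contrast, a Gaussian around mid-gray for well-exposedness, and keeping only the first exposure's weight inside the motion region) are Mertens-style assumptions.

import cv2
import numpy as np

def basic_weights(img1, img2, mask, sigma=0.2, eps=1e-12):
    """First and second basic weight images of claim 5 (assumed measures)."""
    def initial_weight(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
        well_exposed = np.exp(-((gray - 0.5) ** 2) / (2.0 * sigma ** 2))
        return contrast * well_exposed

    w1, w2 = initial_weight(img1), initial_weight(img2)
    # Inside the motion region, suppress the second exposure's weight so
    # the moving object is taken from a single frame (ghost removal).
    moving = mask.astype(np.float32) / 255.0
    w2 = w2 * (1.0 - moving)
    total = w1 + w2 + eps                          # normalization
    return w1 / total, w2 / total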
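
Claims 6 and 7 blend the Laplacian pyramids of the two brightness-balanced images under the Gaussian pyramids of the two weight maps, then collapse the fused pyramid. A minimal sketch; the layer count, data types, and border handling are free implementation choices.

import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    pyr = [gp[i] - cv2.pyrUp(gp[i + 1],
                             dstsize=(gp[i].shape[1], gp[i].shape[0]))
           for i in range(levels - 1)]
    pyr.append(gp[-1])                             # coarsest residual
    return pyr

def fuse(img1, img2, w1, w2, levels=5):
    """Weighted pyramid fusion of claims 6-7 (without the claim-8 gain)."""
    l1 = laplacian_pyramid(img1.astype(np.float32), levels)
    l2 = laplacian_pyramid(img2.astype(np.float32), levels)
    g1 = gaussian_pyramid(w1.astype(np.float32), levels)
    g2 = gaussian_pyramid(w2.astype(np.float32), levels)
    fused = [g1[d][..., None] * l1[d] + g2[d][..., None] * l2[d]
             for d in range(levels)]
    out = fused[-1]
    for d in range(levels - 2, -1, -1):            # inverse transform
        out = cv2.pyrUp(out, dstsize=(fused[d].shape[1],
                                      fused[d].shape[0])) + fused[d]
    return np.clip(out, 0, 255).astype(np.uint8)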
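
Claim 8 scales each layer of the fused Laplacian pyramid by a gain coefficient built from the minimum and maximum gains G_L and G_H, the layer index d, the highest layer D, the adjustable parameter γ, and the per-pixel noise visibility V_d(x, y). The formula itself is an image in the source, so the monotone form below, with the gain falling toward G_L where noise visibility is high and on coarser layers, is only a plausible reconstruction; it would slot in between building the fused list and the collapse loop in fuse() above.

import cv2
import numpy as np

def enhance_pyramid(fused_pyr, g_low=1.0, g_high=2.0, gamma=1.0):
    """Assumed claim-8 gain: amplify low-noise detail, spare the base layer."""
    enhanced = []
    D = max(len(fused_pyr) - 1, 1)
    for d, layer in enumerate(fused_pyr[:-1]):
        # Crude noise-visibility proxy: local variance of the detail
        # layer, normalized to [0, 1].
        mean = cv2.blur(layer, (5, 5))
        var = cv2.blur((layer - mean) ** 2, (5, 5))
        v = var / (var.max() + 1e-12)
        # High visibility or coarse layer -> gain near g_low; clean,
        # fine detail -> gain toward g_high.
        g = g_low + (g_high - g_low) * ((1.0 - v) * (1.0 - d / D)) ** gamma
        enhanced.append(g * layer)
    enhanced.append(fused_pyr[-1])                 # leave the base layer as-is
    return enhanced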
CN201910693162.0A 2019-07-30 2019-07-30 Double-exposure image fusion method based on dynamic scene Active CN110599433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910693162.0A CN110599433B (en) 2019-07-30 2019-07-30 Double-exposure image fusion method based on dynamic scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910693162.0A CN110599433B (en) 2019-07-30 2019-07-30 Double-exposure image fusion method based on dynamic scene

Publications (2)

Publication Number Publication Date
CN110599433A CN110599433A (en) 2019-12-20
CN110599433B true CN110599433B (en) 2023-06-06

Family

ID=68853001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910693162.0A Active CN110599433B (en) 2019-07-30 2019-07-30 Double-exposure image fusion method based on dynamic scene

Country Status (1)

Country Link
CN (1) CN110599433B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111429368B * 2020-03-16 2023-06-27 Chongqing University of Posts and Telecommunications Multi-exposure image fusion method for self-adaptive detail enhancement and ghost elimination
CN111724332B * 2020-06-09 2023-10-31 Sichuan University Image enhancement method and system suitable for closed cavity detection
CN111770282B * 2020-06-28 2021-06-01 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and device, computer readable medium and terminal equipment
CN111754440B * 2020-06-29 2023-05-05 Suzhou Keda Technology Co., Ltd. License plate image enhancement method, system, equipment and storage medium
CN112489095B * 2020-11-25 2021-08-17 Dongguan Aikesi Technology Co., Ltd. Reference image selection method and device, storage medium and depth camera
WO2022141265A1 * 2020-12-30 2022-07-07 Huawei Technologies Co., Ltd. Image processing method and device
CN113012070B * 2021-03-25 2023-09-26 Changzhou Institute of Technology High dynamic scene image sequence acquisition method based on fuzzy control
CN113362264B * 2021-06-23 2022-03-18 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences Gray level image fusion method
CN113781370A (en) * 2021-08-19 2021-12-10 Beijing Megvii Technology Co., Ltd. Image enhancement method and device and electronic equipment
CN114897745B * 2022-07-14 2022-12-20 Honor Device Co., Ltd. Method for expanding dynamic range of image and electronic equipment
CN116528058B * 2023-05-26 2023-10-31 Space Engineering University, PLA Strategic Support Force High dynamic imaging method and system based on compression reconstruction
CN116563190B * 2023-07-06 2023-09-26 Shenzhen Super Pixel Intelligent Technology Co., Ltd. Image processing method, device, computer equipment and computer readable storage medium


Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4346620B2 * 2006-03-17 2009-10-21 Toshiba Corporation Image processing apparatus, image processing method, and image processing program
EP2297939B1 (en) * 2008-06-19 2018-04-18 Panasonic Intellectual Property Management Co., Ltd. Method and apparatus for motion blur and ghosting prevention in imaging system
US8406569B2 (en) * 2009-01-19 2013-03-26 Sharp Laboratories Of America, Inc. Methods and systems for enhanced dynamic range images and video from multiple exposures
KR20130031574A * 2011-09-21 2013-03-29 Samsung Electronics Co., Ltd. Image processing method and image processing apparatus
US10255888B2 (en) * 2012-12-05 2019-04-09 Texas Instruments Incorporated Merging multiple exposures to generate a high dynamic range image
US20160205291A1 (en) * 2015-01-09 2016-07-14 PathPartner Technology Consulting Pvt. Ltd. System and Method for Minimizing Motion Artifacts During the Fusion of an Image Bracket Based On Preview Frame Analysis
US10148888B2 (en) * 2016-05-18 2018-12-04 Texas Instruments Incorporated Image data processing for multi-exposure wide dynamic range image data
CN106530263A * 2016-10-19 2017-03-22 Tianjin University Single-exposure high dynamic range image generation method adapted to medical images
CN109493283A * 2018-08-23 2019-03-19 Jinling Institute of Technology Method for eliminating ghosting in high dynamic range images
CN109300101A * 2018-10-18 2019-02-01 Chongqing University of Posts and Telecommunications Multi-exposure image fusion method based on Retinex theory
CN109754377B * 2018-12-29 2021-03-19 Chongqing University of Posts and Telecommunications Multi-exposure image fusion method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101426091A (en) * 2007-11-02 2009-05-06 Core Logic Inc. Apparatus for digital image stabilization using object tracking and method thereof
WO2014044045A1 (en) * 2012-09-20 2014-03-27 Huawei Technologies Co., Ltd. Image processing method and device
GB201613927D0 (en) * 2015-08-24 2016-09-28 Motorola Mobility Llc Method and apparatus for auto exposure value detection for high dynamic range imaging
CN107220931A (en) * 2017-08-02 2017-09-29 Ankang University High dynamic range image reconstruction method based on grayscale maps

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Automatic Exposure Compensation for Multi-Exposure Image Fusion; Yuma Kinoshita et al.; IEEE; full text *
Research on holographic stereogram synthesis based on integral-imaging data sources and the double-exposure method; Zhao Fuliang; Computer Software and Computer Applications; full text *
Multi-exposure high dynamic range image reconstruction with multi-scale detail fusion; Fu Zhengfang; Zhu Hong; Computer Engineering and Applications (Issue 24); full text *

Also Published As

Publication number Publication date
CN110599433A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
CN110599433B (en) Double-exposure image fusion method based on dynamic scene
CN110619593B (en) Double-exposure video imaging system based on dynamic scene
Tao et al. Low-light image enhancement using CNN and bright channel prior
Bennett et al. Video enhancement using per-pixel virtual exposures
EP2987134B1 (en) Generation of ghost-free high dynamic range images
CN109754377B (en) Multi-exposure image fusion method
US8724921B2 (en) Method of capturing high dynamic range images with objects in the scene
CN109862282B (en) Method and device for processing person image
JP4675851B2 (en) Method for adaptively determining camera settings according to scene and camera configured to adaptively determine camera settings according to scene
US20100272356A1 (en) Device and method for estimating whether an image is blurred
CN111064904A (en) Dark light image enhancement method
CN103973990B (en) wide dynamic fusion method and device
Kim et al. A novel approach for denoising and enhancement of extremely low-light video
JP2007035028A (en) Producing method for high dynamic range image, and production system for high dynamic range output image
KR102221116B1 (en) A device and method for removing the noise on the image using cross-kernel type median filter
US10992845B1 (en) Highlight recovery techniques for shallow depth of field rendering
Lee et al. Image contrast enhancement using classified virtual exposure image fusion
CN110163807B (en) Low-illumination image enhancement method based on expected bright channel
Park et al. Generation of high dynamic range illumination from a single image for the enhancement of undesirably illuminated images
CN110047060B (en) Image processing method, image processing device, storage medium and electronic equipment
CN112785534A (en) Ghost-removing multi-exposure image fusion method in dynamic scene
CN115063331B (en) Multi-scale block LBP operator-based ghost-free multi-exposure image fusion method
CN110740266B (en) Image frame selection method and device, storage medium and electronic equipment
Han et al. Automatic illumination and color compensation using mean shift and sigma filter
Chung et al. Under-exposed image enhancement using exposure compensation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant