CN111429368A - Multi-exposure image fusion method with self-adaptive detail enhancement and ghost elimination - Google Patents


Info

Publication number
CN111429368A
Authority
CN
China
Prior art keywords
image
exposure
pyramid
enhancement
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010182184.3A
Other languages
Chinese (zh)
Other versions
CN111429368B (en)
Inventor
瞿中 (Qu Zhong)
吕磊 (Lv Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Minglong Electronic Technology Co., Ltd.
Original Assignee
Chongqing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Posts and Telecommunications
Priority to CN202010182184.3A
Publication of CN111429368A
Application granted
Publication of CN111429368B
Legal status: Active (current)
Anticipated expiration

Classifications

    (All of the following fall under G: PHYSICS; G06: COMPUTING; CALCULATING OR COUNTING; G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/10: Image enhancement or restoration using non-spatial domain filtering
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/90: Dynamic range modification of images or parts thereof
    • G06T 5/94: Dynamic range modification based on local image properties, e.g. for local contrast enhancement
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T 2207/20024: Filtering details
    • G06T 2207/20048: Transform domain processing
    • G06T 2207/20208: High dynamic range [HDR] image processing (under G06T 2207/20172: Image enhancement details)
    • G06T 2207/20221: Image fusion; Image merging (under G06T 2207/20212: Image combination)

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-exposure image fusion method with self-adaptive detail enhancement and ghost elimination. The method comprises: obtaining an LDR image sequence; constructing a weight map based on the signal intensity and the exposure intensity of the LDR image sequence; carrying out motion detection on the LDR image sequence to obtain a static image sequence; obtaining a fused image pyramid based on the weight map and the static image sequence; carrying out self-adaptive enhancement on the fused image pyramid; and carrying out Laplacian reconstruction on the self-adaptively enhanced images to obtain a fusion image R_final. The invention can retain more image detail information so that the fused image is clearer, can effectively detect and eliminate ghosts, and can be applied to high-dynamic-range imaging of various scenes.

Description

Multi-exposure image fusion method with self-adaptive detail enhancement and ghost elimination
Technical Field
The invention relates to the technical field of multi-exposure image fusion, in particular to a multi-exposure image fusion method with self-adaptive detail enhancement and ghost elimination.
Background
Currently available imaging devices typically capture a dynamic range of about 1000:1, while the dynamic range of most real scenes is much greater than what conventional imaging devices can capture, so the image information of the scene cannot be completely recorded. Images taken by common imaging devices cannot meet the urgent demand for high-quality images, and High Dynamic Range (HDR) imaging technology emerged to solve this problem. High-dynamic-range imaging generally follows one of two approaches: capturing a high-dynamic image with professional hardware equipment and displaying it on low-dynamic-range equipment, or directly synthesizing a high-dynamic image from a plurality of low-dynamic-range images by an algorithm and displaying it on low-dynamic-range equipment.
In recent years, researchers at home and abroad have studied multi-exposure image fusion. Mertens et al. proposed a pyramid-based multi-exposure image fusion algorithm that effectively removes halos. Vanmali et al. directly weight and sum the input image sequence, which improves the computational efficiency of the algorithm. Qu et al. improved the multi-scale image fusion framework and effectively enhanced image detail information.
Ghost removal has also been a focus of recent research. Wei et al. remove ghosts using difference maps and superpixel-segmentation partition weights.
Ma et al. adopt an image-blocking approach to fuse multi-exposure images and remove ghosts; Wang et al. improved the difference-image method to detect moving objects in dynamic scenes and obtain ghost-free images; Li et al. remove ghosts using histogram equalization and median-filtering weight refinement.
To effectively enhance image details and eliminate ghosting to a greater extent, the invention provides a novel detail enhancement and ghost elimination method.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, it is an object of the present invention to provide a multi-exposure image fusion method with adaptive detail enhancement and ghost elimination, which solves the shortcomings of the prior art.
To achieve the above and other related objects, the present invention provides a multi-exposure image fusion method with adaptive detail enhancement and ghost elimination, comprising:
acquiring an LDR image sequence;
constructing a weight map based on the signal intensities and exposure intensities of the LDR image sequence;
carrying out motion detection on the LDR image sequence to obtain a static image sequence;
obtaining a fused image pyramid based on the weight map and the static image sequence;
performing self-adaptive enhancement on the fused image pyramid;
performing Laplacian reconstruction on the adaptively enhanced image to obtain a fusion image R_final.
Optionally, motion detection is performed on the LDR image sequence based on the two-dimensional information entropy of the images.
Optionally, the process of obtaining the difference image includes:
acquiring a reference image I_ref;
carrying out exposure adjustment on each image in the LDR image sequence based on the reference image I_ref to obtain an exposure-adjusted image sequence Î_k;
obtaining a difference image D_k from the reference image I_ref and the exposure-adjusted image sequence Î_k.
Optionally, the exposure-adjusted reference image sequence Î_k is obtained by histogram matching.
Optionally, the two-dimensional information entropy of the difference image D_k is calculated by the following formula:

$$T = -\sum_{i=0}^{255}\sum_{j=0}^{255} p_{ij}\,\log_2 p_{ij}, \qquad p_{ij} = \frac{N_{ij}}{W \times H}$$

where T is the two-dimensional information entropy of the image, i is the gray value of a pixel, j is the mean gray value of the 15 × 15 neighborhood of that pixel, N_ij is the frequency of occurrence of the feature pair f(i, j) in the image, and W × H is the image size.
Optionally, the static image sequence is obtained by the following formula:

$$I_k^{s}(x,y) = \bigl(1 - E_k(x,y)\bigr)\,I_k(x,y) + E_k(x,y)\,\hat{I}_k(x,y)$$

where I_k(x, y) is the motion-containing image sequence, I_k^s(x, y) is the static image sequence, E_k(x, y) is the weight estimate after motion detection, and Î_k(x, y) is the exposure-adjusted reference image sequence.
Optionally, the weight estimate after motion detection is obtained by the following formula:

$$E_k(x,y) = \begin{cases} 0, & T_k(x,y) \le \tau \\ 1, & T_k(x,y) > \tau \end{cases}$$

where τ is a set threshold and T_k(x, y) is the local two-dimensional information entropy of the difference image D_k; E_k(x, y) = 0 indicates that the pixel is a static pixel, and E_k(x, y) = 1 indicates that the pixel is a dynamic pixel.
Optionally, the weight map is obtained by the following formula:

$$\hat{W}_k(x,y) = A_k(x,y) \times B_k(x,y)$$

where A_k(x, y) is the signal strength, B_k(x, y) is the exposure intensity, and Ŵ_k(x, y) is the weight map.
Optionally, Laplacian decomposition is performed on the motion-detected image sequence to obtain an image pyramid L^l{I_k}, and Gaussian decomposition is performed on the weight map to obtain a weight pyramid G^l{W_k}; a fused image pyramid is obtained based on the image pyramid and the weight pyramid:

$$F^{l}(x,y) = \sum_{k=1}^{N} G^{l}\{W_k\}(x,y)\;L^{l}\{I_k\}(x,y)$$

where F^l denotes the l-th level of the fused image pyramid, G^l{W_k} denotes the l-th level of the weight pyramid of the k-th image, and L^l{I_k} denotes the l-th level of the Laplacian pyramid of the k-th image.
Optionally, the method further comprises: acquiring the low-frequency contour region of the fused image pyramid; subtracting it from the fused image to obtain the high-frequency detail region; and performing adaptive enhancement on the high-frequency detail region:

$$R^{l}(x,y) = \bar{F}^{l}(x,y) + G^{l}(x,y)\,\bigl(F^{l}(x,y) - \bar{F}^{l}(x,y)\bigr), \qquad G^{l}(x,y) = \frac{D}{\sigma^{l}(x,y)}$$

where R^l is the image after detail enhancement of the l-th level of the fused image pyramid, F̄^l is the low-frequency contour region of the l-th level image, G^l is the image adaptive gain factor, D is a constant, and σ^l is the local mean square error of the image.
As described above, the multi-exposure image fusion method with adaptive detail enhancement and ghost elimination according to the present invention has the following beneficial effects:
the invention can reserve more image detail information to make the fused image clearer, can effectively detect and eliminate ghost, and can be applied to high dynamic imaging of various scenes.
Drawings
FIG. 1 is a flowchart of a multi-exposure image fusion method with adaptive detail enhancement and ghost elimination according to an embodiment of the present invention;
FIG. 2 is a flowchart of differential image acquisition according to an embodiment of the present invention;
FIG. 3 is an input sequence of the image set "studio" according to an embodiment of the present invention;
FIG. 4 is a comparison of experimental results on the image set "studio" in an embodiment of the present invention, where (1) is the experimental result of the Mertens algorithm and (2) is the experimental result of the present invention; the comparison shows that the result of the present algorithm retains detail better than that of the Mertens algorithm;
FIG. 5 is a detail comparison of the experimental results on the image set "studio" according to an embodiment of the present invention, where (1) is an enlarged view of the detail outside the window in the result of the Mertens algorithm and (2) is an enlarged view of the detail outside the window in the result of the present invention; the comparison shows that the result of the present algorithm is richer in detail;
FIG. 6 is an input sequence of the image set "Forrestsequence" according to an embodiment of the present invention;
FIG. 7 is a comparison of experimental results on the image set "Forrestsequence" in an embodiment of the present invention, where (1) is the experimental result of the Mertens algorithm and (2) is the experimental result of the present invention; the comparison shows that the present algorithm can effectively detect and eliminate ghosts, while the Mertens algorithm cannot completely eliminate them;
FIG. 8 is an input sequence of the image set "Set 1" according to an embodiment of the present invention;
FIG. 9 is a comparison of experimental results on the image set "Set 1", where (1) is the experimental result of the Mertens algorithm and (2) is the experimental result of the present invention; the comparison shows that the present algorithm can effectively detect and eliminate ghosts, while the Mertens algorithm cannot completely eliminate them.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments only schematically illustrate the basic idea of the present invention; they show only the components related to the invention rather than the actual number, shape, and size of components in implementation, where the type, quantity, and proportion of each component may vary freely and the layout may be more complicated.
As shown in fig. 1, the present embodiment provides a multi-exposure image fusion method with adaptive detail enhancement and ghost elimination, including:
S11, acquiring an LDR image sequence;
S12, constructing a weight map based on the signal intensities and exposure intensities of the LDR image sequence;
S13, carrying out motion detection on the LDR (Low Dynamic Range) image sequence to obtain a static image sequence;
S14, obtaining a fused image pyramid based on the weight map and the static image sequence;
S15, performing self-adaptive enhancement on the fused image pyramid;
S16, carrying out Laplacian reconstruction on the adaptively enhanced image to obtain a fusion image R_final.
The invention can keep more image detail information, so that the fused image is clearer, and can effectively detect and eliminate ghosts.
In one embodiment, motion detection is performed on the LDR image sequence based on the two-dimensional information entropy of the images.
In an embodiment, as shown in fig. 2, the process of obtaining the difference image includes:
S21, acquiring a reference image I_ref;
S22, carrying out exposure adjustment on each image in the LDR image sequence based on the reference image I_ref to obtain an exposure-adjusted image sequence Î_k;
S23, obtaining a difference image D_k from the reference image I_ref and the exposure-adjusted image sequence Î_k.
The reference image can be specified in advance.
In one embodiment, the exposure-adjusted reference image sequence Î_k is obtained by histogram matching. Specifically, the difference image D_k is obtained by taking the (absolute) difference between the reference image I_ref and the exposure-adjusted reference image sequence Î_k:

$$D_k = \bigl|\, I_{\mathrm{ref}} - \hat{I}_k \,\bigr|$$
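By way of illustration only, the following is a minimal Python sketch of this exposure-adjustment step, assuming uint8 grayscale images of equal size. The patent specifies histogram matching and a difference image but no particular implementation, so the CDF-based matching and the absolute-difference form below are assumptions, not the patented embodiment itself.

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap the gray levels of `source` so its histogram matches `reference`."""
    src_values, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Normalized cumulative distribution functions of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source gray level, pick the reference level whose CDF is closest.
    lut = np.zeros(256, dtype=np.uint8)
    lut[src_values] = np.round(np.interp(src_cdf, ref_cdf, ref_values)).astype(np.uint8)
    return lut[source]

def difference_image(i_ref: np.ndarray, i_k: np.ndarray) -> np.ndarray:
    """D_k = |I_ref - I_hat_k|, where I_hat_k is I_k histogram-matched to I_ref."""
    i_hat_k = match_histogram(i_k, i_ref)
    return np.abs(i_ref.astype(np.int16) - i_hat_k.astype(np.int16)).astype(np.uint8)
```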
In one embodiment, the two-dimensional information entropy of the difference image D_k is calculated by the following formula:

$$T = -\sum_{i=0}^{255}\sum_{j=0}^{255} p_{ij}\,\log_2 p_{ij}, \qquad p_{ij} = \frac{N_{ij}}{W \times H}$$

where T is the two-dimensional information entropy of the image. The mean of the neighborhood pixels is introduced as a spatial characteristic quantity and combined with the image gray scale to form a feature pair f(i, j), where i denotes the gray value of a pixel and j denotes the mean gray value of its 15 × 15 neighborhood. When the neighborhood window is larger than 15 × 15, the outer contour of the ghost tends to be retained; when it is smaller than 15 × 15, background information around the ghost is removed along with the ghost. N_ij is the frequency with which the feature pair f(i, j) appears in the image, and W × H is the image size.
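The entropy computation can be sketched as follows, under the same assumptions (uint8 grayscale, 15 × 15 neighborhood); `uniform_filter` stands in for the neighborhood-mean step. For the per-pixel weight estimate E_k described below, the same quantity would be evaluated over a window around each pixel rather than over the whole image.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_dim_entropy(img: np.ndarray, win: int = 15) -> float:
    """Two-dimensional information entropy T of a uint8 grayscale image.

    Each pixel contributes the feature pair f(i, j): its own gray value i and
    the mean gray value j of its win x win neighborhood (15 x 15 in the patent).
    """
    i = img.ravel()
    j = uniform_filter(img.astype(np.float64), size=win).round().astype(np.uint8).ravel()

    # N_ij: frequency of each (i, j) pair; p_ij = N_ij / (W * H).
    n_ij = np.zeros((256, 256), dtype=np.int64)
    np.add.at(n_ij, (i, j), 1)
    p_ij = n_ij[n_ij > 0] / img.size
    return float(-np.sum(p_ij * np.log2(p_ij)))
```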
In one embodiment, the static image sequence is obtained by the following formula:

$$I_k^{s}(x,y) = \bigl(1 - E_k(x,y)\bigr)\,I_k(x,y) + E_k(x,y)\,\hat{I}_k(x,y)$$

where I_k(x, y) is the motion-containing image sequence, I_k^s(x, y) is the static image sequence, E_k(x, y) is the weight estimate after motion detection, and Î_k(x, y) is the exposure-adjusted reference image sequence.
In one embodiment, the weight estimate after motion detection is obtained by the following formula:

$$E_k(x,y) = \begin{cases} 0, & T_k(x,y) \le \tau \\ 1, & T_k(x,y) > \tau \end{cases}$$

where τ is a set threshold and T_k(x, y) is the local two-dimensional information entropy of the difference image D_k; E_k(x, y) = 0 indicates that the pixel is a static pixel, and E_k(x, y) = 1 indicates that the pixel is a dynamic pixel. In this embodiment, τ = 2.
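A minimal sketch of the motion-removal step follows; `t_map` is assumed to hold the local two-dimensional entropy of the difference image D_k at each pixel. The patent does not fix how T is localized, so treating it as a per-pixel map is an assumption of this sketch.

```python
import numpy as np

def static_image(i_k: np.ndarray, i_hat_k: np.ndarray,
                 t_map: np.ndarray, tau: float = 2.0) -> np.ndarray:
    """Replace dynamic pixels of I_k with the exposure-adjusted reference.

    E_k = 1 (entropy above tau) marks a dynamic pixel, which is replaced by
    I_hat_k; E_k = 0 keeps the original pixel, i.e. the composition
    (1 - E_k) * I_k + E_k * I_hat_k, with tau = 2 as in this embodiment.
    """
    e_k = t_map > tau                  # binary motion mask E_k(x, y)
    return np.where(e_k, i_hat_k, i_k)
```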
In one embodiment, the weight map is obtained by the following formula:

$$\hat{W}_k(x,y) = A_k(x,y) \times B_k(x,y)$$

where A_k(x, y) is the signal strength, B_k(x, y) is the exposure intensity, and Ŵ_k(x, y) is the weight map.

In an embodiment, obtaining the weight map further includes normalizing the weight map Ŵ_k(x, y), so that the weights of all images in the sequence sum to one at each pixel, to obtain W_k(x, y).
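This excerpt does not define the signal strength A_k and the exposure intensity B_k themselves. The sketch below therefore assumes, in the spirit of the Mertens et al. work cited in the background, an absolute-Laplacian response for A_k and a Gaussian well-exposedness measure for B_k; both choices and the value sigma = 0.2 are assumptions of this sketch.

```python
import numpy as np
import cv2

def weight_map(img: np.ndarray, sigma: float = 0.2) -> np.ndarray:
    """W_hat_k(x, y) = A_k(x, y) * B_k(x, y) for one exposure."""
    gray = img.astype(np.float64) / 255.0
    a_k = np.abs(cv2.Laplacian(gray, cv2.CV_64F))           # signal strength (assumed measure)
    b_k = np.exp(-((gray - 0.5) ** 2) / (2 * sigma ** 2))   # exposure intensity (assumed measure)
    return a_k * b_k + 1e-12                                # keep weights strictly positive

def normalize_weights(w_hat: list) -> list:
    """W_k = W_hat_k / sum_k W_hat_k, so the weights sum to 1 at every pixel."""
    total = np.sum(w_hat, axis=0)
    return [w / total for w in w_hat]
```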
In an embodiment, Laplacian decomposition is performed on the motion-detected image sequence to obtain an image pyramid L^l{I_k}, and Gaussian decomposition is performed on the weight map to obtain a weight pyramid G^l{W_k}; a fused image pyramid is obtained based on the image pyramid and the weight pyramid:

$$F^{l}(x,y) = \sum_{k=1}^{N} G^{l}\{W_k\}(x,y)\;L^{l}\{I_k\}(x,y)$$

where F^l denotes the l-th level of the fused image pyramid, G^l{W_k} denotes the l-th level of the weight pyramid of the k-th image, and L^l{I_k} denotes the l-th level of the Laplacian pyramid of the k-th image.
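A sketch of this fusion step, using OpenCV's pyrDown/pyrUp for the Gaussian and Laplacian decompositions; the number of pyramid levels is not specified in this excerpt and is an assumption here.

```python
import cv2

def gaussian_pyramid(img, levels):
    """G^0 ... G^(levels-1): repeated blur-and-downsample."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    """L^l = G^l - expand(G^(l+1)); the top level keeps the coarsest Gaussian."""
    g = gaussian_pyramid(img, levels)
    pyr = [g[l] - cv2.pyrUp(g[l + 1], dstsize=(g[l].shape[1], g[l].shape[0]))
           for l in range(levels - 1)]
    pyr.append(g[-1])
    return pyr

def fuse_pyramids(stills, weights, levels=6):
    """F^l = sum_k G^l{W_k} * L^l{I_k}, computed level by level."""
    fused = None
    for img, w in zip(stills, weights):
        lp = laplacian_pyramid(img, levels)
        gp = gaussian_pyramid(w, levels)
        weighted = [gw * lv for gw, lv in zip(gp, lp)]
        fused = weighted if fused is None else [f + wl for f, wl in zip(fused, weighted)]
    return fused
```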
To better preserve the detail information of the generated HDR image, the fused image pyramid F^l is adaptively enhanced so that the details of the image become clearer. The image is divided into two parts: the low-frequency contour region, which reflects the large-scale intensity changes of the image and is obtained by low-pass filtering (smoothing) the image; and the high-frequency region, which reflects the small-scale details of the image and is obtained by subtracting the low-frequency contour region from the original image. After the high-frequency detail region is amplified, the detail information of the image becomes clearer and the image details are enhanced. The low-frequency contour region is computed as the local mean of the image:

$$\bar{F}^{l}(x,y) = \frac{1}{(2n+1)^{2}} \sum_{i=-n}^{n}\sum_{j=-n}^{n} F^{l}(x+i,\,y+j)$$

where F̄^l is the low-frequency contour region of the l-th level of the fused pyramid, F^l is the fused pyramid image of the l-th level, and (2n + 1)² is the window size over which the pixel mean is computed; in this embodiment n = 11 (a 23 × 23 window).
In one embodiment, the low-frequency contour region of the fused image pyramid is acquired; it is subtracted from the fused image to obtain the high-frequency detail region; and adaptive enhancement is performed on the high-frequency detail region:

$$R^{l}(x,y) = \bar{F}^{l}(x,y) + G^{l}(x,y)\,\bigl(F^{l}(x,y) - \bar{F}^{l}(x,y)\bigr), \qquad G^{l}(x,y) = \frac{D}{\sigma^{l}(x,y)}$$

where R^l is the image after detail enhancement of the l-th level of the fused image pyramid, F̄^l is the low-frequency contour region of the l-th level image, G^l is the image adaptive gain factor, D is a constant, and σ^l is the local mean square error of the image.
The high-frequency detail region of the image is obtained by taking the difference between the original image and its low-pass-filtered (smoothed) version, and adaptive enhancement of this region makes the image details clearer:

$$R^{l}(x,y) = \bar{F}^{l}(x,y) + G^{l}(x,y)\,\bigl(F^{l}(x,y) - \bar{F}^{l}(x,y)\bigr)$$

where R^l is the image after detail enhancement of the l-th level of the fused pyramid and G^l is the image adaptive gain factor, G^l(x, y) = D / σ^l(x, y). D is a constant, taken in this embodiment as the mean value of the image pixels. At edges and in other regions where detail changes intensely, the local mean square error σ^l is large, so G^l is small, which prevents ringing effects. In smooth regions the local mean square error is small, so G^l is large and easily amplifies noise; the maximum value of G^l is therefore limited to obtain a better result. In this embodiment the maximum value of G^l is set to 2.5: with a larger cap the image is excessively enhanced, and with a smaller cap the enhancement effect is poor. Finally, the enhanced pyramid R^l is reconstructed through Laplacian reconstruction to obtain the detail-enhanced fusion image R_final.
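A sketch of the enhancement and the final reconstruction, under this embodiment's settings (n = 11, i.e. a 23 × 23 window; D equal to the mean pixel value; gain capped at 2.5). The small epsilon guarding against division by zero is an added safeguard, not part of the patent.

```python
import numpy as np
import cv2
from scipy.ndimage import uniform_filter

def enhance_level(f_l: np.ndarray, n: int = 11, g_max: float = 2.5) -> np.ndarray:
    """R^l = F_bar^l + G^l * (F^l - F_bar^l), with G^l = D / sigma^l capped at g_max."""
    win = 2 * n + 1                                 # 23 x 23 averaging window
    low = uniform_filter(f_l, size=win)             # low-frequency contour F_bar^l
    high = f_l - low                                # high-frequency detail region

    # Local mean square error sigma^l over the same window: E[x^2] - E[x]^2.
    var = np.maximum(uniform_filter(f_l ** 2, size=win) - low ** 2, 0.0)
    sigma = np.sqrt(var)

    # D: mean pixel value (this embodiment). For band-pass levels whose mean is
    # near zero, D could instead be taken from the original image (assumption).
    d = np.mean(f_l)
    gain = np.minimum(d / (sigma + 1e-6), g_max)    # cap prevents noise amplification
    return low + gain * high

def reconstruct(pyramid):
    """Collapse the enhanced Laplacian pyramid into the fused image R_final."""
    img = pyramid[-1]
    for lv in reversed(pyramid[:-1]):
        img = cv2.pyrUp(img, dstsize=(lv.shape[1], lv.shape[0])) + lv
    return img
```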
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may comprise any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical ideas disclosed by the present invention be covered by the claims of the present invention.

Claims (10)

1. A multi-exposure image fusion method with adaptive detail enhancement and ghost elimination is characterized by comprising the following steps:
acquiring an LDR image sequence;
constructing a weight map based on the signal intensities and exposure intensities of the LDR image sequence;
carrying out motion detection on the LDR image sequence to obtain a static image sequence;
obtaining a fused image pyramid based on the weight map and the static image sequence;
performing self-adaptive enhancement on the fused image pyramid;
performing Laplacian reconstruction on the adaptively enhanced image to obtain a fusion image R_final.
2. The multi-exposure image fusion method with adaptive detail enhancement and ghost elimination according to claim 1, wherein motion detection is performed on the LDR image sequence based on the two-dimensional information entropy of the images.
3. The multi-exposure image fusion method with adaptive detail enhancement and ghost elimination according to claim 2, wherein obtaining the difference image comprises:
acquiring a reference image I_ref;
carrying out exposure adjustment on each image in the LDR image sequence based on the reference image I_ref to obtain an exposure-adjusted image sequence Î_k;
obtaining a difference image D_k from the reference image I_ref and the exposure-adjusted image sequence Î_k.
4. The multi-exposure image fusion method with adaptive detail enhancement and ghost elimination according to claim 3, wherein the exposure-adjusted reference image sequence Î_k is obtained by histogram matching.
5. The multi-exposure image fusion method with adaptive detail enhancement and ghost elimination according to claim 3, wherein the two-dimensional information entropy of the difference image D_k is calculated by the following formula:

$$T = -\sum_{i=0}^{255}\sum_{j=0}^{255} p_{ij}\,\log_2 p_{ij}, \qquad p_{ij} = \frac{N_{ij}}{W \times H}$$

where T is the two-dimensional information entropy of the image, i is the gray value of a pixel, j is the mean gray value of the 15 × 15 neighborhood of that pixel, N_ij is the frequency of occurrence of the feature pair f(i, j) in the image, and W × H is the image size.
6. The multi-exposure image fusion method with adaptive detail enhancement and ghost elimination according to claim 5, wherein the static image sequence is obtained by the following formula:

$$I_k^{s}(x,y) = \bigl(1 - E_k(x,y)\bigr)\,I_k(x,y) + E_k(x,y)\,\hat{I}_k(x,y)$$

where I_k(x, y) is the motion-containing image sequence, I_k^s(x, y) is the static image sequence, E_k(x, y) is the weight estimate after motion detection, and Î_k(x, y) is the exposure-adjusted reference image sequence.
7. The multi-exposure image fusion method with adaptive detail enhancement and ghost elimination according to claim 6, wherein the weight estimate after motion detection is obtained by the following formula:

$$E_k(x,y) = \begin{cases} 0, & T_k(x,y) \le \tau \\ 1, & T_k(x,y) > \tau \end{cases}$$

where τ is a set threshold and T_k(x, y) is the local two-dimensional information entropy of the difference image D_k; E_k(x, y) = 0 indicates that the pixel is a static pixel, and E_k(x, y) = 1 indicates that the pixel is a dynamic pixel.
8. The adaptive detail-enhancement and ghost-elimination multi-exposure image fusion method according to claim 1, wherein said weight map is obtained by the following formula;
Figure FDA0002412961410000025
wherein A isk(x, y) is the signal strength, Bk(x, y) is the exposure intensity,
Figure FDA0002412961410000026
is a weight graph.
9. The multi-exposure image fusion method with adaptive detail enhancement and ghost elimination according to claim 1, wherein Laplacian decomposition is performed on the motion-detected image sequence to obtain an image pyramid L^l{I_k}, and Gaussian decomposition is performed on the weight map to obtain a weight pyramid G^l{W_k}; a fused image pyramid is obtained based on the image pyramid and the weight pyramid:

$$F^{l}(x,y) = \sum_{k=1}^{N} G^{l}\{W_k\}(x,y)\;L^{l}\{I_k\}(x,y)$$

where F^l denotes the l-th level of the fused image pyramid, G^l{W_k} denotes the l-th level of the weight pyramid of the k-th image, and L^l{I_k} denotes the l-th level of the Laplacian pyramid of the k-th image.
10. The multi-exposure image fusion method with adaptive detail enhancement and ghost elimination according to claim 1, characterized by: acquiring the low-frequency contour region of the fused image pyramid; subtracting it from the fused image to obtain the high-frequency detail region; and performing adaptive enhancement on the high-frequency detail region:

$$R^{l}(x,y) = \bar{F}^{l}(x,y) + G^{l}(x,y)\,\bigl(F^{l}(x,y) - \bar{F}^{l}(x,y)\bigr), \qquad G^{l}(x,y) = \frac{D}{\sigma^{l}(x,y)}$$

where R^l is the image after detail enhancement of the l-th level of the fused image pyramid, F̄^l is the low-frequency contour region of the l-th level image, G^l is the image adaptive gain factor, D is a constant, and σ^l is the local mean square error of the image.
CN202010182184.3A 2020-03-16 2020-03-16 Multi-exposure image fusion method for self-adaptive detail enhancement and ghost elimination Active CN111429368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010182184.3A CN111429368B (en) 2020-03-16 2020-03-16 Multi-exposure image fusion method for self-adaptive detail enhancement and ghost elimination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010182184.3A CN111429368B (en) 2020-03-16 2020-03-16 Multi-exposure image fusion method for self-adaptive detail enhancement and ghost elimination

Publications (2)

Publication Number Publication Date
CN111429368A (en) 2020-07-17
CN111429368B CN111429368B (en) 2023-06-27

Family

ID=71546397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010182184.3A Active CN111429368B (en) 2020-03-16 2020-03-16 Multi-exposure image fusion method for self-adaptive detail enhancement and ghost elimination

Country Status (1)

Country Link
CN (1) CN111429368B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102282838A (en) * 2009-01-19 2011-12-14 夏普株式会社 Methods and Systems for Enhanced Dynamic Range Images and Video from Multiple Exposures
US20150170389A1 (en) * 2013-12-13 2015-06-18 Konica Minolta Laboratory U.S.A., Inc. Automatic selection of optimum algorithms for high dynamic range image processing based on scene classification
WO2018113975A1 (en) * 2016-12-22 2018-06-28 Huawei Technologies Co., Ltd. Generation of ghost-free high dynamic range images
CN107220931A (en) * 2017-08-02 2017-09-29 安康学院 A kind of high dynamic range images method for reconstructing based on grey-scale map
CN109754377A (en) * 2018-12-29 2019-05-14 重庆邮电大学 A kind of more exposure image fusion methods
CN110599433A (en) * 2019-07-30 2019-12-20 西安电子科技大学 Double-exposure image fusion method based on dynamic scene

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Lei Lv: "Multi-exposure Image Fusion with Layering Adaptive Detail Enhancement and Ghosting Removal", 2020 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA)
T. Mertens et al.: "Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography", Computer Graphics
Wei Zhang et al.: "Motion-free exposure fusion based on inter-consistency and intra-consistency", Information Sciences
吕磊 (Lv Lei): "Research on Multi-exposure Image Fusion Algorithms with Multi-scale Detail Enhancement", China Master's Theses Full-text Database, Information Science and Technology
李彦 (Li Yan): "Research on Medical Image Enhancement Algorithms Based on Multi-scale Transform", China Master's Theses Full-text Database, Information Science and Technology
钟家强 (Zhong Jiaqiang) et al.: "Change Detection in Difference Images Based on Two-dimensional Fuzzy Information Entropy", Computer Engineering and Applications

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255557A (en) * 2021-06-08 2021-08-13 汪知礼 Video crowd emotion analysis method and system based on deep learning
CN113255557B (en) * 2021-06-08 2023-08-15 苏州优柿心理咨询技术有限公司 Deep learning-based video crowd emotion analysis method and system
CN113837055A (en) * 2021-09-18 2021-12-24 南京润楠医疗电子研究院有限公司 Fall detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111429368B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
Yang et al. Sparse gradient regularized deep retinex network for robust low-light image enhancement
Liu et al. Wavelet-based dual-branch network for image demoiréing
CN109754377B (en) Multi-exposure image fusion method
Li et al. Fast multi-exposure image fusion with median filter and recursive filter
Malm et al. Adaptive enhancement and noise reduction in very low light-level video
CN104021532B (en) A kind of image detail enhancement method of infrared image
CN110136055B (en) Super resolution method and device for image, storage medium and electronic device
CN109447930B (en) Wavelet domain light field full-focusing image generation algorithm
CN111260580B (en) Image denoising method, computer device and computer readable storage medium
WO2016139260A9 (en) Method and system for real-time noise removal and image enhancement of high-dynamic range images
JPH06245113A (en) Equipment for improving picture still more by removing noise and other artifact
CN102968814B (en) A kind of method and apparatus of image rendering
CN107492077B (en) Image deblurring method based on self-adaptive multidirectional total variation
CN111183630B (en) Photo processing method and processing device of intelligent terminal
CN111242860B (en) Super night scene image generation method and device, electronic equipment and storage medium
Moriwaki et al. Hybrid loss for learning single-image-based HDR reconstruction
WO2023273868A1 (en) Image denoising method and apparatus, terminal, and storage medium
CN111353955A (en) Image processing method, device, equipment and storage medium
CN111429368A (en) Multi-exposure image fusion method with self-adaptive detail enhancement and ghost elimination
Lv et al. Low-light image enhancement via deep Retinex decomposition and bilateral learning
Zhang et al. Deep motion blur removal using noisy/blurry image pairs
CN114140481A (en) Edge detection method and device based on infrared image
Lang et al. A real-time high dynamic range intensified complementary metal oxide semiconductor camera based on FPGA
CN110136085B (en) Image noise reduction method and device
Vanmali et al. Multi-exposure image fusion for dynamic scenes without ghost effect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240815

Address after: 230000 B-1015, wo Yuan Garden, 81 Ganquan Road, Shushan District, Hefei, Anhui.

Patentee after: HEFEI MINGLONG ELECTRONIC TECHNOLOGY Co.,Ltd.

Country or region after: China

Address before: Chongqing University of Posts and telecommunications, No.2 Chongwen Road, Nan'an District, Chongqing 400065

Patentee before: CHONGQING University OF POSTS AND TELECOMMUNICATIONS

Country or region before: China