CN113160178A - High dynamic range ghost image removing imaging system and method based on attention module - Google Patents


Info

Publication number
CN113160178A
CN113160178A
Authority
CN
China
Prior art keywords
dynamic range
image
attention
high dynamic
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110442869.1A
Other languages
Chinese (zh)
Inventor
颜成钢
潘潇恺
高含笑
孙垚棋
张继勇
李宗鹏
张勇东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202110442869.1A priority Critical patent/CN113160178A/en
Publication of CN113160178A publication Critical patent/CN113160178A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a high dynamic range ghost-removing imaging system and method based on an attention module, in which an attention network is applied to a high dynamic range de-ghosting algorithm to guide the fusion of different images and suppress saturated and misaligned image regions. The invention innovatively proposes a learnable attention network to guide the merging process: the attention network generates attention maps that evaluate the importance of different image regions for obtaining the desired high dynamic range image and highlight features complementary to the reference image so as to exclude moving and severely saturated regions, and the attention-guided low dynamic range image features are input into the fusion network. The fusion network is constructed from dilated residual blocks, which helps fully exploit information from different convolutional layers so that more detail is retained from the input low dynamic range images; the dilated convolutions enlarge the receptive field, aiding recovery of oversaturated regions and of detail lost to motion.

Description

High dynamic range ghost image removing imaging system and method based on attention module
Technical Field
The invention belongs to the technical field of image processing, relates to a high dynamic range image synthesis method based on deep learning, and particularly relates to a high dynamic range ghost image removing imaging method based on an attention module.
Background
High dynamic range (HDR) imaging, also called wide dynamic range imaging, is a technique that lets a camera capture the features of a scene under very strong contrast. When an image simultaneously contains high-brightness regions lit by a strong light source (sunlight, lamps, reflected light and the like) and relatively dark regions such as shadows or backlit areas, overexposed regions turn white and underexposed dark regions turn black in the camera's output, seriously degrading image quality. A camera's rendering of the brightest and darkest areas of the same scene is limited; this limit is commonly called its "dynamic range". An HDR picture is produced by taking multiple different exposures and then merging them into one picture with software. The advantage is a final picture with detail in both the shadows and the highlights; in ordinary photography one can usually preserve only one of the two.
"Dynamic range" in the broad sense refers to the span over which a varying quantity may change, i.e. the interval between the lowest and highest extremes of its value, generally described as the difference between the highest and lowest points. It is a very widely used concept. When referring to the photographic performance of a camera product, "dynamic range" generally refers to the camera's adaptability to the illumination and reflectance of the photographed scene, specifically the range of variation of brightness (contrast) and color temperature.
At SIGGRAPH 1997, Paul Debevec presented a paper entitled "Recovering High Dynamic Range Radiance Maps from Photographs". The paper describes photographing the same scene under different exposure settings and then combining these differently exposed pictures into a single high dynamic range image. Such an image can capture the full dynamic range of a scene, from dark shadows to bright light sources or strong reflections.
A year later, at SIGGRAPH 98, Debevec presented "Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-Based Graphics with Global Illumination and High Dynamic Range Photography". In this paper he photographed a polished chrome ball using the earlier technique to generate what he called a "light probe", essentially a high dynamic range environment map, and then used the light probe to render a composite scene. Unlike an ordinary environment map, which simply provides reflection or refraction information, the light probe also supplies the illumination of the scene; in fact it is the only light source. The method achieved unprecedented photorealism and provided real-world lighting data for global illumination models.
In recent years some digital camera manufacturers have developed HDR technology, building hardware and software for capturing a high dynamic range into the camera: the image sensor uses half of its pixels to record normal brightness and the other half to record the dark parts of the picture, so that the picture retains more detail. Digital SLR brands have each introduced their own technology for extending image dynamic range, such as Canon's highlight tone priority mode, Nikon's Active D-Lighting, Sony's D-Range Optimizer and Pentax's dynamic range expansion function, and many compact digital cameras from various manufacturers now also incorporate this feature.
Disclosure of Invention
Aiming at the problems that existing high dynamic range de-ghosting techniques share a similar theoretical basis and that their de-ghosting performance is not ideal, the invention provides a high dynamic range ghost-removing imaging system and method based on an attention module.
The implementation steps are as follows. The invention provides a high dynamic range ghost-removing imaging system based on an attention module, used for merging three low dynamic range images into one high dynamic range image; the system comprises an image feature extraction module, an attention module and a fusion module:
an image feature extraction module: used to sort the three low dynamic range images from high to low by exposure time, obtain the high dynamic range image corresponding to each low dynamic range image through gamma mapping, and concatenate each low dynamic range image with its corresponding high dynamic range image along the channel dimension to obtain a 6-channel tensor.
An attention module: the 6-channel tensors obtained by the image feature extraction module are input into an attention network, which extracts features of the non-reference low dynamic range images to form attention maps. These maps evaluate the importance of different image regions for obtaining the desired high dynamic range image and highlight features complementary to the reference image so as to exclude moving and severely saturated regions; the attention-guided low dynamic range image features are then input into the fusion network.
A fusion module: through the fusion network, a global residual learning strategy is adopted; after dense local features are fully obtained, global hierarchical features are learned jointly and adaptively with a global feature fusion method, shallow and deep features are combined so that residual features tend to be learned, and the final high dynamic range image is obtained through tone mapping. The fusion network comprises two convolutional layers with 3 x 3 kernels, three dilated dense blocks, and a ReLU activation function.
In a second aspect of the present invention, there is provided a method for high dynamic range deghosting imaging based on an attention module, comprising the steps of:
step (1): preprocessing a data set, wherein the data set comprises a plurality of groups of three low dynamic range images with different exposure times, performing the same rotation and random clipping processing on the three low dynamic range images in the same group in the data set, writing the three low dynamic range images and the corresponding exposure times into a list form, and the list comprises the three low dynamic range images and the corresponding exposure times.
Step (2): extracting image features;
2-1: the three low dynamic range images are ranked from high to low as L according to exposure time1、L2、L3
2-2: to L1、L2、L3Performing gamma mapping to obtain high dynamic range image, i.e. high dynamic range image H1、H2、H3The gamma mapping satisfies the following relationship:
Hi=(Li**GAMMA)÷(Ti)
wherein i is 1,2, 3; GAMMA is 2.24, 2.24 is an approximate value of a camera response function, and an HDR real image can be approximately obtained; t isiIs the exposure time of the image.
2-3: the low dynamic range image and the corresponding high dynamic range image are connected together (the number of channels is added), and the tensor X of 6 channels is obtainedi=[Li,Hi],i=1,2,3。
And (3): forming an attention map by an attention network of an attention module;
3-1: mixing XiInputting convolution layer with convolution kernel size of 3 x 3 to obtain feature mapping Z with channel number of 64iI is 1,2,3, wherein Z1,Z3For non-reference feature mapping, Z2Is a reference feature map.
3-2: Concatenate Z1 and Z3 each with the reference feature map Z2 and feed them into the convolutional attention module; features are extracted by two convolutional layers with 3 x 3 kernels, and the output is passed through a sigmoid activation function to become weights in [0, 1].
3-3: the obtained weights are respectively compared with Z1,Z3Click to obtain Z'1,Z′3
3-4: prepared from Z'1,Z′3Feature mapping and reference feature mapping Z2Combining to obtain an attention map Zs
And (4): obtaining a final high dynamic range image through a fusion network of a fusion module;
4-1: will ZsInputting the data into a convolution layer with convolution kernel of 3 x 3 to obtain a feature mapping F of 64 channels0
4-2: mapping the features to F0Inputting the input into three hole dense blocks in sequence to respectively obtain feature mapping F1,F2,F3. The hole dense block consists of 1 × 1 convolution layers and a Relu activation function, and dense local features can be fully obtained.
4-3: f is to be1,F2,F3Taken together to give F4Will F4Inputting the data into a convolution layer with convolution kernel of 3 x 3, and obtaining a feature mapping F through a Relu activation function5
4-4: mapping the features to F5Obtaining an output final high dynamic range image through tone mapping, wherein the tone mapping satisfies the relation:
Q(H)=[log(1+μH)]÷[log(1+μ)],
wherein q (H) represents the final high dynamic range image, and H ═ F5
Further, the effect is best when μ = 5000.
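Steps 4-3 and 4-4 can be sketched as follows (NumPy; the final 3 x 3 convolution is replaced by a simple channel mean, so the feature shapes and that stand-in operation are assumptions of this sketch, while the μ-law tone-mapping formula is the one given in step 4-4):

```python
import numpy as np

MU = 5000.0  # value the patent reports as giving the best effect

def tonemap(h, mu=MU):
    """Step 4-4: Q(H) = log(1 + mu*H) / log(1 + mu).
    Compresses linear HDR values in [0, 1] into a displayable [0, 1] range."""
    return np.log1p(mu * h) / np.log1p(mu)

# Step 4-3: concatenate the three dense-block outputs F1, F2, F3 ...
rng = np.random.default_rng(1)
f1, f2, f3 = (rng.random((8, 8, 64)) for _ in range(3))
f4 = np.concatenate([f1, f2, f3], axis=-1)             # (8, 8, 192)
# ... then a conv + ReLU would produce F5; a channel mean stands in here
f5 = np.maximum(f4.mean(axis=-1, keepdims=True), 0.0)  # (8, 8, 1)

hdr_out = tonemap(f5)
```

The μ-law curve keeps the endpoints fixed (Q(0) = 0, Q(1) = 1) while strongly expanding dark values, which is why HDR reconstruction losses are usually computed in this tone-mapped domain.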
The invention has the following beneficial effects:
1. The attention network is innovatively applied to a high dynamic range de-ghosting algorithm, guiding the fusion of different images and suppressing saturated and misaligned regions of the images.
2. Previous image de-ghosting techniques fail to address artifacts caused by motion and misalignment; the invention innovatively proposes a learnable attention network to guide the merging process. The attention network generates attention maps that evaluate the importance of different image regions for obtaining the desired high dynamic range image, highlights features complementary to the reference image so as to exclude moving and severely saturated regions, and feeds the attention-guided low dynamic range image features into the fusion network.
3. The fusion network is built from dilated residual blocks, which helps fully exploit information from different convolutional layers so that more detail is retained from the input low dynamic range images; the dilated convolutions enlarge the receptive field, aiding recovery of oversaturated regions and of detail lost to motion.
Drawings
FIG. 1 is a flow chart of ghost-removing imaging according to an embodiment of the present invention;
FIG. 2 illustrates a deghosting imaging algorithm according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly apparent, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention is first defined and explained below:
L1、L2、L3: three low dynamic range images without processing.
H1、H2、H3: three images are gamma mapped to a high dynamic range domain.
Z1,Z3Feature mapping of non-reference pictures.
Z2Feature mapping of a reference image.
Z′1,Z′3: the non-reference image and the reference image are subjected to feature mapping generated by the attention module, so that undersaturation and oversaturation areas of the non-reference image can be clearly seen, and pollution to details can be eliminated in subsequent processing.
Void residual block: the hole residual block is composed of 1 × 1 convolution layers and Relu activation functions, and dense local features can be fully obtained. The hierarchical information of all convolutional layers is fully utilized, the convolutional features of the indictors are extracted by densely connecting the convolutional layers, more effective features are effectively learned from previous and current local features, and the network training is stable and wide.
Sigmoid activation function: the logic activation function, which compresses one value to a range of 0 to 1, can be applied to the output layer when we finally predict the probability.
Relu activation function: a linear rectification function, when the input is less than 0, the output is 0; when the input is larger than 0, the output is the value of the input, and the active function can enable the network to be converged more quickly, reduce the interdependence relation of parameters and alleviate the occurrence of the overfitting problem.
The data set is the data set from the Attention-guided network for Ghost-free High Dynamic Range Imaging.
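The two activation functions defined above can be written out numerically (a small self-contained NumPy check; the sample input values are arbitrary):

```python
import numpy as np

def sigmoid(t):
    """Logistic activation: compresses any real value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-t))

def relu(t):
    """Linear rectifier: 0 for negative inputs, identity otherwise."""
    return np.maximum(t, 0.0)

t = np.array([-2.0, 0.0, 2.0])
s = sigmoid(t)  # symmetric about 0.5: s[0] + s[2] == 1
r = relu(t)     # negative input clipped to 0, positive passed through
```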
FIG. 1 is a flow chart of ghost-removing imaging according to an embodiment of the present invention;
the invention provides a high dynamic range ghost-removing imaging system based on an attention module, used for merging three low dynamic range images into one high dynamic range image; the system comprises an image feature extraction module, an attention module and a fusion module:
an image feature extraction module: used to sort the three low dynamic range images from high to low by exposure time, obtain the high dynamic range image corresponding to each low dynamic range image through gamma mapping, and concatenate each low dynamic range image with its corresponding high dynamic range image along the channel dimension to obtain a 6-channel tensor.
An attention module: the 6-channel tensors obtained by the image feature extraction module are input into an attention network, which extracts features of the non-reference low dynamic range images to form attention maps. These maps evaluate the importance of different image regions for obtaining the desired high dynamic range image and highlight features complementary to the reference image so as to exclude moving and severely saturated regions; the attention-guided low dynamic range image features are then input into the fusion network.
A fusion module: through the fusion network, a global residual learning strategy is adopted; after dense local features are fully obtained, global hierarchical features are learned jointly and adaptively with a global feature fusion method, shallow and deep features are combined so that residual features tend to be learned, and the final high dynamic range image is obtained through tone mapping. The fusion network comprises two convolutional layers with 3 x 3 kernels, three dilated dense blocks, and a ReLU activation function.
In a second aspect of the present invention, there is provided a method for high dynamic range deghosting imaging based on an attention module, comprising the steps of:
step (1): preprocessing a data set, wherein the data set comprises a plurality of groups of three low dynamic range images with different exposure times, performing the same rotation and random clipping processing on the three low dynamic range images in the same group in the data set, writing the three low dynamic range images and the corresponding exposure times into a list form, and the list comprises the three low dynamic range images and the corresponding exposure times. The purpose of the rotation and random cropping is to increase the number of images and the diversity of data, so as to facilitate the learning of the network.
Step (2): extracting image features;
2-1: the three low dynamic range images are ranked from high to low as L according to exposure time1、L2、L3
2-2: to L1、L2、L3Performing gamma mapping to obtain high dynamic range image, i.e. high dynamic range image H1、H2、H3The gamma mapping satisfies the following relationship:
Hi=(Li**GAMMA)÷(Ti)
wherein i is 1,2, 3; GAMMA is 2.24, 2.24 is an approximate value of a camera response function, and an HDR real image can be approximately obtained; t isiIs the exposure time of the image.
2-3: mapping low dynamic range images to corresponding high dynamic range mapsThe images are concatenated together (number of channels added) to obtain a 6-channel tensor Xi=[Li,Hi],i=1,2,3。
And (3): forming an attention map by an attention network of an attention module;
3-1: mixing XiInputting convolution layer with convolution kernel size of 3 x 3 to obtain feature mapping Z with channel number of 64iI is 1,2,3, wherein Z1,Z3For non-reference feature mapping, Z2Is a reference feature map.
3-2: Concatenate Z1 and Z3 each with the reference feature map Z2 and feed them into the convolutional attention module; features are extracted by two convolutional layers with 3 x 3 kernels, and the output is passed through a sigmoid activation function to become weights in [0, 1].
3-3: the obtained weights are respectively compared with Z1,Z3Dot product to obtain Z1′,Z′3. The purpose of the dot product is to highlight features complementary to the reference image to exclude motion and heavily saturated regions.
3-4: prepared from Z'1,Z′3Feature mapping and reference feature mapping Z2Obtaining an attention map Z in combination (superposition)sInstead of directly coupling Z1,Z3And Z2The combination is to suppress non-saturated regions in the non-reference image, thereby alleviating ghosting from the source image and avoiding unwanted features from entering the merging process.
And (4): obtaining a final high dynamic range image through a fusion network of a fusion module;
4-1: will ZsInputting the data into a convolution layer with convolution kernel of 3 x 3 to obtain a feature mapping F of 64 channels0
4-2: mapping the features to F0Inputting the input into three hole dense blocks in sequence to respectively obtain feature mapping F1,F2,F3. The hole dense block consists of 1 × 1 convolution layers and a Relu activation function, and dense local features can be fully obtained. The purpose of the hole dense block is to make full use of information from different convolutional layers, thus from the inputMore details are retained in the low dynamic range image and the field of view is enlarged, helping to recover details of oversaturated regions and moving object contamination.
4-3: f is to be1,F2,F3Combined (superimposed) to give F4Will F4Inputting the data into a convolution layer with convolution kernel of 3 x 3, and obtaining a feature mapping F through a Relu activation function5
4-4: mapping the features to F5Obtaining an output final high dynamic range image through tone mapping, wherein the tone mapping satisfies the relation:
Q(H)=[log(1+μH)]÷[log(1+μ)],
wherein q (H) represents the final high dynamic range image, and H ═ F5The effect is best when mu is 5000.
FIG. 2 illustrates a deghosting imaging algorithm according to an embodiment of the present invention.
The following table compares the performance of the model of the present invention with existing models on the same data set.
(The table is reproduced as image BDA0003035818170000091 in the original publication.)

Claims (7)

1. A high dynamic range ghost-removing imaging system based on an attention module is characterized by comprising an image feature extraction module, an attention module and a fusion module:
an image feature extraction module: used for sorting the three low dynamic range images from high to low by exposure time, obtaining the high dynamic range image corresponding to each low dynamic range image through gamma mapping, and concatenating each low dynamic range image with its corresponding high dynamic range image to obtain a 6-channel tensor;
an attention module: inputting the 6-channel tensor obtained by the image feature extraction module into an attention network, extracting features of a non-reference low dynamic range image through the attention network to form an attention map, evaluating the importance of different image areas to obtaining a required high dynamic range image, highlighting the features complementary with the reference image to exclude a movement and severely saturated area, and inputting the low dynamic range image features with attention guidance into a fusion network;
a fusion module: through a fusion network, a global residual error learning strategy is adopted, after dense local features are fully obtained, the global hierarchical features are subjected to combined adaptive learning by adopting a global feature fusion method, shallow features and deep features are combined, residual features tend to be learned, and finally, a final high dynamic range image is obtained through tone mapping; the fusion network comprises two convolution layers with convolution kernels of 3 x 3, 3 hole dense blocks and a Relu activation function.
2. A method for high dynamic range deghosting imaging based on an attention module, comprising the steps of:
step (1): preprocessing the data set;
step (2): extracting image features;
and (3): forming an attention map by an attention network of an attention module;
and (4): and obtaining a final high dynamic range image through a fusion network of the fusion module.
3. The method for attention-module-based high dynamic range deghosting imaging according to claim 2, wherein step (1) is: preprocessing a data set, the data set comprising multiple groups of three low dynamic range images with different exposure times; applying the same rotation and random cropping to the three low dynamic range images of each group; and writing the three low dynamic range images and their corresponding exposure times into a list.
4. The method for attention-module-based high dynamic range deghosting imaging as described in claim 3, wherein the step (2) is embodied as follows:
2-1: combining three low dynamic range imagesOrdered as L from high to low according to exposure time1、L2、L3
2-2: to L1、L2、L3Performing gamma mapping to obtain high dynamic range image, i.e. high dynamic range image H1、H2、H3The gamma mapping satisfies the following relationship:
Hi=(Li**GAMMA)÷(Ti)
wherein i is 1,2, 3; GAMMA is 2.24, 2.24 is an approximate value of a camera response function, and an HDR real image can be approximately obtained; t isiIs the exposure time of the image;
2-3: the low dynamic range image and the corresponding high dynamic range image are connected together (the number of channels is added), and the tensor X of 6 channels is obtainedi=[Li,Hi],i=1,2,3。
5. The method for attention-module-based high dynamic range deghosting imaging according to claim 4, wherein step (3) is specifically as follows:
3-1: mixing XiInputting convolution layer with convolution kernel size of 3 x 3 to obtain feature mapping Z with channel number of 64iI is 1,2,3, wherein Z1,Z3For non-reference feature mapping, Z2Mapping for a reference feature;
3-2: concatenating each of Z1 and Z3 with the reference feature map Z2 and feeding the result into a convolutional attention module, which extracts features through two convolutional layers with 3 x 3 kernels and converts the output into weights in [0, 1] through a sigmoid activation function;
3-3: the obtained weights are respectively compared with Z1,Z3Click to obtain Z'1,Z′3
3-4: prepared from Z'1,Z′3Feature mapping and reference feature mapping Z2Combining to obtain an attention map Zs
6. The method for attention-module-based high dynamic range deghosting imaging according to claim 5, wherein step (4) is specifically as follows:
4-1: will ZsInputting the data into a convolution layer with convolution kernel of 3 x 3 to obtain a feature mapping F of 64 channels0
4-2: mapping the features to F0Inputting the input into three hole dense blocks in sequence to respectively obtain feature mapping F1,F2,F3(ii) a The hole dense block consists of a convolution layer of 1 x 1 and a Relu activation function, and dense local features can be fully obtained;
4-3: f is to be1,F2,F3Taken together to give F4Will F4Inputting the data into a convolution layer with convolution kernel of 3 x 3, and obtaining a feature mapping F through a Relu activation function5
4-4: mapping the features to F5Obtaining an output final high dynamic range image through tone mapping, wherein the tone mapping satisfies the relation:
Q(H)=[log(1+μH)]÷[log(1+μ)],
wherein q (H) represents the final high dynamic range image, and H ═ F5
7. The method for attention-module-based high dynamic range deghosting imaging according to claim 6, wherein μ = 5000 gives the best results.
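The μ-law tone mapping of step 4-4, with μ = 5000 as preferred by claim 7, can be sketched as:

```python
import numpy as np

MU = 5000.0  # compression parameter; claim 7 reports mu = 5000 works best

def mu_law_tonemap(h):
    """Differentiable mu-law tone mapping Q(H) = log(1 + mu*H) / log(1 + mu).

    Maps HDR values in [0, 1] back into [0, 1] while strongly boosting
    low-intensity regions, which is why it is commonly used as the domain
    for HDR training losses."""
    return np.log1p(MU * h) / np.log1p(MU)
```

Note that Q(0) = 0 and Q(1) = 1 exactly, so the mapping is a monotone rescaling of the displayable range; larger μ compresses highlights more aggressively.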
CN202110442869.1A 2021-04-23 2021-04-23 High dynamic range ghost image removing imaging system and method based on attention module Withdrawn CN113160178A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110442869.1A CN113160178A (en) 2021-04-23 2021-04-23 High dynamic range ghost image removing imaging system and method based on attention module


Publications (1)

Publication Number Publication Date
CN113160178A true CN113160178A (en) 2021-07-23

Family

ID=76869919


Country Status (1)

Country Link
CN (1) CN113160178A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554567A (en) * 2021-07-29 2021-10-26 杭州电子科技大学 Robust ghost image removing system and method based on wavelet transformation
CN113554567B (en) * 2021-07-29 2024-04-02 杭州电子科技大学 Robust ghost-removing system and method based on wavelet transformation
CN114820350A (en) * 2022-04-02 2022-07-29 北京广播电视台 Inverse tone mapping system, method and neural network system thereof
CN114998138A (en) * 2022-06-01 2022-09-02 北京理工大学 High dynamic range image artifact removing method based on attention mechanism
CN114998138B (en) * 2022-06-01 2024-05-28 北京理工大学 High dynamic range image artifact removal method based on attention mechanism
WO2023246392A1 (en) * 2022-06-22 2023-12-28 京东方科技集团股份有限公司 Image acquisition method, apparatus and device, and non-transient computer storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210723