CN114820373A - Single image reconstruction HDR method based on knowledge heuristic - Google Patents

Single image reconstruction HDR method based on knowledge heuristic

Info

Publication number
CN114820373A
Authority
CN
China
Prior art keywords
knowledge
module
output
heuristic
hdr
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210460159.6A
Other languages
Chinese (zh)
Other versions
CN114820373B (en)
Inventor
叶茂
王虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210460159.6A priority Critical patent/CN114820373B/en
Publication of CN114820373A publication Critical patent/CN114820373A/en
Application granted granted Critical
Publication of CN114820373B publication Critical patent/CN114820373B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/90 - Dynamic range modification of images or parts thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20172 - Image enhancement details
    • G06T2207/20208 - High dynamic range [HDR] image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a knowledge-inspired single-image HDR reconstruction method applied to the field of image processing. It addresses the problem that existing methods consider only the content of the LDR image itself and do not incorporate the HDR-to-LDR imaging process, which makes their recovery results unstable. In this method, the HDR-to-LDR imaging pipeline is inverted to obtain a mathematical formula that simulates LDR-to-HDR formation, and a basic LDR-to-HDR reconstruction module is constructed under the guidance of this formula to reconstruct the HDR image. The method effectively enhances the realism of the reconstructed image.

Description

Single image reconstruction HDR method based on knowledge inspiration
Technical Field
The invention belongs to the field of image processing, and particularly relates to an image reconstruction technology.
Background
The luminance range in natural scenes is very wide, with a large span between light and dark regions, and richer colors carry more visual detail. However, owing to hardware limitations, most consumer cameras can only capture photographs over a limited luminance range, i.e., the common Low Dynamic Range (LDR) images. Although an LDR image reflects much of the information in the real scene, a great deal of detail is still lost, especially in scenes with a large light-dark span. In recent years, with economic development, the popularization of 5G technology and the growing number of High Dynamic Range (HDR) displays on the market, the demand for video data that renders scenes more realistically keeps increasing. In medical diagnostics, HDR images can provide more detail, enabling a physician to diagnose a patient's condition more accurately; in television, film, gaming and similar fields, HDR video offers a wider dynamic range and more vivid colors, greatly improving the viewing experience; and in computer vision research, such as autonomous driving, HDR images provide more visual information and help guarantee task accuracy. Reconstructing HDR images can therefore serve as a low-level image processing step that supports subsequent high-level research.
Many networks have been proposed for reconstructing HDR, and significant progress has been made across a wide variety of scenes. However, most existing methods consider only the content of the LDR image itself when reconstructing the HDR image and do not incorporate the HDR-to-LDR imaging process.
The related prior art:
the invention relates to a method for generating a high dynamic range image from a single low dynamic range image, which is invented by Zhang hong Ying, Zhuenghong and Wu Yao at the southwest university of science and technology, and the publication number is as follows: CN 107045715A. This patent extends the dynamic range of the picture by making a series of transformations on the input LDR image, but it does not take into account the help of the LDR image reconstruction HDR image from HDR to LDR image formation knowledge, so the image generated by this method is not able to recover HDR images satisfactory in detail, especially in the overexposed regions, and the recovery result is unstable due to the large differences between the images.
Another related patent is a rapid HDR video reconstruction method from Tianjin University (inventors including Liangjie and Wanghao), publication number CN113973175A. This patent separates the input LDR video frames into foreground and background for reconstruction, which introduces new problems into HDR video reconstruction: how to separate foreground from background and how to reconstruct the separated scenes. It cannot effectively reduce the solution subspace of the HDR image generation problem, and training the foreground and background separately further increases the error of the generated HDR image.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a knowledge-inspired single-image HDR reconstruction method that incorporates the HDR-to-LDR imaging process and can effectively improve the realism of the reconstructed image.
The technical scheme adopted by the invention is as follows: a knowledge-heuristic-based single-image HDR reconstruction method, comprising the following steps:
s1, performing 3X3 convolution operation on the LDR image to obtain an LDR image feature map;
s2, carrying out downsampling on the LDR image feature map by using 3X3 convolution;
s3, using a knowledge enlightening block to restore the LDR image feature map sampled in the step S2 for the first time to obtain a pseudo HDR feature map;
s4, after down-sampling the pseudo HDR characteristic diagram, using a knowledge heuristic block to carry out second and third recovery to obtain a reconstructed HDR characteristic diagram;
s5, after the reconstructed feature map is subjected to up-sampling through 3X3 convolution and pixelShuffle operation, a knowledge heuristic block is used for fourth recovery to obtain a recovered HDR feature map;
s6, carrying out up-sampling on the restored HDR characteristic diagram;
and S7, converting the HDR characteristic map after sampling in the step S6 into a target HDR image through convolution of 3X 3.
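For illustration only, the step-wise pipeline S1-S7 can be sketched roughly as the following PyTorch module. This is a minimal sketch under assumed settings (64 feature channels, stride-2 convolutions for downsampling); the class and attribute names are placeholders, the knowledge heuristic block is passed in as a stand-in, and the skip connection structures described later in the embodiment are omitted.

```python
import torch
import torch.nn as nn

class HDRReconstructionNet(nn.Module):
    """Illustrative sketch of steps S1-S7; KnowledgeBlock is a stand-in for the
    knowledge heuristic block described later in the text."""
    def __init__(self, channels=64, KnowledgeBlock=None):
        super().__init__()
        KB = KnowledgeBlock or (lambda c: nn.Identity())                     # placeholder block
        self.head = nn.Conv2d(3, channels, 3, padding=1)                     # S1
        self.down1 = nn.Conv2d(channels, channels, 3, stride=2, padding=1)   # S2
        self.kb1 = KB(channels)                                              # S3
        self.down2 = nn.Conv2d(channels, channels, 3, stride=2, padding=1)   # S4
        self.kb2, self.kb3 = KB(channels), KB(channels)                      # S4
        self.up1 = nn.Sequential(nn.Conv2d(channels, channels * 4, 3, padding=1),
                                 nn.PixelShuffle(2))                         # S5
        self.kb4 = KB(channels)                                              # S5
        self.up2 = nn.Sequential(nn.Conv2d(channels, channels * 4, 3, padding=1),
                                 nn.PixelShuffle(2))                         # S6
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)                     # S7

    def forward(self, ldr):
        x = self.head(ldr)          # S1: LDR feature map
        x = self.down1(x)           # S2: first downsampling
        x = self.kb1(x)             # S3: first recovery -> pseudo-HDR features
        x = self.down2(x)           # S4: second downsampling
        x = self.kb3(self.kb2(x))   # S4: second and third recovery
        x = self.kb4(self.up1(x))   # S5: upsample, fourth recovery
        x = self.up2(x)             # S6: second upsampling
        return self.tail(x)         # S7: target HDR image
```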
The beneficial effects of the invention are as follows: the invention provides a knowledge-inspired single-picture HDR reconstruction method that can be applied to image and video enhancement systems. The HDR-to-LDR imaging knowledge is applied to the construction of an end-to-end model, breaking with previous approaches that reconstruct HDR images by considering only the LDR content or by mechanically simulating the camera imaging process. This simplifies model construction, improves the learning capability of the model, and further enhances the realism of the reconstructed images, so the technique represents a clear advance.
Drawings
Fig. 1 is a schematic diagram of a knowledge heuristic-based HDR reconstruction method for a single picture.
Fig. 2 is a schematic diagram of an HDR image reconstruction overall architecture based on knowledge heuristics.
FIG. 3 is a schematic diagram of a knowledge heuristic block.
Fig. 4 is a schematic diagram of the knowledge-heuristic-based skip connection structure.
FIG. 5 shows the effect of the method of the present invention;
where (a) is the input LDR image one and (b) is the HDR image one generated by the proposed method.
FIG. 6 shows the second effect of the method of the present invention;
where (a) is the input LDR image two and (b) is the HDR image two generated by the proposed method.
FIG. 7 is a noise reduction analysis;
wherein, (a) is the original LDR image, (b) is the HDR image generated by the method of the present invention, (c) is the corresponding noise map, (d) is the noise map processed by the method of the present invention.
Detailed Description
In order to facilitate the understanding of the technical contents of the present invention by those skilled in the art, the present invention will be further explained with reference to the accompanying drawings.
The core of the knowledge-inspired single-picture HDR reconstruction method disclosed by the invention, as shown in FIG. 1, is that a basic convolution block for HDR image formation is constructed under the guidance of the derived LDR-to-HDR image formation pipeline. An end-to-end HDR reconstruction model is built from these convolution blocks and trained on paired datasets; specific embodiments are described in detail below.
1. The method comprises the following specific steps:
11. Analyze the LDR image formation pipeline, construct an LDR-to-HDR imaging pipeline from this knowledge, and divide it into three modules: a detail recovery module R, a sensor parameter adjustment module X and a noise elimination module Y.
Starting from the conventionally used HDR-to-LDR image formation pipeline, the invention assumes a virtual camera with unlimited luminance capture capability; analyzing this virtual camera together with a real camera gives the following image formation pipeline:
I_L = g·t·(Φ + n) + I_0 − I_overflow
where g represents a sensor parameter, t represents the exposure time, I_0 is an offset, I_L represents the captured LDR image, Φ represents the true HDR image, n represents sensor noise, and I_overflow represents the portion of the pixels that only a camera with unlimited capture capability could record. By inverting this formula, the invention obtains the HDR image formation formula shown below:
Φ = (I_L + I_overflow − I_0) / (g·t) − n
From the above, the LDR-to-HDR recovery process can be seen clearly. It contains three parts: 1) where I_overflow ≠ 0, inferring the pixels of the over-exposed area; 2) adjusting the sensor parameters and exposure time; and 3) removing the noise generated during LDR image formation.
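To make the three-part recovery concrete, the following toy sketch applies the inverted formation formula to a single pixel. The function name and the scalar values are illustrative assumptions; in the invention these three operations are not computed analytically but learned by the R, X and Y modules described below.

```python
import numpy as np

def inverse_formation(I_L, I_overflow, g, t, I_0, n):
    """Invert the assumed LDR formation model I_L = g*t*(Phi + n) + I_0 - I_overflow.
    The three recovery steps map onto the three learned modules:
      1) add back the clipped over-exposure term I_overflow (module R),
      2) undo the sensor gain g and exposure time t (module X),
      3) subtract the sensor noise n (module Y)."""
    return (I_L + I_overflow - I_0) / (g * t) - n

# Toy example: one over-exposed pixel clipped at 1.0 (all numbers are made up)
I_L, I_over = 1.0, 0.35
phi = inverse_formation(I_L, I_over, g=0.5, t=2.0, I_0=0.01, n=0.02)
print(phi)   # recovered scene radiance for this toy pixel
```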
12. After analyzing the HDR image formation formula obtained in step 11 and the roles and inherent characteristics of the three modules, the invention lifts the formula from the image space to the feature-map space, which gives the following HDR feature-map reconstruction formula:
H_F = X(L_F) ⊙ R(L_F) + Y(L_F)
where L_F and H_F respectively denote the feature map from the previous layer and the feature map generated by the knowledge heuristic block; R(·) denotes the recovery of lost over-exposed area information for reconstructing the HDR image; X(·) denotes the adjustment that maps R(L_F) into HDR-domain features; and Y(·) denotes the noise-reduction part, which removes the noise produced by the LDR image formation process.
Based on the above analysis and the strong fitting ability of convolutional neural networks, the invention simulates the functions of X and Y each with two 1×1 convolutions and builds a module suitable for R from several 3×3 convolutions, as follows:
X(L_F) = Conv_1×1 ∘ Conv_1×1 (L_F),
Y(L_F) = Conv_1×1 ∘ Conv_1×1 (L_F),
R(L_F) = Cat(DC(L_F), ..., DC^k(L_F))
where ∘ denotes the cascading of operations, Cat(·) denotes channel-wise concatenation, and DC(·) denotes a densely connected block. On this basis, the invention constructs a basic convolution module suitable for HDR feature-map reconstruction, namely the knowledge heuristic block shown in FIG. 3, comprising: a detail recovery module R, a sensor parameter adjustment module X, a noise elimination module Y and a reconstruction module. The input of the knowledge heuristic block serves as the input of the detail recovery module R, the sensor parameter adjustment module X and the noise elimination module Y; the Hadamard product of the output of X and the output of R, added to the output of Y, is used as the first output of the knowledge heuristic block; and the output of the detail recovery module R, passed through the reconstruction module, yields an image containing the over-exposed area information as the second output of the knowledge heuristic block.
The structure of the densely connected blocks in the detail recovery module R, and of the module as a whole, is shown in detail in FIG. 3. Each densely connected block in FIG. 3 comprises two 3×3 convolutions; the output of the first densely connected block serves as the input of the second, and so on. The outputs of the densely connected blocks are then concatenated to form the output of the R module, and the output of the R module is combined with the X and Y modules to form the output of the knowledge heuristic block.
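A minimal PyTorch sketch of the knowledge heuristic block under the structure just described is given below. The channel count, the ReLU activations inside the densely connected blocks, and the 1×1 fusion convolution that restores the channel count after concatenation are assumptions not stated in the text; the reconstruction branch used only during training is omitted.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Densely connected block: two 3x3 convolutions (intermediate ReLU assumed)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        return self.body(x)

class KnowledgeHeuristicBlock(nn.Module):
    def __init__(self, channels=64, k=3):
        super().__init__()
        self.dense = nn.ModuleList(DenseBlock(channels) for _ in range(k))
        self.fuse = nn.Conv2d(k * channels, channels, 1)   # merge Cat(DC, ..., DC^k) back to `channels`
        self.X = nn.Sequential(nn.Conv2d(channels, channels, 1),
                               nn.Conv2d(channels, channels, 1))   # sensor/exposure adjustment
        self.Y = nn.Sequential(nn.Conv2d(channels, channels, 1),
                               nn.Conv2d(channels, channels, 1))   # noise elimination

    def forward(self, L_F):
        feats, x = [], L_F
        for dc in self.dense:                        # cascaded dense blocks DC, DC^2, ..., DC^k
            x = dc(x)
            feats.append(x)
        R = self.fuse(torch.cat(feats, dim=1))       # detail recovery R(L_F)
        return self.X(L_F) * R + self.Y(L_F)         # H_F = X(L_F) ⊙ R(L_F) + Y(L_F)
```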
In addition, inspired by the above formula and in order to better recover the information of the over-exposed region, the invention designs an over-exposure loss function for the R module in each knowledge heuristic block, as follows:
L_over = Σ_{i=1}^{4} ( ‖(1−M) ⊙ (K_i − I_H)‖_1 + γ·‖M ⊙ (K_i − I_H)‖_1 )
the first term on the left of the above equation excludes the overexposed area information by a mask, and then the second term infers the overexposed area information by useful information.
Here ‖·‖_1 denotes the L1 norm, I_H denotes the real HDR image, K_i denotes the image reconstructed by the i-th reconstruction branch (i.e., the reconstruction module in FIG. 3), and γ is a small parameter used to avoid an excessive penalty; γ is set to 0.1. The reconstruction branch uses one upsampling and one convolutional layer when i is 1 or 4, and two upsampling and two convolutional layers when i is 2 or 3. M denotes the over-exposure area mask, which the invention calculates by the following formula:
M(x, y) = max(0, max_c I(x, y, c) − τ) / (1 − τ)
where c indexes the three RGB channels, τ = 0.83, and I(x, y, c) denotes the LDR image.
With this loss function, the R module can focus more attention on recovering the information of the over-exposed area. It is worth noting that the reconstruction branch is used only in the training phase and therefore introduces no additional overhead at inference time.
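The following is a sketch of the soft over-exposure mask and of one branch's loss term, under the formulas reconstructed above (τ = 0.83, γ = 0.1); the use of a mean-normalized L1 distance is an assumption.

```python
import torch

def overexposure_mask(ldr, tau=0.83):
    """M = max(0, max_c I(x, y, c) - tau) / (1 - tau), computed per pixel."""
    max_c, _ = ldr.max(dim=1, keepdim=True)          # maximum over the RGB channels
    return torch.clamp(max_c - tau, min=0.0) / (1.0 - tau)

def overexposure_loss_term(K_i, I_H, M, gamma=0.1):
    """Loss for one reconstruction branch: the well-exposed region (1 - M) gets the
    full L1 penalty, the over-exposed region M is down-weighted by the small gamma."""
    well_exposed = torch.abs((1.0 - M) * (K_i - I_H)).mean()
    over_exposed = torch.abs(M * (K_i - I_H)).mean()
    return well_exposed + gamma * over_exposed
```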
2. Constructing an end-to-end HDR reconstruction model, wherein the specific details are as follows:
21. Directly reconstructing the HDR image with the basic HDR feature-map reconstruction block constructed in step 1 would involve a large amount of computation and insufficient accuracy, so the invention reconstructs the HDR image step by step. First, a 3×3 convolution is applied to the LDR image at the head of the model to obtain the LDR image feature map, and a further 3×3 convolution performs downsampling. The resulting LDR feature map is then reconstructed and recovered for the first time by the knowledge heuristic block constructed in step 1. The reconstructed pseudo-HDR feature map is downsampled a second time and reconstructed and recovered a second and a third time by knowledge heuristic blocks. The feature map after the third reconstruction is upsampled through a 3×3 convolution and a PixelShuffle operation and recovered a fourth time by a knowledge heuristic block; the recovered feature map is upsampled again and converted into the target HDR image by a 3×3 convolution at the tail.
In fig. 2, the first 256 × 64 represents the size of the output of the head, the second 256 × 64 represents the size of the input of the tail, 128 × 64 represents the size of the output of the first knowledge heuristic block, 64 × 64 represents the size of the output of the third heuristic block, and 128 × 64 represents the size of the output of the fourth heuristic block.
22. The end-to-end HDR reconstruction model constructed in step 21 is then optimized. Conventional up/down-sampling architectures include skip connection structures, but because of the particularity of HDR image reconstruction, namely the dynamic-range gap between the LDR image and the HDR image, a direct skip connection is not well suited to the HDR reconstruction problem. First, the LDR-to-HDR process is, strictly speaking, an image generation problem, and the LDR image formation pipeline shows that it involves several noise issues; second, the dynamic-range gap between LDR and HDR images makes the feature spaces of the front and back ends of the network inconsistent. The invention therefore improves the skip connection structure and optimizes the HDR reconstruction model constructed in step 21 by building a noise-reduction, mapping and suppression branch from convolutional modules. For the noise-reduction problem, the invention uses a noise reduction module with a residual structure as shown in FIG. 4, which comprises two 3×3 convolutions and two activation functions that correct the noise information of the feature map, and which can be described by the following formula:
D = Conv_3×3 ∘ ReLU ∘ Conv_3×3 ∘ ReLU(F) + F
where F represents the input of the skip connection structure and D represents the output of the noise reduction module;
To map features that are close to the LDR domain into a feature space close to the HDR domain, the invention uses a mapping module consisting of two convolution-activation operations, i.e., the last two convolution-activation operations in FIG. 4:
G = ReLU ∘ Conv ∘ ReLU ∘ Conv (D)
where G represents the output of the mapping module;
In addition, in order to suppress useless information and reduce visual noise, the invention filters the information generated by the above branch once more; the final result is:
O = AD( cos(G, P) ⊙ G )
the AD module represents 1 × 1 convolution, i.e., the adjustment unit in fig. 4, and is used to adjust the scored features. P represents decoding side information of the conventional hop structure. For the scoring mechanism, the cosine similarity is adopted in the invention as follows:
cos(G, P) = ⟨G, P⟩ / (‖G‖ · ‖P‖ + ε)
where ε is a very small parameter that prevents division-by-zero errors.
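Putting the noise reduction module, the mapping module and the cosine-similarity filtering together, the skip connection branch can be sketched as follows. The kernel sizes of the mapping convolutions, the value of ε and the exact placement of the activations are assumptions; as in the claims, the output is intended to be added to the decoder-side feature outside this module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeSkipConnection(nn.Module):
    def __init__(self, channels=64, eps=1e-8):
        super().__init__()
        self.eps = eps
        # noise reduction module: ReLU/Conv stack with a residual connection
        self.denoise = nn.Sequential(
            nn.ReLU(inplace=True), nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True), nn.Conv2d(channels, channels, 3, padding=1))
        # mapping module: two convolution-activation operations (kernel size assumed)
        self.mapping = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.adjust = nn.Conv2d(channels, channels, 1)   # AD: 1x1 adjustment unit

    def forward(self, enc, dec):
        d = self.denoise(enc) + enc       # D = Conv ∘ ReLU ∘ Conv ∘ ReLU(F) + F
        g = self.mapping(d)               # mapping-module output G
        # per-pixel cosine-similarity score between G and the decoder feature P
        score = F.cosine_similarity(g, dec, dim=1, eps=self.eps).unsqueeze(1)
        return self.adjust(score * g)     # filtered skip output (added to dec outside)
```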
23. For the model obtained through steps 1 and 2, the HDR image reconstruction problem is solved by defining a simple loss function composed of a main loss term and a mask (over-exposure) loss term, specifically:
L = ‖Î_H − I_H‖_1 + β·L_over
where Î_H represents the generated HDR image and β is an auxiliary weight; in the training process of the invention, β is set to 0.01. Although a loss consisting only of the main term ‖Î_H − I_H‖_1 can already reconstruct an HDR image, adding the over-exposure loss L_over as an auxiliary term yields a better reconstructed HDR image.
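Assuming the total loss reconstructed above, its combination with the per-branch over-exposure terms (for example, those produced by the overexposure_loss_term sketch earlier) can be sketched as:

```python
import torch

def total_loss(generated_hdr, real_hdr, overexposure_terms, beta=0.01):
    """Main L1 term plus the auxiliary over-exposure loss terms, weighted by beta (sketch)."""
    main = torch.abs(generated_hdr - real_hdr).mean()
    return main + beta * sum(overexposure_terms)
```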
3. Training on paired datasets; the specific training details are as follows:
31. For the still-image problem, the dataset used by the invention is the NTIRE2021 dataset. Its scenes are rich, covering indoor and outdoor scenes, and most pictures have a wide brightness range; it comprises 1416 paired training pictures and 78 test pictures. This dataset is widely used in HDR image generation tasks.
32. For the video problem, the invention uses the HDRTV dataset, which is built from LDR and HDR video sets with the BT.709 and BT.2020 color gamuts respectively; it contains 1235 pairs of training pictures and 117 test pictures. For this problem, the invention further constructs a perceptual loss function to help the model recover HDR video with better visual quality, specifically:
L_p = ‖f_vgg19(Î_H) − f_vgg19(I_H)‖_1
where f_vgg19 is the feature extractor used to evaluate the deep-feature similarity between the HDR image generated by the model and the real HDR image. With the perceptual loss function added, the overall loss function of the invention is as follows:
L_total = L + L_p
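A sketch of such a perceptual term using torchvision's pretrained VGG19 is given below; the choice of cut-off layer and the use of an L1 distance are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class PerceptualLoss(nn.Module):
    """Deep-feature distance between generated and real HDR images, extracted with VGG19."""
    def __init__(self, cutoff=35):
        super().__init__()
        self.features = vgg19(pretrained=True).features[:cutoff].eval()
        for p in self.features.parameters():
            p.requires_grad = False          # the feature extractor is kept fixed

    def forward(self, generated, target):
        return torch.abs(self.features(generated) - self.features(target)).mean()
```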
33. The invention uses the Adam optimizer with the learning rate set to 2e-4 and halved every 200,000 iterations. All models are built with the PyTorch framework and trained on an NVIDIA GeForce RTX 2080 SUPER, with a total training time of 6 days.
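An illustrative sketch of this training configuration follows (Adam, initial learning rate 2e-4, halved every 200,000 iterations); the stand-in model and the use of StepLR stepped once per iteration are assumptions.

```python
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)   # stand-in for the reconstruction network
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
# halve the learning rate every 200,000 iterations (scheduler stepped once per iteration)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200_000, gamma=0.5)
```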
FIG. 5 and FIG. 6 show two different LDR images and the HDR images generated from them by the method of the invention. In FIG. 7, (a) is an original LDR image, (b) is the HDR image generated by the method of the invention, (c) is the noise map corresponding to (a), and (d) is the noise map after processing by the method of the invention. From FIGS. 5, 6 and 7 it can be seen that the method of the invention effectively infers the information of over-exposed regions and effectively enhances the realism of the reconstructed image.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and should not be construed as limiting the invention to the specifically recited embodiments and examples. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (7)

1. A knowledge-heuristic-based single-image HDR reconstruction method, characterized in that a reconstruction network model is built, the built reconstruction network model is trained with paired LDR-HDR images, and the LDR image to be processed is finally reconstructed with the trained reconstruction network to obtain the corresponding HDR image;
the reconstruction network model comprises a knowledge heuristic block used to recover the input image, the structure of which comprises: a detail recovery module R, a sensor parameter adjustment module X and a noise elimination module Y; the input of the knowledge heuristic block serves respectively as the input of the detail recovery module R, the sensor parameter adjustment module X and the noise elimination module Y, and the Hadamard product of the output of the sensor parameter adjustment module X and the output of the detail recovery module R, added to the output of the noise elimination module Y, is used as the output of the knowledge heuristic block.
2. The knowledge-heuristic-based single-image HDR reconstruction method according to claim 1, characterized in that the detail recovery module R is formed from a plurality of densely connected blocks, expressed as:
R(L_F) = Cat(DC(L_F), ..., DC^k(L_F))
where Cat(·) denotes concatenation, DC(·) denotes a densely connected block and DC^k(·) the k-th cascaded densely connected block, and L_F denotes the input of the knowledge heuristic block, which is also the input of the detail recovery module R.
3. The method as claimed in claim 2, wherein during training the knowledge heuristic block further comprises a reconstruction module, the input of the reconstruction module is the output of the detail recovery module R, and the output of the reconstruction module is an image containing the information of the over-exposed region.
4. The knowledge-heuristic-based single-image HDR reconstruction method as claimed in claim 3, wherein the loss function adopted by the knowledge heuristic block in the training process is:
L_over = Σ_{i=1}^{4} ( ‖(1−M) ⊙ (K_i − I_H)‖_1 + γ·‖M ⊙ (K_i − I_H)‖_1 )
where I_H represents the real HDR image, K_i represents the image containing the over-exposed area information, γ is a small weighting parameter, and M represents the over-exposure area mask, calculated as
M(x, y) = max(0, max_c I(x, y, c) − τ) / (1 − τ)
where I(x, y, c) represents the LDR image, c indexes the RGB channels and τ is a threshold.
5. The knowledge-heuristic-based single-image HDR reconstruction method as claimed in claim 4, wherein the reconstruction network model specifically comprises four knowledge heuristic blocks and further comprises: a head 3×3 convolution, 2 downsampling modules, 2 upsampling modules, a tail 3×3 convolution and three skip connection structures; the four knowledge heuristic blocks are denoted in order as: a first knowledge heuristic block, a second knowledge heuristic block, a third knowledge heuristic block and a fourth knowledge heuristic block; the 2 downsampling modules are denoted in order as: a first downsampling module and a second downsampling module; the 2 upsampling modules are denoted in order as: a first upsampling module and a second upsampling module; the three skip connection structures are denoted in order as: a first skip connection structure, a second skip connection structure and a third skip connection structure;
the head 3×3 convolution, the first downsampling module, the first knowledge heuristic block, the second downsampling module, the second knowledge heuristic block, the third knowledge heuristic block, the first upsampling module, the fourth knowledge heuristic block, the second upsampling module and the tail 3×3 convolution are connected in series;
the input of the first skip connection structure comprises the output of the head 3×3 convolution and the output of the fourth knowledge heuristic block, which serve respectively as the encoding-side information and the decoding-side information of the first skip connection structure; the sum of the output of the first skip connection structure and the output of the fourth knowledge heuristic block is used as the input of the tail 3×3 convolution;
the input of the second skip connection structure comprises the output of the second knowledge heuristic block and the output of the first upsampling module, which serve respectively as the encoding-side information and the decoding-side information of the second skip connection structure; the sum of the output of the second skip connection structure and the output of the first upsampling module is used as the input of the fourth knowledge heuristic block;
the input of the third skip connection structure comprises the output of the third knowledge heuristic block and the output of the second downsampling module, which serve respectively as the encoding-side information and the decoding-side information of the third skip connection structure; the sum of the output of the third skip connection structure and the output of the third knowledge heuristic block is used as the input of the first upsampling module.
6. The knowledge-heuristic-based single-image HDR reconstruction method according to claim 5, wherein the skip connection structure comprises: a noise reduction module, a mapping module and a filtering module; the input of the noise reduction module is the encoding-side information of the skip connection structure, the output of the noise reduction module serves as the input of the mapping module, the output of the mapping module and the decoding-side information of the skip connection structure serve as the inputs of the filtering module, and the output of the filtering module is the output of the skip connection structure.
7. The knowledge-heuristic-based single-image HDR reconstruction method according to claim 6, wherein the loss function adopted by the reconstruction network model during training is:
L = ‖Î_H − I_H‖_1 + β·L_over
where Î_H represents the HDR image generated by the reconstruction network model, I_H represents the real HDR image, β is an auxiliary weighting parameter, and L_over is the loss function defined in claim 4.
CN202210460159.6A 2022-04-28 2022-04-28 Single image reconstruction HDR method based on knowledge heuristic Active CN114820373B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210460159.6A CN114820373B (en) 2022-04-28 2022-04-28 Single image reconstruction HDR method based on knowledge heuristic

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210460159.6A CN114820373B (en) 2022-04-28 2022-04-28 Single image reconstruction HDR method based on knowledge heuristic

Publications (2)

Publication Number Publication Date
CN114820373A true CN114820373A (en) 2022-07-29
CN114820373B CN114820373B (en) 2023-04-25

Family

ID=82509510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210460159.6A Active CN114820373B (en) 2022-04-28 2022-04-28 Single image reconstruction HDR method based on knowledge heuristic

Country Status (1)

Country Link
CN (1) CN114820373B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150358646A1 (en) * 2013-02-21 2015-12-10 Koninklijke Philips N.V. Improved hdr image encoding and decoding methods and devices
CN106464892A (en) * 2014-05-28 2017-02-22 皇家飞利浦有限公司 Methods and apparatuses for encoding HDR images, and methods and apparatuses for use of such encoded images
EP3119088A1 (en) * 2015-07-16 2017-01-18 Thomson Licensing Method and device for encoding an image, method and device for decoding an image
CN108805836A (en) * 2018-05-31 2018-11-13 大连理工大学 Method for correcting image based on the reciprocating HDR transformation of depth
CN111709900A (en) * 2019-10-21 2020-09-25 上海大学 High dynamic range image reconstruction method based on global feature guidance
CN111292264A (en) * 2020-01-21 2020-06-16 武汉大学 Image high dynamic range reconstruction method based on deep learning
US20210398257A1 (en) * 2020-06-18 2021-12-23 Samsung Electronics Co., Ltd. Method and device for mapping ldr video into hdr video
CN113344773A (en) * 2021-06-02 2021-09-03 电子科技大学 Single picture reconstruction HDR method based on multi-level dual feedback

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
刘真; 杨丹丹; 朱明: "Comparative Study of HDR Image Compression Algorithms" *
叶年进: "Research on Deep-Learning-Based HDR Imaging Methods" *
周燕琴; 吕绪洋; 朱雄泳: "A Multi-Exposure HDR Image Generation Method Based on an Improved Pyramid" *

Also Published As

Publication number Publication date
CN114820373B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
CN111311490B (en) Video super-resolution reconstruction method based on multi-frame fusion optical flow
WO2021208122A1 (en) Blind video denoising method and device based on deep learning
CN109447907B (en) Single image enhancement method based on full convolution neural network
CN111667424B (en) Unsupervised real image denoising method
CN107403415B (en) Compressed depth map quality enhancement method and device based on full convolution neural network
CN111028150B (en) Rapid space-time residual attention video super-resolution reconstruction method
CN111861902A (en) Deep learning-based Raw domain video denoising method
CN111539884A (en) Neural network video deblurring method based on multi-attention machine mechanism fusion
CN112801901A (en) Image deblurring algorithm based on block multi-scale convolution neural network
CN116051428B (en) Deep learning-based combined denoising and superdivision low-illumination image enhancement method
CN114494050A (en) Self-supervision video deblurring and image frame inserting method based on event camera
CN108989731B (en) Method for improving video spatial resolution
CN112200732B (en) Video deblurring method with clear feature fusion
CN116894770A (en) Image processing method, image processing apparatus, and computer program
CN114339030B (en) Network live video image stabilizing method based on self-adaptive separable convolution
CN112750092A (en) Training data acquisition method, image quality enhancement model and method and electronic equipment
CN116456183B (en) High dynamic range video generation method and system under guidance of event camera
CN112529776A (en) Training method of image processing model, image processing method and device
CN113724134A (en) Aerial image blind super-resolution reconstruction method based on residual distillation network
CN113379606A (en) Face super-resolution method based on pre-training generation model
CN116228550A (en) Image self-enhancement defogging algorithm based on generation of countermeasure network
CN116389912B (en) Method for reconstructing high-frame-rate high-dynamic-range video by fusing pulse camera with common camera
CN112862675A (en) Video enhancement method and system for space-time super-resolution
CN116757959A (en) HDR image reconstruction method based on Raw domain
CN116823662A (en) Image denoising and deblurring method fused with original features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant