CN113870126B - Bayer image recovery method based on attention module


Info

Publication number
CN113870126B
CN113870126B (application CN202111043024.1A)
Authority
CN
China
Prior art keywords
convolution layer
channel
multiplied
image
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111043024.1A
Other languages
Chinese (zh)
Other versions
CN113870126A (en
Inventor
孙帮勇 (Sun Bangyong)
魏凌云 (Wei Lingyun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Dianwei Culture Communication Co ltd
Shenzhen Litong Information Technology Co ltd
Original Assignee
Shenzhen Dianwei Culture Communication Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Dianwei Culture Communication Co ltd filed Critical Shenzhen Dianwei Culture Communication Co ltd
Priority to CN202111043024.1A
Publication of CN113870126A
Application granted
Publication of CN113870126B
Legal status: Active


Classifications

    • G06T5/90
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a Bayer image recovery method based on an attention module, comprising the following steps: 1) construct a green channel recovery network, whose input is the green sampled map separated from the Bayer image by channel splitting and whose output is the reconstructed green channel map; 2) construct a feature guidance module, whose inputs are the reconstructed green channel map and the output of the encoder-decoder, and whose output is a guide map; 3) construct a red channel recovery network, whose inputs are the red sampled map and the guide map, and whose output is the reconstructed red channel map; 4) construct a blue channel recovery network; 5) fuse the reconstructed red, green, and blue channel maps to obtain a reconstructed RGB map; 6) compute the mean absolute error between the reconstructed RGB map and the real image in the image pair, and optimize the network model with the goal of minimizing the L1 loss function. The method of the invention can obtain high-quality reconstructed images.

Description

Bayer image recovery method based on attention module
Technical Field
The invention belongs to the technical field of image processing and deep learning, and relates to a Bayer image recovery method based on an attention module.
Background
Color images are typically represented by three color components, red (R), green (G), and blue (B); each component is referred to as a color channel. RGB digital cameras are currently the most popular devices for recording color images, and most of them use a single-sensor imaging architecture. The image sensor in a single-sensor color digital camera is usually a charge-coupled device or a complementary metal-oxide-semiconductor chip, which senses only light intensity, not color. A filter must therefore be placed in front of the sensor so that only light of a certain wavelength passes through; during exposure, each pixel position of the sensor collects only one color. The directly sampled image is called a Bayer image, and the process of reconstructing, at each pixel position, the two color components that were not directly sampled is called image demosaicing.
In current RGB digital cameras, the most common color filter array is the Bayer filter array, whose imaging area is composed of a repeating 2 × 2 pattern; each 2 × 2 block contains 2 green (G), 1 red (R), and 1 blue (B) pixels. As a result, 2/3 of the color information is lost, and the 1/3 that is sampled is largely contaminated by noise, which degrades the quality of the reconstructed image. Image demosaicing is the first step in the image processing pipeline and lays the foundation for the subsequent series of image processing tasks, so reconstructing high-quality images is of great significance.
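To make the 2 × 2 sampling pattern above concrete, the following numpy sketch simulates an RGGB Bayer filter array applied to an RGB image, so that each pixel position keeps exactly one of the three color components (the function name and the RGGB layout convention are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def bayer_sample(rgb):
    """Simulate an RGGB Bayer filter array on an H x W x 3 image.

    Returns a single-channel H x W mosaic in which each 2 x 2 block keeps
    R at (0, 0), G at (0, 1) and (1, 0), and B at (1, 1); the other two
    color components at every pixel are discarded (2/3 of the information).
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red-sampled positions
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green-sampled positions, even rows
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green-sampled positions, odd rows
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue-sampled positions
    return mosaic
```

In each 2 × 2 block, two of the four retained samples are green, matching the 2G/1R/1B ratio described above.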
Current image demosaicing methods can be broadly divided into three categories: interpolation-based, sparse-representation-based, and deep-learning-based. Among interpolation-based methods, some algorithms ignore the correlation between channels, while others that do consider it still produce unsatisfactory reconstructions in regions such as edges and textures. Sparse-representation-based methods achieve high accuracy but at high computational complexity, which limits their practical application. Deep-learning-based methods design a neural network to learn the features of the raw image and the correlation between adjacent pixels of each channel, and have made notable progress in obtaining reconstructed images; however, some networks first downsample the raw image into a half-size four-channel image, which reduces resolution, loses image detail, and discards the relative position information among R, G, and B, making the reconstruction inaccurate. Network models also suffer from complexity and training difficulty.
Disclosure of Invention
The invention aims to provide a Bayer image recovery method based on an attention module, which solves the problems of difficult training and low reconstruction accuracy of network models in prior-art deep-learning-based demosaicing.
The technical scheme of the invention is a Bayer image recovery method based on an attention module, implemented according to the following steps:
Step 1, construct a green channel recovery network, whose input is the green sampled map separated from the Bayer image by channel splitting and whose output is the reconstructed green channel map;
Step 2, construct a feature guidance module, whose inputs are the reconstructed green channel map and the output of the encoder-decoder, and whose output is a guide map;
Step 3, construct a red channel recovery network, whose inputs are the red sampled map and the guide map, and whose output is the reconstructed red channel map;
Step 4, construct a blue channel recovery network, whose structure and flow are similar to those of the red channel recovery network, the difference being that its inputs are the blue sampled map and the guide map and its output is the reconstructed blue channel map;
Step 5, fuse the reconstructed red, green, and blue channel maps to obtain the reconstructed RGB map;
Step 6, with the constructed attention-module-based network model, compute the mean absolute error between the reconstructed RGB map and the real image in the image pair, and optimize the network model with the goal of minimizing the L1 loss function.
The beneficial effects of the invention include the following aspects:
1) Most existing methods downsample the Bayer image into a half-size four-channel RGGB image, which reduces resolution, loses image detail, and discards the relative position information of R, G, and B. The invention instead preprocesses the Bayer image by channel separation, preserving image resolution, detail, and the relative positions of R, G, and B.
2) The invention reconstructs the images of the three Bayer channels separately. Unlike most methods, which apply the same network structure to all channels at all positions, it takes the characteristics of the Bayer lattice structure into account and applies a different network to each of R, G, and B.
3) The invention uses the attention-module principle so that each channel of the feature map learns a different kind of feature, letting the network autonomously select and focus on useful features while suppressing useless ones.
4) In the prior art, when the reconstructed green channel map is used as a guide map or as prior information for reconstructing the red and blue channel maps, mostly simple operations are applied, such as concatenation, addition, and element-wise multiplication, which cannot effectively mine the features of the prior information. The invention provides a feature guidance module that applies a non-homogeneous linear mapping model in the feature domain to effectively fuse the prior information of the green channel map, so that a high-quality reconstructed image can be obtained.
Drawings
FIG. 1 is a general flow diagram of the method of the present invention;
FIG. 2 is a block flow diagram of a green channel recovery network constructed in accordance with the method of the present invention;
FIG. 3 is a block flow diagram of an attention module constructed in accordance with the method of the present invention;
FIG. 4 is a block flow diagram of a feature guidance module constructed in accordance with the method of the present invention;
FIG. 5 is a flow chart of a red/blue channel recovery network constructed in accordance with the method of the present invention.
Detailed Description
The invention will be described in detail below with reference to the drawings and specific embodiments.
The invention mainly relates to a Bayer image recovery method based on an attention module, whose overall idea is as follows. First, each real picture in the dataset is preprocessed: it is sampled with a Bayer filter array to obtain a Bayer image, so that the Bayer image and the real image form an image pair, which facilitates network optimization; the Bayer image is then split by channel into a red sampled map, a green sampled map, and a blue sampled map. Next, the green channel is restored: features are extracted and learned from the green sampled map by the attention module and the encoder-decoder, and the green channel map is reconstructed. Then, the reconstructed green channel map serves as prior information to guide the feature learning of the red and blue channels, and the red and blue channel maps are reconstructed respectively. Finally, the three channel maps are combined into the final reconstructed RGB map.
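The channel-splitting preprocessing described above can be sketched as follows: the Bayer mosaic is separated into red, green, and blue sampled maps at the original H × W resolution, with zeros at the positions that were not sampled, so the relative positions of R, G, and B are preserved (a hypothetical numpy illustration assuming an RGGB layout; the names are not from the patent):

```python
import numpy as np

def split_bayer_channels(mosaic):
    """Split an H x W RGGB Bayer mosaic into three sparse H x W sampled maps."""
    h, w = mosaic.shape
    red = np.zeros((h, w), dtype=mosaic.dtype)
    green = np.zeros((h, w), dtype=mosaic.dtype)
    blue = np.zeros((h, w), dtype=mosaic.dtype)
    red[0::2, 0::2] = mosaic[0::2, 0::2]    # R samples; unsampled positions stay 0
    green[0::2, 1::2] = mosaic[0::2, 1::2]  # G samples on even rows
    green[1::2, 0::2] = mosaic[1::2, 0::2]  # G samples on odd rows
    blue[1::2, 1::2] = mosaic[1::2, 1::2]   # B samples
    return red, green, blue
```

Unlike downsampling to a half-size RGGB stack, each sampled map keeps the full H × W resolution.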
As shown in fig. 1, the method comprises four parts: green channel recovery, red channel recovery, blue channel recovery, and merging into the reconstructed RGB map. Green channel recovery learns the features of different channels through an attention module and effectively exploits multi-scale features through an encoder-decoder to obtain the reconstructed green channel map. Red channel recovery makes full use of the prior information in the reconstructed green channel map to reconstruct the red channel map. Blue channel recovery follows the same process as red channel recovery. Finally, the reconstructed red, green, and blue channel maps are fused into an RGB three-channel image, yielding the final reconstructed RGB map.
Using the above principle and network framework, the method of the invention is implemented according to the following steps.
Step 1, construct a green channel recovery network, whose main function is to reconstruct the green channel map.
The input of the green channel recovery network is the green sampled map separated from the Bayer image by channel splitting, of size H×W×3, where H and W denote the height and width of the input image; the output is the reconstructed green channel map of size H×W×1.
As shown in fig. 2, the green channel recovery network comprises convolution operations, an attention module, and an encoder-decoder. Fig. 2 uses dedicated symbols for a downsampling operation with bilinear interpolation and a 1×1 convolution, for an upsampling operation with bilinear interpolation and a 1×1 convolution, and for a stitching (concatenation) operation along dimension 1. The flow structure of the green channel recovery network is as follows: the green sampled map (H×W×3) as input → the first convolution layer Conv1 → the attention module → the second convolution layer Conv2 → the third convolution layer Conv3 → the fourth convolution layer Conv4 → the fifth convolution layer Conv5 → the sixth convolution layer Conv6 → the seventh convolution layer Conv7 → the eighth convolution layer Conv8 → output as the reconstructed green channel map (H×W×1).
In this embodiment, the first convolution layer Conv1 has a 3×3 kernel, a step size of 3, and a total of 64 feature maps. The second convolution layer Conv2 through the seventh convolution layer Conv7 form the encoder-decoder; their kernels are all 3×3, their step sizes are all 3, their feature-map totals are all 64, and they are activated by ReLU. The output sizes of Conv2, Conv3, and Conv4 are H×W, 1/2(H×W), and 1/4(H×W) respectively; the output sizes of Conv5, Conv6, and Conv7 are 1/4(H×W), 1/2(H×W), and H×W respectively. The eighth convolution layer Conv8 has a 1×1 kernel, a step size of 1, and a total of 1 feature map.
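The encoder-decoder above halves and quarters the spatial size and then restores it, and fig. 2 performs this resizing with bilinear interpolation plus a 1×1 convolution. A minimal numpy sketch of the bilinear resizing alone (the 1×1 convolution and learned weights are omitted; this illustrates the interpolation, not the patent's implementation):

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D array with bilinear interpolation (half-pixel sample centers)."""
    h, w = img.shape
    ys = np.clip((np.arange(out_h) + 0.5) * h / out_h - 0.5, 0, h - 1)
    xs = np.clip((np.arange(out_w) + 0.5) * w / out_w - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]  # vertical interpolation weights
    wx = (xs - x0)[None, :]  # horizontal interpolation weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

Downsampling to 1/2(H×W) would be `bilinear_resize(x, h // 2, w // 2)` and upsampling the reverse; in practice a deep-learning framework's built-in bilinear interpolation would be used instead.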
The attention module makes each channel of the feature map learn a different kind of feature, enabling the network to autonomously select and focus on useful features while suppressing useless ones.
As shown in FIG. 3, dedicated symbols in the figure denote the element-wise multiplication and element-wise addition operations. The flow structure of the attention module is, in order: input → the first convolution layer Conv1 → the second convolution layer Conv2 → global average pooling → the third convolution layer Conv3 → the fourth convolution layer Conv4 → the outputs of the second convolution layer Conv2 and the fourth convolution layer Conv4 are multiplied element-wise → the initial input and the multiplication result are added element-wise. In this embodiment, the first convolution layer Conv1 has a 3×3 kernel, a step size of 3, and 64 feature maps, activated by PReLU; the second convolution layer Conv2 has a 3×3 kernel, a step size of 3, and 64 feature maps; the third convolution layer Conv3 has a 1×1 kernel, a step size of 1, and 64 feature maps, activated by PReLU; the fourth convolution layer Conv4 has a 1×1 kernel, a step size of 1, and 64 feature maps, and the weights of the features are obtained through a Sigmoid function.
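The gating in the attention module can be sketched in numpy as follows: global average pooling reduces a C × H × W feature map to one descriptor per channel, two small transforms with PReLU and Sigmoid produce per-channel weights in (0, 1), the features are rescaled element-wise, and the input is added back. The patent's 3×3 convolutions are replaced here by plain C × C matrices, so this is a simplified illustration of the mechanism, not the layer-for-layer module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def prelu(x, a=0.25):
    """PReLU with a fixed (untrained) negative-side slope `a`."""
    return np.where(x > 0, x, a * x)

def channel_attention(feat, w1, w2):
    """feat: C x H x W feature map; w1, w2: C x C stand-ins for the conv layers."""
    pooled = feat.mean(axis=(1, 2))             # global average pooling -> C values
    weights = sigmoid(w2 @ prelu(w1 @ pooled))  # per-channel weights in (0, 1)
    gated = feat * weights[:, None, None]       # element-wise multiplication
    return feat + gated                         # element-wise residual addition
```

Channels whose weight is near 1 are emphasized and channels near 0 are suppressed, which is the autonomous feature selection described above.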
Step 2, construct a feature guidance module. Its inputs are the reconstructed green channel map (of size H×W×1) and the output of the encoder-decoder (of size H×W×64); its output is a guide map of size H×W×64. Most demosaicing methods generally fuse the prior information of the green channel map with simple operations such as concatenation, addition, and element-wise multiplication when reconstructing the red and blue channel maps, and therefore cannot effectively mine the prior information to obtain a high-quality image. The feature guidance module of this step instead applies a non-homogeneous linear mapping model in the feature domain to effectively fuse the prior information of the green channel map.
As shown in fig. 4, the flow structure of the feature guidance module is, in order: the output of the encoder-decoder → the 1st convolution layer Conv1; the output of Conv1 and the green channel map are multiplied element-wise and passed through the 2nd convolution layer Conv2; the output of the encoder-decoder and this result are added element-wise → the guide map is generated. In this embodiment, the 1st convolution layer Conv1 has a 1×1 kernel, a step size of 1, and 64 feature maps; the 2nd convolution layer Conv2 has a 1×1 kernel, a step size of 1, and 64 feature maps, activated by a Sigmoid function.
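One plausible reading of the flow above, sketched in numpy: the decoder features pass through the 1st transform, are modulated element-wise by the green channel map, pass through the Sigmoid-activated 2nd transform, and are added back to the decoder features to form the guide map. The 1×1 convolutions are modeled as C × C matrices and all names are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feature_guidance(feat, green, w1, w2):
    """feat: C x H x W encoder-decoder output; green: H x W green channel map;
    w1, w2: C x C stand-ins for the two 1 x 1 convolution layers."""
    # 1st transform, then element-wise modulation by the green-channel prior
    modulated = np.einsum('dc,chw->dhw', w1, feat) * green[None, :, :]
    # Sigmoid-activated 2nd transform
    mapped = sigmoid(np.einsum('dc,chw->dhw', w2, modulated))
    # element-wise addition with the decoder output yields the guide map
    return feat + mapped
```

The guide map thus has the form feat + f(feat, green): an offset of the features driven by the green prior, rather than a plain concatenation or addition of raw maps.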
Step 3, construct a red channel recovery network. Its inputs are the red sampled map (of size H×W×1) and the guide map (of size H×W×64); its output is the reconstructed red channel map of size H×W×1. The function of the red channel recovery network is to use the guide map as prior information to guide the reconstruction of the red channel map.
As shown in fig. 5, the flow structure of the red channel recovery network is as follows: red sampled map → the Ist convolution layer Conv1 → attention module; the attention module output and the guide map are concatenated along dimension 1 → encoder-decoder → the IInd convolution layer Conv2 → reconstructed red channel map. In this embodiment, the Ist convolution layer Conv1 has a 3×3 kernel, a step size of 3, and 64 feature maps; the IInd convolution layer Conv2 has a 1×1 kernel, a step size of 1, and 1 feature map.
Step 4, construct a blue channel recovery network. Its structure and flow are similar to those of the red channel recovery network, the difference being that its inputs are the blue sampled map (of size H×W×1) and the guide map (of size H×W×64), and its output is the reconstructed blue channel map of size H×W×1.
Step 5, fuse the reconstructed red channel map (H×W×1), green channel map (H×W×1), and blue channel map (H×W×1) to obtain the reconstructed RGB map (H×W×3).
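The fusion in step 5 is a channel-stacking operation; a minimal numpy sketch (the function name is assumed):

```python
import numpy as np

def fuse_rgb(red, green, blue):
    """Stack three reconstructed H x W channel maps into an H x W x 3 RGB map."""
    assert red.shape == green.shape == blue.shape
    return np.stack([red, green, blue], axis=-1)
```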
Step 6, with the constructed attention-module-based network model, compute the mean absolute error between the reconstructed RGB map (of size H×W×3) and the real image (Ground Truth) in the image pair, and optimize the network model with the goal of minimizing the L1 loss function.
The L1 loss function measures the difference between the reconstructed RGB map and the corresponding real image (Ground Truth), and mainly serves to preserve the color and structure information of the image.
The expression of the L1 loss function is:

L1 = (1/N) Σ_{i=1}^{N} ‖X_i − X̂_i‖_1

wherein N is the number of images in each batch, X_i is the reconstructed RGB map obtained in step 5, and X̂_i is the corresponding real image (i.e., Ground Truth).
Since the L1 loss function is a pixel-level loss, it does not account for perceived image quality (e.g., perceptual quality, texture); high-frequency details are often lacking and the resulting textures are too smooth to be satisfactory. This step therefore also introduces a perceptual loss function, so that the generated features are sufficiently similar to the corresponding features of the real image (i.e., Ground Truth), improving the perceptual quality of the final reconstructed RGB map. The expression of the perceptual loss function is:

Lp = (1/(C_j·H_j·W_j)) ‖ψ_j(I_G) − ψ_j(I_R)‖₂²

wherein C_j is the number of channels of the j-th layer feature, H_j is the height of the j-th layer feature, W_j is the width of the j-th layer feature, ψ_j(·) is the feature map obtained from the j-th convolution layer of the pretrained VGG19 model, I_G is the real image, and I_R is the reconstructed RGB map.
Combining the two loss functions, the loss function expression of the whole Bayer image recovery model is:

L = λ1·L1 + λ2·Lp

wherein λ1 and λ2 are tuning parameters between the L1 loss function and the perceptual loss function.
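The combined objective L = λ1·L1 + λ2·Lp can be sketched as follows. The feature extractor `psi` stands in for the pretrained VGG19 layer ψ_j; here it is any callable supplied by the caller, and the default weights λ1 = 1.0, λ2 = 0.1 are illustrative assumptions, not values given by the patent:

```python
import numpy as np

def l1_loss(recon, truth):
    """Mean absolute error over a batch: (1/N) * sum_i |X_i - Xhat_i|."""
    n = recon.shape[0]
    return np.abs(recon - truth).sum() / n

def perceptual_loss(psi, recon, truth):
    """Squared L2 distance between feature maps, normalized by C_j * H_j * W_j."""
    f_r, f_g = psi(recon), psi(truth)
    c, h, w = f_r.shape[-3:]
    return ((f_r - f_g) ** 2).sum() / (c * h * w)

def total_loss(recon, truth, psi, lam1=1.0, lam2=0.1):
    """L = lam1 * L1 + lam2 * Lp, the full Bayer recovery objective."""
    return lam1 * l1_loss(recon, truth) + lam2 * perceptual_loss(psi, recon, truth)
```

In training, `recon` would be the reconstructed RGB batch and `psi` a frozen VGG19 feature layer; here an identity `psi` suffices to exercise the arithmetic.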

Claims (3)

1. A Bayer image recovery method based on an attention module, characterized by comprising the following steps:
step 1, constructing a green channel recovery network, wherein the input of the green channel recovery network is a green sampling image separated by a Bayer image channel, the output is a reconstructed green channel image,
The green channel recovery network comprises a convolution operation, an attention module and a coder-decoder;
The flow structure of the green channel recovery network is as follows: the green sampled map as input → the first convolution layer Conv1 → the attention module → the second convolution layer Conv2 → the third convolution layer Conv3 → the fourth convolution layer Conv4 → the fifth convolution layer Conv5 → the sixth convolution layer Conv6 → the seventh convolution layer Conv7 → the eighth convolution layer Conv8 → output as the reconstructed green channel map; wherein the first convolution layer Conv1 has a convolution kernel of size 3×3, a step size of 3, and a total of 64 feature maps; the second convolution layer Conv2 through the seventh convolution layer Conv7 form the encoder-decoder, with convolution kernels all of size 3×3, step sizes all of 3, totals of 64 feature maps each, activated by ReLU; the output sizes of the second convolution layer Conv2, the third convolution layer Conv3, and the fourth convolution layer Conv4 are H×W, 1/2(H×W), and 1/4(H×W) respectively; the output sizes of the fifth convolution layer Conv5, the sixth convolution layer Conv6, and the seventh convolution layer Conv7 are 1/4(H×W), 1/2(H×W), and H×W respectively; the eighth convolution layer Conv8 has a convolution kernel of size 1×1, a step size of 1, and a total of 1 feature map;
The flow structure of the attention module is as follows: input → the first convolution layer Conv1 → the second convolution layer Conv2 → global average pooling → the third convolution layer Conv3 → the fourth convolution layer Conv4 → the outputs of the second convolution layer Conv2 and the fourth convolution layer Conv4 are multiplied element-wise → the initial input and the multiplication result are added element-wise; the first convolution layer Conv1 has a convolution kernel of size 3×3, a step size of 3, and a total of 64 feature maps, activated by PReLU; the second convolution layer Conv2 has a convolution kernel of size 3×3, a step size of 3, and a total of 64 feature maps; the third convolution layer Conv3 has a convolution kernel of size 1×1, a step size of 1, and a total of 64 feature maps, activated by PReLU; the fourth convolution layer Conv4 has a convolution kernel of size 1×1, a step size of 1, and a total of 64 feature maps, and the weights of the features are obtained through a Sigmoid function;
Step 2, constructing a feature guiding module, wherein the input of the feature guiding module is a reconstructed green channel diagram and the output of the encoder-decoder, and the output of the feature guiding module is a guiding diagram;
The flow structure of the feature guidance module is as follows: the output of the encoder-decoder → the 1st convolution layer Conv1; the output of the 1st convolution layer Conv1 and the green channel map are multiplied element-wise and passed through the 2nd convolution layer Conv2; the output of the encoder-decoder and this result are added element-wise → the guide map is generated; the 1st convolution layer Conv1 has a convolution kernel of size 1×1, a step size of 1, and a total of 64 feature maps; the 2nd convolution layer Conv2 has a convolution kernel of size 1×1, a step size of 1, and a total of 64 feature maps, activated by a Sigmoid function;
Step 3, constructing a red channel recovery network, wherein the input of the red channel recovery network is a red sampling graph and a guide graph, and the output of the red channel recovery network is a reconstructed red channel graph;
Step 4, constructing a blue channel recovery network, wherein the blue channel recovery network has a similar structural flow to the red channel recovery network, and the difference is that the blue channel recovery network is input into a blue sampling graph and a guide graph, and the blue channel recovery network is output into a reconstructed blue channel graph;
step 5, fusing the red channel diagram, the green channel diagram and the blue channel diagram obtained by reconstruction to obtain a reconstructed RGB diagram;
And 6, calculating the average absolute error between the reconstructed RGB image and the real image in the image pair according to the network model, and optimizing the network model by taking the L1 loss function minimization as a target.
2. The attention-module-based Bayer image recovery method according to claim 1, wherein the flow structure of the red channel recovery network is as follows: red sampled map → the Ist convolution layer Conv1 → attention module; the attention module output and the guide map are concatenated along dimension 1 → encoder-decoder → the IInd convolution layer Conv2 → reconstructed red channel map; the Ist convolution layer Conv1 has a convolution kernel of size 3×3, a step size of 3, and a total of 64 feature maps; the IInd convolution layer Conv2 has a convolution kernel of size 1×1, a step size of 1, and a total of 1 feature map.
3. The attention-module-based Bayer image recovery method according to claim 1, wherein the expression of the L1 loss function is as follows:

L1 = (1/N) Σ_{i=1}^{N} ‖X_i − X̂_i‖_1

wherein N is the number of images in each batch, X_i is the reconstructed RGB map obtained in step 5, and X̂_i is the corresponding real image;
in addition, this step introduces a perceptual loss function, whose expression is as follows:

Lp = (1/(C_j·H_j·W_j)) ‖ψ_j(I_G) − ψ_j(I_R)‖₂²

wherein C_j is the number of channels of the j-th layer feature, H_j is the height of the j-th layer feature, W_j is the width of the j-th layer feature, ψ_j(·) is the feature map obtained from the j-th convolution layer of the pretrained VGG19 model, I_G is the real image, and I_R is the reconstructed RGB map;
combining the two loss functions, the loss function expression of the whole Bayer image recovery model is:

L = λ1·L1 + λ2·Lp

wherein λ1 and λ2 are tuning parameters between the L1 loss function and the perceptual loss function.
CN202111043024.1A 2021-09-07 2021-09-07 Bayer image recovery method based on attention module Active CN113870126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111043024.1A CN113870126B (en) 2021-09-07 2021-09-07 Bayer image recovery method based on attention module

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111043024.1A CN113870126B (en) 2021-09-07 2021-09-07 Bayer image recovery method based on attention module

Publications (2)

Publication Number Publication Date
CN113870126A CN113870126A (en) 2021-12-31
CN113870126B true CN113870126B (en) 2024-04-19

Family

ID=78989865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111043024.1A Active CN113870126B (en) 2021-09-07 2021-09-07 Bayer image recovery method based on attention module

Country Status (1)

Country Link
CN (1) CN113870126B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009590A (en) * 2019-04-12 2019-07-12 北京理工大学 A kind of high-quality colour image demosaicing methods based on convolutional neural networks
WO2020206630A1 (en) * 2019-04-10 2020-10-15 深圳市大疆创新科技有限公司 Neural network for image restoration, and training and use method therefor
CN111861902A (en) * 2020-06-10 2020-10-30 天津大学 Deep learning-based Raw domain video denoising method
CN111915531A (en) * 2020-08-06 2020-11-10 温州大学 Multi-level feature fusion and attention-guided neural network image defogging method
WO2021003594A1 (en) * 2019-07-05 2021-01-14 Baidu.Com Times Technology (Beijing) Co., Ltd. Systems and methods for multispectral image demosaicking using deep panchromatic image guided residual interpolation

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Tang Man; Yang Bin. Demosaicing algorithm based on fast residual interpolation and convolutional neural network. Journal of University of South China (Natural Science Edition). 2019, (No. 06), full text. *
Wang Dongsheng; Yang Bin. Adaptive Bayer-pattern color image recovery algorithm based on gradient local consistency. Journal of University of South China (Natural Science Edition). 2019, (No. 02), full text. *
Dong Meng; Wu Ge; Cao Hongyu; Jing Wenbo; Yu Hongyang. Video super-resolution reconstruction based on attention residual convolutional network. Journal of Changchun University of Science and Technology (Natural Science Edition). 2020, (No. 01), full text. *


Similar Documents

Publication Publication Date Title
CN107123089B (en) Remote sensing image super-resolution reconstruction method and system based on depth convolution network
Liu et al. A spectral grouping and attention-driven residual dense network for hyperspectral image super-resolution
Khashabi et al. Joint demosaicing and denoising via learned nonparametric random fields
CN111127336B (en) Image signal processing method based on self-adaptive selection module
US20050169521A1 (en) Processing of mosaic digital images
Ratnasingam Deep camera: A fully convolutional neural network for image signal processing
CN111986084B (en) Multi-camera low-illumination image quality enhancement method based on multi-task fusion
Hu et al. Convolutional sparse coding for RGB+NIR imaging
CN113822830B (en) Multi-exposure image fusion method based on depth perception enhancement
Niu et al. Low cost edge sensing for high quality demosaicking
CN113554032B (en) Remote sensing image segmentation method based on multi-path parallel network of high perception
Lu et al. Progressive joint low-light enhancement and noise removal for raw images
Karadeniz et al. Burst photography for learning to enhance extremely dark images
CN112215753A (en) Image demosaicing enhancement method based on double-branch edge fidelity network
CN115564692A (en) Panchromatic-multispectral-hyperspectral integrated fusion method considering width difference
CN114266957A (en) Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation
CN115004220A (en) Neural network for raw low-light image enhancement
Guo et al. Joint demosaicking and denoising benefits from a two-stage training strategy
Vandewalle et al. Joint demosaicing and super-resolution imaging from a set of unregistered aliased images
CN113870126B (en) Bayer image recovery method based on attention module
US20220247889A1 (en) Raw to rgb image transformation
Paul et al. Maximum accurate medical image demosaicing using WRGB based Newton Gregory interpolation method
CN115841523A (en) Double-branch HDR video reconstruction algorithm based on Raw domain
CN115760638A (en) End-to-end deblurring super-resolution method based on deep learning
CN116309163A (en) Combined denoising and demosaicing method for black-and-white image guided color RAW image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240326

Address after: Art and Design Tribe 300-301, No. 3668 Nanhai Avenue, Nanshan District, Shenzhen City, Guangdong Province, 518051

Applicant after: Shenzhen Dianwei Culture Communication Co.,Ltd.

Country or region after: China

Address before: 509 Kangrui Times Square, Keyuan Business Building, 39 Huarong Road, Gaofeng Community, Dalang Street, Longhua District, Shenzhen, Guangdong Province, 518000

Applicant before: Shenzhen Litong Information Technology Co.,Ltd.

Country or region before: China

Effective date of registration: 20240326

Address after: 509 Kangrui Times Square, Keyuan Business Building, 39 Huarong Road, Gaofeng Community, Dalang Street, Longhua District, Shenzhen, Guangdong Province, 518000

Applicant after: Shenzhen Litong Information Technology Co.,Ltd.

Country or region after: China

Address before: 710048 Shaanxi province Xi'an Beilin District Jinhua Road No. 5

Applicant before: XI'AN University OF TECHNOLOGY

Country or region before: China

GR01 Patent grant