WO2023231535A1 - Black-and-white-image-guided joint denoising and demosaicing method for color RAW images - Google Patents
Black-and-white-image-guided joint denoising and demosaicing method for color RAW images
- Publication number
- WO2023231535A1 (application PCT/CN2023/084429)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- color
- raw
- denoising
- demosaicing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4015—Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Definitions
- the invention relates to the field of image processing, and in particular to a black-and-white image-guided joint denoising and demosaicing method for color RAW images.
- Image denoising and demosaicing are essential parts of the color camera image processing process.
- Existing methods generally first denoise the image in the RAW domain, and then use the demosaic algorithm to convert the image into the RGB domain.
- algorithm models based on neural networks have achieved better results in both denoising and demosaicing tasks.
- with the support of huge amounts of training data and model parameters, it is possible to use neural networks to model multiple degradation types simultaneously; a joint denoising and demosaicing network therefore models both processes at once, which prevents error accumulation while reusing image features.
- the present invention proposes a joint denoising and demosaicing method for color RAW images guided by black and white images.
- the present invention constructs an alignment-guided image generation module based on the attention mechanism and trains it with a perceptual loss function, thereby generating a high-quality alignment guide image for guiding RAW image denoising and demosaicing; at the same time, the present invention trains the jointly guided denoising and demosaicing network module with a structure-color loss function, so that the restoration results have better visual quality.
- the method of the present invention can handle non-aligned guided joint denoising and demosaicing scenes containing parallax, while tolerating noise contamination of the black-and-white image itself without degrading the guidance.
- based on the parallax attention mechanism, the influence of binocular parallax and of the black-and-white camera's own noise is reduced, and the black-and-white camera image information in the black-and-white-color binocular system is used to assist the joint denoising and demosaicing of the color camera.
- the present invention provides a method for joint denoising and demosaicing of color RAW images guided by black and white images, which utilizes black and white camera images with a high signal-to-noise ratio to guide color camera RAW images across parallax for joint denoising and demosaicing.
- the steps of the method are as follows:
- step S2: build a joint denoising and demosaicing model based on the dataset from step S1.
- the joint denoising and demosaicing model includes an alignment-guided image generation module and a guided denoising and demosaicing module.
- the alignment guidance image generation module computes the feature correlation between the black-and-white image and the RAW image along the parallax direction based on the parallax attention mechanism, and uses the correlated structural features to construct the alignment guide image features, thereby obtaining the alignment guide image; using the clean grayscale ground truth corresponding to the RAW image as supervision, the module is trained with the perceptual loss function until convergence, so that the generated alignment guide image is structurally as similar to the RAW image as possible.
- the guided denoising and demosaicing module extracts the features of the RAW image and of the alignment guide image separately; after the RAW image features are upsampled to full resolution, the two are fused by channel-wise concatenation, and feature decoding finally generates the clean RGB image corresponding to the RAW image, completing the guided denoising and guided demosaicing processes at the same time; the module is trained with the structure-color loss function until convergence, so that the denoising and demosaicing results combine accurate color reconstruction with sharp detail structures.
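The sketch below is a minimal PyTorch illustration of this fusion step. It assumes a packed 4-channel RGGB RAW input at half resolution and a single-channel guide image; the channel width, layer count, and the name `GuidedFusion` are illustrative assumptions, not the patent's reference network.

```python
# Minimal sketch of the guided fusion step (assumed layer sizes, not the patent's network).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedFusion(nn.Module):
    def __init__(self, c=64):
        super().__init__()
        # RAW branch: packed 4-channel RGGB input at half resolution (assumption).
        self.raw_enc = nn.Sequential(nn.Conv2d(4, c, 3, padding=1), nn.ReLU())
        # Guide branch: single-channel alignment guide image at full resolution.
        self.guide_enc = nn.Sequential(nn.Conv2d(1, c, 3, padding=1), nn.ReLU())
        # Decoder: fused features -> clean RGB image.
        self.decoder = nn.Sequential(nn.Conv2d(2 * c, c, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(c, 3, 3, padding=1))

    def forward(self, raw4, guide):
        f_raw = self.raw_enc(raw4)                                       # B x C x H/2 x W/2
        f_raw = F.interpolate(f_raw, scale_factor=2, mode='bilinear',
                              align_corners=False)                       # upsample RAW features
        fused = torch.cat([f_raw, self.guide_enc(guide)], dim=1)         # channel-wise concatenation
        return self.decoder(fused)                                       # clean RGB estimate
```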
- the alignment guidance image generation module, based on the parallax attention mechanism, fuses the non-aligned black-and-white camera image information with the color camera image information, and generates a high-quality guide image aligned with the color camera; the guided denoising and demosaicing module then uses this high-quality alignment guide image to guide the joint denoising and demosaicing of the color camera RAW images.
- step S3: denoise and demosaic the color RAW image based on the joint denoising and demosaicing model constructed in step S2.
- for the black-and-white-color binocular camera system, images are collected simultaneously in dark light.
- through the alignment-guided image generation module, the additional high signal-to-noise-ratio information provided by the black-and-white camera image is exploited while the impact of non-alignment factors is reduced.
- the denoising and demosaicing of the color camera RAW images is thereby guided to output a color low-light image with good visual quality.
- Step S1 is based on the existing color binocular image data set to generate a large number of noisy black-and-white-color binocular system simulation data sets in dark light scenes for training and testing of the joint denoising and demosaicing model.
- the construction method of the noisy black-and-white-color binocular system simulation data set is as follows:
- S11 divides the brightness of each pixel in the normal image collected by the binocular color camera by K to simulate a dark light scene.
- for the left view, color channel values are sampled according to the RGGB Bayer pattern to generate the RAW image; for the right view, the three color channel values are summed to simulate the full light intake of the black-and-white camera and generate the black-and-white image.
- S13 contaminates both the black-and-white image and the RAW image with Poisson-Gaussian noise having identical parameters.
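A minimal NumPy sketch of steps S11-S13 follows, under stated assumptions: the darkening factor K, the Gaussian read-noise level `sigma_read`, and the function name `simulate_pair` are illustrative, since the patent does not fix these values.

```python
# Minimal simulation sketch of S11-S13 (K and noise parameters are assumed, not specified).
import numpy as np

def simulate_pair(left_rgb, right_rgb, K=100.0, sigma_read=2.0, rng=None):
    """left_rgb, right_rgb: float arrays, H x W x 3; returns (noisy RAW, noisy mono)."""
    rng = rng if rng is not None else np.random.default_rng()
    left, right = left_rgb / K, right_rgb / K          # S11: divide brightness by K

    # S12, left view: RGGB Bayer sampling -> single-channel RAW mosaic.
    h, w, _ = left.shape
    raw = np.empty((h, w), dtype=left.dtype)
    raw[0::2, 0::2] = left[0::2, 0::2, 0]              # R
    raw[0::2, 1::2] = left[0::2, 1::2, 1]              # G
    raw[1::2, 0::2] = left[1::2, 0::2, 1]              # G
    raw[1::2, 1::2] = left[1::2, 1::2, 2]              # B

    # S12, right view: sum the three channels to mimic the mono camera's full light intake.
    mono = right.sum(axis=2)

    # S13: identical Poisson-Gaussian contamination on both images.
    def noisy(x):
        shot = rng.poisson(np.clip(x, 0, None)).astype(x.dtype)   # signal-dependent part
        return shot + rng.normal(0.0, sigma_read, size=x.shape)   # additive Gaussian part
    return noisy(raw), noisy(mono)
```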
- the construction of the joint denoising and demosaicing model in step S2 includes two parts. One is to build an alignment-guided image generation module, and the other is to build a guided denoising and demosaicing module:
- the alignment-guided image generation module solves the feature correlation between black and white images and RAW images in the parallax direction based on the parallax attention mechanism, and uses relevant structural features to construct alignment-guided image features.
- the specific method is as follows:
- the RAW image is demosaiced based on the traditional demosaicing algorithm, and converted into a single-channel grayscale image by adding three color channels.
- the RAW image feature F raw and the black and white image feature F mono from the left and right views are extracted through the same feature extractor to reduce noise interference and enhance structural information in the feature space.
- the correlation weight matrix M is solved in the disparity direction for the F raw and F mono features, and the row-wise correlation is used to construct the alignment guidance image feature F_guide = M ⊗ f(F_mono), where f represents a convolution operation, ⊗ denotes row-wise matrix multiplication, and F_guide is the alignment guidance image feature, obtained by weighting and fusing the feature information from the guiding black-and-white image according to the disparity correlation.
- finally, the feature decoder decodes these features into a high-quality aligned guide image consistent with the structure of the target color image.
- the method to solve the correlation weight matrix M is: for the RAW image feature F raw and the black and white image feature F mono from the left and right views, the tensor dimensions of both are H×W×C, where H, W, and C respectively represent the height, width, and number of channels of the feature tensor.
- the left view features are passed through a 3 ⁇ 3 convolution layer to generate a query tensor, while the right view features are passed through two different 3 ⁇ 3 convolution layers to generate a key tensor and a value tensor.
- the dimensions of all three are H ⁇ W ⁇ C. Rearrange the dimensions of the key tensor and convert the dimensions into H ⁇ C ⁇ W.
- Perform matrix multiplication of the query tensor and the rearranged key tensor to obtain a correlation calculation result matrix with dimensions H ⁇ W ⁇ W, and obtain the correlation weight matrix M in the row direction through SoftMax operation.
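The computation of M maps directly onto batched matrix products. Below is a minimal PyTorch sketch following the stated shapes (3×3 convolutions, an H×W×W correlation matrix, SoftMax along each row); the class and variable names are assumptions for illustration, not the patent's reference code.

```python
# Minimal parallax attention sketch: query from RAW features, key/value from mono features.
import torch
import torch.nn as nn

class ParallaxAttention(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.to_q = nn.Conv2d(c, c, 3, padding=1)   # query from left (RAW) features
        self.to_k = nn.Conv2d(c, c, 3, padding=1)   # key from right (mono) features
        self.to_v = nn.Conv2d(c, c, 3, padding=1)   # value from right (mono) features

    def forward(self, f_raw, f_mono):
        # f_raw, f_mono: B x C x H x W
        q = self.to_q(f_raw).permute(0, 2, 3, 1)     # B x H x W x C
        k = self.to_k(f_mono).permute(0, 2, 1, 3)    # B x H x C x W  (rearranged key)
        v = self.to_v(f_mono).permute(0, 2, 3, 1)    # B x H x W x C
        m = torch.softmax(q @ k, dim=-1)             # B x H x W x W: row-wise weights M
        f_guide = (m @ v).permute(0, 3, 1, 2)        # weighted fusion along each row
        return f_guide, m
```

Because the SoftMax runs over the last (width) dimension, each fused feature is a convex combination of same-row mono-view features, which is exactly the row-wise weighted fusion described above.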
- the perceptual loss function is used as the loss function when training the alignment-guided image generation module.
- this perceptual loss function computes the distance between the output result and the ideal value in the VGG feature space; by optimizing this distance during training, the generated alignment guidance image focuses on structural reconstruction, which makes the training more efficient.
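A hedged sketch of such a perceptual loss follows: an L1 distance between features of a truncated pre-trained VGG. The VGG-19 variant, the truncation point, and the L1 norm are common choices assumed here; the text does not pin them down.

```python
# Minimal perceptual-loss sketch (assumed VGG-19 feature stack and L1 distance).
import torch
import torch.nn.functional as F
import torchvision.models as models

# Early layers of a pre-trained VGG-19, frozen; ImageNet input normalization omitted for brevity.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(g_hat, g):
    """g_hat: generated alignment-guide image, g: clean grayscale ground truth (B x 1 x H x W)."""
    # VGG expects 3 channels, so the grayscale images are repeated across channels.
    g_hat3, g3 = g_hat.expand(-1, 3, -1, -1), g.expand(-1, 3, -1, -1)
    return F.l1_loss(vgg(g_hat3), vgg(g3))
```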
- the guided denoising and demosaicing module uses the generated high-quality aligned guided images to replace the non-aligned noisy black and white camera images, and guides the joint denoising and demosaicing process of the color camera RAW images.
- the guided denoising and demosaic module respectively extracts the features of noisy RAW images and the features of high-quality aligned guided images, and performs feature fusion in the form of feature channel splicing.
- the guided denoising and demosaicing module is trained based on the structure-color loss function, which directly decodes and generates a clean RGB image corresponding to the RAW image, and simultaneously completes the guided denoising and guided demosaicing processes.
- the neural network mainly learns structural features from alignment-guided images, and mainly learns color features from RAW images, and uses the structure-color loss function as the loss function to guide the training of the joint denoising and demosaicing module.
- the structure-color loss function is defined as L_pc = ‖F_VGG(Ŷ) - F_VGG(Y)‖ + ‖F_gaussian(Ŷ) - F_gaussian(Y)‖, where L_pc is the structure-color joint loss function, Ŷ is the joint denoising and demosaicing result, and Y is the reference ground truth, set to the clean RGB image corresponding to the noisy RAW image.
- the first term of the loss function is the structural loss
- F_VGG(·) means that image features are extracted through the pre-trained VGG model, and the structural information of the output result is constrained by the VGG-space features
- the second term of the loss function is the color loss
- F_gaussian(·) indicates that the low-frequency information of the image is extracted through Gaussian filtering
- the color loss function calculates the loss in the low-frequency space of the image through the F_gaussian(·) function.
- the denoising and demosaicing results are constrained in both structure and color, so that the structure of the alignment-guided image is fully transferred while the color accuracy of the output result is maintained.
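Combining the two terms gives the sketch below; it reuses the frozen `vgg` extractor from the previous sketch. The Gaussian kernel size and sigma and the balance weight `lam` are assumptions; the text only specifies VGG features for the structural term and Gaussian low-pass filtering for the color term.

```python
# Minimal structure-color loss sketch (assumed kernel size, sigma, and term weighting).
import torch
import torch.nn.functional as F

def gaussian_blur(x, k=21, sigma=5.0):
    """Depthwise Gaussian low-pass filter extracting the low-frequency (color/brightness) band."""
    ax = torch.arange(k, dtype=x.dtype, device=x.device) - (k - 1) / 2
    g1d = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    kernel = (g1d[:, None] * g1d[None, :]) / g1d.sum() ** 2       # normalized 2-D kernel
    kernel = kernel.expand(x.shape[1], 1, k, k)                   # one kernel per channel
    return F.conv2d(x, kernel, padding=k // 2, groups=x.shape[1])

def structure_color_loss(y_hat, y, lam=1.0):
    """y_hat: denoised/demosaiced RGB output, y: clean RGB ground truth (B x 3 x H x W)."""
    structure = F.l1_loss(vgg(y_hat), vgg(y))                     # VGG-space structural term
    color = F.l1_loss(gaussian_blur(y_hat), gaussian_blur(y))     # low-frequency color term
    return structure + lam * color
```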
- the beneficial effects of the present invention are: based on the parallax attention mechanism, the non-alignment factors in the guidance process are eliminated, so that the guidance information of the black-and-white camera image can be accurately delivered, across parallax, to the denoising and demosaicing of the color camera; at the same time, the design and combination of the perceptual loss function and the structure-color loss function allow accurate high-frequency detail structures to be reconstructed after denoising, while the demosaicing process achieves accurate color interpolation.
- the present invention has strong application value for binocular imaging equipment in dark light scenes.
- Figure 1 is a flow chart of the method of the present invention.
- Figure 2 is a work flow chart of the alignment guidance image generation module of the present invention.
- Figure 3 is a schematic structural diagram of the feature extractor of the present invention.
- Figure 4 is a schematic diagram of the network structure of the present invention that fuses left and right view features based on the disparity attention mechanism.
- Figure 5 is a schematic diagram of the principle of alignment-guided image generation based on the parallax attention mechanism.
- (a) is the target image visualization result, which is the output image of the left camera;
- (b) is the black-and-white image, which is the output image of the right camera; for the three pixel rows L1, L2, and L3 marked in (a) and (b), the disparity attention results are shown in (c), (d), and (e).
- Figure 6 is a work flow chart of the denoising and demosaicing module of the present invention.
- Figure 7 is a network architecture diagram of the guided denoising and demosaic module of the present invention.
- FIG. 8 is an illustration of a specific embodiment of the present invention in which the alignment guide image is generated based on the black and white image and the guided denoising and demosaicing process is performed.
- (a) is the RAW simulation image under low light of the color camera
- (b) is the visualized image obtained after (a) preliminary demosaicing and brightness stretching
- (c) is the input black and white camera image
- (d) is the generated alignment guide image
- (e) is the generated denoising and demosaicing result
- (f) is the reference target image.
- a black-and-white-color binocular camera simulation data set is constructed, specifically:
- the network architecture proposed by the method of the present invention is shown in Figure 1; it is mainly divided into two modules: "alignment-guided image generation" and "guided denoising and demosaicing".
- the alignment-guided image generation module is shown in Figure 2. It is based on the parallax attention mechanism to eliminate non-alignment factors between black and white images and RAW images and generate aligned guide images. At the same time, the module is trained separately with the true value of the black and white image of the RAW image as the supervision information and the perceptual loss as the loss function until convergence.
- the same feature encoder is used for feature encoding.
- the specific structure of the feature encoder is shown in Figure 3.
- the RAW image is demosaiced with the traditional DDFAPD demosaicing algorithm, and converted into a single-channel grayscale image by summing the three color channels.
- the feature encoding process has two important functions: through feature encoding, the noise levels of the input RAW images and black-and-white images are suppressed, reducing interference on structural similarity judgments; at the same time, the target RAW image and the guide black-and-white image are converted into the same feature space, preserving structural correlation between the two.
- the RAW image feature F raw and the black and white image feature F mono from the left and right views are extracted through the same feature extractor.
- the left and right view features are fused based on the disparity attention mechanism.
- the specific structure is shown in Figure 4. Since non-alignment factors caused by disparity generally appear in the same row, the correlation of left and right view features on the same row is calculated based on the disparity attention mechanism.
- the alignment guidance image feature is composed of a linear combination of black and white image feature values on the corresponding row based on correlation.
- the solution method of the correlation weight matrix M is: for the RAW image feature F raw and the black and white image feature F mono from the left and right views, the tensor dimensions of both are H×W×C, where H, W, and C respectively represent the height, width, and number of channels of the feature tensor.
- the left view features are passed through a 3 ⁇ 3 convolution layer to generate a query tensor, while the right view features are passed through two different 3 ⁇ 3 convolution layers to generate a key tensor and a value tensor.
- the dimensions of all three are H ⁇ W ⁇ C. Rearrange the dimensions of the key tensor and convert the dimensions into H ⁇ C ⁇ W.
- Perform matrix multiplication of the query tensor and the rearranged key tensor to obtain a correlation calculation result matrix with dimensions H ⁇ W ⁇ W, and obtain the correlation weight matrix M in the row direction through SoftMax operation.
- Figure 5(a) is the visualization result of the left camera's target image, and Figure 5(b) is the black-and-white image of the right camera; for the three pixel rows L1, L2, and L3 marked in them, the feature correlation results are shown in Figure 5(c), Figure 5(d), and Figure 5(e).
- the feature correlation matrix forms an obvious peak along the diagonal, showing that the misaligned structures caused by disparity are well captured by the disparity attention mechanism.
- decoding generates an alignment guide image based on the reconstructed features.
- the algorithm uses the true grayscale image corresponding to the RAW image as training supervision, and uses perceptual loss as the loss function to train the module.
- the perceptual loss function is defined as L_p = ‖F_VGG(Ĝ) - F_VGG(G)‖, where L_p represents the perceptual loss function; Ĝ represents the output result of the alignment-guided image generation module; G represents the reference ground truth used as supervision, set to the clean grayscale image corresponding to the RAW image; and F_VGG(·) represents the extraction of image features through the pre-trained VGG model.
- by computing distances in the feature space of the pre-trained VGG network, the perceptual loss function optimizes the output of the alignment-guided image generation module, making it more structurally consistent and better in human visual perception.
- the guided denoising and demosaicing module is shown in Figure 6: it extracts RAW image features and alignment guide image features separately, upsamples the RAW image features to full resolution, fuses the two by feature concatenation, and finally decodes the features to output the denoising and demosaicing result; the module is trained with the structure-color loss function so that the results combine accurate color reconstruction with sharp detail structures.
- the specific network structure of this module is shown in Figure 7.
- the structure-color loss function is defined as L_pc = ‖F_VGG(Ŷ) - F_VGG(Y)‖ + ‖F_gaussian(Ŷ) - F_gaussian(Y)‖
- L_pc represents the structure-color loss function
- the first term is the perceptual loss function
- the second term is the color loss function.
- F_VGG(·) represents the extraction of image features through the pre-trained VGG model, and the perceptual loss term computes the loss in the deep feature space of the image through the F_VGG(·) function.
- F_gaussian(·) indicates that the low-frequency information of the image is extracted through Gaussian filtering, and the color loss term computes the loss in the low-frequency space of the image through the F_gaussian(·) function.
- the joint loss function constrains the various structures of the image content in the VGG feature space, and constrains the overall brightness of the image content in the image low-frequency space.
- the combination of the two helps to fully transfer all kinds of structures from the guide image into the target image during training, while preventing the brightness of the guide image from influencing color reconstruction, thereby obtaining guided denoising and demosaicing results with clear structure and accurate color.
- Figure 8 shows the operation result of a specific embodiment based on the method of the present invention, wherein (d) in Figure 8 is the alignment guidance image generated by the method of the present invention, which has less noise, clear structure, and is consistent with the structure of the target color image.
- Figure 8(e) is the denoising and demosaic result generated by the method of the present invention. Compared with the reference target image of Figure 8(f), the structure and color are relatively consistent.
- the present invention proposes a new binocular camera-guided denoising and demosaicing method, which generates high-quality alignment-guided images in non-aligned scenes containing parallax, and obtains denoising and demosaicing results with good visual perception based on the structure-color loss function.
- the imaging advantage of the black-and-white camera, which is not interfered with by color filters, is used to assist the demosaicing process of the color camera, and the black-and-white camera's advantage of greater light intake during simultaneous binocular imaging is used to assist the denoising process of the color camera.
Abstract
The invention discloses a black-and-white-image-guided joint denoising and demosaicing method for color RAW images. The method is as follows: construct a black-and-white-color binocular camera simulation image dataset for training and testing the network modules; exploiting the structural correlation between black-and-white and color images, build an alignment-guide image generation module based on a parallax attention mechanism, and train it with a perceptual loss function, supervised by the clean grayscale ground truth corresponding to the RAW image, so as to generate high-quality alignment-guide images; use the generated alignment-guide image to guide the joint denoising and demosaicing of the color camera's RAW image; and train the guided denoising and demosaicing module with a structure-color loss function, so that the structure of the guide image is transferred accurately while the colors of the denoising and demosaicing result remain accurate. The invention is of strong practical value for binocular imaging devices in dark-light scenes.
Description
The invention relates to the field of image processing, and in particular to a black-and-white-image-guided joint denoising and demosaicing method for color RAW images.
Image denoising and demosaicing are indispensable parts of the color camera image processing pipeline. Existing methods generally first denoise the image in the RAW domain, then convert it to the RGB domain with a demosaicing algorithm. With the development of deep learning, neural-network-based models have achieved better results on both denoising and demosaicing tasks. Supported by massive training data and model parameters, it has become possible to model multiple degradation types simultaneously with a neural network; a joint denoising and demosaicing network therefore models both processes at once, preventing error accumulation while reusing image features. For example, the paper "Beyond Joint Demosaicking and Denoising: An Image Processing Pipeline for a Pixel-bin Image Sensor" at the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops introduced attention modules, adversarial training, and other mechanisms to improve joint denoising and demosaicing. However, although neural networks can learn the restoration process well, single-image restoration algorithms hit performance bottlenecks in noise removal and detail recovery in low-SNR dark scenes.
On the other hand, with the development of multi-sensor devices, multi-image fusion has become an important way to break through the bottleneck of single-image algorithms. Black-and-white-color binocular camera systems are widely used in smartphones and similar devices. Compared with a color camera, a black-and-white camera's imaging is not affected by color filters and admits more light, so it has a clear advantage in imaging quality. Using the black-and-white camera to guide the color camera's denoising and demosaicing can fully exploit the combined strengths of multiple sensors. However, existing guided-restoration neural network methods cannot directly handle non-aligned scenes. The paper "Robust Joint Image Reconstruction from Color and Monochrome Cameras" at the 2019 British Machine Vision Conference first registers the binocular images at the pixel level based on optical flow to mitigate the effect of parallax, then iteratively solves for the restoration result with traditional optimization methods. That method is computationally expensive, and accurate registration is difficult in low-SNR scenes, which further degrades the subsequent joint denoising and demosaicing.
Summary of the Invention
Starting from the imaging advantages of the black-and-white camera, the invention proposes a black-and-white-image-guided joint denoising and demosaicing method for color RAW images. To eliminate the non-alignment between the two cameras caused by parallax, the invention builds an alignment-guide image generation module based on an attention mechanism and trains it with a perceptual loss function, thereby generating high-quality alignment-guide images for guiding the denoising and demosaicing of the RAW image. Meanwhile, the invention trains the jointly guided denoising and demosaicing network module with a structure-color loss function so that the restored results have better visual quality. By generating high-quality alignment-guide images, the method can handle non-aligned guided joint denoising and demosaicing scenes that contain parallax, and tolerates noise contamination of the black-and-white image itself without degrading the guidance. The parallax attention mechanism reduces the influence of binocular parallax and of the black-and-white camera's own noise, and the black-and-white camera's image information in the black-and-white-color binocular system assists the color camera's joint denoising and demosaicing.
The invention provides a black-and-white-image-guided joint denoising and demosaicing method for color RAW images, which uses high-SNR black-and-white camera images to guide the color camera's RAW images, across parallax, through joint denoising and demosaicing. The steps of the method are as follows:
S1: Construct a noisy black-and-white-color binocular camera simulation dataset for dark-light scenes.
S2: Build a joint denoising and demosaicing model based on the dataset from step S1.
The joint denoising and demosaicing model comprises an alignment-guide image generation module and a guided denoising and demosaicing module.
S21: Build the alignment-guide image generation module. Based on the parallax attention mechanism, this module computes the feature correlation between the black-and-white image and the RAW image along the parallax direction and uses the correlated structural features to construct the alignment-guide image features, thereby obtaining the alignment-guide image. Using the clean grayscale ground truth corresponding to the RAW image as supervision, the module is trained with the perceptual loss function until convergence, so that the alignment-guide image it generates is structurally as close to the RAW image as possible.
S22: Build the guided denoising and demosaicing module. This module extracts the features of the RAW image and of the alignment-guide image separately; after the RAW image features are upsampled to full resolution, the two are fused by channel-wise concatenation, and feature decoding finally generates the clean RGB image corresponding to the RAW image, completing guided denoising and guided demosaicing at the same time. The module is trained with the structure-color loss function until convergence, so that the denoising and demosaicing result combines accurate color reconstruction with sharp detail structure.
Based on the parallax attention mechanism, the alignment-guide image generation module fuses the non-aligned black-and-white camera image information with the color camera image information and generates a high-quality guide image aligned with the color camera; the guided denoising and demosaicing module uses this high-quality alignment-guide image to guide the joint denoising and demosaicing of the color camera's RAW image.
S3: Denoise and demosaic the color RAW image with the joint denoising and demosaicing model built in step S2. For a black-and-white-color binocular camera system, images are captured simultaneously in dark light. Through the alignment-guide image generation module, the additional high-SNR information provided by the black-and-white camera image is exploited while the influence of non-alignment factors is reduced, and the denoising and demosaicing of the color camera's RAW image is guided to output a visually pleasing color low-light image.
Step S1 generates, from an existing color binocular image dataset, a large noisy black-and-white-color binocular simulation dataset for dark-light scenes, used for training and testing the joint denoising and demosaicing model. The noisy dataset is constructed as follows:
S11: Divide the brightness of every pixel in the normal images captured by the binocular color camera by K to simulate a dark-light scene.
S12: For the left view, sample color channel values according to the RGGB Bayer pattern to generate the RAW image; for the right view, sum the three color channel values to simulate the full light intake of a black-and-white camera and generate the black-and-white image.
S13: Contaminate both the black-and-white image and the RAW image with Poisson-Gaussian noise having identical parameters.
The construction of the joint denoising and demosaicing model in step S2 has two parts: building the alignment-guide image generation module, and building the guided denoising and demosaicing module.
Based on the parallax attention mechanism, the alignment-guide image generation module computes the feature correlation between the black-and-white image and the RAW image along the parallax direction and uses the correlated structural features to construct the alignment-guide image features, as follows:
First, the RAW image is demosaiced with a traditional demosaicing algorithm and converted into a single-channel grayscale image by summing the three color channels. Second, the RAW image feature F_raw and the black-and-white image feature F_mono from the left and right views are extracted with the same feature extractor, which suppresses noise and enhances structural information in the feature space. Then, based on the parallax attention mechanism, the correlation weight matrix M is solved for the F_raw and F_mono features along the parallax direction, and the row-wise correlation is used to construct the alignment-guide image feature F_guide = M ⊗ f(F_mono), where f denotes a convolution operation, ⊗ denotes row-wise matrix multiplication, and F_guide is the alignment-guide image feature, obtained by weighting and fusing the feature information from the guiding black-and-white image according to the parallax correlation. Finally, a feature decoder decodes the features into a high-quality alignment-guide image structurally consistent with the target color image.
The correlation weight matrix M is solved as follows: for the RAW image feature F_raw and the black-and-white image feature F_mono from the left and right views, both tensors have dimensions H×W×C, where H, W, and C denote the height, width, and number of channels of the feature tensor. The left-view features are passed through a 3×3 convolution layer to generate the query tensor, while the right-view features are passed through two different 3×3 convolution layers to generate the key tensor and the value tensor; all three have dimensions H×W×C. The key tensor's dimensions are rearranged into H×C×W. The query tensor is matrix-multiplied with the rearranged key tensor to obtain a correlation result matrix of dimensions H×W×W, and a SoftMax operation yields the row-wise correlation weight matrix M.
The perceptual loss function is used as the loss when training the alignment-guide image generation module. It is defined as L_p = ‖F_VGG(Ĝ) - F_VGG(G)‖, where L_p denotes the perceptual loss function; Ĝ denotes the output of the alignment-guide image generation module; G denotes the reference ground truth used as supervision, set to the clean grayscale image corresponding to the RAW image; and F_VGG(·) denotes image feature extraction with a pre-trained VGG model. The perceptual loss measures the distance between the output and the ideal value in the VGG feature space; optimizing this distance during training makes the generated alignment-guide image focus on structural reconstruction, which is more efficient.
The guided denoising and demosaicing module uses the generated high-quality alignment-guide image, in place of the non-aligned noisy black-and-white camera image, to guide the joint denoising and demosaicing of the color camera's RAW image. It extracts the features of the noisy RAW image and of the high-quality alignment-guide image separately and fuses them by channel-wise concatenation. Trained with the structure-color loss function, it directly decodes the clean RGB image corresponding to the RAW image, completing guided denoising and guided demosaicing at the same time.
The neural network learns structural features mainly from the alignment-guide image and color features mainly from the RAW image, and the structure-color loss function is used as the loss when training the guided joint denoising and demosaicing module. The structure-color loss function is defined as L_pc = ‖F_VGG(Ŷ) - F_VGG(Y)‖ + ‖F_gaussian(Ŷ) - F_gaussian(Y)‖, where L_pc is the joint structure-color loss function; Ŷ is the joint denoising and demosaicing result; and Y is the reference ground truth, set to the clean RGB image corresponding to the noisy RAW image. The first term of the loss is the structural loss: F_VGG(·) denotes image feature extraction with the pre-trained VGG model, and the VGG-space features constrain the structural information of the output. The second term is the color loss: F_gaussian(·) denotes extraction of the image's low-frequency information by Gaussian filtering, through which the color loss is computed in the image's low-frequency space. Constraining the denoising and demosaicing result in both structure and color lets the structure of the alignment-guide image transfer fully while the colors of the output stay accurate.
The beneficial effects of the invention are as follows: based on the parallax attention mechanism, the non-alignment factors in the guidance process are eliminated, so that the guidance information of the black-and-white camera image can be delivered accurately, across parallax, to the denoising and demosaicing of the color camera; at the same time, the design and combination of the perceptual loss function and the structure-color loss function reconstruct accurate high-frequency detail structures after denoising, while the demosaicing process achieves accurate color interpolation. The invention is of strong practical value for binocular imaging devices in dark-light scenes.
Figure 1 is a flow chart of the method of the invention.
Figure 2 is a work flow chart of the alignment-guide image generation module of the invention.
Figure 3 is a schematic diagram of the structure of the feature extractor of the invention.
Figure 4 is a schematic diagram of the network that fuses left- and right-view features based on the parallax attention mechanism.
Figure 5 is a schematic illustration of the principle of alignment-guide image generation based on the parallax attention mechanism, where (a) is the visualized target image, output by the left camera; (b) is the black-and-white image, output by the right camera; and for the three pixel rows L1, L2, and L3 marked in (a) and (b), the parallax attention results are shown in (c), (d), and (e).
Figure 6 is a work flow chart of the guided denoising and demosaicing module of the invention.
Figure 7 is a network architecture diagram of the guided denoising and demosaicing module of the invention.
Figure 8 illustrates a specific embodiment in which the alignment-guide image is generated from the black-and-white image and the guided denoising and demosaicing is performed, where (a) is the simulated low-light RAW image of the color camera; (b) is the visualization of (a) after preliminary demosaicing and brightness stretching; (c) is the input black-and-white camera image; (d) is the generated alignment-guide image; (e) is the generated denoising and demosaicing result; and (f) is the reference target image.
The invention is further described below with reference to a specific embodiment and the drawings.
Embodiment
First, based on a color binocular image dataset, the black-and-white-color binocular camera simulation dataset is constructed, specifically:
The brightness of the normal images captured by the binocular color camera is divided by K to simulate a dark-light scene (the maximum pixel value of a dark-light image is below 40). Then, for the left view, color channel values are sampled according to the RGGB Bayer pattern to generate the Bayer RAW image; for the right view, the three color channel values are summed to simulate the full light intake of a black-and-white camera and generate the black-and-white image. Finally, since in real scenes both the black-and-white and the color camera inevitably suffer real noise, Poisson-Gaussian noise with identical parameters is added to both. This yields noisy RAW/black-and-white binocular image pairs for dark-light scenes, used to train the subsequent networks.
The network architecture proposed by the method of the invention is shown in Figure 1; it is mainly divided into two modules, "alignment-guided image generation" and "guided denoising and demosaicing".
The alignment-guide image generation module is shown in Figure 2. Based on the parallax attention mechanism, it eliminates the non-alignment factors between the black-and-white image and the RAW image and generates an aligned guide image. The module is trained separately to convergence, with the grayscale ground truth of the RAW image as supervision and the perceptual loss as the loss function.
The alignment-guide image generation process based on the parallax attention mechanism is described in detail as follows:
First, the left and right camera images are encoded with the same feature encoder, whose structure is shown in Figure 3. The RAW image is first demosaiced with the traditional DDFAPD demosaicing algorithm and converted into a single-channel grayscale image by summing the three color channels. The feature encoding serves two important purposes: it suppresses the noise levels of the input RAW and black-and-white images, reducing interference with structural-similarity judgments; and it maps the target RAW image and the guiding black-and-white image into the same feature space, preserving the structural correlation between the two.
Second, the RAW image feature F_raw and the black-and-white image feature F_mono from the left and right views are extracted with the same feature extractor.
Then, the left- and right-view features are fused based on the parallax attention mechanism; the specific structure is shown in Figure 4. Since the non-alignment caused by parallax generally appears within the same row, the correlation of the left- and right-view features along the same row is computed with the parallax attention mechanism. The correlation weight matrix M is solved for the F_raw and F_mono features along the parallax direction, and the row-wise correlation is used to construct the alignment-guide image feature F_guide = M ⊗ f(F_mono), where f denotes a convolution operation and F_guide is the alignment-guide image feature; finally, a feature decoder decodes the features into an alignment-guide image structurally consistent with the target color image. The alignment-guide image feature is a correlation-based linear combination of the black-and-white image feature values on the corresponding row.
The correlation weight matrix M is solved as follows: for the RAW image feature F_raw and the black-and-white image feature F_mono from the left and right views, both tensors have dimensions H×W×C, where H, W, and C denote the height, width, and number of channels of the feature tensor. The left-view features are passed through a 3×3 convolution layer to generate the query tensor, while the right-view features are passed through two different 3×3 convolution layers to generate the key tensor and the value tensor; all three have dimensions H×W×C. The key tensor's dimensions are rearranged into H×C×W. The query tensor is matrix-multiplied with the rearranged key tensor to obtain a correlation result matrix of dimensions H×W×W, and a SoftMax operation yields the row-wise correlation weight matrix M.
The visualized feature correlations of the parallax attention mechanism are shown in Figure 5. Figure 5(a) is the visualized target image of the left camera and Figure 5(b) is the black-and-white image of the right camera; for the three pixel rows L1, L2, and L3 marked in Figures 5(a) and 5(b), the feature correlation results are shown in Figures 5(c), 5(d), and 5(e). The feature correlation matrices form obvious peaks along the diagonal, showing that the misaligned structures caused by parallax are well captured by the parallax attention mechanism.
Finally, the alignment-guide image is decoded from the reconstructed features. To generate an alignment-guide image with little noise and clear structure, the algorithm uses the ground-truth grayscale image corresponding to the RAW image as training supervision and the perceptual loss as the loss function. The perceptual loss function is defined as L_p = ‖F_VGG(Ĝ) - F_VGG(G)‖, where L_p denotes the perceptual loss function; Ĝ denotes the output of the alignment-guide image generation module; G denotes the reference ground truth used as supervision, set to the clean grayscale image corresponding to the RAW image; and F_VGG(·) denotes image feature extraction with a pre-trained VGG model. By computing distances in the feature space of the pre-trained VGG network, the perceptual loss optimizes the output of the alignment-guide image generation module, making it more structurally consistent and better in human visual perception.
The guided denoising and demosaicing module is shown in Figure 6. It extracts the RAW image features and the alignment-guide image features separately, upsamples the RAW image features to full resolution, fuses the two by feature concatenation, and finally decodes the features to output the denoising and demosaicing result. The module is trained with the structure-color loss function so that the result combines accurate color reconstruction with sharp detail structure. The specific network structure of the module is shown in Figure 7.
The structure-color loss function is defined as L_pc = ‖F_VGG(Ŷ) - F_VGG(Y)‖ + ‖F_gaussian(Ŷ) - F_gaussian(Y)‖, where L_pc denotes the structure-color loss function; the first term is the perceptual loss and the second term is the color loss. F_VGG(·) denotes image feature extraction with the pre-trained VGG model, and the perceptual term computes the loss in the image's deep feature space through F_VGG(·); F_gaussian(·) denotes extraction of the image's low-frequency information by Gaussian filtering, and the color term computes the loss in the image's low-frequency space through F_gaussian(·). The joint loss constrains the various structures of the image content in the VGG feature space and the overall brightness of the image content in the low-frequency space; the combination of the two helps to fully transfer all kinds of structures from the guide image into the target image during training, while preventing the brightness of the guide image from influencing color reconstruction, thereby yielding guided denoising and demosaicing results with clear structure and accurate color.
Figure 8 shows the result of a specific embodiment of the method. Figure 8(d) is the alignment-guide image generated by the method: it has little noise and clear structure, and is structurally consistent with the target color image. Figure 8(e) is the denoising and demosaicing result generated by the method; compared with the reference target image in Figure 8(f), its structure and colors are closely consistent.
The invention proposes a new binocular-camera-guided denoising and demosaicing method that generates high-quality alignment-guide images in non-aligned scenes containing parallax and, based on the structure-color loss function, obtains denoising and demosaicing results with good visual quality. When the method is deployed in a black-and-white-color binocular system, the black-and-white camera's imaging advantage of freedom from color-filter interference assists the color camera's demosaicing, and its advantage of greater light intake during simultaneous binocular imaging assists the color camera's denoising.
Claims (6)
- 1. A black-and-white-image-guided joint denoising and demosaicing method for color RAW images, characterized in that the output image of a black-and-white camera is used to guide, across parallax, the joint denoising and demosaicing of a color camera's RAW image, with the following specific steps: S1: dataset construction: construct a noisy black-and-white-color binocular camera simulation dataset for dark-light scenes; S2: build a joint denoising and demosaicing model based on the dataset from step S1, the model comprising an alignment-guide image generation module and a guided denoising and demosaicing module; first, build the alignment-guide image generation module, which, based on a parallax attention mechanism, computes the feature correlation between the black-and-white image and the RAW image along the parallax direction and uses the correlated structural features to construct the alignment-guide image features, thereby obtaining the alignment-guide image; with the clean grayscale ground truth corresponding to the RAW image as supervision, train the alignment-guide image generation module with a perceptual loss function until convergence; second, build the guided denoising and demosaicing module, which extracts the features of the RAW image and of the alignment-guide image separately, fuses them by channel-wise concatenation, and finally decodes the features into the clean RGB image corresponding to the RAW image, completing guided denoising and guided demosaicing at the same time; train the guided denoising and demosaicing module with a structure-color loss function until convergence; S3: denoise and demosaic the color RAW image with the joint denoising and demosaicing model built in step S2.
- 2. The black-and-white-image-guided joint denoising and demosaicing method for color RAW images according to claim 1, characterized in that step S1 specifically comprises the following steps: 1.1 dividing the brightness of every pixel in the normal images captured by the binocular color camera by K to simulate a dark-light scene; 1.2 for the left view, sampling color channel values according to the RGGB Bayer pattern to generate the RAW image, and for the right view, summing the three color channel values to simulate the full light intake of a black-and-white camera and generate the black-and-white image; 1.3 contaminating both the black-and-white image and the RAW image with Poisson-Gaussian noise having identical parameters.
- 3. The black-and-white-image-guided joint denoising and demosaicing method for color RAW images according to claim 1, characterized in that in step S2, the alignment-guide image generation module computes, based on the parallax attention mechanism, the feature correlation between the black-and-white image and the RAW image along the parallax direction and uses the correlated structural features to construct the alignment-guide image features, specifically: first, the RAW image is demosaiced with a traditional demosaicing algorithm and converted into a single-channel grayscale image by summing the three color channels; second, the RAW image feature F_raw and the black-and-white image feature F_mono from the left and right views are extracted with the same feature extractor; then, based on the parallax attention mechanism, the correlation weight matrix M is solved for the F_raw and F_mono features along the parallax direction, and the row-wise correlation is used to construct the alignment-guide image feature F_guide = M ⊗ f(F_mono), where f denotes a convolution operation and F_guide is the alignment-guide image feature; finally, a feature decoder decodes the features into an alignment-guide image structurally consistent with the target color image.
- 4. The black-and-white-image-guided joint denoising and demosaicing method for color RAW images according to claim 3, characterized in that in step S2, the correlation weight matrix M is solved as follows: for the RAW image feature F_raw and the black-and-white image feature F_mono from the left and right views, both tensors have dimensions H×W×C, where H, W, and C denote the height, width, and number of channels of the feature tensor; the left-view features are passed through a 3×3 convolution layer to generate the query tensor, while the right-view features are passed through two non-shared 3×3 convolution layers to generate the key tensor and the value tensor, all three having dimensions H×W×C; the key tensor's dimensions are rearranged into H×C×W; the query tensor is matrix-multiplied with the rearranged key tensor to obtain a correlation result matrix of dimensions H×W×W, and a SoftMax operation yields the row-wise correlation weight matrix M.
- 5. The black-and-white-image-guided joint denoising and demosaicing method for color RAW images according to claim 1, characterized in that in step S2, the perceptual loss function is defined as L_p = ‖F_VGG(Ĝ) - F_VGG(G)‖, where L_p denotes the perceptual loss function; Ĝ denotes the output of the alignment-guide image generation module; G denotes the reference ground truth used as supervision, set to the clean grayscale image corresponding to the RAW image; and F_VGG(·) denotes image feature extraction with a pre-trained VGG model.
- 6. The black-and-white-image-guided joint denoising and demosaicing method for color RAW images according to claim 1, characterized in that in step S2, the structure-color loss function is defined as L_pc = ‖F_VGG(Ŷ) - F_VGG(Y)‖ + ‖F_gaussian(Ŷ) - F_gaussian(Y)‖, where L_pc is the joint structure-color loss function; Ŷ is the joint denoising and demosaicing result; Y is the reference ground truth, set to the clean RGB image corresponding to the noisy RAW image; F_VGG(·) denotes image feature extraction with a pre-trained VGG model; and F_gaussian(·) denotes extraction of the image's low-frequency information by Gaussian filtering.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/398,193 US20240320787A1 (en) | 2023-03-21 | 2023-12-28 | Joint denoising and demosaicking method for color raw images guided by monochrome images |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310277581.2A CN116309163A (zh) | 2023-03-21 | 2023-03-21 | Black-and-white-image-guided joint denoising and demosaicing method for color RAW images |
CN202310277581.2 | 2023-03-21 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/398,193 Continuation US20240320787A1 (en) | 2023-03-21 | 2023-12-28 | Joint denoising and demosaicking method for color raw images guided by monochrome images |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023231535A1 (zh) | 2023-12-07 |
Family
ID=86784769
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/084429 WO2023231535A1 (zh) | 2023-03-21 | 2023-03-28 | 一种黑白图像引导的彩色raw图像联合去噪去马赛克方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240320787A1 (zh) |
CN (1) | CN116309163A (zh) |
WO (1) | WO2023231535A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117523024A (zh) * | 2024-01-02 | 2024-02-06 | Guizhou University | Binocular image generation method and system based on a latent diffusion model |
CN117974509A (zh) * | 2024-04-02 | 2024-05-03 | Ocean University of China | Two-stage underwater image enhancement method based on object-detection perceptual feature fusion |
CN118505556A (zh) * | 2024-06-13 | 2024-08-16 | First Medical Center of the Chinese PLA General Hospital | Auxiliary assessment method for abnormal shadows in chest X-ray images |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108492265A (zh) * | 2018-03-16 | 2018-09-04 | Xidian University | GAN-based joint demosaicing and denoising method for CFA images |
CN111861902A (zh) * | 2020-06-10 | 2020-10-30 | Tianjin University | Deep-learning-based RAW-domain video denoising method |
CN113658060A (zh) * | 2021-07-27 | 2021-11-16 | Zhongke Fangcun Zhiwei (Nanjing) Technology Co., Ltd. | Joint denoising and demosaicing method and system based on distribution learning |
US20220164926A1 (en) * | 2020-11-23 | 2022-05-26 | Samsung Electronics Co., Ltd. | Method and device for joint denoising and demosaicing using neural network |
-
2023
- 2023-03-21 CN CN202310277581.2A patent/CN116309163A/zh active Pending
- 2023-03-28 WO PCT/CN2023/084429 patent/WO2023231535A1/zh unknown
- 2023-12-28 US US18/398,193 patent/US20240320787A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108492265A (zh) * | 2018-03-16 | 2018-09-04 | Xidian University | GAN-based joint demosaicing and denoising method for CFA images |
CN111861902A (zh) * | 2020-06-10 | 2020-10-30 | Tianjin University | Deep-learning-based RAW-domain video denoising method |
US20220164926A1 (en) * | 2020-11-23 | 2022-05-26 | Samsung Electronics Co., Ltd. | Method and device for joint denoising and demosaicing using neural network |
CN113658060A (zh) * | 2021-07-27 | 2021-11-16 | Zhongke Fangcun Zhiwei (Nanjing) Technology Co., Ltd. | Joint denoising and demosaicing method and system based on distribution learning |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117523024A (zh) * | 2024-01-02 | 2024-02-06 | Guizhou University | Binocular image generation method and system based on a latent diffusion model |
CN117523024B (zh) * | 2024-01-02 | 2024-03-26 | Guizhou University | Binocular image generation method and system based on a latent diffusion model |
CN117974509A (zh) * | 2024-04-02 | 2024-05-03 | Ocean University of China | Two-stage underwater image enhancement method based on object-detection perceptual feature fusion |
CN118505556A (zh) * | 2024-06-13 | 2024-08-16 | First Medical Center of the Chinese PLA General Hospital | Auxiliary assessment method for abnormal shadows in chest X-ray images |
Also Published As
Publication number | Publication date |
---|---|
US20240320787A1 (en) | 2024-09-26 |
CN116309163A (zh) | 2023-06-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023231535A1 (zh) Black-and-white-image-guided joint denoising and demosaicing method for color RAW images | |
CN111260560B (zh) Multi-frame video super-resolution method fusing an attention mechanism | |
CN109671023A (zh) Face image super-resolution secondary reconstruction method | |
Pu et al. | Robust high dynamic range (hdr) imaging with complex motion and parallax | |
CN114119424B (zh) Video inpainting method based on optical flow and multi-view scenes | |
CN114868384A (zh) Apparatus and method for image processing | |
Song et al. | Real-scene reflection removal with raw-rgb image pairs | |
CN115209119A (zh) Automatic video colorization method based on a deep neural network | |
CN104243970A (zh) Objective quality assessment method for 3D-rendered images based on a stereo visual attention mechanism and structural similarity | |
Liang et al. | Multi-scale and multi-patch transformer for sandstorm image enhancement | |
CN116739932A (zh) Deep-learning image denoising algorithm based on blind-spot self-supervision | |
Alamgeer et al. | Light field image quality assessment with dense atrous convolutions | |
CN116208812A (zh) Video frame interpolation method and system based on stereo events and an intensity camera | |
Fang et al. | Priors guided extreme underwater image compression for machine vision and human vision | |
WO2023029233A1 (zh) Facial pigment detection model training method and apparatus, device, and storage medium | |
Zhang et al. | Single image dehazing via reinforcement learning | |
CN115456910A (zh) Color restoration method for underwater images with severe color distortion | |
Liu et al. | Video Demoiréing with Deep Temporal Color Embedding and Video-Image Invertible Consistency | |
CN114898096A (zh) Person image segmentation and annotation method and system | |
CN114842095A (zh) Optimal-seam image fusion method for virtual reality considering spatiotemporal relations | |
CN115222606A (zh) Image processing method and apparatus, computer-readable medium, and electronic device | |
CN115588153B (zh) Video frame generation method based on 3D-DoubleU-Net | |
CN110223268A (zh) Rendered-image quality assessment method | |
KR102670870B1 (ko) Deep-learning-based image noise reduction apparatus | |
Martínez et al. | Fast Disparity Estimation from a Single Compressed Light Field Measurement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23814732 Country of ref document: EP Kind code of ref document: A1 |