CN112614073A - Image rain removing method based on visual quality evaluation feedback and electronic device - Google Patents

Image rain removing method based on visual quality evaluation feedback and electronic device

Info

Publication number
CN112614073A
CN112614073A
Authority
CN
China
Prior art keywords
image
rain
rainy day
quality evaluation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011593324.2A
Other languages
Chinese (zh)
Inventor
Jiaying Liu (刘家瑛)
Wenhan Yang (杨文瀚)
Yuzhang Hu (胡煜章)
Zongming Guo (郭宗明)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN202011593324.2A
Publication of CN112614073A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image rain removal method based on visual quality evaluation feedback and an electronic device. The method comprises: constructing a paired image dataset from a plurality of rain-free sample images and generated rain marks and rain fog; performing rain removal on collected rainy-day sample images, performing human visual quality evaluation on each processed rainy-day sample image, and constructing an unpaired rainy-image quality dataset; training a first convolutional neural network with the unpaired rainy-image quality dataset to obtain a quality evaluation network; training a second convolutional neural network with the paired image dataset under the constraint of the quality evaluation network to obtain an image rain removal model; and inputting the image to be processed into the image rain removal model to obtain the rain-removed image. Because real rainy images participate in model training, the model learns to handle richer and more realistic degradation types, and the introduced visual quality evaluation feedback gives the generated rain-removed images better subjective quality to the human eye.

Description

Image rain removing method based on visual quality evaluation feedback and electronic device
Technical Field
The invention belongs to the field of image processing and enhancement, and relates to an image rain removing method based on visual quality evaluation feedback and an electronic device.
Background
The deep-learning era of rain removal started in 2017. Yang et al. constructed a network that jointly detects and removes rain marks in order to handle heavy rain, overlapping rain marks, and rain fog. The network detects the positions of rain by predicting a binary mask, and adopts a recursive framework to remove rain marks and progressively clear rain fog. The method achieves good results under heavy rain, but it may erroneously remove vertical textures and cause under-exposure.
In the same year, Fu et al. tried to remove rain marks by constructing a deep detail network. The network takes only the high-frequency details as input and predicts the rain marks and the clean, rain-free image. This work shows that removing background information from the network input facilitates network training.
Following the work of Yang, Fu, and others, a number of convolutional-neural-network-based approaches have been proposed. These methods employ more advanced network structures and embed new rain-related priors, yielding better results in both quantitative and qualitative evaluations. However, because these methods are limited by the fully supervised learning paradigm, relying on synthetic rainy images and only on constraints based on signal fidelity metrics, their training objectives are not fully consistent with human perception. When facing real rain scenes never seen during training, that is, in practical application scenarios, the model may fail.
Disclosure of Invention
To address the problems and deficiencies of the related methods, the invention provides an image rain removal method based on visual quality evaluation feedback and an electronic device. The rain removal neural network model is trained with both a signal-fidelity reconstruction loss function and a quality evaluation loss function derived from a subjective quality evaluation method: the former ensures that the trained model removes rain marks effectively on synthetic data, and the latter ensures the generalization ability of the method, including the handling of unseen degradations and the improvement of subjective visual quality.
The technical scheme adopted by the invention comprises the following steps:
an image rain removing method based on visual quality evaluation feedback comprises the following steps:
1) generating a plurality of rainy-day images from rain-free sample images and generated rain marks and rain fog, and constructing a paired image dataset;
2) performing rain removal on collected rainy-day sample images, performing human visual quality evaluation on each processed rainy-day sample image to obtain an image quality label for each processed image, and constructing an unpaired rainy-image quality dataset;
3) training a first convolutional neural network with the unpaired rainy-image quality dataset to obtain a quality evaluation network;
4) training a second convolutional neural network with the paired image dataset under the constraint of the quality evaluation network to obtain an image rain removal model;
5) inputting the image to be processed into the image rain removal model to obtain the rain-removed image.
Further, the rain marks and rain fog are generated using a rain mark appearance model.
Further, the parameters of the rain fog include: light transmittance and background light.
Further, the rainy-day image is y = x(1-t) + tα + s, where x is the rain-free sample image, s is the rain mark, t is the light transmittance, and α is the background light.
Further, the rain removal processing of the rainy-day sample images uses DID-MDN, SSIR, SPANet, or JORDER-E.
Further, the first convolutional neural network is a VGG16 network whose last layer is replaced by an n-unit fully connected layer followed by a softmax layer, where n is the number of image quality label grades.
Further, the first convolutional neural network is pre-trained on ImageNet before being trained on the unpaired rainy-image quality dataset.
Further, the second convolutional neural network adopts a U-Net structure.
Further, the learning of the second convolutional neural network is constrained by a signal fidelity loss L_Fid and a quality loss function L_Quality, whose expressions are given as formula images in the published application, where Φ(·) is a structural similarity index, x̂ is the rain-removed image, y is the rainy-day image in the paired image dataset, D is the quality evaluation network, and l is a random number.
A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above-mentioned method when executed.
An electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the method as described above.
Compared with the prior art, the invention has the following advantages:
1) besides the synthesized training data, real rainy-day images participate in model training, so that the model learns to handle richer and more realistic degradation types;
2) visual quality evaluation feedback is introduced, so that the generated rain-removed images have better subjective quality to the human eye.
Drawings
FIG. 1 is a framework diagram of the subjective quality assessment of rain removal methods.
FIG. 2 is a framework diagram of the image rain removal model of the present invention.
Detailed Description
To further explain the technical method of the invention, the invention is described in detail below with reference to the drawings and the specific examples.
The image rain removing method comprises the following steps:
step 1: a rain/no rain paired image dataset was constructed for a total of 1800 rain/no rain image pairs. According to a rainless image x, corresponding raindrop s and rainfog parameters (light transmittance t and background light alpha) are generated based on a raindrop appearance model (random sampling generation illumination direction parameters, viewing angle parameters and raindrop vibration parameters) [ Garg and Nayar,2016], relevant variables are superposed, and a rainy day image y is generated as x (1-t) + t alpha + s, so that a paired image data set is constructed.
Step 2: constructing an unpaired rainy day image quality data set, collecting rainy day images through a public channel, running a plurality of forefront rain removing methods (such as DID-MDN [ Zhang and Patel,2018], SSIR [ Wei et al, 2019], SPANet [ Hu et al, 2019] and JORDER-E [ Yang et al, 2020] and the like) to remove rain from the rainy day images, and evaluating the visual quality of a processing result by an applicant, wherein corresponding image quality labels (1-10 grades, 10 represents the highest quality, and 1 represents the lowest quality).
Step 3: Train the quality evaluation network D with the unpaired rainy-image quality dataset. As shown in FIG. 1, D uses the VGG16 network structure with the last layer replaced by a 10-unit fully connected layer followed by a softmax layer. The network is pre-trained on ImageNet and then fine-tuned with the rainy-image quality dataset. The score range is [1, 10], with 10 being the highest score.
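As one concrete reading of Step 3, the quality evaluation network D can be sketched in PyTorch as below. The backbone and the replaced 10-unit fully connected layer follow the description above; the expected-score readout, the fine-tuning objective, and all hyper-parameters are assumptions for illustration, since the patent does not specify them.

```python
# PyTorch sketch of the quality evaluation network D from Step 3 (VGG16 backbone,
# last layer replaced by a 10-unit fully connected layer followed by softmax).
# The expected-score readout and the MSE fine-tuning step are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class QualityNet(nn.Module):
    def __init__(self, num_grades: int = 10):
        super().__init__()
        self.backbone = models.vgg16(weights="IMAGENET1K_V1")   # ImageNet pre-training
        self.backbone.classifier[6] = nn.Linear(4096, num_grades)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        probs = self.softmax(self.backbone(img))                 # (B, num_grades)
        grades = torch.arange(1, probs.size(1) + 1,
                              device=img.device, dtype=probs.dtype)
        return (probs * grades).sum(dim=1)                       # expected score in [1, 10]

# Hypothetical fine-tuning step on the unpaired rainy-image quality dataset.
D = QualityNet()
optimizer = torch.optim.Adam(D.parameters(), lr=1e-4)
imgs = torch.rand(4, 3, 224, 224)            # stand-in batch of derained images
labels = torch.tensor([7., 3., 9., 5.])      # human quality grades (1-10)
optimizer.zero_grad()
loss = nn.functional.mse_loss(D(imgs), labels)
loss.backward()
optimizer.step()
```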
Step 4: As shown in FIG. 2, train the rain removal network with the paired image dataset under the constraint of the quality evaluation network D. The rain removal network uses a U-Net structure: convolution layers with stride 1 extract and refine features, and convolution layers with stride 2 progressively reduce the spatial size of the features; feature extraction and refinement then continue with stride-1 convolution layers, and deconvolution layers with stride 2 progressively enlarge the features back to the original image size. Skip connections link features at the same spatial scale on the two ends of the size scaling, ensuring that information flows smoothly at each scale and that the fine underlying signal structure is preserved. A minimal sketch of this encoder-decoder structure is given below.
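The sketch follows the description in Step 4 (stride-1 convolutions for refinement, stride-2 convolutions for downsampling, stride-2 deconvolutions for upsampling, skip connections across matching scales), but the depth, channel widths, and activation choice are illustrative assumptions rather than the patent's exact configuration.

```python
# Minimal sketch of a U-Net style rain removal network matching the description in
# Step 4. Channel widths, depth, and ReLU activations are assumptions; input height
# and width are assumed divisible by 4 so the two downsampling stages invert cleanly.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, stride=1):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1),
                         nn.ReLU(inplace=True))

class DerainUNet(nn.Module):
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = conv_block(3, base)                        # stride-1 refinement
        self.down1 = conv_block(base, base * 2, stride=2)      # stride-2 downsampling
        self.enc2 = conv_block(base * 2, base * 2)
        self.down2 = conv_block(base * 2, base * 4, stride=2)
        self.bottleneck = conv_block(base * 4, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 4, stride=2, padding=1)
        self.dec2 = conv_block(base * 4, base * 2)             # after skip concatenation
        self.up1 = nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv2d(base, 3, 3, padding=1)

    def forward(self, y):
        e1 = self.enc1(y)
        e2 = self.enc2(self.down1(e1))
        b = self.bottleneck(self.down2(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.out(d1)
```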
The learning of the network is constrained by a signal fidelity loss L_Fid and a quality loss function L_Quality. The total loss function L can be expressed as

L = L_Fid + λ L_Quality,

where λ is a weight parameter, l is a random number between 7 and 12, x̂ is the rain-removed image, 10 represents the highest quality grade in the database, and Φ(·) is a structural similarity index; the expressions for L_Fid and L_Quality themselves are given as formula images in the published application.
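Since the expressions for L_Fid and L_Quality appear only as formula images in the published text, their exact forms cannot be reproduced here. The sketch below is therefore only one plausible reading under explicit assumptions: L_Fid is taken as a structural-similarity fidelity term between the rain-removed output and the clean ground truth, and L_Quality as a hinge that pushes the predicted quality score D(x̂) toward the random target grade l; the weight λ, the third-party pytorch_msssim package, and the QualityNet class from the earlier sketch are likewise assumptions.

```python
# Hedged sketch of the total loss L = L_Fid + lambda * L_Quality from Step 4.
# The concrete forms of L_Fid and L_Quality below are assumptions (SSIM-based
# fidelity, hinge on the predicted quality score), not the patent's own formulas.
import random
import torch
from pytorch_msssim import ssim   # third-party SSIM implementation, assumed available

def derain_loss(x_hat, x_clean, D, lam=0.01):
    """x_hat: rain-removed output, x_clean: clean ground truth, D: quality network."""
    l_fid = 1.0 - ssim(x_hat, x_clean, data_range=1.0)   # structural-similarity fidelity term
    l = float(random.randint(7, 12))                     # random target grade l (patent: 7 to 12)
    l_quality = torch.relu(l - D(x_hat)).mean()          # push predicted quality toward l
    return l_fid + lam * l_quality
```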
Step 5: In practical application, the rainy-day image is input directly into the image rain removal model, and the rain-removed image x̂ is output.
The above examples are provided only for the purpose of describing the present invention, and are not intended to limit the scope of the present invention. The scope of the invention is defined by the appended claims. Various equivalent substitutions and modifications can be made without departing from the spirit and principles of the invention, and are intended to be within the scope of the invention.

Claims (10)

1. An image rain removing method based on visual quality evaluation feedback comprises the following steps:
1) generating a plurality of rainy-day images from rain-free sample images and generated rain marks and rain fog, and constructing a paired image dataset;
2) performing rain removal on collected rainy-day sample images, performing human visual quality evaluation on each processed rainy-day sample image to obtain an image quality label for each processed image, and constructing an unpaired rainy-image quality dataset;
3) training a first convolutional neural network with the unpaired rainy-image quality dataset to obtain a quality evaluation network;
4) training a second convolutional neural network with the paired image dataset under the constraint of the quality evaluation network to obtain an image rain removal model;
5) inputting the image to be processed into the image rain removal model to obtain the rain-removed image.
2. The method of claim 1, wherein the rain marks and rain fog are generated using a rain mark appearance model, and the rain removal processing of the rainy-day sample images uses DID-MDN, SSIR, SPANet, or JORDER-E.
3. The method of claim 1, wherein the parameters of rain fog comprise: light transmittance and background light.
4. The method of claim 3, wherein the rainy-day image is y = x(1-t) + tα + s, where x is the rain-free sample image, s is the rain mark, t is the light transmittance, and α is the background light.
5. The method of claim 1, wherein the first convolutional neural network is a VGG16 network whose last layer is replaced by an n-unit fully connected layer followed by a softmax layer, where n is the number of image quality label grades.
6. The method of claim 1, wherein the first convolutional neural network is pre-trained on ImageNet before being trained on the unpaired rainy-image quality dataset.
7. The method of claim 1, wherein the second convolutional neural network adopts a U-Net structure.
8. The method of claim 1, wherein the learning of the second convolutional neural network is constrained by a signal fidelity loss L_Fid and a quality loss function L_Quality, whose expressions are given as formula images in the published application, where Φ(·) is a structural similarity index, x̂ is the rain-removed image, y is the rainy-day image in the paired image dataset, D is the quality evaluation network, and l is a random number.
9. A storage medium having a computer program stored thereon, wherein the computer program is arranged to, when run, perform the method of any of claims 1-8.
10. An electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the method according to any of claims 1-8.
CN202011593324.2A 2020-12-29 2020-12-29 Image rain removing method based on visual quality evaluation feedback and electronic device Pending CN112614073A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011593324.2A CN112614073A (en) 2020-12-29 2020-12-29 Image rain removing method based on visual quality evaluation feedback and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011593324.2A CN112614073A (en) 2020-12-29 2020-12-29 Image rain removing method based on visual quality evaluation feedback and electronic device

Publications (1)

Publication Number Publication Date
CN112614073A true CN112614073A (en) 2021-04-06

Family

ID=75248816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011593324.2A Pending CN112614073A (en) 2020-12-29 2020-12-29 Image rain removing method based on visual quality evaluation feedback and electronic device

Country Status (1)

Country Link
CN (1) CN112614073A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023160645A1 (en) * 2022-02-25 2023-08-31 Sony Group Corporation Image enhancement method and device
CN117152000A (en) * 2023-08-08 2023-12-01 Huazhong University of Science and Technology Rainy day image-clear background paired data set manufacturing method and device and application thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111288A (en) * 2019-04-15 2019-08-09 University of Electronic Science and Technology of China Image enhancement and blind image quality assessment network based on deep assisted learning
CN110942436A (en) * 2019-11-29 2020-03-31 Fudan University Image deblurring method based on image quality evaluation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111288A (en) * 2019-04-15 2019-08-09 University of Electronic Science and Technology of China Image enhancement and blind image quality assessment network based on deep assisted learning
CN110942436A (en) * 2019-11-29 2020-03-31 Fudan University Image deblurring method based on image quality evaluation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WENHAN YANG et al.: "Frame-Consistent Recurrent Video Deraining with Dual-Level Flow", IEEE *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023160645A1 (en) * 2022-02-25 2023-08-31 Sony Group Corporation Image enhancement method and device
CN117152000A (en) * 2023-08-08 2023-12-01 Huazhong University of Science and Technology Rainy day image-clear background paired data set manufacturing method and device and application thereof
CN117152000B (en) * 2023-08-08 2024-05-14 Huazhong University of Science and Technology Rainy day image-clear background paired data set manufacturing method and device and application thereof

Similar Documents

Publication Publication Date Title
Din et al. A novel GAN-based network for unmasking of masked face
Zhu et al. Hard sample aware noise robust learning for histopathology image classification
Huang et al. Semi-supervised neuron segmentation via reinforced consistency learning
Tursun et al. MTRNet++: One-stage mask-based scene text eraser
CN112614073A (en) Image rain removing method based on visual quality evaluation feedback and electronic device
CN113762138A (en) Method and device for identifying forged face picture, computer equipment and storage medium
CN111127354A (en) Single-image rain removing method based on multi-scale dictionary learning
Zhang et al. Diffusionad: Denoising diffusion for anomaly detection
CN110852199A (en) Foreground extraction method based on double-frame coding and decoding model
CN115358337A (en) Small sample fault diagnosis method and device and storage medium
CN114266894A (en) Image segmentation method and device, electronic equipment and storage medium
CN114494168A (en) Model determination, image recognition and industrial quality inspection method, equipment and storage medium
CN114399638A (en) Semantic segmentation network training method, equipment and medium based on patch learning
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
Xiao et al. Self-explanatory deep salient object detection
CN116701647A (en) Knowledge graph completion method and device based on fusion of embedded vector and transfer learning
Liu et al. Explaining deep neural networks using unsupervised clustering
CN115810128A (en) Image classification model compression method based on greedy strategy
Mandal et al. Neural architecture search for image dehazing
CN114496099A (en) Cell function annotation method, device, equipment and medium
CN113095328A (en) Self-training-based semantic segmentation method guided by Gini index
Punia et al. Automatic detection of liver in CT images using optimal feature based neural network
Shi et al. Refactoring ISP for High-Level Vision Tasks
Agnihotri DeepFake Detection using Deep Neural Networks
CN112257769B (en) Multilayer nuclear magnetic image classification method and system based on reinforcement learning type brain reading

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210406