CN112529815A - Method and system for removing raindrops in real image after rain - Google Patents

Method and system for removing raindrops in real image after rain

Info

Publication number
CN112529815A
Authority
CN
China
Prior art keywords
rain
real image
pixel point
pixel
raindrop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011526580.XA
Other languages
Chinese (zh)
Other versions
CN112529815B (en)
Inventor
张世辉
桑榆
李腾飞
杨永亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yanshan University
Original Assignee
Yanshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanshan University filed Critical Yanshan University
Priority to CN202011526580.XA priority Critical patent/CN112529815B/en
Publication of CN112529815A publication Critical patent/CN112529815A/en
Application granted granted Critical
Publication of CN112529815B publication Critical patent/CN112529815B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20032 Median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method and a system for removing raindrops in a real image after rain. The raindrop removal method includes: acquiring a real image after historical rain; constructing a recursive attention residual error network fusing a long-time and short-time memory model, a space attention mechanism and a residual error block according to the real image after the historical rain; acquiring a real image after rain to be subjected to rain removal, and processing it by using the recursive attention residual error network to generate a rain-removed real image; based on an image enhancement theory, filtering residual raindrops in the rain-removed real image by utilizing a raindrop detection and filtering algorithm, and determining a filtered real image after rain; and based on a computer graphics theory, processing the filtered real image after rain by using a pixel value conversion algorithm, and determining the real image after rain after pixel point enhancement. The invention improves the quality of the rain-removed image and the processing effect on synthesized images.

Description

Method and system for removing raindrops in real image after rain
Technical Field
The invention relates to the field of computer vision, in particular to a method and a system for removing raindrops in a real image after rain.
Background
Rain is liquid water that condenses from atmospheric water vapor and, once heavy enough, falls under gravity. Raindrops cause image blurring and contrast reduction, which hinders visual tasks such as target detection, automatic driving and video tracking, and may even cause them to fail. Removing rain from images or videos therefore improves the efficiency of such visual tasks and makes it easier to complete them. For this reason, the rain removal problem has received much attention from scholars in recent years.
Restoring a rainy image degraded by rainy weather into a clear, rain-free image is called single-image rain removal. Existing rain removal methods fall into two main categories: video-based methods and single-image-based methods. Video-based methods generally split the video into individual frames and then remove rain from each frame, so they ultimately rest on single-image rain removal as well; scholars have therefore concentrated more research on the single-image case. In the article "Automatic Single-Image-Based Rain Streaks Removal via Image Decomposition", an image is first decomposed by a bilateral filter into a low-frequency component containing the image structure and a high-frequency component containing rain and background textures, and the high-frequency component is then separated into a rain component and a rain-free component using dictionary learning and sparse coding. In the article "Deep Joint Rain Detection and Removal from a Single Image", a deep neural network named JORDER is built to detect and remove raindrops in an image. Although this method removes rain well from images containing many raindrops, its rain removal effect depends heavily on the raindrop detection result and usually yields a rain-removed image with an over-smoothed background. In the article "Residual-Guide Network for Single Image Deraining", residual networks are studied in depth and ResGuideNet is constructed by using the residuals of shallow feature maps to guide deep feature maps, thereby removing raindrops in an image. In the article "Multi-Scale Progressive Fusion Network for Single Image Deraining", MSPFN is constructed from a pyramid structure and a channel attention mechanism for raindrop detection and removal, but this method suffers from low image contrast after removing raindrops from synthetic images and a poor raindrop removal effect on real images. These methods remove raindrops from synthesized images, but their rain removal effect on real images, which have a much wider range of applications, is poor. To address this problem, the article "Syn2Real Transfer Learning for Image Deraining Using Gaussian Processes" proposes a rain removal method for real images based on the idea of semi-supervised learning and a constructed Syn2Real network; however, this method cannot guarantee the quality of the rain-removed image, and its processing effect on synthesized images is poor.
Disclosure of Invention
The invention aims to provide a method and a system for removing raindrops in a real image after rain, so as to solve the problems of poor quality of the image after rain removal and poor processing effect of a synthetic image.
In order to achieve the purpose, the invention provides the following scheme:
a method for removing raindrops in a real image after rain comprises the following steps:
acquiring a real image after historical rain;
constructing a recursive attention residual error network fusing a long-time and short-time memory model, a space attention mechanism and a residual error block according to the real image after the historical rain;
acquiring a real image after rain to be subjected to rain removal, and processing the real image after rain to be subjected to rain removal by using the recursive attention residual error network to generate a rain-removed real image;
based on an image enhancement theory, filtering residual raindrops in the rain-removed real image by utilizing a raindrop detection and filtering algorithm, and determining a filtered real image after rain;
and based on a computer graphics theory, processing the filtered real image after rain by using a pixel value conversion algorithm, and determining the real image after rain after pixel point enhancement.
Optionally, the constructing a recursive attention residual error network fusing a long-term and short-term memory model, a spatial attention mechanism, and a residual block according to the real image after the historical rain specifically includes:
inputting the real image after the historical rain into the long-short time memory model, performing 6 times of recursive training on the long-short time memory model, generating a trained long-short time memory model, and extracting raindrop characteristics;
introducing a space attention mechanism into the trained long-time and short-time memory model, reinforcing the raindrop characteristics, and determining the reinforced raindrop characteristics;
and adopting a residual block to distinguish the strengthened raindrop characteristics and the historical non-raindrop characteristics, and constructing a recursive attention residual network which integrates a long-time memory model, a spatial attention mechanism and the residual block.
Optionally, based on the image enhancement theory, filtering residual raindrops in the rained real image by using a raindrop detection and filtering algorithm, and determining the filtered rained real image, specifically including:
taking each pixel point in the real image after rain removal as a central pixel point, and extracting a 5 multiplied by 5 neighborhood adjacent to each central pixel point as a filtering window;
calculating the brightness difference value between the brightness of the central pixel point and the brightness of each pixel point in the 5 multiplied by 5 neighborhood, and combining the calculated 24 brightness difference values into a first set;
according to the position relation of each brightness difference value in the first set, reversely solving pixel points corresponding to each brightness difference value in the interval [11,62] in the first set, and determining a second set;
if the second set is not empty, determining that the pixel points in the second set are raindrop pixel points;
if the second set is empty, determining that pixel points corresponding to the elements in the first set in the interval [11,62] are non-raindrop pixel points;
and filtering the non-raindrop pixel points, and determining a filtered real image after raining.
Optionally, the determining that the pixel point corresponding to each element in the first set in the [11,62] interval is a raindrop pixel point further includes:
according to the formula

I(p_ij) = (1/|Γ|) · Σ_{p_tm ∈ Γ} I(p_tm),

the pixel value of each raindrop pixel point p_ij in the rain-removed image is replaced by the average pixel value of the pixel points in the second set; wherein p_ij is a pixel point in the rain-removed image, i is the abscissa and j is the ordinate corresponding to the pixel point p_ij; I(p_ij) is the pixel value of p_ij; Φ is the first set; Γ is the second set; p_tm is the pixel point corresponding to each brightness difference value of the second set falling in the [11,62] interval, t is the abscissa and m is the ordinate corresponding to the pixel point p_tm; I(p_tm) is the pixel value of p_tm; and |Γ| is the number of pixel points in Γ.
Optionally, based on a computer graphics theory, processing the filtered real image after rain by using a pixel value conversion algorithm to determine the real image after rain after the enhancement of the pixel points, specifically including:
calculating the local mean value and the local standard deviation in the 3 multiplied by 3 neighborhood of any pixel point in the filtered real image after rain;
and determining the real image after raining after the pixel point is enhanced according to the local mean value and the local standard deviation.
Optionally, the determining the pixel point-enhanced real image after raining according to the local mean and the local standard deviation specifically includes:
the enhanced pixel value of each pixel point in the filtered real image after rain is determined from the pixel value of the pixel point, the local mean and the local standard deviation of its 3 × 3 neighborhood, so as to determine the real image after rain after pixel point enhancement; wherein p̃_ij is a pixel point in the filtered real image after rain; I(p̃_ij) is the pixel value of p̃_ij; m(p̃_ij) is the local mean; and σ(p̃_ij) is the local standard deviation.
A system for raindrop removal in a real image after rain, comprising:
the real image acquisition module after the historical rain is used for acquiring a real image after the historical rain;
the recursive attention residual error network construction module is used for constructing a recursive attention residual error network fusing a long-time and short-time memory model, a space attention mechanism and a residual error block according to the real image after the historical rain;
the rain-removed real image generation module is used for acquiring a real image after rain to be subjected to rain removal, and processing the real image after rain to be subjected to rain removal by utilizing the recursive attention residual error network to generate a rain-removed real image;
the filtered real image after rain determining module is used for filtering residual raindrops in the filtered real image after rain by utilizing a raindrop detection and filtering algorithm based on an image enhancement theory and determining a filtered real image after rain;
and the enhanced real image after rain determination module is used for processing the filtered real image after rain by utilizing a pixel value conversion algorithm based on a computer graphics theory and determining the pixel point enhanced real image after rain.
Optionally, the recursive attention residual error network constructing module specifically includes:
the raindrop feature extraction unit is used for inputting the real image after the historical rain into the long-short term memory model, performing 6 times of recursive training on the long-short term memory model, generating a trained long-short term memory model, and extracting raindrop features;
the strengthening unit is used for introducing a space attention mechanism into the trained long-time and short-time memory model, strengthening the raindrop characteristics and determining strengthened raindrop characteristics;
and the recursive attention residual error network construction unit is used for adopting the residual error block to distinguish the strengthened raindrop characteristics and the historical non-raindrop characteristics and constructing a recursive attention residual error network which integrates a long-time memory model, a space attention mechanism and the residual error block.
Optionally, the module for determining the filtered real image after rain specifically includes:
a filtering window extraction unit, configured to extract, as a filtering window, a 5 × 5 neighborhood adjacent to each central pixel point by using each pixel point in the real image after rain removal as the central pixel point;
a first set determining unit, configured to calculate luminance differences between the luminance of the center pixel and the luminance of each pixel in the 5 × 5 neighborhood, and combine the calculated 24 luminance differences into a first set;
a second set determining unit, configured to reversely solve, according to the position relationship of each brightness difference value in the first set, a pixel point corresponding to each brightness difference value in the [11,62] interval in the first set, and determine a second set;
a raindrop pixel point determining unit, configured to determine, if the second set is not empty, that a pixel point in the second set is a raindrop pixel point;
a non-raindrop pixel point determining unit, configured to determine, if the second set is empty, that a pixel point corresponding to each element in the first set in the interval [11,62] is a non-raindrop pixel point;
and the filtered real image after rain determining unit is used for filtering the non-raindrop pixel points and determining the filtered real image after rain.
Optionally, the system further includes:
a pixel value replacement unit, configured to replace, according to the formula

I(p_ij) = (1/|Γ|) · Σ_{p_tm ∈ Γ} I(p_tm),

the pixel value of each raindrop pixel point p_ij in the rain-removed image with the average pixel value of the pixel points in the second set; wherein p_ij is a pixel point in the rain-removed image, i is the abscissa and j is the ordinate corresponding to the pixel point p_ij; I(p_ij) is the pixel value of p_ij; Φ is the first set; Γ is the second set; p_tm is the pixel point corresponding to each brightness difference value of the second set falling in the [11,62] interval, t is the abscissa and m is the ordinate corresponding to the pixel point p_tm; I(p_tm) is the pixel value of p_tm; and |Γ| is the number of pixel points in Γ.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects: the invention provides a method and a system for removing raindrops in a real image after rain. A Recursive Attention Residual Network (RARNet) fusing a long short-term memory network (LSTM), a Spatial Attention Mechanism (SAM) and Residual Blocks (ResBlock, RB) is constructed and trained with synthetic data; the constructed RARNet can effectively extract raindrop features in real images, so that raindrops in real images can be effectively removed. Meanwhile, a raindrop detection and filtering algorithm is designed, which effectively handles the raindrops remaining in the image after rain removal by RARNet, further removing raindrops from the real image. The filtered real image after rain is then processed with a pixel value conversion algorithm to determine the real image after rain after pixel point enhancement, which solves the problems of low contrast and poor visual effect of the rain-removed image and improves the quality of the rain-removed image and the processing effect on synthesized images.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a flow chart of a method for removing raindrops from a real image after rain according to the present invention;
FIG. 2 is a schematic diagram of a RARNet structure;
FIG. 3 is a schematic diagram showing part of the experimental results; FIG. 3(a) shows several different real images after rain to be subjected to rain removal; FIG. 3(b) shows the results after rain removal by the DNN method; FIG. 3(c) shows the results after rain removal by the JORDER method; FIG. 3(d) shows the results after rain removal by the RESCAN method; FIG. 3(e) shows the results after rain removal by the PReNet method; FIG. 3(f) shows the results after rain removal by the Syn2Real method; FIG. 3(g) shows the results after rain removal by the proposed method;
fig. 4 is a structural diagram of a raindrop removal system in a real image after rain according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a method and a system for removing raindrops in a real image after rain, which improve the quality of the rain-removed image and the processing effect on synthesized images.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a flowchart of a method for removing raindrops in a real image after rain according to the present invention, and as shown in fig. 1, the method for removing raindrops in a real image after rain includes:
step 101: and acquiring a real image after historical rain.
Step 102: and constructing a recursive attention residual error network which integrates a long-time and short-time memory model, a space attention mechanism and a residual error block according to the historical real image after rain.
The step 102 specifically includes: inputting the real image after the historical rain into the long-short time memory model, performing 6 times of recursive training on the long-short time memory model, generating a trained long-short time memory model, and extracting raindrop characteristics; introducing a space attention mechanism into the trained long-time and short-time memory model, reinforcing the raindrop characteristics, and determining the reinforced raindrop characteristics; and adopting a residual block to distinguish the strengthened raindrop characteristics and the historical non-raindrop characteristics, and constructing a recursive attention residual network which integrates a long-time memory model, a spatial attention mechanism and the residual block.
A deep network structure can provide deep features that clearly characterize raindrops in a single image, but a deep network may suffer from gradient vanishing during back propagation. In order to avoid the gradient vanishing problem caused by network back propagation and to separate the raindrop features from the background information in the rainy image more accurately, a long-time and short-time memory network is introduced. The multi-recursion LSTM combines the raindrop features learned and extracted in the current recursion stage with the raindrop features of the previous stage, screens them into new features, and transmits the new features to the next recursion stage, so that the raindrop features can be learned and extracted more accurately and the rainy image can be de-rained stage by stage. Since the number of recursions in the LSTM has a large influence on its performance, the number of recursions in the present invention is set to 6 through experimental verification.
Due to factors such as the focal length of the camera and the distance between the raindrops and the camera, raindrops exhibit spatial characteristics in their distribution in the rainy image. In order to learn the spatial context information in the image in a direction-aware manner and acquire the spatial features of the raindrops, a Spatial Attention Mechanism (SAM) is introduced; the SAM strengthens raindrop features and weakens non-raindrop features, providing a guarantee for the subsequent raindrop removal work. In order to extract deeper features, a Residual Block (RB) is used as an intermediate transition form, aiming to further separate raindrops from non-raindrops so as to remove the raindrops in the real image. Combining the above analysis, the overall framework of the constructed Recursive Attention Residual Network (RARNet) is shown in FIG. 2. As can be seen from FIG. 2, after two identical copies of the rainy image to be de-rained are input into RARNet, the two images first pass through the connection layer l_concat, which connects them into a 6-dimensional image. The 6-dimensional image then passes through a convolution-activation layer l_in, which outputs a 32-dimensional feature map. The 32-dimensional feature map is then input to the multi-recursion LSTM layer l_re, which outputs a 32-dimensional raindrop feature map. The 32-dimensional feature map output by the LSTM layer is then input to the SAM layer l_sam, which extracts a 32-dimensional feature map with spatial characteristics. Finally, the 32-dimensional feature map output by the SAM layer l_sam is input to the residual (RB) layer l_rb, which removes raindrops from the feature map, and then passes through the output convolution layer l_out, which outputs the 3-dimensional rain-removed image.
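To make the above data flow concrete, the following is a minimal PyTorch sketch of the described layer sequence (connection layer, input convolution, six-recursion convolutional LSTM, spatial attention, residual block, output convolution). Only the channel counts (6-dimensional input, 32-dimensional features, 3-dimensional output) and the 6 recursions follow the text; the class names, the simplified ConvLSTM cell and the single-channel attention map are illustrative assumptions rather than the patent's exact implementation.

# Minimal sketch of the described RARNet data flow; cell and attention details
# are simplified assumptions, only the stated channel counts and 6 recursions
# follow the text.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Simplified convolutional LSTM cell carrying features across recursions."""
    def __init__(self, ch=32):
        super().__init__()
        self.gates = nn.Conv2d(2 * ch, 4 * ch, kernel_size=3, padding=1)

    def forward(self, x, h, c):
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class SpatialAttention(nn.Module):
    """Single-channel spatial attention map that re-weights raindrop regions."""
    def __init__(self, ch=32):
        super().__init__()
        self.att = nn.Sequential(nn.Conv2d(ch, 1, kernel_size=3, padding=1),
                                 nn.Sigmoid())

    def forward(self, x):
        return x * self.att(x)


class ResidualBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1))

    def forward(self, x):
        return x + self.body(x)


class RARNetSketch(nn.Module):
    def __init__(self, ch=32, stages=6):
        super().__init__()
        self.ch, self.stages = ch, stages
        self.conv_in = nn.Sequential(nn.Conv2d(6, ch, kernel_size=3, padding=1),
                                     nn.ReLU(inplace=True))        # l_in
        self.lstm = ConvLSTMCell(ch)                                # l_re
        self.sam = SpatialAttention(ch)                             # l_sam
        self.rb = ResidualBlock(ch)                                 # l_rb
        self.conv_out = nn.Conv2d(ch, 3, kernel_size=3, padding=1)  # l_out

    def forward(self, rainy):
        b, _, hgt, wid = rainy.shape
        y = rainy                       # first stage sees two identical copies of the rainy image
        h = torch.zeros(b, self.ch, hgt, wid, device=rainy.device)
        c = torch.zeros_like(h)
        outputs = []
        for _ in range(self.stages):
            x = self.conv_in(torch.cat([rainy, y], dim=1))   # l_concat: 6-dimensional input
            h, c = self.lstm(x, h, c)                        # recursive raindrop features
            y = self.conv_out(self.rb(self.sam(h)))          # attention, residual block, 3-dim output
            outputs.append(y)                                # later stages reuse the previous estimate (assumption)
        return outputs                                       # one estimate per recursion

Under this sketch, RARNetSketch()(torch.rand(1, 3, 100, 100)) returns six per-stage estimates, matching the six losses described next.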
In order to obtain the mapping relationship between the rainy image and the rain-free image, RARNet needs to be trained. The network parameters Θ = [W_1, B_1, W_2, B_2, ..., W_6, B_6] are optimized by minimizing a loss function between the rain-removed image and the rain-free image, constructed here on the basis of SSIM. Since RARNet uses a multi-recursion LSTM loop structure, a loss is output after each recursion is completed; with the number of LSTM recursions set to 6, a total of 6 losses are output. The SSIM loss of the N-th recursion, L_N, is computed between Y_N, the image output at the N-th stage (N = 1, 2, ..., 6), and X_O, the ground truth corresponding to the rainy image. The prediction result of each stage is supervised, and the 6 stage losses are accumulated, so the total network loss is computed as

L = Σ_{n=1}^{6} μ_n · L_n,

where μ_n denotes the trade-off parameter of the n-th stage; as confirmed by experiments, μ_1, μ_2, ..., μ_5 are taken as 0.5 and μ_6 as 1.5.
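The weighted accumulation of the six stage losses can be sketched as follows; the per-stage form 1 - SSIM and the use of the third-party pytorch-msssim package are assumptions made only for illustration (the patent gives the loss expressions as formula images), while the stage weights follow the values stated above.

# Sketch of the multi-stage loss: one SSIM-based loss per recursion, accumulated
# with the stated trade-off parameters (0.5 for stages 1-5, 1.5 for stage 6).
import torch
from pytorch_msssim import ssim  # assumed third-party SSIM implementation


def rarnet_total_loss(stage_outputs, ground_truth,
                      weights=(0.5, 0.5, 0.5, 0.5, 0.5, 1.5)):
    """stage_outputs: list of six tensors Y_1..Y_6; ground_truth: X_O."""
    loss = torch.zeros((), device=ground_truth.device)
    for y_n, mu_n in zip(stage_outputs, weights):
        # Assumed per-stage form: 1 - SSIM(Y_N, X_O), so higher similarity gives lower loss.
        loss = loss + mu_n * (1.0 - ssim(y_n, ground_truth, data_range=1.0))
    return loss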
During RARNet training, the Adam optimizer is used to minimize the loss and optimize the network parameters. A total of 100 epochs are trained, with a batch size of 18, a patch size of 100 × 100, and an initial learning rate of 1 × 10⁻³.
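These settings map onto a conventional training loop along the following lines; the dataset object, the absence of a learning-rate schedule and the device handling are assumptions, and rarnet_total_loss refers to the loss sketch above.

# Training-loop sketch using the stated settings: Adam, 100 epochs, batch size 18,
# 100x100 patches, initial learning rate 1e-3. The dataset is assumed to yield
# (rainy_patch, clean_patch) pairs of synthetic training data.
import torch
from torch.utils.data import DataLoader


def train_rarnet(model, dataset, device="cuda"):
    loader = DataLoader(dataset, batch_size=18, shuffle=True)   # 100x100 patches
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.to(device).train()
    for epoch in range(100):
        for rainy, clean in loader:
            rainy, clean = rainy.to(device), clean.to(device)
            optimizer.zero_grad()
            stage_outputs = model(rainy)                  # six per-stage estimates
            loss = rarnet_total_loss(stage_outputs, clean)
            loss.backward()                               # back propagation through all stages
            optimizer.step()
    return model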
Step 103: and acquiring a real image after rain to be removed, and processing the real image after rain to be removed by using the recursive attention residual error network to generate the real image after rain to be removed.
Step 104: based on an image enhancement theory, a raindrop detection and filtering algorithm is used for filtering residual raindrops in the real image after the rain is removed, and the filtered real image after the rain is determined.
The step 104 specifically includes: taking each pixel point in the real image after rain removal as a central pixel point, and extracting a 5 multiplied by 5 neighborhood adjacent to each central pixel point as a filtering window; calculating the brightness difference value between the brightness of the central pixel point and the brightness of each pixel point in the 5 multiplied by 5 neighborhood, and combining the calculated 24 brightness difference values into a first set; according to the position relation of each brightness difference value in the first set, reversely solving pixel points corresponding to each brightness difference value in the interval [11,62] in the first set, and determining a second set; if the second set is not empty, determining that the pixel points in the second set are raindrop pixel points; if the second set is empty, determining that pixel points corresponding to the elements in the first set in the interval [11,62] are non-raindrop pixel points; and filtering the non-raindrop pixel points, and determining a filtered real image after raining.
Most residual raindrops are elongated streaks, and a traditional 3 × 3 median filtering window cannot cover an entire raindrop streak. Therefore, for each pixel point p_ij in the rain-removed image, the adjacent 5 × 5 neighborhood Ω centered on p_ij is extracted as the filtering window.
The brightness difference between the brightness B(p_ij) of the pixel point p_ij and the brightness of each of the 24 pixel points {p_i-2j-2, p_i-2j-1, ..., p_i+2j+2} in the 5 × 5 neighborhood Ω is calculated, and the 24 calculated brightness difference values are combined into a first set Φ.
According to the position relationship of the elements in the first set Φ, the pixel points p_tm corresponding to the elements of Φ that fall in the interval [11, 62] are solved in reverse and collected into a second set Γ. If Γ is not empty, the pixel point p_ij is a raindrop pixel point and its pixel value I(p_ij) is replaced by the average pixel value of the pixel points in Γ; if Γ is empty, p_ij is a non-raindrop pixel point and no pixel value replacement is performed. The calculation is as follows:

I(p_ij) = (1/|Γ|) · Σ_{p_tm ∈ Γ} I(p_tm),  if Γ ≠ ∅;  otherwise I(p_ij) is left unchanged,

where I(p_tm) is the pixel value of the pixel point p_tm and |Γ| is the number of pixel points in Γ.
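A NumPy sketch of this detection and filtering step on a single brightness channel is given below. The 5 × 5 window, the [11, 62] interval and the mean replacement follow the text; treating the input as a single brightness channel, leaving a 2-pixel border untouched and the straightforward double loop are assumptions made for clarity rather than speed.

# Sketch of the raindrop detection and filtering step: for every pixel, compare
# its brightness with its 24 neighbours in a 5x5 window; if any difference falls
# in [11, 62], treat the pixel as a residual raindrop and replace it with the
# mean value of those neighbours.
import numpy as np


def filter_residual_raindrops(brightness: np.ndarray, low=11, high=62) -> np.ndarray:
    """brightness: H x W single-channel image; returns the filtered copy."""
    img = brightness.astype(np.float64)
    out = img.copy()
    h, w = img.shape
    for i in range(2, h - 2):                        # 2-pixel border left untouched (assumption)
        for j in range(2, w - 2):
            window = img[i - 2:i + 3, j - 2:j + 3]   # 5x5 filtering window
            diffs = img[i, j] - window               # brightness differences (first set)
            in_range = (diffs >= low) & (diffs <= high)
            in_range[2, 2] = False                   # exclude the centre pixel itself
            if in_range.any():                       # second set not empty: raindrop pixel
                out[i, j] = window[in_range].mean()  # replace with mean of the second set
    return out.astype(brightness.dtype)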
step 105: and based on a computer graphics theory, processing the filtered real image after rain by using a pixel value conversion algorithm, and determining the real image after rain after pixel point enhancement.
The step 105 specifically includes: calculating the local mean value and the local standard deviation in the 3 multiplied by 3 neighborhood of any pixel point in the filtered real image after rain; and determining the real image after raining after the pixel point is enhanced according to the local mean value and the local standard deviation.
In order to solve the problem that the visual effect is poor due to the fact that the contrast of a real image is reduced after rain is removed, the contrast of a rain-removed image after self-adaptive median filtering needs to be enhanced. Because different real rainy images have different contrast characteristics, for example, an image with lower brightness needs to be subjected to contrast enhancement by appropriately enhancing the brightness, and an image with higher brightness needs to be subjected to contrast enhancement by appropriately reducing the brightness. Therefore, an adaptive contrast enhancement method is provided, aiming at determining a contrast enhancement operator through information such as a pixel value, a variance and the like of an image, so that the contrast of a real image after rain removal is enhanced, and the visual quality of the image is improved.
Let p̃_ij denote a pixel point in the filtered real image after rain obtained by the adaptive median filtering, and let I(p̃_ij) denote its pixel value. Within its 3 × 3 neighborhood, the local mean m(p̃_ij) and the local standard deviation σ(p̃_ij) can be expressed as

m(p̃_ij) = (1/9) · Σ_{p_sk ∈ Ω_3×3} I(p_sk),

σ(p̃_ij) = sqrt( (1/9) · Σ_{p_sk ∈ Ω_3×3} ( I(p_sk) - m(p̃_ij) )² ),

where p_sk is a pixel point in the 3 × 3 neighborhood centered on p̃_ij and I(p_sk) is the pixel value of p_sk. The enhanced pixel value I'(p̃_ij) of the pixel point p̃_ij is then defined from I(p̃_ij), the local mean m(p̃_ij) and the local standard deviation σ(p̃_ij). The pixel points in the image are traversed in turn, thereby realizing the contrast enhancement of the rain-removed image.
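A NumPy sketch of this local contrast enhancement follows. Only the 3 × 3 local mean and local standard deviation come from the text; the gain term gain_const / (local_std + eps), the gain clamp and the 8-bit clipping are conventional assumptions standing in for the enhancement formula, which appears only as an image in the original filing.

# Sketch of adaptive local contrast enhancement: each pixel is pushed away from
# its 3x3 local mean by a gain that grows where the local standard deviation is
# small. The gain form is an assumption; mean and standard deviation follow the text.
import numpy as np


def enhance_contrast(image: np.ndarray, gain_const=30.0, max_gain=3.0, eps=1e-6) -> np.ndarray:
    """image: H x W single-channel image in the 0..255 range; returns the enhanced copy."""
    img = image.astype(np.float64)
    out = img.copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = img[i - 1:i + 2, j - 1:j + 2]
            local_mean = window.mean()                           # 3x3 local mean
            local_std = window.std()                             # 3x3 local standard deviation
            gain = min(gain_const / (local_std + eps), max_gain) # assumed adaptive gain
            out[i, j] = local_mean + gain * (img[i, j] - local_mean)
    return np.clip(out, 0, 255).astype(image.dtype)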
FIG. 3 is a schematic diagram showing part of the experimental results. FIG. 3(a) shows several different real images after rain to be subjected to rain removal, FIG. 3(b) shows the results after rain removal by the DNN method, FIG. 3(c) by the JORDER method, FIG. 3(d) by the RESCAN method, FIG. 3(e) by the PReNet method, FIG. 3(f) by the Syn2Real method, and FIG. 3(g) by the proposed method. As can be seen from FIG. 3, the first and last images in FIG. 3(a) contain a large number of raindrops, including high-brightness raindrops; such raindrops are difficult to remove because of the influence of the camera focal length, the distance from the raindrops to the camera, and similar factors. The existing rain removal methods, i.e., those shown in FIG. 3(b) to FIG. 3(f), remove low-brightness raindrops well, but the high-brightness raindrops remaining after their processing are almost identical to those in the original image, and their results are blurred after rain removal. The proposed method shown in FIG. 3(g) not only removes low-brightness raindrops best, but also removes part of the high-brightness raindrops and, most importantly, does not blur the image after rain removal. For the second and third images in FIG. 3(a), which contain no high-brightness raindrops, the proposed method leaves almost no raindrop residue, whereas the existing methods still leave some raindrops; more importantly, the proposed method restores the contrast of the rain-removed image very realistically, and its visual effect is softer. The experiments show that the existing rain removal methods suffer from residual raindrops, loss of detail and low contrast after removing rain from real images. The proposed method can remove all low-brightness raindrops in a real image and fade or even remove the high-brightness raindrops; moreover, it restores image details more vividly after rain removal, with high contrast and a better visual effect.
Fig. 4 is a structural diagram of a system for removing raindrops in a real image after rain according to the present invention, and as shown in fig. 4, the system for removing raindrops in a real image after rain includes:
a historical real image after rain acquiring module 401, configured to acquire a historical real image after rain.
A recursive attention residual error network constructing module 402, configured to construct a recursive attention residual error network that merges the long-term memory model, the spatial attention mechanism, and the residual error block according to the historical post-rain real image.
The recursive attention residual network constructing module 402 specifically includes: the raindrop feature extraction unit is used for inputting the real image after the historical rain into the long-short term memory model, performing 6 times of recursive training on the long-short term memory model, generating a trained long-short term memory model, and extracting raindrop features; the strengthening unit is used for introducing a space attention mechanism into the trained long-time and short-time memory model, strengthening the raindrop characteristics and determining strengthened raindrop characteristics; and the recursive attention residual error network construction unit is used for adopting the residual error block to distinguish the strengthened raindrop characteristics and the historical non-raindrop characteristics and constructing a recursive attention residual error network which integrates a long-time memory model, a space attention mechanism and the residual error block.
And a rain-removed real image generation module 403, configured to obtain a real image after rain to be subjected to rain removal, and process the real image after rain to be subjected to rain removal by using the recursive attention residual network to generate the rain-removed real image.
A module 404 for determining a filtered real image after rain, configured to filter residual raindrops in the filtered real image after rain by using a raindrop detection and filtering algorithm based on an image enhancement theory, and determine the filtered real image after rain.
The module 404 for determining the filtered real image after rain specifically includes: a filtering window extraction unit, configured to extract, as a filtering window, a 5 × 5 neighborhood adjacent to each central pixel point by using each pixel point in the real image after rain removal as the central pixel point; a first set determining unit, configured to calculate luminance differences between the luminance of the center pixel and the luminance of each pixel in the 5 × 5 neighborhood, and combine the calculated 24 luminance differences into a first set; a second set determining unit, configured to reversely solve, according to the position relationship of each brightness difference value in the first set, a pixel point corresponding to each brightness difference value in the [11,62] interval in the first set, and determine a second set; a raindrop pixel point determining unit, configured to determine, if the second set is not empty, that a pixel point in the second set is a raindrop pixel point; a non-raindrop pixel point determining unit, configured to determine, if the second set is empty, that a pixel point corresponding to each element in the first set in the interval [11,62] is a non-raindrop pixel point; and the filtered real image after rain determining unit is used for filtering the non-raindrop pixel points and determining the filtered real image after rain.
The system further includes a pixel value replacement unit, configured to replace, according to the formula

I(p_ij) = (1/|Γ|) · Σ_{p_tm ∈ Γ} I(p_tm),

the pixel value of each raindrop pixel point p_ij in the rain-removed image with the average pixel value of the pixel points in the second set; wherein p_ij is a pixel point in the rain-removed image, i is the abscissa and j is the ordinate corresponding to the pixel point p_ij; I(p_ij) is the pixel value of p_ij; Φ is the first set; Γ is the second set; p_tm is the pixel point corresponding to each brightness difference value of the second set falling in the [11,62] interval, t is the abscissa and m is the ordinate corresponding to the pixel point p_tm; I(p_tm) is the pixel value of p_tm; and |Γ| is the number of pixel points in Γ.
And an enhanced real image after rain determination module 405, configured to process the filtered real image after rain by using a pixel value conversion algorithm based on a computer graphics theory, and determine a pixel-enhanced real image after rain.
The invention provides a method and a system for removing raindrops in a real image after rain, which can effectively remove the raindrops in the real image after rain and improve both the removal effect and the removal efficiency.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. A method for removing raindrops in a real image after rain is characterized by comprising the following steps:
acquiring a real image after historical rain;
constructing a recursive attention residual error network fusing a long-time and short-time memory model, a space attention mechanism and a residual error block according to the real image after the historical rain;
acquiring a real image after rain to be subjected to rain removal, and processing the real image after rain to be subjected to rain removal by using the recursive attention residual error network to generate a rain-removed real image;
based on an image enhancement theory, filtering residual raindrops in the rain-removed real image by utilizing a raindrop detection and filtering algorithm, and determining a filtered real image after rain;
and based on a computer graphics theory, processing the filtered real image after rain by using a pixel value conversion algorithm, and determining the real image after rain after pixel point enhancement.
2. The method according to claim 1, wherein the constructing a recursive attention residual error network that integrates a long-time memory model, a spatial attention mechanism, and a residual error block according to the historical post-rain real image specifically comprises:
inputting the real image after the historical rain into the long-short time memory model, performing 6 times of recursive training on the long-short time memory model, generating a trained long-short time memory model, and extracting raindrop characteristics;
introducing a space attention mechanism into the trained long-time and short-time memory model, reinforcing the raindrop characteristics, and determining the reinforced raindrop characteristics;
and adopting a residual block to distinguish the strengthened raindrop characteristics and the historical non-raindrop characteristics, and constructing a recursive attention residual network which integrates a long-time memory model, a spatial attention mechanism and the residual block.
3. The method according to claim 1, wherein the step of filtering out the residual raindrops in the real image after raining based on the image enhancement theory by using a raindrop detection and filtering algorithm to determine the filtered real image after raining comprises:
taking each pixel point in the real image after rain removal as a central pixel point, and extracting a 5 multiplied by 5 neighborhood adjacent to each central pixel point as a filtering window;
calculating the brightness difference value between the brightness of the central pixel point and the brightness of each pixel point in the 5 multiplied by 5 neighborhood, and combining the calculated 24 brightness difference values into a first set;
according to the position relation of each brightness difference value in the first set, reversely solving pixel points corresponding to each brightness difference value in the interval [11,62] in the first set, and determining a second set;
if the second set is not empty, determining that the pixel points in the second set are raindrop pixel points;
if the second set is empty, determining that pixel points corresponding to the elements in the first set in the interval [11,62] are non-raindrop pixel points;
and filtering the non-raindrop pixel points, and determining a filtered real image after raining.
4. The method according to claim 3, wherein the determining that the pixel points corresponding to the elements in the first set in the [11,62] interval are raindrop pixel points further comprises:
according to the formula

I(p_ij) = (1/|Γ|) · Σ_{p_tm ∈ Γ} I(p_tm),

the pixel value of each raindrop pixel point p_ij in the rain-removed image is replaced by the average pixel value of the pixel points in the second set; wherein p_ij is a pixel point in the rain-removed image, i is the abscissa and j is the ordinate corresponding to the pixel point p_ij; I(p_ij) is the pixel value of p_ij; Φ is the first set; Γ is the second set; p_tm is the pixel point corresponding to each brightness difference value of the second set falling in the [11,62] interval, t is the abscissa and m is the ordinate corresponding to the pixel point p_tm; I(p_tm) is the pixel value of p_tm; and |Γ| is the number of pixel points in Γ.
5. The method according to claim 1, wherein the step of processing the filtered real image after rain by using a pixel value conversion algorithm based on a computer graphics theory to determine the real image after rain after pixel point enhancement specifically comprises:
calculating the local mean value and the local standard deviation in the 3 multiplied by 3 neighborhood of any pixel point in the filtered real image after rain;
and determining the real image after raining after the pixel point is enhanced according to the local mean value and the local standard deviation.
6. The method for removing raindrops in the real image after raining according to claim 5, wherein the determining the real image after raining after pixel point enhancement according to the local mean and the local standard deviation specifically comprises:
the enhanced pixel value of each pixel point in the filtered real image after rain is determined from the pixel value of the pixel point, the local mean and the local standard deviation of its 3 × 3 neighborhood, so as to determine the real image after rain after pixel point enhancement; wherein p̃_ij is a pixel point in the filtered real image after rain; I(p̃_ij) is the pixel value of p̃_ij; m(p̃_ij) is the local mean; and σ(p̃_ij) is the local standard deviation.
7. A system for removing raindrops from a real image after rain, comprising:
the real image acquisition module after the historical rain is used for acquiring a real image after the historical rain;
the recursive attention residual error network construction module is used for constructing a recursive attention residual error network fusing a long-time and short-time memory model, a space attention mechanism and a residual error block according to the real image after the historical rain;
the rain-removed real image generation module is used for acquiring a real image after rain to be subjected to rain removal, and processing the real image after rain to be subjected to rain removal by utilizing the recursive attention residual error network to generate a rain-removed real image;
the filtered real image after rain determining module is used for filtering residual raindrops in the filtered real image after rain by utilizing a raindrop detection and filtering algorithm based on an image enhancement theory and determining a filtered real image after rain;
and the enhanced real image after rain determination module is used for processing the filtered real image after rain by utilizing a pixel value conversion algorithm based on a computer graphics theory and determining the pixel point enhanced real image after rain.
8. The system according to claim 7, wherein the recursive attention residual network construction module specifically comprises:
the raindrop feature extraction unit is used for inputting the real image after the historical rain into the long-short term memory model, performing 6 times of recursive training on the long-short term memory model, generating a trained long-short term memory model, and extracting raindrop features;
the strengthening unit is used for introducing a space attention mechanism into the trained long-time and short-time memory model, strengthening the raindrop characteristics and determining strengthened raindrop characteristics;
and the recursive attention residual error network construction unit is used for adopting the residual error block to distinguish the strengthened raindrop characteristics and the historical non-raindrop characteristics and constructing a recursive attention residual error network which integrates a long-time memory model, a space attention mechanism and the residual error block.
9. The system for removing raindrops in a real image after rain according to claim 7, wherein the module for determining the filtered real image after rain specifically comprises:
a filtering window extraction unit, configured to extract, as a filtering window, a 5 × 5 neighborhood adjacent to each central pixel point by using each pixel point in the real image after rain removal as the central pixel point;
a first set determining unit, configured to calculate luminance differences between the luminance of the center pixel and the luminance of each pixel in the 5 × 5 neighborhood, and combine the calculated 24 luminance differences into a first set;
a second set determining unit, configured to reversely solve, according to the position relationship of each brightness difference value in the first set, a pixel point corresponding to each brightness difference value in the [11,62] interval in the first set, and determine a second set;
a raindrop pixel point determining unit, configured to determine, if the second set is not empty, that a pixel point in the second set is a raindrop pixel point;
a non-raindrop pixel point determining unit, configured to determine, if the second set is empty, that a pixel point corresponding to each element in the first set in the interval [11,62] is a non-raindrop pixel point;
and the filtered real image after rain determining unit is used for filtering the non-raindrop pixel points and determining the filtered real image after rain.
10. The system for removing raindrops in a real image after rain according to claim 9, further comprising:
a pixel value replacement unit, configured to replace, according to the formula

I(p_ij) = (1/|Γ|) · Σ_{p_tm ∈ Γ} I(p_tm),

the pixel value of each raindrop pixel point p_ij in the rain-removed image with the average pixel value of the pixel points in the second set; wherein p_ij is a pixel point in the rain-removed image, i is the abscissa and j is the ordinate corresponding to the pixel point p_ij; I(p_ij) is the pixel value of p_ij; Φ is the first set; Γ is the second set; p_tm is the pixel point corresponding to each brightness difference value of the second set falling in the [11,62] interval, t is the abscissa and m is the ordinate corresponding to the pixel point p_tm; I(p_tm) is the pixel value of p_tm; and |Γ| is the number of pixel points in Γ.
CN202011526580.XA 2020-12-22 2020-12-22 Method and system for removing raindrops in real image after rain Active CN112529815B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011526580.XA CN112529815B (en) 2020-12-22 2020-12-22 Method and system for removing raindrops in real image after rain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011526580.XA CN112529815B (en) 2020-12-22 2020-12-22 Method and system for removing raindrops in real image after rain

Publications (2)

Publication Number Publication Date
CN112529815A true CN112529815A (en) 2021-03-19
CN112529815B CN112529815B (en) 2022-08-30

Family

ID=75002440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011526580.XA Active CN112529815B (en) 2020-12-22 2020-12-22 Method and system for removing raindrops in real image after rain

Country Status (1)

Country Link
CN (1) CN112529815B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591617A (en) * 2021-07-14 2021-11-02 武汉理工大学 Water surface small target detection and classification method based on deep learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299234A (en) * 2014-09-30 2015-01-21 中国科学院深圳先进技术研究院 Method and system for removing rain field in video data
KR20150072001A (en) * 2013-12-19 2015-06-29 현대자동차주식회사 Image Processing Apparatus and Method for Removing Rain From Image Data
CN105046653A (en) * 2015-06-12 2015-11-11 中国科学院深圳先进技术研究院 Method and system for removing raindrops in videos
CN110111267A (en) * 2019-04-17 2019-08-09 大连理工大学 A kind of single image based on optimization algorithm combination residual error network removes rain method
CN111815528A (en) * 2020-06-30 2020-10-23 上海电力大学 Bad weather image classification enhancement method based on convolution model and feature fusion
CN112085678A (en) * 2020-09-04 2020-12-15 国网福建省电力有限公司检修分公司 Method and system suitable for removing raindrops from power equipment machine patrol image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150072001A (en) * 2013-12-19 2015-06-29 현대자동차주식회사 Image Processing Apparatus and Method for Removing Rain From Image Data
CN104299234A (en) * 2014-09-30 2015-01-21 中国科学院深圳先进技术研究院 Method and system for removing rain field in video data
CN105046653A (en) * 2015-06-12 2015-11-11 中国科学院深圳先进技术研究院 Method and system for removing raindrops in videos
CN110111267A (en) * 2019-04-17 2019-08-09 大连理工大学 A kind of single image based on optimization algorithm combination residual error network removes rain method
CN111815528A (en) * 2020-06-30 2020-10-23 上海电力大学 Bad weather image classification enhancement method based on convolution model and feature fusion
CN112085678A (en) * 2020-09-04 2020-12-15 国网福建省电力有限公司检修分公司 Method and system suitable for removing raindrops from power equipment machine patrol image

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591617A (en) * 2021-07-14 2021-11-02 武汉理工大学 Water surface small target detection and classification method based on deep learning
CN113591617B (en) * 2021-07-14 2023-11-28 武汉理工大学 Deep learning-based water surface small target detection and classification method

Also Published As

Publication number Publication date
CN112529815B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN111062872B (en) Image super-resolution reconstruction method and system based on edge detection
CN110111366B (en) End-to-end optical flow estimation method based on multistage loss
CN109272509B (en) Target detection method, device and equipment for continuous images and storage medium
CN111861925B (en) Image rain removing method based on attention mechanism and door control circulation unit
CN110969589A (en) Dynamic scene fuzzy image blind restoration method based on multi-stream attention countermeasure network
CN114463218B (en) Video deblurring method based on event data driving
CN109886159B (en) Face detection method under non-limited condition
CN111539888B (en) Neural network image defogging method based on pyramid channel feature attention
CN111179187A (en) Single image rain removing method based on cyclic generation countermeasure network
CN103886585A (en) Video tracking method based on rank learning
CN113111716A (en) Remote sensing image semi-automatic labeling method and device based on deep learning
CN110852199A (en) Foreground extraction method based on double-frame coding and decoding model
CN113392711A (en) Smoke semantic segmentation method and system based on high-level semantics and noise suppression
CN113034404A (en) Traffic image deblurring method and device based on multi-scale counterstudy
CN115049739A (en) Binocular vision stereo matching method based on edge detection
CN112529815B (en) Method and system for removing raindrops in real image after rain
CN114821434A (en) Space-time enhanced video anomaly detection method based on optical flow constraint
CN110889868A (en) Monocular image depth estimation method combining gradient and texture features
CN113888426A (en) Power monitoring video deblurring method based on depth separable residual error network
CN116402874A (en) Spacecraft depth complementing method based on time sequence optical image and laser radar data
Ahn et al. Remove and recover: deep end-to-end two-stage attention network for single-shot heavy rain removal
CN115731447A (en) Decompressed image target detection method and system based on attention mechanism distillation
CN111160255B (en) Fishing behavior identification method and system based on three-dimensional convolution network
CN109636738B (en) The single image rain noise minimizing technology and device of double fidelity term canonical models based on wavelet transformation
Wang et al. Research on traditional and deep learning strategies based on optical flow estimation-a review

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant