CN107909548B - Video rain removing method based on noise modeling - Google Patents

Video rain removing method based on noise modeling

Info

Publication number
CN107909548B
CN107909548B (application CN201710992669.7A)
Authority
CN
China
Prior art keywords
rain
video
noise
foreground
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710992669.7A
Other languages
Chinese (zh)
Other versions
CN107909548A (en)
Inventor
孟德宇
谢琦
赵谦
魏玮
易丽璇
徐宗本
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Publication of CN107909548A publication Critical patent/CN107909548A/en
Application granted granted Critical
Publication of CN107909548B publication Critical patent/CN107909548B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)

Abstract

A video rain removal method based on noise modeling estimates the rain-streak noise component and the moving foreground of a video simultaneously under a low-rank background assumption. First, video data containing rain noise are acquired and the model is initialized; a rain-map generation model is established according to the characteristics of the rain noise and the video foreground; a patch-level prior distribution of rain streaks is established according to the structural characteristics of rain as imaged in video, namely that the rain streaks formed by moving raindrops within each image patch share a consistent direction; a moving-object detection model is established according to the sparsity of the video foreground; the model is converted into a rain removal model under a maximum-likelihood estimation framework; and the rainy video is processed with the rain removal model to obtain the de-rained video and other statistical variables, which are then output. The invention aims to establish a high-quality video rain removal model based on the principle of rain-map generation and on the structural characteristics of rain-streak noise, so that video rain removal can be applied more accurately and more widely to complex rainy scenes with moving foregrounds.

Description

Video rain removing method based on noise modeling
Technical Field
The invention relates to video image processing for outdoor-captured footage, and in particular to a video rain removal method based on noise modeling.
Background
Because the imaging quality of outdoor capture systems is often degraded by severe weather (such as rain, snow, and fog), details of the captured video or image are damaged, textures are blurred, and parts of the background are occluded by bright raindrops and rain streaks, so the captured material cannot be used for further processing such as feature extraction and object recognition. Video and image rain/snow removal has therefore become an active topic in computer vision. While preserving the details of the video, rain removal preprocesses the degraded material and restores its quality as far as possible, allowing computer vision algorithms to analyze it further.
Rain removal techniques fall into two broad categories: analysis methods based on frequency-domain information and analysis methods based on time-domain information. Frequency-domain methods transform the image into the frequency domain and remove the high-frequency components that are taken to be rain; time-domain methods mainly exploit several kinds of characteristics of rain in the image, such as brightness, shape, color, and spatial characteristics.
Among video rain removal methods, one approach analyzes the distribution of rain across adjacent frames based on the color characteristics of raindrops: because raindrops raise the background brightness, it mainly uses pixel differences between adjacent frames to decide whether a pixel is covered by rain, but it cannot handle heavy-rain scenes or scenes with moving foregrounds. Since rain and moving objects have similar edge characteristics, distinguishing raindrops from the moving foreground has long been the key difficulty of the rain removal problem. To overcome this difficulty, a number of rain removal methods have been proposed: probability-based methods, which distinguish raindrops from moving objects by comparing the pixel-value fluctuations caused by each; methods that model raindrops by their structural characteristics, for example modeling the dynamics of rain-streak formation with a random field describing raindrop motion, or constraining the model with an optical model of raindrops and with raindrop sizes and direction angles; and methods that first perform an initial raindrop detection and then post-process the detection result (for example with image-guided filtering or SVM feature classification) to separate raindrops from the moving foreground.
The prior art generally treats rain as a deterministic object and detects and separates it in video by characterizing its typical features and constructing such an object, or by learning discriminative information that distinguishes it from rain-free images. On the one hand, these methods focus on describing the structure of rain in the video and do not fully use other information in the video, such as prior structural knowledge of the foreground objects and the background scene, so the complementary benefit of the non-rain structure of the video for the rain removal problem is left unexploited. On the other hand, in order to obtain rain-specific structural information, many video rain removal methods (especially recent ones) need an externally constructed, labeled rain/rain-free database from which to learn the structure of rain, and such information is often hard to obtain in practice for a rainy video with a particular structure.
Disclosure of Invention
The invention aims to provide a video rain removing method based on noise modeling.
To achieve this aim, the invention adopts the following technical scheme:
step S1: obtaining the original video, namely the rainy video D ∈ R^{mn×T}, where m and n represent the length and width of each frame and T represents the number of video frames, and initializing the model variables and parameters;
step S2: establishing a statistical model of rain-streak generation according to the foreground, the background, and the rain noise of the original video;
step S3: based on low-rank matrix decomposition, and according to the distribution and directional characteristics of the rain streaks on patches in the rain-streak generation model, constructing a patch-prior rain removal model for the case without moving foreground by maximum-likelihood estimation;
step S4: constructing a moving-object detection model according to the structural characteristics of the video foreground support;
step S5: combining steps S3 and S4, constructing a comprehensive model that alternately optimizes the moving foreground and the rain streaks, and establishing the noise-modeling-based rain removal algorithm under moving foreground;
step S6: taking the original rainy video obtained in step S1 as input, and applying the noise-modeling-based rain removal algorithm under moving foreground of step S5 to obtain the de-rained video and other statistical variables.
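As a point of reference for steps S1 and S2, the following sketch (Python/NumPy; the function names, default parameter values, and synthetic data are illustrative assumptions and not part of the claimed method) shows how a rainy video can be flattened into the matrix D ∈ R^{mn×T} and how the variables used below might be initialized: an empty foreground support H, low-rank background factors U, V from a truncated SVD (as prescribed in the initialization step of the solving procedure below), and mixture parameters for the rain-streak patches.

    import numpy as np

    def video_to_matrix(frames):
        """Stack each m x n frame of a grayscale video as one column of D (shape mn x T)."""
        m, n, T = frames.shape
        return frames.reshape(m * n, T)

    def initialize(D, rank=2, K=3, patch_dim=4):
        """Rough initialization in the spirit of step S1 (an assumption, not the patent's exact recipe)."""
        H = np.zeros_like(D, dtype=bool)                    # empty moving-foreground support
        Uf, s, Vt = np.linalg.svd(D, full_matrices=False)   # truncated SVD for the background
        U = Uf[:, :rank] * s[:rank]                         # mn x r
        V = Vt[:rank, :].T                                  # T x r
        pi = np.full(K, 1.0 / K)                            # mixing weights of the rain model
        mu = np.zeros((K, patch_dim))                       # Gaussian means, one per component
        Sigma = np.stack([np.eye(patch_dim) for _ in range(K)])
        return H, U, V, pi, mu, Sigma

    # small synthetic stand-in for a rainy video (the real data used later is 288 x 368 x 171)
    frames = np.random.rand(32, 32, 20)
    D = video_to_matrix(frames)
    H, U, V, pi, mu, Sigma = initialize(D, rank=2, K=3, patch_dim=2 * 2)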
In step S2, a statistical model of rain-streak generation is established according to the foreground, background, and rain noise of the video:
D = H⊙D + H^⊥⊙D
f(H^⊥⊙D) = f(UV^T) + E
E_i ~ Σ_{k=1}^{K} π_k N(μ_k, Σ_k), for every patch index i,
where D is the input video; H ∈ {0,1}^{mn×T} is the moving-foreground support of the video, and unfolding it along the length and width of the video gives the equivalent tensor representation H ∈ {0,1}^{m×n×T}, defined as
H_{i,j,t} = 1 if pixel (i, j) of frame t belongs to the moving foreground, and H_{i,j,t} = 0 otherwise;
that is, H takes the value 1 at pixels covered by the moving foreground and 0 elsewhere. H^⊥ denotes the complement of H, i.e. H + H^⊥ = 1, so that H^⊥⊙D represents the portion of the original video without moving foreground;
U, V form a low-rank decomposition of the video background: U ∈ R^{mn×r} and V ∈ R^{T×r} are low-rank matrices of rank r < min(mn, T); ⊙ is the Hadamard product operator, denoting element-wise multiplication, i.e. H⊙D is a matrix of the same size as H and D whose entries are the products of the corresponding entries of H and D; E is the rain-streak noise after each video frame has been cut into patches, and E_i, the i-th column of E, represents the rain streaks on the i-th patch; f is the cutting operator, whose function is to select patches of a specified size at a specified interval from every frame of the original video, vectorize each patch into a column, and stack the columns into a matrix;
E_i ~ Σ_{k=1}^{K} π_k N(μ_k, Σ_k) denotes the distribution satisfied by the rain-streak noise on the i-th patch. According to the directional characteristic of rain streaks, a high-dimensional mixture of Gaussians is assumed here, with K Gaussian components of the following form:
p(E_i) = Σ_{k=1}^{K} π_k N(E_i | μ_k, Σ_k),  N(x | μ_k, Σ_k) = (2π)^{-p/2} |Σ_k|^{-1/2} exp(−(x − μ_k)^T Σ_k^{-1} (x − μ_k)/2),   (1)
where p is the dimension of a vectorized patch, the k-th Gaussian component has mean vector μ_k and covariance matrix Σ_k, and its mixing weight is π_k, with π_k ≥ 0 and
Σ_{k=1}^{K} π_k = 1.
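To make the cutting operator f concrete, a minimal NumPy sketch is given below (the function name, patch size, and stride are illustrative assumptions): it slides a square window over every frame at a fixed interval, vectorizes each window, and stacks the vectors as columns, which is exactly the role assigned to f above.

    import numpy as np

    def extract_patches(video, patch=2, stride=2):
        """Cutting operator f: collect patch x patch windows from every frame of an
        m x n x T video and return them as columns of a (patch*patch) x N matrix."""
        m, n, T = video.shape
        cols = []
        for t in range(T):
            for i in range(0, m - patch + 1, stride):
                for j in range(0, n - patch + 1, stride):
                    cols.append(video[i:i + patch, j:j + patch, t].reshape(-1))
        return np.stack(cols, axis=1)

    # example: 2 x 2 patches on a small stand-in video
    video = np.random.rand(16, 16, 5)
    P = extract_patches(video, patch=2, stride=2)   # shape (4, 8 * 8 * 5)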
In step S3, based on low-rank matrix decomposition, a patch-prior rain removal model for the case without moving foreground is constructed from the distribution and directional characteristics of the rain streaks on the patches, using maximum-likelihood estimation; it is expressed in the following probabilistic form:
f(H^⊥⊙(D − UV^T))_n ~ Σ_{k=1}^{K} π_k N(μ_k, Σ_k), for every patch index n.
The likelihood function of the source data is
L(U, V, Π, μ, Σ) = ∏_n Σ_{k=1}^{K} π_k N(f(H^⊥⊙(D − UV^T))_n | μ_k, Σ_k).
Taking its logarithmic form:
log L(U, V, Π, μ, Σ) = Σ_n log Σ_{k=1}^{K} π_k N(f(H^⊥⊙(D − UV^T))_n | μ_k, Σ_k),
where N(f(H^⊥⊙(D − UV^T))_n | μ_k, Σ_k) is defined by formula (1).
The patch-prior rain removal model without moving foreground derived from this statistical model is the optimization problem
max_{U, V, Π, μ, Σ} Σ_n log Σ_{k=1}^{K} π_k N(f(H^⊥⊙(D − UV^T))_n | μ_k, Σ_k).   (2)
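For illustration, the sketch below evaluates the log-likelihood objective of model (2) for given patches and mixture parameters, using the mixture-of-Gaussians density of formula (1); SciPy is assumed to be available, and the matrix E here stands for the patch matrix f(H^⊥⊙(D − UV^T)).

    import numpy as np
    from scipy.stats import multivariate_normal
    from scipy.special import logsumexp

    def mog_loglikelihood(E, pi, mu, Sigma):
        """Objective of model (2): sum_n log sum_k pi_k N(E_n | mu_k, Sigma_k),
        where the columns of E are the vectorized rain-streak patches."""
        K = len(pi)
        log_terms = np.empty((K, E.shape[1]))
        for k in range(K):
            log_terms[k] = np.log(pi[k]) + multivariate_normal.logpdf(
                E.T, mean=mu[k], cov=Sigma[k], allow_singular=True)
        return logsumexp(log_terms, axis=0).sum()

    # toy check with 4-dimensional (2 x 2) patches and K = 3 components
    E = np.random.randn(4, 100)
    pi = np.array([0.5, 0.3, 0.2])
    mu = np.zeros((3, 4))
    Sigma = np.stack([np.eye(4) * s for s in (0.5, 1.0, 2.0)])
    print(mog_loglikelihood(E, pi, mu, Sigma))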
in step S4, according to the structural characteristics of the video foreground support, the following moving object detection models are constructed for distinguishing the video foreground target:
Figure BDA0001441759420000045
wherein D is the input video, B is the background portion in the video, H is the video foreground support, l (-) is the loss function used to measure the similarity between two segments of video, H ⊙ D is the moving foreground portion of the video.
In step S5, combining the foreground detection model based on the foreground support with the static-background rain removal method under the maximum-likelihood framework, the noise-modeling-based rain removal model is the following optimization model:
[combined optimization model (3); equation image not reproduced]
the step S6 adopts an EM algorithm to solve the rain removal model formula (3) based on the noise modeling in the step S5, and the specific steps comprise:
S6.1) E step: update the posterior probabilities of the hidden variables of the model.
Introduce hidden variables z_nk ∈ {0,1}, where z_nk indicates whether the rain-streak noise on the n-th patch belongs to the k-th Gaussian component, with Σ_{k=1}^{K} z_nk = 1.
Update the probability γ_nk that the n-th patch E_n belongs to the k-th Gaussian component:
γ_nk = π_k N(f(H^⊥⊙(D − UV^T))_n | μ_k, Σ_k) / Σ_{j=1}^{K} π_j N(f(H^⊥⊙(D − UV^T))_n | μ_j, Σ_j).
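This is the standard mixture-of-Gaussians responsibility update; a sketch (the matrix E again stands for the patch matrix f(H^⊥⊙(D − UV^T)), and the names are illustrative):

    import numpy as np
    from scipy.stats import multivariate_normal
    from scipy.special import logsumexp

    def e_step(E, pi, mu, Sigma):
        """gamma[n, k] = pi_k N(E_n | mu_k, Sigma_k) / sum_j pi_j N(E_n | mu_j, Sigma_j)."""
        N, K = E.shape[1], len(pi)
        log_resp = np.empty((N, K))
        for k in range(K):
            log_resp[:, k] = np.log(pi[k]) + multivariate_normal.logpdf(
                E.T, mean=mu[k], cov=Sigma[k], allow_singular=True)
        log_resp -= logsumexp(log_resp, axis=1, keepdims=True)   # normalize in log space
        return np.exp(log_resp)                                  # N x K responsibilities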
S6.2) M step: maximize the expectation of the complete-data log-likelihood with respect to the hidden variables, i.e., minimize its negative; the objective function is
[M-step objective; equation image not reproduced]
where θ denotes the parameter set θ = [Π, μ, Σ] and θ^old denotes the value of θ updated in the previous iteration.
For convenience of solution, an auxiliary variable L is introduced; the above optimization problem is then equivalent to
[equivalent constrained problem (4); equation image not reproduced]
s.t. L = f(H⊙D + H^⊥⊙UV^T).
in the step S6, the small block prior rain removal model formula (3) in the step S3 is solved by adopting an EM algorithm, and the objective function in the step M is solved by adopting an alternating direction multiplier method, and the specific steps comprise:
s4.2.1) gives the augmented Lagrangian function of equation (4)
Figure BDA0001441759420000061
Wherein Λ is a multiplier and μ is a number greater than 0;
s4.2.2) establishing an iteration format and a termination condition of the alternative direction multiplier method:
Figure BDA0001441759420000062
Figure BDA0001441759420000063
Figure BDA0001441759420000064
Figure BDA0001441759420000065
Figure BDA0001441759420000066
μk+1=ρμk(11)
where ρ is a normal number greater than 1, and is generally set to 1.05,
the iteration termination condition is as follows:
Figure BDA0001441759420000067
s4.2.3) solving the problems (5), (6), (7), (8) and (9) to give a concrete iterative formula;
s4.2.4) set the initial values for the iterations to: h0=0,U0,V0Generated by the well-known singular value decomposition method acting on D, with an initial Gaussian mean value set to 0 and an initial covariance matrix
Figure BDA0001441759420000068
Is obtained by symmetrical orthogonalization of a random matrix;
s4.2.5) until the iteration meets the termination condition (the termination condition is that the likelihood function descending speed is less than the threshold value or the iteration number reaches the upper limit).
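The overall solving loop of steps S6.2.2–S6.2.5 can be summarized by the control-flow skeleton below. The bodies of the subproblem updates are left as callbacks because their exact formulas appear only as images in the source; the multiplier step shown is the standard ADMM dual ascent for the constraint L = f(H⊙D + H^⊥⊙UV^T) and is therefore an assumption, as are all helper names.

    import numpy as np

    def solve_derain(D, f, e_step, update_H, update_L, update_UV, update_mog,
                     rank=2, rho=1.05, mu0=1.0, max_iter=40, tol=1e-4):
        """EM + ADMM skeleton; the update_* callbacks implement subproblems (5)-(8),
        and f is the patch operator acting on the mn x T video matrix."""
        H = np.zeros_like(D, dtype=bool)                    # S6.2.4: empty foreground support
        Uf, s, Vt = np.linalg.svd(D, full_matrices=False)   # S6.2.4: SVD initialization
        U, V = Uf[:, :rank] * s[:rank], Vt[:rank, :].T
        L = f(D)                                            # auxiliary patch variable
        Lam, mu = np.zeros_like(L), mu0                     # multiplier and penalty weight
        prev_obj = np.inf
        for _ in range(max_iter):
            gamma = e_step(D, H, U, V)                        # E step: responsibilities gamma_nk
            H = update_H(D, U, V, L, Lam, mu)                 # (5): binary MRF / graph cut
            L = update_L(D, H, U, V, Lam, mu, gamma)          # (6): per-patch update
            U, V = update_UV(D, H, L, Lam, mu)                # (7): weighted least squares
            pi, mus, Sigmas, obj = update_mog(D, L, gamma)    # (8): mixture parameters + objective
            Lam = Lam + mu * (L - f(np.where(H, D, U @ V.T))) # dual update for the constraint
            mu *= rho                                         # formula (11)
            if abs(prev_obj - obj) < tol * max(1.0, abs(prev_obj)):
                break                                         # objective change below threshold
            prev_obj = obj
        return H, U, V, L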
Subproblem (5) is the following problem:
[H-subproblem; equation image not reproduced]
Since the elements of H take values in {0,1}, the above problem is equivalent to the following problem:
[binary form of the H-subproblem; equation image not reproduced]
This problem can be viewed as a first-order binary Markov random field, and H is solved by a graph-cut algorithm.
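The concrete unary and pairwise terms of this MRF appear only as images above, so the sketch below merely illustrates the mechanics of solving a first-order binary MRF of this kind by min-cut/max-flow, using the third-party PyMaxflow package and made-up per-pixel costs; it is not the patent's exact energy.

    import numpy as np
    import maxflow   # PyMaxflow (assumed installed: pip install PyMaxflow)

    def solve_binary_mrf(cost_fg, cost_bg, beta=1.0):
        """Minimize sum_p unary(p) + beta * sum_{p~q} [label_p != label_q]
        over binary labels on an image grid via graph cut."""
        g = maxflow.Graph[float]()
        nodeids = g.add_grid_nodes(cost_fg.shape)
        g.add_grid_edges(nodeids, beta)            # 4-connected Potts smoothness
        # a pixel that ends up on the sink side pays its source capacity (and vice
        # versa), so the source capacity is the cost of labeling it foreground
        g.add_grid_tedges(nodeids, cost_fg, cost_bg)
        g.maxflow()
        return g.get_grid_segments(nodeids)        # True = sink side, read as foreground

    # toy example: a bright square on a dark background, costs taken from intensity
    img = np.zeros((40, 40))
    img[10:25, 15:30] = 1.0
    H = solve_binary_mrf(cost_fg=1.0 - img, cost_bg=img, beta=0.5)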
Subproblem (6) is the following problem:
[L-subproblem; equation image not reproduced]
This optimization problem is separable, and L can be solved column by column as follows:
[closed-form column update of L; equation image not reproduced]
Subproblem (7) is the following problem:
[(U, V)-subproblem; equation image not reproduced]
which is equivalent to solving
[equation image not reproduced]
and further equivalent to
[weighted two-norm form (12); equation image not reproduced]
This is a weighted two-norm problem, where f^{-1} denotes the operator that folds the patches back into a video, and W is a weight tensor of the same size as the input video whose value at each point equals the number of patches overlapping that point in the original video.
Rewriting formula (12) as
[equation image not reproduced]
an approximation to the optimal U, V can be obtained by alternating iterations:
[alternating update of U and V; equation image not reproduced]
Each step is a weighted two-norm problem, in which U^t and V^t are the values of U and V obtained at the t-th iteration, and it is solved row by row as follows:
[row-wise closed-form solution; equation image not reproduced]
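The exact objective is shown only as an image, but the structure described here — alternately solving a weighted least-squares problem for the rows of U and the rows of V — corresponds to the generic sketch below (X stands for the data being fitted by UV^T and W for the weights; the names and the small ridge term are illustrative assumptions):

    import numpy as np

    def weighted_lowrank_step(X, W, U, V, eps=1e-8):
        """One alternating pass of weighted low-rank fitting:
        minimize sum_{i,t} W[i,t] * (X[i,t] - U[i,:] @ V[t,:])**2, row by row."""
        r = U.shape[1]
        for i in range(X.shape[0]):                       # update each row of U with V fixed
            w = W[i]
            A = (V * w[:, None]).T @ V + eps * np.eye(r)
            U[i] = np.linalg.solve(A, (V * w[:, None]).T @ X[i])
        for t in range(X.shape[1]):                       # update each row of V with U fixed
            w = W[:, t]
            A = (U * w[:, None]).T @ U + eps * np.eye(r)
            V[t] = np.linalg.solve(A, (U * w[:, None]).T @ X[:, t])
        return U, V

    # toy usage: recover a rank-2 matrix observed with non-uniform weights
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 30))
    W = rng.uniform(0.5, 2.0, size=X.shape)
    U, V = rng.standard_normal((50, 2)), rng.standard_normal((30, 2))
    for _ in range(20):
        U, V = weighted_lowrank_step(X, W, U, V)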
Subproblem (8) is the following problem:
[mixture-parameter subproblem; equation image not reproduced]
The solution is the weighted update of the mixture parameters:
π_k = N_k / N,  μ_k = (1/N_k) Σ_n γ_nk E_n,
Σ_k = (1/N_k) Σ_n γ_nk (E_n − μ_k)(E_n − μ_k)^T,
where:
N_k = Σ_n γ_nk, N is the total number of patches, and E_n = f(D)_n − L_n is the current estimate of the rain-streak noise on the n-th patch.
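These are the usual weighted mixture-of-Gaussians updates; a sketch (E holds the current rain-streak patch estimates as columns, gamma the responsibilities from the E step, and the small regularizer added to each covariance is an assumption for numerical stability):

    import numpy as np

    def m_step_mog(E, gamma, reg=1e-6):
        """Update (pi_k, mu_k, Sigma_k) from patches E (p x N) and responsibilities gamma (N x K)."""
        p, N = E.shape
        K = gamma.shape[1]
        Nk = gamma.sum(axis=0)                      # effective number of patches per component
        pi = Nk / N
        mu = (E @ gamma) / Nk                       # p x K; column k is the weighted mean
        Sigma = np.empty((K, p, p))
        for k in range(K):
            C = E - mu[:, [k]]                      # patches centered on component k
            Sigma[k] = (C * gamma[:, k]) @ C.T / Nk[k] + reg * np.eye(p)
        return pi, mu.T, Sigma                      # means returned as a K x p array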
On the one hand, the invention effectively encodes the random dynamic structure of rain based on the noise-modeling principle; on the other hand, it makes full use of the sparse, blocky structure of foreground objects in a rainy video and of the low-rank property of the background scene, which complement each other in extracting the rain information of the video, thereby achieving effective video rain removal. In particular, by modeling the different components of the video in a targeted way, the invention removes the dependence of video rain removal on a pre-collected rainy/rain-free image data set, so that effective video rain removal can be accomplished in an unsupervised setting.
Compared with prior rain removal methods, the proposed method models rain more generally. Rain and the moving foreground are optimized simultaneously in the model, which promotes the learning accuracy of both: the rain component is learned more accurately, and the edge information of moving objects is better preserved.
Drawings
The invention is further illustrated by means of the attached drawings, the content of which is not in any way limiting.
FIG. 1 is a flow chart of the present invention.
Fig. 2 shows, for the same video frame, (a) the real rainy video data used in Example 1 and (b) the de-rained video recovered by the proposed noise-modeling-based video rain removal method; the image in the red box at the lower right is a two-fold enlargement of the region marked by the red box in the original image.
Fig. 3 is the rain map obtained with the proposed noise-modeling-based video rain removal method in Example 1.
Fig. 4(a), (b), and (c) show the three rain layers corresponding to the patch mixture-of-Gaussians components extracted from the obtained rain map.
Fig. 5(a), (b), and (c) show the covariance matrices corresponding to the three Gaussian components of the rain map. It can be seen that the covariance matrix of each component is interpretable: the first component corresponds to the relatively sparse raindrops farther from the lens in the original video; the second component corresponds to the dense rain streaks closer to the lens; and the covariance matrix of the third component is close to a diagonal matrix, i.e., the points within this component are only weakly correlated, which corresponds to the camera noise in the original video.
Fig. 6 shows, for the same video frame, (a) the synthetically rained video data used in Example 2 and (b) the de-rained video recovered by the proposed noise-modeling-based video rain removal method. The added rain map comes from a rain video with a black background captured by a static camera in a real scene; the rain has depth layering, and the size of the rain streaks varies with the distance from the lens.
Fig. 7 is the rain map obtained with the proposed noise-modeling-based video rain removal method in Example 2.
Fig. 8 is the moving-foreground map obtained with the proposed noise-modeling-based video rain removal method in Example 2.
Fig. 9 is the background map obtained with the proposed noise-modeling-based video rain removal method in Example 2.
Detailed Description
The invention is further described with reference to the following examples.
Example 1
The rainy video data shown in Fig. 2(a) is used as the experimental subject of the invention; it is a real rainy video without moving objects, captured in a static scene. The video size is 288 × 368 × 171; the number of mixture components of the patch mixture of Gaussians is taken as 3, the maximum number of iterations is 40, the Gaussian patch size is 2 × 2, and the background rank is 2.
Referring to fig. 1, the process is as follows:
step S1, reading the original video, and initializing each statistical variable and parameter of the model;
step S2, establishing the statistical model of rain-streak generation according to the characteristics of the foreground, background, and rain noise of the video:
D = H⊙D + H^⊥⊙D
f(H^⊥⊙D) = f(H^⊥⊙UV^T) + E
E_i ~ Σ_{k=1}^{K} π_k N(μ_k, Σ_k), for every patch index i,
where D is the input video data, H is the support of the video foreground, and E is the rain noise after the video has been cut into patches; the i-th column of E represents the noise on the i-th patch, and the high-dimensional Gaussian mixture is defined by formula (1).
step S3: based on low-rank matrix decomposition, construct the patch-prior rain removal model without moving foreground from the distribution and structural characteristics of the rain streaks on the patches, using maximum-likelihood estimation:
[patch-prior model, cf. formula (2) above; equation image not reproduced]
step S4: since there are no moving objects in the input video used in this example, the support H is set to zero.
step S5: combining steps S3 and S4, construct the rain removal model:
f(D) = f(UV^T) + E
E_i ~ Σ_{k=1}^{K} π_k N(μ_k, Σ_k), for every patch index i.
Using the maximum-likelihood idea, this is equivalent to the following optimization model:
max_{U, V, Π, μ, Σ} Σ_n log Σ_{k=1}^{K} π_k N(f(D − UV^T)_n | μ_k, Σ_k).
step S6: based on the video data input in step S1 and the model parameters set there, solve the rain removal model of step S5 to obtain the processing result of Fig. 2(b).
For the convenience of solving, a cut variable L is introduced, and the above optimization problem is equivalent to:
Figure BDA0001441759420000111
s.t.L=f(UVT)
The augmented Lagrangian function is:
[augmented Lagrangian; equation image not reproduced]
where Λ is the multiplier and μ is a number greater than 0;
The solution process uses the following iteration scheme:
[iteration formulas for the subproblems and the multiplier update; equation images not reproduced]
μ^{k+1} = ρ μ^k   (11)
where ρ is a positive constant greater than 1, typically set to ρ = 1.05. The iteration details are given below:
A. In this example H = 0, so H is not included in the updates here.
B. Formula (6) solves the following problem:
[L-subproblem; equation image not reproduced]
This problem has the following explicit solution:
[explicit column-wise solution; equation image not reproduced]
Because the number of columns of L is usually very large, computing every column L_n in this way is time-consuming; the computation can be accelerated by solving approximately with the FISTA algorithm:
[approximate problem solved by FISTA; equation image not reproduced]
The objective function of FISTA is:
[FISTA objective; equation image not reproduced]
where L^{(i)} is the value of L obtained in the previous step. The solution of the FISTA objective function is:
[FISTA solution; equation image not reproduced]
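The specific accelerated objective above is given only as an image. FISTA itself, however, is a generic accelerated proximal-gradient scheme; the standard sketch below, written for an ℓ1-regularized least-squares stand-in problem rather than the L-subproblem of this example, only illustrates the acceleration pattern being referred to (a gradient step at an extrapolated point followed by a momentum update).

    import numpy as np

    def fista_lasso(A, b, lam, n_iter=200):
        """FISTA for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1 (a stand-in problem)."""
        Lip = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth gradient
        x = np.zeros(A.shape[1])
        y, t = x.copy(), 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ y - b)
            z = y - grad / Lip                      # gradient step at the extrapolated point
            x_new = np.sign(z) * np.maximum(np.abs(z) - lam / Lip, 0.0)   # soft threshold
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)                 # momentum
            x, t = x_new, t_new
        return x

    # quick check on a small random problem
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 60))
    x_true = np.zeros(60)
    x_true[:5] = rng.standard_normal(5)
    x_hat = fista_lasso(A, A @ x_true, lam=0.1)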
C. Formula (7) solves the following problem:
[(U, V)-subproblem; equation image not reproduced]
This problem is equivalent to:
[weighted two-norm form; equation image not reproduced]
U and V are solved by alternating iterations:
[alternating update of U and V; equation image not reproduced]
Each step is a weighted two-norm problem whose solution is:
[row-wise closed-form solution; equation image not reproduced]
D. Formula (8) solves the following problem:
[mixture-parameter subproblem; equation image not reproduced]
The solution is given by the weighted updates of π_k, μ_k, and Σ_k stated above.
When the iteration reaches the termination condition, the rain-free part, namely the background UV^T, is obtained (Fig. 2(b)), and the rain layer is D − UV^T (Fig. 3). Weighting and recombining the rain layer patch by patch with the responsibilities γ_nk of the three Gaussian components yields the corresponding three rain layers (Fig. 4(a), (b), (c)); their covariance matrices correspond to Fig. 5(a), (b), (c)).
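A sketch of this final recombination step, under the simplifying assumption of non-overlapping patches so that folding the patches back is exact (gamma holds the responsibilities from the E step; the function name is illustrative):

    import numpy as np

    def split_rain_layers(rain, gamma, patch=2):
        """Weight the rain video patch by patch with the responsibilities gamma (N x K)
        and fold back, giving one rain layer per Gaussian component."""
        m, n, T = rain.shape
        K = gamma.shape[1]
        layers = np.zeros((K,) + rain.shape)
        idx = 0
        for t in range(T):
            for i in range(0, m - patch + 1, patch):
                for j in range(0, n - patch + 1, patch):
                    block = rain[i:i + patch, j:j + patch, t]
                    for k in range(K):
                        layers[k, i:i + patch, j:j + patch, t] = gamma[idx, k] * block
                    idx += 1
        return layers

    # toy usage with uniform responsibilities over K = 3 components
    rain = np.random.rand(16, 16, 4)
    gamma = np.full(((16 // 2) * (16 // 2) * 4, 3), 1.0 / 3.0)
    layers = split_rain_layers(rain, gamma, patch=2)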
Example 2
The rainy video data shown in Fig. 6(a) is used as the experimental subject of the invention; it is a rainy video with moving foreground (people, cars) captured in a static scene. The video size is 240 × 360 × 119; the number of mixture components of the patch mixture of Gaussians is 3, the maximum number of iterations is 40, and the Gaussian patch size is 2 × 2.
Step S1, reading the original video, and initializing each statistical variable and parameter of the model;
step S2, establishing the statistical model of rain-streak generation according to the characteristics of the foreground, background, and rain noise of the video:
D = H⊙D + H^⊥⊙D
f(H^⊥⊙D) = f(H^⊥⊙UV^T) + E
E_i ~ Σ_{k=1}^{K} π_k N(μ_k, Σ_k), for every patch index i,
where D is the input video data, H is the support of the video foreground, and E is the rain noise after the video has been cut into patches; the i-th column of E represents the noise on the i-th patch, and the high-dimensional Gaussian mixture is defined by formula (1).
step S3: based on low-rank matrix decomposition, construct the patch-prior rain removal model without moving foreground from the distribution and structural characteristics of the rain streaks on the patches, using maximum-likelihood estimation:
[patch-prior model, cf. formula (2) above; equation image not reproduced]
step S4: construct the moving-object detection model according to the structural characteristics of the video foreground support:
[moving-object detection model; equation image not reproduced]
step S5: combining steps S3 and S4, construct the comprehensive model that alternately optimizes the moving foreground and the rain streaks, and establish the patch-prior rain removal model under moving foreground:
[combined optimization model (3); equation image not reproduced]
step S6: solve the rain removal model of step S5 based on the video data input in step S1 and the model parameters set there.
For convenience of solution, an auxiliary variable L is introduced; the above optimization problem is then equivalent to
[equivalent constrained problem; equation image not reproduced]
s.t. L = f(H⊙D + H^⊥⊙UV^T).
The augmented Lagrangian function is:
[augmented Lagrangian; equation image not reproduced]
where Λ is the multiplier and μ is a number greater than 0;
The solution process uses the following iteration scheme:
[iteration formulas for the subproblems and the multiplier update; equation images not reproduced]
μ^{k+1} = ρ μ^k   (11)
where ρ is a positive constant greater than 1, typically set to ρ = 1.05. The iteration details are given below:
A. Formula (5) solves the following problem:
[H-subproblem; equation image not reproduced]
This problem can be solved with a graph-cut algorithm package.
B. Formula (6) solves the following problem:
[L-subproblem; equation image not reproduced]
It can be solved as follows:
[explicit column-wise solution; equation image not reproduced]
Because the number of columns of L is usually very large, computing every column L_n in this way is time-consuming; the computation can be accelerated by solving approximately with the FISTA algorithm:
[approximate problem solved by FISTA; equation image not reproduced]
The objective function of FISTA is:
[FISTA objective; equation image not reproduced]
where L^{(i)} is the value of L obtained in the previous step. The solution of the FISTA objective function is:
[FISTA solution; equation image not reproduced]
C. Formula (7) solves the following problem:
[(U, V)-subproblem; equation image not reproduced]
This problem is equivalent to:
[weighted two-norm form; equation image not reproduced]
U and V are solved by alternating iterations:
[alternating update of U and V; equation image not reproduced]
Each step is a weighted two-norm problem whose solution is:
[row-wise closed-form solution; equation image not reproduced]
D. Formula (8) solves the following problem:
[mixture-parameter subproblem; equation image not reproduced]
The solution is given by the weighted updates of π_k, μ_k, and Σ_k stated above.
When the iteration reaches the termination condition, the rain-free part obtained is H⊙D + H^⊥⊙UV^T (Fig. 6(b)), the rain layer is D − H⊙D − H^⊥⊙UV^T (Fig. 7), the moving-object layer is H⊙D (Fig. 8), and the background layer is H^⊥⊙UV^T (Fig. 9).

Claims (1)

1. A video rain removing method based on noise modeling is characterized by comprising the following steps:
step S1: obtaining the original video, namely the rainy video D ∈ R^{mn×T}, where m and n represent the length and width of the video and T represents the number of video frames, and initializing the model variables and parameters;
step S2: establishing a statistical model of rain-streak generation according to the foreground, the background, and the rain noise of the original video:
D = H⊙D + H^⊥⊙D
f(H^⊥⊙D) = f(UV^T) + E
E_i ~ Σ_{k=1}^{K} π_k N(μ_k, Σ_k), for every patch index i,
where D is the input video; H ∈ {0,1}^{mn×T} is the moving-foreground support of the video, and unfolding it along the length and width of the video gives the equivalent tensor representation H ∈ {0,1}^{m×n×T}, defined as
H_{i,j,t} = 1 if pixel (i, j) of frame t belongs to the moving foreground, and H_{i,j,t} = 0 otherwise;
that is, H takes the value 1 at pixels covered by the moving foreground and 0 elsewhere; H^⊥ denotes the complement of H, i.e. H + H^⊥ = 1, so that H^⊥⊙D represents the portion of the original video without moving foreground;
U, V form a low-rank decomposition of the video background: U ∈ R^{mn×r} and V ∈ R^{T×r} are low-rank matrices of rank r < min(mn, T); ⊙ is the Hadamard product operator, denoting element-wise multiplication, i.e. H⊙D is a matrix of the same size as H and D whose entries are the products of the corresponding entries of H and D; E is the rain-streak noise after each video frame has been cut into patches, E_i, the i-th column of E, represents the rain streaks on the i-th patch, and E_n likewise denotes the n-th column of E; f is the cutting operator, whose function is to select patches of a specified size at a specified interval from every frame of the original video, vectorize each patch into a column, and stack the columns into a matrix;
E_i ~ Σ_{k=1}^{K} π_k N(μ_k, Σ_k) denotes the distribution satisfied by the rain-streak noise on the i-th patch; according to the directional characteristic of rain streaks, a high-dimensional mixture of Gaussians is assumed here, with K Gaussian components of the following form:
p(E_i) = Σ_{k=1}^{K} π_k N(E_i | μ_k, Σ_k),  N(x | μ_k, Σ_k) = (2π)^{-p/2} |Σ_k|^{-1/2} exp(−(x − μ_k)^T Σ_k^{-1} (x − μ_k)/2),   (1)
where p is the dimension of a vectorized patch, the k-th Gaussian component has mean vector μ_k and covariance matrix Σ_k, and its mixing weight is π_k, with π_k ≥ 0 and
Σ_{k=1}^{K} π_k = 1;
Step S3: based on a low-rank matrix decomposition method, according to the distribution rule and the directional characteristic of the rain strips on the small blocks in a statistical model generated by the rain strips, a maximum likelihood estimation method is utilized to construct a small block prior rain removal model under the condition of no moving foreground;
step S4: constructing a moving object detection model according to the structural characteristics of the video foreground support;
step S5: combining the steps S3 and S4, constructing a comprehensive model for alternately optimizing the moving foreground and the rain strip, and establishing a rain removing algorithm based on noise modeling under the moving foreground;
step S6: taking the original rainy video obtained in step S1 as input, and applying the noise-modeling-based rain removal algorithm under moving foreground of step S5 to obtain the de-rained video and statistical variables.
CN201710992669.7A 2017-05-09 2017-10-23 Video rain removing method based on noise modeling Active CN107909548B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2017103193809 2017-05-09
CN201710319380 2017-05-09

Publications (2)

Publication Number Publication Date
CN107909548A CN107909548A (en) 2018-04-13
CN107909548B true CN107909548B (en) 2020-05-15

Family

ID=61841552

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710992669.7A Active CN107909548B (en) 2017-05-09 2017-10-23 Video rain removing method based on noise modeling

Country Status (1)

Country Link
CN (1) CN107909548B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510013B (en) * 2018-07-02 2020-05-12 电子科技大学 Background modeling method for improving robust tensor principal component analysis based on low-rank core matrix
CN108900841B (en) * 2018-07-10 2020-01-03 中国科学技术大学 Video coding method based on image rain removal algorithm
CN109150775B (en) * 2018-08-14 2020-03-17 西安交通大学 Robust online channel state estimation method for dynamic change of self-adaptive noise environment
CN109636738B (en) * 2018-11-09 2019-10-01 温州医科大学 The single image rain noise minimizing technology and device of double fidelity term canonical models based on wavelet transformation
CN109859119B (en) * 2019-01-07 2022-08-02 南京邮电大学 Video image rain removing method based on self-adaptive low-rank tensor recovery
CN110610152B (en) * 2019-09-10 2022-03-22 西安电子科技大学 Multispectral cloud detection method based on discriminative feature learning unsupervised network
CN111738932A (en) * 2020-05-13 2020-10-02 合肥师范学院 Automatic rain removing method for photographed image of vehicle-mounted camera
CN111879293B (en) * 2020-07-31 2023-02-10 国家海洋技术中心 Device and method for in-situ measurement of noise characteristics of rainfall on sea
CN113538297B (en) * 2021-08-27 2023-08-01 四川大学 Image rain removing method based on gradient priori knowledge and N-S equation
CN116012263A (en) * 2023-03-27 2023-04-25 四川工程职业技术学院 Image noise removing method and device, storage medium and electronic equipment
CN117152000B (en) * 2023-08-08 2024-05-14 华中科技大学 Rainy day image-clear background paired data set manufacturing method and device and application thereof
CN117455809B (en) * 2023-10-24 2024-05-24 武汉大学 Image mixed rain removing method and system based on depth guiding diffusion model
CN118014892B (en) * 2024-04-08 2024-06-25 北京航空航天大学 Single image rain removal model construction method and rain removal method based on frequency domain comparison regularization

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103337061A (en) * 2013-07-18 2013-10-02 厦门大学 Rain and snow removing method for image based on multiple guided filtering
CN103700070A (en) * 2013-12-12 2014-04-02 中国科学院深圳先进技术研究院 Video raindrop-removing algorithm based on rain-tendency scale
CN103714517A (en) * 2013-12-12 2014-04-09 中国科学院深圳先进技术研究院 Video rain removing method
CN106067163A (en) * 2016-05-24 2016-11-02 中国科学院深圳先进技术研究院 A kind of image rain removing method based on wavelet analysis and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI480810B (en) * 2012-03-08 2015-04-11 Ind Tech Res Inst Method and apparatus for rain removal based on a single image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103337061A (en) * 2013-07-18 2013-10-02 厦门大学 Rain and snow removing method for image based on multiple guided filtering
CN103700070A (en) * 2013-12-12 2014-04-02 中国科学院深圳先进技术研究院 Video raindrop-removing algorithm based on rain-tendency scale
CN103714517A (en) * 2013-12-12 2014-04-09 中国科学院深圳先进技术研究院 Video rain removing method
CN106067163A (en) * 2016-05-24 2016-11-02 中国科学院深圳先进技术研究院 A kind of image rain removing method based on wavelet analysis and system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A generalized low-rank appearance model for spatio-temporally correlated rain streaks; Y-L; In Proceedings of the IEEE International Conference on Computer Vision; 2012-12-31; full text *
Low-Rank Matrix Factorization under General Mixture Noise Distributions; Xiangyong Cao; 2015 IEEE International Conference on Computer Vision (ICCV); 2016-02-18; full text *
Using the shape characteristics of rain to identify and remove rain from video; N. Brewer; In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR); 2008-12-31; full text *
A method of video raindrop detection and removal; Dong Rong; Acta Automatica Sinica; 2013-07-31; Vol. 39, No. 7; full text *
Video image rain and fog removal based on a U-GAN neural network; Jiang Qi; 图形图像; 2008-12-31; full text *

Also Published As

Publication number Publication date
CN107909548A (en) 2018-04-13

Similar Documents

Publication Publication Date Title
CN107909548B (en) Video rain removing method based on noise modeling
CN108520501B (en) Video rain and snow removing method based on multi-scale convolution sparse coding
Bahnsen et al. Rain removal in traffic surveillance: Does it matter?
Dornaika et al. Building detection from orthophotos using a machine learning approach: An empirical study on image segmentation and descriptors
CN106683119B (en) Moving vehicle detection method based on aerial video image
CN111639564B (en) Video pedestrian re-identification method based on multi-attention heterogeneous network
CN106447674B (en) Background removing method for video
CN106097256B (en) A kind of video image fuzziness detection method based on Image Blind deblurring
CN102637298A (en) Color image segmentation method based on Gaussian mixture model and support vector machine
CN110310238B (en) Single image rain removing method based on compression award and punishment neural network reusing original information
CN102156995A (en) Video movement foreground dividing method in moving camera
CN110415260B (en) Smoke image segmentation and identification method based on dictionary and BP neural network
Trouvé et al. Single image local blur identification
CN109784205B (en) Intelligent weed identification method based on multispectral inspection image
CN112233129A (en) Deep learning-based parallel multi-scale attention mechanism semantic segmentation method and device
CN111815528A (en) Bad weather image classification enhancement method based on convolution model and feature fusion
CN110111267A (en) A kind of single image based on optimization algorithm combination residual error network removes rain method
CN111274964A (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN112364791A (en) Pedestrian re-identification method and system based on generation of confrontation network
Huang et al. SIDNet: a single image dedusting network with color cast correction
Chen et al. Visual depth guided image rain streaks removal via sparse coding
Babu et al. ABF de-hazing algorithm based on deep learning CNN for single I-Haze detection
CN111160282B (en) Traffic light detection method based on binary Yolov3 network
CN111160274B (en) Pedestrian detection method based on binaryzation fast RCNN (radar cross-correlation neural network)
CN106846260A (en) Video defogging method in a kind of computer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant