CN110047041A - A spatial-frequency-domain combined traffic surveillance video rain removal method - Google Patents

A spatial-frequency-domain combined traffic surveillance video rain removal method

Info

Publication number
CN110047041A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN201910158933.6A
Other languages
Chinese (zh)
Other versions
CN110047041B (en)
Inventor
宋传鸣
洪旭
王相海
刘丹
Current Assignee
Liaoning Normal University
Original Assignee
Liaoning Normal University
Priority date
Filing date
Publication date
Application filed by Liaoning Normal University
Priority to CN201910158933.6A
Publication of CN110047041A
Application granted
Publication of CN110047041B


Classifications

    • G06T5/40 Image enhancement or restoration by the use of histogram techniques
    • G06T5/70; G06T5/77 Image enhancement or restoration
    • G06T7/136 Segmentation; edge detection involving thresholding
    • G06T2207/10016 Video; image sequence
    • G06T2207/20028 Bilateral filtering
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The present invention discloses a spatial-frequency-domain combined traffic surveillance video rain removal method with high accuracy, good robustness, fast processing speed, and the ability to discriminate rainfall intensity. First, each video frame is transformed from the RGB color space to the YUV color space. Second, a non-subsampled shearlet transform is applied to the frame and its low-frequency subband is zeroed out, yielding an image rich in edge and contour information, from which an overall edge map is obtained by the maximum between-class variance method. Then, a depth map of the frame is computed by a saliency mapping method and passed through bilateral filtering and a non-subsampled shearlet transform that retains only the high-frequency coefficients, producing a major-edge map. Combining the overall edge map, the major-edge map, and the frame difference of two consecutive frames, the method decides which regions are raindrops or rain streaks and estimates the rainfall intensity. Finally, the pixels in the raindrop/rain-streak regions are repaired by curvature-driven diffusion, yielding the de-rained traffic surveillance video.

Description

A spatial-frequency-domain combined traffic surveillance video rain removal method
Technical field
The present invention relates to the field of intelligent traffic video processing, and in particular to a spatial-frequency-domain combined traffic surveillance video rain removal method that counters the target-loss problem of moving-target detection under rainy weather, offering high accuracy, good robustness, fast processing speed, and adaptivity to rainfall intensity.
Background art
In intelligent transportation systems that use surveillance video as their means of analysis, rainy weather interferes with the automatic tracking and recognition of vehicle targets, often causing target loss and false detections, degrading tracking accuracy, and sometimes causing tracking to fail outright. Enhancing video captured under such adverse weather, so as to overcome the influence of rain, is therefore essential for improving vehicle tracking precision and has become one of the hot topics in the field of machine vision.
Although rain manifests itself in many ways in surveillance video, most raindrops share four common properties: (1) raindrops absorb very little of the light spectrum while reflecting and refracting a great deal of ambient light, so their brightness is noticeably higher than that of the background; (2) the radius of a raindrop is tiny compared with the background area it covers, so its field of view is negligible and its brightness is essentially independent of the background's; (3) because raindrops fall quickly while surveillance cameras capture at relatively low frame rates, the rain streak (or rain stripe) traced by one raindrop generally does not appear in two consecutive frames, so streaks show clear spatial randomness and the pixel at a given position is unlikely to be covered by rain in successive frames; (4) within one monitored scene, the rain streaks in adjacent frames have nearly the same direction.
Based on these properties, researchers have in recent years built a variety of rain removal models that detect and remove the raindrops or rain streaks in video and images.
First, methods based on the brightness of raindrops or rain streaks. By studying the brightness and appearance of raindrops, Garg et al. found an approximately linear relationship between the brightness of pixels occluded by raindrops/rain streaks and that of the unoccluded background pixels, modeled this relationship with physics-based motion blur, and detected raindrops/rain streaks by frame differencing; the method, however, cannot effectively handle scenes close to the camera or containing fast-moving objects. Zhang et al. detect and remove rain streaks by color features; apart from the number of frames examined, the method is very similar in principle to that of Garg et al., and its inherent theoretical limitations make it of limited use for heavy-rain and grayscale scenes.
Second, methods based on the motion and shape of raindrops or rain streaks. Barnum et al. model raindrops with a Gaussian process and propose a frequency-domain rain removal method, but the restoration quality in raindrop regions remains unsatisfactory. Kang et al. introduced morphological component analysis, extracting image edge features by bilateral filtering and removing rain streaks with a dictionary built by sparse coding; unfortunately, since the dictionary atoms cannot distinguish rain-streak features from genuine object edges, the method loses some image detail while removing rain. Huang et al. then classified the dictionary atoms with a support vector machine and principal component analysis, making the atoms' description of rain streaks more accurate and achieving some success; however, the method considers only the shape of raindrops/rain streaks while ignoring the guidance provided by brightness, so it still misjudges rain-free layers and leaves rain residue. Xue et al. proposed a rainfall detection algorithm that joins spatial-domain brightness with wavelet-domain texture; but because the wavelet transform can extract only horizontal, vertical, and diagonal edge features, it captures the multi-directional features of rain streaks inefficiently, restricting the subsequent rain removal, and in particular the removal of raindrops/streaks in drizzle remains incomplete.
Although scholars at home and abroad have proposed many rain removal methods, video and image datasets captured in rainy weather are scarce, so it is difficult to discover the valuable characteristics of rainy scenes by statistical learning on large datasets, and such an approach could not meet the real-time requirements of traffic surveillance video processing in any case. In raindrop/rain-streak detection, regions occluded by rain are often confused with moving-object regions, and detection is particularly poor in dynamic scenes. In rain removal, most existing methods simply replace a rainy pixel with the mean of the preceding and following frames, which inevitably loses detail or leaves rain residue and cannot reach good subjective quality. There is therefore still no traffic surveillance video rain removal method that fully exploits the brightness, shape, spatial-domain, and frequency-domain characteristics of rain, requires little data, is robust, and adapts to different rainfall intensities.
Summary of the invention
To solve the above technical problems of the prior art, the present invention provides a spatial-frequency-domain combined traffic surveillance video rain removal method with high accuracy, good robustness, fast processing speed, and adaptive ability.
The technical solution of the invention is as follows: a spatial-frequency-domain combined traffic surveillance video rain removal method, characterized in that it is carried out according to the following steps:
Step 1. Input two consecutive frames I1, I2 of the traffic surveillance video captured under rainy weather conditions; let their width be W pixels and their height H pixels;
Step 2. Convert the color space of I1 and I2 from RGB to YUV, and set b ← true;
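The color-space conversion of step 2 can be sketched as follows; the patent does not specify which YUV matrix is used, so the BT.601 analog coefficients below are an assumption:

```python
def rgb_to_yuv(r, g, b):
    """RGB -> YUV for one pixel, BT.601 analog coefficients (an assumption;
    the patent only says the frames are converted from RGB to YUV)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance, used in step 3 onward
    u = -0.147 * r - 0.289 * g + 0.436 * b  # blue-difference chrominance
    v = 0.615 * r - 0.515 * g - 0.100 * b   # red-difference chrominance
    return y, u, v
```

Only the Y channel is used by the later detection steps; U and V are carried along so the repaired frame can be converted back.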
Step 3. Compute a depth map from the Y component of I1:
Step 3.1 Convolve the Y component separately with the three uniform blur kernels κ1, κ2, κ3 defined by formula (1), obtaining three smoothed images F1, F2, F3:
Step 3.2 Using the forward-difference operator, compute the first-order differences of the Y component and of F1, F2, F3 along the horizontal direction, and denote the results accordingly;
Step 3.3 Using the forward-difference operator, compute the first-order differences of the Y component and of F1, F2, F3 along the vertical direction, and denote the results accordingly;
Step 3.4 Compute the histograms of the horizontal and vertical difference images to obtain the corresponding probability densities;
Step 3.5 For each pixel (i, j), compute according to formula (2) the Kullback-Leibler divergence Dk(i, j) of each smoothed distribution from the original distribution:
where 0 ≤ i < H-1, 0 ≤ j < W-1, and k ∈ {1, 2, 3}; Wi,j denotes the window of 3 × 3 pixels centered on pixel (i, j); (n, m) denotes the coordinates of any pixel in Wi,j; the horizontal and vertical Kullback-Leibler divergence terms at pixel (n, m) are defined by formulas (3) and (4), respectively:
where k ∈ {1, 2, 3}, and the densities appearing in formulas (3) and (4) are the probability densities, at pixel (n, m), of the horizontal and vertical difference values of the smoothed and original images;
Step 3.6 Compute the saliency map Idof according to formula (5):
where Idof(i, j) denotes the value of the depth map Idof at pixel (i, j);
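The building blocks of steps 3.1-3.5 can be sketched in miniature as follows. The kernel sizes, the rounding used to histogram the differences, and the smoothing constant `eps` are illustrative assumptions, since formulas (1)-(4) are given only by reference:

```python
import math
from collections import Counter

def uniform_blur(img, k):
    """Convolve a nested-list image with a k*k uniform kernel (zero padding),
    as in step 3.1; the kernel sizes of formula (1) are not reproduced here."""
    H, W = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            s = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    y, x = i + di, j + dj
                    if 0 <= y < H and 0 <= x < W:
                        s += img[y][x]
            out[i][j] = s / (k * k)
    return out

def hdiff(img):
    """Forward first-order difference along the horizontal direction (step 3.2),
    rounded to integers so the values can be histogrammed (an assumption)."""
    return [[round(row[j + 1] - row[j]) for j in range(len(row) - 1)] for row in img]

def density(diff):
    """Empirical probability density of the difference values (step 3.4)."""
    flat = [v for row in diff for v in row]
    c = Counter(flat)
    n = len(flat)
    return {v: cnt / n for v, cnt in c.items()}

def kl_term(p, q, v, eps=1e-12):
    """Pointwise KL contribution p(v) * log(p(v)/q(v)), smoothed with eps,
    one summand of the divergences in formulas (3)-(4)."""
    pv, qv = p.get(v, eps), q.get(v, eps)
    return pv * math.log(pv / qv)
```

Summing `kl_term` over a 3 × 3 window for each blur level, and combining the three levels as formula (5) prescribes, yields the saliency/depth map Idof.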
Step 4. Apply bilateral filtering to the depth map Idof, then apply a 2-level non-subsampled shearlet transform to the result, with 2 directional subbands at the coarse scale and 4 at the fine scale, obtaining Idof_NSST;
Step 5. Zero out all the lowest-frequency coefficients of Idof_NSST and perform the 2-level inverse non-subsampled shearlet transform, obtaining an image I′dof rich in edge and contour information;
Step 6. Threshold I′dof with the maximum between-class variance (Otsu) method to obtain the major-edge map Z1, in which a pixel of value 1 corresponds to a major edge region of the video frame and a pixel of value 0 to a non-edge region;
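The maximum between-class variance thresholding used in steps 6 and 9 is Otsu's method; a minimal sketch over a flat list of integer values:

```python
def otsu_threshold(values, bins=256):
    """Return the threshold maximizing the between-class variance (Otsu),
    for integer values in [0, bins); values <= t form class 0."""
    hist = [0] * bins
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0, sum0, best_t, best_var = 0, 0, 0, -1.0
    for t in range(bins):
        w0 += hist[t]                 # weight of class 0
        if w0 == 0:
            continue
        w1 = total - w0               # weight of class 1
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                # mean of class 0
        m1 = (sum_all - sum0) / w1    # mean of class 1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Binarizing I′dof at this threshold gives the 0/1 major-edge map Z1 of step 6 (and, applied to the step-8 image, the overall edge map Q1 of step 9).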
Step 7. Apply a 2-level non-subsampled shearlet transform to the Y component of I1, with 2 directional subbands at the coarse scale and 4 at the fine scale;
Step 8. Zero out all the lowest-frequency coefficients of the result and perform the corresponding inverse transform, obtaining a spatial-domain image rich in edge and contour information;
Step 9. Threshold that image with the maximum between-class variance method to obtain the overall edge map Q1, in which a pixel of value 1 corresponds to an edge region of the video frame and a pixel of value 0 to a non-edge region;
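There is no widely standard library implementation of the non-subsampled shearlet transform, so the zero-the-lowest-band idea of steps 5 and 8 is sketched below with a box-blur stand-in: subtracting a local mean removes the low-frequency content and leaves edge/contour detail. This is only an analogy for the NSST step, not the transform itself:

```python
def highpass_edges(img):
    """Stand-in for 'zero the lowest-frequency subband and inverse-transform':
    subtract a 3x3 local mean (the low-frequency part) from each pixel,
    leaving edge and contour detail. The real method uses an NSST."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            s, n = 0.0, 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    y, x = i + di, j + dj
                    if 0 <= y < H and 0 <= x < W:
                        s += img[y][x]
                        n += 1
            out[i][j] = img[i][j] - s / n   # high-frequency residue
    return out
```

A flat region maps to zero while edges keep large residues, which is exactly what the subsequent Otsu thresholding exploits.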
Step 10. Following formula (6), take the set difference of the overall edge map Q1 and the major-edge map Z1 to obtain the coarse raindrop/rain-streak detection map A:
A = Q1 \ Z1 (6)
Step 11. Following formula (7), compute the frame difference between I1 and I2 to obtain the motion detection result of I1:
where B(i, j) denotes the motion detection result at coordinate (i, j) of I1: B(i, j) = 1 indicates that the pixel at (i, j) in I1 belongs to a moving object, and B(i, j) = 0 that it belongs to a stationary object; the remaining symbols in formula (7) denote the values at (i, j) of the Y components of I1 and I2;
Step 12. Following formula (8), compute the fine raindrop/rain-streak detection map R′:
R′ = A ∩ B (8)
where ∩ denotes set intersection;
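Steps 10-12 combine three binary maps. Formula (7) is given only by reference, so the thresholded absolute frame difference with a hypothetical threshold `tau` below is an assumption:

```python
def detect_rain(Q1, Z1, Y1, Y2, tau):
    """Coarse-to-fine rain detection on 0/1 nested-list maps:
    A = Q1 \\ Z1  (edges in the overall map but not the major-edge map),
    B = thresholded per-pixel frame difference of the Y channels (step 11;
        the threshold tau is an assumption, formula (7) is not reproduced),
    returns R' = A intersect B (step 12)."""
    H, W = len(Q1), len(Q1[0])
    A = [[1 if Q1[i][j] == 1 and Z1[i][j] == 0 else 0 for j in range(W)]
         for i in range(H)]
    B = [[1 if abs(Y1[i][j] - Y2[i][j]) > tau else 0 for j in range(W)]
         for i in range(H)]
    return [[A[i][j] & B[i][j] for j in range(W)] for i in range(H)]
```

The set difference discards stable scene edges (which appear in both maps), and intersecting with the motion map keeps only edge pixels that also changed between the two frames, i.e. rain candidates.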
Step 13. Apply a morphological dilation to R′ with a linear structuring element of length 2 to obtain the final detection map R;
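A sketch of the length-2 linear dilation of step 13; the patent gives only the element's length, so the horizontal orientation and origin below are assumptions:

```python
def dilate_linear2(mask):
    """Morphological dilation of a 0/1 nested-list mask with a length-2
    horizontal linear structuring element (orientation is an assumption):
    a pixel becomes 1 if it or its right neighbor is 1."""
    H, W = len(mask), len(mask[0])
    return [[1 if mask[i][j] or (j + 1 < W and mask[i][j + 1]) else 0
             for j in range(W)] for i in range(H)]
```

The dilation slightly widens each detected streak so that the inpainting mask of step 15 fully covers the rain pixels.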
Step 14. Compute the proportion of the pixels in R that belong to raindrops/rain streaks. If this proportion is below 0.08, the current rainfall is light rain: go to step 14.1; otherwise go to step 15;
Step 14.1 If b = false, go to step 15; otherwise set b ← false, apply a 3-level non-subsampled shearlet transform to the Y component of I1, with 2, 4, and 8 directional subbands from the coarse scale to the fine scale, and return to step 8;
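The rainfall decision of step 14 reduces to a pixel ratio compared against the fixed 0.08 threshold:

```python
def rainfall_is_light(R, threshold=0.08):
    """Step 14: the rainfall is classified as light rain when the fraction
    of pixels flagged in the final detection map R falls below 0.08."""
    total = sum(len(row) for row in R)
    rainy = sum(sum(row) for row in R)
    return rainy / total < threshold
```

When this returns True (and the one-shot flag b is still true), the method retries detection once with the finer 3-level transform of step 14.1 before inpainting.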
Step 15. Using the detection map R as a binary mask, repair the pixel values of the raindrop/rain-streak regions of I1 by the curvature-driven diffusions (CDD) method;
Step 15.1 Using the detection map R as a binary mask, extract the known traffic scene region, where I1(i, j) denotes the pixel value at coordinate (i, j) of I1 and R(i, j) denotes the value at coordinate (i, j) of R;
Step 15.2 Using the detection map R as a binary mask, extract the raindrop/rain-streak region to be repaired;
Step 15.3 Establish the gradient descent flow equation of the CDD model according to formula (9):
where μ0 denotes the pixel values of the known traffic scene region, μ the pixel values to be repaired, ∇ the nabla operator, |∇μ| the modulus of the gradient vector, λ a constant, and g(|s|) the conductivity function defined by formula (10):
Step 15.4 Solve the flow equation of formula (9) iteratively by the finite difference method for the value of each pixel in the region to be repaired, and output the repaired I1;
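The full CDD gradient descent flow of formulas (9)-(10) weights the conductivity by the level-line curvature; the sketch below substitutes plain harmonic (Laplace) diffusion restricted to the masked rain region, which demonstrates the iterative finite-difference repair of step 15.4 without the curvature term:

```python
def inpaint_diffusion(img, mask, iters=200):
    """Simplified stand-in for the CDD repair of step 15: harmonic (Laplace)
    diffusion of known pixels into the masked rain region by fixed-point
    finite-difference iteration. The true CDD flow additionally weights
    the conductivity by the curvature via g(|s|) of formula (10)."""
    H, W = len(img), len(img[0])
    u = [row[:] for row in img]
    for _ in range(iters):
        nxt = [row[:] for row in u]
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                if mask[i][j]:  # update only pixels flagged as rain
                    nxt[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                        + u[i][j - 1] + u[i][j + 1])
        u = nxt
    return u
```

Known pixels act as fixed boundary conditions, so the masked region converges to values consistent with its surroundings; the curvature weighting of the real CDD model additionally keeps long edges from being rounded off.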
Step 16. If all frames of the traffic surveillance video have been processed, the algorithm terminates; otherwise set I1 ← I2, read in one unprocessed video frame as I2, and return to step 2.
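The frame loop of step 16 can be sketched as a sliding two-frame window; `process_pair` is a placeholder standing for steps 2-15 applied to one frame pair:

```python
def derain_video(frames, process_pair):
    """Step 16 as a sliding two-frame loop: repair frame k using frame k+1,
    then shift the window by one frame. process_pair is a hypothetical
    callable standing in for steps 2-15."""
    out = []
    for k in range(len(frames) - 1):
        out.append(process_pair(frames[k], frames[k + 1]))
    return out
```

Only two frames are ever held at once, which is what lets the method claim a small working set and real-time operation.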
Compared with the prior art, the invention has the following technical features. First, compared with the wavelet transform, the non-subsampled shearlet transform captures the edge details of moving objects and the raindrops/rain streaks of different wind directions more precisely in complex traffic scenes. Second, shearlet transforms with different numbers of levels are used according to the rainfall intensity, which saves transform time while identifying rain streaks more accurately. Third, the method mines the depth of field and brightness of raindrops/rain streaks in the spatial domain and their texture and shape in the frequency domain, detects rain by combining the two domains, and repairs the rain-region pixels by curvature-driven diffusion; compared with replacing rainy pixels by the mean of the preceding and following frames, it repairs the raindrop/rain-streak regions while effectively preserving important edges, achieving higher rain removal quality. Fourth, the required data volume is small: two consecutive video frames suffice for detecting and removing rain, which helps guarantee real-time operation. The invention therefore offers high accuracy, good robustness, fast processing speed, and adaptivity to rainfall intensity.
Detailed description of the invention
Fig. 1 compares the rain removal results of the present invention and prior-art methods on traffic surveillance video scene 1.
Fig. 2 compares the rain removal results of the present invention and prior-art methods on traffic surveillance video scene 2.
Fig. 3 compares the raindrop/rain-streak detection results of the present invention and prior-art methods on traffic surveillance video scene 2.
Specific embodiment
A spatial-frequency-domain combined traffic surveillance video rain removal method of the invention is carried out according to steps 1 through 16 as set out above; in this embodiment the constant λ of formula (9) is set to λ = 0.25.
Fig. 1 compares the rain removal results of the present invention with the Garg, Zhang, Kim, and Wei methods on traffic surveillance video scene 1: (a) the original video; (b) the result of the Garg method; (c) the Zhang method; (d) the Kim method; (e) the Wei method; (f) the present invention. As Fig. 1 shows, the present invention removes raindrops/rain streaks more thoroughly than the Garg, Zhang, and Kim methods; compared with the Wei method, it also removes the shadow under the wheels, which benefits the precision of moving-vehicle tracking.
Fig. 2 compares the rain removal results of the present invention with the Garg, Kim, and Ren methods on traffic surveillance video scene 2: (a) the original video; (b) the Garg method; (c) the Kim method; (d) the Ren method; (e) the present invention. As Fig. 2 shows, the Garg and Kim methods fail to remove all raindrops/rain streaks and the Ren method mistakenly erases the moving foreground, whereas the present invention fully preserves the moving targets while removing the rain effectively.
Fig. 3 compares the raindrop/rain-streak detection results of the present invention with the Garg, Kim, and Ren methods on traffic surveillance video scene 2: (a) the original video; (b) the Garg method; (c) the Kim method; (d) the Ren method; (e) the present invention.

Claims (1)

1. a kind of empty-frequency-domain combined Traffic Surveillance Video rain removing method, it is characterised in that carry out in accordance with the following steps:
Step 1. inputs the two continuous frames I of the Traffic Surveillance Video under overcast and rainy weather conditions1、I2If its width is W pixel, height For H pixel;
Step 2. is by I1、I2Color space be transformed into YUV from RGB, and enable b ← true;
Step 3. utilizes I1Y-componentCalculate depth map:
Step 3.1 willThe 3 uniform fuzzy core κ defined with formula (1)1、κ2、κ3Convolution algorithm is carried out respectively, and it is flat to obtain 3 width Sliding image F1、F2、F3:
Step 3.2 utilizes forward-difference operator, calculatesF1、F2And F3Along 1 order difference of horizontal direction, if acquired results point It is not
Step 3.3 utilizes forward-difference operator, calculatesF1、F2And F31 order difference along vertical direction, if acquired results point It is not
Step 3.4 counts respectivelyWithHistogram, to obtain corresponding Probability densityWith
Step 3.5 forEach pixel (i, j) distribution is calculated according to the definition of formula (2)To original distributionKullback-Leibler divergence Dk(i, j):
0≤the i < H-1,0≤j < W-1, k ∈ { 1,2,3 }, Wi,jIt indicates centered on pixel (i, j), size is 3 × 3 pixels Window, (n, m) indicate Wi,jIn any pixel coordinate,It indicates at pixel (n, m)It is right's Kullback-Leibler divergence,It indicates at pixel (n, m)It is rightKullback-Leibler Divergence, definition are provided by formula (3) and formula (4) respectively:
The k ∈ { 1,2,3 },It indicatesThe probability density of value at pixel (n, m),It indicates The probability density of value at pixel (n, m),It indicatesThe probability density of value at pixel (n, m),It indicatesThe probability density of value at pixel (n, m);
Step 3.6 is calculated according to the definition of formula (5)Conspicuousness map Idof:
The Idof(i, j) indicates depth map IdofValue at pixel (i, j);
Step 4. is to depth map IdofBilateral filtering is carried out, then acquired results are subjected to 2 grades of non-lower samplings and shear wave conversion, wherein The directional subband quantity of thick scale and thin scale is respectively 2 and 4, obtains Idof_NSST
Step 5. is by Idof_NSSTLowest frequency coefficient all reset after, 2 grades of non-lower sampling shearing wave inverse transformations are carried out, to obtain Image I ' rich in edge and profile informationdof
Step 6. is using maximum variance between clusters to I 'dofThresholding is carried out, major side hum pattern Z is obtained1, wherein being worth is 1 Pixel corresponds to the major edge regions of video frame, and being worth is that 0 pixel then corresponds to the non-edge of video frame;
Step 7. will2 grades of non-lower sampling shearing wave conversions are carried out, wherein the directional subband number of thick scale and thin scale is distinguished For 2 and 4, obtain
Step 8. willLowest frequency coefficient all reset after, execute corresponding inverse transformation, thus obtain spatial domain, it is rich Image containing edge and profile information
Step 9. utilizes maximum variance between clusters pairThresholding is carried out, whole marginal information figure Q are obtained1, wherein being worth is 1 Pixel corresponds to the fringe region of video frame, and being worth is that 0 pixel then corresponds to the non-edge of video frame;
Step 10. is according to the definition of formula (6), to whole marginal information figure Q1With major side hum pattern Z1Carry out set difference fortune It calculates, acquires raindrop/rain line Rough Inspection mapping A:
A=Q1\Z1 (6)
Step 11. calculates I according to the definition of formula (7)1And I2Between frame it is poor, to obtain I1Motion detection result:
The B (i, j) indicates I1In be located at coordinate (i, j) at motion detection result, wherein B (i, j)=1 indicate I1In be located at Pixel at (i, j) belongs to moving object, and B (i, j)=0 indicates I1In be located at (i, j) at pixel belong to stationary object,It indicatesIn be located at (i, j) at pixel value,Indicate I2Y-component,It indicatesIn be located at (i, j) at Pixel value;
Step 12. According to the definition of formula (8), compute the fine raindrop/rain-streak detection map R':
R '=A ∩ B (8)
where ∩ denotes the set intersection operation;
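With Q1, Z1 and B stored as 0/1 arrays, the set difference of formula (6) and the intersection of formula (8) reduce to element-wise boolean operations; a minimal sketch:

```python
import numpy as np

def detect_rain(Q1, Z1, B):
    """Formula (6): A = Q1 \\ Z1 removes scene edges from the whole-edge map;
    formula (8): R' = A intersected with B keeps only moving candidates."""
    A = np.logical_and(Q1 == 1, Z1 == 0)       # set difference Q1 \ Z1
    R_fine = np.logical_and(A, B == 1)         # intersection with motion mask
    return A.astype(np.uint8), R_fine.astype(np.uint8)
```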
Step 13. Apply a morphological dilation to R' with a linear structuring element of length 2 to obtain the final detection map R;
Step 14. Count the proportion of raindrop/rain-streak pixels in R among all pixels of the frame. If this proportion is less than 0.08, the current rainfall is light rain: go to step 14.1; otherwise go to step 15;
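Steps 13 and 14 can be sketched as below; the patent does not state the orientation of the length-2 linear structuring element, so a vertical element (matching the typical orientation of rain streaks) is assumed here:

```python
import numpy as np

def dilate_linear2(mask):
    """Morphological dilation with a length-2 vertical structuring element:
    each rain pixel also marks the pixel directly below it."""
    shifted = np.zeros_like(mask)
    shifted[1:, :] = mask[:-1, :]
    return np.maximum(mask, shifted)

def is_light_rain(R, ratio=0.08):
    """Step 14: rainfall counts as light rain when rain pixels
    cover less than 8% of the frame."""
    return R.mean() < ratio
```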
Step 14.1. If b = false, go to step 15; otherwise, set b ← false, apply a 3-level non-subsampled shearlet transform to the luminance component of I1, with 2, 4 and 8 directional subbands from the coarse scale to the fine scale, obtain the corresponding coefficients, and return to step 8;
Step 15. Using the detection map R as a binary mask, repair the pixel values of the raindrop/rain-streak regions in I1 by the Curvature Driven Diffusions (CDD) method;
Step 15.1. Using the detection map R as a binary mask, extract the known traffic-scene region, where I1(i,j) denotes the pixel value at coordinate (i,j) in I1 and R(i,j) denotes the pixel value at coordinate (i,j) in R;
Step 15.2. Using the detection map R as a binary mask, extract the raindrop/rain-streak region to be repaired;
Step 15.3. Establish the gradient descent flow equation of the CDD model according to the definition of formula (9):
where μ0 denotes the pixel values of the known traffic-scene region, μ denotes the pixel values to be repaired, ∇ denotes the Nabla operator, |∇μ| denotes the modulus of the gradient vector ∇μ, λ is a constant, and g(|s|) denotes the guidance function, whose definition is given by formula (10):
Step 15.4. Solve the gradient descent flow equation of formula (9) iteratively by the finite difference method to obtain the value of each pixel in the region to be repaired, and output the repaired I1;
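Formula (9)'s gradient descent flow is solved by finite-difference iteration in step 15.4. The sketch below illustrates that iterative structure with a simplified harmonic (Laplacian) diffusion restricted to the masked rain pixels; it omits the curvature guidance term of the full CDD model, and the step size and iteration count are assumed values.

```python
import numpy as np

def diffuse_inpaint(img, mask, n_iter=200, dt=0.2):
    """Fill mask==1 pixels by diffusing in values from their neighbors.

    Known pixels (mask==0) are held fixed every iteration, which plays the
    role of the fidelity term lambda * (mu0 - mu) on the known region.
    NOTE: simplified harmonic diffusion, not the full curvature-weighted CDD.
    """
    u = img.astype(float).copy()
    damaged = mask == 1
    for _ in range(n_iter):
        # Discrete 4-neighbor Laplacian via array shifts.
        lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
               np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)
        u[damaged] += dt * lap[damaged]        # update only rain pixels
    return u
```

On a frame with isolated rain streaks, the masked pixels converge to a smooth interpolation of the surrounding scene; the full CDD model additionally weights the diffusion by curvature so that long edges are continued rather than blurred.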
Step 16. If all frames of the traffic surveillance video have been processed, the algorithm terminates; otherwise, set I1 ← I2, read in one unprocessed video frame as I2, and return to step 2.
CN201910158933.6A 2019-03-04 2019-03-04 Space-frequency domain combined traffic monitoring video rain removing method Active CN110047041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910158933.6A CN110047041B (en) 2019-03-04 2019-03-04 Space-frequency domain combined traffic monitoring video rain removing method

Publications (2)

Publication Number Publication Date
CN110047041A true CN110047041A (en) 2019-07-23
CN110047041B CN110047041B (en) 2023-05-09

Family

ID=67274472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910158933.6A Active CN110047041B (en) 2019-03-04 2019-03-04 Space-frequency domain combined traffic monitoring video rain removing method

Country Status (1)

Country Link
CN (1) CN110047041B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012066564A1 (en) * 2010-11-15 2012-05-24 Indian Institute Of Technology, Kharagpur Method and apparatus for detection and removal of rain from videos using temporal and spatiotemporal properties.
CN104299199A (en) * 2014-10-22 2015-01-21 中国科学院深圳先进技术研究院 Video raindrop detection and removal method based on wavelet transform

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUA ZIZHENG; HUA ZEXI: "An image edge detection algorithm combining NSCT high- and low-frequency features", Journal of Sichuan Normal University (Natural Science Edition) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110544217A (en) * 2019-08-30 2019-12-06 深圳市商汤科技有限公司 image processing method and device, electronic equipment and storage medium
CN112949560A (en) * 2021-03-24 2021-06-11 四川大学华西医院 Method for identifying continuous expression change of long video expression interval under two-channel feature fusion
CN112949560B (en) * 2021-03-24 2022-05-24 四川大学华西医院 Method for identifying continuous expression change of long video expression interval under two-channel feature fusion

Also Published As

Publication number Publication date
CN110047041B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
Chen et al. A novel color edge detection algorithm in RGB color space
CN102968782B Automatic extraction method for salient objects in color images
CN102298781B (en) Motion shadow detection method based on color and gradient characteristics
CN109472788B (en) Method for detecting flaw on surface of airplane rivet
CN101551853A (en) Human ear detection method under complex static color background
Xu et al. Real-time pedestrian detection based on edge factor and Histogram of Oriented Gradient
CN111104943A (en) Color image region-of-interest extraction method based on decision-level fusion
CN105096342A (en) Intrusion detection algorithm based on Fourier descriptor and histogram of oriented gradient
CN113221881B (en) Multi-level smart phone screen defect detection method
CN112561899A (en) Electric power inspection image identification method
CN111915558B (en) Pin state detection method for high-voltage transmission line
CN114119586A (en) Intelligent detection method for aircraft skin defects based on machine vision
Nejati et al. License plate recognition based on edge histogram analysis and classifier ensemble
CN110047041A (en) A kind of empty-frequency-domain combined Traffic Surveillance Video rain removing method
CN113177439B (en) Pedestrian crossing road guardrail detection method
Pan et al. Single-image dehazing via dark channel prior and adaptive threshold
CN113205494B (en) Infrared small target detection method and system based on adaptive scale image block weighting difference measurement
CN117392565A (en) Automatic identification method for unmanned aerial vehicle power inspection defects
Wang et al. Machine vision-based conveyor belt tear detection in a harsh environment
Aung et al. Study for license plate detection
CN109145875B (en) Method and device for removing black frame glasses in face image
CN112991326A (en) Cleaning quality evaluation method
Uthaib et al. Vehicle plate localization and extraction based on hough transform and bilinear operations
Nikonorov et al. Effective algorithms of flare detection with analysis of the shape in real-time video surveillance systems
Al-Amaren et al. Edge Map Extraction of an Image Based on the Gradient of its Binary Versions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant