CN114863332A - Raindrop detection method based on event camera - Google Patents

Raindrop detection method based on event camera

Info

Publication number
CN114863332A
Authority
CN
China
Prior art keywords: event, raindrop, point, probability, camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210466828.0A
Other languages
Chinese (zh)
Inventor
韩斌
杨君宇
董岩
王硕
龙镇南
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202210466828.0A
Publication of CN114863332A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/44: Event detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10: Complex mathematical operations
    • G06F17/18: Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention provides a raindrop detection method based on an event camera, belonging to the field of image processing. The method first uses an event camera to collect data from a rainfall scene and sets the initial probability P(A) of every event A in the data to 0, then searches for and determines all events adjacent to event A in time and space. Next, according to the dynamic and optical characteristics of the raindrop falling process, it computes a probability function score(A, Bᵢ) between each event A and all of its adjacent events Bᵢ, and iteratively updates the probability of each event point. Finally, according to the probability value of each event point A after the iteration finishes, all data points are divided into raindrop events and non-raindrop events, completing the raindrop recognition work. On real rainy- and snowy-weather data collected outdoors, the method accomplishes raindrop recognition well even in the absence of prior information and ground-truth images.

Description

Raindrop detection method based on event camera
Technical Field
The invention relates to the field of image processing, in particular to a raindrop detection method based on an event camera.
Background
In rainy and snowy weather, visibility is greatly reduced by raindrops and snowflakes, which seriously degrades the quality of images and videos collected by outdoor cameras and other visual acquisition systems. In general, raindrops near the camera severely occlude the objects behind them, distant raindrops form a mist, and dense rain and snow cause strong light refraction, all of which seriously affect the completeness of the image information and any subsequent image processing.
In recent years, researchers at home and abroad have worked on a variety of raindrop recognition and rain removal algorithms. These algorithms can be roughly divided into two categories by their input: raindrop recognition for video, and raindrop recognition for a single image. The former is comparatively simple: because the data volume is larger and timestamp information exists, raindrop recognition and removal algorithms based on traditional spatio-temporal models or on deep learning models can be built by analyzing how the value of a given pixel changes across a few consecutive frames, or by searching for rain streaks in the video.
Raindrop recognition and removal algorithms for a single image roughly follow three ideas. The first is to design a filter based on the optical and dynamic properties of raindrops, turning a rainy image into a rain-free one by filtering. The second uses prior information to provide constraint conditions between the original image and the target image, and then solves for the rain-free target image with an optimization algorithm. Third, in recent years deep learning has been widely used to build raindrop recognition algorithms for single images.
Chinese patent application publication No. CN107909556 discloses a "video image rain removing method based on a convolutional neural network", which uses the convolutional neural network to process the high frequency part in each frame of image, outputs a rain-free image, and synthesizes the rain-free image with the low frequency part to obtain a video image after rain removal.
Chinese patent application publication No. CN104978718 discloses "a video raindrop removal method and system based on image entropy", which determines a rain-containing portion of an image by calculating the local entropy of the image and combining the area and the angle. The method is easily interfered by high-speed moving objects in the video.
Chinese patent application publication No. CN103337061 discloses "an image rain and snow removing method based on multiple guided filtering", which takes a low frequency part of an image as a guide image, and superimposes the guide filtered high frequency part with the low frequency part to obtain a rain-removed image, however, the method may reduce the definition of the image.
Chinese patent application publication No. CN109886900A discloses a "synthesized rain removing method based on dictionary training and sparse representation", which obtains a rain dictionary and a no-rain dictionary by constructing a "rain/no-rain" training set, and combines the two dictionaries after sparse-representation processing to obtain a rain-removed image. This method is currently only applicable to artificially synthesized rainfall images.
Chinese patent application publication No. CN110111267 discloses "a single image rain-removing method based on an optimization algorithm combined with a residual network", which uses the alternating direction method of multipliers (ADMM) to solve a rainy-day imaging model, embeds a residual network and a noise-reduction algorithm in the ADMM framework, and splits an image shot in the rain into a clear rain-free background part and a rain part. Its rain-removal effect is comparatively weak relative to the other examples.
Chinese patent application publication No. CN113947538 discloses a "multi-scale efficient convolution self-attention single image rain removal method", which feeds a rain image into a network model that fuses an improved Transformer self-attention module with a multi-scale spatial feature fusion module for iterative training, and outputs a processed image close to a rain-free image by optimizing a mixed loss function. This method requires a data set larger than most existing raindrop data sets in order to train well.
However, there is currently no raindrop recognition algorithm suited to event cameras. An event camera is a biologically inspired visual sensor. Unlike a conventional camera, an event camera outputs data, an "event" (t, x, y, p), if and only if the accumulated brightness change at some pixel reaches a threshold, where t is the time the event occurred, (x, y) is the position where it occurred, p = 0 means the pixel was recorded because its brightness decreased, and p = 1 means it was recorded because its brightness increased. Compared with a traditional camera, an event camera offers asynchronous response, low latency, high dynamic range, and low power consumption, and is widely used in machine vision, autonomous driving, optical flow, motion capture, and related fields.
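As a concrete illustration of this output format, the stream of (t, x, y, p) tuples can be modeled directly; the type name and sample values below are illustrative, not taken from any particular camera SDK:

```python
from typing import NamedTuple

class Event(NamedTuple):
    """One event-camera output sample (t, x, y, p) as described above."""
    t: float  # timestamp at which the brightness change was registered
    x: int    # pixel column of the change
    y: int    # pixel row of the change
    p: int    # polarity: 1 = brightness increased, 0 = brightness decreased

# A tiny hand-made stream: one brightness-increase event, one brightness-decrease event
stream = [Event(0.001, 640, 360, 1), Event(0.004, 640, 362, 0)]
brightening = [e for e in stream if e.p == 1]
```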
Therefore, there is a need to invent a raindrop recognition algorithm based on an event camera.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a raindrop recognition algorithm based on an event camera, which finds the event points originating from raindrops among all events recorded by the camera. The algorithm analyzes the physical characteristics of the raindrop falling process, builds a spatio-temporal correlation model on them, and uses the model to predict, for each event recorded by the event camera, the probability that it was generated by a raindrop, thereby realizing raindrop recognition.
According to an aspect of the present invention, there is provided a raindrop detection method based on an event camera, which includes the steps of:
S1: collect data from a rainfall scene with an event camera, set the initial probability P(A) of every event A in the rainfall or snowfall data to 0, and search for and determine all events (B₁, B₂, …, Bᵢ) adjacent to event A in time and space, where Bᵢ denotes the i-th event adjacent to A;
s2: according to the dynamic characteristics and the optical characteristics in the raindrop falling process, for each event A, calculating a probability function between the event A and all adjacent events Bscore(A,B i ) Probability function score (A, B) i ) The larger the value is, the more the space-time relationship between the event A and the adjacent event B conforms to the space-time relationship of two adjacent event points in the same rain strip;
s3: updating the probability of each event point A step by step according to the sequence of the timestamps from small to large;
s4: and according to the probability value of each event point A after the iteration is finished, all data points are divided into raindrop events and non-raindrop events, so that the raindrop identification work is finished.
Further, the optical characteristics of the raindrop falling process in step S2 mean that, for an RGB camera, a single raindrop occupies no more than one pixel point in a frame of image.
When the background becomes covered by a raindrop, the brightness of the corresponding pixel rises: each occurrence of the event "a raindrop appears at some position on the lens" makes the event camera record one event point (t, x, y, 1), and each occurrence of the event "a raindrop leaves some position on the lens" makes it record one event point (t, x, y, 0). Connecting all event points (t, x, y, 0), or all event points (t, x, y, 1), along the timestamp direction yields an accurate raindrop motion trajectory,
where t is the time the event point occurred, x and y are the abscissa and ordinate at which it occurred, 1 indicates that the brightness at the event point increased, and 0 indicates that it decreased.
Further, the dynamic characteristics of the raindrop falling process in step S2 mean that, within the field of view of the lens, a raindrop is in a state of uniform linear motion; ignoring the influence of wind, its terminal velocity in the vertical direction can, in the ideal case, be approximated as:

v = ρgd² / (18μ)

where v is the terminal falling speed of the raindrop in the vertical direction, ρ is the raindrop density, g is the gravitational acceleration, d is the raindrop diameter, and μ is the air viscosity coefficient.
Since the raindrop is taken to move uniformly in a straight line while falling, its velocity component in the y direction is a constant downward speed. In the horizontal direction, the apparent speed of the raindrop in the camera's field of view is determined by the wind speed and by the camera's own horizontal movement, so the horizontal speed u of the raindrop is taken to satisfy a Gaussian distribution:

u ~ N(0, σ²)

From observation of raindrop streaks, the standard deviation σ of this Gaussian distribution is taken to satisfy:

σ = 0.2v.
further, in S1, if the time difference between two events in A, B and the coordinate difference between the x direction and the y direction in space are smaller than "adjacent threshold" (the adjacent threshold includes event difference dt, coordinate difference dx in x direction, and coordinate difference dy in y direction, and the adjacent threshold is selected according to actual conditions, and the adjacent threshold has no specific physical meaning, and means that the coordinate difference between two points in x axis is smaller than dx, the coordinate difference in y axis is smaller than dy, and the time difference between the events is smaller than dt, and the events are considered to be adjacent), then A, B two events are considered to be adjacent events.
Further, for event A(t₁, x₁, y₁, p₁) and event B(t₂, x₂, y₂, p₂), where event A occurs before event B, i.e. t₁ < t₂, the probability function is defined piecewise:

score(A, B) = 1, if y₂ < y₁ and |x₂ - x₁| < |y₂ - y₁|;
score(A, B) = -1, if y₂ > y₁;
score(A, B) = 0, if y₂ < y₁ and |x₂ - x₁| ≥ |y₂ - y₁|;
score(A, B) = 0.4, if y₂ = y₁ and |x₂ - x₁| < 2,

where score(A, B) is the probability function between events A and B, (x₁, y₁), t₁, and p₁ are the position, occurrence time, and polarity of event A, and (x₂, y₂), t₂, and p₂ are the position, occurrence time, and polarity of event B.
When y decreases along the time axis and the change in the x direction is smaller than the change in the y direction, A-B is considered to follow the motion law of raindrops and the score is 1;
when y instead increases along the time axis, a rain streak between A and B is considered impossible and the score is -1;
when y decreases along the time axis but the change in the x direction is too large (larger than the change in the y direction), the score is 0;
and when there is no motion in the y direction and the motion in the x direction is small (less than two pixels), the score is 0.4.
Further, in step S3, first arrange the event points by t value from small to large and update the probability value P of each event point A from front to back along the timestamps, denoting the resulting probability of event point A as P₁; then reverse the ordering, arrange the event points by t value from large to small, repeat the iteration, and denote the probability obtained for event point A as P₂; finally, for each event point A, take the larger of P₁ and P₂ as its true probability value.
specifically, the iterative calculation of the probability value P is:
Figure BDA0003624624700000052
wherein, P (A) is the probability of the event point A generating raindrop movement, and (B) is the probability of the event point A generating raindrop movement 1 ,B 2 ,...,B i ) For the adjacent point event of the event point A in time and space in the collected data, num (B) 1 ,B 2 ,...,B i ) Refers to the number of neighbors of event A, score (A, B) i ) For event A with all adjacent events B i A probability function of α ∈ [0,1 ]]Passing a parameter, P (B), for the state between two adjacent event points i ) Refers to a neighboring point B for event A i Event point B i Resulting from the probability of raindrop movement.
Further, in step S4, a probability threshold P₀ is set; for each event point A, if P(A) > P₀ the event point A is identified as a raindrop event, otherwise as a non-raindrop event, and raindrop recognition is achieved once all event points have been traversed. Empirically, P₀ takes a value of 0.6 to 0.7.
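Since the update formula itself appears only as an image in the source, the sketch below substitutes a plausible stand-in: an α-weighted blend of each neighbor's probability and the pair score, averaged over the neighbors and clipped to [0, 1], followed by the threshold test of step S4. The blend is an assumption, not the patented formula:

```python
def update_probability(neighbor_probs, neighbor_scores, alpha=0.8):
    """Assumed update for P(A): average alpha*P(Bi) + (1 - alpha)*score(A, Bi)
    over all neighbors Bi and clip to [0, 1]. The exact patented formula is
    not reproduced in the text; this stand-in only mimics its inputs."""
    if not neighbor_probs:
        return 0.0
    n = len(neighbor_probs)
    raw = sum(alpha * p + (1 - alpha) * s
              for p, s in zip(neighbor_probs, neighbor_scores)) / n
    return min(1.0, max(0.0, raw))

def is_raindrop(prob, p0=0.65):
    """Step S4: classify as a raindrop event iff P(A) > P0 (text suggests 0.6-0.7)."""
    return prob > p0
```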
Generally, compared with the prior art, the technical scheme conceived by the invention has the following beneficial effects:
the invention provides an event camera-based image processing method, which is characterized in that a space-time correlation model based on the physical characteristics of the raindrop falling process in an event camera is established by analyzing the physical characteristics of the raindrop falling process in the event camera, and the probability of raindrop generation of each event recorded by the event camera is predicted through the model, so that raindrop identification work is completed aiming at real data in rainy and snowy days collected outdoors under the condition of lacking prior information and true value images.
Drawings
Fig. 1 is a flowchart of an event camera-based raindrop recognition algorithm according to an embodiment of the present invention.
Fig. 2 is a visualization result of event camera raw data when a semi-air rainfall scene is shot in the embodiment of the present invention.
Fig. 3 is a raindrop detection result corresponding to fig. 2.
Fig. 4 is a visualization result of the original data of the event camera when the rainfall scene on the road is shot in the embodiment of the invention.
FIG. 5 is a raindrop detection result corresponding to FIG. 4.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In general, rainy and snowy weather reduces the clarity of images and videos collected by cameras working outdoors and makes it harder for the camera to capture the target object, causing information loss and a strongly negative effect on subsequent work. The raindrop detection method based on an event camera provided by the invention analyzes the physical characteristics of the raindrop falling process, establishes a spatio-temporal correlation model based on them, and uses the model to predict the probability that each event recorded by the event camera was generated by a raindrop, thereby completing raindrop recognition on real rainy- and snowy-weather data collected outdoors in the absence of prior information and ground-truth images.
Specifically, in terms of optical characteristics, for a conventional RGB camera a single raindrop occupies very few pixels in one frame of image. Moreover, because a raindrop refracts light into the lens over a very large angle, the brightness of a pixel usually rises when the background behind it becomes covered by a raindrop. As mentioned above, most non-deep-learning raindrop recognition and removal algorithms for video rest on two basic assumptions: that a single raindrop occupies no more than one pixel, and that the raindrop moves in uniform linear motion.
For the event camera, this means that at a given moment the event "a raindrop appears at some position on the lens" generates at most one event point recorded by the camera, i.e. one output datum (t, x, y, p) as described above. Specifically, since p = 1 indicates a brightness increase and p = 0 a brightness decrease, each occurrence of "a raindrop appears at some position on the lens" makes the event camera record one event point (t, x, y, 1), and each occurrence of "a raindrop leaves some position on the lens" makes it record one event point (t, x, y, 0). Therefore, unlike objects that occupy multiple pixels (pedestrians, vehicles, falling leaves, birds, and the like that often appear during data collection, or the whole background when the lens itself moves), if a segment of event points produced by a single raindrop's motion is available, connecting all its event points (t, x, y, 0), or all its event points (t, x, y, 1), along the timestamp direction yields an accurate raindrop motion trajectory.
In the aspect of motion characteristics, compared with other motion forms (walking of pedestrians, vehicle motion, tree and leaf shaking, integral movement of a background when a lens moves and the like which are frequently generated in the data acquisition process), the raindrop falling process has some unique physical characteristics.
First, within the field of view of the lens, raindrops are in a state of uniform linear motion, because the falling distance is long enough for them to reach terminal velocity. Ignoring the influence of wind, the terminal velocity in the vertical direction can, in the ideal case, be approximated as:

v = ρgd² / (18μ)

where v is the terminal falling speed of the raindrop in the vertical direction, ρ is the raindrop density, g is the gravitational acceleration, d is the raindrop diameter, and μ is the air viscosity coefficient.
Because the wind speed can be regarded as constant over a short time, the raindrop can be taken to move uniformly in a straight line while falling, with a constant downward velocity component in the y direction. In the horizontal direction, the apparent speed of the raindrop in the camera's field of view is determined by the wind speed and by the camera's own horizontal movement, and the horizontal speed u of the raindrop can be taken to satisfy a Gaussian distribution:

u ~ N(0, σ²)

From observation of raindrop streaks, the standard deviation of this Gaussian distribution can be taken, approximately, to satisfy:

σ = 0.2v
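Assuming the terminal-velocity relation described above is the standard Stokes-law expression in these symbols, v = ρgd²/(18μ), it can be evaluated numerically; the fluid constants below are illustrative values for water drops in air:

```python
def terminal_velocity(d, rho=1000.0, g=9.81, mu=1.81e-5):
    """Vertical terminal speed v = rho*g*d**2 / (18*mu) for a drop of diameter d (m).
    rho: drop density (kg/m^3), g: gravity (m/s^2), mu: air viscosity (Pa*s)."""
    return rho * g * d * d / (18.0 * mu)

# A 0.1 mm drizzle droplet, small enough for the Stokes regime to be plausible
v = terminal_velocity(1e-4)   # roughly 0.3 m/s
sigma = 0.2 * v               # std. dev. of the assumed horizontal-speed Gaussian
```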
the invention is realized by setting a probability function P (A) epsilon [0,1 ] for each event A recorded by an event camera]The larger p (a) indicates that the event a is generated from raindrops, and the initial value is set to 0. For the adjacent event (B) of the event A in time and space 1 ,B 2 ,...,B i ) Setting a probability function score (A, B) i )∈[0,1],score(A,B i ) The larger the size, the larger the size is represented by A-B i The closer to the raindrop drop falling trajectory is in physical characteristics. Thereby, synthesizeConsidering the probability that each adjacent event of event A is generated from raindrops, and the probability function score (A, B) between event A and each adjacent event i ) The probability that the event a is generated from raindrops can be obtained.
Further, the event camera-based raindrop recognition algorithm comprises the following steps:
step one, collecting data of a section of rainfall or snowfall scene by using an event camera, setting an initial probability P (A) to be 0 for each event A in the data, and finding all adjacent events (B) of the event A on time and space 1 ,B 2 ,...,B i );
Step two, calculating a probability function score (A, B) between A and all adjacent events for each event A according to the dynamic characteristics and the optical characteristics summarized in the raindrop falling process i ) The larger the value is, the more the space-time relationship between the event A and the adjacent event conforms to the space-time relationship of two adjacent event points in the same rain strip;
step three, gradually updating the probability of each event point A according to the sequence of the timestamps from small to large;
and step four, according to the probability value of each event point A after the iteration is finished, all data points are divided into raindrop events and non-raindrop events, and therefore the raindrop identification work is finished.
Specifically, in step one, the two events A and B are considered adjacent if their time difference and their coordinate differences in the x and y directions are all small.
In the second step, considering that the fall of a raindrop is close to uniform and, in most cases, monotonically downward in the y direction, with the x-direction speed usually smaller than the y-direction speed, for event A(t₁, x₁, y₁, p₁) and event B(t₂, x₂, y₂, p₂) where event A occurs before event B, i.e. t₁ < t₂, the following probability function is given:

score(A, B) = 1, if y₂ < y₁ and |x₂ - x₁| < |y₂ - y₁|;
score(A, B) = -1, if y₂ > y₁;
score(A, B) = 0, if y₂ < y₁ and |x₂ - x₁| ≥ |y₂ - y₁|;
score(A, B) = 0.4, if y₂ = y₁ and |x₂ - x₁| < 2.

If and only if y decreases along the time axis and the change in the x direction is smaller than that in the y direction is A-B considered to follow the motion law of raindrops, with a score of 1. Since raindrops almost never rise, when y instead increases along the time axis, a rain streak between A and B is considered impossible, and the score is -1. When y decreases along the time axis but the x direction changes too much, the score is 0. Finally, by the Gaussian distribution of the horizontal speed mentioned above, in 95% of cases the horizontal displacement is less than 0.4 times the vertical displacement, so when there is no motion in the y direction and the x-direction motion is also small, the score is set to 0.4.
In the third step, all events collected and recorded by the event camera are sorted by timestamp, i.e. arranged by increasing t value, and the P value of each event point is updated sequentially from front to back along the timestamps, which carries out the iteration. For event points with early timestamps, however, the P value may be inaccurate because it has been iterated fewer times. To solve this, the P value of event point A obtained by the pass above is denoted P₁; the timestamps are then ordered in reverse, the event points arranged by decreasing t value, and the algorithm repeated, with the P value of event point A after this pass denoted P₂. Now it is the event points late in the timestamp order whose P₂ value may be less accurate for having been iterated fewer times, so for each event point A the larger of P₁ and P₂ is taken as its P value, which removes the effect of too few iterations on accuracy.
Specifically, the P value is iterated by a formula (rendered only as an image in the original) that updates P(A) from num(B₁, …, Bᵢ), score(A, Bᵢ), α, and P(Bᵢ),
where P(A) is the probability that event point A was generated by raindrop motion, (B₁, B₂, …, Bᵢ) are the events adjacent to A in time and space in the collected data, score(A, Bᵢ) is the raindrop correlation score given in step two, and α ∈ [0, 1] is the state-transfer parameter between two adjacent event points.
In step four, a probability threshold P₀ can be set; for each event point A, if P(A) > P₀, event point A is identified as a raindrop event, otherwise as a non-raindrop event. Raindrop recognition is achieved once all event points have been traversed.
In order to describe the method of the present invention in more detail, an embodiment of the present invention provides a raindrop recognition method based on an event camera, and belongs to the technical field of image processing.
Fig. 1 is a flowchart of an event camera-based raindrop recognition algorithm according to an embodiment of the present invention, and as shown in fig. 1, the event camera-based raindrop recognition algorithm according to an embodiment of the present invention includes the following steps:
step one, acquiring data to be processed in rainy and snowy weather by using an event camera: the experiment was performed using a propheesee Gen4 event camera produced by propheesee. The perspective of a lens of the Prophesee Gen4 reaches 82 degrees, the resolution of the chip is 1280x720, and the camera is connected with a notebook computer through a USB3.0 to perform data transmission. In the data acquisition process, a Gen4 camera is used for shooting and recording outdoor scenes for a period of time in rainy and snowy weather.
Step two, primary processing of the data: the data recorded by the event camera (the Gen4 records in the raw format) is converted into a csv file in which each line holds all the information (t, x, y, p) of one event point, and all data points are arranged along the timestamp, i.e. all event points are ordered by increasing t value.
Step three, setting an initial probability P (A) to 0 for each event A in the data, and finding all the adjacent events (B) of the event A in time and space 1 ,B 2 ,...,B i ) The search method is to give a threshold dt, dx, dy if event A (t) 1 ,x 1 ,y 1 ,p 1 ) And event B (t) 2 ,x 2 ,y 2 ,p 2 ) Satisfies the following conditions:
|t₁ − t₂| < dt
|x₁ − x₂| < dx
|y₁ − y₂| < dy
event a and event B are considered spatio-temporally adjacent events.
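The adjacency test above reduces to three absolute-difference comparisons; a minimal sketch (the helper name `is_adjacent` is illustrative):

```python
def is_adjacent(a, b, dt, dx, dy):
    """a, b are event tuples (t, x, y, p); True if they are
    spatio-temporally adjacent under thresholds dt, dx, dy."""
    return (abs(a[0] - b[0]) < dt and
            abs(a[1] - b[1]) < dx and
            abs(a[2] - b[2]) < dy)
```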
Step four, sequentially calculating the probability function score(A, Bᵢ) between event A and each of its adjacent events, in order of increasing timestamp. The calculation method is mentioned above, specifically:
(equation image: piecewise probability function score(A, B), taking the values 1, −1, 0 or 0.4 according to the relative motion of events A and B)
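A hedged sketch of the score function: the four case values (1, −1, 0, 0.4) come from claim 5 of this patent, but the exact inequality bounds are not fully legible in the text, so the conditions below are an illustrative reading (image y grows downward, so a falling raindrop moves to larger y as t increases):

```python
def score(a, b):
    """Probability function score(a, b); a occurs before b, both (t, x, y, p)."""
    dy = b[2] - a[2]        # vertical change over time (positive = falling)
    dx = abs(b[1] - a[1])   # horizontal change magnitude
    if dy < 0:
        return -1           # y moved backwards along the time axis: no rain streak
    if dy == 0:
        return 0.4 if dx < 2 else 0   # no vertical motion, at most small drift
    if dx > dy:
        return 0            # horizontal change dominates the vertical fall
    return 1                # motion consistent with a falling raindrop
```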
and step five, sequentially updating the P value of each event point in order of increasing timestamp, using the calculation formula given above:
(equation image: iterative update of P(A) from num(B₁, ..., Bᵢ), score(A, Bᵢ), α and P(Bᵢ); the terms are defined below)
wherein P(A) is the probability that event point A was generated by raindrop movement, (B₁, B₂, ..., Bᵢ) are the events adjacent to event point A in time and space in the collected data, num(B₁, B₂, ..., Bᵢ) is the number of neighbours of event A, score(A, Bᵢ) is the probability function between event A and adjacent event Bᵢ, α ∈ [0, 1] is the state transfer parameter between two adjacent event points, and P(Bᵢ) is the probability that the adjacent event point Bᵢ was generated by raindrop movement. In the data processing, the state transfer parameter α between two adjacent event points is taken as 0.8.
For an event point on a rain streak, the y coordinate never moves backwards as the timestamp advances, so the score value obtained in step four cannot be −1, and before convergence the P value increases steadily with the number of iterations. The probability value calculated in forward timestamp order is recorded as P₁; after all event points are re-arranged in reverse order along the time axis, the probability value calculated in the second pass is recorded as P₂. The final probability value P can be expressed as:
P = max(P₁, P₂)
Then step six is entered.
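The iteration of steps three to five can be sketched as follows. The patent's exact update equation is an image that did not survive extraction, so the averaged form P(A) ← (1/num(B₁, ..., Bᵢ)) Σᵢ [score(A, Bᵢ) + α·P(Bᵢ)] used below is only an illustrative reading of the named terms, not the confirmed formula; `one_pass` and `raindrop_probability` are hypothetical helper names:

```python
ALPHA = 0.8  # state transfer parameter alpha between adjacent event points

def one_pass(events, score, is_adjacent, reverse=False):
    """One sweep over the events in timestamp order (or reversed); neighbours
    processed earlier in the sweep contribute their already-updated P values."""
    order = sorted(events, key=lambda e: e[0], reverse=reverse)
    p = {e: 0.0 for e in events}           # initial probability P(A) = 0
    for i, a in enumerate(order):
        nbrs = [b for b in order[:i] if is_adjacent(a, b)]
        if nbrs:
            p[a] = sum(score(b, a) + ALPHA * p[b] for b in nbrs) / len(nbrs)
    return p

def raindrop_probability(events, score, is_adjacent):
    """Forward pass gives P1, reverse-time pass gives P2; final P = max(P1, P2)."""
    p1 = one_pass(events, score, is_adjacent)                # along the timestamp
    p2 = one_pass(events, score, is_adjacent, reverse=True)  # reversed time axis
    return {e: max(p1[e], p2[e]) for e in events}
```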
Step six, setting a probability threshold P₀. For each event point A, if:
P(A) > P₀
event A is considered to be formed by raindrop movement; otherwise event A is considered to be unrelated to raindrop movement. All event points judged to have been generated by raindrop movement during a period of time are plotted on one graph, as shown in figs. 3 and 5, where the gray portions mark the pixel positions at which event points occur; for comparison, all event points recorded during the same period are plotted on another graph, as shown in figs. 2 and 4.
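Step six amounts to a simple threshold split of the event points; a minimal sketch (the helper name `classify` is illustrative):

```python
def classify(prob, p0):
    """prob: {event: P} after iteration; returns (raindrop, non_raindrop) lists."""
    rain = [e for e, p in prob.items() if p > p0]    # P(A) > P0: raindrop event
    other = [e for e, p in prob.items() if p <= p0]  # otherwise: non-raindrop
    return rain, other
```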
Fig. 2 is a visualization of the raw event camera data for a mid-air rainfall scene in an embodiment of the present invention, and fig. 3 is the corresponding raindrop detection result; fig. 4 is a visualization of the raw event camera data for a rainfall scene on a road in the embodiment of the present invention, and fig. 5 is the corresponding raindrop detection result. Comparing figs. 2 to 5 sets the raw data against the identified raindrop event points under different conditions, and the detection results match the facts. Unlike traditional raindrop identification and removal methods based on RGB cameras, the raindrop identification method provided by the invention takes the "events" recorded by an event camera as its processing object and can effectively identify all event points related to raindrop movement.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A raindrop detection method based on an event camera is characterized by comprising the following steps:
s1: collecting data of a section of rainfall scene by using an event camera, setting the initial probability P (A) of each event A in the data of the rainfall scene to be 0, searching and determining all adjacent events (B) of the event A on time and space 1 ,B 2 ,...,B i ) Wherein B is i Represents the ith event that is adjacent to event a in time and space;
s2: for each event a, a probability function score (a, B) between the event a and all adjacent events B is calculated from the dynamics and optical characteristics of the raindrop during its fall i ) Probability function score (A, B) i ) The larger the value is, the more the space-time relationship between the event A and the adjacent event B conforms to the space-time relationship of two adjacent event points in the same rain strip;
s3: updating the probability of each event point A step by step according to the sequence of the timestamps from small to large;
s4: and according to the probability value of each event point A after the iteration is finished, all data points are divided into raindrop events and non-raindrop events, so that the raindrop identification work is finished.
2. The method according to claim 1, wherein the optical characteristics of the raindrop falling process in step S2 mean that, for an RGB camera, a single raindrop occupies no more than one pixel in a frame image, and when the background is covered by a raindrop, the brightness of the corresponding pixel rises; each time the event "a raindrop appears at a certain position on the lens" occurs, the event camera records an event point (t, x, y, 1), and each time the event "a raindrop leaves a certain position on the lens" occurs, the event camera records an event point (t, x, y, 0); connecting all event points (t, x, y, 0), or all event points (t, x, y, 1), along the timestamp direction yields an accurate raindrop motion trajectory, wherein t represents the time at which the event point occurs, x the abscissa of the event point, y the ordinate of the event point, 1 an increase in brightness at the event point, and 0 a decrease in brightness at the event point.
3. The method according to claim 1, wherein the dynamic characteristics of the raindrop falling process in step S2 are that the raindrop moves in a straight line at a constant speed in the field of view of the lens, and, without considering the influence of wind, the final falling speed of the raindrop in the vertical direction can be approximately expressed as:
v = ρgd² / (18μ)
wherein v is the final falling speed of the raindrop in the vertical direction, ρ is the raindrop density, g is the gravitational acceleration, d is the raindrop diameter, and μ is the air viscosity coefficient;
considering that the raindrop moves in a uniform straight line during its fall, with the velocity component in the y direction uniform and downward, the horizontal velocity u of the raindrop is considered to satisfy the Gaussian distribution:
u ~ N(0, σ²)
Through observation of rain streaks, the Gaussian distribution standard deviation σ is considered to satisfy:
σ=0.2v。
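The kinematics of claim 3 can be sketched numerically. The equation image is missing from the text, so the Stokes settling form v = ρgd²/(18μ), which matches the listed variables, is assumed here; the default values for ρ, g and μ are standard physical constants, not taken from the patent:

```python
import random

def terminal_velocity(d, rho=1000.0, g=9.8, mu=1.8e-5):
    """Final vertical fall speed v = rho*g*d^2/(18*mu) [m/s] for diameter d [m]."""
    return rho * g * d * d / (18.0 * mu)

def horizontal_speed(v, rng=random):
    """Sample the horizontal speed u ~ N(0, sigma^2) with sigma = 0.2 * v."""
    return rng.gauss(0.0, 0.2 * v)
```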
4. The method as claimed in claim 3, wherein in step S1, if the time difference between events A and B and their coordinate differences in the x and y directions are all less than the adjacency thresholds, events A and B are considered adjacent events.
5. The event camera-based raindrop detection method according to claim 4,
for event A(t₁, x₁, y₁, p₁) and event B(t₂, x₂, y₂, p₂), where event A occurs before event B (t₁ < t₂), the probability distribution function is as follows:
(equation image: piecewise definition of score(A, B); the four cases and their values are enumerated below)
where score(A, B) is the probability function between event A and event B, (x₁, y₁) are the coordinates of the location of event A, t₁ is the occurrence time of event A, (x₂, y₂) are the coordinates of the location of event B, t₂ is the occurrence time of event B, p₁ is the polarity of event A, and p₂ is the polarity of event B;
when the y coordinate varies with the time axis in the manner of a falling raindrop, A–B is considered to conform to the motion law of a raindrop, and the score is 1;
when the y coordinate moves in the opposite direction as the time axis advances, a rain streak between A and B is considered impossible, and the score is −1;
when the change in the x direction is larger than the change in the y direction, the score is 0;
when there is no motion in the y direction and the motion in the x direction is less than 2 pixels in magnitude, the score is 0.4.
6. The method for detecting raindrops based on an event camera according to claim 5, wherein in step S3,
firstly, the t values of all event points are arranged from small to large, the probability value P of each event point A is updated in turn from front to back along the timestamp, and the probability value so obtained for event point A is recorded as P₁;
then the timestamps are reversed, the event points are arranged in order of decreasing t, the iteration is repeated, and the probability value obtained for event point A after this iteration is recorded as P₂;
for each event point A, the larger of P₁ and P₂ is taken and recorded as the true probability value of event point A;
specifically, the iterative calculation of the probability value P is:
(equation image: iterative update of P(A) from num(B₁, ..., Bᵢ), score(A, Bᵢ), α and P(Bᵢ); the terms are defined below)
wherein P(A) is the probability that event point A was generated by raindrop movement, (B₁, B₂, ..., Bᵢ) are the events adjacent to event point A in time and space in the collected data, num(B₁, B₂, ..., Bᵢ) is the number of neighbours of event A, score(A, Bᵢ) is the probability function between event A and adjacent event Bᵢ, α ∈ [0, 1] is the state transfer parameter between two adjacent event points, and P(Bᵢ) is the probability that the adjacent event point Bᵢ was generated by raindrop movement.
7. The method as claimed in claim 6, wherein in step S4, a probability threshold P₀ is set; for each event point A, if P(A) > P₀, event point A is identified as a raindrop event, otherwise event point A is identified as a non-raindrop event; after all event points have been traversed, the raindrop identification is achieved.
CN202210466828.0A 2022-04-29 2022-04-29 Raindrop detection method based on event camera Pending CN114863332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210466828.0A CN114863332A (en) 2022-04-29 2022-04-29 Raindrop detection method based on event camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210466828.0A CN114863332A (en) 2022-04-29 2022-04-29 Raindrop detection method based on event camera

Publications (1)

Publication Number Publication Date
CN114863332A true CN114863332A (en) 2022-08-05

Family

ID=82636063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210466828.0A Pending CN114863332A (en) 2022-04-29 2022-04-29 Raindrop detection method based on event camera

Country Status (1)

Country Link
CN (1) CN114863332A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578295A (en) * 2022-11-17 2023-01-06 中国科学技术大学 Video rain removing method, system, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN108230254B (en) Automatic detection method for high-speed traffic full lane line capable of self-adapting scene switching
CN109063559B (en) Pedestrian detection method based on improved region regression
CN111369541B (en) Vehicle detection method for intelligent automobile under severe weather condition
CN111429484B (en) Multi-target vehicle track real-time construction method based on traffic monitoring video
CN112101433A (en) Automatic lane-dividing vehicle counting method based on YOLO V4 and DeepsORT
JP3816887B2 (en) Apparatus and method for measuring length of vehicle queue
CN100544446C (en) The real time movement detection method that is used for video monitoring
CN104616290A (en) Target detection algorithm in combination of statistical matrix model and adaptive threshold
CN108804992B (en) Crowd counting method based on deep learning
CN107944354B (en) Vehicle detection method based on deep learning
CN113947731B (en) Foreign matter identification method and system based on contact net safety inspection
CN104680559A (en) Multi-view indoor pedestrian tracking method based on movement behavior mode
CN110309765B (en) High-efficiency detection method for video moving target
CN111723773A (en) Remnant detection method, device, electronic equipment and readable storage medium
CN113034378B (en) Method for distinguishing electric automobile from fuel automobile
CN111797738A (en) Multi-target traffic behavior fast extraction method based on video identification
US20220366570A1 (en) Object tracking device and object tracking method
CN112767371A (en) Method and system for adjusting jelly effect through variable damping based on artificial intelligence
CN113223044A (en) Infrared video target detection method combining feature aggregation and attention mechanism
CN107122732B (en) High-robustness rapid license plate positioning method in monitoring scene
CN114724131A (en) Vehicle tracking method and device, electronic equipment and storage medium
CN114863332A (en) Raindrop detection method based on event camera
CN111476314B (en) Fuzzy video detection method integrating optical flow algorithm and deep learning
Song et al. All-day traffic states recognition system without vehicle segmentation
CN111339824A (en) Road surface sprinkled object detection method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination