CN113191339B - Track foreign matter intrusion monitoring method and system based on video analysis - Google Patents

Track foreign matter intrusion monitoring method and system based on video analysis

Info

Publication number
CN113191339B
Authority
CN
China
Prior art keywords
rain
video
track
video image
image
Prior art date
Legal status
Active
Application number
CN202110734603.4A
Other languages
Chinese (zh)
Other versions
CN113191339A (en)
Inventor
李阳
陈晓冈
高涛
王列伟
吴国强
夏宝前
张周磊
王远远
Current Assignee
Ningbo Public Works Section Of China Railway Shanghai Bureau Group Co ltd
Nanjing Paiguang Intelligence Perception Information Technology Co ltd
Original Assignee
Ningbo Public Works Section Of China Railway Shanghai Bureau Group Co ltd
Nanjing Paiguang Intelligence Perception Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Ningbo Public Works Section Of China Railway Shanghai Bureau Group Co ltd, Nanjing Paiguang Intelligence Perception Information Technology Co ltd filed Critical Ningbo Public Works Section Of China Railway Shanghai Bureau Group Co ltd
Priority to CN202110734603.4A
Publication of CN113191339A
Application granted
Publication of CN113191339B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B61 RAILWAYS
    • B61K AUXILIARY EQUIPMENT SPECIALLY ADAPTED FOR RAILWAYS, NOT OTHERWISE PROVIDED FOR
    • B61K9/00 Railway vehicle profile gauges; Detecting or indicating overheating of components; Apparatus on locomotives or cars to indicate bad track sections; General design of track recording vehicles
    • B61K9/08 Measuring installations for surveying permanent way
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras

Abstract

The invention discloses a track foreign matter intrusion monitoring method based on video analysis. The method comprises the steps of track video acquisition, rain interference removal, dynamic target detection and foreign matter type identification. A video monitoring camera arranged near the track transmits video images to a track video monitoring platform in real time; the platform then performs target detection on the received video images, removes rain interference when it is raining, finds dynamic targets in the video images, identifies them with a track foreign matter detection model, and determines the foreign matter type and the alarm level. The monitoring method performs dynamic target identification on the video images against a background model, adapts the background model to environmental changes, and determines intruding foreign matter and raises alarms through artificial intelligence analysis, improving the environmental adaptability and accuracy of monitoring. The invention also discloses a track foreign matter intrusion monitoring system based on video analysis.

Description

Track foreign matter intrusion monitoring method and system based on video analysis
Technical Field
The invention relates to the field of track safety monitoring, in particular to a track foreign matter intrusion monitoring method and system based on video analysis.
Background
In recent years, China's railway industry has developed very rapidly, and the opening of high-speed rail lines and various dedicated passenger lines has drawn particular attention to railway safety. At the same time, much of China experiences disastrous rainstorms in summer, and with so many railway lines, manual inspection is costly and difficult. Realizing intelligent monitoring of the flood-control zones along railway lines and reducing labor costs is therefore of great significance.
Statistically, about 80% of water damage occurs on roadbeds and lines. Most roadbed water damage is caused by slope instability and inadequate protection works, and roadbeds are easily struck by the various geological disasters induced by rainstorms. The main types of water damage include landslides, debris flows, dangerous rockfall, slope slippage and collapse, and trees falling into the clearance gauge.
Existing disaster monitoring systems suffer from frequent false alarms and poor real-time performance, affect train operation to a certain extent, and consequently see low utilization.
In summary, detecting track foreign matter efficiently, conveniently, stably and in real time remains a difficult railway problem, and new technologies are urgently needed to solve it.
Disclosure of Invention
The invention mainly provides a track foreign matter intrusion monitoring method and system based on video analysis, solving problems in the prior art such as rain impairing accurate video identification during track foreign matter intrusion monitoring, low accuracy of dynamic target identification, and the lack of an artificial intelligence identification method.
To solve these technical problems, one technical scheme adopted by the invention provides a track foreign matter intrusion monitoring method based on video analysis, comprising the following steps: track video acquisition, in which a camera for video monitoring of the track is arranged close to the track and transmits the captured video images to a track video monitoring platform in real time; dynamic target detection, in which the track video monitoring platform performs target detection on the received video images to find dynamic targets; and foreign matter type identification, in which, once a dynamic target is found, it is identified with a track foreign matter detection model and the foreign matter type and alarm level are determined.
Preferably, a rain interference removal step is further included between track video acquisition and dynamic target detection: when rainy weather interferes with the video image, rain characteristic information is extracted from the video image, rain-streak interference is removed from the video image according to that information, and dynamic target detection is then performed on the video image freed of rain-streak interference. The rain interference removal step further comprises: for the video image, first extracting rain features through a rain feature extraction network to obtain a rain density label and rain-streak features; then restoring a rain-streak image from the rain density label and the rain-streak features through a rain-streak image construction network; and inputting the coarse rain-removed image, obtained by subtracting the rain-streak image from the video image, together with the video image into a rain removal network to obtain an optimized, corrected rain-removed image.
Preferably, the dynamic target detection step further comprises background model initialization, pixel point classification and background model updating. In background model initialization, a single-frame image sequence is selected from the video and the required video image frames are determined as background models for target detection. In pixel point classification, when a frame of video image is input, each pixel is identified as either a pixel of the corresponding background model or a newly appearing pixel; a pixel that does not belong to the background model belongs to the foreground and is used to judge whether a target to be identified has appeared. In background model updating, since video is shot of a fixed scene, single-frame images from different time periods are selected as background models for target detection and are updated.
Preferably, updating the background model comprises memoryless updating, time-sampling updating and spatial neighborhood updating.
Preferably, the foreign matter type identification step further includes track foreign matter detection model training: a track foreign matter detection model is built from a track foreign matter database and a deep-learning convolutional neural network; the track foreign matter database is fed to the network, foreign matter features are extracted by its deep convolutional and pooling layers, the relevant foreign matter types are learned, and the network is iteratively trained until it converges, with a binary cross-entropy loss function as the standard for measuring network deviation, so that foreign matter detection and classification are realized and an optimal track foreign matter detection model is obtained.
Preferably, the method further comprises track foreign matter detection: the real-time video image after dynamic target detection is input into the trained track foreign matter detection model, the type of foreign matter in the dynamic target is detected in real time, and, if a foreign matter intrusion alarm occurs, the foreign matter position and the recognition confidence are output.
The invention also provides an embodiment of a track foreign matter intrusion monitoring system based on video analysis, comprising a video acquisition unit, a dynamic target detection unit and a video analysis unit. The video acquisition unit comprises a camera arranged near the track for video monitoring of the track, which transmits the captured video images to the dynamic target detection unit in real time; the dynamic target detection unit performs target detection on the received video images and finds dynamic targets in them; and the video analysis unit, once a dynamic target is found, identifies it with the track foreign matter detection model and determines the foreign matter type and alarm level.
Preferably, the system further comprises an image rain removal unit which, when rainy weather interferes with the video image, extracts rain characteristic information from the video image acquired by the video acquisition unit and removes rain-streak interference from it according to that information; dynamic target detection is then performed on the rain-free video image by the dynamic target detection unit.
Preferably, the image rain removal unit comprises a rain feature extraction network, a rain-streak image construction network and a rain removal network. For an original video image, rain features are first extracted through the rain feature extraction network to obtain a rain density label and rain-streak features; the rain-streak image is then restored from the rain density label and the rain-streak features through the rain-streak image construction network; and the coarse rain-removed image, obtained by subtracting the rain-streak image from the original video image, is input together with the original video image into the rain removal network to obtain an optimized, corrected rain-removed image.
Preferably, the dynamic target detection unit further comprises a background model establishing module, a scene target identification module and a background model updating module.
The invention has the beneficial effects that: the invention discloses a track foreign matter intrusion monitoring method based on video analysis comprising the steps of track video acquisition, rain interference removal, dynamic target detection and foreign matter type identification. A video monitoring camera arranged near the track transmits video images to a track video monitoring platform in real time; the platform performs target detection on the received video images, removes rain interference when it is raining, finds dynamic targets in the video images, identifies them with a track foreign matter detection model, and determines the foreign matter type and alarm level. The monitoring method performs dynamic target identification on the video images against a background model, adapts the background model to environmental changes, and determines intruding foreign matter and raises alarms through artificial intelligence analysis, improving the environmental adaptability and accuracy of monitoring. The invention also discloses a track foreign matter intrusion monitoring system based on video analysis.
Drawings
FIG. 1 is a flow chart of an embodiment of the video analysis-based track foreign matter intrusion monitoring method according to the present invention;
FIG. 2 is a schematic diagram of the rain removal unit in an embodiment of the video analysis-based track foreign matter intrusion monitoring method according to the present invention;
FIG. 3 is a schematic diagram of the rain feature extraction network in an embodiment of the video analysis-based track foreign matter intrusion monitoring method according to the present invention;
FIG. 4 is a schematic diagram of the rain-streak image construction network in an embodiment of the video analysis-based track foreign matter intrusion monitoring method according to the present invention;
FIG. 5 is a schematic diagram of the rain removal network in an embodiment of the video analysis-based track foreign matter intrusion monitoring method according to the present invention;
FIG. 6 is a diagram of the components of an embodiment of the video analysis-based track foreign matter intrusion monitoring system according to the present invention;
FIG. 7 is a flowchart of the operation between the components of an embodiment of the video analysis-based track foreign matter intrusion monitoring system according to the present invention.
Detailed Description
In order to facilitate an understanding of the invention, the invention is described in more detail below with reference to the accompanying drawings and specific examples. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It is to be noted that, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Fig. 1 shows a flowchart of an embodiment of a video analysis-based track foreign object intrusion monitoring method according to the present invention. In fig. 1, the method comprises the steps of:
step S1: acquiring a track video, arranging a camera for carrying out video monitoring on the track close to the track, and transmitting a shot video image to a track video monitoring platform in real time by the camera;
step S2: dynamic target detection, wherein the track video monitoring platform performs target detection on the received video image to find a dynamic target;
step S3: and identifying the type of the foreign matter, namely identifying the dynamic target by using a track foreign matter model after the dynamic target is found, and determining the type of the foreign matter and the alarm level.
Preferably, in step S1, cameras are usually erected in regions prone to falling objects and geological change, such as tunnel portals and areas where debris flows occur easily. In addition, the camera may be a starlight high-definition infrared night-vision monitoring dome camera, providing high image clarity, shooting both day and night, and adjustable shooting angle and zoom.
Preferably, the camera can also be mounted on an unmanned aerial vehicle platform for dynamic monitoring, mainly for dynamic video monitoring over a large area.
Preferably, a rain interference removal step is further included between step S1 and the dynamic target detection of step S2: when rainy weather interferes with the video image, rain characteristic information is extracted from the video image, rain-streak interference is removed from the video image according to that information, and dynamic target detection is then performed on the video image freed of rain-streak interference.
Specifically, as shown in fig. 2, for an original video image, rain features are first extracted through a rain feature extraction network to obtain a rain density label and rain-streak features; the rain-streak image is then restored from the rain density label and the rain-streak features through a rain-streak image construction network; and the coarse rain-removed image, obtained by subtracting the rain-streak image from the original video image, is input together with the original video image into a rain removal network to obtain an optimized, corrected rain-removed image.
Further, fig. 3 shows the structure of the rain feature extraction network. It comprises three groups of convolutional and pooling layers connected in series, namely convolutional layer 1 with pooling layer 1, convolutional layer 2 with pooling layer 2, and convolutional layer 3 with pooling layer 3, which output three rain-streak features: rain-streak feature 1, rain-streak feature 2 and rain-streak feature 3. These three features are recombined and, after a fully connected layer, reduced in dimension to a single one-dimensional feature, i.e. the three-dimensional features are reduced to one dimension. This one-dimensional feature is compared with the standard rain density label through a computed loss function, yielding the corresponding one-dimensional rain density label of the current video image and thereby classifying the rain density.
Preferably, in the rain feature extraction network the three convolutional layers correspond to three convolution kernels, each meant to identify rain streaks of a different form: 3 × 3, 5 × 5 and 7 × 7 kernels extract different rain-streak features. For example, the 7 × 7 kernel identifies rain-band features, a rain band denoting a rain form in which the raindrops form dense continuous bands; the 5 × 5 kernel identifies raindrop features, raindrops denoting a lower-density, intermittent rain form; and the 3 × 3 kernel identifies light-rain features.
Preferably, the one-dimensional rain density label takes the values: 0 for no rain, 1 for light rain, and 2 for heavy rain.
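For illustration only, a minimal PyTorch sketch of such an extraction network follows. The description fixes the serial conv+pool layout, the three kernel sizes and the three-class density output; the channel widths, ReLU activations, global average pooling before the fully connected layer and the RGB input are assumptions of this sketch.

```python
# Minimal sketch of the rain feature extraction network of fig. 3.
# Fixed by the description: three serial conv+pool groups, kernel sizes
# 7x7 / 5x5 / 3x3, and a fully connected layer emitting a three-class
# rain density label. Assumptions: channel widths, ReLU activations,
# and global average pooling before the fully connected layer.
import torch
import torch.nn as nn

class RainFeatureExtractor(nn.Module):
    def __init__(self, num_density_classes: int = 3):
        super().__init__()
        # each kernel size targets a different rain form:
        # 7x7 rain bands, 5x5 raindrops, 3x3 light rain
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 7, padding=3),
                                    nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 5, padding=2),
                                    nn.ReLU(), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1),
                                    nn.ReLU(), nn.MaxPool2d(2))
        self.squeeze = nn.AdaptiveAvgPool2d(1)   # assumed dimension reduction
        self.fc = nn.Linear(16 + 32 + 64, num_density_classes)

    def forward(self, x):
        f1 = self.stage1(x)    # rain-streak feature 1
        f2 = self.stage2(f1)   # rain-streak feature 2
        f3 = self.stage3(f2)   # rain-streak feature 3
        # recombine the three features and reduce them to one dimension
        merged = torch.cat([self.squeeze(f).flatten(1) for f in (f1, f2, f3)], 1)
        return (f1, f2, f3), self.fc(merged)

# training-time comparison with the standard rain density label (0/1/2):
#   loss = nn.CrossEntropyLoss()(logits, density_labels)
```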
Preferably, the rain density label $\ell$ corresponding to the video image is calculated as follows: the dimensionality-reduced rain-streak features enter a fully connected layer (acting as a classifier), which regresses the rain-streak features through logistic regression, classifies them, and outputs a one-dimensional label; the distance between this one-dimensional feature and the true rain density label is then measured through a cross-entropy loss function. The label value of maximum approximation is found by

$$\hat{\ell} = \arg\min_{\ell}\big(L_{\mathrm{MSE}} + L_{\mathrm{CE}}\big)$$

as the classification criterion, where $L_{\mathrm{MSE}}$ is the mean squared error between corresponding pixels, computed on the rain-streak residual information $\hat{S}$, and $L_{\mathrm{CE}}$ is the cross-entropy loss function.
It should be noted here that the original video image satisfies the additive rain model

$$O = B + R,$$

where $O$ is the original video image, $B$ is the background image, and $R$ is the rain-streak image. The rain-free background $B$ is obtained by subtracting the rain-streak image from $O$. However, if the original video image $O$ contains no rain, this subtraction causes over-smoothing distortion. Rain interference is therefore removed with the rain density label included, i.e.

$$O = B + \ell \cdot S,$$

where $\ell$ is the one-dimensional rain density label and $S$ is the rain-streak feature.
Further, fig. 4 shows the structure of the rain-streak image construction network. It comprises four groups of deconvolution and anti-pooling layers connected in series, namely deconvolution layer 1 with anti-pooling layer 1, deconvolution layer 2 with anti-pooling layer 2, deconvolution layer 3 with anti-pooling layer 3, and deconvolution layer 4 with anti-pooling layer 4. The one-dimensional rain density label is input at deconvolution layer 1, so that after deconvolution layer 1 and anti-pooling layer 1 its dimensionality matches that of the three rain-streak features, which are input at deconvolution layer 2. Anti-pooling layer 4 then outputs three rain-streak maps, namely rain-streak map 1, rain-streak map 2 and rain-streak map 3, which are stacked to obtain the combined rain-streak map. As described for fig. 2, the combined rain-streak map is subtracted from the original video image to obtain the coarse rain-removed image.
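For illustration, a PyTorch sketch of such a construction network follows. The description fixes only the four-group layout, the label entering at group 1, the streak features entering at group 2, and the three stacked output maps; everything else here is an assumption, including the use of `nn.Upsample` in place of anti-pooling, the channel widths, the single-channel streak features, and summation as the stacking operation.

```python
# Sketch of the rain-streak image construction network of fig. 4.
# Assumptions: nn.Upsample stands in for anti-pooling, channel widths
# are arbitrary, the streak features are reduced to one channel each
# (three channels total), and the stacked maps are combined by summing.
import torch
import torch.nn as nn

def _group(cin: int, cout: int) -> nn.Sequential:
    # one deconvolution + anti-pooling pair (spatial size doubles)
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 3, padding=1),
                         nn.ReLU(), nn.Upsample(scale_factor=2))

class RainStreakBuilder(nn.Module):
    def __init__(self):
        super().__init__()
        self.g1 = _group(1, 16)        # lifts the density label map
        self.g2 = _group(16 + 3, 16)   # the three streak features join here
        self.g3 = _group(16, 16)
        self.g4 = _group(16, 3)        # emits rain-streak maps 1..3

    def forward(self, density_map: torch.Tensor, streaks: torch.Tensor):
        # density_map: (B,1,h,w) upsampled label; streaks: (B,3,2h,2w)
        x = self.g1(density_map)                 # now matches streak size
        x = self.g3(self.g2(torch.cat([x, streaks], 1)))
        maps = self.g4(x)                        # three rain-streak maps
        return maps.sum(1, keepdim=True)         # combined rain-streak map
```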
Preferably, the rain-streak feature $S$ and the rain density label $\ell$ are further stacked to form the complete rain-streak information $R = \{\ell, S\}$. Removing rain from the video image then amounts to subtracting the rain information $R$ from the original video image $O$, which yields the de-rained video image, i.e. the coarse rain-removed image $\hat{B} = O - R$.

Preferably, so that the rain density label $\ell$ and the rain-streak feature $S$ have the same dimensions, the rain density label $\ell$ is also upsampled and mapped to the same dimension as the rain-streak feature $S$.
Further, fig. 5 shows the structure of the rain removal network, which comprises 16 groups of convolutional and pooling layers connected in series, namely convolutional layer 1 with pooling layer 1, convolutional layer 2 with pooling layer 2, through convolutional layer 16 with pooling layer 16. The original video image and the coarse rain-removed image are input synchronously to convolutional layer 1, and convolution kernels of 4 scales are used, namely 3 × 3, 5 × 5, 7 × 7 and 9 × 9 kernels. After the 16 groups of convolution and pooling, the original video image yields 4 background features and the coarse rain-removed image likewise yields 4 background features. Feature deviation is computed over these background features and compared through a feature loss function; when the deviation falls below a set threshold, the optimized, corrected rain-removed image is output. This is a process of continually adjusting the parameter values of the convolutional network so that the deviation keeps decreasing.
Preferably, to avoid distortion of the background features of the coarsely de-rained video image, the coarse rain-removed image $\hat{B}$ may be input into the convolutional neural network of fig. 5 to obtain background features, and the feature loss of the coarsely de-rained video image is measured by the loss function

$$L_F = \frac{1}{C\,W\,H}\,\big\|\phi(\hat{B}) - \phi(O)\big\|_2^2,$$

where $C$, $W$ and $H$ give the size of the image, $\phi$ is a non-linear activation function, $\hat{B}$ is the coarsely de-rained video image, and $O$ is the original video image before rain removal. Computing the loss function $L_F$ matches the features of the de-rained video image to those of the original video image, avoiding the loss of useful information from the original video image.
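For illustration, this loss can be computed as below, assuming $\phi$ is any feature extractor, for example one convolutional stage of the fig. 5 network; the normalization by $C\,W\,H$ follows the reconstruction above.

```python
# Feature-matching loss L_F; phi is an assumed feature extractor
# (e.g. one convolutional stage of the de-raining network).
import torch

def feature_loss(phi, coarse_derained: torch.Tensor,
                 original: torch.Tensor) -> torch.Tensor:
    f_hat, f = phi(coarse_derained), phi(original)
    c, h, w = f.shape[1:]                      # feature-map size C, H, W
    return ((f_hat - f) ** 2).sum() / (c * h * w)
```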
Further, the dynamic target detection of step S2 further comprises background model initialization, pixel point classification and background model updating.
Preferably, in background model initialization, a single-frame image sequence is selected from the video and the required video image frames are determined as background models for target detection.
Preferably, since video monitoring usually shoots a fixed scene whose appearance changes mainly with time (a daytime scene differs markedly from a night scene, and a morning scene from a midday one), single-frame images from different time periods are selected as background models for target detection, so that multiple background models form a background model set.
Preferably, each background model must be initialized, i.e. the sample values of the pixels at each spatial position in the background model are initialized. Preferably, one HSV (hue, saturation, value) unit is established for each pixel in the image; the HSV cell is that pixel's data storage unit. Preferably, if the background model set contains N background models, then N pixel sample values are kept for each selected spatial position.
Further, when each background model is initialized, values of randomly selected neighborhood pixels are also used to initialize it. For example, for a given pixel, pixels are randomly chosen from its surrounding 10 × 10 neighborhood and replace old pixel values in the sample library. This random selection adapts to scenes with a changing background: if the picture contains a tree as image background, shaking leaves would greatly disturb dynamic target detection, so randomly selected neighborhood pixels store the pixel values the leaves may take after shaking into the sample library as candidates for updating the background model.
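For illustration, a NumPy sketch of this initialization follows. The 10 × 10 neighborhood comes from the text; the number of models N, the grayscale input and the border clipping are assumptions.

```python
# Sketch of background model initialization with random neighborhood
# sampling: each model's value for a pixel is drawn from that pixel's
# 10x10 window. N (n_models) is an assumption; the text leaves it open.
import numpy as np

def init_background_models(frame: np.ndarray, n_models: int = 20,
                           half_win: int = 5) -> np.ndarray:
    """frame: (H, W) grayscale image -> samples: (n_models, H, W)."""
    h, w = frame.shape
    rng = np.random.default_rng()
    samples = np.empty((n_models, h, w), dtype=frame.dtype)
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    for k in range(n_models):
        # random offsets inside the 10x10 neighborhood of each pixel
        dy = rng.integers(-half_win, half_win, size=(h, w))
        dx = rng.integers(-half_win, half_win, size=(h, w))
        samples[k] = frame[np.clip(rows + dy, 0, h - 1),
                           np.clip(cols + dx, 0, w - 1)]
    return samples
```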
Preferably, for pixel point classification, when a frame of video image is input, each pixel must be identified as either a pixel of the corresponding background model or a newly appearing pixel: a pixel that matches the background model belongs to the background, while one that does not belongs to the foreground and is used to judge whether a target has appeared.
Preferably, when the foreground is detected to change between adjacent video frames, a dynamic target is judged to have appeared; foreground-change identification over adjacent frames can be repeated several times to strengthen the judgment that a dynamic target has appeared. Preferably, the pixels constituting the foreground can be combined into the identified dynamic target object, which is passed to the next stage for target type identification.
Preferably, for a selected background point M in the background model, its coordinate position is fixed. After a frame of video image is input, the pixel value A at the corresponding position is compared, model by model, with the sample values at that position in the already obtained background model set. For example, with N background models whose sample values at that position are B1, B2, B3, … , BN, A is compared against these sample values under a threshold Q: if |A − B1| ≤ Q, the point M in the video image is considered similar to B1; if the difference exceeds Q, it is not. If at least two of the N background models are similar to A, the point M at that position in the video image is judged to be background rather than foreground; if only one model, or none, satisfies the similarity test, the point M is judged to be foreground rather than background.
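For illustration, this classification rule can be written compactly in NumPy as below; the minimum of two matching models comes from the text, while the value Q = 20 is an assumption for the example.

```python
# Vectorized form of the classification rule: a pixel counts as
# background when at least two of its N background samples lie within
# the threshold Q of its current value.
import numpy as np

def classify_pixels(frame: np.ndarray, samples: np.ndarray,
                    q: int = 20, min_matches: int = 2) -> np.ndarray:
    """frame: (H, W); samples: (N, H, W). Returns True where foreground."""
    diff = np.abs(samples.astype(np.int16) - frame.astype(np.int16))
    similar = (diff <= q).sum(axis=0)   # models with |A - Bk| <= Q
    return similar < min_matches        # fewer than two matches: foreground
```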
Preferably, background model updating includes memoryless updating: when a disturbance in the background requires the background model to be updated, a randomly chosen sample value from the pixel's sample set is replaced by the new pixel value.
Preferably, background updating also includes time-sampling updating, which updates each pixel's background with a certain probability when the background has not changed for a long time. This guarantees a background update cycle, for example for background changes caused by ambient brightness drifting over time: when a pixel is judged to be background, it updates the background model with probability 1/rate, where rate is the time-sampling factor, preferably 16.
Preferably, background updating also includes neighborhood updating, which mainly updates sample values in the sample set: when a background disturbance updates the background, the background sample set must also be updated. Each pixel randomly replaces one sample of its sample set, the replacement being a randomly chosen neighborhood pixel value, which achieves adaptation. When a pixel needs updating, a background model of its neighborhood is randomly selected and updated with the new pixel value.
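Taken together, the three update rules can be sketched as follows; rate = 16 comes from the text, while the 3 × 3 spatial neighborhood and the data layout shared with the earlier sketches are assumptions.

```python
# Combined sketch of the memoryless, time-sampling and neighborhood
# update rules. Updates samples in place.
import numpy as np

def update_background(frame: np.ndarray, samples: np.ndarray,
                      background_mask: np.ndarray, rate: int = 16) -> None:
    n, h, w = samples.shape
    rng = np.random.default_rng()
    # time-sampling update: each background pixel updates with prob. 1/rate
    chosen = background_mask & (rng.integers(0, rate, (h, w)) == 0)
    for y, x in zip(*np.nonzero(chosen)):
        # memoryless update: overwrite one randomly chosen sample value
        samples[rng.integers(0, n), y, x] = frame[y, x]
        # spatial neighborhood update: a random neighbor's model also
        # receives the new pixel value (3x3 neighborhood assumed)
        ny = int(np.clip(y + rng.integers(-1, 2), 0, h - 1))
        nx = int(np.clip(x + rng.integers(-1, 2), 0, w - 1))
        samples[rng.integers(0, n), ny, nx] = frame[y, x]
```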
Further, the method also includes a background model updating method for a shaking camera. A feature matching algorithm is adopted to calculate the vector displacement velocity of pixel points between adjacent frames, i.e. the future feature position is predicted from the direction of the pixel displacement vector, that is, from its velocity. Since adjacent frames conserve pixel brightness, a pixel at $(x, y)$ in the current region that is offset by $(\Delta x, \Delta y)$ keeps its brightness $I$ constant:

$$I(x + \Delta x,\; y + \Delta y,\; t + \Delta t) = I(x, y, t),$$

which is expressed in matrix form as

$$\begin{bmatrix} I_x & I_y \end{bmatrix}\begin{bmatrix} u \\ v \end{bmatrix} = -I_t,$$

where $I_x$, $I_y$ and $I_t$ represent the partial derivatives of the pixel brightness with respect to the $x$ direction, the $y$ direction and time $t$, and $u$ and $v$ respectively denote the offset velocities in the $x$ and $y$ directions. The equation is accumulated over all pixels of a 25 × 25 region. Assuming the displaced image region contains $n$ pixel points, with $n = 625$, the velocity vector $\mathbf{v} = (u, v)^{T}$ is computed as

$$\mathbf{v} = (A^{T}A)^{-1}A^{T}\mathbf{b},$$

the final result after transforming the formula above, where the rows of $A$ correspond to the gradients $(I_{x_i}, I_{y_i})$ of the $n$ pixel points, $\mathbf{b}$ collects the corresponding $-I_{t_i}$, and the entries of $A^{T}A$ are sums of brightness derivatives, such as $\sum I_x^{2}$, $\sum I_x I_y$ and $\sum I_y^{2}$, over all pixels in the $x$ and $y$ directions.
When the predicted pixel brightness of the region matches the true pixel brightness of that region in the next frame to a degree of 50% or more, the background is considered not to be shaking; otherwise the camera is considered to be shaking, a background model update mechanism is triggered, and the background sample library and background model are updated every frame, so that a shaking background can be accommodated.
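For illustration, the shake test can be approximated with OpenCV's pyramidal Lucas-Kanade tracker in place of the single-window least-squares solution derived above. In the sketch below, the 50% match threshold comes from the text; the corner-detector settings and the 10-grey-level brightness tolerance are assumptions.

```python
# Sketch of the camera-shake test via brightness conservation of
# tracked feature points between adjacent frames.
import cv2
import numpy as np

def camera_is_shaking(prev_gray: np.ndarray, next_gray: np.ndarray,
                      match_thresh: float = 0.5, tol: int = 10) -> bool:
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return False
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                     pts, None)
    ok = status.ravel() == 1
    if not ok.any():
        return True
    h, w = next_gray.shape
    src = np.clip(pts[ok].reshape(-1, 2).round().astype(int), 0, [w - 1, h - 1])
    dst = np.clip(new_pts[ok].reshape(-1, 2).round().astype(int), 0, [w - 1, h - 1])
    # brightness conservation: a tracked pixel should keep its grey value
    pred = prev_gray[src[:, 1], src[:, 0]].astype(int)
    real = next_gray[dst[:, 1], dst[:, 0]].astype(int)
    return np.mean(np.abs(pred - real) <= tol) < match_thresh
```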
This dynamic target detection method overcomes environmental interference, adapts the model under abrupt scene changes, achieves real-time monitoring with low resource usage, operates around the clock, and, most importantly, offers high detection accuracy with automatically adjustable algorithm sensitivity.
Step S3 further comprises track foreign matter detection model training: a track foreign matter detection model is built from a track foreign matter database and a deep-learning convolutional neural network. The track foreign matter database is fed to the network, foreign matter features are extracted by its deep convolutional and pooling layers, the relevant foreign matter types are learned, and the network is iteratively trained until it converges, with a binary cross-entropy loss function as the standard for measuring network deviation, so that foreign matter detection and classification are realized and an optimal track foreign matter detection model is obtained.
The binary cross-entropy standard formula is

$$
\begin{aligned}
L ={}& \sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\mathrm{obj}}\big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\big]
 + \sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\mathrm{obj}}\big[(w_i-\hat{w}_i)^2+(h_i-\hat{h}_i)^2\big] \\
&- \sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\mathrm{obj}}\big[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\big]
 - \sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\mathrm{noobj}}\big[\hat{C}_i\log C_i+(1-\hat{C}_i)\log(1-C_i)\big] \\
&- \sum_{i=0}^{S^{2}}\mathbb{1}_{i}^{\mathrm{obj}}\sum_{c\in\mathrm{classes}}\big[\hat{p}_i(c)\log p_i(c)+(1-\hat{p}_i(c))\log(1-p_i(c))\big],
\end{aligned}
$$

where the predicted values output by the network are $x_i$, $y_i$, $w_i$, $h_i$, $C_i$ and $p_i(c)$; $i$ denotes the $i$-th grid and $B$ the number of prior boxes; $\mathbb{1}_{ij}^{\mathrm{obj}}$ indicates that the $j$-th prior box of the $i$-th grid contains a target, taking the value 1, and $\mathbb{1}_{ij}^{\mathrm{noobj}}$ indicates that the $j$-th prior box of the $i$-th grid contains no target, taking the value 0; $x_i$ and $y_i$ are the abscissa and ordinate of the center detected for the $i$-th grid, and $\hat{x}_i$, $\hat{y}_i$ the corresponding true coordinates; $w_i$ and $h_i$ are the width and height detected for the $i$-th grid, and $\hat{w}_i$, $\hat{h}_i$ the true width and height; $C_i$ and $\hat{C}_i$ are the confidence of the target detected in the $i$-th grid and the true confidence, respectively, the no-object term penalizing predictions where no target was detected; $p_i(c)$ and $\hat{p}_i(c)$ are the probability that the target detected in the $i$-th grid is of class $c$ and the probability that it actually is of class $c$, with $c$ denoting the target category.

In this expression, the first term represents the center-coordinate error, the second the width-height coordinate error, the third the confidence error (a cross-entropy function) where a target is present, the fourth the confidence error where no target is present, and the fifth the classification error (a cross-entropy function).
Preferably, the method further comprises track foreign matter detection: the real-time video image after dynamic target detection is input into the track foreign matter detection model, the type of foreign matter in the dynamic target is detected in real time, and, if a foreign matter intrusion alarm occurs, the position and recognition confidence of the foreign matter are output. The foreign matter position is marked automatically for the user to preview, and the recognition confidence is used to judge the credibility of the foreign matter target.
The confidence criterion formula is

$$C = \Pr(\mathrm{object}) \times \mathrm{IOU}_{\mathrm{pred}}^{\mathrm{truth}},$$

where $C$ represents the true confidence value of the target, $\Pr(\mathrm{object})$ indicates whether a target is present in the prediction box, and $\mathrm{IOU}_{\mathrm{pred}}^{\mathrm{truth}}$ measures whether the target in the prediction box corresponds to a real object.
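For illustration, a small Python sketch of this computation follows; the (x1, y1, x2, y2) box format is an assumption of the example.

```python
# Confidence as objectness probability times the IoU between the
# predicted box and the ground-truth box.
def iou(box_a, box_b) -> float:
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def confidence(p_object: float, pred_box, truth_box) -> float:
    return p_object * iou(pred_box, truth_box)
```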
Preferably, after track foreign matter recognition the foreign matter is further classified by a Logistic network classifier, whose output is the target label value. Foreign matter targets are thus classified automatically once identified, and this mechanism effectively filters out irrelevant foreign matter so that the sensitive foreign matter types are obtained.
The Logistic classifier standard formula is

$$Z = w \cdot f + b, \qquad \sigma(Z) = \frac{1}{1 + e^{-Z}},$$

where, in the first step, the Logistic classifier multiplies the feature $f$ by the trained weight $w$ of each layer and adds the bias $b$ to obtain the regression vector $Z$; in the second step, the mapping function $\sigma$ maps the regression vector into the classification interval to obtain the class of the current object.
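For illustration, the two steps can be written as below; the array shapes are assumptions.

```python
# Two-step Logistic classification: regression vector Z = w . f + b,
# then the sigmoid maps Z into the classification interval.
import numpy as np

def logistic_classify(features: np.ndarray, weights: np.ndarray,
                      bias: np.ndarray) -> np.ndarray:
    z = features @ weights + bias        # step 1: regression vector Z
    sigma = 1.0 / (1.0 + np.exp(-z))     # step 2: map into (0, 1)
    return sigma                         # per-class scores; argmax gives the label
```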
Preferably, the method further comprises alarm level prediction. The invention adds alarm level prediction on top of the existing Logistic network classifier; the function adapts itself to any scene requirement, and the alarm levels are classified according to the actual scene, namely: level 0, no foreign matter intrusion has occurred; level 1, foreign matter intrusion has occurred but lies outside the flood-control point; level 2, foreign matter intrusion has occurred within the flood-control point but does not affect traffic; level 3, foreign matter intrusion has occurred and affects traffic safety. A sketch of such a level assignment follows below.
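The illustrative mapping below assigns a detection to the four levels just listed; the two geometric tests are hypothetical placeholders, since how the flood-control zone and the clearance gauge are encoded is left open by the description.

```python
# Illustrative mapping from a detection to the four alarm levels.
def alarm_level(has_foreign_matter: bool,
                in_flood_control_zone: bool,
                endangers_traffic: bool) -> int:
    if not has_foreign_matter:
        return 0   # no foreign matter intrusion
    if not in_flood_control_zone:
        return 1   # intrusion outside the flood-control point
    if not endangers_traffic:
        return 2   # inside the flood-control point, traffic unaffected
    return 3       # intrusion that affects traffic safety
```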
On the basis of the existing loss function, an error term for the alarm degree is added:

$$L_{\mathrm{alarm}} = \sum_{i=0}^{S^{2}}\sum_{j=0}^{B}\mathbb{1}_{ij}^{\mathrm{obj}}\,\mathrm{CE}\big(\hat{D}_i(d),\,D_i(d)\big)\,\frac{a_i}{A_i(d)}\,\delta_i^{x}(d)\,\delta_i^{y}(d),$$

where $i$ denotes the $i$-th grid of the image, $S$ the sampled size of the image, $j$ the $j$-th prior box, and $\mathbb{1}_{ij}^{\mathrm{obj}}$ indicates whether the $j$-th prior box of the $i$-th grid contains the target; $d$ represents the classification of the alarm level, $D_i(d)$ the ideal alarm degree of the target and $\hat{D}_i(d)$ the actual alarm degree, with the cross-entropy $\mathrm{CE}$ computing the similarity between the ideal case and the actual result; $a_i$ represents the target area in the current grid and $A_i(d)$ the area encompassed by grid $i$ at the different levels; $\delta_i^{x}(d)$ indicates whether the lateral position of the target under grid $i$ lies within the corresponding level position, and $\delta_i^{y}(d)$ indicates the same for the longitudinal position.
Based on the above description, the embodiment of the video analysis-based track foreign matter intrusion monitoring method disclosed by the invention comprises the steps of track video acquisition, rain interference removal, dynamic target detection and foreign matter type identification. A video monitoring camera arranged near the track transmits video images to a track video monitoring platform in real time; the platform performs target detection on the received video images, removes rain interference when it is raining, finds dynamic targets in the video images, identifies them with a track foreign matter detection model, and determines the foreign matter type and alarm level. The monitoring method performs dynamic target identification on the video images against a background model, adapts the background model to environmental changes, and determines intruding foreign matter and raises alarms through artificial intelligence analysis, improving the environmental adaptability and accuracy of monitoring.
Based on the same conception, the invention also provides a track foreign matter intrusion monitoring system based on video analysis. Preferably, as shown in fig. 6, the system comprises a video acquisition unit A1, a dynamic target detection unit A3 and a video analysis unit A4. The video acquisition unit A1 comprises a camera arranged near the track for video monitoring of the track, which transmits the captured video images to the dynamic target detection unit in real time; the dynamic target detection unit A3 performs target detection on the received video images and finds dynamic targets in them; and the video analysis unit A4, once a dynamic target is found, identifies it with the track foreign matter detection model and determines the foreign matter type and alarm level.
Preferably, the system further includes an image rain removal unit A2 which, when rainy weather interferes with the video image, extracts rain characteristic information from the video image acquired by the video acquisition unit and removes rain-streak interference from it according to that information; dynamic target detection is then performed on the rain-free video image by the dynamic target detection unit.
Further, referring to the description of the embodiments of figs. 2 to 5, the image rain removal unit comprises a rain feature extraction network, a rain-streak image construction network and a rain removal network. For an original video image, rain features are first extracted through the rain feature extraction network to obtain a rain density label and rain-streak features; the rain-streak image is then restored from the rain density label and the rain-streak features through the rain-streak image construction network; and the coarse rain-removed image, obtained by subtracting the rain-streak image from the original video image, is input together with the original video image into the rain removal network to obtain an optimized, corrected rain-removed image.
Preferably, the video acquisition unit mainly refers to the camera erected near the track and the communication line that transmits the video images it shoots to the track video monitoring platform, which comprises the image rain removal unit, the dynamic target detection unit and the video analysis unit.
Preferably, the dynamic target detection unit further comprises a background model establishing module, a scene target identification module and a background model updating module. For the composition and function of these modules, refer to the foregoing description of the dynamic target detection method; they are not described again here.
In combination with the foregoing, fig. 7 further shows, with an actual effect diagram, the workflow among the image rain removal unit, the dynamic target detection unit and the video analysis unit, which aids understanding of the related technical content of the invention and is not described again here.
The above description is only an embodiment of the present invention and is not intended to limit its scope; all equivalent structural changes made using the contents of the present specification and drawings, applied directly or indirectly in other related technical fields, are likewise included within the scope of the present invention.

Claims (7)

1. A track foreign matter intrusion monitoring method based on video analysis is characterized by comprising the following steps:
track video acquisition, in which a camera for video monitoring of the track is arranged close to the track and transmits the captured video images to a track video monitoring platform in real time;
dynamic target detection, in which the track video monitoring platform performs target detection on the received video images to find dynamic targets;
foreign matter type identification, in which, once a dynamic target is found, it is identified with a track foreign matter detection model and the foreign matter type and alarm level are determined;
wherein a rain interference removal step is further included between the track video acquisition step and the dynamic target detection step: when rainy weather interferes with the video image, rain characteristic information is extracted from the video image, rain-streak interference is removed from the video image according to that information, and dynamic target detection is then performed on the video image freed of rain-streak interference;
the rain interference removal step further comprises: for the video image, first extracting rain features through a rain feature extraction network to obtain a rain density label and rain-streak features; then restoring a rain-streak image from the rain density label and the rain-streak features through a rain-streak image construction network; and inputting the coarse rain-removed image, obtained by subtracting the rain-streak image from the video image, together with the video image into a rain removal network to obtain an optimized, corrected rain-removed image;
the rain feature extraction network comprises three groups of convolutional and pooling layers connected in series (convolutional layer 1 with pooling layer 1, convolutional layer 2 with pooling layer 2, and convolutional layer 3 with pooling layer 3), which output rain-streak feature 1, rain-streak feature 2 and rain-streak feature 3; these three rain-streak features are recombined and, after a fully connected layer, reduced in dimension to a one-dimensional feature, which is compared with the standard rain density label through a computed loss function to obtain the corresponding one-dimensional rain density label of the current video image, thereby classifying the rain density, the one-dimensional rain density label comprising: 0 for no rain, 1 for light rain and 2 for heavy rain;
the rain-streak image construction network comprises four groups of deconvolution and anti-pooling layers connected in series (deconvolution layer 1 with anti-pooling layer 1, deconvolution layer 2 with anti-pooling layer 2, deconvolution layer 3 with anti-pooling layer 3, and deconvolution layer 4 with anti-pooling layer 4); the one-dimensional rain density label is input at deconvolution layer 1 and the three rain-streak features are input at deconvolution layer 2; anti-pooling layer 4 then outputs rain-streak map 1, rain-streak map 2 and rain-streak map 3, which are stacked to obtain the combined rain-streak map;
the rain removal network comprises 16 groups of convolutional and pooling layers connected in series; the original video image and the coarse rain-removed image are input synchronously to convolutional layer 1 of the rain removal network; after the 16 groups of convolution and pooling, the original video image yields 4 background features and the coarse rain-removed image likewise yields 4 background features; feature deviation is computed over these background features and compared through a feature loss function, and when the deviation falls below a set threshold the optimized, corrected rain-removed image is output.
2. The video analysis-based track foreign matter intrusion monitoring method according to claim 1, wherein the dynamic target detection step further comprises background model initialization, pixel point classification and background model updating; in background model initialization, a single-frame image sequence is selected from the video and the required video image frames are determined as background models for target detection; in pixel point classification, when a frame of video image is input, each pixel is identified as either a pixel of the corresponding background model or a newly appearing pixel, and a pixel that does not belong to the background model belongs to the foreground and is used to judge whether a target to be identified has appeared; in background model updating, since video is shot of a fixed scene, single-frame images from different time periods are selected as background models for target detection and are updated.
3. The video analysis-based track foreign matter intrusion monitoring method according to claim 2, wherein the background model updating comprises memoryless updating, time-sampling updating and spatial neighborhood updating.
4. The video analysis-based track foreign matter intrusion monitoring method according to claim 1, wherein the foreign matter type identification step further comprises track foreign matter detection model training: a track foreign matter detection model is built from a track foreign matter database and a deep-learning convolutional neural network; the track foreign matter database is fed to the network, foreign matter features are extracted by its deep convolutional and pooling layers, the relevant foreign matter types are learned, and the network is iteratively trained until it converges, with a binary cross-entropy loss function as the standard for measuring network deviation, so that foreign matter detection and classification are realized and an optimal track foreign matter detection model is obtained.
5. The video analysis-based track foreign matter intrusion monitoring method according to claim 4, further comprising track foreign matter detection: the real-time video image after dynamic target detection is input into the trained track foreign matter detection model, the type of foreign matter in the dynamic target is detected in real time, and, if a foreign matter intrusion alarm occurs, the position and recognition confidence of the foreign matter are output.
6. A track foreign matter intrusion monitoring system based on video analysis is characterized by comprising a video acquisition unit, a dynamic target detection unit and a video analysis unit; the system comprises a video acquisition unit, a dynamic target detection unit and a dynamic target detection unit, wherein the video acquisition unit comprises a camera which is arranged near a track and used for carrying out video monitoring on the track, and the camera transmits a shot video image to the dynamic target detection unit in real time; the dynamic target detection unit is used for carrying out target detection on the received video image and finding a dynamic target from the received video image; after finding the dynamic target, the video analysis unit identifies the dynamic target by using the track foreign body model to determine the type of the foreign body and the alarm level;
the system further comprises an image rain removal unit which, when rainy weather interferes with the video image, extracts rain feature information from the video image acquired by the video acquisition unit and removes the rain streak interference from the video image according to that information; the video image freed of rain streak interference is then passed to the dynamic target detection unit for dynamic target detection;
the image rain removal unit comprises a rain feature extraction network, a rain streak map construction network and a rain removal network; for an original video image, rain feature extraction is first performed by the rain feature extraction network to obtain a rain density label and rain streak features; the rain streak map construction network then restores a rain streak map from the rain density label and the rain streak features; a coarse derained image, obtained by subtracting the rain streak map from the original video image, is input into the rain removal network together with the original video image to yield an optimized, refined derained image (the three networks are sketched together after this claim);
the rain feature extraction network comprises three serially connected groups, namely convolutional layer 1 with pooling layer 1, convolutional layer 2 with pooling layer 2, and convolutional layer 3 with pooling layer 3, which output rain streak feature 1, rain streak feature 2 and rain streak feature 3; the three rain streak features are recombined and reduced in dimension through a fully connected layer to output a one-dimensional feature, which is compared against the standard rain density label by computing a loss function, thereby obtaining the one-dimensional rain density label corresponding to the current video image and classifying the rain density; the one-dimensional rain density label takes the values 0 for no rain, 1 for light rain and 2 for heavy rain;
the rain streak map construction network comprises four serially connected groups, namely deconvolutional layer 1 with unpooling layer 1, deconvolutional layer 2 with unpooling layer 2, deconvolutional layer 3 with unpooling layer 3, and deconvolutional layer 4 with unpooling layer 4; the one-dimensional rain density label is input at deconvolutional layer 1 and the three rain streak features are input at deconvolutional layer 2; unpooling layer 4 then outputs rain streak map 1, rain streak map 2 and rain streak map 3, which are stacked to obtain the combined rain streak map;
the rain removal network comprises 16 serially connected groups of convolutional and pooling layers; the original video image and the coarse derained image are input synchronously into convolutional layer 1 of the rain removal network; after the 16 groups of convolution and pooling, the original video image yields 4 background features and the coarse derained image likewise yields 4 background features; the deviation between these background features is computed and their similarity is compared through a feature loss function, and when the deviation value falls below a set threshold the optimized, refined derained image is output.
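Claim 6 specifies the three networks only at block-diagram level. The sketch below is one plausible PyTorch realization under stated simplifications: upsampling stands in for unpooling, the decoder emits the combined rain streak map directly rather than three maps that are then stacked, pooling is dropped in the refinement network so the output keeps image resolution, and all channel widths are illustrative; the four-background-feature training loss is not shown:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RainFeatureExtractor(nn.Module):
    """Three serial conv+pool groups -> rain streak features 1-3; FC head -> 3-way rain density."""
    def __init__(self):
        super().__init__()
        self.groups = nn.ModuleList([
            nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            for cin, cout in [(3, 16), (16, 32), (32, 64)]])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 3))

    def forward(self, x):
        feats = []
        for g in self.groups:
            x = g(x)
            feats.append(x)                 # features at 1/2, 1/4, 1/8 resolution
        return feats, self.head(x)          # logits over {0 no rain, 1 light, 2 heavy}

class RainStreakDecoder(nn.Module):
    """Four deconv groups rebuild the rain streak map from the density label and streak features."""
    def __init__(self):
        super().__init__()
        self.d1 = nn.Sequential(nn.ConvTranspose2d(3, 64, 3, padding=1), nn.ReLU())
        self.d2 = nn.Sequential(nn.ConvTranspose2d(64 + 112, 64, 4, 2, 1), nn.ReLU())
        self.d3 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU())
        self.d4 = nn.ConvTranspose2d(32, 3, 4, 2, 1)

    def forward(self, density_logits, feats):
        h, w = feats[-1].shape[-2:]
        # inject the density label as a constant map at the deepest feature resolution
        x = self.d1(density_logits.softmax(1)[:, :, None, None].expand(-1, -1, h, w))
        # resize the three streak features (16+32+64 = 112 channels) to one grid and inject
        f = torch.cat([F.interpolate(t, size=(h, w)) for t in feats], dim=1)
        return self.d4(self.d3(self.d2(torch.cat([x, f], dim=1))))  # 3 upsamplings back to H x W

class RainRemovalNet(nn.Module):
    """16 serial conv blocks refine the coarse result (pooling omitted to keep resolution)."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(6 if i == 0 else 32, 32, 3, padding=1), nn.ReLU())
            for i in range(16)])
        self.out = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, original, coarse):
        # original image and coarse derained image are input jointly, as in claim 6
        return self.out(self.blocks(torch.cat([original, coarse], dim=1)))

def derain(image, extractor, decoder, refiner):
    """Data flow of claim 6: features -> streak map -> coarse = image - streaks -> refinement."""
    feats, density_logits = extractor(image)
    streak_map = decoder(density_logits, feats)
    coarse = image - streak_map                 # coarse derained image
    return refiner(image, coarse)               # refined derained image
```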
7. The video-analysis-based track foreign matter intrusion monitoring system according to claim 6, wherein the dynamic target detection unit further comprises a background model building module, a scene target identification module and a background model updating module.
CN202110734603.4A 2021-06-30 2021-06-30 Track foreign matter intrusion monitoring method and system based on video analysis Active CN113191339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110734603.4A CN113191339B (en) 2021-06-30 2021-06-30 Track foreign matter intrusion monitoring method and system based on video analysis

Publications (2)

Publication Number Publication Date
CN113191339A CN113191339A (en) 2021-07-30
CN113191339B (en) 2021-10-12

Family

ID=76976750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110734603.4A Active CN113191339B (en) 2021-06-30 2021-06-30 Track foreign matter intrusion monitoring method and system based on video analysis

Country Status (1)

Country Link
CN (1) CN113191339B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657286A (en) * 2021-08-18 2021-11-16 广东电网有限责任公司 Power transmission line monitoring method and device based on unmanned aerial vehicle
TWI800137B (en) * 2021-12-03 2023-04-21 國立虎尾科技大學 Intelligent unmanned aerial vehicle railway monitoring system and method
CN116485799B (en) * 2023-06-25 2023-09-15 成都考拉悠然科技有限公司 Method and system for detecting foreign matter coverage of railway track

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648159A * 2018-05-09 2018-10-12 华南师范大学 Image rain removing method and system
CN109035157A * 2018-06-25 2018-12-18 华南师范大学 Image rain removing method and system based on static rain lines
CN111062892A * 2019-12-26 2020-04-24 华南理工大学 Single-image rain removing method based on composite residual network and deep supervision

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697460B (en) * 2018-12-05 2021-06-29 华中科技大学 Object detection model training method and target object detection method
CN109785361A * 2018-12-22 2019-05-21 国网内蒙古东部电力有限公司 Substation foreign body intrusion detection system based on CNN and MOG
CN110866879B (en) * 2019-11-13 2022-08-05 江西师范大学 Image rain removing method based on multi-density rain print perception
CN111861935B (en) * 2020-07-29 2022-06-03 天津大学 Rain removing method based on image restoration technology
CN112561946B (en) * 2020-12-03 2022-09-13 南京理工大学 Dynamic target detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant