CN112241974A - Traffic accident detection method, processing method, system and storage medium - Google Patents


Info

Publication number
CN112241974A
CN112241974A (application CN202010474975.3A)
Authority
CN
China
Prior art keywords
target
traffic accident
traffic
monitoring
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010474975.3A
Other languages
Chinese (zh)
Other versions
CN112241974B (en)
Inventor
黄荣军
孙鹏飞
原诚寅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing New Energy Vehicle Technology Innovation Center Co Ltd
Original Assignee
Beijing New Energy Vehicle Technology Innovation Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing New Energy Vehicle Technology Innovation Center Co Ltd
Priority to CN202010474975.3A
Publication of CN112241974A
Application granted
Publication of CN112241974B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
        • G06T7/00 Image analysis
            • G06T7/20 Analysis of motion
                • G06T7/223 Analysis of motion using block-matching
                • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
        • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/10 Image acquisition modality
                • G06T2207/10016 Video; Image sequence
            • G06T2207/20 Special algorithmic details
                • G06T2207/20081 Training; Learning
                • G06T2207/20084 Artificial neural networks [ANN]
            • G06T2207/30 Subject of image; Context of image processing
                • G06T2207/30232 Surveillance
                • G06T2207/30236 Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of traffic monitoring video detection and processing, and in particular discloses a traffic accident detection method, a processing method, a system and a storage medium. The traffic accident detection method is based on target tracking information output by a target detection and tracking technique: it formulates traffic accident judgment rules by combining expert system theory with traffic accident theory, establishes a rule model of prior knowledge, and performs matching against it. A traffic accident is determined by monitoring candidate target IDs that disappear from the image sequence and combining the ratio of image frames containing the candidate target ID within a time window with the target position and target speed, achieving high real-time performance and accuracy. By analyzing monitoring video in real time, the traffic accident detection and processing system and method realize automatic detection, extraction, sending and receiving of traffic accident information, replacing manual monitoring with artificial intelligence. Provided the coverage of monitoring equipment is wide enough, all-round monitoring of urban roads can be achieved and traffic accident information acquired in a timely and accurate manner.

Description

Traffic accident detection method, processing method, system and storage medium
Technical Field
The invention relates to the field of traffic monitoring video detection and processing, and in particular to a traffic accident detection method, a traffic accident processing method, a system and a storage medium.
Background
At present, with the rapid increase in the number of vehicles, traffic accidents increase every year, causing heavy casualties and economic losses. To prevent a traffic accident from deteriorating further, and to treat and rescue those involved promptly and properly, the accident must be detected rapidly and accurately and an alarm raised in time. Accurate traffic accident detection is therefore of great research significance in road supervision and control systems: it not only enables timely handling of traffic accidents, but also allows more intelligent analysis of the factors that cause them, so that accidents can be avoided as far as possible.
In the prior art, traffic monitoring video in China is mainly inspected manually at monitoring centers. The detection range covers every major road of a city, with thousands of video sources, yet most monitoring centers can display fewer than a hundred video feeds simultaneously. As a result, only 5%-10% of roads are actually monitored even though monitoring staff invest 80%-100% of their energy, while 90%-95% of roads are essentially uncontrolled, with problems only reviewed after the fact. Consequently, current traffic accident handling relies mainly on accident participants raising the alarm and supplying traffic police with photos, videos and other accident-related information; the monitoring videos already deployed along roads are not fully utilized, and a large amount of labor and time is consumed.
Disclosure of Invention
In view of the technical defects in the prior art, embodiments of the present invention provide a traffic accident detection method, a processing method, a system, and a storage medium that overcome, or at least partially solve, the above problems, thereby achieving accurate detection and fast processing of traffic accidents.
As an aspect of an embodiment of the present invention, a traffic accident detection method is provided, where the method includes obtaining a parameter matrix of a moving target in a monitored image sequence, where the parameter matrix includes a target ID, a target position, and a target speed;
determining candidate target IDs which disappear from the monitoring image sequence;
starting a time window with preset time, and calculating the ratio of image frames including the candidate target ID in a monitoring image sequence in the time window;
and judging whether the candidate target is an abnormal target by combining the target position and the target speed corresponding to the candidate target ID, and determining whether a traffic accident has occurred.
Further, the step of determining candidate object IDs disappearing in the monitoring image sequence includes:
traversing and comparing target IDs contained in image frames in the monitoring image sequence;
determining a target ID that disappears in the image frame;
excluding moving targets that have left the monitored image, and determining the candidate target IDs.
Further, the step of determining whether the candidate target is an abnormal target by combining the target position and the target speed corresponding to the candidate target ID includes:
when the ratio of the image frames including the candidate target ID in the monitoring image sequence in the time window is smaller than a first preset threshold, judging as an abnormal target;
when the ratio of image frames including the candidate target ID in the monitoring image sequence within the time window is greater than or equal to the first preset threshold, judging whether the distance moved by the candidate target's position within the preset time of the time window is smaller than a preset length; if so, judging it to be an abnormal target; if not, judging whether the target speed of the candidate target falls below a preset speed reduction value within the preset time of the time window; if so, judging it to be an abnormal target; if not, determining a false alarm.
Further, the method for determining traffic accidents further comprises the following steps:
detecting the occurrence time of the abnormal target in each time window;
when the ratio of image frames including the abnormal target in the monitoring image sequence within the time window is greater than or equal to a second preset threshold, counting a preset number of time windows as the duration of the traffic accident;
and when the ratio of image frames including the abnormal target in the monitoring image sequence within the time window is smaller than the second preset threshold, stopping detection and counting all previously started time windows as the duration of the traffic accident.
Further, the step of excluding moving targets that have left the monitored image to determine the candidate target IDs includes:
setting a virtual driving line in an image frame according to an imaging range of the monitoring camera;
and eliminating moving targets whose disappearance position lies beyond the virtual driving line.
As another aspect of the embodiments of the present invention, there is provided a traffic accident detection processing method, including:
acquiring and decoding a traffic monitoring video stream;
detecting and tracking targets in the image frames of the monitoring video stream through a moving target detection and tracking model, and outputting a target tracker for each image frame, wherein the parameter matrix of the target tracker comprises a target ID, a target position and a target speed;
the traffic accident detection method according to the embodiment is used for detecting traffic accidents and outputting traffic accident data.
Further, the step of detecting and tracking targets in the image frames of the monitoring video stream through the moving target detection and tracking model and outputting the target tracker of each image frame includes:
classifying and positioning the moving target in the image frame based on a YOLOv3 network, and determining an image target result of the image frame;
calculating the image frame prediction quantity at the next moment by using a Kalman filtering algorithm;
evaluating the overlapping degree of the image frame prediction quantity and the image target result at the same moment;
and assigning detections to moving targets through the Hungarian algorithm, and outputting the target tracker of each image frame.
Further, the method further comprises:
extracting position information and video data corresponding to the traffic accident data to generate accident information, wherein the accident information comprises start and stop time, participants, position information and video data;
screening receiving terminals within a preset range of the position information;
and sending the accident information to a receiving terminal.
In another aspect of the embodiments of the present invention, a traffic accident detection processing system is provided, where the system includes:
the first server is used for storing real-time and/or historical traffic monitoring video streams;
the monitoring workstation is used for calling the traffic monitoring equipment and displaying the traffic monitoring video recorded by the traffic monitoring equipment in real time;
and the field workstation and/or the embedded processor are/is used for processing the traffic monitoring video stream in real time by the traffic accident detection processing method, detecting and outputting traffic accident data.
Further, the system further comprises:
and the second server is used for receiving the traffic accident data output by the field workstation and/or the embedded processor, and calling the position information and the video data corresponding to the traffic accident data in the first server to generate accident information, wherein the accident information comprises start-stop time, participants, position information and video data.
Further, the second server further includes:
a screening module, configured to screen receiving terminals within a preset range of the position information;
a sending module, configured to send the accident information to the receiving terminals.
As another aspect of the embodiments of the present invention, there is provided a storage medium storing a program which, when executed, performs a traffic accident detection method according to any of the above embodiments and/or a traffic accident detection processing method according to any of the above embodiments.
The embodiment of the invention at least realizes the following technical effects:
the embodiment of the invention provides a traffic accident detection method, a traffic accident detection device, a traffic accident detection processing system, a traffic accident detection processing method and a storage medium, wherein a traffic accident judgment technology is based on target tracking information output by a target detection tracking technology, a traffic accident judgment rule is customized by combining an expert system theory and a traffic accident theory, a rule model of prior knowledge is established, and then matching is carried out, so that the traffic accident detection method has higher real-time performance and accuracy; the traffic accident detection processing system disclosed by the embodiment of the invention realizes automatic detection, extraction, sending and receiving of traffic accidents by analyzing the monitoring video in real time, replaces manual work by artificial intelligence for monitoring, can realize all-round monitoring of urban roads under the condition that the coverage of monitoring equipment is wide enough, and can timely and accurately acquire traffic accident information.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a traffic accident detection method according to an embodiment of the present invention;
FIG. 2 is a detailed flowchart of a traffic accident detection method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a traffic accident detection processing method according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a target detection and tracking method according to an embodiment of the present invention;
FIG. 5 is a flow chart of classification and location using YOLOv3 according to an embodiment of the present invention;
FIG. 6 is a flow chart of a traffic accident detection and processing system according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a traffic accident detection processing system for determining an accident according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a traffic accident detection processing system for pushing accident information according to an embodiment of the present invention.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
The figures and the following description depict alternative embodiments of the invention to teach those skilled in the art how to make and use the invention. Some conventional aspects have been simplified or omitted for the purpose of teaching the present invention. Those skilled in the art will appreciate that variations or substitutions from these embodiments will fall within the scope of the invention. Those skilled in the art will appreciate that the features described below can be combined in various ways to form multiple variations of the invention. Thus, the present invention is not limited to the following alternative embodiments, but is only limited by the claims and their equivalents.
In one embodiment, there is provided a traffic accident detection method, as shown in fig. 1, the method comprising:
S11, acquiring a parameter matrix of the moving targets in the monitoring image sequence, wherein the parameter matrix comprises a target ID, a target position and a target speed;
S12, determining candidate target IDs that disappeared from the monitoring image sequence;
S13, starting a time window of a preset time, and calculating the ratio of image frames including the candidate target ID in the monitoring image sequence within the time window;
S14, judging whether the candidate target is an abnormal target by combining the target position and the target speed corresponding to the candidate target ID, and determining whether a traffic accident has occurred.
In this embodiment, a traffic accident determination model based on prior rules is established from the target ID, target position and target speed of the moving targets. The parameter matrix of the moving targets obtained in step S11 may come from a target detection and tracking model; any prior-art target detection and tracking model may be selected provided it outputs the target ID, target position and target speed parameters, and other target detection and tracking methods based on traffic monitoring video may also be adopted. Step S12 detects changes of the moving targets' IDs: according to traffic accident statistics, after an accident a moving target usually undergoes large deformation or displacement (for example, severe body deformation or separation of parts), which directly causes its target ID to disappear; this step therefore determines candidate targets by comparing and screening the moving targets that disappear between earlier and later frames of the monitoring image sequence. In step S13, once a candidate target ID is found, a time window is started; the preset time may be 4 s, 5 s, 6 s, or set separately as needed. Since each second generally contains 25-30 frames, if a time window contains 100 image frames and 80 of them include the candidate target ID, the ratio of image frames including the candidate target ID within the time window is 80%. Step S14 then judges abnormal targets by combining the ratio recorded in S13 with the changes of the target position and target speed, and determines the traffic accident from the abnormal targets.
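As an illustrative sketch (not part of the patent's disclosure), the per-window ratio computed in step S13 can be expressed as follows, representing each image frame by the hypothetical set of target IDs detected in it:

```python
def candidate_frame_ratio(window_frames, candidate_id):
    """Fraction of frames in the time window whose detections include candidate_id.

    window_frames: one set of target IDs per image frame in the window.
    """
    if not window_frames:
        return 0.0
    hits = sum(1 for ids in window_frames if candidate_id in ids)
    return hits / len(window_frames)

# A 5 s window at 20 fps gives 100 frames; candidate ID 7 appears in 80 of them.
frames = [{3, 7}] * 80 + [{3}] * 20
print(candidate_frame_ratio(frames, 7))  # 0.8
```

The frame rate and ID-set representation here are assumptions for illustration; the embodiment only requires counting frames in which the candidate ID appears.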
In one embodiment, the step S12 includes:
traversing and comparing target IDs contained in image frames in the monitoring image sequence;
determining a target ID that disappears in the image frame;
excluding moving targets that have left the monitored image, and determining the candidate target IDs.
In this embodiment, since in normal scenes a moving target driving out of, or moving away from, the monitoring camera's view also causes its target ID to disappear, the disappeared target IDs are further screened to exclude targets that simply left the frame, so that candidate targets are selected more accurately.
In one embodiment, the step of "excluding moving targets that have left the monitored image to determine the candidate target IDs" in step S12 above includes:
setting a virtual driving line in an image frame according to an imaging range of the monitoring camera;
and eliminating moving targets whose disappearance position lies beyond the virtual driving line.
In this embodiment, the target position parameter of the moving target corresponding to each disappeared target ID is examined. Because the imaging range of the monitoring camera is fixed, it can be calibrated manually or automatically, and a virtual driving line is set in the image; the virtual driving line may be based on a zebra crossing, or different virtual driving lines may be set as required. If the disappearance position of a moving target whose ID changed crosses the virtual driving line, that target ID is eliminated, i.e., moving targets that drove out of the monitoring picture are excluded.
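A minimal sketch of this exclusion step, assuming the virtual driving line is given as two calibrated image points and that targets disappearing on one fixed side of the line are treated as having driven out of the picture (the side convention and data layout are illustrative assumptions):

```python
def crossed_virtual_line(disappear_xy, line_p1, line_p2):
    """True if the target's last known position lies past the virtual driving
    line, i.e. on the side treated here as 'driving out of the picture'."""
    (x, y), (x1, y1), (x2, y2) = disappear_xy, line_p1, line_p2
    # The sign of the 2-D cross product tells which side of the line the point is on.
    return (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) > 0

def candidate_ids(disappeared, line_p1, line_p2):
    """Keep only disappeared target IDs that did NOT cross the line."""
    return [tid for tid, pos in disappeared.items()
            if not crossed_virtual_line(pos, line_p1, line_p2)]

# Horizontal line y = 0: ID 1 vanished beyond it (drove off), ID 2 on the near side.
print(candidate_ids({1: (5, 5), 2: (5, -5)}, (0, 0), (10, 0)))  # [2]
```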
In one embodiment, the step S14 includes:
when the ratio of the image frames including the candidate target ID in the monitoring image sequence in the time window is smaller than a first preset threshold, judging as an abnormal target;
when the ratio of image frames including the candidate target ID in the monitoring image sequence within the time window is greater than or equal to the first preset threshold, judging whether the distance moved by the candidate target's position within the preset time of the time window is smaller than a preset length; if so, judging it to be an abnormal target; if not, judging whether the target speed of the candidate target falls below a preset speed reduction value within the preset time of the time window; if so, judging it to be an abnormal target; if not, determining a false alarm.
In the present embodiment, the appearance ratio, speed parameter and position parameter of the candidate abnormal target are checked within the time window. The first preset threshold may be set to 70%-85%, for example 75%, 80% or 85%; the preset length may be set to the typical vehicle length of the target class, the length of the target detection box, etc.; and the preset speed reduction value may be set in the range 1-1.8 m/s, such as 1.2 m/s, 1.3 m/s or 1.4 m/s.
Preferably, a time window with a preset time of 5 seconds is set. If the accumulated appearance time of the candidate abnormal target within the window is less than 80%, it is directly identified as an abnormal target without checking the speed and position parameters. Otherwise, the target position parameter is checked first: if the difference between the positions at the moment of first appearance and at the moment of disappearance is smaller than a vehicle length, the candidate is determined to be an abnormal target. If not, the target speed parameter is checked: if the speed within the time window drops below 1.3 m/s, the candidate is determined to be an abnormal target; otherwise, it is determined to be a false alarm.
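The cascaded decision above can be sketched as follows. The 80% ratio, vehicle-length distance check and 1.3 m/s speed floor follow the values named in the text, while the 4.5 m default vehicle length and the function's interface are illustrative assumptions:

```python
import math

def classify_candidate(ratio, start_pos, end_pos, final_speed,
                       ratio_thresh=0.8, vehicle_len=4.5, speed_floor=1.3):
    """Classify a candidate target as 'abnormal' or a 'false alarm'."""
    if ratio < ratio_thresh:
        return "abnormal"   # absent for most of the window: large deformation
    moved = math.hypot(end_pos[0] - start_pos[0], end_pos[1] - start_pos[1])
    if moved < vehicle_len:
        return "abnormal"   # barely moved between appearance and disappearance
    if final_speed < speed_floor:
        return "abnormal"   # decelerated to near standstill within the window
    return "false alarm"

# Present in 90% of frames but moved only 2 m: likely stopped after a crash.
print(classify_candidate(0.9, (0.0, 0.0), (2.0, 0.0), 10.0))  # abnormal
```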
In one embodiment, the step S14 further includes:
detecting the occurrence time of the abnormal target in each time window;
when the ratio of image frames including the abnormal target in the monitoring image sequence within the time window is greater than or equal to a second preset threshold, counting a preset number of time windows as the duration of the traffic accident;
and when the ratio of image frames including the abnormal target in the monitoring image sequence within the time window is smaller than the second preset threshold, stopping detection and counting all previously started time windows as the duration of the traffic accident.
A traffic accident can persist for a period of time, so the started time windows are contiguous; the total duration of the windows may be set to 1 minute or another value, and detection may also be stopped automatically once a window is judged to be a false alarm. In this embodiment, a non-maximum suppression scheme is run from the moment the abnormal target first appears, and the second preset threshold may be 70%-85%.
In this embodiment, the appearance moments of the abnormal target are sorted to obtain its earliest appearance, which serves as the base point for non-maximum suppression. Preferably, each time window is 5 seconds and the preset total window time is 1 minute, i.e., 12 windows. The appearance time of the abnormal target is checked in each window: while its accumulated appearance time in a window is at least 80%, up to the first 12 windows are counted as the traffic accident; once it falls below 80%, detection stops and all preceding windows are counted as the traffic accident.
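A sketch of this window-counting rule, assuming each window is summarized by the fraction of its frames containing the abnormal target:

```python
def count_accident_windows(window_ratios, thresh=0.8, max_windows=12):
    """Count consecutive 5 s windows attributed to one accident, starting from
    the abnormal target's earliest appearance; stop once the per-window ratio
    drops below thresh, and never exceed max_windows (1 minute = 12 windows)."""
    counted = 0
    for ratio in window_ratios[:max_windows]:
        if ratio < thresh:
            break   # the abnormal target has gone: keep only the windows so far
        counted += 1
    return counted

# The target's appearance ratio drops in the third window: 2 windows counted.
print(count_accident_windows([0.9, 0.9, 0.5, 0.9]))  # 2
```

Capping at `max_windows` prevents one continuing accident from being reported repeatedly, which is the problem the non-maximum suppression addresses.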
In one embodiment, a specific flow is preferably as shown in fig. 2, with the following steps:
S21, acquiring the parameter matrix of moving targets in the monitoring image sequence;
S22, detecting target ID changes of the moving targets and determining the target IDs that disappeared from the monitoring image sequence;
S23, checking the position parameter of each disappeared target ID and comparing it with the set virtual driving line, eliminating target IDs that drove out of the monitoring picture, to determine the candidate target IDs;
S24, setting a time window and judging whether the ratio of image frames including the candidate target ID in the monitoring image sequence within the time window is smaller than the first preset threshold; if so, go to S27; if not, go to S25;
S25, judging whether the difference between the target positions of the candidate target ID at its first-appearance moment and its disappearance moment is smaller than the preset length corresponding to the target type; if so, go to S27; if not, go to S26;
S26, judging whether the target speed of the candidate target ID within the time window falls below the preset speed reduction value; if so, go to S27; if not, go to S28;
S27, determining an abnormal target;
S28, determining a false alarm;
S29, judging whether the ratio of image frames including the abnormal target in the monitoring image sequence within the time window is greater than or equal to the second preset threshold; if so, go to S210; if not, go to S211;
S210, counting a preset number of time windows as the duration of the traffic accident;
S211, stopping detection and counting all started time windows as the duration of the traffic accident.
In this embodiment, candidate abnormal targets are obtained by judging changes of the target ID in the target parameter matrix, such as disappearance, and are analyzed together with the changes of target speed and target position in the parameter matrix to filter out traffic accidents. This addresses the problem that a traffic accident is a continuous process and the same accident would otherwise be detected multiple times; the judgment result has high confidence, the computation load is small, and the practicality is strong.
Based on the same inventive concept, embodiments of the present invention further provide a traffic accident detection processing method. Since it builds on the traffic accident detection method of the foregoing embodiments, its implementation can refer to those embodiments, and repeated details are not described again.
The embodiment provides a traffic accident detection processing method, as shown in fig. 3 to 8, the method includes:
S31, acquiring and decoding the traffic monitoring video stream;
S32, detecting and tracking targets in the image frames of the monitoring video stream through the moving target detection and tracking model, and outputting a target tracker for each image frame, wherein the parameter matrix of the target tracker comprises a target ID, a target position and a target speed;
S33, detecting traffic accidents by the traffic accident detection method according to the above embodiments;
S34, outputting the traffic accident data.
In this embodiment, the method mainly comprises a target detection and tracking method and a traffic accident processing method. The target detection and tracking technique tracks multiple targets simultaneously in a video segment, using data association to handle new targets entering and existing targets disappearing. The traffic accident judgment formulates judgment rules from the parameter matrix of the target tracker output by the target detection and tracking technique, based on expert system theory and traffic accident theory; a rule model of prior knowledge is established and then matched against.
In one embodiment, as shown in fig. 4, the method of step S32 includes:
S41, classifying and locating the moving targets in the image frame based on a YOLOv3 network, and determining the image target result of the frame;
S42, calculating the predicted quantities for the next moment's image frame using a Kalman filtering algorithm;
S43, evaluating the degree of overlap between the predicted quantities and the image target result at the same moment;
S44, assigning detections to moving targets through the Hungarian algorithm, and outputting the target tracker of the image frame.
In this embodiment, a multi-target detection and tracking method with high timeliness and tracking accuracy is provided for traffic monitoring video scenes. A deep-learning-based moving target detection algorithm and a fast target tracking algorithm based on target matching and assignment are applied to the monitoring image sequence to obtain the parameter matrix of each target, whose parameters include information such as the target ID, the target velocity vector, and the target position.
In this embodiment, the consecutive video frames acquired in step S1 may include a (k-1)-th frame image corresponding to time k-1, a k-th frame image corresponding to time k, and a (k+1)-th frame image corresponding to time k+1; as shown in fig. 2, the detecting and tracking steps include:
At time k-1, the (k-1)-th frame image enters the target detection module to obtain the target detection result of the (k-1)-th frame; this result is input into the target parameter prediction module, which outputs the prediction of the (k-1)-th frame's target detection result at time k.
At time k, the k-th frame image enters the target detection module, which outputs the target detection result of the k-th frame image; the target detection result of the k-th frame image and the prediction for time k output in the previous step then enter the target matching and assignment module together, which creates, modifies and deletes target trackers and outputs the matched target information at time k, namely the target tracker of the k-th frame image; the target tracker of the k-th frame image then enters the target parameter prediction module, which outputs the prediction of the k-th frame's target tracker at time k+1.
At time k+1, the (k+1)-th frame image enters the target detection module, which outputs the target detection result of the (k+1)-th frame image; the target detection result of the (k+1)-th frame image and the prediction for time k+1 then enter the target matching and assignment module together, which creates, modifies and deletes target trackers and outputs the matched target information at time k+1, namely the target tracker of the (k+1)-th frame image; the target tracker of the (k+1)-th frame image then enters the target parameter prediction module, which outputs the prediction of the (k+1)-th frame's target tracker at time k+2.
The above three steps run in a loop and together complete the detection and tracking of road targets in the traffic monitoring video. For a group of consecutive video frames, the initial frame corresponds to time k-1, and the subsequent consecutive frames correspond to time k, time k+1, and so on.
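The detect, predict and match loop of the three steps above can be sketched as follows. This is an illustrative sketch only: `detect` is a stand-in for the YOLOv3 module, a bare linear-motion step replaces the Kalman prediction, and a greedy nearest-neighbour association stands in for the overlap evaluation and Hungarian assignment detailed later; all function names and data layouts are assumptions, not from the patent.

```python
# Per-frame loop: detect -> predict trackers forward -> match -> update trackers.

def detect(frame):
    """Placeholder detector: each 'frame' is already a list of (x, y) centers."""
    return frame

def match(predictions, detections, max_dist=50.0):
    """Greedy nearest-neighbour association between predicted and detected centers."""
    assignments, used = {}, set()
    for tid, (px, py) in predictions.items():
        best, best_d = None, max_dist
        for j, (qx, qy) in enumerate(detections):
            d = ((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
            if j not in used and d < best_d:
                best, best_d = j, d
        if best is not None:
            assignments[tid] = best
            used.add(best)
    return assignments

def track(frames):
    trackers = {}   # target ID -> ((x, y) position, (vx, vy) velocity)
    next_id = 0
    history = []
    for frame in frames:
        detections = detect(frame)
        # Predict each existing tracker one step ahead (linear motion model).
        predictions = {tid: (x + vx, y + vy)
                       for tid, ((x, y), (vx, vy)) in trackers.items()}
        assignments = match(predictions, detections)
        new_trackers = {}
        for tid, j in assignments.items():          # matched: update pos/velocity
            (ox, oy), _ = trackers[tid]
            nx, ny = detections[j]
            new_trackers[tid] = ((nx, ny), (nx - ox, ny - oy))
        for j, det in enumerate(detections):        # unmatched: new target enters
            if j not in assignments.values():
                new_trackers[next_id] = (det, (0.0, 0.0))
                next_id += 1
        trackers = new_trackers                     # unmatched trackers drop out
        history.append(dict(trackers))
    return history
```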
In an embodiment, in S41, the targets in the monitoring image sequence are classified and located by the YOLOv3-based target detection network. Input information is processed through a multilayer neural network, and the machine learns how to perform the task through continuous computation and learning on a large amount of data. The target type includes one or more of a pedestrian, a cyclist, a tricycle, a sedan, a sport utility vehicle, a passenger car, and a truck. In this embodiment, the target type may include one of these items, or all seven, where sport utility vehicle refers to an SUV; of course, the target types can also be adjusted as required.
The specific flow is shown in fig. 5, and the steps include:
S51, dividing the image frame into S × S grids;
S52, detecting whether a target center falls in the grid; if a target center exists in the grid, go to S53, and if not, go to S59;
S53, detecting the target in the grid;
S54, predicting the B bounding boxes corresponding to the target;
S55, calculating the center coordinates, width and height, and confidence score of each bounding box;
S56, predicting the probability that the target belongs to each class;
S57, calculating the class-related confidence of the target bounding box from the confidence score and the probability that the target belongs to a certain class;
S58, outputting the image target result;
S59, outputting no target information.
In this embodiment, the target detection result of the image is output, providing the input for the multi-target tracking algorithm. The open-source YOLOv3 network is adopted as the base network and its structure parameters are changed to establish the target detection model; the modified parameters include max_batches, steps, classes, filters and the like. Among the S × S grids, each grid is responsible for detecting the targets whose center points fall within it. A single grid holds B target bounding boxes, each composed of a five-dimensional prediction comprising the center coordinates (x, y), the width and height (w, h), and the confidence score s_i of the bounding box. This embodiment completes the classification and positioning of targets on the road, i.e., the positions of pedestrians, non-motor vehicles, motor vehicles and other targets in the image can be accurately acquired.
A specific implementation can be as follows: the input image is divided into 7 × 7 grids, each grid predicts 5 target bounding boxes, and there are 7 target classes to be detected, i.e., S = 7, B = 5, K = 7. The class-related confidence of a target bounding box is the product of its confidence score and the probability that the target belongs to a certain class. The output target information is a detection result tensor of S × S × B × (K+5) = 7 × 7 × 60.
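Reading the figure above as a 7 × 7 grid with B × (K+5) = 60 prediction values per cell, the dimensions can be checked as follows. This is a sketch under one assumption: each box carries its own K class probabilities (in the original YOLOv1 layout the class probabilities are shared per cell, which would give B × 5 + K values instead; the per-box layout used here is what reproduces the 60 per cell implied by the text).

```python
# Dimension check for the YOLO-style detection output described above.
S, B, K = 7, 5, 7
per_box = K + 5           # 12 prediction values per bounding box: x, y, w, h, conf + K classes
per_cell = B * per_box    # 60 values per grid cell
total = S * S * per_cell  # size of the full detection result tensor
print(per_cell, total)    # 60 2940
```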
In S42, a Kalman prediction model based on a linear motion model is established, and state prediction for the next moment is performed on the target tracker or the image target result through a Kalman filtering algorithm. The state parameters comprise the position, speed and acceleration parameters of the target, and the size and size-change parameters of the detection box. The state parameters form the state vector, the target detection results of the previous and current frames form the measurement vector, and Gaussian white noise is taken as the system noise and measurement noise to form the Kalman filtering equation set; the predicted value of the state vector at the next frame is used as the image frame prediction at the next moment.
Kalman filtering is an algorithm for performing optimal estimation on the system state by using a linear system state equation and through input state vectors and output observation data of the system; the method can realize the optimal estimation of the system state from a series of data with measurement noise under the condition that the measurement variance is known.
In the present embodiment, it is assumed that a moving target moves linearly between two adjacent frames, and its position in the next frame is predicted by optimal estimation from the input state vector and output observation data. State prediction and optimal estimation for the next frame are performed on the target tracker or target detection result through the Kalman filtering algorithm. The state vector is composed of the position parameters (x, y), speed parameters (dx, dy) and acceleration parameters (ddx, ddy) of the moving target, and the size parameters (w, h) and size-change parameters (dw, dh) of the detection box, where (w, h) and (dw, dh) are determined by the size of the target in the image. A Kalman prediction equation set is established with Gaussian white noise as the system noise and measurement noise. The target detection results of the previous and current frames form the measurement vector (x, y, dx, dy, w, h, dw, dh); this implementation can accurately output the predicted value of the target tracker in the next frame, further improving the matching effect.
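A minimal sketch of the prediction half of this step, assuming the 8-dimensional measurement-vector layout (x, y, dx, dy, w, h, dw, dh) given above and a frame-to-frame time step of 1. A full Kalman filter would also propagate the covariance (P' = F P Fᵀ + Q) and run a correction step against the measurement; both are omitted here for brevity, and the function name is illustrative.

```python
# Prediction step of a linear-motion Kalman model (sketch only).

def predict_state(state, dt=1.0):
    """Advance the state one frame under a constant-velocity motion model:
    position moves by velocity, box size grows by its rate of change."""
    x, y, dx, dy, w, h, dw, dh = state
    return (x + dx * dt, y + dy * dt, dx, dy,
            w + dw * dt, h + dh * dt, dw, dh)
```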
S43 is a target matching algorithm based on overlap evaluation; the algorithm can associate the information of the same moving target across two adjacent images, thereby describing the position sequence of the moving target. In this step, the overlap is used to evaluate the target detection result against the output of the Kalman prediction model. The overlap is the overlap rate between the candidate box and the original mark box; it is used to measure the closeness between the image frame prediction and the image target result at the same moment, and is calculated as:
IOU = area(c ∩ g) / area(c ∪ g)
wherein c is a candidate box; g is an original mark frame; area represents an area; IOU is the degree of overlap.
When the detection precision is high and the video frame rate is also high, the overlap can be applied to multi-target tracking, relying mainly on the overlap of targets between the previous and next frames. The target matching method of this embodiment associates the target tracker of the current frame with the target detection result, and associates it again with the new target detection result in the next frame, thereby describing the position trajectory of the target.
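The overlap formula above can be implemented directly for axis-aligned boxes; the corner-coordinate box representation used here is an assumption, not specified by the text.

```python
def iou(c, g):
    """Overlap (IOU) between candidate box c and original mark box g,
    each given as (x1, y1, x2, y2) corner coordinates."""
    # Intersection rectangle, clamped to zero when the boxes do not overlap.
    ix1, iy1 = max(c[0], g[0]), max(c[1], g[1])
    ix2, iy2 = min(c[2], g[2]), min(c[3], g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((c[2] - c[0]) * (c[3] - c[1])
             + (g[2] - g[0]) * (g[3] - g[1]) - inter)
    return inter / union if union > 0 else 0.0
```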
S44 is a target assignment algorithm based on the Hungarian algorithm. The calculation in S43 yields one-to-many matchings between the moving targets in two adjacent frames; the target assignment algorithm establishes the optimal one-to-one matching by generalizing the moving target assignment problem between the two adjacent frames (the (k-1)-th and k-th frames) to a bipartite graph problem, i.e., matching as many moving targets of the (k-1)-th frame as possible. Using a recursive approach, augmenting paths are searched iteratively; each time an augmenting path is found, a larger matching is obtained. The invention therefore obtains larger matchings by continuously splitting and recombining the paired moving targets in the bipartite graph. If a target tracker has no corresponding target detection result, the target tracker is temporarily stored or retained.
For the problem of finding the optimal one-to-one matching between the detection targets of the (k-1)-th and k-th frames in a many-to-many situation, the overlap is computed pairwise between all trackers and all target detection results of the k-th frame by the overlap method above, and used for data association: the Hungarian algorithm solves the multi-dimensional assignment problem to obtain the maximum one-to-one matching between tracking and detection, completing the tracking of the k-th frame. After assignment with the Hungarian algorithm, successfully assigned objects are regarded as hits, and unassigned objects as misses. Target assignment has the following two cases:
(a) After assignment is completed, if a target detection result has no corresponding tracker, a new tracker is established for it.
(b) After assignment is completed, if a tracker has no corresponding target detection result, the miss may arise in two situations. First, the target has left the scene of the monitoring video; in this case, the target tracker should be temporarily stored immediately, and the target is determined to be lost when it disappears for several consecutive frames. Second, the target is occluded, or is missed due to inaccuracy of the detection method; in this case, the target tracker should be retained for a period of time to preserve the chance that the target is matched again when the occlusion ends or it is re-detected.
After target assignment is completed, the target tracker information of the current frame can be output.
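The augmenting-path search described above can be sketched with Kuhn's algorithm for maximum bipartite matching. Here `adj[i]` lists the detections whose overlap with tracker `i` exceeds a threshold; building `adj` from the IOU values beforehand is assumed, and the function and variable names are illustrative.

```python
# Maximum bipartite matching via augmenting paths (Kuhn's algorithm).

def max_matching(adj, n_trackers, n_detections):
    """adj[i] lists detection indices compatible with tracker i.
    Returns (match_det, matched): match_det maps each detection index to its
    assigned tracker index (-1 if unmatched), matched is the matching size."""
    match_det = [-1] * n_detections

    def try_augment(i, visited):
        for j in adj[i]:
            if j in visited:
                continue
            visited.add(j)
            # Detection j is free, or its current tracker can be re-routed
            # along an augmenting path to another detection.
            if match_det[j] == -1 or try_augment(match_det[j], visited):
                match_det[j] = i
                return True
        return False

    matched = 0
    for i in range(n_trackers):
        if try_augment(i, set()):
            matched += 1
    return match_det, matched
```

Trackers left unmatched after the search correspond to case (b) above (temporarily stored or retained), and unmatched detections to case (a) (a new tracker is created).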
In one embodiment, as shown in fig. 3, the method further comprises:
S35, extracting the position information and video data corresponding to the traffic accident data to generate accident information, wherein the accident information comprises the start and stop time, participants, position information and video data;
S36, screening the receiving terminals within a preset range of the position information;
S37, sending the accident information to the receiving terminals.
In this embodiment, the output traffic accident is put to specific use: it is combined into complete traffic accident information and sent to designated users or terminals, which may be vehicle-mounted terminals or other intelligent terminals. The screening according to position information may be done in a variety of ways, for example terminals within 1 km of the accident, terminals that frequently travel the route, or terminals that follow the location or route, and can be set as needed.
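As an illustration of the "within 1 km" screening manner, terminals could be filtered by great-circle distance from the accident position. The terminal data structure, function names, and use of the haversine formula are all assumptions for the sketch, not specified by the patent.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def screen_terminals(accident_pos, terminals, radius_km=1.0):
    """Keep terminals whose (lat, lon) lies within radius_km of the accident."""
    lat0, lon0 = accident_pos
    return [t for t in terminals
            if haversine_km(lat0, lon0, t["lat"], t["lon"]) <= radius_km]
```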
Based on the same inventive concept, embodiments of the present invention further provide a traffic accident detection processing system. Since the principle by which the system solves the problem is similar to the traffic accident detection processing method of the foregoing embodiments, its implementation can refer to those embodiments, and repeated details are not described again.
The present embodiment provides a traffic accident detection processing system, as shown in fig. 6 to 8, the system includes:
the first server is used for storing real-time and/or historical traffic monitoring video streams;
the monitoring workstation is used for calling the traffic monitoring equipment and displaying the traffic monitoring video recorded by the traffic monitoring equipment in real time;
and the field workstation and/or the embedded processor are/is used for processing the traffic monitoring video stream in real time by the traffic accident detection processing method, detecting and outputting traffic accident data.
In this embodiment, the traffic accident detection method mainly processes the video output by the traffic monitoring equipment mounted in the traffic monitoring local area network, and automatically detects and extracts traffic accident data from it. The traffic monitoring video stream is first decoded; moving targets are then extracted using the deep neural network YOLOv3 and tracked on the basis of the detection results, outputting the parameter matrix of each moving target over a period of time, comprising its ID, position and velocity vector; finally, the parameter matrix is input into the traffic accident judgment algorithm to detect, and finally output, the traffic accident.
In fig. 6, the traffic monitoring equipment is mounted to traffic monitoring local area network A through an OPC server, together with the first server containing the real-time database, the field workstation, the embedded processor, and the monitoring workstation. The real-time database is responsible for storing the video streams in the local area network; the monitoring workstation can call the traffic monitoring equipment in local area network A and display the video in real time; the field workstation and/or the embedded processor process the traffic monitoring video stream in local area network A in real time, detecting and extracting traffic accidents. Ethernet is a commonly used computer local area network technology; OPC is an abbreviation of OLE for Process Control, a general communication specification.
The embodiment can monitor and output traffic accident information in real time through the first server, the field workstation and the monitoring workstation, and can also process traffic monitoring video streams in the traffic monitoring local area network A in real time through the embedded processor. In one embodiment, the system further comprises:
and the second server is used for receiving the traffic accident data output by the field workstation and/or the embedded processor, and calling the position information and the video data corresponding to the traffic accident data in the first server to generate accident information, wherein the accident information comprises start-stop time, participants, position information and video data.
In this embodiment, the first server, the field workstation, the embedded processor, and the monitoring workstation are simultaneously mounted on traffic monitoring local area network B, and the second server (a Web server) can distribute information externally. A receiving terminal of the system receives the messages sent by the Web server through the Internet. Traffic monitoring local area network A and traffic monitoring local area network B may be the same local area network or different local area networks.
In one embodiment, the second server further comprises:
a screening module, used for screening the receiving terminals within a preset range of the position information;
a sending module, used for sending the accident information to the receiving terminals.
In the present embodiment, the traffic accident information is transmitted to the receiving terminals. The accident information comprises the start and stop time, participants, accident videos and the like, and mainly involves the Web server mounted in traffic monitoring local area network B. The Web server receives the accident information output by the field workstation and/or the embedded processor through traffic monitoring local area network B, expands it, and calls up the accident location and the corresponding video clip through the real-time/historical database to form complete accident information. The receiving terminals are then screened using the location information, selecting the receiving terminal closest to the accident location or the receiving terminals in the area related to that position, and the complete accident information is sent to the end users.
Based on the same inventive concept, embodiments of the present invention further provide a storage medium. Since the principle by which the storage medium solves the problem is similar to the traffic accident detection method and the traffic accident detection processing method of the foregoing embodiments, its implementation can refer to those embodiments, and repeated details are not described again.
The present embodiment provides a storage medium having stored therein a traffic accident detection method according to any of the above-described embodiments, and/or a traffic accident detection processing method according to any of the above-described embodiments.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method of traffic accident detection, the method comprising:
acquiring a parameter matrix of a moving target in a monitoring image sequence, wherein the parameter matrix comprises a target ID, a target position and a target speed;
determining candidate target IDs which disappear from the monitoring image sequence;
starting a time window with preset time, and calculating the ratio of image frames including the candidate target ID in a monitoring image sequence in the time window;
and judging whether the candidate target is an abnormal target or not by combining the target position and the target speed corresponding to the candidate target ID, and determining the traffic accident.
2. The traffic accident detection method of claim 1, wherein the step of determining candidate object IDs that disappear in the sequence of surveillance images comprises:
traversing and comparing target IDs contained in image frames in the monitoring image sequence;
determining a target ID that disappears in the image frame;
excluding the moving target from the monitoring image picture to determine a candidate target ID; and/or
The step of determining whether the candidate target is an abnormal target by combining the target position and the target speed corresponding to the candidate target ID includes:
when the ratio of the image frames including the candidate target ID in the monitoring image sequence in the time window is smaller than a first preset threshold, judging as an abnormal target;
when the ratio of the image frames including the candidate target ID in the monitoring image sequence in the time window is larger than or equal to a first preset threshold, judging whether the moving distance of the target position of the candidate target in the preset time of the time window is smaller than a preset length, and if so, judging as an abnormal target; if not, judging whether the target speed of the candidate target is lower than a preset speed reduction value within the preset time of the time window, if so, judging as an abnormal target; if not, judging as a false alarm; and/or
The method for determining traffic accidents further comprises the following steps:
detecting the occurrence time of the abnormal target in each time window;
when the ratio of the image frames of the abnormal target in the monitoring image sequence in the time window is greater than or equal to a second preset threshold value, counting a preset number of time windows as traffic accidents;
and when the ratio of the image frames including the abnormal target in the monitoring image sequence in the time window is smaller than a second preset threshold value, stopping detection, and counting all the started time windows as traffic accidents.
3. The traffic accident detection method according to claim 2, wherein the step of excluding the moving object determination candidate object ID from the monitor image picture includes:
setting a virtual driving line in an image frame according to an imaging range of the monitoring camera;
and eliminating the moving target with the moving target disappearance position exceeding the virtual driving line.
4. A traffic accident detection processing method, characterized in that the method comprises:
acquiring and decoding a traffic monitoring video stream;
the method comprises the steps that an image frame target of a monitoring video stream is detected and tracked through a moving target detection and tracking model, and an image frame target tracker is output, wherein a parameter matrix of the target tracker comprises a target ID, a target position and a target speed;
the detection of a traffic accident by the traffic accident detection method according to any of claims 1 to 3, outputting traffic accident data.
5. The traffic accident detection processing method according to claim 4, wherein the method of the target tracker for outputting image frames by detecting and tracking image frame targets of the surveillance video stream through the moving target detection and tracking model comprises:
classifying and positioning the moving target in the image frame based on a YOLOv3 network, and determining an image target result of the image frame;
calculating the image frame prediction quantity at the next moment by using a Kalman filtering algorithm;
evaluating the overlapping degree of the image frame prediction quantity and the image target result at the same moment;
and realizing the assignment of moving targets through the Hungarian algorithm, and outputting the target tracker of the image frame.
6. The traffic accident detection processing method of claim 4 or 5, wherein the method further comprises:
extracting position information and video data corresponding to the traffic accident data to generate accident information, wherein the accident information comprises start and stop time, participants, position information and video data;
screening receiving terminals within a preset range of the position information;
and sending the accident information to a receiving terminal.
7. A traffic accident detection and handling system, the system comprising:
the first server is used for storing real-time and/or historical traffic monitoring video streams;
the monitoring workstation is used for calling the traffic monitoring equipment and displaying the traffic monitoring video recorded by the traffic monitoring equipment in real time;
a field workstation and/or an embedded processor for processing the traffic surveillance video stream in real time, detecting and outputting traffic accident data by the traffic accident detection processing method according to claim 4 or 5.
8. The traffic accident detection and processing system of claim 7, wherein the system further comprises: and the second server is used for receiving the traffic accident data output by the field workstation and/or the embedded processor, and calling the position information and the video data corresponding to the traffic accident data in the first server to generate accident information, wherein the accident information comprises start-stop time, participants, position information and video data.
9. The traffic accident detection processing system of claim 8, wherein the second server further comprises:
a screening module, used for screening the receiving terminals within a preset range of the position information;
a sending module, used for sending the accident information to the receiving terminals.
10. A storage medium storing a traffic accident detection method according to any of claims 1 to 3 and/or a traffic accident detection processing method according to any of claims 4 to 6.
CN202010474975.3A 2020-05-29 2020-05-29 Traffic accident detection method, processing method, system and storage medium Active CN112241974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010474975.3A CN112241974B (en) 2020-05-29 2020-05-29 Traffic accident detection method, processing method, system and storage medium


Publications (2)

Publication Number Publication Date
CN112241974A true CN112241974A (en) 2021-01-19
CN112241974B CN112241974B (en) 2024-05-10

Family

ID=74170406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010474975.3A Active CN112241974B (en) 2020-05-29 2020-05-29 Traffic accident detection method, processing method, system and storage medium

Country Status (1)

Country Link
CN (1) CN112241974B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113158813A (en) * 2021-03-26 2021-07-23 精英数智科技股份有限公司 Real-time statistical method and device for flow target
CN113554881A (en) * 2021-07-20 2021-10-26 周刚 Artificial intelligence road event monitoring method, device, system and storage medium
CN114202733A (en) * 2022-02-18 2022-03-18 青岛海信网络科技股份有限公司 Video-based traffic fault detection method and device
CN115936122A (en) * 2023-03-13 2023-04-07 南京感动科技有限公司 Method for processing alarm data by rule engine based on expert system
CN116935652A (en) * 2023-09-14 2023-10-24 四川国消云科技有限公司 Intelligent traffic information system integrated platform data management system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6810132B1 (en) * 2000-02-04 2004-10-26 Fujitsu Limited Traffic monitoring apparatus
CN102073851A (en) * 2011-01-13 2011-05-25 北京科技大学 Method and system for automatically identifying urban traffic accident
CN103226891A (en) * 2013-03-26 2013-07-31 中山大学 Video-based vehicle collision accident detection method and system
JP2013214143A (en) * 2012-03-30 2013-10-17 Fujitsu Ltd Vehicle abnormality management device, vehicle abnormality management system, vehicle abnormality management method, and program
CN110516556A (en) * 2019-07-31 2019-11-29 平安科技(深圳)有限公司 Multi-target tracking detection method, device and storage medium based on Darkflow-DeepSort
CN111046212A (en) * 2019-12-04 2020-04-21 支付宝(杭州)信息技术有限公司 Traffic accident processing method and device and electronic equipment


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113158813A (en) * 2021-03-26 2021-07-23 精英数智科技股份有限公司 Real-time statistical method and device for flow target
CN113554881A (en) * 2021-07-20 2021-10-26 周刚 Artificial intelligence road event monitoring method, device, system and storage medium
CN114202733A (en) * 2022-02-18 2022-03-18 青岛海信网络科技股份有限公司 Video-based traffic fault detection method and device
CN115936122A (en) * 2023-03-13 2023-04-07 南京感动科技有限公司 Method for processing alarm data by rule engine based on expert system
CN116935652A (en) * 2023-09-14 2023-10-24 四川国消云科技有限公司 Intelligent traffic information system integrated platform data management system

Also Published As

Publication number Publication date
CN112241974B (en) 2024-05-10

Similar Documents

Publication Publication Date Title
CN112241974A (en) Traffic accident detection method, processing method, system and storage medium
Ijjina et al. Computer vision-based accident detection in traffic surveillance
CN109887281B (en) Method and system for monitoring traffic incident
Alpatov et al. Vehicle detection and counting system for real-time traffic surveillance
CN111369807B (en) Traffic accident detection method, device, equipment and medium
CN102073851B (en) Method and system for automatically identifying urban traffic accident
KR101095528B1 (en) An outomatic sensing system for traffic accident and method thereof
CN108802758B (en) Intelligent security monitoring device, method and system based on laser radar
Ghahremannezhad et al. Real-time accident detection in traffic surveillance using deep learning
US20130265423A1 (en) Video-based detector and notifier for short-term parking violation enforcement
KR101877294B1 (en) Smart cctv system for crime prevention capable of setting multi situation and recognizing automatic situation by defining several basic behaviors based on organic relation between object, area and object's events
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
Bloisi et al. Argos—A video surveillance system for boat traffic monitoring in Venice
CN101727672A (en) Method for detecting, tracking and identifying object abandoning/stealing event
GB2443739A (en) Detecting image regions of salient motion
CN112241969A (en) Target detection tracking method and device based on traffic monitoring video and storage medium
CN112242058B (en) Target abnormity detection method and device based on traffic monitoring video and storage medium
CN105809954A (en) Traffic event detection method and system
CN112560546B (en) Method and device for detecting throwing behavior and storage medium
CN112257683A (en) Cross-mirror tracking method for vehicle running track monitoring
KR102351476B1 (en) Apparatus, system and method for object collision prediction
CN113361364A (en) Target behavior detection method, device, equipment and storage medium
KR101407394B1 (en) System for abandoned and stolen object detection
CN116543023A (en) Multi-sensor target crowd intelligent tracking method based on correction deep SORT
CN115376037A (en) Station key area safety state monitoring method based on video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100176 floor 10, building 1, zone 2, yard 9, Taihe 3rd Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Beijing National New Energy Vehicle Technology Innovation Center Co.,Ltd.

Address before: 100089 1705 100176, block a, building 1, No. 10, Ronghua Middle Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant before: BEIJING NEW ENERGY VEHICLE TECHNOLOGY INNOVATION CENTER Co.,Ltd.

GR01 Patent grant