CN114170580A - Highway-oriented abnormal event detection method

Publication number: CN114170580A
Application number: CN202111456180.0A
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Prior art keywords: vehicle, detection, track, model, point
Inventors: 李东升, 李建飞, 邹宇, 张宇杰, 刘建华, 王东乐, 王牣, 叶伟强, 张锋鑫
Applicant and assignee: Lianyungang Jierui Electronics Co Ltd

Classifications

    • G06F18/22 Pattern recognition - matching criteria, e.g. proximity measures
    • G06F18/23 Pattern recognition - clustering techniques
    • G06F18/24133 Classification based on distances to prototypes
    • G06N3/045 Neural networks - combinations of networks
    • G06N3/08 Neural networks - learning methods
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T7/80 Analysis of captured images to determine camera calibration parameters
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30241 Trajectory


Abstract

The invention discloses a highway-oriented abnormal event detection method. Images are acquired from video monitoring equipment, and vehicle positions are detected with a YOLO object detection network; an attention-based two-stage hourglass keypoint detection network is proposed to obtain vehicle keypoints, and the keypoints' three-dimensional coordinates are then computed. A three-dimensional motion trajectory is obtained by tracking the vehicle's position in three-dimensional space. For abnormal event analysis, a dynamic clustering method is proposed: the detected motion trajectories of target vehicles are cluster-analyzed on the basis of their three-dimensional tracks, abnormal trajectories are identified, and their types are determined, realizing the judgment of abnormal behavior events of individual vehicles. In parallel, traffic statistics over time intervals are dynamically clustered so that traffic-level abnormal events can be accurately identified. The method identifies abnormal events accurately and addresses the low precision and poor robustness of existing highway abnormal event detection methods.

Description

Highway-oriented abnormal event detection method
Technical Field
The invention belongs to the technical field of road traffic safety and deep learning, and particularly relates to an abnormal event detection method for an expressway.
Background
With China's economic development and rising urbanization, highway construction has entered a period of rapid growth, and the traffic network formed by highways has become essential basic infrastructure. Against the backdrop of growing urban populations and vehicle counts, highway operation and management is becoming increasingly difficult, traffic congestion and traffic accidents have become very prominent problems, and the resulting highway safety issues cannot be ignored. Highway safety accidents not only put enormous pressure on traffic safety but also cause substantial property loss and casualties.
Irregular driving behavior is one of the main causes of highway traffic accidents. Timely detection of and early warning about abnormal vehicle behavior events on highways is therefore particularly important. Advanced technical means can raise the level of highway intelligence: rapidly acquiring and accurately processing highway information, and promptly capturing and handling abnormal behavior events on the road, can to some extent help restore smooth traffic and prevent accidents.
Abnormal event detection is an important topic in highway intelligence. It helps to recognize abnormal vehicle behavior in time, issue safety warnings promptly, provide technical support for rapid incident response, and supply detailed data for investigation and evidence collection. Among the available approaches, video-based detection of abnormal vehicle behavior is one of the most common: cameras installed along the highway and at sites such as bridges and tunnels monitor traffic conditions in real time, and an algorithm extracts abnormal behavior events from the monitoring footage and reports them to managers in time.
However, most current video-based abnormal event detection methods analyze the two-dimensional coordinates of a vehicle directly in the video frame to obtain its trajectory and judge its running state from that trajectory. Because of the camera angle, vehicle occlusion, and similar factors, such methods struggle to extract accurate trajectory information, which degrades the accuracy of event detection.
Disclosure of Invention
To address the shortcomings of existing two-dimensional vehicle trajectory detection, the invention aims to provide a highway abnormal event detection method based on three-dimensional vehicle trajectories. It adopts deep-learning-based vehicle detection and vehicle keypoint detection, designs a two-dimensional-to-three-dimensional coordinate conversion that combines the camera's intrinsic and extrinsic parameters with vehicle model information to extract the vehicle's three-dimensional driving trajectory, and proposes a dynamic clustering method with which the three-dimensional trajectories and the road traffic state are analyzed, thereby accurately predicting abnormal vehicle behavior and abnormal traffic-state events in a highway monitoring scene.
The technical solution for realizing the purpose of the invention is as follows: a highway-oriented abnormal event detection method comprises the following steps:
step 1, reading a real-time video stream, decoding the video and acquiring an image frame;
step 2, preprocessing the image frame, including color space conversion, image scaling and image normalization;
step 3, loading a vehicle detection model, running inference on the preprocessed image, and parsing the inference result to obtain vehicle position and category information;
step 4, loading a vehicle keypoint detection model, performing keypoint detection on each detected vehicle target, post-processing the inference result to obtain the vehicle's keypoint positions, and selecting the vehicle's main keypoints;
step 5, calculating to obtain three-dimensional coordinates of main key points of the vehicle by using an internal and external parameter matrix of the camera and vehicle model parameters according to the detected coordinates of the key points of the vehicle;
step 6, calculating the central point of the vehicle in a three-dimensional space according to the calculated three-dimensional coordinates of the main key points of the vehicle;
step 7, taking a central point of a three-dimensional coordinate of a main key point of the vehicle as a tracking target, using GIOU as an evaluation index, matching the detected target by using a Hungarian algorithm, and tracking the target by using Kalman filtering;
step 8, recording the position and the time point of the three-dimensional coordinate center point of the tracking target in the process from appearance to disappearance as the running track of the vehicle;
step 9, according to the extracted vehicle running track, dynamically clustering the vehicle track by using a dynamic clustering algorithm, and marking the vehicle track category so as to identify the abnormal behavior event of the vehicle;
and step 10, counting the traffic statistics in the time interval at regular time intervals, dynamically clustering the statistic states at different moments, marking the traffic state categories, and identifying different traffic running states so as to judge abnormal events in traffic running.
Further, in step 3, a YOLOv3-5layer model is adopted as the vehicle detection model. The model is supported by open-source data sets and a self-built data set: these are used to compute the model's anchors, and the model is trained on them to obtain the final weight file. The open-source data sets include COCO and VOC; the self-built data set is obtained by collecting video data in different traffic scenes and labeling it manually. The detection result of the YOLOv3-5layer model comprises the vehicle category and the vehicle position, where the categories are small, medium, and large vehicles.
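The patent does not state how the anchors are computed from these data sets; common YOLO practice is k-means on the labeled boxes' widths and heights with 1 - IoU as the distance. A minimal sketch under that assumption (the toy box data and the deterministic, area-spread initialization are invented for illustration):

```python
import numpy as np

def iou_wh(box, anchors):
    """IoU between one (w, h) box and an array of (w, h) anchors, with all
    boxes aligned at the origin (the standard YOLO anchor-clustering trick)."""
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100):
    """k-means on (w, h) pairs with 1 - IoU as the distance; seeds are spread
    deterministically across the area-sorted boxes for reproducibility."""
    order = np.argsort(boxes[:, 0] * boxes[:, 1])
    anchors = boxes[order[np.linspace(0, len(boxes) - 1, k).astype(int)]].astype(float)
    for _ in range(iters):
        # assign each box to the anchor with the highest IoU (lowest 1 - IoU)
        assign = np.array([np.argmax(iou_wh(b, anchors)) for b in boxes])
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else anchors[j] for j in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors

# toy (w, h) data: three clearly separated size groups
boxes = np.array([[10, 12], [12, 10], [50, 60], [55, 65], [200, 180], [190, 210]])
anchors = kmeans_anchors(boxes, k=3)
```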
Further, the vehicle keypoint detection model in step 4 is a two-stage hourglass model with an attention mechanism introduced into the hourglass structure; the final model weights are obtained by training on a self-built data set. When building the data set, 14 position points are labeled on each vehicle according to its actual shape: left/right front bottom, left/right front headlamp, left/right front-window bottom, left/right front upper corner, left/right rear upper corner, left/right rear headlamp, and left/right rear bottom. Of these, the 8 points left/right front bottom, left/right rear bottom, left/right front upper corner, and left/right rear upper corner are selected as the main keypoints.
Further, loading the vehicle keypoint detection model in step 4, performing keypoint detection on each detected vehicle target, and post-processing the inference result to obtain the vehicle's keypoint positions specifically comprises:
step 4-1, for the output of each of the two stages, computing the maximum pixel value of each channel and recording the coordinate position of the maximum;
step 4-2, computing the distance between the two stages' maximum-value coordinates for each channel; if the distance is greater than 5, the outputs of the two stages do not match for that channel and the corresponding keypoint is considered not detected; otherwise the corresponding keypoint is considered detected;
step 4-3, traversing every channel and repeating step 4-2 until all 14 channels have been processed.
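Steps 4-1 to 4-3 can be sketched as follows. The heatmap shapes and the choice of reporting the second stage's peak for a matched channel are assumptions; the patent only specifies the per-channel argmax comparison and the distance threshold of 5:

```python
import numpy as np

def extract_keypoints(stage1, stage2, max_dist=5.0):
    """stage1/stage2: (C, H, W) heatmaps from the two hourglass stages.
    Returns per channel an (x, y) peak, or None where the stages disagree."""
    keypoints = []
    for c in range(stage1.shape[0]):
        y1, x1 = np.unravel_index(np.argmax(stage1[c]), stage1[c].shape)
        y2, x2 = np.unravel_index(np.argmax(stage2[c]), stage2[c].shape)
        if np.hypot(x1 - x2, y1 - y2) > max_dist:
            keypoints.append(None)                 # stages disagree -> not detected
        else:
            keypoints.append((int(x2), int(y2)))   # report the later stage's peak
    return keypoints

# toy heatmaps: channel 0 agrees across stages, channel 1 does not
s1 = np.zeros((2, 64, 64)); s2 = np.zeros((2, 64, 64))
s1[0, 10, 20] = 1.0; s2[0, 11, 21] = 1.0   # peaks ~1.4 px apart -> detected
s1[1, 5, 5] = 1.0;   s2[1, 50, 50] = 1.0   # peaks far apart -> rejected
kps = extract_keypoints(s1, s2)
```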
Further, in step 5, according to the detected coordinates of the key points of the vehicle, the three-dimensional coordinates of the main key points of the vehicle are calculated by using the internal and external parameter matrixes of the camera and the parameters of the vehicle model, and the specific process includes:
step 5-1, obtaining the camera's intrinsic matrix K, rotation matrix R, and translation vector t through camera calibration, where the rotation matrix and translation vector form the camera's extrinsic parameters, K and R are real 3x3 matrices, and t is a 3x1 vector;
step 5-2, for each detected keypoint, representing its two-dimensional pixel coordinate on the image as p = [u, v, 1]^T, where u is the abscissa and v the ordinate of the point; for main keypoints that were not detected, steps 5-3 and 5-4 below are skipped;
and 5-3, calculating coefficients, and estimating the equation coefficients under the condition that the internal reference matrix and the external reference matrix of the camera are known, wherein the equation is as follows:
[x_a, y_a, z_a]^T = R^-1 K^-1 p
[x_b, y_b, z_b]^T = R^-1 t
s = (z_b + z) / z_a
where R^-1 denotes the inverse of the extrinsic rotation matrix and K^-1 the inverse of the intrinsic matrix; [x_a, y_a, z_a]^T is the intermediate result vector of the operation R^-1 K^-1 p, with z_a its vertical component; [x_b, y_b, z_b]^T is the intermediate result vector of the operation R^-1 t, with z_b its vertical component; and z is the preset value of the main keypoint in the vertical direction. For the four points left front bottom, right front bottom, left rear bottom, and right rear bottom, z = 0 is taken; for the left/right front upper corners and left/right rear upper corners, z = z_model is taken, where z_model is looked up in a table according to the class of the detected target: for small vehicles z_model = 1.4 m, for medium vehicles z_model = 1.85 m, and for large vehicles z_model = 3.3 m;
And 5-4, calculating a three-dimensional coordinate point x in the real world according to the two-dimensional coordinate point p of the key point, wherein the calculation formula is as follows:
x = [X Y Z]^T = s R^-1 K^-1 p - R^-1 t
in the formula, X, Y and Z represent components of a three-dimensional coordinate point in three directions of space X, Y and Z, respectively.
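A minimal numpy sketch of steps 5-3 and 5-4; the calibration values below are invented toy numbers, not from the patent:

```python
import numpy as np

def pixel_to_world(p_uv, K, R, t, z):
    """Back-project pixel (u, v) to world coordinates, using the keypoint's
    preset world height z (0 for the bottom points, z_model for roof corners)."""
    p = np.array([p_uv[0], p_uv[1], 1.0])
    a = np.linalg.inv(R) @ np.linalg.inv(K) @ p   # [x_a, y_a, z_a]^T
    b = np.linalg.inv(R) @ t                      # [x_b, y_b, z_b]^T
    s = (b[2] + z) / a[2]                         # scale from the height constraint
    return s * a - b                              # x = s R^-1 K^-1 p - R^-1 t

# invented toy calibration: identity rotation, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])
ground_pt = pixel_to_world((320, 240), K, R, t, z=0.0)  # principal ray, z = 0 plane
```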
Further, step 6 computes the vehicle's center point in three-dimensional space from the computed three-dimensional coordinates of its main keypoints, specifically:
step 6-1, computing the maximum and minimum of the three-dimensional coordinates of all detected main keypoints in the x, y, and z directions, obtaining x_min, x_max, y_min, y_max, z_min, z_max;
step 6-2, for each of the three directions x, y, and z, computing the mean of the maximum and minimum in that direction, obtaining x_mean, y_mean, z_mean, and taking X_c = (x_mean, y_mean, z_mean) as the center point of the vehicle in three-dimensional space.
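Steps 6-1 and 6-2 amount to taking the midpoint of the axis-aligned 3-D bounding box of the keypoints; a small sketch with invented corner coordinates:

```python
import numpy as np

def vehicle_center(points_3d):
    """Center of the axis-aligned 3-D bounding box of the detected main
    keypoints (midpoint of min and max in each of x, y, z)."""
    pts = np.asarray(points_3d, dtype=float)
    return (pts.min(axis=0) + pts.max(axis=0)) / 2.0

# invented example: the 8 main keypoints of a 4 m x 2 m x 1.4 m small vehicle
corners = [[0, 0, 0], [4, 0, 0], [0, 2, 0], [4, 2, 0],
           [0, 0, 1.4], [4, 0, 1.4], [0, 2, 1.4], [4, 2, 1.4]]
center = vehicle_center(corners)
```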
Further, in step 7, the step of using the central point of the three-dimensional coordinates of the main key points of the vehicle as a tracking target, using GIOU as an evaluation index, matching the detected target by using the hungarian algorithm, and tracking the target by using kalman filtering specifically includes:
step 7-1, supposing the current frame has M_t detection boxes and M_{t-1} matched boxes exist up to the previous frame, an error matrix E is computed from the two sets of boxes, where E_ij = 1 - GIOU(B_{t,i}, B_{t-1,j}), B_{t,i} denotes the i-th detection box in the current frame, and B_{t-1,j} denotes the j-th existing detection box; the GIOU is computed as:
GIOU(A, B) = |A ∩ B| / |A ∪ B| - |C \ (A ∪ B)| / |C|
where A ∩ B denotes the intersection area of boxes A and B, A ∪ B their union area, and C the area of the smallest box enclosing both A and B;
7-2, calculating the optimal matching between the detected detection frame of the current frame and the existing detection frame by using a Hungarian algorithm according to the error matrix E to obtain the corresponding relation between the two groups of detection frames;
7-3, correcting the position of a center point of each detection frame matched with the existing detection frame in the current frame by using Kalman filtering according to the detected position of the corresponding three-dimensional center point of the vehicle; regarding a detection frame which is not matched with the existing detection frame at the current frame, considering that the detection frame appears for the first time, adding the detection frame into the existing detection frame set, and initializing a Kalman filter by using a central point position corresponding to the detection frame; for an existing detection frame that does not match the detected detection frame, the detection frame is considered to have disappeared, and the detection frame is deleted from the existing detection frame set.
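A sketch of the GIOU used in the error matrix of step 7-1, assuming axis-aligned (x1, y1, x2, y2) boxes; the Hungarian matching over E could then be solved with, e.g., scipy.optimize.linear_sum_assignment:

```python
def giou(a, b):
    """Generalized IoU for axis-aligned boxes (x1, y1, x2, y2). Unlike plain
    IoU it stays informative (negative, distance-sensitive) for disjoint boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    # smallest box C enclosing both a and b
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)
    return inter / union - (area_c - union) / area_c

# identical boxes give 1 (error 1 - GIOU = 0); disjoint boxes go negative
same = giou((0, 0, 2, 2), (0, 0, 2, 2))
apart = giou((0, 0, 1, 1), (3, 0, 4, 1))
```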
Further, the dynamic clustering algorithm in step 9 comprises the steps of:
(1) suppose there are m samples, each belonging to one of n classes C_1, C_2, ..., C_n; for a new sample S, the distances between S and all existing samples are computed with the selected metric, yielding the distance set:
D = {d_i = dist(S, S_i) | i ∈ [1, m]}
(2) the minimum of all the distances, d_min = min(D), is computed, and the sample point S_min achieving the minimum and the class C_opt it belongs to are recorded;
(3) a threshold T is selected; if d_min ≤ T, S is considered to belong to C_opt and is assigned to class C_opt; if d_min > T, the new sample is considered to belong to no existing class, a new class C_{n+1} is created, and S is assigned to C_{n+1}; in addition, if the distances between K sample points of two classes are smaller than the set threshold T, the two classes are merged into one class, where K > 0;
(4) steps (1) to (3) are repeated until no new samples arrive.
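Steps (1) to (3) can be sketched as below; the metric is a pluggable parameter (plain Euclidean distance here), and the class-merging rule of step (3) is omitted for brevity, so this is a simplified sketch rather than the full algorithm:

```python
def dynamic_cluster(samples, dist, threshold):
    """Incremental clustering: assign each incoming sample to the class of its
    nearest existing sample if within `threshold`, else open a new class."""
    points, labels = [], []
    next_label = 0
    for s in samples:
        if points:
            d, j = min((dist(s, p), j) for j, p in enumerate(points))
        else:
            d = float("inf")                 # first sample always opens a class
        if d <= threshold:
            labels.append(labels[j])         # join the nearest sample's class
        else:
            labels.append(next_label)        # open class C_{n+1}
            next_label += 1
        points.append(s)
    return labels

euclid = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
stream = [(0, 0), (0.5, 0), (10, 10), (10.2, 9.9), (0.3, 0.4)]
labels = dynamic_cluster(stream, euclid, threshold=2.0)
```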
Further, in step 9, dynamically clustering the vehicle tracks with the dynamic clustering algorithm according to the extracted vehicle driving tracks, and labeling the track classes so as to identify abnormal vehicle behavior events, specifically comprises:
step 9-1, according to the vehicle track records, when a new track is obtained, simplifying the track with the Douglas-Peucker method to reduce the number of track points to N, with N = 30;
step 9-2, processing the vehicle tracks with the dynamic clustering method, where the distance metric between tracks is chosen as the dynamic time warping (DTW) distance;
step 9-3, computing the proportion of each track class in the total number of tracks and sorting the classes in descending order, then labeling and confirming the event type of each track class and its speed records against the existing label database, thereby obtaining the actual event-type meaning of each track class; events that match no entry in the label database are classified as unknown events.
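A minimal dynamic time warping (DTW) distance, as used for the inter-track metric in step 9-2; shown on 1-D sequences for brevity, with the point metric as a pluggable parameter (for 2-D track points, pass a Euclidean point distance):

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic O(len(a) * len(b)) dynamic time warping distance between two
    sequences: the minimum cumulative point distance over monotone alignments."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# a time-shifted copy of a sequence stays close under DTW
d_same = dtw([1, 2, 3, 4], [1, 1, 2, 3, 4])
d_diff = dtw([1, 2, 3, 4], [10, 11, 12, 13])
```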
Further, step 10, counting the traffic statistics at regular time intervals, dynamically clustering the statistic states of different periods, labeling the traffic-state classes, and identifying different traffic running states so as to judge abnormal events in traffic operation, specifically comprises:
step 10-1, according to the vehicles' track information, counting at a fixed time interval the traffic statistics within each period, including the average speed V_mean, average traffic flow F_mean, average time occupancy O_mean, average headway H_mean, and average queue length L_mean, obtaining a state vector State = [V_mean, F_mean, O_mean, H_mean, L_mean] describing the traffic running state in that period;
step 10-2, clustering the state vectors with the dynamic clustering method described above, where the distance metric between samples is the Euclidean distance;
step 10-3, labeling and confirming the actual event meaning of each class against the existing label database to obtain each class's actual event-type meaning; events that match no entry in the label database are classified as unknown events.
Compared with the prior art, the invention has the following remarkable advantages:
1. The invention provides a vehicle three-dimensional trajectory extraction method based on deep learning and the camera's intrinsic and extrinsic parameters, which obtains the vehicle's driving trajectory in real three-dimensional space. Compared with a traditional two-dimensional trajectory, the three-dimensional trajectory provides richer position information, effectively reducing the adverse effects of vehicle occlusion, camera angle, and the like on vehicle behavior judgment and improving the accuracy of abnormal event detection.
2. The invention provides a method for detecting vehicle targets with a YOLOv3-5layer model, which uses 5 yolo layers to process network features at different resolutions so as to recognize targets of smaller size; recognizing small targets is essential for highway monitoring scenes with a wide viewing angle and small targets. The model's anchors are computed from the open-source data sets and the self-built data set, and the model weights are obtained by training on them. The self-built data introduced by this method gives the model's target detection generalization and robustness that better meet engineering requirements.
3. The invention provides a novel vehicle keypoint detection method using an attention-based two-stage hourglass model. Each stage uses an hourglass-shaped network to produce heatmap predictions at the same resolution as the original input, and the outputs of both stages are supervised during training. Each heatmap channel represents the probability distribution of the corresponding keypoint's location. Stacking two hourglass networks lets the second stage learn the interrelations between keypoints from the first stage's output, improving the model's detection precision. The vehicle keypoints, selected according to the vehicle's actual shape, are strongly representative of its three-dimensional form and describe the vehicle's shape and size more accurately.
4. The invention provides a two-dimensional-to-three-dimensional keypoint conversion method using the camera's intrinsic and extrinsic matrices, which converts the image coordinates of two-dimensional keypoints into world coordinates and assigns different height values according to the detected target's class, thereby computing the keypoints' real positions in the three-dimensional world. This improves the usability and information richness of the vehicle keypoints, supplies data for extracting and constructing the vehicle's three-dimensional driving trajectory, and helps improve the system's detection precision.
5. The invention provides a vehicle tracking method based on the Hungarian algorithm and Kalman filtering, with GIOU as the matching criterion. GIOU overcomes the IOU's inability to meaningfully measure the difference between two non-overlapping boxes and provides a more objective measure of box overlap. The Hungarian algorithm quickly solves the correspondence between the boxes of consecutive frames, determining the target's position change over time. Kalman filtering estimates and corrects the position of the target's three-dimensional center point by combining observed values with model predictions, making the center point's position change more stable.
6. The invention provides a dynamic clustering method that continuously adjusts its classes as new samples arrive, without requiring all samples up front or a preset number of target classes. It fits the practical abnormal event detection clustering problem well and provides stable, accurate clustering results.
7. The invention provides a vehicle abnormal event detection method based on trajectory recognition, which identifies abnormal track data from both position and speed by dynamically clustering the vehicles' position tracks and speed records separately. Combined with labels, the dynamic clustering determines the actual event meaning represented by each class. The method is highly extensible: the recognized abnormal events can easily be expanded, and similar vehicle behavior events can be merged into one class.
8. The invention provides a traffic-state abnormal event recognition method based on dynamic clustering: from the three-dimensional track data, road traffic volume is counted by time period to obtain a state vector describing each period's traffic running state; clustering the state vectors divides them into classes, and the actual event type of each class is confirmed against the existing label database. The clustering requires no preset number of classes, is highly extensible, and supports rapid expansion of abnormal events and merging of similar events.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
Fig. 1 is a schematic flow chart of an abnormal event detection method for an expressway according to the present invention.
FIG. 2 is a schematic diagram of a two-stage hourglass network architecture according to the present invention.
FIG. 3 is a schematic diagram of a network structure of a single stage in the attention-based two-stage hourglass network of the present invention.
FIG. 4 is a diagram of the residual blocks with attention mechanism in the attention-based two-stage hourglass network of the present invention.
FIG. 5 is a schematic view of the vehicle key point location of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings. It should be noted that this embodiment only illustrates the invention and does not limit it; after reading this specification, those skilled in the art may modify the embodiment as needed without inventive contribution, and such modifications remain protected by patent law within the scope of the claims of the present invention.
The invention provides an expressway-oriented abnormal event detection method, which reconstructs a track of a vehicle in a three-dimensional space based on target detection and key point detection technologies, uses a dynamic clustering method to classify the behavior of the vehicle according to track data, and distinguishes the traffic state of a road according to traffic statistics, thereby realizing the detection of high-speed abnormal events. The method comprises the steps of obtaining images through video monitoring equipment, and detecting the position of a vehicle by using a YOLO target detection network; proposing an attention-based two-stage hourglass keypoint detection network to obtain vehicle keypoints; and then combining the internal and external parameter matrixes of the camera to obtain the three-dimensional coordinate position of the key point of the vehicle. And obtaining the three-dimensional motion track of the vehicle by tracking the three-dimensional space position of the vehicle. In abnormal event analysis, the invention provides a dynamic clustering method, which is used for carrying out clustering analysis on the detected movement track of a target vehicle according to the three-dimensional track of the vehicle, identifying the abnormal track and judging the type of the abnormal track, thereby realizing the judgment of the abnormal behavior event of a single vehicle. Meanwhile, the traffic statistics in a period of time are dynamically clustered, so that the traffic abnormal event is accurately researched and judged. The method breaks through the conventional method for judging the vehicle abnormal event by using the two-dimensional track, and the information richness of the vehicle track is improved by the extracted three-dimensional track. 
The dynamic clustering method provided by the invention captures abnormal events rapidly and identifies them accurately, solving the low precision and poor robustness of conventional expressway abnormal event detection methods.
This embodiment is directed at expressway abnormal event detection; the specific flow of the method is shown in fig. 1. After reading the external video stream, the method sequentially performs image preprocessing, target detection, key point detection, two-dimensional coordinate conversion, three-dimensional center point calculation, and target tracking on the video images to generate vehicle trajectories, and then detects abnormal vehicle behavior from the trajectories and road lane information; the detected abnormal behaviors include wrong-way driving, lane straddling and lane changing, slow driving, and parking events. The specific steps are as follows:
Step 1, reading a real-time video stream, decoding it, and acquiring image frames; the real-time video stream delivers raw data over the RTSP protocol, which is then decoded by a software or hardware decoder to obtain image frames.
Step 2, preprocessing the image frames; preprocessing comprises color space conversion, image scaling, and image normalization. According to the format of the frames decoded in step 1, each frame is converted to RGB. The input size of the target detection network is 416x416, so the image is scaled to 416x416. Pixel values range from 0 to 255, while the target detection network expects inputs in the range 0 to 1, so the image is normalized: the RGB value of each pixel is divided by 255, scaling the value range to 0-1 as the final network input.
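As an illustration, the preprocessing of this step can be sketched as follows; this is a minimal numpy version in which a simple nearest-neighbour scaling stands in for whatever resampling the deployed pipeline uses, and all names are illustrative:

```python
import numpy as np

def preprocess(frame, size=416):
    """Convert a decoded BGR frame to a 416x416 RGB tensor in [0, 1]."""
    rgb = frame[..., ::-1]                             # color space conversion: BGR -> RGB
    idx_h = np.arange(size) * frame.shape[0] // size   # nearest-neighbour row indices
    idx_w = np.arange(size) * frame.shape[1] // size   # nearest-neighbour column indices
    resized = rgb[idx_h][:, idx_w]                     # image scaling to 416x416
    return resized.astype(np.float32) / 255.0          # normalization to [0, 1]
```

The result can then be fed to the detection network after whatever channel reordering the framework expects.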
Step 3, detecting the vehicles appearing in the image frames with the target detection model. In this step, the vehicle target detection model is first loaded, forward inference is performed on the input image frame, and the network output is post-processed to obtain the final detection result. For each object, the detection result comprises three parts: the detected position, the class, and the confidence of belonging to that class. The target detection model is a Yolov3-5layer model containing 5 yolo layers, which enables it to identify smaller targets. The vehicle detection classes are small, medium, and large vehicles. The anchors of the model are computed jointly from an open-source data set and a self-built data set, and the weight file of the model is likewise trained on both. The self-built data set was constructed by the inventors, with the vehicle annotations completed manually.
Step 4, detecting key points of each detected target. The key point detection model is the attention-based two-stage hourglass model proposed by the invention; the network structure is shown in fig. 2, the structure of each stage in fig. 3, and the structure of each residual block with attention mechanism in fig. 4. In the attention residual block, the image features are first compressed by a 1x1 convolution and then passed through a 3x3 convolution for feature extraction. After the 3x3 convolution, the features are pooled along the channel dimension, yielding a mask with one channel and the same spatial size as the feature map; the mask is element-wise multiplied with the 3x3 convolution result. Finally, a 1x1 convolution adjusts the channel number of the product back to that of the original features, which are then added to the block input. In residual structures that require channel adjustment, a 1x1 convolution is added after the block input to convert the features to the target channel number, the 1x1 convolution after the multiplication is likewise adjusted to the target channel number, and the addition is then performed. The attention-based two-stage hourglass model is trained on the self-built data set to obtain the final weight file. In this step, the trained model is preloaded. During processing, each target is cropped from the video frame according to its detected bounding box and scaled to 64x64 as input to the hourglass model.
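The core of the attention residual block (channel-dimension pooling into a single-channel mask, element-wise multiplication, then a residual add) can be sketched as below. This is only a structural sketch, not the trained network: the learned 1x1 convolutions are modeled as plain channel-mixing matrices and the 3x3 convolution is omitted for brevity.

```python
import numpy as np

def attention_residual(x, w_compress, w_expand):
    """Structural sketch of the attention residual block.

    x:          input feature map, shape (C, H, W)
    w_compress: channel-mixing weights of the compressing 1x1 conv, shape (C', C)
    w_expand:   channel-mixing weights restoring the channel count, shape (C, C')
    """
    h = np.einsum('oc,chw->ohw', w_compress, x)   # 1x1 conv: compress channels
    # (the 3x3 feature-extraction convolution would act on h here)
    mask = h.mean(axis=0, keepdims=True)          # pool over channel dim -> (1, H, W) mask
    h = h * mask                                  # element-wise multiply with the mask
    h = np.einsum('oc,chw->ohw', w_expand, h)     # 1x1 conv: restore channel count
    return h + x                                  # residual addition with the input
```

When zero weights are used, only the identity path remains, which makes the residual structure easy to verify.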
The output of each stage of the attention-based two-stage hourglass network has 14 channels; the pixel coordinates of the maximum value in each channel give the two-dimensional coordinates of one vehicle key point. The location of each key point on the vehicle is shown in fig. 5. The positions of the 14 key points were chosen by the inventors according to the vehicle shape. For each channel, the distance between the detection results of the two stages is computed: if the distance exceeds 5 pixels, the key point corresponding to that channel is considered not detected; if it is within 5 pixels, the key point is considered detected.
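The per-channel peak extraction and two-stage consistency check described above can be sketched as follows (illustrative names; the heatmap layout is assumed to be (channels, H, W)):

```python
import numpy as np

def extract_keypoints(stage1, stage2, thresh=5.0):
    """Return one (x, y) key point per channel, or None when the two stages disagree."""
    keypoints = []
    for c in range(stage1.shape[0]):
        y1, x1 = np.unravel_index(np.argmax(stage1[c]), stage1[c].shape)
        y2, x2 = np.unravel_index(np.argmax(stage2[c]), stage2[c].shape)
        if np.hypot(float(x1 - x2), float(y1 - y2)) < thresh:
            keypoints.append((int(x2), int(y2)))   # stages agree: key point detected
        else:
            keypoints.append(None)                 # peaks too far apart: not detected
    return keypoints
```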
Step 5, from the detected key point coordinates, 8 main key points are selected: the left front bottom, right front bottom, left rear bottom, right rear bottom, left front upper corner, right front upper corner, left rear upper corner, and right rear upper corner of the vehicle. The two-dimensional key points are then converted to coordinate points in the real three-dimensional world using the intrinsic and extrinsic camera matrices; undetected main key points are not processed. During conversion, the height of the four bottom key points (left front bottom, right front bottom, left rear bottom, right rear bottom) is set to 0, while the height of the four top key points (left front upper corner, right front upper corner, left rear upper corner, right rear upper corner) is looked up in a table according to the class produced by vehicle target detection, i.e., a different value per vehicle class: 1.4 meters for small vehicles, 1.85 meters for medium vehicles, and 3.3 meters for large vehicles.
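The two-dimensional to three-dimensional conversion of this step can be sketched numerically. Assuming the standard pinhole model p ~ K(RX + t), the point at a known height z is recovered as x = sR⁻¹K⁻¹p - R⁻¹t with s = (z_b + z)/z_a; the calibration values in the example are synthetic, not calibration results from the embodiment:

```python
import numpy as np

def backproject(uv, K, R, t, z=0.0):
    """Recover the 3-D point at height z that projects to pixel (u, v)."""
    p = np.array([uv[0], uv[1], 1.0])
    a = np.linalg.inv(R) @ np.linalg.inv(K) @ p   # [x_a, y_a, z_a]^T
    b = np.linalg.inv(R) @ t                      # [x_b, y_b, z_b]^T
    s = (b[2] + z) / a[2]                         # scale factor from the height constraint
    return s * a - b                              # x = s R^-1 K^-1 p - R^-1 t
```

For example, with K = [[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], R the identity, and t = [0, 0, 10], the ground point (1, 2, 0) projects to pixel (740, 560), and backproject((740, 560), K, np.eye(3), t) recovers (1, 2, 0).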
Step 6, computing the maximum and minimum of the detected three-dimensional main key points in the x, y, and z directions, and then the mean of the maximum and minimum in each direction, which gives the center point of the vehicle in the three-dimensional world.
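This center point computation amounts to taking the midpoint of the axis-aligned bounding box of the detected main key points, e.g.:

```python
import numpy as np

def vehicle_center(keypoints_3d):
    """Midpoint of the per-axis min/max over the detected main key points (k, 3)."""
    pts = np.asarray(keypoints_3d, dtype=float)
    return (pts.min(axis=0) + pts.max(axis=0)) / 2.0
```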
Step 7, tracking the three-dimensional center point of the vehicle across image frames. During tracking, GIOU is used as the similarity measure between detected targets and existing targets. The GIOU between each detected target and every existing target is computed in turn; 1-GIOU is taken as the error value and recorded in an error matrix. From this error matrix, the Hungarian algorithm computes the optimal matching between the targets detected in the current frame and the existing targets. For each target matched to an existing detection box, the position and speed of the vehicle are continuously predicted and tracked with a Kalman filter; for each detected target not matched to an existing detection box, a Kalman filter is initialized with its center point position and the detection box is added to the existing set; each existing detection box not matched by any current-frame detection is considered to have disappeared and is deleted from the set.
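A minimal sketch of the GIOU-based association follows, with axis-aligned boxes given as (x1, y1, x2, y2); the Hungarian step uses scipy's linear_sum_assignment, and the Kalman prediction itself is omitted. Names are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def giou(a, b):
    """Generalized IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    # smallest box C enclosing both A and B
    c = (max(a[2], b[2]) - min(a[0], b[0])) * (max(a[3], b[3]) - min(a[1], b[1]))
    return inter / union - (c - union) / c

def associate(detections, tracks):
    """Build the error matrix E_ij = 1 - GIOU and solve it with the Hungarian algorithm."""
    E = np.array([[1.0 - giou(d, t) for t in tracks] for d in detections])
    det_idx, trk_idx = linear_sum_assignment(E)
    return list(zip(det_idx, trk_idx))
```

Matched pairs feed the Kalman update; unmatched detections initialize new filters, and unmatched tracks are dropped, as described above.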
Step 8, generating the vehicle driving trajectory. Following the change of the vehicle center point over time across image frames, the three-dimensional center point position and the corresponding time are recorded at each moment to obtain the driving trajectory, where each track point is expressed as p = (x, y, z, t).
Step 9, analyzing abnormal vehicle events. According to the trajectory record, whenever a new trajectory is obtained it is simplified with the Douglas-Peucker method, reducing the number of track points to 30. This data compression reduces the computation required for the subsequent distance calculations and speeds up the algorithm in practice. The dynamic time warping (DTW) distances between the new trajectory and all existing trajectories are then computed in turn, and the minimum of these distances is found. If the minimum is less than a set threshold D_thre, the new trajectory is assigned to the trajectory class corresponding to that minimum; if it is greater than D_thre, the new trajectory is treated as a new class independent of all existing classes. As the number of trajectories grows, whenever more than 5 trajectory samples across two classes have DTW values below the threshold D_thre, the two classes are merged. Finally, the proportion of each trajectory class (with its speed records) in the total number of trajectories is computed and sorted in descending order, and the actual event represented by each class is labeled and confirmed against the existing annotation database. Events that do not match the annotation database are classified as unknown events; for these, the specific abnormal event type can be confirmed later by updating the annotation database. This approach allows abnormal event types to be extended dynamically, and single-vehicle abnormal behavior events such as wrong-way driving and lane changing can be detected in time.
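The DTW distance used for the trajectory comparison can be sketched with the standard dynamic program over 3-D track points; this is the textbook formulation, not necessarily the exact variant of the embodiment:

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between trajectories a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative-cost table
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # point-to-point distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Two identical trajectories have distance 0, and the distance grows with the accumulated point-wise deviation along the optimal alignment.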
Step 10, analyzing abnormal traffic state events. At a fixed time interval, traffic statistics such as average speed, traffic flow, time occupancy, headway, and queue length are computed from all trajectories and other information extracted in that interval, yielding a state vector describing the traffic operating state over the interval. For each new state vector, the Euclidean distances to all existing state vectors are computed and the minimum distance found: if the minimum is smaller than a set threshold S_thre, the state vector is assigned to the state class corresponding to that minimum; if it is larger than S_thre, the state vector founds a new state class; when the distances between 5 points of two state classes are all smaller than S_thre, the two classes are merged. The actual event meaning of each class is then labeled against the existing annotation database. Events that do not match the annotation database are classified as unknown events; for these, the specific abnormal event type can be confirmed later by updating the annotation database. In this way, abnormal traffic state events such as road congestion can be detected in time.
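The dynamic clustering used in steps 9 and 10 can be sketched generically: the metric is passed in (DTW for trajectories, Euclidean distance for state vectors), a new sample joins the class of its nearest existing sample when the minimum distance is within the threshold, and otherwise founds a new class. The class-merge rule is omitted for brevity, and all names are illustrative:

```python
import numpy as np

def dynamic_cluster(samples, dist, T):
    """Assign each sample the class of its nearest predecessor, or found a new class."""
    labels = []
    for i, s in enumerate(samples):
        if i == 0:
            labels.append(0)                      # first sample founds the first class
            continue
        d = [dist(s, samples[j]) for j in range(i)]
        j_min = int(np.argmin(d))
        if d[j_min] <= T:
            labels.append(labels[j_min])          # d_min <= T: join the nearest class
        else:
            labels.append(max(labels) + 1)        # d_min > T: found a new class
    return labels
```

For instance, 1-D samples [0.0, 0.1, 10.0, 10.2] with an absolute-difference metric and T = 1 fall into two classes.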
With the expressway-oriented abnormal event detection method provided by this embodiment of the invention, the driving trajectory of a vehicle can be extracted in three-dimensional space, improving trajectory extraction accuracy, reducing the influence of the video monitoring angle and vehicle occlusion on trajectory extraction, and improving the accuracy of judging abnormal vehicle behavior events.
The foregoing has described the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiment described above, which merely illustrates its principle; various changes and modifications may be made without departing from the spirit and scope of the invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (10)

1. An abnormal event detection method for an expressway, characterized by comprising the following steps:
step 1, reading a real-time video stream, decoding the video and acquiring an image frame;
step 2, preprocessing the image frame, including color space conversion, image scaling and image normalization;
step 3, loading a vehicle detection model, performing inference on the preprocessed image, and post-processing the inference result to obtain vehicle position and class information;
step 4, loading a vehicle key point detection model, carrying out key point detection on each detected vehicle target, carrying out post-processing on an inference result to obtain key point positions of the vehicle, and selecting main key points of the vehicle;
step 5, calculating to obtain three-dimensional coordinates of main key points of the vehicle by using an internal and external parameter matrix of the camera and vehicle model parameters according to the detected coordinates of the key points of the vehicle;
step 6, calculating the central point of the vehicle in a three-dimensional space according to the calculated three-dimensional coordinates of the main key points of the vehicle;
step 7, taking the center point of the three-dimensional coordinates of the vehicle's main key points as the tracking target, using GIOU as the evaluation index, matching detected targets with the Hungarian algorithm, and tracking targets with Kalman filtering;
step 8, recording the position and the time point of the three-dimensional coordinate center point of the tracking target in the process from appearance to disappearance as the running track of the vehicle;
step 9, according to the extracted vehicle running track, dynamically clustering the vehicle track by using a dynamic clustering algorithm, and marking the vehicle track category so as to identify the abnormal behavior event of the vehicle;
and step 10, counting the traffic statistics in the time interval at regular time intervals, dynamically clustering the statistic states at different moments, marking the traffic state categories, and identifying different traffic running states so as to judge abnormal events in traffic running.
2. The abnormal event detection method for an expressway according to claim 1, wherein a Yolov3-5layer model is adopted as the vehicle detection model in step 3; the model is supported by an open-source data set and a self-built data set, the model anchors are computed from both, and the final weight file is trained on both; the open-source data sets comprise COCO and VOC, and the self-built data set is obtained by collecting video data in different traffic scenes and labeling it manually; the detection result of the Yolov3-5layer model comprises the vehicle class and the vehicle position, where the classes are small, medium, and large vehicles.
3. The abnormal event detection method for an expressway according to claim 1 or 2, wherein the vehicle key point detection model in step 4 is a two-stage hourglass model based on an attention mechanism; the model introduces the attention mechanism into the hourglass model, and its final weights are trained on a self-built data set; when the self-built data set is constructed, 14 key points are selected and labeled on each vehicle according to its actual shape: the left and right front bottoms, the left and right front headlamps, the left and right front-window bottoms, the left and right front upper corners, the left and right rear upper corners, the left and right rear lamps, and the left and right rear bottoms; among these, 8 position points are selected as main key points: the left front bottom, right front bottom, left rear bottom, right rear bottom, left front upper corner, right front upper corner, left rear upper corner, and right rear upper corner.
4. The abnormal event detection method for an expressway according to claim 3, wherein loading the vehicle key point detection model in step 4, performing key point detection on each detected vehicle target, and post-processing the inference result to obtain the vehicle key point positions specifically comprises:
step 4-1, for the output of each of the two stages, computing the maximum pixel value of each channel and recording its coordinate position;
step 4-2, for each channel, computing the distance between the maximum-value coordinate positions of the two stages; if the distance is greater than 5 pixels, the outputs of the two stages do not match and the key point corresponding to the channel is considered not detected; otherwise, the key point corresponding to the channel is considered detected;
step 4-3, traversing the channels and repeating step 4-2 until all 14 channels are processed.
5. The abnormal event detection method for an expressway according to claim 4, wherein in step 5 the three-dimensional coordinates of the main key points of the vehicle are computed from the detected vehicle key point coordinates using the intrinsic and extrinsic camera matrices and the vehicle model parameters, with the following specific steps:
step 5-1, obtaining the camera intrinsic matrix K, camera rotation matrix R, and translation vector t through camera calibration, where the rotation matrix and translation vector form the extrinsic parameters of the camera; K and R are 3x3 real matrices, and t is a 3x1 vector;
step 5-2, for each detected key point, expressing its two-dimensional pixel coordinate on the image as p = [u, v, 1]^T, where u is the abscissa and v the ordinate of the point; for undetected main key points, the following steps 5-3 and 5-4 are not carried out;
step 5-3, computing the coefficients: with the camera intrinsic and extrinsic matrices known, the coefficients are estimated by the equations:

[x_a, y_a, z_a]^T = R⁻¹K⁻¹p
[x_b, y_b, z_b]^T = R⁻¹t
s = (z_b + z) / z_a

where R⁻¹ is the inverse of the extrinsic rotation matrix, K⁻¹ the inverse of the intrinsic matrix, [x_a, y_a, z_a]^T the intermediate vector obtained from R⁻¹K⁻¹p and z_a its vertical component, [x_b, y_b, z_b]^T the intermediate vector obtained from R⁻¹t and z_b its vertical component, and z the preset value of the main key point in the vertical direction; z = 0 is taken at the four points left front bottom, right front bottom, left rear bottom, and right rear bottom; z = z_model is taken at the left front upper corner, right front upper corner, left rear upper corner, and right rear upper corner, where z_model is looked up in a table according to the class of the detected target: for small vehicles, z_model = 1.4 m; for medium vehicles, z_model = 1.85 m; for large vehicles, z_model = 3.3 m;
step 5-4, computing the three-dimensional coordinate point x in the real world from the two-dimensional key point p by the formula:

x = [X, Y, Z]^T = sR⁻¹K⁻¹p - R⁻¹t

where X, Y, and Z are the components of the three-dimensional point along the spatial x, y, and z directions, respectively.
6. The abnormal event detection method for an expressway according to claim 5, wherein in step 6 the center point of the vehicle in three-dimensional space is computed from the calculated three-dimensional coordinates of the main key points, specifically:
step 6-1, computing the maximum and minimum of the three-dimensional coordinates of all detected main key points in the x, y, and z directions, obtaining x_min, x_max, y_min, y_max, z_min, z_max;
step 6-2, computing the mean of the maximum and minimum in each of the three directions, obtaining x_mean, y_mean, z_mean, and taking X_c = (x_mean, y_mean, z_mean) as the center point of the vehicle in three-dimensional space.
7. The abnormal event detection method for an expressway according to claim 6, wherein in step 7 the center point of the three-dimensional coordinates of the main key points is taken as the tracking target, GIOU is used as the evaluation index, detected targets are matched with the Hungarian algorithm, and targets are tracked with Kalman filtering, specifically comprising:
step 7-1, assuming the current frame has M_t detection boxes and M_{t-1} detection boxes were matched up to the previous frame, computing an error matrix E from the two sets of boxes, where E_ij = 1 - GIOU(B_{t,i}, B_{t-1,j}), B_{t,i} denotes the i-th detection box in the current frame and B_{t-1,j} the j-th existing detection box; the GIOU is calculated as:

GIOU(A, B) = |A∩B| / |A∪B| - |C \ (A∪B)| / |C|

where A∩B denotes the intersection area of detection boxes A and B, A∪B denotes their union area, and C denotes the area of the smallest box that encloses both A and B;
step 7-2, computing the optimal matching between the detection boxes of the current frame and the existing detection boxes with the Hungarian algorithm according to the error matrix E, obtaining the correspondence between the two sets of boxes;
step 7-3, for each current-frame detection box matched to an existing box, correcting the center point position with Kalman filtering according to the detected three-dimensional center point of the vehicle; for each current-frame detection box not matched to an existing box, considering it to appear for the first time, adding it to the existing set, and initializing a Kalman filter with its center point position; for each existing box not matched by any detected box, considering it to have disappeared and deleting it from the existing set.
8. The abnormal event detection method for an expressway according to claim 7, wherein the dynamic clustering algorithm in step 9 comprises the following steps:
(1) suppose there are m existing samples, each belonging to one of n classes C_1, C_2, …, C_n; for a new sample S, compute its distance to every existing sample with the selected metric, obtaining the distance set:
D = {d_i = dist(S, S_i) | i ∈ [1, m]}
(2) compute the minimum of all distances, d_min = min(D), and record the sample point S_min attaining the minimum and the class C_opt to which it belongs;
(3) select a threshold T: if d_min ≤ T, consider S ∈ C_opt and assign S to class C_opt; if d_min > T, consider the new sample not to belong to any existing class, create a new class C_{n+1}, and assign S to it; if the distances between K sample points of two classes are all smaller than the threshold T, merge the two classes into one, where K > 0;
(4) repeat (1) to (3) until no new samples are added.
9. The abnormal event detection method for an expressway according to claim 8, wherein in step 9 the vehicle trajectories are dynamically clustered with the dynamic clustering algorithm according to the extracted driving trajectories, and the trajectory classes are labeled so as to identify abnormal vehicle behavior events, specifically comprising:
step 9-1, according to the vehicle trajectory record, whenever a new trajectory is obtained, simplifying it with the Douglas-Peucker method and reducing the number of track points to N, with N = 30;
step 9-2, processing the vehicle trajectories with the dynamic clustering method, where the distance metric between trajectories is the dynamic time warping (DTW) distance;
step 9-3, computing the proportion of each trajectory class in the total number of trajectories, sorting in descending order, and labeling and confirming each trajectory class and its speed records against the existing annotation database to obtain the actual event type of each class, where events not matched to the annotation database are classified as unknown events.
10. The abnormal event detection method for an expressway according to claim 9, wherein in step 10 the traffic statistics within each time interval are computed at fixed intervals, the statistical states at different moments are dynamically clustered, and the traffic state classes are labeled to identify different traffic operating states and thereby judge abnormal events in traffic operation, specifically comprising:
step 10-1, according to the vehicle trajectory information, computing at a fixed time interval the traffic statistics within each interval, including the average speed V_mean, average traffic flow F_mean, average time occupancy O_mean, average headway H_mean, and average queue length L_mean, obtaining a state vector State = [V_mean, F_mean, O_mean, H_mean, L_mean] that describes the traffic operating state over the interval;
step 10-2, clustering the state vectors with the dynamic clustering method, where the distance metric between samples is the Euclidean distance;
step 10-3, labeling and confirming the actual event meaning of each class against the existing annotation database to obtain the actual event type of each class, where events not matched to the annotation database are classified as unknown events.
CN202111456180.0A 2021-12-01 2021-12-01 Highway-oriented abnormal event detection method Pending CN114170580A (en)


Publications (1)

Publication Number Publication Date
CN114170580A true CN114170580A (en) 2022-03-11



Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115047894A (en) * 2022-04-14 2022-09-13 中国民用航空总局第二研究所 Unmanned aerial vehicle track measuring and calculating method, electronic equipment and storage medium
CN115047894B (en) * 2022-04-14 2023-09-15 中国民用航空总局第二研究所 Unmanned aerial vehicle track measuring and calculating method, electronic equipment and storage medium
WO2023206236A1 (en) * 2022-04-28 2023-11-02 华为技术有限公司 Method for detecting target and related device
CN114758511A (en) * 2022-06-14 2022-07-15 深圳市城市交通规划设计研究中心股份有限公司 Sports car overspeed detection system, method, electronic equipment and storage medium
CN114758511B (en) * 2022-06-14 2022-11-25 深圳市城市交通规划设计研究中心股份有限公司 Sports car overspeed detection system, method, electronic equipment and storage medium
CN114822044A (en) * 2022-06-29 2022-07-29 山东金宇信息科技集团有限公司 Driving safety early warning method and device based on tunnel
CN116453205A (en) * 2022-11-22 2023-07-18 深圳市旗扬特种装备技术工程有限公司 Method, device and system for identifying stay behavior of commercial vehicle

Similar Documents

Publication Publication Date Title
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN111429484B (en) Multi-target vehicle track real-time construction method based on traffic monitoring video
CN114170580A (en) Highway-oriented abnormal event detection method
CN110619279B (en) Road traffic sign instance segmentation method based on tracking
CN106980855B (en) Traffic sign rapid identification and positioning system and method
CN110689724B (en) Automatic motor vehicle zebra crossing present pedestrian auditing method based on deep learning
CN104978567A (en) Vehicle detection method based on scenario classification
CN109684986B (en) Vehicle analysis method and system based on vehicle detection and tracking
CN113903008A (en) Ramp exit vehicle violation identification method based on deep learning and trajectory tracking
CN110674887A (en) End-to-end road congestion detection algorithm based on video classification
CN116434159A (en) Traffic flow statistics method based on improved YOLO V7 and Deep-Sort
CN112991769A (en) Traffic volume investigation method and device based on video
CN117037085A (en) Vehicle identification and quantity statistics monitoring method based on improved YOLOv5
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
CN110889347A (en) Density traffic flow counting method and system based on space-time counting characteristics
CN111627224A (en) Vehicle speed abnormality detection method, device, equipment and storage medium
CN114973169A (en) Vehicle classification counting method and system based on multi-target detection and tracking
Vrtagić et al. Video Data Extraction and Processing for Investigation of Vehicles' Impact on the Asphalt Deformation Through the Prism of Computational Algorithms.
CN114937248A (en) Vehicle tracking method and device for cross-camera, electronic equipment and storage medium
CN114550094A (en) Method and system for flow statistics and manned judgment of tricycle
CN113850112A (en) Road condition identification method and system based on twin neural network
CN112329724A (en) Real-time detection and snapshot method for lane change of motor vehicle
Jehad et al. Developing and validating a real time video based traffic counting and classification
CN117456482B (en) Abnormal event identification method and system for traffic monitoring scene
CN117437792B (en) Real-time road traffic state monitoring method, device and system based on edge calculation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination