CN114022791A - Vehicle track motion characteristic identification method based on high-altitude visual angle identification system - Google Patents


Info

Publication number
CN114022791A
CN114022791A (application CN202111201539.XA; granted as CN114022791B)
Authority
CN
China
Prior art keywords
frame
vehicle
vehicle target
target
mth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111201539.XA
Other languages
Chinese (zh)
Other versions
CN114022791B (en)
Inventor
贺宜 (He Yi)
曹博 (Cao Bo)
吴超仲 (Wu Chaozhong)
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology (WUT)
Priority to CN202111201539.XA
Publication of CN114022791A
Application granted
Publication of CN114022791B
Status: Active

Classifications

    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]


Abstract

The invention provides a vehicle track motion characteristic identification method based on a high-altitude visual angle identification system. The high-altitude visual angle identification system comprises an aerial photography camera device, a calculation processing host, and a display device. High-altitude video data collected by the aerial photography camera device are used to produce a high-altitude image training data set and a high-altitude image sequence data set; the high-altitude image training data set is used to train a YOLOv5 model; vehicle identification is carried out on the high-altitude image sequence data set to obtain a high-altitude image sequence vehicle identification frame set; Kalman filtering and the Hungarian matching algorithm are applied to generate an original vehicle track motion characteristic identification text data set; and through four processes of data preprocessing, motion feature extraction, lane number detection, and coordinate conversion, a five-level vehicle track motion characteristic identification text data set is finally formed. The method reduces missed associations in the vehicle target data and provides a concrete implementation for extracting the vehicle position, speed, acceleration, and lane number features.

Description

Vehicle track motion characteristic identification method based on high-altitude visual angle identification system
Technical Field
The invention belongs to the field of intelligent transportation, and particularly relates to a vehicle track motion characteristic identification method based on a high-altitude visual angle identification system.
Background
In recent years, the rapid development of artificial intelligence technology and the automatic driving industry has, on one hand, promoted the intelligentization of road traffic and, on the other hand, placed higher requirements on the acquisition of road traffic information. Artificial intelligence technology plays a major role in feature extraction, data mining, and decision control, and countries around the world are meanwhile stepping up research on automatic driving technology. Automatic driving is generally divided into six grades: L0 manual driving, L1 assisted driving, L2 semi-automatic driving, L3 highly automated driving, L4 ultra-highly automated driving, and L5 fully automatic driving. Evaluating these grades requires real vehicle driving data from real scenes as support, and vehicle trajectory data collected on real roads can be used to verify vehicles in autonomous driving mode and thereby evaluate their grade. At this stage, relevant research on collecting vehicle trajectory data has been conducted. The Chinese patent application CN110751099A proposes a method for extracting high-precision tracks from aerial video, which focuses on denoising, splicing, and smoothing of the vehicle track; the extraction of vehicle motion parameters is not described in detail, and in the vehicle target association stage the influence of the change of the vehicle motion state in the previous frame on the vehicle state in the current frame is not considered.
The Chinese patent application CN111611918A likewise proposes a traffic flow data collection and construction method based on aerial video, but its method for extracting traffic flow parameters is deficient, and the target tracking means it uses is a single-target tracking method that would need to be extended; the Chinese patent application CN111145545A emphasizes cross-camera monitoring in road traffic detection but is deficient in methods for extracting the motion characteristics of traffic vehicles.
Disclosure of Invention
In order to solve the technical problem, the invention provides a vehicle track motion characteristic identification method based on a high-altitude visual angle identification system.
The high-altitude visual angle recognition system is characterized by comprising an aerial photography camera device, a calculation processing host, and a display projection device;
the aerial photographing device is connected with the calculation processing host computer in a wireless mode; the computing processing host is connected with the display projection device in a wired mode;
the aerial photography camera device is used for collecting video image data of vehicles on the road under the high-altitude visual angle and sending the video image data to the calculation processing host computer in a wireless mode; the calculation processing host is used for processing the video image data of the road vehicles at the high altitude view angle acquired by the aerial photography camera device, further obtaining vehicle image recognition results and track generation results through a vehicle track motion characteristic recognition method at the high altitude view angle, and transmitting the vehicle image recognition results and the track generation results to the display projection device for display;
during remote shooting, the aerial photography camera device is positioned directly above the road surface; that is, the line of sight of the aerial camera forms a 90-degree angle with the road surface.
The vehicle track motion characteristic identification method is characterized by comprising the following steps of:
step 1: the calculation processing host wirelessly acquires the video image data shot by the aerial photography camera device positioned right above the road surface, which is used to form the high-altitude image training data set; the high-altitude image training data set is manually annotated, marking the circumscribed rectangular frame and the vehicle type of each vehicle target, forming the high-altitude image training vehicle marking frame set;
step 2: the calculation processing host wirelessly acquires the video image data shot by the aerial photography device positioned right above the road surface, which is used to form the high-altitude image sequence data set from which the vehicle track data are subsequently extracted; the road in each picture of the high-altitude image sequence data set is positioned in the middle of the image;
and step 3: a YOLOv5 deep learning network model is introduced; each frame image in the high-altitude image training data set and the corresponding vehicle marking frames in the high-altitude image training vehicle marking frame set are sequentially input into the YOLOv5 deep learning network model for training, the loss function model is constructed with the GIOU method and the loss function value is optimized with the Adam optimization algorithm, and the trained YOLOv5 deep learning network model is used to identify the vehicle targets in the high-altitude image sequence data set, obtaining the high-altitude image sequence vehicle identification frame set;
and step 4: starting from the first frame of vehicle target circumscribed rectangular frame data in the high-altitude image sequence vehicle target identification frame set, the following processing is carried out: Kalman filtering is applied to the vehicle target bounding boxes of the previous frame to obtain the vehicle target estimation frame data of the current frame; the Hungarian association algorithm is used to match the vehicle target identification frame data of the current frame against the vehicle target estimation frame data, the matching criterion being the IOU distance, so as to obtain the ID sequence number of each vehicle target identification frame of the current frame, namely the ID sequence number of the current-frame vehicle target; unmatched vehicle target frame data of the current frame are marked with new ID sequence numbers; this is repeated until the end of the high-altitude image sequence. After the association matching process, the video frame sequence number, the vehicle ID sequence number, and the high-altitude image sequence vehicle target frame set are combined to form the original vehicle track motion characteristic identification text data set;
and step 5: the original vehicle track motion characteristic identification text data set is sequentially subjected to four processing processes (data preprocessing, motion feature extraction, lane number detection, and coordinate conversion), finally forming the five-level vehicle track motion characteristic identification text data set.
Preferably, the high-altitude image training data set in step 1 is:

{data_e(x, y), e ∈ [1, E], x ∈ [1, X], y ∈ [1, Y]}

where data_e(x, y) represents the pixel information of the x-th row and the y-th column of the e-th frame image in the high-altitude image training data set, E is the number of frames of the high-altitude image training data set, X is the number of rows of the images in the high-altitude image training data set, and Y is the number of columns of the images in the high-altitude image training data set;
step 1, the high-altitude image training vehicle marking frame set is:
{(x^1_{e,n}, y^1_{e,n}, x^2_{e,n}, y^2_{e,n}, type_{e,n}), e ∈ [1, E], n ∈ [1, N_e]}

where x^1_{e,n} represents the abscissa of the upper-left corner of the marking rectangular frame of the n-th vehicle target in the e-th frame image in the high-altitude image training vehicle marking frame set, y^1_{e,n} represents the ordinate of the upper-left corner, x^2_{e,n} represents the abscissa of the lower-right corner, y^2_{e,n} represents the ordinate of the lower-right corner, and type_{e,n} represents the mark category of the n-th vehicle target in the e-th frame image; N_e denotes the number of marked vehicle targets in the e-th frame image;
preferably, in step 2, the fixed shooting frame rate of the aerial photography camera device is FPS, the length of the photographed road section is L, the number of pixel units covered along the road length direction of the picture is G, and the shot high-altitude images have X rows and Y columns of pixels;
step 2, the high-altitude image sequence dataset is as follows:
{data_t(x, y), t ∈ [1, T], x ∈ [1, X], y ∈ [1, Y]}

where data_t(x, y) represents the pixel information of the x-th row and the y-th column of the t-th frame image in the high-altitude image sequence data set, T is the total number of frames of the high-altitude image sequence data set, X is the number of rows of the images in the high-altitude image sequence data set, and Y is the number of columns of the images in the high-altitude image sequence data set;
preferably, the YOLOv5 network framework in step 3 is the YOLOv5x network structure;
and 3, the high-altitude image sequence vehicle identification frame set is:
{(x^1_{t,n}, y^1_{t,n}, x^2_{t,n}, y^2_{t,n}, type_{t,n}), t ∈ [1, T], n ∈ [1, N_t]}

where x^1_{t,n} represents the abscissa of the upper-left corner of the circumscribed rectangular frame of the n-th vehicle target in the t-th frame image in the high-altitude image sequence vehicle identification frame set, y^1_{t,n} represents the ordinate of the upper-left corner, x^2_{t,n} represents the abscissa of the lower-right corner, y^2_{t,n} represents the ordinate of the lower-right corner, and type_{t,n} represents the category of the n-th vehicle target in the t-th frame image; N_t denotes the number of vehicle targets identified in the t-th frame image;
preferably, in step 4, the video frame sequence numbers recorded for the current frame are collected into the set
Frame_{t,n} = {frame_{t,n}}

where frame_{t,n} represents the video frame sequence number corresponding to the n-th vehicle target in the t-th frame.
And in step 4, the Kalman filtering process sequentially comprises: initializing the vehicle target state vector; initializing the state transition matrix, the covariance matrix, the observation matrix, and the system noise matrix; predicting the current-frame vehicle target state vector from the optimal estimate of the previous-frame vehicle target state vector to obtain the current-frame vehicle target state vector prediction; predicting the current-frame vehicle target system error covariance matrix from the previous-frame vehicle target system error covariance matrix to obtain the current-frame prediction; updating the Kalman coefficient with the predicted current-frame vehicle target system error covariance matrix; estimating the optimal current-frame vehicle target state vector from the state vector prediction and the system observation; updating the current-frame vehicle target system error covariance matrix; and extracting the current-frame vehicle target estimation frame set from the optimal estimate of the current-frame vehicle target state vector;
in the process of initializing the vehicle target state vector by Kalman filtering, the vehicle target bounding box is characterized by the abscissa of its center, the ordinate of its center, its area, and its aspect ratio, and its motion state information is described by a linear constant-velocity model, namely:

X = [u, v, s, r, u̇, v̇, ṡ]^T

where X represents the motion state information of the bounding box, u represents the abscissa of the center of the bounding box, v represents the ordinate of the center of the bounding box, s represents the area of the bounding box, r represents the bounding box aspect ratio (generally a constant), u̇ represents the rate of change of the abscissa of the center of the bounding box, v̇ represents the rate of change of the ordinate of the center of the bounding box, and ṡ represents the rate of change of the area of the bounding box. The motion state information of the m-th vehicle target bounding box of the (t-1)-th frame is described as:
X_{t-1,m} = [u_{t-1,m}, v_{t-1,m}, s_{t-1,m}, r_{t-1,m}, u̇_{t-1,m}, v̇_{t-1,m}, ṡ_{t-1,m}]^T

where X_{t-1,m} represents the motion state information of the m-th vehicle target bounding box of the (t-1)-th frame, u_{t-1,m} represents the abscissa of the center of the m-th vehicle target bounding box of the (t-1)-th frame, v_{t-1,m} represents the ordinate of the center, s_{t-1,m} represents the area, r_{t-1,m} represents the aspect ratio, u̇_{t-1,m} represents the rate of change of the center abscissa, v̇_{t-1,m} represents the rate of change of the center ordinate, and ṡ_{t-1,m} represents the rate of change of the area of the m-th vehicle target bounding box of the (t-1)-th frame;
the abscissa of the center, the ordinate of the center, and the bounding box area of the m-th vehicle target frame in the (t-1)-th frame are calculated as:

u_{t-1,m} = (x^1_{t-1,m} + x^2_{t-1,m}) / 2
v_{t-1,m} = (y^1_{t-1,m} + y^2_{t-1,m}) / 2
s_{t-1,m} = (x^2_{t-1,m} − x^1_{t-1,m}) × (y^2_{t-1,m} − y^1_{t-1,m})

where x^1_{t-1,m} represents the abscissa of the upper-left corner of the m-th vehicle target frame of the (t-1)-th frame, x^2_{t-1,m} represents the abscissa of the lower-right corner, y^1_{t-1,m} represents the ordinate of the upper-left corner, and y^2_{t-1,m} represents the ordinate of the lower-right corner of the m-th vehicle target frame of the (t-1)-th frame;
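The center-point and area relations above can be sketched in a few lines of Python (an illustrative sketch only; the function name and return layout are not from the patent):

```python
def box_to_state(x1, y1, x2, y2):
    """Convert an (x1, y1, x2, y2) bounding box into the (u, v, s, r)
    observation used to initialize the Kalman state: center abscissa,
    center ordinate, box area, and aspect ratio (width / height)."""
    w = x2 - x1
    h = y2 - y1
    u = (x1 + x2) / 2.0  # abscissa of the box center
    v = (y1 + y2) / 2.0  # ordinate of the box center
    s = w * h            # bounding-box area
    r = w / h            # aspect ratio, generally near-constant per vehicle
    return u, v, s, r
```

The rate-of-change components of the state vector are then initialized to zero and refined by subsequent updates.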
in the initialization of the state transition matrix in step 4, the state transition matrix F models the motion of the target state vector; for the adopted uniform-velocity motion model (with a unit time step between frames), F is initialized as:

F = [1 0 0 0 1 0 0]
    [0 1 0 0 0 1 0]
    [0 0 1 0 0 0 1]
    [0 0 0 1 0 0 0]
    [0 0 0 0 1 0 0]
    [0 0 0 0 0 1 0]
    [0 0 0 0 0 0 1]
in the initialization of the covariance matrix, the covariance matrix P represents the uncertainty of the target position information and is an empirical parameter;
in the initialization of the system noise covariance matrix, because the process noise is not measurable, the process noise is generally assumed to conform to a normal distribution and the system noise covariance matrix Q is set accordingly;
in initializing the observation matrix, the observation matrix H relates the state vector to the observable variables (u, v, s, r), and its values are initialized as:

H = [1 0 0 0 0 0 0]
    [0 1 0 0 0 0 0]
    [0 0 1 0 0 0 0]
    [0 0 0 1 0 0 0]
in the initialization of the observation noise covariance matrix, because the observation noise is not measurable, it is likewise generally assumed to conform to a normal distribution and the observation noise covariance matrix R is set accordingly;
in step 4, the Kalman filtering predicts the current-frame vehicle target state vector from the optimal estimate of the previous-frame vehicle target state vector; the predicted value of the m-th vehicle target state vector of the t-th frame is calculated as:

X̂⁻_{t,m} = F X̂_{t-1,m} + B u_{t-1,m}

where X̂_{t-1,m} represents the optimal estimate of the m-th vehicle target state vector of the (t-1)-th frame, X̂⁻_{t,m} represents the predicted value of the m-th vehicle target state vector of the t-th frame, F is the state transition matrix, B is the control matrix, and u_{t-1,m} represents the control gain matrix;
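With the constant-velocity F given above, the prediction step reduces to adding each rate-of-change component to its corresponding position component. A minimal pure-Python sketch, assuming a unit time step and a zero control term (both consistent with the model described here, but the function itself is illustrative, not the patent's code):

```python
def predict_state(x):
    """One Kalman predict step x' = F x for the 7-dimensional state
    [u, v, s, r, du, dv, ds] under the constant-velocity model with a
    unit time step; the control term B*u is taken as zero."""
    F = [
        [1, 0, 0, 0, 1, 0, 0],
        [0, 1, 0, 0, 0, 1, 0],
        [0, 0, 1, 0, 0, 0, 1],
        [0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 0, 1],
    ]
    # plain matrix-vector product, kept explicit for readability
    return [sum(F[i][j] * x[j] for j in range(7)) for i in range(7)]
```

For example, a box centered at (10, 20) moving one pixel right and one pixel up per frame is predicted at (11, 19) with unchanged area and aspect ratio.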
in step 4, the Kalman filtering predicts the current-frame vehicle target system error covariance matrix from the previous-frame vehicle target system error covariance matrix; the predicted value of the m-th vehicle target system error covariance matrix of the t-th frame is:

P⁻_{t,m} = F P_{t-1,m} F^T + Q

where P_{t-1,m} represents the m-th vehicle target system error covariance matrix of the (t-1)-th frame, P⁻_{t,m} represents the predicted value of the m-th vehicle target system error covariance matrix of the t-th frame, and Q is the covariance matrix of the process noise;
third, in the Kalman filtering, the Kalman coefficient is updated using the predicted value of the current-frame system error covariance matrix; the m-th vehicle target Kalman coefficient of the t-th frame is calculated as:
K_{t,m} = P⁻_{t,m} H^T (H P⁻_{t,m} H^T + R)^{-1}

where H is the observation matrix, R is the covariance matrix of the observation noise, and K_{t,m} is the Kalman coefficient of the m-th vehicle target of the t-th frame;
in step 4, in the Kalman filtering, the optimal estimate of the current-frame vehicle target state vector is calculated from the current-frame state vector prediction and the system observation; the optimal estimate of the m-th vehicle target state vector of the t-th frame is:

X̂_{t,m} = X̂⁻_{t,m} + K_{t,m} (z_t − H X̂⁻_{t,m})

where X̂_{t,m} is the optimal estimate of the m-th vehicle target state vector of the t-th frame and z_t is the observation;
in step 4, in the Kalman filtering update of the current-frame system error covariance matrix, the update formula of the m-th vehicle target system error covariance matrix of the t-th frame is:

P_{t,m} = (I − K_{t,m} H) P⁻_{t,m}

where P_{t,m} is the m-th vehicle target system error covariance matrix of the t-th frame and I is the identity matrix;
in step 4, the current-frame vehicle target estimation frame set is extracted from the optimal estimate of the current-frame vehicle target state vector; the optimal estimate of the m-th target state vector of the t-th frame is described as:

X̂_{t,m} = [u_{t,m}, v_{t,m}, s_{t,m}, r_{t,m}, u̇_{t,m}, v̇_{t,m}, ṡ_{t,m}]^T

where u_{t,m} represents the optimally estimated abscissa of the center of the m-th vehicle target bounding box of the t-th frame, v_{t,m} the optimally estimated ordinate of the center, s_{t,m} the optimally estimated area, r_{t,m} the optimally estimated aspect ratio, u̇_{t,m} the optimally estimated rate of change of the center abscissa, v̇_{t,m} the optimally estimated rate of change of the center ordinate, and ṡ_{t,m} the optimally estimated rate of change of the area of the m-th vehicle target bounding box of the t-th frame;
the coordinates of the current-frame vehicle target estimation frame are calculated as follows (since s = w·h and r = w/h, the estimated frame width is √(s·r) and its height is √(s/r)):

x̂^1_{t,m} = u_{t,m} − √(s_{t,m} · r_{t,m}) / 2
ŷ^1_{t,m} = v_{t,m} − √(s_{t,m} / r_{t,m}) / 2
x̂^2_{t,m} = u_{t,m} + √(s_{t,m} · r_{t,m}) / 2
ŷ^2_{t,m} = v_{t,m} + √(s_{t,m} / r_{t,m}) / 2

where x̂^1_{t,m} represents the abscissa of the upper-left corner of the m-th vehicle target estimation frame of the t-th frame of the optimal estimation, ŷ^1_{t,m} the ordinate of the upper-left corner, x̂^2_{t,m} the abscissa of the lower-right corner, and ŷ^2_{t,m} the ordinate of the lower-right corner of the m-th vehicle target estimation frame of the t-th frame of the optimal estimation;

therefore, the current-frame vehicle target estimation frame set is:

{(x̂^1_{t,m}, ŷ^1_{t,m}, x̂^2_{t,m}, ŷ^2_{t,m}), m ∈ [1, M_t]}

where M_t denotes the number of vehicle target estimation frames in the t-th frame.
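Recovering the corner coordinates from an estimated (u, v, s, r) follows from s = w·h and r = w/h, so that the width is √(s·r) and the height is √(s/r). A small illustrative sketch (the function name is mine, not the patent's):

```python
import math

def state_to_box(u, v, s, r):
    """Recover (x1, y1, x2, y2) corner coordinates from the estimated
    center (u, v), area s, and aspect ratio r of a bounding box."""
    w = math.sqrt(s * r)   # width:  s*r = (w*h)*(w/h) = w^2
    h = math.sqrt(s / r)   # height: s/r = (w*h)/(w/h) = h^2
    return (u - w / 2, v - h / 2, u + w / 2, v + h / 2)
```

This is the inverse of the center/area/aspect-ratio parameterization used when initializing the state vector.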
preferably, the Hungarian association algorithm in step 4 performs matching by calculating the intersection-over-union (IOU) of the vehicle target frames;
in step 4, the Hungarian association algorithm calculates and matches the vehicle target frame IOU as follows: the IOU of the m-th vehicle target estimation frame of the t-th frame in the current-frame vehicle target estimation frame set and the n-th vehicle target identification frame in the current-frame vehicle target identification frame set is calculated; the intersection area is:

S_1 = max(0, min(x̂^2_{t,m}, x^2_{t,n}) − max(x̂^1_{t,m}, x^1_{t,n})) × max(0, min(ŷ^2_{t,m}, y^2_{t,n}) − max(ŷ^1_{t,m}, y^1_{t,n}))

where S_1 represents the intersection area of the m-th vehicle target estimation frame of the t-th frame in the current-frame vehicle target estimation frame set and the n-th vehicle target identification frame of the t-th frame in the current-frame vehicle target identification frame set;
the union is calculated as:

S_2 = (x̂^2_{t,m} − x̂^1_{t,m})(ŷ^2_{t,m} − ŷ^1_{t,m}) + (x^2_{t,n} − x^1_{t,n})(y^2_{t,n} − y^1_{t,n}) − S_1

where S_2 represents the union area of the m-th vehicle target estimation frame of the t-th frame in the current-frame vehicle target estimation frame set and the n-th vehicle target identification frame of the t-th frame in the current-frame vehicle target identification frame set;
the IOU intersection ratio is then:

IOU = S_1 / S_2
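The S_1 / S_2 computation can be sketched as a small helper (illustrative; the (x1, y1, x2, y2) box layout is assumed):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes, following
    the S1 (intersection), S2 (union), IOU = S1 / S2 definitions."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    s1 = inter_w * inter_h                       # intersection area
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    s2 = area_a + area_b - s1                    # union area
    return s1 / s2 if s2 > 0 else 0.0
```

Disjoint boxes yield an IOU of 0 and identical boxes yield 1.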
the vehicle frame IOU intersection-comparison matching principle of the Hungarian correlation algorithm is as follows: if the calculated IOU intersection ratio of the mth vehicle target estimation frame of the t frame and the nth vehicle target identification frame of the t frame is maximum and belongs to the same vehicle class, the mth vehicle target of the t-1 frame and the nth vehicle target of the t frame belong to the same vehicle target, and the ID serial number of the nth vehicle target of the t frame is marked as the same ID serial number as that of the mth vehicle target of the t-1 frame. The associated vehicle id serial number set is as follows:
IDt,n{idt,n}
wherein idt,nAnd the vehicle id number corresponding to the nth vehicle target in the tth frame is shown.
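The association step can be illustrated with a brute-force optimal one-to-one assignment over an IOU matrix. This is exact but exponential in the number of targets; the Hungarian algorithm named in the patent computes the same optimum in polynomial time. The threshold value and the omission of the vehicle-class consistency check are simplifications of mine:

```python
from itertools import permutations

def match_by_iou(iou_matrix, iou_threshold=0.3):
    """Return (estimation_index, identification_index) pairs that
    maximize the total IOU under a one-to-one assignment.  Pairs whose
    IOU falls below iou_threshold are dropped; in the tracker, their
    identification frames would be assigned new ID sequence numbers."""
    n_rows = len(iou_matrix)
    n_cols = len(iou_matrix[0]) if n_rows else 0
    best_score, best_pairs = -1.0, []
    if n_rows <= n_cols:
        # try every assignment of a distinct column to each row
        for cols in permutations(range(n_cols), n_rows):
            pairs = list(enumerate(cols))
            score = sum(iou_matrix[r][c] for r, c in pairs)
            if score > best_score:
                best_score, best_pairs = score, pairs
    else:
        # more rows than columns: assign a distinct row to each column
        for rows in permutations(range(n_rows), n_cols):
            pairs = [(r, c) for c, r in enumerate(rows)]
            score = sum(iou_matrix[r][c] for r, c in pairs)
            if score > best_score:
                best_score, best_pairs = score, pairs
    return [(r, c) for r, c in best_pairs
            if iou_matrix[r][c] >= iou_threshold]
```

A production tracker would replace the permutation search with a polynomial assignment solver such as `scipy.optimize.linear_sum_assignment`.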
And in step 4, after the association process, the video frame sequence number, the vehicle ID sequence number, and the high-altitude image sequence vehicle target frame set are combined to form the original vehicle track motion characteristic identification text data set:

{(frame_{t,n}, id_{t,n}, x^1_{t,n}, y^1_{t,n}, x^2_{t,n}, y^2_{t,n}, type_{t,n}), t ∈ [1, T], n ∈ [1, N_t]}
preferably, the data preprocessing performed in step 5 is as follows:
firstly, the coordinates of the center point of the vehicle target frame are calculated:

x^c_{t,n} = (x^1_{t,n} + x^2_{t,n}) / 2
y^c_{t,n} = (y^1_{t,n} + y^2_{t,n}) / 2

where x^c_{t,n} represents the abscissa of the center point of the n-th vehicle target identification frame of the t-th frame and y^c_{t,n} represents the ordinate of the center point of the n-th vehicle target identification frame of the t-th frame;
secondly, the width and the height of the vehicle target frame are calculated:

w_{t,n} = x^2_{t,n} − x^1_{t,n}
h_{t,n} = y^2_{t,n} − y^1_{t,n}

where w_{t,n} represents the width of the n-th vehicle target frame in the t-th frame and h_{t,n} represents the height of the n-th vehicle target frame in the t-th frame.
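The preprocessing formulas above can be combined into a small record builder (illustrative; the field order is an assumption, not the patent's text format):

```python
def to_level1_record(frame, vid, x1, y1, x2, y2, vtype):
    """Build one first-level record (frame number, vehicle ID, center
    abscissa, center ordinate, width, height, class) from a raw
    detection row of the original text data set."""
    xc = (x1 + x2) / 2.0  # abscissa of the frame center
    yc = (y1 + y2) / 2.0  # ordinate of the frame center
    w = x2 - x1           # frame width
    h = y2 - y1           # frame height
    return (frame, vid, xc, yc, w, h, vtype)
```

Applying this to every row of the original data set yields the first-level data set described next.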
Forming the first-level vehicle track motion characteristic identification text data set:

{(frame_{t,n}, id_{t,n}, x^c_{t,n}, y^c_{t,n}, w_{t,n}, h_{t,n}, type_{t,n}), t ∈ [1, T], n ∈ [1, N_t]}
when the data preprocessing is carried out on the first-level vehicle track motion characteristic identification text data set, firstly, screening out the vehicle track motion characteristic identification text data by using a threshold discrimination method to form a second-level vehicle track motion characteristic identification text data set, wherein the discrimination formula is as follows:
X_1 ≤ x^c_{t,n} ≤ X_2
Y_1 ≤ y^c_{t,n} ≤ Y_2

where the retained x^c_{t,n} and y^c_{t,n} are the abscissas and ordinates that pass the threshold discrimination, X_1 represents the lower limit of the abscissa discrimination threshold, X_2 represents the upper limit of the abscissa discrimination threshold, Y_1 represents the lower limit of the ordinate discrimination threshold, and Y_2 represents the upper limit of the ordinate discrimination threshold;
secondly, the vehicle tracks with the same ID sequence number are counted; if the number of video frames is less than a fixed value, the track is judged to be a fragmentary track segment and removed, the discrimination formula being:

Num_{id} < threshold

where Num_{id} represents the number of video frames in which a given vehicle ID value appears and threshold represents the fixed value;
the formed secondary vehicle track motion characteristic identification text data set is:

$\{frame^{s}_{t,n},\ id^{s}_{t,n},\ x^{c,s}_{t,n},\ y^{c,s}_{t,n},\ w^{s}_{t,n},\ h^{s}_{t,n}\}$

where $frame^{s}_{t,n}$ denotes the video frame number corresponding to the vehicle target frame after data screening, $id^{s}_{t,n}$ denotes the vehicle id serial number corresponding to the vehicle target frame after data screening, $w^{s}_{t,n}$ denotes the width of the vehicle target frame after data screening, and $h^{s}_{t,n}$ denotes the height of the vehicle target frame after data screening.
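The two screening steps can be sketched as follows (an illustrative sketch assuming trajectories stored as per-track lists of records; the record layout, the threshold values and the function name are assumptions):

```python
def preprocess(tracks, x_lim=(0, 3840), y_lim=(0, 2160), min_frames=10):
    """Threshold screening plus removal of fragmentary track segments.

    tracks: dict mapping vehicle id -> list of (frame, x_c, y_c, w, h) records.
    """
    cleaned = {}
    for vid, recs in tracks.items():
        # keep only records whose center lies inside the judgment thresholds
        kept = [r for r in recs
                if x_lim[0] <= r[1] <= x_lim[1] and y_lim[0] <= r[2] <= y_lim[1]]
        # discard fragmentary tracks observed in fewer frames than the fixed value
        if len(kept) >= min_frames:
            cleaned[vid] = kept
    return cleaned

tracks = {1: [(t, 100 + t, 500, 40, 20) for t in range(20)],
          2: [(0, 100, 500, 40, 20)]}    # fragmentary: appears in only 1 frame
print(sorted(preprocess(tracks)))        # [1]
```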
In step 5, the motion characteristic extraction process is as follows:
First, the vehicle speed is calculated for each vehicle id serial number at each video frame number. Specifically, the speed of the current frame is calculated from the position difference and the time difference between the current frame and the previous frame, and it comprises the transverse speed and the longitudinal speed of the vehicle. The formed data set is the three-level vehicle track motion characteristic identification text data set. The vehicle transverse speed and vehicle longitudinal speed are calculated as:
$v_{t,n,x} = \dfrac{x^{c,s}_{t,n} - x^{c,s}_{t-1,m}}{(frame^{s}_{t,n} - frame^{s}_{t-1,m}) / FPS}$

$v_{t,n,y} = \dfrac{y^{c,s}_{t,n} - y^{c,s}_{t-1,m}}{(frame^{s}_{t,n} - frame^{s}_{t-1,m}) / FPS}$

where $x^{c,s}_{t,n}$ denotes the abscissa of the center point of the vehicle target frame in the t-th frame under the vehicle id serial number corresponding to the nth vehicle target of the t-th frame, $x^{c,s}_{t-1,m}$ denotes the abscissa of the center point of the vehicle target frame in the (t-1)-th frame under the same vehicle id serial number, $y^{c,s}_{t,n}$ and $y^{c,s}_{t-1,m}$ denote the corresponding ordinates, $v_{t,n,x}$ denotes the transverse speed of the center point of the nth vehicle target frame in the t-th frame, $v_{t,n,y}$ denotes the longitudinal speed of the center point of the nth vehicle target frame in the t-th frame, and $frame^{s}_{t-1,m}$ denotes the video frame number of the (t-1)-th frame after data screening;
since the speed calculation of each frame uses the position data of the previous frame, the vehicle speed of the first frame in each id's frame sequence cannot be calculated directly; it is therefore fitted with a cubic polynomial, with the calculation formula:

$v_{x,1} = f_{3}(v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5}), \quad v_{y,1} = f_{3}(v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5})$

where $f_{3}(v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5})$ is a cubic function of $v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5}$ and $v_{x,1}$ is the x-direction speed of the first frame; $f_{3}(v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5})$ is a cubic function of $v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5}$ and $v_{y,1}$ is the y-direction speed of the first frame; $v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5}$ are the speeds in the 2nd, 3rd, 4th and 5th frames of each id's vehicle sequence;
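The per-frame speed differencing plus the cubic back-fit for the first frame can be sketched as follows (an illustrative sketch; the patent does not specify the fitting routine, so fitting `numpy.polyfit` through the speeds of frames 2–5 is an assumption):

```python
import numpy as np

def speeds(frames, xs, fps=30.0):
    """Finite-difference speeds; the first frame is back-filled by a cubic fit."""
    frames = np.asarray(frames, dtype=float)
    xs = np.asarray(xs, dtype=float)
    v = np.empty_like(xs)
    dt = np.diff(frames) / fps
    v[1:] = np.diff(xs) / dt          # position difference over time difference
    # cubic through the speeds of frames 2..5, evaluated at the first frame number
    coeffs = np.polyfit(frames[1:5], v[1:5], 3)
    v[0] = np.polyval(coeffs, frames[0])
    return v

# Uniform motion: 2 px per frame at 30 fps -> 60 px/s everywhere, frame 1 included
frames = [1, 2, 3, 4, 5, 6]
xs = [0, 2, 4, 6, 8, 10]
print(np.allclose(speeds(frames, xs), 60.0))  # True
```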
Secondly, the vehicle acceleration is calculated for each vehicle id serial number at each video frame number. Specifically, the acceleration is calculated from the speed difference and the time difference between the current frame and the previous frame, and it comprises the vehicle transverse acceleration and the vehicle longitudinal acceleration, so that the three-level vehicle track motion characteristic identification text data set is formed. The vehicle transverse acceleration and vehicle longitudinal acceleration are calculated as:
$a_{t,n,x} = \dfrac{v_{t,n,x} - v_{t-1,m,x}}{(frame^{s}_{t,n} - frame^{s}_{t-1,m}) / FPS}$

$a_{t,n,y} = \dfrac{v_{t,n,y} - v_{t-1,m,y}}{(frame^{s}_{t,n} - frame^{s}_{t-1,m}) / FPS}$

where $v_{t,n,x}$ denotes the transverse speed of the center point of the nth target frame of the t-th frame under the vehicle id serial number corresponding to the nth vehicle target frame of the t-th frame, $v_{t-1,m,x}$ denotes the transverse speed of the center point of the mth target frame of the (t-1)-th frame under the same vehicle id serial number, $v_{t,n,y}$ and $v_{t-1,m,y}$ denote the corresponding longitudinal speeds, $a_{t,n,x}$ denotes the transverse acceleration of the center point of the nth vehicle target frame in the t-th frame, and $a_{t,n,y}$ denotes the longitudinal acceleration of the center point of the nth vehicle target frame in the t-th frame;
in the same way as the speed, the accelerations of the first frame of each vehicle are fitted with a cubic polynomial, with the calculation formula:

$a_{x,1} = f_{3}(a_{x,2}, a_{x,3}, a_{x,4}, a_{x,5}), \quad a_{y,1} = f_{3}(a_{y,2}, a_{y,3}, a_{y,4}, a_{y,5})$
The formed three-level vehicle track motion characteristic identification text data set is:

$\{frame^{s}_{t,n},\ id^{s}_{t,n},\ x^{c,s}_{t,n},\ y^{c,s}_{t,n},\ w^{s}_{t,n},\ h^{s}_{t,n},\ v_{t,n,x},\ v_{t,n,y},\ a_{t,n,x},\ a_{t,n,y}\}$
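Acceleration follows the same differencing pattern applied to the speed sequence (an illustrative sketch; the function name and the fps default are assumptions, and the first-frame placeholder stands in for the patent's cubic back-fit):

```python
def accelerations(frames, vs, fps=30.0):
    """Per-frame acceleration from speed differences; frame numbers give the time base."""
    a = [0.0] * len(vs)
    for i in range(1, len(vs)):
        dt = (frames[i] - frames[i - 1]) / fps   # time difference between frames
        a[i] = (vs[i] - vs[i - 1]) / dt          # speed difference over time difference
    a[0] = a[1]  # placeholder; the patent back-fills the first frame with a cubic fit
    return a

print(accelerations([1, 2, 3], [10.0, 12.0, 14.0], fps=1.0))  # [2.0, 2.0, 2.0]
```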
In step 5, the lane number detection is as follows:
firstly, linear fitting is performed on the vehicle position coordinate data in the three-level vehicle track motion characteristic identification text data set to obtain a fitted straight line, whose expression is:

$\hat{y} = A\, x^{c,s}_{t,n} + B$

where $\hat{y}$ denotes the fitted ordinate of the straight line through the center points $(x^{c,s}_{t,n}, y^{c,s}_{t,n})$, A represents the slope of the straight line, and B represents the intercept of the straight line;
secondly, the distance from each vehicle position coordinate in the three-level vehicle track motion characteristic identification text data set to the fitted straight line is calculated:

$dist = \dfrac{\left| A\, x^{c,s}_{t,n} - y^{c,s}_{t,n} + B \right|}{\sqrt{A^{2} + 1}}$
the lane number is judged by a threshold judgment method to form the four-level vehicle track motion characteristic identification text data set; the lane number judgment formula is:

$\{lane_{t,n} = k,\ if\ dist_{k,1} \le dist \le dist_{k,2}\}$

where $lane_{t,n}$ denotes the lane number in which the center point of the nth vehicle target frame of the t-th frame lies, k denotes the determined lane number, $dist_{k,1}$ denotes the lower limit of the kth lane boundary, and $dist_{k,2}$ denotes the upper limit of the kth lane boundary;
The formed four-level vehicle track motion characteristic identification text data set is:

$\{frame^{s}_{t,n},\ id^{s}_{t,n},\ x^{c,s}_{t,n},\ y^{c,s}_{t,n},\ w^{s}_{t,n},\ h^{s}_{t,n},\ v_{t,n,x},\ v_{t,n,y},\ a_{t,n,x},\ a_{t,n,y},\ lane_{t,n}\}$
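The line-fit plus distance-threshold lane assignment can be sketched as follows (an illustrative sketch; the lane boundary intervals and the use of `numpy.polyfit` are assumptions, and a signed distance is used instead of the absolute value so that lanes on either side of the fitted line stay separable):

```python
import numpy as np

def assign_lanes(xs, ys, lane_bounds):
    """Fit y = A*x + B through all center points, then bin each point's signed
    distance to that line into lane intervals (lower, upper)."""
    A, B = np.polyfit(xs, ys, 1)                       # straight-line fit
    d = (A * np.asarray(xs) - np.asarray(ys) + B) / np.sqrt(A ** 2 + 1)
    lanes = []
    for di in d:
        # threshold judgment: first lane interval containing the distance
        lane = next((k for k, (lo, hi) in enumerate(lane_bounds, start=1)
                     if lo <= di <= hi), None)
        lanes.append(lane)
    return lanes

# Two lanes straddling a horizontal road axis
xs = [0, 1, 2, 3, 0, 1, 2, 3]
ys = [1, 1, 1, 1, -1, -1, -1, -1]
print(assign_lanes(xs, ys, [(-2.0, 0.0), (0.0, 2.0)]))  # [1, 1, 1, 1, 2, 2, 2, 2]
```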
In step 5, the coordinate conversion is as follows: the number of pixel units covered in the road length direction is converted to the actual road length, and the five-level vehicle track motion characteristic identification text data set is formed after conversion. The conversion ratio is:

$q = \dfrac{152}{G}$

where q is the conversion ratio, 152 meters is the length of the shot road, and G is the number of pixel units covered in the road length direction of the image;
the parameter conversion process of the four-level vehicle track motion characteristic identification text data set is:

$x^{c,q}_{t,n} = q\, x^{c,s}_{t,n},\quad y^{c,q}_{t,n} = q\, y^{c,s}_{t,n},\quad w^{q}_{t,n} = q\, w^{s}_{t,n},\quad h^{q}_{t,n} = q\, h^{s}_{t,n}$

$v_{t,n,x,q} = q\, v_{t,n,x},\quad v_{t,n,y,q} = q\, v_{t,n,y},\quad a_{t,n,x,q} = q\, a_{t,n,x},\quad a_{t,n,y,q} = q\, a_{t,n,y}$

where $x^{c,q}_{t,n}$ denotes the abscissa of the center point of the nth vehicle target frame of the t-th frame after coordinate conversion, $y^{c,q}_{t,n}$ denotes the ordinate of the center point of the nth vehicle target frame of the t-th frame after coordinate conversion, $w^{q}_{t,n}$ and $h^{q}_{t,n}$ denote the width and height of the nth vehicle target frame of the t-th frame after coordinate conversion, $v_{t,n,x,q}$ and $v_{t,n,y,q}$ denote the transverse and longitudinal speeds of the center point of the nth vehicle target frame of the t-th frame after coordinate conversion, and $a_{t,n,x,q}$ and $a_{t,n,y,q}$ denote the transverse and longitudinal accelerations of the center point of the nth vehicle target frame of the t-th frame after coordinate conversion;
The formed five-level vehicle track motion characteristic identification text data set is:

$\{frame^{s}_{t,n},\ id^{s}_{t,n},\ x^{c,q}_{t,n},\ y^{c,q}_{t,n},\ w^{q}_{t,n},\ h^{q}_{t,n},\ v_{t,n,x,q},\ v_{t,n,y,q},\ a_{t,n,x,q},\ a_{t,n,y,q},\ lane_{t,n}\}$
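Under the stated setup (152 m of road covered by G = 3840 pixel units), the pixel-to-meter conversion can be sketched as follows (an illustrative sketch; the record field order and function name are assumptions):

```python
ROAD_LENGTH_M = 152.0   # shot road length given in the patent setup
G_PIXELS = 3840.0       # pixel units covered in the road length direction

def to_meters(record, q=ROAD_LENGTH_M / G_PIXELS):
    """Scale every pixel-based quantity in a trajectory record by the ratio q."""
    frame, vid, x_c, y_c, w, h, vx, vy, ax, ay, lane = record
    return (frame, vid, q * x_c, q * y_c, q * w, q * h,
            q * vx, q * vy, q * ax, q * ay, lane)

rec = (1, 7, 1920.0, 1080.0, 96.0, 48.0, 240.0, 0.0, 0.0, 0.0, 2)
print(to_meters(rec)[2])  # ~76.0 (1920 px is half of the 152 m road)
```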
The invention has the following advantages. First, it provides a new vehicle track characteristic identification method: different from existing patents, the method applies a YOLOv5 recognition model together with Kalman filtering based on a uniform motion model and the Hungarian algorithm, and proposes a method for extracting vehicle speed, acceleration and lane number. Second, it overcomes shortcomings of prior patents in the vehicle motion characteristic extraction process: the applied Kalman filtering mitigates missed detections in the vehicle target data, and the speed, acceleration and lane number extraction method can effectively extract the vehicle motion characteristics.
Drawings
FIG. 1 is a schematic view of the device of the invention;
FIG. 2 is a working scene diagram of the invention;
FIG. 3 is a flow chart of the method of the invention;
FIG. 4 is a vehicle track extraction test chart of the method of the invention;
FIG. 5 is a lane number detection test chart of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are described below clearly and completely, and it is obvious that the described embodiments are some, not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, which is a schematic view of the apparatus of the present invention, the technical solution of the apparatus of the present invention is a high altitude visual angle recognition system, which is characterized by comprising:
the device comprises an aerial photography shooting device, a calculation processing host and a display projection device;
the aerial photographing device is connected with the calculation processing host computer in a wireless mode; the computing processing host is connected with the display projection device in a wired mode;
the aerial photography camera device is used for collecting video image data of vehicles on the road under the high-altitude visual angle and sending the video image data to the calculation processing host computer in a wireless mode; the calculation processing host is used for processing the video image data of the road vehicles at the high altitude view angle acquired by the aerial photography camera device, further obtaining vehicle image recognition results and track generation results through a vehicle track motion characteristic recognition method at the high altitude view angle, and transmitting the vehicle image recognition results and the track generation results to the display projection device for display;
the aerial photography camera device model is: DJI Mavic Air 2;
the computing processing host is configured with: an Intel Core i9-9900K CPU; an NVIDIA GeForce RTX 3080 GPU; an ASUS PRIME Z390-A mainboard; two DDR4 3000 MHz 16G memory banks; a GW-EPS 1250DA power supply;
the display screen model is: AOC 22B2H;
as shown in fig. 2, the aerial photography device is located right above the road surface during remote photography, that is, the included angle between the photography sight line of the aerial photography device camera and the road surface is 90 degrees.
As shown in fig. 3, the method for identifying the vehicle track motion characteristics includes the following steps:
step 1: the calculation processing host uses the aerial photography camera device positioned directly above the road surface to wirelessly capture video image data, which is used to form a high-altitude image training data set; the high-altitude image training data set is manually marked, the circumscribed rectangular frame and the vehicle type of each vehicle target are marked, and a high-altitude image training vehicle marking frame set is formed;
step 1, the high-altitude image training data set comprises:
$\{data_{e}(x, y),\ e \in [1, E],\ x \in [1, X],\ y \in [1, Y]\}$

where $data_{e}(x, y)$ denotes the pixel information in the x-th row and y-th column of the e-th frame image in the high-altitude image training data set, E is the number of frames in the high-altitude image training data set, X is the number of rows of the images in the high-altitude image training data set, and Y is the number of columns of the images in the high-altitude image training data set;
step 1, the high-altitude image training vehicle marking frame set is:

$\{(x^{tl}_{e,n},\ y^{tl}_{e,n},\ x^{br}_{e,n},\ y^{br}_{e,n},\ type_{e,n})\}$

where $x^{tl}_{e,n}$ denotes the abscissa of the upper left corner of the marking rectangular frame of the nth vehicle target in the e-th frame image in the high-altitude image training vehicle marking frame set, $y^{tl}_{e,n}$ denotes the ordinate of the upper left corner of that marking rectangular frame, $x^{br}_{e,n}$ denotes the abscissa of the lower right corner of that marking rectangular frame, $y^{br}_{e,n}$ denotes the ordinate of the lower right corner of that marking rectangular frame, and $type_{e,n}$ denotes the mark category of the nth vehicle target in the e-th frame image;
step 2: the calculation processing host uses the aerial photography device positioned directly above the road surface to wirelessly capture video image data, which is used to form a high-altitude image sequence data set from which vehicle track data are subsequently extracted; the road in the image pictures of the high-altitude image sequence data set is located in the middle of the image.
Step 2, the fixed shooting frame rate of the aerial shooting device is FPS, the length of the shot road is 152 meters, and the number of pixel units covered in the road length direction of the shot image is G = 3840; the size of the captured high-altitude image data is X = 3840 by Y = 2160;
step 2, the high-altitude image sequence dataset is as follows:
$\{data_{t}(x, y),\ t \in [1, T],\ x \in [1, X],\ y \in [1, Y]\}$

where $data_{t}(x, y)$ denotes the pixel information in the x-th row and y-th column of the t-th frame image in the high-altitude image sequence data set, T is the number of frames in the high-altitude image sequence data set, T = 19200, X is the number of rows of the images in the high-altitude image sequence data set, and Y is the number of columns of the images in the high-altitude image sequence data set;
and step 3: introducing a YOLOv5 deep learning network model, sequentially inputting each frame of image in the high-altitude image training data set and a vehicle marking frame corresponding to each frame of image in the high-altitude image training vehicle marking frame set to a YOLOv5 deep learning network model for training, constructing a loss function model by using a GIOU method, optimizing a loss function value by using an Adam optimization algorithm, and identifying vehicle targets in the high-altitude image sequence data set by using the trained YOLOv5 deep learning network model to obtain a high-altitude image sequence vehicle identification frame set.
Step 3, the YOLOv5 network framework specifically adopts the YOLOv5x network structure;
step 3, the high-altitude image sequence vehicle identification frame set is:

$\{(x^{tl}_{t,n},\ y^{tl}_{t,n},\ x^{br}_{t,n},\ y^{br}_{t,n},\ type_{t,n})\}$

where $x^{tl}_{t,n}$ denotes the abscissa of the upper left corner of the circumscribed rectangular frame of the nth vehicle target in the t-th frame image in the high-altitude image sequence vehicle identification frame set, $y^{tl}_{t,n}$ denotes the ordinate of the upper left corner of that circumscribed rectangular frame, $x^{br}_{t,n}$ denotes the abscissa of the lower right corner of that circumscribed rectangular frame, $y^{br}_{t,n}$ denotes the ordinate of the lower right corner of that circumscribed rectangular frame, and $type_{t,n}$ denotes the category of the nth vehicle target in the t-th frame image;
Step 4: starting from the first frame of vehicle target circumscribed rectangular frame data in the high-altitude image sequence vehicle target identification frame set, the following processing is carried out. Kalman filtering is applied to the vehicle target boundary frames of the previous frame to obtain the vehicle target estimation frame data of the current frame. The vehicle target identification frame data of the current frame are then associated and matched with the vehicle target boundary frames in the vehicle target estimation frame data by the Hungarian association algorithm, with the IOU distance as the matching mechanism, which yields the ID serial number of each current-frame vehicle target identification frame, namely the ID serial number of the current-frame vehicle target; any current-frame vehicle target frame data that are not matched are marked with a new ID serial number. This is repeated until the end of the aerial image sequence. After the association matching process, the video frame serial numbers, the vehicle ID serial numbers and the high-altitude image sequence vehicle target frame set are combined to form the original vehicle track motion characteristic identification text data set.
Step 4, in recording the current video frame serial numbers, the recorded video frame serial number set is

$Frame_{t,n} = \{frame_{t,n}\}$

where $frame_{t,n}$ denotes the video frame number corresponding to the nth vehicle target in the t-th frame.
And 4, the Kalman filtering processing process sequentially comprises the following steps: initializing a vehicle target state vector; initializing a state transition matrix, initializing a covariance matrix, initializing an observation matrix and initializing a system noise matrix; predicting the vehicle target state vector of the current frame according to the optimal estimated value of the vehicle target state vector of the previous frame to obtain a predicted value of the vehicle target state vector of the current frame; predicting a current frame vehicle target system error covariance matrix according to the previous frame vehicle target system error covariance matrix to obtain a current frame vehicle target system error covariance matrix predicted value; updating a Kalman coefficient by using the covariance matrix predicted value of the current frame vehicle target system; estimating according to the current frame vehicle target state vector predicted value and the system observation value to obtain the current frame vehicle target state vector optimal estimation value; updating a current frame vehicle target system error covariance matrix; extracting a current frame vehicle target estimation frame set from the current frame vehicle target state vector optimal estimation value;
In the Kalman filtering initialization of the vehicle target state vector, the characteristics of a vehicle target boundary frame are described by the abscissa of its center, the ordinate of its center, its area and its aspect ratio, and the motion state information of the boundary frame is described by a linear uniform velocity model, namely:

$X = [u,\ v,\ s,\ r,\ \dot{u},\ \dot{v},\ \dot{s}]^{T}$

where X denotes the motion state information of the bounding box, u denotes the abscissa of the center of the bounding box, v denotes the ordinate of the center of the bounding box, s denotes the area of the bounding box, r denotes the aspect ratio of the bounding box (typically a constant), $\dot{u}$ denotes the rate of change of the abscissa of the center of the bounding box, $\dot{v}$ denotes the rate of change of the ordinate of the center of the bounding box, and $\dot{s}$ denotes the rate of change of the area of the bounding box. The motion state information of the mth vehicle target bounding box of the (t-1)-th frame is described as:
$X_{t-1,m} = [u_{t-1,m},\ v_{t-1,m},\ s_{t-1,m},\ r_{t-1,m},\ \dot{u}_{t-1,m},\ \dot{v}_{t-1,m},\ \dot{s}_{t-1,m}]^{T}$

where $X_{t-1,m}$ denotes the motion state information of the mth vehicle target bounding box of the (t-1)-th frame, $u_{t-1,m}$ denotes the abscissa of the center of the mth vehicle target bounding box of the (t-1)-th frame, $v_{t-1,m}$ denotes the ordinate of the center of that bounding box, $s_{t-1,m}$ denotes the area of that bounding box, $r_{t-1,m}$ denotes the aspect ratio of that bounding box, $\dot{u}_{t-1,m}$ denotes the rate of change of the center abscissa, $\dot{v}_{t-1,m}$ denotes the rate of change of the center ordinate, and $\dot{s}_{t-1,m}$ denotes the rate of change of the area of the mth vehicle target bounding box of the (t-1)-th frame;
the abscissa and ordinate of the center and the bounding box area of the mth vehicle target frame in the (t-1)-th frame are calculated as:

$u_{t-1,m} = (x^{tl}_{t-1,m} + x^{br}_{t-1,m})/2$

$v_{t-1,m} = (y^{tl}_{t-1,m} + y^{br}_{t-1,m})/2$

$s_{t-1,m} = (x^{br}_{t-1,m} - x^{tl}_{t-1,m})\,(y^{br}_{t-1,m} - y^{tl}_{t-1,m})$

where $x^{tl}_{t-1,m}$ denotes the abscissa of the upper left corner of the mth vehicle target frame of the (t-1)-th frame, $x^{br}_{t-1,m}$ denotes the abscissa of the lower right corner of that frame, $y^{tl}_{t-1,m}$ denotes the ordinate of the upper left corner of that frame, and $y^{br}_{t-1,m}$ denotes the ordinate of the lower right corner of that frame;
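The corner-to-state conversion, and its inverse used later to recover the estimation frame, can be sketched as follows (an illustrative sketch following the SORT-style convention s = w·h, r = w/h, which matches the patent's area and aspect-ratio definitions; function names are assumptions):

```python
import math

def bbox_to_state(x_tl, y_tl, x_br, y_br):
    """Corner box -> (u, v, s, r): center coordinates, area, aspect ratio."""
    w, h = x_br - x_tl, y_br - y_tl
    return ((x_tl + x_br) / 2.0, (y_tl + y_br) / 2.0, w * h, w / h)

def state_to_bbox(u, v, s, r):
    """(u, v, s, r) -> corner box, inverting w = sqrt(s*r), h = sqrt(s/r)."""
    w, h = math.sqrt(s * r), math.sqrt(s / r)
    return (u - w / 2.0, v - h / 2.0, u + w / 2.0, v + h / 2.0)

box = (100.0, 50.0, 180.0, 90.0)
print(state_to_bbox(*bbox_to_state(*box)))  # round-trips to (100.0, 50.0, 180.0, 90.0)
```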
In the initialization of the state transition matrix in step 4, the state transition matrix F models the motion of the target state vector; for the adopted uniform motion model, F is initialized as:

$F = \begin{bmatrix} 1&0&0&0&1&0&0 \\ 0&1&0&0&0&1&0 \\ 0&0&1&0&0&0&1 \\ 0&0&0&1&0&0&0 \\ 0&0&0&0&1&0&0 \\ 0&0&0&0&0&1&0 \\ 0&0&0&0&0&0&1 \end{bmatrix}$
in the initialization of a covariance matrix, a covariance matrix P represents the uncertainty of target position information, and the covariance matrix is an empirical parameter;
in the initialization of the system noise covariance matrix, because the process noise is not measurable, the system noise covariance matrix Q is generally assumed to conform to normal distribution;
in initializing the observation matrix, the observation matrix H relates the state to the observable variables, and its values are initialized as:

$H = \begin{bmatrix} 1&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0 \\ 0&0&1&0&0&0&0 \\ 0&0&0&1&0&0&0 \end{bmatrix}$
in the initialization observation noise covariance matrix, because the observation noise is not measurable, the observation noise covariance matrix R is generally assumed to conform to normal distribution;
step 4, the Kalman filtering predicts the current-frame vehicle target state vector from the optimal estimate of the previous-frame vehicle target state vector; the predicted value of the mth vehicle target state vector of the t-th frame is calculated as:

$\hat{X}^{-}_{t,m} = F\, \hat{X}_{t-1,m} + B\, u_{t-1,m}$

where $\hat{X}_{t-1,m}$ denotes the optimal estimate of the mth vehicle target state vector in the (t-1)-th frame, $\hat{X}^{-}_{t,m}$ denotes the predicted value of the mth vehicle target state vector of the t-th frame, F is the state transition matrix, B is the control matrix, and $u_{t-1,m}$ here denotes the control gain matrix;
step 4, the Kalman filtering predicts the current-frame vehicle target system error covariance matrix from that of the previous frame; the predicted value of the mth vehicle target system error covariance matrix of the t-th frame is:

$P^{-}_{t,m} = F\, P_{t-1,m}\, F^{T} + Q$

where $P_{t-1,m}$ denotes the mth vehicle target system error covariance matrix at frame t-1, $P^{-}_{t,m}$ denotes the predicted value of the mth vehicle target system error covariance matrix in the t-th frame, and Q is the covariance matrix of the process noise;
thirdly, in the Kalman filtering, the Kalman coefficient is updated with the predicted value of the current-frame system error covariance matrix; the mth vehicle target Kalman coefficient of the t-th frame is calculated as:

$K_{t,m} = P^{-}_{t,m} H^{T} \left( H\, P^{-}_{t,m} H^{T} + R \right)^{-1}$

where H is the observation matrix, R is the covariance matrix of the observation noise, and $K_{t,m}$ is the mth vehicle target Kalman coefficient of the t-th frame;
step 4, in the Kalman filtering, the optimal estimate of the current-frame vehicle target state vector is calculated from the predicted state vector and the system observation; the optimal estimate of the mth vehicle target state vector of the t-th frame is:

$\hat{X}_{t,m} = \hat{X}^{-}_{t,m} + K_{t,m}\left( z_{t} - H\, \hat{X}^{-}_{t,m} \right)$

where $\hat{X}_{t,m}$ is the optimal estimate of the mth vehicle target state vector of the t-th frame and $z_{t}$ is the observed value;
step 4, in the Kalman filtering update of the current-frame system error covariance matrix, the update formula of the mth vehicle target system error covariance matrix of the t-th frame is:

$P_{t,m} = \left( I - K_{t,m} H \right) P^{-}_{t,m}$

where $P_{t,m}$ is the mth vehicle target system error covariance matrix of the t-th frame;
step 4, the current-frame vehicle target estimation frame set is extracted from the optimal estimate of the current-frame vehicle target state vector; the optimal estimate of the mth target state vector of the t-th frame is described as:

$\hat{X}_{t,m} = [u_{t,m},\ v_{t,m},\ s_{t,m},\ r_{t,m},\ \dot{u}_{t,m},\ \dot{v}_{t,m},\ \dot{s}_{t,m}]^{T}$

where $u_{t,m}$ denotes the optimally estimated abscissa of the center of the mth vehicle target bounding box of the t-th frame, $v_{t,m}$ denotes the optimally estimated ordinate of the center of that bounding box, $s_{t,m}$ denotes the optimally estimated area of that bounding box, $r_{t,m}$ denotes the optimally estimated aspect ratio of that bounding box, $\dot{u}_{t,m}$ denotes the optimally estimated rate of change of the center abscissa, $\dot{v}_{t,m}$ denotes the optimally estimated rate of change of the center ordinate, and $\dot{s}_{t,m}$ denotes the optimally estimated rate of change of the area of the mth vehicle target bounding box of the t-th frame;
the current-frame vehicle target estimation frame coordinates are calculated as:

$x^{tl}_{t,m} = u_{t,m} - \tfrac{1}{2}\sqrt{s_{t,m}\, r_{t,m}}, \quad y^{tl}_{t,m} = v_{t,m} - \tfrac{1}{2}\sqrt{s_{t,m}/r_{t,m}}$

$x^{br}_{t,m} = u_{t,m} + \tfrac{1}{2}\sqrt{s_{t,m}\, r_{t,m}}, \quad y^{br}_{t,m} = v_{t,m} + \tfrac{1}{2}\sqrt{s_{t,m}/r_{t,m}}$

where $x^{tl}_{t,m}$ denotes the optimally estimated abscissa of the upper left corner of the mth vehicle target frame of the t-th frame, $y^{tl}_{t,m}$ denotes the optimally estimated ordinate of the upper left corner of that frame, $x^{br}_{t,m}$ denotes the optimally estimated abscissa of the lower right corner of that frame, and $y^{br}_{t,m}$ denotes the optimally estimated ordinate of the lower right corner of that frame;

therefore, the current-frame vehicle target estimation frame set is:

$\{(x^{tl}_{t,m},\ y^{tl}_{t,m},\ x^{br}_{t,m},\ y^{br}_{t,m})\}$
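The predict/update cycle above can be sketched with plain NumPy (an illustrative sketch of the standard constant-velocity Kalman equations the patent lists; the noise magnitudes and the unit time step are assumptions):

```python
import numpy as np

# 7-dim state [u, v, s, r, du, dv, ds]; the observation z is [u, v, s, r]
F = np.eye(7)
F[0, 4] = F[1, 5] = F[2, 6] = 1.0            # uniform motion: position += rate
H = np.zeros((4, 7))
H[:4, :4] = np.eye(4)                        # observe u, v, s, r only
Q = np.eye(7) * 1e-2                         # process noise covariance (assumed)
R = np.eye(4) * 1e-1                         # observation noise covariance (assumed)

def predict(x, P):
    """Predict the state vector and error covariance of the current frame."""
    return F @ x, F @ P @ F.T + Q

def update(x_pred, P_pred, z):
    """Fuse the predicted state with the observation z."""
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Kalman coefficient
    x = x_pred + K @ (z - H @ x_pred)                        # optimal estimate
    P = (np.eye(7) - K @ H) @ P_pred                         # updated covariance
    return x, P

x = np.array([140.0, 70.0, 3200.0, 2.0, 1.0, 0.0, 0.0])
x_pred, P_pred = predict(x, np.eye(7))
x_new, P_new = update(x_pred, P_pred, np.array([141.2, 70.1, 3200.0, 2.0]))
print(x_pred[0])  # 141.0: the center abscissa advanced by its rate of change
```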
Preferably, the Hungarian association algorithm in step 4 performs matching by calculating the intersection ratio IOU of the vehicle target frames;
step 4, the Hungarian association algorithm calculates the vehicle target frame IOU intersection ratio and performs matching as follows: for the mth vehicle target estimation frame of the t-th frame in the current-frame vehicle target estimation frame set and the nth vehicle target identification frame in the current-frame vehicle target identification frame set, the intersection area is calculated as:

$S_{1} = \max\!\left(0,\ \min(x^{br}_{t,m}, x^{br}_{t,n}) - \max(x^{tl}_{t,m}, x^{tl}_{t,n})\right) \cdot \max\!\left(0,\ \min(y^{br}_{t,m}, y^{br}_{t,n}) - \max(y^{tl}_{t,m}, y^{tl}_{t,n})\right)$

where $S_{1}$ denotes the intersection area of the mth vehicle target estimation frame of the t-th frame and the nth vehicle target identification frame of the t-th frame;

the union is calculated as:

$S_{2} = (x^{br}_{t,m} - x^{tl}_{t,m})(y^{br}_{t,m} - y^{tl}_{t,m}) + (x^{br}_{t,n} - x^{tl}_{t,n})(y^{br}_{t,n} - y^{tl}_{t,n}) - S_{1}$

where $S_{2}$ denotes the union area of the mth vehicle target estimation frame of the t-th frame and the nth vehicle target identification frame of the t-th frame;

the IOU intersection ratio is calculated as:

$IOU = S_{1} / S_{2}$
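The intersection-over-union computation can be sketched as follows (a minimal illustration; the box format and function name are assumptions):

```python
def iou(box_a, box_b):
    """IOU of two corner-form boxes (x_tl, y_tl, x_br, y_br)."""
    # intersection rectangle, clamped to zero when the boxes do not overlap
    iw = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    ih = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    s1 = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    s2 = area_a + area_b - s1   # union area
    return s1 / s2 if s2 > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # 0.3333333333333333 (2 / 6)
```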
The vehicle-frame IOU matching principle of the Hungarian association algorithm is: if the calculated IOU of the mth vehicle target estimation frame of the tth frame and the nth vehicle target identification frame of the tth frame is the maximum and the two frames belong to the same vehicle class, then the mth vehicle target of the (t-1)th frame and the nth vehicle target of the tth frame are the same vehicle target, and the nth vehicle target of the tth frame is given the same ID serial number as the mth vehicle target of the (t-1)th frame. The associated vehicle ID serial number set is:

ID_{t,n} = {id_{t,n}}

where id_{t,n} denotes the vehicle ID serial number corresponding to the nth vehicle target of the tth frame.
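The intersection, union and assignment steps above can be sketched in Python; `linear_sum_assignment` (SciPy's Hungarian-style solver) and the `min_iou` acceptance gate are implementation assumptions, not part of the patent text:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """IOU of two boxes given as (x_left, y_top, x_right, y_bottom)."""
    # intersection area S1
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    s1 = ix * iy
    # union area S2 = area_a + area_b - S1
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    s2 = area_a + area_b - s1
    return s1 / s2 if s2 > 0 else 0.0

def match(est_boxes, det_boxes, min_iou=0.3):
    """Associate estimation frames (rows) with identification frames
    (columns) so that the total IOU is maximised."""
    cost = np.zeros((len(est_boxes), len(det_boxes)))
    for m, ebox in enumerate(est_boxes):
        for n, dbox in enumerate(det_boxes):
            cost[m, n] = -iou(ebox, dbox)  # negate: the solver minimises cost
    rows, cols = linear_sum_assignment(cost)
    # keep only pairs whose IOU clears the acceptance gate
    return [(int(m), int(n)) for m, n in zip(rows, cols) if -cost[m, n] >= min_iou]
```

Targets left unmatched by `match` would be the ones that receive fresh ID serial numbers in step 4.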
In step 4, the video frame serial numbers, the vehicle ID serial numbers and the high-altitude image sequence vehicle target frame set obtained after the association process are combined to form the original vehicle track motion feature identification text data set:

{frame_{t,n}, id_{t,n}, x^l_{t,n}, y^l_{t,n}, x^r_{t,n}, y^r_{t,n}, type_{t,n}}
FIG. 4 shows a vehicle trajectory extraction test chart for the original vehicle track motion feature identification text data set;
Step 5: data preprocessing, motion feature extraction, lane number detection and coordinate conversion are carried out in sequence on the original vehicle track motion feature identification text data set to finally form a five-level vehicle track motion feature identification text data set.
In step 5, the data preprocessing is as follows:

First, the center point coordinates of each vehicle target frame are calculated:

x^c_{t,n} = (x^l_{t,n} + x^r_{t,n}) / 2
y^c_{t,n} = (y^l_{t,n} + y^r_{t,n}) / 2

where x^c_{t,n} represents the abscissa of the center point of the nth vehicle target identification frame of the tth frame, and y^c_{t,n} represents the ordinate of the center point of the nth vehicle target identification frame of the tth frame;
secondly, the width and height of each vehicle target frame are calculated:

w_{t,n} = x^r_{t,n} - x^l_{t,n}
h_{t,n} = y^r_{t,n} - y^l_{t,n}

where w_{t,n} represents the width of the nth vehicle target frame of the tth frame, and h_{t,n} represents the height of the nth vehicle target frame of the tth frame.
A first-level vehicle track motion feature identification text data set is thus formed:

{frame_{t,n}, id_{t,n}, x^c_{t,n}, y^c_{t,n}, w_{t,n}, h_{t,n}}
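The corner-to-center conversion above is a one-liner; this is a minimal sketch in which the corner tuple layout is an assumption:

```python
def preprocess(box):
    """Convert a corner-form target frame (x_l, y_t, x_r, y_b)
    to centre/size form (x_c, y_c, w, h)."""
    x_l, y_t, x_r, y_b = box
    x_c = (x_l + x_r) / 2.0   # centre abscissa
    y_c = (y_t + y_b) / 2.0   # centre ordinate
    w = x_r - x_l             # frame width
    h = y_b - y_t             # frame height
    return x_c, y_c, w, h
```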
When the first-level vehicle track motion feature identification text data set is preprocessed, the data are first screened by a threshold discrimination method to form a second-level vehicle track motion feature identification text data set, the discrimination formulas being:

X_1 \le x^c_{t,n} \le X_2
Y_1 \le y^c_{t,n} \le Y_2

where x^c_{t,n} and y^c_{t,n} here denote the abscissa and ordinate retained after threshold screening, X_1 represents the lower limit of the abscissa judgment threshold, X_2 represents the upper limit of the abscissa judgment threshold, Y_1 represents the lower limit of the ordinate judgment threshold, and Y_2 represents the upper limit of the ordinate judgment threshold;
secondly, the vehicle tracks with the same ID serial number are counted; if the number of video frames is less than a fixed value, the track is judged to be a fragmentary track segment and is cleared, the judgment formula being:

N_{id} \ge threshold

where N_{id} represents the number of video frames in which the vehicle ID appears, and threshold represents the fixed value;
the second-level vehicle track motion feature identification text data set thus formed is:

{frame'_{t,n}, id'_{t,n}, x'^c_{t,n}, y'^c_{t,n}, w'_{t,n}, h'_{t,n}}

where frame'_{t,n} represents the video frame serial number corresponding to the vehicle target frame after data screening, id'_{t,n} represents the vehicle ID serial number corresponding to the vehicle target frame after data screening, w'_{t,n} represents the width of the vehicle target frame after data screening, and h'_{t,n} represents the height of the vehicle target frame after data screening.
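The two screening passes above (coordinate thresholds, then removal of fragmentary tracks) can be sketched as follows; the record layout as a list of dicts is an assumption for illustration:

```python
from collections import Counter

def screen(records, x_lim, y_lim, min_frames):
    """records: dicts with keys frame, id, x, y, w, h.
    Keep points inside the coordinate thresholds, then drop fragmentary
    tracks whose ID appears in fewer than min_frames video frames."""
    kept = [r for r in records
            if x_lim[0] <= r["x"] <= x_lim[1] and y_lim[0] <= r["y"] <= y_lim[1]]
    counts = Counter(r["id"] for r in kept)  # frames per vehicle ID
    return [r for r in kept if counts[r["id"]] >= min_frames]
```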
In step 5, the motion feature extraction process is as follows:

First, the vehicle speed of each vehicle ID serial number in each video frame is calculated; specifically, the speed of the current frame is calculated from the position difference and the time difference between the current frame and the previous frame, and includes the lateral speed and the longitudinal speed of the vehicle. The resulting data set is the third-level vehicle track motion feature identification text data set. The lateral and longitudinal vehicle speeds are calculated as:
v_{t,n,x} = (x^c_{t,n} - x^c_{t-1,m}) / ((frame'_{t,n} - frame'_{t-1,m}) / FPS)
v_{t,n,y} = (y^c_{t,n} - y^c_{t-1,m}) / ((frame'_{t,n} - frame'_{t-1,m}) / FPS)

where x^c_{t,n} represents the center-point abscissa of the vehicle target frame of the tth frame under the vehicle ID serial number corresponding to the nth vehicle target of the tth frame, x^c_{t-1,m} represents the center-point abscissa of the vehicle target frame of the (t-1)th frame under the same vehicle ID serial number, y^c_{t,n} represents the center-point ordinate of the vehicle target frame of the tth frame under that vehicle ID serial number, y^c_{t-1,m} represents the center-point ordinate of the vehicle target frame of the (t-1)th frame under that vehicle ID serial number, v_{t,n,x} represents the lateral speed of the center point of the nth vehicle target frame of the tth frame, v_{t,n,y} represents the longitudinal speed of the center point of the nth vehicle target frame of the tth frame, and frame'_{t-1,m} represents the video frame serial number of the (t-1)th frame after data screening;
since the speed calculation of each frame uses the position data of the previous frame, the vehicle speed of the first frame in each ID's frame sequence cannot be calculated directly, so it is fitted with a cubic polynomial:

v_{x,1} = f_3(v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5})
v_{y,1} = f_3(v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5})

where f_3(v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5}) is a cubic function of v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5}, v_{x,1} is the first-frame x-direction speed, f_3(v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5}) is a cubic function of v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5}, and v_{y,1} is the first-frame y-direction speed; v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5} are the speeds of the 2nd, 3rd, 4th and 5th frames of each vehicle ID;
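The finite-difference speed plus the cubic back-fill of the first frame can be sketched with numpy; the FPS value and the use of frames 2-5 as the fitting window are assumptions carried from the text (`np.polyfit` plays the role of f_3):

```python
import numpy as np

FPS = 30.0  # assumed capture frame rate

def velocities(frames, xs):
    """Per-frame speed from the position difference and time difference of
    the current and previous frame; the first frame, which has no
    predecessor, is filled by a cubic polynomial fitted to frames 2-5
    (requires at least 5 samples per track)."""
    frames = np.asarray(frames, dtype=float)
    xs = np.asarray(xs, dtype=float)
    v = np.empty_like(xs)
    # speed = position difference / time difference (frame gap over FPS)
    v[1:] = (xs[1:] - xs[:-1]) / ((frames[1:] - frames[:-1]) / FPS)
    # cubic fit v_1 = f3(v_2..v_5), evaluated at the first frame time
    coeffs = np.polyfit(frames[1:5], v[1:5], 3)
    v[0] = np.polyval(coeffs, frames[0])
    return v
```

The same routine applies unchanged to the y coordinates for the longitudinal speed.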
secondly, the vehicle acceleration of each vehicle ID serial number in each video frame is calculated; specifically, the acceleration is calculated from the speed difference and the time difference between the current frame and the previous frame, and includes the lateral acceleration and the longitudinal acceleration, completing the third-level vehicle track motion feature identification text data set. The lateral and longitudinal accelerations are calculated as:

a_{t,n,x} = (v_{t,n,x} - v_{t-1,m,x}) / ((frame'_{t,n} - frame'_{t-1,m}) / FPS)
a_{t,n,y} = (v_{t,n,y} - v_{t-1,m,y}) / ((frame'_{t,n} - frame'_{t-1,m}) / FPS)

where v_{t,n,x} represents the lateral speed of the center point of the nth target frame of the tth frame under the corresponding vehicle ID serial number, v_{t-1,m,x} represents the lateral speed of the center point of the mth target frame of the (t-1)th frame under the same vehicle ID serial number, v_{t,n,y} represents the longitudinal speed of the center point of the nth target frame of the tth frame under that vehicle ID serial number, v_{t-1,m,y} represents the longitudinal speed of the center point of the mth target frame of the (t-1)th frame under that vehicle ID serial number, a_{t,n,x} represents the lateral acceleration of the center point of the nth vehicle target frame of the tth frame, and a_{t,n,y} represents the longitudinal acceleration of the center point of the nth vehicle target frame of the tth frame;
as with the speed, the accelerations of the first frame of each vehicle are fitted with a cubic polynomial:

a_{x,1} = f_3(a_{x,2}, a_{x,3}, a_{x,4}, a_{x,5})
a_{y,1} = f_3(a_{y,2}, a_{y,3}, a_{y,4}, a_{y,5})
The third-level vehicle track motion feature identification text data set thus formed is:

{frame'_{t,n}, id'_{t,n}, x'^c_{t,n}, y'^c_{t,n}, w'_{t,n}, h'_{t,n}, v_{t,n,x}, v_{t,n,y}, a_{t,n,x}, a_{t,n,y}}
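The acceleration step mirrors the speed step, differencing speeds instead of positions; as before, the FPS value and the frame 2-5 fitting window are assumptions:

```python
import numpy as np

FPS = 30.0  # assumed capture frame rate

def accelerations(frames, vs):
    """Acceleration from the speed difference and time difference of the
    current and previous frame; the first frame is filled with a cubic
    polynomial fitted to frames 2-5, mirroring the speed case."""
    frames = np.asarray(frames, dtype=float)
    vs = np.asarray(vs, dtype=float)
    a = np.empty_like(vs)
    a[1:] = (vs[1:] - vs[:-1]) / ((frames[1:] - frames[:-1]) / FPS)
    coeffs = np.polyfit(frames[1:5], a[1:5], 3)
    a[0] = np.polyval(coeffs, frames[0])
    return a
```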
In step 5, the lane number detection is as follows:

First, a straight line is fitted to the vehicle position coordinate data in the third-level vehicle track motion feature identification text data set:

y^c = A \cdot x^c + B

where the fitted line expresses y^c as a function of x^c, A represents the slope of the line and B represents its intercept; here A is calculated to be -0.008725 and B is calculated to be 1189;
secondly, the distance from each vehicle position coordinate in the third-level vehicle track motion feature identification text data set to the fitted line is calculated:

dist = |A \cdot x^c - y^c + B| / \sqrt{A^2 + 1}
The lane number is then judged by a threshold judgment method to form a fourth-level vehicle track motion feature identification text data set, the judgment formula being:

{lane_{t,n} = k, if dist_{k,1} \le dist \le dist_{k,2}}

where lane_{t,n} represents the lane number in which the center point of the nth vehicle target frame of the tth frame lies, k represents the determined lane number, dist_{k,1} represents the lower bound of the kth lane, and dist_{k,2} represents the upper bound of the kth lane;
the fourth-level vehicle track motion feature identification text data set thus formed is:

{frame'_{t,n}, id'_{t,n}, x'^c_{t,n}, y'^c_{t,n}, w'_{t,n}, h'_{t,n}, v_{t,n,x}, v_{t,n,y}, a_{t,n,x}, a_{t,n,y}, lane_{t,n}}
FIG. 5 shows a lane number detection test chart for the fourth-level vehicle track motion feature identification text data set, in which the continuous straight lines represent the lane spacing lines to be detected;
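The line fit, point-to-line distance and threshold binning above can be sketched as follows; the lane-bound list passed to `lane_number` is an illustrative stand-in for the dist_{k,1}, dist_{k,2} thresholds:

```python
import numpy as np

def fit_line(xs, ys):
    """Least-squares straight line y = A*x + B through the track points."""
    A, B = np.polyfit(xs, ys, 1)
    return A, B

def lane_number(x, y, A, B, bounds):
    """Distance from (x, y) to the fitted line, then threshold binning;
    bounds[k-1] = (lower, upper) pixel distance of lane k.
    Returns 0 when the point falls outside every lane band."""
    dist = abs(A * x - y + B) / np.sqrt(A * A + 1.0)
    for k, (lo, hi) in enumerate(bounds, start=1):
        if lo <= dist <= hi:
            return k
    return 0
```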
In step 5, the coordinate conversion is as follows: the number of pixel units covered in the road length direction is converted against the actual road length, and the fifth-level vehicle track motion feature identification text data set is formed after conversion. The conversion ratio is:

q = L / G

where q is the conversion ratio, L is the actual length of the photographed road and G is the number of pixel units covered in the road length direction of the photographed image; here q is calculated to be 0.03958;
the parameters of the fourth-level vehicle track motion feature identification text data set are converted as:

x^c_{t,n,q} = q \cdot x'^c_{t,n}, y^c_{t,n,q} = q \cdot y'^c_{t,n}, w_{t,n,q} = q \cdot w'_{t,n}, h_{t,n,q} = q \cdot h'_{t,n}, v_{t,n,x,q} = q \cdot v_{t,n,x}, v_{t,n,y,q} = q \cdot v_{t,n,y}, a_{t,n,x,q} = q \cdot a_{t,n,x}, a_{t,n,y,q} = q \cdot a_{t,n,y}

where x^c_{t,n,q} represents the center-point abscissa of the nth vehicle target frame of the tth frame after coordinate conversion, y^c_{t,n,q} represents the center-point ordinate of the nth vehicle target frame of the tth frame after coordinate conversion, w_{t,n,q} represents the width of the nth vehicle target frame of the tth frame after coordinate conversion, h_{t,n,q} represents the height of the nth vehicle target frame of the tth frame after coordinate conversion, v_{t,n,x,q} represents the lateral speed of the center point of the nth vehicle target frame of the tth frame after coordinate conversion, v_{t,n,y,q} represents the longitudinal speed of the center point of the nth vehicle target frame of the tth frame after coordinate conversion, a_{t,n,x,q} represents the lateral acceleration of the center point of the nth vehicle target frame of the tth frame after coordinate conversion, and a_{t,n,y,q} represents the longitudinal acceleration of the center point of the nth vehicle target frame of the tth frame after coordinate conversion;
the fifth-level vehicle track motion feature identification text data set thus formed is:

{frame'_{t,n}, id'_{t,n}, x^c_{t,n,q}, y^c_{t,n,q}, w_{t,n,q}, h_{t,n,q}, v_{t,n,x,q}, v_{t,n,y,q}, a_{t,n,x,q}, a_{t,n,y,q}, lane_{t,n}}
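Since every pixel-based quantity scales by the same factor q, the conversion is a uniform rescaling; the dict-of-features record format is an illustrative assumption:

```python
def to_metres(record, road_length_m, road_length_px):
    """Scale pixel-based features by q = L / G (actual road length over
    the number of pixel units it covers); positions, sizes, speeds and
    accelerations all scale by the same factor q."""
    q = road_length_m / road_length_px
    return {key: value * q for key, value in record.items()}
```

With the values in the text, q = L / G would evaluate to 0.03958 metres per pixel unit.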
finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (6)

1. A vehicle track motion characteristic identification method based on a high-altitude visual angle identification system is characterized by comprising the following steps:
wherein the high-altitude visual angle recognition system comprises: an aerial photography camera device, a calculation processing host and a display projection device;
the aerial photographing device is connected with the calculation processing host computer in a wireless mode; the computing processing host is connected with the display projection device in a wired mode;
the aerial photography camera device is used for collecting video image data of vehicles on the road under the high-altitude visual angle and sending the video image data to the calculation processing host computer in a wireless mode; the calculation processing host is used for processing the video image data of the road vehicles at the high altitude view angle acquired by the aerial photography camera device, further obtaining vehicle image recognition results and track generation results through a vehicle track motion characteristic recognition method at the high altitude view angle, and transmitting the vehicle image recognition results and the track generation results to the display projection device for display;
the aerial photographing device is positioned right above a road surface during remote photographing, namely an included angle between a photographing sight line of a camera of the aerial photographing device and the road surface is 90 degrees;
the vehicle track motion characteristic identification method comprises the following steps:
step 1: the calculation processing host machine wirelessly shoots video image data by using an aerial photography camera device to be positioned right above a road surface, and is used for forming a high-altitude image training data set, carrying out manual marking on the high-altitude image training data set, marking an external rectangular frame of a vehicle target and a vehicle type, and forming a high-altitude image training vehicle marking frame set;
step 2: the calculation processing host machine wirelessly shoots video image data by using an aerial photography device positioned right above a road surface, and the video image data is used for forming a high-altitude image sequence data set and subsequently extracting vehicle track data; the road in the image picture of the high-altitude image sequence data set is positioned in the middle of the image;
and step 3: introducing a YOLOv5 deep learning network model, sequentially inputting each frame of image in the high-altitude image training data set and a vehicle marking frame corresponding to each frame of image in the high-altitude image training vehicle marking frame set into the YOLOv5 deep learning network model for training, constructing a loss function model by using a GIOU method, optimizing a loss function value by using an Adam optimization algorithm, and identifying vehicle targets in the high-altitude image sequence data set by using the trained YOLOv5 deep learning network model to obtain a high-altitude image sequence vehicle identification frame set;
and 4, step 4: starting from the first frame of vehicle target external rectangular frame data in the high-altitude image sequence vehicle target identification frame set, carrying out the following processing procedures: applying Kalman filtering to a previous frame of vehicle target boundary frame to obtain vehicle target estimation frame data of a current frame, performing association matching on the vehicle target identification frame data of the current frame and the vehicle target boundary frame in the vehicle target estimation frame data by using a Hungarian association algorithm, wherein the matching mechanism is IOU distance, obtaining the ID serial number of the vehicle target identification frame data of the current frame, namely the ID serial number of the vehicle target of the current frame, and marking a new ID serial number by the vehicle target frame data of the current frame which is not matched; until the high-altitude image sequence is finished; combining the video Frame serial number, the vehicle ID serial number and the high-altitude image sequence vehicle target Frame set after the correlation matching process to form an original vehicle track motion characteristic identification text data set;
step 5: data preprocessing, motion feature extraction, lane number detection and coordinate conversion are carried out in sequence on the original vehicle track motion feature identification text data set to finally form a five-level vehicle track motion feature identification text data set.
2. The vehicle track motion feature identification method based on the high-altitude visual angle identification system according to claim 1, characterized in that: step 1, the high-altitude image training data set comprises:
{data_e(x,y), e ∈ [1,E], x ∈ [1,X], y ∈ [1,Y]}

where data_e(x, y) represents the pixel information of the xth row and the yth column of the eth frame image in the high-altitude image training data set, E is the number of frames in the high-altitude image training data set, X is the number of rows of the images in the high-altitude image training data set, and Y is the number of columns of the images in the high-altitude image training data set;
in step 1, the high-altitude image training vehicle marking frame set is:

{x^l_{e,n}, y^l_{e,n}, x^r_{e,n}, y^r_{e,n}, type_{e,n}}

where x^l_{e,n} represents the abscissa of the upper-left corner of the marking rectangular frame of the nth vehicle target in the eth frame image in the high-altitude image training vehicle marking frame set, y^l_{e,n} represents the ordinate of the upper-left corner of that frame, x^r_{e,n} represents the abscissa of the lower-right corner of that frame, y^r_{e,n} represents the ordinate of the lower-right corner of that frame, and type_{e,n} represents the mark type of the nth vehicle target in the eth frame image in the high-altitude image training vehicle marking frame set.
3. The vehicle track motion feature identification method based on the high-altitude visual angle identification system according to claim 1, characterized in that: in step 2, the fixed shooting frame rate of the aerial photographing device is FPS, the length of the photographed road is L, and the number of pixel units covered in the road length direction of the photographed image is G; the shooting size of the high-altitude image data is X by Y;

in step 2, the high-altitude image sequence data set is:

{data_t(x,y), t ∈ [1,T], x ∈ [1,X], y ∈ [1,Y]}

where data_t(x, y) represents the pixel information of the xth row and the yth column of the tth frame image in the high-altitude image sequence data set, T is the number of frames in the high-altitude image sequence data, X is the number of rows of the images in the high-altitude image sequence data set, and Y is the number of columns of the images in the high-altitude image sequence data set.
4. The vehicle track motion feature identification method based on the high-altitude visual angle identification system according to claim 1, characterized in that: in step 3, the YOLOv5 network framework is specifically the YOLOv5x network structure;
in step 3, the high-altitude image sequence vehicle identification frame set is:

{x^l_{t,n}, y^l_{t,n}, x^r_{t,n}, y^r_{t,n}, type_{t,n}}

where x^l_{t,n} represents the abscissa of the upper-left corner of the circumscribed rectangular frame of the nth vehicle target in the tth frame image in the high-altitude image sequence vehicle identification frame set, y^l_{t,n} represents the ordinate of the upper-left corner of that frame, x^r_{t,n} represents the abscissa of the lower-right corner of that frame, y^r_{t,n} represents the ordinate of the lower-right corner of that frame, and type_{t,n} represents the category of the nth vehicle target in the tth frame image in the high-altitude image sequence vehicle identification frame set.
5. The vehicle track motion feature identification method based on the high-altitude visual angle identification system according to claim 1, characterized in that: in step 4, when the current video frame serial number is recorded, the recorded video frame serial number set is:

Frame_{t,n} = {frame_{t,n}}

where frame_{t,n} represents the video frame serial number corresponding to the nth vehicle target of the tth frame;
and 4, the Kalman filtering processing process sequentially comprises the following steps: initializing a vehicle target state vector; initializing a state transition matrix, initializing a covariance matrix, initializing an observation matrix and initializing a system noise matrix; predicting the vehicle target state vector of the current frame according to the optimal estimated value of the vehicle target state vector of the previous frame to obtain a predicted value of the vehicle target state vector of the current frame; predicting a current frame vehicle target system error covariance matrix according to the previous frame vehicle target system error covariance matrix to obtain a current frame vehicle target system error covariance matrix predicted value; updating a Kalman coefficient by using the covariance matrix predicted value of the current frame vehicle target system; estimating according to the current frame vehicle target state vector predicted value and the system observation value to obtain the current frame vehicle target state vector optimal estimation value; updating a current frame vehicle target system error covariance matrix; extracting a current frame vehicle target estimation frame set from the current frame vehicle target state vector optimal estimation value;
in the process of initializing the vehicle target state vector in the Kalman filtering, the characteristics of the vehicle target bounding box are described by the abscissa of the box center, the ordinate of the box center, the box area and the box aspect ratio, and the motion state of the box is described by a linear constant-velocity model, i.e.:

x = [u, v, s, r, \dot{u}, \dot{v}, \dot{s}]^T

where x represents the motion state information of the bounding box, u represents the abscissa of the center of the bounding box, v represents the ordinate of the center of the bounding box, s represents the area of the bounding box, r represents the aspect ratio of the bounding box (generally a constant), \dot{u} represents the rate of change of the abscissa of the center of the bounding box, \dot{v} represents the rate of change of the ordinate of the center of the bounding box, and \dot{s} represents the rate of change of the area of the bounding box; the motion state information of the mth vehicle target bounding box of the (t-1)th frame is accordingly described as:

x_{t-1,m} = [u_{t-1,m}, v_{t-1,m}, s_{t-1,m}, r_{t-1,m}, \dot{u}_{t-1,m}, \dot{v}_{t-1,m}, \dot{s}_{t-1,m}]^T

where x_{t-1,m} represents the motion state information of the mth vehicle target bounding box of the (t-1)th frame, u_{t-1,m} represents the abscissa of the center of the mth vehicle target bounding box of the (t-1)th frame, v_{t-1,m} represents the ordinate of that center, s_{t-1,m} represents the area of that box, r_{t-1,m} represents its aspect ratio, \dot{u}_{t-1,m} represents the rate of change of the center abscissa, \dot{v}_{t-1,m} represents the rate of change of the center ordinate, and \dot{s}_{t-1,m} represents the rate of change of the area;
the center abscissa, center ordinate and bounding-box area of the mth vehicle target frame of the (t-1)th frame are calculated as:

u_{t-1,m} = (x^l_{t-1,m} + x^r_{t-1,m}) / 2
v_{t-1,m} = (y^l_{t-1,m} + y^r_{t-1,m}) / 2
s_{t-1,m} = (x^r_{t-1,m} - x^l_{t-1,m})(y^r_{t-1,m} - y^l_{t-1,m})

where x^l_{t-1,m} represents the abscissa of the upper-left corner of the mth vehicle target frame of the (t-1)th frame, x^r_{t-1,m} represents the abscissa of the lower-right corner of that frame, y^l_{t-1,m} represents the ordinate of the upper-left corner of that frame, and y^r_{t-1,m} represents the ordinate of the lower-right corner of that frame;
in the initialization of the state transition matrix in step 4, the state transition matrix F models the motion of the target state vector; the state transition matrix F corresponding to the adopted constant-velocity model is initialized (with a unit frame interval) as:

F =
[1 0 0 0 1 0 0
 0 1 0 0 0 1 0
 0 0 1 0 0 0 1
 0 0 0 1 0 0 0
 0 0 0 0 1 0 0
 0 0 0 0 0 1 0
 0 0 0 0 0 0 1]
in the initialization of a covariance matrix, a covariance matrix P represents the uncertainty of target position information, and the covariance matrix is an empirical parameter;
in the initialization of the system noise covariance matrix, because the process noise is not measurable, the system noise covariance matrix Q is generally assumed to conform to normal distribution;
in initializing the observation matrix, the observation matrix H relates the state to the observable variables (u, v, s, r), and its values are initialized as:

H =
[1 0 0 0 0 0 0
 0 1 0 0 0 0 0
 0 0 1 0 0 0 0
 0 0 0 1 0 0 0]
in the initialization observation noise covariance matrix, because the observation noise is not measurable, the observation noise covariance matrix R is generally assumed to conform to normal distribution;
in step 4, the Kalman filtering predicts the current-frame vehicle target state vector from the optimal estimate of the previous-frame vehicle target state vector; the predicted value of the mth vehicle target state vector of the tth frame is:

\hat{x}^-_{t,m} = F \hat{x}_{t-1,m} + B u_{t-1,m}

where \hat{x}_{t-1,m} represents the optimal estimate of the mth vehicle target state vector of the (t-1)th frame, \hat{x}^-_{t,m} represents the predicted value of the mth vehicle target state vector of the tth frame, F is the state transition matrix, B is the control matrix, and u_{t-1,m} here represents the control gain vector;
in step 4, the Kalman filtering predicts the current-frame vehicle target system error covariance matrix from the previous-frame matrix; the predicted value of the mth vehicle target system error covariance matrix of the tth frame is:

P^-_{t,m} = F P_{t-1,m} F^T + Q

where P_{t-1,m} represents the mth vehicle target system error covariance matrix of the (t-1)th frame, P^-_{t,m} represents the predicted value of the mth vehicle target system error covariance matrix of the tth frame, and Q is the covariance matrix of the process noise;
thirdly, in the Kalman filtering, the Kalman coefficient is updated using the predicted value of the current-frame system error covariance matrix; the mth vehicle target Kalman coefficient of the tth frame is:

K_{t,m} = P^-_{t,m} H^T (H P^-_{t,m} H^T + R)^{-1}

where H is the observation matrix, R is the covariance matrix of the observation noise, and K_{t,m} is the mth vehicle target Kalman coefficient of the tth frame;
in step 4, in the Kalman filtering, the optimal estimate of the current-frame vehicle target state vector is calculated from the predicted value of the current-frame vehicle target state vector and the system observation; the optimal estimate of the mth vehicle target state vector of the tth frame is:

\hat{x}_{t,m} = \hat{x}^-_{t,m} + K_{t,m}(z_t - H \hat{x}^-_{t,m})

where \hat{x}_{t,m} is the optimal estimate of the mth vehicle target state vector of the tth frame and z_t is the observed value;
in step 4, in the Kalman filtering update of the current-frame system error covariance matrix, the update formula of the mth vehicle target system error covariance matrix of the tth frame is:

P_{t,m} = (I - K_{t,m} H) P^-_{t,m}

where P_{t,m} is the mth vehicle target system error covariance matrix of the tth frame and I is the identity matrix;
in step 4, the current-frame vehicle target estimation frame set is extracted from the optimal estimate of the current-frame vehicle target state vector; the optimal estimate of the mth target state vector of the tth frame is described as:

\hat{x}_{t,m} = [u_{t,m}, v_{t,m}, s_{t,m}, r_{t,m}, \dot{u}_{t,m}, \dot{v}_{t,m}, \dot{s}_{t,m}]^T

where u_{t,m} represents the optimally estimated abscissa of the center of the mth vehicle target bounding box of the tth frame, v_{t,m} represents the optimally estimated ordinate of that center, s_{t,m} represents the optimally estimated area of that box, r_{t,m} represents the optimally estimated aspect ratio of that box, \dot{u}_{t,m} represents the optimally estimated rate of change of the center abscissa, \dot{v}_{t,m} represents the optimally estimated rate of change of the center ordinate, and \dot{s}_{t,m} represents the optimally estimated rate of change of the area;
the current-frame vehicle target estimation frame coordinates are calculated as:

x^{1}_{t,m} = u_{t,m} - \sqrt{s_{t,m} r_{t,m}} / 2
y^{1}_{t,m} = v_{t,m} - \sqrt{s_{t,m} / r_{t,m}} / 2
x^{2}_{t,m} = u_{t,m} + \sqrt{s_{t,m} r_{t,m}} / 2
y^{2}_{t,m} = v_{t,m} + \sqrt{s_{t,m} / r_{t,m}} / 2

wherein x^{1}_{t,m} and y^{1}_{t,m} represent the abscissa and ordinate of the upper-left corner of the mth vehicle target frame of the tth frame of the optimal estimation, and x^{2}_{t,m} and y^{2}_{t,m} represent the abscissa and ordinate of the lower-right corner; since s_{t,m} is the box area and r_{t,m} the aspect ratio, the box width is \sqrt{s_{t,m} r_{t,m}} and the box height is \sqrt{s_{t,m} / r_{t,m}};

therefore, the current-frame vehicle target estimation frame set is:

{(x^{1}_{t,m}, y^{1}_{t,m}, x^{2}_{t,m}, y^{2}_{t,m}), m = 1, ..., M}
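A small helper (hypothetical, not from the patent text) showing how corner coordinates are recovered from the estimated center (u, v), area s and aspect ratio r, using width \sqrt{s r} and height \sqrt{s / r}:

```python
import math

def state_to_box(u, v, s, r):
    """Convert (center-x, center-y, area, aspect ratio) to corner coords."""
    w = math.sqrt(s * r)   # width from area and aspect ratio (r = w / h)
    h = s / w              # height, equivalently sqrt(s / r)
    return (u - w / 2, v - h / 2, u + w / 2, v + h / 2)

# Example: area 800 px^2 with aspect ratio 2 gives a 40 x 20 box.
box = state_to_box(u=100.0, v=50.0, s=800.0, r=2.0)
```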
step 4, the Hungarian association algorithm performs matching by calculating the intersection-over-union (IOU) of the vehicle target frames;

step 4, the IOU calculation and matching of the Hungarian association algorithm are as follows: the IOU of the mth vehicle target estimation frame of the tth frame in the current-frame vehicle target estimation frame set and the nth vehicle target identification frame of the tth frame in the current-frame vehicle target identification frame set is calculated, wherein the intersection area is:

S_1 = max(0, min(x^{2}_{t,m}, x^{2}_{t,n}) - max(x^{1}_{t,m}, x^{1}_{t,n})) \times max(0, min(y^{2}_{t,m}, y^{2}_{t,n}) - max(y^{1}_{t,m}, y^{1}_{t,n}))

wherein S_1 represents the intersection area of the mth vehicle target estimation frame of the tth frame and the nth vehicle target identification frame of the tth frame;

the union area is:

S_2 = (x^{2}_{t,m} - x^{1}_{t,m})(y^{2}_{t,m} - y^{1}_{t,m}) + (x^{2}_{t,n} - x^{1}_{t,n})(y^{2}_{t,n} - y^{1}_{t,n}) - S_1

wherein S_2 represents the union area of the mth vehicle target estimation frame of the tth frame and the nth vehicle target identification frame of the tth frame;

the IOU is then:

IOU = S_1 / S_2

the IOU matching principle of the Hungarian association algorithm is: if the IOU of the mth vehicle target estimation frame of the tth frame and the nth vehicle target identification frame of the tth frame is the largest and the two frames belong to the same vehicle class, the mth vehicle target of the (t-1)th frame and the nth vehicle target of the tth frame belong to the same vehicle target, and the ID serial number of the nth vehicle target of the tth frame is marked with the same ID serial number as the mth vehicle target of the (t-1)th frame; the associated vehicle ID serial number set is:

ID_{t,n} = {id_{t,n}}

wherein id_{t,n} represents the vehicle ID serial number corresponding to the nth vehicle target of the tth frame;
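A sketch of the association step, assuming scipy is available: S_1 / S_2 gives the IOU, and scipy's `linear_sum_assignment` (a Hungarian-method solver) picks the pairing of estimation and identification frames with the largest total IOU. The box values are made up for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IOU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    s1 = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # intersection S_1
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    s2 = area_a + area_b - s1                        # union S_2
    return s1 / s2 if s2 > 0 else 0.0

def associate(est_boxes, det_boxes):
    """Return (m, n) index pairs maximizing total IOU (Hungarian method)."""
    cost = np.array([[-iou(e, d) for d in det_boxes] for e in est_boxes])
    rows, cols = linear_sum_assignment(cost)         # minimizes -IOU
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

est = [(0, 0, 10, 10), (20, 20, 30, 30)]     # estimation frames (frame t)
det = [(21, 21, 31, 31), (1, 1, 11, 11)]     # identification frames (frame t)
pairs = associate(est, det)
```

Each matched pair carries the track ID of its estimation frame forward to the new detection, as the matching principle above states.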
step 4, after the association process, the video frame serial number, the vehicle ID serial number and the high-altitude image sequence vehicle target frame set are combined to form the original vehicle track motion characteristic identification text data set:

{(Frame_t, id_{t,n}, x^{1}_{t,n}, y^{1}_{t,n}, x^{2}_{t,n}, y^{2}_{t,n})}
6. The vehicle track motion characteristic identification method based on the high-altitude visual angle identification system according to claim 1, characterized in that the data preprocessing in step 5 comprises the following steps:
firstly, the center point coordinates of the vehicle target frame are calculated as:

x^{c}_{t,n} = (x^{1}_{t,n} + x^{2}_{t,n}) / 2
y^{c}_{t,n} = (y^{1}_{t,n} + y^{2}_{t,n}) / 2

wherein x^{c}_{t,n} represents the abscissa of the center point of the nth vehicle target identification frame of the tth frame, and y^{c}_{t,n} represents the ordinate of the center point of the nth vehicle target identification frame of the tth frame;
secondly, the width and the height of the vehicle target frame are calculated as:

w_{t,n} = x^{2}_{t,n} - x^{1}_{t,n}
h_{t,n} = y^{2}_{t,n} - y^{1}_{t,n}

wherein w_{t,n} represents the width of the nth vehicle target frame of the tth frame, and h_{t,n} represents the height of the nth vehicle target frame of the tth frame;

the first-level vehicle track motion characteristic identification text data set is thus formed:

{(Frame_t, id_{t,n}, x^{c}_{t,n}, y^{c}_{t,n}, w_{t,n}, h_{t,n})}
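The preprocessing above reduces to simple arithmetic on the corner coordinates; a hypothetical helper:

```python
def box_to_cwh(x1, y1, x2, y2):
    """Derive center point, width and height from corner coordinates."""
    xc = (x1 + x2) / 2.0   # center abscissa
    yc = (y1 + y2) / 2.0   # center ordinate
    w = x2 - x1            # width
    h = y2 - y1            # height
    return xc, yc, w, h

xc, yc, w, h = box_to_cwh(80.0, 40.0, 120.0, 60.0)
```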
when the data preprocessing is carried out on the first-level vehicle track motion characteristic identification text data set, the vehicle track motion characteristic identification text data are first screened by a threshold discrimination method to form the second-level vehicle track motion characteristic identification text data set, wherein the discrimination conditions are:

X_1 \le x^{c}_{t,n} \le X_2
Y_1 \le y^{c}_{t,n} \le Y_2

and the coordinates retained after screening are denoted x^{c*}_{t,n} and y^{c*}_{t,n};

wherein x^{c*}_{t,n} represents the abscissa retained after threshold discrimination, y^{c*}_{t,n} represents the ordinate retained after threshold discrimination, X_1 represents the lower limit of the abscissa judgment threshold, X_2 represents the upper limit of the abscissa judgment threshold, Y_1 represents the lower limit of the ordinate judgment threshold, and Y_2 represents the upper limit of the ordinate judgment threshold;
secondly, the vehicle tracks with the same ID serial number are counted; if the number of video frames is less than a fixed value, the track is judged to be a fragmentary track segment and cleared, wherein the judgment condition is:

N_{id} < threshold

wherein N_{id} represents the number of video frames in which the vehicle ID takes a given value, and threshold represents the fixed value;
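The two screening rules can be sketched together; the row layout (frame, id, xc, yc) and the concrete thresholds below are illustrative assumptions, not values from the patent:

```python
from collections import Counter

def screen_tracks(rows, x_lim, y_lim, min_frames):
    """Keep points inside [X1,X2] x [Y1,Y2], then drop fragmentary IDs."""
    kept = [r for r in rows
            if x_lim[0] <= r[2] <= x_lim[1] and y_lim[0] <= r[3] <= y_lim[1]]
    counts = Counter(r[1] for r in kept)          # frames per vehicle ID
    return [r for r in kept if counts[r[1]] >= min_frames]

rows = [(1, 7, 5.0, 5.0), (2, 7, 6.0, 5.0), (3, 7, 7.0, 5.0),
        (1, 8, 50.0, 5.0),   # outside the x threshold -> screened out
        (2, 9, 5.0, 5.0)]    # only one frame -> fragmentary, cleared
out = screen_tracks(rows, x_lim=(0, 10), y_lim=(0, 10), min_frames=2)
```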
the formed second-level vehicle track motion characteristic identification text data set is:

{(Frame^{*}_t, id^{*}_{t,n}, x^{c*}_{t,n}, y^{c*}_{t,n}, w^{*}_{t,n}, h^{*}_{t,n})}

wherein Frame^{*}_t represents the video frame serial number corresponding to the vehicle target frame after data screening, id^{*}_{t,n} represents the vehicle ID serial number corresponding to the vehicle target frame after data screening, w^{*}_{t,n} represents the width of the vehicle target frame after data screening, and h^{*}_{t,n} represents the height of the vehicle target frame after data screening;
step 5, the motion characteristic extraction process comprises the following steps:

firstly, the vehicle speed of each vehicle ID serial number under each video frame serial number is calculated; specifically, the speed of the current frame is calculated from the position difference and the time difference between the current frame and the previous frame, and comprises the lateral speed and the longitudinal speed of the vehicle; the formed data set is the third-level vehicle track motion characteristic identification text data set, and the lateral and longitudinal speeds are calculated as:

v_{t,n,x} = (x^{c*}_{t,n} - x^{c*}_{t-1,n}) / (Frame^{*}_t - Frame^{*}_{t-1})
v_{t,n,y} = (y^{c*}_{t,n} - y^{c*}_{t-1,n}) / (Frame^{*}_t - Frame^{*}_{t-1})

wherein x^{c*}_{t,n} and y^{c*}_{t,n} represent the abscissa and ordinate of the center point of the vehicle target frame of the tth frame under the vehicle ID serial number corresponding to the nth vehicle target of the tth frame, x^{c*}_{t-1,n} and y^{c*}_{t-1,n} represent the abscissa and ordinate of the center point of the vehicle target frame of the (t-1)th frame under the same vehicle ID serial number, v_{t,n,x} represents the lateral speed of the center point of the nth vehicle target frame of the tth frame, v_{t,n,y} represents the longitudinal speed of the center point of the nth vehicle target frame of the tth frame, and Frame^{*}_{t-1} represents the video frame serial number of the (t-1)th frame after data screening;

since the speed calculation of each frame uses the position data of the previous frame, the vehicle speed of the first frame in each ID's frame sequence cannot be calculated directly, so a cubic polynomial fit is adopted:

v_{x,1} = f_3(v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5})
v_{y,1} = f_3(v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5})

wherein f_3(v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5}) is a cubic function of v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5}, v_{x,1} is the x-direction speed of the first frame, f_3(v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5}) is a cubic function of v_{y,2}, v_{y,3}, v_{y,4}, v_{y,5}, and v_{y,1} is the y-direction speed of the first frame; v_{x,2}, v_{x,3}, v_{x,4}, v_{x,5} are the speeds of each vehicle ID at its 2nd, 3rd, 4th and 5th frames respectively;
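A sketch of the per-frame speed computation with the cubic-polynomial back-fill for the first frame. A unit frame interval is assumed, and numpy's polynomial fit stands in for the unspecified f_3:

```python
import numpy as np

def speeds(positions, dt=1.0):
    """Frame-to-frame speeds; the first frame is back-filled by a cubic
    fitted through the speeds of frames 2..5 and evaluated at frame 1."""
    pos = np.asarray(positions, dtype=float)
    v = np.empty_like(pos)
    v[1:] = (pos[1:] - pos[:-1]) / dt        # v_t = (x_t - x_{t-1}) / dt
    frames = np.arange(2, 6)                 # frame numbers 2, 3, 4, 5
    coef = np.polyfit(frames, v[1:5], 3)     # cubic through four speeds
    v[0] = np.polyval(coef, 1)               # extrapolate to frame 1
    return v

# Constant motion of 2 px/frame: every speed, including the back-filled
# first frame, should come out as 2.
v = speeds([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
```

The same routine applies unchanged to the y coordinate, and (as the claims note below) to the accelerations.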
secondly, the vehicle acceleration of each vehicle ID serial number under each video frame serial number is calculated; specifically, the acceleration is calculated from the speed difference and the time difference between the current frame and the previous frame, and comprises the lateral acceleration and the longitudinal acceleration of the vehicle, thereby forming the third-level vehicle track motion characteristic identification text data set; the lateral and longitudinal accelerations are calculated as:

a_{t,n,x} = (v_{t,n,x} - v_{t-1,n,x}) / (Frame^{*}_t - Frame^{*}_{t-1})
a_{t,n,y} = (v_{t,n,y} - v_{t-1,n,y}) / (Frame^{*}_t - Frame^{*}_{t-1})

wherein v_{t,n,x} represents the lateral speed of the center point of the nth target frame of the tth frame under the vehicle ID serial number corresponding to the nth vehicle target frame of the tth frame, v_{t-1,n,x} represents the lateral speed of the center point of the corresponding target frame of the (t-1)th frame under the same vehicle ID serial number, v_{t,n,y} represents the longitudinal speed of the center point of the nth target frame of the tth frame, v_{t-1,n,y} represents the longitudinal speed of the center point of the corresponding target frame of the (t-1)th frame, a_{t,n,x} represents the lateral acceleration of the center point of the nth vehicle target frame of the tth frame, and a_{t,n,y} represents the longitudinal acceleration of the center point of the nth vehicle target frame of the tth frame;

in the same way as for the speed, the accelerations of the first frames of different vehicles are fitted with a cubic polynomial:

a_{x,1} = f_3(a_{x,2}, a_{x,3}, a_{x,4}, a_{x,5})
a_{y,1} = f_3(a_{y,2}, a_{y,3}, a_{y,4}, a_{y,5})

the formed third-level vehicle track motion characteristic identification text data set is:

{(Frame^{*}_t, id^{*}_{t,n}, x^{c*}_{t,n}, y^{c*}_{t,n}, w^{*}_{t,n}, h^{*}_{t,n}, v_{t,n,x}, v_{t,n,y}, a_{t,n,x}, a_{t,n,y})}
step 5, the lane number detection is as follows:

firstly, linear fitting is carried out on the vehicle position coordinate data in the third-level vehicle track motion characteristic identification text data set to obtain a fitted straight line:

y^{c*}_{t,n} = A x^{c*}_{t,n} + B

wherein the fitted line expresses the ordinate y^{c*}_{t,n} as a linear function of the abscissa x^{c*}_{t,n}, A represents the slope of the line, and B represents the intercept of the line;
secondly, the distance from each vehicle position coordinate in the third-level vehicle track motion characteristic identification text data set to the fitted straight line is calculated as:

dist = |A x^{c*}_{t,n} - y^{c*}_{t,n} + B| / \sqrt{A^{2} + 1}

the lane number is then judged by a threshold judgment method to form the fourth-level vehicle track motion characteristic identification text data set, wherein the lane number is judged as:

{lane_{t,n} = k, if dist_{k,1} \le dist \le dist_{k,2}}

wherein lane_{t,n} represents the lane number where the center point of the nth vehicle target frame of the tth frame is located, k represents the determined lane number, dist_{k,1} represents the lower bound of the kth lane boundary, and dist_{k,2} represents the upper bound of the kth lane boundary;

the formed fourth-level vehicle track motion characteristic identification text data set is:

{(Frame^{*}_t, id^{*}_{t,n}, x^{c*}_{t,n}, y^{c*}_{t,n}, w^{*}_{t,n}, h^{*}_{t,n}, v_{t,n,x}, v_{t,n,y}, a_{t,n,x}, a_{t,n,y}, lane_{t,n})}
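A sketch of the lane-number detection following the line fit and point-to-line distance formulas above; the lane boundary intervals and the track points are illustrative assumptions:

```python
import numpy as np

def lane_numbers(xs, ys, lane_bounds):
    """Fit y = A*x + B to the centre points, compute each point's
    perpendicular distance to the line, then bin it into lane intervals."""
    A, B = np.polyfit(xs, ys, 1)                 # slope and intercept
    xs, ys = np.asarray(xs), np.asarray(ys)
    dist = np.abs(A * xs - ys + B) / np.sqrt(A**2 + 1)
    lanes = []
    for d in dist:
        for k, (lo, hi) in enumerate(lane_bounds, start=1):
            if lo <= d <= hi:
                lanes.append(k)
                break
        else:
            lanes.append(0)                      # outside all lane bounds
    return lanes

# Three parallel tracks y = x, y = x + 5, y = x + 10; the fitted line is
# the middle one, so the middle track sits at distance ~0 from it.
xs = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
ys = [0.0, 1.0, 5.0, 6.0, 10.0, 11.0]
lanes = lane_numbers(xs, ys, lane_bounds=[(0, 2), (2, 4)])
```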
step 5, the coordinate conversion is as follows: the number of pixel units covered in the road length direction is converted to the actual road length, and the fifth-level vehicle track motion characteristic identification text data set is formed after conversion, wherein the conversion ratio is:

q = L_{road} / N_{pixel}

wherein q is the conversion ratio, L_{road} is the actual road length, and N_{pixel} is the number of pixel units covered in the road length direction;

the parameter conversion process of the fourth-level vehicle track motion characteristic identification text data set is:

x^{c}_{t,n,q} = q x^{c*}_{t,n},  y^{c}_{t,n,q} = q y^{c*}_{t,n},  w_{t,n,q} = q w^{*}_{t,n},  h_{t,n,q} = q h^{*}_{t,n}
v_{t,n,x,q} = q v_{t,n,x},  v_{t,n,y,q} = q v_{t,n,y},  a_{t,n,x,q} = q a_{t,n,x},  a_{t,n,y,q} = q a_{t,n,y}

wherein x^{c}_{t,n,q} represents the abscissa of the center point of the nth vehicle target frame after coordinate conversion, y^{c}_{t,n,q} represents the ordinate of the center point of the nth vehicle target frame of the tth frame after coordinate conversion, w_{t,n,q} represents the width of the nth vehicle target frame of the tth frame after coordinate conversion, h_{t,n,q} represents the height of the nth vehicle target frame of the tth frame after coordinate conversion, v_{t,n,x,q} represents the lateral speed of the center point after coordinate conversion, v_{t,n,y,q} represents the longitudinal speed of the center point after coordinate conversion, a_{t,n,x,q} represents the lateral acceleration of the center point after coordinate conversion, and a_{t,n,y,q} represents the longitudinal acceleration of the center point after coordinate conversion;

the formed fifth-level vehicle track motion characteristic identification text data set is:

{(Frame^{*}_t, id^{*}_{t,n}, x^{c}_{t,n,q}, y^{c}_{t,n,q}, w_{t,n,q}, h_{t,n,q}, v_{t,n,x,q}, v_{t,n,y,q}, a_{t,n,x,q}, a_{t,n,y,q}, lane_{t,n})}
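The unit conversion amounts to one scaling factor applied to every positional and kinematic parameter. The row layout and the road-length figures below are illustrative assumptions:

```python
def to_metric(row, road_length_m, road_length_px):
    """Scale pixel-based track parameters to metric units by q."""
    q = road_length_m / road_length_px    # conversion ratio (metres/pixel)
    frame, vid, xc, yc, w, h, vx, vy, ax, ay, lane = row
    return (frame, vid, xc * q, yc * q, w * q, h * q,
            vx * q, vy * q, ax * q, ay * q, lane)

# Example: 50 m of road spans 1000 pixels, so q = 0.05 m/px.
row = (1, 7, 100.0, 50.0, 40.0, 20.0, 2.0, 0.0, 0.1, 0.0, 1)
metric = to_metric(row, road_length_m=50.0, road_length_px=1000.0)
```

Frame number, vehicle ID and lane number are dimensionless and pass through unchanged.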
CN202111201539.XA 2021-10-15 2021-10-15 Vehicle track motion feature recognition method based on high-altitude visual angle recognition system Active CN114022791B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111201539.XA CN114022791B (en) 2021-10-15 2021-10-15 Vehicle track motion feature recognition method based on high-altitude visual angle recognition system


Publications (2)

Publication Number Publication Date
CN114022791A true CN114022791A (en) 2022-02-08
CN114022791B CN114022791B (en) 2024-05-28

Family

ID=80056377


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20070105100A (en) * 2006-04-25 2007-10-30 유일정보시스템(주) System for tracking car objects using mosaic video image and a method thereof
US20170301109A1 (en) * 2016-04-15 2017-10-19 Massachusetts Institute Of Technology Systems and methods for dynamic planning and operation of autonomous systems using image observation and information theory
US20180137376A1 (en) * 2016-10-04 2018-05-17 Denso Corporation State estimating method and apparatus
WO2020000251A1 (en) * 2018-06-27 2020-01-02 潍坊学院 Method for identifying video involving violation at intersection based on coordinated relay of video cameras
CN112329569A (en) * 2020-10-27 2021-02-05 武汉理工大学 Freight vehicle state real-time identification method based on image deep learning system
CN112884816A (en) * 2021-03-23 2021-06-01 武汉理工大学 Vehicle feature deep learning recognition track tracking method based on image system
CN113269098A (en) * 2021-05-27 2021-08-17 中国人民解放军军事科学院国防科技创新研究院 Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant