CN114037736A - Vehicle detection and tracking method based on self-adaptive model - Google Patents

Vehicle detection and tracking method based on self-adaptive model

Info

Publication number
CN114037736A
CN114037736A (application CN202111346003.7A)
Authority
CN
China
Prior art keywords
vehicle
dynamic
box
time
moment
Prior art date
Legal status
Granted
Application number
CN202111346003.7A
Other languages
Chinese (zh)
Other versions
CN114037736B (en)
Inventor
李占坤 (Li Zhankun)
傅春耘 (Fu Chunyun)
王苏娟 (Wang Sujuan)
王西洋 (Wang Xiyang)
Current Assignee
Chongqing University
Original Assignee
Chongqing University
Priority date
Filing date
Publication date
Application filed by Chongqing University
Priority to CN202111346003.7A
Publication of CN114037736A
Application granted
Publication of CN114037736B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image

Abstract

The invention relates to a vehicle detection and tracking method based on an adaptive model, and belongs to the field of automatic driving. The method comprises the following steps: 1) preprocess the point cloud data at time t-1 and time t; 2) transform the point cloud at time t-1 into the lidar coordinate system at time t, and construct a grid map in a polar coordinate system from each of the two consecutive frames; 3) determine the possible dynamic objects at time t, and find the associated object of each dynamic object at time t-1 through data association; 4) fit the dynamic objects and their associated objects with the convex hull model, and screen out dynamic vehicle candidates with the motion-evidence criterion; 5) infer the true size and pose of each vehicle; 6) find the associated object of each dynamic vehicle candidate at time t+1 through data association, then screen the dynamic vehicles with the motion-consistency criterion; 7) judge whether a following relationship exists between a newborn vehicle and the other vehicles; 8) track the vehicles with a labeled multi-Bernoulli filter.

Description

Vehicle detection and tracking method based on self-adaptive model
Technical Field
The invention belongs to the field of automatic driving, and relates to a vehicle detection and tracking method based on an adaptive model.
Background
Model-based methods are widely used to detect dynamic objects of known shape: they offer high accuracy while preserving the real-time performance of the algorithm, which makes them well suited to vehicle detection. Existing vehicle-detection models fall into two categories: non-fixed-size models and fixed-size models.
Non-fixed-size vehicle models mainly include the surgical model, the L-shape model, and the convex hull model. A non-fixed-size model trusts the vehicle measurements completely and takes the pose obtained by fitting the measurements directly as the true pose of the vehicle. Such models estimate pose accurately for vehicles near the lidar, but produce large estimation errors for vehicles far from the lidar or vehicles with severe self-occlusion.
Fixed-size vehicle models mainly include the possible-domain model and the improved convex hull model. A fixed-size model infers the pose from a reliable point after model fitting using a fixed size, so even when the measured point cloud is incomplete, for example when the lidar scans only the head or tail of a vehicle, a fairly accurate pose estimate can still be obtained. This pose inference keeps the vehicle pose at different moments anchored to the same position in the vehicle coordinate system, which benefits detection. However, a fixed-size model applies pose inference regardless of the vehicle's position relative to the sensor, even when the vehicle is nearby, so its pose estimates for nearby vehicles are worse than those of non-fixed-size models.
Disclosure of Invention
In view of the above, the present invention provides a vehicle detecting and tracking method based on an adaptive model.
To achieve the above purpose, the invention provides the following technical scheme:
A vehicle detection and tracking method based on an adaptive model, comprising the following steps:
Step 1: Preprocess the point clouds at time t-1 and time t respectively; the preprocessing comprises ground removal and clustering.
Step 2: Transform the classes at time t-1 into the lidar coordinate system at time t according to the GPS information, then construct a two-dimensional grid map in a polar coordinate system from the classes of the two consecutive frames.
Step 3: Compare the cell states of the grid maps at consecutive times to obtain the possible dynamic objects at time t, then find the associated object of each dynamic object in the previous frame through data association.
Step 4: Fit all dynamic objects and their associated objects with the convex hull model, and screen out dynamic vehicle candidates using the motion evidence.
Step 5: Infer the true size of the vehicle from the fitted bounding box, and compute the inferred pose of the dynamic vehicle from the inferred size.
Step 6: Preprocess the point cloud at time t+1; associate the vehicle candidates at time t with the classes at time t+1 to obtain each dynamic vehicle candidate's associated object at time t+1; fit that associated object with the convex hull model and perform size and pose inference on it; finally verify the dynamic vehicle candidate with the motion-consistency criterion, and if the check passes, accept the candidate as a dynamic vehicle.
Step 7: Judge whether a group-driving behavior exists between the newborn vehicle and the other vehicles; if a following relationship exists, initialize the newborn vehicle with the velocity information from the car-following relationship; otherwise initialize it with the default initialization method.
Step 8: Track the vehicles with a labeled multi-Bernoulli (LMB) filter.
The ground removal in step 1 uses a method based on plane model fitting; the clustering uses a Euclidean-distance-based clustering algorithm.
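For concreteness, here is a minimal sketch of this preprocessing in Python, assuming unorganized Nx3 lidar points. The RANSAC-style plane search, the 0.2 m inlier threshold, the 0.5 m clustering radius and the minimum cluster size are illustrative assumptions, not values taken from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_ground(points, n_iters=100, dist_thresh=0.2, rng=None):
    """Fit a plane by random 3-point sampling and drop its inliers (the ground)."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]

def euclidean_cluster(points, radius=0.5, min_size=10):
    """Region growing over a kd-tree: points closer than `radius` share a class."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    n_clusters = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = n_clusters
        queue = [seed]
        while queue:
            idx = queue.pop()
            for nb in tree.query_ball_point(points[idx], radius):
                if labels[nb] == -1:
                    labels[nb] = n_clusters
                    queue.append(nb)
        n_clusters += 1
    return [points[labels == c] for c in range(n_clusters)
            if (labels == c).sum() >= min_size]
```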
The step 2 specifically comprises the following steps:
2-1: At time t-1, for each class obtained by preprocessing, denote the point cloud contained in the class as $P^{t-1}$.
2-2: Using the GPS data of the vehicle-mounted sensors, transform each class at time t-1 into the lidar coordinate system at time t:
$$P^{t} = (R_L^I)^{-1} R_t^{-1}\left(R_{t-1} R_L^I P^{t-1} + T_{t-1} - T_t\right)$$
where $R_L^I$ is the rotation-translation matrix from the lidar coordinate system to the IMU coordinate system, $(R_L^I)^{-1}$ is its inverse, and $R_t = R(\gamma_t) R(\psi_t) R(\phi_t)$; $R(\gamma_t)$, $R(\psi_t)$ and $R(\phi_t)$ are the roll, pitch and yaw rotation matrices at time t; $R(\gamma_{t-1})$, $R(\psi_{t-1})$ and $R(\phi_{t-1})$ are the roll, pitch and yaw rotation matrices at time t-1; $T_t$ and $T_{t-1}$ are the translation vectors at time t and time t-1, respectively. This coordinate transformation is performed for every class at time t-1.
2-3: Construct a grid map in the polar coordinate system from the point cloud at time t-1 and from the point cloud at time t, respectively.
The step 3 specifically comprises the following steps:
3-1: For each class at time t, count the grid cells occupied by the class whose state changes between the grid maps of the two consecutive frames. If the number of changed cells $N$ satisfies
$$N \geq \frac{W_a}{A_r \cdot \|O\|}$$
the class is considered a possible dynamic object, where $W_a$ is the average vehicle width, $A_r$ is the angular resolution of the grid map, and $\|O\|$ is the distance from the centroid of the class to the origin of the lidar coordinate system.
3-2: Temporarily take the centroid of each class as its pose, and perform data association between all possible dynamic objects at time t and the classes at time t-1 with the Hungarian algorithm. If the data association of a dynamic object at time t-1 fails, the dynamic object is regarded as a false detection; otherwise, compute its direction vector
$$D_t = \left(x_t - x_{t-1},\; y_t - y_{t-1}\right)$$
where $(x_t, y_t)$ are the x and y coordinates of the centroid of the dynamic object, and $(x_{t-1}, y_{t-1})$ are the x and y coordinates of the centroid of its associated object.
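A small sketch of step 3 follows, with the changed-cell threshold written in the form suggested by the symbols $W_a$, $A_r$ and $\|O\|$ above, and the Hungarian association done on centroid distance via scipy; the 3 m association gate is an assumed value.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def is_possible_dynamic(n_changed, centroid, W_a=1.8, A_r=np.deg2rad(1.0)):
    """Step 3-1 gate: changed cells must span at least one vehicle width."""
    dist = np.linalg.norm(centroid[:2])           # ||O||: range of the centroid
    return n_changed >= W_a / (A_r * dist)

def associate(cur_centroids, prev_centroids, gate=3.0):
    """Step 3-2: Hungarian assignment on centroid distance; unmatched objects
    at the current time are treated as false detections."""
    cost = np.linalg.norm(cur_centroids[:, None, :] - prev_centroids[None, :, :],
                          axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]
```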
The step 4 specifically comprises the following steps:
4-1: Based on $D_t$, fit each dynamic object and its associated object into a bounding box with the convex hull model.
4-2: Examine the motion evidence between every dynamic object and its associated object; if the motion-evidence check passes, the dynamic object is considered a dynamic vehicle candidate.
The step 5 specifically comprises the following steps:
5-1: For a bounding box, $p_g$ denotes its geometric center, and the one of the four end points of the bounding box closest to the origin of the lidar coordinate system is defined as the reliable point $p_r$. Denote the length and width of the bounding box as $L_{box}$ and $W_{box}$; the other end point of the short side through $p_r$ is $p_{short}$, and the other end point of the long side through $p_r$ is $p_{long}$. $H_{box}$ is the orientation vector of the bounding box, expressed as $H_{box} = p_{long} - p_r$; its direction is consistent with the driving direction of the vehicle, and $\|H_{box}\| = L_{box}$.
5-2: Let $p_v$ denote the midpoint of $p_r$ and $p_{short}$, with coordinates $(x_v, y_v)$, and let $V_v$ denote the vector pointing from the lidar coordinate origin to $p_v$. Compute the masking angle $\theta_o$, where $\delta$ is a preset value that makes $\theta_o$ slightly larger in order to avoid some critical situations, and compute the relative orientation angle $\theta_{rh}$ between $V_v$ and $H_{box}$. If $\theta_o > \theta_{rh}$, the vehicle is in a self-occluded state (see the sketch following step 5-4).
5-3: and deducing the real size and the pose of the vehicle according to the bounding box. The method specifically comprises the following steps:
5-3-1: setting a distance dthreVehicle Standard dimension Table T, vehicle average Length and Width LaAnd Wa
5-3-2:pgThe distance to the origin of the coordinate system is denoted dgIf d isg<dthreIt means that the vehicle is close to the sensor, otherwise it is far from the sensor.
5-3-3: if the vehicle is located near the sensor and the vehicle is self-obscuring, then tables T and W are based on the standard vehicle dimensionsboxCalculating the estimated length L of the vehicle by linear interpolationinferWhile W isinfer=Wbox
5-3-4: if the vehicle is near the sensor but there is no self-masking, then Linfer=Lbox,Winfer=Wbox
5-3-5: if the vehicle is far from the sensor, the standard vehicle size tables T and W are also usedboxCalculating the estimated length L of the vehicle by linear interpolationinferWhile W isinfer=Wbox
5-3-6: according to pr,plong,pshortAnd LinferAnd WinferAnd calculating the pose inference center of the vehicle.
5-4: calculating the pose inference centers of all vehicle candidates by the method from step 5-1 to step 5-3
Figure BDA00033541294200000311
Calculating the pose inference centers of all associated objects by using the method from step 5-1 to step 5-3
Figure BDA00033541294200000312
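A sketch of steps 5-2 and 5-3 follows. The patent's masking-angle expression appears only as an image in the source, so $\arctan(W_{box}/L_{box}) + \delta$ is an assumed stand-in chosen to match the described behavior ($\delta$ slightly enlarges $\theta_o$; the 0.044$\pi$ default mirrors the embodiment below). Likewise, placing the pose-inference center half the inferred length and width away from the reliable corner along the box axes is one consistent reading of 5-3-6, and `interp_length` stands for the Table 1 lookup sketched later.

```python
import numpy as np

def is_self_occluded(p_r, p_short, H_box, L_box, W_box, delta=0.044 * np.pi):
    """Step 5-2: compare the masking angle with the relative orientation angle."""
    p_v = 0.5 * (p_r + p_short)                      # midpoint of the short side
    V_v = p_v                                        # vector lidar origin -> p_v
    theta_o = np.arctan(W_box / L_box) + delta       # masking angle (assumed form)
    cos_rh = abs(V_v @ H_box) / (np.linalg.norm(V_v) * np.linalg.norm(H_box))
    theta_rh = np.arccos(np.clip(cos_rh, 0.0, 1.0))  # relative orientation angle
    return theta_o > theta_rh

def infer_size(L_box, W_box, d_g, occluded, interp_length, d_thre=40.0):
    """Steps 5-3-2 to 5-3-5: trust the box up close, otherwise look the length up."""
    if d_g < d_thre and not occluded:
        return L_box, W_box
    return interp_length(W_box), W_box

def pose_center(p_r, p_long, p_short, L_infer, W_infer):
    """Step 5-3-6 (assumed center construction from the reliable corner p_r)."""
    u = (p_long - p_r) / np.linalg.norm(p_long - p_r)    # long-side direction
    v = (p_short - p_r) / np.linalg.norm(p_short - p_r)  # short-side direction
    return p_r + 0.5 * L_infer * u + 0.5 * W_infer * v
```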
The step 6 specifically comprises the following steps:
6-1: Perform data association between the vehicle candidates at time t and the classes at time t+1 based on centroids. If a vehicle candidate cannot find an associated object at time t+1, the vehicle candidate is regarded as a false detection; if the association succeeds, compute the direction vector $D_{t+1}$ of the class at time t+1.
6-2: Based on the direction vector $D_{t+1}$, fit the associated object of the vehicle candidate at time t+1 with the convex hull model; denote its orientation vector as $H_{box}^{t+1}$, and compute its pose inference center $c_{t+1}$ by the method of steps 5-1 to 5-3.
6-3: Compute the angle $\theta_1$ between $H_{box}^{t-1}$ and $H_{box}^{t}$, and the angle $\theta_2$ between $H_{box}^{t}$ and $H_{box}^{t+1}$. The speed of the vehicle from time t-1 to t is expressed as $v_t$, the speed from t to t+1 as $v_{t+1}$; compute the angle $\theta_v$ between $v_t$ and $v_{t+1}$. If the dynamic vehicle candidate satisfies the following three conditions simultaneously:
$$\max(\theta_1, \theta_2) < \theta_{ori}, \qquad \min(v_t, v_{t+1}) > v_{min}, \qquad \theta_v < \theta_{vel}$$
the dynamic vehicle candidate is considered a dynamic vehicle, where $\theta_{ori}$ is the vehicle orientation change angle threshold, $v_{min}$ is the dynamic vehicle speed threshold, and $\theta_{vel}$ is the dynamic vehicle speed direction change angle threshold.
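A sketch of this motion-consistency test over three consecutive pose-inference centers and orientation vectors is given below. The frame interval `dt` and the two angle thresholds are assumed values; the patent gives its thresholds only as images, apart from $v_{min}$ = 0.1 m/s in the embodiment.

```python
import numpy as np

def angle_between(a, b):
    c = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

def is_dynamic_vehicle(c_prev, c_t, c_next, H_prev, H_t, H_next, dt=0.1,
                       th_ori=np.deg2rad(30), v_min=0.1, th_vel=np.deg2rad(30)):
    """Three checks of step 6-3: orientation change, speed, speed-direction change."""
    v_t, v_next = (c_t - c_prev) / dt, (c_next - c_t) / dt
    ori_ok = max(angle_between(H_prev, H_t), angle_between(H_t, H_next)) < th_ori
    speed_ok = min(np.linalg.norm(v_t), np.linalg.norm(v_next)) > v_min
    dir_ok = angle_between(v_t, v_next) < th_vel
    return ori_ok and speed_ok and dir_ok
```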
The step 7 specifically comprises the following steps:
7-1: Denote the trackers surviving at time t+1 as $T_1, T_2, \dots, T_m$, and the newborn vehicles detected at the current moment as $G_1, G_2, \dots, G_n$.
7-2: For a newborn vehicle $G_q$ and a tracker $T_p$, let $c_q$ denote the pose inference center of $G_q$ at time t+1 and $p_p$ the filtered state of $T_p$ at time t+1. Compute the orientation angle difference $\Delta\theta$ between $G_q$ and $T_p$ at time t+1; compute the longitudinal distance $d_{lon}$ from $c_q$ to $p_p$ along the orientation of $G_q$, and the lateral distance $d_{lat}$ from $c_q$ to $p_p$ perpendicular to the orientation of $G_q$. If $G_q$ and $T_p$ satisfy the conditions
$$d_{lat} < d_{lat}^{thre}, \qquad d_{lon} < d_{lon}^{thre}, \qquad \Delta\theta < \Delta\theta^{thre}$$
then $G_q$ and $T_p$ are considered to have a car-following relationship, and the velocity and velocity covariance of $T_p$ are used to initialize $G_q$; $d_{lat}^{thre}$ is the lateral distance threshold for car following, $d_{lon}^{thre}$ is the longitudinal distance threshold for car following, and $\Delta\theta^{thre}$ is the orientation angle difference threshold between the two vehicles for car following.
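The gate in step 7-2 can be sketched as follows: decompose the offset from the newborn's pose-inference center to the tracker's filtered position along the newborn's heading, and compare the parts against the thresholds. The 1.5 m lateral and 40 m longitudinal values come from the embodiment below; the 10 degree heading-difference threshold is an assumption, since its value appears only as an image in the source.

```python
import numpy as np

def is_following(c_new, heading_new, p_track, heading_track,
                 lat_max=1.5, lon_max=40.0, ang_max=np.deg2rad(10)):
    """Decompose the newborn->tracker offset along the newborn's heading."""
    u = heading_new / np.linalg.norm(heading_new)
    offset = p_track - c_new
    d_lon = abs(offset @ u)                          # along the heading
    d_lat = abs(offset @ np.array([-u[1], u[0]]))    # perpendicular to it
    d_ang = abs(np.arctan2(heading_new[1], heading_new[0])
                - np.arctan2(heading_track[1], heading_track[0]))
    d_ang = min(d_ang, 2 * np.pi - d_ang)            # wrap to [0, pi]
    return d_lat < lat_max and d_lon < lon_max and d_ang < ang_max
```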
The beneficial effects of the invention are as follows: the vehicle measurement model first infers the true size of the vehicle from the measurement information and then infers the pose using the inferred size, so it achieves better pose estimation accuracy while preserving the detection performance.
When the dynamic state of a newborn vehicle is initialized, the method checks whether a following relationship exists between the newborn vehicle and the other vehicles; if such a relationship exists, the newborn vehicle is initialized with the velocity information from the car-following relationship. This improves both the fast-convergence capability of the filter and the accuracy of the state estimation.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart of the algorithm of the present invention;
FIG. 2 is a schematic view of the vehicle point cloud at various locations: (a) the vehicle is at the side of the sensor; (b) the vehicle is directly in front of the sensor; (c) the vehicle is far from the sensor;
FIG. 3 is a schematic diagram of self-obscuring inference.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are for the purpose of illustrating the invention only and are not intended to limit it; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and they do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings, and their descriptions, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
As shown in FIG. 1, the present invention provides an adaptive-model-based vehicle detection and tracking method, which comprises the following steps:
Step 1: Preprocess the point clouds at time t-1 and time t respectively; the preprocessing comprises ground removal and clustering.
Step 2: Transform the classes at time t-1 into the lidar coordinate system at time t according to the GPS information, then construct a two-dimensional grid map in a polar coordinate system from the classes of the two consecutive frames.
Step 3: Compare the cell states of the grid maps at consecutive times to obtain the possible dynamic objects at time t, then find the associated object of each dynamic object in the previous frame through data association.
Step 4: Fit all dynamic objects and their associated objects with the convex hull model, and screen out dynamic vehicle candidates using the motion evidence.
Step 5: Infer the true size of the vehicle from the fitted bounding box, and compute the inferred pose of the dynamic vehicle from the inferred size.
Step 6: Preprocess the point cloud at time t+1; associate the vehicle candidates at time t with the classes at time t+1 to obtain each dynamic vehicle candidate's associated object at time t+1; fit that associated object with the convex hull model and perform size and pose inference on it; finally verify the dynamic vehicle candidate with the motion-consistency criterion, and if the check passes, accept the candidate as a dynamic vehicle.
Step 7: Judge whether a group-driving behavior exists between the newborn vehicle and the other vehicles; if a following relationship exists, initialize the newborn vehicle with the velocity information from the car-following relationship; otherwise initialize it with the default initialization method.
Step 8: Track the vehicles with a labeled multi-Bernoulli (LMB) filter.
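The sketch below is not an LMB filter; it only illustrates how step 7 feeds step 8: a newborn Bernoulli component copies the follower tracker's velocity and velocity covariance when a car-following relationship is found, and otherwise falls back to a wide zero-velocity prior. All field names and numeric values here are illustrative assumptions.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Track:
    x: np.ndarray               # state [px, py, vx, vy]
    P: np.ndarray               # state covariance
    r: float = 0.5              # Bernoulli existence probability
    label: tuple = (0, 0)       # (birth time, index), as labels in LMB filters

def init_newborn(center, t, idx, follower=None):
    """Initialize a newborn track, borrowing dynamics from a followed tracker."""
    if follower is not None:    # car-following: copy velocity and its covariance
        x = np.concatenate([center, follower.x[2:]])
        P = np.diag([1.0, 1.0, 0.0, 0.0]) + np.pad(follower.P[2:, 2:], (2, 0))
    else:                       # default: unknown velocity, wide prior
        x = np.concatenate([center, np.zeros(2)])
        P = np.diag([1.0, 1.0, 25.0, 25.0])
    return Track(x=x, P=P, label=(t, idx))
```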
The ground removal in step 1 uses a method based on plane model fitting; the clustering uses a Euclidean-distance-based clustering algorithm.
The step 2 specifically comprises the following steps:
2-1: At time t-1, for each class obtained by preprocessing, denote the point cloud contained in the class as $P^{t-1}$.
2-2: Using the GPS data of the vehicle-mounted sensors, transform each class at time t-1 into the lidar coordinate system at time t:
$$P^{t} = (R_L^I)^{-1} R_t^{-1}\left(R_{t-1} R_L^I P^{t-1} + T_{t-1} - T_t\right)$$
where $R_L^I$ is the rotation-translation matrix from the lidar coordinate system to the IMU coordinate system, $(R_L^I)^{-1}$ is its inverse, and $R_t = R(\gamma_t) R(\psi_t) R(\phi_t)$; $R(\gamma_t)$, $R(\psi_t)$ and $R(\phi_t)$ are the roll, pitch and yaw rotation matrices at time t; $R(\gamma_{t-1})$, $R(\psi_{t-1})$ and $R(\phi_{t-1})$ are the roll, pitch and yaw rotation matrices at time t-1; $T_t$ and $T_{t-1}$ are the translation vectors at time t and time t-1, respectively. This coordinate transformation is performed for every class at time t-1.
2-3: Construct a grid map in the polar coordinate system from the point cloud at time t-1 and from the point cloud at time t, respectively.
The step 3 specifically comprises the following steps:
3-1: For each class at time t, count the grid cells occupied by the class whose state changes between the grid maps of the two consecutive frames. If the number of changed cells $N$ satisfies
$$N \geq \frac{W_a}{A_r \cdot \|O\|}$$
the class is considered a possible dynamic object, where $W_a$ is the average vehicle width, $A_r$ is the angular resolution of the grid map, and $\|O\|$ is the distance from the centroid of the class to the origin of the lidar coordinate system. In this embodiment, $W_a$ is taken as 1.8 m and $A_r$ is set to a fixed angular resolution.
3-2: Temporarily take the centroid of each class as its pose, and perform data association between all possible dynamic objects at time t and the classes at time t-1 with the Hungarian algorithm. If the data association of a dynamic object at time t-1 fails, the dynamic object is regarded as a false detection; otherwise, compute its direction vector
$$D_t = \left(x_t - x_{t-1},\; y_t - y_{t-1}\right)$$
where $(x_t, y_t)$ are the x and y coordinates of the centroid of the dynamic object, and $(x_{t-1}, y_{t-1})$ are the x and y coordinates of the centroid of its associated object.
The step 4 specifically comprises the following steps:
4-1: Based on $D_t$, fit each dynamic object and its associated object into a bounding box with the convex hull model.
4-2: Examine the motion evidence between every dynamic object and its associated object; if the motion-evidence check passes, the dynamic object is considered a dynamic vehicle candidate.
FIG. 2 is a schematic view of the vehicle point cloud at various locations: (a) the vehicle is at the side of the sensor; (b) the vehicle is directly in front of the sensor; (c) the vehicle is far from the sensor. FIG. 3 is a schematic diagram of the self-occlusion inference. As shown in FIG. 2(a), when the vehicle is to the side of the ego vehicle and the distance is short, the lidar can observe two surfaces of the vehicle, and model fitting yields a relatively accurate and reliable pose. However, when the vehicle is directly in front of the ego vehicle, as shown in FIG. 2(b), self-occlusion causes measurement defects, and in this case the vehicle pose obtained by model fitting deviates severely from the true value. When the vehicle is far from the sensor, the divergence of the laser beams on the one hand, and the increasingly poor viewing angle between the vehicle and the sensor on the other, cause severe measurement loss, so the pose estimation accuracy decreases. To reduce the pose estimation error, the invention designs a pose inference method based on an adaptive model, which adaptively infers the length and width of the vehicle from the bounding box, greatly reducing the error of the subsequent pose inference.
The step 5 specifically comprises the following steps:
5-1: For a bounding box, $p_g$ denotes its geometric center, and the one of the four end points of the bounding box closest to the origin of the lidar coordinate system is defined as the reliable point $p_r$. Denote the length and width of the bounding box as $L_{box}$ and $W_{box}$; the other end point of the short side through $p_r$ is $p_{short}$, and the other end point of the long side through $p_r$ is $p_{long}$. $H_{box}$ is the orientation vector of the bounding box, expressed as $H_{box} = p_{long} - p_r$; its direction is consistent with the driving direction of the vehicle, and $\|H_{box}\| = L_{box}$.
5-2: Let $p_v$ denote the midpoint of $p_r$ and $p_{short}$, with coordinates $(x_v, y_v)$, and let $V_v$ denote the vector pointing from the lidar coordinate origin to $p_v$. Compute the masking angle $\theta_o$, where $\delta$ is a preset value that makes $\theta_o$ slightly larger in order to avoid some critical situations, and compute the relative orientation angle $\theta_{rh}$ between $V_v$ and $H_{box}$. If $\theta_o > \theta_{rh}$, the vehicle is in a self-occluded state. In this example, $\delta$ is taken as 0.044$\pi$.
5-3: Infer the true size and pose of the vehicle from the bounding box, specifically as follows:
5-3-1: Set a distance threshold $d_{thre}$, a standard vehicle size table T, and the average vehicle length and width $L_a$ and $W_a$. In this example, $d_{thre}$ = 40 m, T is shown in Table 1, $L_a$ = 4.8 m, and $W_a$ = 1.8 m.
TABLE 1. Standard vehicle size table
Length (m): 3.2   3.5   3.8    4.3    4.8   5.2    9.65   10.2
Width (m):  1.4   1.5   1.65   1.75   1.8   1.95   2.3    2.5
5-3-2: Denote the distance from $p_g$ to the origin of the coordinate system as $d_g$. If $d_g < d_{thre}$, the vehicle is near the sensor; otherwise it is far from the sensor.
5-3-3: If the vehicle is near the sensor and self-occluded, compute the inferred length $L_{infer}$ of the vehicle by linear interpolation from the standard vehicle size table T and $W_{box}$, and set $W_{infer} = W_{box}$.
5-3-4: If the vehicle is near the sensor but not self-occluded, set $L_{infer} = L_{box}$ and $W_{infer} = W_{box}$.
5-3-5: If the vehicle is far from the sensor, likewise compute $L_{infer}$ by linear interpolation from the standard vehicle size table T and $W_{box}$, and set $W_{infer} = W_{box}$.
5-3-6: Compute the pose inference center of the vehicle from $p_r$, $p_{long}$, $p_{short}$, $L_{infer}$ and $W_{infer}$.
5-4: Compute the pose inference centers $c_t$ of all vehicle candidates by the method of steps 5-1 to 5-3, and compute the pose inference centers $c_{t-1}$ of all associated objects by the same method.
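One possible reading of the Table 1 lookup used in steps 5-3-3 and 5-3-5 is plain linear interpolation over the width column; np.interp also clamps widths outside the table to its end values.

```python
import numpy as np

WIDTHS  = np.array([1.4, 1.5, 1.65, 1.75, 1.8, 1.95, 2.3, 2.5])   # Table 1, row 2
LENGTHS = np.array([3.2, 3.5, 3.8, 4.3, 4.8, 5.2, 9.65, 10.2])    # Table 1, row 1

def interp_length(w_box):
    """Standard length for a measured box width; clamps outside the table."""
    return float(np.interp(w_box, WIDTHS, LENGTHS))

print(interp_length(1.7))   # ~4.05 m for a 1.70 m wide box
```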
The step 6 specifically comprises the following steps:
6-1: Perform data association between the vehicle candidates at time t and the classes at time t+1 based on centroids. If a vehicle candidate cannot find an associated object at time t+1, the vehicle candidate is regarded as a false detection; if the association succeeds, compute the direction vector $D_{t+1}$ of the class at time t+1.
6-2: Based on the direction vector $D_{t+1}$, fit the associated object of the vehicle candidate at time t+1 with the convex hull model; denote its orientation vector as $H_{box}^{t+1}$, and compute its pose inference center $c_{t+1}$ by the method of steps 5-1 to 5-3.
6-3: Compute the angle $\theta_1$ between $H_{box}^{t-1}$ and $H_{box}^{t}$, and the angle $\theta_2$ between $H_{box}^{t}$ and $H_{box}^{t+1}$. The speed of the vehicle from time t-1 to t is expressed as $v_t$, the speed from t to t+1 as $v_{t+1}$; compute the angle $\theta_v$ between $v_t$ and $v_{t+1}$. If the dynamic vehicle candidate satisfies the following three conditions simultaneously:
$$\max(\theta_1, \theta_2) < \theta_{ori}, \qquad \min(v_t, v_{t+1}) > v_{min}, \qquad \theta_v < \theta_{vel}$$
the dynamic vehicle candidate is considered a dynamic vehicle, where $\theta_{ori}$ is the vehicle orientation change angle threshold, $v_{min}$ is the dynamic vehicle speed threshold, and $\theta_{vel}$ is the dynamic vehicle speed direction change angle threshold. In this embodiment, $v_{min}$ is taken as 0.1 m/s, and $\theta_{ori}$ and $\theta_{vel}$ are set to preset angle thresholds.
The step 7 specifically comprises the following steps:
7-1: Denote the trackers surviving at time t+1 as $T_1, T_2, \dots, T_m$, and the newborn vehicles detected at the current moment as $G_1, G_2, \dots, G_n$.
7-2: For a newborn vehicle $G_q$ and a tracker $T_p$, let $c_q$ denote the pose inference center of $G_q$ at time t+1 and $p_p$ the filtered state of $T_p$ at time t+1. Compute the orientation angle difference $\Delta\theta$ between $G_q$ and $T_p$ at time t+1; compute the longitudinal distance $d_{lon}$ from $c_q$ to $p_p$ along the orientation of $G_q$, and the lateral distance $d_{lat}$ from $c_q$ to $p_p$ perpendicular to the orientation of $G_q$. If $G_q$ and $T_p$ satisfy the conditions
$$d_{lat} < d_{lat}^{thre}, \qquad d_{lon} < d_{lon}^{thre}, \qquad \Delta\theta < \Delta\theta^{thre}$$
then $G_q$ and $T_p$ are considered to have a car-following relationship, and the velocity and velocity covariance of $T_p$ are used to initialize $G_q$; $d_{lat}^{thre}$ is the lateral distance threshold for car following, $d_{lon}^{thre}$ is the longitudinal distance threshold for car following, and $\Delta\theta^{thre}$ is the orientation angle difference threshold between the two vehicles for car following. In this embodiment, $d_{lat}^{thre}$ is taken as 1.5 m, $d_{lon}^{thre}$ as 40 m, and $\Delta\theta^{thre}$ is set to a preset angle threshold.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (3)

1. A vehicle detection and tracking method based on an adaptive model, characterized in that the method comprises the following steps:
step 1: preprocessing the point clouds at time t-1 and time t respectively, wherein the preprocessing comprises ground removal and clustering;
step 2: transforming the classes at time t-1 into the lidar coordinate system at time t according to the GPS information, then constructing a two-dimensional grid map in a polar coordinate system from the classes of the two consecutive frames;
step 3: comparing the cell states of the grid maps at consecutive times to obtain the possible dynamic objects at time t, then finding the associated object of each dynamic object in the previous frame through data association;
step 4: fitting all dynamic objects and their associated objects with the convex hull model, and screening out dynamic vehicle candidates using the motion evidence;
step 5: inferring the true size of the vehicle from the fitted bounding box, and computing the inferred pose of the dynamic vehicle from the inferred size;
step 6: preprocessing the point cloud at time t+1, associating the vehicle candidates at time t with the classes at time t+1 to obtain each dynamic vehicle candidate's associated object at time t+1, fitting that associated object with the convex hull model, then verifying the dynamic vehicle candidate with the motion-consistency criterion, and if the check passes, accepting the candidate as a dynamic vehicle;
step 7: judging whether a group-driving behavior exists between the newborn vehicle and the other vehicles; if a following relationship exists, initializing the newborn vehicle with the velocity information from the car-following relationship; otherwise initializing it with the default initialization method;
step 8: tracking the vehicles with a labeled multi-Bernoulli (LMB) filter.
2. The adaptive model-based vehicle detection and tracking method according to claim 1, characterized in that step 5 specifically comprises:
5-1: for a bounding box, $p_g$ denotes its geometric center, and the one of the four end points of the bounding box closest to the origin of the lidar coordinate system is defined as the reliable point $p_r$; the length and width of the bounding box are denoted $L_{box}$ and $W_{box}$, the other end point of the short side through $p_r$ is $p_{short}$, and the other end point of the long side through $p_r$ is $p_{long}$; $H_{box}$ is the orientation vector of the bounding box, expressed as $H_{box} = p_{long} - p_r$, its direction is consistent with the driving direction of the vehicle, and $\|H_{box}\| = L_{box}$;
5-2: letting $p_v$ denote the midpoint of $p_r$ and $p_{short}$, with coordinates $(x_v, y_v)$, and $V_v$ denote the vector pointing from the lidar coordinate origin to $p_v$; computing the masking angle $\theta_o$, where $\delta$ is a preset value that makes $\theta_o$ larger to avoid critical situations, and computing the relative orientation angle $\theta_{rh}$ between $V_v$ and $H_{box}$; if $\theta_o > \theta_{rh}$, the vehicle is in a self-occluded state;
5-3: inferring the true size and pose of the vehicle from the bounding box, specifically:
5-3-1: setting a distance threshold $d_{thre}$, a standard vehicle size table T, and the average vehicle length and width $L_a$ and $W_a$;
5-3-2: denoting the distance from $p_g$ to the origin of the coordinate system as $d_g$; if $d_g < d_{thre}$, the vehicle is near the sensor, otherwise it is far from the sensor;
5-3-3: if the vehicle is near the sensor and self-occluded, computing the inferred length $L_{infer}$ of the vehicle by linear interpolation from the standard vehicle size table T and $W_{box}$, while $W_{infer} = W_{box}$;
5-3-4: if the vehicle is near the sensor but not self-occluded, $L_{infer} = L_{box}$ and $W_{infer} = W_{box}$;
5-3-5: if the vehicle is far from the sensor, likewise computing $L_{infer}$ by linear interpolation from the standard vehicle size table T and $W_{box}$, while $W_{infer} = W_{box}$;
5-3-6: computing the pose inference center of the vehicle from $p_r$, $p_{long}$, $p_{short}$, $L_{infer}$ and $W_{infer}$;
5-4: computing the pose inference centers $c_t$ of all vehicle candidates by the method of steps 5-1 to 5-3, and computing the pose inference centers $c_{t-1}$ of all associated objects by the same method.
3. The adaptive model-based vehicle detection and tracking method according to claim 2, characterized in that step 7 specifically comprises:
7-1: denoting the trackers surviving at time t+1 as $T_1, T_2, \dots, T_m$, and the newborn vehicles detected at the current moment as $G_1, G_2, \dots, G_n$;
7-2: for a newborn vehicle $G_q$ and a tracker $T_p$, letting $c_q$ denote the pose inference center of $G_q$ at time t+1 and $p_p$ the filtered state of $T_p$ at time t+1; computing the orientation angle difference $\Delta\theta$ between $G_q$ and $T_p$ at time t+1, the longitudinal distance $d_{lon}$ from $c_q$ to $p_p$ along the orientation of $G_q$, and the lateral distance $d_{lat}$ from $c_q$ to $p_p$ perpendicular to the orientation of $G_q$; if $G_q$ and $T_p$ satisfy
$$d_{lat} < d_{lat}^{thre}, \qquad d_{lon} < d_{lon}^{thre}, \qquad \Delta\theta < \Delta\theta^{thre}$$
then $G_q$ and $T_p$ are considered to have a car-following relationship, and the velocity and velocity covariance of $T_p$ are used to initialize $G_q$; $d_{lat}^{thre}$ is the lateral distance threshold for car following, $d_{lon}^{thre}$ is the longitudinal distance threshold for car following, and $\Delta\theta^{thre}$ is the orientation angle difference threshold between the two vehicles for car following.
CN202111346003.7A, filed 2021-11-15: Vehicle detection and tracking method based on self-adaptive model (Active; granted as CN114037736B)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111346003.7A 2021-11-15 Vehicle detection and tracking method based on self-adaptive model


Publications (2)

Publication Number Publication Date
CN114037736A 2022-02-11
CN114037736B 2024-05-14




Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3751515A1 (en) * 2014-09-29 2020-12-16 Crown Equipment Corporation Industrial vehicles with point fix based localization
US11002859B1 (en) * 2020-02-27 2021-05-11 Tsinghua University Intelligent vehicle positioning method based on feature point calibration
CN111337941A (en) * 2020-03-18 2020-06-26 中国科学技术大学 Dynamic obstacle tracking method based on sparse laser radar data
CN112734811A (en) * 2021-01-21 2021-04-30 清华大学 Obstacle tracking method, obstacle tracking device and chip
CN113030960A (en) * 2021-04-06 2021-06-25 陕西国防工业职业技术学院 Monocular vision SLAM-based vehicle positioning method
CN113034504A (en) * 2021-04-25 2021-06-25 重庆大学 Plane feature fusion method in SLAM mapping process
CN113537077A (en) * 2021-07-19 2021-10-22 江苏省特种设备安全监督检验研究院 Label multi-Bernoulli video multi-target tracking method based on feature pool optimization

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KAIQI LIU et al.: "Fast Dynamic Vehicle Detection in Road Scenarios Based on Pose Estimation with Convex-Hull Model", Sensors, 17 July 2019 (2019-07-17) *
LI ZHANKUN (李占坤): "Research on dynamic vehicle detection and tracking based on an adaptive model" (基于自适应模型的动态车辆检测与跟踪研究), Wanfang Data, 1 November 2023 (2023-11-01) *
ZOU Bin; LIU Kang; WANG Kewei (邹斌; 刘康; 王科未): "Dynamic obstacle detection and tracking method based on 3D lidar" (基于三维激光雷达的动态障碍物检测和追踪方法), Automobile Technology (汽车技术), no. 08, 24 August 2017 (2017-08-24) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612665A (en) * 2022-03-15 2022-06-10 北京航空航天大学 Pose estimation and dynamic vehicle detection method based on normal vector histogram features
CN114612665B (en) * 2022-03-15 2022-10-11 北京航空航天大学 Pose estimation and dynamic vehicle detection method based on normal vector histogram features


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant