CN112990124B - Vehicle tracking method and device, electronic equipment and storage medium

Info

Publication number
CN112990124B
Authority
CN
China
Prior art keywords
vehicle
state information
detected
tracked
pixel position
Prior art date
Legal status
Active
Application number
CN202110450913.3A
Other languages
Chinese (zh)
Other versions
CN112990124A (en)
Inventor
郑炜栋
Current Assignee
Ecarx Hubei Tech Co Ltd
Original Assignee
Hubei Ecarx Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hubei Ecarx Technology Co Ltd
Priority to CN202110450913.3A
Publication of CN112990124A
Application granted
Publication of CN112990124B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a vehicle tracking method, a vehicle tracking device, electronic equipment and a storage medium, wherein the method comprises the following steps: carrying out vehicle detection on the image frames acquired in real time to obtain the vehicle pixel position of the detected vehicle of each frame of image; for the current image frame, matching the detected vehicle and the tracked vehicle based on the similarity between the detected vehicle of the current image frame and the tracked vehicle determined in the previous frame; for a tracked vehicle which is successfully matched with a detected vehicle, updating the state information of the tracked vehicle by adopting a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle, wherein the state information comprises the physical position, the speed information, the size information and the course angle of the vehicle; the nonlinear transformation function adopted by the filtering algorithm represents the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle. Accurate and stable vehicle tracking can be realized.

Description

Vehicle tracking method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of intelligent vehicle technologies, and in particular, to a vehicle tracking method and apparatus, an electronic device, and a storage medium.
Background
In the field of automatic driving perception, perceiving other vehicles around a vehicle is a very important technology, and accurately perceiving the relative positions and motion states of other vehicles is an important prerequisite for safe driving of an automatic driving vehicle.
At present, the conventional method for detecting the vehicle is as follows: the method comprises the steps of obtaining images shot by a vehicle-mounted camera in real time, positioning the position of a vehicle in the images by utilizing a neural network detection model, and calculating the position and the motion state of each target relative to the vehicle through an image multi-target tracking algorithm.
The image multi-target tracking means that interested targets in an image are detected frame by frame in an image sequence, and then the motion information of the targets in the moving process is continuously updated, so that the complete motion tracks of the targets are obtained.
In the conventional vehicle tracking algorithm, the position of a grounding point of a target vehicle in an image is determined, and the position of the target relative to the vehicle is calculated according to calibration parameters of a vehicle-mounted camera on the assumption that the ground where the vehicle is located is a plane.
However, the vehicle body may pitch and the road may have slopes of different degrees, so the assumed plane on which the vehicle is located may contain a certain error, and the tracked target state is therefore not accurate and stable enough.
Disclosure of Invention
An object of the embodiments of the present application is to provide a vehicle tracking method, device, electronic device, and storage medium, so as to achieve accurate and stable vehicle tracking. The specific technical scheme is as follows:
to achieve the above object, an embodiment of the present application provides a vehicle tracking method, including:
carrying out vehicle detection on the image frames acquired in real time to obtain the vehicle pixel position of the detected vehicle of each frame of image;
for the current image frame, matching the detected vehicle and the tracked vehicle based on the similarity between the detected vehicle of the current image frame and the tracked vehicle determined in the previous frame;
for a tracked vehicle which is successfully matched with a detected vehicle, updating the state information of the tracked vehicle by adopting a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle, wherein the state information comprises the physical position, the speed information, the size information and the course angle of the vehicle; the nonlinear transformation function adopted by the filtering algorithm represents the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle.
Optionally, the vehicle pixel position includes: a left boundary, a right boundary, an upper boundary, a lower boundary, and a ground line; the method further comprises the following steps:
determining the side body state of the detected vehicle according to the left deviation between the left boundary line and the left boundary, the right deviation between the right boundary line and the right boundary, and a preset boundary deviation threshold; the side body state comprises a left side body, a right side body and a complete side body;
when the side body state is the left side body, adjusting the right boundary line to be equal to the right boundary;
when the side body state is the right side body, adjusting the left boundary line to be equal to the left boundary;
when the side body state is the complete side body, adjusting the left boundary line to be equal to the left boundary, and adjusting the right boundary line to be equal to the right boundary;
the method further comprises the following steps: if the grounding line is not between the upper boundary and the lower boundary, adjusting the grounding line to be equal to the lower boundary.
Optionally, the step of matching the detected vehicle and the tracked vehicle based on the similarity between the detected vehicle in the current image frame and the tracked vehicle determined in the previous frame for the current image frame includes:
based on the vehicle pixel position of the detected vehicle in the current image frame and the vehicle pixel position of the tracked vehicle determined in the previous frame, acquiring the image position similarity of each detected vehicle and each tracked vehicle;
calculating the similarity of the physical positions of each detected vehicle and each tracked vehicle based on the physical positions of the detected vehicles in the current image frame and the physical positions of the tracked vehicles determined in the previous frame; the physical position of the detected vehicle is calculated by adopting a triangulation distance measuring principle according to the vehicle pixel position of the detected vehicle, the size information and the course angle of the tracked vehicle;
and carrying out weighted summation on the image position similarity and the physical position similarity to obtain a similarity coefficient of each detected vehicle and each tracked vehicle, judging whether the similarity coefficient meets a preset condition, and if so, determining that the detected vehicle and the tracked vehicle are successfully matched.
Optionally, the step of updating the state information of the tracked vehicle by using a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle for the tracked vehicle successfully matched with the detected vehicle includes:
aiming at a tracked vehicle which is successfully matched with a detected vehicle, constructing a preset number of state information sampling points based on state information of the tracked vehicle, and determining the weight of each state information sampling point;
predicting the state information of each state information sampling point based on a preset state information transfer matrix, and mapping the predicted state information to a vehicle pixel position based on the mapping relation between the state information and the vehicle pixel position;
weighting the vehicle pixel position obtained by mapping based on the weight of each state information sampling point to obtain the predicted quantity of the vehicle pixel position of the tracked vehicle;
updating the state information of the tracked vehicle based on a difference of the predicted amount of vehicle pixel positions of the tracked vehicle and the vehicle pixel positions of the detected vehicle.
Optionally, the step of mapping the predicted state information to the vehicle pixel position based on the mapping relationship between the state information and the vehicle pixel position includes:
converting the position information in the predicted state information into a vehicle body coordinate system based on a conversion matrix of a vehicle body coordinate system and a global coordinate system determined in real time;
generating a plurality of prediction angular points under an Inertial Measurement Unit (IMU) coordinate system of the tracked vehicle according to the size information, the course angle and the converted position information in the predicted state information;
mapping the plurality of prediction angular points to an image plane based on camera calibration parameters to obtain a plurality of image angular points;
and converting the plurality of image corner points into vehicle pixel positions.
Optionally, the method further includes:
and initializing state information of the detected vehicle for the detected vehicle which is not matched with the tracking vehicle successfully, and determining the detected vehicle as the tracking vehicle of the current image frame.
Optionally, the step of initializing the state information of the detected vehicle includes:
acquiring the vehicle type of the detected vehicle, and initializing the size information of the detected vehicle based on the vehicle type;
calculating the course angle of the detected vehicle according to the vehicle pixel position of the detected vehicle and the calibration parameters of the vehicle-mounted camera;
and calculating the position information of the detected vehicle by adopting a triangulation distance measuring principle according to the vehicle pixel position of the detected vehicle, the size information of the detected vehicle and the course angle of the detected vehicle.
Optionally, the step of calculating the heading angle of the detected vehicle according to the vehicle pixel position of the detected vehicle and the calibration parameter of the vehicle-mounted camera includes:
determining pixel coordinates of a lower boundary corner point and pixel coordinates of a ground corner point based on the vehicle pixel position of the detected vehicle;
respectively mapping the pixel coordinates of the lower boundary angular point and the pixel coordinates of the grounding angular point into a first coordinate and a second coordinate under an IMU coordinate system according to vehicle-mounted camera calibration parameters;
and calculating the course angle of the detected vehicle according to the first coordinate and the second coordinate under the IMU coordinate system.
To achieve the above object, an embodiment of the present application further provides a vehicle tracking apparatus, including:
the detection module is used for carrying out vehicle detection on the image frames acquired in real time to obtain the vehicle pixel position of the detected vehicle of each frame of image;
the matching module is used for matching the detected vehicle and the tracked vehicle according to the similarity between the detected vehicle of the current image frame and the tracked vehicle determined by the previous frame aiming at the current image frame;
the updating module is used for updating the state information of the tracked vehicle by adopting a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle aiming at the tracked vehicle which is successfully matched with the detected vehicle, wherein the state information comprises the physical position, the speed information, the size information and the course angle of the vehicle; the nonlinear transformation function adopted by the filtering algorithm represents the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle.
In order to achieve the above object, an embodiment of the present application further provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing any method step when executing the program stored in the memory.
To achieve the above object, an embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements any of the above method steps.
The embodiment of the application has the following beneficial effects:
by adopting the vehicle tracking method, the vehicle tracking device, the electronic equipment and the storage medium, vehicle detection is carried out on the image frames acquired in real time, and the vehicle pixel position of the detected vehicle of each frame of image is obtained; for the current image frame, matching the detected vehicle and the tracked vehicle based on the similarity between the detected vehicle of the current image frame and the tracked vehicle determined in the previous frame; for a tracked vehicle which is successfully matched with a detected vehicle, updating the state information of the tracked vehicle by adopting a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle, wherein the state information comprises the physical position, the speed information, the size information and the course angle of the vehicle; the nonlinear transformation function adopted by the filtering algorithm represents the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle. Therefore, a nonlinear mapping relation between the vehicle state information and the vehicle pixel position is established, and the state information of the tracked vehicle is updated by adopting a filtering algorithm based on the nonlinear mapping relation. Therefore, the plane where the vehicle is located does not need to be determined, and the nonlinear mapping relation between the vehicle state information and the pixel position of the vehicle is not influenced by the pitching of the vehicle body and the gradient of the ground, so that the accuracy and the stability of vehicle tracking can be improved.
Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and a person skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flow chart of a vehicle tracking method provided by an embodiment of the present application;
FIG. 2 is a schematic view of a vehicle attitude provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a side-on-side condition of a vehicle according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart of matching a detected vehicle with a tracked vehicle according to an embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating a process for updating status information of a tracked vehicle according to an embodiment of the present application;
FIG. 6 is a schematic flow chart illustrating mapping of state information to vehicle pixel locations according to an embodiment of the present disclosure;
FIG. 7(a) is a schematic diagram of a predicted corner point of a tracked vehicle in an IMU coordinate system according to an embodiment of the present application;
fig. 7(b) is a schematic diagram of an image corner point of a tracked vehicle in an image plane according to an embodiment of the present application;
FIG. 7(c) is a schematic diagram of tracking vehicle pixel locations of a vehicle according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a vehicle tracking device provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the description herein are intended to be within the scope of the present disclosure.
In order to solve the technical problem that vehicle tracking is not accurate and stable enough in the prior art, the embodiment of the application provides a vehicle tracking method and device, electronic equipment and a storage medium.
The vehicle tracking method provided by the embodiment of the application can be executed by a vehicle tracking device, and the vehicle tracking device can be configured in an intelligent vehicle. It is easy to understand that tracking the vehicle means obtaining the real-time position, speed, etc. information of the vehicle.
Referring to fig. 1, fig. 1 is a schematic flow chart of a vehicle tracking method provided in an embodiment of the present application, and the method may include the following steps:
s101: and carrying out vehicle detection on the image frames acquired in real time to obtain the vehicle pixel position of the detected vehicle of each frame of image.
In the embodiment of the application, the vehicle-mounted camera can acquire images in real time in the driving process of the vehicle. For example, one frame of image is acquired at a specific time interval, which may be set smaller for better tracking effect.
In the embodiment of the application, vehicle detection can be performed on the image frames based on a neural network model. For example, the image frames acquired in real time are input into a CenterNet neural network model, so that the detected vehicle in each image frame can be determined and the vehicle pixel position of the detected vehicle can be obtained.
In the embodiment of the present application, the vehicle pixel position may be represented by a left boundary, a right boundary, an upper boundary, a lower boundary, and a ground line.
For ease of description, the left boundary is denoted u_l, the left boundary line u_ls, the right boundary u_r, the right boundary line u_rs, the upper boundary v_t, the lower boundary v_b, and the ground line v_g. Specifically, 8 postures of the vehicle can be detected, as shown in FIG. 2, which is a schematic diagram of the vehicle postures provided by the embodiment of the present application; the 8 postures can be grouped into three side body states, namely a left side body, a right side body and a complete side body, as shown in FIG. 3, which is a schematic diagram of the vehicle side body states provided by the embodiment of the present application.
By definition, the left boundary should be less than or equal to the left boundary line, the right boundary should be greater than or equal to the right boundary line, and the ground line should lie between the upper boundary and the lower boundary.
In addition, in the ideal case, when the detected vehicle is a left side body, the right boundary line is equal to the right boundary; when the detected vehicle is a right side body, the left boundary line is equal to the left boundary; and when the detected vehicle is a complete side body, the left boundary line is equal to the left boundary and the right boundary line is equal to the right boundary.
However, the vehicle pixel position output by the neural network may not meet the ideal condition, and therefore, the vehicle pixel position needs to be preprocessed. In the embodiment of the application, the side body state of the vehicle can be determined according to the left boundary, the right boundary and the right boundary output by the neural network, and adaptive adjustment can be performed according to the side body state.
Specifically, the lateral state of the detected vehicle may be determined according to a left deviation of the left boundary line from the left boundary, a right deviation of the right boundary line from the right boundary, and a preset boundary deviation threshold.
A boundary deviation threshold T is set, and the left deviation d_l = u_ls - u_l and the right deviation d_r = u_r - u_rs are computed. The two deviations are compared against T, and depending on which of them exceeds the threshold, the detected vehicle is judged to be a left side body, a right side body, or a complete side body. When the side body state is the left side body, the right boundary line is adjusted to be equal to the right boundary; when the side body state is the right side body, the left boundary line is adjusted to be equal to the left boundary; and when the side body state is the complete side body, the left boundary line is adjusted to be equal to the left boundary and the right boundary line is adjusted to be equal to the right boundary.
Further, if the ground line in the vehicle pixel position output by the neural network is not between the upper boundary and the lower boundary, the ground line is adjusted to be equal to the lower boundary.
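As an illustrative sketch of this preprocessing, the following Python snippet classifies the side body state and adjusts the boundary lines and ground line; the explicit comparison rule against T is an assumption made for illustration, since only the general principle is stated above.

```python
def preprocess_pixel_position(u_l, u_ls, u_r, u_rs, v_t, v_b, v_g, T=5.0):
    """Classify the side body state and adjust the raw detector output.

    u_l/u_r: left/right boundary, u_ls/u_rs: left/right boundary line,
    v_t/v_b: upper/lower boundary, v_g: ground line, T: deviation threshold.
    The classification rule below is an assumed reconstruction.
    """
    d_left = u_ls - u_l      # left deviation
    d_right = u_r - u_rs     # right deviation

    if d_left > T and d_right <= T:
        state = "left_side"            # assumed: left side face visible
        u_rs = u_r                     # snap right boundary line to right boundary
    elif d_right > T and d_left <= T:
        state = "right_side"
        u_ls = u_l
    else:
        state = "complete_side"
        u_ls, u_rs = u_l, u_r

    if not (v_t <= v_g <= v_b):        # ground line must lie inside the box
        v_g = v_b
    return state, (u_l, u_ls, u_r, u_rs, v_t, v_b, v_g)


print(preprocess_pixel_position(100, 160, 300, 298, 50, 200, 230))
```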
S102: and matching the detected vehicle and the tracked vehicle according to the similarity of the detected vehicle of the current image frame and the tracked vehicle determined by the previous frame aiming at the current image frame.
In the embodiment of the application, in the process of tracking the vehicle, the vehicle detected by two frames of images before and after the vehicle needs to be matched.
In the embodiment of the application, the current image frame may represent a newly acquired image frame, and for the current image frame, the similarity between each detected vehicle of the current image frame and each tracked vehicle determined in the previous frame may be respectively calculated, so as to implement matching between the detected vehicle and the tracked vehicle.
In one embodiment of the present application, a similarity between the license plate number of the detected vehicle in the current image frame and the license plate number of the tracked vehicle determined in the previous frame may be calculated, and if the license plate number similarity is high, the detected vehicle and the tracked vehicle may be considered to be matched.
In another embodiment of the present application, in order to obtain a more accurate matching result, referring to fig. 4, fig. 4 is a schematic flowchart of a process for matching a detected vehicle and a tracked vehicle provided in the embodiment of the present application, and the detected vehicle and the tracked vehicle may be matched based on the following steps:
s401: and acquiring the image position similarity of each detected vehicle and each tracked vehicle based on the vehicle pixel position of the detected vehicle in the current image frame and the vehicle pixel position of the tracked vehicle determined in the previous frame.
In the embodiment of the present application, for example, for the ith tracking target and the jth detection target, the image position similarity M1_ij between them may be calculated from the lateral overlap and the longitudinal overlap of their image positions, together with the aspect ratio of the image detection frame.
Specifically, the lateral overlap is the ratio of the difference between the smaller of the two right boundaries and the larger of the two left boundaries to the difference between the larger of the two right boundaries and the smaller of the two left boundaries; the longitudinal overlap is defined analogously using the upper and lower boundaries.
The aspect ratio of the image detection frame may be used to calculate an image position similarity weighting value w1, which represents the relative weight of the lateral overlap and the longitudinal overlap when calculating the image position similarity and may be adjusted adaptively according to the motion state of the vehicle. For example, when the yaw angle is large while the vehicle is turning, a larger image position similarity weighting value w1 may be set; its value ranges from 0 to 1, for example 0.3, 0.5, 0.7 or 0.9.
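A minimal Python sketch of the overlap computation is given below. Combining the lateral and longitudinal overlaps into M1_ij through the weight w1 is an assumption for illustration; the text above only states that w1 balances the two overlaps.

```python
def overlap_1d(a_min, a_max, b_min, b_max):
    """1-D overlap ratio: (min of maxima - max of minima) / (max of maxima - min of minima)."""
    inner = min(a_max, b_max) - max(a_min, b_min)
    outer = max(a_max, b_max) - min(a_min, b_min)
    return max(inner, 0.0) / outer if outer > 0 else 0.0


def image_position_similarity(track_box, det_box, w1=0.5):
    """track_box / det_box = (left, right, top, bottom) in pixels.

    The weighted combination below is an assumed reconstruction of M1_ij.
    """
    lateral = overlap_1d(track_box[0], track_box[1], det_box[0], det_box[1])
    longitudinal = overlap_1d(track_box[2], track_box[3], det_box[2], det_box[3])
    return w1 * lateral + (1.0 - w1) * longitudinal


print(image_position_similarity((100, 300, 50, 200), (110, 310, 55, 205)))
```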
S402: calculating the similarity of the physical positions of each detected vehicle and each tracked vehicle based on the physical positions of the detected vehicles in the current image frame and the physical positions of the tracked vehicles determined in the previous frame; the physical position of the detected vehicle is calculated by adopting a triangulation distance measuring principle according to the pixel position of the detected vehicle, the size information and the course angle of the tracked vehicle.
In the embodiment of the application, in order to obtain a more accurate matching result, the similarity of the physical positions of the detected vehicle and the tracked vehicle can be calculated.
The physical position of the tracked vehicle is known. When matching the tracked vehicle with a detected vehicle, the detected vehicle can be assumed to match the tracked vehicle, and the physical position of the detected vehicle can then be calculated from the pixel position of the detected vehicle together with the size information and heading angle of the tracked vehicle. This physical position is not the true physical position of the detected vehicle; it is used only to calculate the physical position similarity between the detected vehicle and the tracked vehicle.
As an example, for the ith tracking target and the jth detection target, given the size information of the ith tracking target (its length, width and height) and its heading angle, the physical position of the jth detection target can be calculated from the triangulation ranging principle. The quantities involved are: the focal length f of the vehicle-mounted camera, the abscissa c_u of the camera optical center, the heading angle θ_i and width w_i of the ith tracking target, and the left boundary u_l^j and right boundary u_r^j of the jth detection target; the result is the physical position (x_j, y_j) of the jth detection target.
Further, the physical position similarity between the detected vehicle and the tracked vehicle may be calculated. As an example, the physical position similarity M2_ij of the ith tracking target and the jth detection target is computed from the absolute differences between the physical position (x_i, y_i) of the ith tracking target and the physical position (x_j, y_j) of the jth detection target, normalized by a set threshold T.
S403: and carrying out weighted summation on the image position similarity and the physical position similarity to obtain a similarity coefficient of each detected vehicle and each tracked vehicle, judging whether the similarity coefficient meets a preset condition, and if so, determining that the detected vehicle and the tracked vehicle are successfully matched.
As an example, the similarity coefficient M_ij between the detected vehicle and the tracked vehicle is calculated using the following formula:
M_ij = w1 · M1_ij + w2 · M2_ij
where w1 denotes the image position similarity weight and w2 the physical position similarity weight, which can be set according to actual requirements and satisfy w1 + w2 = 1; M1_ij denotes the image position similarity and M2_ij the physical position similarity of the ith tracking target and the jth detection target.
In the embodiment of the application, whether the similarity coefficient meets a preset condition can then be judged; if so, it is determined that the detected vehicle and the tracked vehicle are successfully matched. The preset condition can be set according to actual requirements. For example, if a smaller value of the similarity coefficient indicates a higher degree of similarity between the detected vehicle and the tracked vehicle, a threshold can be set, and the match is determined to be successful when the similarity coefficient is smaller than the threshold.
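The weighted-sum matching can be sketched as follows. The weighted sum follows the formula above; interpreting the preset condition as a greedy best-match with a threshold (and treating larger coefficients as more similar) is an assumption for illustration, and the comparison direction depends on how the similarities are defined.

```python
def match_detections_to_tracks(m1, m2, w1=0.5, w2=0.5, threshold=0.6):
    """m1[i][j], m2[i][j]: image / physical position similarity of track i and detection j.

    Returns a dict mapping track index -> matched detection index.
    Greedy assignment and the threshold rule are assumed details.
    """
    matches = {}
    used = set()
    for i, (row1, row2) in enumerate(zip(m1, m2)):
        coeffs = [w1 * a + w2 * b for a, b in zip(row1, row2)]
        best_j = max(range(len(coeffs)), key=lambda j: coeffs[j])
        if coeffs[best_j] >= threshold and best_j not in used:
            matches[i] = best_j
            used.add(best_j)
    return matches


print(match_detections_to_tracks([[0.9, 0.1]], [[0.8, 0.2]]))
```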
S103: for a tracked vehicle which is successfully matched with a detected vehicle, updating the state information of the tracked vehicle by adopting a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle, wherein the state information comprises the physical position, the speed information, the size information and the course angle of the vehicle; the nonlinear transformation function adopted by the filtering algorithm represents the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle.
In the embodiment of the application, the matching results between the detected vehicles in the current image frame and the tracked vehicles determined in the previous frame fall into three cases. In the first case, a detected vehicle is matched with a tracked vehicle; in the second case, a detected vehicle cannot be matched with any tracked vehicle; in the third case, a tracked vehicle cannot be matched with any detected vehicle.
In the first case, the detected vehicle and the tracked vehicle that match each other are actually the same vehicle, and the vehicle pixel position of the detected vehicle can be regarded as the observation of the new state information of the tracked vehicle, so the state of the tracked vehicle can be updated.
Specifically, in the embodiment of the present application, for a detected vehicle and a tracked vehicle that are matched with each other, the state information of the tracked vehicle may be updated by using a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle. Wherein the state information of the vehicle includes a physical location, speed information, size information, and a heading angle.
The state updating of the tracking vehicle can be carried out by adopting Unscented Kalman Filtering (UKF).
The core of unscented Kalman filtering is the unscented transform (UT). Its principle is as follows: assume a nonlinear transformation function y = g(x), where x is an n-dimensional vector with known mean and variance; the UT then constructs 2n+1 sigma points, together with a corresponding weight for each point, from which the statistical properties of y are obtained.
In the embodiment of the application, the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle can be used as the nonlinear transformation function of the unscented Kalman filtering.
In the unscented Kalman filtering process, state information is predicted from the state information of the tracked vehicle, and the predicted state information is mapped to a vehicle pixel position, which serves as the predicted quantity; the vehicle pixel position of the detected vehicle serves as the observed quantity, and the state information of the tracked vehicle can be updated from the predicted quantity and the observed quantity.
The UKF update process based on the predicted quantity and the observed quantity follows standard practice and is not described in detail here.
It is easy to understand that after the state information of the tracked vehicle is updated, the detected vehicle of the current image frame is updated to the tracked vehicle, and after a new image frame is acquired, the steps of S101-S103 are executed again, and the state information of the tracked vehicle is continuously updated, so as to realize the tracking of the vehicle.
By adopting the vehicle tracking method provided by the embodiment of the application, vehicle detection is carried out on the image frames acquired in real time, and the vehicle pixel position of the detected vehicle of each image frame is obtained; for the current image frame, matching the detected vehicle and the tracked vehicle based on the similarity between the detected vehicle of the current image frame and the tracked vehicle determined in the previous frame; for a tracked vehicle which is successfully matched with a detected vehicle, updating the state information of the tracked vehicle by adopting a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle, wherein the state information comprises the physical position, the speed information, the size information and the course angle of the vehicle; the nonlinear transformation function adopted by the filtering algorithm represents the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle. Therefore, a nonlinear mapping relation between the vehicle state information and the vehicle pixel position is established, and the state information of the tracked vehicle is updated by adopting a filtering algorithm based on the nonlinear mapping relation. Therefore, the plane where the vehicle is located does not need to be determined, and the nonlinear mapping relation between the vehicle state information and the pixel position of the vehicle is not influenced by the pitching of the vehicle body and the gradient of the ground, so that the accuracy and the stability of vehicle tracking can be improved.
In an embodiment of the present application, referring to fig. 5, fig. 5 is a schematic flowchart of a process for updating the state information of the tracked vehicle according to the embodiment of the present application, and as shown in fig. 5, the step S103 may specifically include the following steps:
s501: and aiming at the tracked vehicle which is successfully matched with the detected vehicle, constructing a preset number of state information sampling points based on the state information of the tracked vehicle, and determining the weight of each state information sampling point.
In the embodiment of the application, the state information of the vehicle comprises the physical position, the speed information, the size information and the heading angle of the vehicle.
The physical position represents the position of the vehicle in a global coordinate system and is denoted (x, y); the speed is denoted (vx, vy); the size information includes the length, width and height, denoted (l, w, h); and the heading angle is denoted θ.
A UT transform may be performed on the state information of the tracked vehicle to construct 2n+1 state information sampling points, each containing a physical position, speed information, size information and a heading angle, and the weight of each state information sampling point is determined.
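The following Python sketch shows a standard unscented-transform sigma-point construction for an 8-dimensional state (x, y, vx, vy, l, w, h, θ); it is generic UT machinery shown only to make "construct 2n+1 sampling points and their weights" concrete, not necessarily the exact scheme used here, and the parameter values are placeholders.

```python
import numpy as np


def sigma_points(x, P, alpha=0.5, beta=2.0, kappa=0.0):
    """Construct 2n+1 sigma points and their mean/covariance weights
    for an n-dimensional state x with covariance P (Merwe-style UT)."""
    n = x.shape[0]
    lam = alpha ** 2 * (n + kappa) - n
    sqrt_P = np.linalg.cholesky((n + lam) * P)

    pts = np.zeros((2 * n + 1, n))
    pts[0] = x
    for i in range(n):
        pts[1 + i] = x + sqrt_P[:, i]
        pts[1 + n + i] = x - sqrt_P[:, i]

    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))   # mean weights
    wc = wm.copy()                                   # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha ** 2 + beta)
    return pts, wm, wc


# state: (x, y, vx, vy, l, w, h, heading) of a tracked vehicle
pts, wm, wc = sigma_points(np.zeros(8), np.eye(8))
print(pts.shape, round(wm.sum(), 6))
```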
S502: and predicting the state information of each state information sampling point based on a preset state information transfer matrix, and mapping the predicted state information to the vehicle pixel position based on the mapping relation between the state information and the vehicle pixel position.
In the embodiment of the application, different state information has different state information transition matrixes.
For example, the motion state information of the vehicle is defined to include the physical position and the speed information. Since the product of the speed and the time difference between adjacent image frames represents the change of the physical position, the transition matrix of the motion state information, for a motion state ordered as (x, y, vx, vy), may be:
F = [ 1 0 Δt 0
      0 1 0 Δt
      0 0 1 0
      0 0 0 1 ]
where Δt represents the time difference between adjacent image frames.
Specifically, if the motion state information of the ith state information sampling point is denoted s_i, the predicted value of the motion state information of that sampling point is F · s_i.
For the size information and the heading angle, since the time difference between adjacent image frames is small, the size information and the heading angle can be considered to remain unchanged; therefore the transition matrix of the size information can be set to the 3×3 identity matrix, and the transition matrix of the heading angle can be set to 1.
according to the transition matrix, the state information prediction result of each state information sampling point can be obtained. Further, the predicted state information may be mapped to the vehicle pixel position based on the mapping relationship between the state information and the vehicle pixel position.
In an embodiment of the present application, referring to fig. 6, fig. 6 is a schematic flowchart of a process for mapping status information to a vehicle pixel position provided in the embodiment of the present application, and as shown in fig. 6, the step of mapping predicted status information to a vehicle pixel position may specifically include the following steps:
s601: and converting the position information in the predicted state information into the vehicle body coordinate system based on the real-time determined conversion matrix of the vehicle body coordinate system and the global coordinate system.
In the embodiment of the application, during the running process of the vehicle, the running track of the vehicle can be updated in real time according to the wheel speed sensor, the steering wheel angle sensor and the like.
Since the coordinates of the other vehicle directly observed by the onboard camera are relative to the body coordinate system, and the determined coordinates of the tracked vehicle are relative to the global coordinate system during the vehicle tracking process, the transformation matrices of the body coordinate system and the global coordinate system are updated synchronously with the updating of the vehicle trajectory of the vehicle.
Tracking in the global coordinate system narrows the dynamic range of the motion state quantities of the tracked vehicle. If tracking were performed directly in the vehicle body coordinate system, the speed of the target vehicle would be expressed relative to the ego vehicle, so the dynamic range of the tracked vehicle's speed would be its own speed superimposed on the speed of the ego vehicle; when the tracked vehicle travels in the opposite direction to the ego vehicle, this dynamic range becomes very large.
In this step, since the position information in the predicted state information of the tracked vehicle is in the global coordinate system, it is necessary to convert it to the vehicle body coordinate system first.
Specifically, the conversion can be performed as p_body = T · p_global, where T denotes the transformation matrix between the vehicle body coordinate system and the global coordinate system determined in real time.
S602: and generating a plurality of prediction angular points under an Inertial Measurement Unit (IMU) coordinate system of the tracked vehicle according to the size information, the course angle and the converted position information in the predicted state information.
The number of the predicted corner points can be selected according to actual requirements, and as an example, the number of the generated predicted corner points is 8.
In the embodiment of the application, for each state information sampling point, a plurality of prediction corner points under an IMU coordinate system of a tracked vehicle can be generated based on the state information prediction value of the state information sampling point.
Referring to fig. 7(a), fig. 7(a) is a schematic diagram of a corner point of a tracked vehicle under an IMU coordinate system according to an embodiment of the present application.
S603: and mapping the plurality of predicted corner points to an image plane based on camera calibration parameters to obtain a plurality of image corner points.
In the embodiment of the application, each prediction corner point can be mapped to an image plane based on camera calibration parameters to obtain a plurality of image corner points, wherein each image corner point represents a pixel position of an image.
The mapping from the IMU coordinate system to the image plane may be written as p_img ~ C · p_IMU, where C represents the camera calibration parameters (the combined intrinsic and extrinsic projection).
Referring to fig. 7(b), fig. 7(b) is a schematic diagram of tracking an image corner point of a vehicle in an image plane according to an embodiment of the present application.
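As an illustrative sketch, the projection of predicted corner points into the image plane can be written with a pinhole model; K, R and t stand in for the camera calibration parameters, and the numeric values below are placeholders rather than real calibration data.

```python
import numpy as np


def project_corners(corners_imu, K, R, t):
    """Project Nx3 corner points given in the IMU coordinate system into
    pixel coordinates using a pinhole camera model."""
    cam = corners_imu @ R.T + t          # IMU frame -> camera frame
    uvw = cam @ K.T                      # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]      # perspective division


K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
# Placeholder extrinsics: IMU x-forward / y-left / z-up mapped to camera
# x-right / y-down / z-forward, with zero translation.
R = np.array([[0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0],
              [1.0, 0.0, 0.0]])
t = np.zeros(3)
corners = np.array([[10.0, 1.0, 0.0],    # 10 m ahead, 1 m to the left
                    [10.0, -1.0, 0.0]])
print(project_corners(corners, K, R, t))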
S604: the plurality of image corners are converted into vehicle pixel locations.
In the embodiment of the present application, after obtaining a plurality of image corner points, the image corner points may be converted into vehicle pixel positions, that is, into a data format including a left boundary, a right boundary, an upper boundary, and a lower boundary.
Referring to fig. 7(c), fig. 7(c) is a schematic diagram of tracking vehicle pixel positions of a vehicle according to an embodiment of the present application.
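Converting the projected image corner points into the boundary representation can be sketched as a simple min/max over the corners; recovering the boundary lines and the ground line would additionally require knowing which corners belong to the side face, which is omitted in this sketch.

```python
import numpy as np


def corners_to_pixel_position(image_corners):
    """image_corners: Nx2 array of (u, v) pixel coordinates of the projected
    3-D box corners.  Returns (left, right, top, bottom)."""
    u, v = image_corners[:, 0], image_corners[:, 1]
    return float(u.min()), float(u.max()), float(v.min()), float(v.max())


print(corners_to_pixel_position(np.array([[540.0, 360.0],
                                          [560.0, 390.0],
                                          [600.0, 395.0]])))
```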
S503: and weighting the vehicle pixel position obtained by mapping based on the weight value of each state information sampling point to obtain the prediction quantity of the vehicle pixel position of the tracked vehicle.
In the embodiment of the application, after each state information sampling point is mapped to a vehicle pixel position, the mapped vehicle pixel position can be weighted according to the weight of each state information sampling point, and the prediction quantity of the vehicle pixel position of the tracked vehicle is obtained.
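This weighting step is simply the UT mean over the mapped measurements, sketched below; the pixel-position vector layout is assumed.

```python
import numpy as np


def predicted_pixel_position(mapped_positions, weights):
    """mapped_positions: (2n+1, d) pixel-position vectors, one per state
    sampling point; weights: (2n+1,) UT mean weights.
    Returns the weighted mean, i.e. the predicted measurement."""
    return weights @ mapped_positions


print(predicted_pixel_position(np.array([[100.0, 300.0], [102.0, 304.0]]),
                               np.array([0.5, 0.5])))
```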
S504: the state information of the tracked vehicle is updated based on a difference between the predicted amount of the vehicle pixel position of the tracked vehicle and the vehicle pixel position of the detected vehicle.
In the embodiment of the application, the vehicle pixel position of the vehicle is detected as the observed quantity, and the state information of the tracked vehicle can be updated according to the predicted quantity and the observed quantity of the vehicle pixel position.
Specifically, a difference value between the predicted quantity and the observed quantity of the pixel position of the vehicle is calculated, and UKF updating is carried out according to the difference value to obtain updated state information of the tracked vehicle.
In one embodiment of the present application, to further improve the accuracy of updating the status information of the tracked vehicle, the different status information may be updated in a certain order.
Specifically, the motion state information, including physical location and velocity information, may be updated first. For the motion state information, the predicted amount and the observed amount of the vehicle pixel position can be represented by a vehicle pixel width, i.e., a difference value between a right boundary line and a left boundary line, and a ground line center point, i.e., an average value of the left boundary line and the right boundary line.
As described above, the observed quantity is calculated from the vehicle pixel position of the detected vehicle, the predicted quantity is calculated based on the weighting calculation of the vehicle pixel position mapped by each state information sampling point, and the UKF can be updated by subtracting the two values to obtain the updated physical position and speed information.
In order to improve the accuracy of the size information update, the process then returns to step S602 using the updated physical position, so that a new predicted quantity of the vehicle pixel position is obtained, and the size information of the tracked vehicle is updated based on the difference between this new predicted quantity and the vehicle pixel position of the detected vehicle.
Specifically, for the size information, the predicted quantity and the observed quantity of the vehicle pixel position can be represented by the vehicle pixel width, the vehicle pixel height and the vehicle side pixel length, where the vehicle pixel height is the difference between the lower boundary and the upper boundary, and the vehicle side pixel length is the difference between the left boundary line and the left boundary, or between the right boundary and the right boundary line.
Similarly, for the size information, based on the difference between the predicted quantity and the observed quantity, UKF is updated to obtain updated size information.
In order to improve the accuracy of the heading angle update, the process then returns to step S602 using the updated physical position and the updated size information, so that a new predicted quantity of the vehicle pixel position is obtained, and the heading angle of the tracked vehicle is updated based on the difference between this new predicted quantity and the vehicle pixel position of the detected vehicle.
Specifically, for the heading angle, the predicted quantity and the observed quantity of the pixel position of the vehicle can be represented by the included angle between the side edge and the bottom edge of the vehicle and the length of the pixel on the side edge of the vehicle, wherein the included angle between the side edge and the bottom edge of the vehicle can be calculated by adopting an arctangent function based on the length of the pixel on the side edge of the vehicle and the difference value between the lower boundary of the vehicle and the grounding line.
Similarly, aiming at the course angle, based on the difference value of the pre-measurement and the observation quantity, UKF is updated to obtain the updated course angle.
In the embodiment of the application, if the matching result between a detected vehicle in the current image frame and the tracked vehicles determined in the previous frame falls into the second case, i.e. the detected vehicle cannot be matched with any tracked vehicle, the detected vehicle may have newly entered the field of view of the ego vehicle. For subsequent tracking, the state information of the detected vehicle may be initialized, and the detected vehicle is determined to be a tracked vehicle of the current image frame.
Specifically, the initialized state information includes vehicle position information, speed information, size information, and heading angle. Wherein the speed information may be initially 0.
In initializing the size information, the vehicle type of the detected vehicle may be acquired, and the size information of the detected vehicle may be initialized with the vehicle type. The vehicle type may indicate the size of the vehicle, such as a small-sized vehicle, a medium-sized vehicle, and a large-sized vehicle, and the size information of the detected vehicle may be initialized based on the size information of the different predetermined vehicle types.
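A small sketch of the type-based size initialization is given below; the vehicle classes and default dimensions in the table are assumptions for illustration, since specific values are not given above.

```python
# Assumed default dimensions (length, width, height in metres) per vehicle
# type; the actual classes and values are not specified in the source.
DEFAULT_SIZE = {
    "small":  (4.2, 1.7, 1.5),
    "medium": (5.0, 1.9, 1.8),
    "large":  (10.0, 2.5, 3.2),
}


def init_size(vehicle_type):
    """Return initial (length, width, height) for a detected vehicle type."""
    return DEFAULT_SIZE.get(vehicle_type, DEFAULT_SIZE["medium"])


print(init_size("large"))
```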
In the course of initializing the course angle, the course angle of the detected vehicle can be calculated according to the vehicle pixel position of the detected vehicle and the calibration parameter of the camera.
The method specifically comprises the following refining steps:
step 11: and determining the pixel coordinates of the lower boundary corner point and the pixel coordinates of the grounding corner point based on the vehicle pixel positions of the detected vehicle.
Specifically, two corner points can be determined from the vehicle pixel position and used to calculate the heading angle; they are denoted the lower boundary corner point and the ground corner point. The pixel abscissa of the lower boundary corner point is the left boundary or the right boundary and its ordinate is the lower boundary; the pixel abscissa of the ground corner point is the left boundary or the right boundary and its ordinate is the ground line.
As an example, if the detected vehicle is in a right side body state, the pixel abscissa of the lower boundary corner point can be taken as the right boundary and its ordinate as the lower boundary, while the pixel abscissa of the ground corner point is the right boundary and its ordinate is the ground line.
Step 12: and respectively mapping the pixel coordinates of the lower boundary angular point and the pixel coordinates of the grounding angular point into a first coordinate and a second coordinate under an IMU coordinate system according to the calibration parameters of the vehicle-mounted camera.
In the embodiment of the application, the vehicle-mounted camera calibration parameters may include camera internal parameters and camera external parameters, and the pixel coordinates of the boundary corner point and the pixel coordinates of the grounding corner point may be mapped to the IMU coordinates according to the vehicle-mounted camera calibration parameters to obtain the first coordinates and the second coordinates.
As an example, the internal parameters of the vehicle-mounted camera and its external parameters are given by the camera calibration [the corresponding matrices appear only as equation images in the original]. Using the fixed-height ranging method, a pixel coordinate can then be mapped to its coordinate in the IMU coordinate system from these calibration parameters [the corresponding formulas likewise appear only as equation images in the original].
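Since the original formulas are available only as images, the following is a generic sketch of fixed-height (ground-plane) ranging under a standard pinhole model; the symbols K (intrinsics), R, t (extrinsics assumed to map IMU coordinates to camera coordinates) and the ground plane z = 0 are our notation, not necessarily the patent's exact equations:

```latex
% Generic fixed-height ranging sketch (assumed notation, pinhole model):
% back-project pixel (u, v) and intersect the viewing ray with the ground plane.
\[
\mathbf{d} = R^{\top} K^{-1}\begin{bmatrix}u\\ v\\ 1\end{bmatrix},\qquad
\mathbf{c} = -R^{\top}\mathbf{t},\qquad
s = -\frac{c_z}{d_z},
\]
\[
\begin{bmatrix}x\\ y\\ z\end{bmatrix} = \mathbf{c} + s\,\mathbf{d},
\qquad\text{so that } z = 0 \text{ on the ground plane of the IMU coordinate system.}
\]
```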
step 13: and calculating the course angle of the detected vehicle according to the first coordinate and the second coordinate under the IMU coordinate system.
As one example, the heading angle may be calculated based on a difference in ordinate between the second coordinate and the first coordinate, a difference in abscissa between the second coordinate and the first coordinate, and an arctan function.
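As an illustration of steps 12 and 13 combined, the sketch below back-projects the two corner pixels onto the ground plane of the IMU frame and takes the arctangent of the coordinate differences. The extrinsic convention X_cam = R·X_imu + t, the ground plane z = 0 and all function names are assumptions for illustration, not the patent's exact formulation:

```python
import math
import numpy as np

def pixel_to_imu_ground(pixel_uv, K, R, t):
    """Back-project a pixel onto the ground plane (z = 0) of the IMU frame.

    Assumes the extrinsics map IMU coordinates to camera coordinates as
    X_cam = R @ X_imu + t (an assumed convention; the patent's own equations
    are only provided as images)."""
    p = np.array([pixel_uv[0], pixel_uv[1], 1.0])
    d = R.T @ np.linalg.inv(K) @ p   # viewing-ray direction in the IMU frame
    c = -R.T @ t                     # camera centre in the IMU frame
    s = -c[2] / d[2]                 # scale at which the ray meets z = 0
    return c + s * d

def heading_from_corners(lower_corner_uv, ground_corner_uv, K, R, t):
    """Step 13 sketch: course angle from the two mapped corner coordinates."""
    p1 = pixel_to_imu_ground(lower_corner_uv, K, R, t)    # first coordinate
    p2 = pixel_to_imu_ground(ground_corner_uv, K, R, t)   # second coordinate
    return math.atan2(p2[1] - p1[1], p2[0] - p1[0])       # arctan of the differences
```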
After initializing the heading angle, the position information of the detected vehicle can be calculated by adopting a triangulation distance measuring principle according to the vehicle pixel position of the detected vehicle, the size information of the detected vehicle and the heading angle of the detected vehicle.
The process of calculating the position information of the detected vehicle based on the principle of triangulation distance measurement can be referred to above, and is not described herein. Since the calculated position information is relative to the vehicle coordinate system, it can be converted into the global coordinate system to obtain the initialized position information of the detected vehicle.
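A minimal 2D sketch of this coordinate conversion; the ego pose (position and yaw in the global frame) is assumed to be available from elsewhere in the system, and the names are illustrative:

```python
import math
import numpy as np

def body_to_global(position_body, ego_position, ego_yaw):
    """Rotate a position from the ego-vehicle (body) frame by the ego yaw and
    translate it by the ego position to obtain global-frame coordinates."""
    c, s = math.cos(ego_yaw), math.sin(ego_yaw)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return rotation @ np.asarray(position_body) + np.asarray(ego_position)
```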
In addition, the parameters of the unscented Kalman filter may also be initialized, including the process noise, the measurement noise, the covariance matrix and the unscented transform parameters (the corresponding symbols appear only as equation images in the original).
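A minimal sketch of such an initialization; the state layout and every numeric value below are assumptions chosen for illustration, since the patent does not disclose concrete parameter values:

```python
import numpy as np

STATE_DIM = 8   # assumed layout: x, y, vx, vy, length, width, height, heading

def init_ukf_parameters():
    """Initialize process noise, measurement noise, covariance matrix and the
    unscented-transform parameters for a newly tracked vehicle (illustrative)."""
    Q = np.diag([0.5, 0.5, 1.0, 1.0, 0.01, 0.01, 0.01, 0.05])  # process noise
    R = np.diag([4.0, 4.0, 4.0, 4.0, 2.0])                     # pixel measurement noise
    P = np.eye(STATE_DIM) * 10.0                               # initial covariance
    alpha, beta, kappa = 1e-3, 2.0, 0.0                        # unscented-transform parameters
    lam = alpha ** 2 * (STATE_DIM + kappa) - STATE_DIM         # sigma-point scaling term
    return Q, R, P, (alpha, beta, lam)
```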
In the embodiment of the application, if the matching result between the detected vehicle in the current image frame and the tracked vehicle determined in the previous frame is the third case, that is, a certain tracked vehicle cannot be matched with any detected vehicle, the tracked vehicle may have left the field of view of the host vehicle, and the state information of the tracked vehicle may be deleted. Alternatively, the state information of the tracked vehicle may be temporarily retained and deleted only if the detected vehicles in subsequent image frames still cannot be matched with the tracked vehicle.
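A minimal sketch of this retain-then-delete policy; the track representation and the number of frames a track is kept (MAX_MISSES) are assumptions, as the patent leaves them unspecified:

```python
MAX_MISSES = 3   # assumed number of consecutive unmatched frames before deletion

def prune_tracks(tracks, matched_ids):
    """Keep matched tracks, temporarily retain unmatched ones, and delete a
    track once it has gone unmatched for more than MAX_MISSES frames."""
    survivors = []
    for track in tracks:
        if track["id"] in matched_ids:
            track["misses"] = 0
            survivors.append(track)
        else:
            track["misses"] = track.get("misses", 0) + 1
            if track["misses"] <= MAX_MISSES:
                survivors.append(track)
    return survivors
```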
Corresponding to the embodiment of the vehicle tracking method provided by the embodiment of the application, the embodiment of the application also provides a vehicle tracking device, and referring to fig. 8, the device may include the following modules:
the detection module 801 is configured to perform vehicle detection on image frames acquired in real time to obtain vehicle pixel positions of detected vehicles of each image frame;
a matching module 802, configured to match, for a current image frame, a detected vehicle and a tracked vehicle based on a similarity between the detected vehicle of the current image frame and a tracked vehicle determined in a previous frame;
an updating module 803, configured to update, for a tracked vehicle that is successfully matched with a detected vehicle, state information of the tracked vehicle by using a filtering algorithm based on the state information of the tracked vehicle and a vehicle pixel position of the detected vehicle, where the state information includes a physical position, speed information, size information, and a heading angle of the vehicle; the nonlinear transformation function adopted by the filtering algorithm represents the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle.
By adopting the vehicle tracking device provided by the embodiment of the application, vehicle detection is carried out on the image frames acquired in real time, and the vehicle pixel position of the detected vehicle of each image frame is obtained; for the current image frame, matching the detected vehicle and the tracked vehicle based on the similarity between the detected vehicle of the current image frame and the tracked vehicle determined in the previous frame; for a tracked vehicle which is successfully matched with a detected vehicle, updating the state information of the tracked vehicle by adopting a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle, wherein the state information comprises the physical position, the speed information, the size information and the course angle of the vehicle; the nonlinear transformation function adopted by the filtering algorithm represents the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle. Therefore, a nonlinear mapping relation between the vehicle state information and the vehicle pixel position is established, and the state information of the tracked vehicle is updated by adopting a filtering algorithm based on the nonlinear mapping relation. Therefore, the plane where the vehicle is located does not need to be determined, and the nonlinear mapping relation between the vehicle state information and the pixel position of the vehicle is not influenced by the pitching of the vehicle body and the gradient of the ground, so that the accuracy and the stability of vehicle tracking can be improved.
The device and the method are based on the same inventive concept, and since the principles by which they solve the problem are similar, the implementation of the device and that of the method may refer to each other; repeated descriptions are omitted.
The embodiment of the present application further provides an electronic device, as shown in fig. 9, which includes a processor 901, a communication interface 902, a memory 903 and a communication bus 904, where the processor 901, the communication interface 902 and the memory 903 communicate with one another through the communication bus 904,
a memory 903 for storing computer programs;
the processor 901 is configured to implement the following steps when executing the program stored in the memory 903:
carrying out vehicle detection on the image frames acquired in real time to obtain the vehicle pixel position of the detected vehicle of each frame of image;
for the current image frame, matching the detected vehicle and the tracked vehicle based on the similarity between the detected vehicle of the current image frame and the tracked vehicle determined in the previous frame;
for a tracked vehicle which is successfully matched with a detected vehicle, updating the state information of the tracked vehicle by adopting a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle, wherein the state information comprises the physical position, the speed information, the size information and the course angle of the vehicle; the nonlinear transformation function adopted by the filtering algorithm represents the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
By adopting the electronic equipment provided by the embodiment of the application, vehicle detection is carried out on the image frames acquired in real time, and the vehicle pixel position of the detected vehicle of each frame of image is obtained; for the current image frame, matching the detected vehicle and the tracked vehicle based on the similarity between the detected vehicle of the current image frame and the tracked vehicle determined in the previous frame; for a tracked vehicle which is successfully matched with a detected vehicle, updating the state information of the tracked vehicle by adopting a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle, wherein the state information comprises the physical position, the speed information, the size information and the course angle of the vehicle; the nonlinear transformation function adopted by the filtering algorithm represents the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle. Therefore, a nonlinear mapping relation between the vehicle state information and the vehicle pixel position is established, and the state information of the tracked vehicle is updated by adopting a filtering algorithm based on the nonlinear mapping relation. Therefore, the plane where the vehicle is located does not need to be determined, and the nonlinear mapping relation between the vehicle state information and the pixel position of the vehicle is not influenced by the pitching of the vehicle body and the gradient of the ground, so that the accuracy and the stability of vehicle tracking can be improved.
In yet another embodiment provided herein, there is also provided a computer readable storage medium having a computer program stored therein, the computer program when executed by a processor implementing the steps of any of the vehicle tracking methods described above.
In yet another embodiment provided herein, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of any of the vehicle tracking methods of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the embodiments of the vehicle tracking device, the electronic device, the computer-readable storage medium and the computer program product, since they are substantially similar to the embodiments of the vehicle tracking method, the description is relatively simple, and the relevant points can be referred to the partial description of the embodiments of the vehicle tracking method.
The above description is only for the preferred embodiment of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (10)

1. A vehicle tracking method, characterized in that the method comprises:
carrying out vehicle detection on the image frames acquired in real time based on the neural network model to obtain the vehicle pixel position of the detected vehicle of each frame of image; the vehicle pixel locations include: a left boundary, a right boundary, an upper boundary, a lower boundary, and a ground line;
for the current image frame, matching the detected vehicle and the tracked vehicle based on the similarity between the detected vehicle of the current image frame and the tracked vehicle determined in the previous frame;
for a tracked vehicle which is successfully matched with a detected vehicle, updating the state information of the tracked vehicle by adopting a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle, wherein the state information comprises the physical position, the speed information, the size information and the course angle of the vehicle; the nonlinear transformation function adopted by the filtering algorithm represents the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle;
the step of updating the state information of the tracked vehicle by adopting a filtering algorithm for the tracked vehicle successfully matched with the detected vehicle based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle comprises the following steps:
aiming at a tracked vehicle which is successfully matched with a detected vehicle, constructing a preset number of state information sampling points based on state information of the tracked vehicle, and determining the weight of each state information sampling point;
predicting the state information of each state information sampling point based on a preset state information transfer matrix, and mapping the predicted state information to a vehicle pixel position based on the mapping relation between the state information and the vehicle pixel position;
weighting the vehicle pixel position obtained by mapping based on the weight of each state information sampling point to obtain the predicted quantity of the vehicle pixel position of the tracked vehicle;
updating the state information of the tracked vehicle based on a difference of the predicted amount of vehicle pixel positions of the tracked vehicle and the vehicle pixel positions of the detected vehicle.
2. The method of claim 1, further comprising:
determining the side body state of the detected vehicle according to the left deviation between the left boundary line and the left boundary, the right deviation between the right boundary line and the right boundary, and a preset boundary deviation threshold; the side body state comprises a left side body, a right side body and a complete side body;
when the side body state is the left side body, adjusting the right boundary line to be equal to the right boundary;
when the side body state is the right side body, adjusting the left boundary line to be equal to the left boundary;
when the side body state is the complete side body, adjusting the left boundary line to be equal to the left boundary, and adjusting the right boundary line to be equal to the right boundary;
the method further comprises the following steps: if the ground line is not between the upper boundary and the lower boundary, adjusting the ground line to be equal to the lower boundary.
3. The method of claim 2, wherein the step of matching the detected vehicle and the tracked vehicle based on the similarity of the detected vehicle of the current image frame to the tracked vehicle determined from the previous frame for the current image frame comprises:
based on the vehicle pixel position of the detected vehicle in the current image frame and the vehicle pixel position of the tracked vehicle determined in the previous frame, acquiring the image position similarity of each detected vehicle and each tracked vehicle;
calculating the similarity of the physical positions of each detected vehicle and each tracked vehicle based on the physical positions of the detected vehicles in the current image frame and the physical positions of the tracked vehicles determined in the previous frame; the physical position of the detected vehicle is calculated by adopting a triangulation distance measuring principle according to the vehicle pixel position of the detected vehicle, the size information and the course angle of the tracked vehicle;
and carrying out weighted summation on the image position similarity and the physical position similarity to obtain a similarity coefficient of each detected vehicle and each tracked vehicle, judging whether the similarity coefficient meets a preset condition, and if so, determining that the detected vehicle and the tracked vehicle are successfully matched.
4. The method of claim 1, wherein the step of mapping the predicted state information to vehicle pixel locations based on a mapping of the state information to vehicle pixel locations comprises:
converting the position information in the predicted state information into a vehicle body coordinate system based on a conversion matrix of a vehicle body coordinate system and a global coordinate system determined in real time;
generating a plurality of prediction angular points under an Inertial Measurement Unit (IMU) coordinate system of the tracked vehicle according to the size information, the course angle and the converted position information in the predicted state information;
mapping the plurality of prediction angular points to an image plane based on camera calibration parameters to obtain a plurality of image angular points;
and converting the plurality of image corner points into vehicle pixel positions.
5. The method of claim 1, further comprising:
for a detected vehicle which is not successfully matched with any tracked vehicle, initializing state information of the detected vehicle, and determining the detected vehicle as a tracked vehicle of the current image frame.
6. The method of claim 5, wherein the step of initializing the state information of the detected vehicle comprises:
acquiring the vehicle type of the detected vehicle, and initializing the size information of the detected vehicle based on the vehicle type;
calculating the course angle of the detected vehicle according to the vehicle pixel position of the detected vehicle and the calibration parameters of the vehicle-mounted camera;
and calculating the position information of the detected vehicle by adopting a triangulation distance measuring principle according to the vehicle pixel position of the detected vehicle, the size information of the detected vehicle and the course angle of the detected vehicle.
7. The method of claim 6, wherein the step of calculating the course angle of the detected vehicle according to the vehicle pixel position of the detected vehicle and the vehicle-mounted camera calibration parameters comprises:
determining pixel coordinates of a lower boundary corner point and pixel coordinates of a ground corner point based on the vehicle pixel position of the detected vehicle;
respectively mapping the pixel coordinates of the lower boundary corner point and the pixel coordinates of the ground corner point into a first coordinate and a second coordinate under an IMU coordinate system according to the vehicle-mounted camera calibration parameters;
and calculating the course angle of the detected vehicle according to the first coordinate and the second coordinate under the IMU coordinate system.
8. A vehicle tracking apparatus, characterized in that the apparatus comprises:
the detection module is used for carrying out vehicle detection on the image frames acquired in real time based on the neural network model to obtain the vehicle pixel position of the detected vehicle of each frame of image; the vehicle pixel locations include: a left boundary, a right boundary, an upper boundary, a lower boundary, and a ground line;
the matching module is used for matching the detected vehicle and the tracked vehicle according to the similarity between the detected vehicle of the current image frame and the tracked vehicle determined by the previous frame aiming at the current image frame;
the updating module is used for updating the state information of the tracked vehicle by adopting a filtering algorithm based on the state information of the tracked vehicle and the vehicle pixel position of the detected vehicle aiming at the tracked vehicle which is successfully matched with the detected vehicle, wherein the state information comprises the physical position, the speed information, the size information and the course angle of the vehicle; the nonlinear transformation function adopted by the filtering algorithm represents the nonlinear mapping relation between the state information of the vehicle and the pixel position of the vehicle;
the update module is specifically configured to:
aiming at a tracked vehicle which is successfully matched with a detected vehicle, constructing a preset number of state information sampling points based on state information of the tracked vehicle, and determining the weight of each state information sampling point;
predicting the state information of each state information sampling point based on a preset state information transfer matrix, and mapping the predicted state information to a vehicle pixel position based on the mapping relation between the state information and the vehicle pixel position;
weighting the vehicle pixel position obtained by mapping based on the weight of each state information sampling point to obtain the predicted quantity of the vehicle pixel position of the tracked vehicle;
updating the state information of the tracked vehicle based on a difference of the predicted amount of vehicle pixel positions of the tracked vehicle and the vehicle pixel positions of the detected vehicle.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
CN202110450913.3A 2021-04-26 2021-04-26 Vehicle tracking method and device, electronic equipment and storage medium Active CN112990124B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110450913.3A CN112990124B (en) 2021-04-26 2021-04-26 Vehicle tracking method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112990124A CN112990124A (en) 2021-06-18
CN112990124B true CN112990124B (en) 2021-08-06

Family

ID=76340191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110450913.3A Active CN112990124B (en) 2021-04-26 2021-04-26 Vehicle tracking method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112990124B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113593219B (en) * 2021-06-30 2023-02-28 北京百度网讯科技有限公司 Traffic flow statistical method and device, electronic equipment and storage medium
CN113792634B (en) * 2021-09-07 2022-04-15 北京易航远智科技有限公司 Target similarity score calculation method and system based on vehicle-mounted camera
CN114091521B (en) * 2021-12-09 2022-04-26 深圳佑驾创新科技有限公司 Method, device and equipment for detecting vehicle course angle and storage medium
CN116701478B (en) * 2023-08-02 2023-11-24 蘑菇车联信息科技有限公司 Course angle determining method, course angle determining device, computer equipment and storage medium
CN116863124B (en) * 2023-09-04 2023-11-21 所托(山东)大数据服务有限责任公司 Vehicle attitude determination method, controller and storage medium
CN117553695B (en) * 2024-01-11 2024-05-03 摩斯智联科技有限公司 Method and device for calculating vehicle height and computer storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10241191B2 (en) * 2014-08-25 2019-03-26 Princeton Satellite Systems, Inc. Multi-sensor target tracking using multiple hypothesis testing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2014253606A1 (en) * 2013-04-16 2015-11-05 Bae Systems Australia Limited Landing system for an aircraft
CN108734103A (en) * 2018-04-20 2018-11-02 复旦大学 The detection of moving target and tracking in satellite video
CN111047627A (en) * 2019-11-14 2020-04-21 中山大学 Smooth constraint unscented Kalman filtering method and target tracking method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Estimation Algorithms of Vehicle Longitudinal and Lateral Velocity Based on UKF Filtering; Shi Yanru; China Masters' Theses Full-text Database, Engineering Science and Technology II; 2011-09-15; C035-16 *

Also Published As

Publication number Publication date
CN112990124A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN112990124B (en) Vehicle tracking method and device, electronic equipment and storage medium
CN111127513B (en) Multi-target tracking method
CN109188438B (en) Yaw angle determination method, device, equipment and medium
Schöller et al. Targetless rotational auto-calibration of radar and camera for intelligent transportation systems
CN115063454B (en) Multi-target tracking matching method, device, terminal and storage medium
CN113447923A (en) Target detection method, device, system, electronic equipment and storage medium
CN113015924A (en) Apparatus and method for characterizing an object based on measurement samples from one or more position sensors
CN112862890B (en) Road gradient prediction method, device and storage medium
Konrad et al. Localization in digital maps for road course estimation using grid maps
CN110992424A (en) Positioning method and system based on binocular vision
JP2021128761A (en) Object tracking device of road monitoring video and method
CN115546705A (en) Target identification method, terminal device and storage medium
CN114120149A (en) Oblique photogrammetry building feature point extraction method and device, electronic equipment and medium
CN110637209A (en) Method, apparatus, and computer-readable storage medium having instructions for estimating a pose of a motor vehicle
CN112967347B (en) Pose calibration method, pose calibration device, robot and computer readable storage medium
CN110864670B (en) Method and system for acquiring position of target obstacle
CN116844124A (en) Three-dimensional object detection frame labeling method, three-dimensional object detection frame labeling device, electronic equipment and storage medium
CN114913500B (en) Pose determination method and device, computer equipment and storage medium
Song et al. 3D vehicle model-based PTZ camera auto-calibration for smart global village
CN115236643A (en) Sensor calibration method, system, device, electronic equipment and medium
CN117437770A (en) Target state estimation method, device, electronic equipment and medium
CN114415129A (en) Visual and millimeter wave radar combined calibration method and device based on polynomial model
CN117437771A (en) Target state estimation method, device, electronic equipment and medium
CN109919998B (en) Satellite attitude determination method and device and terminal equipment
CN115166701B (en) System calibration method and device for RGB-D camera and laser radar

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220316

Address after: 430051 No. b1336, chuanggu startup area, taizihu cultural Digital Creative Industry Park, No. 18, Shenlong Avenue, Wuhan Economic and Technological Development Zone, Hubei Province

Patentee after: Yikatong (Hubei) Technology Co.,Ltd.

Address before: 430056 building B (qdxx-f7b), No.7 building, qiedixiexin science and Technology Innovation Park, South taizihu innovation Valley, Wuhan Economic and Technological Development Zone, Hubei Province

Patentee before: HUBEI ECARX TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right