CN117872346A - Object tracking method, device, equipment and storage medium
- Publication number: CN117872346A (application CN202311746622.4A)
- Authority: CN (China)
- Prior art keywords: data, current, tracking point, tracking, prediction
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G01S13/66—Radar-tracking systems; Analogous systems
- G01S13/72—Radar-tracking systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar
- G01S13/931—Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
- G01S7/02—Details of systems according to group G01S13/00
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/08—Learning methods
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
Abstract
The invention discloses an object tracking method, device, equipment and storage medium. The method comprises: when an object to be tracked is present, acquiring real-time perception data of the object to be tracked at the current moment; determining, by Kalman filtering, the filtered current state data and current tracking point data at the current moment from the real-time perception data and the historical state data and historical tracking point data determined at the previous moment; determining predicted center point data at the current moment based on the current state data, the current tracking point data and the historical tracking point data; and determining current tracking point prediction data based on the predicted center point data and the real-time perception data, so as to perform object tracking based on the current tracking point prediction data, and taking the current tracking point prediction data as the current tracking point data for determining the tracking point prediction data at the next moment. This solves the problem of low object tracking accuracy and improves object tracking accuracy.
Description
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to an object tracking method, apparatus, device, and storage medium.
Background
With the rapid development of autonomous driving technology, object tracking has become one of its key technologies and is essential for safe and accurate autonomous driving. However, in related object tracking solutions, center-point-based tracking methods often ignore dynamic changes in the driving environment, such as unstable detection or occlusion of vehicles, pedestrians and obstacles, which leads to inaccurately selected tracking points and even erroneous judgments.
Disclosure of Invention
The invention provides an object tracking method, device, equipment and storage medium to improve object tracking accuracy.
According to an aspect of the present invention, there is provided an object tracking method, the method comprising:
when an object to be tracked is present, acquiring real-time perception data of the object to be tracked at the current moment;
determining, by Kalman filtering, the filtered current state data and current tracking point data at the current moment from the real-time perception data and the historical state data and historical tracking point data determined at the previous moment;
determining predicted center point data at the current moment based on the current state data, the current tracking point data and the historical tracking point data;
and determining current tracking point prediction data based on the predicted center point data and the real-time perception data, so as to track the object based on the current tracking point prediction data, and taking the current tracking point prediction data as the current tracking point data for determining the tracking point prediction data at the next moment.
According to another aspect of the present invention, there is provided an object tracking apparatus, comprising:
the perception data acquisition module, configured to acquire real-time perception data of an object to be tracked at the current moment when an object to be tracked is present;
the Kalman filtering module, configured to determine, by Kalman filtering, the filtered current state data and current tracking point data at the current moment from the real-time perception data and the historical state data and historical tracking point data determined at the previous moment;
the center point prediction module, configured to determine the predicted center point data at the current moment based on the current state data, the current tracking point data and the historical tracking point data;
the object tracking module, configured to determine the current tracking point prediction data based on the predicted center point data and the real-time perception data, to perform object tracking based on the current tracking point prediction data, and to take the current tracking point prediction data as the current tracking point data for determining the tracking point prediction data at the next moment.
According to another aspect of the present invention, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the object tracking method of any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to perform the object tracking method of any of the embodiments of the present invention.
According to this technical scheme, when an object to be tracked is present, real-time perception data of the object to be tracked at the current moment is acquired; the filtered current state data and current tracking point data at the current moment are determined by Kalman filtering from the real-time perception data and the historical state data and historical tracking point data determined at the previous moment; predicted center point data at the current moment is determined based on the current state data, the current tracking point data and the historical tracking point data; and current tracking point prediction data is determined based on the predicted center point data and the real-time perception data, so as to perform object tracking based on the current tracking point prediction data, with the current tracking point prediction data taken as the current tracking point data for determining the tracking point prediction data at the next moment. This solves the problem of low object tracking accuracy and improves object tracking accuracy.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an object tracking method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a specific tracking point to be selected according to an embodiment of the present invention;
FIG. 3 is a flow chart of another object tracking method provided according to an embodiment of the present invention;
FIG. 4 is a flow chart of a specific object tracking method according to an embodiment of the present invention;
FIG. 5 is a block diagram of an object tracking device according to an embodiment of the present invention;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It is noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims of the present invention and in the foregoing figures, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a flowchart of an object tracking method according to an embodiment of the present invention. The embodiment is applicable to scenarios in which object tracking is performed based on Kalman filtering, and the method may be executed by an object tracking apparatus, which may be implemented in hardware and/or software and configured in a processor of an electronic device.
As shown in fig. 1, the object tracking method includes the steps of:
s110, under the condition of including the object to be tracked, acquiring real-time perception data of the object to be tracked at the current moment.
The object to be tracked may be a person, an animal or another object. It will be appreciated that the object to be tracked depends on the specific application scenario; for example, in an autonomous driving scenario, the object to be tracked may be an obstacle located around the vehicle, while in a security detection scenario, it may be a person exhibiting abnormal behavior or a suspicious item.
It will be appreciated that, for intelligent or autonomous driving, the performance of the system depends largely on the perception data, which may include image or point cloud data: the environmental information around the vehicle is converted into digital signals for data analysis and processing, thereby supporting decision-making and control.
In this embodiment, acquiring real-time sensing data of an object to be tracked at a current moment includes: and acquiring real-time sensing data of the object to be tracked at the current moment based on the sensing equipment. In particular, the sensing device may comprise an image acquisition device and a radar device, for example, the image acquisition device may be a monocular camera or a binocular camera, and the corresponding sensing data comprises image data; the radar device may be at least one of a lidar, a millimeter wave radar and an ultrasonic radar, and the corresponding real-time perception data includes point cloud data.
Optionally, the perception data is preprocessed to update it. Specifically, the point cloud data is filtered based on a point cloud filtering method to remove noise and abnormal points, thereby denoising the point cloud data; the point cloud filtering method includes Gaussian filtering, median filtering, statistics-based filtering and the like. This improves the quality and accuracy of the perception data and provides a more reliable data basis for subsequent processing, for example as in the sketch below.
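As a minimal sketch of the statistics-based filtering mentioned above (the function name and the k and std_ratio parameters are illustrative assumptions, not part of the disclosure), a point can be discarded when its mean distance to its k nearest neighbours is unusually large:

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_filter(points: np.ndarray, k: int = 16,
                               std_ratio: float = 2.0) -> np.ndarray:
    """Drop points whose mean k-nearest-neighbour distance exceeds the
    global mean by more than std_ratio standard deviations."""
    tree = cKDTree(points)
    # query k+1 neighbours because each point's nearest neighbour is itself
    dists, _ = tree.query(points, k=k + 1)
    mean_dists = dists[:, 1:].mean(axis=1)
    threshold = mean_dists.mean() + std_ratio * mean_dists.std()
    return points[mean_dists <= threshold]

# usage: cloud is an (N, 3) array of lidar points
# cloud = statistical_outlier_filter(cloud)
```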
In this embodiment, in the case where no object to be tracked is yet included, the method further includes: acquiring environment perception data at the current moment; and performing target detection on the environment perception data at the current moment based on a pre-trained target detection model to obtain the current state data corresponding to each target object.
The target object is an object to be tracked, which may be a person, an animal and/or an object, and for different scenes, the target object may be an object of interest corresponding to the application scene.
The state data is determined based on perceived data of the object to be tracked, for example, the state data may include tracking point coordinates, orientation angles, and object dimensions.
It can be understood that at the initial moment, or when the object to be tracked is outside the sensing range of the sensing device, no object to be tracked from the previous moment exists at the current moment; therefore, target detection needs to be performed on the environment perception data to obtain the state data corresponding to each target object, so that these target objects can be tracked at the next moment. Further, the environment perception data is screened based on the state data corresponding to each target object to obtain the perception data corresponding to each object.
The target detection model (Object Detection Model) is used to determine state information such as the type, number and location of target objects in the environment perception data, and may be, for example, at least one of a spatial pyramid pooling network (SPP-Net), a region-based convolutional neural network (R-CNN), a Single Shot MultiBox Detector (SSD), a You Only Look Once (YOLO) series model, or an OverFeat model.
It will be appreciated that, before training the target detection model, a training sample set corresponding to the model needs to be constructed, so that the model can be trained on each training sample in the set.
In this embodiment, constructing the training sample set includes: acquiring environment perception images with a camera; for each target object in an environment perception image, determining the corresponding region image, adding a rectangular frame that encloses all pixels of the current target object, and taking the center point coordinates of the rectangular frame, the orientation angle of the object and the size of the object as the label of the current target object; the labels together with the environment perception images then form the training samples of the training sample set.
Further, training the target detection model on each training sample includes: for each training sample, inputting the environment perception image of the current training sample into the target detection model to obtain the rectangular frame center point coordinates, the object orientation angle and the object size corresponding to each object; determining a loss value against the labels of the training sample, and correcting the model parameters of the target detection model based on the loss value; training continues until the loss function of the target detection model converges. A minimal training-step sketch follows.
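A minimal sketch of such a training loop, assuming a PyTorch-style model that returns the three regressed fields as a dictionary (the field names, the data-loader format and the choice of L1 loss are illustrative assumptions):

```python
import torch
from torch import nn

def train_detector(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-4):
    """Regress box center, size and yaw against the labelled training samples."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    l1 = nn.L1Loss()
    for _ in range(epochs):
        for image, target in loader:   # target holds 'center', 'size', 'yaw' tensors
            pred = model(image)        # assumed to return the same three fields
            loss = (l1(pred["center"], target["center"])
                    + l1(pred["size"], target["size"])
                    + l1(pred["yaw"], target["yaw"]))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()           # repeat until the loss converges
```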
Further, the environment perception data is input into the pre-trained target detection model to obtain the rectangular frame center point coordinates, orientation angle and size corresponding to each target object, and this information is taken as the state data of the target object; further, the environment perception data is filtered based on the state data to obtain the real-time perception data corresponding to each target object.
Illustratively, the current state data is m(k)={m_size(k), m_cent(k), m_yaw(k)}, wherein m_size(k) is the size data of the target object at the current moment, m_cent(k) is the center point data of the target object at the current moment, and m_yaw(k) is the orientation angle data of the target object at the current moment.
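For illustration only, the observation m(k) can be carried in a small record type such as the following (the class and field names are assumptions for this sketch):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Observation:
    """One detection m(k) = {m_size(k), m_cent(k), m_yaw(k)}."""
    size: np.ndarray   # m_size(k): [length, width]
    cent: np.ndarray   # m_cent(k): center point [x, y]
    yaw: float         # m_yaw(k): orientation angle in radians
```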
S120, determining, by Kalman filtering, the filtered current state data and current tracking point data at the current moment from the real-time perception data and the historical state data and historical tracking point data determined at the previous moment.
It can be understood that the perception data corresponding to the object to be tracked covers all points of the object; because this data volume is too large, one point can be selected as the tracking point, and the state data corresponding to that point is used as the tracking point data. It should be noted that the degree of movement of the object to be tracked can be determined based on the difference between the current state data corresponding to the real-time perception data and the historical state data. Treating the object to be tracked as a whole, the current state data can be determined based on the degree of movement of the object and the historical tracking point data; further, the current tracking point data is determined based on the current state data.
The tracking point data may include state data corresponding to the tracking point. For example, the history tracking point data may be coordinates, orientation angles, and sizes of the history time tracking points, and the current state data may be coordinates, orientation angles, and sizes of the current time tracking points.
The historical state data may be the state data corresponding to the perception data at the previous moment. The state information may include at least one of the position, velocity and acceleration of the object to be tracked; the predicted state information corresponds to the perception data at the current moment, and the historical state information corresponds to the perception data at a moment before the current moment.
Among them, kalman filtering (Kalman filtering) is a method for estimating a past state, a present state, or a future state of an object to be tracked using a linear system state equation. The Kalman filtering assumes that the state data of the tracking points are random and all follow Gaussian distribution, each state data has a corresponding mean value and variance (representing uncertainty), an optimal estimation of the current tracking point corresponding to the historical tracking point data is determined based on a Kalman filtering mode, the optimal estimation of the current state data corresponding to the historical state data is determined as the current tracking point data, and the optimal estimation is corrected based on real-time perception data to obtain the current state data of the current moment after filtering processing.
In this embodiment, determining, by Kalman filtering, the filtered current state data and current tracking point data at the current moment from the real-time perception data and the historical state data and historical tracking point data determined at the previous moment includes: determining the current state data based on the preset state transition matrix of the Kalman filter, the real-time perception data and the historical state data; and determining the current tracking point data based on the current state data and the historical tracking point data.
The preset state transition matrix describes the correspondence between the historical state data and the current state data.
Optionally, the state transition matrix is obtained by pre-calibration. Specifically, calibrating the state transition matrix may include: for each item of current state data, determining the corresponding historical state data, determining the correspondence between the two, and determining the state transition matrix between the current state data and the historical state data based on that correspondence.
Further, determining the current state data based on the preset state transition matrix of the Kalman filter, the real-time perception data and the historical state data includes: determining the optimal current state data corresponding to the historical state data based on the preset state transition matrix; and correcting this optimal current state data in the ideal state based on the real-time perception data to obtain the Kalman-filtered current state data.
It can be understood that, among all points of the object to be tracked, including the tracking point and the other points, there is a correspondence between the real-time perception data of the tracking point and the perception data of the other points. Therefore, from the real-time perception data, the tracking point perception data can be determined based on the state data of the object to be tracked and this correspondence, yielding the current state data of the tracking point, i.e., the current tracking point data. Further, the current tracking point data is corrected based on the historical tracking point data to update it.
Specifically, for each object to be tracked, based on the state transition matrix in the Kalman filtering algorithm and the historical state information corresponding to the object to be tracked at the previous moment, the state data at the current moment is predicted, and the predicted state data corresponding to each tracked object is determined and used as the current state data. And determining state data corresponding to the current tracking point based on the historical tracking point data, and obtaining the current tracking point data based on the current state data and the state data corresponding to the current tracking point.
Illustratively, the current state data m(k) is denoised to obtain updated current state data; the product of the state transition matrix A and the historical state data x(k-1) is then determined based on formula (1) and taken as the predicted current state data x(k) at the current moment:
x(k)=Ax(k-1) (1);
wherein x(k-1) represents the historical state data filtered at time (k-1); x(k) includes the center point coordinates (x_cent), the size (x_size) and the orientation angle (x_yaw) of the current tracked object; the center point coordinate (x_cent) of the current tracked object is taken as the current tracking point data. A sketch of this prediction step is given below.
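A minimal sketch of the prediction step of formula (1); the covariance propagation line is the standard Kalman form, added here for completeness as an assumption rather than something spelled out in the text above:

```python
import numpy as np

def kalman_predict(x_prev: np.ndarray, P_prev: np.ndarray,
                   A: np.ndarray, Q: np.ndarray):
    """Propagate the filtered state x(k-1) to the prediction x(k) = A x(k-1)."""
    x_pred = A @ x_prev                 # formula (1)
    P_pred = A @ P_prev @ A.T + Q       # standard covariance propagation (assumed)
    return x_pred, P_pred
```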
S130, based on the current state data, the current tracking point data and the historical tracking point data, determining prediction center point data of the current moment.
The current state data further includes center point data of a history time, for example, the center point data of the history time may be center point data of an object to be tracked at a previous time.
It will be appreciated that, since no object can remain absolutely stationary, the state information of the same object to be tracked may change from the historical moment to the current moment; therefore, a degree of change of the object to be tracked can be determined based on the historical tracking point data and the current tracking point data.
Specifically, in the case where the tracking point data includes the center point data at the historical moment, the degree of change of the object to be tracked may be determined based on the historical tracking point data and the current tracking point data, so as to predict the degree of change of the center point data; the predicted center point data at the current moment is then determined based on the predicted degree of change of the center point data and the center point data at the historical moment in the current state data.
In this embodiment, determining the predicted center point data at the current moment based on the current state data, the current tracking point data and the historical tracking point data includes: determining the predicted center point data at the current moment from the current state data, the current tracking point data and the historical tracking point data based on a preset correspondence, where the preset correspondence relates the current tracking point data, the current state data and the historical tracking point data to the predicted center point data at the current moment.
It can be understood that when the time difference between the historical moment and the current moment is fixed, the degree of change between the historical tracking point data and the current tracking point data of the object to be tracked is also stable; therefore, through the degree of change, the correspondence between the current tracking point data, the current state data, the historical tracking point data and the predicted center point data at the current moment can be established.
Specifically, a first correspondence among the historical tracking point data, the current tracking point data and the degree of change, and a second correspondence among the current state data, the predicted center point data at the current moment and the degree of change are preset; further, the degree of change of the object to be tracked is determined based on the current tracking point data, the historical tracking point data and the first correspondence, and the predicted center point data is determined based on the degree of change, the current state data and the second correspondence.
Illustratively, the predicted center point data x_center(k) at the current moment is determined by formula (2), based on the current tracking point data (x_cent) at time k and the historical tracking point data (track_index) obtained at time (k-1):
x_center(k)=x_cent(k)-g(track_index(k-1),x_size(k),x_yaw(k)) (2);
the current tracking point data (x_cent) is a matrix of 2 rows and one column composed of an abscissa and an ordinate of a center coordinate of the current tracking point, g (track_index (k-1), x_size (k), x_yaw (k)) represents an intermediate process value from the current tracking point data to the predicted center point data, track_index (k-1) represents the history tracking point data which is the tracking point data at the (k-1) th moment, x_size (k) is size data in the current state data at the k-th moment, and x_yaw (k) th moment is an orientation angle in the current state data.
wherein g(track_index(k-1), x_size(k), x_yaw(k)) is determined by formula (3), whose parameters are as follows: x_size_l and x_size_w denote the length data and width data in the size data, respectively, and track_index_1 and track_index_2 are the abscissa and ordinate of the tracking point index track_index, respectively.
And S140, determining current tracking point prediction data based on the prediction center point data and the real-time perception data, so as to track the object based on the current tracking point prediction data, and taking the current tracking point prediction data as the current tracking point data for determining the tracking point prediction data at the next moment.
The current tracking point prediction data is obtained by correcting prediction center point data based on real-time perception data, and corresponds to the prediction center point data.
Specifically, considering that the point of the object to be tracked at the current moment located at the same position as the historical tracking point may be occluded, another point of the object to be tracked is determined as the current tracking point based on the predicted center point data and the real-time perception data, and the current tracking point prediction data is obtained from the relative positional relationship between the center point and that point; the current tracking point prediction data is then taken as the current tracking point data, which in turn serves as the historical tracking point data at the next moment so that tracking prediction can continue.
In this embodiment, determining the current tracking point prediction data based on the prediction center point data and the real-time perceptual data includes: for each tracking point to be selected corresponding to the object to be tracked, determining a moving distance corresponding to the current tracking point to be selected based on the predicted central point data and the real-time perception data; and determining the current tracking point prediction data based on each moving distance and real-time perception data.
The tracking point to be selected may be any point corresponding to the object to be tracked; for example, it may be a boundary point of the object, or one of the four corner points or the center point of the rectangular frame corresponding to the object.
The moving distance is defined, for each tracking point to be selected, relative to the point of the object to be tracked at the same position at the previous moment; for example, it may be the distance between the center point of the rectangular frame of the object at the current moment and the center point of the rectangular frame of the object at the previous moment.
For each tracking point to be selected corresponding to the object to be tracked, determining the moving distance of the current tracking point to be selected based on the predicted center point data and the real-time perception data includes: for each tracking point to be selected, determining its prediction data based on the predicted center point data and the relative positional relationship between the center point and the tracking point to be selected; and determining the current tracking point data corresponding to the real-time perception data, then determining the distance between the current tracking point data and the prediction data of the tracking point to be selected, taking this distance as the moving distance of that tracking point to be selected.
For example, referring to fig. 2, the solid-line box 201 represents the four corner points and the center point of the rectangular box corresponding to the real-time perception data at the current moment, each taken as a tracking point to be selected; the dashed box 202 indicates the tracking points corresponding to the historical moment, with the historical tracking point coordinates set to (0, 0) for the center point, (-1, 1) for the upper-left corner, (1, 1) for the upper-right corner, (-1, -1) for the lower-left corner, and (1, -1) for the lower-right corner. The moving distances of the center point and the four corner points are determined based on the predicted center point data x_center at the current moment and the current state data m corresponding to the real-time perception data.
The corresponding calculation, formula (4), takes for each tracking point to be selected the distance between its predicted position and its measured position:
dis_tmp=||x_tmp−m_tmp||, dis_tmp_tr=||x_tmp_tr−m_tmp_tr||, dis_tmp_tl=||x_tmp_tl−m_tmp_tl||, dis_tmp_bl=||x_tmp_bl−m_tmp_bl||, dis_tmp_br=||x_tmp_br−m_tmp_br|| (4);
The naming rule of the parameters is: tr denotes a parameter of the upper-right corner point, tl of the upper-left corner point, bl of the lower-left corner point, and br of the lower-right corner point; parameters without these suffixes refer to the center point. Specifically, dis_tmp is the moving distance of the center point, dis_tmp_br of the lower-right corner, dis_tmp_tr of the upper-right corner, dis_tmp_bl of the lower-left corner, and dis_tmp_tl of the upper-left corner; x_tmp is the predicted center point data, x_tmp_tr the predicted upper-right corner data, and so on for the other corners; m_tmp is the measured center point data corresponding to the real-time perception data at time k.
Illustratively, with track_index_tr set to the historical coordinates (1, 1) of the upper-right corner, m_cent(k) represents the center point coordinates, m_size the size data, and m_yaw the orientation angle in the state data corresponding to the real-time perception data at time k; dis_tmp_tr then represents the moving distance between the prediction information x_tmp_tr and the measurement information m_tmp_tr. The moving distances of the other corner points are determined in the same way.
Optionally, determining the current tracking point prediction data based on each moving distance and the real-time perception data comprises: obtaining the tracking point to be selected corresponding to the target moving distance based on a statistical index over the moving distances; determining the center point data corresponding to the real-time perception data based on the real-time perception data and that tracking point to be selected; and taking this center point data as the current tracking point prediction data.
Illustratively, the average of all moving distances is determined, and the difference of each moving distance relative to the average is computed to obtain the minimum difference; the center point data corresponding to the real-time perception data is determined based on the real-time perception data and the tracking point to be selected corresponding to the minimum difference, and this center point data is taken as the current tracking point prediction data. A sketch of this selection follows.
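A minimal sketch of this candidate selection, reusing the assumed g() from above; the mean-difference statistical index follows the illustrative rule just described:

```python
import numpy as np

# candidate tracking points in the box's index coordinates (cf. Fig. 2)
CANDIDATES = {
    "center": np.array([0, 0]),
    "tl": np.array([-1, 1]), "tr": np.array([1, 1]),
    "bl": np.array([-1, -1]), "br": np.array([1, -1]),
}

def select_tracking_point(x_center, x_size, x_yaw, m_cent, m_size, m_yaw):
    """Return the candidate whose predicted-vs-measured moving distance is
    closest to the mean moving distance over all candidates."""
    dists = {}
    for name, idx in CANDIDATES.items():
        x_tmp = x_center + g(idx, x_size, x_yaw)  # predicted point position
        m_tmp = m_cent + g(idx, m_size, m_yaw)    # measured point position
        dists[name] = np.linalg.norm(x_tmp - m_tmp)
    mean = np.mean(list(dists.values()))
    return min(dists, key=lambda name: abs(dists[name] - mean))
```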
According to the above technical scheme, the perception data is denoised, the tracking point prediction data is determined based on the moving distance of each tracking point to be selected, and the state data is then updated from the tracking point prediction data and the real-time perception data to realize target tracking; the more accurate the state data of the object to be tracked, the more accurate the object tracking.
Fig. 3 is a flowchart of another object tracking method according to an embodiment of the present invention; this embodiment is applicable to scenarios in which object tracking is performed based on Kalman filtering. This embodiment and the object tracking method in the foregoing embodiment belong to the same inventive concept; on the basis of the foregoing embodiment, the process of determining the current tracking point prediction data based on the predicted center point data and the real-time perception data is further described.
As shown in fig. 3, the object tracking method includes:
s210, under the condition of including the object to be tracked, acquiring real-time perception data of the object to be tracked at the current moment.
S220, determining, by Kalman filtering, the filtered current state data and current tracking point data at the current moment from the real-time perception data and the historical state data and historical tracking point data determined at the previous moment.
S230, for each tracking point to be selected corresponding to the object to be tracked, determining a moving distance corresponding to the current tracking point to be selected based on the predicted central point data and the real-time perception data, so as to obtain a target tracking point corresponding to the minimum moving distance.
The target tracking point is the tracking point to be selected corresponding to the minimum moving distance.
Specifically, for each tracking point to be selected corresponding to the object to be tracked, based on the predicted central point data and the real-time perception data, determining the moving distance corresponding to the current tracking point to be selected, obtaining the minimum value of all the moving distances, taking the minimum value as the minimum moving distance, obtaining the tracking point to be selected corresponding to the minimum moving distance, and taking the tracking point to be selected as the target tracking point.
S240, determining current tracking point prediction data based on the sensing data of the target tracking point corresponding to the minimum moving distance in the real-time sensing data.
Specifically, based on the relative position relationship between the target tracking point and the center point and the historical tracking point data corresponding to the target tracking point, the center point data corresponding to the real-time perception data is determined, so as to obtain the current tracking point prediction data.
Illustratively, based on equation (5), the current tracking point prediction data x_update (k) is updated according to the history tracking point data (track_index), the prediction center point data (x_center), and the current state data x (k) corresponding to the target tracking point:
x_update(k)=x_center(k)+g(track_index(k),x_size(k),x_yaw(k)) (5)。
S250, determining current state prediction data at the current moment based on the current tracking point prediction data and the real-time perception data.
Specifically, based on a preset corresponding relation, the current state of the current moment is predicted according to the current tracking point prediction data and the real-time perception data after Kalman filtering, and the current state prediction data of the current moment is obtained.
Illustratively, based on equation (6), the current state prediction data m_update (k) is determined from the current measurement tracking point data:
m_update(k)=m_cent(k)+g(track_index(k),m_size(k),m_yaw(k)) (6);
where g(track_index(k), m_size(k), m_yaw(k)) represents the intermediate value mapping the current measured tracking point data to the predicted measurement data.
S260, updating the current tracking point prediction data based on the current state prediction data and the current tracking point prediction data at the current moment to perform object tracking based on the current tracking point prediction data, and taking the current tracking point prediction data as the current tracking point data to be used for determining the tracking point prediction data at the next moment.
Specifically, based on the current state prediction data and the current tracking point prediction data at the current time and the kalman gain, an optimal tracking point at the current time is determined to obtain the current tracking point prediction data. Further, object tracking is performed based on the current tracking point prediction data, and the current tracking point prediction data is used as the current tracking point data for determining the tracking point prediction data at the next time.
Illustratively, based on formula (7), the current tracking point prediction data is updated from the current state prediction data and the current tracking point prediction data to obtain the filtered tracking point data x_output(k) at the current moment:
x_output(k)=x_update(k)+K_k(m_update(k)−H·x_update(k)) (7);
wherein K_k is the Kalman gain and H is the observation matrix.
Further, based on formula (8), the center point data x_final(k) corresponding to the filtered tracking point data x_output(k) at the current moment is determined, wherein x_center_output(k) is the center coordinate data of the tracking point corresponding to x_output(k); by analogy with formula (2), this amounts to subtracting g(track_index(k), x_size(k), x_yaw(k)) from x_output(k). Further, the tracking point prediction data at the next moment is updated based on x_output and track_index at the current moment. A sketch of the correction step follows.
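A minimal sketch of the correction step of formula (7), under the usual Kalman assumptions; the covariance bookkeeping (P, R, S) is standard filter machinery assumed here rather than spelled out in the text:

```python
import numpy as np

def kalman_correct(x_update: np.ndarray, P_pred: np.ndarray,
                   m_update: np.ndarray, H: np.ndarray, R: np.ndarray):
    """Fuse the predicted tracking point x_update(k) with the measured
    tracking point m_update(k): x_output = x_update + K_k (m_update - H x_update)."""
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain K_k
    x_output = x_update + K @ (m_update - H @ x_update)
    P_new = (np.eye(len(x_update)) - K @ H) @ P_pred
    return x_output, P_new
```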
According to this technical scheme, the tracking point to be selected corresponding to the minimum moving distance is selected as the target tracking point by means of Kalman filtering; the current tracking point prediction data and the current state data are then filter-updated based on the target tracking point, which improves their accuracy and thus the accuracy of object tracking.
Fig. 4 is a flowchart of a specific object tracking method according to an embodiment of the present invention, as shown in fig. 4, the object tracking method includes:
S310, when objects to be tracked are present, for each object to be tracked, determining, based on a pre-trained target detection model, the observation data corresponding to the real-time perception data at the current moment.
The object to be tracked is a driving obstacle of the target vehicle, and the observation data m comprises obstacle size data, orientation angle data and obstacle center point coordinate data.
In the running process of the vehicle, sensing the obstacle based on sensing equipment to obtain real-time sensing data corresponding to the current obstacle; and inputting the perception data into a target detection model to obtain observation data corresponding to the current obstacle.
S320, tracking point data at the current moment is determined based on the historical tracking point data.
Specifically, the history tracking point data is the central point coordinate data of the object to be tracked at the previous moment, namely the tracking index track_index. If the current time is the first time, track_index is (0, 0).
S330, determining the optimal estimate for the current moment by means of Kalman filtering, and determining the predicted tracking point data at the current moment.
By means of Kalman filtering, the optimal estimate x(k) corresponding to the final state data x(k-1) of the previous moment is determined. x(k) includes the predicted tracking point center coordinate x_cent, the size x_size and the orientation angle x_yaw; the predicted tracking point center coordinate x_cent is taken as the predicted tracking point data at the current moment.
S340, determining the prediction center point data of the current moment based on the preset corresponding relation, the prediction tracking point data of the current moment and the tracking point data of the historical moment.
Based on the correspondence relation shown in the formula (2), the predicted center point data x_center (k) at the present time corresponding to the predicted tracking point data at the present time and the tracking point data at the history time is determined.
S350, calculating the historical tracking point data based on the predicted center point data at the current moment and the observation data at the current moment.
The moving distance of each tracking point to be selected is determined based on formula (4) to obtain the minimum moving distance; the tracking point to be selected corresponding to the minimum moving distance is taken as the target tracking point, and the center point coordinates determined from the target tracking point are taken as the historical tracking point data.
S360, determining predicted tracking point data of the current moment based on the predicted central point data and the historical tracking point data of the current moment, and determining observed tracking point data of the current moment based on the observed data of the current moment and the historical tracking point data.
And S370, updating the current tracking point data based on the Kalman gain, the predicted tracking point data at the current moment and the observed tracking point data at the current moment.
And S380, carrying out object tracking based on the current tracking point prediction data, and taking the current tracking point prediction data as the current tracking point data to be used for determining the tracking point prediction data at the next moment.
According to this technical scheme, the obstacle is perceived by the perception device while the vehicle is running, and object tracking is then performed on the obstacle by means of Kalman filtering; this improves object tracking accuracy and provides an information basis for the automatic obstacle avoidance and path planning of intelligent vehicles.
Fig. 5 is a block diagram of an object tracking device according to an embodiment of the present invention, where the embodiment is applicable to a scenario of object tracking based on kalman filtering, and the device may be implemented in hardware and/or software, and integrated into a processor of an electronic device with an application development function.
As shown in fig. 5, the object tracking device includes: the perception data acquisition module 501, configured to acquire real-time perception data of an object to be tracked at the current moment when an object to be tracked is present; the Kalman filtering module 502, configured to determine, by Kalman filtering, the filtered current state data and current tracking point data at the current moment from the real-time perception data and the historical state data and historical tracking point data determined at the previous moment; the center point prediction module 503, configured to determine the predicted center point data at the current moment based on the current state data, the current tracking point data and the historical tracking point data; and the object tracking module 504, configured to determine the current tracking point prediction data based on the predicted center point data and the real-time perception data, to perform object tracking based on the current tracking point prediction data, and to take the current tracking point prediction data as the current tracking point data for determining the tracking point prediction data at the next moment. This solves the problem of low object tracking accuracy and improves object tracking accuracy.
Optionally, the perception data acquisition module 501 includes a target detection unit, which is specifically configured for:
acquiring environment perception data at the current moment when no object to be tracked is yet included;
and performing target detection on the environment perception data at the current moment based on a pre-trained target detection model to obtain the current state data corresponding to each target object.
Optionally, the Kalman filtering module 502 is specifically configured for:
determining the current state data based on the preset state transition matrix of the Kalman filter, the real-time perception data and the historical state data;
the current tracking point data is determined based on the current state data and the historical tracking point data.
Optionally, the central point prediction module 503 is specifically configured to:
and determining prediction center point data at the current moment according to the current state data, the current tracking point data and the historical tracking point data based on a preset corresponding relation, wherein the preset corresponding relation comprises the corresponding relation between the current tracking point data, the current state data and the historical tracking point data and the prediction center point data at the current moment.
Optionally, the object tracking module 504 is specifically configured to:
For each tracking point to be selected corresponding to the object to be tracked, determining a moving distance corresponding to the current tracking point to be selected based on the predicted central point data and the real-time perception data;
and determining the current tracking point prediction data based on each moving distance and real-time perception data.
Optionally, the object tracking module 504 is specifically configured to:
for each tracking point to be selected corresponding to the object to be tracked, determining the moving distance of the current tracking point to be selected based on the predicted center point data and the real-time perception data, so as to obtain the target tracking point corresponding to the minimum moving distance;
and determining the current tracking point prediction data based on the perception data, within the real-time perception data, of the target tracking point corresponding to the minimum moving distance.
Optionally, the object tracking module 504 further includes a data updating unit, where the data updating unit is specifically configured to:
determine current state prediction data at the current moment based on the current tracking point prediction data and the real-time perception data;
and update the current tracking point prediction data based on the current state prediction data at the current moment and the current tracking point prediction data.
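A minimal sketch of such a data updating unit, under the assumptions that the state prediction carries a position in its first two components and that the update is a weighted pull toward that position (the weight beta is an assumption):

```python
import numpy as np

def refine_tracking_point(track_pred, state_pred, beta=0.8):
    """Update the tracking point prediction using the current state
    prediction: pull it toward the position implied by the state."""
    pos = np.asarray(state_pred)[:2]   # assumed position part of the state
    return beta * np.asarray(track_pred) + (1 - beta) * pos
```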
The object tracking device provided by the embodiments of the invention can execute the object tracking method provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to the executed method.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12 and a Random Access Memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 can perform various appropriate actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the object tracking method.
In some embodiments, the object tracking method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the object tracking method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the object tracking method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that can be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable object tracking device, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the defects of difficult management and weak service scalability in traditional physical hosts and VPS services.
It should be appreciated that the various forms of flow shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved; this is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.
Claims (10)
1. An object tracking method, comprising:
acquiring, when an object to be tracked is included, real-time perception data of the object to be tracked at the current moment;
determining, by Kalman filtering, current state data and current tracking point data of the current moment after filtering the real-time perception data together with the historical state data and historical tracking point data determined at the previous moment;
determining prediction center point data at the current moment based on the current state data, the current tracking point data and the historical tracking point data;
and determining current tracking point prediction data based on the prediction center point data and the real-time perception data, so as to perform object tracking based on the current tracking point prediction data, and taking the current tracking point prediction data as the current tracking point data for determining tracking point prediction data at the next moment.
2. The method according to claim 1, further comprising, after the acquiring of real-time perception data of the object to be tracked at the current moment when the object to be tracked is included:
acquiring environmental perception data at the current moment when no object to be tracked is included;
and performing target detection on the environmental perception data at the current moment based on a pre-trained target detection model to obtain current state data corresponding to each target object.
3. The method according to claim 1, wherein the determining, by Kalman filtering, of the current state data and the current tracking point data of the current moment after filtering the real-time perception data and the historical state data and historical tracking point data determined at the previous moment comprises:
determining the current state data based on a preset state transition matrix of the Kalman filter, the real-time perception data and the historical state data;
and determining the current tracking point data based on the current state data and the historical tracking point data.
4. The method of claim 1, wherein the determining of prediction center point data at the current moment based on the current state data, the current tracking point data and the historical tracking point data comprises:
determining the prediction center point data at the current moment from the current state data, the current tracking point data and the historical tracking point data according to a preset correspondence, wherein the preset correspondence maps the current tracking point data, the current state data and the historical tracking point data to the prediction center point data at the current moment.
5. The method of claim 1, wherein the determining of current tracking point prediction data based on the prediction center point data and the real-time perception data comprises:
for each candidate tracking point corresponding to the object to be tracked, determining the moving distance of that candidate tracking point based on the prediction center point data and the real-time perception data;
and determining the current tracking point prediction data based on each moving distance and the real-time perception data.
6. The method of claim 1 or 5, wherein the determining of current tracking point prediction data based on the prediction center point data and the real-time perception data comprises:
for each candidate tracking point corresponding to the object to be tracked, determining the moving distance of that candidate tracking point based on the prediction center point data and the real-time perception data, so as to obtain the target tracking point corresponding to the minimum moving distance;
and determining the current tracking point prediction data based on the perception data of the target tracking point in the real-time perception data.
7. The method of claim 1, 5 or 6, wherein the determining of current tracking point prediction data based on the prediction center point data and the real-time perception data further comprises:
determining current state prediction data at the current moment based on the current tracking point prediction data and the real-time perception data;
and updating the current tracking point prediction data based on the current state prediction data at the current moment and the current tracking point prediction data.
8. An object tracking device, comprising:
a sensing data acquisition module, configured to acquire real-time perception data of an object to be tracked at the current moment when an object to be tracked is included;
a Kalman filtering module, configured to determine, by Kalman filtering, current state data and current tracking point data of the current moment after filtering the real-time perception data together with the historical state data and historical tracking point data determined at the previous moment;
a center point prediction module, configured to determine prediction center point data at the current moment based on the current state data, the current tracking point data and the historical tracking point data;
and an object tracking module, configured to determine current tracking point prediction data based on the prediction center point data and the real-time perception data, to perform object tracking based on the current tracking point prediction data, and to take the current tracking point prediction data as the current tracking point data for determining tracking point prediction data at the next moment.
9. An electronic device, comprising:
At least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the object tracking method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to perform the object tracking method of any one of claims 1-7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202311746622.4A | 2023-12-18 | 2023-12-18 | Object tracking method, device, equipment and storage medium
Publications (1)

Publication Number | Publication Date
---|---
CN117872346A | 2024-04-12
Family
ID=90596055
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination