CN115908506A - Multi-target tracking method based on Kalman prediction - Google Patents

Multi-target tracking method based on Kalman prediction

Info

Publication number: CN115908506A (granted as CN115908506B)
Application number: CN202211102306.9A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: target, moment, Kalman prediction, time, prediction model
Inventors: 满庆奎, 胡畅, 刘静, 李冠华
Current and original assignee: Hangzhou Yunqi Smart Vision Technology Co ltd
Application filed by Hangzhou Yunqi Smart Vision Technology Co ltd; priority to CN202211102306.9A
Legal status: Granted; Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a multi-target tracking method based on Kalman prediction. Each position real value in a dynamically updated target position real value set TrackerSet_real is matched with the position detection value corresponding to each target in the updated detection target set DetectSet to adjust the observation variable of the Kalman prediction model, and the model parameters of the Kalman prediction model are then updated according to the adjusted observation variable, which reduces the influence on the prediction performance of the Kalman prediction model of the noise data added when a target is lost for multiple consecutive frames. By finding the association between the ratio of the time intervals among the three frames at times t, t+N and t+N+M and the real position value of the target, the real position value of the target at time t+N+M is calculated and used as the observation variable of the Kalman prediction model; after the model parameters are updated and adjusted with this observation variable, the target position at the moment following t+N+M is tracked and predicted, which solves the problem of poor model prediction accuracy caused by the nonlinearity of the inter-frame interval time.

Description

Multi-target tracking method based on Kalman prediction
Technical Field
The invention relates to the technical field of target tracking detection, in particular to a multi-target tracking method based on Kalman prediction.
Background
In the technical field of video analysis, multi-target tracking refers to the continuous tracking and detection of the positions of multiple targets, such as human bodies and vehicles, appearing in video frame images. In the prior art, a Kalman prediction model is usually adopted to statistically analyze the linear inter-frame variation of each target and predict the position information of the target in the next frame (the position information includes, for example, the length, width, center point coordinates, motion direction and motion speed of the rectangular box used to frame the target). Compared with other existing target tracking and prediction techniques, the Kalman prediction model has the following advantage:
the prediction effect is good when the trajectory data of the target contains no noise points, and in particular the prediction performance over a short horizon (one or two steps ahead) is stable. Its disadvantages are:
(1) if a target is lost for multiple consecutive frames, the data noise increases and the prediction error is amplified as the noise grows;
(2) in some multi-target tracking and detection scenes the time interval between frames is not fixed, i.e. the inter-frame time interval is nonlinear; the Kalman prediction model tracks and detects moving targets poorly on video frame data with nonlinear time intervals, and the prediction error is large.
Disclosure of Invention
The invention provides a multi-target tracking method based on Kalman prediction, aiming at improving the multi-target tracking accuracy of a Kalman prediction model in scenes with heavy noise data and/or nonlinear inter-frame interval times.
In order to achieve the purpose, the invention adopts the following technical scheme:
the multi-target tracking method based on Kalman prediction is provided, and comprises the following steps:
S1, acquiring at least two data frames with different timestamps;
S2, judging whether the number of acquired data frames is equal to 2;
if so, updating the model parameters of the Kalman prediction model used for tracking the corresponding target by using a first strategy;
if not, updating the model parameters of the Kalman prediction model used for tracking the corresponding target by using a second strategy;
S3, predicting whether the target appears in the next frame by using the Kalman prediction model after the parameters are updated.
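As a rough illustration of how steps S1 to S3 fit together, the following Python sketch dispatches between the two strategies according to the number of frames acquired so far; the function name and return values are illustrative assumptions, not part of the claims.

    def choose_strategy(num_frames):
        # S2: with exactly two data frames the first strategy applies,
        # with more than two frames the second strategy applies
        if num_frames < 2:
            raise ValueError("S1 requires at least two data frames with different timestamps")
        return "first" if num_frames == 2 else "second"

    print(choose_strategy(2))  # frames at t and t+N        -> "first"
    print(choose_strategy(3))  # frames at t, t+N and t+N+M -> "second"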
Preferably, when the number of data frames acquired in step S1 is equal to 2, the first strategy for updating the model parameters of the Kalman prediction model used for tracking the corresponding target comprises the steps of:
A1, detecting the targets in the data frame at time t and adding them to the detection target set DetectSet; at time t the data content of DetectSet is expressed as:
DetectSet = {D_1, D_2, ..., D_i, ..., D_n}
where D_i and n respectively denote the i-th target detected in the data frame at time t and the total number of targets;
A2, creating corresponding track information for each target in DetectSet, adding it to the track container, and setting the number of track points in each piece of track information to "1"; at time t the data content of the track container is expressed as:
TrackSet = {Track_1, Track_2, ..., Track_i, ..., Track_n}
where Track_i denotes the track information created for the i-th target D_i, i = 1, 2, ..., n;
A3, taking the track information in the track container as the basis for assigning the model parameters of the corresponding Kalman prediction models, and giving each Kalman prediction model its initial parameter values;
A4, detecting the targets in the data frame at time t+N to update the detection target set DetectSet; at time t+N the updated DetectSet is expressed as:
DetectSet = {D'_1, D'_2, ..., D'_j, ..., D'_m}
where D'_j and m respectively denote the j-th target in the updated DetectSet and the total number of targets;
and for a target detected at time t+N that was also detected at time t, adding its position detection value as a new track point to the corresponding track information created for it at time t;
and for a target detected at time t but not detected at time t+N, using the Kalman prediction model dedicated to that target, given its initial model parameter values in step A3, to predict its position at time t+N, and adding the predicted position as the position real value of the target at time t+N to the position real value set TrackerSet_real;
A5, matching each position real value in TrackerSet_real with the position detection value corresponding to each target in the DetectSet updated in step A4;
if the matching succeeds, adding the position detection value corresponding to the matched target as a new track point to the corresponding track information created for it at time t, accumulating the number of track points in that track information by "1", and using the position detection value corresponding to the matched target as the observation variable of its dedicated Kalman prediction model to update the model parameters of that model;
if the matching fails, using the position detection value, in the data frame at time t, of the target described by that track information as the observation variable of its dedicated Kalman prediction model to update its model parameters, and setting the number of track points in the track information of that target to 0.
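A minimal sketch of the bookkeeping used by the first strategy is shown below in Python; the class and variable names (Detection, TrackInfo, detect_set, track_container, tracker_set_real) are illustrative stand-ins for the detection target set DetectSet, the track container and the position real value set TrackerSet_real, not names defined by the patent.

    from dataclasses import dataclass, field

    @dataclass
    class Detection:
        # one position detection value: top-left vertex, size and timestamp of the box
        x: float
        y: float
        w: float
        h: float
        timestamp: float

    @dataclass
    class TrackInfo:
        # track information created for a target at time t (steps A2, A4, A5)
        points: list = field(default_factory=list)
        num_points: int = 0

        def add_point(self, detection: Detection) -> None:
            # add a newly detected position as a track point and accumulate the counter by 1
            self.points.append(detection)
            self.num_points += 1

    detect_set = {}        # DetectSet: target id -> latest Detection
    track_container = {}   # track container: target id -> TrackInfo
    tracker_set_real = {}  # TrackerSet_real: target id -> position real value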
Preferably, when the number of data frames acquired in step S1 is greater than 2, the second strategy for updating the model parameters of each Kalman prediction model used for tracking the corresponding target further comprises, on the basis of the first strategy, the steps of:
A6, detecting the targets in the data frame at time t+N+M to update the detection target set DetectSet; at time t+N+M the updated DetectSet is expressed as:
DetectSet = {D''_1, D''_2, ..., D''_k, ..., D''_q}
where D''_k and q respectively denote the k-th target detected in the data frame at time t+N+M and the total number of targets;
and for the same target in DetectSet whose position has been detected at time t+N+M and at time t+N, and/or at time t+N+M and at time t, and/or at time t+N+M, at time t+N and at time t, adding the position detection value detected at time t+N+M as a new track point to the corresponding track information created for the target at time t, and accumulating the number of track points in that track information by "1";
A7, judging whether the number of track points of each piece of track information is greater than or equal to 2;
if yes, going to step A8;
if not, using the Kalman prediction model dedicated to the target corresponding to the track information, with the model parameter values updated in step A5, to predict the position of the corresponding target at time t+N+M, and adding the predicted position as the position real value of the target at time t+N+M to the position real value set TrackerSet_real;
A8, calculating the position real value of each target at time t+N+M according to the track information recorded for the same target at times t and t+N;
A9, adding each position real value calculated in step A8 to the position real value set TrackerSet_real, and then returning to step A5.
Preferably, in step A8, calculating the position real value of each target at time t+N+M comprises the steps of:
A81, calculating the historical frame time interval Δt1 of the track information corresponding to the target by the following formula (1):
Δt1 = ts_(t+N) - ts_t   (1)
in formula (1), ts_t and ts_(t+N) respectively denote the timestamp recorded when the track information was created at time t and the timestamp recorded when the track point was added to that track information at time t+N;
A82, calculating the time interval Δt2 between the current time t+N+M and time t+N by the following formula (2):
Δt2 = ts_(t+N+M) - ts_(t+N)   (2)
in formula (2), ts_(t+N+M) denotes the timestamp of detecting the target in the data frame at time t+N+M;
A83, calculating the ratio of Δt2 to Δt1, recorded as λ;
A84, calculating the center point coordinates (cx_det, cy_det) of the rectangular box used to frame the target detected at time t+N, and the center point coordinates (cx_pre, cy_pre) of the rectangular box predicted at time t+N by the Kalman prediction model used to track and detect the target;
A85, calculating the real position center coordinates (cx_real, cy_real) of the target at time t+N+M according to (cx_det, cy_det) and (cx_pre, cy_pre);
A86, calculating the position real value of the target according to (cx_real, cy_real); the data content of the position real value is expressed as:
{x_real, y_real, w_real, h_real}
where x_real and y_real respectively denote the horizontal-axis coordinate and the vertical-axis coordinate of the upper-left vertex of the rectangular box framing the real position of the target in the data frame at time t+N+M, and w_real and h_real denote the width and height of that rectangular box.
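Steps A81 to A83 amount to two timestamp differences and their ratio; the following Python sketch shows the computation with hypothetical millisecond timestamps (the argument names are assumptions).

    def interval_ratio(ts_t, ts_t_n, ts_t_n_m):
        # formula (1): historical frame time interval between times t and t+N
        dt1 = ts_t_n - ts_t
        # formula (2): interval between the current time t+N+M and time t+N
        dt2 = ts_t_n_m - ts_t_n
        # step A83: ratio of the two intervals
        return dt2 / dt1

    print(interval_ratio(1000.0, 1040.0, 1100.0))  # -> 1.5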
Preferably, in step A84, cx_det and cy_det are calculated by the following formulas (3) and (4), respectively:
cx_det = x_det + w_det / 2   (3)
cy_det = y_det + h_det / 2   (4)
in formulas (3) and (4), x_det, y_det, w_det and h_det respectively denote the horizontal-axis coordinate and the vertical-axis coordinate of the upper-left vertex of the rectangular box used to frame the target as detected at time t+N, and the width and height of that rectangular box.
Preferably, in step A84, cx_pre and cy_pre are calculated by the following formulas (5) and (6), respectively:
cx_pre = x_pre + w_pre / 2   (5)
cy_pre = y_pre + h_pre / 2   (6)
in formulas (5) and (6), x_pre, y_pre, w_pre and h_pre respectively denote the horizontal-axis coordinate and the vertical-axis coordinate of the upper-left vertex of the rectangular box of the target predicted by the Kalman prediction model at time t+N, and the width and height of that rectangular box.
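Formulas (3) to (6) both reduce to converting a box given by its top-left vertex and size into its center point, as in the short Python sketch below (a reconstruction from the variables named above, not a verbatim copy of the published formulas).

    def box_center(x, y, w, h):
        # center point of a rectangular box given its top-left vertex and its width/height
        return x + w / 2.0, y + h / 2.0

    cx_det, cy_det = box_center(100.0, 80.0, 40.0, 60.0)  # box detected at time t+N
    cx_pre, cy_pre = box_center(102.0, 78.0, 40.0, 60.0)  # box predicted by the Kalman model at t+N
    print((cx_det, cy_det), (cx_pre, cy_pre))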
Preferably, in step A85, cx_real and cy_real are calculated from the center point coordinates obtained in step A84 by formulas (7) and (8), respectively.
Preferably, in step A86, x_real, y_real, w_real and h_real in the position real value are calculated according to the real position center coordinates obtained in step A85 by formulas (9) to (12), respectively.
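The exact forms of formulas (7) to (12) are not reproduced here; the Python sketch below shows only one plausible reading of steps A85 and A86, in which the detected-versus-predicted center offset at time t+N is scaled by the interval ratio and the detected box size is reused. It is an assumption made for illustration, not the patented formulas.

    def extrapolate_box(cx_det, cy_det, cx_pre, cy_pre, ratio, w, h):
        # stand-ins for formulas (7) and (8): real position center at time t+N+M (assumed form)
        cx_real = cx_det + ratio * (cx_det - cx_pre)
        cy_real = cy_det + ratio * (cy_det - cy_pre)
        # stand-ins for formulas (9) to (12): top-left vertex and size of the real position box (assumed form)
        return {"x": cx_real - w / 2.0, "y": cy_real - h / 2.0, "w": w, "h": h}

    print(extrapolate_box(120.0, 110.0, 118.0, 112.0, 1.5, 40.0, 60.0))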
as a matter of preference,
Figure 584255DEST_PATH_IMAGE001
the th created moment>
Figure 258950DEST_PATH_IMAGE007
Each of said targets->
Figure 589525DEST_PATH_IMAGE005
Corresponding track information
Figure 691473DEST_PATH_IMAGE011
The included information content is expressed as follows:
Figure 854470DEST_PATH_IMAGE072
wherein,
Figure 67277DEST_PATH_IMAGE073
respectively represent frames for framing the data>
Figure 133584DEST_PATH_IMAGE002
Is selected based on the target of (1)>
Figure 655701DEST_PATH_IMAGE005
Has an upper left vertex at->
Figure 791148DEST_PATH_IMAGE074
A horizontal axis coordinate and a vertical axis coordinate under the axis coordinate system;
Figure 558378DEST_PATH_IMAGE075
respectively indicate that the target is framed and selected>
Figure 728459DEST_PATH_IMAGE005
The width and height of the rectangular frame of (a);
Figure 155898DEST_PATH_IMAGE076
indicating the formation of the track information->
Figure 529373DEST_PATH_IMAGE011
The time stamp of (c).
The invention has the following beneficial effects:
1. Aiming at the problem that the increase of noise data caused by a target being lost for multiple consecutive frames affects the accuracy with which the Kalman prediction model tracks and detects the target, the invention matches each position real value in the dynamically updated target position real value set TrackerSet_real with the position detection value corresponding to each target in the updated detection target set DetectSet to adjust the observation variable of the Kalman prediction model, and then updates the model parameters of the Kalman prediction model according to the adjusted observation variable, thereby reducing the influence of the added noise data on the prediction performance of the Kalman prediction model.
2. Aiming at the problem that the nonlinearity of the inter-frame interval time affects the prediction accuracy of the Kalman prediction model, the method finds the association between the ratio of the time intervals among the three frames at times t, t+N and t+N+M and the real position value of the target (the position of the real prediction box), calculates the real position value of the target at time t+N+M, takes this real position value as the observation variable of the Kalman prediction model, updates and adjusts the model parameters on the basis of this observation variable, and then tracks and predicts the target position at the moment following t+N+M, which solves the problem of poor prediction accuracy of the Kalman prediction model caused by the nonlinearity of the inter-frame interval time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a diagram illustrating implementation steps of a kalman prediction-based multi-target tracking method according to an embodiment of the present invention.
Detailed Description
The technical solution of the invention is further explained below through specific embodiments in combination with the accompanying drawings.
The drawings are for illustration only and are not intended to be limiting; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings, and descriptions thereof, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description of the present invention, if terms such as "upper", "lower", "left", "right", "inner" and "outer" indicate an orientation or positional relationship based on that shown in the drawings, they are used only for convenience and simplification of description and do not indicate or imply that the referred device or element must have a specific orientation or be constructed and operated in a specific orientation; such terms are therefore illustrative only, are not to be construed as limiting this patent, and their specific meanings can be understood by those skilled in the art according to the specific situation.
In the description of the present invention, unless otherwise explicitly specified or limited, terms such as "connected", if they indicate a connection relationship between components, are to be understood broadly: for example, as a fixed connection, a detachable connection or an integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as a connection through one or more other components or an interactive relationship between components. The specific meanings of the above terms in the present invention can be understood by those skilled in the art in specific cases.
According to the multi-target tracking method based on Kalman prediction provided by the embodiment of the invention, for a target lost over multiple consecutive frames, the historical position information of the target is used as the observation variable of its Kalman prediction model in the current frame and as the basis for updating the model parameters, which alleviates the reduction in prediction accuracy of the Kalman prediction model caused by the noise data introduced when a target is lost over consecutive frames; and by using the historical observed position information of the target, the position predicted for the target in the current frame by the Kalman prediction model, and the time interval information among the three frames, the method finds the linear relation between the inter-frame time interval and the real position of the target, calculates the real position information of the target in the current frame as the observation variable of the Kalman prediction model, and adjusts the model parameters to predict the position where the target appears in the next frame, which solves the technical problem that the prediction performance of the Kalman prediction model is not ideal when the inter-frame time interval is nonlinear. In order to solve the above two technical problems, the multi-target tracking method based on Kalman prediction provided by this embodiment, as shown in Fig. 1, comprises the steps of:
S1, acquiring at least two data frames with different timestamps;
S2, judging whether the number of acquired data frames is equal to 2;
if so, updating the model parameters of the Kalman prediction model used for tracking the corresponding target by using a first strategy;
if not, updating the model parameters of the Kalman prediction model used for tracking the corresponding target by using a second strategy;
S3, predicting whether the target appears in the next frame by using the Kalman prediction model after the parameters are updated.
The specific implementation of the first strategy is explained in detail below.
When the number of data frames acquired in step S1 is two, the first strategy for updating the model parameters of the Kalman prediction model used for tracking the corresponding target (to ensure the target tracking and prediction effect, in the present application each target has a corresponding Kalman prediction model used for tracking its position, that is, targets and Kalman prediction models are in one-to-one correspondence) comprises the steps of:
A1, detecting the targets in the data frame at time t and adding them to the detection target set DetectSet; at time t the data content of DetectSet is expressed as:
DetectSet = {D_1, D_2, ..., D_i, ..., D_n}
where D_i and n respectively denote the i-th target detected in the data frame at time t and the total number of targets;
A2, creating corresponding track information for each target in DetectSet, adding it to the track container, and setting the number of track points in each piece of track information to "1"; at time t the data content of the track container is expressed as:
TrackSet = {Track_1, Track_2, ..., Track_i, ..., Track_n}
where Track_i denotes the track information created for the i-th target D_i, i = 1, 2, ..., n;
the information content contained in Track_i is expressed as:
Track_i = {x_i, y_i, w_i, h_i, ts_t}
where x_i and y_i respectively denote the horizontal-axis coordinate and the vertical-axis coordinate, in the xy-axis coordinate system, of the upper-left vertex of the rectangular box framing the target D_i in the data frame at time t; w_i and h_i respectively denote the width and height of the rectangular box framing the target D_i; and ts_t denotes the timestamp of forming the track information Track_i.
A3, taking the track information in the track container as the basis for assigning the model parameters of the corresponding Kalman prediction models, and giving each Kalman prediction model its initial parameter values.
Suppose the Kalman prediction model used to track and identify the position of the target D_i is KF_i. In step A3, the track information Track_i corresponding to the target D_i contains the information content {x_i, y_i, w_i, h_i, ts_t}; in this embodiment, (x_i, y_i, w_i, h_i) is taken as the model observation variable of KF_i and used to assign the initial values of its model parameters. The model parameters and the observation variables have a corresponding relation, and Kalman prediction models trained with the same samples but different model parameters usually give different position predictions for the same target at the same time, so the model parameters can be derived backwards from the observation variables of the model. Since the specific method of assigning initial values to the model parameters, or of updating the model parameters, is not within the scope of the claims of the present application, the model parameter assignment process is not described in detail.
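A per-target Kalman prediction model of the kind assigned in step A3 can be sketched as a small constant-velocity filter over the box parameters; the state layout, noise levels and class name below are assumptions made for illustration, not the parameter assignment used by the invention.

    import numpy as np

    class BoxKalman:
        # state: [x, y, w, h, vx, vy]; observation: [x, y, w, h]
        def __init__(self, box):
            x, y, w, h = box
            self.x = np.array([x, y, w, h, 0.0, 0.0], dtype=float)  # initialized from the first track point
            self.P = np.eye(6) * 10.0                                # initial state uncertainty (assumed)
            self.F = np.eye(6)                                       # constant-velocity transition
            self.F[0, 4] = 1.0
            self.F[1, 5] = 1.0
            self.H = np.zeros((4, 6))                                # observe the box only
            self.H[:4, :4] = np.eye(4)
            self.Q = np.eye(6) * 0.01                                # process noise (assumed)
            self.R = np.eye(4)                                       # observation noise (assumed)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:4]                                        # predicted box at the next moment

        def update(self, box):
            # use an observation variable (a position value) to update the model
            z = np.asarray(box, dtype=float)
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - self.H @ self.x)
            self.P = (np.eye(6) - K @ self.H) @ self.P

In this reading, constructing the filter from a target's first box plays the role of giving the model its initial parameter values, and each later call to update corresponds to feeding it a new observation variable.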
A4, detecting the targets in the data frame at time t+N to update the detection target set DetectSet; at time t+N the updated DetectSet is expressed as:
DetectSet = {D'_1, D'_2, ..., D'_j, ..., D'_m}
where D'_j and m respectively denote the j-th target detected in the data frame at time t+N and the total number of targets.
It should be noted here that the data frame at time t+N is not necessarily the next frame after the data frame at time t; when N = 1, the data frame at time t+N is the next frame after the data frame at time t.
The update rule of DetectSet is: if, for example, the three targets A, B and C are detected in the data frame at time t and the three targets A, C and D are detected in the data frame at time t+N, then the DetectSet updated at time t+N contains the three targets A, C and D.
For a target detected at time t+N that was also detected at time t, its position detection value is added as a new track point to the corresponding track information created for it at time t; for example, if target A is detected at both time t+N and time t, the position of target A detected in the data frame at time t+N is added to the corresponding track information Track_A created for it at time t.
For a target detected at time t but not detected at time t+N, the Kalman prediction model dedicated to that target, given its initial model parameter values in step A3, is used to predict its position at time t+N, and the predicted position is taken as the position real value of the target at time t+N and added to the position real value set TrackerSet_real.
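The update rule of DetectSet described above can be made concrete with the following Python snippet (the target identifiers A, B, C and D mirror the example).

    detected_at_t = {"A", "B", "C"}     # targets detected in the data frame at time t
    detected_at_t_n = {"A", "C", "D"}   # targets detected in the data frame at time t+N
    detect_set = set(detected_at_t_n)   # updated DetectSet at time t+N: A, C and D
    lost_targets = detected_at_t - detected_at_t_n  # targets needing a Kalman-predicted position ("B")
    print(sorted(detect_set), sorted(lost_targets))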
A5, matching each position real value in TrackerSet_real with the position detection value corresponding to each target in the DetectSet updated in step A4 (the matching adopts the existing Hungarian algorithm; the matching idea is to calculate the distance between the position of the real prediction box of each target in TrackerSet_real (the position real value) and the position of the prediction rectangular box corresponding to each target in DetectSet (the position detection value) to obtain a corresponding distance matrix, and then to apply the Hungarian matching algorithm to the distance matrix to obtain an optimal matching combination result set; when the distance of a matched pair is smaller than a preset distance threshold, the two boxes are judged to match successfully, otherwise the matching fails).
If the matching succeeds, the position detection value corresponding to the matched target is added as a new track point to the corresponding track information created for it at time t, the number of track points in that track information is accumulated by "1", and the position detection value corresponding to the matched target is used as the observation variable of its dedicated Kalman prediction model to update the model parameters of that model.
If the matching fails, the position detection value, in the data frame at time t, of the target described by that track information is used as the observation variable of its dedicated Kalman prediction model to update its model parameters, and the number of track points in the track information of that target is set to 0.
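The matching in step A5 can be sketched with a distance matrix and the Hungarian algorithm as implemented by scipy.optimize.linear_sum_assignment; the function below works on box centers and a hypothetical pixel threshold, and is only an illustration of the matching idea described above.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_real_values_to_detections(real_centers, detected_centers, max_dist):
        # distance matrix between position real values and position detection values
        real = np.asarray(real_centers, dtype=float)
        det = np.asarray(detected_centers, dtype=float)
        dist = np.linalg.norm(real[:, None, :] - det[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(dist)        # optimal matching combination
        matches = [(r, c) for r, c in zip(rows, cols) if dist[r, c] < max_dist]
        unmatched = set(range(len(real))) - {r for r, _ in matches}
        return matches, unmatched                       # unmatched entries follow the failure branch

    print(match_real_values_to_detections([(100, 80), (300, 200)], [(305, 198), (102, 83)], 20.0))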
When the number of data frames acquired in step S1 is greater than 2, the second strategy for updating the model parameters of each Kalman prediction model used for tracking the corresponding target further comprises, on the basis of the first strategy, the steps of:
A6, detecting the targets in the data frame at time t+N+M to update the detection target set DetectSet; at time t+N+M the updated DetectSet is expressed as:
DetectSet = {D''_1, D''_2, ..., D''_k, ..., D''_q}
where D''_k and q respectively denote the k-th target detected in the data frame at time t+N+M and the total number of targets;
and for the same target in DetectSet whose position has been detected at time t+N+M and at time t+N, and/or at time t+N+M and at time t, and/or at time t+N+M, at time t+N and at time t, the position detection value detected at time t+N+M is added as a new track point to the corresponding track information created for the target at time t, and the number of track points in that track information is accumulated by "1".
A7, judging whether the number of track points of each piece of track information is greater than or equal to 2;
if yes, going to step A8;
if not, using the Kalman prediction model dedicated to the target corresponding to the track information, with the model parameter values updated in step A5, to predict the position of the corresponding target at time t+N+M, and adding the predicted position as the position real value of the target at time t+N+M to the position real value set TrackerSet_real.
A8, calculating the position real value of each target at time t+N+M according to the track information recorded for the same target at times t and t+N.
A9, adding each position real value calculated in step A8 to the position real value set TrackerSet_real, and then returning to step A5.
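Step A7 simply chooses where the position real value added to TrackerSet_real comes from, as the small Python sketch below illustrates with hypothetical boxes.

    def real_value_for_target(num_points, kalman_prediction, history_estimate):
        # fewer than 2 track points: fall back to the Kalman-predicted position (step A7);
        # otherwise use the value derived from the t / t+N track records (steps A8, A9)
        return history_estimate if num_points >= 2 else kalman_prediction

    print(real_value_for_target(1, (10, 10, 40, 60), (12, 9, 40, 60)))  # -> Kalman fallback
    print(real_value_for_target(3, (10, 10, 40, 60), (12, 9, 40, 60)))  # -> history-based value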
In step A8, calculating the position real value of each target at time t+N+M comprises the steps of:
A81, calculating the historical frame time interval Δt1 of the track information corresponding to the target by the following formula (1):
Δt1 = ts_(t+N) - ts_t   (1)
in formula (1), ts_t and ts_(t+N) respectively denote the timestamp recorded when the track information was created at time t and the timestamp recorded when the track point was added to that track information at time t+N;
A82, calculating the time interval Δt2 between the current time t+N+M and time t+N by the following formula (2):
Δt2 = ts_(t+N+M) - ts_(t+N)   (2)
in formula (2), ts_(t+N+M) denotes the timestamp of detecting the target in the data frame at time t+N+M;
A83, calculating the ratio of Δt2 to Δt1, recorded as λ = Δt2 / Δt1;
A84, calculating the center point coordinates (cx_det, cy_det) of the rectangular box used to frame the target detected at time t+N, and the center point coordinates (cx_pre, cy_pre) of the rectangular box predicted at time t+N by the Kalman prediction model used to track and detect the target.
cx_det and cy_det are calculated by the following formulas (3) and (4), respectively:
cx_det = x_det + w_det / 2   (3)
cy_det = y_det + h_det / 2   (4)
in formulas (3) and (4), x_det, y_det, w_det and h_det respectively denote the horizontal-axis coordinate and the vertical-axis coordinate of the upper-left vertex of the rectangular box used to frame the target as detected at time t+N, and the width and height of that rectangular box.
cx_pre and cy_pre are calculated by the following formulas (5) and (6), respectively:
cx_pre = x_pre + w_pre / 2   (5)
cy_pre = y_pre + h_pre / 2   (6)
in formulas (5) and (6), x_pre, y_pre, w_pre and h_pre respectively denote the horizontal-axis coordinate and the vertical-axis coordinate of the upper-left vertex of the rectangular box of the target predicted by the Kalman prediction model at time t+N, and the width and height of that rectangular box.
A85, calculating the real position center coordinates (cx_real, cy_real) of the target at time t+N+M according to (cx_det, cy_det) and (cx_pre, cy_pre); cx_real and cy_real are calculated from the center point coordinates obtained in step A84 by formulas (7) and (8), respectively.
A86, calculating the position real value of the target according to (cx_real, cy_real); the data content of the position real value is expressed as:
{x_real, y_real, w_real, h_real}
where x_real and y_real respectively denote the horizontal-axis coordinate and the vertical-axis coordinate of the upper-left vertex of the rectangular box framing the real position of the target in the data frame at time t+N+M, and w_real and h_real denote the width and height of that rectangular box; x_real, y_real, w_real and h_real are calculated according to the real position center coordinates obtained in step A85 by formulas (9) to (12), respectively.
In summary, the invention matches each position real value in the dynamically updated target position real value set TrackerSet_real with the position detection value corresponding to each target in the updated detection target set DetectSet to adjust the observation variable of the Kalman prediction model, and then updates the model parameters of the Kalman prediction model according to the adjusted observation variable, thereby reducing the influence on the prediction performance of the Kalman prediction model of the noise data added when a target is lost for multiple consecutive frames. By finding the association between the ratio of the time intervals among the three frames at times t, t+N and t+N+M and the real position value of the target, the real position value of the target at time t+N+M is calculated and used as the observation variable of the Kalman prediction model; this observation variable is used to update and adjust the model parameters, after which the target position at the moment following t+N+M is tracked and predicted, which solves the problem of poor model prediction accuracy caused by the nonlinearity of the inter-frame interval time.
It should be understood that the above-described embodiments are merely preferred embodiments of the invention and the technical principles applied thereto. It will be understood by those skilled in the art that various modifications, equivalents, changes, and the like can be made to the present invention. However, such variations are within the scope of the invention as long as they do not depart from the spirit of the invention. In addition, certain terms used in the specification and claims of the present application are not limiting, but are used merely for convenience of description.

Claims (9)

1. A multi-target tracking method based on Kalman prediction, characterized by comprising the steps of:
S1, acquiring at least two data frames with different timestamps;
S2, judging whether the number of acquired data frames is equal to 2;
if so, updating the model parameters of the Kalman prediction model used for tracking the corresponding target by using a first strategy;
if not, updating the model parameters of the Kalman prediction model used for tracking the corresponding target by using a second strategy;
S3, predicting whether the target appears in the next frame by using the Kalman prediction model after the parameters are updated.
2. The Kalman prediction-based multi-target tracking method according to claim 1, wherein when the number of data frames acquired in step S1 is equal to 2, the first strategy for updating the model parameters of the Kalman prediction model used for tracking the corresponding target comprises the steps of:
A1, detecting the targets in the data frame at time t and adding them to the detection target set DetectSet; at time t the data content of DetectSet is expressed as:
DetectSet = {D_1, D_2, ..., D_i, ..., D_n}
where D_i and n respectively denote the i-th target detected in the data frame at time t and the total number of targets;
A2, creating corresponding track information for each target in DetectSet, adding it to the track container, and setting the number of track points in each piece of track information to "1"; at time t the data content of the track container is expressed as:
TrackSet = {Track_1, Track_2, ..., Track_i, ..., Track_n}
where Track_i denotes the track information created for the i-th target D_i;
A3, taking the track information in the track container as the basis for assigning the model parameters of the corresponding Kalman prediction models, and giving each Kalman prediction model its initial parameter values;
A4, detecting the targets in the data frame at time t+N to update the detection target set DetectSet; at time t+N the updated DetectSet is expressed as:
DetectSet = {D'_1, D'_2, ..., D'_j, ..., D'_m}
where D'_j and m respectively denote the j-th target in the updated DetectSet and the total number of targets;
and for a target detected at time t+N that was also detected at time t, adding its position detection value as a new track point to the corresponding track information created for it at time t;
and for a target detected at time t but not detected at time t+N, using the Kalman prediction model dedicated to that target, given its initial model parameter values in step A3, to predict its position at time t+N, and adding the predicted position as the position real value of the target at time t+N to the position real value set TrackerSet_real;
A5, matching each position real value in TrackerSet_real with the position detection value corresponding to each target in the DetectSet updated in step A4;
if the matching succeeds, adding the position detection value corresponding to the matched target as a new track point to the corresponding track information created for it at time t, accumulating the number of track points in that track information by "1", and using the position detection value corresponding to the matched target as the observation variable of its dedicated Kalman prediction model to update the model parameters of that model;
if the matching fails, using the position detection value, in the data frame at time t, of the target described by that track information as the observation variable of its dedicated Kalman prediction model to update its model parameters, and setting the number of track points in the track information of that target to 0.
3. The Kalman prediction-based multi-target tracking method according to claim 2, wherein when the number of data frames acquired in step S1 is greater than 2, the second strategy for updating the model parameters of each Kalman prediction model used for tracking the corresponding target further comprises, on the basis of the first strategy, the steps of:
A6, detecting the targets in the data frame at time t+N+M to update the detection target set DetectSet; at time t+N+M the updated DetectSet is expressed as:
DetectSet = {D''_1, D''_2, ..., D''_k, ..., D''_q}
where D''_k and q respectively denote the k-th target detected in the data frame at time t+N+M and the total number of targets;
and for the same target in DetectSet whose position has been detected at time t+N+M and at time t+N, and/or at time t+N+M and at time t, and/or at time t+N+M, at time t+N and at time t, adding the position detection value detected at time t+N+M as a new track point to the corresponding track information created for the target at time t, and accumulating the number of track points in that track information by "1";
A7, judging whether the number of track points of each piece of track information is greater than or equal to 2;
if yes, going to step A8;
if not, using the Kalman prediction model dedicated to the target corresponding to the track information, with the model parameter values updated in step A5, to predict the position of the corresponding target at time t+N+M, and adding the predicted position as the position real value of the target at time t+N+M to the position real value set TrackerSet_real;
A8, calculating the position real value of each target at time t+N+M according to the track information recorded for the same target at times t and t+N;
A9, adding each position real value calculated in step A8 to the position real value set TrackerSet_real, and then returning to step A5.
4. The Kalman prediction-based multi-target tracking method according to claim 3, wherein in step A8, calculating the position real value of each target at time t+N+M comprises the steps of:
A81, calculating the historical frame time interval Δt1 of the track information corresponding to the target by the following formula (1):
Δt1 = ts_(t+N) - ts_t   (1)
in formula (1), ts_t and ts_(t+N) respectively denote the timestamp recorded when the track information was created at time t and the timestamp recorded when the track point was added to that track information at time t+N;
A82, calculating the time interval Δt2 between the current time t+N+M and time t+N by the following formula (2):
Δt2 = ts_(t+N+M) - ts_(t+N)   (2)
in formula (2), ts_(t+N+M) denotes the timestamp of detecting the target in the data frame at time t+N+M;
A83, calculating the ratio of Δt2 to Δt1, recorded as λ;
A84, calculating the center point coordinates (cx_det, cy_det) of the rectangular box used to frame the target detected at time t+N, and the center point coordinates (cx_pre, cy_pre) of the rectangular box predicted at time t+N by the Kalman prediction model used to track and detect the target;
A85, calculating the real position center coordinates (cx_real, cy_real) of the target at time t+N+M according to (cx_det, cy_det) and (cx_pre, cy_pre);
A86, calculating the position real value of the target according to (cx_real, cy_real); the data content of the position real value is expressed as:
{x_real, y_real, w_real, h_real}
where x_real and y_real respectively denote the horizontal-axis coordinate and the vertical-axis coordinate of the upper-left vertex of the rectangular box framing the real position of the target in the data frame at time t+N+M, and w_real and h_real denote the width and height of that rectangular box.
5. The Kalman prediction-based multi-target tracking method according to claim 4, wherein in step A84, cx_det and cy_det are calculated by the following formulas (3) and (4), respectively:
cx_det = x_det + w_det / 2   (3)
cy_det = y_det + h_det / 2   (4)
in formulas (3) and (4), x_det, y_det, w_det and h_det respectively denote the horizontal-axis coordinate and the vertical-axis coordinate of the upper-left vertex of the rectangular box used to frame the target as detected at time t+N, and the width and height of that rectangular box.
6. The Kalman prediction-based multi-target tracking method according to claim 5, wherein in step A84, cx_pre and cy_pre are calculated by the following formulas (5) and (6), respectively:
cx_pre = x_pre + w_pre / 2   (5)
cy_pre = y_pre + h_pre / 2   (6)
in formulas (5) and (6), x_pre, y_pre, w_pre and h_pre respectively denote the horizontal-axis coordinate and the vertical-axis coordinate of the upper-left vertex of the rectangular box of the target predicted by the Kalman prediction model at time t+N, and the width and height of that rectangular box.
7. The Kalman prediction based multi-target tracking method according to claim 6, characterized in that in step A85, the real position coordinates (cx_real, cy_real) are calculated from (cx_d, cy_d), (cx_p, cy_p) and the time interval ratio λ by the following formulas (7) and (8), respectively:

[formulas (7) and (8): formula images not reproduced]
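Because the images for formulas (7) and (8) are not reproduced, the exact relation is not recoverable from this text. One plausible reading of the "association between the time-interval proportion and the real position value" described in the abstract is a linear extrapolation of the displacement between the detected and predicted centers at time t+N, scaled by λ. The sketch below shows only that assumed interpretation, not the patent's stated formulas:

```python
def extrapolate_center(cx_d: float, cy_d: float,
                       cx_p: float, cy_p: float, lam: float):
    """Assumed reading of formulas (7)-(8): scale the displacement between the
    detected center and the Kalman-predicted center at time t+N by the interval
    ratio lambda to estimate the center at time t+N+M. This is an interpretation,
    not the patent's actual formula."""
    cx_real = cx_d + lam * (cx_p - cx_d)
    cy_real = cy_d + lam * (cy_p - cy_d)
    return cx_real, cy_real

# Using the illustrative values from the earlier sketches and lam = 1.5.
print(extrapolate_center(120.0, 100.0, 124.0, 98.0, 1.5))  # -> (126.0, 97.0)
```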
8. The Kalman prediction based multi-target tracking method according to claim 7, characterized in that in step A86, x_real, y_real, w_real and h_real in RealValue are calculated by the following formulas (9) to (12), respectively:

[formulas (9) to (12): formula images not reproduced]
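The images for formulas (9) through (12) are likewise missing. A common way to complete a position value from an extrapolated center is to carry the width and height over from the detection at time t+N and recompute the upper-left vertex; the sketch below assumes exactly that, and the assumption is noted in the comments:

```python
def real_position_value(cx_real: float, cy_real: float, w_d: float, h_d: float):
    """Assumed reading of formulas (9)-(12): upper-left vertex recomputed from the
    extrapolated center; width and height carried over from the box detected at
    time t+N. The width/height choice is an assumption, not taken from the
    reproduced claim text."""
    w_real, h_real = w_d, h_d
    x_real = cx_real - w_real / 2.0
    y_real = cy_real - h_real / 2.0
    return x_real, y_real, w_real, h_real

print(real_position_value(126.0, 97.0, 40.0, 80.0))  # -> (106.0, 57.0, 40.0, 80.0)
```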
9. The Kalman prediction based multi-target tracking method according to claim 2, characterized in that the information content included in the track information Tracker_j corresponding to each of the targets Object_j created at time t is expressed as follows:

Tracker_j = (x_j, y_j, w_j, h_j, τ_t)

wherein x_j and y_j respectively represent the horizontal-axis coordinate and the vertical-axis coordinate, in the axis coordinate system of the data frame at time t, of the upper-left vertex of the rectangular frame used for framing the target Object_j; w_j and h_j respectively represent the width and height of the rectangular frame used for framing the target Object_j; and τ_t represents the timestamp at which the track information Tracker_j is formed.
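The track record of claim 9 can be pictured as a small structure holding the initial box and the creation timestamp. The field names below are illustrative only, not the patent's notation:

```python
from dataclasses import dataclass

@dataclass
class TrackInfo:
    x: float      # horizontal-axis coordinate of the upper-left vertex of the framing box at time t
    y: float      # vertical-axis coordinate of the upper-left vertex of the framing box at time t
    w: float      # width of the rectangular frame framing the target
    h: float      # height of the rectangular frame framing the target
    tau_t: float  # timestamp at which the track information is formed

tracker_j = TrackInfo(x=100.0, y=60.0, w=40.0, h=80.0, tau_t=0.0)
```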
CN202211102306.9A 2022-09-09 2022-09-09 Multi-target tracking method based on Kalman prediction Active CN115908506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211102306.9A CN115908506B (en) 2022-09-09 2022-09-09 Multi-target tracking method based on Kalman prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211102306.9A CN115908506B (en) 2022-09-09 2022-09-09 Multi-target tracking method based on Kalman prediction

Publications (2)

Publication Number Publication Date
CN115908506A 2023-04-04
CN115908506B CN115908506B (en) 2023-06-27

Family

ID=86469865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211102306.9A Active CN115908506B (en) 2022-09-09 2022-09-09 Multi-target tracking method based on Kalman prediction

Country Status (1)

Country Link
CN (1) CN115908506B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985138A (en) * 2014-05-14 2014-08-13 苏州盛景空间信息技术有限公司 Long-sequence image SIFT feature point tracking algorithm based on Kalman filter
CN107292911A (en) * 2017-05-23 2017-10-24 南京邮电大学 A kind of multi-object tracking method merged based on multi-model with data correlation
CN111932580A (en) * 2020-07-03 2020-11-13 江苏大学 Road 3D vehicle tracking method and system based on Kalman filtering and Hungary algorithm
CN112785630A (en) * 2021-02-02 2021-05-11 宁波智能装备研究院有限公司 Multi-target track exception handling method and system in microscopic operation
CN113256689A (en) * 2021-06-08 2021-08-13 南京甄视智能科技有限公司 High-altitude parabolic detection method and device
CN113269098A (en) * 2021-05-27 2021-08-17 中国人民解放军军事科学院国防科技创新研究院 Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle
WO2021223367A1 (en) * 2020-05-06 2021-11-11 佳都新太科技股份有限公司 Single lens-based multi-pedestrian online tracking method and apparatus, device, and storage medium
CN114445453A (en) * 2021-12-21 2022-05-06 武汉中海庭数据技术有限公司 Real-time multi-target tracking method and system in automatic driving
CN114550219A (en) * 2022-04-06 2022-05-27 南京甄视智能科技有限公司 Pedestrian tracking method and device
CN114897944A (en) * 2021-11-10 2022-08-12 北京中电兴发科技有限公司 Multi-target continuous tracking method based on DeepSORT

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985138A (en) * 2014-05-14 2014-08-13 苏州盛景空间信息技术有限公司 Long-sequence image SIFT feature point tracking algorithm based on Kalman filter
CN107292911A (en) * 2017-05-23 2017-10-24 南京邮电大学 A kind of multi-object tracking method merged based on multi-model with data correlation
WO2021223367A1 (en) * 2020-05-06 2021-11-11 佳都新太科技股份有限公司 Single lens-based multi-pedestrian online tracking method and apparatus, device, and storage medium
CN111932580A (en) * 2020-07-03 2020-11-13 江苏大学 Road 3D vehicle tracking method and system based on Kalman filtering and Hungary algorithm
CN112785630A (en) * 2021-02-02 2021-05-11 宁波智能装备研究院有限公司 Multi-target track exception handling method and system in microscopic operation
CN113269098A (en) * 2021-05-27 2021-08-17 中国人民解放军军事科学院国防科技创新研究院 Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle
CN113256689A (en) * 2021-06-08 2021-08-13 南京甄视智能科技有限公司 High-altitude parabolic detection method and device
CN114897944A (en) * 2021-11-10 2022-08-12 北京中电兴发科技有限公司 Multi-target continuous tracking method based on DeepSORT
CN114445453A (en) * 2021-12-21 2022-05-06 武汉中海庭数据技术有限公司 Real-time multi-target tracking method and system in automatic driving
CN114550219A (en) * 2022-04-06 2022-05-27 南京甄视智能科技有限公司 Pedestrian tracking method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JONG-MIN JEONG et al.: "Kalman Filter Based Multiple Objects Detection-Tracking Algorithm Robust to Occlusion", SICE Annual Conference 2014 *
吴睿曦 (WU Ruixi): "Research on Human Multi-Target Detection and Tracking Based on Deep Learning", Wanfang Data *
杨松林 (YANG Songlin): "Research on Tracking Technology Based on Deep Learning Object Detection and Kernelized Correlation Filtering", Wanfang Data, pages 36-50 *

Also Published As

Publication number Publication date
CN115908506B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
EP2858008B1 (en) Target detecting method and system
CN107452015B (en) Target tracking system with re-detection mechanism
CN107146239B (en) Satellite video moving target detection method and system
CN104008371B (en) Regional suspicious target tracking and recognizing method based on multiple cameras
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
US20120114176A1 (en) Image processing apparatus and image processing method
CN108447076B (en) Multi-target tracking method based on deep reinforcement learning
WO2019172172A1 (en) Object tracker, object tracking method, and computer program
CN113777600A (en) Multi-millimeter-wave radar cooperative positioning tracking method
CN108446710A (en) Indoor plane figure fast reconstructing method and reconstructing system
EP3593322B1 (en) Method of detecting moving objects from a temporal sequence of images
CN110782433A (en) Dynamic information violent parabolic detection method and device based on time sequence and storage medium
CN116309731A (en) Multi-target dynamic tracking method based on self-adaptive Kalman filtering
Wang et al. Automatic node selection and target tracking in wireless camera sensor networks
CN104700408A (en) Indoor singe target positioning method based on camera network
CN111160203A (en) Loitering and lingering behavior analysis method based on head and shoulder model and IOU tracking
CN110458862A (en) A kind of motion target tracking method blocked under background
CN110555377A (en) pedestrian detection and tracking method based on fisheye camera overlook shooting
CN105187801B (en) System and method for generating abstract video
CN116008936A (en) Human body track tracking detection method based on millimeter wave radar
Rao et al. Real-time speed estimation of vehicles from uncalibrated view-independent traffic cameras
Arróspide et al. On-board robust vehicle detection and tracking using adaptive quality evaluation
CN110660084A (en) Multi-target tracking method and device
CN111860392B (en) Thermodynamic diagram statistical method based on target detection and foreground detection
JP2017174305A (en) Object tracking device, method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant