CN109360225B - Motion model optimization system and method - Google Patents

Motion model optimization system and method

Info

Publication number
CN109360225B
CN109360225B (application number CN201811199699.3A)
Authority
CN
China
Prior art keywords
tracked target
center
state vector
coordinate system
jerk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811199699.3A
Other languages
Chinese (zh)
Other versions
CN109360225A (en)
Inventor
马越
裴鹏
阮书敏
林露
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201811199699.3A priority Critical patent/CN109360225B/en
Publication of CN109360225A publication Critical patent/CN109360225A/en
Priority to PCT/CN2019/104394 priority patent/WO2020078140A1/en
Application granted granted Critical
Publication of CN109360225B publication Critical patent/CN109360225B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a motion model optimization system and method. The method comprises the following steps: acquiring N frames of image information from a camera; acquiring the state vector of the tracked target center in the k-th frame image in the pixel coordinate system, where 1 ≤ k ≤ N; predicting the state vector of the tracked target center in the (k+1)-th frame image in the pixel coordinate system with a second-order autoregressive motion model; establishing a jerk constraint condition on the tracked target center in two consecutive adjacent frames of images; constraining the state vector of the tracked target center in the (k+1)-th frame image in the pixel coordinate system with the jerk constraint condition; obtaining the predicted position of the tracked target center from the predicted state vector of the tracked target center in the coordinate system; and correcting the target position of the search area with the predicted position of the tracked target center to obtain an accurate center position. The method can accurately predict the position of the tracked target center while reducing the range of the search area, improving the efficiency of target tracking.

Description

Motion model optimization system and method
Technical Field
The invention relates to the technical field of model optimization, in particular to a system and a method for optimizing a motion model.
Background
Detection and tracking of moving targets is a branch of image processing and computer vision, and is of great significance both in theory and in practice. Research on moving-target detection and tracking aims to extract a moving target from image data and track it continuously, providing basic elements and an analysis basis for further processing of the video sequence. Target detection extracts foreground targets that move relative to the background, and target tracking determines the corresponding positions of the same target in different frames of the video sequence. Detection and tracking of moving targets are key technologies and active research problems in computer vision, image processing and active monitoring, and are applied in video surveillance, intelligent transportation, autonomous navigation, precision guidance, and human-computer interaction interfaces.
At present, when a moving target is tracked with the KCF (Kernelized Correlation Filter) algorithm, fast or violent movement of the tracked target can prevent the original KCF search area from completely covering the target, causing tracking drift or even tracking failure.
In the prior art, the area of the search region is enlarged to ensure that it covers the target, but this increases the amount of computation and reduces tracking efficiency.
Disclosure of Invention
The invention aims to provide a motion model optimization system and method that can accurately predict the position of the tracked target center while reducing the range of the search area, thereby improving target tracking efficiency.
In order to achieve the purpose, the invention provides the following scheme:
a method of optimizing a motion model, comprising:
acquiring N frames of image information in a camera;
acquiring a state vector of the tracked target center in the k-th frame image in a pixel coordinate system, where 1 ≤ k ≤ N;
predicting the state vector of the tracked target center in the (k+1)-th frame image in the pixel coordinate system, from the state vector of the tracked target center in the k-th frame image in the pixel coordinate system, using a second-order autoregressive motion model;
establishing a jerk constraint condition of the tracked target center in two consecutive adjacent frames of images;
constraining the state vector of the tracked target center in the (k+1)-th frame image in the pixel coordinate system with the jerk constraint condition, to obtain a predicted state vector of the tracked target center in that coordinate system;
obtaining the predicted position of the tracked target center from the predicted state vector of the tracked target center in the coordinate system;
and correcting the target position of the search area using the predicted position of the tracked target center, to obtain an accurate center position.
Optionally, the obtaining of the state vector of the tracked target center in the kth frame image under the pixel coordinate system specifically includes:
the formula is adopted:
Figure BDA0001829686020000021
to represent the state vector of the tracked target center in the k-th frame image in the pixel coordinate system, where the state vector comprises position, velocity and acceleration;
where Xk represents the state vector of the tracked target center in the k-th frame image in the pixel coordinate system, and T is the step length between two consecutive adjacent frames.
Optionally, the predicting the state vector of the tracked target center in the k +1 th frame image in the pixel coordinate system according to the state vector of the tracked target center in the k th frame image in the pixel coordinate system by using the second-order autoregressive motion model specifically includes:
the formula is adopted:
Figure BDA0001829686020000022
representing the state vector of the tracked target center in the (k+1)-th frame image in the pixel coordinate system;
where G(σ) is zero-mean white noise with variance σ². In the 1st frame image, the position of the tracked target is taken as the center of the selected search area, and at this moment the velocity and acceleration of the tracked target center in the search area are initialized to zero.
Optionally, the establishing of the jerk constraint condition of the center of the tracked target in the two consecutive adjacent frames of images specifically includes:
the formula is adopted:
Figure BDA0001829686020000031
obtaining the jerk of the center of the tracked target in the two continuous adjacent frames of images;
wherein, T is the step size between two continuous adjacent frames, and size is the size of the tracked target;
the formula is adopted:
Figure BDA0001829686020000032
establishing a constraint condition of the jerk of the tracked target center in the two continuous adjacent frames of images;
where exp() denotes the exponential function with base e, c denotes a constant, and γk denotes the jerk of the tracked target center in two consecutive adjacent frames of images.
Optionally, the constraining the state vector of the tracked target center in the (k + 1) th frame image in the pixel coordinate system by using the jerk constraint condition to obtain a predicted state vector of the tracked target center in the coordinate system specifically includes:
the formula is adopted:
Figure BDA0001829686020000033
obtaining a predicted state vector of the center of the tracked target under a coordinate system;
where ΔX = Xk+1 - Xk; ΔX represents the estimated state change of the tracked target center between two consecutive frames of images, Xk+1 represents the state vector of the tracked target center in the (k+1)-th frame image in the pixel coordinate system, and Xk represents the state vector of the tracked target center in the k-th frame image in the pixel coordinate system.
Optionally, the correcting the target position in the search area by using the predicted position of the tracked target center to obtain an accurate center position specifically includes:
and taking the predicted position of the tracked target center as a new search area center position, and searching the tracked target in the new search area range.
A system for optimizing a motion model, comprising:
the image acquisition module is used for acquiring N frames of image information in the camera;
the state vector acquisition module is used for acquiring a state vector of the center of the tracked target in the kth frame of image in a pixel coordinate system; k is more than or equal to 1 and less than or equal to N;
the prediction module is used for predicting the state vector of the tracked target center in the k +1 frame image in the pixel coordinate system according to the state vector of the tracked target center in the k frame image in the pixel coordinate system by adopting a second-order autoregressive motion model;
the jerk constraint condition establishing module is used for establishing jerk constraint conditions of the centers of the tracked targets in two continuous adjacent frames of images;
the target center state vector prediction module is used for constraining the state vector of the tracked target center in the (k + 1) th frame image under the pixel coordinate system by utilizing the jerk constraint condition to obtain a prediction state vector of the tracked target center under the coordinate system;
the target center position prediction module is used for obtaining the predicted position of the tracked target center according to the predicted state vector of the tracked target center in the coordinate system;
and the correction module is used for correcting the target position of the search area by adopting the predicted position of the tracked target center to obtain an accurate center position.
Optionally, the prediction module specifically includes:
a state vector prediction module to employ the formula:
Figure BDA0001829686020000041
representing a state vector of the center of the tracked target in the k +1 th frame image under a pixel coordinate system;
where G(σ) is zero-mean white noise with variance σ². In the 1st frame image, the position of the tracked target is taken as the center of the selected search area, and at this moment the velocity and acceleration of the tracked target center in the search area are initialized to zero.
Optionally, the jerk constraint condition establishing module specifically includes:
a jerk calculation unit for employing the formula:
Figure BDA0001829686020000051
obtaining the jerk of the center of the tracked target in the two continuous adjacent frames of images;
wherein, T is the step size between two continuous adjacent frames, and size is the size of the tracked target;
a constraint condition establishing unit for adopting a formula:
Figure BDA0001829686020000052
establishing a constraint condition of the jerk of the tracked target center in the two continuous adjacent frames of images;
where exp() denotes the exponential function with base e, c denotes a constant, and γk represents the jerk of the tracked target center in two consecutive adjacent frames of images.
Optionally, the modification module specifically includes:
the correction unit is used for correcting the target position of the search area by adopting the predicted position of the tracked target center to obtain an accurate center position;
and the target tracking unit is used for taking the predicted position of the center of the tracked target as the center position of a new search area and searching the tracked target in the range of the new search area.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides an optimization system and method of a motion model, which predict the position of a target center in a pixel coordinate system through the motion model of the tracked target center, and then correct the position of a search area through the predicted position, thereby ensuring that the search area can cover a target without enlarging the area, accurately predicting the position of the tracked target center based on a kinematic model, and improving the efficiency of target tracking.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a method for optimizing a motion model according to the present invention;
FIG. 2 is a schematic structural diagram of an optimization system of a motion model according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a motion model optimization system and method, which can accurately predict the position of the center of a tracked target while reducing the range of a search area and improve the target tracking efficiency.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
FIG. 1 is a schematic flow chart of the optimization method of the motion model of the present invention.
As shown in FIG. 1, a method for optimizing a motion model includes:
Step 101: acquiring N frames of image information from a camera;
Step 102: acquiring a state vector of the tracked target center in the k-th frame image in a pixel coordinate system, where 1 ≤ k ≤ N;
Step 103: predicting the state vector of the tracked target center in the (k+1)-th frame image in the pixel coordinate system, from the state vector of the tracked target center in the k-th frame image in the pixel coordinate system, using a second-order autoregressive motion model;
Step 104: establishing a jerk constraint condition of the tracked target center in two consecutive adjacent frames of images;
Step 105: constraining the state vector of the tracked target center in the (k+1)-th frame image in the pixel coordinate system with the jerk constraint condition, to obtain a predicted state vector of the tracked target center in that coordinate system;
Step 106: obtaining the predicted position of the tracked target center from the predicted state vector of the tracked target center in the coordinate system;
Step 107: correcting the target position of the search area using the predicted position of the tracked target center, to obtain an accurate center position.
The step 102: acquiring a state vector of a tracked target center in a kth frame image under a pixel coordinate system, specifically comprising:
the formula is adopted:
Figure BDA0001829686020000071
to represent the state vector of the tracked target center in the k-th frame image in the pixel coordinate system, where the state vector comprises position, velocity and acceleration;
where Xk represents the state vector of the tracked target center in the k-th frame image in the pixel coordinate system, and T is the step length between two consecutive adjacent frames.
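Since the formula image above is not reproduced in the text, the following is an illustrative sketch of one plausible form of such a state vector for a constant-acceleration model in pixel coordinates; the symbols u and v for the pixel position of the target center and the finite-difference use of the step length T are assumptions, not the patent's exact notation.

```latex
% Assumed form (illustrative only): 6-component state of the tracked target center,
% comprising pixel position, velocity and acceleration, with the step length T
% between consecutive frames used for finite differences.
X_k = \begin{bmatrix} u_k & v_k & \dot{u}_k & \dot{v}_k & \ddot{u}_k & \ddot{v}_k \end{bmatrix}^{\mathsf{T}},
\qquad
\dot{u}_k \approx \frac{u_k - u_{k-1}}{T}, \qquad
\ddot{u}_k \approx \frac{\dot{u}_k - \dot{u}_{k-1}}{T}.
```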
The step 103: predicting the state vector of the tracked target center in the k +1 frame image in the pixel coordinate system according to the state vector of the tracked target center in the k frame image in the pixel coordinate system by adopting a second-order autoregressive motion model, and specifically comprises the following steps:
the formula is adopted:
Figure BDA0001829686020000072
representing the state vector of the tracked target center in the (k+1)-th frame image in the pixel coordinate system;
where G(σ) is zero-mean white noise with variance σ². In the 1st frame image, the position of the tracked target is taken as the center of the selected search area, and at this moment the velocity and acceleration of the tracked target center in the search area are initialized to zero.
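The prediction formula is also a formula image that is not reproduced, so the sketch below shows one common way such a second-order (constant-acceleration) model can be realized, assuming the state ordering from the sketch above and a block transition matrix; the function name predict_state and the numeric values are illustrative assumptions only.

```python
# Illustrative sketch (assumed form, not the patent's exact formula):
# X_{k+1} = F * X_k + G(sigma), a constant-acceleration transition with
# zero-mean white noise of variance sigma^2, state = [u, v, du, dv, ddu, ddv].
import numpy as np

def predict_state(X_k: np.ndarray, T: float, sigma: float,
                  rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """Propagate the 6-D state of the tracked target center one frame ahead."""
    I2, Z2 = np.eye(2), np.zeros((2, 2))
    # Position advances by velocity*T + 0.5*acceleration*T^2; velocity by acceleration*T.
    F = np.block([
        [I2, T * I2, 0.5 * T ** 2 * I2],
        [Z2, I2,     T * I2],
        [Z2, Z2,     I2],
    ])
    return F @ X_k + rng.normal(0.0, sigma, size=6)  # G(sigma): zero-mean white noise

# In the 1st frame the target position is the center of the selected search area,
# and the center velocity and acceleration are initialized to zero:
X_1 = np.array([320.0, 240.0, 0.0, 0.0, 0.0, 0.0])
X_2_predicted = predict_state(X_1, T=1.0, sigma=0.5)
```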
The step 104, namely establishing a jerk constraint condition of the tracked target center in two consecutive adjacent frames of images, specifically comprises the following steps:
the formula is adopted:
Figure BDA0001829686020000073
obtaining the jerk of the center of the tracked target in the two continuous adjacent frames of images;
where T is the step length between two consecutive adjacent frames and size is the size of the tracked target; the size is used to normalize the jerk of the tracked target center so as to weaken the influence caused by differences in the sizes of tracked targets. From the formula, when the tracked target undergoes uniform acceleration its acceleration is a fixed value and the jerk is approximately zero; when the tracked target moves violently the acceleration changes greatly and the jerk is large;
the formula is adopted:
Figure BDA0001829686020000081
establishing a constraint condition of the jerk of the tracked target center in the two continuous adjacent frames of images;
where exp() denotes the exponential function with base e, c denotes a constant, and γk denotes the jerk of the tracked target center in two consecutive adjacent frames of images.
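The two formula images for the jerk and its constraint are likewise not reproduced; the sketch below shows one plausible reading, assuming the jerk is the change in pixel acceleration between consecutive frames normalized by the step length T and the target size, and the constraint is an exponentially decaying weight exp(-c*γk). The function names and the exact normalization are assumptions.

```python
# Illustrative sketch (assumed forms) of the normalized jerk and its constraint weight.
import numpy as np

def jerk(acc_k: np.ndarray, acc_k1: np.ndarray, T: float, size: float) -> float:
    """Jerk of the tracked target center between frames k and k+1.

    acc_k, acc_k1: pixel accelerations [ddu, ddv]; T: step length between frames;
    size: tracked-target size, used to normalize so that targets of different
    sizes give comparable jerk values.
    """
    return float(np.linalg.norm(acc_k1 - acc_k) / (T * size))

def constraint_weight(gamma_k: float, c: float = 1.0) -> float:
    """Exponential jerk constraint: near 1 for smooth (uniform-acceleration) motion,
    near 0 for violent motion with large acceleration changes."""
    return float(np.exp(-c * gamma_k))
```

This matches the behaviour described above: under uniform acceleration the jerk is approximately zero and the weight stays close to one, while violent motion drives the weight towards zero.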
The accuracy of predicting the central position of the tracked target differs depending on the degree of accelerated motion of the target. When the tracked target moves at a nearly constant velocity, the state of the tracked target center in the (k+1)-th frame image predicted by the motion model is highly accurate; when the tracked target moves violently, the opposite is true. Therefore, in order to cope with different degrees of acceleration and further improve the overall accuracy of the motion-model-based prediction, the prediction needs to be constrained;
the step 105 is as follows: constraining the state vector of the tracked target center in the (k + 1) th frame image under the pixel coordinate system by using the jerk constraint condition to obtain a predicted state vector of the tracked target center under the coordinate system, and specifically comprising the following steps:
the formula is adopted:
Figure BDA0001829686020000082
obtaining a predicted state vector of the center of the tracked target under a coordinate system;
where ΔX = Xk+1 - Xk; ΔX represents the estimated state change of the tracked target center between two consecutive frames of images, Xk+1 represents the state vector of the tracked target center in the (k+1)-th frame image in the pixel coordinate system, and Xk represents the state vector of the tracked target center in the k-th frame image in the pixel coordinate system.
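As a hedged illustration of steps 105 to 107 (the constrained-update formula is again a formula image), the predicted state change ΔX can be scaled by the jerk-based weight before being applied, and the resulting position then becomes the new search-area center; the scaling form exp(-c*γk)*ΔX and the assumption that the first two state components are the pixel coordinates are illustrative, and predict_state, jerk and constraint_weight refer to the sketches above.

```python
# Illustrative sketch (assumed form): constrain the motion-model prediction with the
# jerk weight, then recenter the search area on the predicted target center.
import numpy as np

def constrained_prediction(X_k: np.ndarray, X_k1: np.ndarray,
                           gamma_k: float, c: float = 1.0) -> np.ndarray:
    """Scale the estimated state change between frames by exp(-c*gamma_k)."""
    dX = X_k1 - X_k
    return X_k + np.exp(-c * gamma_k) * dX

def recenter_search_area(X_pred: np.ndarray) -> tuple[float, float]:
    """Use the predicted center position (assumed to be the first two components)
    as the new search-area center for the tracker."""
    return float(X_pred[0]), float(X_pred[1])
```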
The step 107 is as follows: correcting the target position of the search area by adopting the predicted position of the tracked target center to obtain an accurate center position, and specifically comprising the following steps:
and taking the predicted position of the tracked target center as a new search area center position, and searching the tracked target in the new search area range.
FIG. 2 is a schematic structural diagram of an optimization system of a motion model according to the present invention.
As shown in fig. 2, a system for optimizing a motion model includes:
an image acquisition module 201, configured to acquire N frames of image information in a camera;
a state vector obtaining module 202, configured to obtain a state vector of a tracked target center in a kth frame of image in a pixel coordinate system; k is more than or equal to 1 and less than or equal to N;
the predicting module 203 is configured to predict a state vector of the tracked target center in the k +1 th frame image in the pixel coordinate system according to the state vector of the tracked target center in the k th frame image in the pixel coordinate system by using a second-order autoregressive motion model;
a jerk constraint condition establishing module 204, configured to establish a jerk constraint condition of a center of a tracked target in two consecutive adjacent frames of images;
the target center state vector prediction module 205 is configured to utilize the jerk constraint condition to constrain a state vector of a tracked target center in the (k + 1) th frame image in a pixel coordinate system, so as to obtain a predicted state vector of the tracked target center in the coordinate system;
the target center position prediction module 206 is configured to obtain a predicted position of the tracked target center according to the predicted state vector of the tracked target center in the coordinate system;
and the correcting module 207 is configured to correct the target position of the search area by using the predicted position of the tracked target center, so as to obtain an accurate center position.
The prediction module 203 specifically includes:
a state vector prediction module, configured to apply the formula:
Figure BDA0001829686020000091
representing a state vector of the center of the tracked target in the k +1 th frame image under a pixel coordinate system;
where G(σ) is zero-mean white noise with variance σ². In the 1st frame image, the position of the tracked target is taken as the center of the selected search area, and at this moment the velocity and acceleration of the tracked target center in the search area are initialized to zero.
The jerk constraint condition establishing module 204 specifically includes:
a jerk calculation unit for employing the formula:
Figure BDA0001829686020000101
obtaining the jerk of the center of the tracked target in the two continuous adjacent frames of images;
wherein, T is the step size between two continuous adjacent frames, and size is the size of the tracked target;
a constraint condition establishing unit for adopting a formula:
Figure BDA0001829686020000102
establishing a constraint condition of the jerk of the tracked target center in the two continuous adjacent frames of images;
where exp() denotes the exponential function with base e, c denotes a constant, and γk represents the jerk of the tracked target center in two consecutive adjacent frames of images.
The modification module 207 specifically includes:
the correction unit is used for correcting the target position of the search area by adopting the predicted position of the tracked target center to obtain an accurate center position;
and the target tracking unit is used for taking the predicted position of the center of the tracked target as the center position of a new search area and searching the tracked target in the range of the new search area.
The invention provides a motion model optimization system and method that predict the position of the target center in the pixel coordinate system through a motion model of the tracked target center and then correct the position of the search area using the predicted position. This ensures that the search area covers the target without being enlarged, so that the position of the tracked target center is predicted accurately based on a kinematic model and the efficiency of target tracking is improved. In essence, predicting the target center in the pixel coordinate system and correcting the search-area position with this prediction lets the search area cover the target without being enlarged, which optimizes the robustness of the algorithm and preserves its real-time performance.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the invention. At the same time, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In view of the above, the contents of this specification should not be construed as limiting the invention.

Claims (10)

1. A method for optimizing a motion model, comprising:
acquiring N frames of image information in a camera;
acquiring a state vector of the tracked target center in the kth frame image under a pixel coordinate system; k is more than or equal to 1 and less than or equal to N;
predicting a state vector of the tracked target center in the k +1 frame image in a pixel coordinate system according to the state vector of the tracked target center in the k frame image in the pixel coordinate system by adopting a second-order autoregressive motion model;
normalizing the jerk of the tracked target center by the size of the tracked target, thereby weakening the influence caused by differences in the sizes of tracked targets, wherein when the tracked target is in uniform acceleration movement the acceleration is a fixed value and the jerk is approximately zero, and when the tracked target is in violent movement the acceleration change is large and the jerk is large; establishing a jerk constraint condition of the tracked target center in two consecutive adjacent frames of images;
constraining the state vector of the tracked target center in the (k + 1) th frame image under a pixel coordinate system by using the jerk constraint condition to obtain a predicted state vector of the tracked target center under the coordinate system;
obtaining the predicted position of the tracked target center according to the predicted state vector of the tracked target center in the coordinate system;
and correcting the target position of the search area by adopting the predicted position of the tracked target center to obtain an accurate center position.
2. The method according to claim 1, wherein the obtaining of the state vector of the tracked target center in the kth frame image under the pixel coordinate system specifically comprises:
the following 6 variables were used:
Figure FDA0002600001020000011
to represent the state vector of the tracked target center in the k frame image under the pixel coordinate system, wherein the state vector comprises position, speed and acceleration;
wherein Xk represents the state vector of the tracked target center in the k-th frame image in the pixel coordinate system, and T is a step length between two consecutive adjacent frames.
3. The method according to claim 1, wherein the predicting the state vector of the tracked target center in the pixel coordinate system in the k +1 frame image according to the state vector of the tracked target center in the pixel coordinate system in the k frame image by using the second-order autoregressive motion model specifically comprises:
the formula is adopted:
Figure FDA0002600001020000021
representing a state vector of the center of the tracked target in the k +1 th frame image under a pixel coordinate system;
wherein G(σ) is zero-mean white noise with variance σ²; in the 1st frame image, the position of the tracked target is the center of the tracked target in the selected search area, and at this moment the center velocity and acceleration of the tracked target in the search area are initialized to zero.
4. The method according to claim 1, wherein the establishing of the jerk constraint condition of the tracked target center in two consecutive adjacent images specifically comprises:
the formula is adopted:
Figure FDA0002600001020000022
obtaining the jerk of the center of the tracked target in the two continuous adjacent frames of images;
wherein, T is the step size between two continuous adjacent frames, and size is the size of the tracked target;
the formula is adopted:
Figure FDA0002600001020000023
establishing a constraint condition of the jerk of the tracked target center in the two continuous adjacent frames of images;
wherein exp() denotes the exponential function with base e, c denotes a constant, and γk denotes the jerk of the tracked target center in two consecutive adjacent frames of images.
5. The method according to claim 1, wherein the constraining the state vector of the center of the tracked target in the (k + 1) th frame image in the pixel coordinate system by using the jerk constraint condition to obtain the predicted state vector of the center of the tracked target in the coordinate system specifically comprises:
the formula is adopted:
Figure FDA0002600001020000031
obtaining a predicted state vector of the center of the tracked target under a coordinate system;
wherein ΔX = Xk+1 - Xk; ΔX represents the estimated state change of the tracked target center between two consecutive frames of images, Xk+1 represents the state vector of the tracked target center in the (k+1)-th frame image in the pixel coordinate system, and Xk represents the state vector of the tracked target center in the k-th frame image in the pixel coordinate system.
6. The method according to claim 1, wherein the modifying the target position in the search area using the predicted position of the tracked target center to obtain an accurate center position specifically comprises:
and taking the predicted position of the tracked target center as a new search area center position, and searching the tracked target in the new search area range.
7. A system for optimizing a motion model, comprising:
the image acquisition module is used for acquiring N frames of image information in the camera;
the state vector acquisition module is used for acquiring a state vector of the center of the tracked target in the kth frame of image in a pixel coordinate system; k is more than or equal to 1 and less than or equal to N;
the prediction module is used for predicting the state vector of the tracked target center in the k +1 frame image in the pixel coordinate system according to the state vector of the tracked target center in the k frame image in the pixel coordinate system by adopting a second-order autoregressive motion model;
the jerk constraint condition establishing module is used for normalizing the jerk of the tracked target center by the size of the tracked target so as to weaken the influence caused by differences in the sizes of tracked targets, wherein when the tracked target is in uniform accelerated motion the acceleration is a fixed value and the jerk is approximately zero, and when the tracked target is in violent motion the acceleration change is large and the jerk is large, and for establishing the jerk constraint condition of the tracked target center in two consecutive adjacent frames of images;
the target center state vector prediction module is used for constraining the state vector of the tracked target center in the (k + 1) th frame image under the pixel coordinate system by utilizing the jerk constraint condition to obtain a prediction state vector of the tracked target center under the coordinate system;
the target center position prediction module is used for obtaining the predicted position of the tracked target center according to the predicted state vector of the tracked target center in the coordinate system;
and the correction module is used for correcting the target position of the search area by adopting the predicted position of the tracked target center to obtain an accurate center position.
8. The system for optimizing a motion model according to claim 7, wherein the prediction module specifically comprises:
a state vector prediction module to employ the formula:
Figure FDA0002600001020000041
representing a state vector of the center of the tracked target in the k +1 th frame image under a pixel coordinate system;
wherein G(σ) is zero-mean white noise with variance σ²; in the 1st frame image, the position of the tracked target is taken as the center of the selected search area, and at this moment the center velocity and acceleration of the tracked target in the search area are initialized to zero.
9. The system according to claim 7, wherein the jerk constraint establishing module specifically includes:
a jerk calculation unit for employing the formula:
Figure FDA0002600001020000042
obtaining the jerk of the center of the tracked target in the two continuous adjacent frames of images;
wherein, T is the step size between two continuous adjacent frames, and size is the size of the tracked target;
a constraint condition establishing unit for adopting a formula:
Figure FDA0002600001020000043
establishing a constraint condition of the jerk of the tracked target center in the two continuous adjacent frames of images;
wherein exp() denotes the exponential function with base e, c denotes a constant, and γk represents the jerk of the tracked target center in two consecutive adjacent frames of images.
10. The system for optimizing a motion model according to claim 7, wherein the modification module specifically includes:
the correction unit is used for correcting the target position of the search area by adopting the predicted position of the tracked target center to obtain an accurate center position;
and the target tracking unit is used for taking the predicted position of the center of the tracked target as the center position of a new search area and searching the tracked target in the range of the new search area.
CN201811199699.3A 2018-10-16 2018-10-16 Motion model optimization system and method Active CN109360225B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811199699.3A CN109360225B (en) 2018-10-16 2018-10-16 Motion model optimization system and method
PCT/CN2019/104394 WO2020078140A1 (en) 2018-10-16 2019-09-04 Optimization system and method for motion model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811199699.3A CN109360225B (en) 2018-10-16 2018-10-16 Motion model optimization system and method

Publications (2)

Publication Number Publication Date
CN109360225A CN109360225A (en) 2019-02-19
CN109360225B true CN109360225B (en) 2020-12-18

Family

ID=65349455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811199699.3A Active CN109360225B (en) 2018-10-16 2018-10-16 Motion model optimization system and method

Country Status (2)

Country Link
CN (1) CN109360225B (en)
WO (1) WO2020078140A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109360225B (en) * 2018-10-16 2020-12-18 北京理工大学 Motion model optimization system and method
CN111105444B (en) * 2019-12-31 2023-07-25 哈尔滨工程大学 Continuous tracking method suitable for grabbing underwater robot target
CN111340857B (en) * 2020-02-20 2023-09-19 浙江大华技术股份有限公司 Tracking control method and device for camera
CN111479063B (en) * 2020-04-15 2021-04-06 上海摩象网络科技有限公司 Holder driving method and device and handheld camera
CN112037258B (en) * 2020-08-25 2024-03-08 广州视源电子科技股份有限公司 Target tracking method, device, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101860729A (en) * 2010-04-16 2010-10-13 天津理工大学 Target tracking method for omnidirectional vision
CN104835180A (en) * 2015-04-29 2015-08-12 北京航空航天大学 Multi-target tracking method and device in video monitoring
CN105931263A (en) * 2016-03-31 2016-09-07 纳恩博(北京)科技有限公司 Target tracking method and electronic equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103281476A (en) * 2013-04-22 2013-09-04 中山大学 Television image moving target-based automatic tracking method
US10382795B2 (en) * 2014-12-10 2019-08-13 Mediatek Singapore Pte. Ltd. Method of video coding using binary tree block partitioning
CN104616322A (en) * 2015-02-10 2015-05-13 山东省科学院海洋仪器仪表研究所 Onboard infrared target image identifying and tracking method and device
CN105427346B (en) * 2015-12-01 2018-06-29 中国农业大学 A kind of motion target tracking method and system
CN105913455A (en) * 2016-04-11 2016-08-31 南京理工大学 Local image enhancement-based object tracking method
CN108111760B (en) * 2017-12-26 2019-09-10 北京理工大学 A kind of electronic image stabilization method and system
CN109360225B (en) * 2018-10-16 2020-12-18 北京理工大学 Motion model optimization system and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101860729A (en) * 2010-04-16 2010-10-13 天津理工大学 Target tracking method for omnidirectional vision
CN104835180A (en) * 2015-04-29 2015-08-12 北京航空航天大学 Multi-target tracking method and device in video monitoring
CN105931263A (en) * 2016-03-31 2016-09-07 纳恩博(北京)科技有限公司 Target tracking method and electronic equipment

Also Published As

Publication number Publication date
WO2020078140A1 (en) 2020-04-23
CN109360225A (en) 2019-02-19

Similar Documents

Publication Publication Date Title
CN109360225B (en) Motion model optimization system and method
Akolkar et al. Real-time high speed motion prediction using fast aperture-robust event-driven visual flow
CN109711304B (en) Face feature point positioning method and device
CN109102525B (en) Mobile robot following control method based on self-adaptive posture estimation
CN112785628B (en) Track prediction method and system based on panoramic view angle detection tracking
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
CN115619826A (en) Dynamic SLAM method based on reprojection error and depth estimation
CN111798485A (en) Event camera optical flow estimation method and system enhanced by IMU
US11699240B2 (en) Target tracking method and apparatus, and storage medium
CN115311617A (en) Method and system for acquiring passenger flow information of urban rail station area
CN112967316B (en) Motion compensation optimization method and system for 3D multi-target tracking
CN104091352A (en) Visual tracking method based on structural similarity
CN111768427B (en) Multi-moving-object tracking method, device and storage medium
KR101806453B1 (en) Moving object detecting apparatus for unmanned aerial vehicle collision avoidance and method thereof
CN111696155A (en) Monocular vision-based multi-sensing fusion robot positioning method
CN115797801A (en) Mobile robot visual mileage flow-metering control method based on deep learning
CN116433728A (en) DeepSORT target tracking method for shake blur scene
CN106934818B (en) Hand motion tracking method and system
KR100332639B1 (en) Moving object detection method using a line matching technique
CN114245102A (en) Vehicle-mounted camera shake identification method and device and computer readable storage medium
CN202931463U (en) Characteristic block based video image stabilization device
Adachi et al. Improvement of Visual Odometry Based on Robust Feature Extraction Considering Semantics
CN101593045B (en) Motion vector stability predicting method of optical indicating device
Ji et al. DRV-SLAM: An Adaptive Real-Time Semantic Visual SLAM Based on Instance Segmentation Toward Dynamic Environments
CN104021575A (en) Moving object detection method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant