CN109410254A - Target tracking method based on target and camera motion modeling - Google Patents
Target tracking method based on target and camera motion modeling
- Publication number: CN109410254A; application number: CN201811307972.XA
- Authority: CN (China)
- Prior art keywords: target, speed, camera, tracking, predicted
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T7/00—Image analysis; G06T7/20—Analysis of motion; G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/20—Special algorithmic details; G06T2207/20068—Projection on vertical or horizontal image axis
Abstract
The invention discloses a target tracking method based on modeling of target and camera motion, comprising the following steps. S1: predict the true velocity of the target at the current moment with a filter algorithm. S2: predict the velocity of the camera at the current moment. S3: combine the target's true velocity predicted in step S1 with the camera velocity predicted in step S2 to obtain a prediction of the target's resultant velocity in the pixel plane, and use that prediction to select the target search region. S4: run a target tracking algorithm within the search region selected in step S3 to obtain the target's position in the pixel plane at the current moment. The proposed target tracking method based on modeling of target and camera motion effectively improves the accuracy and robustness of target tracking.
Description
Technical field
The present invention relates to the field of computer vision, and in particular to a target tracking method based on modeling of target and camera motion.
Background art
Video analysis has a wide range of application scenarios in real life. Its key steps include target detection, target tracking, and behavior analysis. Target detection finds the targets of interest; target tracking maintains continuous tracking of a target and estimates its state in each video frame; behavior analysis then mines behavioral characteristics from the continuous tracking of the target state and provides a basis for subsequent decisions. Target tracking thus plays a particularly important role in video analysis and is in strong current demand.
Visual target tracking can be divided into preprocessing, feature extraction, target state estimation, and similar stages. Feature extraction, as the most important link, plays a decisive role in tracking performance. A survey of existing methods shows that the features used concentrate on appearance information such as texture and color, e.g. color histograms, HOG features, and deep features, while the target's motion information is rarely studied. Yet a target's motion in a real scene must obey the constraints of the physical world, for example that velocity cannot change abruptly; such prior information can supply much that is useful for tracking and make it more robust. Motion features are rarely used because, in many cases, they perform very poorly in practice.
The root cause is that after the target is imaged by the camera, perspective and related effects can severely destroy the constraints that originally governed the target's motion in the real scene. Worse, tracking scenarios are not limited to fixed cameras; in a great many applications the camera itself is moving, so camera motion directly makes the target's motion in the image plane irregular. The physical-world constraints are thereby broken and the prior knowledge that could have aided tracking is lost, so methods that model the motion of the target directly in the image plane obtain motion information that is no longer true and reliable, resulting in poor tracking.
The above background is disclosed only to aid understanding of the concept and technical solution of the present invention and does not necessarily belong to the prior art of this patent application. Absent concrete evidence that the above content was publicly disclosed before the filing date of this application, the above background art should not be used to evaluate the novelty and inventiveness of this application.
Summary of the invention
To solve the above technical problems, the present invention proposes a target tracking method based on modeling of target and camera motion, which effectively improves the accuracy and robustness of target tracking.
In order to achieve the above object, the invention adopts the following technical scheme:
The invention discloses a target tracking method based on modeling of target and camera motion, comprising the following steps:
S1: predict the true velocity of the target at the current moment with a filter algorithm;
S2: predict the velocity of the camera at the current moment;
S3: combine the target's true velocity predicted in step S1 with the camera velocity predicted in step S2 to obtain a prediction of the target's resultant velocity in the pixel plane, and select the target search region according to that prediction;
S4: run a target tracking algorithm within the search region selected in step S3 to obtain the target's position in the pixel plane at the current moment.
Preferably, the target tracking method further comprises the following steps:
S5: from the target's position in the pixel plane at the current moment, combined with its position at the previous moment, compute an observation of the target's resultant velocity in the pixel plane;
S6: feed the observation computed in step S5 and the camera velocity predicted in step S2 into the filter algorithm of step S1 to update the filter.
Preferably, in step S3 the target's true velocity predicted in step S1 and the camera velocity predicted in step S2 are combined, and the prediction of the target's resultant velocity in the pixel plane is obtained using the following formula:

V_p = αV_g - αV_c

where V_p denotes the target's resultant velocity in the pixel plane, V_g the true velocity of the target, V_c the velocity of the camera, and α the factor mapping motion in the world coordinate system to the pixel plane.
Preferably, α=1.
Preferably, the target tracking algorithm in step S4 is the DSST algorithm.
Preferably, the filter algorithm in step S1 is the Kalman filter algorithm.
Preferably, in step S2 the velocity of the camera at the current moment is predicted using an optical flow method.
Preferably, the image block containing the target that is sampled when predicting the camera velocity with the optical flow method has a fixed size.
Preferably, the size of the sampled image block containing the target is the size of the target plus a fixed margin, wherein the fixed margin is determined by the maximum possible resultant speed of the target.
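The fixed-margin sizing rule above can be sketched as follows; the function name and the choice of doubling the maximum per-frame speed into a two-sided margin are illustrative assumptions, since the patent only states that the margin is determined by the target's maximum possible resultant speed.

```python
def sample_block_size(target_w, target_h, max_speed_px):
    """Image-block size for the optical-flow step: target size plus a
    fixed margin sized so that a target moving at up to `max_speed_px`
    pixels per frame stays inside the block (an illustrative reading of
    the patent's fixed-margin rule)."""
    margin = 2 * max_speed_px  # room on both sides of the target
    return target_w + margin, target_h + margin
```

Because the margin is constant rather than proportional, a small but fast target still gets enough room, which is the advantage the description claims over fixed-ratio sampling.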
Compared with the prior art, the beneficial effects of the present invention are as follows. The disclosed target tracking method based on modeling of target and camera motion combines the filter algorithm's prediction of the target's true velocity at the current moment with the predicted camera velocity to obtain a prediction of the target's resultant velocity in the pixel plane, selects the target search region from that prediction, and then solves for the target's pixel-plane position with a target tracking algorithm. By modeling the motion of the target and the camera simultaneously, the likely position of the target in the next frame can be predicted from the filter's estimate of the target's true motion when the target moves at a relatively constant high speed or the data have a low frame rate; when the camera shakes abruptly or the target moves abruptly at high speed, the likely position can be predicted by modeling these motions as camera motion. Through these two complementary mechanisms, motion information is effectively exploited to guide the selection of the search region, solving the problem of the target moving out of a search region that is too small, and supplying prior knowledge for target tracking, thereby effectively improving its accuracy and robustness.
In a further embodiment, an observation of the target's resultant velocity in the pixel plane is also computed from the target's pixel-plane positions at the two consecutive moments; combined with the camera velocity through the motion synthesis relation, this yields an observation of the target's true velocity, which in turn updates the Kalman filter, realizing closed-loop self-correction of the tracking algorithm and further improving the accuracy and robustness of target tracking.
Brief description of the drawings
Fig. 1 is a flow diagram of the target tracking method based on modeling of target and camera motion according to the preferred embodiment of the present invention;
Fig. 2 is a simplified flow diagram of the same method;
Fig. 3 shows the relationship between the target's real motion in the world coordinate system, the motion of the camera, and the target's resultant motion in the pixel plane.
Specific embodiments
The invention is further described below with reference to the drawings and in conjunction with preferred embodiments.
The most fundamental cause of motion-feature failure is the simultaneous motion of target and camera; the prior art usually ignores this fact and treats the motion of the target in the image plane as the motion of the target itself. On this basis, the present invention proposes a visual target tracking method that models target and camera motion simultaneously. The modeling matches reality: both motions are real motions of the physical world and obey its constraints, so they can provide a large amount of prior knowledge for tracking, improving its accuracy and robustness.
The preferred embodiment of the present invention uses a Kalman filter to estimate and predict the real motion of the target, an optical flow method to obtain the projection of the camera motion onto the image plane, and a target tracking algorithm to locate the target in the image, from whose results over two consecutive frames the target's motion in the image plane is obtained. First, the camera motion obtained by the optical flow method and the target's real motion predicted by the Kalman filter are combined to predict the target's most likely location; the predicted position is used to choose the candidate region in which the tracking algorithm locates the target; the tracking algorithm then finds the observed position of the target in the image plane within that candidate region; finally, through the motion synthesis relation, the observed image position of the target is combined with the camera motion obtained by the optical flow method to produce an observation of the target's true motion, which updates the Kalman filter. This forms a predict-update loop.
Referring to Fig. 1 and Fig. 2, the preferred embodiment of the present invention discloses a target tracking method based on modeling of target and camera motion, comprising the following steps:
S1: predict the true velocity of the target at the current moment with a filter algorithm.
Specifically, a Kalman filter (though the method is not limited to Kalman filtering) is used to estimate and predict the real motion of the target. The real motion of the target is a normal physical process and differs from the motion of the camera: camera motion is amplified by the perspective effect, whereas the target's real motion is instead attenuated by it, so the target's real motion is smoother; in this embodiment it is predicted by a Kalman filter.
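The Kalman filtering of the target's true velocity can be sketched as below; the class name, the constant-velocity state model with identity transition and observation matrices, and the noise values are assumptions for illustration, not the patent's exact design.

```python
import numpy as np

class VelocityKalman:
    """Minimal constant-velocity Kalman filter over the target's true
    2-D velocity (state = [vx, vy]); a sketch, not the patent's exact model."""

    def __init__(self, q=1e-2, r=1e-1):
        self.v = np.zeros(2)      # velocity estimate
        self.P = np.eye(2)        # estimate covariance
        self.Q = q * np.eye(2)    # process noise (allows slow velocity drift)
        self.R = r * np.eye(2)    # observation noise

    def predict(self):
        # Constant-velocity model: transition F = I, so the mean is unchanged,
        # only the uncertainty grows.
        self.P = self.P + self.Q
        return self.v.copy()

    def update(self, v_obs):
        # Observation H = I: the true velocity is observed directly
        # (via the motion-synthesis relation described in the text).
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)                 # Kalman gain
        self.v = self.v + K @ (np.asarray(v_obs, float) - self.v)
        self.P = (np.eye(2) - K) @ self.P
        return self.v.copy()
```

Driven with a steady observation, the estimate converges smoothly to it, reflecting the "velocity cannot change abruptly" prior the description relies on.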
S2: predict the velocity of the camera at the current moment.
Specifically, the camera velocity is predicted with an optical flow method; dense optical flow may further be used to overcome the low robustness caused by having too few pixels. To balance computation speed, a fixed-size image block containing the target is sampled, its size being the target size plus a fixed margin determined by the maximum possible resultant speed of the target. The advantage of a fixed margin over a fixed ratio in this embodiment is that with a fixed ratio, when the target is small the sampled image block is also small, yet the target's velocity may still be large and carry it out of the block; with a fixed margin, even if the target size changes, the maximum movement speed to be accommodated stays the same.
When computing the camera motion, the observations are taken from the target area; this is done to overcome the perspective effect, since the image patch around the target correctly reflects the camera's motion in the image plane. After denoising all observations in the target area, their mean is taken as the observation of the camera velocity, denoted V_pc = (V_x, V_y).
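The mean-of-denoised-flow step can be sketched as follows; the patent does not specify the denoising, so the median-based outlier rejection and the parameter `k` here are stand-in assumptions.

```python
import numpy as np

def camera_velocity(flow, k=1.5):
    """Estimate the camera-velocity observation V_pc = (Vx, Vy) from a
    dense optical-flow field over the sampled target-area image block.
    `flow` has shape (H, W, 2); the outlier rejection below is an
    assumption standing in for the patent's unspecified denoising step."""
    vecs = flow.reshape(-1, 2)
    med = np.median(vecs, axis=0)
    dev = np.linalg.norm(vecs - med, axis=1)
    keep = dev <= k * (np.median(dev) + 1e-9)  # drop outlier flow vectors
    return vecs[keep].mean(axis=0)             # mean of inliers = V_pc
```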
S3: combine the target's true velocity predicted in step S1 with the camera velocity predicted in step S2 to obtain a prediction of the target's resultant velocity in the pixel plane, and select the target search region according to that prediction.
As shown in Fig. 3, the motion of the target between two frames in the world coordinate system V_g (the target's true velocity), the motion of the camera V_c (the camera's velocity), and the motion finally presented in the image plane are related as illustrated; the target's resultant velocity in the pixel plane obeys:

V_p = αV_g - αV_c = V_pg - V_pc

where V_p denotes the target's resultant velocity in the pixel plane, α the factor mapping motion in the world coordinate system to the pixel plane, V_pg the target's true velocity in the pixel plane, and V_pc the camera's velocity in the pixel plane. That is, in this embodiment, the target's motion in the pixel plane is decomposed into the target's real motion and the camera's motion, and the two are modeled simultaneously and separately; α is generally taken as 1.
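The synthesis V_p = αV_g - αV_c and the resulting search-region choice can be sketched as below; the function names, the centering-by-shift strategy, and the `pad` parameter are illustrative assumptions (the patent only says the region is selected according to the predicted resultant velocity).

```python
import numpy as np

def predict_search_center(pos, v_target, v_camera, alpha=1.0):
    """Synthesize the predicted pixel-plane resultant velocity
    V_p = alpha * V_g - alpha * V_c and shift the previous target
    position by it to center the search region."""
    v_p = alpha * np.asarray(v_target, float) - alpha * np.asarray(v_camera, float)
    return np.asarray(pos, float) + v_p

def search_region(center, target_size, pad):
    # Fixed padding `pad` stands in for the margin the patent derives
    # from the target's maximum possible resultant speed.
    w, h = target_size
    cx, cy = center
    return (cx - w / 2 - pad, cy - h / 2 - pad, w + 2 * pad, h + 2 * pad)
```

With α = 1 (as in the preferred embodiment), a target predicted to move right while the camera also pans right yields a smaller net shift, which is exactly the compensation the decomposition is meant to provide.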
S4: run the target tracking algorithm within the target search region selected in step S3 to obtain the target's position in the pixel plane at the current moment.
Specifically, this embodiment uses the DSST (Discriminative Scale Space Tracker) algorithm.
S5: from the target's position in the pixel plane at the current moment, combined with its position at the previous moment, compute an observation of the target's resultant velocity in the pixel plane.
S6: feed the observation computed in step S5 and the camera velocity predicted in step S2 into the filter algorithm of step S1 to update it. This completes a predict-update loop and realizes closed-loop self-correction.
The preferred embodiment of the present invention predicts the true velocity of the target in the current frame with a Kalman filter and estimates the motion of the camera in the current frame with an optical flow method; their resultant is the prediction of the target's pixel-plane velocity in the current frame. From this prediction a region with higher probability of containing the target can be chosen, after which the tracking algorithm accurately solves for the target's position in the pixel plane. The observation of the target's resultant velocity in the pixel plane is obtained from its pixel-plane positions in the two consecutive frames, so that V_pg = V_p + V_pc yields the observation of the target's real motion, which in turn updates the Kalman filter. Through this process, when the target moves at a relatively constant high speed or the data have a low frame rate, the target's likely position in the next frame can be predicted from the Kalman-modeled true motion of the target; when the camera shakes abruptly or the target moves abruptly at high speed, the likely position can be predicted by modeling these motions as camera motion. Through these two complementary mechanisms, motion information effectively guides the selection of the search region, solving the problem of the target moving out of a search region that is too small.
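The closed-loop observation step V_pg = V_p + V_pc can be sketched as below, assuming α = 1 as in the preferred embodiment; the function name is a hypothetical illustration.

```python
import numpy as np

def true_velocity_observation(pos_now, pos_prev, v_camera):
    """Recover the observation of the target's true pixel-plane velocity
    via V_pg = V_p + V_pc: V_p is the measured displacement of the
    tracked position between the two frames, V_pc the camera velocity
    from the optical-flow step. The result is fed back to the filter."""
    v_p = np.asarray(pos_now, float) - np.asarray(pos_prev, float)  # resultant velocity V_p
    return v_p + np.asarray(v_camera, float)                        # true-velocity observation V_pg
```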
By modeling the motion of target and camera, the present invention makes tracking more effective. Under camera shake and long-term tracking, traditional trackers fail, whereas the proposed algorithm, by choosing the optimal search region, achieves good localization even with the most basic tracking and localization algorithms. Even when other features fail or their reliability degrades during tracking, the method of the invention lowers the difficulty of localization and helps traditional tracking algorithms track correctly.
In summary, the proposed visual target tracking method, which models target and camera motion simultaneously, decomposes the target's motion in the image plane into the real motion of the target and the real motion of the camera and models both at the same time. The method involves solving for three motions: the real motion of the target, the real motion of the camera, and the target's resultant motion in the pixel plane. The target's resultant motion in the pixel plane is the synthesis of the projections of the target's and the camera's real motions onto that plane, i.e. it is obtained by differencing the target positions finally produced by the tracking algorithm. The camera's real motion, projected onto the pixel plane, produces a motion component at each pixel, and objects at the same distance in the real physical world share the same component; in this embodiment the optical flow method is applied to the tracked target's area to obtain the camera's projected motion component. The target's real motion is then obtained as the difference between the target's resultant motion in the pixel plane and the camera's motion in the target area of the pixel plane. Since the target is relatively far from the camera, its motion still conforms to physical constraints after being mapped to the pixel plane, whereas the camera's motion mapped to the pixel plane, especially camera shake, appears as a noise-like component amplified by the imaging process. Of the three motions, therefore, only the target's real motion in the pixel plane has strong physical constraints, while the resultant motion and the camera motion largely do not. Hence, in this embodiment, the target's real motion is estimated and predicted by filtering, yielding truer target motion characteristics, and the prediction is used to select the candidate region for the tracking algorithm, making the tracker more robust to camera shake and abrupt target motion, with higher tracking accuracy.
The above further describes the present invention in detail in conjunction with specific preferred embodiments, but the specific implementation of the invention is not limited to these descriptions. For those skilled in the art to which the invention belongs, several equivalent substitutions or obvious modifications with the same performance or use may be made without departing from the concept of the invention, and all should be considered within the protection scope of the present invention.
Claims (9)
1. A target tracking method based on modeling of target and camera motion, characterized by comprising the following steps:
S1: predicting the true velocity of the target at the current moment with a filter algorithm;
S2: predicting the velocity of the camera at the current moment;
S3: combining the target's true velocity predicted in step S1 with the camera velocity predicted in step S2 to obtain a prediction of the target's resultant velocity in the pixel plane, and selecting a target search region according to that prediction;
S4: running a target tracking algorithm within the search region selected in step S3 to obtain the target's position in the pixel plane at the current moment.
2. The target tracking method according to claim 1, characterized by further comprising the following steps:
S5: from the target's position in the pixel plane at the current moment, combined with its position at the previous moment, computing an observation of the target's resultant velocity in the pixel plane;
S6: feeding the observation computed in step S5 and the camera velocity predicted in step S2 into the filter algorithm of step S1 to update the filter.
3. The target tracking method according to claim 1, characterized in that in step S3 the target's true velocity predicted in step S1 and the camera velocity predicted in step S2 are combined, and the prediction of the target's resultant velocity in the pixel plane is obtained using the following formula:

V_p = αV_g - αV_c

where V_p denotes the target's resultant velocity in the pixel plane, V_g the true velocity of the target, V_c the velocity of the camera, and α the factor mapping motion in the world coordinate system to the pixel plane.
4. The target tracking method according to claim 3, characterized in that α = 1.
5. The target tracking method according to claim 1, characterized in that the target tracking algorithm in step S4 is the DSST algorithm.
6. The target tracking method according to claim 1, characterized in that the filter algorithm in step S1 is the Kalman filter algorithm.
7. The target tracking method according to claim 1, characterized in that in step S2 the velocity of the camera at the current moment is predicted using an optical flow method.
8. The target tracking method according to claim 7, characterized in that the image block containing the target that is sampled when predicting the camera velocity with the optical flow method has a fixed size.
9. The target tracking method according to claim 8, characterized in that the size of the sampled image block containing the target is the size of the target plus a fixed margin, wherein the fixed margin is determined by the maximum possible resultant speed of the target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811307972.XA CN109410254B (en) | 2018-11-05 | 2018-11-05 | Target tracking method based on target and camera motion modeling |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109410254A true CN109410254A (en) | 2019-03-01 |
CN109410254B CN109410254B (en) | 2020-07-28 |
Family
ID=65471593
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811307972.XA Active CN109410254B (en) | 2018-11-05 | 2018-11-05 | Target tracking method based on target and camera motion modeling |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109410254B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111739055A (en) * | 2020-06-10 | 2020-10-02 | 新疆大学 | Infrared point-like target tracking method |
CN113125791A (en) * | 2019-12-30 | 2021-07-16 | 南京智能情资创新科技研究院有限公司 | Motion camera speed measurement method based on characteristic object and optical flow method |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1300503A (*) | 1999-01-12 | 2001-06-20 | Koninklijke Philips Electronics N.V. | Camera motion parameters estimation method |
US20050232466A1 (en) * | 2004-04-19 | 2005-10-20 | Ibeo Automobile Sensor Gmbh | Method of recognizing and/or tracking objects |
US20110150284A1 (en) * | 2009-12-22 | 2011-06-23 | Samsung Electronics Co., Ltd. | Method and terminal for detecting and tracking moving object using real-time camera motion |
JP2013055609A (en) * | 2011-09-06 | 2013-03-21 | Olympus Imaging Corp | Digital camera |
CN103295235A (en) * | 2013-05-13 | 2013-09-11 | 西安电子科技大学 | Soft cable traction video camera position monitoring method |
KR20140052256A (*) | 2012-10-24 | 2014-05-07 | Keimyung University Industry Academic Cooperation Foundation | Real-time object tracking method in moving camera by using particle filter |
CN105096337A (en) * | 2014-05-23 | 2015-11-25 | 南京理工大学 | Image global motion compensation method based on hardware platform of gyroscope |
US20160034778A1 (en) * | 2013-12-17 | 2016-02-04 | Cloud Computing Center Chinese Academy Of Sciences | Method for detecting traffic violation |
WO2017103674A1 (en) * | 2015-12-17 | 2017-06-22 | Infinity Cube Ltd. | System and method for mobile feedback generation using video processing and object tracking |
CN107609635A (en) * | 2017-08-28 | 2018-01-19 | 哈尔滨工业大学深圳研究生院 | A kind of physical object speed estimation method based on object detection and optical flow computation |
- 2018-11-05: application CN201811307972.XA filed; granted as patent CN109410254B (status: Active)
Non-Patent Citations (5)
Title |
---|
MOTAI, YUICHI, et al.: "Human tracking from a mobile agent: optical flow and Kalman filter arbitration", Signal Processing: Image Communication * |
SHANTAIYA, S., et al.: "Multiple object tracking using Kalman filter and optical flow", European Journal of Advances in Engineering and Technology * |
DING, QI, et al.: "Moving object detection with a moving camera under strong parallax", Laser & Optoelectronics Progress * |
BAO, HUAWEI: "Research and implementation of relative moving target tracking based on ARM", China Master's Theses Full-text Database, Information Science and Technology * |
TANG, JIALIN, et al.: "Video stabilization algorithm based on feature matching and motion compensation", Application Research of Computers * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113125791A (en) * | 2019-12-30 | 2021-07-16 | 南京智能情资创新科技研究院有限公司 | Motion camera speed measurement method based on characteristic object and optical flow method |
CN113125791B (en) * | 2019-12-30 | 2023-10-20 | 南京智能情资创新科技研究院有限公司 | Motion camera speed measuring method based on characteristic object and optical flow method |
CN111739055A (en) * | 2020-06-10 | 2020-10-02 | 新疆大学 | Infrared point-like target tracking method |
CN111739055B (en) * | 2020-06-10 | 2022-07-05 | 新疆大学 | Infrared point-like target tracking method |
Also Published As
Publication number | Publication date |
---|---|
CN109410254B (en) | 2020-07-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||