CN109410254B - Target tracking method based on target and camera motion modeling - Google Patents


Info

Publication number
CN109410254B
CN109410254B (application CN201811307972.XA)
Authority
CN
China
Prior art keywords
target
speed
camera
pixel plane
motion
Prior art date
Legal status
Active
Application number
CN201811307972.XA
Other languages
Chinese (zh)
Other versions
CN109410254A (en)
Inventor
杨余久
胡晓翔
Current Assignee
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Graduate School Tsinghua University
Priority to CN201811307972.XA priority Critical patent/CN109410254B/en
Publication of CN109410254A publication Critical patent/CN109410254A/en
Application granted granted Critical
Publication of CN109410254B publication Critical patent/CN109410254B/en
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20068 Projection on vertical or horizontal image axis

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target tracking method based on target and camera motion modeling, comprising the following steps. S1: predict the real velocity of the target at the current moment with a filter algorithm. S2: predict the velocity of the camera at the current moment. S3: combine the real target velocity predicted in step S1 with the camera velocity predicted in step S2 to obtain a predicted value of the resultant velocity of the target on the pixel plane at the current moment, and select a target search area according to this predicted value. S4: run a target tracking algorithm in the target search area selected in step S3 to find the position of the target on the pixel plane at the current moment. The target tracking method based on target and camera motion modeling effectively improves the accuracy and robustness of target tracking.

Description

Target tracking method based on target and camera motion modeling
Technical Field
The invention relates to the field of computer vision, in particular to a target tracking method based on target and camera motion modeling.
Background
Video analysis has wide application in real life. Its main steps are target detection, target tracking, and behavior analysis. Target detection finds the targets of interest; target tracking maintains continuous tracking of a target and estimates its state in every video frame; behavior analysis extracts the behavioral characteristics of the target from this continuous state information and provides a basis for possible decisions. From this it is clear that target tracking plays an extremely important role in video analysis and is in strong practical demand.
Visual target tracking can be broadly divided into preprocessing, feature extraction, and target state estimation. Feature extraction is the most important link and has a decisive effect on tracking performance. A survey shows that the features adopted by existing methods mainly focus on texture and color information of the image, such as color histograms, HOG features, and deep features, while the motion information of the target is studied far less. Yet the motion of a target in a real scene must obey the constraints of the physical world, for example that velocity cannot change abruptly, and such prior information can supply much that is important for tracking and make it more robust. Methods using motion features are rare precisely because motion features perform poorly in many cases.
The reason is that after the target is imaged by the camera, the constraints that the real scene imposes on the target's motion may be largely destroyed by perspective and other effects. More seriously, since target tracking is used not only with a fixed camera but also, very often, with a moving one, the motion of the camera directly causes irregular motion of the target in the image plane: the physical-world constraints are broken and the prior knowledge usable for tracking is lost. A method that models the target's motion directly in the image plane therefore obtains motion information that is no longer real or reliable, and the tracking effect is poor.
The above background disclosure is only for the purpose of assisting understanding of the concept and technical solution of the present invention and does not necessarily belong to the prior art of the present patent application, and should not be used for evaluating the novelty and inventive step of the present application in the case that there is no clear evidence that the above content is disclosed at the filing date of the present patent application.
Disclosure of Invention
In order to solve the technical problems, the invention provides a target tracking method based on target and camera motion modeling, which effectively improves the accuracy and robustness of target tracking.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention discloses a target tracking method based on target and camera motion modeling, which comprises the following steps:
s1: predicting the real speed of the target at the current moment through a filter algorithm;
s2: predicting the speed of a camera at the current moment;
s3: synthesizing the real speed of the target predicted in the step S1 and the speed of the camera predicted in the step S2 to obtain a predicted value of the resultant speed of the target in the pixel plane at the current moment, and selecting a target search area according to the predicted value of the resultant speed of the target in the pixel plane;
s4: a target tracking algorithm is performed in the target search area selected in step S3 to find the position of the target on the pixel plane at the current time.
Preferably, the target tracking method further comprises the following steps:
s5: according to the obtained position of the target at the current moment on the pixel plane, combining the position of the target at the previous moment on the pixel plane, and calculating to obtain an observed value of the resultant velocity of the target at the current moment on the pixel plane;
s6: the observed value of the resultant velocity of the target at the current time on the pixel plane calculated in step S5 and the velocity of the camera at the current time predicted in step S2 are input to the filter algorithm of step S1 to update the filter algorithm, respectively.
Preferably, in step S3, the real speed of the target predicted in step S1 and the speed of the camera predicted in step S2 are synthesized, and the predicted value of the synthesized speed of the target at the current time on the pixel plane is obtained by the following formula:
Vp = αVg - αVc
where Vp represents the resultant velocity of the target on the pixel plane, Vg represents the true velocity of the target, Vc represents the velocity of the camera, and α represents a factor that maps motion in the world coordinate system to the pixel plane.
Preferably, α = 1.
Preferably, the target tracking algorithm in step S4 employs a DSST algorithm.
Preferably, the filter algorithm in step S1 employs a Kalman filter algorithm.
Preferably, in step S2, an optical flow method is used to predict the speed of the camera at the current time.
Preferably, the image block containing the target that is sampled when the optical flow method predicts the camera velocity at the current moment has a fixed size.
Preferably, the size of the sampled image block containing the object is the size of the object plus a fixed motion space, wherein the fixed motion space is determined by the maximum possible resultant velocity of the object.
Compared with the prior art, the invention has the beneficial effects that: the invention discloses a target tracking method based on target and camera motion modeling, which comprises the steps of synthesizing the real speed of a target at the current moment predicted by a filter algorithm and the predicted speed of a camera at the current moment to obtain a predicted value of the combined speed of the target at the current moment on a pixel plane, selecting a target search area according to the predicted value, and solving by a target tracking algorithm to obtain the position of the target on the pixel plane; by modeling the motion of the target and the camera at the same time, when the target moves at a relatively constant large speed or low frame rate data, the possible position of the next frame of the target can be predicted through the real motion of the target of a filter algorithm; when sudden shaking of a camera occurs or sudden high-speed movement of a target occurs, the possible position of the target can be predicted by modeling all the movements as the movement of the camera, and the selection of a search area is assisted by effectively utilizing movement information through the two complementary mechanisms, so that the problem that the target moves out of the search area due to the fact that the search area is too small is solved; and prior knowledge is provided for target tracking, so that the accuracy and robustness of target tracking are effectively improved.
In a further scheme, the observed value of the resultant velocity of the target on the pixel plane is computed from the target's positions on the pixel plane at two consecutive moments; combined with the camera velocity through the motion-synthesis relation, this yields an observed value of the target's true velocity, which is then used to update the Kalman filter. This realizes closed-loop self-correction of the target tracking algorithm and further improves the accuracy and robustness of target tracking.
Drawings
FIG. 1 is a schematic flow diagram of a target tracking method based on target and camera modeling in accordance with a preferred embodiment of the present invention;
FIG. 2 is a simplified schematic flow chart of a target tracking method based on target and camera modeling in accordance with a preferred embodiment of the present invention;
FIG. 3 shows the relationship, in the world coordinate system, between the true motion of the target, the motion of the camera, and the resultant motion of the target on the pixel plane.
Detailed Description
The invention will be further described with reference to the accompanying drawings and preferred embodiments.
The most fundamental cause of failure of motion features is that the target and the camera move simultaneously, while the prior art generally ignores this fact and treats the motion of the target in the image plane as the motion of the target itself. Based on this observation, the invention provides a visual target tracking method that models the motions of the target and the camera simultaneously. This modeling conforms to the actual situation: the motion of the target and the motion of the camera are both real motions in the physical world, they satisfy physical constraints, and they can therefore supply a large amount of prior knowledge for target tracking to improve its accuracy and robustness.
The method uses a Kalman filter algorithm to estimate and predict the real motion of the target, an optical flow method to solve the projection of the camera motion on the image plane, and a target tracking algorithm to locate the target in the image; the target's motion on the image plane is obtained from the results of two consecutive frames. Concretely, the camera motion obtained by the optical flow method and the real target motion predicted by the Kalman filter are first composed to predict the most probable position of the target, and a candidate area is selected around that predicted position. The tracking algorithm then locates the target in the candidate area, giving an observation of the target's real position on the image plane. Combining this image-position observation with the camera motion from the optical flow method through the motion-synthesis relation yields an observation of the target's real motion, which is used to update the Kalman filter, forming a cyclic predict-update process.
Referring to fig. 1 and 2, a preferred embodiment of the present invention discloses a target tracking method based on target and camera motion modeling, which includes the following steps:
s1: predicting the real speed of the target at the current moment through a filter algorithm;
in particular, a kalman filter algorithm (but not limited to) is employed for estimating and predicting the true motion of the target. The real motion of the target is a normal physical process, and different from the motion of a camera (a camera), the motion of the camera is amplified through a perspective effect, and the real motion of the target is reduced through the perspective effect, so that the real motion of the target is smoother, and the prediction is performed through a kalman filter in the embodiment.
S2: predicting the speed of a camera at the current moment;
Specifically, an optical flow method is employed to predict the camera velocity; further, dense optical flow can be adopted to overcome the low robustness caused by having few pixels. To keep the computation fast, an image block of fixed size containing the target is sampled, the block size being the target size plus a fixed motion space, where the fixed motion space is determined by the maximum possible resultant velocity of the target. The advantage of a fixed size over a fixed proportion is this: with a fixed proportion, when the target is small the sampled image block is also small, yet the target may still move fast and run out of the block, whereas with a fixed motion space the maximum representable motion stays the same even as the target size changes.
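The fixed-size sampling above can be sketched as a crop of the target box plus a fixed pixel margin. The box format (top-left corner plus size) and the margin value are assumptions; the embodiment only states that the motion space is fixed by the maximum possible resultant speed of the target.

```python
import numpy as np

def sample_block(frame, box, margin=32):
    """Crop a block around the target whose size is the target box plus a
    fixed motion margin (pixels), clipped to the frame bounds. `margin`
    stands for the fixed motion space; its value here is an assumption."""
    x, y, w, h = box                  # target box: top-left corner + size
    H, W = frame.shape[:2]
    x0 = max(0, x - margin)
    y0 = max(0, y - margin)
    x1 = min(W, x + w + margin)
    y1 = min(H, y + h + margin)
    return frame[y0:y1, x0:x1], (x0, y0)

frame = np.zeros((480, 640), dtype=np.uint8)
block, origin = sample_block(frame, (300, 200, 40, 30))
# block is (30 + 2*32) x (40 + 2*32) = 94 x 104, whatever the target scale
```

Because the margin is absolute rather than proportional, the maximum representable motion is the same for small and large targets, which is the design point argued above.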
When calculating the motion of the camera, the region around the target is observed. This is done to overcome the perspective effect: the target-region image block correctly reflects the motion of the camera in the image plane. After all observed values in the target area are denoised, their average is taken as the velocity observation of the camera, expressed as Vpc = (Vx, Vy).
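The embodiment averages denoised dense-flow vectors over the target block into Vpc. As a self-contained stand-in for a dense optical flow method (e.g. Farneback flow in OpenCV), the sketch below estimates the dominant image-plane translation of the block by FFT phase correlation; this is a substitution for illustration, not the patent's stated algorithm.

```python
import numpy as np

def camera_velocity(prev_block, cur_block):
    """Estimate the dominant translation of the target block between two
    frames by phase correlation - a self-contained stand-in for the dense
    optical flow whose averaged vectors give the camera velocity Vpc."""
    f1 = np.fft.fft2(prev_block.astype(float))
    f2 = np.fft.fft2(cur_block.astype(float))
    cross = f2 * np.conj(f1)
    # normalised cross-power spectrum -> sharp peak at the shift
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap shifts larger than half the block size into negative values
    if dy > corr.shape[0] // 2:
        dy -= corr.shape[0]
    if dx > corr.shape[1] // 2:
        dx -= corr.shape[1]
    return float(dx), float(dy)       # Vpc = (Vx, Vy) in pixels per frame

rng = np.random.default_rng(0)
prev = rng.random((64, 64))
cur = np.roll(prev, (3, -5), axis=(0, 1))   # scene moves 3 px down, 5 px left
vx, vy = camera_velocity(prev, cur)
```

Phase correlation yields a single dominant shift; a dense flow field, averaged after denoising as the text describes, plays the same role while tolerating partial occlusion.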
S3: synthesizing the real speed of the target predicted in the step S1 and the speed of the camera predicted in the step S2 to obtain a predicted value of the resultant speed of the target in the pixel plane at the current moment, and selecting a target search area according to the predicted value of the resultant speed of the target in the pixel plane;
As shown in FIG. 3, the motion Vg of the target between two frames (i.e. the true velocity of the target) and the motion Vc of the camera can be defined in the world coordinate system, together with the motion finally presented on the image plane; as shown, the resultant velocity of the target on the pixel plane follows the formula:
Vp = αVg - αVc = Vpg - Vpc
where Vp represents the resultant velocity of the target on the pixel plane, α represents the factor that maps motion in the world coordinate system to the pixel plane, Vpg represents the true velocity of the target projected onto the pixel plane, and Vpc represents the velocity of the camera projected onto the pixel plane. That is, in this embodiment, the motion of the target on the pixel plane is decomposed into the real motion of the target and the motion of the camera, which are modeled simultaneously; α is typically taken to be 1.
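The composition step above reduces, with α = 1, to a vector subtraction followed by an extrapolation of the search-area centre. The sketch below shows only this arithmetic; the coordinate values are made-up illustrations.

```python
import numpy as np

# Sketch of step S3: compose the predicted true target velocity Vg and the
# predicted camera velocity Vc into the pixel-plane velocity
# Vp = alpha*Vg - alpha*Vc (alpha = 1, as the embodiment typically takes),
# then centre the search area at the position extrapolated by Vp.
def predict_search_center(prev_pos, vg, vc, alpha=1.0):
    vp = alpha * np.asarray(vg, float) - alpha * np.asarray(vc, float)
    return np.asarray(prev_pos, float) + vp   # predicted target centre

center = predict_search_center(prev_pos=(120, 80), vg=(4, 0), vc=(-2, 3))
# Vp = (4, 0) - (-2, 3) = (6, -3): the search-area centre moves
# from (120, 80) to (126, 77)
```

Centring the search window on this prediction is what prevents a fast target, or a shaking camera, from carrying the target outside a window centred on its old position.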
S4: the target tracking algorithm is performed in the target search area selected in step S3 to find the position of the target on the pixel plane at the present time.
Specifically, the target tracking algorithm adopts the DSST (Discriminative Scale Space Tracker) algorithm.
S5: according to the obtained position of the target on the pixel plane at the current moment, combined with its position at the previous moment, calculate an observed value of the resultant velocity of the target on the pixel plane at the current moment;
s6: the observed value of the resultant velocity of the target at the current moment on the pixel plane, which is calculated in step S5, and the velocity of the camera at the current moment, which is predicted in step S2, are respectively input into the filter algorithm in step S1 to update the filter algorithm, so that a cyclic process of prediction update is formed, and self-correction of a closed loop is realized.
In this embodiment, the real velocity of the target in the current frame is predicted by the Kalman filter and the motion of the camera in the current frame is estimated by the optical flow method; the two are composed to predict the target's velocity on the pixel plane in the current frame, and an area with a high probability of containing the target is selected from this prediction. The tracking algorithm then accurately solves the position of the target on the pixel plane, and the observed value of the target's resultant velocity on the pixel plane is computed from the target's pixel-plane positions in the two frames, so that the observed value of the target's real motion can be obtained via Vpg = Vp + Vpc and the Kalman filter can be updated. Through this process, when the target moves at a relatively constant high speed, or on low-frame-rate data, the possible position of the target in the next frame can be predicted through the Kalman model of the target's real motion; when the camera shakes suddenly or the target suddenly moves at high speed, the possible position of the target can be predicted by modeling all of the motion as camera motion. These two complementary mechanisms make effective use of motion information to assist the selection of the search area and solve the problem of the target moving out of a search area that is too small.
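The full predict-update cycle (steps S1-S6) can be shown end to end with toy stand-ins: the "tracker" returns the simulated ground-truth position and the camera velocity is assumed measured exactly by the flow step. Only the data flow of the closed loop is faithful to the description; every numeric value is an assumption.

```python
import numpy as np

# Schematic of the closed loop S1-S6 with toy stand-ins for the tracker
# and the optical flow, so the predict-update data flow is visible.
def run_loop(vg_true, cam_vels):
    vg_est = np.zeros(2)                      # Kalman state: true velocity
    P, Q, R = np.eye(2), 0.01 * np.eye(2), 0.1 * np.eye(2)
    pos = np.array([100.0, 100.0])            # last pixel-plane position
    true_pos = pos.copy()
    errors = []                               # search-centre prediction error
    for vc in cam_vels:
        P = P + Q                             # S1: predict true velocity
        vp_pred = vg_est - vc                 # S2+S3: Vp = Vg - Vc (alpha=1)
        search_center = pos + vp_pred
        true_pos = true_pos + vg_true - vc    # how the scene actually moves
        errors.append(np.linalg.norm(search_center - true_pos))
        new_pos = true_pos.copy()             # S4: toy tracker "finds" target
        vp_obs = new_pos - pos                # S5: observed pixel velocity
        vg_obs = vp_obs + vc                  # S6: Vpg = Vp + Vpc
        K = P @ np.linalg.inv(P + R)          #     then Kalman update
        vg_est = vg_est + K @ (vg_obs - vg_est)
        P = (np.eye(2) - K) @ P
        pos = new_pos
    return errors

rng = np.random.default_rng(1)
cam = rng.normal(0, 3, size=(30, 2))          # jittery camera motion
errs = run_loop(np.array([2.0, 1.0]), cam)
# the prediction error shrinks toward zero: camera jitter is measured and
# compensated each frame, while the true-velocity estimate converges
```

The point of the toy run is the complementary mechanism described above: even under strong camera jitter, the search centre stays near the target because the jitter enters the prediction through Vc rather than corrupting the Kalman state.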
The motion modeling of the target and the camera makes the tracking effect better, while conventional tracking methods can fail under camera shake and long-term tracking. On the basis of selecting the optimal search area, the proposed algorithm achieves good localization even with the most basic tracking and localization algorithm; and when other features fail or their reliability degrades during tracking, the method also reduces the difficulty of localization and helps a conventional tracking algorithm keep tracking correctly.
In summary, the proposed visual target tracking method based on simultaneous modeling of target and camera motion decomposes the motion of the target in the image plane into the real motion of the target and the real motion of the camera and models them simultaneously. The method mainly involves solving three motions: the real motion of the target, the real motion of the camera, and the resultant motion of the target on the pixel plane. The resultant motion of the target on the pixel plane is obtained by projecting the real motions of the target and of the camera onto the pixel plane and composing them; it equals the position difference finally produced by the target tracking algorithm. The real motion of the camera, projected onto the pixel plane, produces a motion component on every pixel, and objects at the same distance in the real physical world share the same component; in this embodiment, the camera's real motion component in the tracked target area is obtained by the optical flow method. The real motion of the target is then recovered by composing the target's resultant motion on the pixel plane with the camera motion in the target area. Because the target is relatively far from the camera, its motion still obeys the physical constraints after being mapped to the pixel plane, whereas the motion of the camera, and especially camera shake, appears after imaging as a noise-like component that is amplified on the pixel plane.
Therefore, among the three motions, only the real motion of the target carries strong physical constraints, while the resultant motion of the target and the motion of the camera mostly do not. In this embodiment, the real motion of the target is estimated and predicted by a filtering method to obtain more truthful motion characteristics of the target, and the predicted result is used to select the candidate area of the target tracking algorithm, making the tracker more robust to scenes with camera shake and violent target motion and the tracking more accurate.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and the invention is not to be construed as limited to these specific details. For those skilled in the art to which the invention pertains, several equivalent substitutions or obvious modifications can be made without departing from the spirit of the invention, and all such variants are considered to fall within the scope of the invention.

Claims (9)

1. A target tracking method based on target and camera motion modeling is characterized by comprising the following steps:
s1: predicting the real speed of the target at the current moment through a filter algorithm;
s2: predicting the speed of a camera at the current moment;
s3: synthesizing the real speed of the target predicted in the step S1 and the speed of the camera predicted in the step S2 to obtain a predicted value of the resultant speed of the target in the pixel plane at the current moment, and selecting a target search area according to the predicted value of the resultant speed of the target in the pixel plane;
s4: a target tracking algorithm is performed in the target search area selected in step S3 to find the position of the target on the pixel plane at the current time.
2. The target tracking method of claim 1, further comprising the steps of:
s5: according to the obtained position of the target at the current moment on the pixel plane, combining the position of the target at the previous moment on the pixel plane, and calculating to obtain an observed value of the resultant velocity of the target at the current moment on the pixel plane;
s6: the observed value of the resultant velocity of the target at the current time on the pixel plane calculated in step S5 and the velocity of the camera at the current time predicted in step S2 are input to the filter algorithm of step S1 to update the filter algorithm, respectively.
3. The target tracking method according to claim 1, wherein the step S3 is performed by combining the real speed of the target predicted in step S1 and the speed of the camera predicted in step S2, and the predicted value of the combined speed of the target at the current time on the pixel plane is obtained by using the following formula:
Vp = αVg - αVc
where Vp represents the resultant velocity of the target on the pixel plane, Vg represents the true velocity of the target, Vc represents the velocity of the camera, and α represents a factor that maps motion in the world coordinate system to the pixel plane.
4. The object tracking method of claim 3, wherein α = 1.
5. The target tracking method of claim 1, wherein the target tracking algorithm in step S4 employs a DSST algorithm.
6. The target tracking method of claim 1, wherein the filter algorithm in step S1 employs a Kalman filter algorithm.
7. The object tracking method according to claim 1, wherein an optical flow method is used in step S2 to predict the speed of the camera at the current time.
8. The object tracking method according to claim 7, wherein the image block containing the target that is sampled when the optical flow method predicts the camera velocity at the current moment has a fixed size.
9. The method of claim 8, wherein the size of the sampled image block containing the object is the size of the object plus a fixed motion space, wherein the fixed motion space is determined by the maximum possible resultant velocity of the object.
CN201811307972.XA 2018-11-05 2018-11-05 Target tracking method based on target and camera motion modeling Active CN109410254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811307972.XA CN109410254B (en) 2018-11-05 2018-11-05 Target tracking method based on target and camera motion modeling


Publications (2)

Publication Number Publication Date
CN109410254A CN109410254A (en) 2019-03-01
CN109410254B (en) 2020-07-28

Family

ID=65471593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811307972.XA Active CN109410254B (en) 2018-11-05 2018-11-05 Target tracking method based on target and camera motion modeling

Country Status (1)

Country Link
CN (1) CN109410254B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113125791B (en) * 2019-12-30 2023-10-20 南京智能情资创新科技研究院有限公司 Motion camera speed measuring method based on characteristic object and optical flow method
CN111739055B (en) * 2020-06-10 2022-07-05 新疆大学 Infrared point-like target tracking method

Citations (3)

Publication number Priority date Publication date Assignee Title
JP2013055609A (en) * 2011-09-06 2013-03-21 Olympus Imaging Corp Digital camera
KR20140052256A (en) * 2012-10-24 2014-05-07 계명대학교 산학협력단 Real-time object tracking method in moving camera by using particle filter
WO2017103674A1 (en) * 2015-12-17 2017-06-22 Infinity Cube Ltd. System and method for mobile feedback generation using video processing and object tracking

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
EP1062815A1 (en) * 1999-01-12 2000-12-27 Koninklijke Philips Electronics N.V. Camera motion parameters estimation method
DE102004018813A1 (en) * 2004-04-19 2006-02-23 Ibeo Automobile Sensor Gmbh Method for detecting and / or tracking objects
KR101183781B1 (en) * 2009-12-22 2012-09-17 삼성전자주식회사 Method and apparatus for object detecting/tracking using real time motion estimation of camera
CN103295235B (en) * 2013-05-13 2016-02-10 西安电子科技大学 The method for supervising of soft cable traction video camera position
CN103778786B (en) * 2013-12-17 2016-04-27 东莞中国科学院云计算产业技术创新与育成中心 A kind of break in traffic rules and regulations detection method based on remarkable vehicle part model
CN105096337B (en) * 2014-05-23 2018-05-01 南京理工大学 A kind of image global motion compensation method based on gyroscope hardware platform
CN107609635A (en) * 2017-08-28 2018-01-19 哈尔滨工业大学深圳研究生院 A kind of physical object speed estimation method based on object detection and optical flow computation

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
JP2013055609A (en) * 2011-09-06 2013-03-21 Olympus Imaging Corp Digital camera
KR20140052256A (en) * 2012-10-24 2014-05-07 계명대학교 산학협력단 Real-time object tracking method in moving camera by using particle filter
WO2017103674A1 (en) * 2015-12-17 2017-06-22 Infinity Cube Ltd. System and method for mobile feedback generation using video processing and object tracking

Also Published As

Publication number Publication date
CN109410254A (en) 2019-03-01

Similar Documents

Publication Publication Date Title
CN110796010B (en) Video image stabilizing method combining optical flow method and Kalman filtering
CN106780576B (en) RGBD data stream-oriented camera pose estimation method
CN112669349B (en) Passenger flow statistics method, electronic equipment and storage medium
JP6030617B2 (en) Image processing apparatus and image processing method
CN108961312A (en) High-performance visual object tracking and system for embedded vision system
CN113286194A (en) Video processing method and device, electronic equipment and readable storage medium
CN101344965A (en) Tracking system based on binocular camera shooting
US8363902B2 (en) Moving object detection method and moving object detection apparatus
CN108615241B (en) Rapid human body posture estimation method based on optical flow
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
KR20060119707A (en) Image processing device, image processing method, and program
CN110647836B (en) Robust single-target tracking method based on deep learning
CN115131420A (en) Visual SLAM method and device based on key frame optimization
CN110070565A (en) A kind of ship trajectory predictions method based on image superposition
CN109410254B (en) Target tracking method based on target and camera motion modeling
CN115619826A (en) Dynamic SLAM method based on reprojection error and depth estimation
CN108900775B (en) Real-time electronic image stabilization method for underwater robot
CN108876807B (en) Real-time satellite-borne satellite image moving object detection tracking method
CN113379789B (en) Moving target tracking method in complex environment
Xu et al. Feature extraction algorithm of basketball trajectory based on the background difference method
CN111344741B (en) Data missing processing method and device for three-dimensional track data
CN117011381A (en) Real-time surgical instrument pose estimation method and system based on deep learning and stereoscopic vision
CN115170621A (en) Target tracking method and system under dynamic background based on relevant filtering framework
CN114529587A (en) Video target tracking method and device, electronic equipment and storage medium
CN113158942A (en) Segmentation algorithm and device for detecting motion human behavior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant