CN107610154A - Spatial histogram representation and tracking method of multi-source target - Google Patents

Spatial histogram representation and tracking method of multi-source target

Info

Publication number
CN107610154A
CN107610154A CN201710946077.1A
Authority
CN
China
Prior art keywords
target
tracking
similarity
video
histogram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710946077.1A
Other languages
Chinese (zh)
Other versions
CN107610154B (en)
Inventor
张灿龙
李志欣
韩婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Normal University
Original Assignee
Guangxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Normal University filed Critical Guangxi Normal University
Priority to CN201710946077.1A priority Critical patent/CN107610154B/en
Publication of CN107610154A publication Critical patent/CN107610154A/en
Application granted granted Critical
Publication of CN107610154B publication Critical patent/CN107610154B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention discloses a spatial histogram representation and tracking method for multi-source targets, in which a second-order spatial histogram serves as the target representation model for the targets of multiple video sources, and their similarities are weighted and fused to build an objective function; a joint displacement formula of the multi-source target is then derived according to the kernel-tracking inference mechanism; finally, automatic fast search of the multi-source target is realized with a mean-shift procedure. The invention is suitable for tracking any number of video sources and has the characteristics of being unified and fast.

Description

Spatial histogram representation and tracking method of multi-source target
Technical Field
The invention relates to the technical field of computer vision tracking, in particular to a spatial histogram representation and tracking method of a multi-source target.
Background
Target tracking is the premise and basis for completing many video scene analysis and understanding tasks, such as visual surveillance, human-computer interaction, and vehicle navigation. Currently there are two main approaches to target tracking: single-source tracking and multi-source tracking. Single-source tracking tracks a target object from one video source; its mainstream methods include kernel density estimation, pattern classification, sparse representation, and subspace analysis. Multi-source tracking tracks the same target object across two or more video sources. Because multi-source tracking describes and records different directions and different characteristics of the same moving target through several image sensors and combines their data, its spatio-temporal coverage is wider than that of single-source methods, and it is more robust and more reliable.
The paper "Fusion tracking in color and infrared images using joint sparse representation" (published in Science China: Information Sciences) proposes feature-level fusion tracking of infrared and visible-light targets using a joint sparse feature representation. The paper "A new tracking approach for visible and infrared sequences based on a tracking-before-fusion method" (published in International Journal of Dynamics & Control) proposes a tracking-before-fusion strategy, in which the visible-light target is tracked by particle filtering alone, the infrared target is tracked by template matching, and the two tracking results are then jointly decided. The paper "A compressed tracking based on time-space Kalman fusion model" (published in Science China: Information Sciences) proposes an infrared and visible-light spatio-temporal fusion tracking algorithm based on Kalman filtering and compressed sensing. It is easy to see that current multi-source tracking methods are mostly realized within a particle filter framework, generally have high time complexity, and are limited to the two video sources of infrared and visible light. Although the paper "Thermo-visual feature fusion for object tracking using multiple spatiogram trackers" (published in Machine Vision and Applications) proposes a decision-level fusion tracking method based on multiple spatial histograms in cascade, the cascaded approach fails globally once any one of its trackers fails. There is therefore a need for a fast and unified method for tracking across multiple video sources.
Disclosure of Invention
The invention aims to solve the problems that existing multi-source tracking methods are time-consuming and prone to failure, and provides a spatial histogram representation and tracking method for multi-source targets.
In order to solve the problems, the invention is realized by the following technical scheme:
The spatial histogram representation and tracking method of the multi-source target comprises the following steps:
Step 1: read in N video sources, manually select a candidate target in frame 1 of the first video source, and obtain the initial center position z0 of the candidate target; initialize the weight coefficients α_k, where 0 < α_k < 1;
Step 2: compute the reference spatial histogram h′_k of each video source;
Step 3: read in the next frame and compute the candidate spatial histogram h_k(z0) of each video source at the initial center position z0;
Step 4: compute the similarity ρ_k(z0) between the candidate spatial histogram h_k(z0) and the reference spatial histogram h′_k of each video source;
Step 5: weight and fuse the similarities ρ_k(z0) of all video sources with the corresponding weight coefficients α_k to obtain the joint similarity ρ(z0) of all video sources at the initial center position z0;
Step 6: take the joint similarity ρ(z0) as the objective function, perform a Taylor expansion on the objective function to obtain its linear approximation, take the derivative of the linear approximation and set it equal to zero to derive the joint displacement iterative formula, and obtain the new center position z1 of the candidate target from the joint displacement iterative formula,
where w_{k,i} is the position weighting coefficient, d_{k,i} is the position offset vector, z0 is the initial center position, x_i is the two-dimensional coordinate vector of the i-th pixel, g(·) = -f′(·), f(·) is the kernel function, k ∈ {1, 2, …, N}, N is the number of video sources, i ∈ {1, 2, …, n}, n is the number of pixels, u ∈ {1, 2, …, m}, and m is the number of feature intervals;
Step 7: compute the joint similarity ρ(z1) of all video sources at the new center position z1;
Step 8: compare the joint similarity ρ(z1) at the new center position z1 with the joint similarity ρ(z0) at the initial center position z0; if ρ(z1) < ρ(z0), update the new center position z1 to (z0 + z1)/2 and return to step 7 until ρ(z1) ≥ ρ(z0); otherwise go to step 9;
Step 9: judge whether ||z1 - z0|| < ε or the maximum number of iterations has been reached; if so, stop iterating, and the multi-source target tracking for this frame is complete; otherwise update the initial center position z0 to the new center position z1, compute the joint similarity ρ(z0) of all video sources at it, and return to step 6, where ε is a pre-given error threshold;
Step 10: update the initial center position z0 to the new center position z1 and return to step 3.
Further, in step 1, the weights are initialized as α_1 = α_2 = … = α_N = 1/N.
Further, after step 9 and before step 10, the method further comprises updating the weight coefficient α_k of each video source according to the weight coefficient update formula
α_k = ρ_k(z1) / (ρ_1(z1) + ρ_2(z1) + … + ρ_N(z1)),
where ρ_k(z1) is the similarity of the k-th video source.
Further, in step 6, the position weighting coefficient w_{k,i} and the position offset vector d_{k,i} are defined in terms of the following quantities: p_u^k(z0), μ_u^k(z0) and Σ_u^k(z0) are the probability density of the pixels in the u-th feature interval of the candidate spatial histogram and the mean and covariance matrix of their spatial distribution, respectively; p′_u^k, μ′_u^k and Σ′_u^k are the corresponding quantities of the reference spatial histogram; α_k is the weight coefficient; x_i is the two-dimensional coordinate vector of the i-th pixel; δ(·) is a delta function; b(·) is the function mapping a feature to its histogram interval; k ∈ {1, 2, …, N}, N is the number of video sources; i ∈ {1, 2, …, n}, n is the number of pixels; u ∈ {1, 2, …, m}, and m is the number of feature intervals.
Compared with the prior art, the invention provides a multi-kernel fusion tracking method for multi-source video targets based on the joint representation of several second-order spatial histograms: the second-order spatial histogram serves as the target representation model for the targets of the several video sources, and their similarities are weighted and fused to construct the objective function; a joint displacement formula of the multi-source target is then derived according to the kernel-tracking inference mechanism; finally, a mean-shift procedure realizes automatic and rapid search for the multi-source target. The invention is suitable for tracking any number of video sources and is fast and unified.
Detailed Description
A second-order moment spatial histogram (second-order histogram for short) is a histogram augmented with the mean and covariance of the spatial distribution of the pixels in each interval, so it preserves the spatial structure information of the target better. Denote by h_k(z) = {p_u^k(z), μ_u^k(z), Σ_u^k(z)}, u = 1, 2, …, m, the second-order histogram of the candidate target at point z in the k-th video source, where p_u^k(z), μ_u^k(z) and Σ_u^k(z) are the probability density of the pixels in the u-th feature interval and the mean and covariance matrix of their spatial distribution, respectively; they are computed by formulas (1) to (3).
In formulas (1) to (3), n is the number of pixels of the target image, x_i is the two-dimensional coordinate vector of the i-th pixel, δ_iu is a delta function that equals 1 if the i-th pixel falls in the u-th interval and 0 otherwise, m is the number of feature intervals, h represents the size of the target image, C is a normalization constant, and f(x) is the kernel function, given by formula (4).
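For illustration, the second-order histogram of formulas (1) to (3) can be sketched in Python as follows. This is a minimal sketch under simplifying assumptions: 8-bit gray values quantized into m intervals and no kernel weighting of pixel contributions (the patent weights pixels by f(·)); all names are illustrative, not from the patent.

```python
import numpy as np

def spatiogram(patch, m=16):
    """Second-order spatial histogram: per-interval pixel fraction p_u,
    spatial mean mu_u, and spatial covariance Sigma_u (formulas (1)-(3),
    without the kernel weighting)."""
    H, W = patch.shape
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    bins = np.minimum(patch.ravel().astype(int) * m // 256, m - 1)
    p = np.zeros(m)                       # interval probabilities p_u
    mu = np.zeros((m, 2))                 # spatial means mu_u
    cov = np.tile(np.eye(2), (m, 1, 1))   # spatial covariances Sigma_u
    for u in range(m):
        pts = coords[bins == u]
        p[u] = len(pts) / len(coords)
        if len(pts) > 0:
            mu[u] = pts.mean(axis=0)
        if len(pts) > 1:
            cov[u] = np.cov(pts.T) + 1e-6 * np.eye(2)  # regularized
    return p, mu, cov
```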
setting the second order histogram of the target template asThe corresponding calculation method is the same as formulas (1) to (3), and the similarity between the target image and the target template is as follows:
in the formula (5), the reaction mixture is,it can be understood that the similarity of the target image of the kth video source and the target template thereof in the feature space is calculated, andthen the similarity of the two on the spatial distribution is calculated by the formula
Wherein, the first and the second end of the pipe are connected with each other,
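For illustration, the similarity of formulas (5) and (6) can be sketched as follows. The feature term is the Bhattacharyya coefficient √(p_u q_u); for the spatial term, whose exact expression appears only in the drawing of formula (6), the Gaussian-of-Mahalanobis form common in spatiogram tracking is assumed here, so this is an approximation rather than a verbatim rendering of the patent's formula.

```python
import numpy as np

def spatiogram_similarity(p, mu_p, cov_p, q, mu_q, cov_q):
    """rho_k: sum over intervals of (spatial similarity) * sqrt(p_u * q_u)."""
    rho = 0.0
    for u in range(len(p)):
        d = mu_p[u] - mu_q[u]
        S = cov_p[u] + cov_q[u]
        psi = np.exp(-0.5 * d @ np.linalg.solve(S, d))  # assumed spatial term
        rho += psi * np.sqrt(p[u] * q[u])               # Bhattacharyya term
    return rho
```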
for multi-source target tracking, whether a given target candidate state should be accepted or not is determined by the similarity of all video sources. Therefore, the joint similarity obtained by adding the similarities of all video sources is as follows:
in the formula, 0 < alpha k Less than 1 is weight coefficient for adjusting the proportion of the similarity of different video sources in the objective function, and has sigma k α k =1。
Let the position of the target in the previous frame be z0. Substituting formula (5) into formula (7) and carrying out a Taylor expansion of ρ(z) yields the linear approximation of formula (9), where T is a remainder term independent of z. Taking the derivative of ρ(z) in formula (9) with respect to z gives formula (10).
order toThen the current position z can be obtained 0 To a new position z 1 Is a relational expression of
Wherein g (x) = -f' (x). The above equation indicates that the location of the object is determined by all video sources together.
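For illustration, a single joint-displacement iteration can be sketched in the classical kernel-tracking form below. The patent's per-pixel weights w_{k,i} and offsets d_{k,i} of formulas (10) and (11) appear only in the drawings, so they are abstracted here into caller-supplied weight arrays, and g = -f′ is assumed to be the derivative profile of an Epanechnikov kernel (constant on the support, zero outside); this is a sketch under those assumptions, not the patent's exact update.

```python
import numpy as np

def mean_shift_step(z0, coords_per_source, weights_per_source, alphas, h):
    """New position z1 as a kernel-weighted mean over all video sources."""
    num, den = np.zeros(2), 0.0
    for alpha, coords, w in zip(alphas, coords_per_source, weights_per_source):
        r2 = np.sum(((coords - z0) / h) ** 2, axis=1)   # normalized radii
        g = (r2 <= 1.0).astype(float)                   # g = -f' (Epanechnikov)
        num += alpha * np.sum(coords * (w * g)[:, None], axis=0)
        den += alpha * np.sum(w * g)
    return num / den if den > 0 else z0
```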
As mentioned above, the weight coefficients α_k adjust the weight of the k-th video source's similarity in the objective function, so their values should clearly be dynamically variable. In general, the similarity of a target between adjacent frames does not change greatly; based on this fact, the invention determines the weight coefficients of the current frame from the similarity values of the previous frame. Let the similarities between the optimal target and the target models of the N video sources in the previous frame be ρ_1(z), …, ρ_N(z); then the weight of the k-th video source in the current frame is given by formula (12):
α_k = ρ_k(z) / (ρ_1(z) + ρ_2(z) + … + ρ_N(z)).
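Formula (12), as reconstructed above from the text (its drawn form is not reproduced in this record), simply normalizes the previous-frame similarities so the weights stay positive and sum to one; a minimal sketch:

```python
import numpy as np

def update_weights(rhos):
    """alpha_k = rho_k / sum_j rho_j  (reconstructed formula (12))."""
    rhos = np.asarray(rhos, dtype=float)
    return rhos / rhos.sum()

# e.g. update_weights([0.9, 0.6]) -> array([0.6, 0.4])
```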
Based on the above mathematical derivation and the mean-shift realization procedure, the spatial histogram representation and tracking method of the multi-source target designed by the invention is obtained. The specific steps are as follows:
Step 1: read in N video sources, manually select the tracking target in frame 1 of the first video source, and obtain its center position z0. Initialize the weights α_1 = α_2 = … = α_N = 1/N;
Step 2: compute the reference spatial histograms h′_k according to formulas (1) to (3);
Step 3: read in the next frame, compute the candidate spatial histograms h_k(z0) according to formulas (1) to (3), and compute the joint similarity ρ(z0) using formulas (5) to (7);
Step 4: compute the position weighting coefficients w_{k,i} and position offset vectors d_{k,i} according to formula (10);
Step 5: find the new target candidate position z1 according to formula (11), and compute the candidate histograms at z1 and the joint similarity ρ(z1);
Step 6: while ρ(z1) < ρ(z0), set z1 ← (z0 + z1)/2 and recompute the candidate histograms and ρ(z1), until the condition is false;
Step 7: if ||z1 - z0|| < ε or the maximum number of iterations is reached, stop iterating; otherwise set z0 ← z1 and go to step 4, where ||z1 - z0|| denotes the distance between z1 and z0 and ε is a predetermined error threshold;
Step 8: update the weights α_k according to formula (12), using ρ_1(z1), …, ρ_N(z1);
Step 9: set z0 ← z1 and go to step 3 (a condensed sketch of this loop is given below).
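For illustration, the per-frame control flow of steps 5 to 9 can be condensed as follows, assuming caller-supplied callables joint_rho(z) (the weighted joint similarity of formulas (5) to (7)) and shift(z) (one joint-displacement step of formula (11)); the thresholds match the embodiment that follows (ε = 0.0001, at most 20 iterations).

```python
import numpy as np

def track_one_frame(z0, joint_rho, shift, eps=1e-4, max_iter=20):
    """One frame of the multi-source mean-shift search (steps 5-9)."""
    z0 = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        z1 = shift(z0)                        # step 5: joint displacement (11)
        while joint_rho(z1) < joint_rho(z0):  # step 6: back off while the joint
            z1 = (z0 + z1) / 2.0              # similarity decreases
            if np.allclose(z1, z0):
                break
        if np.linalg.norm(z1 - z0) < eps:     # step 7: converged
            break
        z0 = z1                               # otherwise iterate from z1
    return z1
```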
The invention discloses a general multi-source target fusion tracking method based on spatial histograms, belonging to the field of computer vision tracking. First, a spatial histogram model is established for the candidate target of each video source. Then the Bhattacharyya coefficient and the Mahalanobis distance are used to compute, respectively, the feature similarity and the spatial similarity between the candidate target model of each video source and its reference target model, and the two are multiplied to obtain the similarity of each video source. The similarities of all video sources are then weighted and fused into an objective function. Next, a Taylor expansion of the objective function yields its linear approximation; taking the derivative of the approximation and setting it equal to zero derives the joint displacement iterative formula. Finally, according to the joint displacement formula, a mean-shift procedure realizes tracking of the multi-source target. The tracker adapts well to occlusion, target crossing, and illumination changes in the environment.
The invention is further illustrated in detail below by means of a specific example:
In this embodiment, common infrared and visible-light video pairs are used as test objects, so the parameter N = 2 in the technical scheme of the invention. This example tests 3 infrared and visible-light video pairs, named video 1, video 2, and video 3. The infrared and visible-light sequences of video 1 each have 270 frames and show a target pedestrian walking at night; video 2 has 78 frames, in which the target pedestrian crosses other pedestrians; video 3 has 165 frames, in which the target pedestrian is occluded by foreign objects while walking. The specific tracking steps for these video sources are as follows (taking video 1 as an example):
Step 1: read the first infrared frame and the first visible-light frame of video 1 into memory simultaneously, manually select the tracked pedestrian in the infrared image, and obtain its center position z0 = (207, 210). Initialize the weights α_1 = α_2 = 0.5;
Step 2: compute the reference spatial histograms of the infrared and visible-light targets according to formulas (1) to (3);
Step 3: read in the next frame, compute the candidate spatial histograms of the infrared and visible-light targets according to formulas (1) to (3), and compute the joint similarity ρ(z0) using formulas (5) to (7);
Step 4: compute the position weighting coefficients and position offset vectors according to formula (10);
Step 5: find the new target candidate position z1 according to formula (11) and compute ρ(z1);
Step 6: while ρ(z1) < ρ(z0), set z1 ← (z0 + z1)/2 and recompute ρ(z1), until the condition is false;
Step 7: if ||z1 - z0|| < 0.0001 or the number of iterations reaches 20, stop iterating; otherwise set z0 ← z1 and go to step 4;
Step 8: update the weights α_1 and α_2 according to formula (12), using ρ_1(z1) and ρ_2(z1);
Step 9: set z0 ← z1 and go to step 3.
In addition, to further quantitatively evaluate the performance of the method of the invention, this embodiment uses three performance indicators: the center location error sqrt((x_T - x_G)² + (y_T - y_G)²), the overlap ratio area(R_G ∩ R_T)/area(R_G ∪ R_T), and the success rate. The success rate is the percentage of frames whose tracking-result overlap ratio is greater than 0.5 out of the total number of frames; (x_G, y_G, R_G) are the center and region of the hand-labeled real target, and (x_T, y_T, R_T) are the center and region of the target given by the tracker.
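For illustration, the three indicators can be sketched as follows, assuming bounding boxes are (x, y, w, h) tuples, a layout the patent does not prescribe (it specifies only centers and regions):

```python
import numpy as np

def center_error(cG, cT):
    """Euclidean distance between ground-truth and tracker centers."""
    return float(np.hypot(cT[0] - cG[0], cT[1] - cG[1]))

def overlap_ratio(bG, bT):
    """area(R_G ∩ R_T) / area(R_G ∪ R_T) for (x, y, w, h) boxes."""
    x1, y1 = max(bG[0], bT[0]), max(bG[1], bT[1])
    x2 = min(bG[0] + bG[2], bT[0] + bT[2])
    y2 = min(bG[1] + bG[3], bT[1] + bT[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = bG[2] * bG[3] + bT[2] * bT[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(overlaps, threshold=0.5):
    """Percentage of frames whose overlap ratio exceeds the threshold."""
    return 100.0 * float(np.mean([o > threshold for o in overlaps]))
```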
By calculation, the average center location error of video 1 in this embodiment is 4.28, the average overlap ratio is 0.81, and the success rate is 100%. For video 2 the average center location error is 1.48, the average overlap ratio is 0.84, and the success rate is 100%. For video 3 the average center location error is 19.1, the average overlap ratio is 0.65, and the success rate is 78%. This embodiment shows that the tracker of the invention is stable and performs well.
It should be noted that although the above embodiment of the present invention is illustrative, the invention is not limited to it. Other embodiments made by those skilled in the art in light of the teachings of the present invention, without departing from its principles, are likewise considered to be within the scope of the present invention.

Claims (4)

1. A spatial histogram representation and tracking method of a multi-source target, characterized by comprising the following steps:
step 1: read in N video sources, manually select a candidate target in frame 1 of the first video source, and obtain the initial center position z0 of the candidate target; initialize the weight coefficients α_k, where 0 < α_k < 1;
step 2: compute the reference spatial histogram h′_k of each video source;
step 3: read in the next frame and compute the candidate spatial histogram h_k(z0) of each video source at the initial center position z0;
step 4: compute the similarity ρ_k(z0) between the candidate spatial histogram h_k(z0) and the reference spatial histogram h′_k of each video source;
step 5: weight and fuse the similarities ρ_k(z0) of all video sources with the corresponding weight coefficients α_k to obtain the joint similarity ρ(z0) of all video sources at the initial center position z0;
step 6: take the joint similarity ρ(z0) as the objective function, perform a Taylor expansion on the objective function to obtain its linear approximation, take the derivative of the linear approximation and set it equal to zero to derive the joint displacement iterative formula, and obtain the new center position z1 of the candidate target from the joint displacement iterative formula,
where w_{k,i} is the position weighting coefficient, d_{k,i} is the position offset vector, z0 is the initial center position, x_i is the two-dimensional coordinate vector of the i-th pixel, g(·) = -f′(·), f(·) is the kernel function, k ∈ {1, 2, …, N}, N is the number of video sources, i ∈ {1, 2, …, n}, n is the number of pixels, u ∈ {1, 2, …, m}, and m is the number of feature intervals;
step 7: compute the joint similarity ρ(z1) of all video sources at the new center position z1;
step 8: compare the joint similarity ρ(z1) at the new center position z1 with the joint similarity ρ(z0) at the initial center position z0; if ρ(z1) < ρ(z0), update the new center position z1 to (z0 + z1)/2 and return to step 7 until ρ(z1) ≥ ρ(z0); otherwise go to step 9;
step 9: judge whether ||z1 - z0|| < ε or the maximum number of iterations has been reached; if so, stop iterating, and the multi-source target tracking for this frame is complete; otherwise update the initial center position z0 to the new center position z1, compute the joint similarity ρ(z0) of all video sources at it, and return to step 6, where ε is a pre-given error threshold;
step 10: update the initial center position z0 to the new center position z1 and return to step 3.
2. The spatial histogram representation and tracking method of a multi-source target according to claim 1, characterized in that in step 1 the weights are initialized as α_1 = α_2 = … = α_N = 1/N.
3. The method according to claim 1 or 2, characterized by further comprising, after step 9 and before step 10, updating the weight coefficient α_k of each video source according to the weight coefficient update formula
α_k = ρ_k(z1) / (ρ_1(z1) + ρ_2(z1) + … + ρ_N(z1)),
where ρ_k(z1) is the similarity of the k-th video source.
4. The method according to claim 1, characterized in that in step 6 the position weighting coefficient w_{k,i} and the position offset vector d_{k,i} are defined in terms of the following quantities: p_u^k(z0), μ_u^k(z0) and Σ_u^k(z0) are the probability density of the pixels in the u-th feature interval of the candidate spatial histogram and the mean and covariance matrix of their spatial distribution, respectively; p′_u^k, μ′_u^k and Σ′_u^k are the corresponding quantities of the reference spatial histogram; α_k is the weight coefficient; x_i is the two-dimensional coordinate vector of the i-th pixel; δ(·) is a delta function; b(·) is the function mapping a feature to its histogram interval; k ∈ {1, 2, …, N}, N is the number of video sources; i ∈ {1, 2, …, n}, n is the number of pixels; u ∈ {1, 2, …, m}, and m is the number of feature intervals.
CN201710946077.1A 2017-10-12 2017-10-12 Spatial histogram representation and tracking method of multi-source target Expired - Fee Related CN107610154B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710946077.1A CN107610154B (en) 2017-10-12 2017-10-12 Spatial histogram representation and tracking method of multi-source target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710946077.1A CN107610154B (en) 2017-10-12 2017-10-12 Spatial histogram representation and tracking method of multi-source target

Publications (2)

Publication Number Publication Date
CN107610154A true CN107610154A (en) 2018-01-19
CN107610154B CN107610154B (en) 2020-01-14

Family

ID=61068663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710946077.1A Expired - Fee Related CN107610154B (en) 2017-10-12 2017-10-12 Spatial histogram representation and tracking method of multi-source target

Country Status (1)

Country Link
CN (1) CN107610154B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902565A (en) * 2019-01-21 2019-06-18 深圳市烨嘉为技术有限公司 Human behavior recognition method based on multi-feature fusion
CN110414338A (en) * 2019-06-21 2019-11-05 广西师范大学 Pedestrian re-identification method based on sparse attention network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104166996A (en) * 2014-08-06 2014-11-26 北京航空航天大学 Human eye tracking method based on edge and color dual-feature spatial histogram
CN107092890A (en) * 2017-04-24 2017-08-25 山东工商学院 Naval vessel detection and tracking based on infrared video

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104166996A (en) * 2014-08-06 2014-11-26 北京航空航天大学 Human eye tracking method based on edge and color dual-feature spatial histogram
CN107092890A (en) * 2017-04-24 2017-08-25 山东工商学院 Naval vessel detection and tracking based on infrared video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张灿龙: "红外可见光目标的空间直方图表示与联合跟踪" (Spatial histogram representation and joint tracking of infrared and visible-light targets), 《中国图象图形学报》 (Journal of Image and Graphics) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902565A (en) * 2019-01-21 2019-06-18 深圳市烨嘉为技术有限公司 Human behavior recognition method based on multi-feature fusion
CN110414338A (en) * 2019-06-21 2019-11-05 广西师范大学 Pedestrian re-identification method based on sparse attention network
CN110414338B (en) * 2019-06-21 2022-03-15 广西师范大学 Pedestrian re-identification method based on sparse attention network

Also Published As

Publication number Publication date
CN107610154B (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN111311666B (en) Monocular vision odometer method integrating edge features and deep learning
CN111079556A (en) Multi-temporal unmanned aerial vehicle video image change area detection and classification method
CN114972418B (en) Maneuvering multi-target tracking method based on combination of kernel adaptive filtering and YOLOX detection
CN112634325B (en) Unmanned aerial vehicle video multi-target tracking method
CN103735269B A height measurement method based on video multi-target tracking
CN102447835A (en) Non-blind-area multi-target cooperative tracking method and system
CN111582349B (en) Improved target tracking algorithm based on YOLOv3 and kernel correlation filtering
CN110796691B (en) Heterogeneous image registration method based on shape context and HOG characteristics
CN103106667A (en) Motion target tracing method towards shielding and scene change
CN110245566B (en) Infrared target remote tracking method based on background features
CN113223045A (en) Vision and IMU sensor fusion positioning system based on dynamic object semantic segmentation
CN112752028A (en) Pose determination method, device and equipment of mobile platform and storage medium
CN112132874A (en) Calibration-board-free different-source image registration method and device, electronic equipment and storage medium
CN111812978B (en) Cooperative SLAM method and system for multiple unmanned aerial vehicles
CN111797684A (en) Binocular vision distance measuring method for moving vehicle
Zhang Detection and tracking of human motion targets in video images based on camshift algorithms
Zhao et al. A robust stereo feature-aided semi-direct SLAM system
CN110717934A (en) Anti-occlusion target tracking method based on STRCF
CN107610154B (en) Spatial histogram representation and tracking method of multi-source target
CN116704273A (en) Self-adaptive infrared and visible light dual-mode fusion detection method
Xiao et al. Tracking small targets in infrared image sequences under complex environmental conditions
CN113947616A (en) Intelligent target tracking and loss rechecking method based on hierarchical perceptron
CN117036404A (en) Monocular thermal imaging simultaneous positioning and mapping method and system
Duan Deep learning-based multitarget motion shadow rejection and accurate tracking for sports video
CN108010051A Multi-source video target fusion tracking based on AdaBoost algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200114

Termination date: 20201012