CN106778831B - Rigid body target on-line feature classification and tracking method based on Gaussian mixture model - Google Patents

Rigid body target on-line feature classification and tracking method based on Gaussian mixture model

Info

Publication number
CN106778831B
CN106778831B (application CN201611064798.1A)
Authority
CN
China
Prior art keywords
classifier
surf
mixture model
feature
gaussian mixture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201611064798.1A
Other languages
Chinese (zh)
Other versions
CN106778831A (en
Inventor
苗权
王贵锦
李晗
吴昊
李锐光
程光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Computer Network and Information Security Management Center
Original Assignee
National Computer Network and Information Security Management Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Computer Network and Information Security Management Center filed Critical National Computer Network and Information Security Management Center
Priority to CN201611064798.1A priority Critical patent/CN106778831B/en
Publication of CN106778831A publication Critical patent/CN106778831A/en
Application granted granted Critical
Publication of CN106778831B publication Critical patent/CN106778831B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a rigid body target online feature classification and tracking method based on a Gaussian mixture model. The method comprises the following steps: 1) selecting a target region of interest in the initial image and detecting SURF features in the target region; 2) creating a classifier for each SURF feature; 3) when a new image arrives, matching the SURF features in the initial image with the SURF features detected in the new image using the classifiers to form matching point pairs, where during classifier matching, positive and negative samples are judged by an online classification mechanism based on a Gaussian mixture model; 4) calculating motion parameters from the matching point pairs using a random sample consensus (RANSAC) algorithm, thereby determining the target area of the current image and realizing target tracking. The method can cope with complex scene changes in video, preserve the adaptive capability of tracking, and realize stable, continuous, and practically usable target tracking.

Description

Rigid body target on-line feature classification and tracking method based on Gaussian mixture model
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an online feature classification and tracking method based on a Gaussian mixture model.
Background
The motion of any point on a rigid body target's surface can represent the motion of the whole body, so the target motion can be described by features within the target area. Existing rigid body target tracking methods aim to extract features with some invariance in the reference-image target area and to quantize and describe them, such as color features, texture features, and optical flow features. Local features are features detected in local parts of an image region that possess invariance, repeatability, and distinctiveness; they can resist complex changes such as occlusion, scale, and rotation to a certain extent and provide a quantitative description of the features. At present, the advantages of local features in invariance and distinctiveness are more pronounced than those of other features, so local features are applied more deeply in target tracking. When the current frame arrives, local features are extracted and described over the whole area. Then, candidate corresponding sets of local features on the same target are found through local feature matching. Incorrect correspondences are removed by means of the random sample consensus algorithm (RANSAC), motion transformation parameters are estimated, and target tracking is realized. Fig. 1 shows a block diagram of a feature-based tracking method; the main idea is to treat tracking as a local feature matching problem.
Currently, the Speeded-Up Robust Features (SURF) feature is one of the most widely applied local features with ideal effect. It mainly introduces a fast integral-image algorithm, obtaining the response of the Gaussian second-order derivative approximately through addition and subtraction operations. The SURF algorithm mainly includes two aspects: feature detection and feature description. Feature detection rapidly computes the scale and principal direction of each feature and circumscribes a scale- and rotation-invariant symmetric neighborhood centered on the detection point; feature description performs Haar feature computation in this invariant neighborhood and finally forms a 64-dimensional feature vector. SURF feature matching between different images is mainly achieved by comparing distances between feature vectors. The motion model is constructed from SURF feature matches. Assuming that x and x′ represent corresponding SURF feature points in different images, the following relationship exists between the two:
x′ = W(x, h)   (1)
where W(x, h) is the perspective transformation function and h = (h1, ..., h8)^T is the motion parameter vector. Specifically:
W(x, h) = ( (h1·x + h2·y + h3) / (h7·x + h8·y + 1), (h4·x + h5·y + h6) / (h7·x + h8·y + 1) )^T   (2)
After the motion parameters are obtained, the corresponding perspective transformation is applied to the boundary of the initial-frame target area to obtain the target area of the current frame.
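As a hedged illustration, the 8-parameter perspective transform described above can be applied to the four corners of the initial target boundary to predict the current-frame region. The parameter layout h = (h1, ..., h8) and the helper names are assumptions of this sketch, not the patent's code:

```python
def warp_point(x, y, h):
    """Apply the 8-parameter perspective transform W(x, h) to one point.
    h = (h1, ..., h8); parameter layout assumed for illustration."""
    h1, h2, h3, h4, h5, h6, h7, h8 = h
    d = h7 * x + h8 * y + 1.0          # projective denominator
    return (h1 * x + h2 * y + h3) / d, (h4 * x + h5 * y + h6) / d

def warp_region(corners, h):
    """Warp the initial-frame target boundary (corner list) into the current frame."""
    return [warp_point(x, y, h) for x, y in corners]

# Identity motion h = (1,0,0, 0,1,0, 0,0) leaves the region unchanged.
corners = [(0, 0), (100, 0), (100, 50), (0, 50)]
identity = (1, 0, 0, 0, 1, 0, 0, 0)
```

With the identity parameters the warped boundary coincides with the original, which is a quick sanity check before estimating real motion parameters.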
The common complex scene changes in video mainly include the following 3 types:
(1) Geometric changes. In the region of interest of a video, rotation of the object about its own axis causes viewpoint changes; when the object or the camera rotates, rotational change is produced visually; when the relative distance between the scene and the camera changes, scale change is produced; when the above changes occur simultaneously, affine or perspective change occurs. An example of geometric change is given in fig. 2.
(2) Gray-scale changes. When the light source or the surface reflection conditions of the photographed object change, the illumination changes and the gray scale of the related image region changes correspondingly, affecting feature matching. In addition, when the region of interest is occluded by other objects, the shadowed region also produces gray-scale changes.
(3) Other changes. When an object suddenly moves rapidly or the camera shakes violently, the scene can become blurred, affecting feature detection and description. In addition, when distinguishing a target from the background in a video, if the background contains a region similar to the target, feature matching may be affected.
In video, one or more of the above changes often occur in a scene, seriously interfering with the matching of local features. The prior art continues to use the same local feature matching method as for static images; it cannot adapt to scenes with violent changes and shows no adaptivity to continuous scene changes.
Moving target tracking is an extremely challenging research subject in the field of computer vision; it not only has wide research significance but also importantly promotes the entire vision field. At present, rigid body target tracking has become a hot research subject with wide application in many fields, both military and civil, and has profound economic and social value. In real life, owing to various adverse factors, the design of target tracking systems still faces great difficulties and challenges, and robustness and stability need further improvement. How to accurately and effectively locate a rigid body target of interest in a video, and in particular to preserve the adaptive capability of tracking when the target or background changes in complex ways, realizing a stable, continuous, and practically usable tracking system, has become a key problem of target tracking and is the technical problem to be solved by the invention.
Disclosure of Invention
In view of the above problems, the invention provides an online feature classification and tracking method based on a Gaussian mixture model, which can cope with complex scene changes in video, preserve the adaptive capability of tracking, and realize stable, continuous, and practically usable target tracking.
The technical scheme adopted by the invention is as follows:
a rigid body target online feature classification and tracking method based on a Gaussian mixture model comprises the following steps:
1) selecting a target region of interest in the initial image, and detecting SURF characteristics in the target region;
2) creating classifiers for each SURF feature, wherein each strong classifier corresponds to one SURF feature and comprises a plurality of weak classifiers;
3) when a new image arrives, matching SURF characteristics in the initial image with SURF characteristics detected by the new image by using a classifier to form matching point pairs; in the matching process of the classifier, judging a positive sample and a negative sample by adopting an online classification mechanism based on a Gaussian mixture model;
4) calculating motion parameters from the obtained matching point pairs using a random sample consensus (RANSAC) algorithm, thereby determining the target area of the current image and realizing target tracking.
Further, the method also comprises the online updating step: after the target area is positioned, the feature point pairs establishing the corresponding relation are verified, and each classifier is updated on line.
The key points of the invention comprise: 1) solving the rigid body target tracking problem based on local feature matching; 2) constructing a motion model for the rigid body target between the continuous frames; 3) local feature matching is achieved by means of a classifier; 4) an online classification mechanism based on a Gaussian mixture model; 5) self-organizing learning of a Gaussian mixture model; 6) the tracking is kept adaptive by online updating, and the systematicness and completeness of the algorithm are ensured.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides an online feature classification and tracking scheme based on a Gaussian mixture model to cope with complex scene changes in a video. Firstly, matching SURF characteristics by using a classification concept, and introducing a classifier capable of learning online; matching with a target model is realized by utilizing a classifier which is adaptive to the change of a current target, and dynamic 2-dimensional scale-rotation invariant space cooperative matching is established for local features in the target model; an online Updating mechanism based on a Gaussian mixture model is provided, online Sequential Updating (Sequential Updating) is carried out on related parameters of the mixture model, and sample characteristics of self-organizing learning are learned, so that the precision and the adaptability of the classifier are improved from essence finally.
Drawings
FIG. 1 is a block diagram of a feature-based tracking method in the prior art.
Fig. 2 is a schematic diagram of the class of geometric transformations.
Fig. 3 is a work flow diagram of the method of the present invention.
FIG. 4 is a schematic diagram of a principal direction solution based on a fan-shaped sliding window.
FIG. 5 is a scale and rotation invariant classifier construction diagram.
Fig. 6 is a schematic diagram of the deficiency of the classification mechanism based on simple binary decision.
Fig. 7 is a schematic diagram of a gaussian mixture model (K = 3).
FIG. 8 is a schematic view of object tracking.
Fig. 9 is a schematic diagram of self-organizing learning for a gaussian mixture model.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
The invention provides an online feature classification and tracking scheme based on a Gaussian mixture model, and the working flow of the online feature classification and tracking scheme is shown in FIG. 3. In the initial image, a target region of interest is selected, SURF features are detected in the target region, and a classifier is created and updated for each SURF feature. When a new image arrives, the SURF features in the initial image are matched with SURF features detected by the new image by using the classifier, and matching point pairs are formed. After RANSAC, the target is positioned, and finally the classifier is adaptively updated, so that subsequent frames can be conveniently processed.
The specific implementation scheme is described as follows:
the method comprises the following steps: SURF feature extraction
SURF feature extraction uses the integral image to calculate the Hessian matrix determinant and then locates SURF feature points by selecting extrema. Specifically, for a point x = (x, y) on the image I, the Hessian matrix H(x, s) at scale s is expressed as:
H(x, s) = [ Lxx(x, s)  Lxy(x, s) ; Lxy(x, s)  Lyy(x, s) ]   (3)
Taking Lxx(x, s) as an example, it represents the convolution of the Gaussian second-order derivative with the image I at x = (x, y), approximated by a box filter Dxx. The determinant of the Hessian matrix is balanced by introducing a relative weight w (approximately 0.9):
det(H_approx) = Dxx·Dyy − (w·Dxy)²   (4)
For SURF feature detection, the original image size need not be changed when the scale space is established; instead, the size of the box filter is adjusted and convolved with the original image. The approximate box-filter representation is combined with the integral image to improve computational efficiency, computing det(H_approx) normalized by the filter template size.
The layers (octaves) formed by box filters of different sizes constitute the representation of the scale space. Interest points are localized by executing a non-maximum suppression strategy in the 3 × 3 × 3 neighborhood centered on the candidate point, spanning the scale space; the corresponding point with a maximum or minimum value is taken as a feature point, and its scale s is obtained at the same time.
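A minimal sketch of the two computations just described: the box-filter determinant approximation of eq. (4) and the 3 × 3 × 3 non-maximum suppression. The `responses[scale][row][col]` layout and function names are assumptions of this illustration, not the patent's code:

```python
def det_hessian_approx(dxx, dyy, dxy, w=0.9):
    """Box-filter approximation of the Hessian determinant, eq. (4):
    det(H_approx) = Dxx*Dyy - (w*Dxy)^2, with w ~ 0.9 balancing the approximation."""
    return dxx * dyy - (w * dxy) ** 2

def is_extremum(responses, i, j, s):
    """3x3x3 scale-space non-maximum suppression (interior points only):
    keep (i, j, s) only if its response strictly exceeds all 26 neighbors."""
    centre = responses[s][i][j]
    neigh = [responses[s + ds][i + di][j + dj]
             for ds in (-1, 0, 1) for di in (-1, 0, 1) for dj in (-1, 0, 1)
             if not (ds == di == dj == 0)]
    return all(centre > v for v in neigh)

# Toy 3x3x3 response cube with a single peak at the centre.
cube = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
cube[1][1][1] = 5.0
```

A symmetric check for strict minima would be added in the same way for dark blobs.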
The rotational invariance of SURF features is achieved by solving for the principal direction (dominant orientation), which again takes advantage of the computational efficiency of integral images. Within a circle centered on the feature point with radius 6σ, the Haar wavelet responses of the corresponding pixels are computed with step σ, with scale normalization and Gaussian smoothing applied at the same time, yielding the response dx in the x direction and the response dy in the y direction, which are then mapped into polar coordinates as shown in fig. 4. Within a sliding sector region of π/3, dx and dy are accumulated, recording the vector (wx(i), wy(i)) of the current window i:
wx(i) = Σ dx,   summed over sector window i   (5)
wy(i) = Σ dy,   summed over sector window i   (6)
Taking the angle θ of the longest vector in the region as the main direction:
θ = arctan( wy(i*) / wx(i*) ),   i* = argmax_i √( wx(i)² + wy(i)² )   (7)
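The sector sliding-window search for the principal direction can be sketched as follows; the 5-degree sliding step and the `(angle, dx, dy)` input format are assumptions of this illustration:

```python
import math

def dominant_orientation(points):
    """Principal-direction search: `points` is a list of (angle, dx, dy)
    Haar responses mapped to polar coordinates. A pi/3 sector is slid
    around the circle; the orientation of the longest summed vector
    (wx, wy) is returned as the principal direction."""
    best_len, best_theta = -1.0, 0.0
    for step in range(72):                       # slide in 5-degree increments
        start = step * (2 * math.pi / 72)
        wx = wy = 0.0
        for ang, dx, dy in points:
            if (ang - start) % (2 * math.pi) < math.pi / 3:  # inside the sector
                wx += dx
                wy += dy
        length = math.hypot(wx, wy)
        if length > best_len:                    # keep the longest vector
            best_len, best_theta = length, math.atan2(wy, wx)
    return best_theta
```

All responses concentrated at a single angle should yield that angle back, which makes the routine easy to sanity-check.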
step two: classifier construction of SURF features
For each SURF feature point γ in the target region of the initial frame, an invariant region Pγ is constructed: a circular area centered at γ with radius 2.5s, where s is the scale of the feature.
Each strong classifier C corresponds to one SURF feature. For feature matching, the matching scores C(x) at each newly detected SURF point x are compared; the larger the value, the higher the possibility that the current detection point is the corresponding point. Each strong classifier comprises a plurality of weak classifiers; the weak classifiers (selectors) retained after reliability screening, together with their weights, form the strong classifier:
C(x) = Σ_{j=1}^{J} αj · cj(x)   (8)
where J represents the number of weak classifiers and αj represents the weight of each weak classifier; cj(x) ∈ {+1, −1} denotes the weak classifier's judgment of the attribute of sample point x. Each weak classifier corresponds to 1 Haar feature in the scale- and rotation-invariant neighborhood of the SURF feature, the Haar feature being normalized by scale and principal direction simultaneously, as shown in FIG. 5. The strong classifier formed from such weak classifiers is invariant to both scale and rotation and can meet the requirements of image matching.
Step three: on-line classification mechanism based on Gaussian mixture model
In the matching process of the classifier, the robustness of each strong classifier is determined by the performance of its corresponding weak classifiers. In previous tracking methods, the classification mechanism of the weak classifier is built on a simple binary decision rule: a Haar feature falling on one side of the decision line is a positive sample, and on the other side a negative sample. However, such a classification does not reflect the actual situation well. For a positive sample, the distribution of the Haar features in its neighborhood tends to be concentrated in the classification space; the negative samples of each frame are randomly selected, their corresponding Haar feature values vary widely, and their distribution in the classification space is completely random. As shown in FIG. 6, the five-pointed star represents the position of a certain Haar feature of the positive sample relative to the corresponding weak classifier's decision line, and the circles represent the same Haar feature in randomly selected negative-sample neighborhoods. If the number of negative samples is small, the classifier is easy to train and may be able to separate the positive and negative sample features completely. However, in a tracking method the learning of the classifier is an online process. With the continuous influx of negative samples, the classification plane swings back and forth until it finally becomes indiscriminative. The matching performance of the classifier therefore has the potential to be further improved.
In this scheme, each weak classifier corresponds to a Haar feature in the neighborhood of a SURF feature. Since SURF features have a certain repeatability, and the corresponding Haar features are normalized by scale and rotation, the values of the same Haar feature of the same SURF feature in different frames are distributed relatively compactly under the same kind of target change. For different kinds of target change, the feature values also cluster readily into different classes. Based on this analysis, the scheme focuses on the Haar feature distribution of the positive samples, ignores the influence of negative samples on the weak classifier's decision, and provides the following online classification mechanism based on a Gaussian mixture model p(x). Specifically, each weak classifier is modeled with a K-component Gaussian mixture model (as shown in fig. 7):
p(x) = Σ_{k=1}^{K} πk · N(x | μk, σk)   (9)
where x represents the input variable, K represents the number of Gaussian components, and each Gaussian density N(x | μk, σk) represents one component of the mixture model, characterized by mean μk and variance σk; the parameter πk is called the mixing coefficient. In the classifier's decision process, the Haar feature value corresponding to the current SURF feature point is first calculated; if the feature value lies in the interval (μk − ωσk, μk + ωσk) of any component of the Gaussian mixture model, where ω is a constant, the feature is considered a positive sample and "+1" is output; otherwise it is classified as a negative sample and "−1" is output.
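The interval test just described can be sketched as follows; the value ω = 2.5 and the example three-component mixture are assumptions for illustration:

```python
def gmm_classify(x, components, omega=2.5):
    """Online GMM decision rule: x is a positive sample (+1) if it falls
    inside (mu_k - omega*sigma_k, mu_k + omega*sigma_k) for any component k,
    otherwise negative (-1). `components` is a list of (mu_k, sigma_k) pairs."""
    for mu, sigma in components:
        if mu - omega * sigma < x < mu + omega * sigma:
            return 1
    return -1

# A K = 3 mixture in the spirit of fig. 7 (values are illustrative).
mixture = [(0.0, 1.0), (5.0, 0.5), (9.0, 2.0)]
```

Note the negative samples play no role here: only the positive-sample component intervals decide the output.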
This online learning of samples differs fundamentally from the sample-training methods commonly used in pedestrian detection. For pedestrian detection, thousands of samples are used to train the classifier. Under the influence of pedestrian clothing and pose, different positive samples may be distributed far apart on the classification hyperplane, so a large number of negative samples is required to ensure that each classifier is at least as accurate as a random guess. In contrast, the present scheme assigns one strong classifier to each SURF feature, and the Haar features corresponding to each weak classifier are normalized by scale and rotation. Therefore, the Gaussian mixture model is used to describe the distribution of the positive samples' weak features while the negative samples are ignored, improving classification precision while saving computation.
Step four: target tracking
The target tracking diagram is shown in FIG. 8. The final motion parameters h_{t,t−1} between the target areas of the previous frame I_{t−1} and the tth frame I_t are obtained through classifier matching: after RANSAC is applied, the motion parameters are computed, finally determining the target area of the current frame.
Step five: online update
After the target area is located, the feature point pairs for which correspondence has been established are verified, and each classifier is updated online. The update of a strong classifier is accomplished through the updates of its weak classifiers. For each weak classifier, the Haar feature response value x corresponding to the current weak classifier is first calculated from the positive sample, and the component of the Gaussian mixture model whose mean is closest to x is determined; assume it is the kth component. The model is updated by sequential updating (sequential learning). If the component already contains Nk observations, the relevant parameters are adjusted to:
Nk = Nk + 1   (10)
μk = μk + (x − μk) / Nk   (11)
σk² = σk² + ((x − μk)² − σk²) / Nk   (12)
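A hedged sketch of the sequential update: the recursion below is the standard online mean/variance update consistent with incrementing N_k first, not necessarily the patent's exact formulas:

```python
def sequential_update(mu, var, n, x):
    """Sequential (online) update of one Gaussian component:
        N_k  <- N_k + 1
        mu_k <- mu_k + (x - mu_k_old) / N_k
        s2_k <- s2_k + ((x - mu_k_old)*(x - mu_k_new) - s2_k) / N_k
    This is the running population mean/variance recursion."""
    n_new = n + 1
    mu_new = mu + (x - mu) / n_new
    var_new = var + ((x - mu) * (x - mu_new) - var) / n_new
    return mu_new, var_new, n_new
```

Feeding the samples 2.0 and 4.0 into an empty component yields mean 3.0 and population variance 1.0, matching the batch statistics.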
Next, the scheme focuses on the sample distribution within each component of the Gaussian mixture model. When tracking starts, the number of positive samples is small, the clustering of samples is easy to judge, and the sample distributions of different components are easy to distinguish. As samples accumulate, the differences between samples within each component begin to manifest, and the intra-class distance grows. At this point a self-organizing learning method is adopted: the inherent characteristics of the samples in each component are re-extracted to form a data-distribution topology, and the current samples are automatically clustered, reducing the intra-class distance and increasing the inter-class distance, thereby optimizing the K Gaussian components. The algorithm sets the number K of Gaussian components of the mixture model in advance and then performs self-organizing learning on the sample distribution by K-means clustering. Specifically, the nth sample point of the current kth component is defined as x_kn, and the goal is to minimize the sum of squared errors within each component class:
E = Σ_{k=1}^{K} Σ_n ( x_kn − μk )²   (13)
the sample distribution in each component is then determined by iteratively solving the above equation. Wherein the current mean parameter mu of each componentkAs an initialization input. In the jth iteration, we get:
μk^(j) = (1 / Nk^(j)) Σ_n x_kn,   summed over the samples currently assigned to component k   (14)
The sample distribution within the current component is determined from the mean of each component by the minimum-Euclidean-distance principle. This is repeated until the component means fully converge. The process of self-organizing learning for the Gaussian mixture model is shown in fig. 9.
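The self-organizing K-means step can be sketched in one dimension as follows: component means initialize the clustering, samples are reassigned by minimum Euclidean distance, and means are recomputed until convergence. This is an illustrative sketch, not the patent's implementation:

```python
def kmeans(samples, means, iters=50):
    """Self-organizing learning of the sample distribution by K-means:
    starting from the current component means, reassign each sample to its
    nearest mean and recompute the means until they stop changing (1-D case)."""
    means = list(means)
    for _ in range(iters):
        # Assignment step: nearest mean by squared Euclidean distance.
        clusters = [[] for _ in means]
        for x in samples:
            k = min(range(len(means)), key=lambda k: (x - means[k]) ** 2)
            clusters[k].append(x)
        # Update step: recompute each mean (keep the old one if a cluster empties).
        new_means = [sum(c) / len(c) if c else means[k]
                     for k, c in enumerate(clusters)]
        if new_means == means:
            break
        means = new_means
    return means
```

Two well-separated sample groups initialized near their centers converge in a couple of iterations, illustrating how the optimized means tighten the intra-class distance.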
The above embodiments are only intended to illustrate the technical solution of the present invention and not to limit the same, and a person skilled in the art can modify the technical solution of the present invention or substitute the same without departing from the spirit and scope of the present invention, and the scope of the present invention should be determined by the claims.

Claims (9)

1. A rigid body target online feature classification and tracking method based on a Gaussian mixture model is characterized by comprising the following steps:
1) selecting a target region of interest in the initial image, and detecting SURF characteristics in the target region;
2) creating classifiers for each SURF feature, wherein each strong classifier corresponds to one SURF feature and comprises a plurality of weak classifiers;
3) when a new image arrives, matching the SURF features in the initial image with the SURF features detected in the new image using the classifiers to form matching point pairs; in the matching process of the classifier, judging positive and negative samples by an online classification mechanism based on a Gaussian mixture model; the online classification mechanism based on the Gaussian mixture model models each weak classifier with a K-component Gaussian mixture model; when continuously accumulated samples cause the differences between samples within each component of the Gaussian mixture model to become apparent and the intra-class distance to grow large, adopting a self-organizing learning method to re-extract the inherent characteristics of the samples in each component and form a data-distribution topology, automatically clustering the current samples, reducing the intra-class distance and increasing the inter-class distance, thereby optimizing the K Gaussian components;
4) calculating motion parameters from the obtained matching point pairs using a random sample consensus algorithm, thereby determining the target area of the current image and realizing target tracking.
2. The method of claim 1, further comprising the online updating step of: after the target area is positioned, the feature point pairs establishing the corresponding relation are verified, and each classifier is updated on line.
3. The method of claim 1 or 2, wherein: in step 1), when SURF features are detected, the Hessian matrix determinant is calculated using the integral image, SURF feature points are located by selecting extrema, and the scale space is established by adjusting the size of the box filter; the rotational invariance of the SURF features is achieved by finding the principal direction.
4. The method of claim 1 or 2, wherein: in the step 2), each weak classifier corresponds to a Haar feature in the SURF feature neighborhood, the Haar features are normalized through the scale and the main direction, and the strong classifier formed by the weak classifiers has invariance of the scale and rotation at the same time.
5. The method of claim 1 or 2, wherein: and 3) when the features are matched, comparing the matching scores of each new SURF detection point by using the classifier, wherein the larger the value of the matching score is, the higher the possibility that the current detection point is taken as a corresponding point is.
6. The method of claim 1 or 2, wherein: and 3) focusing the Haar feature distribution of the positive sample and neglecting the judgment influence of the negative sample on the weak classifier by the online classification mechanism based on the Gaussian mixture model.
7. The method of claim 6, wherein: the online classification mechanism based on the Gaussian mixture model models each weak classifier with a K-component Gaussian mixture model according to the formula:
p(x) = Σ_{k=1}^{K} πk · N(x | μk, σk)
where x represents the input variable, K represents the number of Gaussian components, and each Gaussian density N(x | μk, σk) represents one component of the mixture model, characterized by mean μk and variance σk; the parameter πk is called the mixing coefficient; in the classifier's decision process, first calculating the Haar feature value corresponding to the current SURF feature point, and if the feature value lies in the interval (μk − ωσk, μk + ωσk) of any component of the Gaussian mixture model, where ω is a constant, considering the feature a positive sample, otherwise classifying it as a negative sample.
8. The method of claim 1, wherein: and based on the number K of the Gaussian components, carrying out self-organizing learning on the sample distribution by adopting a K-means clustering mode.
9. The method of claim 2, wherein: during online updating, the updating of the strong classifiers is completed based on the updating of the weak classifiers; for each weak classifier, the Haar feature response value x corresponding to the current weak classifier is calculated from a positive sample, the component of the Gaussian mixture model whose mean is closest to x is determined, and the model is then updated by sequential updating.
CN201611064798.1A 2016-11-28 2016-11-28 Rigid body target on-line feature classification and tracking method based on Gaussian mixture model Expired - Fee Related CN106778831B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611064798.1A CN106778831B (en) 2016-11-28 2016-11-28 Rigid body target on-line feature classification and tracking method based on Gaussian mixture model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611064798.1A CN106778831B (en) 2016-11-28 2016-11-28 Rigid body target on-line feature classification and tracking method based on Gaussian mixture model

Publications (2)

Publication Number Publication Date
CN106778831A CN106778831A (en) 2017-05-31
CN106778831B true CN106778831B (en) 2020-04-24

Family

ID=58902076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611064798.1A Expired - Fee Related CN106778831B (en) 2016-11-28 2016-11-28 Rigid body target on-line feature classification and tracking method based on Gaussian mixture model

Country Status (1)

Country Link
CN (1) CN106778831B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644199A (en) * 2017-08-23 2018-01-30 National Computer Network and Information Security Management Center Rigid object tracking method based on collaborative matching of features and regions
CN108596950B (en) * 2017-08-29 2022-06-17 国家计算机网络与信息安全管理中心 Rigid body target tracking method based on active drift correction

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680554A (en) * 2015-01-08 2015-06-03 深圳大学 SURF-based compressive tracking method and system
CN105976397A (en) * 2016-04-28 2016-09-28 西安电子科技大学 Target tracking method based on semi-nonnegative optimization ensemble learning
CN106056146A (en) * 2016-05-27 2016-10-26 西安电子科技大学 Logistic regression-based visual tracking method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680554A (en) * 2015-01-08 2015-06-03 深圳大学 SURF-based compressive tracking method and system
CN105976397A (en) * 2016-04-28 2016-09-28 西安电子科技大学 Target tracking method based on semi-nonnegative optimization ensemble learning
CN106056146A (en) * 2016-05-27 2016-10-26 西安电子科技大学 Logistic regression-based visual tracking method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Tracking and Recognition of Objects using SURF Descriptor and Harris Corner Detection; J. Jasmine Anitha et al.; International Journal of Current Engineering and Technology; 2014-04-30; full text *
Research on UAV Vision-Aided Autonomous Landing Technology; Yu Dongyong; China Masters' Theses Full-text Database, Engineering Science and Technology II; 2013-08-15; pp. 7-10, 14-21, 26-27 and Figs. 3.5, 3.6 *
Moving Object Detection and Tracking Algorithm with an Improved Gaussian Mixture Model; Huang Suyu et al.; Computer Measurement & Control; 2015-04-19; pp. 861-862 *

Also Published As

Publication number Publication date
CN106778831A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN108470354B (en) Video target tracking method and device and implementation device
Li et al. Robust visual tracking based on convolutional features with illumination and occlusion handing
CN111640157B (en) Checkerboard corner detection method based on neural network and application thereof
CN107292869B (en) Image speckle detection method based on anisotropic Gaussian kernel and gradient search
CN106682678B (en) Image corner detection and classification method based on support domain
Feng et al. Fine-grained change detection of misaligned scenes with varied illuminations
CN104008379A Object recognition method based on SURF
CN111126412A (en) Image key point detection method based on characteristic pyramid network
CN103080979A (en) System and method for synthesizing portrait sketch from photo
CN103886324B (en) Scale adaptive target tracking method based on log likelihood image
CN107784284B (en) Face recognition method and system
Rout et al. Walsh–Hadamard-kernel-based features in particle filter framework for underwater object tracking
CN107194310A Rigid object tracking method based on scene change classification and online local feature matching
CN105321188A (en) Foreground probability based target tracking method
CN106778831B (en) Rigid body target on-line feature classification and tracking method based on Gaussian mixture model
CN109508674B (en) Airborne downward-looking heterogeneous image matching method based on region division
CN108876776B (en) Classification model generation method, fundus image classification method and device
Zhang et al. Affine object tracking with kernel-based spatial-color representation
CN106934395B Rigid body target tracking method combining SURF and color features
CN116580121B (en) Method and system for generating 2D model by single drawing based on deep learning
Li et al. Efficient properties-based learning for mismatch removal
CN106897721A Rigid object tracking method combining local features with a bag-of-words model
CN108596950B (en) Rigid body target tracking method based on active drift correction
CN108876849B (en) Deep learning target identification and positioning method based on auxiliary identification
Yuan et al. Realtime CNN-based keypoint detector with Sobel filter and CNN-based descriptor trained with keypoint candidates

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20200424
Termination date: 20201128