CN105488541A - Natural feature point identification method based on machine learning in augmented reality system - Google Patents

Natural feature point identification method based on machine learning in augmented reality system

Info

Publication number
CN105488541A
CN105488541A (Application CN201510956768.0A)
Authority
CN
China
Prior art keywords
feature point
image
point
target image
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510956768.0A
Other languages
Chinese (zh)
Inventor
赵孟德
张斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dianji University filed Critical Shanghai Dianji University
Priority to CN201510956768.0A priority Critical patent/CN105488541A/en
Publication of CN105488541A publication Critical patent/CN105488541A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a natural feature point recognition method based on machine learning for an augmented reality system. The method comprises the steps of: selecting the front-view image of a planar object as a target image; simulating the target image under combined observation conditions by perspective transformation to synthesize sample images; collecting feature points and feature vectors from the sample images to serve as training samples for machine learning; modeling the feature vectors of the natural feature points of the target image through machine learning; and, according to the modeling result, treating the natural feature points of the target image as a corresponding number of distinct classes and judging, for each feature point in a scene image, whether it belongs to any of the classes.

Description

Natural feature point recognition method based on machine learning in an augmented reality system
Technical field
The present invention relates to a natural feature point recognition method based on machine learning in an augmented reality system.
Background technology
In recent years, the rapid development of technologies such as computer vision, optics and displays has provided strong technical support for augmented reality. However, three-dimensional registration, one of the core technologies of an augmented reality system, has never seen a major breakthrough: registration precision remains insufficient and has long restricted the application of augmented reality to outdoor systems. It can be said that three-dimensional registration directly determines whether an augmented reality system succeeds, so research on three-dimensional registration algorithms in augmented reality is of great practical and far-reaching significance.
A registration method based on natural features has been proposed in the prior art. In the initialization phase, two images from different viewing angles are acquired as reference frames, the corner features of the reference frames are matched, and a marker-based method is used for accurate calibration. In the real-time phase, the wide-baseline matching problem of feature points is solved by comparing the current frame with the previous frame, and the fundamental transformation matrix between the current frame and the reference frame is computed with a two-view algorithm. These initial estimates then serve as the starting point of a nonlinear optimization: the two-view and three-view constraint deviations of the feature point positions are used as a cost function, which is minimized to estimate the camera position; finally, a Kalman-filter-like method makes the computation more stable.
A texture-based registration method has also been proposed in the prior art. This method likewise uses markers for tracking initialization; the tracked object must be analyzed in advance and texture-based features extracted as matching templates, with the remaining process essentially identical to the marker-based method.
A natural feature tracking technique based on back-projection reconstruction has also been proposed in the prior art. The method consists mainly of two steps: embedding and rendering. Embedding involves specifying four points to establish the world coordinate system of the virtual object. During rendering, Kanade-Lucas-Tomasi (KLT) feature detection is used to track the corresponding natural features in real-time video. These features are normalized and used as input to estimate the projection matrix. The method requires no predefined fiducial markers and can be applied to outdoor AR systems. In a typical AR application, a simplified Fern classification technique has been used with the currently dominant SIFT feature descriptor: a planar textured object known in advance serves as the tracking target, a training data set is established, and video captured directly with a built-in mobile-phone camera achieves real-time six-degree-of-freedom tracking at 20 Hz.
However, the above prior-art methods generally either have low matching accuracy, or are so complex that feature matching becomes slow.
Summary of the invention
The technical problem to be solved by the present invention is, in view of the above defects in the prior art, to provide a natural feature point recognition method based on machine learning in an augmented reality system, so as to achieve fewer initial matching points and higher matching accuracy, which benefits accurate tracking and registration of the target image.
In order to achieve the above technical purpose, according to the present invention, a natural feature point recognition method based on machine learning in an augmented reality system is provided, comprising:
First step: selecting the front-view image of a planar object as the target image;
Second step: simulating the target image under combined observation conditions by perspective transformation, so as to synthesize sample images;
Third step: collecting feature points and feature vectors from the synthesized sample images as training samples for machine learning;
Fourth step: modeling the feature vectors of the natural feature points of the target image through machine learning;
Fifth step: according to the modeling result, treating the multiple natural feature points of the target image as a corresponding number of distinct classes, and judging, for each feature point in a scene image, whether it belongs to any of the classes.
Preferably, the combined observation conditions comprise angle and/or distance and/or illumination.
Preferably, the second step comprises:
Expressing the perspective transformation matrix that projects target-image local coordinates onto the imaging plane as:

    P = C M,   M = [ R  T ]
                   [ 0  1 ]

wherein C is the camera intrinsic parameter matrix, M is the extrinsic parameter matrix, and R is the rotation matrix, expressed with the Euler angles used in three-dimensional graphics and robot kinematics as R = R_yaw · R_pitch · R_roll; T = [T_x, T_y, T_z] is the translation vector. R_yaw and R_pitch determine the angle between the camera and the normal of the target image plane; R_roll describes the rotation of the target image within the imaging plane; T_z represents the vertical distance between the target and the camera optical center and acts as a scale factor; T_x represents the horizontal lateral distance between the camera and the target image plane, and T_y the horizontal longitudinal distance. With the camera intrinsics C held constant, the extrinsic parameters R_yaw, R_pitch, R_roll, T_x, T_y and T_z are sampled uniformly, so as to simulate, as synthesized images, observations of the target image under different angles, distances, illuminations and combinations thereof.
Preferably, collecting feature points and feature vectors from the synthesized sample images as training samples for machine learning in the third step comprises performing the following steps for each sample image:
Computing the feature points K of the target image I and their feature vectors D;
Updating the perspective transformation matrix P(t);
Synthesizing the sample image I(t) = P(t) * I according to the perspective transformation matrix P(t);
Computing the feature points K(t) of the sample image I(t) and their corresponding feature vectors D(t);
Matching feature points according to the geometric constraint K(t) = P(t) * K + μ, wherein μ is the error introduced by the feature extraction algorithm and "*" denotes multiplication;
Adding the matched feature points to the training sample data set.
Preferably, the third step further comprises screening the training samples; the screening comprises detecting the uniqueness of feature points by self-correlation, so as to exclude, from the feature points of a sample image, similar feature points arising from similar or repeated sub-regions of the image.
Preferably, the screening comprises: letting I_0 = I be the target image, I_t = p_t I the t-th sample image, and p_t the perspective transformation matrix; K_{t,i}, L_{t,i}, D_{t,i} and N_t are respectively the feature points, feature point coordinates, feature vectors and number of feature points of sample image I_t, with i and j being feature point indices. When t = 0, p_0 = E is the identity matrix and I_0 = I. The self-correlation detection of feature points computes the feature vector distance of any two feature points, U_{i,j} = |D_{0,i} - D_{0,j}|, i, j ∈ {0, 1, 2, ..., N_0}. When U_{i,j} < ε, the indices i and j are rejected as class labels, and the feature point K_{0,i} as well as its matching points on the training images are rejected as training samples, wherein N_0 denotes the number of feature points and ε is the feature vector distance threshold. The feature point set passing this self-correlation detection is expressed as SetK = {K_{0,i} | U_{i,j} > ε, i, j ∈ {0, 1, 2, ..., N_0}, i ≠ j}. From this feature point set, a predetermined number of feature points with larger feature vector distances are selected by screening as the training sample data set.
Preferably, the third step further comprises detecting the repeatability of feature points;
wherein the repeatability of feature point K_{0,i} over the sample images I_t is computed as the fraction of sample images in which some detected point j satisfies both the descriptor constraint and the position constraint:

    r(K_{0,i}) = (1/T) Σ_{t=1}^{T} [ ∃j : dist(D_{t,j}, D_{0,i}) < ε  ∧  |L_{t,j} - p_t L_{0,i}| < μ ]

In the formula, ε and μ are errors introduced by the feature extraction algorithm; D_{t,j} is a feature vector of the t-th sample image and D_{0,i} is the feature vector of the projected feature point; dist(D_{t,j}, D_{0,i}) is the distance between D_{t,j} and D_{0,i}; L_{t,j} denotes a feature point coordinate of the t-th sample image and L_{0,i} the projected feature point coordinate, so that |L_{t,j} - p_t L_{0,i}| is the distance between the sample-image feature point and the projection. When feature point K_{0,i} satisfies r(K_{0,i}) ≥ ω, the feature point K_{0,i} and its matching points are kept as training samples; when it does not, K_{0,i} is excluded from the training sample set. Here T is the total number of sample images and ω is the repeatability coefficient.
Preferably, the third step further comprises excluding mismatches by means of a Gaussian mixture model;
wherein, from the statistics of the feature vector sets of the N feature points K_i on the target image, the feature vectors are modeled by the Gaussian mixture distribution

    p(D; a_k, S_k, π_k) = Σ_{k=0}^{N-1} π_k p_k(D);

In the formula, D denotes the feature vector set; a_k, S_k and π_k denote respectively the mean matrix, the covariance matrix, and the proportion of each single Gaussian model within the mixture model, with π_k ≥ 0; φ(D; a_k, S_k) denotes the probability density function of the data, and d is the dimension of the feature vectors. Taking the feature vectors as the sampling points of a kernel density estimate, the parameter θ is estimated by maximizing the expectation:

    θ = argmax_θ L(D, θ);

    L(D, θ) = log p(D, θ) = Σ_{i=1}^{N} log( Σ_{k=0}^{N-1} π_k p_k(D_i) ),   θ = {a_k, S_k, π_k};

where L(D, θ) denotes the expectation (the log-likelihood).
According to the Gaussian mixture model, the probability that a feature point K matches K_i is obtained. Using this probability as the criterion for judging whether a feature match is correct, when the probability falls below the model threshold, the feature point K is deemed not to conform to the distribution model of K_i, and K is then excluded from the training samples.
Compared with the prior art, the present invention establishes a mathematical model for each feature point of the target image using a large number of samples. Guided by the principle that an effective classifier is constructed by choosing suitable features so that the sample feature vectors within one class possess a certain invariance, the present invention converts the one-to-one mapping between feature vectors into a classification problem in pattern recognition, which further increases the speed of feature matching while guaranteeing accuracy, and on this basis achieves real-time tracking and pose estimation of planar targets.
In the feature vector matching stage, the present invention treats the matching of feature point descriptors as a classification problem and proposes a feature matching method based on machine learning: the one-to-one mapping between feature vectors is converted into a classification problem in pattern recognition, replacing nearest-neighbor matching of feature vectors, so that the computational burden is transferred from the real-time phase to the training phase. Experiments show that, relative to the traditional KD-tree and BBF methods, the method of the present invention offers the advantages of accurate feature recognition and high robustness.
Accompanying drawing explanation
With reference to the accompanying drawings, and by referring to the detailed description below, the present invention will be more completely understood, and its attendant advantages and features more easily appreciated, wherein:
Fig. 1 schematically shows the flow chart of the natural feature point recognition method based on machine learning in an augmented reality system according to the preferred embodiment of the present invention.
It should be noted that the accompanying drawings illustrate rather than limit the present invention. Note that drawings representing structures may not be drawn to scale. Further, identical or similar elements are indicated by identical or similar reference numerals throughout the drawings.
Embodiment
To make the content of the present invention clearer and easier to understand, the content of the present invention is described in detail below with reference to specific embodiments and the accompanying drawings.
Fig. 1 schematically shows the flow chart of the natural feature point recognition method based on machine learning in an augmented reality system according to the preferred embodiment of the present invention.
As shown in Fig. 1, the natural feature point recognition method based on machine learning in an augmented reality system according to the preferred embodiment of the present invention comprises performing the following steps:
First step S1: selecting the front-view image of a planar object as the target image;
Second step S2: simulating the target image under combined observation conditions by perspective transformation, so as to synthesize sample images; wherein the combined observation conditions comprise angle and/or distance and/or illumination.
Particularly, in the second step S2, the perspective transformation matrix that projects target-image local coordinates onto the imaging plane can be expressed as:

    P = C M,   M = [ R  T ]
                   [ 0  1 ]

wherein C is the camera intrinsic parameter matrix, M is the extrinsic parameter matrix, and R is the rotation matrix, expressed with the Euler angles used in three-dimensional graphics and robot kinematics as R = R_yaw · R_pitch · R_roll; T = [T_x, T_y, T_z] is the translation vector. R_yaw and R_pitch determine the angle between the camera and the normal of the target image plane; R_roll describes the rotation of the target image within the imaging plane; T_z represents the vertical distance between the target and the camera optical center and acts as a scale factor; T_x represents the horizontal lateral distance between the camera and the target image plane, and T_y the horizontal longitudinal distance. With the camera intrinsics C held constant, the extrinsic parameters R_yaw, R_pitch, R_roll, T_x, T_y and T_z are sampled uniformly, so as to simulate, as synthesized images, observations of the target image under different angles, distances, illuminations and combinations thereof.
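The uniform extrinsic sampling described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the intrinsic matrix, the sampling ranges, and the reduction of the projection to the homography C·[r1 r2 T] (valid when the planar target is placed at z = 0) are all assumptions made for the example.

```python
import numpy as np

def rotation(yaw, pitch, roll):
    """Compose R = R_yaw * R_pitch * R_roll from Euler angles (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rp = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rr = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Ry @ Rp @ Rr

def plane_homography(C, yaw, pitch, roll, tx, ty, tz):
    """Homography P = C [r1 r2 T] mapping plane points (x, y, 1) to pixels."""
    R = rotation(yaw, pitch, roll)
    T = np.array([tx, ty, tz])
    return C @ np.column_stack([R[:, 0], R[:, 1], T])

rng = np.random.default_rng(0)
C = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
yaw, pitch = rng.uniform(-0.6, 0.6, 2)   # angle between camera and plane normal
roll = rng.uniform(-np.pi, np.pi)        # in-plane rotation of the target
tx, ty = rng.uniform(-0.2, 0.2, 2)       # lateral/longitudinal offsets
tz = rng.uniform(2.0, 6.0)               # depth, acting as the scale factor
P = plane_homography(C, yaw, pitch, roll, tx, ty, tz)

# Project the plane origin (0, 0) to pixel coordinates with this sample.
u = P @ np.array([0.0, 0.0, 1.0])
u = u[:2] / u[2]
```

Each draw of (yaw, pitch, roll, T_x, T_y, T_z) yields one homography P, which would then warp the target image into one synthesized sample image.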
Third step S3: collecting feature points and feature vectors from the synthesized sample images as training samples for machine learning.
Particularly, in the third step S3, collecting feature points and feature vectors from the synthesized sample images as training samples for machine learning can comprise performing the following steps for each sample image:
Computing the feature points K of the target image I and their feature vectors D;
Updating the perspective transformation matrix P(t);
Synthesizing the sample image I(t) = P(t) * I according to the perspective transformation matrix P(t);
Computing the feature points K(t) of the sample image I(t) and their corresponding feature vectors D(t);
Matching feature points according to the geometric constraint K(t) = P(t) * K + μ, wherein μ is the error introduced by the feature extraction algorithm and "*" denotes multiplication;
Adding the matched feature points to the training sample data set.
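The per-image collection loop above can be sketched as follows. The feature detector is simulated here by projecting the target points with added noise, and the pixel radius standing in for μ is an assumed value; in a real system K(t) would come from a detector run on the warped image.

```python
import numpy as np

def project(P, pts):
    """Apply homography P to an Nx2 array of points."""
    h = np.column_stack([pts, np.ones(len(pts))]) @ P.T
    return h[:, :2] / h[:, 2:3]

rng = np.random.default_rng(1)
K = rng.uniform(0, 100, (20, 2))   # feature points of target image I
training_set = []
for t in range(5):
    # Update P(t): a mild random perspective transform (stand-in values).
    P = np.eye(3) + rng.normal(0, 0.01, (3, 3))
    P[2, :2] = np.abs(P[2, :2]) * 1e-4
    # "Detect" points in sample image I(t) by projecting K with noise.
    Kt = project(P, K) + rng.normal(0, 0.3, K.shape)
    pred = project(P, K)                     # P(t) * K
    for i, k in enumerate(Kt):
        if np.linalg.norm(k - pred[i]) < 1.0:   # |K(t) - P(t)K| within mu
            training_set.append((i, k))         # class label = target index i

matched = len(training_set)
```

Each matched detection carries the index of its target feature point, which later serves as its class label for training.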
Preferably, the third step S3 further comprises screening the training samples. The screening comprises detecting the uniqueness of feature points by self-correlation, so as to exclude, from the feature points of a sample image, similar feature points arising from similar or repeated sub-regions of the image, thereby improving the accuracy of the classifier.
For example, let I_0 = I be the original target image, I_t = p_t I the t-th sample image, and p_t the perspective transformation matrix; K_{t,i}, L_{t,i}, D_{t,i} and N_t are respectively the feature points, feature point coordinates, feature vectors and number of feature points of sample image I_t, with i and j being feature point indices. When t = 0, p_0 = E is the identity matrix and I_0 = I. The self-correlation detection of feature points computes the feature vector distance of any two feature points, U_{i,j} = |D_{0,i} - D_{0,j}|, i, j ∈ {0, 1, 2, ..., N_0}. When U_{i,j} < ε, the indices i and j are rejected as class labels, and the feature point K_{0,i} as well as its matching points on the training images are rejected as training samples, wherein N_0 denotes the number of feature points and ε is the feature vector distance threshold. The feature point set passing this self-correlation detection is expressed as SetK = {K_{0,i} | U_{i,j} > ε, i, j ∈ {0, 1, 2, ..., N_0}, i ≠ j}. Feature points with larger feature vector distances are then selected from this set, so that similar feature points are excluded and the accuracy of the classifier improved (for example, a predetermined number of feature points with larger feature vector distances are selected by screening as the training sample data set).
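The self-correlation screen can be sketched as follows with synthetic descriptors: pairwise descriptor distances U_{i,j} are computed, and any point whose nearest neighbour lies closer than ε is rejected as coming from a repeated or similar texture region. The descriptor dimension and ε are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.uniform(0, 1, (30, 8))   # descriptors D_{0,i} of the target image
D[5] = D[17] + 0.001             # plant one near-duplicate pair
eps = 0.05                       # feature vector distance threshold

# U_{i,j} = |D_{0,i} - D_{0,j}| for all pairs; ignore the diagonal (i == j).
dist = np.linalg.norm(D[:, None, :] - D[None, :, :], axis=2)
np.fill_diagonal(dist, np.inf)

# Keep only points whose distance to every other point exceeds eps.
unique_mask = dist.min(axis=1) > eps
SetK = np.flatnonzero(unique_mask)   # indices retained as class labels
```

Here the planted duplicate pair (indices 5 and 17) is rejected, while the remaining well-separated points survive the screen.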
In addition, preferably, the third step S3 also comprises detecting the repeatability of feature points.
Particularly, the repeatability of feature point K_{0,i} over the sample images I_t is computed as the fraction of sample images in which some detected point j satisfies both the descriptor constraint and the position constraint:

    r(K_{0,i}) = (1/T) Σ_{t=1}^{T} [ ∃j : dist(D_{t,j}, D_{0,i}) < ε  ∧  |L_{t,j} - p_t L_{0,i}| < μ ]

In the formula, ε and μ are errors introduced by the feature extraction algorithm; D_{t,j} is a feature vector of the t-th sample image and D_{0,i} is the feature vector of the projected feature point; dist(D_{t,j}, D_{0,i}) is the distance between D_{t,j} and D_{0,i}; L_{t,j} denotes a feature point coordinate of the t-th sample image and L_{0,i} the projected feature point coordinate, so that |L_{t,j} - p_t L_{0,i}| is the distance between the sample-image feature point and the projection. When feature point K_{0,i} satisfies r(K_{0,i}) ≥ ω, it is considered to have high repeatability, and the feature point K_{0,i} and its matching points are kept as training samples; otherwise, feature points of low repeatability are excluded from the training sample set. Here T is the total number of sample images and ω is the repeatability coefficient.
Preferably, the third step S3 also comprises excluding mismatches by means of a Gaussian mixture model. Particularly:
From the statistics of the feature vector sets of the N feature points K_i on the target image, the feature vectors are modeled by the Gaussian mixture distribution

    p(D; a_k, S_k, π_k) = Σ_{k=0}^{N-1} π_k p_k(D)    (1-2)

In the formula, D denotes the feature vector set; a_k, S_k and π_k denote respectively the mean matrix, the covariance matrix, and the proportion of each single Gaussian model within the mixture model, with π_k ≥ 0; φ(D; a_k, S_k) denotes the probability density function of the data, and d is the dimension of the feature vectors. Taking the feature vectors as the sampling points of a kernel density estimate, the parameter θ is estimated by maximizing the expectation:

    θ = argmax_θ L(D, θ)    (1-4)

    L(D, θ) = log p(D, θ) = Σ_{i=1}^{N} log( Σ_{k=0}^{N-1} π_k p_k(D_i) ),   θ = {a_k, S_k, π_k}    (1-5)

where L(D, θ) denotes the expectation (the log-likelihood).
According to the Gaussian mixture model, the probability that a feature point K matches K_i can then be obtained. Using this probability formula (1-6) as the criterion for judging whether a feature match is correct, when the probability falls below the model threshold, the feature point K is deemed not to conform to the distribution model of K_i, and K is then excluded from the training samples, so as to improve the accuracy of the classifier.
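The mismatch filter can be sketched as follows, under assumptions the text leaves open: each component p_k(D) is taken as an isotropic Gaussian around the class mean a_k with a shared variance, the mixing weights π_k are uniform, and a candidate descriptor is dropped when its mixture density falls below a threshold τ (standing in for the unspecified test of formula (1-6)).

```python
import numpy as np

def mixture_density(D, means, var, weights):
    """p(D) = sum_k pi_k * N(D; a_k, var*I) for a single descriptor D."""
    d = D.shape[0]
    diff2 = ((means - D) ** 2).sum(axis=1)        # squared distance to each a_k
    norm = (2 * np.pi * var) ** (-d / 2)          # isotropic Gaussian constant
    return np.sum(weights * norm * np.exp(-diff2 / (2 * var)))

rng = np.random.default_rng(3)
means = rng.uniform(0, 10, (5, 4))    # a_k: one mean per target feature class
weights = np.full(5, 1 / 5)           # pi_k, uniform and summing to 1
var = 0.25                            # assumed shared variance

inlier = means[2] + rng.normal(0, 0.1, 4)    # descriptor near class 2
outlier = means.mean(axis=0) + 50.0          # descriptor far from every class
tau = 1e-6                                   # assumed density threshold
keep_inlier = mixture_density(inlier, means, var, weights) > tau
keep_outlier = mixture_density(outlier, means, var, weights) > tau
```

The descriptor near a class mean passes the density test, while the far-off one is rejected as a mismatch, mirroring the exclusion step above.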
Fourth step S4: modeling the feature vectors of the natural feature points of the target image through machine learning;
Fifth step S5: according to the modeling result, treating the multiple natural feature points of the target image as a corresponding number of distinct classes, and judging, for each feature point in a scene image, whether it belongs to any of the classes.
For example, let t be a feature point in the scene image, and let class(t) be the response function of the classifier for the feature point t, with class(t) ∈ {-1, 0, 1, ..., n-1}, wherein 0, 1, 2, ..., n-1 denote the class indices of the target image feature points, a value k meaning that feature point t in the scene image matches the k-th target image feature point, and -1 meaning that no target image feature point matches feature point t; n is a positive integer.
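The response function class(t) can be sketched as a nearest-prototype classifier with a reject option; the prototype vectors and the reject radius are illustrative stand-ins for whatever trained classifier is ultimately selected.

```python
import numpy as np

def classify(desc, prototypes, reject_dist):
    """Return the index of the nearest class prototype, or -1 (no match)
    when even the nearest prototype is farther than reject_dist."""
    d = np.linalg.norm(prototypes - desc, axis=1)
    k = int(np.argmin(d))
    return k if d[k] < reject_dist else -1

# n = 3 target feature point classes with assumed 2-D prototype descriptors.
prototypes = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
label_hit = classify(np.array([9.8, 0.3]), prototypes, reject_dist=2.0)
label_miss = classify(np.array([50.0, 50.0]), prototypes, reject_dist=2.0)
```

The first scene descriptor is assigned class 1; the second falls outside every reject radius and gets label -1, i.e. no target feature point matches it.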
According to the no-free-lunch theorem of machine learning, no universally optimal classifier exists. In practical problems, the various classifiers each have their own strengths in computing speed, memory requirements and the like. The present invention therefore tests various classifiers (decision trees, random trees, support vector machines, k-nearest neighbors, etc.) to select the method best suited to feature point matching.
In fact, the invention removes similar feature points (for example, on a 640*480 image with relatively rich detail, 1000-1500 feature points can usually be detected, whereas after the training sample screening of the present invention only about 150-200 feature points are retained as classes), which reduces interference, thereby improving matching accuracy and enabling more accurate tracking and registration of the target image.
In a specific application, on the premise of a pinhole camera model and coplanar target feature points, the present invention uses a RANSAC-based method to estimate the homography matrix H between two images, and uses the homography matrix H as the criterion for judging whether feature points match; the offset error of the feature point coordinates is taken as 1.5 pixels (which gives better experimental results).
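The verification step above can be sketched as follows: given a homography H between target and scene (here a known synthetic H, standing in for the RANSAC estimate), a putative match is accepted when its reprojection offset is within the stated 1.5-pixel threshold. The point counts and corruption magnitudes are illustrative.

```python
import numpy as np

def project(H, pts):
    """Apply homography H to an Nx2 array of points."""
    h = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return h[:, :2] / h[:, 2:3]

rng = np.random.default_rng(4)
H = np.array([[1.1, 0.02, 5.0],
              [-0.01, 0.95, -3.0],
              [1e-4, 2e-4, 1.0]])          # stand-in for the RANSAC estimate
src = rng.uniform(0, 200, (40, 2))         # target-image feature coordinates
dst = project(H, src)                      # perfectly matching scene points
dst[:10] += rng.uniform(5, 20, (10, 2))    # ten corrupted (mismatched) pairs

# Accept a match when the reprojection offset is under 1.5 pixels.
err = np.linalg.norm(dst - project(H, src), axis=1)
inliers = err < 1.5
```

The thirty uncorrupted pairs pass the 1.5-pixel test and the ten corrupted ones are rejected, which is exactly the role H plays as a match-verification criterion.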
It should be noted that, unless otherwise stated or indicated, terms such as "first", "second" and "third" in the specification are used only to distinguish components, elements, steps, etc., rather than to denote a logical or sequential relationship between them.
It should be understood that although the present invention has been disclosed above by way of preferred embodiments, the above embodiments are not intended to limit the present invention. Any person of ordinary skill in the art may, without departing from the scope of the technical solution of the present invention, make many possible variations and modifications to the technical solution of the present invention using the technical content disclosed above, or amend it into equivalent embodiments of equivalent changes. Therefore, any simple amendment, equivalent change or modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the scope of protection of the technical solution of the present invention.

Claims (8)

1. A natural feature point recognition method based on machine learning in an augmented reality system, characterized by comprising:
a first step: selecting the front-view image of a planar object as a target image;
a second step: simulating the target image under combined observation conditions by perspective transformation, so as to synthesize sample images;
a third step: collecting feature points and feature vectors from the synthesized sample images as training samples for machine learning;
a fourth step: modeling the feature vectors of the natural feature points of the target image through machine learning;
a fifth step: according to the modeling result, treating the multiple natural feature points of the target image as a corresponding number of distinct classes, and judging, for each feature point in a scene image, whether it belongs to any of the classes.
2. The method according to claim 1, characterized in that the combined observation conditions comprise angle and/or distance and/or illumination.
3. The method according to claim 1 or 2, characterized in that the second step comprises:
expressing the perspective transformation matrix that projects target-image local coordinates onto the imaging plane as:

    P = C M,   M = [ R  T ]
                   [ 0  1 ]

wherein C is the camera intrinsic parameter matrix, M is the extrinsic parameter matrix, and R is the rotation matrix, expressed with the Euler angles used in three-dimensional graphics and robot kinematics as R = R_yaw · R_pitch · R_roll; T = [T_x, T_y, T_z] is the translation vector; R_yaw and R_pitch determine the angle between the camera and the normal of the target image plane; R_roll describes the rotation of the target image within the imaging plane; T_z represents the vertical distance between the target and the camera optical center and acts as a scale factor; T_x represents the horizontal lateral distance between the camera and the target image plane, and T_y the horizontal longitudinal distance; with the camera intrinsics C held constant, the extrinsic parameters R_yaw, R_pitch, R_roll, T_x, T_y and T_z are sampled uniformly, so as to simulate, as synthesized images, observations of the target image under different angles, distances, illuminations and combinations thereof.
4. The method according to claim 1 or 2, characterized in that collecting feature points and feature vectors from the synthesized sample images as training samples for machine learning in the third step comprises performing the following steps for each sample image:
computing the feature points K of the target image I and their feature vectors D;
updating the perspective transformation matrix P(t);
synthesizing the sample image I(t) = P(t) * I according to the perspective transformation matrix P(t);
computing the feature points K(t) of the sample image I(t) and their corresponding feature vectors D(t);
matching feature points according to the geometric constraint K(t) = P(t) * K + μ, wherein μ is the error introduced by the feature extraction algorithm and "*" denotes multiplication;
adding the matched feature points to the training sample data set.
5. The method according to claim 1 or 2, characterized in that the third step further comprises screening the training samples; the screening comprises detecting the uniqueness of feature points by self-correlation, so as to exclude, from the feature points of a sample image, similar feature points arising from similar or repeated sub-regions of the image.
6. The method according to claim 5, characterized in that the screening comprises: letting I_0 = I be the target image, I_t = p_t I the t-th sample image, and p_t the perspective transformation matrix; K_{t,i}, L_{t,i}, D_{t,i} and N_t are respectively the feature points, feature point coordinates, feature vectors and number of feature points of sample image I_t, with i and j being feature point indices; when t = 0, p_0 = E is the identity matrix and I_0 = I; the self-correlation detection of feature points computes the feature vector distance of any two feature points, U_{i,j} = |D_{0,i} - D_{0,j}|, i, j ∈ {0, 1, 2, ..., N_0}; when U_{i,j} < ε, the indices i and j are rejected as class labels, and the feature point K_{0,i} as well as its matching points on the training images are rejected as training samples, wherein N_0 denotes the number of feature points and ε is the feature vector distance threshold; the feature point set passing this self-correlation detection is expressed as SetK = {K_{0,i} | U_{i,j} > ε, i, j ∈ {0, 1, 2, ..., N_0}, i ≠ j}; from this feature point set, a predetermined number of feature points with larger feature vector distances are selected by screening as the training sample data set.
7. method according to claim 1 and 2, is characterized in that, also comprises the reproducibility detecting unique point at third step;
Wherein the repeatability of feature point K_{0,i} over the sample images I_t is computed as the proportion of the T sample images in which a matching point is found; in the formula, ε and μ are the errors introduced by the feature extraction algorithm, D_{t,j} is a feature vector of the t-th sample image, D_{0,i} is the feature vector of the projected feature point, dist(D_{t,j}, D_{0,i}) is the distance between D_{t,j} and D_{0,i}, L_{t,j} is a feature-point coordinate of the t-th sample image, L_{0,i} is the projected feature-point coordinate, and |L_{t,j} − p_t L_{0,i}| is the distance between a sample-image feature point and the projection; when feature point K_{0,i} satisfies the repeatability condition (repeatability of at least ω), feature point K_{0,i} and its matching points are kept as training samples; when it does not, K_{0,i} is excluded from the training sample set; where T is the total number of sample images and ω is the repeatability coefficient.
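A sketch of the repeatability test described above, assuming each sample view is given as a (coordinates, descriptors) pair plus its homography p_t; the patent's exact repeatability formula is not reproduced in the text, so the fraction-of-views criterion below is an interpretation based on the surrounding definitions (ε, μ, T, ω), and all names are illustrative:

```python
import numpy as np

def repeatability(idx, sample_sets, homographies, L0, D0,
                  eps=0.5, mu=3.0, omega=0.6):
    """Keep target feature point `idx` if, in at least a fraction `omega`
    of the T sample views, some detected point matches it both in
    descriptor (dist < eps) and in projected position (dist < mu)."""
    T = len(sample_sets)
    hits = 0
    for (Lt, Dt), P in zip(sample_sets, homographies):
        # Project the target point into view t: p_t * L_{0,i}.
        h = P @ np.append(L0[idx], 1.0)
        proj = h[:2] / h[2]
        pos_ok = np.linalg.norm(Lt - proj, axis=1) < mu   # |L_{t,j} - p_t L_{0,i}|
        desc_ok = np.linalg.norm(Dt - D0[idx], axis=1) < eps
        if np.any(pos_ok & desc_ok):
            hits += 1
    return hits / T >= omega
```

A point reproduced exactly in a view counts as a hit; a view where the point appears far from its projection does not.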
8. The method according to claim 1 or 2, characterized in that the third step further comprises using a Gaussian mixture model to eliminate mismatches;
Wherein, by collecting statistics of the feature-vector set of each feature point K_i, the feature-vector sets of the N feature points on the target image are modeled as the Gaussian mixture distribution
p(D; a_k, S_k, π_k) = Σ_{k=0}^{N−1} π_k p_k(D);
In the formula, D denotes the feature-vector set; a_k, S_k and π_k denote, respectively, the mean matrix, the covariance matrix and the proportion that each single Gaussian component takes in the mixture model, with π_k ≥ 0; φ(D; a_k, S_k) denotes the probability density function of the data; d is the dimension of the feature vector; the feature vectors are taken as the sampling points of a kernel density estimate, and the parameter θ is estimated by maximizing the expectation:
θ = argmax_θ L(D, θ);
L(D, θ) = log p(D; θ) = Σ_{i=1}^{N} log( Σ_{k=0}^{N−1} π_k p_k(D_i) ), θ = {a_k, S_k, π_k};
where L(D, θ) denotes the expectation, i.e. the log-likelihood;
According to the Gaussian mixture model, the probability that feature point K matches K_i is obtained and expressed as follows:
The above probability formula serves as the criterion for judging whether a feature match is correct; when the matching probability falls below the given threshold, feature point K is considered not to fit the distribution model of K_i, and K is excluded from the training samples.
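The mismatch-rejection criterion of claim 8 can be sketched by evaluating the mixture density p(D) = Σ_k π_k p_k(D) for a candidate descriptor and thresholding it; the parameters (π_k, a_k, S_k) are assumed to have been estimated already (e.g. by EM), and all names below are illustrative, not from the patent:

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Multivariate normal density phi(x; a_k, S_k)."""
    d = len(mean)
    diff = x - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ inv @ diff) / norm

def mixture_density(x, weights, means, covs):
    """p(D) = sum_k pi_k * phi(D; a_k, S_k) for fitted GMM parameters."""
    return sum(w * gaussian_pdf(x, m, c)
               for w, m, c in zip(weights, means, covs))

def accept_match(desc, weights, means, covs, threshold):
    """Reject feature point K when its descriptor's mixture density falls
    below the threshold, i.e. K does not fit the distribution of K_i."""
    return mixture_density(desc, weights, means, covs) >= threshold
```

For a single standard-normal component in 2-D, the density at the mean is 1/(2π) ≈ 0.159, so a descriptor at the mean passes a 0.1 threshold while a distant one does not.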
CN201510956768.0A 2015-12-17 2015-12-17 Natural feature point identification method based on machine learning in augmented reality system Pending CN105488541A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510956768.0A CN105488541A (en) 2015-12-17 2015-12-17 Natural feature point identification method based on machine learning in augmented reality system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510956768.0A CN105488541A (en) 2015-12-17 2015-12-17 Natural feature point identification method based on machine learning in augmented reality system

Publications (1)

Publication Number Publication Date
CN105488541A true CN105488541A (en) 2016-04-13

Family

ID=55675512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510956768.0A Pending CN105488541A (en) 2015-12-17 2015-12-17 Natural feature point identification method based on machine learning in augmented reality system

Country Status (1)

Country Link
CN (1) CN105488541A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228142A (en) * 2016-07-29 2016-12-14 西安电子科技大学 Face verification method based on convolutional neural networks and Bayesian decision
CN106302444A (en) * 2016-08-16 2017-01-04 深圳市巴古科技有限公司 Intelligent cloud recognition methods
CN107464290A (en) * 2017-08-07 2017-12-12 上海白泽网络科技有限公司 Three-dimensional information display method, device and mobile terminal
CN108416846A (en) * 2018-03-16 2018-08-17 北京邮电大学 A markerless three-dimensional registration algorithm
CN108446615A (en) * 2018-03-05 2018-08-24 天津工业大学 General object identification method based on illumination dictionary
TWI645366B (en) * 2016-12-13 2018-12-21 國立勤益科技大學 Image semantic conversion system and method applied to home care
CN109117773A (en) * 2018-08-01 2019-01-01 Oppo广东移动通信有限公司 An image feature point detection method, terminal device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592124A (en) * 2011-01-13 2012-07-18 汉王科技股份有限公司 Geometrical correction method, device and binocular stereoscopic vision system of text image
US20120212405A1 (en) * 2010-10-07 2012-08-23 Benjamin Zeis Newhouse System and method for presenting virtual and augmented reality scenes to a user
CN103530881A (en) * 2013-10-16 2014-01-22 北京理工大学 Outdoor augmented reality mark-point-free tracking registration method applicable to mobile terminal
CN104077596A (en) * 2014-06-18 2014-10-01 河海大学 Landmark-free tracking registering method
CN105069754A (en) * 2015-08-05 2015-11-18 意科赛特数码科技(江苏)有限公司 System and method for carrying out unmarked augmented reality on image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120212405A1 (en) * 2010-10-07 2012-08-23 Benjamin Zeis Newhouse System and method for presenting virtual and augmented reality scenes to a user
CN102592124A (en) * 2011-01-13 2012-07-18 汉王科技股份有限公司 Geometrical correction method, device and binocular stereoscopic vision system of text image
CN103530881A (en) * 2013-10-16 2014-01-22 北京理工大学 Outdoor augmented reality mark-point-free tracking registration method applicable to mobile terminal
CN104077596A (en) * 2014-06-18 2014-10-01 河海大学 Landmark-free tracking registering method
CN105069754A (en) * 2015-08-05 2015-11-18 意科赛特数码科技(江苏)有限公司 System and method for carrying out unmarked augmented reality on image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FENG ZHOU: "Trends in Augmented Reality Tracking, Interaction and Display: A Review of Ten Years of ISMAR", 《IEEE/ACM INTERNATIONAL SYMPOSIUM ON MIXED & AUGMENTED REALITY》 *
黄诗华 et al.: "Natural feature matching method based on machine learning" [基于机器学习的自然特征匹配方法], 《计算机工程》 (Computer Engineering) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228142A (en) * 2016-07-29 2016-12-14 西安电子科技大学 Face verification method based on convolutional neural networks and Bayesian decision
CN106228142B (en) * 2016-07-29 2019-02-15 西安电子科技大学 Face verification method based on convolutional neural networks and Bayesian decision
CN106302444A (en) * 2016-08-16 2017-01-04 深圳市巴古科技有限公司 Intelligent cloud recognition methods
TWI645366B (en) * 2016-12-13 2018-12-21 國立勤益科技大學 Image semantic conversion system and method applied to home care
CN107464290A (en) * 2017-08-07 2017-12-12 上海白泽网络科技有限公司 Three-dimensional information display method, device and mobile terminal
CN108446615A (en) * 2018-03-05 2018-08-24 天津工业大学 General object identification method based on illumination dictionary
CN108416846A (en) * 2018-03-16 2018-08-17 北京邮电大学 A markerless three-dimensional registration algorithm
CN109117773A (en) * 2018-08-01 2019-01-01 Oppo广东移动通信有限公司 An image feature point detection method, terminal device and storage medium
WO2020024744A1 (en) * 2018-08-01 2020-02-06 Oppo广东移动通信有限公司 Image feature point detecting method, terminal device, and storage medium

Similar Documents

Publication Publication Date Title
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN108427924B (en) Text regression detection method based on rotation sensitive characteristics
CN107067415B An object localization method based on image matching
CN105069746B (en) Video real-time face replacement method and its system based on local affine invariant and color transfer technology
Sirmacek et al. A probabilistic framework to detect buildings in aerial and satellite images
CN111695522B (en) In-plane rotation invariant face detection method and device and storage medium
CN104867126B (en) Based on point to constraint and the diameter radar image method for registering for changing region of network of triangle
Cai et al. Perspective-SIFT: An efficient tool for low-altitude remote sensing image registration
CN104063702B (en) Three-dimensional gait recognition based on shielding recovery and partial similarity matching
CN105809198B (en) SAR image target recognition method based on depth confidence network
CN104599258B An image stitching method based on anisotropic feature descriptors
CN108596108B (en) Aerial remote sensing image change detection method based on triple semantic relation learning
CN109284704A (en) Complex background SAR vehicle target detection method based on CNN
CN109949340A (en) Target scale adaptive tracking method based on OpenCV
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN102495998B (en) Static object detection method based on visual selective attention computation module
CN103839277A (en) Mobile augmented reality registration method of outdoor wide-range natural scene
CN103903013A (en) Optimization algorithm of unmarked flat object recognition
CN107516322A Method for computing image object size and rotation estimation based on log-polar space
CN105279769A (en) Hierarchical particle filtering tracking method combined with multiple features
CN103985143A (en) Discriminative online target tracking method based on videos in dictionary learning
CN108550165A An image matching method based on local invariant features
CN103839066A (en) Feature extraction method based on biological vision
Yang et al. Visual tracking with long-short term based correlation filter
CN102708589B (en) Three-dimensional target multi-viewpoint view modeling method on basis of feature clustering

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160413

WD01 Invention patent application deemed withdrawn after publication