CN101520849B - Reality augmenting method and reality augmenting system based on image characteristic point extraction and random tree classification - Google Patents


Info

Publication number
CN101520849B
Authority
CN
China
Prior art keywords
marker
point
feature point
patch
random
Prior art date
Legal status
Expired - Fee Related
Application number
CN2009100481138A
Other languages
Chinese (zh)
Other versions
CN101520849A (en)
Inventor
季斐翀
陆涛
周暖云
潘晋
Current Assignee
Shanghai Crystal Information Technology Co Ltd
Original Assignee
Shanghai Crystal Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Crystal Information Technology Co Ltd filed Critical Shanghai Crystal Information Technology Co Ltd
Priority to CN2009100481138A priority Critical patent/CN101520849B/en
Publication of CN101520849A publication Critical patent/CN101520849A/en
Application granted granted Critical
Publication of CN101520849B publication Critical patent/CN101520849B/en


Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an augmented reality method and system based on image feature point extraction and random tree classification. The method comprises the following steps: initializing the system environment and configuring system parameters; selecting or capturing a front view of a marker and training on the marker to obtain training data; calculating the camera intrinsic parameters from marker images and performing correction; correcting each frame of the real environment captured by the camera, identifying the marker based on the training data, and calculating the relative position matrix of the marker in the camera coordinate system; and retrieving the virtual model corresponding to the identified marker, determining the model's position from the extracted marker position matrix, and rendering the virtual model. The method and system greatly reduce the restrictions placed on the marker: they can be used to annotate maps and complex two-dimensional images, and can composite a three-dimensional model onto an arbitrary two-dimensional image to achieve a vivid three-dimensional effect.

Description

Augmented reality method and system based on image feature point extraction and random tree classification
Technical field
The invention belongs to the technical field of augmented reality, and in particular relates to an augmented reality method and system applied to virtual reality and computer vision. It uses pattern recognition and virtual reality techniques to add content to, and enhance the effect of, real-world image frames acquired by video capture.
Background art
Augmented reality (AR) is a technique that enhances a real scene with virtual objects. Based on the real physical scene collected by a capture device such as a camera, augmented reality overlays virtually generated information such as text, two-dimensional images and three-dimensional models onto objects in the physical scene shown on the display, thereby annotating and explaining the real physical environment the user is in, or enhancing and emphasizing certain aspects of that environment. For example, when a user wearing dedicated augmented reality display glasses observes a complex machine, he sees not only the physical mechanism itself that exists in the real world, but also multimedia information attached by the augmented reality technology, such as an introduction to each part of the machine. Augmented reality gives the user an experience in which virtual objects and the real environment are fused; it can effectively help the user perceive the surrounding environment, enrich the information about it, and realize interaction between the user and the surroundings.
" ARToolkit " is a kind of open source software bag that can be used for augmented reality.The ARtoolkit vision technique that uses a computer calculates relative position relation between true shooting scene and the label symbol.The main algorithm flow process of ARToolkit is: the video frame image of input captured in real time, but convert thereof into the black and white binary map by preset threshold; The pairing connected region of black surround color of mark, the as a token of candidate target of thing black surround in the search scene; Obtain the outline line of each connected region, if can extract four crossing straight flanges, then as possible mark; The corner characteristics that utilizes four straight flanges to find carries out deformation and corrects, and calculates a homography matrix (homography) conversion that mark is transformed into front view; Utilize this homography matrix to sample at the black surround interior zone of mark, the sampling template is generally 16 * 16, obtains 256 sampled points altogether and constitutes a vector of samples; This vector of samples and the mark that leaves the mark database in advance in are compared one by one, and the vector that respective point constitutes on the calculation flag thing and the normalized vector dot product of vector of samples obtain a confidence value; If confidence value is lower than a threshold value, just being used as is that the match is successful, otherwise is exactly that the match is successful.Find corresponding dummy object according to the mark that the match is successful, dummy object is carried out conversion by the current relative orientation of camera and mark, make it to match with mark.
In the prior art there is a method and system that realizes three-dimensional augmented reality based on the ARToolKit package and two-dimensional visual coding technology, in order to establish the mapping between virtual and real objects. The system comprises a video frame capture module, a video tracking module, a virtual graphics system module, a virtual-real synthesis module and a video display module. The functions of the parts are as follows:
A. The video frame capture module captures video frames containing a two-dimensional visual code marker and sends them to the video tracking module;
B. The video tracking module processes the marker video frame and obtains from the result the transformation matrix from the marker coordinate system to the camera coordinate system; by sampling the coding pattern in the two-dimensional visual code it obtains the marker's code value, retrieves the three-dimensional model corresponding to that code value, and obtains the coordinate array of the three-dimensional graphic in the camera coordinate system as the product of the model's vertex array and the transformation matrix.
C. The virtual graphics system module draws the corresponding three-dimensional graphic according to the coordinate array of the three-dimensional graphic in the camera coordinate system, stores it in a frame buffer, and generates a virtual graphics frame.
D. The virtual-real synthesis module composites the resulting virtual graphics frame with the video frame of the two-dimensional visual code marker to obtain a composite video frame.
The main features of this technical scheme are:
1. Into existing three-dimensional augmented reality technology it introduces a standard two-dimensional visual code image as the tracking marker, replacing the arbitrarily shaped markers adopted by ARToolkit in the prior art, thereby improving the speed and reliability of the tracking algorithm in ARToolkit and accelerating pattern matching.
2. On the basis of existing two-dimensional visual coding, it introduces the calculation and extraction of relative three-dimensional transformation information, retrieval of the corresponding three-dimensional media information, and three-dimensional registration and synthesis. The technology can not only recognize the two-dimensional visual code but also obtain its corresponding three-dimensional spatial position, so that the three-dimensional model retrieved through the code is augmented and displayed on the coded graphic in real time, realizing the augmented reality function.
3. It is mainly intended for implementing augmented reality on hand-held mobile computing devices with relatively limited computational resources, expanding the application field of augmented reality technology.
Its shortcoming is that it places high demands on the marker: the marker must be simple in form, the contrast between the edges of its shapes and the background color must be sharp, and it must have a quadrilateral frame composed of four straight edges as a clear boundary, otherwise recognition is affected.
Summary of the invention
The object of the invention is to provide an augmented reality method and system based on image feature point extraction and random tree classification, so as to reduce the restrictions placed on the marker.
The invention adopts the following technical scheme:
An augmented reality method based on image feature point extraction and random tree classification, comprising the following steps:
Step 10), initializing the system environment and configuring system parameters;
Step 20), selecting or capturing a front view of a marker and training on the marker to obtain training data;
Step 30), calculating the camera intrinsic parameters from marker images and performing correction;
Step 40), correcting each frame of the real environment captured by the camera using the data from step 30), then identifying the marker based on the training data from step 20); if a marker is contained, calculating the relative position matrix of the marker in the camera coordinate system;
Step 50), retrieving the virtual model corresponding to the identified marker and determining the model's position from the extracted marker position matrix;
Step 60), rendering the virtual model on the real captured video frame according to the calculated relative position.
Further, step 20) specifically comprises the following steps:
Step 21), converting the color image into a grayscale image;
Step 22), preliminarily extracting feature points, the concrete extraction method being as follows:
For each pixel m of the image, if any two of the eight pixels surrounding m, taken as a circle centered on m, satisfy the following two conditions, pixel m is excluded:
a. the two pixels lie at the two ends of a diameter of the circle of pixels centered on m, and
b. the gray values of both pixels are close to that of m;
Step 23), applying viewing-angle transformations to the grayscale front view and extracting feature points in the transformed views, in order to obtain more stable feature points;
Step 24), collecting, for each feature point, all its instances in the perspective-transformed front views of different angles into one "specific view set", obtaining N "specific view sets", each "specific view set" corresponding to one stable feature point;
Step 25), constructing random trees for feature point classification and identification.
Further, the construction method of the "specific view sets" in step 24) is:
The original front view is rotated about the x axis and the y axis within the range (−π, +π) and perspective-transformed, the rotation about the x axis being divided into Lx angles and the rotation about the y axis into Ly angles, giving L = Lx × Ly transformed views. The feature points with the same number in all transformed views are collected, yielding N sets V_n = {v_n1, v_n2, …, v_nL}, 1 ≤ n ≤ N. Each V_n is the "specific view set" of one corresponding feature point, and each element of the set records the position of the same feature point under a different viewing-angle transformation.
Further, step 23) is specifically:
For a given front view of a marker, M feature points are extracted with the method of step 22) and numbered in order of their coordinate positions, forming a feature point set K = {k_1, k_2, …, k_M}, each element of the set representing one numbered feature point;
The original front view of the marker is perspective-transformed at a number of different angles, and white noise is added to the transformed views; the method of step 22) is then used to extract the feature points of each transformed view, and the inverse transformation maps the extracted feature points back to the corresponding front-view feature points. For each feature point, the probability that it can still be matched to its original front-view feature point after the above "transform–extract–restore" process over the different angles is counted, and the N points with the highest probability are confirmed as the "stable" feature points; the number of elements of the set K is thus reduced from M to N, i.e. K = {K_1, K_2, …, K_N}.
Further, in step 25) random trees are constructed for feature point classification and identification, specifically as follows:
The random trees adopt a binary tree structure, and the input data are patches of 32 × 32 pixels. During training, a large number of patches containing the feature points of the "specific view sets" are fed into the random trees, each patch descending to some leaf. After all patches have entered the leaves, the probability distribution over all "stable" feature points is calculated for every leaf. The probability distribution held by a leaf can be expressed by the following formula:
P_η(l,p)(Y(p) = c)
where p denotes a 32 × 32 pixel patch, Y(p) is the feature point label of the feature point contained in the patch, c ∈ {−1, 1, 2, …, N}, −1 denoting a patch that contains no feature point, l is the number of the random tree, and η(l, p) denotes the leaf of tree l reached by patch p;
The judgment formula chosen for each node is as follows:
go left if I(p, m1) − I(p, m2) > I(p, m3) − I(p, m4), otherwise go right
where I(p, m) denotes the brightness of patch p at point m, and m1, m2, m3, m4 are four pixels at different positions chosen at random within patch p.
Further, step 40) specifically comprises the following steps:
A captured frame is decomposed into patches of 32 × 32 pixels, and every patch is fed into each of the different random trees constructed in step 25);
Ŷ(p), the feature point label estimated for the feature point contained in patch p, is calculated with the following formula:
Ŷ(p) = argmax_c (1/T) Σ_{l=1..T} P_η(l,p)(Y(p) = c)
where T is the number of random trees. This formula sums and averages the probability distributions of the leaves reached by patch p in the different trees to obtain an average probability distribution, and takes the label of the stable feature point with the largest probability in this average distribution as the label of the feature point corresponding to patch p. With this formula the correspondence between the feature points of a newly captured image and those of the original front view is established.
The invention also provides an augmented reality system based on image feature point extraction and random tree classification, comprising:
a video frame training module, used to select or capture a front view of a marker, train on the marker, and obtain training data;
a video frame correction module, connected to the video frame training module, used to calculate the camera intrinsic parameters from marker images and perform correction;
a video frame capture module, connected to the video frame training module and the video frame correction module, used to correct each frame of the real environment captured by the camera using the data of the video frame correction module, then identify the marker based on the training data of the video frame training module and, if a marker is contained, calculate the relative position matrix of the marker in the camera coordinate system and estimate the illumination of the marker's environment by comparing the brightness of the identified marker with that of the marker front view;
a virtual-real synthesis module, connected to the video frame capture module, used to retrieve the virtual model corresponding to the identified marker, determine the model's position from the extracted marker position matrix, and render the virtual model on the real captured video frame according to the calculated relative position.
Compared with other existing inventions, such as the ARToolkit package and Huawei's system, the present system greatly reduces the restrictions on the marker, which mainly include the following:
(1) the marker color must be dark and uniform, with high contrast against the background color;
(2) the marker must be a simple graphic;
(3) the marker must be surrounded by a clear quadrilateral frame serving as the recognition boundary.
The markers supported by the present system need no border at all: any quadrilateral region containing some texture can be cut out of an arbitrary two-dimensional image, chiefly real scenery in the real environment captured by devices such as video cameras and still cameras, or two-dimensional images of a photographic nature, whose graphical content may be very complex. These characteristics greatly expand the usable range of augmented reality.
The system can be used to annotate maps and complex two-dimensional images, and can composite a three-dimensional model onto an arbitrary two-dimensional image to form a vivid three-dimensional effect.
The invention is further described below with reference to the drawings and embodiments.
Description of drawings
Fig. 1 is a schematic diagram of an embodiment of the augmented reality system of the invention based on image feature point extraction and random tree classification;
Fig. 2 is a flow chart of an embodiment of the augmented reality method of the invention based on image feature point extraction and random tree classification;
Fig. 3 is a flow chart of the training of the marker in the method embodiment of the invention;
Fig. 4 shows actual feature point correspondences on a page of a book.
Embodiment
As shown in Fig. 1, an augmented reality system based on image feature point extraction and random tree classification comprises:
a video frame training module, used to select or capture a front view of a marker, train on the marker, and obtain training data;
a video frame correction module, connected to the video frame training module, used to calculate the camera intrinsic parameters from marker images and perform correction;
a video frame capture module, connected to the video frame training module and the video frame correction module, used to correct each frame of the real environment captured by the camera using the data of the video frame correction module, then identify the marker based on the training data of the video frame training module and, if a marker is contained, calculate the relative position matrix of the marker in the camera coordinate system;
a virtual-real synthesis module, connected to the video frame capture module, used to retrieve the virtual model corresponding to the identified marker, determine the model's position from the extracted marker position matrix, and render the virtual model on the real captured video frame according to the calculated relative position.
As shown in Fig. 2, an augmented reality method based on image feature point extraction and random tree classification comprises the following steps:
Step 10), initializing the system environment and configuring system parameters; this mainly includes setting up the system hardware platform, configuring a drawing environment that supports two- and three-dimensional graphics, allocating image buffer space, identifying the camera, and so on;
Step 20), selecting an image file of a marker front view from disk, or capturing a marker front view from the camera, and training on the marker; training mainly comprises grayscale processing and feature point processing;
Step 30), calculating the camera intrinsic parameters from marker images and performing correction. The camera intrinsic parameters are the parameters inherent to the camera, such as focal length and distortion; they determine the camera's projective transformation matrix and depend only on the camera's own attributes, so for a given camera they are constant. The system photographs the marker at a number of different angles and, by comparing the marker at those angles with the marker front view, computes the camera intrinsics and reads them into the system, where they are used to correct every frame before virtual-real synthesis, as sketched below;
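The calibration step can be sketched as follows. It assumes, which the description does not specify, that the marker's front-view feature points serve as a planar calibration pattern and that OpenCV is available:

```python
import numpy as np
import cv2

def calibrate_intrinsics(obj_points, img_points, image_size):
    """Estimate camera intrinsics from marker views taken at several angles.

    obj_points: list of (N,3) arrays, marker feature points on the front view
                (z = 0, planar), one array per captured angle
    img_points: list of (N,2) arrays, the same points detected in each photo
    image_size: (width, height) of the camera frames
    """
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        [p.astype(np.float32) for p in obj_points],
        [p.astype(np.float32) for p in img_points],
        image_size, None, None)
    return K, dist  # intrinsic matrix and distortion coefficients

def correct_frame(frame, K, dist):
    # Undistort each captured frame before recognition and synthesis
    return cv2.undistort(frame, K, dist)
```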
Step 40), correcting each frame of the real environment captured by the camera using the data from step 30), then identifying the marker based on the training data from step 20); if a marker is contained, calculating the relative position matrix of the marker in the camera coordinate system along with information such as illumination;
The imaging of the marker on the camera plane is equivalent to transforming the coordinates of each pixel constituting the marker from the three-dimensional world coordinate system into the camera coordinate system, and then projecting onto the camera plane to form the marker's two-dimensional image. This transformation can be expressed by a relative position matrix, and step 40) computes that matrix (a sketch follows). Afterwards the illumination of the marker's environment is estimated by comparing the brightness of the identified marker with that of the marker front view;
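To make the relative position matrix concrete, a minimal sketch follows. It assumes 2D–3D feature correspondences have already been established and uses OpenCV's solvePnP, which is one common way, not necessarily the one used here, to compute the marker pose:

```python
import numpy as np
import cv2

def marker_pose(marker_pts_3d, image_pts_2d, K, dist):
    """Relative position matrix of the marker in the camera coordinate system.

    marker_pts_3d: (N,3) feature point coordinates on the marker plane (z = 0)
    image_pts_2d:  (N,2) corresponding feature point positions in the frame
    """
    ok, rvec, tvec = cv2.solvePnP(
        marker_pts_3d.astype(np.float32),
        image_pts_2d.astype(np.float32), K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # 3x3 rotation from rotation vector
    T = np.eye(4)
    T[:3, :3] = R                       # [R | t] maps marker coordinates
    T[:3, 3] = tvec.ravel()             # into camera coordinates
    return T
```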
Step 50), retrieving the virtual model corresponding to the identified marker and determining the model's position from the extracted marker position matrix;
Step 60), rendering the virtual model on the real captured video frame according to the calculated relative position, realizing augmented reality.
Further, as shown in Fig. 3, step 20) specifically comprises the following steps:
Step 21), converting the color image into a grayscale image;
Step 22), preliminarily extracting feature points, the concrete extraction method being as follows:
For each pixel m of the image, if any two of the eight pixels surrounding m, taken as a circle centered on m, satisfy the following two conditions:
1. the two pixels lie at the two ends of a diameter of the circle of pixels centered on m;
2. the gray values of both pixels are close to that of m;
then pixel m is considered "unstable". After all "unstable" pixels have been excluded, the remaining pixels are the "more stable" preliminarily extracted feature points. In this way points lying in regions of uniform gray value and points lying on edges are removed quickly;
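A minimal sketch of this exclusion test follows; the interpretation of "close" as an absolute gray difference below a threshold, and the threshold value itself, are assumptions of the illustration:

```python
import numpy as np

def preliminary_feature_points(gray, tau=12):
    """Keep pixels for which no diametrically opposite pair of the 8
    surrounding pixels has gray values close to the center pixel."""
    h, w = gray.shape
    g = gray.astype(np.int32)
    # The 8 neighbors form a circle around m; opposite pairs share a diameter.
    diameters = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
                 ((-1, 1), (1, -1)), ((0, -1), (0, 1))]
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = g[y, x]
            unstable = any(
                abs(g[y + dy1, x + dx1] - c) < tau and
                abs(g[y + dy2, x + dx2] - c) < tau
                for (dy1, dx1), (dy2, dx2) in diameters)
            if not unstable:
                points.append((x, y))   # survives: candidate feature point
    return points
```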
Step 23), applying viewing-angle transformations to the grayscale front view and extracting feature points in the transformed views, in order to obtain more stable feature points, specifically as follows:
For a given front view of a marker, M feature points are extracted with the method of step 22) and numbered in order of their coordinate positions, forming a feature point set K = {k_1, k_2, …, k_M}, each element of the set representing one numbered feature point.
The original front view of the marker is perspective-transformed at a number of different angles, and white noise is added to the transformed views; the method of step 22) is then used to extract the feature points of each transformed view, and the inverse transformation maps them back to the corresponding front-view feature points. For each feature point, the probability that it can still be matched to its original front-view feature point after this "transform–extract–restore" process over the different angles is counted, and the N points with the highest probability are finally confirmed as the "stable" feature points. In this way the feature points extracted in step 22) are screened further and the most stable feature points are obtained. The number of elements of the set K is reduced from M to N, i.e. K = {K_1, K_2, …, K_N};
Step 24), building the "specific view sets", which are used in step 25) to train and construct the random trees;
The invention recognizes markers and computes the marker's position in the camera coordinate system based on feature point extraction and random tree classification. One of the most crucial problems is to judge whether a frame to be recognized contains "stable feature points" of the front view, and if so, which feature points they are. For this purpose the "specific view sets" are built, explained as follows:
For each feature point, all its instances in the perspective-transformed front views of different angles are collected into a dedicated set; N such sets are obtained, each corresponding to one stable feature point, and these sets are the so-called "specific view sets". For example, the original front view is rotated about the x axis and the y axis within the range (−π, +π) and perspective-transformed, the rotation about the x axis being divided into Lx angles and that about the y axis into Ly angles, finally giving L = Lx × Ly transformed views. Collecting the feature points with the same number in all transformed views yields N sets V_n = {v_n1, v_n2, …, v_nL}, 1 ≤ n ≤ N. Each V_n is the "specific view set" of one corresponding feature point, and each element of the set records the position of the same feature point under a different viewing-angle transformation;
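The stability screening and view-set construction can be sketched as follows; the number of views, the noise level, the survival radius, and the use of random OpenCV homographies in place of the systematic x/y-axis rotations are assumptions of the illustration:

```python
import numpy as np
import cv2

def build_view_sets(front, keypoints, n_keep, n_views=200, noise_sigma=5.0):
    """Warp the front view, re-extract feature points, map them back with the
    inverse warp, count survival, and collect view sets for the stable points."""
    h, w = front.shape
    keypoints = np.float32(keypoints)          # (M, 2) front-view positions
    hits = np.zeros(len(keypoints), dtype=int)
    warps = []
    for _ in range(n_views):
        # Random perspective warp standing in for the systematic rotations
        src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        dst = src + np.float32(np.random.uniform(-0.15, 0.15, (4, 2)) * [w, h])
        H = cv2.getPerspectiveTransform(src, dst)
        view = cv2.warpPerspective(front, H, (w, h)).astype(np.float64)
        noisy = np.clip(view + np.random.normal(0, noise_sigma, (h, w)), 0, 255)
        pts = preliminary_feature_points(noisy.astype(np.uint8))   # step 22)
        if not pts:
            continue
        # Map the extracted points back onto the original front view
        back = cv2.perspectiveTransform(
            np.float32(pts).reshape(-1, 1, 2), np.linalg.inv(H)).reshape(-1, 2)
        for i, k in enumerate(keypoints):
            if np.linalg.norm(back - k, axis=1).min() < 2.0:       # survived
                hits[i] += 1
        warps.append(H)
    stable = np.argsort(-hits)[:n_keep]        # the N most stable points
    # View set V_n: positions of stable point n under every retained warp
    view_sets = {int(n): [cv2.perspectiveTransform(
        keypoints[n].reshape(1, 1, 2), H).ravel() for H in warps]
        for n in stable}
    return stable, view_sets
```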
Step 25), constructing random trees for feature point classification and identification;
Random tree classification is a concise and fast classification method. Its concrete construction is as follows:
A random tree adopts a binary tree structure: there is a single root, which splits into two nodes, each node splitting into two further nodes, and so on recursively until the nodes of the bottom layer, which have no further branches and are called leaves. Every node holds a judgment formula; after data are input at the root, the judgment formula of each node decides whether to send them to the left or the right child, they are judged again at the next layer, and so on until they enter some leaf. In the invention, the input data are patches of 32 × 32 pixels, and a patch may or may not contain a feature point. During training, a large number of patches containing the feature points of the "specific view sets" are fed into the random trees, each patch descending to some leaf. After all patches have entered the leaves, the probability distribution over all "stable" feature points can be calculated for every leaf, i.e. the number of patches of each numbered feature point that entered the leaf is counted as a fraction of the total number of patches that entered that leaf. In this way every leaf holds its own probability distribution over all the "stable" feature points. In this embodiment many random trees are used to increase the accuracy of identification. The probability distribution held by a leaf can be expressed by the following formula:
P_η(l,p)(Y(p) = c)
where p denotes a 32 × 32 pixel patch, Y(p) is the feature point label of the feature point contained in the patch, and c ∈ {−1, 1, 2, …, N}, −1 denoting a patch that contains no feature point; l is the number of the random tree, and η(l, p) denotes the leaf of tree l reached by patch p.
Many judgment formulas can be chosen for the nodes; the judgment formula chosen for each node in this embodiment is as follows:
go left if I(p, m1) − I(p, m2) > I(p, m3) − I(p, m4), otherwise go right
where I(p, m) denotes the brightness of patch p at point m, and m1, m2, m3, m4 are four pixels at different positions chosen at random within patch p.
This builds one random tree, whose principal characteristics are precisely the judgment formula at each node and the probability distribution at each leaf.
By partitioning the patches in different ways and computing the gradient values of each pixel in different directions, a different judgment formula can be set for each node, and thus many different random trees can be constructed.
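Such a randomized tree might be sketched as follows; the fixed depth, the exact node test and the array representation are assumptions of the illustration, chosen to match the structure described above:

```python
import numpy as np

class RandomTree:
    """Binary tree of random intensity tests over 32x32 patches; each leaf
    stores a probability distribution over labels {-1, 1, ..., N}."""

    def __init__(self, depth=10, n_labels=100, patch=32, rng=None):
        self.rng = rng or np.random.default_rng()
        self.depth = depth
        # One random 4-pixel test (m1..m4) per internal node
        self.tests = self.rng.integers(0, patch, size=(2 ** depth - 1, 4, 2))
        # Leaf label counts; column 0 stands for the "no feature point" label -1
        self.counts = np.zeros((2 ** depth, n_labels + 1))

    def _leaf(self, p):
        node = 0
        for _ in range(self.depth):
            (y1, x1), (y2, x2), (y3, x3), (y4, x4) = self.tests[node]
            left = int(p[y1, x1]) - int(p[y2, x2]) > int(p[y3, x3]) - int(p[y4, x4])
            node = 2 * node + (1 if left else 2)      # heap-style child index
        return node - (2 ** self.depth - 1)           # leaf index

    def train(self, patches, labels):
        # labels: -1 for background patches, 1..N for stable feature points
        for p, c in zip(patches, labels):
            self.counts[self._leaf(p), c if c >= 0 else 0] += 1

    def posterior(self, p):
        row = self.counts[self._leaf(p)]
        return row / max(row.sum(), 1.0)              # P_eta(l,p)(Y(p)=c)
```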
Further, step 40) specifically comprises the following steps:
A frame captured by the camera is decomposed into patches of 32 × 32 pixels, and every patch is fed into each of the different random trees built in step 25);
Ŷ(p), the feature point label estimated for the feature point contained in patch p, can be calculated with the following formula:
Ŷ(p) = argmax_c (1/T) Σ_{l=1..T} P_η(l,p)(Y(p) = c)
where T is the number of random trees. The meaning of this formula is that the probability distributions of the leaves reached by patch p in the different trees are summed and averaged to obtain an average probability distribution, and the label of the stable feature point with the largest probability in this average distribution is taken as the label of the feature point corresponding to patch p. With this formula the correspondence between the feature points of a newly captured image and those of the original front view can be established; experiments show that the accuracy of this correspondence is above 90%. Once the feature point correspondences have been established, algorithms commonly used in computer vision can compute the position of the marker in the camera coordinate system. Fig. 4 shows feature point correspondences on a page of a book.
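The classification step over a forest of the trees sketched above can be written compactly; the confidence cutoff is an assumption of the illustration:

```python
import numpy as np

def classify_patch(patch, forest, min_conf=0.1):
    """Average the leaf distributions of all trees and take the argmax,
    i.e. Y_hat(p) = argmax_c (1/T) * sum_l P_eta(l,p)(Y(p)=c)."""
    avg = np.mean([tree.posterior(patch) for tree in forest], axis=0)
    c = int(np.argmax(avg))
    if c == 0 or avg[c] < min_conf:
        return -1, avg[c]      # no (reliable) feature point in this patch
    return c, avg[c]           # label of the matched stable feature point
```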
The embodiment described above only serves to illustrate the technical idea and characteristics of the invention; its purpose is to enable those skilled in the art to understand the content of the invention and implement it accordingly, and it does not limit the scope of protection of the invention. All equivalent changes or modifications made according to the spirit disclosed by the invention shall still fall within the scope of the claims of the invention.

Claims (6)

1. An augmented reality method based on image feature point extraction and random tree classification, characterized in that it comprises the following steps:
Step 10), initializing the system environment and configuring system parameters;
Step 20), selecting or capturing a front view of a marker and training on the marker to obtain training data;
Step 30), calculating the camera intrinsic parameters from marker images;
Step 40), correcting each frame of the real environment captured by the camera using the camera intrinsic parameters from step 30), then identifying the marker based on the training data from step 20); if a marker is contained, calculating the relative position matrix of the marker in the camera coordinate system;
Step 50), retrieving the virtual model corresponding to the identified marker, and determining the position of the virtual model from the relative position matrix of the marker in the camera coordinate system calculated in step 40);
Step 60), rendering the virtual model on the real captured video frame at the virtual model position determined in step 50).
2. The augmented reality method based on image feature point extraction and random tree classification according to claim 1, characterized in that step 20) specifically comprises the following steps:
Step 21), converting the color image into a grayscale image, this grayscale image being the original front view of the marker;
Step 22), preliminarily extracting M feature points on the original front view of the marker of step 21), the concrete extraction method being as follows:
For each pixel m of the original front view, if any two of the eight pixels surrounding m, taken as a circle centered on m, satisfy the following two conditions, pixel m is excluded:
a. the two pixels lie at the two ends of a diameter of the circle of pixels centered on m, and
b. the gray values of both pixels are close to that of m;
Step 23), numbering the M preliminarily extracted feature points of step 22) in order of their coordinate positions, forming a feature point set K = {k_1, k_2, …, k_M}, each element of the set representing one numbered feature point; perspective-transforming the original front view of the marker at a number of different angles and adding white noise to the transformed views; then using the method of step 22) to extract the feature points of each transformed view, and using the inverse transformation to map the extracted feature points back to the corresponding front-view feature points; counting, for each feature point, the probability that it can still be matched to its original front-view feature point after the above "transform–extract–restore" process over the different angles, and finally confirming the N points with the highest probability as the "stable" feature points; by this method the feature points extracted in step 22) are screened further and the most stable feature points are obtained; the number of elements of the set K is reduced from M to N, i.e. K = {K_1, K_2, …, K_N};
Step 24), building N "specific view sets" from the N "stable feature points" obtained in step 23), each "specific view set" corresponding to one stable feature point;
Step 25), constructing random trees for feature point classification and identification.
3. The augmented reality method based on image feature point extraction and random tree classification according to claim 2, characterized in that the construction method of the "specific view sets" in step 24) is:
The original front view is rotated about the x axis and the y axis within the range (−π, +π) and perspective-transformed, the rotation about the x axis being divided into Lx angles and the rotation about the y axis into Ly angles, giving L = Lx × Ly transformed views; the feature points with the same number in all transformed views are collected, yielding N sets V_n = {v_n1, v_n2, …, v_nL}, 1 ≤ n ≤ N, each V_n being the "specific view set" of one corresponding feature point, each element of the set recording the position of the same feature point under a different viewing-angle transformation.
4. The augmented reality method based on image feature point extraction and random tree classification according to claim 3, characterized in that in step 25) random trees are constructed for feature point classification and identification, specifically as follows:
The random trees adopt a binary tree structure, and the input data are patches of 32 × 32 pixels; during training, a large number of patches containing the feature points of the "specific view sets" are fed into the random trees, each patch descending to some leaf; after all patches have entered the leaves, the probability distribution over all "stable" feature points is calculated for every leaf; the probability distribution held by a leaf can be expressed by the following formula:
P_η(l,p)(Y(p) = c)
where p denotes a 32 × 32 pixel patch, Y(p) is the feature point label of the feature point contained in the patch, c ∈ {−1, 1, 2, …, N}, −1 denoting a patch that contains no feature point, l is the number of the random tree, and η(l, p) denotes the leaf of tree l reached by patch p;
The judgment formula chosen for each node is as follows:
go left if I(p, m1) − I(p, m2) > I(p, m3) − I(p, m4), otherwise go right
where I(p, m) denotes the brightness of patch p at point m, and m1, m2, m3, m4 are four pixels at different positions chosen at random within patch p.
5. The augmented reality method based on image feature point extraction and random tree classification according to claim 4, characterized in that step 40) specifically comprises the following steps:
A captured frame is decomposed into patches of 32 × 32 pixels, and every patch is fed into the random trees constructed in step 25);
Ŷ(p), the feature point label estimated for the feature point contained in patch p, is calculated with the following formula:
Ŷ(p) = argmax_c (1/T) Σ_{l=1..T} P_η(l,p)(Y(p) = c)
where T is the number of random trees; this formula sums and averages the probability distributions of the leaves reached by patch p in the different trees to obtain an average probability distribution, and takes the label of the stable feature point with the largest probability in this average distribution as the label of the feature point corresponding to patch p; with this formula the correspondence between the feature points of a newly captured image and those of the original front view is established.
6. An augmented reality system based on image feature point extraction and random tree classification, characterized in that it comprises:
a video frame training module, used to select or capture a front view of a marker, train on the marker, and obtain training data;
a video frame correction module, connected to the video frame training module, used to calculate the camera intrinsic parameters from marker images for use in correction;
a video frame capture module, connected to the video frame training module and the video frame correction module, used to correct each frame of the real environment captured by the camera using the camera intrinsic parameters from the video frame correction module, then identify the marker based on the training data of the video frame training module and, if a marker is contained, calculate the relative position matrix of the marker in the camera coordinate system and estimate the illumination of the marker's environment by comparing the brightness of the identified marker with that of the marker front view;
a virtual-real synthesis module, connected to the video frame capture module, used to retrieve the virtual model corresponding to the identified marker, determine the model's position from the relative position matrix of the marker in the camera coordinate system calculated in the video frame capture module, and render the virtual model on the real captured video frame at the determined virtual model position.
CN2009100481138A 2009-03-24 2009-03-24 Reality augmenting method and reality augmenting system based on image characteristic point extraction and random tree classification Expired - Fee Related CN101520849B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100481138A CN101520849B (en) 2009-03-24 2009-03-24 Reality augmenting method and reality augmenting system based on image characteristic point extraction and random tree classification


Publications (2)

Publication Number Publication Date
CN101520849A CN101520849A (en) 2009-09-02
CN101520849B true CN101520849B (en) 2011-12-28





Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111228

Termination date: 20180324