CN106203446A - Three dimensional object recognition positioning method for augmented reality auxiliary maintaining system - Google Patents
- Publication number
- CN106203446A CN106203446A CN201610519065.6A CN201610519065A CN106203446A CN 106203446 A CN106203446 A CN 106203446A CN 201610519065 A CN201610519065 A CN 201610519065A CN 106203446 A CN106203446 A CN 106203446A
- Authority
- CN
- China
- Prior art keywords
- dimensional object
- image
- template
- edge
- sliding window
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a three-dimensional object recognition and positioning method for an augmented-reality-assisted maintenance system. The method comprises: building an information database of three-dimensional objects; segmenting the three-dimensional object and extracting its features; recognizing the object class; and locating the object. The invention provides a three-dimensional object feature description method that combines a global edge histogram with a corner-guided local edge histogram. With this descriptor, features are extracted from two-dimensional images of each three-dimensional object at different viewing angles, and the extracted feature vectors are trained with a Gentle AdaBoost multi-class classifier, so that class recognition of three-dimensional objects is achieved with a high detection rate. The invention also combines machine learning with template matching: matching-based localization is performed by searching a small-scale local database, which improves localization efficiency.
Description
Technical field
The present invention relates to machine vision technology, and in particular to a three-dimensional object recognition and positioning method for an augmented-reality-assisted maintenance system.
Background technology
In an augmented-reality-assisted maintenance environment, maintenance tasks such as inspection, repair, disassembly, assembly, servicing, and overhaul are carried out by maintenance personnel, while the objects of maintenance are complete machines, components, and parts. To apply an augmented-reality-assisted maintenance system to actual maintenance work, the first problem to solve is the recognition and localization of the maintenance objects in the maintenance scene, which is the premise and basis for camera pose estimation, tracking registration, and virtual-real fusion. Research on three-dimensional object recognition and localization for augmented-reality maintenance environments is therefore of great significance.
Natural-feature recognition methods address the recognition and localization of maintenance objects under multiple viewing angles. The most common multi-view feature description methods use point features, appearance or contour features, and affine invariants. Examples include mapping SIFT feature points of multi-view 2D images onto the surface of a 3D model for 3D object recognition; training view-invariant contour relations and appearance information with machine learning algorithms to obtain a 3D object recognition model; extracting affine-invariant subspace features to match and recognize image targets under affine transformations; and combining contour features with affine-invariant points for recognition and localization. However, affine-invariant points usually require that the range of viewpoint change not be too large.
Summary of the invention
The object of the present invention is to provide a three-dimensional object recognition and positioning method for an augmented-reality-assisted maintenance system, in order to solve the problem that existing localization methods achieve a low detection rate in the class recognition of three-dimensional objects.

The object of the present invention is achieved as follows. A three-dimensional object recognition and positioning method for an augmented-reality-assisted maintenance system comprises the following steps:
a. Build the information database of three-dimensional objects:

a-1. Take the equipment components that have an information association with the augmented-reality-assisted maintenance process as the three-dimensional objects, labeled O1, O2, …, Ok. For each three-dimensional object, build wt × ht two-dimensional image templates at two viewing-angle sampling densities: templates at 10° viewing-angle intervals, denoted TA, and templates at 2.5° viewing-angle intervals, denoted TB. Associate all templates TA and TB of each three-dimensional object with the viewing angles of their two-dimensional image templates, and save them by viewing-angle class in a template database.

a-2. Extract the edge images TEA and TEB of templates TA and TB of each three-dimensional object, extract the compound edge histogram feature vectors FA and FB of templates TA and TB of each three-dimensional object, associate them with the corresponding templates TA and TB, and save them in the information database.

a-3. Feed the compound edge histogram feature vectors FA of the three-dimensional objects labeled O1–Ok, together with the corresponding class labels, into a classifier for training, obtain the class recognition model Ma covering all three-dimensional objects, and store it. Then, removing the compound edge histogram feature vector FA of each three-dimensional object in turn, feed the remainder into the classifier again for training, obtain the recognition models Mi≠1, Mi≠2, …, Mi≠k, each excluding one deleted three-dimensional object, and store them.

a-4. Build the retrieval K-D trees KB1, KB2, …, KBk from the compound edge histogram feature vectors FB of all three-dimensional objects, grouped by class.

The classifier in step a-3 is a Gentle AdaBoost multi-class classifier based on decision stumps, in which the maximum number of weak classifiers is 100.
b. Segment the three-dimensional object and extract its features:

b-1. Acquire a frame I from the image sensor, filter the image, and extract its binary edge map E. Compute the integral image IIE of the binary edge map E, and compute the integral compound edge histogram IHE of the binary edge map E.

b-2. Create a sliding window W1 of width w and height h, and a sliding window W2 of width (w+2) and height (h+2). Nest W1 centered inside W2, scan the binary edge map E with both windows simultaneously, and use the integral image IIE to compute the pixel sums of W1 and W2. When the pixel sums of W1 and W2 are both greater than 0 and their difference is 0, an unoccluded target-region image EW1 is obtained; go to step b-3. If no position in the whole image satisfies this condition, increase w by 2 and h by 2 and repeat this step; when w ≥ the width of image I and h ≥ the height of image I, go to step b-1.

b-3. Use IHE to obtain the compound edge histogram HW of the region of sliding window W1, then perform step c.
c. Recognize the object class:

Read the recognition model Ma and feed the compound edge histogram HW into it as the input vector; the recognition result is the class label n represented by the compound edge histogram HW.
d. Locate the object:

d-1. Using the minimum Bhattacharyya distance D1 between the compound edge histogram HW and the feature vectors in the retrieval K-D tree KBn as the criterion, retrieve from KBn the feature-vector position p closest to the compound edge histogram HW.

d-2. When the Bhattacharyya distance D1 ≥ threshold τ1, go directly to step d-3. Otherwise, normalize the binary edge image EW1 of sliding window W1 to the scale of width wt and height ht, take the template image TBn,p according to the feature-vector position p, and match the binary edge image EW1 of sliding window W1 against the template image TBn,p by Chamfer distance to obtain the matching result D2. When D2 < threshold τ2, output the viewing angle corresponding to the template image TBn,p and the normalization coefficient ε of the binary edge image EW1 of sliding window W1, complete the whole localization process, and exit; otherwise go to step d-3.

d-3. Read the recognition model Mi≠n and feed HW into Mi≠n as the input vector; the recognition result is a new class label n. Take the retrieval K-D tree KBn according to class label n and go to step d-1. After the recognition models Mi≠1, Mi≠2, …, Mi≠k have each been traversed once, declare localization failure and go to step d-2.
The compound edge histogram in the present invention is extracted as follows.

For any image I, the horizontal gradient and vertical gradient at any pixel I(x, y) are computed as:

Gx(x, y) = I(x, y) ⊗ maskx, Gy(x, y) = I(x, y) ⊗ masky (1)

where maskx is the horizontal filter template, masky is the vertical filter template, and ⊗ denotes convolution.

The edge direction at (x, y) is defined as:

θ(x, y) = arctan(Gy(x, y) / Gx(x, y)) (2)

The obtained edge directions are quantized into K intervals over the range −180° to 180°. Let:

bk = [−180 + (k−1)·360/K, −180 + k·360/K] (3)

where k = 1, 2, …, K. The edge orientation histogram is then defined by formulas (4) and (5):

GHk = Σ(x,y)∈E δk(x, y) (4)

δk(x, y) = 1 if θ(x, y) ∈ bk, and 0 otherwise (5)

where the binary edge map E is the edge image of image I.

The edge histogram is built with k as the abscissa and GHk as the ordinate.

The corners of the image are extracted; centered on each corner, the edge histogram of the corner neighborhood is counted as the local edge histogram, defined as:

LHk = Σ(x,y)∈C δk(x, y) (6)

where C is the region formed by the corner and its eight-neighborhood.

The global edge histogram feature is combined with the corner-guided local edge histogram feature according to:

Hk = GHk ∪ LHk (7)

which yields the compound edge histogram feature.
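As an illustration, the compound descriptor described above can be sketched in NumPy. The gradient operator (`np.gradient` standing in for the filter templates), the corner list passed by the caller, and the choice of K = 8 orientation bins are assumptions of this sketch, not values fixed by the patent.

```python
import numpy as np

def edge_orientation_histogram(image, edge_map, pixels, K=8):
    """Histogram of quantized edge directions over the given pixel set."""
    # Gradients stand in for the horizontal/vertical filter templates.
    gy, gx = np.gradient(image.astype(float))
    theta = np.degrees(np.arctan2(gy, gx))          # edge direction in [-180, 180]
    hist = np.zeros(K)
    for (x, y) in pixels:
        if edge_map[y, x]:                          # count edge pixels only
            k = min(int((theta[y, x] + 180.0) * K / 360.0), K - 1)
            hist[k] += 1
    return hist

def compound_edge_histogram(image, edge_map, corners, K=8):
    """Concatenate the global histogram GH with the corner-guided local LH."""
    h, w = edge_map.shape
    all_pixels = [(x, y) for y in range(h) for x in range(w)]
    GH = edge_orientation_histogram(image, edge_map, all_pixels, K)
    # C: each corner together with its eight-neighborhood
    nbhd = [(x + dx, y + dy) for (x, y) in corners
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if 0 <= x + dx < w and 0 <= y + dy < h]
    LH = edge_orientation_histogram(image, edge_map, nbhd, K)
    return np.concatenate([GH, LH])
```

The concatenation realizes the union Hk = GHk ∪ LHk as a single 2K-dimensional feature vector.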
The present invention is thus a three-dimensional object feature description method combining a global edge histogram with a corner-guided local edge histogram. With this descriptor, features are extracted from the two-dimensional images of each three-dimensional object at different viewing angles, and the extracted feature vectors are trained with a Gentle AdaBoost multi-class classifier, so that class recognition of three-dimensional objects is achieved with a high detection rate. The present invention also combines machine learning with template matching: matching-based localization is performed by searching a small-scale local database, which improves localization efficiency.
Brief description of the drawings

Fig. 1 is the flow block diagram of the recognition and positioning method of the present invention.

Fig. 2 is the structure diagram of the AdaBoost algorithm.

Fig. 3 shows the six classes of recognition and localization objects targeted by the experiment.

Fig. 4 shows the detection and localization results for target objects of classes 2 and 5.

Fig. 5 shows the detection and localization results for target objects of classes 5 and 6.
Detailed description of the invention

Six classes of component images to be recognized are chosen as the recognition objects to illustrate the specific implementation of the present invention. As shown in Fig. 1, the three-dimensional object recognition and positioning method for an augmented-reality-assisted maintenance system of the present invention comprises the following steps:
1. Build the template database of objects to be recognized.

1-1. Build the template database of three-dimensional objects.

First, the camera is fixed at a position forming a 45° pitch angle with the maintenance object, and the maintenance object is placed on a high-precision turntable. The turntable is stepped at 10° intervals, forming different camera viewpoints and triggering the camera to shoot. The captured template images are then rotated in-plane at 10° intervals with image-processing software; the template images together with the processed images serve as the coarse viewing-angle templates of the three-dimensional object, denoted TA.

Next, the turntable is stepped at 2.5° intervals, again forming different camera viewpoints and triggering the camera to shoot. The captured template images are rotated in-plane at 2.5° intervals with image-processing software; the template images together with the processed images serve as the fine viewing-angle templates of the three-dimensional object, denoted TB. The templates TA and TB of all three-dimensional objects are associated with their corresponding viewing angles and saved by class in the database. Template images of each maintenance object under multiple viewpoints are thus obtained.
1-2. Build the information database of three-dimensional objects.

Extract the edge images TEA and TEB and the compound edge histogram feature vectors FA and FB of templates TA and TB respectively, associate them with templates TA and TB, and save them in the information database.
The compound edge histogram is extracted as follows.

For any image I, the horizontal and vertical gradients at any pixel I(x, y) are computed as:

Gx(x, y) = I(x, y) ⊗ maskx, Gy(x, y) = I(x, y) ⊗ masky (1)

where maskx is the horizontal filter template, masky is the vertical filter template, and ⊗ denotes convolution.

The edge direction at (x, y) is defined as:

θ(x, y) = arctan(Gy(x, y) / Gx(x, y)) (2)

The obtained edge directions are quantized into K intervals over the range −180° to 180°. Let:

bk = [−180 + (k−1)·360/K, −180 + k·360/K] (3)

where k = 1, 2, …, K. The edge orientation histogram is then defined by formulas (4) and (5):

GHk = Σ(x,y)∈E δk(x, y) (4)

δk(x, y) = 1 if θ(x, y) ∈ bk, and 0 otherwise (5)

where the binary edge map E is the edge image of image I.

The edge histogram is built with k as the abscissa and GHk as the ordinate.

The corners of the image are extracted; centered on each corner, the edge histogram of the corner neighborhood is counted as the local edge histogram, defined as:

LHk = Σ(x,y)∈C δk(x, y) (6)

where C is the region formed by the corner and its eight-neighborhood.

The global edge histogram feature is combined with the corner-guided local edge histogram feature as the compound edge histogram feature, as in formula (7):

Hk = GHk ∪ LHk (7)
1-3. Build the three-dimensional object recognition model with coarse viewing-angle division.

Feed the compound edge histogram feature vectors FA of the three-dimensional objects O1, O2, …, O6, together with the corresponding class labels, into the classifier for training to obtain the class recognition model Ma covering all three-dimensional objects. Then, removing the compound edge histogram feature vector FA of each three-dimensional object in turn, feed the remainder into the classifier again for training to obtain the recognition models Mi≠1, Mi≠2, …, Mi≠6, each excluding one class.
Here, the classifier is a Gentle AdaBoost multi-class classifier based on decision stumps. Gentle AdaBoost is an improvement of the AdaBoost classifier, which obtains multiple weak classifiers by iteration and combines them into a strong classifier; its algorithm structure is shown in Fig. 2.

The strong classifier YM is a weighted combination of 100 weak classifiers, and the sample weights ωn are updated by iteration so as to minimize the classification error of the weak classifiers. The weighting factor of each weak classifier represents its importance:

αm = (1/2) ln((1 − em) / em) (8)

where em is the classification error rate. The Gentle AdaBoost multi-class classifier sets the output range of the weak classifiers to [−1, 1] and performs multi-class training in a one-versus-all manner. Let Hk(O) be the multi-view compound edge histogram feature of maintenance object O, and hj(O) the corresponding j-th weak classifier. Let yi be the label of the i-th training sample, with 1 denoting a positive sample and −1 a negative sample, and let ωi be the weight of the i-th training sample. The decision-stump weak classifier hj(O) is constructed as:

hj(O) = aj·[Hk(O) > θj] + bj (9)

where the affine parameter aj and the offset parameter bj are the weighted least-squares fits on the two sides of the threshold:

bj = Σi ωi yi [Hk(Oi) ≤ θj] / Σi ωi [Hk(Oi) ≤ θj], aj = Σi ωi yi [Hk(Oi) > θj] / Σi ωi [Hk(Oi) > θj] − bj (10)

and the threshold θj is chosen so that the criterion function:

J(θj) = Σi ωi (yi − hj(Oi))² (11)

is minimized. The strong classifier combining the 100 weak classifiers is:

YM(O) = sign(Σj=1..100 hj(O)) (12)
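As a sketch of this training loop, the following Python code fits one-dimensional regression stumps by weighted least squares and combines them in the Gentle AdaBoost manner. The weight-update rule ω ← ω·exp(−y·h) and the toy single-feature setting are standard-algorithm assumptions, not details given in the patent text.

```python
import numpy as np

def fit_stump(x, y, w):
    """Weighted-least-squares regression stump h(x) = a*[x > theta] + b."""
    best = None
    for theta in np.unique(x):
        hi, lo = x > theta, x <= theta
        b = np.average(y[lo], weights=w[lo]) if lo.any() else 0.0
        a = (np.average(y[hi], weights=w[hi]) if hi.any() else 0.0) - b
        h = a * hi + b
        J = np.sum(w * (y - h) ** 2)        # criterion function to minimize
        if best is None or J < best[0]:
            best = (J, theta, a, b)
    return best[1:]

def gentle_adaboost(x, y, rounds=10):
    """Combine regression stumps into a strong classifier sign(sum h_j)."""
    w = np.full(len(x), 1.0 / len(x))
    stumps = []
    for _ in range(rounds):
        theta, a, b = fit_stump(x, y, w)
        h = a * (x > theta) + b
        w *= np.exp(-y * h)                 # Gentle AdaBoost weight update
        w /= w.sum()
        stumps.append((theta, a, b))
    return stumps

def predict(stumps, x):
    x = np.asarray(x, dtype=float)
    F = sum(a * (x > theta) + b for theta, a, b in stumps)
    return np.sign(F)
```

In the patent's setting the input would be the compound edge histogram feature rather than a scalar, with one such boosted classifier per object class in the one-versus-all scheme.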
1-4. Build the retrieval K-D trees for three-dimensional objects with fine viewing-angle division.

Build the retrieval K-D trees KB1, KB2, …, KB6 from the compound edge histogram feature vectors FB of all three-dimensional objects, grouped by class.
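A minimal sketch of the retrieval criterion used later in step 4-1: the Bhattacharyya distance between normalized histograms, with a linear scan standing in for the K-D tree KBn (common K-D tree libraries index Euclidean-style metrics, so the acceleration structure is elided here as an assumption of the sketch).

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance D1 between two histograms (normalized first)."""
    p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
    p = p / p.sum(); q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))             # Bhattacharyya coefficient in [0, 1]
    return -np.log(max(bc, 1e-12))          # guard against log(0)

def retrieve_closest(query, templates):
    """Return the index p of the template histogram closest to the query."""
    dists = [bhattacharyya_distance(query, t) for t in templates]
    return int(np.argmin(dists)), min(dists)
```

The returned minimum distance plays the role of D1, compared against the threshold τ1 in step 4-2.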
2. Segmentation of the three-dimensional object and rapid extraction of the compound edge histogram feature.

2-1. Acquire an image I as shown in Fig. 4 or Fig. 5, filter it, and extract its binary edge map E. Compute the integral image IIE of the binary edge map E, and compute the integral compound edge histogram IHE of the binary edge map E. The integral image and the integral compound edge histogram are computed as follows.

For a given frame, the integral histogram of the region determined by the origin and the coordinate (x, y) is defined as:

H(x, y, u) = Σx′≤x, y′≤y Q(x′, y′, u), u = 1, …, d (13)

where u is the index of the histogram bin, d is the number of bins, and Q(x, y, u) is 1 if pixel (x, y) falls in bin u and 0 otherwise.

Further, the integral histogram value at each pixel can be computed recursively:

H(x, y, u) = H(x−1, y, u) + H(x, y−1, u) − H(x−1, y−1, u) + Q(x, y, u), u = 1, …, d (14)

Once the integral histogram has been built, the histogram of any rectangular region R(x−, y−, x+, y+) can be rapidly extracted by:

H(R, u) = H(x+, y+, u) − H(x−, y+, u) − H(x+, y−, u) + H(x−, y−, u), u = 1, …, d (15)

With d the number of bins and w and h the width and height of the image, a w × h × d cube is formed. To extract the histogram of any subregion within w × h, only simple additions and subtractions of the bin images corresponding to the subregion are needed, so this cumulative integral histogram quickly yields the compound edge histogram of the subregion.
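The recursion (14) and the rectangle extraction (15) can be sketched directly in NumPy; the bin-indexed input image and zero-based inclusive coordinates are implementation conveniences of this sketch.

```python
import numpy as np

def integral_histogram(bin_index, d):
    """Cumulative histogram H(x, y, u); bin_index[y, x] is the bin of each pixel."""
    h, w = bin_index.shape
    Q = np.zeros((h, w, d))
    # Q(x, y, u) = 1 where pixel (x, y) falls in bin u
    Q[np.arange(h)[:, None], np.arange(w)[None, :], bin_index] = 1.0
    # Double cumulative sum is the vectorized form of the recursion (14)
    return Q.cumsum(axis=0).cumsum(axis=1)

def region_histogram(H, x0, y0, x1, y1):
    """Histogram of the inclusive rectangle (x0, y0)-(x1, y1), per formula (15)."""
    total = H[y1, x1].copy()
    if x0 > 0:
        total -= H[y1, x0 - 1]
    if y0 > 0:
        total -= H[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        total += H[y0 - 1, x0 - 1]
    return total
```

Each sliding-window histogram then costs only a constant number of additions per bin, independent of the window size.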
2-2. Create a sliding window W1 of width × height w × h and a sliding window W2 of width × height (w+2) × (h+2). Nest W1 centered inside W2, scan the image with both windows simultaneously, and compute the pixel sums of the regions enclosed by the inner and outer windows and their difference. When the pixel sums of both windows are greater than 0 and their difference is 0, an unoccluded target-region image EW1 is obtained; go to step 2-3. If no position in the whole image satisfies this condition, increase the width w by 2 and the height h by 2 and repeat this step; when w ≥ the width of image I and h ≥ the height of image I, go to step 2-1.
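The nested-window test above can be sketched as follows: the outer window W2 is two pixels larger in each dimension, and a position is accepted when the two pixel sums are equal and positive, i.e. no edge pixel falls in the one-pixel ring around W1 (the object is not clipped by the window). The integral-image helper is formula (15) applied to a single-bin image.

```python
import numpy as np

def integral_image(E):
    return E.astype(float).cumsum(axis=0).cumsum(axis=1)

def window_sum(II, x0, y0, x1, y1):
    """Pixel sum of the inclusive rectangle (x0, y0)-(x1, y1)."""
    s = II[y1, x1]
    if x0 > 0: s -= II[y1, x0 - 1]
    if y0 > 0: s -= II[y0 - 1, x1]
    if x0 > 0 and y0 > 0: s += II[y0 - 1, x0 - 1]
    return s

def find_unoccluded_region(E, w, h):
    """Scan W1 (w x h) nested centrally in W2 ((w+2) x (h+2)) over edge map E."""
    II = integral_image(E)
    H, W = E.shape
    for y in range(0, H - h - 1):
        for x in range(0, W - w - 1):
            s2 = window_sum(II, x, y, x + w + 1, y + h + 1)       # outer W2
            s1 = window_sum(II, x + 1, y + 1, x + w, y + h)       # inner W1
            if s1 > 0 and s2 - s1 == 0:
                return x + 1, y + 1     # top-left corner of W1
    return None                         # caller would grow w, h and retry
```

The `None` return corresponds to the patent's fallback of enlarging the window by 2 in each dimension and rescanning.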
2-3. Use IHE to obtain the compound edge histogram HW of region W1, then go to step 3.
3. Recognition of the object class.

Read the recognition model Ma covering all six classes of three-dimensional objects and feed the compound edge histogram HW into it as the input vector; the recognition result is the class label n represented by the compound edge histogram HW.
4. Localization of the object.

4-1. Using the minimum Bhattacharyya distance D1 between the compound edge histogram HW and the feature vectors in the retrieval K-D tree KBn as the criterion, retrieve from KBn the feature-vector position p closest to the compound edge histogram HW.
4-2. When the Bhattacharyya distance D1 ≥ threshold τ1, go directly to step 4-3. Otherwise, normalize the target-region image EW1 to the wt × ht scale, take the template image TBn,p according to the feature-vector position p, and match the target-region image EW1 against the template image TBn,p by Chamfer distance to obtain the matching result D2. When D2 < threshold τ2, output the viewing angle corresponding to the template image TBn,p and the normalization coefficient ε of the target-region image EW1, complete the localization process, and exit; otherwise go to step 4-3.
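The Chamfer matching of step 4-2 can be sketched as below. A production implementation would precompute a distance transform of the query edge map; the brute-force nearest-neighbor form here is equivalent for small images, and the mean-distance score is an assumption about the (unspecified) Chamfer variant.

```python
import numpy as np

def chamfer_distance(query_edges, template_edges):
    """Mean distance from each template edge pixel to its nearest query edge pixel."""
    q = np.argwhere(query_edges)        # (row, col) of query edge pixels
    t = np.argwhere(template_edges)     # (row, col) of template edge pixels
    if len(q) == 0 or len(t) == 0:
        return np.inf                   # no edges: no meaningful match
    # Pairwise Euclidean distances, then nearest query pixel per template pixel
    diff = t[:, None, :] - q[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)
    return dists.mean()
```

The resulting score plays the role of D2: a low value means the normalized edge image EW1 closely overlays the fine-angle template TBn,p.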
4-3. Read the recognition model Mi≠n and feed the compound edge histogram HW into Mi≠n as the input vector; the recognition result is a new class label n; go to step 4-1. After Mi≠1, Mi≠2, …, Mi≠6 have each been traversed once, declare localization failure and go to step 2-2.

The detection results obtained in the manner described above are shown in Fig. 4 and Fig. 5.
Claims (3)
1. A three-dimensional object recognition and positioning method for an augmented-reality-assisted maintenance system, characterized by comprising the following steps:

a. Build the information database of three-dimensional objects:

a-1. Take the equipment components that have an information association with the augmented-reality-assisted maintenance process as the three-dimensional objects, labeled O1, O2, …, Ok. For each three-dimensional object, build wt × ht two-dimensional image templates at two viewing-angle sampling densities: templates at 10° viewing-angle intervals, denoted TA, and templates at 2.5° viewing-angle intervals, denoted TB. Associate all templates TA and TB of each three-dimensional object with the viewing angles of their two-dimensional image templates, and save them by viewing-angle class in a template database.

a-2. Extract the edge images TEA and TEB of templates TA and TB of each three-dimensional object, extract the compound edge histogram feature vectors FA and FB of templates TA and TB of each three-dimensional object, associate them with the corresponding templates TA and TB, and save them in the information database.

a-3. Feed the compound edge histogram feature vectors FA of the three-dimensional objects labeled O1–Ok, together with the corresponding class labels, into a classifier for training, obtain the class recognition model Ma covering all three-dimensional objects, and store it. Then, removing the compound edge histogram feature vector FA of each three-dimensional object in turn, feed the remainder into the classifier again for training, obtain the recognition models Mi≠1, Mi≠2, …, Mi≠k, each excluding one deleted three-dimensional object, and store them.

a-4. Build the retrieval K-D trees KB1, KB2, …, KBk from the compound edge histogram feature vectors FB of all three-dimensional objects, grouped by class.
b. Segment the three-dimensional object and extract its features:

b-1. Acquire a frame I from the image sensor, filter the image, and extract its binary edge map E. Compute the integral image IIE of the binary edge map E, and compute the integral compound edge histogram IHE of the binary edge map E.

b-2. Create a sliding window W1 of width w and height h, and a sliding window W2 of width (w+2) and height (h+2). Nest W1 centered inside W2, scan the binary edge map E with both windows simultaneously, and use the integral image IIE to compute the pixel sums of W1 and W2. When the pixel sums of W1 and W2 are both greater than 0 and their difference is 0, an unoccluded target-region image EW1 is obtained; go to step b-3. If no position in the whole image satisfies this condition, increase w by 2 and h by 2 and repeat this step; when w ≥ the width of image I and h ≥ the height of image I, go to step b-1.

b-3. Use IHE to obtain the compound edge histogram HW of the region of sliding window W1, then perform step c.
c. Recognize the object class:

Read the recognition model Ma and feed the compound edge histogram HW into it as the input vector; the recognition result is the class label n represented by the compound edge histogram HW.
d. Locate the object:

d-1. Using the minimum Bhattacharyya distance D1 between the compound edge histogram HW and the feature vectors in the retrieval K-D tree KBn as the criterion, retrieve from KBn the feature-vector position p closest to the compound edge histogram HW.

d-2. When the Bhattacharyya distance D1 ≥ threshold τ1, go directly to step d-3. Otherwise, normalize the binary edge map EW1 of sliding window W1 to the scale of width wt and height ht, take the template image TBn,p according to the feature-vector position p, and match the binary edge map EW1 of sliding window W1 against the template image TBn,p by Chamfer distance to obtain the matching result D2. When D2 < threshold τ2, output the viewing angle corresponding to the template image TBn,p and the normalization coefficient ε of the binary edge map EW1 of sliding window W1, complete the whole localization process, and exit; otherwise go to step d-3.

d-3. Read the recognition model Mi≠n and feed HW into Mi≠n as the input vector; the recognition result is a new class label n. Take the retrieval K-D tree KBn according to class label n and go to step d-1. After the recognition models Mi≠1, Mi≠2, …, Mi≠k have each been traversed once, declare localization failure and go to step d-2.
2. The three-dimensional object recognition and positioning method for an augmented-reality-assisted maintenance system according to claim 1, characterized in that the compound edge histogram is extracted as follows:

For any image I, the horizontal gradient and vertical gradient at any pixel I(x, y) are computed as:

Gx(x, y) = I(x, y) ⊗ maskx, Gy(x, y) = I(x, y) ⊗ masky (1)

where maskx is the horizontal filter template, masky is the vertical filter template, and ⊗ denotes convolution.

The edge direction at (x, y) is defined as:

θ(x, y) = arctan(Gy(x, y) / Gx(x, y)) (2)

The obtained edge directions are quantized into K intervals over the range −180° to 180°. Let:

bk = [−180 + (k−1)·360/K, −180 + k·360/K] (3)

where k = 1, 2, …, K. The edge orientation histogram is then defined by formulas (4) and (5):

GHk = Σ(x,y)∈E δk(x, y) (4)

δk(x, y) = 1 if θ(x, y) ∈ bk, and 0 otherwise (5)

where the binary edge map E is the edge image of image I.

The edge histogram is built with k as the abscissa and GHk as the ordinate.

The corners of the image are extracted; centered on each corner, the edge histogram of the corner neighborhood is counted as the local edge histogram, defined as:

LHk = Σ(x,y)∈C δk(x, y) (6)

where C is the region formed by the corner and its eight-neighborhood.

The global edge histogram feature is combined with the corner-guided local edge histogram feature according to:

Hk = GHk ∪ LHk (7)

which yields the compound edge histogram feature.
3. The three-dimensional object recognition and positioning method for an augmented-reality-assisted maintenance system according to claim 1, characterized in that the classifier in step a-3 is a Gentle AdaBoost multi-class classifier based on decision stumps, in which the maximum number of weak classifiers is 100.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610519065.6A CN106203446B (en) | 2016-07-05 | 2016-07-05 | Three dimensional object recognition positioning method for augmented reality auxiliary maintaining system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610519065.6A CN106203446B (en) | 2016-07-05 | 2016-07-05 | Three dimensional object recognition positioning method for augmented reality auxiliary maintaining system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106203446A true CN106203446A (en) | 2016-12-07 |
CN106203446B CN106203446B (en) | 2019-03-12 |
Family
ID=57464633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610519065.6A Active CN106203446B (en) | 2016-07-05 | 2016-07-05 | Three dimensional object recognition positioning method for augmented reality auxiliary maintaining system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106203446B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109034418A (en) * | 2018-07-26 | 2018-12-18 | 国家电网公司 | Operation field information transferring method and system |
CN109218610A (en) * | 2018-08-15 | 2019-01-15 | 北京天元创新科技有限公司 | A kind of operator network resources methods of exhibiting and device based on augmented reality |
CN111480348A (en) * | 2017-12-21 | 2020-07-31 | 脸谱公司 | System and method for audio-based augmented reality |
CN111597674A (en) * | 2019-02-21 | 2020-08-28 | 中国科学院软件研究所 | Intelligent engine maintenance method based on man-machine cooperation |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101477538A (en) * | 2008-12-30 | 2009-07-08 | 清华大学 | Three-dimensional object retrieval method and apparatus |
CN101520849A (en) * | 2009-03-24 | 2009-09-02 | 上海水晶石信息技术有限公司 | Reality augmenting method and reality augmenting system based on image characteristic point extraction and random tree classification |
CN103295021A (en) * | 2012-02-24 | 2013-09-11 | 北京明日时尚信息技术有限公司 | Method and system for detecting and recognizing feature of vehicle in static image |
CN104484523A (en) * | 2014-12-12 | 2015-04-01 | 西安交通大学 | Equipment and method for realizing augmented reality induced maintenance system |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101477538A (en) * | 2008-12-30 | 2009-07-08 | Tsinghua University | Three-dimensional object retrieval method and apparatus |
CN101520849A (en) * | 2009-03-24 | 2009-09-02 | Shanghai Shuijingshi Information Technology Co., Ltd. | Augmented reality method and system based on image feature point extraction and random tree classification |
CN103295021A (en) * | 2012-02-24 | 2013-09-11 | Beijing Mingri Shishang Information Technology Co., Ltd. | Method and system for detecting and recognizing vehicle features in static images |
CN104484523A (en) * | 2014-12-12 | 2015-04-01 | Xi'an Jiaotong University | Equipment and method for realizing an augmented reality guided maintenance system |
Non-Patent Citations (6)
Title |
---|
ALEXANDER C et al.: "Shape Matching and Object Recognition", Toward Category-Level Object Recognition *
B LEIBE et al.: "Efficient Clustering and Matching for Object Class Recognition", BMVC *
SER NAM LIM et al.: "Automatic Registration of Smooth Object Image to 3D CAD Model for Industrial Inspection Applications", 2013 International Conference on 3D Vision *
ZHAO SHOUWEI et al.: "3-D Objects Recognition by Corner-edge Composite Feature in Augmented", ICIC Express Letters *
LI DANDAN: "Wheel Hub Positioning Method Based on Image Matching Technology", China Master's Theses Full-text Database *
ZHAO SHOUWEI et al.: "Research on Evaluation Method of Augmented Reality Assisted Maintenance System", Journal of Gun Launch & Control *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111480348A (en) * | 2017-12-21 | 2020-07-31 | Facebook | System and method for audio-based augmented reality |
CN111480348B (en) * | 2017-12-21 | 2022-01-07 | Facebook | System and method for audio-based augmented reality |
CN109034418A (en) * | 2018-07-26 | 2018-12-18 | State Grid Corporation of China | Operation field information transfer method and system |
CN109218610A (en) * | 2018-08-15 | 2019-01-15 | Beijing Tianyuan Innovation Technology Co., Ltd. | Augmented reality-based operator network resource display method and apparatus |
CN111597674A (en) * | 2019-02-21 | 2020-08-28 | Institute of Software, Chinese Academy of Sciences | Intelligent engine maintenance method based on man-machine cooperation |
CN111597674B (en) * | 2019-02-21 | 2023-07-04 | Institute of Software, Chinese Academy of Sciences | Intelligent engine maintenance method based on man-machine cooperation |
Also Published As
Publication number | Publication date |
---|---|
CN106203446B (en) | 2019-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109829398B (en) | Target detection method in video based on three-dimensional convolution network | |
Luvizon et al. | A video-based system for vehicle speed measurement in urban roadways | |
Torii et al. | 24/7 place recognition by view synthesis | |
CN106897675B (en) | Face living body detection method combining binocular vision depth characteristic and apparent characteristic | |
Rothwell et al. | Planar object recognition using projective shape representation | |
Wu et al. | A practical system for road marking detection and recognition | |
CN107481315A (en) | Monocular vision three-dimensional environment reconstruction method based on Harris-SIFT-BRIEF algorithms | |
CN115717894B (en) | Vehicle high-precision positioning method based on GPS and common navigation map | |
CN106156684B (en) | Two-dimensional code identification method and device | |
CN107481279A (en) | Monocular video depth map computation method | |
CN107977656A (en) | Pedestrian re-identification method and system | |
CN106203446A (en) | Three dimensional object recognition positioning method for augmented reality auxiliary maintaining system | |
CN103679187B (en) | Image-recognizing method and system | |
CN105654122B (en) | Spatial pyramid object recognition method based on kernel function matching | |
Yuan et al. | Learning to count buildings in diverse aerial scenes | |
CN103279738B (en) | Automatic identification method and system for vehicle logo | |
CN111078946A (en) | Bayonet vehicle retrieval method and system based on multi-target regional characteristic aggregation | |
CN107358189B (en) | Object detection method in indoor environment based on multi-view target extraction | |
CN103353941B (en) | Natural marker registration method based on viewpoint classification | |
CN109325487B (en) | Full-category license plate recognition method based on target detection | |
CN105574545A (en) | Multi-view semantic segmentation method and device for environment images | |
CN112668662B (en) | Outdoor mountain forest environment target detection method based on improved YOLOv3 network | |
CN104036494B (en) | Rapid matching computation method for fruit images | |
CN103729850B (en) | Method for linear extraction in panorama | |
Li et al. | Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||