CN102054166A - Scene recognition technology used in outdoor augmented reality system - Google Patents

Scene recognition technology used in outdoor augmented reality system Download PDF

Info

Publication number
CN102054166A
CN102054166A (application CN2010105238910A / CN201010523891A); granted publication CN102054166B
Authority
CN
China
Prior art keywords
scene
augmented reality
submap
reality system
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010105238910A
Other languages
Chinese (zh)
Other versions
CN102054166B (en)
Inventor
王涌天 (Wang Yongtian)
郭俊伟 (Guo Junwei)
陈靖 (Chen Jing)
刘越 (Liu Yue)
吕旸 (Lü Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201010523891.0A priority Critical patent/CN102054166B/en
Publication of CN102054166A publication Critical patent/CN102054166A/en
Application granted granted Critical
Publication of CN102054166B publication Critical patent/CN102054166B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention relates to the technical field of pattern recognition, and in particular to a novel scene recognition technique for outdoor augmented reality systems. The invention first applies a two-level spatial position constraint to reduce the search range of scene recognition, then expresses each scene with two features, texture and contour, and uses a classifier of simple structure to estimate the posterior probability models of the two features, thereby realizing vision-based scene recognition. The proposed technique meets the requirements of real-time performance and a high recognition rate, and is suitable for outdoor augmented reality applications covering large areas with many scenes.

Description

A novel scene recognition technique for outdoor augmented reality systems
Technical field
The present invention relates to pattern recognition technology, and in particular to a novel scene recognition technique for outdoor augmented reality systems.
Background technology
Augmented reality (AR) is an emerging computer application and human-computer interaction technology that has developed alongside virtual reality. It fuses a computer-generated virtual environment with the real scene surrounding the user, so that the user perceives the virtual environment as an integral part of the surrounding reality.
Early augmented reality systems were usually confined to indoor or small-scale outdoor environments, and most research objects were simple scenes with a single target or few targets. In recent years, as AR technology has developed and its range of applications has widened, researchers have begun to study how to apply AR to large-scale, complex, target-rich outdoor environments, and have realized a variety of systems for different applications, mainly including the following:
1. City navigation systems based on augmented reality [see Murphy D, Kahari M. MARA - Sensor Based Augmented Reality System for Mobile Imaging. IEEE International Symposium on Mixed and Augmented Reality 2006 [C]. 2006: 1]. Such systems measure the user's position and orientation in the city with GPS and a compass, navigate for the user by showing where each road leads, and can also measure the user's travel speed.
2. Museum guide systems [see Bruns E, Bimber O, Brombach B. Subobject Detection through Spatial Relationships on Mobile Phones. International Conference on Intelligent User Interfaces (IUI 2009) [C]. 2009]. Such systems locate the visitor in the museum with technologies such as GPS or Bluetooth, determine the user's orientation with a compass or recognize the exhibit the user is looking at with vision techniques, and display to the user various information related to the exhibit, helping the user better understand what is being viewed and making the visit richer and more interesting.
3. Large-scale ancient-site guide and restoration systems [see Guo Junwei, Wang Yongtian, Chen Jing, et al. Augmented Reality Registration Algorithm Applied in Reconstruction of Yuanmingyuan Archaeological Site. International Conference on CAD & CG [C]. 2009: 324-329]. Because physically rebuilding or repairing ancient-site structures is a complicated process that consumes considerable manpower and materials, and can easily damage the ruins themselves, ruins-restoration systems realized with AR technology have appeared in recent years. Such systems locate the visitor with technologies such as GPS or Bluetooth, determine the user's pose with a compass or vision techniques, and present augmented displays of information related to the ancient site, generally including a three-dimensional model of its original appearance and an introduction to its history.
4. Outdoor augmented reality applications on smartphone platforms. These applications free AR technology from the restrictions of the bulky PC platform, consume little power, and require little storage space for the algorithms. An example is Layar, the first AR mobile-phone browser, released this year by a Dutch company and running on the Android platform. The user simply aims the phone camera at a scene of interest: GPS first locates the user's position, the compass determines the direction the camera is facing, and the user then sees information related to the captured scene at the bottom of the screen, including practical information such as nearby houses to let, bar and restaurant discounts, job notices, and ATMs. Development of this class of applications on Apple's iPhone also started early, and many applications already run on the iPhone 3 and iPhone 4 generations of products, such as LondonPipes, with which the user can tour a London street corner while the system recognizes buildings and automatically displays detailed road information to the user.
Outdoor AR applications are characterized by a large user environment and a large number of scenes of interest (for example numerous building targets, a large number of museum exhibit targets, or many ancient-site structures). The first problem such applications must solve is therefore determining in advance which scene the user is currently in, a process usually called scene recognition. Existing methods generally either locate the user with hardware, or recognize, track, and register the user's scene with software-based vision methods. Among the hardware positioning methods, systems such as Layar usually locate the user with GPS or wireless communication technologies such as Bluetooth or WiFi, further determine the user's heading and orientation with sensors such as a compass, and finally decide which scene the user is in. Other systems combine hardware and software methods, for example the museum guide and ruins-restoration systems mentioned above, and the Theodolite system recently developed for the new-generation iPhone 4. These systems first obtain a coarse position with technologies such as GPS and a compass, then use computer vision to recognize the scene accurately and register the user's pose, finally achieving accurate augmented display. Physical sensors, however, suffer from low refresh rates and limited positioning accuracy, while the search range of visual recognition grows as the number of target objects increases, so both the recognition accuracy and the real-time performance of vision methods gradually degrade. Most current systems therefore combine hardware and software positioning to accomplish scene recognition in large outdoor environments. This is also the main problem the present invention addresses: how to combine the two approaches to achieve fast and accurate scene recognition in outdoor augmented reality applications.
Summary of the invention
In view of this, the main purpose of the present invention is to provide a fast and accurate scene recognition method for developing augmented reality technology in large, complex outdoor environments. The method expresses a scene with a combination of two features, contour and texture, and performs supervised learning on the combined features with a classifier of simple structure. At the same time, combined with a two-level spatial-geographic-position constraint, it further narrows the search range for identifying the user's current scene, and completes scene recognition within that range using the visual recognition method set forth above.
Technical scheme of the present invention is:
Because outdoor environments cover a large area and contain many kinds of scenes, the user's position in the environment is first located by GPS, and the spatial position constraint information narrows the search range of the candidate scenes to be identified. A classifier then recognizes the combined contour and texture features of the scene, and the current scene is identified through this combined-feature recognition. The algorithm comprises two parts, off-line preparation and on-line recognition, with the following concrete steps:
(1) the off-line preparatory stage:
Step 1: for the user's environment (for example a campus, a city, or a district), use GPS to measure and calibrate its spatial extent, and represent this extent in longitude and latitude.
Step 2: according to the size of the environment and the spatial density of scenes within it, divide the entire environment into several regions. The longitude-latitude extent of each region is n × m, where the units of n and m are those of longitude and latitude, i.e. degrees (°), minutes (′), and seconds (″). The values of n and m are chosen so that each region contains a certain number of scenes; to ensure the accuracy of visual recognition, the present invention requires this number not to exceed 15 (for example, n = m = 5″ in one campus environment). These regions are called submaps and are numbered submap-1, submap-2, ..., submap-N (N is the total number of regions). (A minimal sketch of this division appears after step 7.)
Step 3: each submap contains a certain number of scenes, and these scenes exist in a fixed spatial order that does not change. Each scene in each submap is therefore given a geographic-position label according to this spatial order and numbered subscene-1, subscene-2, ..., subscene-n (n is the total number of scenes in each region). Then submap-i-subscene-j denotes the j-th scene in the i-th region;
Step 4: for each scene, capture key-frame images from different viewpoints and extract the combined contour and texture features. Train a Ferns classifier on the combined features by supervised learning;
(2) On-line recognition stage:
Step 5: according to the GPS positioning information and the spatial extent of each region calibrated off-line, determine which submap the user's current scene belongs to;
Step 6: if the current frame is the first frame, the system displays all scenes in the region given by the submap information to the user as buttons, the user manually selects the subscene label of the current scene, and scene recognition is finished; if the current frame is not the first frame, the search range for identifying the current subscene is determined from the subscene information of the previous frame;
Step 7: capture the current frame image of the user's current scene, recognize the current scene with the visual method within the search range determined in the previous step, obtain the subscene label of the current scene, and finish scene recognition.
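As an illustration of steps 2, 3, 5, and 7, the following minimal Python sketch (not part of the patent; the function names, grid origin, and scene registry are all assumptions) divides a calibrated latitude-longitude extent into 5″ × 5″ submaps and maps a GPS fix to the submap whose subscenes form the visual search range:

```python
# Minimal sketch of the two-level spatial constraint (assumed names/values).
ARCSEC = 1.0 / 3600.0            # one arc-second, in degrees
CELL = 5 * ARCSEC                # submap size: 5" x 5", as in the example above

def submap_index(lat, lon, lat0, lon0, cols):
    """Map a GPS fix to a submap number, given the calibrated south-west
    corner (lat0, lon0) of the environment and the number of grid columns.
    Submaps are numbered row-major from 1 (submap-1, submap-2, ...)."""
    row = int((lat - lat0) // CELL)
    col = int((lon - lon0) // CELL)
    return row * cols + col + 1

# Each submap lists its scenes in fixed spatial order: subscene-1 ... subscene-n.
# A hypothetical registry for one small environment:
submaps = {1: ["library", "gym"], 2: ["main gate", "clock tower", "cafeteria"]}

i = submap_index(39.9581, 116.3139, lat0=39.9575, lon0=116.3125, cols=4)
candidates = submaps.get(i, [])  # search range handed to the visual recognizer
print(f"submap-{i}:", [f"subscene-{j+1} ({s})" for j, s in enumerate(candidates)])
```

Only the scenes in `candidates` are then scored by the Ferns-based visual method of step 7, which is what keeps recognition fast as the environment grows.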
The novel scene recognition technique for outdoor augmented reality systems of the present invention has the following advantages:
(1) In the technical scheme of the present invention, the user's scene is first expressed with two combined features, texture and contour. Compared with earlier expressions that use a single feature, this improves the expressive power for complex outdoor scenes; in particular, the introduction of the Ferns classifier achieves effective supervised learning of the combined features and faster, more accurate recognition. The scheme can therefore provide efficient and accurate scene recognition results for augmented reality systems in large, complex outdoor environments.
(2) In the technical scheme of the present invention, the introduction of the two-level spatial position constraint and the use of GPS information greatly narrow the search range of vision-based scene recognition, which further shortens the subsequent scene recognition processing time and achieves a higher scene recognition success rate.
Description of drawings
Fig. 1 is the flow chart of the scene recognition algorithm of the present invention.
Fig. 2 shows, for several extracted objects, the LoG feature point extraction results, some texture features, the contour features, and the corresponding PHOG histograms.
Fig. 3 is a schematic diagram of the structural evolution of the Ferns classifier in the present invention.
Fig. 4 is a schematic diagram of the submap spatial-geographic-position labeling performed on a campus environment in the present invention.
Fig. 5 is a schematic diagram of the recognition results obtained after recognizing several scenes in a campus environment according to the proposed method.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings.
Fig. 1 is the flow chart of the scene recognition algorithm of the present invention. The implementation of each key step in the algorithm flow is introduced below.
Fig. 2 shows the LoG feature points, some texture features, and the contour features of several objects in the present invention, together with their PHOG histograms.
Scenes in outdoor environments are complex, and different scenes have different characteristic features. For example, the contour features of objects such as buildings and vehicles are more salient than their texture features (as shown in Fig. 2); if only texture features were used to distinguish the two automobiles in Fig. 2, a correct result might not be obtained. A texture feature is usually represented by an image patch centered on a feature point (such as the patches centered on the red feature points shown in Fig. 2). The present invention proposes to describe a scene with a combination of two features, texture and contour. The feature points required for texture extraction are the widely used LoG (Laplacian-of-Gaussian) points, which have been shown to be the most stable local feature point operator available. When using LoG features, the Laplacian-of-Gaussian multi-scale space of the image is computed first, the scale at which the Laplacian response of the image is maximal is selected, and feature points are then extracted at that scale. The contour feature uses the PHOG (Pyramid Histogram of Oriented Gradients) operator, an improvement of the HOG (Histogram of Oriented Gradients) operator. PHOG divides the gradient values of edges into n bins according to gradient direction, and adds a spatial pyramid on top of the gradient histogram, further improving the stability of the feature: to build the spatial pyramid, at pyramid level l the image is divided into 2^l × 2^l subregions and a separate histogram is computed over each subregion.
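For concreteness, a minimal NumPy sketch of a PHOG-style descriptor in the spirit of the above description follows (an illustration only: the bin count, pyramid depth, and function name are assumptions, and the patent's exact implementation may differ):

```python
import numpy as np

def phog(gray, n_bins=8, levels=3):
    """PHOG-style descriptor sketch: orientation histograms of edge
    gradients, concatenated over a spatial pyramid with 2^l x 2^l cells
    at level l. `gray` is a 2-D array of gray values."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)

    h, w = gray.shape
    hist = []
    for l in range(levels):
        cells = 2 ** l
        for i in range(cells):
            for j in range(cells):
                ys = slice(i * h // cells, (i + 1) * h // cells)
                xs = slice(j * w // cells, (j + 1) * w // cells)
                # magnitude-weighted orientation histogram of this cell
                hist.append(np.bincount(bins[ys, xs].ravel(),
                                        weights=mag[ys, xs].ravel(),
                                        minlength=n_bins))
    v = np.concatenate(hist)
    return v / (v.sum() + 1e-12)                     # normalized descriptor
```

Concatenating the per-cell histograms over all pyramid levels is what gives PHOG its spatial stability relative to a single global HOG histogram.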
As shown by the binary-tree structure in Fig. 3, the Ferns classifier used in the present invention is an improvement of the randomized tree classifier (Randomized Trees). It is based on the binary tree, but replaces the hierarchical structure of the random tree with a flat structure: an appropriate test is set at each child node so that the feature sample set is partitioned rationally, the posterior probability distribution of each feature class is finally accumulated at each leaf node, and the target to be recognized is classified by seeking the maximum posterior probability.
The Ferns algorithm treats each image patch in the image, together with the patches obtained from it under various image transformations, as one class; recognizing an image patch with the Ferns classifier amounts to finding the class most similar to that patch. Let $c_k$ ($k = 1, 2, 3, \ldots, L$, with $L$ classes in total) denote the $k$-th class, and let $t_j$ ($j = 1, 2, 3, \ldots, M$) denote the set of binary tests performed at the child nodes for classification. The criterion for classifying an image patch is then:

$$\hat{c}_k = \arg\max_{c_k} P(C = c_k \mid t_1, t_2, \ldots, t_M) \qquad (1)$$

where $C$ is a random variable representing an arbitrary class.
According to Bayes' theorem, $P(C = c_k \mid t_1, t_2, \ldots, t_M)$ can be computed as:

$$P(C = c_k \mid t_1, t_2, \ldots, t_M) = \frac{P(t_1, t_2, \ldots, t_M \mid C = c_k)\,P(C = c_k)}{P(t_1, t_2, \ldots, t_M)} \qquad (2)$$
Since $P(t_1, t_2, \ldots, t_M)$ is a factor independent of the class, formula (1) simplifies to:

$$\hat{c}_k = \arg\max_{c_k} P(t_1, t_2, \ldots, t_M \mid C = c_k) \qquad (3)$$
Each binary test $t_j$ depends only on the gray values of two pixels $d_{j,1}$ and $d_{j,2}$ of the image patch, and can be expressed as:

$$t_j = \begin{cases} 0 & \text{if } I(d_{j,1}) - I(d_{j,2}) \ge 0 \\ 1 & \text{otherwise} \end{cases} \qquad (4)$$

where $I$ denotes the pixel gray value, and the pixels $d_{j,1}$ and $d_{j,2}$ are picked at random in advance.
Because $t_j$ is a rather simple kind of test, a large number of tests must be performed to achieve accurate classification. As a consequence, expressing the joint probability density of formula (3) accurately requires the algorithm to store on the order of $2^N$ values for each class. When $N$ is small, the storage space and processing time required are modest; but as $N$ grows, the required storage and processing time increase sharply, and the real-time requirement can no longer be met. The Ferns method therefore divides the above tests into $Z$ groups, each of size $S$, with $S = N/Z$. The conditional probability in formula (3) can then be expressed as:

$$P(t_1, t_2, \ldots, t_M \mid C = c_k) = \prod_{a=1}^{Z} P(F_a \mid C = c_k) \qquad (5)$$

where $F_a = \{t_{\sigma(a,1)}, t_{\sigma(a,2)}, \ldots, t_{\sigma(a,S)}\}$, $a = 1, 2, \ldots, Z$, denotes one Fern, and $\sigma(\cdot)$ is a random permutation function with range $1 \sim N$. By grouping the child-node tests, the Ferns algorithm reduces the number of parameters from the original $2^N$ to $Z \times 2^S$, which not only makes the computation of the conditional probability simpler, but also gives a very pronounced speed and storage advantage when handling fairly large-scale classification and recognition.
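To make the saving concrete: with N = 300 tests split into Z = 30 Ferns of S = 10 tests each, the per-class table shrinks from 2^300 joint entries to 30 × 2^10 = 30720 leaf posteriors. The following minimal sketch illustrates Fern evaluation under these ideas (the structure, names, and patch size are assumptions, not the patent's code):

```python
import numpy as np

# Illustrative Fern evaluation (assumed structure). Each Fern holds S random
# pixel pairs; its S binary tests (eq. 4) form an S-bit leaf index d in
# [0, 2^S), which selects a stored class posterior.
rng = np.random.default_rng(0)
S, Z, PATCH = 10, 30, 32                 # fern size, fern count, patch side (assumed)

# Z ferns, each with S pre-picked pixel pairs (d_{j,1}, d_{j,2})
pairs = rng.integers(0, PATCH, size=(Z, S, 2, 2))

def leaf_index(patch, fern):
    """Apply the S binary tests of one fern to a patch -> leaf index d."""
    d = 0
    for (y1, x1), (y2, x2) in pairs[fern]:
        d = (d << 1) | bool(patch[y1, x1] < patch[y2, x2])   # test t_j, eq. (4)
    return d

def classify(patch, log_post):
    """log_post[fern, leaf, class] holds log P(F_a = d | C = c_k).
    Eq. (5): the joint likelihood is the product over ferns, i.e. the sum of
    log-posteriors; return the argmax class as in eq. (3)."""
    score = sum(log_post[a, leaf_index(patch, a)] for a in range(Z))
    return int(np.argmax(score))
```

Evaluating Z small tables instead of one $2^N$-entry table is exactly the point of the grouping in formula (5).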
In the off-line stage, for each Fern the algorithm estimates the conditional probability $P(F_z \mid C = c_k)$ by supervised learning, as shown in the formula:

$$p_{d,c_k} = P(F_z = d \mid C = c_k) \qquad (6)$$

The total number of binary outcomes each Fern can produce is $D = 2^S$, and the conditional probabilities $p_{d,c_k}$ must satisfy the condition:

$$\sum_{d=1}^{D} p_{d,c_k} = 1 \qquad (7)$$
For a feature class $c_k$, the classifier performs supervised learning over all image patch samples belonging to that class, and the posterior probability of the class is finally computed at each leaf node in the following way:

$$p_{d,c_k} = \frac{N_{d,c_k} + 1}{N_{c_k} + D} \qquad (8)$$

where $N_{d,c_k}$ is the number of image patch samples belonging to class $c_k$ that fall into the $d$-th leaf node, and $N_{c_k}$ is the total number of image patch samples belonging to class $c_k$. The above training process is repeated for each Fern until all Ferns are trained.
In the on-line stage, each image patch extracted from the current frame is fed into the Ferns classifier, and the class of the patch is determined from the posterior probabilities at the leaf node it finally reaches.
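Continuing the same sketch, the Laplace-smoothed estimate of formula (8) can be illustrated as follows (again an assumption-laden illustration, reusing `leaf_index`, `S`, `Z`, and NumPy from the previous sketch):

```python
def train(patches, labels, n_classes):
    """Estimate log P(F_a = d | C = c_k) per eqs. (6)-(8), using the
    +1 / +D Laplace smoothing of eq. (8). `patches` are training image
    patches and `labels` their class indices (assumed helper names)."""
    D = 2 ** S
    counts = np.zeros((Z, D, n_classes))            # N_{d, c_k} per fern
    for patch, k in zip(patches, labels):
        for a in range(Z):
            counts[a, leaf_index(patch, a), k] += 1
    n_ck = counts.sum(axis=1, keepdims=True)        # N_{c_k} per fern and class
    return np.log((counts + 1.0) / (n_ck + D))      # eq. (8), in log space
```

The +1/+D smoothing keeps every leaf posterior strictly positive, so leaves never seen in training do not zero out the product in formula (5).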
Based on the posterior probability computation of the Ferns classifier introduced above, the present invention estimates the probability models of the texture and contour features in the following way.
Let $\Omega = \{\omega_1, \omega_2, \ldots, \omega_n\}$ be the set containing all scene classes, with $n$ scene classes in total. Let $F_{texture} = \{F_{T1}, F_{T2}, \ldots, F_{Tn}\}$ denote the set of texture features of the $n$ scenes, where $F_{Ti}$ is the feature point set of the $i$-th scene and $m_i$ is the number of feature points of the $i$-th scene, and let $F_{shape} = \{f_{s1}, f_{s2}, \ldots, f_{sn}\}$ denote the set of contour features of the $n$ scenes. According to the Bayesian criterion, the class $\omega^*$ of the current scene is the class with the maximum posterior probability among all classes:
$$\omega^* = \arg\max_{\omega_i \in \Omega} \frac{1}{T} \sum_{t=1}^{T} P_{t,l}(\omega_i \mid F_{texture}^{obs}, f_{shape}^{obs}) \qquad (9)$$

where $F_{texture}^{obs}$ is the texture feature of the current scene, $f_{shape}^{obs}$ represents the contour of the current scene, and $l$ denotes the number of the leaf node reached by the combined feature in the $t$-th random tree. According to Bayes' theorem:

$$P_{t,l}(\omega_i \mid F_{texture}^{obs}, f_{shape}^{obs}) = \frac{P_{t,l}(F_{texture}^{obs}, f_{shape}^{obs} \mid \omega_i)\,P(\omega_i)}{\sum_{i=1}^{n} P_{t,l}(\omega_i)} \propto P_{t,l}(F_{texture}^{obs}, f_{shape}^{obs} \mid \omega_i)\,P(\omega_i) \qquad (10)$$
Assume that the texture features and the contour feature of the same scene are independent, that the $m_0$ texture features are divided evenly into $M_o$ groups each containing $m_0 / M_o$ features, and that the groups of texture features are mutually independent. Then:

$$P_{t,l}(F_{texture}^{obs}, f_{shape}^{obs} \mid \omega_i) = P_{t,l}(F_{texture}^{obs} \mid \omega_i)\,P_{t,l}(f_{shape}^{obs} \mid \omega_i) = \left( \prod_{k=1}^{M_o} P_{t,l}(F_{to}^{k} \mid \omega_i) \right) P_{t,l}(f_{shape}^{obs} \mid \omega_i) \qquad (11)$$
where $F_{to}^{k}$, $k = 1, \ldots, M_o$, denotes the $k$-th feature group (Fern layer) and $\sigma(k, s)$ is a random mapping function with range $1$ to $m_0$. Then:

$$\omega^* = \arg\max_{\omega_i \in \Omega} \frac{1}{T} \sum_{t=1}^{T} \left( \prod_{k=1}^{M_o} P_{t,l}(F_{to}^{k} \mid \omega_i)\,P_{t,l}(f_{shape}^{obs} \mid \omega_i) \right) P(\omega_i) \qquad (12)$$
Assume each scene appears with the same probability, i.e. $P(\omega_i)$ is uniformly distributed, and estimate with the Ferns method described above the probability distribution of the expression:

$$\prod_{k=1}^{M_o} P_{t,l}(F_{to}^{k} \mid \omega_i)\,P_{t,l}(f_{shape}^{obs} \mid \omega_i) \qquad (13)$$
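As an illustration of the scoring rule (12) under the uniform prior just assumed, a minimal sketch follows; the array names and shapes are assumptions, with the per-tree leaf posteriors taken as already computed:

```python
import numpy as np

def classify_scene(tex_logp, shape_logp):
    """Scene scoring per eq. (12): for each random tree t, form the product
    of the M_o texture-group posteriors and the contour posterior at the
    reached leaf (exp of summed logs), average the per-tree products over
    the T trees, and return the maximum-posterior scene class.
    tex_logp  : (T, M_o, n_scenes) log P_{t,l}(F_to^k | omega_i)
    shape_logp: (T, n_scenes)      log P_{t,l}(f_shape^obs | omega_i)
    The uniform prior P(omega_i) cancels in the argmax."""
    per_tree = np.exp(tex_logp.sum(axis=1) + shape_logp)   # (T, n_scenes)
    return int(np.argmax(per_tree.mean(axis=0)))
```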
Each child node in the Ferns classifier performs its test in the following way:

$$f_{texture} = \begin{cases} 1 & \text{if } I_i < I_j \\ 0 & \text{otherwise} \end{cases} \qquad\quad f_{shape} = \begin{cases} 1 & \text{if } w^{T}x + b < 0 \\ 0 & \text{otherwise} \end{cases} \qquad (14)$$

where $w$ is a vector of the same dimension $n$ as the contour feature vector $x$; in each test, $a$ components of $w$ are picked at random ($a \in [1, n]$), the component indices are random, the component values lie in $[-1, 1]$, and $b \in (0, \ldots)$.
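A minimal sketch of the two node tests of formula (14) follows; the helper names and the upper bound for b (which is not recoverable from the source) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_texture_test(patch_side):
    """Texture test of eq. (14): compare the gray values of two
    randomly pre-picked pixels i, j of the patch."""
    (yi, xi), (yj, xj) = rng.integers(0, patch_side, size=(2, 2))
    return lambda patch: int(patch[yi, xi] < patch[yj, xj])

def make_shape_test(n, b_max=1.0):
    """Contour test of eq. (14): sign of w^T x + b for a sparse random w.
    b_max is an assumption; the source does not give the upper bound."""
    a = rng.integers(1, n + 1)                   # number of active components
    w = np.zeros(n)
    idx = rng.choice(n, size=a, replace=False)   # random component indices
    w[idx] = rng.uniform(-1.0, 1.0, size=a)      # component values in [-1, 1]
    b = rng.uniform(0.0, b_max)
    return lambda x: int(w @ x + b < 0)
```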
Fig. 4 shows the result of the two-level spatial-geographic-information labeling performed on a campus according to the proposed method. The recognition rate of current purely vision-based scene recognition algorithms does not exceed 75%, and improving it further requires considerably more complex recognition algorithms. In practice, however, an augmented reality application in a large outdoor environment can make full use of spatial relationships and geographic position information: such information constrains and narrows the search range of the scenes to be identified and significantly improves the recognition success rate.
The present invention proposes a two-level spatial constraint to narrow the search range of scene recognition. The specific practice is as follows: first, the user's position in the environment is coarsely located with GPS. The GPS system used provides the longitude and latitude of the user's position with a measurement accuracy of 0.01″. The larger outdoor environment (for example the campus environment shown in Fig. 4) is therefore divided into local regions of 5″ × 5″; each such local region is called a submap and is numbered, submap-i denoting the i-th 5″ × 5″ local region (as in the several submap regions into which the campus environment in Fig. 4 is divided). The system uses the positioning information provided by GPS to calibrate accurately the longitude-latitude extent corresponding to each submap (the yellow grid lines and corresponding longitudes and latitudes shown in Fig. 4).
Each submap contains a certain number of scenes, denoted V (V ≥ 1). Sometimes the value of V in a submap is large, and scene recognition within that range still cannot guarantee a 100% correct recognition rate. The present invention therefore marks each scene in a submap with geographic information: each scene is given a label according to the order of adjacent positions, denoted subscene-j; for example, the label submap-3-subscene-3 denotes the third scene in the third region.
When the system runs in real time, GPS first locates the user's current position and determines the submap the user is in; the system combines the scene geographic-information labels in that submap to narrow the search range of the user's current scene, and then uses the visual method to identify the current scene within that range.
Even within a small range, the visual recognition method can occasionally fail. To solve this problem, after recognizing the scene with the visual method the algorithm sorts the recognition results by similarity and shows the top-ranked scenes to the user as alternatives, so that the user can select the correct current scene on noticing a recognition mistake.
Fig. 5 shows the recognition results obtained after recognizing several scenes in a campus environment according to the proposed method; each result includes the measured two-level spatial geographic information of the scene and the scene recognition result expressed as a label.
The novel scene recognition technique for outdoor augmented reality systems provided by the present invention has been described in detail above, and the principle and embodiments of the present invention have been set forth herein. The above description is only intended to help understand the method of the present invention and its core concept. Meanwhile, those of ordinary skill in the art may, according to the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, this description should not be construed as limiting the present invention.

Claims (8)

1. A novel scene recognition technique for an outdoor augmented reality system, characterized by comprising the following steps:
(1) the off-line preparatory stage:
Step 1: for the user's environment (for example a campus, a city, or a district), use GPS to measure and calibrate its spatial extent, and represent this extent in longitude and latitude.
Step 2: according to the size of the environment and the spatial density of scenes within it, divide the entire environment into several regions, the longitude-latitude extent of each region being n × m, where the units of n and m are those of longitude and latitude, i.e. degrees (°), minutes (′), and seconds (″); the values of n and m are chosen so that each region contains a certain number of scenes, and to ensure the accuracy of visual recognition this number does not exceed 15 (for example, n = m = 5″ in one campus environment); these regions are called submaps and are numbered submap-1, submap-2, ..., submap-N (N is the total number of regions).
Step 3: each submap contains a certain number of scenes, which exist in a fixed spatial order that does not change; each scene in each submap is therefore given a geographic-position label according to this spatial order and numbered subscene-1, subscene-2, ..., subscene-n (n is the total number of scenes in each region); then submap-i-subscene-j denotes the j-th scene in the i-th region.
Step 4: for each scene, capture key-frame images from different viewpoints and extract the combined contour and texture features; train a Ferns classifier on the combined features by supervised learning.
(2) the ONLINE RECOGNITION stage:
Step 5: according to the GPS positioning information and the spatial extent of each region calibrated off-line, determine which submap the user's current scene belongs to.
Step 6: if the current frame is the first frame, the system displays all scenes in the region given by the submap information to the user as buttons, the user manually selects the subscene label of the current scene, and scene recognition is finished; if the current frame is not the first frame, the search range for identifying the current subscene is determined from the subscene information of the previous frame.
Step 7: capture the current frame image of the user's current scene, recognize the current scene with the visual method within the search range determined in the previous step, obtain the subscene label of the current scene, and finish scene recognition.
2. The novel scene recognition technique for an outdoor augmented reality system according to claim 1, characterized in that: in step 1, GPS is used to measure and calibrate the spatial extent of the environment, represented in longitude and latitude; the means of measuring this spatial position are not limited to GPS, and wireless position-measurement methods such as Bluetooth or WiFi may also be used.
3. The novel scene recognition technique for an outdoor augmented reality system according to claim 1, characterized in that: in step 2, the entire environment is divided into regions according to the size of the entire environment and the spatial density of scenes within it; the way each region is divided and the units used to represent it vary with the measurement means used in step 1, for example when GPS is the measurement means, degrees (°), minutes (′), and seconds (″) are used as the units representing the regional extent.
4. The novel scene recognition technique for an outdoor augmented reality system according to claim 1, characterized in that: in steps 2 and 3, two-level spatial-geographic-position information is used to represent each scene in the environment; in this two-level representation, the first-level information is the spatial-region information of the scene's location, whose form of expression is determined by the measurement means of step 1, and the second-level information represents the scene within the spatial region where it lies, determined by the adjacency relations between this scene and the other scenes in the region.
5. The novel scene recognition technique for an outdoor augmented reality system according to claim 1, characterized in that: in step 4, to express the scene more accurately, combined contour and texture features are used to express the scene, and this combined feature may also combine more kinds of features; to perform supervised learning and real-time recognition on the combined feature, a classifier of comparatively simple structure is used to train and recognize it, and the classifier here may be Ferns or any other classifier capable of real-time feature recognition.
6. The novel scene recognition technique for an outdoor augmented reality system according to claim 1, characterized in that: in step 5, the user's current position is measured with spatial-position measurement means consistent with those used in steps 1 and 2.
7. The novel scene recognition technique for an outdoor augmented reality system according to claim 1, characterized in that: in step 6, the position of the current scene is identified with the second-level representation consistent with the scene spatial positions of step 3.
8. The novel scene recognition technique for an outdoor augmented reality system according to claim 1, characterized in that: in step 7, the search range for recognizing the current scene is narrowed by the two-level spatial position information, and the final recognition of the scene is then completed with the computer-vision recognition method consistent with step 4.
After the above processing, recognition of the current scene in an augmented reality system under an outdoor environment can be realized.
CN201010523891.0A 2010-10-25 2010-10-25 A novel scene recognition method for an outdoor augmented reality system Active CN102054166B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010523891.0A CN102054166B (en) 2010-10-25 2010-10-25 A novel scene recognition method for an outdoor augmented reality system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010523891.0A CN102054166B (en) 2010-10-25 2010-10-25 A novel scene recognition method for an outdoor augmented reality system

Publications (2)

Publication Number Publication Date
CN102054166A true CN102054166A (en) 2011-05-11
CN102054166B CN102054166B (en) 2016-04-27

Family

ID=43958466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010523891.0A Active CN102054166B (en) 2010-10-25 2010-10-25 A novel scene recognition method for an outdoor augmented reality system

Country Status (1)

Country Link
CN (1) CN102054166B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663448A (en) * 2012-03-07 2012-09-12 北京理工大学 Network based augmented reality object identification analysis method
CN103118050A (en) * 2011-11-17 2013-05-22 上海贝尔股份有限公司 Method, equipment and system of realizing data aggregation based on data enhancement technology
CN103577788A (en) * 2012-07-19 2014-02-12 华为终端有限公司 Augmented reality realizing method and augmented reality realizing device
CN103903013A (en) * 2014-04-15 2014-07-02 复旦大学 Optimization algorithm of unmarked flat object recognition
CN103968824A (en) * 2013-01-28 2014-08-06 华为终端有限公司 Method for discovering augmented reality target, and terminal
CN105447460A (en) * 2015-11-20 2016-03-30 联想(北京)有限公司 Information processing method and electronic equipment
CN106127166A (en) * 2016-06-28 2016-11-16 广东欧珀移动通信有限公司 A kind of augmented reality AR image processing method, device and intelligent terminal
CN106528800A (en) * 2016-11-11 2017-03-22 叶火 Image generation method and apparatus based on real scenes
CN106529452A (en) * 2016-11-04 2017-03-22 重庆市勘测院 Mobile intelligent terminal building rapid identification method based on building three-dimensional model
CN108600634A (en) * 2018-05-21 2018-09-28 Oppo广东移动通信有限公司 Image processing method and device, storage medium, electronic equipment
CN111932680A (en) * 2020-06-16 2020-11-13 厦门大学 Urban geomorphic element identification method and system based on VR (virtual reality) depth visual perception

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0944686A (en) * 1995-07-27 1997-02-14 Sanyo Electric Co Ltd Method for automatically generating standard data for part location recognition
CN201374082Y (en) * 2009-03-24 2009-12-30 上海水晶石信息技术有限公司 Augmented reality system based on image unique point extraction and random tree classification
US7739033B2 (en) * 2004-06-29 2010-06-15 Sony Corporation Information processing device and method, program, and information processing system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0944686A (en) * 1995-07-27 1997-02-14 Sanyo Electric Co Ltd Method for automatically generating standard data for part location recognition
US7739033B2 (en) * 2004-06-29 2010-06-15 Sony Corporation Information processing device and method, program, and information processing system
CN201374082Y (en) * 2009-03-24 2009-12-30 上海水晶石信息技术有限公司 Augmented reality system based on image unique point extraction and random tree classification

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIAN Jun et al.: "Constrained Submap Algorithm for Simultaneous Localization and Mapping", Journal of Shanghai Jiaotong University (Science), 5 October 2009 (2009-10-05), pages 600-605 *
康绍鹏 (Kang Shaopeng): "增强现实关键技术研究" (Research on Key Technologies of Augmented Reality), China Master's Theses Full-text Database, Information Science and Technology, 31 October 2009 (2009-10-31), pages 138-726 *
陈靖等 (Chen Jing et al.): "适用于户外增强现实系统的混合跟踪定位算法" (Hybrid tracking and registration algorithm for outdoor augmented reality systems), Journal of Computer-Aided Design & Computer Graphics, vol. 22, no. 2, 28 February 2010 (2010-02-28), pages 204-209 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103118050A (en) * 2011-11-17 2013-05-22 上海贝尔股份有限公司 Method, equipment and system of realizing data aggregation based on data enhancement technology
CN102663448B (en) * 2012-03-07 2016-08-10 北京理工大学 Network-based augmented reality object identification and analysis method
CN102663448A (en) * 2012-03-07 2012-09-12 北京理工大学 Network based augmented reality object identification analysis method
US9607222B2 (en) 2012-07-19 2017-03-28 Huawei Device Co., Ltd. Method and apparatus for implementing augmented reality
CN103577788A (en) * 2012-07-19 2014-02-12 华为终端有限公司 Augmented reality realizing method and augmented reality realizing device
CN103968824A (en) * 2013-01-28 2014-08-06 华为终端有限公司 Method for discovering augmented reality target, and terminal
US9436874B2 (en) 2013-01-28 2016-09-06 Huawei Device Co., Ltd. Method for discovering augmented reality object, and terminal
CN103903013A (en) * 2014-04-15 2014-07-02 复旦大学 Optimization algorithm of unmarked flat object recognition
CN105447460A (en) * 2015-11-20 2016-03-30 联想(北京)有限公司 Information processing method and electronic equipment
CN105447460B (en) * 2015-11-20 2019-05-31 联想(北京)有限公司 Information processing method and electronic equipment
CN106127166A (en) * 2016-06-28 2016-11-16 广东欧珀移动通信有限公司 Augmented reality (AR) image processing method and apparatus, and intelligent terminal
CN106529452A (en) * 2016-11-04 2017-03-22 重庆市勘测院 Mobile intelligent terminal building rapid identification method based on building three-dimensional model
CN106528800A (en) * 2016-11-11 2017-03-22 叶火 Image generation method and apparatus based on real scenes
CN106528800B (en) * 2016-11-11 2019-10-08 叶一火 Image generation method and apparatus based on real scenes
CN108600634A (en) * 2018-05-21 2018-09-28 Oppo广东移动通信有限公司 Image processing method and device, storage medium, electronic equipment
CN111932680A (en) * 2020-06-16 2020-11-13 厦门大学 Urban geomorphic element identification method and system based on VR (virtual reality) depth visual perception
CN111932680B (en) * 2020-06-16 2022-06-28 厦门大学 Urban geomorphic element identification method and system based on VR (virtual reality) depth visual perception

Also Published As

Publication number Publication date
CN102054166B (en) 2016-04-27

Similar Documents

Publication Publication Date Title
CN102054166A (en) Scene recognition technology used in outdoor augmented reality system
CN101976461A (en) Novel outdoor augmented reality label-free tracking registration algorithm
Kim et al. Robust vehicle localization using entropy-weighted particle filter-based data fusion of vertical and road intensity information for a large scale urban area
CN100533486C (en) Digital city full-automatic generating method
Orellana et al. Exploring visitor movement patterns in natural recreational areas
Wieland et al. Estimating building inventory for rapid seismic vulnerability assessment: Towards an integrated approach based on multi-source imaging
KR101566022B1 (en) 3D GIS including sensor map based festival vistor statistics management system and method
Shirowzhan et al. Data mining for recognition of spatial distribution patterns of building heights using airborne lidar data
CN106647742A (en) Moving path planning method and device
CN108305260A (en) Detection method, device and the equipment of angle point in a kind of image
CN104281991A (en) Smart community three-dimensional monitoring platform and method
CN116227834A (en) Intelligent scenic spot digital platform based on three-dimensional point cloud model
Payet et al. Scene shape from texture of objects
CN111782741A (en) Interest point mining method and device, electronic equipment and storage medium
Wu et al. Automatic building rooftop extraction using a digital surface model derived from aerial stereo images
Karl et al. A technique for estimating rangeland canopy-gap size distributions from high-resolution digital imagery
Shirowzhan et al. Developing metrics for quantifying buildings’ 3D compactness and visualizing point cloud data on a web-based app and dashboard
CN116340563A (en) Urban scene geographic position positioning method with pattern matching
Ramalingam et al. Automatizing the generation of building usage maps from geotagged street view images using deep learning
Gruen et al. Perspectives in the reality-based generation, n D modelling, and operation of buildings and building stocks
Lazern et al. Automatic landmark identification in large virtual environment: a spatial data mining approach
Nandi et al. Geographical Information System (GIS) in water resources engineering
CN111639672A (en) Deep learning city functional area classification method based on majority voting
Musungu Assessing spatial data quality of participatory GIS studies: A case study in Cape Town
Gaurav et al. RainRoof: Automated Shared Rainwater Harvesting Prediction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant