CN103162682A - Indoor path navigation method based on mixed reality - Google Patents

Indoor path navigation method based on mixed reality

Info

Publication number
CN103162682A
CN103162682A (application CN201110406696.4A)
Authority
CN
China
Prior art keywords
image sequence
mobile terminal
mixed reality
path navigation
method based
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011104066964A
Other languages
Chinese (zh)
Other versions
CN103162682B (en)
Inventor
宋小波
李芬
刘百辰
周培莹
赵江海
何锋
王敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS filed Critical Hefei Institutes of Physical Science of CAS
Priority to CN201110406696.4A priority Critical patent/CN103162682B/en
Publication of CN103162682A publication Critical patent/CN103162682A/en
Application granted granted Critical
Publication of CN103162682B publication Critical patent/CN103162682B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Navigation (AREA)

Abstract

The invention discloses an indoor path navigation method based on mixed reality, involving a mobile terminal, a wireless local area network, a remote server, and a plurality of remote PCs; the mobile terminal comprises a mobile camera and a display screen, and the remote PCs are connected via a wired network. The method comprises the following steps: 1) a destination is input on the mobile terminal and transmitted to a remote PC through the wireless local area network; 2) the mobile terminal forms an image sequence from the current images captured by the camera and transmits it to the remote PC through the wireless local area network; 3) the remote PC sequentially performs three steps, namely reliable marker recognition, image sequence matching, and position model screening, on the received image sequence to calculate the user's current position information; and 4) the remote PC calculates a navigation path from the user's current position information and the destination, overlays the path navigation in 3D using mixed reality technology, and feeds the overlaid path navigation back to the display screen of the mobile terminal through the wireless local area network.

Description

Indoor path navigation method based on mixed reality
Technical field
The present invention relates to an indoor path navigation method based on mixed reality, and belongs to the field of path navigation.
Technical background
With the rapid progress of urban construction, modern high-rise buildings have sprung up everywhere, and their internal structure and function are increasingly intricate. When people find themselves in such an unfamiliar and complex environment and need to locate a specific target, for example in office buildings, exhibition halls, campuses, hospitals, museums, concert halls, press conferences, and large exhibitions, indoor path navigation becomes especially important: it guides users unfamiliar with the area to a route leading to their destination, and supports rapid evacuation of indoor crowds and emergency rescue in emergencies.
Chinese patent application CN200810060078.7, "Indoor positioning method based on wireless sensor networks", discloses a hybrid indoor positioning method consisting of three parts: beacon nodes, blind nodes, and a sink node. Beacon nodes are sensor nodes with known positions; blind nodes are the sensor nodes to be positioned, which communicate with nearby beacon nodes and perform coarse localization using a distributed positioning method; the sink node receives the blind nodes' coarse positions and applies a centralized optimization algorithm to locate the target precisely. However, this method requires deploying a large number of sensors indoors and cannot display positions in three dimensions. Chinese patent application CN201010518905.X, "A mobile terminal with indoor navigation function and an indoor navigation method", describes a mobile terminal comprising an accelerometer module, an electronic compass module, a pressure detection module, and a control module, with the first three modules each connected to the control module. The accelerometer module obtains the distance the terminal travels, the electronic compass module obtains its heading, and the pressure detection module obtains the atmospheric pressure; from the distance, heading, and pressure the control module derives the terminal's three-dimensional indoor position, achieving stereoscopic indoor navigation, but accuracy and imaging are strongly affected by the environment.
Chinese patent CN201010518905.X, "Providing positioning and navigation inside buildings", discloses a method for positioning and navigating an electronic device inside a building when GPS signals are unavailable. The device scans available wireless LAN access points and downloads location information to realize positioning and navigation, realistically displaying the user's current indoor position and an available guidance path in real time. However, this patent displays positions only on a plane and cannot show them in three dimensions.
With the development of augmented reality (AR) and augmented virtuality (AV), collectively referred to as mixed reality (MR), virtual three-dimensional scenes with a high sense of presence can be constructed for the user.
Users today increasingly need a method of stereoscopic indoor navigation that can display the current position in three dimensions and help them find a route to the destination more quickly.
Summary of the invention
The object of the present invention is to overcome the above shortcomings by providing an indoor path navigation method based on mixed reality.
The technical scheme of the present invention is realized as follows:
An indoor path navigation method based on mixed reality involves a mobile terminal 7, a wireless local area network 3, a remote server 4, and a plurality of remote PCs 5; the mobile terminal 7 comprises a mobile camera 1 and a display screen 2, and the remote PCs 5 are interconnected via a wired network 6. The method comprises the following steps: (1) a destination is input on the mobile terminal 7 and transmitted over the wireless local area network 3 to a remote PC 5; (2) the mobile terminal 7 forms an image sequence from the current images captured by the camera 1 and transmits it over the wireless local area network 3 to the remote PC 5; (3) the remote PC 5 performs three steps in sequence on the received image sequence, namely reliable marker recognition, image sequence matching, and position model screening, to calculate the user's current position information; (4) the remote PC 5 calculates a navigation path from the user's current position information and the destination, overlays the path navigation in 3D using mixed reality technology, and feeds it back over the wireless local area network 3 to the display screen 2 of the mobile terminal 7.
In step (3) above, during reliable marker recognition, the input image sequence is threshold-segmented into a binary image to highlight the target marker; features are extracted from the binary image and feature registration is performed. If registration succeeds, a position marker has been recognized and preliminary localization is complete; if registration fails, the method returns to step (2).
In the reliable marker recognition process, an adaptive thresholding algorithm for varying illumination conditions generates the binary image; the threshold is re-estimated once every 1-10 seconds or every 64 frames, and the remaining unmarked frames are segmented with the most recently computed fixed threshold.
In the image sequence matching process, the image sequence library is built from the image sequences captured as the user moves indoors: frames are first sampled continuously at 8-60 frames per second, and the library is then built at 8 frames per second.
In the fast, accurate positioning process, the input RGB color space is converted to the HSI color space, the first five LAD features are extracted from the HSI color space, and the current position is determined by computing Euclidean distances.
The image sequence matching requires computing the similarity between the feature vector of the input image sequence and the feature vectors of the image sequences of the n positions in the position structure model library on the remote PC 5.
In step (4) above, a pyramidal Lucas-Kanade (L-K) optical flow algorithm tracks the feature points, and the RANSAC algorithm is applied to the optical flow tracking results to obtain the homography matrix.
The above technical scheme yields the following beneficial effects:
(1) The invention uses the camera of the mobile terminal to capture images automatically and, by means of mixed reality technology, realistically displays the user's current indoor position and guidance path on the display screen, so the user obtains intuitive navigation guidance.
(2) The indoor navigation method imposes no requirements on the interior structure or the size of the navigable area and relies only on camera images to complete positioning; it can therefore provide a convenient and economical indoor path navigation service for users unfamiliar with the area.
(3) The adaptive thresholding algorithm for varying illumination conditions generates a binary image that condenses the grayscale information of the image, greatly reducing the computational load of the algorithm and, more importantly, improving the accuracy of subsequent image matching; segmenting the remaining unmarked frames with the most recent fixed threshold further reduces system computation.
(4) The invention uses chromaticity features to strengthen adaptation to light intensity, reducing the sensitivity of the true color space to color changes.
(5) Because different algorithms are selected for different processing stages according to their characteristics, the real-time performance and jitter problems of the tracking process are well addressed.
(6) The position structure model, applied after marker recognition and image sequence matching, reduces the error rate of the user's final position and guarantees positioning accuracy.
Description of drawings
Fig. 1. Schematic workflow of the indoor path guidance system
Fig. 2. Schematic of the reliable marker recognition method
Fig. 3. Schematic flow of real-time scene registration
Camera 1; display screen 2; wireless local area network 3; remote server 4; remote PC 5; wired network 6; mobile terminal 7.
Embodiment
The invention will be further described below in conjunction with the drawings and specific embodiments.
The present invention is a navigation system based on monocular vision positioning that relies on real-time scene registration to achieve fast tracking of virtual navigation information and real-time rendering of the indoor path. The precise positioning method and the scene configuration method of the indoor path guidance system are introduced below.
The network topology of the indoor path guidance system of the present invention comprises a mobile terminal 7 (which includes a mobile camera 1 and a display screen 2), a wireless local area network 3, a remote server 4, and remote PCs 5 interconnected by a wired network 6. Each remote PC 5 is provided with a position structure model library, pre-populated with images of the positioning architectural features inside the building.
First, the user inputs the destination on the mobile terminal 7. The camera 1 of the mobile terminal 7 then captures an image sequence of the current environment and transmits it over the wireless local area network 3 to a remote PC 5, which calculates the user's current position from the received image sequence. The main process comprises: (1) reliable marker recognition, (2) image sequence matching, and (3) position model screening.
(1) Reliable marker recognition
As shown in the figure, the positioning process first completes reliable marker recognition: the input image is threshold-segmented into a binary image to highlight the target marker. Features are then extracted from the marker (for example, a box-shaped image): all squares in the binary image are obtained and registered against the architectural features of the position structure model library on the remote PC 5. If registration succeeds, a position marker has been recognized and preliminary localization is complete; if registration fails, the camera of the mobile device takes a new picture, forming a new image, and reliable marker recognition is repeated.
The effect of threshold segmentation depends on the chosen threshold, and the optimal threshold of an image is mainly affected by factors such as changing illumination. To obtain a better result, the present invention uses an adaptive thresholding algorithm for varying illumination conditions to generate the binary image: the threshold is re-estimated once every 1-10 seconds or every 64 frames, the last threshold is reused within that interval, and a new threshold is obtained at the next re-estimation, adapting to fast-changing lighting conditions. Because the grayscale information of the image is condensed, the computational load of the algorithm drops greatly and, more importantly, the accuracy of subsequent image matching improves; segmenting the remaining unmarked frames with the most recent fixed threshold further reduces system computation. As shown in Fig. 3, the original image is processed by the adaptive thresholding algorithm into the corresponding binary image, from which the marker is reliably identified.
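The patent leaves the adaptive thresholding algorithm itself unspecified. As an illustration only, the scheme it describes can be sketched in Python with numpy, assuming a global Otsu-style threshold that is re-estimated periodically and cached for the frames in between (all function names here are ours, not the patent's):

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold that maximises between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum[t - 1] / total          # weight of the dark class
        w1 = 1.0 - w0                    # weight of the bright class
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t - 1] / cum[t - 1]
        mu1 = (cum_mean[-1] - cum_mean[t - 1]) / (total - cum[t - 1])
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(gray, threshold=None):
    """Segment a grayscale frame into a binary image. A cached threshold is
    reused for the frames between the periodic re-estimations the patent
    describes (every 1-10 s or every 64 frames)."""
    t = otsu_threshold(gray) if threshold is None else threshold
    return (gray >= t).astype(np.uint8), t

# Synthetic frame: a bright square marker on a dark background
frame = np.full((64, 64), 40, dtype=np.uint8)
frame[16:48, 16:48] = 200
binary, t = binarize(frame)                 # fresh threshold estimation
binary2, _ = binarize(frame, threshold=t)   # later frames reuse the cached value
```

The marker region comes out as 1 and the background as 0, so the subsequent square extraction only needs to scan the binary image.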
(2) Image sequence matching
As shown in Figs. 1 and 2, after preliminary localization, to meet the reliability and accuracy requirements of the final positioning, image sequences are collected first. The collection uses the video sequence captured while the user walks indoors to build the image sequence library: frames are sampled continuously at 8-60 frames per second (12 frames per second is optimal; below 8 frames per second accurate positioning is not possible, and above 60 frames per second the demands on the device's processing power are too high), and the library is then built at 8 frames per second. The image sequence at each recognized marker must be matched against the image sequences pre-stored in the image sequence library on the remote PC.
The image sequence matching procedure of the present invention is: first, histogram extraction, in which an image is extracted within each square marker region; then histogram matching, in which the detected square's image is matched against the image sequences pre-stored in the image sequence library on the remote PC 5. If the matching result passes the threshold, the user's precise position is determined. This matching procedure is similar to the feature registration in reliable marker recognition, but the result must be more precise while keeping the processing as fast as possible.
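The histogram extraction and matching step can be sketched as follows. This is an illustrative numpy version under stated assumptions: grayscale histograms over each marker region, compared by chi-square distance (the patent does not name a specific distance measure, and the library here is a toy two-entry one):

```python
import numpy as np

def gray_histogram(region, bins=32):
    """Normalised grayscale histogram of a square marker region."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def best_match(query, library):
    """Return (index, distance) of the library histogram closest to the query,
    using the chi-square distance between normalised histograms."""
    def chi2(p, q):
        denom = p + q
        mask = denom > 0
        return 0.5 * np.sum((p[mask] - q[mask]) ** 2 / denom[mask])
    dists = [chi2(query, h) for h in library]
    return int(np.argmin(dists)), min(dists)

rng = np.random.default_rng(0)
dark = rng.integers(0, 100, (32, 32))      # stored sequence at position 0
light = rng.integers(150, 256, (32, 32))   # stored sequence at position 1
library = [gray_histogram(dark), gray_histogram(light)]

query = gray_histogram(rng.integers(150, 256, (32, 32)))  # a new bright frame
idx, dist = best_match(query, library)     # expected to pick the bright entry
```

A real deployment would store one histogram per frame of each 8 fps library sequence and apply a pass/fail threshold on the distance, as the text above requires.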
The captured image information is affected by many factors such as ambient light and relative motion; even under identical lighting, the color information of two captured images may differ. Because the true color space is sensitive to color changes, the present invention uses chromaticity to strengthen adaptation to light intensity: the input RGB color space is converted to the HSI color space, the first five LAD features are extracted from the HSI color space, and the current position is determined by computing Euclidean distances. The invention therefore computes the similarity between the feature vector (x_i) of the input image sequence (X) and the feature vectors (y_i) of the image sequences (Y_n) of the n positions in the position structure model library on the remote PC 5. The image sequences captured at 12 frames per second serve as the retrieval library.
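A hedged sketch of this color-space step: the RGB-to-HSI conversion below follows the standard geometric formulas, but since the patent's "first five LAD features" are not defined in the text, a simple per-channel mean/std feature vector stands in for them here, and nearest-position retrieval is plain Euclidean distance as the text describes:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1]) to HSI; H is scaled to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(rgb, axis=-1) / np.maximum(i, 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta) / (2 * np.pi)
    return np.stack([h, s, i], axis=-1)

def feature_vector(rgb):
    """Illustrative stand-in for the patent's features: per-channel mean and
    standard deviation over the HSI image (6 numbers)."""
    hsi = rgb_to_hsi(rgb)
    return np.concatenate([hsi.mean(axis=(0, 1)), hsi.std(axis=(0, 1))])

def nearest_position(query_rgb, model_bank):
    """Pick the stored position whose feature vector is closest in Euclidean distance."""
    q = feature_vector(query_rgb)
    dists = [np.linalg.norm(q - feature_vector(img)) for img in model_bank]
    return int(np.argmin(dists))

red = np.zeros((8, 8, 3)); red[..., 0] = 1.0    # reference image at position 0
blue = np.zeros((8, 8, 3)); blue[..., 2] = 1.0  # reference image at position 1
bank = [red, blue]
pos = nearest_position(blue.copy(), bank)       # a blue query should match position 1
```

Separating chromaticity (H, S) from intensity (I) is what gives the method its tolerance to light-level changes: the H and S channels of a scene stay roughly constant as illumination varies.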
(3) Position model screening
Position model screening method: positions are represented as nodes, and each node is connected to its adjacent nodes, forming the user's potential paths. Three situations can arise in user positioning: 1) the outputs of marker recognition and image sequence matching agree; 2) the outputs of marker recognition and image sequence matching disagree; 3) only image sequence matching produces an output. The precise position identification rules used in position model screening are listed in Table 1:
Table 1. Precise position identification rules
(Table 1 appears only as an image in the original publication; its contents are not reproducible here.)
In the reliable marker recognition process, once the remote PC 5 detects a potential marker, it locates the user's current position according to the predefined marker information. In the image sequence matching process, features of the input image sequence are extracted and compared against predefined image sequence features, and the user's current position information is computed and output. In position model screening, positions are represented as nodes, each connected to its adjacent nodes to form the user's potential paths. Across these three steps, the position structure model is used to resolve the final user position.
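Because Table 1 survives only as an image, the exact screening rules are unknown; the sketch below is an assumed reconciliation consistent with the three cases listed above. The assumption: agreeing estimates win outright, and a lone or conflicting estimate is accepted only if the position graph makes it reachable from the last confirmed position. The room names and adjacency are invented for illustration:

```python
# Invented position graph: each node lists the positions directly reachable from it.
ADJACENCY = {
    "lobby":    {"corridor"},
    "corridor": {"lobby", "room101", "room102"},
    "room101":  {"corridor"},
    "room102":  {"corridor"},
}

def screen_position(marker_pos, sequence_pos, last_pos):
    """Assumed screening rule (the patent's Table 1 is not recoverable):
    1) if marker recognition and sequence matching agree, take that position;
    2) otherwise accept whichever estimate is the last confirmed position or
       one of its neighbours in the graph;
    3) if neither is plausible, keep the previous fix."""
    def reachable(p):
        return p == last_pos or p in ADJACENCY.get(last_pos, set())
    if marker_pos is not None and marker_pos == sequence_pos:
        return marker_pos
    for candidate in (marker_pos, sequence_pos):
        if candidate is not None and reachable(candidate):
            return candidate
    return last_pos

fix = screen_position("room101", "room101", "corridor")  # case 1: estimates agree
fix2 = screen_position(None, "room102", "corridor")      # case 3: sequence only, adjacent
fix3 = screen_position(None, "room101", "lobby")         # implausible jump: keep lobby
```

The value of the graph is exactly this veto: an image match to a room the user could not have walked to since the last fix is rejected, which is how the model "reduces the error rate of the user's final position".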
Once the user's position is determined, the camera's position and attitude relative to the real world are estimated, and the virtual scene is correctly overlaid on the corresponding location in the real world, so that to the user's senses the virtual scene genuinely appears to be part of the real world.
As shown in Fig. 3, in the real-time scene registration process the invention first applies a wide-baseline matching algorithm based on naive Bayes classification to the detected feature points. If a homography matrix can be established via the RANSAC algorithm, the target object has been detected and the tracking process begins; otherwise the system remains in the detection process. During tracking, a combined tracking algorithm based on corners and texture is used: the coordinates of the feature points are computed from the current homography matrix, the points are tracked with a pyramidal Lucas-Kanade (L-K) optical flow algorithm, and the RANSAC algorithm is applied to the optical flow results to obtain the homography matrix. The result is then refined with the robust IC algorithm, and the refinement result decides whether tracking has succeeded. On success, tracking continues; if the algorithm fails to converge for 30 consecutive frames, tracking is deemed to have failed and the system returns to detection. Using the combined corner-and-texture tracking in real time effectively addresses the real-time performance and jitter problems of the tracking process.
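The RANSAC stage of this tracking loop can be illustrated without any vision library. The sketch below estimates a homography from tracked point pairs by repeatedly fitting a direct-linear-transform (DLT) model to four random pairs and keeping the model with the most inliers; in the patent the point pairs would come from the pyramidal L-K tracker, whereas here they are synthetic, with a few gross outliers standing in for bad tracks:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: homography from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    h = vt[-1].reshape(3, 3)          # null vector of A is the flattened H
    return h / h[2, 2]

def project(h, pts):
    """Apply homography h to an (N, 2) array of points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ h.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=200, tol=2.0, seed=0):
    """Fit H from 4 random pairs per iteration; keep the largest inlier set,
    then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inl = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        h = homography_dlt(src[idx], dst[idx])
        err = np.linalg.norm(project(h, src) - dst, axis=1)
        inl = err < tol
        if inl.sum() > best_inl.sum():
            best_inl = inl
    return homography_dlt(src[best_inl], dst[best_inl]), best_inl

# Synthetic tracked points: a known homography plus 5 badly-tracked outliers
true_h = np.array([[1.0, 0.1, 5.0], [0.0, 1.1, -3.0], [0.001, 0.0, 1.0]])
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (30, 2))
dst = project(true_h, src)
dst[:5] += rng.uniform(50, 80, (5, 2))   # corrupt the first five tracks
h_est, inliers = ransac_homography(src, dst)
```

The recovered matrix matches the true one on the 25 clean tracks while the 5 corrupted tracks are flagged as outliers, which is exactly why RANSAC is used on optical-flow output: drifting points do not pollute the pose estimate.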
Finally, as shown in Fig. 2, the remote PC 5 feeds the user's position information back through the remote server 4 to the display screen 2 of the mobile terminal 7, marking the user's current position on the indoor map shown on the screen. The present invention uses OpenGL as the graphics rendering engine, OSG for scene graph management, and osgART as the mixed reality toolkit; the path navigation markers computed from the user's position and destination are presented on the display screen 2 of the mobile terminal 7, seamlessly overlaying the virtual navigation information on the display, and through the real-time scene registration technique the mixed reality guidance path is seamlessly superimposed.
Obviously, those skilled in the art can make various changes and modifications to the indoor path navigation method of the present invention without departing from its scope and spirit. If such modifications fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.

Claims (7)

1. An indoor path navigation method based on mixed reality, involving a mobile terminal (7), a wireless local area network (3), a remote server (4), and a plurality of remote PCs (5), the mobile terminal (7) comprising a mobile camera (1) and a display screen (2), the remote PCs (5) being interconnected via a wired network (6), characterized by comprising the following steps: (1) a destination is input on the mobile terminal (7) and transmitted over the wireless local area network (3) to a remote PC (5); (2) the mobile terminal (7) forms an image sequence from the current images captured by the camera (1) and transmits it over the wireless local area network (3) to the remote PC (5); (3) the remote PC (5) performs three steps in sequence on the received image sequence, namely reliable marker recognition, image sequence matching, and position model screening, to calculate the user's current position information; (4) the remote PC (5) calculates a navigation path from the user's current position information and the destination, overlays the path navigation in 3D using mixed reality technology, and feeds it back over the wireless local area network (3) to the display screen (2) of the mobile terminal (7).
2. The indoor path navigation method based on mixed reality according to claim 1, characterized in that in said step (3), during reliable marker recognition, the input image sequence is threshold-segmented into a binary image to highlight the target marker; features are extracted from the binary image and registered against the architectural features in the position structure model library; if registration succeeds, a position marker has been recognized and preliminary localization is complete; if registration fails, the method returns to step (2).
3. The indoor path navigation method based on mixed reality according to claim 1, characterized in that in said reliable marker recognition process an adaptive thresholding algorithm for varying illumination conditions generates the binary image; the threshold is re-estimated once every 1-10 seconds or every 64 frames, and the remaining unmarked frames are segmented with the most recently computed fixed threshold.
4. The indoor path navigation method based on mixed reality according to claim 1, characterized in that in said image sequence matching process the image sequence library is built from the image sequences captured as the user moves indoors; frames are first sampled continuously at 8-60 frames per second, and the library is then built at a rate of 8 frames per second.
5. The indoor path navigation method based on mixed reality according to claim 1, characterized in that in said image sequence matching the input RGB color space is converted to the HSI color space, the first five LAD features are extracted from the HSI color space, and the current position is determined by computing Euclidean distances.
6. The indoor path navigation method based on mixed reality according to claim 1, characterized in that said image sequence matching requires computing the similarity between the feature vector of the input image sequence and the feature vectors of the image sequences of the n positions in the position structure model library on the remote PC (5).
7. The indoor path navigation method based on mixed reality according to claim 1, characterized in that in said step (4) a pyramidal Lucas-Kanade (L-K) optical flow algorithm tracks the feature points, and the RANSAC algorithm is applied to the optical flow tracking results to obtain the homography matrix.
CN201110406696.4A 2011-12-08 2011-12-08 Based on the indoor path navigation method of mixed reality Active CN103162682B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110406696.4A CN103162682B (en) 2011-12-08 2011-12-08 Based on the indoor path navigation method of mixed reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110406696.4A CN103162682B (en) 2011-12-08 2011-12-08 Based on the indoor path navigation method of mixed reality

Publications (2)

Publication Number Publication Date
CN103162682A true CN103162682A (en) 2013-06-19
CN103162682B CN103162682B (en) 2015-10-21

Family

ID=48585972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110406696.4A Active CN103162682B (en) 2011-12-08 2011-12-08 Based on the indoor path navigation method of mixed reality

Country Status (1)

Country Link
CN (1) CN103162682B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103411621A (en) * 2013-08-09 2013-11-27 东南大学 Indoor-mobile-robot-oriented optical flow field vision/inertial navigation system (INS) combined navigation method
CN103697882A (en) * 2013-12-12 2014-04-02 深圳先进技术研究院 Geographical three-dimensional space positioning method and geographical three-dimensional space positioning device based on image identification
CN104748738A (en) * 2013-12-31 2015-07-01 深圳先进技术研究院 Indoor positioning navigation method and system
CN105241460A (en) * 2015-10-20 2016-01-13 广东欧珀移动通信有限公司 Route generating method and user terminal
CN106679668A (en) * 2016-12-30 2017-05-17 百度在线网络技术(北京)有限公司 Navigation method and device
CN107588766A (en) * 2017-09-15 2018-01-16 南京轩世琪源软件科技有限公司 A kind of indoor orientation method based on radio area network
CN107831920A (en) * 2017-10-20 2018-03-23 广州视睿电子科技有限公司 Cursor movement display methods, device, mobile terminal and storage medium
CN108009588A (en) * 2017-12-01 2018-05-08 深圳市智能现实科技有限公司 Localization method and device, mobile terminal
CN108180901A (en) * 2017-12-08 2018-06-19 深圳先进技术研究院 Indoor navigation method, device, robot and the storage medium of blind-guidance robot
CN109115221A (en) * 2018-08-02 2019-01-01 北京三快在线科技有限公司 Indoor positioning, air navigation aid and device, computer-readable medium and electronic equipment
CN109357673A (en) * 2018-10-30 2019-02-19 上海仝物云计算有限公司 Vision navigation method and device based on image
CN109506658A (en) * 2018-12-26 2019-03-22 广州市申迪计算机系统有限公司 Robot autonomous localization method and system
CN111179436A (en) * 2019-12-26 2020-05-19 浙江省文化实业发展有限公司 Mixed reality interaction system based on high-precision positioning technology
TWI695966B (en) * 2019-01-28 2020-06-11 林器弘 Indoor positioning and navigation system for mobile communication device
CN112540673A (en) * 2020-12-09 2021-03-23 吉林建筑大学 Virtual environment interaction method and equipment
CN113670298A (en) * 2019-11-27 2021-11-19 支付宝(杭州)信息技术有限公司 Service handling guiding method and device based on augmented reality
CN114334119A (en) * 2022-03-14 2022-04-12 北京融威众邦电子技术有限公司 Intelligent self-service terminal
CN114697870A (en) * 2022-03-11 2022-07-01 北京西南风信息技术有限公司 Method, device, equipment and medium for positioning, screening and matching personnel in exhibition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1569558A (en) * 2003-07-22 2005-01-26 中国科学院自动化研究所 Moving robot's vision navigation method based on image representation feature
US20090289956A1 (en) * 2008-05-22 2009-11-26 Yahoo! Inc. Virtual billboards
CN101976461A (en) * 2010-10-25 2011-02-16 北京理工大学 Novel outdoor augmented reality label-free tracking registration algorithm
CN102006548A (en) * 2009-09-02 2011-04-06 索尼公司 Information providing method and apparatus, information display method and mobile terminal and information providing system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1569558A (en) * 2003-07-22 2005-01-26 中国科学院自动化研究所 Moving robot's vision navigation method based on image representation feature
US20090289956A1 (en) * 2008-05-22 2009-11-26 Yahoo! Inc. Virtual billboards
CN102006548A (en) * 2009-09-02 2011-04-06 索尼公司 Information providing method and apparatus, information display method and mobile terminal and information providing system
CN101976461A (en) * 2010-10-25 2011-02-16 北京理工大学 Novel outdoor augmented reality label-free tracking registration algorithm

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
任鹏: "Research on vision-based tracking and registration technology in augmented reality", China Master's Theses Full-text Database *
林聚财 et al.: "Defect diagnosis of glass insulators based on color images", Power System Technology (《电网技术》) *
董子龙: "Real-time 3D tracking for augmented reality", China Doctoral Dissertations Full-text Database *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103411621B (en) * 2013-08-09 2016-02-10 东南大学 A kind of vision/INS Combinated navigation method of the optical flow field towards indoor mobile robot
CN103411621A (en) * 2013-08-09 2013-11-27 东南大学 Indoor-mobile-robot-oriented optical flow field vision/inertial navigation system (INS) combined navigation method
CN103697882A (en) * 2013-12-12 2014-04-02 深圳先进技术研究院 Geographical three-dimensional space positioning method and geographical three-dimensional space positioning device based on image identification
CN104748738A (en) * 2013-12-31 2015-07-01 深圳先进技术研究院 Indoor positioning navigation method and system
CN104748738B (en) * 2013-12-31 2018-06-15 深圳先进技术研究院 Indoor positioning and navigation method and system
CN105241460A (en) * 2015-10-20 2016-01-13 广东欧珀移动通信有限公司 Route generating method and user terminal
CN106679668B (en) * 2016-12-30 2018-08-03 百度在线网络技术(北京)有限公司 Navigation method and device
CN106679668A (en) * 2016-12-30 2017-05-17 百度在线网络技术(北京)有限公司 Navigation method and device
CN107588766A (en) * 2017-09-15 2018-01-16 南京轩世琪源软件科技有限公司 Indoor positioning method based on wireless local area network
CN107831920A (en) * 2017-10-20 2018-03-23 广州视睿电子科技有限公司 Cursor movement display method, device, mobile terminal and storage medium
CN108009588A (en) * 2017-12-01 2018-05-08 深圳市智能现实科技有限公司 Localization method and device, mobile terminal
CN108180901A (en) * 2017-12-08 2018-06-19 深圳先进技术研究院 Indoor navigation method, device, robot and storage medium for a blind-guiding robot
CN109115221A (en) * 2018-08-02 2019-01-01 北京三快在线科技有限公司 Indoor positioning and navigation method and device, computer-readable medium and electronic device
CN109357673A (en) * 2018-10-30 2019-02-19 上海仝物云计算有限公司 Image-based visual navigation method and device
CN109506658A (en) * 2018-12-26 2019-03-22 广州市申迪计算机系统有限公司 Robot autonomous localization method and system
CN109506658B (en) * 2018-12-26 2021-06-08 广州市申迪计算机系统有限公司 Robot autonomous positioning method and system
TWI695966B (en) * 2019-01-28 2020-06-11 林器弘 Indoor positioning and navigation system for mobile communication device
CN113670298A (en) * 2019-11-27 2021-11-19 支付宝(杭州)信息技术有限公司 Augmented-reality-based service handling guidance method and device
CN111179436A (en) * 2019-12-26 2020-05-19 浙江省文化实业发展有限公司 Mixed reality interaction system based on high-precision positioning technology
CN112540673A (en) * 2020-12-09 2021-03-23 吉林建筑大学 Virtual environment interaction method and equipment
CN114697870A (en) * 2022-03-11 2022-07-01 北京西南风信息技术有限公司 Method, device, equipment and medium for positioning, screening and matching personnel at an exhibition
CN114334119A (en) * 2022-03-14 2022-04-12 北京融威众邦电子技术有限公司 Intelligent self-service terminal

Also Published As

Publication number Publication date
CN103162682B (en) 2015-10-21

Similar Documents

Publication Publication Date Title
CN103162682B (en) Indoor path navigation method based on mixed reality
US11252329B1 (en) Automated determination of image acquisition locations in building interiors using multiple data capture devices
CN111126304B (en) Augmented reality navigation method based on indoor natural scene image deep learning
Cheng et al. Improving monocular visual SLAM in dynamic environments: an optical-flow-based approach
CN103901884B (en) Information processing method and message processing device
US9674507B2 (en) Monocular visual SLAM with general and panorama camera movements
US11632602B2 (en) Automated determination of image acquisition locations in building interiors using multiple data capture devices
US20120075342A1 (en) Augmenting image data based on related 3d point cloud data
US10127667B2 (en) Image-based object location system and process
CN104748738A (en) Indoor positioning navigation method and system
Bettadapura et al. Egocentric field-of-view localization using first-person point-of-view devices
EP4174772A1 (en) Automated building floor plan generation using visual data of multiple buildings
Feng et al. Visual map construction using RGB-D sensors for image-based localization in indoor environments
Jang et al. Survey of landmark-based indoor positioning technologies
Qian et al. Wearable-assisted localization and inspection guidance system using egocentric stereo cameras
Hile et al. Information overlay for camera phones in indoor environments
CN116843754A (en) Visual positioning method and system based on multi-feature fusion
Schall et al. 3D tracking in unknown environments using on-line keypoint learning for mobile augmented reality
Skulimowski et al. Door detection in images of 3d scenes in an electronic travel aid for the blind
Xu et al. Indoor localization using region-based convolutional neural network
CN116136408A (en) Indoor navigation method, server, device and terminal
US11741631B2 (en) Real-time alignment of multiple point clouds to video capture
Johns et al. Urban position estimation from one dimensional visual cues
WO2018114581A1 (en) Method and apparatus for constructing lighting environment representations of 3d scenes
Tonosaki et al. Indoor Localization by Map Matching Using One Image of Information Board

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant