CN109612455A - Indoor positioning method and system - Google Patents

Indoor positioning method and system

Info

Publication number: CN109612455A
Application number: CN201811469706.7A
Authority: CN (China)
Prior art keywords: module, positioning result, moving, vision, information
Legal status: Pending
Original language: Chinese (zh)
Inventors: 李莉, 于岭岭
Assignee (original and current): Tianjin University of Technology
Application filed by Tianjin University of Technology
Priority to CN201811469706.7A

Classifications

    • G01C21/005: Navigation; navigational instruments, with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C22/00: Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers or pedometers
    • G06K17/0022: Co-operative working between equipments; arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V10/757: Image or video pattern matching; matching configurations of points or features
    • G06V20/36: Scenes; scene-specific elements; indoor scenes
    • G06V2201/07: Indexing scheme relating to image or video recognition or understanding; target detection

Abstract

An embodiment of the present application provides an indoor positioning method and system. The system includes a mobile terminal and a map server. The mobile terminal is used to: collect QR-code information and image feature information, and send the QR-code information and the image feature information to the map server; obtain a visual-odometry positioning result; and display a final positioning result according to the visual-odometry positioning result and the map server's mobile visual search positioning result. The map server is used to: generate its mobile visual search positioning result based on the QR-code information and the image feature information. In the embodiments of the present application, QR codes deployed at key locations identify the current region, which narrows the range of the mobile visual search, improves operating efficiency and reduces the hardware resources the algorithm occupies; the mobile visual search positioning result is then used to correct the accumulated error of the visual-odometry algorithm, improving positioning accuracy.

Description

Indoor positioning method and system
Technical field
The present application relates to the field of indoor positioning technologies, and in particular to an indoor positioning method and system.
Background
Satellite navigation signals deteriorate severely indoors. Typical mobile-device users spend 80%-90% of their time indoors, and 70%-80% of data traffic also originates indoors, so indoor positioning technologies based on mobile devices are essential to all kinds of location-based services and present broad application prospects and enormous market value in military, commercial and civilian fields.
According to their physical characteristics, existing indoor positioning technologies can be divided into five major classes: inertial navigation, ultrasonic positioning, indoor wireless positioning, satellite/pseudolite positioning, and optical positioning. Some of these five can be realized directly on a mobile device without depending on external equipment; others require external transmitters or must be combined with one another to meet different accuracy requirements. Optical positioning includes computer-vision positioning, infrared positioning, laser positioning, dedicated-illumination-system positioning, and so on. Vision positioning uses various imaging systems (monocular vision, binocular stereo vision, multi-camera vision, panoramic vision, mobile vision, etc.) in place of the visual organs as the input medium, and makes the computer understand and perceive the external environment through image-matching algorithms.
Notably, because the user constantly changes posture in the indoor environment, the acquired images exhibit differing characteristics, and complex and changing external conditions such as image noise, blur and illumination variation always affect the stability of indoor vision-positioning algorithms. Meanwhile, accurately extracting image feature points is far from enough on its own: because a mobile device's bandwidth, computing and storage resources are limited, if feature points cannot be matched quickly, the system loses real-time performance and positioning accuracy suffers.
Google's Tango technology achieves high-precision positioning mainly by equipping the mobile device with dedicated hardware (a special vision computing chip, cameras, a depth camera and sensors), in cooperation with motion tracking, area learning and depth perception algorithms. However, Tango's hardware requirements are too high to suit current mobile-device applications.
Summary of the invention
Various aspects of the present application provide an indoor positioning method and system, to solve the prior-art problems of poor real-time performance and low positioning accuracy.
An embodiment of the present application provides an indoor positioning system, comprising a mobile terminal and a map server;
the mobile terminal is used to:
collect QR-code information and image feature information, and send the QR-code information and the image feature information to the map server;
obtain a visual-odometry positioning result;
display a final positioning result according to the visual-odometry positioning result and the map server's mobile visual search positioning result;
the map server is used to:
generate the map server's mobile visual search positioning result based on the QR-code information and the image feature information.
Further, the mobile terminal includes a QR-code scanning module, a visual sensor module, a visual odometry module, a terminal-side mobile visual search module, a fusion positioning module and a positioning display module;
the QR-code scanning module is used to: scan a QR code and obtain the QR-code information;
the visual sensor module is used to: capture visual images and obtain an image sequence;
the visual odometry module is used to:
extract the image feature information of the image sequence;
sample the image feature information to obtain sample information;
obtain the visual-odometry positioning result according to the image sequence;
the terminal-side mobile visual search module is used to: encode the sample information and transmit it to the map server;
the fusion positioning module is used to: correct the visual-odometry positioning result according to the map server's mobile visual search positioning result;
the positioning display module is used to: display the final positioning result.
Further, the map server includes a QR-code region locating module, a server-side mobile visual search module and a database;
the QR-code region locating module is used to: determine the initial region of the positioning according to the QR-code information and provide a region code;
the server-side mobile visual search module is used to: perform matching and positioning according to the sample information and obtain the mobile visual search positioning result;
the database is used to: retrieve region pictures for the server-side mobile visual search module according to the region code.
Further, the visual odometry module includes a feature detection module, a sampling module, a calibration module, a feature matching module, a motion estimation module and a position optimization module;
the feature detection module is used to: extract the image feature information from the image sequence;
the sampling module is used to: sample the image feature information and pass the sampled data to the terminal-side mobile visual search module;
the calibration module is used to: calibrate the first frame of the image sequence from the visual sensor module and set it as the key frame;
the feature matching module is used to: match the point features that the feature detection module extracts from consecutive frames;
the motion estimation module is used to: map the matched points of consecutive frames into a three-dimensional coordinate system, so as to estimate the mileage;
the position optimization module is used to: update the current trajectory according to the estimated mileage of the mobile terminal.
Further, the terminal-side mobile visual search module includes a descriptor extraction module and a descriptor encoding module;
the descriptor extraction module is used to: extract the feature descriptors generated from the sampled data;
the descriptor encoding module is used to: encode the generated feature descriptors.
Further, the server-side mobile visual search module includes a descriptor decoding module, a descriptor matching module and a locating module;
the descriptor decoding module is used to: decode the feature-descriptor coding;
the descriptor matching module is used to: match the decoded feature descriptors against the region picture data retrieved from the database;
the locating module is used to: determine the specific location according to the matched picture.
An embodiment of the present application also provides an indoor positioning method, comprising:
establishing a QR-code location library and a database;
obtaining QR-code information and an image sequence, and obtaining image feature information according to the image sequence;
obtaining a mobile visual search positioning result according to the QR-code information and the image feature information.
Further, the method also includes obtaining a visual-odometry positioning result according to the image sequence, correcting the visual-odometry positioning result according to the mobile visual search positioning result, and displaying the final positioning result.
Further, establishing the QR-code location library and the map server database includes:
deploying QR codes at key locations in advance and building the QR-code location library;
collecting key-point images along the indoor paths, recording the key-point positions, and deploying the key-point images and positions into the database.
Further, obtaining the image features according to the image sequence includes: extracting image features from the continuously collected image sequence, and sampling the image features.
Further, obtaining the mobile visual search positioning result according to the QR-code information and the image features includes: adjusting the visual search range according to the QR-code information, and performing image-matching positioning within the visual search range according to the image feature information, obtaining the mobile visual search positioning result.
In the embodiments of the present application, QR codes deployed at key locations (such as floor stair entrances and exhibition-hall entrances) first identify the current region, which narrows the range of the mobile visual search, improves operating efficiency and reduces the hardware resources the algorithm occupies; the mobile visual search positioning result is then used to correct the accumulated error of the visual-odometry algorithm, improving positioning accuracy and yielding real-time, high-precision indoor positioning for mobile devices.
Brief description of the drawings
The drawings described here are provided for a further understanding of the present application and constitute a part of it; the illustrative embodiments of the present application and their description serve to explain the application and do not unduly limit it. In the drawings:
Fig. 1 is a structural schematic diagram of an indoor positioning system provided by an embodiment of the present application;
Fig. 2 is a structural schematic diagram of the visual odometry module in an embodiment of the present application;
Fig. 3 is a structural schematic diagram of the terminal-side and server-side mobile visual search modules in an embodiment of the present application;
Fig. 4 is a workflow diagram of an indoor positioning method provided by an embodiment of the present application.
Detailed description of the embodiments
To make the purposes, technical solutions and advantages of the present application clearer, the technical solutions are described clearly and completely below in conjunction with specific embodiments of the application and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this application without creative work fall within the protection scope of this application.
The technical solutions provided by the embodiments of the present application are described in detail below in conjunction with the drawings.
Fig. 1 is a structural schematic diagram of the indoor positioning system provided by an embodiment of the present application. As shown in Fig. 1, the system includes a mobile terminal 10 and a map server 20.
The mobile terminal 10 is used to:
collect QR-code information and image feature information, and send both to the map server 20;
obtain a visual-odometry positioning result according to the image sequence;
display a final positioning result according to the visual-odometry positioning result and the mobile visual search positioning result of the map server 20; the mobile visual search positioning result of the map server 20 serves to correct the visual-odometry positioning result.
The map server 20 is used to:
generate its mobile visual search positioning result based on the QR-code information and the image feature information. Based on the QR-code information, it adjusts the visual search range, performs image-matching positioning within that range according to the image feature information, generates the mobile visual search positioning result, and feeds the result back to the mobile terminal 10.
In operation, the mobile terminal 10 collects QR-code information and an image sequence, extracts image feature information from the images, and sends the QR-code information and the image feature information to the map server 20; the mobile terminal 10 obtains a visual-odometry positioning result according to the image sequence; the map server 20 adjusts the visual search range based on the QR-code information, performs image-matching positioning within the visual search range according to the image feature information, generates its mobile visual search positioning result, and feeds it back to the mobile terminal 10; the mobile visual search positioning result then corrects the visual-odometry positioning result on the mobile terminal 10, and the final positioning result is displayed.
The indoor positioning system provided by this embodiment can be applied to various indoor scenes, for example shopping malls and exhibition halls; this embodiment does not limit the scene.
The mobile terminal 10 includes a QR-code scanning module 30, a visual sensor module 40, a visual odometry module 50, a terminal-side mobile visual search module 80, a fusion positioning module 60 and a positioning display module 70.
The QR-code scanning module 30 is used to: scan a QR code, obtain the QR-code information, and pass it by wireless transmission to the QR-code region locating module 100 in the map server 20.
The visual sensor module 40 is used to: capture visual images, obtain an image sequence, and send the image sequence to the visual odometry module.
The visual odometry module 50 is used to:
extract the image feature information of the image sequence;
sample the image feature information to obtain sample information;
obtain the visual-odometry positioning result according to the image sequence, and send it to the fusion positioning module 60.
The terminal-side mobile visual search module 80 is used to: encode the sample information and pass it by wireless transmission to the server-side mobile visual search module 90 in the map server 20.
The fusion positioning module 60 is used to: correct the visual-odometry positioning result according to the mobile visual search positioning result of the map server 20.
The positioning display module 70 is used to: display the final positioning result.
The map server 20 includes a QR-code region locating module 100, a server-side mobile visual search module 90 and a database 110.
The QR-code region locating module 100 is used to: determine the initial region of the positioning according to the QR-code information and provide a region code.
The database 110 is used to: according to the region code, narrow the search range to the region within a certain geographic distance of the initial position and retrieve the region pictures for the server-side mobile visual search module 90.
The server-side mobile visual search module 90 is used to: perform matching and positioning among the retrieved region pictures according to the sample information, obtaining the mobile visual search positioning result.
The mobile terminal 10 scans a QR code through the QR-code scanning module 30, obtains the QR-code information, and passes it by wireless transmission to the QR-code region locating module 100 of the map server 20. The QR-code region locating module 100 determines the initial region of the positioning (for example the stair entrance of a certain floor, or the entrance of a certain exhibition hall), provides the region code of that initial position, and sends the region code to the database 110; the database 110 then restricts the visual search range to the region within a certain geographic distance of the QR code's initial position. While detecting the QR code, the QR-code scanning module 30 also starts the visual sensor module 40, which begins to capture visual images and sends the acquired image sequence to the visual odometry module 50.
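The region-locating step just described can be sketched as follows. This is a minimal illustration under stated assumptions: the patent specifies neither the QR payload format nor the database layout, so the `site/<region-code>/<anchor-id>` payload and the in-memory `REGION_DB` mapping are hypothetical.

```python
# Hypothetical sketch: the QR payload yields a region code, and the database
# then restricts the visual-search candidate set to that region's keyframe
# pictures only, instead of searching the whole map.

REGION_DB = {
    # region code -> list of (keyframe image, recorded key-point position)
    "F2-STAIR-A": [("img_201.jpg", (12.0, 3.5)), ("img_202.jpg", (14.2, 3.5))],
    "F1-ENTRANCE": [("img_001.jpg", (0.0, 0.0)), ("img_002.jpg", (2.1, 0.4))],
}

def region_code_from_qr(qr_payload: str) -> str:
    """Assume the QR code encodes 'site/<region-code>/<anchor-id>'."""
    return qr_payload.split("/")[1]

def candidate_images(qr_payload: str):
    """Return only the region's keyframe pictures for image matching."""
    code = region_code_from_qr(qr_payload)
    return REGION_DB.get(code, [])
```

Scanning the QR at the floor-2 stair entrance would then limit matching to two candidate pictures rather than the entire building, which is the source of the claimed speed-up.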
The visual odometry module 50 samples the detected image features and sends them to the terminal-side mobile visual search module 80. The terminal-side mobile visual search module 80 encodes the image feature information and transmits it wirelessly to the server-side mobile visual search module 90. The database 110 retrieves only the region pictures indicated by the region code provided by the QR-code region locating module 100 for the server-side mobile visual search module 90 to use in image-matching positioning, thereby greatly reducing the time of the mobile visual search. The server-side mobile visual search module 90 sends its positioning result to the fusion positioning module 60.
The visual odometry module 50 performs positioning according to the image sequence from the visual sensor module 40 and sends its positioning result to the fusion positioning module 60.
The fusion positioning module 60 corrects the visual-odometry result according to the mobile visual search positioning result and the sampling time of the corresponding frame, and the final positioning result is displayed by the positioning display module 70.
The wireless transmission in this embodiment may use a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof; it may also use a Near Field Communication (NFC) module to support short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
As shown in Fig. 2, the visual odometry module 50 includes a feature detection module 51, a sampling module 52, a calibration module 54, a feature matching module 53, a motion estimation module 55 and a position optimization module 56.
The feature detection module 51 is used to: extract image feature information from the image sequence.
The sampling module 52 is used to: sample the image feature information at a rate of one frame every ten seconds and pass the sampled data to the terminal-side mobile visual search module 80.
The calibration module 54 is used to: calibrate the first frame of the image sequence from the visual sensor module 40 and set it as the key frame.
The feature matching module 53 is used to: match the point features of consecutive frames extracted by the feature detection module 51.
The motion estimation module 55 is used to: map the matched points of consecutive frames obtained by the feature matching module 53 into a three-dimensional coordinate system, so as to estimate the mileage of the motion trajectory of the mobile terminal 10 over this period of time.
The position optimization module 56 is used to: update the current trajectory according to the estimated mileage of the mobile terminal.
The workflow of the visual odometry module 50 is as follows: after the program starts, it waits for the first frame to arrive, calibrates it and sets it as the key frame. Feature matching is then carried out between successive frames. After a successful match, the move distance is estimated in combination with the calibrated key frame, and the motion trajectory is updated by the position optimization module 56 to obtain the optimal position. Meanwhile, the image feature points extracted by the feature detection module 51 are sampled at a rate of one frame every ten seconds, and the sampled data are transferred to the server-side mobile visual search module 90 for subsequent position correction.
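The odometry loop just described (first frame calibrated as key frame, per-frame matching and trajectory update, one frame sampled every ten seconds for correction) can be sketched as a skeleton. The motion-estimation step is stubbed out with a fixed displacement, since the patent delegates it to feature matching; all names here are illustrative assumptions.

```python
import math

class VisualOdometrySketch:
    """Skeleton of the visual odometry workflow: the first frame becomes the
    key frame, each later frame contributes an estimated displacement to the
    trajectory, and one frame every ten seconds is set aside for the mobile
    visual search module."""

    SAMPLE_PERIOD_S = 10.0  # one sampled frame every ten seconds, per the text

    def __init__(self):
        self.key_frame = None
        self.prev_frame = None
        self.position = (0.0, 0.0)
        self.last_sample_t = -math.inf
        self.samples = []  # frames forwarded for position correction

    def estimate_motion(self, prev, cur):
        # Placeholder for feature matching + motion estimation between frames;
        # here it pretends the device moved 1 m along x per frame.
        return (1.0, 0.0)

    def process(self, frame, t):
        if self.key_frame is None:
            self.key_frame = frame          # calibrate first frame as key frame
        else:
            dx, dy = self.estimate_motion(self.prev_frame, frame)
            x, y = self.position
            self.position = (x + dx, y + dy)  # update current trajectory
        if t - self.last_sample_t >= self.SAMPLE_PERIOD_S:
            self.samples.append(frame)      # sampled for later correction
            self.last_sample_t = t
        self.prev_frame = frame
```

Feeding 21 frames at one per second accumulates a 20 m track and yields three sampled frames (at t = 0, 10 and 20 s), matching the sampling rate stated above.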
As shown in Fig. 3, the terminal-side mobile visual search module 80 includes a descriptor extraction module 81 and a descriptor encoding module 82.
The descriptor extraction module 81 is used to: extract the feature descriptors generated from the sampled data.
The descriptor encoding module 82 is used to: encode the generated feature descriptors.
The server-side mobile visual search module 90 includes a descriptor decoding module 91, a descriptor matching module 92 and a locating module 93.
The descriptor decoding module 91 is used to: decode the information transmitted by the terminal-side mobile visual search module 80.
The descriptor matching module 92 is used to: match the decoded feature descriptors against the region picture data retrieved from the database 110.
The locating module 93 is used to: determine the specific location according to the best-matched picture.
The workflow of the terminal-side and server-side mobile visual search modules is as follows: first, descriptors are extracted from the pictures sampled by the visual odometry module 50, then the information is encoded and transmitted over the wireless network to the server-side mobile visual search module 90. The server-side mobile visual search module 90 decodes the transmitted information and matches it against the image data retrieved from the database 110, obtaining a positioning result. Finally, the positioning result is sent to the fusion positioning module 60.
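The encode/transmit/decode/match round trip above can be sketched for the 64-dimensional descriptors that SURF produces. The wire format (a count followed by packed 32-bit floats) and the nearest-Euclidean-distance matcher are assumptions for illustration; the patent does not specify either.

```python
import struct

DIM = 64  # SURF descriptors are 64-dimensional per the description

def encode(descriptors):
    """Pack descriptors into a compact byte stream for wireless transfer."""
    blob = struct.pack("<I", len(descriptors))
    for d in descriptors:
        blob += struct.pack("<%df" % DIM, *d)
    return blob

def decode(blob):
    """Recover the list of descriptors from the byte stream."""
    (n,) = struct.unpack_from("<I", blob, 0)
    off, out = 4, []
    for _ in range(n):
        out.append(list(struct.unpack_from("<%df" % DIM, blob, off)))
        off += 4 * DIM
    return out

def match(query, region_descriptors):
    """Index of the region descriptor nearest to the query (Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(region_descriptors)),
               key=lambda i: dist2(query, region_descriptors[i]))
```

The server-side locating module 93 would then map the winning index back to the recorded key-point position of that region picture.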
The mobile visual search modules use the SURF (Speeded Up Robust Features) algorithm to detect and match feature points. SURF's feature detection reduces to additions and subtractions on an integral image; the principal direction of each feature point is determined from the statistics of the Haar wavelet responses in the point's neighbourhood, and each feature point yields a 64-dimensional feature vector. The algorithm proceeds as follows:
(1) Construct the Hessian matrix (the core of the SURF algorithm) and generate all points of interest for feature extraction. For an image f(x, y) the Hessian matrix is:
H(f(x, y)) = [ ∂²f/∂x², ∂²f/∂x∂y ; ∂²f/∂x∂y, ∂²f/∂y² ]
(2) construct the scale space; (3) locate the feature points; (4) assign each feature point a principal direction; (5) generate the feature-point descriptors; (6) match the feature points.
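As a sketch of step (1), the following shows how the integral image turns each second-derivative box filter into a handful of additions, and how the Hessian determinant response is formed. Real SURF uses 9x9 and larger box filters over several octaves; the tiny 3-pixel filters here are a simplifying assumption, kept only to make the mechanism visible, while the 0.9 weighting of the mixed term follows the usual SURF convention.

```python
def integral(img):
    """Summed-area table with a zero border row/column."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def box(ii, x0, y0, x1, y1):
    """Sum of the image over columns [x0, x1) and rows [y0, y1), in O(1)."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

def hessian_response(ii, x, y):
    # Tiny box approximations of the second derivatives at (x, y).
    dxx = (box(ii, x - 1, y - 1, x, y + 2)
           - 2 * box(ii, x, y - 1, x + 1, y + 2)
           + box(ii, x + 1, y - 1, x + 2, y + 2))
    dyy = (box(ii, x - 1, y - 1, x + 2, y)
           - 2 * box(ii, x - 1, y, x + 2, y + 1)
           + box(ii, x - 1, y + 1, x + 2, y + 2))
    dxy = (box(ii, x - 1, y - 1, x, y)
           - box(ii, x + 1, y - 1, x + 2, y)
           - box(ii, x - 1, y + 1, x, y + 2)
           + box(ii, x + 1, y + 1, x + 2, y + 2))
    # SURF weights the mixed term to balance the box-filter approximation.
    return dxx * dyy - (0.9 * dxy) ** 2
```

A blob-like point gives a positive determinant response, while flat regions give zero, which is what makes the response usable as an interest measure in step (3).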
The goal of visual odometry is to estimate the motion of the mobile terminal 10 by analysing and processing the associated captured image sequence, thereby determining the location of the mobile device. The feature detection module 51 of the visual odometry module 50 performs feature detection on the images and passes the detected image features to the sampling module 52. The sampling module 52 samples the image features at a rate of one frame every ten seconds and passes the sampled features to the mobile visual search module. This odometry location and the location obtained by the mobile visual search are combined by the fusion positioning module 60 into a more accurate location, and the final positioning result is displayed on the user's mobile device.
The feature detection module 51 extracts image feature points from the continuous image sequence collected by the visual sensor module 40.
The sampling module 52 samples the image feature points extracted by the feature detection module 51 at a rate of one frame every ten seconds and passes the data to the mobile visual search module.
The feature matching module 53 matches the point features of consecutive frames extracted by the feature detection module 51.
The motion estimation module 55 maps the matched points of consecutive frames into a three-dimensional coordinate system, thereby estimating the motion trajectory and mileage of the mobile terminal 10 over this period of time.
The visual odometry module 50 uses the FAST (Features from Accelerated Segment Test) algorithm, which has low complexity and good detection performance, for feature-point detection. FAST first performs corner detection. The corner criterion is: for a test point p, there must be N consecutive points among the 16 surrounding circle pixels whose values are all smaller than the value of p minus a threshold, or all larger than the value of p plus the threshold.
Non-maximum suppression is then applied. A score is computed for each corner with a scoring function. The rule of the scoring function is: among the 16 surrounding points, take the maximal run of consecutive points (at least 10) whose absolute difference x from p satisfies x > t; the score is the minimum of those absolute differences. Then, within each 3x3 region, only the point with the largest score is kept. The score can be written as
V = min { |I_x - I_p| : x in the maximal consecutive run S with |I_x - I_p| > t },
where V denotes the score and t the threshold.
The specific detection steps of the FAST algorithm are:
(1) perform a non-corner rejection test on part of the pixels on the circle;
(2) if the point is preliminarily judged to be a corner, perform corner detection on all the pixels on the circle;
(3) apply non-maximum suppression to the corners and output the corner set.
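Steps (1) and (2) above can be sketched as follows, under the usual FAST conventions: a 16-pixel Bresenham circle of radius 3, a compass-point pre-test for step (1), and N = 12 contiguous pixels for step (2). The patent leaves N unspecified, so N = 12 is an assumption (the compass pre-test is only valid for that choice); step (3), the 3x3 non-maximum suppression, is omitted for brevity.

```python
# Offsets of the 16 pixels on the radius-3 Bresenham circle around (x, y).
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_corner(img, x, y, t, n=12):
    p = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    # Step (1): cheap rejection using the four compass points (1, 5, 9, 13);
    # for n = 12, at least three of them must be brighter or darker than p.
    compass = [ring[0], ring[4], ring[8], ring[12]]
    if sum(v > p + t for v in compass) < 3 and sum(v < p - t for v in compass) < 3:
        return False
    # Step (2): look for n contiguous brighter (or darker) ring pixels;
    # the ring wraps around, so scan it doubled.
    for sign in (1, -1):
        run = 0
        for v in ring + ring:
            run = run + 1 if sign * (v - p) > t else 0
            if run >= n:
                return True
    return False
```

An isolated bright pixel passes (all 16 ring pixels are darker), while a flat patch is rejected by the compass pre-test without ever scanning the full ring, which is where FAST gets its speed.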
Fig. 4 is the flow diagram for the indoor orientation method that one embodiment of the application provides.As shown in figure 4, this method packet It includes:
401. Establish the two-dimensional code location library and the database 110;
Establishing the two-dimensional code location library and the database 110 of map server 20 includes:
deploying two-dimensional codes at key positions (such as floor stair entrances and mall entrances) in advance, and establishing the two-dimensional code location library;
acquiring key-point images around indoor paths with equipment such as laser rangefinders and high-precision cameras, recording the key-point positions, and deploying the key-point images and positions into the database 110.
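The offline database step can be sketched as follows. The schema is entirely illustrative (table and column names, and the region-code string, are assumptions), since the patent only specifies that key-point images and positions are stored in database 110.

```python
import sqlite3

# Hypothetical key-point store: each row ties a surveyed position and a
# reference image to the region code of a nearby deployed two-dimensional code.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE keypoints (
    id INTEGER PRIMARY KEY,
    region_code TEXT,   -- region code associated with a deployed QR code
    x REAL, y REAL,     -- surveyed key-point position (metres, floor frame)
    image_path TEXT)""")
conn.execute(
    "INSERT INTO keypoints (region_code, x, y, image_path) VALUES (?, ?, ?, ?)",
    ("F1-STAIRS", 12.5, 3.0, "f1_stairs.jpg"))
rows = conn.execute(
    "SELECT x, y FROM keypoints WHERE region_code = ?", ("F1-STAIRS",)).fetchall()
```

Querying by region code is what later lets the server restrict matching to one area.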
402. Acquire the two-dimensional code information and an image sequence, and obtain image feature information according to the image sequence;
Obtaining image features according to the image sequence includes: extracting image features from the collected continuous image sequence, and sampling the image features.
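The extract-then-sample step can be sketched as follows; the extractor callback and the sampling interval are illustrative stand-ins (the patent fixes a rate but does not give an implementation).

```python
def extract_and_sample(frames, extract, interval=10):
    """Extract features from every frame of the sequence, then keep one
    feature set per `interval` frames for the moving-vision search."""
    features = [extract(f) for f in frames]
    return features[::interval]
```

With 25 frames and an interval of 10, three feature sets (frames 0, 10, 20) are forwarded.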
403. Obtain the moving-vision search positioning result according to the two-dimensional code information and the image feature information.
The visual search range is adjusted according to the two-dimensional code information, and image-matching positioning is performed within the visual search range according to the image feature information, obtaining the moving-vision search positioning result.
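Step 403 can be sketched as follows. The data layout (dictionary keys, region codes) and the nearest-descriptor rule are assumptions standing in for the patent's unspecified matching procedure; the point is that the scanned code narrows the candidate set before any image matching happens.

```python
import numpy as np

def locate(query_desc, database, region_code):
    """Match a query descriptor only against images whose region code
    (taken from the scanned two-dimensional code) matches, and return
    the position of the best-matching key point."""
    candidates = [e for e in database if e["region"] == region_code]
    best = min(candidates,
               key=lambda e: np.linalg.norm(query_desc - e["desc"]))
    return best["position"]
```

A query near one region-A descriptor returns that key point even when a closer descriptor exists in region B.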
The method further includes:
404. Obtain the visual odometry positioning result according to the image sequence, correct the visual odometry positioning result according to the moving-vision search positioning result, and display the final positioning result.
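The correction in step 404 can be sketched as a simple convex combination of the two fixes; the patent does not specify the correction rule, so the weighting below is an assumption (a real system might instead reset the odometry origin or run a Kalman-style filter).

```python
def fuse(odometry_pos, search_pos, weight=0.7):
    """Correct the visual-odometry estimate with the moving-vision-search
    fix: weight is the trust placed in the search result (assumed value)."""
    return tuple(weight * s + (1 - weight) * o
                 for o, s in zip(odometry_pos, search_pos))
```

With equal weights, an odometry estimate of (0, 0) and a search fix of (10, 10) fuse to (5, 5).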
Correspondingly, an embodiment of the present application also provides a computer-readable storage medium storing a computer program; when executed, the computer program can realize each step executable by the electronic device in the above method embodiments.
Those skilled in the art should understand that the embodiments of the application may be provided as a method, a system, or a computer program product. Therefore, the application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms such as non-persistent memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can realize information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other kinds of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette, magnetic tape or disk storage or other magnetic storage device, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, commodity, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, commodity, or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes the element.
The above are only embodiments of the application and are not intended to limit the application. For those skilled in the art, various modifications and changes to the application are possible. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the application shall be included within the scope of the claims of the application.

Claims (11)

1. An indoor positioning system, characterized by comprising: a mobile terminal and a map server;
The mobile terminal is used for:
acquiring two-dimensional code information and image feature information, and sending the two-dimensional code information and the image feature information to the map server;
obtaining a visual odometry positioning result;
displaying the final positioning result according to the visual odometry positioning result and the moving-vision search positioning result of the map server;
The map server is used for:
generating the moving-vision search positioning result of the map server based on the two-dimensional code information and the image feature information.
2. The indoor positioning system according to claim 1, characterized in that the mobile terminal comprises a two-dimensional code scanning module, a visual sensor module, a visual odometry module, a mobile-terminal moving-vision search module, a fusion positioning module, and a positioning display module;
The two-dimensional code scanning module is used for: scanning a two-dimensional code to obtain the two-dimensional code information;
The visual sensor module is used for: acquiring visual images to obtain an image sequence;
The visual odometry module is used for:
extracting the image feature information of the image sequence;
sampling the image feature information to obtain sample information;
obtaining the visual odometry positioning result according to the image sequence;
The mobile-terminal moving-vision search module is used for: encoding the sample information and transmitting it to the map server;
The fusion positioning module is used for: correcting the visual odometry positioning result according to the moving-vision search positioning result of the map server;
The positioning display module is used for: displaying the final positioning result.
3. The indoor positioning system according to claim 1 or 2, characterized in that the map server comprises a two-dimensional code area positioning module, a server-side moving-vision search module, and a database;
The two-dimensional code area positioning module is used for: judging the initial area position of the positioning according to the two-dimensional code information, and providing a region code;
The server-side moving-vision search module is used for: performing matching positioning according to the sample information to obtain the moving-vision search positioning result;
The database is used for: transferring region pictures to the server-side moving-vision search module according to the region code.
4. The indoor positioning system according to claim 2, characterized in that the visual odometry module comprises a feature detection module, a sampling module, a calibration module, a feature matching module, a motion estimation module, and a position optimization module;
The feature detection module is used for: extracting the image feature information from the image sequence;
The sampling module is used for: sampling the image feature information, and passing the sampled data to the mobile-terminal moving-vision search module;
The calibration module is used for: calibrating the first frame of the image sequence from the visual sensor module and setting it as the key frame;
The feature matching module is used for: matching the point features of the preceding and following frame images extracted by the feature detection module;
The motion estimation module is used for: mapping the matched points of the preceding and following frame images to a three-dimensional coordinate system, so as to estimate the mileage;
The position optimization module is used for: updating the current trajectory according to the estimated mileage of the mobile terminal.
5. The indoor positioning system according to claim 2, characterized in that the mobile-terminal moving-vision search module comprises a descriptor extraction module and a descriptor encoding module;
The descriptor extraction module is used for: extracting the feature descriptors generated from the sampled data;
The descriptor encoding module is used for: encoding the generated feature descriptors.
6. The indoor positioning system according to claim 3, characterized in that the server-side moving-vision search module comprises a descriptor decoding module, a descriptor matching module, and a positioning module;
The descriptor decoding module is used for: decoding the encoded feature descriptors;
The descriptor matching module is used for: matching the decoded feature descriptors with the region picture data transferred from the database;
The positioning module is used for: determining the specific position according to the matched picture.
7. An indoor positioning method, characterized by comprising:
establishing a two-dimensional code location library and a database;
acquiring two-dimensional code information and an image sequence, and obtaining image feature information according to the image sequence;
obtaining a moving-vision search positioning result according to the two-dimensional code information and the image feature information.
8. The indoor positioning method according to claim 7, characterized in that the method further comprises: obtaining a visual odometry positioning result according to the image sequence, correcting the visual odometry positioning result according to the moving-vision search positioning result, and displaying the final positioning result.
9. The indoor positioning method according to claim 7, characterized in that establishing the two-dimensional code location library and the map server database comprises:
deploying two-dimensional codes at key positions in advance, and establishing the two-dimensional code location library;
acquiring key-point images around indoor paths, recording the key-point positions, and deploying the key-point images and positions into the database.
10. The indoor positioning method according to claim 7, characterized in that obtaining image features according to the image sequence comprises: extracting image features from the collected continuous image sequence, and sampling the image features.
11. The indoor positioning method according to claim 7, characterized in that obtaining the moving-vision search positioning result according to the two-dimensional code information and the image features comprises: adjusting the visual search range according to the two-dimensional code information, and performing image-matching positioning within the visual search range according to the image feature information, to obtain the moving-vision search positioning result.
CN201811469706.7A 2018-12-04 2018-12-04 A kind of indoor orientation method and system Pending CN109612455A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811469706.7A CN109612455A (en) 2018-12-04 2018-12-04 A kind of indoor orientation method and system


Publications (1)

Publication Number Publication Date
CN109612455A true CN109612455A (en) 2019-04-12

Family

ID=66005764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811469706.7A Pending CN109612455A (en) 2018-12-04 2018-12-04 A kind of indoor orientation method and system

Country Status (1)

Country Link
CN (1) CN109612455A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112880681A (en) * 2021-01-12 2021-06-01 桂林慧谷人工智能产业技术研究院 SSD-based visual indoor positioning system technical method
CN112967517A (en) * 2021-02-23 2021-06-15 中煤科工开采研究院有限公司 Underground vehicle positioning system based on coded pattern recognition

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102354475A (en) * 2011-10-08 2012-02-15 浙江元亨通信技术有限公司 Adaptive localization self-help tour guide method and system
CN104520732A (en) * 2012-02-10 2015-04-15 Isis创新有限公司 Method of locating sensor and related apparatus
CN105143907A (en) * 2013-04-22 2015-12-09 阿尔卡特朗讯 Localization systems and methods
CN105792353A (en) * 2016-03-14 2016-07-20 中国人民解放军国防科学技术大学 Image matching type indoor positioning method with assistance of crowd sensing WiFi signal fingerprint
CN106291517A (en) * 2016-08-12 2017-01-04 苏州大学 The indoor cloud robot angle localization method optimized with visual information based on position
CN106767810A (en) * 2016-11-23 2017-05-31 武汉理工大学 The indoor orientation method and system of a kind of WIFI and visual information based on mobile terminal
CN106793086A (en) * 2017-03-15 2017-05-31 河北工业大学 A kind of indoor orientation method
CN107024216A (en) * 2017-03-14 2017-08-08 重庆邮电大学 Introduce the intelligent vehicle fusion alignment system and method for panoramic map
CN107368867A (en) * 2017-07-26 2017-11-21 四川西谷物联科技有限公司 Image information reponse system and server
CN107402012A (en) * 2016-05-20 2017-11-28 北京自动化控制设备研究所 A kind of Combinated navigation method of vehicle
CN107613262A (en) * 2017-09-30 2018-01-19 驭势科技(北京)有限公司 A kind of Vision information processing System and method for
CN107656545A (en) * 2017-09-12 2018-02-02 武汉大学 A kind of automatic obstacle avoiding searched and rescued towards unmanned plane field and air navigation aid
CN108318043A (en) * 2017-12-29 2018-07-24 百度在线网络技术(北京)有限公司 Method, apparatus for updating electronic map and computer readable storage medium
CN108717710A (en) * 2018-05-18 2018-10-30 京东方科技集团股份有限公司 Localization method, apparatus and system under indoor environment



Similar Documents

Publication Publication Date Title
US11003956B2 (en) System and method for training a neural network for visual localization based upon learning objects-of-interest dense match regression
US9275499B2 (en) Augmented reality interface for video
CN109059895A (en) A kind of multi-modal indoor ranging and localization method based on mobile phone camera and sensor
TW201715476A (en) Navigation system based on augmented reality technique analyzes direction of users' moving by analyzing optical flow through the planar images captured by the image unit
CN103119611A (en) Method and apparatus for image-based positioning
US11113896B2 (en) Geophysical sensor positioning system
CN111323024B (en) Positioning method and device, equipment and storage medium
US9239965B2 (en) Method and system of tracking object
CN107407566A (en) Vector field fingerprint mapping based on VLC
Feng et al. Visual map construction using RGB-D sensors for image-based localization in indoor environments
Zhang et al. Seeing Eye Phone: a smart phone-based indoor localization and guidance system for the visually impaired
CN111664848B (en) Multi-mode indoor positioning navigation method and system
CN109612455A (en) A kind of indoor orientation method and system
CN114185073A (en) Pose display method, device and system
Elias et al. An accurate indoor localization technique using image matching
CN108289327A (en) A kind of localization method and system based on image
JP6580286B2 (en) Image database construction device, position and inclination estimation device, and image database construction method
Jonas et al. IMAGO: Image-guided navigation for visually impaired people
CN110146104A (en) The air navigation aid of electronic device
Li et al. Vision-based indoor localization via a visual SLAM approach
Liu et al. A Low-cost and Scalable Framework to Build Large-Scale Localization Benchmark for Augmented Reality
US11741631B2 (en) Real-time alignment of multiple point clouds to video capture
Huang et al. Ubiquitous indoor vision navigation using a smart device
JP2020020645A (en) Position detection system and position detection method
Nguyen et al. A hybrid positioning system for indoor navigation on mobile phones using panoramic images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190412
