CN106447585A - Urban area and indoor high-precision visual positioning system and method - Google Patents
- Publication number
- CN106447585A (application number CN201610847459.4A)
- Authority
- CN
- China
- Prior art keywords
- feature
- image
- information
- dimensional
- vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/40—Business processes related to the transportation industry
Abstract
The invention provides an urban-area and indoor high-precision visual positioning system and method, in which image information of the surrounding environment is acquired by a vision sensor to realize high-precision positioning in urban areas and indoors. The method comprises the following steps: after the vision sensor captures scene image information, the salient feature information of the image is calculated and extracted; according to the salient feature information, similarity recognition and matching are carried out in the feature information database of a digital three-dimensional model; according to the matched coordinate information recorded during image feature matching, the geometric mapping from the three-dimensional scene to the two-dimensional image is recovered, and a camera intersection model from two-dimensional image coordinates to three-dimensional space coordinates is established to determine the three-dimensional position and attitude information of the vision sensor and the dynamic user; crowd-sourced image information captured by vision sensors is received; the digital three-dimensional model corresponding to the real scene is reconstructed or updated; and the feature information database is updated. The invention provides robust and sustainable positioning capability when the surrounding environment changes.
Description
Technical field
The present invention relates to the field of urban-area and indoor navigation and positioning technology, and in particular to a method that collects image information with a user's vision sensor, extracts features, and builds a solid-geometry camera-intersection mapping by matching those features against a three-dimensional model, thereby providing centimetre-level high-precision positioning in urban canyons and indoors and offering effective technical support for future unmanned driving and indoor positioning.
Background art
In today's society, people's demand for navigation and positioning information keeps growing. With the rapid development of science and technology, space-based positioning systems, represented by GNSS (Global Navigation Satellite Systems), provide all-weather, low-cost positioning, navigation and timing services to users worldwide at any moment. However, space-based positioning systems face increasing environmental limitations in urban areas and indoors. In central business districts, for example, skyscrapers block radio-frequency signals and cause multipath interference, so that user equipment cannot track satellite signals continuously, leading to long time-to-fix, low accuracy and poor continuity of the positioning results, or, in severe cases, no position fix at all. In indoor environments, GNSS signals are in most cases completely blocked by buildings and cannot be used.
To improve the positioning performance of space-based systems in urban areas and to raise the reliability and continuity of the overall navigation solution, assisted GNSS (A-GNSS) emerged, fusing terrestrial cellular networks with space-based positioning. Although A-GNSS improves the search efficiency for satellite signals, shortens the time to first fix and reduces the power consumption of the mobile terminal to a certain extent, it brings no essential improvement in positioning accuracy.
With the continuous deepening of urban digitization and informatization, the digital city and even the smart city are increasingly becoming the direction of future urban development, building brand-new urban forms through the Internet of Things, sensor networks and the like. Researchers have begun to use wireless sensors such as Wi-Fi access points (APs) or Bluetooth beacons as positioning sources, realizing seamless positioning in urban and indoor environments through the fingerprint-matching principle. Taking Wi-Fi as an example: although the system is easy to deploy, its APs are numerous within a city and its application range is wide, the signal coverage is nevertheless limited, building the fingerprint database consumes a great deal of labour, the signal is easily disturbed by radio-frequency interference, the positioning accuracy and the stability of the results are poor, and the power consumption is high. Moreover, when the structure of city buildings changes or the APs are modified, the whole positioning system faces the risk of failure.
With the development of technologies such as unmanned vehicles, virtual reality, augmented reality and mixed reality, providing centimetre-level high-precision positioning services in urban canyons and indoor environments will become a public demand in the near future. Owing to the complexity of these positioning environments, none of the above classes of positioning technology can meet users' requirements for accuracy, reliability and continuity. High-end simultaneous localization and mapping (SLAM) systems can deliver high-accuracy, high-reliability positioning, but their expensive hardware cost is the biggest obstacle to mass adoption.
Summary of the invention
In view of this, the present invention intends to provide a high-precision seamless positioning method for urban-area and indoor applications, which collects surrounding scene image information with a user-side vision sensor (such as a camera, video recorder or smartphone) and realizes centimetre-level real-time positioning without modifying the network infrastructure or the user's sensors.
The present invention provides an urban-area and indoor high-precision visual positioning system, which acquires surrounding image information with a vision sensor to realize high-precision positioning in urban areas and indoors, comprising an image feature calculation subunit, an image feature matching subunit, a camera intersection calculation subunit, and a feature database and three-dimensional model update subunit; wherein
the image feature calculation subunit is used for calculating and extracting the salient feature information of an image after the vision sensor captures scene image information;
the image feature matching subunit is used for carrying out similarity recognition and matching in the feature information database of the digital three-dimensional model according to the salient feature information determined by the image feature calculation subunit, and for recording, after matching, the two-dimensional coordinates of each feature in the image and its three-dimensional geographic coordinates in the real scene;
the camera intersection calculation subunit is used for recovering the geometric mapping from the three-dimensional scene to the two-dimensional image according to the matched coordinate information recorded by the image feature matching subunit, establishing a camera intersection model from two-dimensional image coordinates to three-dimensional space coordinates, and determining the three-dimensional position and attitude information of the vision sensor device and the dynamic user;
the feature database and three-dimensional model update subunit is used for receiving crowd-sourced image information captured by vision sensors, reconstructing or updating the digital three-dimensional model corresponding to the real scene, and updating the feature information database.
Moreover, the positioning environment includes indoor and outdoor environments; the dynamic user includes pedestrians, motor-vehicle drivers, passengers, manned vehicles and unmanned (automatically driven) vehicles; and the vision sensor includes smartphones, wearable devices, digital cameras, video cameras, monitors, action cameras, driving recorders, reversing cameras and depth cameras.
Moreover, the number of vision sensors may be one or more, and the coverage of the vision sensors includes single-direction, multi-direction and omnidirectional viewing angles.
Moreover, if a motor vehicle is used as the dynamic user, the placement positions of the vision sensors include inside and outside the cabin, and the placement attitudes include forward-looking, rear-looking, left-looking, right-looking, side-looking, downward-looking and upward-looking.
Moreover, the feature information database of the digital three-dimensional model contains at least feature description information and the three-dimensional coordinate information corresponding to each feature, and the specific description format of the features is determined by the selected feature extraction algorithm.
Moreover, the ways of acquiring the digital three-dimensional model and the feature information database of the positioning scene include commercial purchase, network resource acquisition and autonomous construction.
Moreover, the autonomous construction of the digital three-dimensional model and the feature information database of the positioning scene comprises: integrating crowd-sourced image data on a cloud server, extracting and matching scene features across the crowd-sourced images, and computing the three-dimensional model and recording the feature information database by means of computer-vision geometric relations.
Moreover, the feature information calculated and recorded by the image feature calculation subunit includes the plane coordinates of each feature and the statistical descriptor values characterizing it, the concrete representation of which is determined by the selected feature extraction algorithm.
Moreover, the image feature matching subunit uses coarse user position information to improve the efficiency of matching image features against the feature information database.
Moreover, the image feature matching subunit compares the statistical descriptor values of image features with those of features in the feature information database and, according to the selected matching criterion, determines the best matches from the two-dimensional image to the three-dimensional model; each match record includes at least the two-dimensional image coordinates and the real-scene three-dimensional coordinates.
Moreover, the camera intersection calculation subunit includes erroneous-match rejection and position solving;
the erroneous-match rejection detects and finds mismatches among all of the two-dimensional-to-three-dimensional match mappings, so that erroneous matches are rejected before the position solving is carried out;
the position solving determines the position and attitude information of the vision sensor or the user in a known coordinate system by recovering the solid-geometry mapping from the three-dimensional coordinates of multiple spatial points in the real scene to the two-dimensional pixel coordinates of the corresponding image feature points.
Moreover, the camera intersection calculation subunit also includes a filter for further filtering and optimizing the positioning results.
Moreover, the camera intersection calculation subunit may take the position of a single vision sensor as the solution object and ignore the other vision sensors, or may adopt a joint-processing mode to fuse the information of multiple vision sensors.
Moreover, the feature database and three-dimensional model update subunit continuously receives crowd-sourced images and/or video information captured by a large number of vision sensors, reconstructs or updates the digital three-dimensional model of the real scene within the range covered by the crowd-sourced imagery, and adds newly emerging features and their corresponding three-dimensional coordinates to the original feature information database, building an updated feature information database that provides a robust, reliable and sustainable positioning function.
The present invention correspondingly provides an urban-area and indoor high-precision visual positioning method, which acquires surrounding image information with a vision sensor to realize high-precision positioning in urban areas and indoors, comprising an image feature calculation process, an image feature matching process, a camera intersection calculation process, and a feature database and three-dimensional model update process; wherein
the image feature calculation process calculates and extracts the salient feature information of an image after the vision sensor captures scene image information;
the image feature matching process carries out similarity recognition and matching in the feature information database of the digital three-dimensional model according to the salient feature information determined by the image feature calculation process, and records, after matching, the two-dimensional coordinates of each feature in the image and its three-dimensional geographic coordinates in the real scene;
the camera intersection calculation process recovers the geometric mapping from the three-dimensional scene to the two-dimensional image according to the matched coordinate information recorded by the image feature matching process, establishes a camera intersection model from two-dimensional image coordinates to three-dimensional space coordinates, and determines the three-dimensional position and attitude information of the vision sensor device and the dynamic user;
the feature database and three-dimensional model update process receives crowd-sourced image information captured by vision sensors, reconstructs or updates the digital three-dimensional model corresponding to the real scene, and updates the feature information database.
Moreover, the positioning environment includes indoor and outdoor environments; the dynamic user includes pedestrians, motor-vehicle drivers, passengers, manned vehicles and unmanned (automatically driven) vehicles; and the vision sensor includes smartphones, wearable devices, digital cameras, video cameras, monitors, action cameras, driving recorders, reversing cameras and depth cameras.
Moreover, the number of vision sensors may be one or more, and the coverage of the vision sensors includes single-direction, multi-direction and omnidirectional viewing angles.
Moreover, if a motor vehicle is used as the dynamic user, the placement positions of the vision sensors include inside and outside the cabin, and the placement attitudes include forward-looking, rear-looking, left-looking, right-looking, side-looking, downward-looking and upward-looking.
Moreover, the feature information database of the digital three-dimensional model contains at least feature description information and the three-dimensional coordinate information corresponding to each feature, and the specific description format of the features is determined by the selected feature extraction algorithm.
Moreover, the ways of acquiring the digital three-dimensional model and the feature information database of the positioning scene include commercial purchase, network resource acquisition and autonomous construction.
Moreover, the autonomous construction of the digital three-dimensional model and the feature information database of the positioning scene comprises: integrating crowd-sourced image data on a cloud server, extracting and matching scene features across the crowd-sourced images, and computing the three-dimensional model and recording the feature information database by means of computer-vision geometric relations.
Moreover, the feature information calculated and recorded by the image feature calculation process includes the plane coordinates of each feature and the statistical descriptor values characterizing it, the concrete representation of which is determined by the selected feature extraction algorithm.
Moreover, the image feature matching process uses coarse user position information to improve the efficiency of matching image features against the feature information database.
Moreover, the image feature matching process compares the statistical descriptor values of image features with those of features in the feature information database and, according to the selected matching criterion, determines the best matches from the two-dimensional image to the three-dimensional model; each match record includes at least the two-dimensional image coordinates and the real-scene three-dimensional coordinates.
Moreover, the camera intersection calculation process includes erroneous-match rejection and position solving;
the erroneous-match rejection detects and finds mismatches among all of the two-dimensional-to-three-dimensional match mappings, so that erroneous matches are rejected before the position solving is carried out;
the position solving determines the position and attitude information of the vision sensor or the user in a known coordinate system by recovering the solid-geometry mapping from the three-dimensional coordinates of multiple spatial points in the real scene to the two-dimensional pixel coordinates of the corresponding image feature points.
Moreover, the camera intersection calculation process also includes a filter for further filtering and optimizing the positioning results.
Moreover, the camera intersection calculation process may take the position of a single vision sensor as the solution object and ignore the other vision sensors, or may adopt a joint-processing mode to fuse the information of multiple vision sensors.
Moreover, the feature database and three-dimensional model update process continuously receives crowd-sourced images and/or video information captured by a large number of vision sensors, reconstructs or updates the digital three-dimensional model of the real scene within the range covered by the crowd-sourced imagery, and adds newly emerging features and their corresponding three-dimensional coordinates to the original feature information database, building an updated feature information database that provides a robust, reliable and sustainable positioning function.
The urban-area and indoor high-precision visual positioning scheme proposed by the invention has the following beneficial effects:
(1) Images or video data acquired by handheld, vehicle-mounted, wearable or other mobile vision sensor devices serve as direct observations in urban and indoor environments; there is no need to continuously receive radio-frequency signals from GNSS or WLAN, so the multipath and occlusion phenomena to which radio-frequency signals are prone are avoided and continuous positioning is achieved.
(2) When the captured environment contains enough matching feature points, the camera intersection scheme of the invention effectively avoids the jumps in positioning results that scene switching causes in conventional matching methods, and realizes centimetre-level positioning at low cost and high efficiency.
(3) The hardware required by the scheme is based entirely on equipment already available on the market (including vision sensors and cloud servers), so no extra cost is incurred for hardware modification or upgrading; the scheme also supports crowd-sourced data acquisition, greatly reducing the labour required.
(4) Because the scheme continuously acquires images of the surrounding environment, it can supplement and update the digital three-dimensional models of cities and indoor spaces, thereby guaranteeing robust and sustainable positioning capability when the surroundings or the scene change (e.g. redecorated layouts or newly built structures).
Brief description of the drawings
Fig. 1 is a schematic diagram of GNSS satellite signal reception in an urban environment in the prior art.
Fig. 2 is a schematic diagram of capturing indoor image information with a handheld mobile device according to an embodiment of the present invention.
Fig. 3 is an example of realizing the urban-area high-precision visual positioning method with a head-mounted vision sensor device according to an embodiment of the present invention.
Fig. 4 is an example of realizing the urban-area high-precision visual positioning method with a vehicle-mounted mobile vision sensor device according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of feature matching in the urban-area high-precision visual positioning method of an embodiment of the present invention.
Fig. 6 illustrates the image-plane coordinate system o-xy, the camera coordinate system O-XcYcZc and the ground coordinate system Og-XgYgZg, and the transformation relations among them, according to an embodiment of the present invention.
Fig. 7 is a data-flow diagram of positioning-information acquisition according to an embodiment of the present invention.
Fig. 8 is an implementation flow chart of the urban-area high-precision visual positioning method according to an embodiment of the present invention.
Specific embodiment
The technical scheme of the present invention is described in further detail below with reference to embodiments and the accompanying drawings.
First, the reception of radio-frequency positioning signals by a user in an urban environment is described. Fig. 1 depicts GNSS satellite signal reception in an urban environment.
As shown in Fig. 1, while the user (an automobile) travels through an urban canyon lined with tall buildings, the GNSS antenna continuously receives radio-frequency signals from navigation satellites (including but not limited to GPS, BeiDou, GLONASS, Galileo, WAAS, EGNOS, QZSS, GAGAN/IRNSS, etc.) and/or ground pseudolites. Because the skyscrapers in Fig. 1 block the radio-frequency signals, the user cannot continuously track the navigation signals of enough satellites within the visible range, leading to long time-to-fix, low accuracy and poor continuity of the positioning results, or, in severe cases, no position fix at all.
On the other hand, a vision sensor can capture a large amount of precise information within a very short time (0.001-0.01 s) in the form of images or video. Whenever it lies within the coverage of an external light source, images or video of the surrounding scene can be obtained at any moment. Because buildings stand densely in urban areas and indoor scenes are complex, the images and/or videos acquired by a vision sensor in these environments usually exhibit rich, varied and clearly distinguishable edge and/or texture features, and the image features of different scenes usually differ greatly.
In concrete implementations, the method provided by the present invention can run as an automatic software pipeline, or the corresponding system can be realized in a modular fashion.
The visual positioning system provided by the embodiment of the present invention includes four parts: an image feature calculation subunit, an image feature matching subunit, a camera intersection calculation subunit, and a feature database and three-dimensional model update subunit; wherein
the image feature calculation subunit calculates and extracts the salient feature information of an image after the vision sensor captures scene image information;
the image feature matching subunit carries out similarity recognition and matching in the feature information database of the digital three-dimensional model according to the salient feature information determined by the image feature calculation subunit, and records, after matching, the two-dimensional coordinates of each feature in the image and its three-dimensional geographic coordinates in the real scene;
the camera intersection calculation subunit recovers the geometric mapping from the three-dimensional scene to the two-dimensional image according to the matched coordinate information recorded by the image feature matching subunit, establishes a camera intersection model from two-dimensional image coordinates to three-dimensional space coordinates, and determines the three-dimensional position and attitude information of the vision sensor device and the dynamic user;
the feature database and three-dimensional model update subunit receives crowd-sourced image information captured by vision sensors, reconstructs or updates the digital three-dimensional model corresponding to the real scene, and updates the feature information database.
The positioning environments to which the present invention applies include indoor and outdoor environments.
The applicable users include, but are not limited to, pedestrians, motor-vehicle drivers, passengers, manned vehicles and unmanned vehicles. The vision sensor platforms used include, but are not limited to, smartphones, digital cameras (including ordinary digital cameras), action cameras, wearable devices (such as head-mounted devices), video cameras, monitors, driving recorders (factory-fitted or retrofitted) and reversing cameras (factory-fitted or retrofitted), as well as depth cameras. If a vehicle platform is used, the vision sensors may be placed inside or outside the cabin (including the roof); the sensor attitudes may include (but are not limited to) forward-looking, rear-looking, left-looking, right-looking, side-looking, etc.; the number of vision sensors may be one or several; and the vision sensor(s) may observe and perceive the surrounding scene from single-direction, multi-direction or omnidirectional viewing angles.
Preferably, a motor vehicle is used as the dynamic user, and the placement positions of the vision sensors include inside and outside the cabin. If a motor vehicle is used as the dynamic user, the placement attitudes of the vision sensors include forward-looking, rear-looking, left-looking, right-looking, side-looking, downward-looking and upward-looking.
An embodiment of the present invention, in which image information acquired by a handheld mobile device in an indoor environment serves as the source of positioning information, is now illustrated. As shown in Fig. 2, a user holding a mobile terminal is in a domestic kitchen; the camera sensor of the mobile terminal captures the true three-dimensional scene information and digitizes it into a two-dimensional image space. By obtaining image and/or video information that is rich in texture and has clear, distinguishable edges, the core idea of the present invention is to extract features from the image or video collected by the user's vision sensor, match them against the three-dimensional model of the city or the indoor space, recover the solid-geometry mapping from real-scene features to the corresponding image features, and finally compute the three-dimensional position coordinates and/or the attitude of the user's vision sensor.
Fig. 3 is a schematic diagram of an embodiment that realizes the urban-area high-precision visual positioning method with a head-mounted vision sensor device, acquiring images and/or videos of real urban or indoor scenes in (but not limited to) virtual-reality or augmented-reality settings, similarly to Fig. 2. Likewise, Fig. 4 is a schematic diagram of an embodiment that realizes the urban-area high-precision visual positioning method with a vehicle-mounted mobile vision sensor device.
The method requires the digital three-dimensional model of the positioning scene and its feature information database. The feature information database of the digital three-dimensional model contains at least feature description information and the three-dimensional coordinate information corresponding to each feature, and the specific description format of the features is determined by the selected feature extraction algorithm.
A prerequisite for the urban and indoor high-precision visual positioning proposed by the invention is therefore a digital three-dimensional model of the positioning scene with a rich and complete feature information database. The methods for extracting and describing feature points in the feature information database include, but are not limited to, the Harris algorithm, the Contrast Context Histogram (CCH) algorithm, the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, the Nearest Feature Trajectory (NFT) algorithm, and derivative or combined algorithms of the like.
The feature information database of the digital three-dimensional model of the positioning scene includes, but is not limited to, the description information of each feature point in the three-dimensional model and the three-dimensional coordinates corresponding to each feature point. The description format of the feature points is determined by the selected feature extraction algorithm, including but not limited to the Harris algorithm, the Contrast Context Histogram (CCH) algorithm, the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, the Nearest Feature Trajectory (NFT) algorithm, and derivative or combined algorithms of the like.
The digital three-dimensional model of the positioning scene and the feature information database can be obtained by (but not limited to) commercial purchase, network resource acquisition or autonomous construction.
The autonomous construction of the digital three-dimensional model and the feature information database of the positioning scene comprises: integrating crowd-sourced image data on a cloud server, extracting and matching scene features across the crowd-sourced images, and computing the three-dimensional model and recording the feature information database using computer-vision geometric relations. In this autonomous approach, crowd-sourced scene image data are integrated on the cloud server; computer-vision algorithms (such as SIFT, SURF, NFT and derivative or combined algorithms of the like) extract and match a large number of scene feature points to build the scene feature information database; and, using Structure-from-Motion (SfM) theory together with a small number of points of known coordinates, the three-dimensional model of the real scene is built within the range covered by the crowd-sourced imagery, as sketched below.
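As an illustration of one such construction step, the following sketch performs a single two-view reconstruction with OpenCV: features are extracted and matched, the relative pose is recovered, and the matched points are triangulated into sparse scene points. The library calls, thresholds and the intrinsic matrix K are illustrative assumptions, not a prescription of the patent.

```python
# Minimal sketch of one two-view SfM step; thresholds and intrinsics K
# are illustrative assumptions only.
import cv2
import numpy as np

def two_view_reconstruct(img1, img2, K):
    """Recover relative pose between two views and triangulate scene points."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors with Lowe's ratio test.
    raw = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = []
    for pair in raw:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Essential matrix with RANSAC, then relative camera pose.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the inlier correspondences into 3-D scene points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 scene points
```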
In the present invention, the crowd-sourced image data originate from all of the positioning environments, dynamic users, vision sensors, sensor quantities, sensor coverages, sensor placement positions and sensor placement attitudes described above. Preferably, the crowd-sourced data sources include, but are not limited to, images or videos captured by a large number of smartphone users and uploaded to the cloud server, and images or videos captured by the vision sensors carried by a large number of vehicle platforms and uploaded to the cloud server.
The three-dimensional coordinate system of the positioning-scene three-dimensional model can be based on a universal reference datum (such as the Earth-Centred, Earth-Fixed (ECEF) coordinate system), a regional reference datum (such as a local geographic coordinate system), or another reference datum.
The high-precision visual positioning method proposed by the embodiment of the present invention correspondingly includes four parts: (I) one or more image feature calculation processes, (II) one or more image feature matching processes, (III) a camera intersection process, and (IV) a feature database and three-dimensional model update process.
(I) One or more image feature calculations: after the vision sensor captures scene image information, the salient feature information of the image is calculated and extracted.
For part (I), the feature information calculated and recorded during image feature calculation includes the plane coordinates of each feature and the statistical descriptor values characterizing it; the concrete representation of a feature is determined by the selected feature extraction algorithm. The computing platform may be the vision sensor itself or a remote platform.
In concrete implementations, the vision sensor analyses the image features immediately after capturing the scene image. Using Harris, CCH, SIFT, SURF or NFT algorithms, or derivative or combined algorithms of the like, all salient features of one or more images (e.g. synchronized shots from multiple vehicle-mounted cameras) are calculated and extracted. This computing task can be realized either on the vision sensor platform or by uploading the images to the cloud server through a (wired or wireless) transmission link (including but not limited to USB, serial port, GSM, CDMA, WCDMA, WLAN, etc.), as sketched below.
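A minimal sketch of such a feature calculation, assuming OpenCV's SIFT implementation stands in for the selected extraction algorithm, records each feature's plane coordinates together with the descriptor values that characterize it:

```python
# Sketch of process (I): extract salient features from a captured frame.
# SIFT is one of the algorithms named above; the record layout is an
# illustrative assumption.
import cv2

def extract_features(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    if descriptors is None:
        return []
    # Per feature: image-plane coordinates plus the statistical descriptor
    # values (here a 128-dimensional SIFT vector).
    return [{"xy": kp.pt, "descriptor": des}
            for kp, des in zip(keypoints, descriptors)]
```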
For part (II), coarse user position information can be provided during image feature matching to improve the efficiency of matching image features against the feature information database. By comparing the statistical descriptor values of image features with those of features in the feature information database, the best matches from the two-dimensional image to the three-dimensional model are determined according to the selected matching criterion; each match record includes at least the two-dimensional image coordinates and the real-scene three-dimensional coordinates.
In concrete implementations, the vision sensor(s) of a single user platform may at any instant capture one surrounding scene image (e.g. a pedestrian shooting with a handheld smartphone) or several (e.g. vehicle-mounted multi-direction or omnidirectional cameras). In this step, all features of the captured image(s) are matched one by one in the feature information database of the positioning-scene three-dimensional model. Whenever a matching feature pair is found, the pixel coordinates of the feature in the current image and its three-dimensional geographic coordinates in the feature database are recorded simultaneously. The image feature matching algorithms adopted may include (but are not limited to) VA-file, IQ-tree, LPC-file, iDistance, multiple kD-tree, cluster-decomposition B+-tree, VQ-Index, LSH, KLSH, LSB-tree, TCAN, WSH, MS-tree, MRSVQH and Best-Bin-First, and derivative or combined algorithms of the like; one possible form is sketched below.
To improve the efficiency of matching the current image feature(s) against the feature information database, an initial coarse position (with an accuracy of several metres to tens of metres) can be obtained with existing positioning technologies applicable to urban and indoor environments, so that the database entries to be retrieved can be restricted to the vicinity of the user's approximate location. The initial positioning can be realized by a single technology or by fusing several, including (but not limited to) GNSS positioning, Wi-Fi positioning, magnetic positioning, Pedestrian Dead Reckoning (PDR), and derivative or combined algorithms of the like. Initial positioning is particularly important in complex positioning sites, because the feature information database of such scenes can hold an enormous amount of information; providing an approximate location narrows the search range and greatly improves the efficiency of database retrieval and feature matching, as in the sketch below.
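A sketch of this pruning, assuming the database stores the three-dimensional feature coordinates as an array; the 50 m search radius is an assumed value:

```python
# Sketch of pruning the feature database around a coarse position fix
# (e.g. from GNSS, Wi-Fi or PDR).
import numpy as np

def prune_database(db_xyz, db_des, coarse_xyz, radius_m=50.0):
    """Keep only features within radius_m of the coarse user position."""
    d = np.linalg.norm(db_xyz - np.asarray(coarse_xyz), axis=1)
    keep = d < radius_m
    return db_xyz[keep], db_des[keep]
```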
The computing task of part (II) can be realized either on the vision sensor platform or on the cloud server. When part (II) is computed on the vision sensor platform (e.g. the user uses a smartphone or a wearable device), the platform first downloads the surrounding scene feature database entries from the cloud server through a (wired or wireless) transmission link (including but not limited to USB, serial port, GSM, CDMA, WCDMA, WLAN, etc.). When part (II) is computed on the cloud server (e.g. the user uses an ordinary digital camera, action camera, driving recorder or reversing camera on a vehicle platform), the cloud server receives at least the current coarse position of the vision sensor and the image feature point information through such a transmission link.
For part (III), the camera intersection calculation includes erroneous-match rejection and position solving. The erroneous-match rejection detects and finds the mismatches among all of the two-dimensional-to-three-dimensional match mappings, so that erroneous matches are rejected before position solving; the position solving determines the position and attitude information of the vision sensor or the user in a known coordinate system by recovering the solid-geometry mapping from the three-dimensional coordinates of multiple spatial points in the real scene to the two-dimensional pixel coordinates of the corresponding image feature points.
Further, the camera intersection calculation may also use a filter and/or other optimization algorithms for further filtering and optimizing the positioning results.
Further still, the position of a single vision sensor may be taken as the solution object while the other vision sensors are simply ignored, or a joint-processing mode may be adopted to fuse the information of multiple vision sensors.
After task (II) has been carried out, the camera intersection principle is used to recover the solid-geometry mapping from the three-dimensional coordinates of multiple spatial points in the real scene to the two-dimensional pixel coordinates of the corresponding feature points of the current image(s). Fig. 5 illustrates the feature matching in the embodiment of Fig. 4: point O is the camera's optical centre; A, B, C, D and E are three-dimensional points of the real scene; A', B', C', D' and E' are the corresponding matched feature points in the two-dimensional image plane, the matching result having been computed by part (II).
To handle the erroneous matches that may remain after the feature matching of part (II), part (III) applies epipolar-geometry principles and constraints, and uses the Random Sample Consensus (RANSAC) algorithm, its derivatives or other algorithms to reject mismatches between the feature points of the currently captured image(s) and the three-dimensional spatial feature points; the camera intersection model from two-dimensional image coordinates to three-dimensional space coordinates is then established, and the three-dimensional position coordinates and attitude of the vision sensor device are computed.
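By way of illustration, the mismatch rejection and position solving can be sketched with OpenCV's RANSAC-based PnP solver standing in for the camera intersection model; the reprojection threshold and confidence level are assumed values:

```python
# Sketch of process (III): joint RANSAC mismatch rejection and pose
# solving from 2-D/3-D match records.
import cv2
import numpy as np

def solve_pose(pts3d, pts2d, K):
    """pts3d: Nx3 scene coords; pts2d: Nx2 pixel coords; K: 3x3 intrinsics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(pts3d), np.float32(pts2d), K, None,
        reprojectionError=3.0, confidence=0.999)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)       # world-to-camera rotation
    camera_center = -R.T @ tvec      # sensor position in the scene frame
    return camera_center.ravel(), R, inliers
```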
To simplify the explanation of the principle of the invention, the camera intersection principle is illustrated below with a single image (the invention also covers multiple synchronously captured images and other situations; this example is for illustration only). The imaging geometry of the vision sensor device is shown in Fig. 6, where point O is the camera's optical centre; the axes Xc and Yc are parallel to the x-axis and y-axis of the image-plane coordinate system o-xy, respectively; Zc is the camera's principal optical axis, perpendicular to the image plane; the intersection of the principal optical axis with the image plane is the principal point o; O-XcYcZc constitutes the camera coordinate system; and the distance Oo is the camera focal length f. Og-XgYgZg is the local coordinate system (which may be based on a universal reference datum, a regional reference datum, or another reference datum). An embodiment of the present invention is described below.
Any three-dimensional spatial point P and its image point p satisfy the photogrammetric Direct Linear Transformation (DLT) relation:

    u + (l_1 X + l_2 Y + l_3 Z + l_4) / (l_9 X + l_10 Y + l_11 Z + 1) = 0
    v + (l_5 X + l_6 Y + l_7 Z + l_8) / (l_9 X + l_10 Y + l_11 Z + 1) = 0        (1)

where (u, v)^T and (X, Y, Z)^T are the coordinates of the image point in the photo coordinate system and of P in the local coordinate system, respectively, and the coefficients l_i (i = 1, 2, ..., 11) are functions of the camera sensor's interior orientation elements (u_0, v_0, f)^T, the exterior orientation elements (X_c, Y_c, Z_c, ω, φ, κ)^T, the camera-axis non-orthogonality factor d_s and the axis scale factor d_c. (u_0, v_0)^T is the coordinate offset of the principal point o; among the exterior orientation elements, the three line elements (X_c, Y_c, Z_c)^T describe the coordinates of the camera-frame origin in the local coordinate system, and the three angle elements (ω, φ, κ)^T describe the rotation of the camera coordinate system relative to the local coordinate system. In this embodiment, determining the final user positioning result therefore amounts to solving the exterior orientation elements (X_c, Y_c, Z_c, ω, φ, κ)^T.
After step (II) of the present invention has been carried out, formula (1) yields, over all matched points,

    A_{2n×11} l_{11×1} = b_{2n×1}        (2)

where n is the total number of matched points found after step (II), and l_{11×1}, b_{2n×1} and A_{2n×11} are, respectively,

    l_{11×1} = [l_1 l_2 l_3 l_4 l_5 l_6 l_7 l_8 l_9 l_10 l_11]^T        (3)

    b_{2n×1} = -[u_1 v_1 u_2 v_2 ... u_n v_n]^T        (4)

with the i-th matched point contributing to A_{2n×11} the two rows

    [X_i Y_i Z_i 1 0 0 0 0 u_i·X_i u_i·Y_i u_i·Z_i]
    [0 0 0 0 X_i Y_i Z_i 1 v_i·X_i v_i·Y_i v_i·Z_i]        (5)

When the number of matched points at an arbitrary shooting instant satisfies n ≥ 6 (i.e. r_row ≥ r_column, where r_row and r_column are the row rank and column rank of the coefficient matrix A_{2n×11}), abbreviating A_{2n×11}, l_{11×1} and b_{2n×1} as A, l and b, least squares gives

    l = (A^T A)^{-1} A^T b        (6)

The camera sensor's interior orientation elements (u_0, v_0, f)^T, the camera-axis non-orthogonality factor d_s and the axis scale factor d_c can be obtained as a priori values by camera calibration, or solved directly by the direct linear transformation (DLT) of this embodiment; since the computation of the interior orientation elements and of d_s and d_c is prior art, it is not detailed here.
For the three exterior orientation line elements, the expressions of the l_i (i = 1, 2, ..., 11) give

    [l_1 l_2 l_3; l_5 l_6 l_7; l_9 l_10 l_11] · [X_c, Y_c, Z_c]^T = -[l_4, l_8, 1]^T        (7)

The direction cosines formed by the three exterior orientation angle elements (ω, φ, κ)^T constitute the rotation matrix R with elements r_ij, as described by formula (8), and the relation between the coefficients l_i (i = 1, 2, ..., 11) and the direction cosines of the exterior orientation elements is given by formula (9).
From formula (9), the angle values are not unique; let their principal values be φ ∈ [0, π/2], ω ∈ [0, π/2] and κ ∈ [0, π]. In practice, each of the three exterior orientation angle elements may take the values ±α or π±α (α = ω, φ, κ). On the other hand, in the direct linear transformation the focal length f and the three exterior orientation angle elements satisfy relation (10), where γ = r_31·X_c + r_32·Y_c + r_33·Z_c and the r_ij (1 ≤ i ≤ 3, 1 ≤ j ≤ 3) are the corresponding elements of the rotation matrix R in formula (8). Combining the candidate values of the angles ω, φ and κ and recomputing the r_ij and γ for each combination, the group among the eight candidate combinations that makes the estimate of the focal length f in formula (10) positive yields the values of ω, φ and κ.
It should be emphasized that the rotation parameters and translation variables of the vision sensor device relative to the local coordinate system, obtained through formulas (7)-(10), are the user's position and attitude information in the local coordinate system.
It should be repeated that the above merely illustrates, with corresponding simplifications of practical situations, how the user position in the local coordinate system is solved when a single image is shot at an arbitrary instant. In practical applications one may, on the basis of formulas (7)-(10), continue with precise solving using methods such as (but not limited to) photogrammetric space resection and bundle adjustment; the urban-area and indoor high-precision visual positioning system and method of the present invention are therefore not limited to the situations enumerated in the above embodiment.
After the user position and attitude information has been obtained, the positioning results can be further filtered and optimized according to the specific embodiment to obtain better positioning information, using (but not limited to) various filters and/or their derivative algorithms and/or other optimization algorithms; a minimal example follows.
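A minimal sketch of such post-filtering, using a simple scalar-gain recursive filter as a stand-in for the various filters mentioned above; the gain value is an assumption:

```python
# Sketch of smoothing the sequence of position fixes produced by the
# camera intersection calculation.
import numpy as np

def smooth_positions(fixes, gain=0.3):
    """fixes: iterable of 3-vectors (x, y, z); returns smoothed estimates."""
    est, out = None, []
    for z in fixes:
        z = np.asarray(z, dtype=float)
        est = z if est is None else est + gain * (z - est)
        out.append(est.copy())
    return out
```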
Because the user platform may carry one or several vision sensors and shoot the surrounding scene from single-direction, multi-direction or even omnidirectional viewing angles, the two-dimensional feature points found to correspond to three-dimensional spatial points at a given instant after part (II) may come from one vision sensor or from several. When these feature points come from several vision sensors, one can either <1> take the position of a single vision sensor as the solution object and simply ignore the other vision sensors, or <2> adopt a joint-processing mode to fuse the information of the multiple vision sensors.
Under strategy <1>, the two-dimensional feature-point information from the other vision sensors is simply ignored; this situation is equivalent to all feature points coming from a single vision sensor. Under strategy <2>, strategy <1> can be executed sequentially or in parallel for the different vision sensors, and a (but not limited to) weighted average or geometric centre of the solved sensor positions can be taken, as sketched below; alternatively, the geometric relations among the multiple vision sensors can be combined so that the user position is solved in a unified camera intersection model.
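A sketch of the weighted-average variant of strategy <2>, assuming each sensor's fix is weighted by its inlier match count (an illustrative weighting rule, not the patent's prescription):

```python
# Sketch of fusing per-sensor camera-intersection fixes into one
# user position.
import numpy as np

def fuse_sensor_fixes(positions, inlier_counts):
    """positions: list of 3-vectors; inlier_counts: matches per sensor."""
    P = np.asarray(positions, dtype=float)
    w = np.asarray(inlier_counts, dtype=float)
    return (w[:, None] * P).sum(axis=0) / w.sum()
```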
The above merely enumerates user-position solution strategies that may arise when multiple vision sensors capture images synchronously at a given instant; the urban-area and indoor high-precision visual positioning method of the present invention is not limited to the enumerated situations.
For part (III), the positioning accuracy of the vision sensor depends mainly on the distribution of the pixel coordinates, in the current image(s), of the feature points drawn from the feature information database. If those feature points appear densely and uniformly distributed in the current image(s), the user positioning accuracy can be maintained at centimetre level; if they appear sparsely and unevenly distributed, the user positioning accuracy decreases accordingly (to decimetre-to-metre level).
The computing task of part (III) can be realized either on the vision sensor platform or on the cloud server. When part (III) is computed on the vision sensor platform (e.g. the user uses a smartphone or a wearable device), the platform downloads in advance, through at least one (wired or wireless) transmission link (including but not limited to USB, serial port, GSM, CDMA, WCDMA, WLAN, etc.), the matched feature-point information and the corresponding three-dimensional point coordinates. When part (III) is computed on the cloud server (e.g. the user uses an ordinary digital camera, action camera, driving recorder or reversing camera on a vehicle platform), the mobile terminal uploads, through at least one such transmission link, the matched feature-point information and the corresponding plane pixel coordinates.
For part (IV), when the feature database and the three-dimensional model are updated, crowd-sourced images and/or videos shot by a large number of vision sensors are continuously received; the three-dimensional model of the real scene is reconstructed or updated within the range covered by the crowd-sourced imagery; and newly emerging features and their corresponding three-dimensional coordinates are added to the original feature information database, building an updated feature information database that provides a robust, reliable and sustainable positioning function.
Through (wired or wireless) transmission links (including but not limited to USB, serial port, GSM, CDMA, WCDMA, WLAN, etc.), the cloud server continuously receives crowd-sourced images and videos shot by a large number of vision sensors, so new scene content keeps emerging. The cloud server uses computer-vision algorithms such as Harris, CCH, SIFT, SURF and NFT, or derivative or combined algorithms of the like, to extract and match a large number of scene features and, combining them with the original feature database entries, uses Structure-from-Motion (SfM) theory or its derivatives to reconstruct or update the three-dimensional model of the real scene within the range covered by the crowd-sourced imagery. After the real-scene three-dimensional model has been updated, the newly emerging features and their corresponding three-dimensional coordinates are added to the original feature information database, building the updated feature information database, for instance as sketched below.
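A minimal sketch of appending newly reconstructed features to the database, assuming an array-based store (an illustrative layout, not the patent's data structure):

```python
# Sketch of process (IV): extend the feature information database with
# newly triangulated features and their 3-D coordinates.
import numpy as np

def update_feature_db(db_des, db_xyz, new_des, new_xyz):
    """Concatenate new features onto the existing descriptor/coord arrays."""
    db_des = np.vstack([db_des, np.float32(new_des)])
    db_xyz = np.vstack([db_xyz, np.float64(new_xyz)])
    return db_des, db_xyz
```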
Also for part (IV), the cloud server can rely on the crowd-sourced vision sensor data to build a three-dimensional Geographic Information System (GIS) database. A complete three-dimensional GIS database helps satisfy numerous practical applications of virtual reality and augmented reality. Meanwhile, the continuous enrichment of the crowd-sourced data resources supplements and updates the scene feature database and the city digital three-dimensional model, so that when the surrounding environment or scene changes (e.g. redecorated layouts or newly built structures), the cloud server can continuously repair and update the scene feature database and the three-dimensional model, enabling this method to provide a robust, reliable and sustainable positioning function.
In a specific implementation, the processes of the respective parts can be performed as needed and are not limited to sequential execution. Fig. 7 shows in detail the data flow for obtaining position information according to an embodiment of the invention. The vision sensor device first captures an image and computes and extracts the image feature information; after the vision sensor device is initially located using an existing positioning method, the urban digital three-dimensional model of the local area is determined, together with the scene feature information library data; the feature information library is then searched one by one to match the corresponding features in the image; after the corresponding two-dimensional and three-dimensional feature coordinate information is recorded, the position and attitude of the camera sensor device are recovered through the three-dimensional mapping relation of the camera intersection, so that the camera sensor device provides the user positioning information; meanwhile, the scene feature information library is supplemented using three-dimensional model construction methods such as Structure from Motion, achieving the update of the urban digital three-dimensional model.
Fig. 8 shows in detail the application and implementation flow according to an embodiment of the invention. After the vision sensor acquires an image, feature information extraction is performed; supported by the urban digital three-dimensional model and the scene feature information library, feature matching is carried out, and in turn the urban digital three-dimensional model and the scene feature information library are supplemented and updated. If the embodiment uses a single vision sensor device, the user position and attitude are obtained directly through the camera intersection calculation. If the embodiment uses multiple sensors, the treatment depends on whether they are regarded as a single device: if the multiple vision sensors are regarded as a single device, the computation is the same as for a single device; if they are not, the camera intersection calculation is performed after a specific fusion strategy is applied. After the camera intersection calculation, the conversion between the camera coordinate system and the local coordinate system is performed, yielding the user position and attitude information.
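The final conversion from the camera coordinate system to the local coordinate system can be sketched as a rigid transform applied to the solved pose. In the sketch below, the transform `(R_lw, t_lw)` mapping world coordinates into local coordinates is assumed to be known in advance (e.g., from georeferencing the digital three-dimensional model); the function name and argument layout are illustrative only.

```python
import numpy as np

def camera_to_local(R_wc, t_wc, R_lw, t_lw):
    """Convert a pose expressed in the reconstruction's world frame into the
    local (e.g., site or city) frame used to report the user position.

    R_wc, t_wc : rotation/translation mapping world points into camera
                 coordinates (the solvePnP output above).
    R_lw, t_lw : known rigid transform mapping world coords to local coords.
    """
    cam_center_world = (-R_wc.T @ t_wc).ravel()        # camera position, world frame
    position_local = R_lw @ cam_center_world + t_lw    # user position, local frame
    attitude_local = R_lw @ R_wc.T                     # camera-to-local rotation (attitude)
    return position_local, attitude_local
```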
In summary, the urban-area and indoor high-precision visual positioning method proposed by the invention uses (one or more) vision sensors to achieve (single-direction, multi-direction, or omnidirectional) collection of surrounding image information and, by means of the urban digital three-dimensional model and a high-performance cloud computing service platform, provides convenient and efficient centimeter-level positioning results for a large number of personal or vehicle-mounted (manned or unmanned) users without modifying the existing hardware infrastructure or adding device sensors, greatly improving the high-precision seamless positioning capability of mobile users in urban areas and indoors. At the same time, by acquiring images of the surrounding environment during positioning, the urban and indoor digital three-dimensional models are supplemented and updated, ensuring that the method can provide a robust and sustainable positioning capability when the surrounding environment changes. The transmission link methods include wide-area wireless transmission, local wireless transmission, personal wireless transmission, and wired transmission.
Each unit of the system provided by the invention corresponds to a part of the method described above and is not detailed further here. The explanations provided above should be regarded as illustrations applicable to the various possible embodiments.
Claims (28)
1. An urban-area and indoor high-precision visual positioning system, characterized in that: it acquires surrounding image information based on vision sensors and achieves high-precision positioning in urban areas and indoors, comprising an image feature calculation subunit, an image feature matching subunit, a camera intersection calculation subunit, and a feature library and three-dimensional model update subunit; wherein,
the image feature calculation subunit is configured to calculate and extract the salient feature information of an image after the vision sensor captures the scene image information;
the image feature matching subunit is configured to perform similarity identification and matching in the feature information library of the digital three-dimensional model according to the salient feature information determined by the image feature calculation subunit, and to record the two-dimensional coordinates of the matched features in the image and their three-dimensional geographic coordinates in the real scene;
the camera intersection calculation subunit is configured to recover the geometric mapping relation from the three-dimensional scene to the two-dimensional image according to the matched coordinate information recorded by the image feature matching subunit, to establish the camera intersection model from two-dimensional image coordinates to three-dimensional space coordinates, and to determine the three-dimensional position and attitude information of the vision sensor device and the dynamic user;
the feature library and three-dimensional model update subunit is configured to receive the multi-source image information captured by the vision sensors, to reconstruct or update the corresponding digital three-dimensional model of the real scene, and to update the feature information library.
2. The urban-area and indoor high-precision visual positioning system as claimed in claim 1, characterized in that: the positioning environment includes indoor and outdoor environments;
the dynamic user objects include pedestrians, motor vehicle drivers, passengers, manned vehicles, and unmanned vehicles;
the vision sensors include smartphones, wearable devices, digital cameras, video cameras, monitors, action cameras, driving recorders, reversing cameras, and depth cameras.
3. The urban-area and indoor high-precision visual positioning system as claimed in claim 2, characterized in that: the number of vision sensors is one or more; the coverage of the vision sensors includes single-direction, multi-direction, and omnidirectional viewing angles.
4. The urban-area and indoor high-precision visual positioning system as claimed in claim 2, characterized in that: if a motor vehicle serves as the dynamic user, the placement positions of the vision sensors include inside and outside the cabin; the placement attitudes of the vision sensors include forward-looking, rear-looking, left-looking, right-looking, side-looking, downward-looking, and upward-looking.
5. The urban-area and indoor high-precision visual positioning system as claimed in claim 1, characterized in that: the feature information library of the digital three-dimensional model contains at least the feature description information and the three-dimensional coordinate information corresponding to each feature, and the specific description form of the features is determined by the selected feature extraction algorithm.
6. The urban-area and indoor high-precision visual positioning system as claimed in claim 5, characterized in that: the ways of acquiring the digital three-dimensional model and the feature information library of the positioning scene include commercial purchase, acquisition from network resources, and autonomous construction.
7. The urban-area and indoor high-precision visual positioning system as claimed in claim 6, characterized in that: the autonomous construction of the digital three-dimensional model and the feature information library of the positioning scene comprises integrating multi-source image information data on the cloud server, extracting and matching the scene features of the multi-source images, and using computer vision geometric relations to compute the three-dimensional model and record the feature information library.
8. The urban-area and indoor high-precision visual positioning system as claimed in any one of claims 1 to 7, characterized in that: the feature information calculated and recorded by the image feature calculation subunit includes the plane coordinates of each feature and the various statistical information values characterizing the feature, and the specific representation form of the features is determined by the selected feature extraction algorithm.
9. The urban-area and indoor high-precision visual positioning system as claimed in any one of claims 1 to 7, characterized in that: the image feature matching subunit uses rough user position information to improve the efficiency of matching the image features against the feature information library data.
10. The urban-area and indoor high-precision visual positioning system as claimed in claim 9, characterized in that: the image feature matching subunit compares the statistical information values of the image features with those of the features in the feature information library and, according to the selected feature matching criterion, determines the correct matches from the two-dimensional image to the three-dimensional model, each match recording at least the two-dimensional image coordinates and the real-scene three-dimensional coordinates.
11. The urban-area and indoor high-precision visual positioning system as claimed in claim 9, characterized in that: the camera intersection calculation subunit includes mismatched-feature rejection and position solving;
the mismatched-feature rejection process is used to detect and locate erroneous matches among all the two-dimensional-to-three-dimensional match mappings, so that the erroneous matches are rejected before the position solving is carried out;
the position solving determines the position and attitude information of the vision sensor or user in a known coordinate system by recovering the solid geometry mapping from the three-dimensional coordinates of multiple spatial points in the real scene to the two-dimensional pixel coordinates of the corresponding image feature points.
12. The urban-area and indoor high-precision visual positioning system as claimed in claim 9, characterized in that: the camera intersection calculation subunit further includes a filter for further filtering and optimizing the positioning results.
13. The urban-area and indoor high-precision visual positioning system as claimed in claim 9, characterized in that: the camera intersection calculation subunit either solves the position of a given single vision sensor as the solution object while ignoring the other vision sensors, or adopts a joint processing mode to fuse the information of multiple vision sensors.
14. The urban-area and indoor high-precision visual positioning system as claimed in any one of claims 1 to 7, characterized in that: the feature library and three-dimensional model update subunit continuously receives the multi-source images and/or video information captured by a large number of vision sensors, reconstructs or updates the corresponding digital three-dimensional model of the real scene within the area covered by the multi-source image information, and adds newly appearing features and their corresponding three-dimensional coordinates to the original feature information library, building an updated feature information library that provides a robust, reliable, and sustainable positioning capability.
15. An urban-area and indoor high-precision visual positioning method, characterized in that: it acquires surrounding image information based on vision sensors and achieves high-precision positioning in urban areas and indoors, comprising an image feature calculation process, an image feature matching process, a camera intersection calculation process, and a feature library and three-dimensional model update process; wherein,
the image feature calculation process calculates and extracts the salient feature information of an image after the vision sensor captures the scene image information;
the image feature matching process performs similarity identification and matching in the feature information library of the digital three-dimensional model according to the salient feature information determined by the image feature calculation process, and records the two-dimensional coordinates of the matched features in the image and their three-dimensional geographic coordinates in the real scene;
the camera intersection calculation process recovers the geometric mapping relation from the three-dimensional scene to the two-dimensional image according to the matched coordinate information recorded by the image feature matching process, establishes the camera intersection model from two-dimensional image coordinates to three-dimensional space coordinates, and determines the three-dimensional position and attitude information of the vision sensor device and the dynamic user;
the feature library and three-dimensional model update process receives the multi-source image information captured by the vision sensors, reconstructs or updates the corresponding digital three-dimensional model of the real scene, and updates the feature information library.
16. The urban-area and indoor high-precision visual positioning method as claimed in claim 15, characterized in that: the positioning environment includes indoor and outdoor environments;
the dynamic user objects include pedestrians, motor vehicle drivers, passengers, manned vehicles, and unmanned vehicles;
the vision sensors include smartphones, wearable devices, digital cameras, video cameras, monitors, action cameras, driving recorders, reversing cameras, and depth cameras.
17. The urban-area and indoor high-precision visual positioning method as claimed in claim 16, characterized in that: the number of vision sensors is one or more; the coverage of the vision sensors includes single-direction, multi-direction, and omnidirectional viewing angles.
18. The urban-area and indoor high-precision visual positioning method as claimed in claim 16, characterized in that: if a motor vehicle serves as the dynamic user, the placement positions of the vision sensors include inside and outside the cabin; the placement attitudes of the vision sensors include forward-looking, rear-looking, left-looking, right-looking, side-looking, downward-looking, and upward-looking.
19. The urban-area and indoor high-precision visual positioning method as claimed in claim 15, characterized in that: the feature information library of the digital three-dimensional model contains at least the feature description information and the three-dimensional coordinate information corresponding to each feature, and the specific description form of the features is determined by the selected feature extraction algorithm.
20. The urban-area and indoor high-precision visual positioning method as claimed in claim 19, characterized in that: the ways of acquiring the digital three-dimensional model and the feature information library of the positioning scene include commercial purchase, acquisition from network resources, and autonomous construction.
21. The urban-area and indoor high-precision visual positioning method as claimed in claim 20, characterized in that: the autonomous construction of the digital three-dimensional model and the feature information library of the positioning scene comprises integrating multi-source image information data on the cloud server, extracting and matching the scene features of the multi-source images, and using computer vision geometric relations to compute the three-dimensional model and record the feature information library.
22. The urban-area and indoor high-precision visual positioning method as claimed in any one of claims 15 to 21, characterized in that: the feature information calculated and recorded by the image feature calculation process includes the plane coordinates of each feature and the various statistical information values characterizing the feature, and the specific representation form of the features is determined by the selected feature extraction algorithm.
23. The urban-area and indoor high-precision visual positioning method as claimed in any one of claims 15 to 21, characterized in that: the image feature matching process uses rough user position information to improve the efficiency of matching the image features against the feature information library data.
24. The urban-area and indoor high-precision visual positioning method as claimed in claim 23, characterized in that: the image feature matching process compares the statistical information values of the image features with those of the features in the feature information library and, according to the selected feature matching criterion, determines the correct matches from the two-dimensional image to the three-dimensional model, each match recording at least the two-dimensional image coordinates and the real-scene three-dimensional coordinates.
25. The urban-area and indoor high-precision visual positioning method as claimed in claim 23, characterized in that: the camera intersection calculation process includes mismatched-feature rejection and position solving;
the mismatched-feature rejection process is used to detect and locate erroneous matches among all the two-dimensional-to-three-dimensional match mappings, so that the erroneous matches are rejected before the position solving is carried out;
the position solving determines the position and attitude information of the vision sensor or user in a known coordinate system by recovering the solid geometry mapping from the three-dimensional coordinates of multiple spatial points in the real scene to the two-dimensional pixel coordinates of the corresponding image feature points.
26. The urban-area and indoor high-precision visual positioning method as claimed in claim 23, characterized in that: the camera intersection calculation process further includes a filtering step for further filtering and optimizing the positioning results.
27. The urban-area and indoor high-precision visual positioning method as claimed in claim 23, characterized in that: the camera intersection calculation process either solves the position of a given single vision sensor as the solution object while ignoring the other vision sensors, or adopts a joint processing mode to fuse the information of multiple vision sensors.
28. The urban-area and indoor high-precision visual positioning method as claimed in any one of claims 15 to 21, characterized in that: the feature library and three-dimensional model update process continuously receives the multi-source images and/or video information captured by a large number of vision sensors, reconstructs or updates the corresponding digital three-dimensional model of the real scene within the area covered by the multi-source image information, and adds newly appearing features and their corresponding three-dimensional coordinates to the original feature information library, building an updated feature information library that provides a robust, reliable, and sustainable positioning capability.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610847459.4A CN106447585A (en) | 2016-09-21 | 2016-09-21 | Urban area and indoor high-precision visual positioning system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106447585A true CN106447585A (en) | 2017-02-22 |
Family ID: 58166914
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610847459.4A Pending CN106447585A (en) | 2016-09-21 | 2016-09-21 | Urban area and indoor high-precision visual positioning system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106447585A (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441769A (en) * | 2008-12-11 | 2009-05-27 | 上海交通大学 | Real time vision positioning method of monocular camera |
CN101920498A (en) * | 2009-06-16 | 2010-12-22 | 泰怡凯电器(苏州)有限公司 | Device for realizing simultaneous positioning and map building of indoor service robot and robot |
CN103765880A (en) * | 2011-09-12 | 2014-04-30 | 英特尔公司 | Networked capture and 3D display of localized, segmented images |
CN103106252A (en) * | 2013-01-16 | 2013-05-15 | 浙江大学 | Method for using handheld device to position plane area |
CN203719666U (en) * | 2013-11-21 | 2014-07-16 | 西安中科光电精密工程有限公司 | Combined navigation system |
CN104574386A (en) * | 2014-12-26 | 2015-04-29 | 速感科技(北京)有限公司 | Indoor positioning method based on three-dimensional environment model matching |
CN105469405A (en) * | 2015-11-26 | 2016-04-06 | 清华大学 | Visual ranging-based simultaneous localization and map construction method |
CN105674993A (en) * | 2016-01-15 | 2016-06-15 | 武汉光庭科技有限公司 | Binocular camera-based high-precision visual sense positioning map generation system and method |
CN105792353A (en) * | 2016-03-14 | 2016-07-20 | 中国人民解放军国防科学技术大学 | Image matching type indoor positioning method with assistance of crowd sensing WiFi signal fingerprint |
Non-Patent Citations (1)
Title |
---|
Jason Zhi Liang et al., "Image Based Localization in Indoor Environments", 2013 Fourth International Conference on Computing for Geospatial Research and Application |
Cited By (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107588766A (en) * | 2017-09-15 | 2018-01-16 | 南京轩世琪源软件科技有限公司 | A kind of indoor orientation method based on radio area network |
CN108957504A (en) * | 2017-11-08 | 2018-12-07 | 北京市燃气集团有限责任公司 | The method and system of indoor and outdoor consecutive tracking |
CN108009588A (en) * | 2017-12-01 | 2018-05-08 | 深圳市智能现实科技有限公司 | Localization method and device, mobile terminal |
CN108495259A (en) * | 2018-03-26 | 2018-09-04 | 上海工程技术大学 | A kind of gradual indoor positioning server and localization method |
CN108647242A (en) * | 2018-04-10 | 2018-10-12 | 北京天正聚合科技有限公司 | A kind of generation method and system of thermodynamic chart |
CN108828576A (en) * | 2018-04-10 | 2018-11-16 | 上海摩软通讯技术有限公司 | Indoor locating system and method |
CN108647242B (en) * | 2018-04-10 | 2022-04-29 | 北京天正聚合科技有限公司 | Generation method and system of thermodynamic diagram |
CN112020630A (en) * | 2018-04-27 | 2020-12-01 | 北京嘀嘀无限科技发展有限公司 | System and method for updating 3D model of building |
WO2019205069A1 (en) * | 2018-04-27 | 2019-10-31 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for updating 3d model of building |
US11841241B2 (en) | 2018-04-27 | 2023-12-12 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for updating a 3D model of building |
CN108759834A (en) * | 2018-04-28 | 2018-11-06 | 温州大学激光与光电智能制造研究院 | A kind of localization method based on overall Vision |
CN108759834B (en) * | 2018-04-28 | 2023-03-21 | 温州大学激光与光电智能制造研究院 | Positioning method based on global vision |
US11644339B2 (en) | 2018-06-20 | 2023-05-09 | Huawei Technologies Co., Ltd. | Database construction method, positioning method, and related device |
CN110688500B (en) * | 2018-06-20 | 2021-09-14 | 华为技术有限公司 | Database construction method, positioning method and related equipment thereof |
CN110688500A (en) * | 2018-06-20 | 2020-01-14 | 华为技术有限公司 | Database construction method, positioning method and related equipment thereof |
WO2019242392A1 (en) * | 2018-06-20 | 2019-12-26 | 华为技术有限公司 | Database construction method, positioning method and relevant device therefor |
CN109029450A (en) * | 2018-06-26 | 2018-12-18 | 重庆市勘测院 | A kind of indoor orientation method |
CN110675446A (en) * | 2018-07-03 | 2020-01-10 | 百度在线网络技术(北京)有限公司 | Positioning method and device |
CN109691185A (en) * | 2018-07-26 | 2019-04-26 | 深圳前海达闼云端智能科技有限公司 | A kind of localization method, device, terminal and readable storage medium storing program for executing |
CN110895407A (en) * | 2018-08-22 | 2020-03-20 | 郑州宇通客车股份有限公司 | Automatic driving vehicle operation control method integrating camera shooting and positioning and vehicle |
CN109357679A (en) * | 2018-11-16 | 2019-02-19 | 济南浪潮高新科技投资发展有限公司 | A kind of indoor orientation method based on significant characteristics identification |
CN109357679B (en) * | 2018-11-16 | 2022-04-19 | 山东浪潮科学研究院有限公司 | Indoor positioning method based on significance characteristic recognition |
CN111220156B (en) * | 2018-11-25 | 2023-06-23 | 星际空间(天津)科技发展有限公司 | Navigation method based on city live-action |
CN111220156A (en) * | 2018-11-25 | 2020-06-02 | 星际空间(天津)科技发展有限公司 | Navigation method based on city live-action |
CN110567460A (en) * | 2018-12-05 | 2019-12-13 | 昆明北理工产业技术研究院有限公司 | Unmanned platform indoor positioning system and positioning method |
CN109708649B (en) * | 2018-12-07 | 2021-02-09 | 中国空间技术研究院 | Attitude determination method and system for remote sensing satellite |
CN109708649A (en) * | 2018-12-07 | 2019-05-03 | 中国空间技术研究院 | A kind of attitude determination method and system of remote sensing satellite |
CN109857122A (en) * | 2019-03-21 | 2019-06-07 | 浙江尤恩机器人科技有限公司 | Controlling of path thereof, device and the warehouse transportation system of warehouse haulage vehicle |
CN110017841A (en) * | 2019-05-13 | 2019-07-16 | 大有智能科技(嘉兴)有限公司 | Vision positioning method and its air navigation aid |
CN110222761A (en) * | 2019-05-31 | 2019-09-10 | 中国民航大学 | Indoor locating system and indoor orientation method based on digital terrestrial reference map |
CN110222761B (en) * | 2019-05-31 | 2023-01-17 | 中国民航大学 | Indoor positioning system and indoor positioning method based on digital landmark map |
WO2021027692A1 (en) * | 2019-08-09 | 2021-02-18 | 华为技术有限公司 | Visual feature library construction method and apparatus, visual positioning method and apparatus, and storage medium |
CN112348885A (en) * | 2019-08-09 | 2021-02-09 | 华为技术有限公司 | Visual feature library construction method, visual positioning method, device and storage medium |
CN112884834A (en) * | 2019-11-30 | 2021-06-01 | 华为技术有限公司 | Visual positioning method and system |
CN110989599A (en) * | 2019-12-09 | 2020-04-10 | 国网智能科技股份有限公司 | Autonomous operation control method and system for fire-fighting robot of transformer substation |
CN110940316A (en) * | 2019-12-09 | 2020-03-31 | 国网山东省电力公司 | Navigation method and system for fire-fighting robot of transformer substation in complex environment |
CN111177840A (en) * | 2019-12-31 | 2020-05-19 | 广东博智林机器人有限公司 | Updating method and device for building information model, storage medium and processor |
CN111177840B (en) * | 2019-12-31 | 2023-11-03 | 广东博智林机器人有限公司 | Building information model updating method, building information model updating device, storage medium and processor |
CN111238450A (en) * | 2020-02-27 | 2020-06-05 | 北京三快在线科技有限公司 | Visual positioning method and device |
CN111238450B (en) * | 2020-02-27 | 2021-11-30 | 北京三快在线科技有限公司 | Visual positioning method and device |
CN111536978A (en) * | 2020-05-25 | 2020-08-14 | 深圳市城市公共安全技术研究院有限公司 | Indoor positioning navigation system and application method thereof in emergency evacuation |
WO2021258700A1 (en) * | 2020-06-23 | 2021-12-30 | 广东小天才科技有限公司 | Method for indoor and outdoor recognition assistance, smart wearable device, and storage medium |
CN111595349A (en) * | 2020-06-28 | 2020-08-28 | 浙江商汤科技开发有限公司 | Navigation method and device, electronic equipment and storage medium |
CN111986347A (en) * | 2020-07-20 | 2020-11-24 | 汉海信息技术(上海)有限公司 | Device management method, device, electronic device and storage medium |
CN112001947A (en) * | 2020-07-30 | 2020-11-27 | 海尔优家智能科技(北京)有限公司 | Shooting position determining method and device, storage medium and electronic device |
CN112949445A (en) * | 2021-02-24 | 2021-06-11 | 中煤科工集团重庆智慧城市科技研究院有限公司 | Urban management emergency linkage system and method based on spatial relationship |
CN112949445B (en) * | 2021-02-24 | 2024-04-05 | 中煤科工集团重庆智慧城市科技研究院有限公司 | Urban management emergency linkage system and method based on spatial relationship |
WO2024083010A1 (en) * | 2022-10-20 | 2024-04-25 | 腾讯科技(深圳)有限公司 | Visual localization method and related apparatus |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106447585A (en) | Urban area and indoor high-precision visual positioning system and method | |
CN109631887B (en) | Inertial navigation high-precision positioning method based on binocular, acceleration and gyroscope | |
CN107133325B (en) | Internet photo geographic space positioning method based on street view map | |
CN103530881B (en) | Be applicable to the Outdoor Augmented Reality no marks point Tracing Registration method of mobile terminal | |
CN108801274B (en) | Landmark map generation method integrating binocular vision and differential satellite positioning | |
CN107690840B (en) | Unmanned plane vision auxiliary navigation method and system | |
CN105930819A (en) | System for real-time identifying urban traffic lights based on single eye vision and GPS integrated navigation system | |
CN112129281B (en) | High-precision image navigation positioning method based on local neighborhood map | |
KR102200299B1 (en) | A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof | |
CN109596121B (en) | Automatic target detection and space positioning method for mobile station | |
CN111261016A (en) | Road map construction method and device and electronic equipment | |
WO2021017211A1 (en) | Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal | |
CN111815765A (en) | Heterogeneous data fusion-based image three-dimensional reconstruction method | |
CN108253942B (en) | Method for improving oblique photography measurement space-three quality | |
CN113284239B (en) | Method and device for manufacturing electronic sand table of smart city | |
Zhang et al. | Online ground multitarget geolocation based on 3-D map construction using a UAV platform | |
Chen et al. | Real-time geo-localization using satellite imagery and topography for unmanned aerial vehicles | |
CN111652276B (en) | All-weather portable multifunctional bionic positioning and attitude-determining viewing system and method | |
Javed et al. | PanoVILD: a challenging panoramic vision, inertial and LiDAR dataset for simultaneous localization and mapping | |
CN116027351A (en) | Hand-held/knapsack type SLAM device and positioning method | |
CN115435773A (en) | High-precision map collecting device for indoor parking lot | |
CN115409910A (en) | Semantic map construction method, visual positioning method and related equipment | |
CN113009533A (en) | Vehicle positioning method and device based on visual SLAM and cloud server | |
CN110930510A (en) | Urban space three-dimensional reconstruction method | |
Tang et al. | UAV Visual Localization Technology Based on Heterogenous Remote Sensing Image Matching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170222 |