CN109520500A - Accurate positioning and street view library acquisition method based on terminal shooting image matching - Google Patents

Accurate positioning and street view library acquisition method based on terminal shooting image matching

Info

Publication number
CN109520500A
CN109520500A
Authority
CN
China
Prior art keywords
point
image
streetscape
library
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811222795.5A
Other languages
Chinese (zh)
Other versions
CN109520500B (en)
Inventor
胡强 (Hu Qiang)
屈蔷 (Qu Qiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201811222795.5A priority Critical patent/CN109520500B/en
Publication of CN109520500A publication Critical patent/CN109520500A/en
Application granted granted Critical
Publication of CN109520500B publication Critical patent/CN109520500B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 — Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides an accurate positioning and street view library acquisition method based on terminal shooting image matching. A panoramic image near the user's current coordinates is obtained through the map APP of a terminal device such as a mobile phone or navigator and used as the reference road image, while the terminal camera module captures a high-definition real-time street view of the scene ahead as the correction road image. The user's actual coordinates are calculated by feature point extraction and matching together with a monocular vision positioning algorithm, and the original coordinates are corrected accordingly; successfully corrected road images are tagged with their longitude and latitude and entered into the street view library. The invention performs positioning with a high-definition street view library built on top of the panorama library of existing map APPs. In an Internet era of ever-growing demand for location-based services, the invention compensates for the small coverage, untimely updates and high acquisition cost of panorama libraries. At the same time, positioning is optimized as users contribute more points and clearer images, greatly improving visual positioning precision.

Description

Accurate positioning and street view library acquisition method based on terminal shooting image matching
Technical field
The present invention relates to the fields of image processing and accurate positioning, and in particular to outdoor street view acquisition and pedestrian navigation and positioning technology.
Background technique
The development of map APPs in recent years has brought great convenience to travelling pedestrians and vehicles, and map developers have also proposed panoramic map technology that reflects the true environment of a destination. However, acquiring panoramic maps requires professional imaging equipment and consumes considerable manpower and material resources. Moreover, panoramic images mainly cover arterial roads, and most were captured more than a year earlier, so the environmental data have limited coverage and are not updated in time, which inconveniences travelling pedestrians and vehicles.
At present, the methods applied to pedestrian and vehicle positioning include GPS positioning, base station positioning and inertial navigation. Civilian GPS positioning basically meets everyday needs, but satellite signals are easily blocked in densely built urban areas, leading to inaccurate positions; the precision of base station positioning depends on the number of base stations nearby, so accurate positioning is hard to achieve in most areas; inertial navigation devices accumulate integration error over time and are therefore difficult to use on their own.
With the development of computers and electronic devices, more effective high-definition street view library acquisition and accurate positioning methods will serve society.
Summary of the invention
Object of the invention: the purpose of the present invention is to propose a street view library acquisition and accurate positioning method based on terminal shooting image matching, to compensate for the small coverage, untimely updates and high acquisition cost of the panorama libraries of current map APPs. At the same time, visual positioning against the newly built high-definition street view library reduces positioning error and greatly facilitates the travel of pedestrians and vehicles.
To achieve the above object, the present invention adopts the following technical scheme:
An accurate positioning method based on terminal shooting image matching comprises the following steps:
1) Obtain a panoramic image near the user's current coordinates through the map APP of the terminal device as the reference road image, and capture a high-definition real-time street view of the scene ahead with the terminal camera module as the correction road image; record in real time the positioning longitude and latitude (X1, Y1) of the reference road image shooting location and the corresponding yaw angle β1, and record in real time the positioning longitude and latitude (X2, Y2) of the correction road image shooting location and the corresponding yaw angle β2.
2) Extract the feature points of the reference road image and the correction road image, perform feature point matching, and find the group of feature points with the highest matching accuracy; calculate the actual longitude and latitude of the correction road image shooting point with a monocular vision positioning algorithm and trigonometric relations, and use the result to correct (X2, Y2).
Preferably, in step 1), the terminal device is a camera-equipped device such as a mobile phone or a navigator.
Preferably, in step 1), the reference road image is chosen as follows: call up the panoramic map of nearby coordinates with the map APP, connect the point to be corrected and the panoramic map reference point to form a reference line, and select from the panoramic map library the picture whose view direction is offset 30-90 degrees to the left or right of this reference line as the reference road image.
Preferably, in step 2), the feature points of the correction road image and the reference road image are extracted with the SURF (Speeded Up Robust Features) algorithm, and the feature points of the two images are matched with a FLANN matcher.
Preferably, in step 2), the actual longitude and latitude of the correction road image shooting point are calculated with a monocular vision positioning algorithm and trigonometric relations; the specific steps include:
Given the position coordinates (X1, Y1) of the reference road image shooting location A, the distance L1 between point A and feature point C, the distance L2 between the correction road image shooting location B and feature point C, and the yaw angles β1 and β2, the bearing angle η of point A relative to point B is then computed as follows:
L1 and L2 are calculated separately using formula (2), where fx and fy are the logical focal lengths, d is the physical size of each pixel, and (iF, jF) and (io, jo) are respectively the two-dimensional pixel coordinates of feature point C and of the camera optical center in the image coordinate system;
According to trigonometric relations, the distance L3 between point A and point B is calculated as:
Once the distance L3, the bearing angle η and the longitude and latitude of the reference road image shooting location A are known, the longitude and latitude of the correction road image shooting location B can be found by coordinate system transformation.
A street view library acquisition method based on the above accurate positioning method comprises the following steps:
1) For real-time street view images whose longitude and latitude have been successfully resolved, calculate a sharpness parameter and perform content legality detection;
2) Package the qualified street view images together with their longitude and latitude information, enter them into the network server, and store them to build the high-definition street view library.
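As a concrete illustration of step 2), the sketch below stores an accepted street view image together with its resolved longitude and latitude in a small SQLite table on the server side; the table name, column layout and file handling are illustrative assumptions rather than part of the patent.

```python
import sqlite3
from pathlib import Path

def init_library(db_path="streetview_library.db"):
    """Create the street view library table if it does not exist (schema is assumed)."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS street_views (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               image_path TEXT NOT NULL,
               longitude REAL NOT NULL,
               latitude REAL NOT NULL,
               captured_at TEXT
           )"""
    )
    conn.commit()
    return conn

def add_street_view(conn, image_path, longitude, latitude, captured_at=None):
    """Enter one qualified image, tagged with its resolved longitude and latitude."""
    if not Path(image_path).exists():
        raise FileNotFoundError(image_path)
    conn.execute(
        "INSERT INTO street_views (image_path, longitude, latitude, captured_at) VALUES (?, ?, ?, ?)",
        (str(image_path), longitude, latitude, captured_at),
    )
    conn.commit()

# Example usage with the coordinates of the embodiment's data source picture:
# conn = init_library()
# add_street_view(conn, "west_gate.jpg", 118.793467, 31.94281)
```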
Beneficial effects: the present invention performs positioning with a high-definition street view library built on top of the panorama library of existing map APPs, so the positioning region can cover residential streets and other areas not reached by conventional panoramic images. In an Internet era of ever-growing demand for location-based services, the invention compensates for the small coverage, untimely updates and high acquisition cost of panorama libraries. At the same time, positioning is optimized as users contribute more points and clearer images, greatly improving visual positioning precision.
Detailed description of the invention
Fig. 1 is a flow chart of the street view library acquisition method based on terminal shooting image matching according to the present invention;
Fig. 2 is the panoramic map data source picture of the terminal map APP in this embodiment;
Fig. 3 is a schematic diagram of the SURF feature extraction algorithm in this embodiment;
Fig. 4 shows the feature extraction and matching result between the terminal-captured picture and the panorama matching picture in this embodiment;
Fig. 5 is a schematic diagram of the target straight-line ranging model in this embodiment;
Fig. 6 is a schematic diagram of calculating the user's longitude and latitude position in this embodiment.
Specific embodiment
The present invention is explained further below with reference to the accompanying drawings.
The present invention uses terminals with high domestic penetration, such as mobile phones and navigators, as the acquisition devices for the high-definition street view library. Compared with a panoramic map library that requires many vehicles and digital cameras for acquisition, this improves database acquisition efficiency and reduces acquisition cost.
Since the present invention uses vision positioning technology based on image feature point extraction and matching, with the panoramic map library as the base positioning library, and the data contained in current panoramic map libraries are limited to arterial roads, the present invention is only applicable to outdoor positioning and navigation.
A panoramic map, also called a panoramic look-around map, is a map in which real objects are modeled as three-dimensional pictures with a 3D effect; the viewer can drag the map to browse real scenes from different angles. The existing scene is photographed from multiple angles with a digital camera, the shots are then stitched on a computer, and a playback program is loaded to complete the three-dimensional display.
The present invention proposes a high-definition street view library acquisition method based on terminal shooting image matching. The panoramic map data source picture is shown in Fig. 2. The place selected for the experimental picture is the west gate of Nanjing University of Aeronautics and Astronautics; the image resolution is 756 × 434, the latitude and longitude coordinates after the panoramic map pick-up are 118.793467, 31.94281, the pitch angle is 6.313968561389864, the yaw angle is 73.5777451151586, and the zoom level is 3.
The data source picture has the following characteristics: the shooting angle is basically horizontal, the image has large color differences, and the environment it contains is relatively complex, so it can effectively verify the efficiency of the feature extraction algorithm.
The street view library acquisition method based on terminal shooting image matching of this embodiment, as shown in Fig. 1, comprises the following steps:
(1) Obtain the reference road image and correction road image data with the panoramic map library and the terminal device: call up the nearby panoramic map in the APP, connect the point to be determined and the panoramic map reference point, select the image whose view direction is offset 30-90 degrees to the left or right of the reference line as the reference road image, and record in real time the positioning longitude and latitude (X1, Y1) of this reference road image and the corresponding yaw angle β1.
At the same time, the terminal's cellular mobile network and positioning service switches need to be turned on, and the positioning longitude and latitude (X2, Y2) at the shooting moment and the yaw angle β2 output by the magnetic compass and gyroscope are recorded in real time.
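A small sketch of the reference road image selection in step (1), under stated assumptions: the initial bearing of the reference line from the point to be determined to the panoramic map reference point is computed with the standard great-circle formula, and a candidate panorama view is accepted if its recorded yaw lies 30-90 degrees to either side of that line; the convention that yaw is measured in degrees clockwise from north is an assumption.

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def is_suitable_reference(view_yaw_deg, point_lat, point_lon, ref_lat, ref_lon):
    """Accept a panorama view whose yaw is offset 30-90 degrees from the reference line."""
    line_bearing = initial_bearing(point_lat, point_lon, ref_lat, ref_lon)
    # Smallest angular difference between the view direction and the reference line.
    offset = abs((view_yaw_deg - line_bearing + 180.0) % 360.0 - 180.0)
    return 30.0 <= offset <= 90.0
```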
(2) Package the reference road image and the correction road image and upload them to the Internet cloud: because the computing speed of the terminal processor cannot support real-time image matching and computation of the user coordinate information, the cloud-upload function built into the map APP is used and the above operations are carried out on an existing server.
(3) Image matching with the SURF algorithm and a FLANN matcher: first, all feature points of the captured image and the panorama matching image are extracted with the SURF algorithm, as shown in Fig. 3:
Step 1: build the Hessian matrix and generate all points of interest for feature extraction. For an image f(x, y), the Hessian matrix is

H(f(x, y)) = [ ∂²f/∂x²   ∂²f/∂x∂y ; ∂²f/∂x∂y   ∂²f/∂y² ]
When the discriminant of the Hessian matrix attains a local maximum, the current point can be judged to be brighter or darker than the other points in its neighborhood, which determines the position of a candidate feature point.
Step 2: build the scale space. Because of the scale-invariant nature of objects, a scale space must be constructed to find feature points conveniently and to improve computational efficiency. Box filters are used instead of Gaussian filtering; the original image size is kept the same between octaves, while the filter template size used in different octaves gradually increases, so filtering the original image with filters of different sizes builds spaces of different scales, and an integral image is used as an intermediary to accelerate the convolution.
Step 3: feature point localization. Each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood in the two-dimensional image and scale space to preliminarily locate feature points; points with weak responses or wrong locations are then filtered out, leaving the finally stable feature points.
Step 4: feature point orientation assignment. The Haar wavelet responses in a circular neighborhood of the feature point are counted, and the direction of the 60-degree sector with the largest sum of horizontal and vertical Haar wavelet responses is taken as the main orientation of the feature point.
Step 5: generate the feature point descriptor. A 4*4 block of rectangular regions is taken around the feature point, and the horizontal and vertical Haar wavelet responses of the 25 pixels in each region are assembled into a feature vector that serves as the descriptor of the feature point.
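For illustration, the following Python/OpenCV sketch extracts SURF keypoints and descriptors from one image; it assumes an OpenCV build that includes the contrib xfeatures2d module with the non-free SURF implementation enabled, and the Hessian threshold value is an arbitrary example rather than a value prescribed by the patent.

```python
import cv2

def extract_surf_features(image_path, hessian_threshold=400):
    """Detect SURF keypoints and compute their descriptors for one image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # SURF lives in the contrib module and requires OPENCV_ENABLE_NONFREE at build time.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    keypoints, descriptors = surf.detectAndCompute(gray, None)
    return keypoints, descriptors

# kp_ref, des_ref = extract_surf_features("reference_road.jpg")   # panorama matching image
# kp_cor, des_cor = extract_surf_features("correction_road.jpg")  # terminal-captured street view
```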
After feature points have been successfully extracted from the captured street view image and the panorama matching image, the feature vector sets of the feature points are built and compared through the FLANN matcher and screened to obtain the correct matching set; the threshold is adjusted repeatedly to eliminate wrong matches until the group of matched points with the highest matching accuracy is obtained. The feature point extraction and matching result is shown in Fig. 4.
From the result picture it can be seen that there are more than one thousand feature points in each of the two images after feature point extraction. The matching degree is then determined by calculating the Euclidean distance between two feature points, and the specific method for eliminating wrong matches is:
Calculate the maximum value max_dist and minimum value min_dist of the Euclidean distances between feature points and set a threshold h; if the Euclidean distance of a detected feature point pair is less than h*min_dist, the pair is regarded as a successful match;
Observe the matching result and adjust the threshold h repeatedly until the group of matched points with the highest matching accuracy is obtained.
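A minimal sketch of this screening step, continuing the SURF sketch above: the descriptors of the two images are matched with a FLANN-based matcher and matches whose distance exceeds h*min_dist are discarded; the KD-tree parameters and the default value of h are illustrative assumptions.

```python
import cv2

def flann_match(des_ref, des_cor, h=2.0):
    """Match SURF descriptors with FLANN and keep matches below h * min_dist."""
    index_params = dict(algorithm=1, trees=5)   # FLANN_INDEX_KDTREE = 1
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.match(des_ref, des_cor)     # one best match per reference descriptor

    distances = [m.distance for m in matches]
    min_dist, max_dist = min(distances), max(distances)

    # Threshold screening as described above; adjust h until the matches look correct.
    threshold = max(h * min_dist, 1e-2)         # guard against min_dist == 0
    good = [m for m in matches if m.distance <= threshold]
    return good, min_dist, max_dist
```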
(4) Monocular vision positioning resolves the user coordinates and corrects the original coordinates. The principle of the target straight-line ranging model is shown in Fig. 5:
The ranging from the camera to the object is converted into a computation over the image coordinates of the feature point with the highest matching accuracy found in the picture; the feature point coordinates are extracted by an image processing program, and a point-to-point straight-line ranging model is established from the feature point coordinates and the optical center coordinates to calculate the distance from the camera to the measured object.
Assume the optical axis lies in the horizontal direction; then for any spatial point P with coordinates (Xw, Yw, 0), the position (i, j) of the corresponding image point p is calculated as:
where fx and fy are the logical focal lengths, d is the physical size of each pixel, and (cx, cy) are the coordinates of the origin in the image coordinate system;
The area mapping relation between the object surface and the target image is:
where S2 is the image area of the target;
The distance to the target object is expressed as:
where Lw is the distance from feature point P to the origin Ow in the world coordinate system, and (iF, jF) and (io, jo) are the image-coordinate-system coordinates of the feature point and the optical center.
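The exact ranging formulas are not reproduced above; as a loose illustration of the same pinhole similar-triangles reasoning, the sketch below converts a pixel to normalized image-plane coordinates and estimates distance from a known physical area and its measured image area. This is a generic approximation under the stated assumptions, not necessarily the formula used in the patent.

```python
import math

def pixel_to_image_plane(i, j, fx, fy, cx, cy):
    """Convert a pixel (i, j) to normalized image-plane coordinates (pinhole model)."""
    x = (i - cx) / fx
    y = (j - cy) / fy
    return x, y

def distance_from_area(object_area_m2, image_area_px, fx, fy):
    """Estimate object distance from a known physical area and its image area.

    For a planar, fronto-parallel object a pinhole camera gives
    S_img_px ≈ S_obj * fx * fy / Z**2, so Z ≈ sqrt(S_obj * fx * fy / S_img_px).
    This is a generic approximation, not the patent's exact formula.
    """
    return math.sqrt(object_area_m2 * fx * fy / image_area_px)
```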
The distances L1 and L2 from the target feature point to the panorama matching image shooting point and to the street view image shooting point are calculated respectively; the schematic diagram for calculating the longitude and latitude of the shooting point is then shown in Fig. 6:
Given the position coordinates (X1, Y1) of the reference road image shooting location A, the distance L1 between point A and feature point C, the distance L2 between the correction road image shooting location B and point C, and the yaw angles β1 and β2, the bearing angle η of point A relative to point B is computed as follows:
According to trigonometric relations, the distance L3 between point A and point B is calculated as:
Once the distance L3, the bearing angle η and the longitude and latitude of the reference road image shooting location A are known, the longitude and latitude of the correction road image shooting location B can be found by coordinate system transformation.
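The last two steps can be sketched as follows. The law-of-cosines computation of L3 assumes that the angle subtended at feature point C equals the yaw-angle difference, which the patent does not state explicitly, and the coordinate transformation uses a local equirectangular approximation with bearing measured clockwise from north; both conventions are assumptions.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, metres

def distance_AB(L1, L2, beta1_deg, beta2_deg):
    """Law-of-cosines sketch for L3, ASSUMING the angle at feature point C
    equals the yaw-angle difference |beta1 - beta2| (not stated in the patent)."""
    gamma = math.radians(abs(beta1_deg - beta2_deg))
    return math.sqrt(L1**2 + L2**2 - 2 * L1 * L2 * math.cos(gamma))

def offset_latlon(lat_deg, lon_deg, distance_m, bearing_deg):
    """Point reached from (lat, lon) after distance_m metres along bearing_deg
    (clockwise from north), using a local equirectangular approximation that is
    adequate when the distance is a few tens of metres."""
    d_north = distance_m * math.cos(math.radians(bearing_deg))
    d_east = distance_m * math.sin(math.radians(bearing_deg))
    dlat = math.degrees(d_north / EARTH_RADIUS_M)
    dlon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lat_deg + dlat, lon_deg + dlon

# Example: correct shooting point B from reference point A, distance L3 and bearing eta.
# If eta is defined as the direction of A seen from B, the bearing from A to B is eta + 180.
# lat_B, lon_B = offset_latlon(31.94281, 118.793467, L3, eta + 180.0)
```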
(5) Image screening to complete the acquisition of the high-definition street view library: for real-time street view images whose longitude and latitude have been successfully resolved, calculate the sharpness parameter and perform content legality detection; package the qualified street view images together with their longitude and latitude information, enter them into the network server, and store them to build the high-definition street view library.
Considering that street view images captured by some terminals may be blurry or low-resolution because of software, hardware or shooting environment problems, the current method of evaluating the sharpness of a street view image is to obtain its grayscale picture: if the gray-level distribution is wide, low-gray points frequently lie adjacent or close to high-gray points, and the picture noise is small, the image sharpness is good; resolution checking requires the cloud APP to set a picture acceptance threshold and check the attributes of the uploaded street view image for screening.
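A minimal sketch of such a sharpness and resolution screen: it uses the grayscale standard deviation as a measure of gray-level spread, the mean absolute difference of adjacent pixels as a measure of how often low-gray points sit next to high-gray points, and an assumed minimum image size; all threshold values are illustrative assumptions.

```python
import cv2
import numpy as np

def is_sharp_enough(image_path, min_std=40.0, min_grad=8.0, min_size=(640, 360)):
    """Screen a street view image by grayscale spread, local contrast and resolution.
    All thresholds are illustrative assumptions."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False
    h, w = gray.shape
    if w < min_size[0] or h < min_size[1]:
        return False                      # resolution below the acceptance threshold
    spread = float(gray.std())            # wide gray-level distribution -> larger std
    # Mean absolute difference between horizontally adjacent pixels: large values mean
    # low-gray pixels frequently sit next to high-gray pixels (sharp edges).
    grad = float(np.abs(np.diff(gray.astype(np.int16), axis=1)).mean())
    return spread >= min_std and grad >= min_grad
```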
The main current method of street view image legality detection is to build a training set with a deep learning method based on convolutional neural networks and perform feature extraction and labeling, and then to extract features from the street view images uploaded to the cloud for image classification and legality detection.
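As a sketch of the cloud-side legality check, the code below runs a small convolutional classifier over an uploaded image and accepts it only when the predicted class is "legal"; the network architecture, class labels and input size are illustrative assumptions, and in practice the model would first be trained on the labeled street view training set described above.

```python
import torch
import torch.nn as nn
from torchvision import transforms
from PIL import Image

class LegalityNet(nn.Module):
    """Tiny CNN for a binary legal / illegal decision (architecture is an assumption)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # 0 = illegal, 1 = legal

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

PREPROCESS = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def is_content_legal(model, image_path):
    """Return True if the (trained) model classifies the uploaded image as legal."""
    model.eval()
    img = PREPROCESS(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        pred = model(img).argmax(dim=1).item()
    return pred == 1
```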
The above is only a preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. An accurate positioning method based on terminal shooting image matching, characterized by comprising the following steps:
1) Obtain a panoramic image near the user's current coordinates through the map APP of the terminal device as the reference road image, and capture a high-definition real-time street view of the scene ahead with the terminal camera module as the correction road image; record in real time the positioning longitude and latitude (X1, Y1) of the reference road image shooting location and the corresponding yaw angle β1, and record in real time the positioning longitude and latitude (X2, Y2) of the correction road image shooting location and the corresponding yaw angle β2.
2) Extract the feature points of the reference road image and the correction road image, perform feature point matching, and find the group of feature points with the highest matching accuracy; calculate the actual longitude and latitude of the correction road image shooting point with a monocular vision positioning algorithm and trigonometric relations, and use the result to correct (X2, Y2).
2. The accurate positioning method based on terminal shooting image matching according to claim 1, characterized in that in step 1), the terminal device is a camera-equipped device such as a mobile phone or a navigator.
3. The street view library acquisition method based on terminal shooting image matching according to claim 1, characterized in that in step 1), the reference road image is chosen as follows: call up the panoramic map of nearby coordinates with the map APP, connect the point to be corrected and the panoramic map reference point to form a reference line, and select from the panoramic map library the picture whose view direction is offset 30-90 degrees to the left or right of this reference line as the reference road image.
4. The street view library acquisition method based on terminal shooting image matching according to claim 1, characterized in that in step 2), the feature points of the correction road image and the reference road image are extracted with the SURF algorithm, and the matching of the feature points of the correction road image and the reference road image is realized with a FLANN matcher.
5. The street view library acquisition method based on terminal shooting image matching according to claim 1, characterized in that in step 2), the actual longitude and latitude of the correction road image shooting point are calculated with a monocular vision positioning algorithm and trigonometric relations; the specific steps include:
Given the position coordinates (X1, Y1) of the reference road image shooting location A, the distance L1 between point A and feature point C, the distance L2 between the correction road image shooting location B and feature point C, and the yaw angles β1 and β2, the bearing angle η of point A relative to point B is then computed as follows:
L1 and L2 are calculated separately using formula (2), where fx and fy are the logical focal lengths, d is the physical size of each pixel, and (iF, jF) and (io, jo) are respectively the two-dimensional pixel coordinates of feature point C and of the camera optical center in the image coordinate system;
According to trigonometric relations, the distance L3 between point A and point B is calculated as:
Once the distance L3, the bearing angle η and the longitude and latitude of the reference road image shooting location A are known, the longitude and latitude of the correction road image shooting location B can be found by coordinate system transformation.
6. A street view library acquisition method based on the accurate positioning method of any one of claims 1-5, characterized by comprising the following steps:
1) For real-time street view images whose longitude and latitude have been successfully resolved, calculate a sharpness parameter and perform content legality detection;
2) Package the qualified street view images together with their longitude and latitude information, enter them into the network server, and store them to build the high-definition street view library.
CN201811222795.5A 2018-10-19 2018-10-19 Accurate positioning and street view library acquisition method based on terminal shooting image matching Active CN109520500B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811222795.5A CN109520500B (en) 2018-10-19 2018-10-19 Accurate positioning and street view library acquisition method based on terminal shooting image matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811222795.5A CN109520500B (en) 2018-10-19 2018-10-19 Accurate positioning and street view library acquisition method based on terminal shooting image matching

Publications (2)

Publication Number Publication Date
CN109520500A true CN109520500A (en) 2019-03-26
CN109520500B CN109520500B (en) 2020-10-20

Family

ID=65772357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811222795.5A Active CN109520500B (en) 2018-10-19 2018-10-19 Accurate positioning and street view library acquisition method based on terminal shooting image matching

Country Status (1)

Country Link
CN (1) CN109520500B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103398717A (en) * 2013-08-22 2013-11-20 成都理想境界科技有限公司 Panoramic map database acquisition system and vision-based positioning and navigating method
CN104729485A (en) * 2015-03-03 2015-06-24 北京空间机电研究所 Visual positioning method based on vehicle-mounted panorama image and streetscape matching
CN106407315A (en) * 2016-08-30 2017-02-15 长安大学 Vehicle self-positioning method based on street view image database
CN107024980A (en) * 2016-10-26 2017-08-08 阿里巴巴集团控股有限公司 Customer location localization method and device based on augmented reality
CN107084727A (en) * 2017-04-12 2017-08-22 武汉理工大学 A kind of vision positioning system and method based on high-precision three-dimensional map

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
汪淼 (Wang Miao): "城市三维街景地理信息服务平台设计与应用" [Design and Application of an Urban 3D Street View Geographic Information Service Platform], 《测绘通报》 [Bulletin of Surveying and Mapping] *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110910497A (en) * 2019-11-15 2020-03-24 北京信息科技大学 Method and system for realizing augmented reality map
CN110910497B (en) * 2019-11-15 2024-04-19 北京信息科技大学 Method and system for realizing augmented reality map
CN113739797A (en) * 2020-05-31 2021-12-03 华为技术有限公司 Visual positioning method and device
CN112100521A (en) * 2020-09-11 2020-12-18 广州宸祺出行科技有限公司 Method and system for identifying and positioning street view and acquiring panoramic picture
CN112100521B (en) * 2020-09-11 2023-12-22 广州宸祺出行科技有限公司 Method and system for identifying, positioning and obtaining panoramic picture through street view
CN112398526A (en) * 2020-10-30 2021-02-23 南京凯瑞得信息科技有限公司 Method for generating satellite spot wave beam based on Cesium simulation
CN113283285A (en) * 2021-03-19 2021-08-20 南京四维向量科技有限公司 Method for accurately positioning address based on image recognition technology
CN113188439A (en) * 2021-04-01 2021-07-30 深圳市磐锋精密技术有限公司 Internet-based automatic positioning method for mobile phone camera shooting
CN113008252B (en) * 2021-04-15 2023-08-22 东莞市异领电子有限公司 High-precision navigation device and navigation method based on panoramic photo
CN113008252A (en) * 2021-04-15 2021-06-22 西华大学 High-precision navigation device and navigation method based on panoramic photo
CN113532394A (en) * 2021-05-28 2021-10-22 昆山市水利测绘有限公司 Hydraulic engineering surveying and mapping method
CN114494376A (en) * 2022-01-29 2022-05-13 山西华瑞鑫信息技术股份有限公司 Mirror image registration method
CN114860976B (en) * 2022-04-29 2023-05-05 长沙公交智慧大数据科技有限公司 Image data query method and system based on big data
CN114860976A (en) * 2022-04-29 2022-08-05 南通智慧交通科技有限公司 Image data query method and system based on big data
CN115620154B (en) * 2022-12-19 2023-03-07 江苏星湖科技有限公司 Panoramic map superposition replacement method and system

Also Published As

Publication number Publication date
CN109520500B (en) 2020-10-20

Similar Documents

Publication Publication Date Title
CN109520500A (en) Accurate positioning and street view library acquisition method based on terminal shooting image matching
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN106971403B (en) Point cloud image processing method and device
CN112184890B (en) Accurate positioning method of camera applied to electronic map and processing terminal
CN103703758B (en) mobile augmented reality system
TWI483215B (en) Augmenting image data based on related 3d point cloud data
CN109523471B (en) Method, system and device for converting ground coordinates and wide-angle camera picture coordinates
CN106878687A (en) A kind of vehicle environment identifying system and omni-directional visual module based on multisensor
JP2015084229A (en) Camera pose determination method and actual environment object recognition method
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
CN110858414A (en) Image processing method and device, readable storage medium and augmented reality system
CN206611521U (en) A kind of vehicle environment identifying system and omni-directional visual module based on multisensor
CN107843251A (en) The position and orientation estimation method of mobile robot
CN105335977B (en) The localization method of camera system and target object
CN115423863B (en) Camera pose estimation method and device and computer readable storage medium
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN114943773A (en) Camera calibration method, device, equipment and storage medium
CN109712249B (en) Geographic element augmented reality method and device
CN116858215B (en) AR navigation map generation method and device
CN109978997A (en) A kind of transmission line of electricity three-dimensional modeling method and system based on inclination image
CN109636850A (en) Visible light localization method in faced chamber under intelligent lamp
CN115880448B (en) Three-dimensional measurement method and device based on binocular imaging
CN105320725B (en) Obtain the method and device of the geographic object in acquisition point image
CN109784189A (en) Video satellite remote sensing images scape based on deep learning matches method and device thereof
CN114882106A (en) Pose determination method and device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant