CN109520500B - Accurate positioning and street view library acquisition method based on terminal shooting image matching - Google Patents

Accurate positioning and street view library acquisition method based on terminal shooting image matching

Info

Publication number
CN109520500B
Authority
CN
China
Prior art keywords
road map
map
point
street view
longitude
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811222795.5A
Other languages
Chinese (zh)
Other versions
CN109520500A (en)
Inventor
胡强
屈蔷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201811222795.5A priority Critical patent/CN109520500B/en
Publication of CN109520500A publication Critical patent/CN109520500A/en
Application granted granted Critical
Publication of CN109520500B publication Critical patent/CN109520500B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations

Abstract

The invention provides a precise positioning and street view library acquisition method based on matching of terminal-shot images, which comprises: obtaining a panoramic image near the user's current coordinates as the reference road map through the map APP of a terminal device such as a mobile phone or navigator, and capturing a high-definition real-time street view of the scene ahead with the terminal camera module as the correction road map; calculating the user's actual coordinates by feature point extraction and matching together with a monocular vision ranging algorithm to correct the original coordinates; and adding longitude and latitude labels to successfully corrected road maps and entering them into a street view library. The invention performs positioning against a high-definition street view library built on the basis of the existing panoramic map library of a map APP. Applied in an internet era of ever-growing demand for location-based services, the invention addresses the panoramic map library's small coverage, untimely updates, and high acquisition cost, while giving users more selectable positioning points and sharper pictures, greatly improving visual positioning accuracy.

Description

Accurate positioning and street view library acquisition method based on terminal shooting image matching
Technical Field
The invention relates to the field of image processing and accurate positioning, in particular to the technical field of outdoor street view acquisition and pedestrian navigation positioning.
Background
In recent years, the development of map APPs has brought great convenience to the travel of pedestrians and vehicles, and map developers have introduced panoramic map technology that reflects the real environment of a destination. However, acquiring a panoramic map requires professional imaging equipment and consumes considerable manpower and material resources. Moreover, panoramic imagery mainly covers trunk roads, and much of it was shot a year or more ago, so environmental data coverage is narrow and updates are untimely, which inconveniences the travel of pedestrians and vehicles.
Methods currently applied to positioning pedestrians and vehicles include GPS positioning, base station positioning, inertial navigation positioning, and the like. Civil GPS positioning basically meets daily activity requirements, but satellite signals are easily blocked in urban areas dense with high buildings, making positioning inaccurate; base station positioning accuracy depends on the number of base stations nearby, so accurate positioning is difficult to achieve in most areas; and inertial navigation devices accumulate integration error over time, so they are difficult to use alone.
With the development of computers and electronic devices, a more effective method for high-definition street view library acquisition and accurate positioning will serve society.
Disclosure of Invention
Purpose of the invention: the invention aims to provide a street view library acquisition and accurate positioning method based on matching of terminal-shot images, solving the problems that the existing panoramic map library of a map APP has small coverage, untimely updates, and high acquisition cost. Meanwhile, visual positioning against the newly created high-definition street view library reduces positioning error, bringing great convenience to the travel of pedestrians and vehicles.
In order to achieve the purpose, the invention adopts the following technical scheme:
an accurate positioning method based on terminal shooting image matching comprises the following steps:
1) obtaining a panoramic image near the user's current coordinates through the map APP of a terminal device as the reference road map, obtaining a high-definition real-time street view of the scene ahead with the terminal camera module as the correction road map, and recording in real time the positioning longitude and latitude (X1, Y1) of the reference road map shooting location with the corresponding yaw angle β1, and the positioning longitude and latitude (X2, Y2) of the correction road map shooting location with the corresponding yaw angle β2;
2) extracting feature points from the reference road map and the correction road map and matching them, finding the group of feature points with the highest matching accuracy, and solving the actual longitude and latitude of the correction road map shooting point by a monocular vision positioning algorithm and trigonometric relations to correct the pair (X2, Y2).
Preferably, in step 1), the terminal device is a device with a camera, such as a mobile phone or a navigator.
Preferably, in step 1), the reference road map is selected as follows: a panoramic map near the coordinates is called up with the map APP, the point to be corrected is connected with the reference point of the panoramic map to form a reference line, and a picture in the panoramic map library whose view deviates 30-90 degrees to the left or right of the reference line is selected as the reference road map.
Preferably, in step 2), the SURF (Speeded Up Robust Features) algorithm is adopted to extract the feature points of the correction road map and the reference road map, and a FLANN matcher is adopted to match the feature points of the two maps.
Preferably, in step 2), the actual longitude and latitude of the correction road map shooting point are calculated by the monocular vision positioning algorithm and trigonometric relations through the following specific steps:
Given the position coordinates (X1, Y1) of the reference road map shooting location A, the distance L1 from point A to feature point C, the distance L2 from the correction road map shooting location B to feature point C, and the yaw angles β1 and β2, the direction angle η of point A relative to point B is calculated as:
(Formula (1) appears as an image in the original document; it computes the direction angle η from L1, L2, β1 and β2.)
L1 and L2 are respectively calculated using formula (2):
(Formula (2) appears as an image in the original document; it computes L1 and L2 from the camera intrinsics and the pixel coordinates defined below.)
where fx and fy are the logical focal lengths, d is the physical size of each pixel, and (iF, jF) and (io, jo) are the two-dimensional pixel coordinates, in the image coordinate system, of feature point C and of the camera optical center respectively;
the distance L3 between point A and point B follows from the trigonometric relations; the calculation formula is:
L3 = √(L1² + L2² − 2·L1·L2·cos(β2 − β1))   (3)
Given the distance L3, the direction angle η, and the longitude and latitude information of the reference road map shooting location A, the longitude and latitude position information of the correction road map shooting location B can be obtained by coordinate system transformation.
The street view library acquisition method based on the accurate positioning method comprises the following steps:
1) performing sharpness parameter calculation and content validity detection on the real-time street view images whose longitude and latitude were successfully calculated;
2) packaging the qualifying street view images together with their corresponding longitude and latitude information and storing them on the network server side to build the high-definition street view library.
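A minimal sketch of this packaging and storage step in Python (the server endpoint and field names are illustrative assumptions, not part of the disclosure):

```python
import json

import requests  # assumed HTTP client; any upload mechanism would do

def upload_streetview(image_path: str, lon: float, lat: float,
                      server: str = "https://example-cloud/api/streetview"):
    """Package a qualifying street view image with its longitude/latitude
    tag and store it on the network server side (endpoint is hypothetical)."""
    meta = {"longitude": lon, "latitude": lat}
    with open(image_path, "rb") as f:
        resp = requests.post(server,
                             files={"image": f},
                             data={"meta": json.dumps(meta)})
    resp.raise_for_status()
```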
Advantageous effects: the invention performs positioning against a high-definition street view library built on the basis of the existing panoramic map library of a map APP, so the positioning area can cover residential streets and other areas not reached by traditional panoramic imagery. Applied in an internet era of ever-growing demand for location-based services, the invention addresses the panoramic map library's small coverage, untimely updates, and high acquisition cost, while giving users more selectable positioning points and sharper pictures, greatly improving visual positioning accuracy.
Drawings
FIG. 1 is a flow chart of a street view library acquisition method based on terminal shot image matching according to the present invention;
fig. 2 is a panoramic map data source picture of the terminal map APP in the embodiment;
FIG. 3 is a schematic diagram of the SURF feature extraction algorithm according to the present embodiment;
fig. 4 is a diagram of a matching result of feature extraction between a picture shot by the terminal and a panorama matching picture in the embodiment;
FIG. 5 is a schematic diagram of the target straight-line ranging model according to the present embodiment;
fig. 6 is a schematic diagram illustrating the calculation of the latitude and longitude location information of the user according to the embodiment.
Detailed Description
The invention is further explained below with reference to the drawings.
The invention uses terminals with high domestic penetration, such as mobile phones and navigators, as the acquisition equipment for the high-definition street view library; compared with a panoramic map library that must be collected with multiple vehicles and multiple digital cameras, this improves acquisition efficiency and reduces acquisition cost.
Because the invention uses a visual positioning technique based on image feature point extraction and matching, with the panoramic map library as the positioning base library, and the data in the existing panoramic map library are limited to trunk roads, the invention is only suitable for outdoor positioning and navigation.
A panoramic map, also called a panoramic surround map, presents photographs as a three-dimensional rendering of the real scene that the viewer can drag to browse from different angles. The scene is shot from multiple angles all around with a digital camera, the images are stitched together on a computer afterwards, and a playback program is loaded to complete the three-dimensional virtual display.
The invention provides a high-definition street view library acquisition method based on matching of terminal-shot images. The panoramic map data source picture is shown in FIG. 2; the selected experimental picture is located at the west gate of Nanjing University of Aeronautics and Astronautics, the image resolution is 756 × 434, the picked-up longitude and latitude coordinates of the panoramic map are 118.793467 and 31.94281, the pitch angle is 6.313968561389864, the yaw angle is 73.5777451151586, and the zoom level is 3.
The data source picture has a shooting angle that is basically horizontal, large color differences, and complex environmental objects, which effectively tests the efficiency of the feature extraction algorithm.
In this embodiment, as shown in fig. 1, the street view library acquisition method based on matching of terminal-shot images includes the following steps:
(1) Acquiring the reference road map and correction road map data using the panoramic map library and the terminal device: a panoramic map near the coordinates is called up in the APP, the point to be positioned is connected with the reference point of the panoramic map to form a reference line, an image whose view deviates 30-90 degrees to the left or right of the reference line is selected as the reference road map, and the positioning longitude and latitude (X1, Y1) of the reference road map and the corresponding yaw angle β1 are recorded in real time.
Meanwhile, the terminal's cellular mobile network and location service switch must be turned on, and the positioning longitude and latitude (X2, Y2) at the shooting moment and the yaw angle β2 output by the magnetic compass and gyroscope are recorded in real time.
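A minimal sketch of this reference road map selection in Python (the panorama metadata layout and function names are illustrative assumptions; the reference-line bearing uses the standard initial-bearing formula):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Bearing, in degrees clockwise from north, from point 1 to point 2."""
    d_lon = math.radians(lon2 - lon1)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    x = math.sin(d_lon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(d_lon)
    return math.degrees(math.atan2(x, y)) % 360.0

def select_reference_roadmap(panoramas, user_lat, user_lon):
    """Pick a panorama whose view direction deviates 30-90 degrees, left or
    right, from the reference line joining the point to be positioned and
    the panorama's reference point."""
    for pano in panoramas:  # each: {"lat": ..., "lon": ..., "yaw": ..., "image": ...}
        ref_line = bearing_deg(user_lat, user_lon, pano["lat"], pano["lon"])
        deviation = abs((pano["yaw"] - ref_line + 180.0) % 360.0 - 180.0)
        if 30.0 <= deviation <= 90.0:
            return pano
    return None
```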
(2) Packaging the reference road map and the correction road map and uploading them to the internet cloud: since the terminal processor cannot perform image matching and user coordinate resolution in real time, these operations are carried out on the server through the cloud upload function of the map APP.
(3) Image matching with the SURF algorithm and the FLANN matcher: first, all feature points of the shot image and the panorama matching image are extracted with the SURF algorithm; the implementation scheme is shown in FIG. 3.
Step 1 constructs the Hessian matrix and generates all interest points for feature extraction. For an image f(x, y), the Hessian matrix is:
H(f(x, y)) = | ∂²f/∂x²   ∂²f/∂x∂y |
             | ∂²f/∂x∂y  ∂²f/∂y²  |
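A minimal sketch of this determinant-of-Hessian response in Python, substituting finite-difference second derivatives for SURF's integral-image box filters (a simplification for illustration only):

```python
import cv2
import numpy as np

def hessian_response(gray: np.ndarray, sigma: float = 1.2) -> np.ndarray:
    """Discriminant (determinant) of the Hessian of f(x, y) at every pixel."""
    g = cv2.GaussianBlur(gray.astype(np.float64), (0, 0), sigmaX=sigma)
    dxx = cv2.Sobel(g, cv2.CV_64F, 2, 0, ksize=3)  # d2f/dx2
    dyy = cv2.Sobel(g, cv2.CV_64F, 0, 2, ksize=3)  # d2f/dy2
    dxy = cv2.Sobel(g, cv2.CV_64F, 1, 1, ksize=3)  # d2f/dxdy
    return dxx * dyy - dxy ** 2  # local maxima mark candidate feature points
```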
When the discriminant of the Hessian matrix attains a local maximum, the current point is judged to be brighter or darker than the other points in its surrounding neighborhood, which determines a candidate feature point position.
Step 2 constructs the scale space. Because features must be scale invariant, a scale space is built to make feature point search convenient and computation efficient. Rectangular (box) filters replace Gaussian filters: the generated images keep the original size across octaves while the filter templates used in different octaves grow progressively larger, so filtering the original image with filters of different sizes builds scale spaces of different scales; integral images are used as an intermediary to accelerate the convolution.
Step 3 locates the feature points. Each pixel processed by the Hessian matrix is compared with the 26 points in its two-dimensional image neighborhood and adjacent scale layers to preliminarily locate feature points; weak-energy and mislocated points are filtered out, leaving the final stable feature points.
Step 4 assigns the main direction of each feature point. Haar wavelet responses are accumulated in a circular neighborhood of the feature point, and the 60-degree sector with the largest sum of horizontal and vertical Haar wavelet responses is taken as the feature point's main direction.
Step 5 generates the feature point descriptor. A 4 × 4 grid of rectangular regions is taken around the feature point, and the horizontal and vertical Haar wavelet responses of the 25 pixels in each region are accumulated into a feature vector, which serves as the descriptor carrying the feature point's useful information.
After the feature points of the street view image and the panorama matching image have been extracted, the FLANN matcher builds a set of feature vectors, then compares and screens them to obtain a set of probably correct matches; the threshold is changed continuously to eliminate mismatches until the group of matching points with the highest accuracy is obtained. The feature point extraction and matching results are shown in FIG. 4.
The extraction result shows more than a thousand feature points in each of the two images. The degree of matching is then determined from the Euclidean distance between two feature points, and mismatches are eliminated as follows:
calculate the maximum value max_dist and minimum value min_dist of the Euclidean distances between feature points, set a threshold h, and accept a detected pair as a successful match if its Euclidean distance is smaller than h × min_dist;
observe the matching result and keep adjusting the threshold h until the group of matching points with the highest matching accuracy is obtained.
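A sketch of this extraction, matching, and screening procedure with OpenCV (SURF is provided by the opencv-contrib `xfeatures2d` module and requires a build with nonfree algorithms enabled; the starting threshold h is an assumed value to be tuned as described above):

```python
import cv2

def match_surf_flann(img_ref, img_cor, h=3.0):
    """SURF feature extraction plus FLANN matching, keeping only matches
    whose descriptor distance is below h * min_dist."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_ref, des_ref = surf.detectAndCompute(img_ref, None)
    kp_cor, des_cor = surf.detectAndCompute(img_cor, None)

    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),  # KD-tree index
                                  dict(checks=50))
    matches = flann.match(des_ref, des_cor)

    dists = [m.distance for m in matches]
    min_dist = min(dists)
    good = [m for m in matches if m.distance < h * min_dist]
    return kp_ref, kp_cor, good

# usage: load both images as grayscale, then lower h until only the most
# reliable group of matching points remains.
# ref = cv2.imread("panorama_match.jpg", cv2.IMREAD_GRAYSCALE)
# cor = cv2.imread("terminal_shot.jpg", cv2.IMREAD_GRAYSCALE)
# kp_ref, kp_cor, good = match_surf_flann(ref, cor, h=2.0)
```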
(4) Monocular vision positioning resolves the user's coordinates and corrects the original coordinates: the principle of the target straight-line ranging model is shown in FIG. 5.
the distance from the camera to the image is converted into the image coordinates of the feature points which are found in the image and have the highest matching accuracy, the feature point coordinates are extracted through an image processing program, and a point-to-point linear distance measurement model is built according to the feature point coordinates and the optical center coordinates to calculate the complete distance from the camera to the measured object.
Assuming the optical axis is horizontal, for an arbitrary spatial point P = (Xw, Yw, 0), the position (i, j) of the corresponding image point p is obtained from the image-coordinate relations as follows:
(The projection formula appears as an image in the original document; it maps P to the pixel position (i, j) using the quantities defined below.)
where fx and fy are the logical focal lengths, d is the physical size of each pixel, and (cx, cy) is the origin of the image coordinate system;
according to the relevant mathematical knowledge, the area mapping relation between the object surface and the target image is known as follows:
(The area mapping formula appears as an image in the original document; it relates the object-surface area to the target's image area S2.)
where S2 is the image area of the target;
the target object distance is expressed as:
(The object distance formula appears as an image in the original document; it expresses the target distance in terms of the quantities defined below.)
where Lw is the distance from feature point P to the origin Ow in the world coordinate system, and (iF, jF) and (io, jo) are the image-coordinate-system coordinates of the feature point and the optical center;
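A minimal sketch of this ranging step, assuming the image-only formula above reduces to the standard pinhole similar-triangles relation L = f·Lw / (d·pixel offset); the exact form in the original may differ:

```python
import math

def monocular_distance(Lw, f, d, i_f, j_f, i_o, j_o):
    """Estimate the camera-to-feature distance L from a known world-frame
    offset Lw (feature point P to origin Ow), focal length f and pixel
    size d (same physical units), and the pixel coordinates of the
    feature point (i_f, j_f) and optical center (i_o, j_o)."""
    pixel_offset = math.hypot(i_f - i_o, j_f - j_o)
    return f * Lw / (d * pixel_offset)
```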
After the distances L1 and L2 from the target feature point to the panorama matching image shooting point and to the street view image shooting point are calculated respectively, the longitude and latitude of the shooting point are solved as shown in FIG. 6:
Given the position coordinates (X1, Y1) of the reference road map shooting location A, the distance L1 from point A to feature point C, the distance L2 from the correction road map shooting location B to feature point C, and the yaw angles β1 and β2, the direction angle η of point A relative to point B is calculated as:
(The direction angle formula, formula (1) above, appears as an image in the original document.)
the distance L3 between point A and point B then follows from the trigonometric relations:
L3 = √(L1² + L2² − 2·L1·L2·cos(β2 − β1))
Given the distance L3, the direction angle η, and the longitude and latitude of the reference road map shooting location A, the longitude and latitude position of the correction road map shooting location B is obtained by coordinate system transformation.
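A sketch combining the relations above (the arctan form of η is a geometric reconstruction, since the original formula is an image) with a local equirectangular approximation for the final coordinate transformation:

```python
import math

def solve_corrected_position(x1, y1, l1, l2, beta1, beta2):
    """Longitude/latitude of correction shooting point B from point A's fix.

    (x1, y1): longitude/latitude of reference shooting location A;
    l1, l2:   distances A-C and B-C in metres;
    beta1/2:  yaw angles (bearings from A and B to feature point C), degrees.
    """
    b1, b2 = math.radians(beta1), math.radians(beta2)
    l3 = math.sqrt(l1**2 + l2**2 - 2.0*l1*l2*math.cos(b2 - b1))   # |AB|, law of cosines
    eta = math.atan2(l2*math.sin(b2) - l1*math.sin(b1),
                     l2*math.cos(b2) - l1*math.cos(b1))           # direction of A from B
    d_north = -l3 * math.cos(eta)   # B = A - L3 * (sin eta, cos eta), in metres
    d_east = -l3 * math.sin(eta)
    earth_r = 6378137.0             # metres; equirectangular local approximation
    d_lat = math.degrees(d_north / earth_r)
    d_lon = math.degrees(d_east / (earth_r * math.cos(math.radians(y1))))
    return x1 + d_lon, y1 + d_lat
```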
(5) Image screening and completion of the high-definition street view library acquisition: sharpness parameter calculation and content validity detection are performed on the real-time street view images whose longitude and latitude were successfully resolved, and qualifying street view images are packaged together with their longitude and latitude information and stored on the network server side, building the high-definition street view library.
Considering that street view images shot by some terminals are blurry or low-resolution because of software, hardware, or shooting-environment problems, the usual way to evaluate street view image sharpness is to examine the gray-level distribution of the image: if the gray levels are widely distributed, low-gray points adjoin or approach high-gray points, and image noise is low, the image sharpness is good. For resolution detection, the APP cloud reads the uploaded street view image's resolution attribute and screens it against a threshold.
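A minimal sketch of such a sharpness screen, using variance of the Laplacian as a common stand-in for the gray-level-distribution test described above (the threshold is an assumed value to be tuned on real data):

```python
import cv2

def is_sharp_enough(image_path: str, threshold: float = 100.0) -> bool:
    """Admit a street view image only if its Laplacian variance, a simple
    sharpness proxy, meets the screening threshold."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= threshold
```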
At present, the main method for detecting the validity of street view images is deep learning based on convolutional neural networks: a training set is built for feature extraction and label classification, and features are then extracted from the street view images uploaded to the cloud for image classification and validity detection.
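A sketch of such a convolutional validity classifier (the network choice, checkpoint file, and class convention are illustrative assumptions, not the patent's training setup):

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical two-class model (invalid content vs. valid street view),
# assumed to have been fine-tuned on a labelled training set beforehand.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("streetview_validity.pth"))  # assumed checkpoint
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def is_valid_streetview(path: str) -> bool:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(x).argmax(1).item() == 1  # class 1 = valid street view
```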
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (5)

1. An accurate positioning method based on terminal shooting image matching is characterized by comprising the following steps:
1) obtaining a panoramic image near the user's current coordinates through the map APP of a terminal device as the reference road map, obtaining a high-definition real-time street view of the scene ahead with the terminal camera module as the correction road map, and recording in real time the positioning longitude and latitude (X1, Y1) of the reference road map shooting location with the corresponding yaw angle β1, and the positioning longitude and latitude (X2, Y2) of the correction road map shooting location with the corresponding yaw angle β2;
2) extracting feature points from the reference road map and the correction road map and matching them, finding the group of feature points with the highest matching accuracy, and solving the actual longitude and latitude of the correction road map shooting point by a monocular vision positioning algorithm and trigonometric relations to correct the pair (X2, Y2);
in step 2), the actual longitude and latitude of the correction road map shooting point are calculated by the monocular vision positioning algorithm and trigonometric relations through the following specific steps:
Given the position coordinates (X1, Y1) of the reference road map shooting location A, the distance L1 from point A to feature point C, the distance L2 from the correction road map shooting location B to feature point C, and the yaw angles β1 and β2, the direction angle η of point A relative to point B is calculated as:
(Formula (1) appears as an image in the original document; it computes the direction angle η from L1, L2, β1 and β2.)
L1 and L2 are respectively calculated using formula (2):
(Formula (2) appears as an image in the original document; it computes L1 and L2 from the camera intrinsics and the pixel coordinates defined below.)
where fx and fy are the logical focal lengths, d is the physical size of each pixel, and (iF, jF) and (io, jo) are the two-dimensional pixel coordinates, in the image coordinate system, of feature point C and of the camera optical center respectively;
the distance L3 between point A and point B follows from the trigonometric relations; the calculation formula is:
L3 = √(L1² + L2² − 2·L1·L2·cos(β2 − β1))   (3)
Given the distance L3, the direction angle η, and the longitude and latitude information of the reference road map shooting location A, the longitude and latitude position information of the correction road map shooting location B can be obtained by coordinate system transformation.
2. The accurate positioning method based on terminal shot image matching as claimed in claim 1, wherein in step 1), the terminal device is a mobile phone or a navigator.
3. The accurate positioning method based on terminal shot image matching according to claim 1, wherein in step 1), the reference road map is selected by a method comprising the following steps: calling a panoramic map near the coordinates by using a map APP, connecting a point to be corrected and a reference point of the panoramic map as a reference line, and selecting a picture which is in the panoramic map library and deviates from the reference line to the left or right by 30-90 degrees as a reference road map.
4. The accurate positioning method based on terminal shot image matching as claimed in claim 1, wherein in step 2), the SURF algorithm is adopted to extract the feature points of the correction road map and the reference road map, and the FLANN matcher is adopted to realize the matching of the feature points of the correction road map and the reference road map.
5. The street view library acquisition method based on the accurate positioning method of any one of claims 1 to 4, characterized by comprising the following steps:
1) performing sharpness parameter calculation and content validity detection on the real-time street view images whose longitude and latitude were successfully calculated;
2) packaging the qualifying street view images together with their corresponding longitude and latitude information and storing them on the network server side to build the high-definition street view library.
CN201811222795.5A 2018-10-19 2018-10-19 Accurate positioning and street view library acquisition method based on terminal shooting image matching Active CN109520500B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811222795.5A CN109520500B (en) 2018-10-19 2018-10-19 Accurate positioning and street view library acquisition method based on terminal shooting image matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811222795.5A CN109520500B (en) 2018-10-19 2018-10-19 Accurate positioning and street view library acquisition method based on terminal shooting image matching

Publications (2)

Publication Number Publication Date
CN109520500A CN109520500A (en) 2019-03-26
CN109520500B (en) 2020-10-20

Family

ID=65772357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811222795.5A Active CN109520500B (en) 2018-10-19 2018-10-19 Accurate positioning and street view library acquisition method based on terminal shooting image matching

Country Status (1)

Country Link
CN (1) CN109520500B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113739797A (en) * 2020-05-31 2021-12-03 华为技术有限公司 Visual positioning method and device
CN112100521B (en) * 2020-09-11 2023-12-22 广州宸祺出行科技有限公司 Method and system for identifying, positioning and obtaining panoramic picture through street view
CN112398526A (en) * 2020-10-30 2021-02-23 南京凯瑞得信息科技有限公司 Method for generating satellite spot wave beam based on Cesium simulation
CN113283285A (en) * 2021-03-19 2021-08-20 南京四维向量科技有限公司 Method for accurately positioning address based on image recognition technology
CN113188439B (en) * 2021-04-01 2022-08-12 深圳市磐锋精密技术有限公司 Internet-based automatic positioning method for mobile phone camera shooting
CN113008252B (en) * 2021-04-15 2023-08-22 东莞市异领电子有限公司 High-precision navigation device and navigation method based on panoramic photo
CN113532394A (en) * 2021-05-28 2021-10-22 昆山市水利测绘有限公司 Hydraulic engineering surveying and mapping method
CN114494376B (en) * 2022-01-29 2023-06-30 山西华瑞鑫信息技术股份有限公司 Mirror image registration method
CN114860976B (en) * 2022-04-29 2023-05-05 长沙公交智慧大数据科技有限公司 Image data query method and system based on big data
CN115620154B (en) * 2022-12-19 2023-03-07 江苏星湖科技有限公司 Panoramic map superposition replacement method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103398717A (en) * 2013-08-22 2013-11-20 成都理想境界科技有限公司 Panoramic map database acquisition system and vision-based positioning and navigating method
CN104729485A (en) * 2015-03-03 2015-06-24 北京空间机电研究所 Visual positioning method based on vehicle-mounted panorama image and streetscape matching
CN106407315A (en) * 2016-08-30 2017-02-15 长安大学 Vehicle self-positioning method based on street view image database
CN107024980A (en) * 2016-10-26 2017-08-08 阿里巴巴集团控股有限公司 Customer location localization method and device based on augmented reality
CN107084727A (en) * 2017-04-12 2017-08-22 武汉理工大学 A kind of vision positioning system and method based on high-precision three-dimensional map

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Application of an Urban 3D Street View Geographic Information Service Platform; Wang Miao; Bulletin of Surveying and Mapping (测绘通报); 2016-12-31 (No. 12); pp. 108-110 *

Also Published As

Publication number Publication date
CN109520500A (en) 2019-03-26

Similar Documents

Publication Publication Date Title
CN109520500B (en) Accurate positioning and street view library acquisition method based on terminal shooting image matching
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN107133325B (en) Internet photo geographic space positioning method based on street view map
JP5980295B2 (en) Camera posture determination method and real environment object recognition method
CN111028358B (en) Indoor environment augmented reality display method and device and terminal equipment
CN112184890B (en) Accurate positioning method of camera applied to electronic map and processing terminal
US10089762B2 (en) Methods for navigating through a set of images
CN109596121B (en) Automatic target detection and space positioning method for mobile station
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN112750203A (en) Model reconstruction method, device, equipment and storage medium
CN112836698A (en) Positioning method, positioning device, storage medium and electronic equipment
WO2023284358A1 (en) Camera calibration method and apparatus, electronic device, and storage medium
CN110636248B (en) Target tracking method and device
CN113673288B (en) Idle parking space detection method and device, computer equipment and storage medium
KR102249381B1 (en) System for generating spatial information of mobile device using 3D image information and method therefor
CN113902047B (en) Image element matching method, device, equipment and storage medium
CN109784189A (en) Video satellite remote sensing images scape based on deep learning matches method and device thereof
CN112818866B (en) Vehicle positioning method and device and electronic equipment
Ayadi et al. A skyline-based approach for mobile augmented reality
CN112767477A (en) Positioning method, positioning device, storage medium and electronic equipment
JP2023523364A (en) Visual positioning method, device, equipment and readable storage medium
CN111220156B (en) Navigation method based on city live-action
KR102249380B1 (en) System for generating spatial information of CCTV device using reference image information
CN111758118B (en) Visual positioning method, device, equipment and readable storage medium
Roozenbeek Dutch Open Topographic Data Sets as Georeferenced Markers in Augmented Reality

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant