CN116468878A - AR equipment positioning method based on positioning map - Google Patents

AR equipment positioning method based on positioning map

Info

Publication number
CN116468878A
Authority
CN
China
Prior art keywords
points
matching
pictures
equipment
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310472310.2A
Other languages
Chinese (zh)
Other versions
CN116468878B (en)
Inventor
李春霞
刘坚
陈大清
胡相才
刘宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lanstar Technology Co., Ltd.
Original Assignee
Shenzhen Lanstar Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Lanstar Technology Co., Ltd.
Priority to CN202310472310.2A
Priority claimed from CN202310472310.2A
Publication of CN116468878A
Application granted
Publication of CN116468878B
Legal status: Active
Anticipated expiration

Classifications

    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G06F 16/29 Geographical information databases
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/006 Mixed reality
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an AR device positioning method based on a positioning map, which comprises the following steps: collect scene pictures with GPS information, extract feature points, and match feature-point descriptors; solve the relative pose (R, T) of each match, run SfM, and recover the poses of all pictures; for each picture, select the 100 top-ranked feature points and store the SIFT descriptors of the corresponding three-dimensional points; divide the scene and store each 50 m x 50 m area as one positioning-map block; the AR device collects real-time pictures, performs a fast coarse retrieval, and matches feature points between the AR picture and the positioning-map images; select the 10 best-matching pictures, in which descriptors of the AR device's picture correspond to three-dimensional points of the positioning map, forming PnP problems; solve each PnP problem to obtain the pose of the AR device's query picture; after the PnP problems of all 10 pictures are solved, obtain the pose of the AR device's query image and select the optimal pose according to the RANSAC principle. The method offers advantages such as accurate real-time positioning.

Description

AR equipment positioning method based on positioning map
Technical Field
The invention relates to the field of AR devices, and in particular to an AR device positioning method based on a positioning map.
Background
At present, AR devices are very popular, and one of their main functions is real-time positioning, on top of which various augmented reality and virtual reality operations are carried out. Current real-time positioning typically determines the three attitude angles (roll, pitch, yaw) with an IMU and the specific geographic location with a GPS unit. However, AR devices are usually not fitted with particularly expensive sensors, so the positions these sensors ultimately output are inaccurate.
Disclosure of Invention
The invention provides an AR device positioning method based on a positioning map. By reconstructing a positioning map from SIFT descriptors in advance, it needs neither an IMU nor a GPS unit: the AR device's video sensor alone suffices for accurate real-time positioning over a large area, which solves the prior-art problem that the position output by an AR device's sensors is inaccurate.
Referring to FIG. 1, an AR device positioning method based on a positioning map according to an embodiment of the present application includes the following steps:
collecting scene pictures with GPS information, extracting feature points, and matching feature-point descriptors;
solving the relative pose (R, T) of each match, running SfM, and recovering the poses of all pictures;
selecting, for each picture, the 100 top-ranked feature points and storing the SIFT descriptors of the corresponding three-dimensional points;
dividing the scene and storing each 50 m x 50 m area as one positioning-map block;
collecting real-time pictures with the AR device, performing a fast coarse retrieval, and matching feature points between the AR picture and the positioning-map images;
selecting the 10 best-matching pictures, in which descriptors of the AR device's picture correspond to three-dimensional points of the positioning map, thereby forming PnP problems;
solving each PnP problem to obtain the pose of the AR device's query picture;
after the PnP problems of all 10 pictures are solved, obtaining the pose of the AR device's query image and selecting the optimal pose according to the RANSAC principle.
Preferably, in the SIFT descriptor extraction process, key points are first extracted from the image using a DoG pyramid; key points are found where the image changes. A series of images is obtained by Gaussian filtering at different scales, and a DoG image is obtained by subtracting two Gaussian-filtered images of adjacent scales;
then, for each key point, a circular region is constructed using the point's scale and orientation; the pixels in the circular region have their mean subtracted and are normalized; the circular region is divided into several sub-regions, and an orientation histogram is computed within each sub-region;
finally, the orientation histograms of all sub-regions are concatenated to form a vector, which is the SIFT descriptor.
Preferably, the SIFT feature descriptor matching process includes the following steps:
computing a SIFT feature descriptor for each key point, where the descriptor is a 128-dimensional vector containing the feature information and gradient orientation information of the area around the key point;
for each key point in the first image, computing its similarity to all key points of the second image and finding the several key points that match it best; these best-matching key points are the candidate matching points;
for each candidate matching point, computing its similarity to the other candidate matching points in the first image, and screening according to the similarity.
Preferably, the screening method comprises: for each matching point, selecting the point with the highest similarity as its unique match; and for each matching point, computing the ratio of its similarity to that of the second-best matching point; if the ratio is smaller than a certain threshold, the matching point is considered reliable, otherwise it is discarded. The matching points that remain are the final SIFT feature descriptor matching result.
Preferably, multi-view SfM is a technique used mainly for three-dimensional reconstruction; it computes the three-dimensional shape and position of an object from multiple images, i.e., reconstructs a three-dimensional model, and comprises the following steps:
feature extraction, here using the SIFT algorithm to obtain corresponding points;
matching, i.e., computing the feature points of each view and determining the correspondence between two views with a matching algorithm;
fundamental matrix estimation, which computes the camera intrinsics and the rotation and translation matrices from the geometric relations of the corresponding points;
triangulation, which estimates the 3D positions of points seen in two images, then uses the existing 3D point estimates to establish matches across more images and performs robust estimation;
reconstruction, which yields a possibly incomplete three-dimensional model with a camera position estimate for each view;
bundle adjustment, which jointly refines the camera positions and the feature points scattered across the different views and yields the final 3D reconstruction result.
Preferably, the method further comprises storing a positioning map and positioning the AR device in real time. The positioning map consists mainly of the three-dimensional points from SfM and their corresponding descriptors. During real-time positioning, as the AR device keeps moving, the photos it takes are obtained in real time; description information is computed for each photo, and the 10 images with the most similar description information are found in the positioning map. For each of the top-10 pictures, its 100 stored feature descriptors are queried and matched against the feature descriptors of the current picture, a PnP problem is constructed, and the picture's pose is solved; the 10 pictures yield 10 PnP problems, and finally the best of the 10 poses is selected as the final pose according to RANSAC.
Preferably, the method further comprises solving for the camera's extrinsic parameters from the PnP problem formed by two-dimensional picture coordinates and their corresponding three-dimensional points: given the known camera intrinsics, the feature points of an object observed in several two-dimensional pictures are put in correspondence with the three-dimensional points on that object;
the basic solution projects three-dimensional world coordinates onto the two-dimensional plane through the camera's projection matrix and computes the error between each projected point and its corresponding feature point; the problem can be solved iteratively with methods such as RANSAC, using least squares or similar methods in each iteration to optimize the camera extrinsics, which finally represent the camera's position and orientation in three-dimensional space.
The technical solution provided by the embodiments of the present application can achieve the following beneficial effects:
compared with the prior art, the AR device positioning method based on a positioning map reconstructs a positioning map from SIFT descriptors in advance, needs neither an IMU nor a GPS unit, and can position accurately in real time over a large area using only the AR device's video sensor.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of the AR device positioning method based on a positioning map of the present invention;
FIG. 2 is a schematic diagram of SIFT feature descriptor matching in the AR device positioning method based on a positioning map of the present invention.
Detailed Description
The embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The embodiments described are evidently only some, not all, embodiments of the invention. All other embodiments obtained by a person skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to FIG. 1, the invention provides an AR device positioning method based on a positioning map, comprising the following steps:
collecting scene pictures with GPS information, extracting feature points, and matching feature-point descriptors;
solving the relative pose (R, T) of each match, running SfM, and recovering the poses of all pictures;
selecting, for each picture, the 100 top-ranked feature points and storing the SIFT descriptors of the corresponding three-dimensional points;
dividing the scene and storing each 50 m x 50 m area as one positioning-map block;
collecting real-time pictures with the AR device, performing a fast coarse retrieval, and matching feature points between the AR picture and the positioning-map images;
selecting the 10 best-matching pictures, in which descriptors of the AR device's picture correspond to three-dimensional points of the positioning map, thereby forming PnP problems;
solving each PnP problem to obtain the pose of the AR device's query picture;
after the PnP problems of all 10 pictures are solved, obtaining the pose of the AR device's query image and selecting the optimal pose according to the RANSAC principle.
If outdoor positioning is needed, the pictures can be taken with a drone or a mobile phone; GPS information must be attached to the pictures, but it need not be very accurate, as the subsequent algorithm refines it. If an indoor scene needs to be positioned, a camera or similar capture method can be used; because there is no GPS signal indoors, several control points must be entered manually to fix the scene scale.
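As an illustration of the geotag requirement, the minimal sketch below reads the GPS tag that most drones and phones embed in JPEG EXIF data, using Pillow's public getexif/get_ifd API; the file name is illustrative, and the coordinates remain in EXIF's degrees/minutes/seconds form.

```python
# Minimal sketch (assumption: the pictures carry standard EXIF GPS tags).
from PIL import Image
from PIL.ExifTags import GPSTAGS

exif = Image.open("drone_0001.jpg").getexif()
gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo IFD tag
gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

# Degrees/minutes/seconds tuples plus N/S and E/W reference letters.
print(gps.get("GPSLatitude"), gps.get("GPSLatitudeRef"))
print(gps.get("GPSLongitude"), gps.get("GPSLongitudeRef"))
```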
In the SIFT descriptor extraction process, key points are first extracted from the image using a DoG (difference-of-Gaussians) pyramid. These key points are found where the image changes (e.g., object edges, corners, shading changes). A series of images is obtained by Gaussian filtering at different scales, and the DoG images are obtained by subtracting Gaussian-filtered images at two adjacent scales. Then, for each key point, a circular region is constructed using the point's scale and orientation; the pixels in the circular region have their mean subtracted and are normalized; the circular region is divided into several sub-regions, and an orientation histogram is computed within each sub-region. Finally, the orientation histograms of all sub-regions are concatenated to form a vector, which is the SIFT descriptor. The SIFT descriptor is rotation- and scale-invariant and is a commonly used image feature description method in computer vision.
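As a concrete illustration, the short sketch below extracts SIFT key points and 128-dimensional descriptors with OpenCV, whose SIFT implementation builds the DoG pyramid and orientation histograms described above; the image path is illustrative.

```python
# Minimal SIFT extraction sketch (OpenCV >= 4.4 ships SIFT in the main module).
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()                      # DoG pyramid + descriptor stages
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each keypoint carries a sub-pixel location, scale, and orientation;
# descriptors is an (N, 128) float32 array, one row per key point.
print(len(keypoints), descriptors.shape)
```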
The SIFT feature descriptor matching process includes the following steps:
Referring to FIG. 2, a SIFT feature descriptor is computed for each key point; the descriptor is a 128-dimensional vector containing the feature information and gradient orientation information of the area around the key point.
For each key point in the first image, its similarity to all key points of the second image is computed (typically using Euclidean distance, cosine similarity, or the like), and the several key points that match it best are found; these are the candidate matching points.
For each candidate matching point, its similarity to the other candidate matching points in the first image is computed, and screening is performed according to the similarity. The following screening methods are generally employed:
a) For each matching point, select the point with the highest similarity as its unique match;
b) For each matching point, compute the ratio of its similarity to that of the second-best matching point; if the ratio is smaller than a certain threshold, the matching point is considered reliable, otherwise it is discarded.
Finally, the remaining matching points are the final SIFT feature descriptor matching result. A minimal sketch of this matching-and-screening procedure follows.
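The sketch below brute-force matches the descriptors of two images and applies the best/second-best ratio test from step b); the 0.75 threshold is a common choice in the literature, not a value taken from this patent.

```python
# Minimal sketch of SIFT matching with the ratio test (threshold assumed).
import cv2

sift = cv2.SIFT_create()
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
kp1, desc1 = sift.detectAndCompute(img1, None)
kp2, desc2 = sift.detectAndCompute(img2, None)

bf = cv2.BFMatcher(cv2.NORM_L2)               # Euclidean distance on SIFT vectors
knn = bf.knnMatch(desc1, desc2, k=2)          # two best candidates per key point

# Keep a match only when the best candidate clearly beats the second best.
good = [m for m, n in knn if m.distance < 0.75 * n.distance]
print(len(good), "reliable matches")
```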
Multi-view Structure from Motion (SfM) is a technique used mainly for three-dimensional reconstruction. It makes it possible to compute the three-dimensional shape and position of an object from multiple images, i.e., to reconstruct a three-dimensional model.
The basic steps are as follows:
feature extraction, where the sift algorithm is used to match the corresponding points, other algorithms may also be used.
Matching, namely calculating corresponding feature points at each view angle, and determining the corresponding relation between the two views by using a matching algorithm.
And (3) estimating a basic matrix, and calculating internal parameters between cameras and a rotation and translation matrix based on the geometric relation of the corresponding points.
Triangulation, the 3D position estimate between points in the given two images is calculated.
The existing 3D point estimation is used to determine the matches between more images and then a robust estimation is performed.
Reconstruction yields a possibly incomplete three-dimensional model with a camera position estimate for each view.
And (3) bundling adjustment, namely performing optimization adjustment on the camera positions and the feature points scattered in different views, and obtaining a final 3D reconstruction result. Among them, the bundle adjustment Bundle Adjustment (BA) procedure is the most representative and core procedure in SfM. In this process, parameters of all cameras and all feature points are considered, and initial estimates of the cameras and three-dimensional points are optimized by minimizing the re-projection error. In the BA process, parameters that need to be optimized include internal parameters of each camera, external translation and rotation vectors, and the positions of all three-dimensional points. The BA process iteratively optimizes the above parameters until it meets certain convergence criteria. Typically, BA is optimized using a nonlinear optimization algorithm L-BFGS. The optimization result of BA enables the camera of each view to optimally correspond to the 3D model of a specific scene, and ensures the spatial position accuracy of three-dimensional points.
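To make the "relative RT" and triangulation steps concrete, the sketch below estimates the essential matrix from the ratio-test matches of the previous sketch, recovers the relative rotation and translation, and triangulates 3D points with OpenCV. The intrinsic matrix K is an assumed example, not calibration data from the patent.

```python
# Minimal two-view sketch (assumes kp1, kp2, good from the matching sketch).
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],           # assumed focal length and
              [0.0, 1000.0, 360.0],           # principal point; real values
              [0.0, 0.0, 1.0]])               # come from camera calibration

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)     # the matched "relative RT"

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
P2 = K @ np.hstack([R, t])                         # second camera from R, t
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T                   # homogeneous -> Euclidean
```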
A positioning map is stored. The positioning map consists mainly of the three-dimensional points from SfM and their corresponding descriptors. Because each map feature point has many descriptors, putting all the data together would produce a rather large map file. Each picture therefore keeps only the 100 top-ranked feature descriptors, ranked by each point's track length, i.e., by how many pictures observe that point. The positioning map must also store information about each picture; since the pictures themselves occupy a lot of disk space, only each picture's description information is stored, and this description information is later used to find approximately similar pictures. A sketch of this selection and tiling scheme follows.
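The sketch below illustrates the two storage decisions just described: keeping each image's 100 descriptors with the longest tracks, and bucketing the map's 3D points into 50 m x 50 m blocks. The array-based layout and the map-frame x/y convention are assumptions for illustration, not the patent's on-disk format.

```python
# Minimal sketch of positioning-map storage (data layout assumed).
import numpy as np
from collections import defaultdict

def top_100_by_track(track_len, descriptors, point_ids):
    """Keep one image's 100 descriptors ranked by track length
    (how many pictures observe the underlying 3D point)."""
    order = np.argsort(-track_len)[:100]
    return point_ids[order], descriptors[order]

def block_key(xyz, block_size=50.0):
    """Index of the 50 m x 50 m tile containing a map point (x/y in metres)."""
    return (int(xyz[0] // block_size), int(xyz[1] // block_size))

def tile_map(points3d):
    """points3d: dict of point id -> (x, y, z); each tile becomes one map block."""
    blocks = defaultdict(list)
    for pid, xyz in points3d.items():
        blocks[block_key(xyz)].append(pid)
    return blocks
```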
The AR device is positioned in real time. As the AR device keeps moving, the pictures it takes are obtained in real time. Description information is computed for each picture, and the 10 images with the most similar description information are found in the positioning map; retrieving more pictures would in principle give a stronger guarantee, but practical verification found that genuinely similar pictures generally rank within the top 10. After the top-10 pictures are found, their 100 stored feature descriptors are queried and matched against the feature descriptors of the current picture. A PnP problem is constructed and the picture's pose is solved. The 10 pictures yield 10 PnP problems, and finally the best of the 10 poses is selected as the final pose according to RANSAC. A sketch of this query loop follows.
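The sketch below shows one way this query loop could look: score the map images against the query, solve PnP with RANSAC for each of the top 10, and keep the pose with the most inliers. match_count and correspondences_2d3d are hypothetical helpers standing in for the coarse retrieval and descriptor-matching stages; the map-image structure and K are likewise assumptions.

```python
# Minimal sketch of the real-time query loop (helpers and data model assumed).
import cv2

def localize(query_kp, query_desc, map_images, K):
    # Coarse retrieval: the 10 map images most similar to the query picture.
    top10 = sorted(map_images,
                   key=lambda m: match_count(query_desc, m.descriptors),
                   reverse=True)[:10]
    best_pose, best_inliers = None, 0
    for m in top10:
        # Hypothetical helper: pair query pixels with the image's 3D points.
        pts2d, pts3d = correspondences_2d3d(query_kp, query_desc, m)
        if len(pts3d) < 6:                     # too few points for a stable PnP
            continue
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
        if ok and inliers is not None and len(inliers) > best_inliers:
            best_pose, best_inliers = (rvec, tvec), len(inliers)  # RANSAC pick
    return best_pose                           # pose of the AR query picture
```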
Based on the PnP problem formed by two-dimensional picture coordinates and their corresponding three-dimensional points, the camera's extrinsic parameters (i.e., the camera's position and orientation in three-dimensional space) are solved: given the known camera intrinsics, the feature points of an object observed in several two-dimensional pictures are put in correspondence with the three-dimensional points on that object.
The basic solution computes the camera's projection matrix, projects the three-dimensional world coordinates onto the two-dimensional plane, and computes the error between each projected point and its corresponding feature point. This process can be solved iteratively with methods such as RANSAC, using least squares or similar methods in each iteration to optimize the camera extrinsics. The camera extrinsics finally obtained represent the camera's position and orientation in three-dimensional space.
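As a concrete form of this step, OpenCV's solvePnPRansac follows the scheme just described: RANSAC hypotheses from minimal point sets, then iterative least-squares refinement on the inliers. pts3d, pts2d, and K are assumed inputs carried over from the earlier sketches.

```python
# Minimal PnP sketch (inputs assumed: Nx3 pts3d, Nx2 pts2d, 3x3 K).
import cv2
import numpy as np

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    np.asarray(pts3d, dtype=np.float32),   # 3D map points
    np.asarray(pts2d, dtype=np.float32),   # their pixel observations
    K, None,                               # intrinsics; no lens distortion assumed
    reprojectionError=3.0,                 # pixel error that gates inliers
    flags=cv2.SOLVEPNP_ITERATIVE)          # least-squares refinement each round

R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> 3x3 matrix
camera_position = -R.T @ tvec              # camera centre in world coordinates
```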
The technical solution provided by the embodiments of the present application can achieve the following beneficial effects:
compared with the prior art, the AR device positioning method based on a positioning map reconstructs a positioning map from SIFT descriptors in advance, needs neither an IMU nor a GPS unit, and can position accurately in real time over a large area using only the AR device's video sensor.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the invention is defined by the claims.

Claims (7)

1. An AR device positioning method based on a positioning map, characterized by comprising the following steps:
collecting scene pictures with GPS information, extracting feature points, and matching feature-point descriptors;
solving the relative pose (R, T) of each match, running SfM, and recovering the poses of all pictures;
selecting, for each picture, the 100 top-ranked feature points and storing the SIFT descriptors of the corresponding three-dimensional points;
dividing the scene and storing each 50 m x 50 m area as one positioning-map block;
collecting real-time pictures with the AR device, performing a fast coarse retrieval, and matching feature points between the AR picture and the positioning-map images;
selecting the 10 best-matching pictures, in which descriptors of the AR device's picture correspond to three-dimensional points of the positioning map, thereby forming PnP problems;
solving each PnP problem to obtain the pose of the AR device's query picture;
after the PnP problems of all 10 pictures are solved, obtaining the pose of the AR device's query image and selecting the optimal pose according to the RANSAC principle.
2. The AR device positioning method based on a positioning map according to claim 1, characterized in that, in the SIFT descriptor extraction process, key points are first extracted from the image using a DoG pyramid; key points are found where the image changes; a series of images is obtained by Gaussian filtering at different scales, and a DoG image is obtained by subtracting two Gaussian-filtered images of adjacent scales;
then, for each key point, a circular region is constructed using the point's scale and orientation; the pixels in the circular region have their mean subtracted and are normalized; the circular region is divided into several sub-regions, and an orientation histogram is computed within each sub-region;
finally, the orientation histograms of all sub-regions are concatenated to form a vector, which is the SIFT descriptor.
3. The AR device positioning method based on a positioning map according to claim 2, characterized in that the SIFT feature descriptor matching process comprises the following steps:
computing a SIFT feature descriptor for each key point, where the descriptor is a 128-dimensional vector containing the feature information and gradient orientation information of the area around the key point;
for each key point in the first image, computing its similarity to all key points of the second image and finding the several key points that match it best, these best-matching key points being the candidate matching points;
for each candidate matching point, computing its similarity to the other candidate matching points in the first image, and screening according to the similarity.
4. The AR device positioning method based on a positioning map according to claim 3, characterized in that the screening method comprises: for each matching point, selecting the point with the highest similarity as its unique match; and for each matching point, computing the ratio of its similarity to that of the second-best matching point, considering the matching point reliable if the ratio is smaller than a certain threshold and discarding it otherwise, the matching points that remain being the final SIFT feature descriptor matching result.
5. The AR device positioning method based on a positioning map according to claim 1, characterized in that multi-view SfM is a technique used mainly for three-dimensional reconstruction, which computes the three-dimensional shape and position of an object from multiple images, i.e., reconstructs a three-dimensional model, and comprises the following steps:
feature extraction, here using the SIFT algorithm to obtain corresponding points;
matching, i.e., computing the feature points of each view and determining the correspondence between two views with a matching algorithm;
fundamental matrix estimation, which computes the camera intrinsics and the rotation and translation matrices from the geometric relations of the corresponding points;
triangulation, which estimates the 3D positions of points seen in two images, then uses the existing 3D point estimates to establish matches across more images and performs robust estimation;
reconstruction, which yields a possibly incomplete three-dimensional model with a camera position estimate for each view;
bundle adjustment, which jointly refines the camera positions and the feature points scattered across the different views and yields the final 3D reconstruction result.
6. The AR device positioning method based on a positioning map according to claim 1, further comprising storing the positioning map and positioning the AR device in real time, wherein the positioning map consists mainly of the three-dimensional points from SfM and their corresponding descriptors; during real-time positioning, as the AR device keeps moving, the photos it takes are obtained in real time, description information is computed for each photo, and the 10 images with the most similar description information are found in the positioning map; after the top-10 pictures are found, their 100 stored feature descriptors are queried and matched against the feature descriptors of the current picture, a PnP problem is constructed, and the picture's pose is solved; the 10 pictures yield 10 PnP problems, and finally the best of the 10 poses is selected as the final pose according to RANSAC.
7. The AR device positioning method based on a positioning map according to claim 1, further comprising solving for the camera's extrinsic parameters from the PnP problem formed by two-dimensional picture coordinates and their corresponding three-dimensional points, by putting the feature points of an object observed in several two-dimensional pictures in correspondence with the three-dimensional points on that object, given the known camera intrinsics;
the basic solution projects three-dimensional world coordinates onto the two-dimensional plane through the camera's projection matrix and computes the error between each projected point and its corresponding feature point; the RANSAC method can be used to solve iteratively, with the least-squares method optimizing the camera extrinsics in each iteration; the camera extrinsics finally obtained represent the camera's position and orientation in three-dimensional space.

Priority Applications (1)

Application Number Priority Date Title
CN202310472310.2A 2023-04-25 AR equipment positioning method based on positioning map


Publications (2)

Publication Number Publication Date
CN116468878A (publication) 2023-07-21
CN116468878B (grant) 2024-05-24


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110675453A (en) * 2019-10-16 2020-01-10 北京天睿空间科技股份有限公司 Self-positioning method for moving target in known scene
CN111882590A (en) * 2020-06-24 2020-11-03 广州万维创新科技有限公司 AR scene application method based on single picture positioning
CN113298871A (en) * 2021-05-14 2021-08-24 视辰信息科技(上海)有限公司 Map generation method, positioning method, system thereof, and computer-readable storage medium
CN113808269A (en) * 2021-09-23 2021-12-17 视辰信息科技(上海)有限公司 Map generation method, positioning method, system and computer readable storage medium
CN115526983A (en) * 2022-03-30 2022-12-27 荣耀终端有限公司 Three-dimensional reconstruction method and related equipment
CN115578539A (en) * 2022-12-07 2023-01-06 深圳大学 Indoor space high-precision visual position positioning method, terminal and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant