CN111882590A - AR scene application method based on single picture positioning - Google Patents
- Publication number
- CN111882590A (application number CN202010587809.4A)
- Authority
- CN
- China
- Prior art keywords
- data
- algorithm
- point cloud
- single picture
- pose
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Abstract
The invention discloses an AR scene application method based on single picture positioning. A server builds a three-dimensional map model of a global scene from image data; the model comprises a descriptor corresponding to each feature point of the image data and the three-dimensional coordinate parameters corresponding to each descriptor. A mobile terminal acquires single picture data and uploads it to the server; the server extracts the feature points of the single picture data to obtain a descriptor for each feature point; the descriptor of each feature point in the single picture is matched with the descriptors of the feature points of the global scene model, and the pose of the single picture in the global scene model is acquired through the three-dimensional coordinate parameters corresponding to the matched descriptors. Finally, the pose of the single picture in the global scene model is converted into the AR application coordinate system, and the AR application is activated.
Description
Technical Field
The invention relates to the technical field of AR image recognition, in particular to an AR scene application method based on single picture positioning.
Background
Image-based modeling has long been an important research direction in computer vision. The typical pipeline establishes correspondences between images through feature point extraction and descriptor matching of the feature points, then reconstructs a model of the physical world from the images using multi-view geometry and mathematical modeling. Its drawbacks are that camera parameters and camera models differ across image sources, so errors in the estimated camera model introduce systematic error; and that the volume of image data is so large that computation can take anywhere from days to a week.
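The patent does not spell out the matching step beyond "descriptor matching of feature points". Purely as an illustration (function name, toy descriptors, and the 0.8 ratio threshold are assumptions, not taken from the patent), nearest-neighbour matching with Lowe's ratio test can be sketched in NumPy as:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    desc_a: (N, D) array of descriptors from image A
    desc_b: (M, D) array of descriptors from image B
    Returns a list of (index_a, index_b) pairs that pass the ratio test.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)   # L2 distance to every descriptor in B
        order = np.argsort(dist)
        best, second = order[0], order[1]
        if dist[best] < ratio * dist[second]:       # keep only unambiguous matches
            matches.append((i, int(best)))
    return matches
```

A real system would use an approximate nearest-neighbour index instead of this brute-force loop, but the ratio test itself is the standard filter for ambiguous correspondences.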
AR (augmented reality) technology is developing rapidly: mobile AR platforms such as Google's ARCore and Apple's ARKit, and AR glasses such as Microsoft's HoloLens and the domestic Shadow Creator, are all dedicated to advancing AR. The most important underlying technology in AR is SLAM (simultaneous localization and mapping). The drawback is that existing AR applications are confined to small areas, limited by plane recognition and image recognition, and cannot realize AR within a large pre-built scene model.
Disclosure of Invention
The invention aims to provide a method for positioning within an AR scene through a single picture, so as to reduce the computation required of the mobile terminal to acquire the AR scene and enable efficient presentation on the mobile terminal.
The invention relates to an AR scene application method based on single picture positioning, which comprises the following steps:
step 1: the server carries out three-dimensional map modeling on the image data of the global scene; the three-dimensional map modeling comprises a descriptor corresponding to each feature point of the image data and three-dimensional coordinate parameters corresponding to the descriptors;
step 2: the mobile terminal acquires single picture data and uploads the single picture data to the server;
step 3: the server extracts the feature points of the single picture data to obtain a descriptor corresponding to each feature point;
step 4: matching the descriptor corresponding to each feature point in the single picture with the descriptors corresponding to the feature points of the global scene model, and acquiring the pose of the single picture in the global scene model through the three-dimensional coordinate parameters corresponding to the matched descriptors;
step 5: converting the pose of the single picture in the global scene model into the AR application coordinate system, and activating the AR application.
In the AR scene application method based on single picture positioning, the server performs three-dimensional modeling on the video data, obtaining a descriptor corresponding to each feature point of the global scene model in the image data together with the three-dimensional coordinate parameters corresponding to each descriptor; the mobile terminal acquires single picture data and uploads it to the server; the server extracts the feature points of the single picture data to obtain a descriptor for each feature point; the descriptors of the single picture are matched with those of the global scene model, and the pose of the single picture in the global scene model is acquired through the three-dimensional coordinate parameters corresponding to the matched descriptors. Because the global scene model is built with unified three-dimensional coordinate parameters, the problem of differing camera parameters and camera models across image sources is avoided. The mobile device is responsible only for acquiring single picture data, so no heavy computation is needed and little redundant information is produced; since the devices used to collect data during mapping and the pictures captured by users' devices after the positioning algorithm goes live are unified, a more stable data source is available, image-modeling precision improves, and the amount of computation drops. The whole application is split into two parts running on the server and the mobile device respectively, reducing the computational burden on the mobile device. Finally, the pose of the single picture in the global scene model is converted into the AR application coordinate system and the AR application is activated, providing a complete application scenario for image modeling.
Positioning based on the coordinate parameters of a single image also provides extremely high-precision localization for AR applications.
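The patent does not name the algorithm used to recover the pose from 2D-3D descriptor matches. One common option for this kind of problem is a linear DLT (direct linear transform) estimate of the camera projection matrix; the sketch below is an assumed illustration of that technique, not the patented method (function names and data are mine):

```python
import numpy as np

def dlt_projection(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P from >= 6 non-coplanar 2D-3D
    correspondences via the direct linear transform (DLT).
    points_3d: (N, 3); points_2d: (N, 2). Returns P up to scale."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)          # null-space vector = flattened P

def project(P, points_3d):
    """Project 3D points through P and dehomogenise to pixel coordinates."""
    Xh = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]
```

In practice a robust wrapper (e.g. RANSAC over the matches) and a nonlinear refinement would follow the linear estimate, since descriptor matches always contain outliers.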
Drawings
Fig. 1 is a schematic flow chart of an AR scene application method based on single picture positioning according to the present invention.
Detailed Description
As shown in fig. 1, the AR scene application method based on single picture positioning according to the present invention includes the following steps:
step 1: the server carries out three-dimensional map modeling on the image data of the global scene; the three-dimensional map modeling comprises a descriptor corresponding to each feature point of the image data and three-dimensional coordinate parameters corresponding to the descriptors;
step 2: the mobile terminal acquires single picture data and uploads the single picture data to the server;
step 3: the server extracts the feature points of the single picture data to obtain a descriptor corresponding to each feature point;
step 4: matching the descriptor corresponding to each feature point in the single picture with the descriptors corresponding to the feature points of the global scene model, and acquiring the pose of the single picture in the global scene model through the three-dimensional coordinate parameters corresponding to the matched descriptors;
step 5: converting the pose of the single picture in the global scene model into the AR application coordinate system, and activating the AR application.
The step 1 comprises the following steps:
step 1-A1: the server extracts the feature points of the image data, obtains a descriptor corresponding to each feature point and the three-dimensional coordinate parameters corresponding to each descriptor, recovers the three-dimensional positions of the feature points in the image stream through a feature-point algorithm and a multi-view geometry algorithm, and forms a sparse point cloud;
step 1-A2: restoring the real scale of the sparse point cloud using the corresponding IMU data and the VISLAM algorithm as the real-scale data source;
step 1-A3: calculating a dense three-dimensional point cloud through a multi-view photometric matching algorithm from computer vision.
Three-dimensional sparse/dense reconstruction from image data, with IMU data supporting recovery of the real scale, bounds the 6-degree-of-freedom error (3 position degrees of freedom + 3 orientation degrees of freedom) of a single picture within a known scene, i.e. the reconstructed global scene model, to within 10 centimeters, providing a foundation for high-precision AR applications. The VISLAM algorithm helps recover a high-quality sparse point-cloud map, and IMU data helps recover the missing scale information.
The step 1-A1 comprises the following steps:
step 1-A1-1: the server extracts and screens image data by using an optical flow tracking algorithm of the image stream, and establishes a set of key frames of the image data;
step 1-A1-2: extracting SIFT or deep-learning feature points from the set of key frames, and calculating a descriptor of each feature point;
step 1-A1-3: matching the key frames according to the descriptor corresponding to each feature point and the three-dimensional coordinate parameters corresponding to each descriptor, finding the one-to-one correspondence of feature points, and verifying the matches through computer-vision geometric constraints;
step 1-A1-4: modeling according to the key-frame matching results combined with a physical observation model from computer vision, and optimizing with the Bundle Adjustment algorithm to finally obtain a high-precision sparse point cloud. Key-frame matching solves the key-frame screening problem and further removes outlier data, accelerating the mapping process; screening the key frames also improves mapping precision.
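Steps 1-A1-3 and 1-A1-4 recover 3D structure from matched key frames. A standard building block (not spelled out in the patent) is linear two-view triangulation of a matched feature point; a minimal sketch, with the projection matrices assumed known from the key-frame poses:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point observed in two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel observations.
    Returns the 3D point in the common world frame."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenise
```

Bundle Adjustment then jointly refines all such triangulated points and the key-frame poses by minimizing reprojection error; the linear result above only serves as its initialization.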
The step 1-A2 comprises the following steps:
step 1-A2-1: organizing the synchronized image and IMU data, and combining them with the result of the SFM algorithm to obtain the pose corresponding to each image in the sparse point cloud;
step 1-A2-2: estimating relative poses with real scale between frames through the VISLAM algorithm;
step 1-A2-3: computing, with an optimization algorithm, the real-scale data source of the established sparse point cloud from the pose calculated by the VISLAM algorithm and the pose calculated by the SFM algorithm, thereby recovering the real scale of the point cloud.
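Step 1-A2-3 leaves the optimization unspecified. If what is sought is a single global scale factor between the scale-free SFM trajectory and the metric VISLAM trajectory, one plausible reading is a closed-form least-squares fit over consecutive displacement vectors (the function name and this simplification are assumptions, not the patent's algorithm):

```python
import numpy as np

def recover_scale(sfm_positions, metric_positions):
    """Least-squares scale factor s minimising ||s * d_sfm - d_metric||^2
    over consecutive displacement vectors of two trajectories that visit
    the same frames in the same order. Using displacements makes the
    estimate independent of any constant offset between the two frames."""
    d_sfm = np.diff(np.asarray(sfm_positions), axis=0).ravel()
    d_met = np.diff(np.asarray(metric_positions), axis=0).ravel()
    return float(d_sfm @ d_met / (d_sfm @ d_sfm))
```

Multiplying every SFM point and camera position by the returned factor places the sparse point cloud at real-world scale.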
The step 1-A3 comprises the following steps:
step 1-A3-1: collecting the depth information of the sparse point cloud and the co-visibility graph between key frames;
step 1-A3-2: establishing a physical model of multi-view geometry based on photometric errors;
step 1-A3-3: using a Primal-Dual convex optimization method for the physical model of the multi-view geometry to perform rapid dense depth map recovery;
step 1-A3-4: and calculating a dense three-dimensional point cloud by using an algorithm of dense depth image reconstruction.
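The Primal-Dual dense recovery of step 1-A3-3 is well beyond a short sketch, but the underlying idea of photometric matching in step 1-A3-2 can be illustrated with a toy winner-take-all disparity search on a rectified 1-D image pair (entirely illustrative; the patented method operates on full multi-view cost volumes with convex optimization):

```python
import numpy as np

def photometric_disparity(left, right, max_disp):
    """Winner-take-all disparity for a rectified 1-D image pair.
    For each pixel x in `left`, picks the disparity d in 0..max_disp that
    minimises the photometric error |left[x] - right[x - d]|."""
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    disp = np.zeros(len(left), dtype=int)
    for x in range(len(left)):
        costs = [abs(left[x] - right[x - d]) if x - d >= 0 else np.inf
                 for d in range(max_disp + 1)]
        disp[x] = int(np.argmin(costs))
    return disp
```

Depth is then inverse to disparity; the Primal-Dual formulation replaces this per-pixel argmin with a globally regularized energy, which is what makes the recovered depth maps dense and smooth.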
The three-dimensional map modeling also includes a real-scale data source. Step 1 also comprises the following steps:
step 1-B1: the method comprises the steps that a server obtains laser data, wherein the laser data comprise data of a real scale and image data;
step 1-B2: the server extracts a descriptor corresponding to each feature point of the image data, the three-dimensional coordinate parameters corresponding to each descriptor, and the corresponding real-scale data, and estimates the pose and movement speed of the device through a visual SLAM tracking algorithm;
step 1-B3: according to the pose result and the movement speed, performing de-warping operation on the laser point cloud data;
step 1-B4: calculating a high-precision device pose from the de-distorted laser point cloud through a three-dimensional point cloud ICP (Iterative Closest Point) algorithm;
step 1-B5: optimizing the estimation of the pose again by using a Bundle Adjustment optimization algorithm and combining the vision and the observation of the laser;
step 1-B6: simultaneously calculating dense point-cloud data and sparse feature point-cloud data with a point-cloud fusion algorithm, and having the server perform three-dimensional map modeling of the scene from the dense and sparse point-cloud data. Scanning and modeling the map with a laser device greatly improves accuracy; although laser devices provide high-precision data, they suffer from low frequency and distortion, problems that high-frequency vision algorithms can solve.
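At the core of step 1-B4's ICP is a closed-form rigid alignment between two point sets. A sketch of that inner step (the Kabsch/SVD solution; correspondences are assumed known here, whereas a full ICP re-estimates them by nearest-neighbour search each iteration):

```python
import numpy as np

def align_point_clouds(src, dst):
    """Closed-form rigid transform (R, t) minimising ||R @ src_i + t - dst_i||^2
    over corresponding points (Kabsch / SVD solution).
    src, dst: (N, 3) arrays with row i of src matching row i of dst."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # forbid reflections
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    return R, t
```

Iterating "find nearest neighbours, then call this alignment" until convergence is the classical ICP loop used to register the de-distorted laser scans against the map.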
Converting the pose of the single picture in the global scene model into the AR application coordinate system and activating the AR application supports pictures from any device and any user, provides high-precision positioning within the reconstructed model map from those pictures, supports correcting the accumulated error of the mobile phone's SLAM, and unifies all users into the coordinate system of the constructed model map.
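The conversion into the AR application coordinate system in step 5 amounts to composing homogeneous transforms. Assuming a known alignment transform from the map frame to the AR frame (the names `T_ar_from_map` and `make_T` are my notation, not the patent's), it might look like:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_ar_frame(T_ar_from_map, T_map_from_cam):
    """Re-express a camera pose, given in the global map frame, in the AR
    application's frame: T_ar_from_cam = T_ar_from_map @ T_map_from_cam."""
    return T_ar_from_map @ T_map_from_cam
```

The same composition, applied in reverse, lets the server's localization result correct the drift of the phone's on-device SLAM frame.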
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.
Claims (7)
1. An AR scene application method based on single picture positioning is characterized by comprising the following steps:
step 1: the server carries out three-dimensional map modeling on the image data of the global scene; the three-dimensional map modeling comprises a descriptor corresponding to each feature point of the image data and three-dimensional coordinate parameters corresponding to the descriptors;
step 2: the mobile terminal acquires single picture data and uploads the single picture data to the server;
step 3: the server extracts the feature points of the single picture data to obtain a descriptor corresponding to each feature point;
step 4: matching the descriptor corresponding to each feature point in the single picture with the descriptors corresponding to the feature points of the global scene model, and acquiring the pose of the single picture in the global scene model through the three-dimensional coordinate parameters corresponding to the matched descriptors;
step 5: converting the pose of the single picture in the global scene model into the AR application coordinate system, and activating the AR application.
2. The method for applying the AR scene based on the single picture positioning as claimed in claim 1, wherein said step 1 comprises the steps of:
step 1-A1: the server extracts the feature points of the image data, obtains a descriptor corresponding to each feature point and three-dimensional coordinate parameters corresponding to the descriptors, recovers the three-dimensional data of the feature points in the image stream by a feature point algorithm and a computer multi-view geometric algorithm, and forms sparse point clouds;
step 1-A2: restoring the real scale of the sparse point cloud using the corresponding IMU data and the VISLAM algorithm as the real-scale data source;
step 1-A3: and calculating dense three-dimensional point cloud by a multi-view photometric matching algorithm of computer vision.
3. The method as claimed in claim 2, wherein the step 1-a1 includes the following steps:
step 1-A1-1: the server extracts and screens image data by using an optical flow tracking algorithm of the image stream, and establishes a set of key frames of the image data;
step 1-A1-2: extracting SIFT or deep-learning feature points from the set of key frames, and calculating a descriptor of each feature point;
step 1-A1-3: matching the key frames according to the descriptors corresponding to each feature point and the three-dimensional coordinate parameters corresponding to the descriptors, finding the one-to-one corresponding relation of the feature points, and verifying the screening result through the computer vision geometric relation;
step 1-A1-4: and modeling by combining a physical observation model of computer vision according to the matching result of the key frame, and optimizing by using a Bundle Adjustment algorithm to finally obtain the high-precision sparse point cloud.
4. The method as claimed in claim 2, wherein the step 1-a2 includes the following steps:
step 1-A2-1: arranging the synchronous image and IMU data, and combining the result of the SFM algorithm to obtain the pose corresponding to the image in the sparse point cloud;
step 1-A2-2: estimating relative poses with real scales between frames by a VISLAM algorithm;
step 1-A2-3: computing, with an optimization algorithm, the real-scale data source of the established sparse point cloud from the pose calculated by the VISLAM algorithm and the pose calculated by the SFM algorithm, thereby recovering the real scale of the point cloud.
5. The method for applying the AR scene based on the single picture positioning as claimed in claim 2, wherein said step 1-A3 comprises the following steps:
step 1-A3-1: collecting depth information of the sparse point cloud and a common view relation graph between key frames;
step 1-A3-2: establishing a physical model of multi-view geometry based on photometric errors;
step 1-A3-3: using a Primal-Dual convex optimization method for the physical model of the multi-view geometry to perform rapid dense depth map recovery;
step 1-A3-4: and calculating a dense three-dimensional point cloud by using an algorithm of dense depth image reconstruction.
6. The method as claimed in claim 1, wherein the three-dimensional map modeling further comprises a real-scale data source.
7. The method for applying the AR scene based on the single-picture positioning as claimed in claim 6, wherein the step 1 further comprises the steps of:
step 1-B1: the method comprises the steps that a server obtains laser data, wherein the laser data comprise data of a real scale and image data;
step 1-B2: the server extracts a descriptor corresponding to each feature point of the image data, the three-dimensional coordinate parameters corresponding to each descriptor, and the corresponding real-scale data, and estimates the pose and movement speed of the device through a visual SLAM tracking algorithm;
step 1-B3: according to the pose result and the movement speed, performing de-warping operation on the laser point cloud data;
step 1-B4: calculating a high-precision device pose from the de-distorted laser point cloud through a three-dimensional point cloud ICP (Iterative Closest Point) algorithm;
step 1-B5: optimizing the estimation of the pose again by using a Bundle Adjustment optimization algorithm and combining the vision and the observation of the laser;
step 1-B6: and simultaneously calculating dense point cloud data and sparse feature point cloud data by using a point cloud fusion algorithm, and performing three-dimensional map modeling on the scene by using the server according to the dense point cloud data and the sparse feature point cloud data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010587809.4A CN111882590A (en) | 2020-06-24 | 2020-06-24 | AR scene application method based on single picture positioning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010587809.4A CN111882590A (en) | 2020-06-24 | 2020-06-24 | AR scene application method based on single picture positioning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111882590A true CN111882590A (en) | 2020-11-03 |
Family
ID=73156572
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010587809.4A Pending CN111882590A (en) | 2020-06-24 | 2020-06-24 | AR scene application method based on single picture positioning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111882590A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022252347A1 (en) * | 2021-06-04 | 2022-12-08 | 华为技术有限公司 | 3d map retrieval method and apparatus |
CN116468878A (en) * | 2023-04-25 | 2023-07-21 | 深圳市兰星科技有限公司 | AR equipment positioning method based on positioning map |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103745498A (en) * | 2014-01-16 | 2014-04-23 | 中国科学院自动化研究所 | Fast positioning method based on images |
CN103854283A (en) * | 2014-02-21 | 2014-06-11 | 北京理工大学 | Mobile augmented reality tracking registration method based on online study |
US20170186212A1 (en) * | 2015-03-31 | 2017-06-29 | Baidu Online Network Technology (Beijing) Co., Ltd. | Picture presentation method and apparatus |
CN107223269A (en) * | 2016-12-29 | 2017-09-29 | 深圳前海达闼云端智能科技有限公司 | Three-dimensional scene positioning method and device |
CN108108748A (en) * | 2017-12-08 | 2018-06-01 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN108447116A (en) * | 2018-02-13 | 2018-08-24 | 中国传媒大学 | The method for reconstructing three-dimensional scene and device of view-based access control model SLAM |
CN108780228A (en) * | 2016-01-19 | 2018-11-09 | 奇跃公司 | Utilize the augmented reality system and method for image |
CN109978931A (en) * | 2019-04-04 | 2019-07-05 | 北京悉见科技有限公司 | Method for reconstructing three-dimensional scene and equipment, storage medium |
CN110238831A (en) * | 2019-07-23 | 2019-09-17 | 青岛理工大学 | Robot teaching system and method based on RGB-D image and teaching machine |
CN110243370A (en) * | 2019-05-16 | 2019-09-17 | 西安理工大学 | A kind of three-dimensional semantic map constructing method of the indoor environment based on deep learning |
CN110634150A (en) * | 2018-06-25 | 2019-12-31 | 上海汽车集团股份有限公司 | Method, system and device for generating instant positioning and map construction |
CN110766716A (en) * | 2019-09-10 | 2020-02-07 | 中国科学院深圳先进技术研究院 | Method and system for acquiring information of space unknown moving target |
CN110849367A (en) * | 2019-10-08 | 2020-02-28 | 杭州电子科技大学 | Indoor positioning and navigation method based on visual SLAM fused with UWB |
CN110889349A (en) * | 2019-11-18 | 2020-03-17 | 哈尔滨工业大学 | VSLAM-based visual positioning method for sparse three-dimensional point cloud chart |
- 2020-06-24: CN application CN202010587809.4A (publication CN111882590A), status Pending
Non-Patent Citations (5)
Title |
---|
RUIZHUO ZHANG et al.: "Automatic Extraction of High-Voltage Power Transmission Objects from UAV Lidar Point Clouds", Remote Sensing, vol. 11, pages 1 - 33 *
ZHANG Jianyue: "Research on a Visual-Inertial SLAM Algorithm Based on Embedded Parallel Processing", China Master's Theses Full-text Database, Information Science and Technology, no. 2019, pages 138 - 3168 *
Yangqi Intelligent Community: "AICUG Open Course Notes | DiDi 3D Technology Development and Practice", pages 1 - 17, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/137105471?utm_id=0> *
LI Kai: "Research on a Portable Real-Time 3D Reconstruction System for Digital Factories", China Master's Theses Full-text Database, Information Science and Technology, no. 2019, pages 138 - 505 *
CHEN Ding et al.: "Autonomous UAV Positioning Method Fusing IMU and Monocular Vision", Journal of System Simulation, vol. 29, no. 1, pages 9 - 14 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110503703B (en) | Method and apparatus for generating image | |
CN110648398B (en) | Real-time ortho image generation method and system based on unmanned aerial vehicle aerial data | |
US20200357136A1 (en) | Method and apparatus for determining pose of image capturing device, and storage medium | |
WO2019219012A1 (en) | Three-dimensional reconstruction method and device uniting rigid motion and non-rigid deformation | |
CN110189399B (en) | Indoor three-dimensional layout reconstruction method and system | |
CN110300292B (en) | Projection distortion correction method, device, system and storage medium | |
US11557083B2 (en) | Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method | |
CN108089191B (en) | Global positioning system and method based on laser radar | |
WO2021218123A1 (en) | Method and device for detecting vehicle pose | |
US11170552B2 (en) | Remote visualization of three-dimensional (3D) animation with synchronized voice in real-time | |
CN108958469B (en) | Method for adding hyperlinks in virtual world based on augmented reality | |
WO2023280038A1 (en) | Method for constructing three-dimensional real-scene model, and related apparatus | |
US10726614B2 (en) | Methods and systems for changing virtual models with elevation information from real world image processing | |
US20190073825A1 (en) | Enhancing depth sensor-based 3d geometry reconstruction with photogrammetry | |
CN111882590A (en) | AR scene application method based on single picture positioning | |
CN111784776A (en) | Visual positioning method and device, computer readable medium and electronic equipment | |
CN111739137A (en) | Method for generating three-dimensional attitude estimation data set | |
CN107330980A (en) | A kind of virtual furnishings arrangement system based on no marks thing | |
CN109788270B (en) | 3D-360-degree panoramic image generation method and device | |
CN112714263A (en) | Video generation method, device, equipment and storage medium | |
CN113496503A (en) | Point cloud data generation and real-time display method, device, equipment and medium | |
CN116012509A (en) | Virtual image driving method, system, equipment and storage medium | |
WO2023086398A1 (en) | 3d rendering networks based on refractive neural radiance fields | |
CN114882106A (en) | Pose determination method and device, equipment and medium | |
CN107993247A (en) | Tracking positioning method, system, medium and computing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20220801 Address after: 510000 room 617, No. 193, Kexue Avenue, Huangpu District, Guangzhou City, Guangdong Province Applicant after: Guangzhou Gaowei Network Technology Co.,Ltd. Address before: Room 604, No. 193, Kexue Avenue, Huangpu District, Guangzhou, Guangdong 510000 Applicant before: Guangzhou wanwei Innovation Technology Co.,Ltd. |
|
TA01 | Transfer of patent application right |