WO2022000713A1 - Augmented reality self-positioning method based on aviation assembly - Google Patents

Augmented reality self-positioning method based on aviation assembly

Info

Publication number
WO2022000713A1
WO2022000713A1 · PCT/CN2020/108443 · CN2020108443W
Authority
WO
WIPO (PCT)
Prior art keywords
assembly
scene
positioning
pose
self
Prior art date
Application number
PCT/CN2020/108443
Other languages
English (en)
Chinese (zh)
Inventor
叶波
唐健钧
丁晓
常壮
金莹莹
Original Assignee
南京翱翔信息物理融合创新研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京翱翔信息物理融合创新研究院有限公司
Publication of WO2022000713A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data

Definitions

  • The invention relates to the technical field of self-positioning, and in particular to an augmented reality self-positioning method based on aviation assembly.
  • Virtual-reality-guided assembly has been widely used in the field of complex product assembly, but virtual reality equipment can only present purely virtual information; it carries no information from the real environment, and the sense of immersion is weak. Using augmented reality equipment to guide aviation product assembly therefore avoids the drawback that virtual reality equipment can only provide a single virtual scene.
  • The core of augmented reality is multi-sensor-fusion self-positioning technology, which is widely used in autonomous driving, sweeping robots, logistics robots and augmented reality. Through sensors such as the camera and inertial measurement unit carried by the device, the position and attitude of the device relative to the environment can be obtained in real time.
  • Combining the camera with an inertial measurement unit provides good short-term position and attitude estimation and compensates for the motion blur that appears when the camera moves rapidly, so multi-sensor fusion greatly improves positioning accuracy. However, limited by the principle of visual positioning, the device cannot be positioned, or is positioned only poorly, in blank areas with few feature points.
  • In view of the above shortcomings of the prior art, the purpose of the present invention is to provide an augmented reality self-positioning method based on aviation assembly that addresses the long delivery cycle, complex operation and weak sense of immersion of aviation product assembly.
  • An augmented reality self-positioning method based on aviation assembly includes designing a system framework, building an assembly scene, constructing a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system and carrying out a timing positioning process, and specifically includes the following steps:
  • Step 1: The system framework adopts the client-server parallel development mode to receive and send data.
  • The client is connected to the server wirelessly and is used to transmit the assembly scene and assembly process information to the server.
  • The server is wirelessly connected to the client and is used to transmit the parsed poses of the assembly scene feature points and the label information to the client;
  • Step 2: After completing the system framework design of Step 1, build an assembly scene.
  • The assembly scene includes a parts area, a to-be-assembled area and a label area.
  • The label area includes a plurality of labels, which are used to associate the position and attitude relationships among the labels and transmit them to the server;
  • Step 3: After completing the construction of the assembly scene in Step 2, construct a high-precision three-dimensional map of the assembly scene.
  • The construction of the high-precision three-dimensional map first uses the distance information provided by the depth camera and the inertial measurement unit to obtain a dense 3D map of the assembly scene, then uses Apriltag tags to fill the dense 3D map with information and create a discrete map, and then fuses the dense 3D map and the discrete map into a high-precision 3D map of the assembly scene, which is transmitted to the server;
  • Step 4: The high-precision three-dimensional map of the assembly scene constructed in Step 3 is passed to the construction of self-positioning scene information, and the map is first analyzed.
  • Apriltag tags are attached to areas with few feature points to form a tag set for the assembly scene, the relative pose relationships between the tags in the set are then measured, and the spatial position relationships of the assembly parts are established according to the assembly process and assembly manual and transmitted to the server;
  • Step 5: The spatial position relationships from the self-positioning scene information built in Step 4 are passed to the design of the self-positioning vision system, which includes creating a virtual model, computing the device pose in real time, and fusing the virtual and real scenes.
  • The created virtual model is linked to the real-time device pose computation and is used by the AR development platform to build a three-dimensional scene; the three-dimensional space coordinates of the virtual model are set according to the spatial position relationships of the assembly parts, and the augmented reality device is then placed in the scene.
  • The real-time device pose is linked to the virtual-real scene fusion and is used to load the virtual object onto the client, realizing the fused display of the virtual object and the assembly scene;
  • Step 6: After completing the design of the self-positioning vision system in Step 5, the timing positioning process is performed.
  • The timing positioning process first completes the initialization of the self-positioning vision system in the to-be-assembled area, then loads the high-precision three-dimensional map and opens two threads, and then compares the poses of the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the label pose is used to correct the fused pose and the system outputs the corrected pose.
  • The client includes AR glasses, an inertial measurement unit and an industrial computer; the inertial measurement unit includes a sensor, and the industrial computer is connected to the sensor and controls it so that the computed data are transmitted to the server through a serial port.
  • In Step 3, the depth camera is used to collect video of one full loop around the assembly scene; feature extraction and optical flow tracking are performed on the collected video images, the extracted features are filtered, and feature frames are then extracted for feature point retention.
  • In Step 3, the information filling includes the key frames of the Apriltag labels and the label corner information corresponding to those key frames.
  • In Step 6, the loading of the high-precision three-dimensional map is divided into two threads: one thread detects the Apriltag label information in real time, estimates the pose of the depth camera relative to the label, and then, using the label poses and the spatial position relationships of the self-positioning scene, converts it into a pose relative to the world coordinate system; the other thread fuses the inertial measurement unit with the feature points in the assembly scene for fused positioning and obtains the pose of the depth camera relative to the world coordinate system in real time.
  • The specific steps of Step 5 are: (1) calculate the pose of the Apriltag tag; (2) calculate the IMU pose; (3) calculate the VSLAM pose; (4) transfer the calculated poses to the server, fuse them with the three-dimensional space coordinates of the virtual model, and transmit the result to the client for fused display.
  • The device pose includes the pose of the Apriltag tag, the IMU pose and the VSLAM pose.
  • The beneficial effects of the invention are as follows: the operator wears the augmented reality device and the server interprets the assembly instructions, which are presented to the operator as virtual information that guides the operator to the parts area to find parts, guides the operator to the area to be assembled, and indicates the assembly precautions. This effectively improves the operator's understanding of the task, lowers the operating threshold, ensures that the assembly task is completed efficiently and reliably, and also allows accurate positioning in blank areas with few feature points.
  • Fig. 1 is a framework diagram of the augmented reality system of the present invention;
  • Fig. 2 is a framework diagram of the assembly scene of the present invention;
  • Fig. 3 is a flow chart of the collection of the high-precision three-dimensional map of the assembly scene of the present invention;
  • Fig. 4 is a flow chart of the assembly process and real-time equipment positioning of the present invention.
  • The present invention includes designing a system framework, building an assembly scene, building a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system and carrying out a timing positioning process, and specifically includes the following steps:
  • Step 1: The system framework adopts the client-server parallel development mode to receive and send data.
  • The client is connected to the server wirelessly and is used to transmit the assembly scene and assembly process information to the server.
  • The server is wirelessly connected to the client and is used to transmit the parsed poses of the assembly scene feature points and the label information to the client;
  • Step 2: After completing the system framework design of Step 1, build an assembly scene.
  • The assembly scene includes a parts area, a to-be-assembled area and a label area.
  • The label area includes a plurality of labels, which are used to associate the position and attitude relationships among the labels and transmit them to the server. The position and attitude relationships among the labels are obtained as follows: any label is selected as the start label, its position is set as the origin (0, 0, 0) and its initial rotation attitude as (0, 0, 0), and the remaining labels are expressed as displacements and rotations relative to the start label; these displacements and rotations serve as their initial position and rotation poses, as in the sketch below.
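  • A minimal sketch of this label convention, using illustrative (not patent-specified) label IDs and offsets: each label's pose is stored as a 4×4 homogeneous transform relative to the start label, whose pose is the identity; such a table stands in for the position and attitude relationships transmitted to the server.

```python
import numpy as np

def pose_matrix(translation, rotation_deg=(0.0, 0.0, 0.0)):
    """4x4 homogeneous transform from a translation (metres) and an
    XYZ Euler rotation (degrees), expressed relative to the start label."""
    rx, ry, rz = np.radians(rotation_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = translation
    return T

# Label 1 is the start label: origin (0, 0, 0), rotation (0, 0, 0).
# The offsets of the other labels are illustrative placeholders only.
label_poses = {
    1: pose_matrix((0.0, 0.0, 0.0)),
    2: pose_matrix((1.2, 0.0, 0.0)),
    3: pose_matrix((1.2, 0.8, 0.0)),
}
```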
  • Step 3: After completing the construction of the assembly scene in Step 2, construct a high-precision three-dimensional map of the assembly scene.
  • The construction of the high-precision three-dimensional map first uses the distance information provided by the depth camera and the inertial measurement unit to obtain a dense 3D map of the assembly scene, then uses Apriltag tags to fill the dense 3D map with information and create a discrete map, and then fuses the dense 3D map and the discrete map into a high-precision 3D map of the assembly scene, which is transmitted to the server;
  • Step 4: The high-precision three-dimensional map of the assembly scene constructed in Step 3 is passed to the construction of self-positioning scene information, and the map is first analyzed.
  • Apriltag tags are attached to areas with few feature points to form a tag set for the assembly scene, the relative pose relationships between the tags in the set are measured to determine the relative poses within the assembly scene, and the spatial position relationships of the assembly parts are then established according to the assembly process and assembly manual and transmitted to the server;
  • Step 5: The spatial position relationships from the self-positioning scene information built in Step 4 are passed to the design of the self-positioning vision system, which includes creating a virtual model, computing the device pose in real time, and fusing the virtual and real scenes.
  • The created virtual model is linked to the real-time device pose computation and is used by the AR development platform to build a three-dimensional scene; the three-dimensional space coordinates of the virtual model are set according to the spatial position relationships of the assembly parts, and the augmented reality device is then placed in the scene.
  • The real-time device pose is fused with the virtual and real scene and is used to load the virtual object into the AR glasses on the client, realizing the fused display of the virtual object and the assembly scene.
  • The device pose includes the pose of the Apriltag tag, the IMU pose and the VSLAM pose.
  • Step 6: After completing the design of the self-positioning vision system in Step 5, the timing positioning process is performed.
  • The timing positioning process first completes the initialization of the self-positioning vision system in the to-be-assembled area, then loads the high-precision three-dimensional map and opens two threads, and then compares the poses of the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the label pose is used to correct the fused pose and the system outputs the corrected pose. The label pose is obtained by detecting the label with the depth camera on the augmented reality device and then calculating the position and attitude of the label relative to the device.
  • The client includes AR glasses, an inertial measurement unit and an industrial computer; the inertial measurement unit includes a sensor, and the industrial computer is connected to the sensor and controls it so that the computed data are transmitted to the server through a serial port (a serial-read sketch follows).
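  • A minimal sketch of the serial link between the sensor and the industrial computer, assuming the pySerial package, a placeholder port name and a hypothetical comma-separated line format; the patent does not specify the device protocol.

```python
import serial  # pySerial

def read_imu_sample(port="/dev/ttyUSB0", baudrate=115200):
    """Read one line from the inertial measurement unit over the serial
    port and parse it; the 'ax,ay,az,gx,gy,gz' line format is assumed
    for illustration only."""
    with serial.Serial(port, baudrate, timeout=1.0) as ser:
        line = ser.readline().decode("ascii", errors="ignore").strip()
    if not line:
        return None
    ax, ay, az, gx, gy, gz = (float(v) for v in line.split(","))
    return {"accel": (ax, ay, az), "gyro": (gx, gy, gz)}
```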
  • In Step 3, the depth camera is used to collect video of one full loop around the assembly scene; feature extraction and optical flow tracking are performed on the collected video images, the extracted features are filtered, and feature frames are then extracted for feature point retention.
  • In Step 3, the information filling includes the key frames of the Apriltag labels and the label corner information corresponding to those key frames.
  • In Step 6, the loading of the high-precision three-dimensional map is divided into two threads: one thread detects the Apriltag label information in real time, estimates the pose of the depth camera relative to the label, and then, using the label poses and the spatial position relationships of the self-positioning scene, converts it into a pose relative to the world coordinate system (as in the sketch below); the other thread fuses the inertial measurement unit with the feature points in the assembly scene for fused positioning and obtains the pose of the depth camera relative to the world coordinate system in real time.
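  • A minimal sketch of the conversion performed by the label-detection thread, assuming the detection yields the label's pose in the depth camera frame as a 4×4 transform and the label's world pose is known from the label set of Step 2 (names are illustrative):

```python
import numpy as np

def camera_pose_in_world(T_world_tag, T_cam_tag):
    """Convert a tag-relative camera pose into a world-frame pose.

    T_world_tag : 4x4 pose of the detected label in the world frame
                  (from the label set of the assembly scene).
    T_cam_tag   : 4x4 pose of the same label in the depth camera frame
                  (built from the R and T of the Apriltag detection).
    Returns the 4x4 pose of the depth camera in the world frame.
    """
    T_tag_cam = np.linalg.inv(T_cam_tag)   # camera as seen from the label
    return T_world_tag @ T_tag_cam         # chain: world <- label <- camera
```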
  • The specific steps of Step 5 are as follows. Let the coordinate system of the tag code be $O_t$ and the depth camera coordinate system be $O_c$. For any point $P_t$ on the tag code, its coordinates $P_c$ in the depth camera coordinate system satisfy $P_c = R\,P_t + T$ (1), where $R$ is the rotation matrix representing the rotation of the depth camera coordinate system relative to the tag code coordinate system, and $T$ is the translation vector representing the translation of the depth camera coordinate system relative to the tag code.
  • Let the image coordinate system be $O_i$ and the pixel coordinate system be $O_p$. A point $P_c = (X_c, Y_c, Z_c)^T$ on the tag code and its imaging point $(u, v)$ in the image plane of the depth camera satisfy $Z_c\,(u, v, 1)^T = K\,(X_c, Y_c, Z_c)^T$ (2), with $K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$.
  • In formula (2), $(c_x, c_y)$ is the center of the image plane, $f_x$ and $f_y$ are the normalized focal lengths of the x and y axes, and $K$ is the depth camera intrinsic matrix, which converts the depth camera coordinate system into the image plane coordinate system. The least squares method is used to solve formulas (1) and (2) and obtain the intrinsic matrix $K$ of the depth camera; when the depth camera detects a tag, the Apriltag algorithm is used to obtain $R$ and $T$, as in the sketch below.
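  • A minimal sketch of obtaining $R$ and $T$ from a detected tag, assuming OpenCV's solvePnP as a stand-in for the Apriltag pose step; the corner ordering, tag side length and intrinsic matrix K are placeholders, not values from the patent.

```python
import cv2
import numpy as np

def tag_pose_from_corners(corners_px, K, tag_size=0.10):
    """Estimate R and T of a square tag from its four detected corners.

    corners_px : (4, 2) pixel coordinates of the tag corners.
    K          : 3x3 depth camera intrinsic matrix of formula (2).
    tag_size   : tag side length in metres (placeholder value).
    """
    s = tag_size / 2.0
    # Corner coordinates in the tag frame (object points of formula (1));
    # the ordering must match the detector's corner ordering (assumed here).
    obj_pts = np.array([[-s, -s, 0.0],
                        [ s, -s, 0.0],
                        [ s,  s, 0.0],
                        [-s,  s, 0.0]], dtype=np.float64)
    img_pts = np.asarray(corners_px, dtype=np.float64).reshape(4, 1, 2)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
    if not ok:
        return None, None
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix R of formula (1)
    return R, tvec               # translation vector T of formula (1)
```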
  • Step 1, design the system framework: use the client-server parallel development mode to receive and send data. First, the AR glasses, inertial measurement unit and industrial computer in the client are connected to the server wirelessly, so that the assembly scene and assembly process information are transmitted to the server; the server, wirelessly connected to the client, then transmits the parsed poses of the assembly scene feature points and the label information back to the client (a minimal message-exchange sketch follows);
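  • A minimal sketch of the wireless client-server exchange, assuming JSON messages over TCP; the message fields, address and port are illustrative and not defined by the patent.

```python
import json
import socket

SERVER_ADDR = ("192.168.1.10", 9000)   # placeholder server address

def send_scene_update(frame_id, imu_sample, detected_labels):
    """Client side: send assembly scene / process information to the server
    and receive the parsed feature-point poses and label information."""
    request = {
        "frame_id": frame_id,
        "imu": imu_sample,              # e.g. accelerometer/gyro readings
        "labels": detected_labels,      # e.g. [{"id": 1, "corners": [...]}]
    }
    with socket.create_connection(SERVER_ADDR, timeout=2.0) as sock:
        sock.sendall(json.dumps(request).encode("utf-8") + b"\n")
        reply = sock.makefile("r", encoding="utf-8").readline()
    return json.loads(reply)            # parsed poses and label information
```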
  • Step 2, build an assembly scene: after completing the system framework design of Step 1, the aviation parts placed in the parts area are assembled in the to-be-assembled area, and label 1 is selected among the 8 labels arranged in the label area. The position of label 1 is set as the origin (0, 0, 0) and its initial rotation attitude as (0, 0, 0). Label 2 is displaced and rotated relative to label 1, and this displacement and rotation are recorded as its initial pose. The remaining labels are treated analogously, with the rotation attitude of each label set to (0, 0, 0); that is, the spatial placement of each label is adjusted so that their orientations are consistent, as Table 1 shows:
  • Step 3, build a high-precision 3D map of the assembly scene: after completing the construction of the assembly scene in Step 2, the depth camera completes its initialization at label 1 and then collects video while moving around the assembly scene. Feature extraction and optical flow tracking are performed on the collected video images (a minimal sketch follows), the extracted features are filtered, and feature frames are extracted for feature point retention; combined with the distance information provided by the inertial measurement unit, this yields a dense three-dimensional map of the assembly scene. The dense 3D map is then filled with the Apriltag key frames and the label corner information corresponding to those key frames, a discrete map is established, and the two are fused into a high-precision 3D map of the assembly scene;
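  • A minimal sketch of the feature extraction and optical flow tracking applied to the collected video, assuming OpenCV; the Shi-Tomasi detector and pyramidal Lucas-Kanade tracker shown here stand in for whichever front end the implementation actually uses.

```python
import cv2
import numpy as np

def track_features(prev_gray, next_gray, max_corners=500):
    """Detect corners in the previous frame and track them into the next
    frame with pyramidal Lucas-Kanade optical flow, keeping only the
    points that were tracked successfully."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                       qualityLevel=0.01, minDistance=8)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                      prev_pts, None)
    good = status.reshape(-1) == 1
    return prev_pts.reshape(-1, 2)[good], next_pts.reshape(-1, 2)[good]
```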
  • Step 4, build the self-positioning scene information: the high-precision 3D map of the assembly scene from Step 3 is passed to the construction of self-positioning scene information; the map is analyzed, and artificial Apriltag labels are attached in areas with few feature points to form the label set of the assembly scene. The relative pose relationships between the labels in the set are then measured (see the sketch below), and the spatial position relationships of the assembly parts are established according to the assembly process and assembly manual;
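  • A minimal sketch of measuring the relative pose between two labels of the set, assuming both labels are observed in the same depth camera frame and their camera-frame poses are available as 4×4 transforms (names are illustrative); chaining such pairwise measurements back to label 1 yields each label's pose in the scene frame used in Step 2.

```python
import numpy as np

def relative_label_pose(T_cam_labelA, T_cam_labelB):
    """Pose of label B expressed in label A's coordinate system, measured
    from a single frame in which the depth camera sees both labels."""
    return np.linalg.inv(T_cam_labelA) @ T_cam_labelB   # T_labelA_labelB
```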
  • Step 5, design a self-positioning vision system: the spatial position relationships from the self-positioning scene information built in Step 4 are passed to the design of the self-positioning vision system, which includes creating a virtual model, computing the device pose in real time, and fusing the virtual and real scenes.
  • The created virtual model is linked to the real-time device pose computation and is used by the AR development platform to build a three-dimensional scene; the three-dimensional space coordinates of the virtual model are set according to the spatial position relationships of the assembly parts, the augmented reality device is placed in the scene, and the pose of the depth camera in the device is computed in real time. The real-time device pose is then fused with the virtual and real scene to load the virtual object onto the AR glasses and realize the fused display of the virtual object and the assembly scene, as in the sketch below;
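  • A minimal sketch of how the real-time device pose drives the virtual-real fusion, assuming the AR development platform accepts a model-view transform per virtual part; the rendering call itself is platform-specific and omitted.

```python
import numpy as np

def model_view_transform(T_world_device, T_world_part):
    """Transform that places a virtual part in the device (camera) frame.

    T_world_device : 4x4 pose of the augmented reality device in the world
                     frame, updated in real time by the vision system.
    T_world_part   : 4x4 pose of the virtual part, set from the spatial
                     position relationships of the assembly parts.
    """
    return np.linalg.inv(T_world_device) @ T_world_part

# Each render frame: recompute this transform from the latest device pose
# and hand it to the AR platform so the virtual part stays registered to
# the real assembly scene.
```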
  • Step 6, timing positioning process: after completing the design of the self-positioning vision system in Step 5, the timing positioning process is carried out.
  • The timing positioning process first completes the initialization of the self-positioning vision system in the to-be-assembled area, then loads the high-precision three-dimensional map and opens two threads, and then compares the poses of the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the label pose is used to correct the fused pose and the self-positioning vision system outputs the corrected pose, thereby completing the self-positioning of the aviation assembly. A sketch of this check follows.
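  • A minimal sketch of the timing positioning check, assuming 4×4 world-frame poses from the two threads and simple translation/rotation thresholds; the actual tolerances and correction rule are design choices not fixed by the patent. A full system would typically re-initialise the fusion filter at the label pose rather than simply substituting it, but that detail is likewise unspecified.

```python
import numpy as np

def timed_positioning_step(T_fused, T_label, trans_tol=0.05, rot_tol_deg=5.0):
    """Compare the fused (VSLAM + IMU) pose with the label-based pose.
    Output the fused pose if they agree within tolerance; otherwise use
    the label pose to correct the result."""
    dT = np.linalg.inv(T_label) @ T_fused
    trans_err = np.linalg.norm(dT[:3, 3])
    # Rotation angle of the residual transform, from the trace of its R part.
    cos_angle = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))
    if trans_err <= trans_tol and rot_err_deg <= rot_tol_deg:
        return T_fused      # fused result meets the set requirement
    return T_label          # error too large: output the corrected pose
```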

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An augmented reality self-positioning method based on aviation assembly is disclosed. The method comprises: designing a system framework, building an assembly scene, constructing a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system, and carrying out a timing positioning process. By means of the present invention, an operator's understanding of a task is effectively improved, the operator's operating threshold is lowered, the efficient and reliable completion of an assembly task is ensured, and accurate positioning can also be achieved in a blank area with few feature points.
PCT/CN2020/108443 2020-06-28 2020-08-11 Augmented reality self-positioning method based on aviation assembly WO2022000713A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010597190.5A CN111968228B (zh) 2020-06-28 2020-06-28 一种基于航空装配的增强现实自定位方法
CN202010597190.5 2020-06-28

Publications (1)

Publication Number Publication Date
WO2022000713A1 true WO2022000713A1 (fr) 2022-01-06

Family

ID=73360965

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/108443 WO2022000713A1 (fr) 2020-06-28 2020-08-11 Augmented reality self-positioning method based on aviation assembly

Country Status (2)

Country Link
CN (1) CN111968228B (fr)
WO (1) WO2022000713A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115016647A (zh) * 2022-07-07 2022-09-06 国网江苏省电力有限公司电力科学研究院 一种面向变电站故障模拟的增强现实三维注册方法
CN117848331A (zh) * 2024-03-06 2024-04-09 河北美泰电子科技有限公司 基于视觉标签地图的定位方法及装置
CN117974794A (zh) * 2024-04-02 2024-05-03 深圳市博硕科技股份有限公司 一种薄片摆件机精准视觉定位系统

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734945B (zh) * 2021-03-30 2021-08-17 上海交大智邦科技有限公司 一种基于增强现实的装配引导方法、系统及应用
CN113220121B (zh) * 2021-05-04 2023-05-09 西北工业大学 一种基于投影显示的ar紧固件辅助装配系统及方法
CN114323000B (zh) * 2021-12-17 2023-06-09 中国电子科技集团公司第三十八研究所 线缆ar引导装配系统及方法
CN114494594B (zh) * 2022-01-18 2023-11-28 中国人民解放军63919部队 基于深度学习的航天员操作设备状态识别方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110388919A (zh) * 2019-07-30 2019-10-29 上海云扩信息科技有限公司 增强现实中基于特征图和惯性测量的三维模型定位方法
US20190358547A1 (en) * 2016-11-14 2019-11-28 Lightcraft Technology Llc Spectator virtual reality system
CN110705017A (zh) * 2019-08-27 2020-01-17 四川科华天府科技有限公司 一种基于ar的模型拆组模拟系统及模拟方法
CN110928418A (zh) * 2019-12-11 2020-03-27 北京航空航天大学 一种基于mr的航空线缆辅助装配方法及系统

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190212142A1 (en) * 2018-01-08 2019-07-11 Glen C. Gustafson System and method for using digital technology to perform stereo aerial photo interpretation
CN109062398B (zh) * 2018-06-07 2021-06-29 中国航天员科研训练中心 一种基于虚拟现实与多模态人机接口的航天器交会对接方法
CN109759975A (zh) * 2019-03-21 2019-05-17 成都飞机工业(集团)有限责任公司 一种飞机舱位辅助操作的增强现实人工标志物的定位夹具
CN110076277B (zh) * 2019-05-07 2020-02-07 清华大学 基于增强现实技术的配钉方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190358547A1 (en) * 2016-11-14 2019-11-28 Lightcraft Technology Llc Spectator virtual reality system
CN110388919A (zh) * 2019-07-30 2019-10-29 上海云扩信息科技有限公司 增强现实中基于特征图和惯性测量的三维模型定位方法
CN110705017A (zh) * 2019-08-27 2020-01-17 四川科华天府科技有限公司 一种基于ar的模型拆组模拟系统及模拟方法
CN110928418A (zh) * 2019-12-11 2020-03-27 北京航空航天大学 一种基于mr的航空线缆辅助装配方法及系统

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115016647A (zh) * 2022-07-07 2022-09-06 国网江苏省电力有限公司电力科学研究院 一种面向变电站故障模拟的增强现实三维注册方法
CN117848331A (zh) * 2024-03-06 2024-04-09 河北美泰电子科技有限公司 基于视觉标签地图的定位方法及装置
CN117974794A (zh) * 2024-04-02 2024-05-03 深圳市博硕科技股份有限公司 一种薄片摆件机精准视觉定位系统
CN117974794B (zh) * 2024-04-02 2024-06-04 深圳市博硕科技股份有限公司 一种薄片摆件机精准视觉定位系统

Also Published As

Publication number Publication date
CN111968228B (zh) 2021-11-05
CN111968228A (zh) 2020-11-20

Similar Documents

Publication Publication Date Title
WO2022000713A1 (fr) Augmented reality self-positioning method based on aviation assembly
CN109307508B (zh) 一种基于多关键帧的全景惯导slam方法
JP3486613B2 (ja) 画像処理装置およびその方法並びにプログラム、記憶媒体
CN108406731A (zh) 一种基于深度视觉的定位装置、方法及机器人
CN110261870A (zh) 一种用于视觉-惯性-激光融合的同步定位与建图方法
CN106898022A (zh) 一种手持式快速三维扫描系统及方法
CN107478214A (zh) 一种基于多传感器融合的室内定位方法及系统
CN108052103B (zh) 基于深度惯性里程计的巡检机器人地下空间同时定位和地图构建方法
CN111156998A (zh) 一种基于rgb-d相机与imu信息融合的移动机器人定位方法
CN110533719B (zh) 基于环境视觉特征点识别技术的增强现实定位方法及装置
CN110033489A (zh) 一种车辆定位准确性的评估方法、装置及设备
CN112116651B (zh) 一种基于无人机单目视觉的地面目标定位方法和系统
Momeni-k et al. Height estimation from a single camera view
CN208323361U (zh) 一种基于深度视觉的定位装置及机器人
CN112541973B (zh) 虚实叠合方法与系统
CN115371665B (zh) 一种基于深度相机和惯性融合的移动机器人定位方法
CN116222543B (zh) 用于机器人环境感知的多传感器融合地图构建方法及系统
CN110751123B (zh) 一种单目视觉惯性里程计系统及方法
CN115147344A (zh) 一种增强现实辅助汽车维修中的零件三维检测与跟踪方法
CN113920191B (zh) 一种基于深度相机的6d数据集构建方法
CN111199576A (zh) 一种基于移动平台的室外大范围人体姿态重建方法
Muffert et al. The estimation of spatial positions by using an omnidirectional camera system
CN117115271A (zh) 无人机飞行过程中的双目相机外参数自标定方法及系统
Yang et al. Visual SLAM using multiple RGB-D cameras
CN112945233A (zh) 一种全局无漂移的自主机器人同时定位与地图构建方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20943235

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20943235

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.07.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20943235

Country of ref document: EP

Kind code of ref document: A1