WO2022000713A1 - Augmented reality self-positioning method based on aviation assembly - Google Patents
Augmented reality self-positioning method based on aviation assembly
- Publication number
- WO2022000713A1 (PCT/CN2020/108443, CN2020108443W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- assembly
- scene
- positioning
- pose
- self
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
Definitions
- the invention relates to the technical field of self-positioning, in particular to an augmented reality self-positioning method based on aviation assembly.
- virtual-reality-guided assembly has been widely used in the field of complex product assembly, but virtual reality equipment can only present virtual information, with no information from the real environment, so the sense of immersion is weak; using augmented reality equipment to guide aviation product assembly therefore avoids the drawback that virtual reality equipment can only provide a single virtual scene.
- the core of augmented reality is multi-sensor-fusion self-positioning technology, which is widely used in autonomous driving, sweeping robots, logistics robots and augmented reality. Through sensors such as the camera and inertial measurement unit carried by the carrier, the position and attitude of the carrier relative to the environment can be obtained in real time.
- combining the camera with the inertial measurement unit gives good position and attitude estimation over short periods and compensates for the image blur that occurs when the camera moves rapidly, so the positioning accuracy of multi-sensor fusion is greatly improved. However, limited by the principle of visual positioning, in blank areas with few feature points the device cannot be positioned and the positioning accuracy is poor.
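- To illustrate the short-horizon strength of inertial estimation mentioned above, here is a minimal dead-reckoning sketch in Python (not part of the patent; scipy's Rotation is used and the gravity convention is an assumption): orientation is propagated from the gyroscope rate and position from twice-integrated acceleration, which is accurate over short intervals but drifts without visual correction.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (assumed z-up convention)

def imu_dead_reckon(pos, vel, rot, accel_body, gyro, dt):
    """Propagate position, velocity and orientation over one IMU sample.

    Good over short horizons, but the double integration drifts quickly,
    which is why the method fuses it with visual positioning.
    """
    # Rotate the body-frame specific force into the world frame and add gravity.
    accel_world = rot.apply(accel_body) + GRAVITY
    # Integrate acceleration twice for position, once for velocity.
    pos = pos + vel * dt + 0.5 * accel_world * dt ** 2
    vel = vel + accel_world * dt
    # Propagate orientation from the angular rate (small-angle rotation vector).
    rot = rot * R.from_rotvec(np.asarray(gyro) * dt)
    return pos, vel, rot
```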
- the purpose of the present invention is to provide an augmented reality self-positioning method based on aviation assembly which, in view of the above shortcomings of the prior art, addresses the long delivery cycle, complex operation and weak sense of immersion of aviation product assembly.
- An augmented reality self-positioning method based on aviation assembly, which includes designing a system framework, building an assembly scene, constructing a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system and a timing positioning process, specifically including the following steps:
Step 1 The system framework design adopts the client-server parallel development mode to receive and send data.
- the client is connected to the server wirelessly, and is used to transmit the assembly scene and assembly process information to the server.
- the server is wirelessly connected to the client, and is used to transmit the parsed pose of the assembly scene feature points and label information to the client;
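- As an illustration of the step-1 data exchange, the following is a minimal client-side sketch assuming a simple JSON-over-TCP protocol; the message fields, the port, and the `send_scene_update` helper are illustrative assumptions, not part of the patent.

```python
import json
import socket

def send_scene_update(host: str, port: int, device_pose, label_id: int) -> dict:
    """Client side: send assembly-scene data and read back the parsed pose/label info."""
    message = {
        "type": "scene_update",            # hypothetical message schema
        "device_pose": list(device_pose),  # e.g. [x, y, z, qx, qy, qz, qw]
        "label_id": label_id,
    }
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps(message).encode("utf-8") + b"\n")
        reply = sock.makefile("r").readline()  # server replies with one JSON line
    return json.loads(reply)
```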
- Step 2 After completing the system framework design of the first step, build an assembly scene.
- the assembly scene includes a parts area, a to-be-assembled area, and a label area.
- the label area includes a plurality of labels, which are used to associate the position and attitude relationship between the plurality of labels and transmit them to the server;
- Step 3 After completing the construction of the assembly scene in the second step, construct a high-precision three-dimensional map of the assembly scene.
- the construction of the high-precision three-dimensional map of the assembly scene first uses the distance information provided by the depth camera and the inertial measurement unit to obtain a dense 3D map of the assembly scene, then uses Apriltag tags to fill the dense 3D map with information to create a discrete map, and then fuses the dense 3D map and the discrete map to form the high-precision 3D map of the assembly scene, which is transmitted to the server;
Step 4 The high-precision three-dimensional map of the assembly scene constructed in step 3 is passed to the building of self-positioning scene information, and the map is first analyzed.
- Apriltag tags are attached in areas with fewer feature points to form the tag set of the assembly scene; the relative pose relationships between the tags in the set are then measured, and the spatial position relationship of the assembly parts is established according to the assembly process and assembly manual and transmitted to the server;
Step 5 The spatial position relationship from the self-positioning scene information built in step 4 is transmitted to the design of the self-positioning vision system, which includes creating a virtual model, computing the device pose in real time, and fusing virtual and real scenes,
- the created virtual model is connected to the real-time device pose computation and is used by the AR development platform to build a three-dimensional scene; the three-dimensional space coordinates of the virtual model are set according to the spatial position relationship of the assembly parts, and the augmented reality device is then placed in the scene.
- the real-time device pose computation is connected to the virtual and real scene fusion, and is used to load the virtual object onto the client to realize the fused display of the virtual object and the assembly scene;
- Step 6 After completing the design of the self-positioning vision system in the fifth step, the timing positioning process is performed.
- the timing positioning process first completes the initialization of the self-positioning vision system in the to-be-assembled area, then loads the high-precision three-dimensional map and starts two threads, and then compares the poses of the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the label pose is used to correct the fused pose and the self-positioning vision system outputs the corrected pose.
- the client includes AR glasses, an inertial measurement unit and an industrial computer, the inertial measurement unit includes a sensor, and the industrial computer is connected to the sensor for controlling the sensor to transmit the calculated data to the server through a serial port.
- in step 3, the depth camera is used to collect video of one full loop around the assembly scene; feature extraction and optical flow tracking are performed on the collected video images, the extracted features are filtered, and keyframes are then extracted for feature point retention.
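- A minimal OpenCV sketch of the per-frame feature extraction, optical flow tracking and keyframe retention just described; the detector parameters and the parallax-based keyframe rule are illustrative assumptions.

```python
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, prev_pts):
    """Track corner features from the previous frame into the current frame."""
    if prev_pts is None or len(prev_pts) < 50:
        # Re-detect Shi-Tomasi corners when too few features survive.
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                           qualityLevel=0.01, minDistance=10)
    # Pyramidal Lucas-Kanade optical flow from the previous to the current frame.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    good = status.ravel() == 1
    return prev_pts[good], curr_pts[good]

def is_keyframe(prev_pts, curr_pts, min_parallax_px=20.0):
    """Illustrative keyframe rule: retain a frame when the average parallax is large."""
    return np.mean(np.linalg.norm(curr_pts - prev_pts, axis=-1)) > min_parallax_px
```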
- in step 3, the information filling includes the keyframes containing the Apriltag labels and the label corner point information corresponding to those keyframes.
- in step 6, the loading of the high-precision three-dimensional map is divided into two threads: one thread detects the Apriltag label information in real time, estimates the pose of the depth camera relative to the label from the Apriltag detection, and then converts it, using the labels and the spatial position relationship of the self-positioning scene, into the pose relative to the world coordinate system; the other thread fuses the inertial measurement unit with the feature points in the assembly scene for fusion positioning, obtaining the pose of the depth camera relative to the world coordinate system in real time.
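- The tag-based thread's conversion to the world coordinate system can be sketched with 4x4 homogeneous transforms (an assumption about representation, not the patent's notation): the camera pose in the world frame follows from the known tag-to-world pose stored in the map and the detected camera-to-tag pose.

```python
import numpy as np

def camera_pose_in_world(T_world_tag: np.ndarray, T_camera_tag: np.ndarray) -> np.ndarray:
    """Compose the depth camera pose relative to the world (map) frame.

    T_world_tag  : 4x4 pose of the detected tag in the world frame (from the tag set).
    T_camera_tag : 4x4 pose of the tag in the camera frame (from the Apriltag detection).
    """
    # T_world_camera = T_world_tag @ T_tag_camera = T_world_tag @ inv(T_camera_tag)
    return T_world_tag @ np.linalg.inv(T_camera_tag)
```

- The second thread produces its own world-frame camera pose from visual-inertial fusion, and the two estimates are compared in the timing positioning process of step 6.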
- the specific steps of step 5 are: (1) calculate the pose of the Apriltag tag; (2) calculate the IMU pose; (3) calculate the VSLAM pose; (4) transfer the calculated poses to the server, fuse them with the three-dimensional space coordinates of the virtual model, and transmit the result to the client for fused display.
- the device pose includes the pose of the Apriltag tag, the IMU pose, and the VSLAM pose.
- the beneficial effects of the invention are as follows: the operator wears the augmented reality device, the server interprets the assembly instructions, and the assembly instructions are presented in front of the operator in the form of virtual information, guiding the operator to the parts area to find parts, guiding the operator to the to-be-assembled area, and instructing the operator on assembly precautions. This effectively improves the operator's understanding of the task, lowers the operating threshold, ensures efficient and reliable completion of the assembly task, and also enables accurate positioning in blank areas with few feature points.
- FIG. 1 is a framework diagram of the augmented reality system of the present invention.
- FIG. 2 is a framework diagram of the assembly scene of the present invention.
- FIG. 3 is a flow chart of the collection of the high-precision three-dimensional map of the assembly scene of the present invention.
- FIG. 4 is a flow chart of the real-time positioning of the assembly process and equipment of the present invention.
- the present invention includes designing a system framework, building an assembly scene, building a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system and a timing positioning process, and specifically includes the following steps:
Step 1 The system framework design adopts the client-server parallel development mode to receive and send data.
- the client is connected to the server wirelessly, and is used to transmit the assembly scene and assembly process information to the server.
- the server is wirelessly connected to the client, and is used to transmit the parsed pose of the assembly scene feature points and label information to the client;
- Step 2 After completing the system framework design of the first step, build an assembly scene.
- the assembly scene includes a parts area, a to-be-assembled area, and a label area.
- the label area includes a plurality of labels, which are used to associate the position and attitude relationships among the labels and transmit them to the server. The position and attitude relationship between the labels is obtained as follows: select any label as the start label, set its position as the origin (0, 0, 0) and its initial rotation attitude as (0, 0, 0); the remaining labels are displaced and rotated relative to the start label, and this displacement and rotation is taken as each label's initial position and rotation pose, as sketched below.
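- One way such a tag set could be recorded is sketched here, with the start label at the origin and every other label stored as a displacement and rotation relative to it; the numerical values and the `tag_pose` helper are placeholders, not the values of the patent's Table 1.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def tag_pose(translation_xyz, rotation_xyz_deg=(0.0, 0.0, 0.0)) -> np.ndarray:
    """Build a 4x4 homogeneous pose from a displacement and an Euler rotation."""
    T = np.eye(4)
    T[:3, :3] = R.from_euler("xyz", rotation_xyz_deg, degrees=True).as_matrix()
    T[:3, 3] = translation_xyz
    return T

# The start label is the origin of the assembly-scene frame; the remaining labels
# are stored relative to it (placeholder displacements, consistent orientation).
TAG_POSES = {
    1: tag_pose((0.0, 0.0, 0.0)),
    2: tag_pose((1.2, 0.0, 0.0)),
    3: tag_pose((0.0, 1.5, 0.0)),
}
```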
- Step 3 After completing the construction of the assembly scene in the second step, construct a high-precision three-dimensional map of the assembly scene.
- the construction of the high-precision three-dimensional map of the assembly scene first uses the distance information provided by the depth camera and the inertial measurement unit to obtain a dense 3D map of the assembly scene, then uses Apriltag tags to fill the dense 3D map with information to create a discrete map, and then fuses the dense 3D map and the discrete map to form the high-precision 3D map of the assembly scene, which is transmitted to the server;
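- One possible (assumed) data layout for this fused map is sketched below: the dense point cloud from the depth-camera/IMU reconstruction plus a discrete layer of Apriltag keyframe observations, kept together in the structure that is sent to the server; the class and field names are illustrative.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class TagObservation:
    tag_id: int
    keyframe_id: int
    corners_px: np.ndarray            # 4x2 pixel corners of the tag in that keyframe

@dataclass
class AssemblySceneMap:
    dense_points: np.ndarray          # Nx3 dense point cloud of the assembly scene
    tag_layer: List[TagObservation] = field(default_factory=list)

    def add_tag_observation(self, obs: TagObservation) -> None:
        """Attach a discrete Apriltag observation to the dense map (the 'fusion' step)."""
        self.tag_layer.append(obs)
```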
Step 4 The high-precision three-dimensional map of the assembly scene constructed in step 3 is passed to the building of self-positioning scene information, and the map is first analyzed.
- Apriltag tags are attached in areas with fewer feature points to form the tag set of the assembly scene; the relative pose relationships between the tags in the set are then measured to calculate the relative poses within the assembly scene, and the spatial position relationship of the assembly parts is then established according to the assembly process and assembly manual and transmitted to the server;
Step 5 The spatial position relationship from the self-positioning scene information built in step 4 is transmitted to the design of the self-positioning vision system, which includes creating a virtual model, computing the device pose in real time, and fusing virtual and real scenes,
- the created virtual model is connected to the real-time device pose computation and is used by the AR development platform to build a three-dimensional scene; the three-dimensional space coordinates of the virtual model are set according to the spatial position relationship of the assembly parts, and the augmented reality device is then placed in the scene.
- the real-time device pose computation is connected to the virtual and real scene fusion, and is used to load the virtual object into the AR glasses on the client to realize the fused display of the virtual object and the assembly scene,
- the device pose includes the pose of the Apriltag tag, the IMU pose, and the VSLAM pose.
- Step 6 After completing the design of the self-positioning vision system in the fifth step, the timing positioning process is performed.
- the timing positioning process first completes the initialization of the self-positioning vision system in the to-be-assembled area, then loads the high-precision three-dimensional map and starts two threads, and then compares the poses of the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the label pose is used to correct the fused pose and the self-positioning vision system outputs the corrected pose. The label pose is obtained by detecting the label with the depth camera on the augmented reality device and then calculating the position and attitude of the label relative to the augmented reality device.
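- A sketch of this comparison, under the assumption that the error between the two threads is measured as the translational distance between their pose estimates and that a fixed threshold stands in for the "set requirement"; the correction here simply replaces the fused estimate with the label pose.

```python
import numpy as np

def select_output_pose(T_fusion: np.ndarray, T_tag: np.ndarray,
                       max_error_m: float = 0.05) -> np.ndarray:
    """Compare the fusion (visual-inertial) pose with the tag-based pose.

    If the translational disagreement is within tolerance the fusion result is
    trusted; otherwise the tag pose is used to correct (here: replace) it.
    """
    error = np.linalg.norm(T_fusion[:3, 3] - T_tag[:3, 3])
    if error <= max_error_m:
        return T_fusion            # fusion result meets the set requirement
    return T_tag                   # correct the fused pose with the label pose
```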
- the client includes AR glasses, an inertial measurement unit and an industrial computer, the inertial measurement unit includes a sensor, and the industrial computer is connected to the sensor for controlling the sensor to transmit the calculated data to the server through a serial port.
- in step 3, the depth camera is used to collect video of one full loop around the assembly scene; feature extraction and optical flow tracking are performed on the collected video images, the extracted features are filtered, and keyframes are then extracted for feature point retention.
- in step 3, the information filling includes the keyframes containing the Apriltag labels and the label corner point information corresponding to those keyframes.
- in step 6, the loading of the high-precision three-dimensional map is divided into two threads: one thread detects the Apriltag label information in real time, estimates the pose of the depth camera relative to the label from the Apriltag detection, and then converts it, using the labels and the spatial position relationship of the self-positioning scene, into the pose relative to the world coordinate system; the other thread fuses the inertial measurement unit with the feature points in the assembly scene for fusion positioning, obtaining the pose of the depth camera relative to the world coordinate system in real time.
- The specific steps of step 5 are as follows:
- a point on the label code has coordinates $P_t$ in the tag code coordinate system and coordinates $P_c = (X_c, Y_c, Z_c)^T$ in the depth camera coordinate system, and the correspondence between the two is $$P_c = R\,P_t + T \tag{1}$$ where $R$ is the rotation matrix representing the rotation of the depth camera coordinate system relative to the label code coordinate system, and $T$ is the translation vector representing the translation of the depth camera coordinate system relative to the label code;
- with the image coordinate system and pixel coordinate system established on the image plane, the point on the label code and its imaging point $(u, v)$ in the image plane of the depth camera are related by $$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = K\,P_c \tag{2}$$
- in formula (2), $(c_x, c_y)$ is the center of the image plane, $f_x$ and $f_y$ are the normalized focal lengths of the x and y axes, and $K$ is the depth camera intrinsic parameter matrix, which converts the depth camera coordinate system to the image plane coordinate system; the least squares method is applied to formulas (1) and (2) to obtain the intrinsic parameter matrix $K$ of the depth camera, and when the depth camera detects a tag, the Apriltag algorithm is used to obtain $R$ and $T$.
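- As an illustration of obtaining $R$ and $T$ of formula (1) once the intrinsic matrix $K$ is known, the following sketch uses OpenCV's PnP solver on the four detected tag corners; the corner ordering and the `tag_pose_from_corners` helper are assumptions, and the Apriltag algorithm referred to above may differ in detail.

```python
import cv2
import numpy as np

def tag_pose_from_corners(corners_px: np.ndarray, tag_size_m: float, K: np.ndarray):
    """Recover R and T of formula (1) from the four detected tag corners."""
    half = tag_size_m / 2.0
    # Tag corners in the tag-code frame (z = 0 plane); the order must match detection.
    object_pts = np.array([[-half,  half, 0.0],
                           [ half,  half, 0.0],
                           [ half, -half, 0.0],
                           [-half, -half, 0.0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(object_pts, corners_px.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)     # rotation matrix R of formula (1)
    return R, tvec                 # translation vector T of formula (1)
```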
Step 1 Design the system framework: use the client-server parallel development mode to receive and send data. First, connect the AR glasses, inertial measurement unit and industrial computer of the client to the server wirelessly so that the assembly scene and assembly process information are transmitted to the server; the server, wirelessly connected to the client, then transmits the parsed pose of the assembly scene feature points and the label information back to the client;
Step 2 Build the assembly scene: after completing the system framework design of step 1, place the aviation parts to be assembled in the parts area for assembly in the to-be-assembled area, and then select label 1 among the 8 labels arranged in the label area;
- the position of label 1 is set as the origin (0, 0, 0),
- and its initial rotation attitude is (0, 0, 0).
- label 2 is displaced and rotated relative to label 1, and this displacement and rotation is taken as its initial pose,
- and the rest of the labels are handled analogously, with the rotation attitude of each label set to (0, 0, 0); that is, the spatial position of each label is adjusted to ensure that its orientation is consistent, as shown in Table 1:
Step 3 Build the high-precision 3D map of the assembly scene: after completing the construction of the assembly scene in step 2, use the depth camera to complete initialization at label 1 and then collect video around the assembly scene; perform feature extraction and optical flow tracking on the collected video images, filter the extracted features, and extract keyframes for feature point retention; then combine this with the distance information provided by the inertial measurement unit to obtain a dense three-dimensional map of the assembly scene. The dense 3D map is filled with keyframes and the label corner information corresponding to the keyframes, and a discrete map is then established and fused with the dense 3D map to form the high-precision 3D map of the assembly scene;
Step 4 Build the self-positioning scene information: transfer the high-precision 3D map of the assembly scene from step 3 to the building of self-positioning scene information, then analyze the high-precision 3D map and attach artificial Apriltag labels in the areas with fewer feature points to form the label set of the assembly scene; then measure the relative pose relationships between the labels in the set, and establish the spatial position relationship of the assembly parts according to the assembly process and assembly manual;
Step 5 Design the self-positioning vision system: transfer the spatial position relationship of the self-positioning scene information built in step 4 to the design of the self-positioning vision system, which includes creating a virtual model, computing the device pose in real time, and fusing virtual and real scenes.
- The created virtual model is connected to the real-time device pose computation and is used by the AR development platform to build a three-dimensional scene; the three-dimensional space coordinates of the virtual model are set according to the spatial position relationship of the assembled parts, the augmented reality device is then placed in the scene and the pose of the depth camera in the device is computed in real time, and the real-time device pose is then fused with the virtual and real scene to load the virtual object onto the AR glasses and realize the fused display of the virtual object and the assembly scene;
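- A minimal sketch of this virtual-real fusion step: the virtual part is placed at a world-frame pose derived from the assembly spatial relationship and then expressed in the AR device's current frame for rendering; the function and argument names are illustrative assumptions.

```python
import numpy as np

def model_pose_in_device(T_world_model: np.ndarray, T_world_device: np.ndarray) -> np.ndarray:
    """Express a virtual model's world-frame pose in the AR device (camera) frame.

    T_world_model  : 4x4 pose of the virtual part from the assembly spatial relationship.
    T_world_device : 4x4 real-time device pose from the self-positioning vision system.
    """
    # T_device_model = inv(T_world_device) @ T_world_model; rendering the model at this
    # pose overlays it on the corresponding real part in the assembly scene.
    return np.linalg.inv(T_world_device) @ T_world_model
```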
- Step 6 timing positioning process: After completing the design of the self-positioning vision system in the fifth step, the timing positioning process is carried out.
- the timing positioning process first completes the initialization of the self-positioning vision system in the to-be-assembled area, then loads the high-precision three-dimensional map and starts two threads, and then compares the poses of the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the label pose is used to correct the fused pose, and the self-positioning vision system finally outputs the corrected pose, thereby completing the self-positioning for aviation assembly.
Landscapes
- Engineering & Computer Science (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Automation & Control Theory (AREA)
- Computer Graphics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010597190.5A CN111968228B (zh) | 2020-06-28 | 2020-06-28 | 一种基于航空装配的增强现实自定位方法 |
CN202010597190.5 | 2020-06-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022000713A1 true WO2022000713A1 (fr) | 2022-01-06 |
Family
ID=73360965
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/108443 WO2022000713A1 (fr) | 2020-06-28 | 2020-08-11 | Procédé d'auto-positionnement par réalité augmentée basé sur un ensemble d'aviation |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111968228B (fr) |
WO (1) | WO2022000713A1 (fr) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112734945B (zh) * | 2021-03-30 | 2021-08-17 | 上海交大智邦科技有限公司 | 一种基于增强现实的装配引导方法、系统及应用 |
CN113220121B (zh) * | 2021-05-04 | 2023-05-09 | 西北工业大学 | 一种基于投影显示的ar紧固件辅助装配系统及方法 |
CN114323000B (zh) * | 2021-12-17 | 2023-06-09 | 中国电子科技集团公司第三十八研究所 | 线缆ar引导装配系统及方法 |
CN114494594B (zh) * | 2022-01-18 | 2023-11-28 | 中国人民解放军63919部队 | 基于深度学习的航天员操作设备状态识别方法 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190212142A1 (en) * | 2018-01-08 | 2019-07-11 | Glen C. Gustafson | System and method for using digital technology to perform stereo aerial photo interpretation |
CN109062398B (zh) * | 2018-06-07 | 2021-06-29 | 中国航天员科研训练中心 | 一种基于虚拟现实与多模态人机接口的航天器交会对接方法 |
CN109759975A (zh) * | 2019-03-21 | 2019-05-17 | 成都飞机工业(集团)有限责任公司 | 一种飞机舱位辅助操作的增强现实人工标志物的定位夹具 |
CN110076277B (zh) * | 2019-05-07 | 2020-02-07 | 清华大学 | 基于增强现实技术的配钉方法 |
- 2020
- 2020-06-28 CN CN202010597190.5A patent/CN111968228B/zh active Active
- 2020-08-11 WO PCT/CN2020/108443 patent/WO2022000713A1/fr active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190358547A1 (en) * | 2016-11-14 | 2019-11-28 | Lightcraft Technology Llc | Spectator virtual reality system |
CN110388919A (zh) * | 2019-07-30 | 2019-10-29 | 上海云扩信息科技有限公司 | 增强现实中基于特征图和惯性测量的三维模型定位方法 |
CN110705017A (zh) * | 2019-08-27 | 2020-01-17 | 四川科华天府科技有限公司 | 一种基于ar的模型拆组模拟系统及模拟方法 |
CN110928418A (zh) * | 2019-12-11 | 2020-03-27 | 北京航空航天大学 | 一种基于mr的航空线缆辅助装配方法及系统 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115016647A (zh) * | 2022-07-07 | 2022-09-06 | 国网江苏省电力有限公司电力科学研究院 | 一种面向变电站故障模拟的增强现实三维注册方法 |
CN117848331A (zh) * | 2024-03-06 | 2024-04-09 | 河北美泰电子科技有限公司 | 基于视觉标签地图的定位方法及装置 |
CN117974794A (zh) * | 2024-04-02 | 2024-05-03 | 深圳市博硕科技股份有限公司 | 一种薄片摆件机精准视觉定位系统 |
CN117974794B (zh) * | 2024-04-02 | 2024-06-04 | 深圳市博硕科技股份有限公司 | 一种薄片摆件机精准视觉定位系统 |
Also Published As
Publication number | Publication date |
---|---|
CN111968228B (zh) | 2021-11-05 |
CN111968228A (zh) | 2020-11-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022000713A1 (fr) | Procédé d'auto-positionnement par réalité augmentée basé sur un ensemble d'aviation | |
CN109307508B (zh) | 一种基于多关键帧的全景惯导slam方法 | |
JP3486613B2 (ja) | 画像処理装置およびその方法並びにプログラム、記憶媒体 | |
CN108406731A (zh) | 一种基于深度视觉的定位装置、方法及机器人 | |
CN110261870A (zh) | 一种用于视觉-惯性-激光融合的同步定位与建图方法 | |
CN106898022A (zh) | 一种手持式快速三维扫描系统及方法 | |
CN107478214A (zh) | 一种基于多传感器融合的室内定位方法及系统 | |
CN108052103B (zh) | 基于深度惯性里程计的巡检机器人地下空间同时定位和地图构建方法 | |
CN111156998A (zh) | 一种基于rgb-d相机与imu信息融合的移动机器人定位方法 | |
CN110533719B (zh) | 基于环境视觉特征点识别技术的增强现实定位方法及装置 | |
CN110033489A (zh) | 一种车辆定位准确性的评估方法、装置及设备 | |
CN112116651B (zh) | 一种基于无人机单目视觉的地面目标定位方法和系统 | |
Momeni-k et al. | Height estimation from a single camera view | |
CN208323361U (zh) | 一种基于深度视觉的定位装置及机器人 | |
CN112541973B (zh) | 虚实叠合方法与系统 | |
CN115371665B (zh) | 一种基于深度相机和惯性融合的移动机器人定位方法 | |
CN116222543B (zh) | 用于机器人环境感知的多传感器融合地图构建方法及系统 | |
CN110751123B (zh) | 一种单目视觉惯性里程计系统及方法 | |
CN115147344A (zh) | 一种增强现实辅助汽车维修中的零件三维检测与跟踪方法 | |
CN113920191B (zh) | 一种基于深度相机的6d数据集构建方法 | |
CN111199576A (zh) | 一种基于移动平台的室外大范围人体姿态重建方法 | |
Muffert et al. | The estimation of spatial positions by using an omnidirectional camera system | |
CN117115271A (zh) | 无人机飞行过程中的双目相机外参数自标定方法及系统 | |
Yang et al. | Visual SLAM using multiple RGB-D cameras | |
CN112945233A (zh) | 一种全局无漂移的自主机器人同时定位与地图构建方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20943235 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20943235 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.07.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20943235 Country of ref document: EP Kind code of ref document: A1 |