WO2022000713A1 - Augmented reality self-positioning method based on aviation assembly - Google Patents

Augmented reality self-positioning method based on aviation assembly

Info

Publication number
WO2022000713A1
WO2022000713A1 · PCT/CN2020/108443 · CN2020108443W
Authority
WO
WIPO (PCT)
Prior art keywords
assembly
scene
positioning
pose
self
Prior art date
Application number
PCT/CN2020/108443
Other languages
French (fr)
Chinese (zh)
Inventor
叶波
唐健钧
丁晓
常壮
金莹莹
Original Assignee
南京翱翔信息物理融合创新研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京翱翔信息物理融合创新研究院有限公司
Publication of WO2022000713A1 publication Critical patent/WO2022000713A1/en

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 - Geographic models
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 - Indexing scheme for image data processing or generation, in general involving 3D image data

Definitions

  • The invention relates to the technical field of self-positioning, and in particular to an augmented reality self-positioning method based on aviation assembly.
  • Virtual-reality-guided assembly has been widely used in the field of complex product assembly, but virtual reality equipment can present only virtual information; because it carries no information from the real environment, the sense of immersion is weak. Augmented reality equipment is therefore used to guide the assembly of aviation products, avoiding the drawback that virtual reality equipment can only provide a single virtual scene.
  • The core of augmented reality is multi-sensor-fusion self-positioning technology, which is widely used in autonomous driving, sweeping robots, logistics robots and augmented reality. Through sensors carried on the platform, such as a camera and an inertial measurement unit, the position and attitude of the platform relative to the environment can be obtained in real time.
  • An inertial measurement unit provides good short-term position and attitude estimates, while a camera alone suffers from motion blur during rapid movement; combining the two greatly improves the positioning accuracy of multi-sensor fusion. However, limited by the principle of visual positioning, in blank regions with few feature points the device cannot be positioned and positioning accuracy is poor.
  • The purpose of the present invention is to address the above shortcomings of the prior art by providing an augmented reality self-positioning method based on aviation assembly that alleviates the long delivery cycle, complex operation and weak sense of immersion of aviation product assembly.
  • An augmented reality self-positioning method based on aviation assembly comprises designing a system framework, building an assembly scene, constructing a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system, and a timed positioning process, and specifically includes the following steps:
  • Step 1: The system framework adopts a client-server parallel development mode for receiving and sending data.
  • The client is connected to the server wirelessly and transmits the assembly scene and assembly process information to the server.
  • The server is connected to the client wirelessly and transmits the parsed poses of the assembly scene feature points and tag information to the client;
  • Step 2: After the system framework of Step 1 is designed, the assembly scene is built.
  • The assembly scene includes a parts area, an area to be assembled, and a tag area.
  • The tag area contains a plurality of tags used to associate the position and attitude relationships among the tags, which are transmitted to the server;
  • Step 3: After the assembly scene of Step 2 is built, a high-precision three-dimensional map of the assembly scene is constructed.
  • To construct the high-precision three-dimensional map, the distance information provided by the depth camera and the inertial measurement unit is first used to obtain a dense three-dimensional map of the assembly scene; Apriltag tags are then used to fill the dense three-dimensional map with information and establish a discrete map; finally the dense three-dimensional map and the discrete map are fused to form the high-precision three-dimensional map of the assembly scene, which is transmitted to the server;
  • Step 4: The high-precision three-dimensional map constructed in Step 3 is passed to the building of the self-positioning scene information. The high-precision three-dimensional map is first analysed,
  • and Apriltag tags are attached in regions with few feature points to form the tag set of the assembly scene; the relative pose relationships among the tags are then measured, and the spatial position relationships of the assembly parts are established according to the assembly process and the assembly manual and transmitted to the server;
  • Step 5: The spatial position relationships of the self-positioning scene information built in Step 4 are passed to the design of the self-positioning vision system, which includes creating a virtual model, computing the device pose in real time, and fusing the virtual and real scenes.
  • The virtual model creation is connected to the real-time device pose computation: a three-dimensional scene is built on the AR development platform, the three-dimensional coordinates of the virtual model are set according to the spatial position relationships of the assembly parts, and the augmented reality device is then placed in the scene.
  • The real-time device pose computation is connected to the virtual-real scene fusion, which loads virtual objects onto the client to realise a fused display of the virtual objects and the assembly scene;
  • Step 6: After the self-positioning vision system of Step 5 is designed, the timed positioning process is carried out.
  • The timed positioning process first completes the initialization of the self-positioning vision system in the area to be assembled, then loads the high-precision three-dimensional map and starts two threads, and compares the poses produced by the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the tag pose is used to correct the fused pose and the self-positioning vision system outputs the corrected pose.
  • In Step 1, the client includes AR glasses, an inertial measurement unit and an industrial computer; the inertial measurement unit includes a sensor, and the industrial computer is connected to the sensor and controls it so that the computed data is transmitted to the server through a serial port.
  • In Step 3, the depth camera is used to capture video of one full circuit around the assembly scene; feature extraction and optical-flow tracking are performed on the captured video images, the extracted features are filtered, and feature frames are then extracted for feature-point retention.
  • In Step 3, the information filling includes the key frames of the Apriltag tags and the tag corner information corresponding to those key frames.
  • In Step 6, loading the high-precision three-dimensional map is split into two threads. One thread detects Apriltag tag information in real time, estimates the pose of the depth camera relative to the tag from the Apriltag tag, and then converts it, via the spatial relationship between the tag and the self-positioning scene,
  • into a pose relative to the world coordinate system; the other thread performs fused positioning by combining the feature points in the assembly scene with the inertial measurement unit, obtaining the pose of the depth camera relative to the world coordinate system in real time.
  • The specific steps of Step 5 are: (1) compute the pose of the Apriltag tag; (2) compute the IMU pose; (3) compute the VSLAM pose; (4) transmit the computed poses to the server, fuse them with the three-dimensional coordinates of the virtual model, and transmit the result to the client for fused display.
  • In Step 5, the device pose includes the Apriltag tag pose, the IMU pose and the VSLAM pose.
  • The beneficial effects of the invention are as follows: the operator wears the augmented reality device and the server interprets the assembly instructions, which are presented to the operator as virtual information, guiding the operator to the parts area to find parts, guiding the operator to the area to be assembled, and instructing the operator on assembly precautions. This effectively improves the operator's understanding of the task, lowers the operating threshold, ensures that the assembly task is completed efficiently and reliably, and also allows accurate positioning in blank regions with few feature points.
  • FIG. 1 is a framework diagram of the augmented reality system of the present invention;
  • FIG. 2 is a framework diagram of the assembly scene of the present invention;
  • FIG. 3 is a flow chart of the acquisition of the high-precision three-dimensional map of the assembly scene of the present invention;
  • FIG. 4 is a flow chart of the assembly process and real-time device positioning of the present invention.
  • The present invention comprises designing a system framework, building an assembly scene, constructing a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system, and a timed positioning process, and specifically includes the following steps:
  • Step 1: The system framework adopts a client-server parallel development mode for receiving and sending data.
  • The client is connected to the server wirelessly and transmits the assembly scene and assembly process information to the server.
  • The server is connected to the client wirelessly and transmits the parsed poses of the assembly scene feature points and tag information to the client;
  • Step 2: After the system framework of Step 1 is designed, the assembly scene is built.
  • The assembly scene includes a parts area, an area to be assembled, and a tag area.
  • The tag area contains a plurality of tags used to associate the position and attitude relationships among the tags, which are transmitted to the server. The position and attitude relationships among the tags are obtained by selecting any one of the tags as the starting tag, setting its position as the origin (0, 0, 0) and its initial rotation attitude as (0, 0, 0); the remaining tags are displaced and rotated relative to the starting tag, and this displacement and rotation serves as their initial position and rotation attitude.
  • Step 3: After the assembly scene of Step 2 is built, a high-precision three-dimensional map of the assembly scene is constructed.
  • To construct the high-precision three-dimensional map, the distance information provided by the depth camera and the inertial measurement unit is first used to obtain a dense three-dimensional map of the assembly scene; Apriltag tags are then used to fill the dense three-dimensional map with information and establish a discrete map; finally the dense three-dimensional map and the discrete map are fused to form the high-precision three-dimensional map of the assembly scene, which is transmitted to the server;
  • Step 4: The high-precision three-dimensional map constructed in Step 3 is passed to the building of the self-positioning scene information. The high-precision three-dimensional map is first analysed,
  • and Apriltag tags are attached in regions with few feature points to form the tag set of the assembly scene; the relative pose relationships among the tags are then measured. The relationship between the tags and the assembly scene can be read directly from the three-dimensional map, and the relative pose of the assembly scene can then be computed by the augmented reality device. The spatial position relationships of the assembly parts are then established according to the assembly process and the assembly manual and transmitted to the server;
  • Step 5: The spatial position relationships of the self-positioning scene information built in Step 4 are passed to the design of the self-positioning vision system, which includes creating a virtual model, computing the device pose in real time, and fusing the virtual and real scenes.
  • The virtual model creation is connected to the real-time device pose computation: a three-dimensional scene is built on the AR development platform, the three-dimensional coordinates of the virtual model are set according to the spatial position relationships of the assembly parts, and the augmented reality device is then placed in the scene.
  • The real-time device pose computation is fused with the virtual and real scenes and loads virtual objects into the AR glasses on the client to realise a fused display of the virtual objects and the assembly scene,
  • where the device pose includes the Apriltag tag pose, the IMU pose and the VSLAM pose.
  • Step 6: After the self-positioning vision system of Step 5 is designed, the timed positioning process is carried out.
  • The timed positioning process first completes the initialization of the self-positioning vision system in the area to be assembled, then loads the high-precision three-dimensional map and starts two threads, and compares the poses produced by the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the tag pose is used to correct the fused pose and the self-positioning vision system outputs the corrected pose, where the tag pose is obtained by the depth camera on the augmented reality device detecting the tag and then computing the position and attitude of the tag relative to the augmented reality device.
  • In Step 1, the client includes AR glasses, an inertial measurement unit and an industrial computer; the inertial measurement unit includes a sensor, and the industrial computer is connected to the sensor and controls it so that the computed data is transmitted to the server through a serial port.
  • In Step 3, the depth camera is used to capture video of one full circuit around the assembly scene; feature extraction and optical-flow tracking are performed on the captured video images, the extracted features are filtered, and feature frames are then extracted for feature-point retention.
  • In Step 3, the information filling includes the key frames of the Apriltag tags and the tag corner information corresponding to those key frames.
  • In Step 6, loading the high-precision three-dimensional map is split into two threads. One thread detects Apriltag tag information in real time, estimates the pose of the depth camera relative to the tag from the Apriltag tag, and then converts it, via the spatial relationship between the tag and the self-positioning scene,
  • into a pose relative to the world coordinate system; the other thread performs fused positioning by combining the feature points in the assembly scene with the inertial measurement unit, obtaining the pose of the depth camera relative to the world coordinate system in real time.
  • step five The specific steps of step five are:
  • the coordinate system of the tag code is , the depth camera coordinate system is , any point on the label code
  • the coordinates in the depth camera coordinate system are , the corresponding relationship between the two is:
  • R is the rotation matrix representing the rotation of the depth camera coordinate system relative to the label code coordinate system
  • T is the translation vector representing the translation of the depth camera coordinate system relative to the label code
  • the image coordinate system is , the pixel coordinate system is , the point on the label code Imaging points in the image plane with the depth camera The corresponding relationship between them is:
  • formula (2) is the center of the image plane, are the normalized focal lengths of the x and y axes, is the depth camera internal parameter matrix, which can be used to convert the depth camera coordinate system to the image plane coordinate system; set , use the least squares method to solve formulas (1) and (2) to obtain the internal parameter matrix of the depth camera ; When the depth camera detects the tag, use the Apriltag algorithm to get R and T.
  • Step 1, design the system framework: a client-server parallel development mode is used for receiving and sending data. The AR glasses, inertial measurement unit and industrial computer in the client are first connected wirelessly to the server, and the assembly scene and assembly process information are transmitted to the server; the server is then connected wirelessly to the client, and the parsed poses of the assembly scene feature points and tag information are transmitted to the client;
  • Step 2, build the assembly scene: after the system framework of Step 1 is designed, the aviation parts to be assembled that were placed in the parts area are assembled in the area to be assembled, and tag 1 is then selected as the starting tag from the eight arranged tag areas;
  • the position of tag 1 is set as the origin (0, 0, 0)
  • and its initial rotation attitude as (0, 0, 0);
  • tag 2 is displaced and rotated relative to tag 1, and this displacement and rotation is taken as its position and rotation attitude;
  • the remaining tags follow by analogy, with the rotation attitude of every tag set to (0, 0, 0), i.e. the spatial position of each tag is adjusted so that their orientations are consistent, as shown in Table 1:
  • Step 3, construct the high-precision three-dimensional map of the assembly scene: after the assembly scene of Step 2 is built, the depth camera is initialized at tag 1 and video is then captured around one full circuit of the assembly scene. Feature extraction and optical-flow tracking are performed on the captured video images, the extracted features are filtered, feature frames are extracted for feature-point retention, and the result is combined with the distance information provided by the inertial measurement unit to obtain a dense three-dimensional map of the assembly scene. Meanwhile, the dense three-dimensional map of the assembly scene is filled with the key frames of the Apriltag tags and the tag corner information corresponding to those key frames; a discrete map is thereby established and fused with the dense three-dimensional map to form the high-precision three-dimensional map of the assembly scene;
  • Step 4, build the self-positioning scene information: the high-precision three-dimensional map constructed in Step 3 is passed to the building of the self-positioning scene information and analysed, and artificial Apriltag tags are attached in regions with few feature points to form the tag set of the assembly scene; the relative pose relationships among the tags are then measured, and the spatial position relationships of the assembly parts are established according to the assembly process and the assembly manual;
  • Step 5, design the self-positioning vision system: the spatial position relationships of the self-positioning scene information built in Step 4 are passed to the design of the self-positioning vision system, which includes creating a virtual model, computing the device pose in real time, and fusing the virtual and real
  • scenes. The virtual model creation is connected to the real-time device pose computation: a three-dimensional scene is built on the AR development platform, the three-dimensional coordinates of the virtual model are set according to the spatial position relationships of the assembly parts, the augmented reality device is then placed in the scene, and the pose of the depth camera in the device is computed in real time. The real-time device pose computation is then fused with the virtual and real scenes so that virtual objects are loaded onto the AR glasses, realising a fused display of the virtual objects and the assembly scene;
  • Step 6, timed positioning process: after the self-positioning vision system of Step 5 is designed, the timed positioning process is carried out.
  • The timed positioning process first completes the initialization of the self-positioning vision system in the area to be assembled, then loads the high-precision three-dimensional map and starts two threads, and compares the poses produced by the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the tag pose is used to correct the fused pose, and finally the self-positioning vision system outputs the corrected pose, completing the self-positioning of the aviation assembly.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Disclosed is an augmented reality self-positioning method based on aviation assembly. The method comprises designing a system framework, building an assembly scene, constructing a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system, and performing a timed positioning process. The present invention effectively improves the operator's understanding of the task, lowers the operator's operating threshold, ensures that the assembly task is completed efficiently and reliably, and also achieves precise positioning in blank regions with few feature points.

Description

An Augmented Reality Self-Positioning Method Based on Aviation Assembly

Technical Field

The present invention relates to the technical field of self-positioning, and in particular to an augmented reality self-positioning method based on aviation assembly.

Background Art

At present, the assembly of complex aviation products involves a large number of parts, numerous coordination relationships and high operational complexity. Operators must frequently switch their attention between a virtual two-dimensional environment (a computer screen) and the real assembly environment, so assembly instructions cannot be merged with the real environment. As a result, traditional assembly methods are inefficient and error-prone, which prolongs the equipment delivery cycle.

Virtual-reality-guided assembly has been widely used in the field of complex product assembly, but virtual reality equipment can present only virtual information; because it carries no information from the real environment, the sense of immersion is weak. Augmented reality equipment is therefore used to guide the assembly of aviation products, avoiding the drawback that virtual reality equipment can only provide a single virtual scene. The core of augmented reality is multi-sensor-fusion self-positioning technology, which is widely used in autonomous driving, sweeping robots, logistics robots and augmented reality. Through sensors carried on the platform, such as a camera and an inertial measurement unit, the position and attitude of the platform relative to the environment can be obtained in real time. An inertial measurement unit provides good short-term position and attitude estimates, while a camera alone suffers from motion blur during rapid movement, so combining the two greatly improves the positioning accuracy of multi-sensor fusion. However, limited by the principle of visual positioning, in blank regions with few feature points the device cannot be positioned and positioning accuracy is poor.

Technical Problem

The purpose of the present invention is to address the above shortcomings of the prior art by providing an augmented reality self-positioning method based on aviation assembly that alleviates the long delivery cycle, complex operation and weak sense of immersion of aviation product assembly.
Technical Solution

The technical scheme adopted by the present invention is as follows:

An augmented reality self-positioning method based on aviation assembly comprises designing a system framework, building an assembly scene, constructing a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system, and a timed positioning process, and specifically comprises the following steps:

Step 1: The system framework adopts a client-server parallel development mode for receiving and sending data. The client is connected to the server wirelessly and transmits the assembly scene and assembly process information to the server; the server is connected to the client wirelessly and transmits the parsed poses of the assembly scene feature points and tag information to the client.

Step 2: After the system framework of Step 1 is designed, the assembly scene is built. The assembly scene includes a parts area, an area to be assembled, and a tag area. The parts area holds the original parts, the area to be assembled is where the parts are assembled, and the tag area contains a plurality of tags used to associate the position and attitude relationships among the tags, which are transmitted to the server.

Step 3: After the assembly scene of Step 2 is built, a high-precision three-dimensional map of the assembly scene is constructed. The distance information provided by the depth camera and the inertial measurement unit is first used to obtain a dense three-dimensional map of the assembly scene; Apriltag tags are then used to fill the dense three-dimensional map with information and establish a discrete map; finally the dense three-dimensional map and the discrete map are fused to form the high-precision three-dimensional map of the assembly scene, which is transmitted to the server.

Step 4: The high-precision three-dimensional map constructed in Step 3 is passed to the building of the self-positioning scene information. The high-precision three-dimensional map is first analysed, and Apriltag tags are attached in regions with few feature points to form the tag set of the assembly scene; the relative pose relationships among the tags are then measured, and the spatial position relationships of the assembly parts are established according to the assembly process and the assembly manual and transmitted to the server.

Step 5: The spatial position relationships of the self-positioning scene information built in Step 4 are passed to the design of the self-positioning vision system, which includes creating a virtual model, computing the device pose in real time, and fusing the virtual and real scenes. The virtual model creation is connected to the real-time device pose computation: a three-dimensional scene is built on the AR development platform, the three-dimensional coordinates of the virtual model are set according to the spatial position relationships of the assembly parts, the augmented reality device is then placed in the scene, and the pose of the depth camera in the device is computed in real time. The real-time device pose computation is connected to the virtual-real scene fusion, which loads virtual objects onto the client to realise a fused display of the virtual objects and the assembly scene.

Step 6: After the self-positioning vision system of Step 5 is designed, the timed positioning process is carried out. The timed positioning process first completes the initialization of the self-positioning vision system in the area to be assembled, then loads the high-precision three-dimensional map and starts two threads, and compares the poses produced by the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the tag pose is used to correct the fused pose and the self-positioning vision system outputs the corrected pose.
In Step 1, the client includes AR glasses, an inertial measurement unit and an industrial computer; the inertial measurement unit includes a sensor, and the industrial computer is connected to the sensor and controls it so that the computed data is transmitted to the server through a serial port.

In Step 3, the depth camera is used to capture video of one full circuit around the assembly scene; feature extraction and optical-flow tracking are performed on the captured video images, the extracted features are filtered, and feature frames are then extracted for feature-point retention.

In Step 3, the information filling includes the key frames of the Apriltag tags and the tag corner information corresponding to those key frames.

In Step 6, loading the high-precision three-dimensional map is split into two threads. One thread detects Apriltag tag information in real time, estimates the pose of the depth camera relative to the tag from the Apriltag tag, and then converts it, via the spatial relationship between the tag and the self-positioning scene, into a pose relative to the world coordinate system; the other thread performs fused positioning by combining the feature points in the assembly scene with the inertial measurement unit, obtaining the pose of the depth camera relative to the world coordinate system in real time.

The specific steps of Step 5 are: (1) compute the pose of the Apriltag tag; (2) compute the IMU pose; (3) compute the VSLAM pose; (4) transmit the computed poses to the server, fuse them with the three-dimensional coordinates of the virtual model, and transmit the result to the client for fused display.

In Step 5, the device pose includes the Apriltag tag pose, the IMU pose and the VSLAM pose.
Beneficial Effects
The beneficial effects of the invention are as follows: the operator wears the augmented reality device and the server interprets the assembly instructions, which are presented to the operator as virtual information, guiding the operator to the parts area to find parts, guiding the operator to the area to be assembled, and instructing the operator on assembly precautions. This effectively improves the operator's understanding of the task, lowers the operating threshold, ensures that the assembly task is completed efficiently and reliably, and also allows accurate positioning in blank regions with few feature points.
Description of Drawings

FIG. 1 is a framework diagram of the augmented reality system of the present invention;

FIG. 2 is a framework diagram of the assembly scene of the present invention;

FIG. 3 is a flow chart of the acquisition of the high-precision three-dimensional map of the assembly scene of the present invention;

FIG. 4 is a flow chart of the assembly process and real-time device positioning of the present invention.

Embodiments of the Present Invention

The present invention is further described below with reference to the accompanying drawings.

As shown in FIGS. 1-4, the present invention comprises designing a system framework, building an assembly scene, constructing a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system, and a timed positioning process, and specifically comprises the following steps:
Step 1: The system framework adopts a client-server parallel development mode for receiving and sending data. The client is connected to the server wirelessly and transmits the assembly scene and assembly process information to the server; the server is connected to the client wirelessly and transmits the parsed poses of the assembly scene feature points and tag information to the client.
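The patent does not specify a transport protocol or message format for this client-server exchange; as an illustration only, the minimal Python sketch below assumes JSON over TCP on localhost, with invented field names such as "scene", "process", "feature_poses" and "tag_poses".

```python
# Minimal sketch of the client-server exchange described in Step 1.
# Assumptions (not specified in the patent): JSON over TCP and the
# field names "scene", "process", "feature_poses", "tag_poses".
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9000

def server() -> None:
    """Receives assembly scene / process info, returns parsed poses."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            request = json.loads(conn.recv(65536).decode())
            # Placeholder "parsing": a real server would localise feature
            # points and tags in the high-precision 3D map here.
            reply = {
                "feature_poses": [[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]],
                "tag_poses": {tag_id: [0.0] * 6 for tag_id in request["scene"]["tags"]},
            }
            conn.sendall(json.dumps(reply).encode())

def client() -> None:
    """Sends the assembly scene and process info, receives poses."""
    with socket.create_connection((HOST, PORT)) as conn:
        conn.sendall(json.dumps({
            "scene": {"tags": [1, 2, 3]},
            "process": "install bracket A onto frame B",
        }).encode())
        print(json.loads(conn.recv(65536).decode()))

if __name__ == "__main__":
    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)  # give the server thread time to start listening
    client()
```

In the real system the server-side reply would come from the positioning pipeline of Steps 3-6 rather than from placeholder values.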
Step 2: After the system framework of Step 1 is designed, the assembly scene is built. The assembly scene includes a parts area, an area to be assembled, and a tag area. The parts area holds the original parts, the area to be assembled is where the parts are assembled, and the tag area contains a plurality of tags used to associate the position and attitude relationships among the tags, which are transmitted to the server. The position and attitude relationships among the tags are obtained by selecting any one of the tags as the starting tag, setting its position as the origin (0, 0, 0) and its initial rotation attitude as (0, 0, 0); the remaining tags are displaced and rotated relative to the starting tag, and this displacement and rotation serves as their initial position and rotation attitude.

Step 3: After the assembly scene of Step 2 is built, a high-precision three-dimensional map of the assembly scene is constructed. The distance information provided by the depth camera and the inertial measurement unit is first used to obtain a dense three-dimensional map of the assembly scene; Apriltag tags are then used to fill the dense three-dimensional map with information and establish a discrete map; finally the dense three-dimensional map and the discrete map are fused to form the high-precision three-dimensional map of the assembly scene, which is transmitted to the server.

Step 4: The high-precision three-dimensional map constructed in Step 3 is passed to the building of the self-positioning scene information. The high-precision three-dimensional map is first analysed, and Apriltag tags are attached in regions with few feature points to form the tag set of the assembly scene; the relative pose relationships among the tags are then measured. The relationship between the tags and the assembly scene can be read directly from the three-dimensional map, and the relative pose of the assembly scene can then be computed by the augmented reality device. The spatial position relationships of the assembly parts are then established according to the assembly process and the assembly manual and transmitted to the server.

Step 5: The spatial position relationships of the self-positioning scene information built in Step 4 are passed to the design of the self-positioning vision system, which includes creating a virtual model, computing the device pose in real time, and fusing the virtual and real scenes. The virtual model creation is connected to the real-time device pose computation: a three-dimensional scene is built on the AR development platform, the three-dimensional coordinates of the virtual model are set according to the spatial position relationships of the assembly parts, the augmented reality device is then placed in the scene, and the pose of the depth camera in the device is computed in real time. The real-time device pose computation is fused with the virtual and real scenes and loads virtual objects into the AR glasses on the client to realise a fused display of the virtual objects and the assembly scene, where the device pose includes the Apriltag tag pose, the IMU pose and the VSLAM pose.

Step 6: After the self-positioning vision system of Step 5 is designed, the timed positioning process is carried out. The timed positioning process first completes the initialization of the self-positioning vision system in the area to be assembled, then loads the high-precision three-dimensional map and starts two threads, and compares the poses produced by the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the tag pose is used to correct the fused pose and the self-positioning vision system outputs the corrected pose, where the tag pose is obtained by the depth camera on the augmented reality device detecting the tag and then computing the position and attitude of the tag relative to the augmented reality device.

In Step 1, the client includes AR glasses, an inertial measurement unit and an industrial computer; the inertial measurement unit includes a sensor, and the industrial computer is connected to the sensor and controls it so that the computed data is transmitted to the server through a serial port.

In Step 3, the depth camera is used to capture video of one full circuit around the assembly scene; feature extraction and optical-flow tracking are performed on the captured video images, the extracted features are filtered, and feature frames are then extracted for feature-point retention.

In Step 3, the information filling includes the key frames of the Apriltag tags and the tag corner information corresponding to those key frames.

In Step 6, loading the high-precision three-dimensional map is split into two threads. One thread detects Apriltag tag information in real time, estimates the pose of the depth camera relative to the tag from the Apriltag tag, and then converts it, via the spatial relationship between the tag and the self-positioning scene, into a pose relative to the world coordinate system; the other thread performs fused positioning by combining the feature points in the assembly scene with the inertial measurement unit, obtaining the pose of the depth camera relative to the world coordinate system in real time.
The specific steps of Step 5 are:

(1) Compute the pose of the Apriltag tag. Let the tag-code coordinate system be the tag frame and the depth camera coordinate system be the camera frame. For any point P on the tag code, with coordinates P_T in the tag frame and coordinates P_C in the depth camera frame, the correspondence between the two is:

P_C = R · P_T + T    (1)

In formula (1), R is the rotation matrix representing the rotation of the depth camera coordinate system relative to the tag-code coordinate system, and T is the translation vector representing the translation of the depth camera coordinate system relative to the tag code; [R | T] is the extrinsic parameter matrix of the depth camera, with which the tag-code coordinate system can be transformed into the depth camera coordinate system.

Let the image coordinate system be o-xy and the pixel coordinate system be u-v. The correspondence between a point P on the tag code and its imaging point p = (u, v) in the image plane of the depth camera is:

s · [u, v, 1]^T = K · [X_C, Y_C, Z_C]^T,  K = [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]]    (2)

where [X_C, Y_C, Z_C]^T = P_C are the point's coordinates in the depth camera frame and s is its depth. In formula (2), (c_x, c_y) is the centre of the image plane, f_x and f_y are the normalized focal lengths along the x and y axes, and K is the intrinsic parameter matrix of the depth camera, with which the depth camera coordinate system can be transformed into the image-plane coordinate system. By setting up these correspondences and solving formulas (1) and (2) with the least-squares method, the intrinsic parameter matrix K of the depth camera is obtained; when the depth camera detects a tag, the Apriltag algorithm is used to obtain R and T.
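As an illustration of sub-step (1), the sketch below recovers the R and T of formula (1) from the four detected image corners of one tag, using OpenCV's solvePnP with the IPPE_SQUARE solver; the tag side length, the intrinsic values and the example corner pixels are assumptions, and the corner detection itself (the Apriltag algorithm mentioned above) is taken as given.

```python
# Hedged sketch of sub-step (1): recover R, T such that P_C = R * P_T + T
# (formula (1)) from the four image corners of a detected AprilTag.
# Assumed values: tag side length 0.10 m, illustrative intrinsics, and a
# corner ordering of (top-left, top-right, bottom-right, bottom-left).
import cv2
import numpy as np

TAG_SIDE = 0.10  # metres (assumption)
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])          # intrinsic matrix of formula (2)
DIST = np.zeros(5)                        # assume negligible lens distortion

# 3D corners of the tag in the tag coordinate system (Z = 0 plane),
# in the order required by SOLVEPNP_IPPE_SQUARE.
half = TAG_SIDE / 2.0
tag_corners_3d = np.array([[-half,  half, 0.0],
                           [ half,  half, 0.0],
                           [ half, -half, 0.0],
                           [-half, -half, 0.0]])

def tag_pose(corners_2d: np.ndarray):
    """Return (R, T) of formula (1) from the 4 pixel corners of the tag."""
    ok, rvec, tvec = cv2.solvePnP(tag_corners_3d, corners_2d, K, DIST,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)            # rotation matrix R of formula (1)
    return R, tvec                        # translation vector T of formula (1)

# Example with made-up corner pixels of a detected tag.
corners = np.array([[300.0, 200.0], [340.0, 202.0],
                    [338.0, 242.0], [298.0, 240.0]])
R, T = tag_pose(corners)
print(R, T.ravel())
```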
(2) Compute the IMU pose. The IMU collects data while the device moves. Over any time interval from t_k to t_{k+1}, integrating the angular rate ω measured by the inertial measurement unit according to formula (3) gives the angular increment of the device about its three axes, denoted Δθ; over the same interval, double-integrating the acceleration a measured by the inertial measurement unit according to formula (4) gives the displacement Δp of the device during that period.

Δθ = ∫_{t_k}^{t_{k+1}} ω(t) dt    (3)

Δp = ∫_{t_k}^{t_{k+1}} ( ∫_{t_k}^{t} a(τ) dτ ) dt    (4)
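A minimal discrete version of formulas (3) and (4), assuming gravity-compensated, bias-corrected samples at a fixed rate and zero initial velocity over the interval, might look as follows; it is illustrative only and not the patent's implementation.

```python
# Hedged sketch of sub-step (2): discrete versions of formulas (3) and (4).
# Assumes the accelerometer samples are already gravity-compensated and
# bias-corrected, and that samples arrive at a fixed interval dt.
import numpy as np

def imu_increment(gyro: np.ndarray, accel: np.ndarray, dt: float):
    """gyro, accel: (N, 3) samples over [t_k, t_{k+1}].

    Returns (delta_theta, delta_p): the angle increment about the three
    axes (formula (3)) and the displacement over the interval
    (formula (4)), taking the velocity at t_k as zero for simplicity.
    """
    delta_theta = np.sum(gyro * dt, axis=0)           # integral of omega dt
    velocity = np.cumsum(accel * dt, axis=0)           # integral of a dt
    delta_p = np.sum(velocity * dt, axis=0)            # double integral
    return delta_theta, delta_p

# Example: 100 samples at 200 Hz of a constant rotation and acceleration.
dt = 1.0 / 200.0
gyro = np.tile([0.0, 0.0, 0.1], (100, 1))    # rad/s about z
accel = np.tile([0.2, 0.0, 0.0], (100, 1))   # m/s^2 along x
print(imu_increment(gyro, accel, dt))
```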
(3) Compute the VSLAM pose. The depth camera is used to acquire a three-dimensional map of the scene. Let the n three-dimensional points in space be denoted P_i (i = 1, ..., n) and their projected pixel coordinates be u_i; the two satisfy the following relationship:

s_i · u_i = K · exp(ξ^) · P_i    (5)

where ξ is the Lie-algebra representation of the depth camera pose and s_i is the depth of point P_i. Bundle adjustment is then used to minimize the reprojection error: the individual errors are summed to build a least-squares problem, and the most accurate depth camera pose is found by minimizing formula (6), from which R and T are obtained.

ξ* = arg min_ξ (1/2) Σ_{i=1}^{n} || u_i − (1/s_i) · K · exp(ξ^) · P_i ||²    (6)
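Sub-step (3) can be illustrated with a simplified, motion-only bundle adjustment: the 3D map points are held fixed and only the 6-DoF camera pose is optimized against the reprojection residuals of formula (6). The data below is synthetic, and SciPy's generic least-squares solver stands in for whatever optimizer the real system uses.

```python
# Hedged sketch of sub-step (3): motion-only bundle adjustment in the
# spirit of formulas (5) and (6): minimise the reprojection error of
# fixed 3D map points over a 6-DoF camera pose (axis-angle + translation).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed intrinsics

def project(pose: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Project world points with pose = (rx, ry, rz, tx, ty, tz)."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = points @ R.T + pose[3:]                  # P_C = R * P + T
    uv = cam @ K.T                                  # formula (5), unnormalised
    return uv[:, :2] / uv[:, 2:3]                   # divide by the depth s_i

def residuals(pose, points, observed_uv):
    return (project(pose, points) - observed_uv).ravel()   # formula (6) terms

# Synthetic map points and observations from a known ground-truth pose.
rng = np.random.default_rng(0)
points = rng.uniform([-1, -1, 4], [1, 1, 6], size=(30, 3))
true_pose = np.array([0.05, -0.02, 0.1, 0.2, -0.1, 0.3])
observed = project(true_pose, points) + rng.normal(0, 0.5, (30, 2))

est = least_squares(residuals, x0=np.zeros(6), args=(points, observed))
print("estimated pose:", est.x)
```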
(4) Transmit the computed poses to the server, fuse them with the three-dimensional coordinates of the virtual model, and transmit the result to the client for fused display. After the Apriltag tag pose, the IMU pose and the VSLAM pose are obtained, the IMU biases are added to the state variables in the conventional optimization manner, and the target state equation built from the depth camera pose, the velocity and the IMU biases is estimated in a tightly coupled way. As shown in formula (7), the fifteen-dimensional state variable of the system is expressed as:

x = [ R_C, p_C, v_C, b_a, b_g ]    (7)

In formula (7), R_C, p_C and v_C are respectively the rotation, translation and velocity of the depth camera, and b_a and b_g are respectively the biases of the IMU accelerometer and gyroscope. This system adopts a strategy of local Apriltag-tag-assisted positioning, and the tightly coupled system state variable is expressed as:

X = [ x_A, x_V, Δx ]    (8)

In formula (8), x_A denotes the positioning pose of the Apriltag tag, x_V denotes the positioning pose of VSLAM, and Δx denotes the pose difference between the Apriltag tag and VSLAM. There are therefore two cases for the system variables: when Δx is within the set threshold, i.e. the accumulated error of visual positioning has not exceeded the threshold, visual-inertial fused positioning continues; when Δx exceeds the threshold, i.e. the accumulated error of visual positioning is large, local Apriltag tag positioning is used and fused with the inertial positioning.
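The switching behaviour described around formula (8) can be sketched as a simple threshold test between the two pose estimates; the pose representation and threshold values below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of the switching logic around formula (8): compare the
# fused VSLAM/IMU pose with the Apriltag-derived pose and fall back to the
# tag pose when the accumulated visual error grows too large.  The pose
# representation (x, y, z, roll, pitch, yaw) and the thresholds are
# illustrative assumptions.
import numpy as np

POS_THRESHOLD = 0.05   # metres (assumption)
ANG_THRESHOLD = 0.05   # radians (assumption)

def select_pose(x_vslam: np.ndarray, x_tag: np.ndarray) -> np.ndarray:
    """Return the output pose given the two threads' estimates."""
    delta = x_tag - x_vslam                       # the pose difference of formula (8)
    if (np.linalg.norm(delta[:3]) < POS_THRESHOLD
            and np.linalg.norm(delta[3:]) < ANG_THRESHOLD):
        return x_vslam                            # error acceptable: keep fusion
    return x_tag                                  # error too large: tag correction

print(select_pose(np.array([1.0, 2.0, 0.0, 0.0, 0.0, 0.1]),
                  np.array([1.02, 2.01, 0.0, 0.0, 0.0, 0.1])))
```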
Example:

Step 1, design the system framework: a client-server parallel development mode is used for receiving and sending data. The AR glasses, inertial measurement unit and industrial computer in the client are first connected wirelessly to the server, and the assembly scene and assembly process information are transmitted to the server; the server is then connected wirelessly to the client, and the parsed poses of the assembly scene feature points and tag information are transmitted to the client.

Step 2, build the assembly scene: after the system framework of Step 1 is designed, the aviation parts to be assembled that were placed in the parts area are assembled in the area to be assembled. Tag 1 is then selected as the starting tag from the eight arranged tag areas; the position of tag 1 is set as the origin (0, 0, 0) and its initial rotation attitude as (0, 0, 0). At the same time, tag 2 is displaced and rotated relative to tag 1, and this displacement and rotation is taken as its position (Position) and rotation attitude (Rotation); the remaining tags follow by analogy. The rotation attitude of every tag is set to (0, 0, 0), i.e. the spatial position of each tag is adjusted so that their orientations are consistent, as shown in Table 1:
Table 1  Spatial position relationships of the assembly scene tags

Tag No.    Position       Rotation attitude
1          (0, 0, 0)      (0, 0, 0)
2          (0, 8, 0)      (0, 0, 0)
3          (4, 8, 0)      (0, 0, 0)
4          (8, 8, 0)      (0, 0, 0)
5          (10, 6, 0)     (0, 0, 0)
6          (10, 0, 0)     (0, 0, 0)
7          (8, -2, 0)     (0, 0, 0)
8          (4, -2, 0)     (0, 0, 0)
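The tag layout of Table 1 can equivalently be written as a lookup table from tag number to its pose relative to tag 1; a small sketch follows, with units left unspecified as in the table.

```python
# Sketch of the Table 1 tag layout: pose of each tag relative to tag 1
# (the origin).  All rotation attitudes are (0, 0, 0), so every tag
# shares the same orientation.
TAG_LAYOUT = {
    1: {"position": (0, 0, 0),   "rotation": (0, 0, 0)},
    2: {"position": (0, 8, 0),   "rotation": (0, 0, 0)},
    3: {"position": (4, 8, 0),   "rotation": (0, 0, 0)},
    4: {"position": (8, 8, 0),   "rotation": (0, 0, 0)},
    5: {"position": (10, 6, 0),  "rotation": (0, 0, 0)},
    6: {"position": (10, 0, 0),  "rotation": (0, 0, 0)},
    7: {"position": (8, -2, 0),  "rotation": (0, 0, 0)},
    8: {"position": (4, -2, 0),  "rotation": (0, 0, 0)},
}

def tag_offset(tag_a: int, tag_b: int) -> tuple:
    """Displacement of tag_b relative to tag_a (orientations are equal)."""
    pa, pb = TAG_LAYOUT[tag_a]["position"], TAG_LAYOUT[tag_b]["position"]
    return tuple(b - a for a, b in zip(pa, pb))

print(tag_offset(1, 5))   # (10, 6, 0)
```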
  
Step 3, construct the high-precision three-dimensional map of the assembly scene: after the assembly scene of Step 2 is built, the depth camera is initialized at tag 1 and video is then captured around one full circuit of the assembly scene. Feature extraction and optical-flow tracking are performed on the captured video images, the extracted features are filtered, feature frames are extracted for feature-point retention, and the result is combined with the distance information provided by the inertial measurement unit to obtain a dense three-dimensional map of the assembly scene. Meanwhile, using the Apriltag tags, the dense three-dimensional map of the assembly scene is filled with key frames and the tag corner information corresponding to those key frames; a discrete map is thereby established and fused with the dense three-dimensional map to form the high-precision three-dimensional map of the assembly scene.
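One possible data structure for the fused map described in this step, in which a dense keyframe map is filled with Apriltag key frames and corner observations, is sketched below; the class layout and field names are illustrative assumptions, not the patent's representation.

```python
# Sketch of a fused map structure for Step 3: a dense map of keyframes
# with retained feature points, plus a discrete map relating each Apriltag
# tag to the keyframe in which its corners were observed.
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    pose: tuple                    # camera pose when the frame was taken
    feature_points: list           # retained 3D feature points
    tag_corners: dict = field(default_factory=dict)   # tag_id -> 4 corners

@dataclass
class SceneMap:
    dense: list = field(default_factory=list)       # keyframes from VSLAM
    discrete: dict = field(default_factory=dict)     # tag_id -> keyframe index

    def fill_tag(self, frame_idx: int, tag_id: int, corners: list) -> None:
        """Information filling: attach tag corners to a dense keyframe and
        register that keyframe in the discrete map."""
        self.dense[frame_idx].tag_corners[tag_id] = corners
        self.discrete[tag_id] = frame_idx

scene = SceneMap(dense=[Keyframe(pose=(0, 0, 0), feature_points=[])])
scene.fill_tag(0, 1, [(300, 200), (340, 202), (338, 242), (298, 240)])
print(scene.discrete)
```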
Step 4, build the self-positioning scene information: the high-precision three-dimensional map constructed in Step 3 is transferred to this step and analyzed, and artificial Apriltag labels are attached in regions with few feature points to form the label set of the assembly scene. The relative pose relationships between the labels of the set are then measured, and the spatial position relationships of the assembly parts are established according to the assembly process and the assembly manual.
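Measuring the relative pose between two labels of the label set can be illustrated as follows, assuming both labels are observed in the same camera frame and that each observation yields a 4x4 homogeneous transform; this is a sketch of one possible computation, not the disclosed procedure.

```python
import numpy as np


def label_to_label(T_cam_label_a, T_cam_label_b):
    """Relative pose of label b expressed in the frame of label a, given the
    poses of both labels observed from a single camera view (each T_cam_label_*
    maps label coordinates into camera coordinates)."""
    return np.linalg.inv(T_cam_label_a) @ T_cam_label_b
```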
Step 5, design the self-positioning vision system: the spatial position relationships obtained in Step 4 are transferred to the self-positioning vision system, which comprises creating the virtual model, computing the device pose in real time, and fusing the virtual and real scenes. Creating the virtual model is connected to the real-time device pose computation: the AR development platform builds the three-dimensional scene, and the three-dimensional space coordinates of the virtual model are set according to the spatial position relationships of the assembly parts. The augmented reality device is then placed in the scene and the pose of the depth camera inside the device is computed in real time; the real-time device pose is connected to the virtual-real scene fusion, which loads the virtual objects onto the AR glasses so that the virtual objects are displayed fused with the assembly scene.
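The fusion display of this step amounts to projecting the virtual model, defined in assembly-scene coordinates, into the current camera image using the real-time device pose. The OpenCV-based sketch below illustrates this under the assumption of a pinhole camera model; the intrinsics and the pose convention (camera pose given in the scene frame) are assumptions.

```python
import cv2
import numpy as np


def project_virtual_model(model_points, T_scene_cam, camera_matrix,
                          dist_coeffs=None):
    """Project virtual-model vertices defined in scene coordinates into the
    current camera image.

    model_points -- (N, 3) array of vertices of the virtual part, scene frame
    T_scene_cam  -- 4x4 pose of the depth camera in the scene frame
    """
    T_cam_scene = np.linalg.inv(T_scene_cam)       # scene -> camera transform
    rvec, _ = cv2.Rodrigues(T_cam_scene[:3, :3])   # rotation as Rodrigues vector
    tvec = T_cam_scene[:3, 3]
    img_pts, _ = cv2.projectPoints(model_points.astype(np.float64),
                                   rvec, tvec, camera_matrix, dist_coeffs)
    return img_pts.reshape(-1, 2)                  # pixel coordinates
```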
Step 6, timed positioning flow: after the self-positioning vision system of Step 5 has been designed, the timed positioning flow is carried out. The self-positioning vision system is first initialized in the to-be-assembled area of the parts, the high-precision three-dimensional map is loaded and two threads are started. The poses produced by the two threads are then compared: if the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the label pose is used to correct the fused pose and the self-positioning vision system outputs the corrected pose, completing the self-positioning of the aviation assembly.
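The two-thread comparison and correction of this step can be sketched as follows; the thread interfaces, the translation-only error metric and the threshold are assumptions made for illustration and do not reproduce the exact fusion of the embodiment.

```python
import threading
import numpy as np


class SelfLocalizer:
    """Minimal sketch of the timed positioning flow: one thread estimates the
    camera pose from Apriltag detections, the other from feature/IMU fusion;
    the output step compares them and corrects the fused pose when they
    diverge by more than the threshold."""

    def __init__(self, error_threshold=0.05):
        self.error_threshold = error_threshold
        self.label_pose = None   # 4x4 pose from the Apriltag thread
        self.fused_pose = None   # 4x4 pose from the feature + IMU thread
        self.lock = threading.Lock()

    def update_label_pose(self, pose):   # called by the Apriltag thread
        with self.lock:
            self.label_pose = pose

    def update_fused_pose(self, pose):   # called by the fusion thread
        with self.lock:
            self.fused_pose = pose

    def output_pose(self):
        """Return the fused pose, or the label-corrected pose when the two
        estimates disagree by more than the threshold (translation only)."""
        with self.lock:
            if self.label_pose is None or self.fused_pose is None:
                return self.fused_pose
            error = np.linalg.norm(self.fused_pose[:3, 3]
                                   - self.label_pose[:3, 3])
            return (self.fused_pose if error <= self.error_threshold
                    else self.label_pose)
```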
Other parts of the present invention that are not described here are the same as in the prior art.

Claims (7)

  1. An augmented reality self-positioning method based on aviation assembly, characterized in that it comprises designing a system framework, building an assembly scene, constructing a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system and a timed positioning flow, and specifically comprises the following steps:
    Step 1: the system framework adopts a client-server parallel development mode for receiving and sending data; the client is connected to the server wirelessly and is used to transmit the assembly scene and the assembly process information to the server, and the server is connected to the client wirelessly and is used to transmit the parsed poses of the assembly scene feature points and label information to the client;
    Step 2: after the system framework design of Step 1 is completed, the assembly scene is built; the assembly scene comprises a parts area, a to-be-assembled area and a label area, the parts area is used for placing the parts to be assembled, the to-be-assembled area is used for assembling those parts, and the label area comprises a plurality of labels used to associate the position and attitude relationships between the individual labels, which are transmitted to the server;
    Step 3: after the assembly scene of Step 2 is built, the high-precision three-dimensional map of the assembly scene is constructed; the distance information provided by the depth camera and the inertial measurement unit is first used to obtain a dense three-dimensional map of the assembly scene, the Apriltag labels are then used to fill the dense three-dimensional map with information and establish a discrete map, and the dense three-dimensional map and the discrete map are fused to form the high-precision three-dimensional map of the assembly scene, which is transmitted to the server;
    Step 4: the high-precision three-dimensional map constructed in Step 3 is transmitted to the step of building the self-positioning scene information; the high-precision three-dimensional map is first analyzed, Apriltag labels are attached in regions with few feature points to form the label set of the assembly scene, the relative pose relationships between the labels of the set are then measured, and the spatial position relationships of the assembly parts are established according to the assembly process and the assembly manual and transmitted to the server;
    Step 5: the spatial position relationships of the self-positioning scene information built in Step 4 are transmitted to the designed self-positioning vision system, which comprises creating a virtual model, computing the device pose in real time and fusing the virtual and real scenes; creating the virtual model is connected to the real-time device pose computation, the AR development platform builds the three-dimensional scene, and the three-dimensional space coordinates of the virtual model are set according to the spatial position relationships of the assembly parts; the augmented reality device is then placed in the scene, the pose of the depth camera inside the device is computed in real time, and the real-time device pose is connected to the virtual-real scene fusion, which loads the virtual objects onto the client so that the virtual objects are displayed fused with the assembly scene;
    Step 6: after the self-positioning vision system of Step 5 is designed, the timed positioning flow is carried out; the self-positioning vision system is first initialized in the to-be-assembled area of the parts, the high-precision three-dimensional map is loaded and two threads are started, and the poses of the two threads are compared; if the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the label pose is used to correct the fused pose and the self-positioning vision system outputs the corrected pose.
  2. The augmented reality self-positioning method based on aviation assembly according to claim 1, characterized in that, in Step 1, the client comprises AR glasses, an inertial measurement unit and an industrial computer, the inertial measurement unit comprises a sensor, and the industrial computer is connected to the sensor and is used to control the sensor so that the computed data are transmitted to the server through a serial port.
  3. The augmented reality self-positioning method based on aviation assembly according to claim 1, characterized in that, in Step 3, the depth camera is used to capture a video of one circuit around the assembly scene, feature extraction and optical flow tracking are performed on the captured video images, the extracted features are screened, and feature frames are then extracted so that their feature points are retained.
  4. The augmented reality self-positioning method based on aviation assembly according to claim 1, characterized in that, in Step 3, the information filling comprises the key frames of the Apriltag labels and the label corner information corresponding to the key frames.
  5. The augmented reality self-positioning method based on aviation assembly according to claim 1, characterized in that, in Step 6, loading the high-precision three-dimensional map is divided into two threads: one thread detects the Apriltag label information in real time, estimates the pose of the depth camera relative to the label from the Apriltag label, and then converts the spatial position relationship between the label and the self-positioning scene into a pose relative to the world coordinates; the other thread performs fusion positioning of the feature points in the assembly scene with the inertial measurement unit and obtains the pose of the depth camera relative to the world coordinate system in real time.
  6. The augmented reality self-positioning method based on aviation assembly according to claim 1, characterized in that the specific steps of Step 5 are: (1) computing the pose of the Apriltag label; (2) computing the IMU pose; (3) computing the VSLAM pose; (4) transmitting the computed poses to the server, fusing them with the three-dimensional space coordinates of the virtual model, and then transmitting the result to the client for fused display.
  7. The augmented reality self-positioning method based on aviation assembly according to claim 1, characterized in that, in Step 5, the device pose comprises the pose of the Apriltag label, the IMU pose and the VSLAM pose.
PCT/CN2020/108443 2020-06-28 2020-08-11 Augmented reality self-positioning method based on aviation assembly WO2022000713A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010597190.5A CN111968228B (en) 2020-06-28 2020-06-28 Augmented reality self-positioning method based on aviation assembly
CN202010597190.5 2020-06-28

Publications (1)

Publication Number Publication Date
WO2022000713A1 true WO2022000713A1 (en) 2022-01-06

Family

ID=73360965

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/108443 WO2022000713A1 (en) 2020-06-28 2020-08-11 Augmented reality self-positioning method based on aviation assembly

Country Status (2)

Country Link
CN (1) CN111968228B (en)
WO (1) WO2022000713A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115016647A (en) * 2022-07-07 2022-09-06 国网江苏省电力有限公司电力科学研究院 Augmented reality three-dimensional registration method for substation fault simulation
CN117848331A (en) * 2024-03-06 2024-04-09 河北美泰电子科技有限公司 Positioning method and device based on visual tag map
CN117974794A (en) * 2024-04-02 2024-05-03 深圳市博硕科技股份有限公司 Accurate visual positioning system of thin slice goods of furniture for display rather than for use machine

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734945B (en) * 2021-03-30 2021-08-17 上海交大智邦科技有限公司 Assembly guiding method, system and application based on augmented reality
CN113220121B (en) * 2021-05-04 2023-05-09 西北工业大学 AR fastener auxiliary assembly system and method based on projection display
CN114323000B (en) * 2021-12-17 2023-06-09 中国电子科技集团公司第三十八研究所 Cable AR guide assembly system and method
CN114494594B (en) * 2022-01-18 2023-11-28 中国人民解放军63919部队 Deep learning-based astronaut operation equipment state identification method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110388919A (en) * 2019-07-30 2019-10-29 上海云扩信息科技有限公司 Threedimensional model localization method in augmented reality based on characteristic pattern and inertia measurement
US20190358547A1 (en) * 2016-11-14 2019-11-28 Lightcraft Technology Llc Spectator virtual reality system
CN110705017A (en) * 2019-08-27 2020-01-17 四川科华天府科技有限公司 Model disassembling and assembling simulation system and simulation method based on AR
CN110928418A (en) * 2019-12-11 2020-03-27 北京航空航天大学 Aviation cable auxiliary assembly method and system based on MR

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190212142A1 (en) * 2018-01-08 2019-07-11 Glen C. Gustafson System and method for using digital technology to perform stereo aerial photo interpretation
CN109062398B (en) * 2018-06-07 2021-06-29 中国航天员科研训练中心 Spacecraft rendezvous and docking method based on virtual reality and multi-mode human-computer interface
CN109759975A (en) * 2019-03-21 2019-05-17 成都飞机工业(集团)有限责任公司 A kind of positioning fixture of augmented reality artificial target's object of aircraft freight space auxiliary operation
CN110076277B (en) * 2019-05-07 2020-02-07 清华大学 Nail matching method based on augmented reality technology


Also Published As

Publication number Publication date
CN111968228B (en) 2021-11-05
CN111968228A (en) 2020-11-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20943235

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20943235

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.07.2023)
