Technical Problem
The object of the present invention is to address the above shortcomings of the prior art by providing an augmented reality self-positioning method for aviation assembly that improves on the long delivery cycles, complex operation, and weak sense of immersion of aviation product assembly.
Technical Solutions
The technical scheme adopted by the present invention is as follows:
An augmented reality self-positioning method for aviation assembly, comprising designing a system framework, building an assembly scene, constructing a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system, and a timed positioning process, specifically comprising the following steps:
Step 1: The system framework adopts a client-server parallel development mode for receiving and sending data. The client is connected to the server wirelessly and transmits the assembly scene and assembly process information to the server; the server is connected to the client wirelessly and transmits the parsed poses of the assembly scene feature points and tag information back to the client;
Step 2: After the system framework design of Step 1 is completed, the assembly scene is built. The assembly scene includes a parts area, an area to be assembled, and a tag area. The parts area is used to hold the assembly components; the area to be assembled is used to assemble the components; the tag area includes a plurality of tags used to associate the position and attitude relationships among the tags, which are transmitted to the server;
Step 3: After the assembly scene of Step 2 is built, a high-precision three-dimensional map of the assembly scene is constructed. First, the distance information provided by the depth camera and the inertial measurement unit is used to obtain a dense three-dimensional map of the assembly scene; Apriltag tags are then used to fill the dense three-dimensional map with information and build a discrete map; finally, the dense three-dimensional map and the discrete map are fused into a high-precision three-dimensional map of the assembly scene, which is transmitted to the server;
Step 4: The high-precision three-dimensional feature map of the assembly scene constructed in Step 3 is transmitted to the stage of building the self-positioning scene information. The high-precision three-dimensional map is first analyzed, and Apriltag tags are attached in regions with few feature points to form the tag set of the assembly scene; the relative pose relationships among the tags in the set are then measured, and the spatial position relationships of the assembly parts are established according to the assembly process and the assembly manual and transmitted to the server;
Step 5: The spatial position relationships from the self-positioning scene information built in Step 4 are transmitted to the design of the self-positioning vision system. Designing the self-positioning vision system includes creating a virtual model, computing the device pose in real time, and fusing the virtual and real scenes. Creating the virtual model is linked to the real-time device pose computation: a three-dimensional scene is built on the AR development platform, and the three-dimensional coordinates of the virtual model are set according to the spatial position relationships of the assembly parts; the augmented reality device is then placed in the scene, and the pose of the depth camera inside the device is computed in real time. The real-time device pose computation is linked to the virtual-real scene fusion, which loads the virtual objects onto the client to realize a fused display of the virtual objects and the assembly scene;
Step 6: After the self-positioning vision system of Step 5 is designed, the timed positioning process is performed. The process first initializes the self-positioning vision system in the area where the parts are to be assembled, then loads the high-precision three-dimensional map and starts two threads, and compares the poses of the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the fused pose is corrected with the tag pose, and the self-positioning vision system outputs the corrected pose.
In Step 1, the client includes AR glasses, an inertial measurement unit, and an industrial computer. The inertial measurement unit includes a sensor, and the industrial computer is connected to the sensor and controls it so that the computed data are transmitted to the server through a serial port.
In Step 3, the depth camera is used to capture video along one full circuit of the assembly scene; feature extraction and optical flow tracking are performed on the captured video images, the extracted video features are filtered, and feature frames are extracted to retain their feature points.
In Step 3, the information filling includes the key frames of the Apriltag tags and the tag corner information corresponding to the key frames.
In Step 6, loading the high-precision three-dimensional map is divided into two threads. One thread detects the Apriltag tag information in real time, estimates the pose of the depth camera relative to the tag from the Apriltag tag, and then converts it, through the spatial position relationship between the tag and the self-positioning scene, into a pose relative to the world coordinate system; the other thread performs fused positioning by combining the feature points in the assembly scene with the inertial measurement unit, obtaining the pose of the depth camera relative to the world coordinate system in real time.
The specific sub-steps of Step 5 are: (1) compute the pose of the Apriltag tag; (2) compute the IMU pose; (3) compute the VSLAM pose; (4) transmit the computed poses to the server, fuse them with the three-dimensional coordinates of the virtual model, and then transmit the result to the client for fused display.
In Step 5, the device pose includes the Apriltag tag pose, the IMU pose, and the VSLAM pose.
Beneficial Effects
The beneficial effects of the present invention are as follows: the operator wears the augmented reality device while the server interprets the assembly instructions; the assembly instructions are presented before the operator's eyes as virtual information, guiding the operator to find parts in the parts area, leading the operator to the area to be assembled, and instructing the operator in assembly precautions. This effectively improves the operator's understanding of the task, lowers the operating threshold, and ensures that assembly tasks are completed efficiently and reliably, while accurate positioning is also possible in blank regions with few feature points.
Embodiments of the Present Invention
The present invention is further described below with reference to the accompanying drawings:
As shown in Figures 1-4, the present invention includes designing a system framework, building an assembly scene, constructing a high-precision three-dimensional map of the assembly scene, building self-positioning scene information, designing a self-positioning vision system, and a timed positioning process, and specifically includes the following steps:
Step 1: The system framework adopts a client-server parallel development mode for receiving and sending data. The client is connected to the server wirelessly and transmits the assembly scene and assembly process information to the server; the server is connected to the client wirelessly and transmits the parsed poses of the assembly scene feature points and tag information back to the client;
Step 2: After the system framework design of Step 1 is completed, the assembly scene is built. The assembly scene includes a parts area, an area to be assembled, and a tag area. The parts area is used to hold the assembly components; the area to be assembled is used to assemble the components; the tag area includes a plurality of tags used to associate the position and attitude relationships among the tags, which are transmitted to the server. The position and attitude relationships among the tags are obtained as follows: any one of the tags is selected as the starting tag, its position is set to the origin (0, 0, 0) and its initial rotation attitude to (0, 0, 0), and the remaining tags are translated and rotated relative to the starting tag, these translations and rotations serving as their initial positions and rotation attitudes.
Step 3: After the assembly scene of Step 2 is built, a high-precision three-dimensional map of the assembly scene is constructed. First, the distance information provided by the depth camera and the inertial measurement unit is used to obtain a dense three-dimensional map of the assembly scene; Apriltag tags are then used to fill the dense three-dimensional map with information and build a discrete map; finally, the dense three-dimensional map and the discrete map are fused into a high-precision three-dimensional map of the assembly scene, which is transmitted to the server;
Step 4: The high-precision three-dimensional feature map of the assembly scene constructed in Step 3 is transmitted to the stage of building the self-positioning scene information. The high-precision three-dimensional map is first analyzed, and Apriltag tags are attached in regions with few feature points to form the tag set of the assembly scene; the relative pose relationships among the tags in the set are then measured, where the relationship between the tags and the assembly scene can be read directly from the three-dimensional map and the relative pose of the assembly scene can then be computed by the augmented reality device; the spatial position relationships of the assembly parts are then established according to the assembly process and the assembly manual and transmitted to the server;
Step 5: The spatial position relationships from the self-positioning scene information built in Step 4 are transmitted to the design of the self-positioning vision system. Designing the self-positioning vision system includes creating a virtual model, computing the device pose in real time, and fusing the virtual and real scenes. Creating the virtual model is linked to the real-time device pose computation: a three-dimensional scene is built on the AR development platform, and the three-dimensional coordinates of the virtual model are set according to the spatial position relationships of the assembly parts; the augmented reality device is then placed in the scene, and the pose of the depth camera inside the device is computed in real time. The real-time device pose computation is linked to the virtual-real scene fusion, which loads the virtual objects into the AR glasses on the client to realize a fused display of the virtual objects and the assembly scene, where the device pose includes the Apriltag tag pose, the IMU pose, and the VSLAM pose.
Step 6: After the self-positioning vision system of Step 5 is designed, the timed positioning process is performed. The process first initializes the self-positioning vision system in the area where the parts are to be assembled, then loads the high-precision three-dimensional map and starts two threads, and compares the poses of the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the fused pose is corrected with the tag pose, and the self-positioning vision system outputs the corrected pose, where the tag pose is obtained by detecting a tag with the depth camera on the augmented reality device and then computing the position and attitude of the tag relative to the augmented reality device.
In Step 1, the client includes AR glasses, an inertial measurement unit, and an industrial computer. The inertial measurement unit includes a sensor, and the industrial computer is connected to the sensor and controls it so that the computed data are transmitted to the server through a serial port.
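As a rough sketch of this data path, the following Python fragment reads IMU samples from a serial port on the industrial computer and forwards them to the server; the port name, baud rate, line format, and server address are illustrative assumptions, not values fixed by the invention.

```python
import json
import socket
import serial  # pyserial

# Assumed serial settings and server address (illustrative only).
ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0)
sock = socket.create_connection(("192.168.1.10", 9000))

while True:
    line = ser.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    # Assumed sensor line format: "ax,ay,az,gx,gy,gz"
    ax, ay, az, gx, gy, gz = map(float, line.split(","))
    msg = {"accel": [ax, ay, az], "gyro": [gx, gy, gz]}
    sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))
```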
In Step 3, the depth camera is used to capture video along one full circuit of the assembly scene; feature extraction and optical flow tracking are performed on the captured video images, the extracted video features are filtered, and feature frames are extracted to retain their feature points.
In Step 3, the information filling includes the key frames of the Apriltag tags and the tag corner information corresponding to the key frames.
In Step 6, loading the high-precision three-dimensional map is divided into two threads. One thread detects the Apriltag tag information in real time, estimates the pose of the depth camera relative to the tag from the Apriltag tag, and then converts it, through the spatial position relationship between the tag and the self-positioning scene, into a pose relative to the world coordinate system; the other thread performs fused positioning by combining the feature points in the assembly scene with the inertial measurement unit, obtaining the pose of the depth camera relative to the world coordinate system in real time.
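The two-thread structure can be sketched as follows; detect_tag_pose and vio_step are hypothetical placeholders for the Apriltag detection and the feature/IMU fusion step, and tag_poses stands for the stored tag-to-world transforms.

```python
import threading
import numpy as np

latest = {"tag": None, "vio": None}
lock = threading.Lock()

def tag_thread(camera, tag_poses):
    # Thread 1: detect the Apriltag, estimate the camera pose relative to
    # the tag, then convert to a world-frame pose via the stored tag pose.
    while True:
        tag_id, T_cam_tag = detect_tag_pose(camera)   # hypothetical detector
        if tag_id is not None:
            T_world_cam = tag_poses[tag_id] @ np.linalg.inv(T_cam_tag)
            with lock:
                latest["tag"] = T_world_cam

def vio_thread(camera, imu):
    # Thread 2: fuse scene feature points with the IMU to obtain the depth
    # camera pose relative to the world coordinate system in real time.
    while True:
        T_world_cam = vio_step(camera, imu)           # hypothetical VIO step
        with lock:
            latest["vio"] = T_world_cam

threading.Thread(target=tag_thread, args=(cam, tag_poses), daemon=True).start()
threading.Thread(target=vio_thread, args=(cam, imu), daemon=True).start()
```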
The specific sub-steps of Step 5 are:
(1) Compute the pose of the Apriltag tag: let the tag-code coordinate system be $O_t\text{-}X_tY_tZ_t$ and the depth camera coordinate system be $O_c\text{-}X_cY_cZ_c$. For any point $P_t$ on the tag code with coordinates $P_c$ in the depth camera coordinate system, the correspondence between the two is:

$P_c = R\,P_t + T$  (1)

In formula (1), R is the rotation matrix, representing the rotation of the depth camera coordinate system relative to the tag-code coordinate system, and T is the translation vector, representing the translation of the depth camera coordinate system relative to the tag code; $[R\,|\,T]$ is the extrinsic parameter matrix of the depth camera, with which the tag-code coordinate system can be transformed into the depth camera coordinate system.
Let the image coordinate system be $o\text{-}xy$ and the pixel coordinate system be $o\text{-}uv$. A point $P_t$ on the tag code and its imaging point $p = (u, v)$ in the depth camera image plane then correspond as:

$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K P_c = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}$  (2)

In formula (2), $(c_x, c_y)$ is the center of the image plane and $f_x$, $f_y$ are the normalized focal lengths along the x and y axes; K is the intrinsic parameter matrix of the depth camera, with which the depth camera coordinate system can be transformed into the image plane coordinate system. Solving formulas (1) and (2) with the least squares method over the corner correspondences yields the intrinsic parameter matrix K of the depth camera; when the depth camera detects a tag, R and T are obtained with the Apriltag algorithm.
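The pose recovery described by formulas (1) and (2) can be illustrated with OpenCV's solvePnP applied to the four tag corners (a stand-in for the Apriltag algorithm's pose step); the tag size, intrinsics, and corner pixels below are assumed example values.

```python
import cv2
import numpy as np

# Assumed tag side length (metres) and intrinsic parameters (illustrative).
s = 0.10
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

# Corners of the tag in the tag-code coordinate system (Z = 0 plane).
obj_pts = np.array([[-s/2, -s/2, 0], [ s/2, -s/2, 0],
                    [ s/2,  s/2, 0], [-s/2,  s/2, 0]], dtype=np.float64)

# Pixel coordinates of the same corners as returned by a tag detector
# (example values standing in for a real detection).
img_pts = np.array([[300, 260], [340, 258], [342, 300], [298, 302]],
                   dtype=np.float64)

ok, rvec, T = cv2.solvePnP(obj_pts, img_pts, K, distCoeffs=None)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix of formula (1)
# Now p_c = R @ p_t + T maps tag-frame points into the camera frame.
```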
(2) Compute the IMU pose: the IMU collects data while the device is in motion. Between any two times $t_1$ and $t_2$, integrating the angular rate $\omega(t)$ of the inertial measurement unit with formula (3) gives the angle increment of the device about the three axes, denoted $\Delta\theta$; between any two times $t_1$ and $t_2$, double-integrating the acceleration $a(t)$ of the inertial measurement unit with formula (4) gives the displacement $\Delta s$ of the device over that period:

$\Delta\theta = \int_{t_1}^{t_2} \omega(t)\,dt$  (3)

$\Delta s = \int_{t_1}^{t_2} \left( \int_{t_1}^{t} a(\tau)\,d\tau \right) dt$  (4)
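Formulas (3) and (4) discretize directly over the IMU samples; a minimal sketch, assuming a fixed sample period and ignoring bias and gravity compensation:

```python
import numpy as np

dt = 0.01                    # assumed IMU sample period (s)
omega = np.zeros((100, 3))   # angular rates over [t1, t2], rad/s
accel = np.zeros((100, 3))   # accelerations over [t1, t2], m/s^2

# Formula (3): angle increment about the three axes.
delta_theta = np.sum(omega * dt, axis=0)

# Formula (4): double integration of acceleration -> displacement.
velocity = np.cumsum(accel * dt, axis=0)    # first integral
delta_s = np.sum(velocity * dt, axis=0)     # second integral
```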
(3) Compute the VSLAM pose: the depth camera is used to collect a three-dimensional map of the scene. Denote the n three-dimensional points in space by $P_i$ ($i = 1, \dots, n$) and their projected pixel coordinates by $u_i$; the two satisfy:

$s_i u_i = K \exp(\xi^{\wedge}) P_i$  (5)

where $\xi$ is the Lie-algebra representation of the depth camera pose and $s_i$ is the depth of $P_i$. Bundle adjustment is then applied to minimize the projection error: the errors are summed to build a least-squares problem, and the most accurate depth camera pose is sought by minimizing formula (6), from which R and T are obtained:

$\xi^{*} = \arg\min_{\xi} \frac{1}{2} \sum_{i=1}^{n} \left\| u_i - \frac{1}{s_i} K \exp(\xi^{\wedge}) P_i \right\|^{2}$  (6)
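A pose-only form of the least-squares problem in formula (6) can be sketched with scipy, parameterizing the pose as a rotation vector plus translation; the point data below are random stand-ins.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

K = np.array([[600.0, 0, 320.0], [0, 600.0, 240.0], [0, 0, 1.0]])
P = np.random.rand(20, 3) + np.array([0, 0, 3.0])   # example 3D points P_i
u_obs = np.random.rand(20, 2) * [640, 480]          # example observations u_i

def residuals(x):
    # x = [rotation vector (3), translation (3)], cf. the pose xi.
    R, _ = cv2.Rodrigues(x[:3])
    Pc = P @ R.T + x[3:]                  # points in the camera frame
    proj = Pc @ K.T
    proj = proj[:, :2] / proj[:, 2:3]     # divide by depth s_i
    return (proj - u_obs).ravel()         # summed squares = formula (6)

sol = least_squares(residuals, x0=np.zeros(6))   # optimal pose xi*
```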
(4) Transmit the computed poses to the server, fuse them with the three-dimensional coordinates of the virtual model, and then transmit the result to the client for fused display: after the Apriltag tag pose, the IMU pose, and the VSLAM pose are obtained, the conventional optimization approach adds the IMU biases to the state variables, and the target state equation built from the depth camera's pose and velocity and the IMU biases is estimated in a tightly coupled manner. As shown in formula (7), the fifteen-dimensional state variable of the system is expressed as:

$\mathcal{X} = \left[\, R_c,\; p_c,\; v_c,\; b_a,\; b_g \,\right]$  (7)

In formula (7), $R_c$, $p_c$, and $v_c$ are the rotation, translation, and velocity of the depth camera, respectively, and $b_a$ and $b_g$ are the biases of the IMU accelerometer and gyroscope, respectively. Since the present system adopts a local Apriltag tag-assisted positioning strategy, the tightly coupled system state variable is expressed as:

$\mathcal{X}' = \left[\, \mathcal{X},\; x_{tag},\; x_{vslam},\; \Delta x \,\right]$  (8)

In formula (8), $x_{tag}$ denotes the positioning pose of the Apriltag tag, $x_{vslam}$ denotes the positioning pose of VSLAM, and $\Delta x$ denotes the pose difference between the Apriltag tag and VSLAM. The system variables therefore cover two cases: when $\|\Delta x\| < \epsilon$, i.e. the accumulated visual positioning error has not exceeded the set threshold, visual-inertial fused positioning continues; when $\|\Delta x\| \geq \epsilon$, i.e. the accumulated visual positioning error is large, local Apriltag tag positioning is adopted and fused with the inertial navigation positioning.
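The two cases of formula (8) amount to a small switching rule; the sketch below is one illustrative reading, with the threshold value and the fallback fusion rule as assumptions.

```python
import numpy as np

EPSILON = 0.05   # assumed threshold on the accumulated visual error

def select_pose(x_tag, x_vslam, x_vio):
    # Pose difference term of formula (8).
    delta = np.linalg.norm(x_tag - x_vslam)
    if delta < EPSILON:
        # Accumulated visual error within the threshold:
        # continue visual-inertial fused positioning.
        return x_vio
    # Error too large: adopt local Apriltag positioning fused with the
    # inertial estimate (simple average used here purely for illustration).
    return 0.5 * (x_tag + x_vio)
```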
Embodiment:
Step 1, designing the system framework: a client-server parallel development mode is adopted for receiving and sending data. The AR glasses, inertial measurement unit, and industrial computer in the client are first connected to the server wirelessly and transmit the assembly scene and assembly process information to the server; the server is then connected to the client wirelessly and transmits the parsed poses of the assembly scene feature points and tag information back to the client;
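One minimal way to realize this wireless exchange is JSON over TCP, as sketched below; the message fields, port, and stubbed parsing are illustrative assumptions, since the invention does not fix a wire format.

```python
import json
import socketserver

class AssemblyHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Receive assembly scene / process info from the client ...
        request = json.loads(self.rfile.readline())
        # ... parse feature points and tag info (stubbed here), then
        # return the resulting poses to the client.
        poses = {"feature_poses": [], "tag_poses": []}   # placeholder result
        self.wfile.write((json.dumps(poses) + "\n").encode("utf-8"))

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 9000), AssemblyHandler) as srv:
        srv.serve_forever()
```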
Step 2, building the assembly scene: after the design of the system framework in Step 1 is completed, the aviation parts to be assembled that were placed in the parts area are assembled in the area to be assembled. Tag 1 is then selected as the starting tag from the 8 arranged tag areas, the position of tag 1 is set to the origin (0, 0, 0), and its initial rotation attitude to (0, 0, 0); at the same time, tag 2 is translated and rotated relative to tag 1, this translation and rotation serving as its position (Position) and rotation attitude (Rotation), and the remaining tags follow by analogy. The rotation attitude of every tag is set to (0, 0, 0), i.e., the spatial position of each tag is adjusted so that all tags face the same direction, as shown in Table 1:
Table 1. Spatial position relationships of the assembly scene tags

Tag number   Position      Rotation attitude
1            (0, 0, 0)     (0, 0, 0)
2            (0, 8, 0)     (0, 0, 0)
3            (4, 8, 0)     (0, 0, 0)
4            (8, 8, 0)     (0, 0, 0)
5            (10, 6, 0)    (0, 0, 0)
6            (10, 0, 0)    (0, 0, 0)
7            (8, -2, 0)    (0, 0, 0)
8            (4, -2, 0)    (0, 0, 0)
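Table 1 can be carried in code as a mapping from tag number to position, from which the displacement between any two tags follows directly; since all rotation attitudes in the table are (0, 0, 0), orientations are treated as identical in this sketch.

```python
import numpy as np

# Tag positions from Table 1; all rotation attitudes are (0, 0, 0).
TAGS = {1: (0, 0, 0), 2: (0, 8, 0), 3: (4, 8, 0), 4: (8, 8, 0),
        5: (10, 6, 0), 6: (10, 0, 0), 7: (8, -2, 0), 8: (4, -2, 0)}

def relative_position(a, b):
    """Displacement of tag b relative to tag a (identical orientations)."""
    return np.array(TAGS[b]) - np.array(TAGS[a])

print(relative_position(1, 5))   # -> [10.  6.  0.]
```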
Step 3, constructing the high-precision three-dimensional map of the assembly scene: after the assembly scene of Step 2 is built, the depth camera is initialized at tag 1 and then carried around the assembly scene for one circuit to capture video. Feature extraction and optical flow tracking are performed on the captured video images, the extracted video features are filtered, feature frames are extracted to retain feature points, and the result is combined with the distance information provided by the inertial measurement unit to obtain a dense three-dimensional map of the assembly scene. At the same time, Apriltag tags are used to fill the dense three-dimensional map of the assembly scene with key frames and the tag corner information corresponding to those key frames, thereby building a discrete map that is fused with the dense three-dimensional map to form a high-precision three-dimensional map of the assembly scene;
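The feature extraction and optical flow tracking of this step can be sketched with OpenCV; ORB is used here as one possible detector (the text does not name a specific one), and the video source is an assumption.

```python
import cv2

cap = cv2.VideoCapture("assembly_scene.mp4")   # assumed capture source
orb = cv2.ORB_create(nfeatures=1000)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kps = orb.detect(prev_gray, None)
pts = cv2.KeyPoint_convert(kps).reshape(-1, 1, 2)

while True:
    ok, frame = cap.read()
    if not ok or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track the previous frame's feature points by pyramidal Lucas-Kanade.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)   # keep tracked points
    prev_gray = gray
```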
Step 4, building the self-positioning scene information: the high-precision three-dimensional feature map of the assembly scene constructed in Step 3 is transmitted to the stage of building the self-positioning scene information. The high-precision three-dimensional map is then analyzed, and artificial Apriltag tags are attached in regions with few feature points to form the tag set of the assembly scene; the relative pose relationships among the tags in the set are measured, and the spatial position relationships of the assembly parts are established according to the assembly process and the assembly manual;
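Which map regions count as having "few feature points" can be judged, for example, by counting map points per grid cell, as in the sketch below; the cell size and threshold are illustrative assumptions.

```python
import numpy as np

def sparse_cells(points_xy, cell=1.0, min_points=20):
    """Return grid cells of the map whose feature-point count is low,
    i.e. candidate locations for attaching Apriltag tags."""
    cells = np.floor(np.asarray(points_xy) / cell).astype(int)
    uniq, counts = np.unique(cells, axis=0, return_counts=True)
    return [tuple(c) for c, n in zip(uniq, counts) if n < min_points]
```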
Step 5, designing the self-positioning vision system: the spatial position relationships from the self-positioning scene information built in Step 4 are transmitted to the design of the self-positioning vision system, which includes creating a virtual model, computing the device pose in real time, and fusing the virtual and real scenes. Creating the virtual model is linked to the real-time device pose computation: a three-dimensional scene is built on the AR development platform, and the three-dimensional coordinates of the virtual model are set according to the spatial position relationships of the assembly parts; the augmented reality device is then placed in the scene, the pose of the depth camera inside the device is computed in real time, and the real-time device pose computation is linked to the virtual-real scene fusion, which loads the virtual objects onto the AR glasses to realize a fused display of the virtual objects and the assembly scene;
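For the fused display, each virtual model's world coordinates must be re-expressed in the current camera frame before rendering; a minimal sketch of that transform, assuming homogeneous 4x4 poses:

```python
import numpy as np

def model_in_camera(T_world_cam, p_world_model):
    """Express a virtual model's world-frame position in the camera frame,
    given the device (depth camera) pose T_world_cam computed in real time."""
    T_cam_world = np.linalg.inv(T_world_cam)
    p = np.append(p_world_model, 1.0)   # homogeneous coordinates
    return (T_cam_world @ p)[:3]

# Example: a model placed per the assembly parts' spatial relationships.
T_world_cam = np.eye(4)
print(model_in_camera(T_world_cam, np.array([4.0, 8.0, 0.0])))
```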
Step 6, the timed positioning process: after the self-positioning vision system of Step 5 is designed, the timed positioning process is performed. The process first initializes the self-positioning vision system in the area where the parts are to be assembled, then loads the high-precision three-dimensional map and starts two threads, and compares the poses of the two threads. If the error meets the set requirement, the self-positioning vision system outputs the fused positioning result; if the error is too large, the fused pose is corrected with the tag pose, and the self-positioning vision system finally outputs the corrected pose, completing the self-positioning of this aviation assembly task.
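The timed positioning flow reduces to a loop that periodically compares the two threads' poses and either outputs the fused result or the tag-corrected pose; the period, tolerance, and correction rule below are illustrative assumptions.

```python
import time
import numpy as np

MAX_ERROR = 0.05   # assumed pose error tolerance (scene units)
PERIOD = 0.1       # assumed timing period (s)

def timed_positioning(get_tag_pose, get_fused_pose, publish):
    # get_tag_pose / get_fused_pose: hypothetical accessors for the two
    # threads' latest poses; publish: hypothetical output to the AR client.
    while True:
        tag_p, fused_p = get_tag_pose(), get_fused_pose()
        if np.linalg.norm(tag_p - fused_p) <= MAX_ERROR:
            publish(fused_p)   # error within bounds: output the fused result
        else:
            publish(tag_p)     # error too large: output the corrected pose
        time.sleep(PERIOD)
```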
Other parts of the present invention that are not described are the same as in the prior art.