CN114332360A - A collaborative three-dimensional mapping method and system - Google Patents
A collaborative three-dimensional mapping method and system
- Publication number
- CN114332360A (application number CN202111510369.3A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- visual positioning
- camera
- unmanned aerial vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The present invention provides a collaborative three-dimensional mapping method and system, comprising: detecting visual positioning markers through the cloud; optimizing the pose estimation of the UAV's visual odometry with the visual positioning markers; optimizing the pose estimation of the unmanned vehicle's visual odometry with the visual positioning markers; and completing the local-mapping thread and the loop-closing thread of the ORB-SLAM framework in the cloud. Compared with the prior art, the invention is implemented mainly on the ORB-SLAM framework and a cloud platform: the UAV and the unmanned vehicle each run the ORB-SLAM tracking thread on board, the cloud runs the local-mapping and loop-closing threads, and the visual positioning markers are used to refine the pose estimates of both vehicles' visual odometry. This addresses the difficulty of meeting real-time requirements in collaborative SLAM and the problem of inaccurate localization, and yields a collaborative three-dimensional mapping system with good robustness, high accuracy, and strong real-time performance.
Description
Technical Field
The invention relates to the field of collaborative three-dimensional mapping, and in particular to a collaborative three-dimensional mapping method and system.
Background Art
In the prior art, road markers combined with monocular camera sensors have been used to achieve three-dimensional mapping with multiple robots, but such systems have poor real-time performance.
Road markers combined with a cloud architecture have also been used to achieve two-dimensional mapping with a single robot, but such systems are not suitable for large-scale environments.
Summary of the Invention
To overcome the deficiencies of the prior art, the present invention provides a collaborative three-dimensional mapping method and system. The specific technical solutions are as follows:
A collaborative three-dimensional mapping method, comprising:
detecting visual positioning markers through the cloud;
optimizing the pose estimation of the UAV's visual odometry with the visual positioning markers;
optimizing the pose estimation of the unmanned vehicle's visual odometry with the visual positioning markers;
completing the local-mapping thread and the loop-closing thread of the ORB-SLAM framework in the cloud.
In a specific embodiment, the method further comprises:
collecting environmental information, using Docker as the cloud container, Kubernetes as the container scheduling service, and BRPC and Beego as the network framework to build the cloud platform, so that the multi-agent side communicates with the cloud;
the multi-agent system comprises one UAV and one unmanned vehicle, which form a centralized architecture; the front of the UAV carries a first monocular camera whose lens faces downward, and the front of the unmanned vehicle carries a second monocular camera whose lens faces forward;
selecting at least two environmental points and placing the visual positioning markers on them.
In a specific embodiment, the method further comprises:
the environmental information comprises image information, from which feature points and descriptors are extracted with the ORB-SLAM algorithm;
depth is recovered with the PnP algorithm to obtain point-cloud information;
the map is initialized on the cloud platform: if a map already exists on the cloud platform, the image information is matched against the keyframes in the cloud to determine the initial position; if no map exists, the image information and the map data become the starting point of the cloud platform's system map;
the camera pose is estimated by matching feature-point pairs or by relocalization;
the relationship between image feature points and the local point-cloud map is established;
keyframes satisfying the keyframe criteria are extracted and uploaded to the cloud.
In a specific embodiment, "establishing the relationship between image feature points and the local point-cloud map" specifically comprises:
when local-map tracking fails because of occlusion or missing texture in the environment, the system relocalizes in the following ways:
relocalizing and matching reference frames in the local map on board the UAV or the unmanned vehicle;
relocalizing on the cloud platform with the information of the current frame.
In a specific embodiment, "detecting visual positioning markers through the cloud" specifically comprises:
performing image edge detection;
screening out quadrilateral contour edges;
decoding the quadrilateral contour edges to identify the visual positioning markers.
In a specific embodiment, "optimizing the pose estimation of the UAV's visual odometry with the visual positioning markers" specifically comprises:
defining the coordinate systems: the UAV-mounted camera coordinate system P_C, the UAV coordinate system P_A, the visual positioning marker coordinate system P_B, and the world coordinate system P_W, where P_W is defined by the UAV's first frame;
the YOZ plane of the camera coordinate system P_C is parallel to the YOZ plane of the UAV coordinate system P_A, and the origin of P_A is set at the UAV's center;
computing the relationship between the camera coordinate system P_C and the world coordinate system P_W;
computing the relative pose (rotation R_CB and translation t_CB) between the camera coordinate system P_C and the marker coordinate system P_B;
computing the trajectory error from the relative pose obtained through the visual positioning markers and the relative pose obtained from visual odometry, and distributing this error evenly over each keyframe of the UAV, so that the error between the loop-closure keyframes and the true trajectory is reduced.
In a specific embodiment, "computing the relationship between the camera coordinate system P_C and the world coordinate system P_W" specifically comprises:
the UAV coordinate system P_A and the camera coordinate system P_C are parallel, so that
P_A = P_C + t_AC,
where P_A denotes coordinates in the UAV coordinate system, P_C denotes coordinates in the camera coordinate system, and t_AC is the translation vector between P_A and P_C, i.e. the offset of the camera from the UAV's center;
the marker coordinate system P_B and the world coordinate system P_W satisfy
P_W = P_B + t_WB,
where P_W denotes coordinates in the world coordinate system, P_B denotes coordinates in the marker coordinate system, and t_WB is the translation vector between P_W and P_B;
The angles φ, θ, and ψ are Euler angles. Let the rotation matrix from the world coordinate system P_W to the UAV coordinate system P_A be R_AW, and the rotation matrix from the marker coordinate system P_B to the camera coordinate system P_C be R_CB; each is parametrized by the Euler angles as the standard Z-Y-X rotation matrix
R(φ, θ, ψ) =
[ cθ·cψ, cθ·sψ, −sθ ]
[ sφ·sθ·cψ − cφ·sψ, sφ·sθ·sψ + cφ·cψ, sφ·cθ ]
[ cφ·sθ·cψ + sφ·sψ, cφ·sθ·sψ − sφ·cψ, cφ·cθ ],
where c denotes cos and s denotes sin. Since P_C is parallel to P_A and P_B is parallel to P_W, the rotational relationship between the marker coordinate system P_B and the camera coordinate system P_C is
R_CB = R_AW = R(φ, θ, ψ).
The transformation from the camera coordinate system P_C to the marker coordinate system P_B is
P_B = R_BC · P_C + t_BC,
where R_BC is the rotation matrix from the camera coordinate system P_C to the marker coordinate system P_B and t_BC is the translation vector from P_C to P_B;
the relationship from the camera coordinate system P_C to the world coordinate system P_W is then
P_W = R_WA · (P_C + t_AC) + t_WA,
where R_WA is the rotation matrix from the UAV coordinate system P_A to the world coordinate system P_W, t_WA is the translation vector from P_A to P_W, and t_AC is the translation vector from P_A to the camera coordinate system P_C.
In a specific embodiment, "computing the relative pose R_CB, t_CB between the camera coordinate system P_C and the marker coordinate system P_B" specifically comprises:
projecting the visual positioning marker onto the camera's 2D pixel plane with the camera model, which gives
[u, v, 1]^T = s · M · (R_CB · [X_B, Y_B, Z_B]^T + t_CB),
where M denotes the camera intrinsic matrix, [u, v, 1] denotes the coordinates of the projected marker on the normalized plane, [X_B, Y_B, Z_B] denotes the coordinates of the marker in the marker coordinate system P_B, t_CB denotes the translation vector from the marker coordinate system P_B to the camera coordinate system P_C, R_CB denotes the rotation matrix from P_B to P_C, s = 1/Z_C denotes the unknown scale factor, and Z_C denotes the Z-axis coordinate of the marker in the camera coordinate system; R_CB and t_CB are computed with the direct linear transform algorithm.
In a specific embodiment, "optimizing the pose estimation of the unmanned vehicle's visual odometry with the visual positioning markers" specifically comprises:
defining the coordinate systems: the vehicle-mounted camera coordinate system P_C, the marker coordinate system P_B, and the world coordinate system P_W, where P_W is defined by the UAV's first frame and the relationship between the vehicle-mounted camera coordinate system P_C and the vehicle coordinate system P_A is fixed;
obtaining the relative pose T_cw between the vehicle camera coordinate system P_C and the world coordinate system P_W, the relative pose T_bc between the marker coordinate system P_B and the camera coordinate system P_C, and the relative pose T_bw between the marker coordinate system P_B and the world coordinate system P_W;
optimizing the vehicle pose and the point-cloud coordinates;
defining the relative error between the marker coordinate system P_B and the vehicle camera coordinate system P_C as the discrepancy between the marker pose predicted through the estimated trajectory, T_cw · T_bc, and the directly measured marker pose T_bw;
constructing an optimization objective function that minimizes this relative error over the variables
T_cw ∈ {(R_cw, t_cw) | R_cw ∈ SO(3), t_cw ∈ R³}, T_bc ∈ {(R_bc, t_bc) | R_bc ∈ SO(3), t_bc ∈ R³},
where SO(3) denotes the three-dimensional special orthogonal group, R³ denotes three-dimensional real space, t_cw denotes the translation from the vehicle camera coordinate system P_C to the world coordinate system P_W, t_bc denotes the translation from the marker coordinate system P_B to the vehicle camera coordinate system P_C, R_cw denotes the rotation from P_C to P_W, and R_bc denotes the rotation from P_B to P_C;
camera motion causes not only the rotation errors R_cw, R_bc and translation errors t_cw, t_bc but also drift in scale, so a scale-aware transformation is performed with the Sim(3) algorithm:
S_cw = (R_cw, t_cw, s = 1), (R_cw, t_cw) = T_cw
S_bc = (R_bc, t_bc, s = 1), (R_bc, t_bc) = T_bc
where S_cw denotes the similarity transformation of the marker points from the world coordinate system P_W to the vehicle camera coordinate system P_C, S_bc denotes the similarity transformation of the marker points from the marker coordinate system P_B to the vehicle camera coordinate system P_C, and s denotes the unknown scale factor;
assuming the optimized Sim(3) pose is Ŝ_bw = (R̂_bw, t̂_bw, ŝ), the corrected pose replaces S_bw = (R_bw, t_bw, s), where R_bw denotes the rotation matrix of the marker points from the world coordinate system P_W to the marker coordinate system P_B, t_bw denotes the corresponding translation, s denotes the unknown scale factor, R̂_bw, t̂_bw and ŝ denote the optimized rotation matrix, translation vector and scale factor, and Ŝ_bw denotes the optimized similarity transformation;
setting the 3D position of the unmanned vehicle before optimization as p_i, the transformed coordinates are obtained as
p̂_i = ŝ · R̂_bw · p_i + t̂_bw,
where p̂_i denotes the optimized pose of the unmanned vehicle.
A collaborative three-dimensional mapping system, used to implement the collaborative three-dimensional mapping method described above, comprising:
an environment preparation module for collecting environmental information;
an information processing module for extracting keyframes from the collected environmental information, following the design of the Tracking thread in the ORB-SLAM framework;
a detection module for detecting visual positioning markers through the cloud;
a first optimization module for optimizing the pose estimation of the UAV's visual odometry with the visual positioning markers;
a second optimization module for optimizing the pose estimation of the unmanned vehicle's visual odometry with the visual positioning markers;
an execution module for completing the local-mapping thread and the loop-closing thread of the ORB-SLAM framework in the cloud.
Compared with the prior art, the present invention has the following beneficial effects:
the collaborative three-dimensional mapping method and system provided by the invention address the difficulty of meeting real-time requirements in collaborative SLAM and the problem of inaccurate localization, and achieve a collaborative three-dimensional mapping system with good robustness, high accuracy, and strong real-time performance.
To make the above objects, features, and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Description of Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings used in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the invention and should therefore not be regarded as limiting its scope; those of ordinary skill in the art may derive other related drawings from them without creative effort.
Fig. 1 is a schematic diagram of the camera's imaging model in the embodiment;
Fig. 2 is a flow chart of the collaborative three-dimensional mapping method in the embodiment;
Fig. 3 is a block diagram of the collaborative three-dimensional mapping system in the embodiment.
Detailed Description
Embodiment
As shown in Figs. 1-2, this embodiment provides a collaborative three-dimensional mapping method, comprising:
environment preparation: collecting environmental information;
information processing: extracting keyframes from the collected environmental information, following the design of the Tracking thread in the ORB-SLAM framework;
detecting visual positioning markers, i.e. road markers, through the cloud;
optimizing the pose estimation of the UAV's visual odometry with the visual positioning markers;
optimizing the pose estimation of the unmanned vehicle's visual odometry with the visual positioning markers;
completing the local-mapping thread and the loop-closing thread of the ORB-SLAM framework in the cloud.
Specifically, the cloud executes the Local Mapping thread and the Loop Closing thread of ORB-SLAM. Cooperative SLAM (cooperative simultaneous localization and mapping, CSLAM) outperforms single-robot SLAM in fault tolerance, robustness, and execution efficiency, and is important for tasks such as disaster rescue, resource exploration, and space exploration in unknown environments. The computation and storage demands of a CSLAM system are large, and most individual robots cannot meet its real-time requirements. CSLAM systems usually operate in large-scale environments, where the systematic errors accumulated over many computations (pose-estimation errors, etc.) cannot be fully eliminated. Moreover, when the environment contains many repetitive landforms, feature-point matching or overlap-region matching may produce a certain degree of mismatching. Both the accumulated systematic errors and the mismatches degrade the mapping accuracy of a CSLAM system; placing a small number of road markers in the environment, so that each robot can refine its own pose against them, is therefore of great significance for improving mapping accuracy. Compared with two-dimensional maps, three-dimensional maps carry richer information and better reflect the objective structure of the real world.
Specifically, visual positioning marker technology, i.e. road marker technology, helps camera and lidar sensors achieve more accurate localization and mapping; cloud architecture moves the heavy computations of multi-robot SLAM to the cloud, solving the problem of limited on-board computing and storage resources; and a three-dimensional map carries richer environmental information than a planar one, which helps the UAV perform navigation, obstacle avoidance, and similar functions.
Preferably, in this embodiment, relatively open spots in a large-scale unknown environment are marked with road markers (AprilTag codes); the UAV and the unmanned vehicle carry monocular cameras and collect environmental information in real time while moving; collaborative three-dimensional mapping is performed with the ORB-SLAM framework, the AprilTag codes are used to refine the ORB-SLAM pose estimates, and a cloud platform is built with Docker + Kubernetes + BRPC + Beego so that the computation-heavy, storage-heavy tasks run in the cloud while the multi-agent side handles tracking and relocalization.
Preferably, this embodiment combines AprilTag road markers, a cloud architecture, multiple robots, and SLAM three-dimensional mapping to realize unmanned collaborative three-dimensional mapping, addressing the difficulty of meeting real-time requirements in collaborative SLAM and the problem of inaccurate localization, and achieving an unmanned collaborative three-dimensional mapping system with good robustness, high accuracy, and strong real-time performance.
In this embodiment, "collecting environmental information" specifically comprises:
building the cloud platform with Docker + Kubernetes + BRPC + Beego so that the multi-agent side communicates with the cloud; specifically, Docker serves as the cloud container, Kubernetes as the container scheduling service, and BRPC and Beego as the network framework;
the multi-agent system comprises one UAV and one unmanned vehicle, which form a centralized architecture;
selecting at least two environmental points and marking them with visual positioning markers, i.e. AprilTag codes.
In this embodiment, "the UAV and the unmanned vehicle form a centralized architecture" specifically comprises:
the front of the UAV carries a first monocular camera whose lens faces downward, and the front of the unmanned vehicle carries a second monocular camera whose lens faces forward.
In this embodiment, "information processing" specifically comprises:
the environmental information comprises image information, from which feature points and descriptors are extracted with the ORB-SLAM algorithm;
depth is recovered with the PnP algorithm to obtain point-cloud information;
the map is initialized on the cloud platform: if a map already exists on the cloud platform, the image information is matched against the keyframes in the cloud to determine the initial position; if no map exists, the image information and the map data become the starting point of the cloud platform's system map (a sketch of this initialization logic follows);
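For illustration, a minimal sketch of this cloud-side initialization in Python is given below; it assumes keyframes are stored as (pose, ORB-descriptor) pairs, and the class name, match threshold, and storage layout are illustrative choices, not taken from the patent.

```python
# Minimal sketch of cloud-side map initialization (illustrative, not the
# patent's implementation): match against stored keyframes if a map exists,
# otherwise seed a new map with the incoming frame.
import numpy as np
import cv2

class CloudMap:
    def __init__(self):
        self.keyframes = []  # list of (pose 4x4, ORB descriptors uint8 [N, 32])

    def initialize(self, frame_descriptors, min_matches=30):
        """Return an initial pose if the frame matches a stored keyframe;
        otherwise start a new map seeded with this frame."""
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        best_pose, best_count = None, 0
        for pose, desc in self.keyframes:
            matches = matcher.match(frame_descriptors, desc)
            if len(matches) > best_count:
                best_pose, best_count = pose, len(matches)
        if best_count >= min_matches:   # a map exists: relocalize against it
            return best_pose
        # no usable map: this frame becomes the origin of the system map
        self.keyframes.append((np.eye(4), frame_descriptors))
        return np.eye(4)
```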
the camera pose is estimated by matching feature-point pairs or by relocalization;
the relationship between image feature points and the local point-cloud map is established;
keyframes satisfying the keyframe criteria are extracted and uploaded to the cloud, as sketched below.
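The keyframe decision itself can be sketched as follows; the heuristics mirror the spirit of ORB-SLAM's Tracking thread (minimum tracking quality, maximum frame gap, sufficient view change), but the exact thresholds are assumptions rather than values from ORB-SLAM or the patent.

```python
# Illustrative keyframe criteria and upload hook (thresholds are assumptions).
def is_keyframe(frames_since_kf, tracked_points, points_in_ref_kf,
                max_gap=20, min_points=50, ref_ratio=0.9):
    if tracked_points < min_points:   # tracking too weak for a reliable keyframe
        return False
    if frames_since_kf >= max_gap:    # avoid long gaps between keyframes
        return True
    # insert a keyframe once the view has changed enough relative to the
    # reference keyframe (few shared points left)
    return tracked_points < ref_ratio * points_in_ref_kf

def maybe_upload(frame, state, upload_fn):
    if is_keyframe(state['gap'], frame['tracked'], state['ref_points']):
        upload_fn(frame)              # hand the keyframe to the cloud
        state['gap'] = 0
    else:
        state['gap'] += 1
```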
In this embodiment, "establishing the relationship between image feature points and the local point-cloud map" specifically comprises:
when local-map tracking fails because of occlusion or missing texture in the environment, the system relocalizes in the following ways:
relocalizing and matching reference frames in the local map on board the UAV or the unmanned vehicle;
relocalizing on the cloud platform with the information of the current frame.
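A compact sketch of this two-tier fallback follows, with the local matcher and the cloud query passed in as callables; both are hypothetical placeholders for the on-board matcher and the cloud RPC.

```python
# Two-tier relocalization: try the on-board local map first, then the cloud.
def relocalize(frame, local_match, cloud_match):
    """local_match / cloud_match: callables returning a 4x4 pose or None."""
    pose = local_match(frame)       # tier 1: on-board local map
    if pose is None:
        pose = cloud_match(frame)   # tier 2: cloud global map
    return pose
```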
In this embodiment, "detecting visual positioning markers through the cloud" specifically comprises:
performing image edge detection;
screening out quadrilateral contour edges;
decoding the quadrilateral contour edges to identify the visual positioning markers, i.e. the AprilTag road markers.
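This edge-detect / quadrilateral-screen / decode pipeline is what an AprilTag detector performs internally; a minimal sketch using OpenCV's aruco module follows (the ArucoDetector API assumes OpenCV >= 4.7; older releases expose the same functionality through cv2.aruco.detectMarkers).

```python
# Minimal AprilTag detection sketch with OpenCV's aruco module.
import cv2

def detect_apriltags(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _rejected = detector.detectMarkers(gray)
    return corners, ids  # corners: 4 image points per marker; ids: decoded tag IDs
```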
In this embodiment, "optimizing the pose estimation of the UAV's visual odometry with the visual positioning markers" specifically comprises:
defining the coordinate systems: the UAV-mounted camera coordinate system P_C, the UAV coordinate system P_A, the marker coordinate system P_B, and the world coordinate system P_W, where P_W is defined by the UAV's first frame;
the YOZ plane of the camera coordinate system P_C is parallel to the YOZ plane of the UAV coordinate system P_A, and the origin of P_A is set at the UAV's center;
computing the relationship between the camera coordinate system P_C and the world coordinate system P_W;
computing the relative pose (rotation R_CB and translation t_CB) between the camera coordinate system P_C and the marker coordinate system P_B;
computing the trajectory error from the relative pose obtained through the visual positioning markers (AprilTags) and the relative pose obtained from visual odometry, and distributing this error evenly over each keyframe of the UAV, so that the error between the loop-closure keyframes and the true trajectory is reduced.
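A minimal sketch of the even error distribution over keyframes is shown below; it corrects only the translation component (a full implementation would also interpolate rotation, e.g. with spherical interpolation), and the linear weighting is an assumption about how "evenly" is realized.

```python
# Spread the marker-vs-odometry trajectory error linearly over the keyframes.
import numpy as np

def distribute_error(keyframe_positions, marker_position, odom_position):
    """keyframe_positions: [N, 3] odometry positions up to the marker sighting."""
    error = marker_position - odom_position        # trajectory error at the marker
    n = len(keyframe_positions)
    fractions = np.linspace(0.0, 1.0, n)[:, None]  # 0 at start, 1 at the marker
    return keyframe_positions + fractions * error  # each keyframe absorbs its share
```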
In this embodiment, "computing the relationship between the camera coordinate system P_C and the world coordinate system P_W" specifically comprises:
the UAV coordinate system P_A and the camera coordinate system P_C are parallel, so that
P_A = P_C + t_AC,
where P_A denotes coordinates in the UAV coordinate system, P_C denotes coordinates in the camera coordinate system, and t_AC is the translation vector between P_A and P_C, i.e. the offset of the camera from the UAV's center;
the marker coordinate system P_B and the world coordinate system P_W satisfy
P_W = P_B + t_WB,
where P_W denotes coordinates in the world coordinate system, P_B denotes coordinates in the marker coordinate system, and t_WB is the translation vector between P_W and P_B;
The angles φ, θ, and ψ are Euler angles. Let the rotation matrix from the world coordinate system P_W to the UAV coordinate system P_A be R_AW, and the rotation matrix from the marker coordinate system P_B to the camera coordinate system P_C be R_CB; each is parametrized by the Euler angles as the standard Z-Y-X rotation matrix
R(φ, θ, ψ) =
[ cθ·cψ, cθ·sψ, −sθ ]
[ sφ·sθ·cψ − cφ·sψ, sφ·sθ·sψ + cφ·cψ, sφ·cθ ]
[ cφ·sθ·cψ + sφ·sψ, cφ·sθ·sψ − sφ·cψ, cφ·cθ ],
where c denotes cos and s denotes sin. Since P_C is parallel to P_A and P_B is parallel to P_W, the rotational relationship between the marker coordinate system P_B and the camera coordinate system P_C is
R_CB = R_AW = R(φ, θ, ψ).
The transformation from the camera coordinate system P_C to the marker coordinate system P_B is
P_B = R_BC · P_C + t_BC,
where R_BC is the rotation matrix from the camera coordinate system P_C to the marker coordinate system P_B and t_BC is the translation vector from P_C to P_B;
the relationship from the camera coordinate system P_C to the world coordinate system P_W is then
P_W = R_WA · (P_C + t_AC) + t_WA,
where R_WA is the rotation matrix from the UAV coordinate system P_A to the world coordinate system P_W, t_WA is the translation vector from P_A to P_W, and t_AC is the translation vector from P_A to the camera coordinate system P_C; here R_WA and t_WA are unknown.
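For illustration, the frame chain above can be sketched numerically as follows; the Z-Y-X Euler convention mirrors the matrix reconstructed above but remains an assumption, and the function names are illustrative.

```python
# Chain P_C -> P_A (pure translation, parallel frames) -> P_W (rotation + translation).
import numpy as np

def rot_zyx_world_to_body(phi, theta, psi):
    """R_AW: rotation taking world coordinates into the UAV body frame,
    parametrized by roll phi, pitch theta, yaw psi (Z-Y-X convention)."""
    c, s = np.cos, np.sin
    Rz = np.array([[c(psi), -s(psi), 0], [s(psi), c(psi), 0], [0, 0, 1]])
    Ry = np.array([[c(theta), 0, s(theta)], [0, 1, 0], [-s(theta), 0, c(theta)]])
    Rx = np.array([[1, 0, 0], [0, c(phi), -s(phi)], [0, s(phi), c(phi)]])
    return (Rz @ Ry @ Rx).T          # body->world transposed gives world->body

def camera_to_world(p_c, t_ac, phi, theta, psi, t_wa):
    p_a = p_c + t_ac                 # camera -> UAV: parallel frames
    R_wa = rot_zyx_world_to_body(phi, theta, psi).T  # invert to get UAV -> world
    return R_wa @ p_a + t_wa         # UAV -> world
```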
In this embodiment, "computing the relative pose R_CB, t_CB between the camera coordinate system P_C and the marker coordinate system P_B" specifically comprises:
projecting the visual positioning marker onto the camera's 2D pixel plane with the camera model, which gives
[u, v, 1]^T = s · M · (R_CB · [X_B, Y_B, Z_B]^T + t_CB),
where M denotes the camera intrinsic matrix, [u, v, 1] denotes the coordinates of the projected marker on the normalized plane, [X_B, Y_B, Z_B] denotes the coordinates of the marker in the marker coordinate system P_B, t_CB denotes the translation vector from the marker coordinate system P_B to the camera coordinate system P_C, R_CB denotes the rotation matrix from P_B to P_C, s = 1/Z_C denotes the unknown scale factor, and Z_C denotes the Z-axis coordinate of the marker in the camera coordinate system; R_CB and t_CB are computed with the DLT (Direct Linear Transform) algorithm.
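The patent solves this projection with the DLT; the sketch below uses OpenCV's solvePnP with the IPPE_SQUARE solver (designed for planar square tags) as a stand-in, and the tag size is an illustrative parameter.

```python
# Recover (R_CB, t_CB) from the four detected marker corners.
import cv2
import numpy as np

def marker_pose(corners_2d, K, tag_size=0.2):
    half = tag_size / 2.0
    # corner coordinates in the marker frame P_B (tag plane at Z_B = 0)
    obj = np.array([[-half,  half, 0], [ half,  half, 0],
                    [ half, -half, 0], [-half, -half, 0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, corners_2d.astype(np.float64), K, None,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    R_cb, _ = cv2.Rodrigues(rvec)   # rotation marker frame -> camera frame
    return ok, R_cb, tvec           # tvec: translation marker -> camera
```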
In this embodiment, "optimizing the pose estimation of the unmanned vehicle's visual odometry with the visual positioning markers" specifically comprises:
defining the coordinate systems: the vehicle-mounted camera coordinate system P_C, the marker coordinate system P_B, and the world coordinate system P_W, where P_W is defined by the UAV's first frame and the relationship between the vehicle-mounted camera coordinate system P_C and the vehicle coordinate system P_A is fixed;
obtaining the relative pose T_cw between the vehicle camera coordinate system P_C and the world coordinate system P_W, the relative pose T_bc between the marker coordinate system P_B and the camera coordinate system P_C, and the relative pose T_bw between the marker coordinate system P_B and the world coordinate system P_W;
optimizing the vehicle pose and the point-cloud coordinates;
defining the relative error between the marker coordinate system P_B and the vehicle camera coordinate system P_C as the discrepancy between the marker pose predicted through the estimated trajectory, T_cw · T_bc, and the directly measured marker pose T_bw;
constructing the optimization objective function that minimizes this relative error over the variables
T_cw ∈ {(R_cw, t_cw) | R_cw ∈ SO(3), t_cw ∈ R³}, T_bc ∈ {(R_bc, t_bc) | R_bc ∈ SO(3), t_bc ∈ R³},
where SO(3) denotes the three-dimensional special orthogonal group, R³ denotes three-dimensional real space, t_cw denotes the translation from the vehicle camera coordinate system P_C to the world coordinate system P_W, t_bc denotes the translation from the marker coordinate system P_B to the vehicle camera coordinate system P_C, R_cw denotes the rotation from P_C to P_W, and R_bc denotes the rotation from P_B to P_C;
camera motion causes not only the rotation errors R_cw, R_bc and translation errors t_cw, t_bc but also drift in scale, so a scale-aware transformation is performed with the Sim(3) algorithm:
S_cw = (R_cw, t_cw, s = 1), (R_cw, t_cw) = T_cw
S_bc = (R_bc, t_bc, s = 1), (R_bc, t_bc) = T_bc
where the Sim(3) algorithm solves the similarity transformation from three pairs of matched points, recovering the rotation matrix, translation vector, and scale between two coordinate systems; S_cw denotes the similarity transformation of the marker points from the world coordinate system P_W to the vehicle camera coordinate system P_C, S_bc denotes the similarity transformation of the marker points from the marker coordinate system P_B to the vehicle camera coordinate system P_C, and s denotes the unknown scale factor;
assuming the optimized Sim(3) pose is Ŝ_bw = (R̂_bw, t̂_bw, ŝ), the corrected pose replaces S_bw = (R_bw, t_bw, s), where R_bw denotes the rotation matrix of the marker points from the world coordinate system P_W to the marker coordinate system P_B, t_bw denotes the corresponding translation, s denotes the unknown scale factor, R̂_bw, t̂_bw and ŝ denote the optimized rotation matrix, translation vector and scale factor, and Ŝ_bw denotes the optimized similarity transformation;
setting the 3D position of the unmanned vehicle before optimization as p_i, the transformed coordinates are obtained (as sketched below) as
p̂_i = ŝ · R̂_bw · p_i + t̂_bw,
where p̂_i denotes the optimized pose of the unmanned vehicle.
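A minimal numeric sketch of this final correction follows, assuming the optimized Sim(3) is given as (R̂, t̂, ŝ); the names are illustrative.

```python
# Apply the optimized similarity transform p' = s_hat * R_hat @ p + t_hat.
import numpy as np

def apply_sim3(points, R_hat, t_hat, s_hat):
    """points: [N, 3] positions before optimization; returns corrected [N, 3]."""
    return s_hat * (points @ R_hat.T) + t_hat

# an identity Sim(3) with scale 1 leaves the trajectory unchanged:
assert np.allclose(apply_sim3(np.ones((5, 3)), np.eye(3), np.zeros(3), 1.0),
                   np.ones((5, 3)))
```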
As shown in Fig. 3, a collaborative three-dimensional mapping system, used to implement the collaborative three-dimensional mapping method described above, comprises:
an environment preparation module for collecting environmental information;
an information processing module for extracting keyframes from the collected environmental information, following the design of the Tracking thread in the ORB-SLAM framework;
a detection module for detecting visual positioning markers, i.e. road markers (AprilTags), through the cloud;
a first optimization module for optimizing the pose estimation of the UAV's visual odometry with the visual positioning markers;
a second optimization module for optimizing the pose estimation of the unmanned vehicle's visual odometry with the visual positioning markers;
an execution module for completing the local-mapping thread and the loop-closing thread of the ORB-SLAM framework in the cloud.
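For illustration, the five modules could be wired together as follows; every class and method name here is a hypothetical placeholder, not taken from the patent text.

```python
# Illustrative wiring of the modules described above.
class CollaborativeMapper:
    def __init__(self, env, tracker, detector, uav_opt, ugv_opt, cloud):
        self.env, self.tracker = env, tracker          # environment prep + tracking
        self.detector = detector                       # cloud-side marker detection
        self.uav_opt, self.ugv_opt = uav_opt, ugv_opt  # the two optimization modules
        self.cloud = cloud                             # local mapping + loop closing

    def step(self, uav_frame, ugv_frame):
        for frame, opt in ((uav_frame, self.uav_opt), (ugv_frame, self.ugv_opt)):
            kf = self.tracker.track(frame)             # on-board tracking thread
            if kf is not None:                         # keyframe selected -> cloud
                markers = self.detector.detect(kf)
                opt.refine(kf, markers)                # marker-based pose refinement
                self.cloud.insert_keyframe(kf)         # local mapping + loop closing
```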
Compared with the prior art, the collaborative three-dimensional mapping method and system provided by this embodiment combine AprilTag road markers, a cloud architecture, multiple robots, and SLAM three-dimensional mapping to realize unmanned collaborative three-dimensional mapping, addressing the difficulty of meeting real-time requirements in collaborative SLAM and the problem of inaccurate localization, and achieving a collaborative three-dimensional mapping system with good robustness, high accuracy, and strong real-time performance.
Those skilled in the art will understand that the drawings are merely schematic diagrams of a preferred implementation scenario, and the modules or processes in the drawings are not necessarily required to implement the invention.
Those skilled in the art will understand that the modules of the apparatus in an implementation scenario may be distributed as described, or may be located, with corresponding changes, in one or more apparatuses different from this scenario; the modules of the above scenario may be merged into one module or further split into multiple sub-modules.
The above serial numbers of the invention are for description only and do not indicate the superiority of any implementation scenario.
The above discloses only a few specific implementation scenarios of the invention; however, the invention is not limited thereto, and any variation conceivable to those skilled in the art shall fall within the protection scope of the invention.
Claims (10)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111510369.3A CN114332360A (en) | 2021-12-10 | 2021-12-10 | A collaborative three-dimensional mapping method and system |
PCT/CN2022/138183 WO2023104207A1 (en) | 2021-12-10 | 2022-12-09 | Collaborative three-dimensional mapping method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111510369.3A CN114332360A (en) | 2021-12-10 | 2021-12-10 | A collaborative three-dimensional mapping method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114332360A true CN114332360A (en) | 2022-04-12 |
Family
ID=81051491
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111510369.3A Pending CN114332360A (en) | 2021-12-10 | 2021-12-10 | A collaborative three-dimensional mapping method and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114332360A (en) |
WO (1) | WO2023104207A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115965673A (en) * | 2022-11-23 | 2023-04-14 | 中国建筑一局(集团)有限公司 | Centralized multi-robot positioning method based on binocular vision |
CN115965758A (en) * | 2022-12-28 | 2023-04-14 | 无锡东如科技有限公司 | Three-dimensional reconstruction method for image cooperation monocular instance |
CN116228870A (en) * | 2023-05-05 | 2023-06-06 | 山东省国土测绘院 | Mapping method and system based on two-dimensional code SLAM precision control |
WO2023104207A1 (en) * | 2021-12-10 | 2023-06-15 | 深圳先进技术研究院 | Collaborative three-dimensional mapping method and system |
CN118010008A (en) * | 2024-04-08 | 2024-05-10 | 西北工业大学 | Binocular SLAM and inter-machine loop optimization-based double unmanned aerial vehicle co-location method |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116934829B (en) * | 2023-09-15 | 2023-12-12 | 天津云圣智能科技有限责任公司 | Unmanned aerial vehicle target depth estimation method and device, storage medium and electronic equipment |
CN117058209B (en) * | 2023-10-11 | 2024-01-23 | 山东欧龙电子科技有限公司 | Method for calculating depth information of visual image of aerocar based on three-dimensional map |
CN117893693B (en) * | 2024-03-15 | 2024-05-28 | 南昌航空大学 | Dense SLAM three-dimensional scene reconstruction method and device |
CN117906595B (en) * | 2024-03-20 | 2024-06-21 | 常熟理工学院 | Scene understanding navigation method and system based on feature point method visual SLAM |
CN118031976B (en) * | 2024-04-15 | 2024-07-09 | 中国科学院国家空间科学中心 | A human-machine collaborative system for exploring unknown environments |
CN118424256A (en) * | 2024-04-18 | 2024-08-02 | 北京化工大学 | Map building and positioning method and device for distributed multi-resolution map fusion |
CN118212294B (en) * | 2024-05-11 | 2024-09-27 | 济南昊中自动化有限公司 | Automatic method and system based on three-dimensional visual guidance |
CN118169729B (en) * | 2024-05-14 | 2024-07-19 | 北京易控智驾科技有限公司 | A method, device and storage medium for positioning an unmanned vehicle |
CN118470099B (en) * | 2024-07-15 | 2024-09-24 | 济南大学 | Method and device for measuring object spatial posture based on monocular camera |
CN118938994B (en) * | 2024-07-19 | 2025-02-14 | 广德瑞鹰智能科技有限责任公司 | A multi-UAV collaborative inspection method and system based on reinforcement learning |
CN118521646B (en) * | 2024-07-25 | 2024-11-19 | 中国铁塔股份有限公司江西省分公司 | Image processing-based multi-machine type unmanned aerial vehicle power receiving frame alignment method and system |
CN119339004A (en) * | 2024-12-19 | 2025-01-21 | 国网江苏省电力有限公司建设分公司 | Substation equipment point cloud replacement method based on principal component analysis method |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112115874A (en) * | 2020-09-21 | 2020-12-22 | 武汉大学 | Cloud-fused visual SLAM system and method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3474230B1 (en) * | 2017-10-18 | 2020-07-22 | Tata Consultancy Services Limited | Systems and methods for edge points based monocular visual slam |
CN110221623B (en) * | 2019-06-17 | 2024-10-18 | 酷黑科技(北京)有限公司 | Air-ground collaborative operation system and positioning method thereof |
CN111595333B (en) * | 2020-04-26 | 2023-07-28 | 武汉理工大学 | Modular unmanned vehicle positioning method and system based on visual inertial laser data fusion |
CN114332360A (en) * | 2021-12-10 | 2022-04-12 | 深圳先进技术研究院 | A collaborative three-dimensional mapping method and system |
- 2021-12-10: CN application CN202111510369.3A filed (CN114332360A, status: Pending)
- 2022-12-09: WO application PCT/CN2022/138183 filed (WO2023104207A1, Application Filing)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112115874A (en) * | 2020-09-21 | 2020-12-22 | 武汉大学 | Cloud-fused visual SLAM system and method |
Non-Patent Citations (4)
Title |
---|
TOM HARDY: "一文详解PnP算法原理" [A detailed explanation of the principles of the PnP algorithm], CSDN forum, 18 October 2021, pages 1-3
刘盛 et al. [LIU Sheng et al.]: "空地正交视角下的多机器人协同定位及融合建图" [Multi-robot cooperative localization and fused mapping under orthogonal air-ground views], Control Theory & Applications, vol. 35, no. 12, 15 December 2018, pages 1779-1787
杨清娴 [YANG Qingxian]: "基于计算机视觉的无人机智能管家飞控系统研究" [Research on a computer-vision-based intelligent UAV butler flight-control system], China Master's Theses Full-text Database, Engineering Science and Technology II, no. 01, 15 January 2020, pages 031-150
沈晓卫 et al. [SHEN Xiaowei et al.]: 《卫星动中通技术》 [Satellite Communications on the Move], vol. 1, Beijing University of Posts and Telecommunications Press, 30 April 2020, pages 161-162
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023104207A1 (en) * | 2021-12-10 | 2023-06-15 | 深圳先进技术研究院 | Collaborative three-dimensional mapping method and system |
CN115965673A (en) * | 2022-11-23 | 2023-04-14 | 中国建筑一局(集团)有限公司 | Centralized multi-robot positioning method based on binocular vision |
CN115965673B (en) * | 2022-11-23 | 2023-09-12 | 中国建筑一局(集团)有限公司 | Centralized multi-robot positioning method based on binocular vision |
CN115965758A (en) * | 2022-12-28 | 2023-04-14 | 无锡东如科技有限公司 | Three-dimensional reconstruction method for image cooperation monocular instance |
CN116228870A (en) * | 2023-05-05 | 2023-06-06 | 山东省国土测绘院 | Mapping method and system based on two-dimensional code SLAM precision control |
CN118010008A (en) * | 2024-04-08 | 2024-05-10 | 西北工业大学 | Binocular SLAM and inter-machine loop optimization-based double unmanned aerial vehicle co-location method |
CN118010008B (en) * | 2024-04-08 | 2024-06-07 | 西北工业大学 | Binocular SLAM and inter-machine loop optimization-based double unmanned aerial vehicle co-location method |
Also Published As
Publication number | Publication date |
---|---|
WO2023104207A1 (en) | 2023-06-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114332360A (en) | A collaborative three-dimensional mapping method and system | |
CN109211241B (en) | Autonomous positioning method of UAV based on visual SLAM | |
CN112734852B (en) | Robot mapping method and device and computing equipment | |
CN109270534B (en) | An online calibration method for smart car laser sensor and camera | |
Seok et al. | Rovo: Robust omnidirectional visual odometry for wide-baseline wide-fov camera systems | |
CN110930495A (en) | Multi-unmanned aerial vehicle cooperation-based ICP point cloud map fusion method, system, device and storage medium | |
CN110033489A (en) | A kind of appraisal procedure, device and the equipment of vehicle location accuracy | |
CN106840148A (en) | Wearable positioning and path guide method based on binocular camera under outdoor work environment | |
CN106940186A (en) | A kind of robot autonomous localization and air navigation aid and system | |
CN109615698A (en) | Multiple no-manned plane SLAM map blending algorithm based on the detection of mutual winding | |
CN102419178A (en) | Mobile robot positioning system and method based on infrared road signs | |
CN108519102A (en) | A binocular vision odometry calculation method based on reprojection | |
Zhao et al. | RTSfM: Real-time structure from motion for mosaicing and DSM mapping of sequential aerial images with low overlap | |
CN116989772B (en) | An air-ground multi-modal multi-agent collaborative positioning and mapping method | |
CN115371673A (en) | A binocular camera target location method based on Bundle Adjustment in an unknown environment | |
CN109459759A (en) | City Terrain three-dimensional rebuilding method based on quadrotor drone laser radar system | |
US11514588B1 (en) | Object localization for mapping applications using geometric computer vision techniques | |
CN111812978B (en) | Cooperative SLAM method and system for multiple unmanned aerial vehicles | |
CN109871024A (en) | A UAV Pose Estimation Method Based on Lightweight Visual Odometry | |
Jian et al. | Lvcp: Lidar-vision tightly coupled collaborative real-time relative positioning | |
CN111862200A (en) | A method of unmanned aerial vehicle positioning in coal shed | |
Dubois et al. | AirMuseum: a heterogeneous multi-robot dataset for stereo-visual and inertial simultaneous localization and mapping | |
CN110160503A (en) | A kind of unmanned plane landscape matching locating method for taking elevation into account | |
Xue et al. | Visual-marker based localization for flat-variation scene | |
Roozing et al. | Low-cost vision-based 6-DOF MAV localization using IR beacons |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |