CN114332360A - Collaborative three-dimensional mapping method and system - Google Patents


Info

Publication number
CN114332360A
Authority
CN
China
Prior art keywords
coordinate system
unmanned aerial vehicle
camera
visual positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111510369.3A
Other languages
Chinese (zh)
Inventor
徐坤
冯时羽
李慧云
党少博
潘仲鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202111510369.3A
Publication of CN114332360A
Priority to PCT/CN2022/138183
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a collaborative three-dimensional mapping method and system comprising the following steps: detecting a visual positioning mark through a cloud end; optimizing the pose estimation of the unmanned aerial vehicle visual odometer through the visual positioning mark; optimizing the pose estimation of the unmanned vehicle visual odometer through the visual positioning mark; and completing the local map construction thread and the closed loop detection thread of the ORB-SLAM framework through the cloud end. Compared with the prior art, the method is built on an ORB-SLAM framework and a cloud end: the unmanned aerial vehicle and the unmanned vehicle run the tracking thread of ORB-SLAM, while the cloud end runs the local map construction thread and the closed loop detection thread, and the visual positioning mark is used to optimize the pose estimation of both the unmanned aerial vehicle and the unmanned vehicle visual odometers. This addresses the difficulty of meeting real-time requirements and the inaccurate positioning of collaborative SLAM systems, and yields a collaborative three-dimensional mapping system with good robustness, high precision and strong real-time performance.

Description

Collaborative three-dimensional mapping method and system
Technical Field
The invention relates to the field of collaborative three-dimensional mapping, in particular to a collaborative three-dimensional mapping method and a collaborative three-dimensional mapping system.
Background
In the prior art, multi-robot three-dimensional mapping has been realized with road signs and monocular camera sensors, but such systems have poor real-time performance.
Single-robot two-dimensional mapping has been realized with road signs and a cloud architecture, but such systems are not suitable for large-scale environments.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a collaborative three-dimensional mapping method and a collaborative three-dimensional mapping system, and the specific technical scheme is as follows:
a collaborative three-dimensional mapping method comprises the following steps:
detecting the visual positioning mark through a cloud end;
optimizing pose estimation of the unmanned aerial vehicle visual odometer through the visual positioning mark;
optimizing the pose estimation of the unmanned vehicle vision odometer through the vision positioning mark;
and completing a local map construction thread and a closed loop detection thread of the ORB-SLAM framework through the cloud.
In a specific embodiment, the method further comprises the following steps:
collecting environment information; building a cloud platform using Docker as the cloud container, Kubernetes as the container scheduling service, and BRPC and Beego as the network frameworks, so that the multi-agent end can communicate with the cloud end;
the multi-agent comprises the unmanned aerial vehicle and the unmanned vehicle, the unmanned aerial vehicle and the unmanned vehicle form a centralized system structure, a first monocular camera is arranged in the front of the unmanned aerial vehicle, the lens of the first monocular camera faces downwards, a second monocular camera is arranged in the front of the unmanned vehicle, and the lens of the second monocular camera faces forwards;
and selecting at least 2 environment points and placing a printed visual positioning mark at each.
In a specific embodiment, the method further comprises the following steps:
the environment information comprises image information, and feature points and descriptors are extracted from the image information by adopting an ORB-SLAM algorithm;
obtaining depth through a PnP algorithm to obtain point cloud information;
map initialization is carried out using the cloud platform: if a map already exists on the cloud platform, the image information is matched against the cloud key frames to determine an initial position; if no map exists, the image information, local map and related information are used as the start of the cloud platform system map;
estimating the pose of the camera by matching the feature point pairs or a repositioning method;
establishing a relation between the image feature points and the local point cloud map;
and extracting key frames according to the key-frame judgment conditions and uploading them to the cloud.
In a specific embodiment, the "establishing a relationship between the image feature point and the local point cloud map" specifically includes:
when tracking against the local map fails due to occlusion, missing texture in the environment, or similar causes, the system relocalizes in the following ways:
relocating by matching against reference frames in the local map on the unmanned aerial vehicle or the unmanned vehicle;
and carrying out repositioning on the cloud platform according to the information of the current frame.
In a specific embodiment, the "detecting the visual positioning mark through a cloud end" specifically includes:
carrying out image edge detection;
screening out the outline edges of the quadrangle;
and decoding the outline edge of the quadrangle and identifying the visual positioning mark.
In a specific embodiment, the "optimizing the pose estimation of the unmanned aerial vehicle visual odometer by the visual positioning mark" specifically includes:
defining coordinate systems: the unmanned aerial vehicle-mounted camera coordinate system P_C, the unmanned aerial vehicle coordinate system P_A, the visual positioning mark coordinate system P_B and the world coordinate system P_W, the world coordinate system P_W being defined by the first frame of the unmanned aerial vehicle;
the YOZ plane of the unmanned aerial vehicle-mounted camera coordinate system P_C is parallel to the YOZ plane of the unmanned aerial vehicle coordinate system P_A, and the origin of the unmanned aerial vehicle coordinate system P_A is at the center of the unmanned aerial vehicle;
calculating the relationship from the unmanned aerial vehicle-mounted camera coordinate system P_C to the world coordinate system P_W;
calculating the relative pose, namely the rotation R_CB and the translation t_CB, between the unmanned aerial vehicle-mounted camera coordinate system P_C and the visual positioning mark coordinate system P_B;
and solving the trajectory error from the relative pose obtained from the visual positioning mark and the relative pose obtained from the visual odometer, and dividing this error equally over the key frames of the unmanned aerial vehicle, so that the error between the closed-loop key frames and the actual trajectory is reduced.
In a specific embodiment, the "calculating the relationship from the unmanned aerial vehicle-mounted camera coordinate system P_C to the world coordinate system P_W" specifically includes:
the unmanned aerial vehicle coordinate system P_A and the unmanned aerial vehicle-mounted camera coordinate system P_C are parallel, so that

P_A = P_C + t_AC

where P_A denotes coordinates in the unmanned aerial vehicle coordinate system, P_C denotes coordinates in the unmanned aerial vehicle-mounted camera coordinate system, and t_AC is the translation vector between the unmanned aerial vehicle coordinate system P_A and the unmanned aerial vehicle-mounted camera coordinate system P_C, representing the offset of the camera from the center of the unmanned aerial vehicle;
the visual positioning mark coordinate system P_B and the world coordinate system P_W satisfy

P_W = P_B + t_WB

where P_W denotes coordinates in the world coordinate system, P_B denotes coordinates in the visual positioning mark coordinate system, and t_WB is the translation vector between the world coordinate system P_W and the visual positioning mark coordinate system P_B;
the angles phi, theta and psi are Euler angles; the rotation matrix from the world coordinate system P_W to the unmanned aerial vehicle coordinate system P_A is denoted R_AW, and the rotation matrix from the visual positioning mark coordinate system P_B to the unmanned aerial vehicle-mounted camera coordinate system P_C is denoted R_CB; both are written out in terms of the Euler angles, with c denoting cos and s denoting sin, and from these expressions the rotational relationship between the visual positioning mark coordinate system P_B and the unmanned aerial vehicle-mounted camera coordinate system P_C is obtained;
the relational expression from the unmanned aerial vehicle-mounted camera coordinate system P_C to the visual positioning mark coordinate system P_B is

P_B = R_BC P_C + t_BC

where R_BC is the rotation matrix from the unmanned aerial vehicle-mounted camera coordinate system P_C to the visual positioning mark coordinate system P_B and t_BC is the corresponding translation vector;
the relationship from the unmanned aerial vehicle-mounted camera coordinate system P_C to the world coordinate system P_W is then obtained as

P_W = R_WA (P_C + t_AC) + t_WA

where R_WA is the rotation matrix from the unmanned aerial vehicle coordinate system P_A to the world coordinate system P_W, t_WA is the translation vector from the unmanned aerial vehicle coordinate system P_A to the world coordinate system P_W, and t_AC is the translation vector between the unmanned aerial vehicle coordinate system P_A and the unmanned aerial vehicle-mounted camera coordinate system P_C.
In a specific embodiment, the "calculating the relative pose R_CB and t_CB between the unmanned aerial vehicle-mounted camera coordinate system P_C and the visual positioning mark coordinate system P_B" specifically includes:
projecting the visual positioning mark onto the 2D pixel plane of the camera using the camera model gives

[u, v, 1]^T = s M (R_CB [X_B, Y_B, Z_B]^T + t_CB), with s = 1/Z_C

where M is the camera intrinsic matrix, [u, v, 1] are the coordinates of the projection of the visual positioning mark on the normalized plane, [X_B, Y_B, Z_B] are the coordinates of the visual positioning mark in the visual positioning mark coordinate system P_B, t_CB is the translation vector from the visual positioning mark coordinate system P_B to the unmanned aerial vehicle-mounted camera coordinate system P_C, R_CB is the rotation matrix from the visual positioning mark coordinate system P_B to the unmanned aerial vehicle-mounted camera coordinate system P_C, s = 1/Z_C is an unknown scale factor and Z_C is the Z-axis coordinate of the visual positioning mark in the camera coordinate system; R_CB and t_CB are then obtained with a direct linear transformation algorithm.
in a specific embodiment, the "optimizing the pose estimation of the unmanned vehicle visual odometer by the visual positioning mark" specifically includes:
defining coordinate systems: the unmanned vehicle-mounted camera coordinate system P_C, the visual positioning mark coordinate system P_B and the world coordinate system P_W, the world coordinate system P_W being defined by the first frame of the unmanned aerial vehicle, and the relationship between the unmanned vehicle-mounted camera coordinate system P_C and the unmanned vehicle coordinate system P_A being determined;
obtaining the relative pose T_cw between the unmanned vehicle-mounted camera coordinate system P_C and the world coordinate system P_W, the relative pose T_bc between the visual positioning mark coordinate system P_B and the unmanned vehicle-mounted camera coordinate system P_C, and the relative pose T_bw between the visual positioning mark coordinate system P_B and the world coordinate system P_W;
optimizing the pose and the point cloud coordinates of the unmanned vehicle;
defining the relative error between the visual positioning mark coordinate system P_B and the unmanned vehicle-mounted camera coordinate system P_C as the discrepancy between the pose T_bc measured from the visual positioning mark and the pose predicted from T_bw and T_cw;
constructing an optimization objective function that minimizes this relative error, in which:

T_cw ∈ {(R_cw, t_cw) | R_cw ∈ SO(3), t_cw ∈ R^3}, T_bc ∈ {(R_bc, t_bc) | R_bc ∈ SO(3), t_bc ∈ R^3}

where SO(3) denotes the three-dimensional special orthogonal group and R^3 denotes the set of three-dimensional real vectors; t_cw denotes the translation from the unmanned vehicle-mounted camera coordinate system P_C to the world coordinate system P_W, t_bc the translation from the visual positioning mark coordinate system P_B to the unmanned vehicle-mounted camera coordinate system P_C, R_cw the rotation from the unmanned vehicle-mounted camera coordinate system P_C to the world coordinate system P_W, and R_bc the rotation from the visual positioning mark coordinate system P_B to the unmanned vehicle-mounted camera coordinate system P_C;
camera motion introduces not only rotation errors in R_cw, R_bc and translation errors in t_cw, t_bc but also scale drift, so the transformation is made scale-aware and the Sim3 transformation algorithm is used, giving:

S_cw = (R_cw, t_cw, s = 1), with (R_cw, t_cw) = T_cw
S_bc = (R_bc, t_bc, s = 1), with (R_bc, t_bc) = T_bc

where S_cw is the similarity transformation of the visual positioning mark points from the world coordinate system P_W to the unmanned vehicle-mounted camera coordinate system P_C, S_bc is the similarity transformation of the visual positioning mark points from the visual positioning mark coordinate system P_B to the unmanned vehicle-mounted camera coordinate system P_C, and s denotes the unknown scale factor;
assuming that the optimization yields an optimized Sim3 pose, i.e. an optimized rotation matrix, translation vector and scale factor forming an optimized similarity transformation, the corrected pose is computed from this optimized similarity transformation together with S_bw = (R_bw, t_bw, s), where R_bw is the rotation of the visual positioning mark points from the world coordinate system P_W to the visual positioning mark coordinate system P_B, t_bw is the corresponding translation vector, and s denotes the unknown scale factor;
letting the 3D position of the unmanned vehicle before optimization be P, the transformed coordinates are obtained by applying the optimized similarity transformation to P, and these transformed coordinates represent the optimized pose of the unmanned vehicle.
A collaborative three-dimensional mapping system for implementing the above collaborative three-dimensional mapping method comprises:
the environment preparation module is used for acquiring environment information;
the information processing module is used for extracting the key frame from the acquired environment information by adopting a Tracking thread design idea in an ORB-SLAM algorithm framework;
the detection module is used for detecting the visual positioning mark through the cloud end;
the first optimization module is used for optimizing the pose estimation of the unmanned aerial vehicle visual odometer through the visual positioning mark;
the second optimization module is used for optimizing the pose estimation of the unmanned vehicle visual odometer through the visual positioning mark;
and the execution module is used for completing a local map construction thread and a closed loop detection thread of the ORB-SLAM framework through the cloud.
Compared with the prior art, the invention has the following beneficial effects:
the cooperative three-dimensional mapping method and the cooperative three-dimensional mapping system can solve the problems that the real-time performance of the cooperative SLAM system is difficult to meet and the positioning of the cooperative SLAM system is inaccurate, and can realize the cooperative three-dimensional mapping system with good robustness, high precision and strong real-time performance.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be regarded as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic view of an imaging model of a camera in an embodiment;
FIG. 2 is a flowchart illustrating the three-dimensional collaborative mapping method according to an embodiment;
FIG. 3 is a block diagram of the collaborative three-dimensional mapping system according to an embodiment.
Detailed Description
Examples
As shown in fig. 1-2, the present embodiment provides a collaborative three-dimensional mapping method, including:
preparing environment, and collecting environment information;
processing information, namely extracting a key frame from the acquired environment information by adopting a Tracking thread design idea in an ORB-SLAM algorithm framework;
detecting a visual positioning mark through a cloud end, wherein the visual positioning mark is a road sign;
optimizing pose estimation of the unmanned aerial vehicle visual odometer through the visual positioning mark;
optimizing the pose estimation of the unmanned vehicle vision odometer through the vision positioning mark;
and completing a local map construction thread and a closed loop detection thread of the ORB-SLAM framework through the cloud.
Specifically, the cloud executes the local mapping thread (Local Mapping) and the loop-closure detection thread (Loop Closing) of ORB-SLAM. Collaborative SLAM (CSLAM) is superior to single-robot SLAM in fault tolerance, robustness and execution efficiency, and is important for tasks such as disaster relief, resource exploration and space exploration in unknown environments. However, the computation and storage demands of a CSLAM system are large, and most individual robots cannot meet the real-time requirement on their own. CSLAM systems usually operate in large-scale environments, where accumulated system errors (such as pose estimation errors) cannot be completely eliminated. Moreover, when the environment contains many repetitive features, feature point matching or overlap-region matching algorithms can produce mismatches. The accumulated errors and mismatches degrade the mapping precision of the CSLAM system, so placing a small number of road signs in the environment and letting each robot optimize its own pose against them is of great significance for improving mapping precision. Compared with a two-dimensional map, a three-dimensional map carries richer information and better reflects the objective form of the real world.
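For illustration only, the following minimal Python sketch shows one way the tracking workload can stay on the agent while local mapping and loop closing run elsewhere; the keyframe rule and the in-process queue standing in for the BRPC/Beego network link are hypothetical placeholders, not the patent's implementation.

```python
import queue
import threading

keyframe_queue = queue.Queue()  # stands in for the agent-to-cloud network link

def is_keyframe(frame_id: int) -> bool:
    # Hypothetical judgment condition: every 5th frame becomes a keyframe.
    return frame_id % 5 == 0

def agent_tracking_thread(num_frames: int) -> None:
    """Agent side (UAV or unmanned vehicle): tracking only; uploads selected keyframes."""
    for frame_id in range(num_frames):
        # ... ORB tracking against the local map would happen here ...
        if is_keyframe(frame_id):
            keyframe_queue.put(frame_id)

def cloud_mapping_and_loop_closing() -> None:
    """Cloud side: consumes keyframes, runs local mapping and loop-closure detection."""
    while True:
        frame_id = keyframe_queue.get()
        # ... local mapping and loop-closure detection would happen here ...
        print(f"cloud received keyframe {frame_id}")
        keyframe_queue.task_done()

threading.Thread(target=cloud_mapping_and_loop_closing, daemon=True).start()
agent_tracking_thread(20)
keyframe_queue.join()
```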
Specifically, visual positioning mark (road sign) technology can help camera and laser radar sensors achieve more accurate positioning and mapping; cloud architecture technology can move the complex computations of multi-robot SLAM to the cloud, relieving the limited computing and storage resources of the robots; and a three-dimensional map carries richer environment information, which benefits functions such as navigation and obstacle avoidance for the unmanned aerial vehicle.
Preferably, in a large-scale unknown environment, open locations are selected and marked with road signs (AprilTag codes); a monocular camera is mounted on the unmanned aerial vehicle and on the unmanned vehicle and collects environment information in real time as the multi-agent system travels; the ORB-SLAM framework is used for collaborative three-dimensional mapping, the AprilTag codes are used to optimize the ORB-SLAM pose estimation, a cloud platform is built with Docker + Kubernetes + BRPC + Beego, tasks with heavy computation and storage requirements are deployed on the cloud, and the multi-agent end performs tracking and relocalization.
Preferably, in this embodiment, combining the AprilTag road sign, the cloud architecture, multiple robots and SLAM three-dimensional mapping technology realizes unmanned collaborative three-dimensional mapping, addresses the difficulty of meeting real-time requirements and the inaccurate positioning of collaborative SLAM systems, and yields an unmanned collaborative three-dimensional mapping system with good robustness, high precision and strong real-time performance.
In this embodiment, "collecting environmental information" specifically includes:
the cloud platform is built with Docker + Kubernetes + BRPC + Beego technology so that the multi-agent end can communicate with the cloud end; specifically, Docker serves as the cloud container, Kubernetes as the container scheduling service, and BRPC and Beego as the network frameworks;
the multi-agent comprises an unmanned aerial vehicle and an unmanned vehicle, and the unmanned aerial vehicle and the unmanned vehicle form a centralized system structure;
and selecting at least 2 environment points and placing a visual positioning mark, namely an AprilTag code, at each.
In this embodiment, "unmanned aerial vehicle and unmanned vehicle constitute centralized architecture" specifically includes:
a first monocular camera is mounted at the front of the unmanned aerial vehicle with its lens facing downwards, and a second monocular camera is mounted at the front of the unmanned vehicle with its lens facing forwards.
In this embodiment, "information processing" specifically includes:
the environment information comprises image information, and feature points and descriptors are extracted from the image information by adopting an ORB-SLAM algorithm;
obtaining depth through a PnP algorithm to obtain point cloud information;
carrying out map initialization using the cloud platform: if the cloud platform already has a map, the image information is matched against the cloud key frames to determine an initial position; if not, the image information, local map and related information are used as the start of the cloud platform system map;
estimating the pose of the camera by matching the feature point pairs or a repositioning method;
establishing a relation between the image feature points and the local point cloud map;
and extracting key frames according to the key-frame judgment conditions and uploading them to the cloud (a sketch of this step is given below).
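A minimal sketch of the feature-extraction and keyframe-judgment part of this step, using OpenCV's ORB implementation; the match-count threshold is a hypothetical stand-in for the actual keyframe conditions, which are not spelled out here.

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def process_frame(gray, last_keyframe_descriptors, redundancy_threshold=300):
    """Extract ORB feature points/descriptors and decide whether this frame is a keyframe."""
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if last_keyframe_descriptors is None or descriptors is None:
        return keypoints, descriptors, True                 # first frame: always a keyframe
    matches = matcher.match(descriptors, last_keyframe_descriptors)
    # Hypothetical judgment condition: promote to keyframe once the overlap with the
    # previous keyframe drops below the threshold (the view has changed enough).
    is_keyframe = len(matches) < redundancy_threshold
    return keypoints, descriptors, is_keyframe
```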
In this embodiment, the "establishing a relationship between the image feature points and the local point cloud map" specifically includes:
when tracking against the local map fails due to occlusion, missing texture in the environment, or similar causes, the system relocalizes in the following ways (sketched after this list):
relocating by matching against reference frames in the local map on the unmanned aerial vehicle or the unmanned vehicle;
and carrying out repositioning on the cloud platform through the information of the current frame.
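A sketch of this two-level fallback, with the local search and the cloud query passed in as callables; both are placeholders for the agent-side reference-frame matching and the cloud-platform request described above.

```python
def relocalize(current_frame, match_local_reference_frames, query_cloud_relocalization):
    """Try relocalization against local reference frames first, then fall back to the cloud.

    match_local_reference_frames(frame) -> pose or None  (agent-side search, placeholder)
    query_cloud_relocalization(frame)   -> pose or None  (cloud-platform request, placeholder)
    """
    pose = match_local_reference_frames(current_frame)
    if pose is not None:
        return pose
    return query_cloud_relocalization(current_frame)
```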
In this embodiment, the "visual positioning mark is detected through the cloud" specifically includes:
carrying out image edge detection;
screening out the outline edges of the quadrangle;
the quadrilateral contour edges are decoded to identify the visual positioning mark, i.e. the road sign (AprilTag); a detection sketch follows below.
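The patent does not name a specific detector; as one concrete possibility, OpenCV's aruco module (the OpenCV 4.7+ API is assumed here) implements this edge-detection, quadrilateral-screening and decoding pipeline for AprilTag dictionaries:

```python
import cv2

# AprilTag detection with OpenCV's aruco module (OpenCV >= 4.7 API assumed).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def detect_visual_positioning_marks(image_bgr):
    """Return the pixel corners and decoded IDs of all AprilTags found in the image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    return corners, ids
```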
In this embodiment, the "optimizing the pose estimation of the unmanned aerial vehicle visual odometer by the visual positioning mark" specifically includes:
defining coordinate systems: the unmanned aerial vehicle-mounted camera coordinate system P_C, the unmanned aerial vehicle coordinate system P_A, the visual positioning mark coordinate system P_B and the world coordinate system P_W, the world coordinate system P_W being defined by the first frame of the unmanned aerial vehicle;
the YOZ plane of the unmanned aerial vehicle-mounted camera coordinate system P_C is parallel to the YOZ plane of the unmanned aerial vehicle coordinate system P_A, and the origin of the unmanned aerial vehicle coordinate system P_A is at the center of the unmanned aerial vehicle;
calculating the relationship from the unmanned aerial vehicle-mounted camera coordinate system P_C to the world coordinate system P_W;
calculating the relative pose, namely the rotation R_CB and the translation t_CB, between the unmanned aerial vehicle-mounted camera coordinate system P_C and the visual positioning mark coordinate system P_B;
and solving the trajectory error from the relative pose obtained from the visual positioning mark, namely the road sign (AprilTag), and the relative pose obtained from the visual odometer, and dividing this error equally over the key frames of the unmanned aerial vehicle, so that the error between the closed-loop key frames and the actual trajectory is reduced (a simple numerical sketch of this error distribution follows).
In this embodiment, "calculating the relationship from the unmanned aerial vehicle-mounted camera coordinate system P_C to the world coordinate system P_W" specifically includes:
the unmanned aerial vehicle coordinate system P_A and the unmanned aerial vehicle-mounted camera coordinate system P_C are parallel, so that

P_A = P_C + t_AC

where P_A denotes coordinates in the unmanned aerial vehicle coordinate system, P_C denotes coordinates in the unmanned aerial vehicle-mounted camera coordinate system, and t_AC is the translation vector between the unmanned aerial vehicle coordinate system P_A and the unmanned aerial vehicle-mounted camera coordinate system P_C, representing the offset of the camera from the center of the unmanned aerial vehicle;
the visual positioning mark coordinate system P_B and the world coordinate system P_W satisfy

P_W = P_B + t_WB

where P_W denotes coordinates in the world coordinate system, P_B denotes coordinates in the visual positioning mark coordinate system, and t_WB is the translation vector between the world coordinate system P_W and the visual positioning mark coordinate system P_B;
the angles phi, theta and psi are Euler angles; the rotation matrix from the world coordinate system P_W to the unmanned aerial vehicle coordinate system P_A is denoted R_AW, and the rotation matrix from the visual positioning mark coordinate system P_B to the unmanned aerial vehicle-mounted camera coordinate system P_C is denoted R_CB; both are written out in terms of the Euler angles, with c denoting cos and s denoting sin, and from these expressions the rotational relationship between the visual positioning mark coordinate system P_B and the unmanned aerial vehicle-mounted camera coordinate system P_C is obtained;
the relational expression from the unmanned aerial vehicle-mounted camera coordinate system P_C to the visual positioning mark coordinate system P_B is

P_B = R_BC P_C + t_BC

where R_BC is the rotation matrix from the unmanned aerial vehicle-mounted camera coordinate system P_C to the visual positioning mark coordinate system P_B and t_BC is the corresponding translation vector;
the relationship from the unmanned aerial vehicle-mounted camera coordinate system P_C to the world coordinate system P_W is then obtained as

P_W = R_WA (P_C + t_AC) + t_WA

where R_WA is the rotation matrix from the unmanned aerial vehicle coordinate system P_A to the world coordinate system P_W, t_WA is the translation vector from the unmanned aerial vehicle coordinate system P_A to the world coordinate system P_W, and t_AC is the translation vector between the unmanned aerial vehicle coordinate system P_A and the unmanned aerial vehicle-mounted camera coordinate system P_C; of these quantities, R_WA and t_WA are unknown.
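A small numerical sketch of chaining these frames is given below; note that the Euler-angle convention (Z-Y-X, yaw-pitch-roll) is an assumption for illustration, since the explicit matrix entries are given in the patent figures and are not reproduced here.

```python
import numpy as np

def euler_to_rotation(phi, theta, psi):
    """Rotation matrix from Euler angles (roll phi, pitch theta, yaw psi); Z-Y-X order assumed."""
    cx, sx = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(theta), np.sin(theta)
    cz, sz = np.cos(psi), np.sin(psi)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def camera_point_to_world(p_C, R_WA, t_WA, t_AC):
    """Map a point from the camera frame P_C to the world frame P_W using
    P_A = P_C + t_AC (parallel frames) and P_W = R_WA @ P_A + t_WA."""
    p_A = np.asarray(p_C, dtype=float) + t_AC
    return R_WA @ p_A + t_WA
```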
In this embodiment, "calculating the relative pose R_CB and t_CB between the unmanned aerial vehicle-mounted camera coordinate system P_C and the visual positioning mark coordinate system P_B" specifically includes:
projecting the visual positioning mark onto the 2D pixel plane of the camera using the camera model gives

[u, v, 1]^T = s M (R_CB [X_B, Y_B, Z_B]^T + t_CB), with s = 1/Z_C

where M is the camera intrinsic matrix, [u, v, 1] are the coordinates of the projection of the visual positioning mark on the normalized plane, [X_B, Y_B, Z_B] are the coordinates of the visual positioning mark in the visual positioning mark coordinate system P_B, t_CB is the translation vector from the visual positioning mark coordinate system P_B to the unmanned aerial vehicle-mounted camera coordinate system P_C, R_CB is the rotation matrix from the visual positioning mark coordinate system P_B to the unmanned aerial vehicle-mounted camera coordinate system P_C, s = 1/Z_C is an unknown scale factor and Z_C is the Z-axis coordinate of the visual positioning mark in the camera coordinate system; R_CB and t_CB are then calculated with a DLT (Direct Linear Transform) algorithm.
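For illustration, the marker-to-camera pose can be recovered from the four detected tag corners with OpenCV's solvePnP; the patent describes a DLT solution, and the planar-tag solver SOLVEPNP_IPPE_SQUARE used below is one practical stand-in. Tag size, intrinsics and distortion values are caller-supplied.

```python
import cv2
import numpy as np

def marker_pose_from_corners(corners_2d, tag_size, camera_matrix, dist_coeffs):
    """Estimate R_CB, t_CB (visual positioning mark frame P_B -> camera frame P_C)
    from one tag's four image corners (ordered to match object_points below)."""
    half = tag_size / 2.0
    # Corner coordinates in the mark coordinate system P_B, tag centred at the origin.
    object_points = np.array([[-half,  half, 0.0],
                              [ half,  half, 0.0],
                              [ half, -half, 0.0],
                              [-half, -half, 0.0]], dtype=np.float64)
    image_points = np.asarray(corners_2d, dtype=np.float64).reshape(4, 2)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    R_CB, _ = cv2.Rodrigues(rvec)   # rotation from P_B to P_C
    t_CB = tvec.reshape(3)          # translation from P_B to P_C
    return ok, R_CB, t_CB
```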
in this embodiment, the "optimizing the pose estimation of the unmanned vehicle vision odometer by the vision positioning mark" specifically includes:
defining a coordinate system, defining a coordinate system P of an unmanned vehicle-mounted cameraCVisual positioning mark coordinate system PBAnd a world coordinate system PWWorld coordinate system PWDefined as the first frame of the unmanned plane, unmanned vehicle-mounted camera coordinate system PCAnd unmanned vehicle coordinate system PADetermining the relationship of (1);
obtaining a coordinate system P of the unmanned vehicle loading cameraCSit with the worldMarker system PWRelative pose TcwVisual positioning mark coordinate system PBAnd unmanned vehicle-mounted camera coordinate system PCRelative pose TbcAnd a visual positioning marker coordinate system PBWith the world coordinate system PWRelative pose Tbw
Optimizing the pose and point cloud coordinates of the unmanned vehicle;
defining a visual alignment marker coordinate system PBAnd unmanned vehicle-mounted camera coordinate system PCThe relative error between each other is:
Figure BDA0003405060780000111
constructing an optimization objective function:
Figure BDA0003405060780000112
wherein:
Tcw∈{(Rcw,tcw)|Rcw∈SO3,tcw∈R3}Tbc∈{(Rbc,tbc)|Rbc∈SO3,tbc∈R3}
wherein, SO3Representing three-dimensional special orthogonal groups, tcwRepresenting camera coordinate system P loaded from unmanned vehicleCTo the world coordinate system PWTranslation error of tbcRepresenting a coordinate system P for positioning a marker from visionBUnmanned vehicle-mounted camera coordinate system PCTranslation error of R3Representing a set of radicals of dimension 3, RcwRepresenting camera coordinate system P loaded from unmanned vehicleCTo the world coordinate system PWTranslation error of RbcRepresenting a coordinate system P for positioning a marker from visionBUnmanned vehicle-mounted camera coordinate system PCThe rotational error of (a);
the camera motion not only causes a rotation error Rcw、RbcAnd translation error tcw、tbcAlso accompanied by a drift in dimensionSo a scale-directed transformation is performed and the Sim3 transformation algorithm is used, so:
Scw=(Rcw,tcw,s=1),(Rcw,tcw)=Tcw
Sbc=(Rbc,tbc,s=1),(Rbc,tbc)=Tbc
the Sim3 transformation algorithm is to solve similarity transformation by using 3 pairs of matching points, and further solve a rotation matrix, a translation vector and a scale between two coordinate systems; scwRepresenting visual alignment mark points from the world coordinate system PWUnmanned vehicle-mounted camera coordinate system PCBy similarity transformation of SbcCoordinate system P of secondary visual positioning mark representing visual positioning mark pointBUnmanned vehicle-mounted camera coordinate system PCS denotes the unknown to scale factor;
assume an optimized Sim3 pose of
Figure BDA0003405060780000113
Then the pose for which the correction is complete is:
Figure BDA0003405060780000114
wherein R isbwRepresenting visual alignment mark points from the world coordinate system PWTo the visual alignment marker coordinate system PBRotation matrix of tbwRepresenting visual alignment mark points from the world coordinate system PWTo the visual alignment marker coordinate system PBS denotes the unknown to scale factor,
Figure BDA0003405060780000115
representing the optimized rotation matrix, translation vector and scale factor,
Figure BDA0003405060780000116
representing the optimized similarity transformation;
setting unmanned vehicles before optimization occursThe 3D position is
Figure BDA0003405060780000121
The transformed coordinates can be found:
Figure BDA0003405060780000122
wherein
Figure BDA0003405060780000123
Representing the optimized pose of the unmanned vehicle.
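Because the Sim3 step is described only at a high level here, the following generic Umeyama-style sketch shows how a similarity transformation (rotation, translation and scale) can be solved from matched 3D points and applied to the unmanned vehicle's map points; it illustrates the idea rather than the exact solver used in this embodiment.

```python
import numpy as np

def estimate_sim3(src, dst):
    """Estimate (s, R, t) such that dst ~ s * R @ src + t, from >= 3 matched 3D points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                              # guard against a reflection
    R = U @ S @ Vt
    var_src = (xs ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src          # scale factor
    t = mu_d - s * R @ mu_s
    return s, R, t

def apply_sim3(points, s, R, t):
    """Correct 3D points (e.g. the unmanned vehicle's map points) with an optimized Sim3."""
    return (s * (R @ np.asarray(points, float).T)).T + t
```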
As shown in fig. 3, a collaborative three-dimensional mapping system for implementing the collaborative three-dimensional mapping method includes:
the environment preparation module is used for acquiring environment information;
the information processing module is used for extracting the key frame from the acquired environment information by adopting a Tracking thread design idea in an ORB-SLAM algorithm framework;
a detection module for detecting a visual positioning mark, namely, a landmark (aprilat), through a cloud;
the first optimization module is used for optimizing the pose estimation of the unmanned aerial vehicle visual odometer through the visual positioning mark;
the second optimization module is used for optimizing the pose estimation of the unmanned vehicle vision odometer through the vision positioning mark;
and the execution module is used for completing a local map construction thread and a closed loop detection thread of the ORB-SLAM framework through the cloud.
Compared with the prior art, the collaborative three-dimensional mapping method and system provided by this embodiment combine the AprilTag road sign, the cloud architecture, multiple robots and SLAM three-dimensional mapping technology to realize unmanned collaborative three-dimensional mapping. This addresses the difficulty of meeting real-time requirements and the inaccurate positioning of collaborative SLAM systems, and yields a collaborative three-dimensional mapping system with good robustness, high precision and strong real-time performance.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above serial numbers are merely for description and do not represent the relative merits of the implementation scenarios.
The above disclosure is only a few specific implementation scenarios of the present invention, however, the present invention is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present invention.

Claims (10)

1. A collaborative three-dimensional mapping method is characterized by comprising the following steps:
detecting the visual positioning mark through a cloud end;
optimizing pose estimation of the unmanned aerial vehicle visual odometer through the visual positioning mark;
optimizing the pose estimation of the unmanned vehicle vision odometer through the vision positioning mark;
and completing a local map construction thread and a closed loop detection thread of the ORB-SLAM framework through the cloud.
2. The collaborative three-dimensional mapping method according to claim 1, further comprising:
collecting environment information; building a cloud platform using Docker as the cloud container, Kubernetes as the container scheduling service, and BRPC and Beego as the network frameworks, so that the multi-agent end can communicate with the cloud end;
the multi-agent comprises the unmanned aerial vehicle and the unmanned vehicle, the unmanned aerial vehicle and the unmanned vehicle form a centralized system structure, a first monocular camera is arranged in the front of the unmanned aerial vehicle, the lens of the first monocular camera faces downwards, a second monocular camera is arranged in the front of the unmanned vehicle, and the lens of the second monocular camera faces forwards;
and selecting at least 2 environment points and placing a printed visual positioning mark at each.
3. The collaborative three-dimensional mapping method according to claim 2, further comprising:
the environment information comprises image information, and feature points and descriptors are extracted from the image information by adopting an ORB-SLAM algorithm;
obtaining depth through a PnP algorithm to obtain point cloud information;
map initialization is carried out using the cloud platform: if a map already exists on the cloud platform, the image information is matched against the cloud key frames to determine an initial position; if no map exists, the image information, local map and related information are used as the start of the cloud platform system map;
estimating the pose of the camera by matching the feature point pairs or a repositioning method;
establishing a relation between the image feature points and the local point cloud map;
and extracting key frames according to the key-frame judgment conditions and uploading them to the cloud.
4. The collaborative three-dimensional mapping method according to claim 3, wherein the establishing of the relationship between the image feature points and the local point cloud map specifically comprises:
when tracking against the local map fails due to occlusion, missing texture in the environment, or similar causes, the system relocalizes in the following ways:
relocating by matching against reference frames in the local map on the unmanned aerial vehicle or the unmanned vehicle;
and carrying out repositioning on the cloud platform according to the information of the current frame.
5. The collaborative three-dimensional mapping method according to claim 1, wherein the "detecting the visual positioning mark through a cloud end" specifically includes:
carrying out image edge detection;
screening out the outline edges of the quadrangle;
and decoding the outline edge of the quadrangle and identifying the visual positioning mark.
6. The collaborative three-dimensional mapping method according to claim 1, wherein the optimizing the pose estimation of the unmanned aerial vehicle visual odometer by the visual positioning markers specifically comprises:
defining coordinate systems: the unmanned aerial vehicle-mounted camera coordinate system P_C, the unmanned aerial vehicle coordinate system P_A, the visual positioning mark coordinate system P_B and the world coordinate system P_W, the world coordinate system P_W being defined by the first frame of the unmanned aerial vehicle;
the YOZ plane of the unmanned aerial vehicle-mounted camera coordinate system P_C is parallel to the YOZ plane of the unmanned aerial vehicle coordinate system P_A, and the origin of the unmanned aerial vehicle coordinate system P_A is at the center of the unmanned aerial vehicle;
calculating the relationship from the unmanned aerial vehicle-mounted camera coordinate system P_C to the world coordinate system P_W;
calculating the relative pose, namely the rotation R_CB and the translation t_CB, between the unmanned aerial vehicle-mounted camera coordinate system P_C and the visual positioning mark coordinate system P_B;
and solving the trajectory error from the relative pose obtained from the visual positioning mark and the relative pose obtained from the visual odometer, and dividing this error equally over the key frames of the unmanned aerial vehicle, so that the error between the closed-loop key frames and the actual trajectory is reduced.
7. The collaborative three-dimensional mapping method according to claim 6, wherein the "calculating the relationship from the unmanned aerial vehicle-mounted camera coordinate system P_C to the world coordinate system P_W" specifically comprises:
the unmanned aerial vehicle coordinate system P_A and the unmanned aerial vehicle-mounted camera coordinate system P_C are parallel, so that

P_A = P_C + t_AC

where P_A denotes coordinates in the unmanned aerial vehicle coordinate system, P_C denotes coordinates in the unmanned aerial vehicle-mounted camera coordinate system, and t_AC is the translation vector between the unmanned aerial vehicle coordinate system P_A and the unmanned aerial vehicle-mounted camera coordinate system P_C, representing the offset of the camera from the center of the unmanned aerial vehicle;
the visual positioning mark coordinate system P_B and the world coordinate system P_W satisfy

P_W = P_B + t_WB

where P_W denotes coordinates in the world coordinate system, P_B denotes coordinates in the visual positioning mark coordinate system, and t_WB is the translation vector between the world coordinate system P_W and the visual positioning mark coordinate system P_B;
the angles phi, theta and psi are Euler angles; the rotation matrix from the world coordinate system P_W to the unmanned aerial vehicle coordinate system P_A is denoted R_AW, and the rotation matrix from the visual positioning mark coordinate system P_B to the unmanned aerial vehicle-mounted camera coordinate system P_C is denoted R_CB; both are written out in terms of the Euler angles, with c denoting cos and s denoting sin, and from these expressions the rotational relationship between the visual positioning mark coordinate system P_B and the unmanned aerial vehicle-mounted camera coordinate system P_C is obtained;
the relational expression from the unmanned aerial vehicle-mounted camera coordinate system P_C to the visual positioning mark coordinate system P_B is

P_B = R_BC P_C + t_BC

where R_BC is the rotation matrix from the unmanned aerial vehicle-mounted camera coordinate system P_C to the visual positioning mark coordinate system P_B and t_BC is the corresponding translation vector;
the relationship from the unmanned aerial vehicle-mounted camera coordinate system P_C to the world coordinate system P_W is then obtained as

P_W = R_WA (P_C + t_AC) + t_WA

where R_WA is the rotation matrix from the unmanned aerial vehicle coordinate system P_A to the world coordinate system P_W, t_WA is the translation vector from the unmanned aerial vehicle coordinate system P_A to the world coordinate system P_W, and t_AC is the translation vector between the unmanned aerial vehicle coordinate system P_A and the unmanned aerial vehicle-mounted camera coordinate system P_C.
8. The collaborative three-dimensional mapping method according to claim 6, wherein the "calculating the relative pose R_CB and t_CB between the unmanned aerial vehicle-mounted camera coordinate system P_C and the visual positioning mark coordinate system P_B" specifically comprises:
projecting the visual positioning mark onto the 2D pixel plane of the camera using the camera model gives

[u, v, 1]^T = s M (R_CB [X_B, Y_B, Z_B]^T + t_CB), with s = 1/Z_C

where M is the camera intrinsic matrix, [u, v, 1] are the coordinates of the projection of the visual positioning mark on the normalized plane, [X_B, Y_B, Z_B] are the coordinates of the visual positioning mark in the visual positioning mark coordinate system P_B, t_CB is the translation vector from the visual positioning mark coordinate system P_B to the unmanned aerial vehicle-mounted camera coordinate system P_C, R_CB is the rotation matrix from the visual positioning mark coordinate system P_B to the unmanned aerial vehicle-mounted camera coordinate system P_C, s = 1/Z_C is an unknown scale factor and Z_C is the Z-axis coordinate of the visual positioning mark in the camera coordinate system; R_CB and t_CB are then obtained with a direct linear transformation algorithm.
9. the collaborative three-dimensional mapping method according to claim 1, wherein the optimizing pose estimation of the unmanned vehicle visual odometer by the visual positioning markers specifically comprises:
defining coordinate systems: the unmanned vehicle-mounted camera coordinate system P_C, the visual positioning mark coordinate system P_B and the world coordinate system P_W, the world coordinate system P_W being defined by the first frame of the unmanned aerial vehicle, and the relationship between the unmanned vehicle-mounted camera coordinate system P_C and the unmanned vehicle coordinate system P_A being determined;
obtaining the relative pose T_cw between the unmanned vehicle-mounted camera coordinate system P_C and the world coordinate system P_W, the relative pose T_bc between the visual positioning mark coordinate system P_B and the unmanned vehicle-mounted camera coordinate system P_C, and the relative pose T_bw between the visual positioning mark coordinate system P_B and the world coordinate system P_W;
optimizing the pose and the point cloud coordinates of the unmanned vehicle;
defining the relative error between the visual positioning mark coordinate system P_B and the unmanned vehicle-mounted camera coordinate system P_C as the discrepancy between the pose T_bc measured from the visual positioning mark and the pose predicted from T_bw and T_cw;
constructing an optimization objective function that minimizes this relative error, in which:

T_cw ∈ {(R_cw, t_cw) | R_cw ∈ SO(3), t_cw ∈ R^3}, T_bc ∈ {(R_bc, t_bc) | R_bc ∈ SO(3), t_bc ∈ R^3}

where SO(3) denotes the three-dimensional special orthogonal group and R^3 denotes the set of three-dimensional real vectors; t_cw denotes the translation from the unmanned vehicle-mounted camera coordinate system P_C to the world coordinate system P_W, t_bc the translation from the visual positioning mark coordinate system P_B to the unmanned vehicle-mounted camera coordinate system P_C, R_cw the rotation from the unmanned vehicle-mounted camera coordinate system P_C to the world coordinate system P_W, and R_bc the rotation from the visual positioning mark coordinate system P_B to the unmanned vehicle-mounted camera coordinate system P_C;
camera motion introduces not only rotation errors in R_cw, R_bc and translation errors in t_cw, t_bc but also scale drift, so the transformation is made scale-aware and the Sim3 transformation algorithm is used, giving:

S_cw = (R_cw, t_cw, s = 1), with (R_cw, t_cw) = T_cw
S_bc = (R_bc, t_bc, s = 1), with (R_bc, t_bc) = T_bc

where S_cw is the similarity transformation of the visual positioning mark points from the world coordinate system P_W to the unmanned vehicle-mounted camera coordinate system P_C, S_bc is the similarity transformation of the visual positioning mark points from the visual positioning mark coordinate system P_B to the unmanned vehicle-mounted camera coordinate system P_C, and s denotes the unknown scale factor;
assuming that the optimization yields an optimized Sim3 pose, i.e. an optimized rotation matrix, translation vector and scale factor forming an optimized similarity transformation, the corrected pose is computed from this optimized similarity transformation together with S_bw = (R_bw, t_bw, s), where R_bw is the rotation of the visual positioning mark points from the world coordinate system P_W to the visual positioning mark coordinate system P_B, t_bw is the corresponding translation vector, and s denotes the unknown scale factor;
letting the 3D position of the unmanned vehicle before optimization be P, the transformed coordinates are obtained by applying the optimized similarity transformation to P, and these transformed coordinates represent the optimized pose of the unmanned vehicle.
10. A collaborative three-dimensional mapping system for implementing the collaborative three-dimensional mapping method according to any one of claims 1 to 9, comprising:
the environment preparation module is used for acquiring environment information;
the information processing module is used for extracting the key frame from the acquired environment information by adopting a Tracking thread design idea in an ORB-SLAM algorithm framework;
the detection module is used for detecting the visual positioning mark through the cloud end;
the first optimization module is used for optimizing the pose estimation of the unmanned aerial vehicle visual odometer through the visual positioning mark;
the second optimization module is used for optimizing the pose estimation of the unmanned vehicle vision odometer through the vision positioning mark;
and the execution module is used for completing a local map construction thread and a closed loop detection thread of the ORB-SLAM framework through the cloud.
CN202111510369.3A 2021-12-10 2021-12-10 Collaborative three-dimensional mapping method and system Pending CN114332360A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111510369.3A CN114332360A (en) 2021-12-10 2021-12-10 Collaborative three-dimensional mapping method and system
PCT/CN2022/138183 WO2023104207A1 (en) 2021-12-10 2022-12-09 Collaborative three-dimensional mapping method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111510369.3A CN114332360A (en) 2021-12-10 2021-12-10 Collaborative three-dimensional mapping method and system

Publications (1)

Publication Number Publication Date
CN114332360A true CN114332360A (en) 2022-04-12

Family

ID=81051491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111510369.3A Pending CN114332360A (en) 2021-12-10 2021-12-10 Collaborative three-dimensional mapping method and system

Country Status (2)

Country Link
CN (1) CN114332360A (en)
WO (1) WO2023104207A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965758A (en) * 2022-12-28 2023-04-14 无锡东如科技有限公司 Three-dimensional reconstruction method for image cooperation monocular instance
CN115965673A (en) * 2022-11-23 2023-04-14 中国建筑一局(集团)有限公司 Centralized multi-robot positioning method based on binocular vision
CN116228870A (en) * 2023-05-05 2023-06-06 山东省国土测绘院 Mapping method and system based on two-dimensional code SLAM precision control
WO2023104207A1 (en) * 2021-12-10 2023-06-15 深圳先进技术研究院 Collaborative three-dimensional mapping method and system
CN118010008A (en) * 2024-04-08 2024-05-10 西北工业大学 Binocular SLAM and inter-machine loop optimization-based double unmanned aerial vehicle co-location method

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116934829B (en) * 2023-09-15 2023-12-12 天津云圣智能科技有限责任公司 Unmanned aerial vehicle target depth estimation method and device, storage medium and electronic equipment
CN117058209B (en) * 2023-10-11 2024-01-23 山东欧龙电子科技有限公司 Method for calculating depth information of visual image of aerocar based on three-dimensional map
CN117893693B (en) * 2024-03-15 2024-05-28 南昌航空大学 Dense SLAM three-dimensional scene reconstruction method and device
CN117906595B (en) * 2024-03-20 2024-06-21 常熟理工学院 Scene understanding navigation method and system based on feature point method vision SLAM
CN118031976B (en) * 2024-04-15 2024-07-09 中国科学院国家空间科学中心 Man-machine cooperative system for exploring unknown environment
CN118424256A (en) * 2024-04-18 2024-08-02 北京化工大学 Map building and positioning method and device for distributed multi-resolution map fusion
CN118212294B (en) * 2024-05-11 2024-09-27 济南昊中自动化有限公司 Automatic method and system based on three-dimensional visual guidance
CN118169729B (en) * 2024-05-14 2024-07-19 北京易控智驾科技有限公司 Positioning method and equipment for unmanned vehicle and storage medium
CN118470099B (en) * 2024-07-15 2024-09-24 济南大学 Object space pose measurement method and device based on monocular camera
CN118521646A (en) * 2024-07-25 2024-08-20 中国铁塔股份有限公司江西省分公司 Image processing-based multi-machine type unmanned aerial vehicle power receiving frame alignment method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3474230B1 (en) * 2017-10-18 2020-07-22 Tata Consultancy Services Limited Systems and methods for edge points based monocular visual slam
CN110221623B (en) * 2019-06-17 2024-10-18 酷黑科技(北京)有限公司 Air-ground collaborative operation system and positioning method thereof
CN111595333B (en) * 2020-04-26 2023-07-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual inertia laser data fusion
CN112115874B (en) * 2020-09-21 2022-07-15 武汉大学 Cloud-fused visual SLAM system and method
CN114332360A (en) * 2021-12-10 2022-04-12 深圳先进技术研究院 Collaborative three-dimensional mapping method and system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023104207A1 (en) * 2021-12-10 2023-06-15 深圳先进技术研究院 Collaborative three-dimensional mapping method and system
CN115965673A (en) * 2022-11-23 2023-04-14 中国建筑一局(集团)有限公司 Centralized multi-robot positioning method based on binocular vision
CN115965673B (en) * 2022-11-23 2023-09-12 中国建筑一局(集团)有限公司 Centralized multi-robot positioning method based on binocular vision
CN115965758A (en) * 2022-12-28 2023-04-14 无锡东如科技有限公司 Three-dimensional reconstruction method for image cooperation monocular instance
CN116228870A (en) * 2023-05-05 2023-06-06 山东省国土测绘院 Mapping method and system based on two-dimensional code SLAM precision control
CN118010008A (en) * 2024-04-08 2024-05-10 西北工业大学 Binocular SLAM and inter-machine loop optimization-based double unmanned aerial vehicle co-location method
CN118010008B (en) * 2024-04-08 2024-06-07 西北工业大学 Binocular SLAM and inter-machine loop optimization-based double unmanned aerial vehicle co-location method

Also Published As

Publication number Publication date
WO2023104207A1 (en) 2023-06-15

Similar Documents

Publication Publication Date Title
CN114332360A (en) Collaborative three-dimensional mapping method and system
Alonso et al. Accurate global localization using visual odometry and digital maps on urban environments
CN102419178B (en) Mobile robot positioning system and method based on infrared road sign
Pink Visual map matching and localization using a global feature map
Seok et al. Rovo: Robust omnidirectional visual odometry for wide-baseline wide-fov camera systems
US10872246B2 (en) Vehicle lane detection system
JP4978615B2 (en) Target identification device
CN114459467B (en) VI-SLAM-based target positioning method in unknown rescue environment
WO2021254019A1 (en) Method, device and system for cooperatively constructing point cloud map
CN113570662B (en) System and method for 3D localization of landmarks from real world images
US11514588B1 (en) Object localization for mapping applications using geometric computer vision techniques
CN115790571A (en) Simultaneous positioning and map construction method based on mutual observation of heterogeneous unmanned system
Zhao et al. RTSfM: Real-time structure from motion for mosaicing and DSM mapping of sequential aerial images with low overlap
CN111812978B (en) Cooperative SLAM method and system for multiple unmanned aerial vehicles
Avgeris et al. Single vision-based self-localization for autonomous robotic agents
Huang et al. Metric monocular localization using signed distance fields
Lin et al. PVO: Panoramic visual odometry
Wen et al. Roadside hd map object reconstruction using monocular camera
Contreras et al. Efficient decentralized collaborative mapping for outdoor environments
Roozing et al. Low-cost vision-based 6-DOF MAV localization using IR beacons
Van Hamme et al. Robust visual odometry using uncertainty models
Li et al. Multicam-SLAM: Non-overlapping Multi-camera SLAM for Indirect Visual Localization and Navigation
Hernández et al. Visual SLAM with oriented landmarks and partial odometry
Fang et al. Marker-based mapping and localization for autonomous valet parking
CN113403942A (en) Label-assisted bridge detection unmanned aerial vehicle visual navigation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination