CN115018999A - Multi-robot-cooperation dense point cloud map construction method and device

Info

Publication number: CN115018999A (application CN202210611705.1A)
Authority: CN (China)
Prior art keywords: map, point cloud, dense point, filtering, key frame
Prior art date: 2022-05-31
Legal status: Pending
Application number: CN202210611705.1A
Other languages: Chinese (zh)
Inventors: 罗可其, 夏益民, 蔡述庭
Current Assignee: Guangdong University of Technology
Original Assignee: Guangdong University of Technology
Priority date: 2022-05-31
Filing date: 2022-05-31
Publication date: 2022-09-06
Application filed by Guangdong University of Technology

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 - Geographic models
    • G06T17/20 - Finite element generation, e.g. wire-frame surface description, tessellation

Abstract

The embodiment of the invention provides a method and a device for constructing a dense point cloud map through multi-robot cooperation, wherein the method comprises the following steps: acquiring RGB-D images of key frames respectively acquired by two robots, extracting ORB feature data from the RGB-D images and solving for the pose relation between the key frames; performing down-sampling filtering on the extracted RGB-D images to screen map points, constructing a key frame dense point cloud map by using a pinhole camera model, and adding the key frame map to a local map through the key frame pose relation; processing the local map through voxel filtering, thereby respectively completing the construction of the dense point cloud maps of the two robots; and calculating and matching the distance between the word vectors of the key frame images of the two dense point cloud maps by using a bag-of-words model constructed from ORB features, estimating the transformation relation between the two maps by using an ICP (Iterative Closest Point) algorithm, fusing the two maps, and performing one pass of voxel filtering and statistical filtering to construct the multi-robot dense point cloud map. The method reliably realizes multi-robot SLAM map construction in different scenes, offers good real-time performance and accuracy, and therefore has a good application prospect.

Description

Multi-robot-cooperation dense point cloud map construction method and device
Technical Field
The invention relates to the technical field of computers, in particular to a multi-robot-cooperation dense point cloud map construction method, a multi-robot-cooperation dense point cloud map construction device, a computer device and a storage medium.
Background
Visual SLAM (Visual Simultaneous Localization and Mapping, V-SLAM) is an important research direction in the field of computer vision, and the technology has been widely applied to scenes such as three-dimensional reconstruction, automatic driving, and augmented reality. A number of relatively mature open-source V-SLAM algorithms have been developed, such as MonoSLAM, PTAM, ORB-SLAM, LSD-SLAM, SVO, DTAM, DVO, DSO, VINS, RGBD-SLAM-V2, and DSM; these offer good efficiency and accuracy for single-robot SLAM but are not designed for multi-robot SLAM. Among these algorithms, ORB-SLAM3 has some of the best overall performance, with advantages including efficient feature extraction, high localization accuracy, support for multiple sensors, and support for loop closure detection, relocalization, and multi-map management. However, it also has drawbacks: the sparse point cloud map it constructs carries little information, and the mapping efficiency of a single robot is limited.
Although a sparse point cloud can describe the surrounding environment to a certain extent and meets the localization needs of a robot in some scenes, the distance between points is large, so requirements such as navigation and obstacle avoidance cannot be satisfied. A dense point cloud differs in that its map points are far more numerous and more densely distributed, so the environment can be described in greater detail and, after further processing, the navigation and obstacle-avoidance requirements of the robot can be met; however, dense point clouds constructed by traditional algorithms place high demands on the computing power and memory of the platform.
Research on single-robot SLAM algorithms is relatively mature, but multi-robot SLAM offers higher efficiency and robustness, so it has gradually become a research hotspot. Early multi-robot SLAM was mainly based on filtering algorithms, for example the Constrained Local Submap Filter (CLSF), but such algorithms suffer from low accuracy and are limited to indoor environments. The distributed multi-robot SLAM algorithm based on Kimera achieves high accuracy and robustness, but it demands substantial computing power and its real-time performance is poor on platforms without a GPU; cloud computing has been adopted to address this, yet it cannot continue to operate when the communication signal is unstable.
Disclosure of Invention
In view of the above problems, embodiments of the present invention have been made to provide a multi-robot-cooperative dense point cloud mapping method, a multi-robot-cooperative dense point cloud mapping apparatus, a computer device, and a storage medium that overcome or at least partially solve the above problems.
In order to solve the above problems, the embodiment of the invention discloses a dense point cloud map construction method with multi-robot cooperation, which comprises the following steps:
acquiring RGB-D images of key frames respectively acquired by two robots, extracting ORB feature data from the RGB-D images and solving for the pose relation between the key frames;
performing down-sampling filtering on the extracted RGB-D image to screen map points, constructing a key frame dense point cloud map by using a pinhole camera model, and adding the key frame map to a local map through a key frame pose relationship;
processing the local map through voxel filtering, and respectively finishing the construction of dense point cloud maps of two robots;
and calculating and matching the distance between the word vectors of the key frame images of the two dense point cloud maps by using a bag-of-words model constructed from the ORB features, estimating the transformation relation of the two maps by using an ICP (Iterative Closest Point) algorithm, fusing the two maps, performing one pass of voxel filtering and statistical filtering, and constructing a multi-robot dense point cloud map.
Preferably, the down-sampling filtering and map point screening of the extracted RGB-D image, the key frame dense point cloud map construction by using the pinhole camera model, and the key frame map addition to the local map by the key frame pose relationship include:
performing down-sampling filtering on the RGB-D image by adopting an interlaced alternate sampling mode;
converting the data after the down-sampling filtering into a dense point cloud map of the key frame, and transforming the map into a local map by using the pose of the key frame;
voxel filtering is performed on the local map.
Preferably, the step of calculating and matching the distance between the word vectors of the key frame images of the two dense point cloud maps by using a bag-of-words model constructed from ORB features, estimating the transformation relation of the two maps by using an ICP (Iterative Closest Point) algorithm, fusing the two maps, and performing one pass of voxel filtering and statistical filtering to construct the multi-robot dense point cloud map includes the following steps:
constructing a K-ary tree dictionary model;
performing word vector matching aiming at the image characteristics;
and estimating the transformation relation of the two maps through an ICP (Iterative Closest Point) algorithm, and carrying out registration.
Preferably, the constructing a K-ary tree dictionary model includes:
extracting ORB characteristics;
clustering the characteristics by using a K-means clustering method to generate words;
and constructing a K-ary tree dictionary model for storing words.
Preferably, the word vector matching for the image features includes:
extracting image features, determining the word to which each feature belongs, and counting the categories and frequencies of the words that appear;
calculating the weight by a TF-IDF method;
constructing a word vector;
and calculating similarity scores from the word vectors, applying Gaussian filtering to the scores, and identifying image pairs with a similarity score higher than a preset threshold.
Preferably, the estimating, by an ICP algorithm, a transformation relation of the two maps for registration includes:
extracting and matching ORB characteristics;
constructing an error function;
And solving the constructed error function through an ICP algorithm to perform registration.
The embodiment of the invention discloses a dense point cloud map construction device with multi-robot cooperation, which comprises:
the acquisition module is used for acquiring the RGB-D images of key frames respectively acquired by the two robots, extracting ORB feature data from the RGB-D images and solving for the pose relation between the key frames;
the local map adding module is used for performing down-sampling filtering on the extracted RGB-D image to screen map points, constructing a key frame dense point cloud map by using a pinhole camera model, and adding the key frame map to the local map through a key frame pose relation;
the dense point cloud map building module is used for processing the local map through voxel filtering and respectively completing the building of dense point cloud maps of the two robots;
and the multi-robot dense point cloud map construction module is used for calculating and matching the distance between the word vectors of the key frame images of the two dense point cloud maps by using a bag-of-words model constructed from ORB features, estimating the transformation relation of the two maps by using an ICP (Iterative Closest Point) algorithm, fusing the two maps, performing one pass of voxel filtering and statistical filtering, and constructing the multi-robot dense point cloud map.
Preferably, the local map adding module includes:
the down-sampling filtering submodule is used for performing down-sampling filtering on the RGB-D image in an interlaced alternate sampling mode;
the local map transformation submodule is used for converting the data subjected to down-sampling filtering into a dense point cloud map of the key frame and transforming the map into the local map by utilizing the pose of the key frame;
and the voxel filtering submodule is used for carrying out voxel filtering on the local map.
The embodiment of the invention discloses computer equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the computer program to realize the steps of the multi-robot cooperation dense point cloud map construction method.
The embodiment of the invention discloses a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the multi-robot cooperation dense point cloud map construction method are realized.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, based on the mapping scheme of the ORB-SLAM3 algorithm, a two-layer filtering algorithm is proposed for screening map points, which efficiently realizes the construction of a single-robot dense point cloud map and alleviates the high computing-power and memory demands of traditional dense point cloud mapping algorithms; combined with a bag-of-words model, a scheme is proposed for indirectly registering dense point cloud maps by matching map key frame images, and finally the map is filtered and denoised with voxel filtering and statistical filtering. Experimental results against comparison algorithms show that the algorithm reliably realizes multi-robot SLAM map construction in different scenes with good real-time performance and accuracy, and thus has a good application prospect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a flowchart illustrating steps of an embodiment of a method for constructing a multi-robot-collaborative dense point cloud map according to the present invention;
FIG. 2 is a frame diagram of an embodiment of a multi-robot collaborative dense point cloud map construction method according to the present invention;
FIG. 3 is a flowchart of the steps of a dense point cloud map construction process according to an embodiment of the present invention;
FIG. 4 is a diagram of a dictionary construction process in accordance with an embodiment of the present invention;
FIG. 5 is a schematic illustration of a three dimensional coordinate transformation of an embodiment of the present invention;
FIG. 6 is a block diagram of an embodiment of a multi-robot collaborative dense point cloud map construction apparatus according to the present invention;
FIG. 7 is an internal block diagram of a computer device of an embodiment.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects solved by the embodiments of the present invention more clearly apparent, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a method for constructing a dense point cloud map with multi-robot cooperation according to the embodiment of the present invention is shown, and the method specifically includes the following steps:
step 101, acquiring RGB-D images of key frames respectively acquired by two robots, extracting ORB feature data from the RGB-D images and solving for the pose relation between the key frames;
the frame diagram of the multi-robot cooperative dense point cloud map construction method is shown in FIG. 2. Firstly, two robots carrying RGB-D cameras respectively run an ORB-SLAM3 algorithm, RGB-D images, poses and ORB characteristic data of all key frames are respectively extracted from a key frame database, then the extracted RGB-D images are subjected to down-sampling filtering to screen map points, then a pinhole camera model is used for key frame dense point cloud map construction, the key frame map is added to a local map through key frame pose relations, and finally the local map is processed through voxel filtering to complete construction of two single robot dense point cloud maps. And calculating and matching the distance between the word vectors of the key frame images of the two maps by using a word bag model constructed by using ORB characteristics, estimating the transformation relation of the two maps by using an ICP (inductively coupled plasma) algorithm, and finally fusing the two maps and carrying out voxel filtering and statistical filtering processing once to finally realize the construction of the multi-robot dense point cloud map.
102, performing down-sampling filtering on the extracted RGB-D image to screen map points, constructing a key frame dense point cloud map by using the pinhole camera model, and adding the key frame map to a local map through the key frame pose relationship;
in the embodiment of the invention, the pinhole camera model can be abstracted into the mathematical model of formula (1):

$Z P_{uv} = K P_C$ (1)

Transforming formula (1) gives formula (2):

$P_C = Z K^{-1} P_{uv}$ (2)

where $P_{uv}$ is the homogeneous coordinate of a pixel in the RGB image, $K^{-1}$ is the inverse of the camera intrinsic matrix, $Z$ is the depth of the pixel, and $P_C$ is the coordinate of the corresponding point in the camera coordinate system.
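As an illustration of formula (2), the following minimal NumPy sketch back-projects every valid depth pixel into the camera frame. It is not part of the original disclosure; the function name, the placeholder frame, and the intrinsic values are assumptions:

```python
import numpy as np

def backproject_depth(depth, K, rgb=None):
    """Back-project valid depth pixels via formula (2): P_C = Z * K^-1 * P_uv."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))        # pixel grid
    p_uv = np.stack([u, v, np.ones_like(u)], axis=-1)     # homogeneous pixel coords
    valid = depth > 0                                     # zero depth marks missing data
    rays = p_uv[valid].astype(np.float64) @ np.linalg.inv(K).T   # K^-1 P_uv per pixel
    points = rays * depth[valid][:, None]                 # scale by depth Z
    colors = rgb[valid] if rgb is not None else None      # carry colors along if given
    return points, colors

# Example with a synthetic frame and typical RGB-D intrinsics (values assumed):
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 2.0)                          # 2 m everywhere
points, _ = backproject_depth(depth, K)
```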
The two-layer filtering mapping algorithm proposed herein improves mapping speed to a certain extent and reduces memory consumption, without seriously distorting the map.
The process of constructing the dense point cloud map of the invention is shown in fig. 3, and comprises the following steps:
step 11: down-sampling filtering. Down-sampling filtering fundamentally reduces the amount of data. When traversing the RGB image and the depth image, a sampling scheme that keeps every other row and every other column is adopted; that is, the RGB-D image is down-sampled by interlaced alternate sampling. This can reduce the number of map points by more than 75 percent without seriously affecting the mapping result;
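A hedged NumPy sketch of this interlaced sampling, assuming arrays named rgb and depth for the two images:

```python
import numpy as np

rgb = np.zeros((480, 640, 3), dtype=np.uint8)    # placeholder RGB frame
depth = np.ones((480, 640), dtype=np.float32)    # placeholder depth map

# Keeping every other row and every other column discards 75% of the
# pixels before any point cloud is ever built.
rgb_ds, depth_ds = rgb[::2, ::2], depth[::2, ::2]
```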
step 12: local dense point cloud map construction. The down-sampled and filtered data are converted into a dense point cloud map of the key frame using formula (2), and the map is transformed into the local map by using the pose of the key frame;
step 13: voxel filtering of the local map. Voxel filtering achieves a down-sampling effect without destroying the geometric structure of the point cloud. When enough key frame maps have been added to the local map, the point cloud is rasterized and all points within each voxel grid cell are represented by their centroid. This step requires a large amount of computation, but after the down-sampling filtering of step 11 it can still maintain a certain degree of real-time performance.
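The centroid-per-voxel scheme of step 13 might look like the following NumPy sketch (an illustration, not the patent's implementation; names and the voxel size are assumptions):

```python
import numpy as np

def voxel_filter(points, voxel_size=0.01):
    """Replace all points falling into the same voxel by their centroid,
    downsampling the cloud without destroying its coarse geometry."""
    idx = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)                  # sum the points in each voxel
    return centroids / counts[:, None]                     # centroid per occupied voxel

filtered = voxel_filter(np.random.rand(10000, 3), voxel_size=0.05)
```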
103, processing the local map through voxel filtering to respectively complete the construction of dense point cloud maps of the two robots;
and 104, calculating and matching the distance between the word vectors of the key frame images of the two dense point cloud maps by using a bag-of-words model constructed from the ORB features, estimating the transformation relation of the two maps by using an ICP (Iterative Closest Point) algorithm, fusing the two maps, performing one pass of voxel filtering and statistical filtering, and constructing the multi-robot dense point cloud map.
The embodiment of the invention comprises the following steps: constructing a K-ary tree dictionary model; performing word vector matching for the image features; and estimating the transformation relation of the two maps by the ICP (Iterative Closest Point) algorithm and carrying out registration.
Further, the Bag-of-Words (BoW) model is a method of describing complex information. The visual bag-of-words model classifies image feature descriptors into words through a clustering algorithm, and a K-ary tree dictionary is constructed to facilitate retrieval. The categories and frequencies of the words appearing in an image to be matched are counted, the word weights are calculated, and all word weights are assembled into a word vector that describes the image. The similarity of two images can then be judged by calculating the distance between their word vectors.
The construction of the K-ary tree dictionary model comprises the following steps:
step 21: feature extraction. ORB features are adopted in the embodiment of the invention; other common image features include SIFT, SURF, and the like;
step 22: word generation. The features are clustered using the K-means method to generate words, as shown in part (a) of FIG. 4;
step 23: dictionary construction. A K-ary tree dictionary model is constructed to store the words and facilitate their retrieval, as shown in part (b) of FIG. 4.
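Steps 21-23 can be illustrated with a short sketch that recursively clusters descriptors into a K-ary vocabulary tree. This is an assumption-laden illustration: real ORB descriptors are 256-bit binary strings that systems such as DBoW2 cluster under the Hamming metric, whereas here they are treated as real-valued vectors so that scikit-learn's KMeans applies; all names and parameters are hypothetical.

```python
import itertools
import numpy as np
from sklearn.cluster import KMeans

def build_vocab_tree(descriptors, k=10, depth=3, ids=None):
    """Recursively cluster descriptors with k-means; the leaves of the
    resulting K-ary tree are the visual words of the dictionary."""
    ids = ids if ids is not None else itertools.count()
    node = {"center": descriptors.mean(axis=0), "children": None}
    if depth == 0 or len(descriptors) < k:
        node["word_id"] = next(ids)            # leaf node: a visual word
        return node
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(descriptors)
    node["children"] = [build_vocab_tree(descriptors[labels == i], k, depth - 1, ids)
                        for i in range(k)]
    return node

tree = build_vocab_tree(np.random.rand(2000, 32), k=5, depth=2)
```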
Wherein the word vector matching for the image features comprises the following steps:
step 31: word statistics. Extract the image features, determine the word to which each feature belongs, and count the categories and frequencies of the words that appear;
step 32: word weight calculation. The weights are calculated by the TF-IDF method;
$TF_i$, the frequency of the i-th word in the image, is defined as the number of times $n_i$ the word appears in the image divided by the total number of words $N$ in the image, as in formula (3):

$TF_i = \dfrac{n_i}{N}$ (3)
$IDF_i$ is the negative logarithm of the ratio of the number of features $n_i$ in the leaf node where the i-th word is located to the total number of features $n$, as in formula (4):

$IDF_i = -\log \dfrac{n_i}{n}$ (4)
The weight $w_i$ of the i-th word is the product of $TF_i$ and $IDF_i$, as in formula (5):

$w_i = TF_i \times IDF_i$ (5)
step 33: word vector construction. The word vector $V_n$ of the n-th image is defined as in formula (6):

$V_n = [w_1, w_2, w_3, \ldots, w_j]$ (6)
step 34: similarity score calculation. Common similarity scoring methods include the dot product, L1 norm, L2 norm, KL divergence (relative entropy), Bhattacharyya distance, chi-squared distance, and so on. To ensure matching speed, the simpler L1 norm is used. The similarity score $S$ is calculated according to formula (7), where $V_{ij}$ denotes the word vector of the j-th key frame image of the i-th map.
$S(V_{1j}, V_{2k}) = 1 - \dfrac{1}{2} \left\| \dfrac{V_{1j}}{\|V_{1j}\|_1} - \dfrac{V_{2k}}{\|V_{2k}\|_1} \right\|_1$ (7)
Step 35: Gaussian filtering of the similarity scores. Since the RGB-D images are sampled at consecutive time instants, the source image can be considered similar not only to the target image but also to the target image's temporal neighbours. Weighting adjacent similarity scores with a Gaussian filter therefore reduces mismatches. The Gauss-filtered similarity score $S_{gauss}$ is calculated as in formula (8):
$S_{gauss,j} = \sum_{k=-m}^{m} g_k S_{j+k}, \qquad g_k = \dfrac{1}{\sqrt{2\pi}\,\sigma} e^{-k^2 / 2\sigma^2}$ (8)
step 36: image matching. Image pairs with similarity scores higher than the threshold are selected.
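Steps 31-36 can be sketched as follows, under formulas (3)-(8) as reconstructed above. This is a non-authoritative Python illustration; the function names, the IDF table, and all parameters are assumptions:

```python
import numpy as np

def tfidf_vector(word_ids, idf):
    """Word vector V_n of one image, per formulas (3)-(6):
    TF_i = n_i / N, then w_i = TF_i * IDF_i."""
    counts = np.bincount(word_ids, minlength=len(idf))
    tf = counts / max(len(word_ids), 1)
    return tf * idf

def l1_score(v1, v2):
    """L1-norm similarity of two word vectors, in the spirit of formula (7)."""
    v1 = v1 / (np.abs(v1).sum() + 1e-12)
    v2 = v2 / (np.abs(v2).sum() + 1e-12)
    return 1.0 - 0.5 * np.abs(v1 - v2).sum()

def gaussian_smooth(scores, sigma=1.0, radius=2):
    """Weight each frame's score by its temporal neighbours, per formula (8)."""
    k = np.arange(-radius, radius + 1)
    g = np.exp(-k ** 2 / (2 * sigma ** 2))
    return np.convolve(scores, g / g.sum(), mode="same")

idf = np.random.rand(1000)                          # one IDF weight per word
v1 = tfidf_vector(np.random.randint(0, 1000, 300), idf)
v2 = tfidf_vector(np.random.randint(0, 1000, 300), idf)
smoothed = gaussian_smooth(np.full(10, l1_score(v1, v2)))
```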
Further, the estimation of the transformation relation of the two maps through the ICP algorithm for registration includes the following steps:
step 41: ORB feature extraction and matching. A set of well-matched 3D points $P$ and $P'$ is obtained from the similar images, as in formula (9):

$P = \{p_1, \ldots, p_n\}, \qquad P' = \{p'_1, \ldots, p'_n\}$ (9)
Step 42: constructing an error function;
(1) Assume the pose transformation between the matched images consists of a rotation matrix $R_{3 \times 3}$ and a translation vector $t_{3 \times 1}$; then the i-th pair of matched points satisfies the constraint of formula (10):

$p_i = R p'_i + t$ (10)
(2) The error term of the i-th matched point pair is defined as in formula (11):

$e_i = p_i - (R p'_i + t)$ (11)
step 43: and solving the constructed error function through an ICP algorithm to perform registration.
The ICP solution constructs the least-squares problem of formula (12), and $R$ and $t$ are solved by the SVD method so that the sum of squared errors is minimized:

$\min_{R,t} \dfrac{1}{2} \sum_{i=1}^{n} \left\| p_i - (R p'_i + t) \right\|_2^2$ (12)
The calculation steps are as follows:
(1) Calculate the centroids of the two point sets, denoted $\mu$ and $\mu'$, as in formulas (13) and (14):

$\mu = \dfrac{1}{n} \sum_{i=1}^{n} p_i$ (13)

$\mu' = \dfrac{1}{n} \sum_{i=1}^{n} p'_i$ (14)
(2) Construct the de-centroided matrix $H$, as in formula (15):

$H = \sum_{i=1}^{n} (p'_i - \mu')(p_i - \mu)^T$ (15)
(3) Perform SVD on the matrix $H$, as in formula (16):

$H = U \Sigma V^T$ (16)
(4) Calculate the result: the rotation matrix $R$ is given by formula (17) and the translation vector $t$ by formula (18); $R$ and $t$ are then combined into the homogeneous transformation matrix of formula (19):

$R = V U^T$ (17)

$t = \mu - R \mu'$ (18)

$T = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$ (19)
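Under the conventions of formulas (12)-(19), the closed-form SVD solution admits a compact NumPy sketch. This is an illustration rather than the patent's code; the reflection guard is a standard safeguard not mentioned in the text:

```python
import numpy as np

def icp_svd(p, p_prime):
    """Closed-form solve of formula (12): find R, t minimizing
    sum ||p_i - (R p'_i + t)||^2 over the matched point sets of formula (9)."""
    mu, mu_p = p.mean(axis=0), p_prime.mean(axis=0)   # formulas (13), (14)
    H = (p_prime - mu_p).T @ (p - mu)                 # formula (15)
    U, _, Vt = np.linalg.svd(H)                       # formula (16)
    R = Vt.T @ U.T                                    # formula (17)
    if np.linalg.det(R) < 0:                          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu - R @ mu_p                                 # formula (18)
    T = np.eye(4)                                     # formula (19)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Quick check: a pure translation of a random cloud is recovered exactly.
p_prime = np.random.rand(100, 3)
p = p_prime + np.array([0.5, 0.0, -0.2])
T = icp_svd(p, p_prime)
```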
Further, in the invention, the step of fusing the two maps, performing one pass of voxel filtering and statistical filtering, and constructing the multi-robot dense point cloud map comprises the following steps:
Step 51: calculate the coordinate transformation relation $T_m$ between the two maps. Two similar images are obtained through the bag-of-words model; assume the camera extrinsic matrices of these images are $T_{c1}$ and $T_{c2}$ respectively, and calculate the transformation matrix $T_{cc}$ between the images by the ICP algorithm. Suppose the coordinates of a point on the map in the two map coordinate systems are $p$ and $p'$ respectively, as shown in FIG. 5.
The constraint between the two map coordinate systems is expressed by formula (20):

$p = T_m p'$ (20)
The constraint between the two images is formula (21):

$T_{c1}\, p = T_{cc}\, T_{c2}\, p'$ (21)
Combining formula (20) and formula (21) yields formula (22):

$T_m = T_{c1}^{-1} T_{cc} T_{c2}$ (22)
step 52: RANSAC optimization. Since more than one pair of images may be judged similar by the bag-of-words model and mismatches may exist among them, the resulting $T_m$ is not unique; to ensure correctness, outliers are screened out by the RANSAC algorithm and the optimal $T_m$ is calculated.
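A minimal sketch of steps 51-52, assuming 4x4 homogeneous matrices throughout; the consensus selection below only scores the candidate transforms produced by different similar-image pairs, which is a simplification of full RANSAC:

```python
import numpy as np

def map_transform(T_c1, T_c2, T_cc):
    """Compose the map-to-map transform of formula (22):
    T_m = T_c1^-1 * T_cc * T_c2."""
    return np.linalg.inv(T_c1) @ T_cc @ T_c2

def select_transform(candidates, p, p_prime, thresh=0.05):
    """Keep the candidate T_m under which the most matched point pairs
    satisfy p ~ T_m p' (formula (20)), discarding mismatched image pairs."""
    hom = np.hstack([p_prime, np.ones((len(p_prime), 1))])   # homogeneous p'
    def inliers(T):
        residual = np.linalg.norm(p - (hom @ T.T)[:, :3], axis=1)
        return int((residual < thresh).sum())
    return max(candidates, key=inliers)
```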
Step 53: voxel filtering. After map fusion there are inevitably overlapping regions, and voxel filtering is needed to remove the repeated parts and reduce memory consumption; for the specific voxel filtering procedure, refer to step 13.
Step 54: statistical filtering. Because the infrared light of an RGB-D camera is easily affected by natural light, the data contain a certain number of noisy outliers, which need to be removed by statistical filtering. Statistical filtering assumes that the mean distances from all points to their K nearest neighbours follow a Gaussian distribution; the variance of these mean distances determines a threshold, and a point whose mean distance exceeds the threshold is judged an outlier. Since statistical filtering is computationally intensive and the noise it removes is relatively minor, it is applied herein only before the final map output.
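A hedged sketch of this statistical filter using SciPy's KD-tree (the function name and default parameters are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_filter(points, k=20, std_ratio=2.0):
    """Drop a point when its mean distance to its K nearest neighbours
    exceeds the global mean by std_ratio standard deviations, following
    the Gaussian assumption described above."""
    dists, _ = cKDTree(points).query(points, k=k + 1)   # first hit is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    threshold = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d < threshold]
```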
In the embodiment of the invention, based on the mapping scheme of the ORB-SLAM3 algorithm, a two-layer filtering algorithm is proposed for screening map points, which efficiently realizes the construction of a single-robot dense point cloud map and alleviates the high computing-power and memory demands of traditional dense point cloud mapping algorithms; combined with a bag-of-words model, a scheme is proposed for indirectly registering dense point cloud maps by matching map key frame images, and finally the map is filtered and denoised with voxel filtering and statistical filtering. Experimental results against comparison algorithms show that the algorithm reliably realizes multi-robot SLAM map construction in different scenes and has good real-time performance and accuracy, so the method has a good application prospect.
It is noted that, for simplicity of explanation, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will appreciate that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the invention. Further, those of skill in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the embodiments of the invention.
Referring to fig. 6, a structural block diagram of an embodiment of a dense point cloud map construction apparatus with multi-robot cooperation according to an embodiment of the present invention is shown, which may specifically include the following modules:
the acquisition module 301 is configured to acquire the RGB-D images of key frames respectively acquired by the two robots, extract ORB feature data from the RGB-D images, and solve for the pose relationship between the key frames;
the local map adding module 302 is used for performing down-sampling filtering on the extracted RGB-D image to screen map points, constructing a key frame dense point cloud map by using a pinhole camera model, and adding the key frame map to the local map through a key frame pose relationship;
the dense point cloud map building module 303 is used for processing the local map through voxel filtering and respectively completing the building of dense point cloud maps of the two robots;
and the multi-robot dense point cloud map construction module 304 is used for calculating and matching the distance between the word vectors of the key frame images of the two dense point cloud maps by using a bag-of-words model constructed from the ORB features, estimating the transformation relation of the two maps by using the ICP (Iterative Closest Point) algorithm, fusing the two maps, performing one pass of voxel filtering and statistical filtering, and constructing the multi-robot dense point cloud map.
Preferably, the local map adding module includes:
the down-sampling filtering submodule is used for performing down-sampling filtering on the RGB-D image in an interlaced alternate sampling mode;
the local map transformation submodule is used for converting the data subjected to down-sampling filtering into a dense point cloud map of the key frame and transforming the map into the local map by utilizing the pose of the key frame;
and the voxel filtering submodule is used for carrying out voxel filtering on the local map.
Preferably, the multi-robot dense point cloud map building module comprises:
the K-ary tree dictionary building submodule is used for building a K-ary tree dictionary model;
the word vector matching submodule is used for carrying out word vector matching on the image characteristics;
and the registration submodule is used for estimating the transformation relation of the two maps through the ICP (Iterative Closest Point) algorithm and performing registration.
Preferably, the K-ary tree dictionary construction sub-module includes:
a feature extraction unit for extracting ORB features;
the word generating unit is used for clustering the characteristics by using a K-means clustering method to generate words;
and the constructing unit is used for constructing a K-ary tree dictionary model for storing words.
Preferably, the word vector matching sub-module includes:
the statistics unit is used for extracting image features, determining the word to which each feature belongs, and counting the categories and frequencies of the words that appear;
a weight calculation unit for calculating a weight by a TF-IDF method;
the word vector construction unit is used for constructing word vectors;
and the calculation unit is used for calculating similarity scores from the word vectors, applying Gaussian filtering to the scores, and identifying image pairs with a similarity score higher than a preset threshold.
Preferably, the registration sub-module comprises:
the extraction and matching unit is used for extracting and matching ORB characteristics;
an error function construction unit for constructing an error function;
And the registration unit is used for solving the constructed error function through an ICP algorithm to perform registration.
The modules in the multi-robot cooperative dense point cloud map building device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, or can be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
The multi-robot-cooperation dense point cloud map construction device can be used for executing the multi-robot-cooperation dense point cloud map construction method provided by any embodiment, and has corresponding functions and beneficial effects.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to realize a multi-robot cooperation dense point cloud map construction method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the configuration shown in fig. 7 is a block diagram of only a portion of the configuration associated with the present application, and is not intended to limit the scope of the present application as such, and that a particular computing device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory in which a computer program is stored and a processor which, when executing the computer program, carries out the steps of the embodiments of fig. 1-2.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the embodiments of FIGs. 1-2 are implemented.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the true scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or end device that comprises the element.
The method for constructing the multi-robot-cooperation dense point cloud map, the device for constructing the multi-robot-cooperation dense point cloud map, the computer equipment and the storage medium are introduced in detail, specific examples are applied in the method for explaining the principle and the implementation mode of the invention, and the description of the embodiments is only used for assisting in understanding the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A dense point cloud map construction method based on multi-robot cooperation is characterized by comprising the following steps:
acquiring RGB-D images of key frames respectively acquired by two robots, extracting ORB feature data from the RGB-D images and solving for the pose relation between the key frames;
performing down-sampling filtering on the extracted RGB-D image to screen map points, constructing a key frame dense point cloud map by using a pinhole camera model, and adding the key frame map to a local map through a key frame pose relationship;
processing the local map through voxel filtering, and respectively finishing the construction of dense point cloud maps of the two robots;
and calculating and matching the distance between the word vectors of the key frame images of the two dense point cloud maps by using a bag-of-words model constructed from the ORB features, estimating the transformation relation of the two maps by using an ICP (Iterative Closest Point) algorithm, fusing the two maps, performing one pass of voxel filtering and statistical filtering, and constructing a multi-robot dense point cloud map.
2. The multi-robot-cooperation dense point cloud map construction method according to claim 1, wherein the step of performing downsampling filtering on the extracted RGB-D image to screen map points, the step of constructing a key frame dense point cloud map by using a pinhole camera model, and the step of adding the key frame map to the local map through a key frame pose relationship comprises the steps of:
performing down-sampling filtering on the RGB-D image by adopting an interlaced alternate sampling mode;
converting the data after the down-sampling filtering into a dense point cloud map of the key frame, and transforming the map into a local map by using the pose of the key frame;
voxel filtering is performed on the local map.
3. The method for constructing the dense point cloud map with multi-robot cooperation as claimed in claim 1, wherein calculating and matching the distance between the word vectors of the key frame images of the two dense point cloud maps by using the bag-of-words model constructed from ORB features, estimating the transformation relation of the two maps by using the ICP (Iterative Closest Point) algorithm, fusing the two maps, and performing one pass of voxel filtering and statistical filtering to construct the multi-robot dense point cloud map comprises the following steps:
constructing a K-ary tree dictionary model;
performing word vector matching aiming at the image characteristics;
and estimating the transformation relation of the two maps by an ICP (Iterative Closest Point) algorithm, and carrying out registration.
4. The multi-robot collaborative dense point cloud map construction method according to claim 3, wherein the constructing a K-ary tree dictionary model comprises:
extracting ORB characteristics;
clustering the characteristics by using a K-means clustering method to generate words;
and constructing a K-ary tree dictionary model for storing words.
5. The multi-robot collaborative dense point cloud map construction method according to claim 3, wherein the performing word vector matching for image features comprises:
extracting image features, determining the word to which each feature belongs, and counting the categories and frequencies of the words that appear;
calculating the weight by a TF-IDF method;
constructing a word vector;
and calculating similarity scores from the word vectors, applying Gaussian filtering to the scores, and identifying image pairs with a similarity score higher than a preset threshold.
6. The multi-robot-cooperation dense point cloud map construction method according to claim 3, wherein the estimation of the transformation relation of the two maps through the ICP (Iterative Closest Point) algorithm for registration comprises:
extracting and matching ORB characteristics;
constructing an error function;
And solving the constructed error function through an ICP algorithm to perform registration.
7. A dense point cloud map construction device with multi-robot cooperation is characterized by comprising the following components:
the acquisition module is used for acquiring RGB-D images of key frames respectively acquired by the two robots, extracting ORB characteristic data according to the RGB-D images and resolving to obtain a pose relation between the key frames;
the local map adding module is used for performing down-sampling filtering on the extracted RGB-D image to screen map points, constructing a key frame dense point cloud map by using a pinhole camera model, and adding the key frame map to the local map through a key frame pose relationship;
the dense point cloud map building module is used for processing the local map through voxel filtering and respectively completing the building of dense point cloud maps of the two robots;
and the multi-robot dense point cloud map construction module is used for calculating and matching the distance between the word vectors of the key frame images of the two dense point cloud maps by using a bag-of-words model constructed from the ORB features, estimating the transformation relation of the two maps by using an ICP (Iterative Closest Point) algorithm, fusing the two maps, performing one pass of voxel filtering and statistical filtering, and constructing the multi-robot dense point cloud map.
8. The multi-robot collaborative dense point cloud mapping apparatus of claim 7, wherein the local map adding module comprises:
the down-sampling filtering submodule is used for performing down-sampling filtering on the RGB-D image in an interlaced alternate sampling mode;
the local map transformation submodule is used for converting the data subjected to down-sampling filtering into a dense point cloud map of the key frame and transforming the map into the local map by using the pose of the key frame;
and the voxel filtering submodule is used for carrying out voxel filtering on the local map.
9. A computer device comprising a memory storing a computer program and a processor, wherein the processor when executing the computer program implements the steps of the multi-robot collaborative dense point cloud map construction method of any of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the multi-robot-collaborative dense point cloud map construction method of any one of claims 1 to 6.
CN115018999A (en) - Multi-robot-cooperation dense point cloud map construction method and device. Application CN202210611705.1A, filed 2022-05-31. Status: Pending.

Priority Applications (1)

Application Number: CN202210611705.1A | Priority Date: 2022-05-31 | Filing Date: 2022-05-31 | Title: Multi-robot-cooperation dense point cloud map construction method and device

Publications (1)

Publication Number: CN115018999A | Publication Date: 2022-09-06

Family ID: 83071039

Family Applications (1)

Application Number: CN202210611705.1A | Title: Multi-robot-cooperation dense point cloud map construction method and device

Country Status (1)

Country: CN | Link: CN115018999A (en)

Cited By (3)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN115965673A * | 2022-11-23 | 2023-04-14 | 中国建筑一局(集团)有限公司 | Centralized multi-robot positioning method based on binocular vision
CN115965673B * | 2022-11-23 | 2023-09-12 | 中国建筑一局(集团)有限公司 | Centralized multi-robot positioning method based on binocular vision
CN116030213A * | 2023-03-30 | 2023-04-28 | 千巡科技(深圳)有限公司 | Multi-machine cloud edge collaborative map creation and dynamic digital twin method and system


Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination