CN117723048A - Multi-robot compressed communication collaborative mapping method and system under communication limitation - Google Patents

Multi-robot compressed communication collaborative mapping method and system under communication limitation

Info

Publication number
CN117723048A
Authority
CN
China
Prior art keywords
map
robot
maps
local grid
compressed
Prior art date
Legal status
Pending
Application number
CN202311741478.5A
Other languages
Chinese (zh)
Inventor
张泽旭
徐田来
袁帅
张良
郭鹏
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202311741478.5A
Publication of CN117723048A
Current legal status: Pending


Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a multi-robot compressed communication collaborative mapping method and system under communication limitation, and relates to the technical field of multi-robot collaborative mapping. The technical key points of the invention include: a master robot and a plurality of slave robots each establish a local grid map of the environment in which they are located according to their own pose; the plurality of slave robots compression-encode their local grid maps and respectively transmit the compressed and encoded local grid maps to the master robot; and the master robot decodes the compressed and encoded local grid maps and fuses all the local grid maps to obtain a global map. The invention combines a convolutional neural network, the Huffman algorithm and the RLE algorithm to design an intelligent Huffman compression algorithm that compresses an occupancy grid map to less than 1% of its original size and can reduce the communication pressure by 99%. The invention can provide accurate navigation and positioning for the robot, thereby enabling complex tasks such as reconnaissance, patrol and rescue.

Description

Multi-robot compressed communication collaborative mapping method and system under communication limitation
Technical Field
The invention relates to the technical field of multi-robot collaborative mapping, in particular to a multi-robot compressed communication collaborative mapping method and system under communication limitation.
Background
With the development of sensor network technology and robot technology, the intelligence level of mobile robots has greatly improved, and they play an important role in scenarios such as daily life, military tasks and disaster rescue. The navigation and positioning of the mobile robot are particularly important, and they depend on the construction of an environment map; the construction of the environment map therefore plays a vital role in the navigation and positioning of the mobile robot. Single-robot map construction has already achieved good results, but a single robot and its sensors are often limited in computing capacity and field of view, and if the scene to be mapped is very large and the terrain is complex, single-robot mapping requires a great amount of time. In order to break through the bottleneck of single-robot operation, multi-robot collaboration has attracted many researchers. Since the 1990s, multi-agent systems and distributed artificial intelligence systems have developed rapidly, and some of these results have been introduced into multi-robot collaboration, improving the robustness and maneuverability of multi-robot systems. This has enabled the application of multiple robots in more complex scenes, such as rescue scenarios for underground mining accidents, fires and epidemic prevention.
Multi-robot collaborative mapping mainly has to solve two problems: the transmission of the local maps and their stitching. The stitching problem in multi-robot collaborative mapping is to merge and stitch the local map obtained by each robot so as to obtain a complete global map. The most critical issue in map stitching is to obtain the transformation matrix between the local maps. According to the stitching means, map stitching is mainly divided into direct and indirect methods: the direct method computes the transformation matrix directly from data collected by the camera or radar carried by the robot; the indirect method computes the relative transformation matrix by detecting and matching common features between the local maps. There has already been a great deal of research on the fusion and stitching of maps, and the technology is relatively mature. However, problems remain in the transmission of the map: for example, when a multi-robot system is applied to a combat scenario, the enemy has a certain ability to interfere with our communication devices, so that the communication capability of our system is limited. How to stitch and fuse multiple local maps through efficient transmission in such cases remains a challenge.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a multi-robot compressed communication collaborative mapping method and system under the communication limitation.
According to one aspect of the present invention, a multi-robot compressed communication collaborative mapping method under communication limitation is provided, the method includes the following steps:
a master robot and a plurality of slave robots each establish a local grid map of the environment in which they are located according to their own pose;
the plurality of slave robots perform compression coding on the local grid maps, and transmit the compressed and coded local grid maps to the master robot respectively;
and the host robot decodes the compressed and encoded local grid map, and fuses all the local grid maps to obtain a global map.
Further, the master robot and the plurality of slave robots build their local grid maps according to their own poses using the Gmapping algorithm.
Further, the plurality of slave robots perform compression encoding on the local raster map by using a pre-trained compression communication network model, and the master robot decodes the compressed and encoded local raster map by using the pre-trained compression communication network model.
Further, the compressed communication network model comprises a feature extraction convolutional neural network, a Huffman encoder, a Huffman decoder and a map recovery convolutional neural network; the feature extraction convolutional neural network is used for extracting map features, and the map restoration convolutional neural network is used for restoring the map features into a grid map; the feature extraction convolutional neural network and the map restoration convolutional neural network are both composed of a plurality of weight layers.
Further, the process of compression encoding of their respective local grid maps by the plurality of slave robots using the pre-trained compressed communication network model includes: after obtaining the map features through the feature extraction convolutional neural network, calculating all pixel values V = {v_1, …, v_n} appearing in the map features and the probability with which each pixel value v_i occurs in the map features; sorting these occurrence probabilities from high to low to obtain a set P = {p_1, …, p_n}; initializing n nodes N = {n_1, …, n_n} in one-to-one correspondence with P, and merging the two nodes corresponding to the smallest probabilities into a new node, so that N becomes {n_1, …, n_{n-2}, n_merged} while P is updated accordingly to {p_1, …, p_{n-2}, p_merged}; repeating this operation until only one node remains in N and only one probability value, equal to 1, remains in P; obtaining, from the node fusion process, a coding table T corresponding to the map features, in which each pixel value v_i corresponds to a binary code code_i; writing the map features as a string of binary codes according to the coding table T; and converting the binary code into a hexadecimal code h_i, h_i being the encoded character string.
Further, the process of compression encoding of their respective local grid maps by the plurality of slave robots using the pre-trained compressed communication network model further comprises: after the encoded character string h_i is obtained, using the RLE algorithm to replace each run of consecutively repeated characters in the hexadecimal code h_i with the character itself and its number of repetitions, finally obtaining the replaced encoded character string e_i.
Further, the process of decoding the compressed and encoded local raster map by the host robot using the pre-trained compressed communication network model includes:
define a string d_i as an empty string; traverse the character string e_i to obtain all of its characters and their repeat counts and append them to d_i, so that d_i is exactly the hexadecimal code h_i from the encoder; convert d_i to binary and, according to the coding table T, convert the corresponding binary code segments back into the values of the map features, and recover the map through the map restoration convolutional neural network.
Further, the process of fusing all the local grid maps includes: sorting all the local grid maps, fusing them pairwise in sequence, and finally merging them into a global map. The method for fusing two maps is as follows:
if the initial relative poses of the two robots at the time the two maps to be fused were built are known, i.e. the rigid transformation R and t between the maps built by the robots is known, the two local maps are fused according to R and t; if the initial relative poses of the robots at the time of map building are unknown, the rigid transformation R and t between the two local grid maps is first computed, and the two local grid maps are then fused according to R and t; wherein the rigid transformation R and t is given by
R = [[cosθ, -sinθ], [sinθ, cosθ]],  t = [t_x, t_y]^T
where θ represents the rotation relationship between the maps and R is the rotation matrix between the two maps; t represents the translation vector between the two maps, and t_x and t_y respectively represent the translation of the two maps after the rotation R;
the pairwise fusion formula is: map_fusion = map_1 + (map_2 × R + t).
Further, if the initial relative poses of the robots at the time of map building are unknown, the rigid transformation R and t between the local grid maps is obtained in the process of fusing all the local grid maps by the following calculation: detecting ORB feature points in each local grid map; matching the key points of the two local grid maps with a brute-force matcher and finding the key points whose descriptor distances are closest in the two local grid maps to form matching point pairs; and calculating the rigid transformation R, t between the two maps from the matching point pairs.
According to another aspect of the present invention, there is provided a multi-robot compressed communication collaborative mapping system under communication limitation, the system comprising:
the local map building module is configured to build local grid maps for the environments of the master robot and the slave robots according to the pose of the master robot and the slave robots respectively;
the compression transmission module is configured to perform compression coding on the local grid map by the plurality of slave robots by adopting a pre-trained compression communication network model, and respectively transmit the compressed and coded local grid map to the master robot;
the global map building module is configured to decode the compressed and encoded local grid map by the host robot through a pre-trained compressed communication network model, and fuse all the local grid maps to obtain a global map;
the compressed communication network model comprises a feature extraction convolutional neural network, a Huffman encoder, a Huffman decoder and a map recovery convolutional neural network; the feature extraction convolutional neural network is used for extracting map features, and the map restoration convolutional neural network is used for restoring the map features into a grid map.
The beneficial technical effects of the invention are as follows:
In combination with a convolutional neural network, the Huffman algorithm and the RLE algorithm, the invention designs an intelligent Huffman compression algorithm that can compress an occupancy grid map to less than 1% of its original size. A lightweight convolutional neural network is designed within this intelligent compression algorithm, which minimizes the size of the map without losing its key information and does not require much computing power. Ultra-wideband (UWB) is used as the communication medium to verify the multi-robot compressed communication collaborative mapping method, and the results show that, compared with transmission without compression, the method can reduce the communication pressure by 99 percent with a positioning error smaller than 5 cm.
Drawings
The invention may be better understood by reference to the following description taken in conjunction with the accompanying drawings, which are included to provide a further illustration of the preferred embodiments of the invention and to explain the principles and advantages of the invention, together with the detailed description below.
Fig. 1 is a frame diagram of a multi-robot compressed communication collaborative mapping method under communication limitation according to an embodiment of the present invention.
Fig. 2 is a block diagram of feature extraction CNN in a compressed communication network model in an embodiment of the present invention.
Fig. 3 is a block diagram of a map restoration CNN in a compressed communication network model in an embodiment of the present invention.
Fig. 4 is a huffman coding tree in an embodiment of the present invention.
Fig. 5 is a comparison diagram of experiments in which the method of the present invention is applied to the collaborative mapping of two robots.
FIG. 6 is a graph showing the change of the comparative index of the map of two experiments under the same environment along with the experiment running time in the embodiment of the invention.
Fig. 7 is a graph showing the change of compression rate of a local map with running time when two robots are cooperatively mapped in the embodiment of the present invention.
FIG. 8 is a graph comparing the map transmission time required for two sets of experiments in accordance with an embodiment of the present invention.
Fig. 9 is a diagram showing the comparison of the bandwidths required for two sets of experimental map transmissions and the bandwidths of the actual uwb communication modules in an embodiment of the present invention.
FIG. 10 is an error map of two sets of experimentally built global maps providing positioning for a robot in an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, exemplary embodiments or examples of the present invention will be described below with reference to the accompanying drawings. It is apparent that the described embodiments or examples are only implementations or examples of a part of the invention, not all. All other embodiments or examples, which may be made by one of ordinary skill in the art without undue burden, are intended to be within the scope of the present invention based on the embodiments or examples herein.
In order to overcome the defects of the prior art, the invention provides a multi-robot compressed communication collaborative mapping method and system under communication limitation. According to the invention, after a single robot obtains a local grid map using a laser radar and a vision sensor, the local grid map is compressed by the designed Huffman compression method and then transmitted to the processing unit of the robot that performs map splicing and fusion, where it is decoded; after obtaining the local grid maps transmitted by the individual robots, that robot fuses them to obtain the global map of the unknown environment. Under a limited communication environment, the invention can thus realize compressed transmission of the local grid maps through the Huffman compression algorithm, and further obtain the complete grid map through map fusion.
The embodiment of the invention provides a multi-robot compressed communication collaborative mapping method under communication limitation, as shown in fig. 1, comprising the following steps:
step one, a master robot and a plurality of slave robots each establish a local grid map of the environment in which they are located according to their own pose;
secondly, the plurality of slave robots perform compression coding on the local grid maps, and transmit the compressed and coded local grid maps to the master robot respectively;
and thirdly, decoding the compressed and encoded local grid map by the host robot, and fusing all the local grid maps to obtain a global map.
The method starts with step one. In step one, the master robot and the plurality of slave robots each establish a local grid map of the environment in which they are located according to their own pose.
According to the embodiment of the invention, the map building robot adopts a Turtlebot2 which is matched with hardware such as a laser radar, a two-wheel differential driving chassis and the like. Using the Gmapping algorithm under ROS, two state variables of the pose and map of the robot are estimated at the same time as follows:
p(x_t, m | z_{1:t}, u_{1:t-1})   (1)
wherein x_t represents the robot state, m represents the environment map, z represents the sensor observations, and u represents the control input.
The Gmapping algorithm is based on particle filtering. The first step is state prediction: the state of each particle at the current moment is updated by the motion model, and Gaussian sampling noise is added to the initial value to obtain a rough state estimate. The second step is measurement: a scan-matching pass is performed on the basis of the rough state estimate; starting from the pose predicted by the motion model, the predicted pose is moved into six states (negative x, positive x, negative y, positive y, left rotation and right rotation), the matching score in each state is computed, and the pose corresponding to the highest score is selected as the optimal pose, so as to improve the odometry-based proposal distribution. The weight of each particle is then computed, and resampling is decided using a measure of the dispersion of the weights, with the judgment formula
N_eff = 1 / Σ_{i=1}^{N} (w_i)^2   (2)
where w_i is the normalized weight of the i-th particle. The smaller N_eff is, the larger the gap between the particle weights and the less well the particles represent the true distribution; when N_eff falls below a threshold value, the particles deviate too much from the true distribution and are resampled. Finally, the map is built: the two-dimensional environment is divided into grid cells, and the state of each grid cell is assumed to be independent. For a point in the environment, p(s=1) represents the probability that it is in a passable state, and p(s=0) represents the probability that it is an obstacle.
And then executing a step II, wherein in the step II, the plurality of slave robots perform compression coding on the local grid maps, and respectively transmit the compressed and coded local grid maps to the master robot.
According to the embodiment of the invention, the plurality of slave robots compression-encode their local grid maps using a pre-trained compressed communication network model; in order to achieve a high compression of the local grid map, a compressed communication network model that fuses a convolutional neural network, the Huffman algorithm and the RLE algorithm is adopted.
The compressed communication network model consists of a feature extraction convolutional neural network, a Huffman encoder, a Huffman decoder and a map restoration convolutional neural network, and the network optimization function is designed as formula (3):
(θ_1*, θ_2*) = argmin_{θ_1, θ_2} || F(Cod(R(m; θ_1)); θ_2) - m ||^2   (3)
wherein R(·) and F(·) represent the feature extraction convolutional neural network and the map restoration convolutional neural network, respectively; Cod(·) denotes the encoder and decoder; m is the local grid map, i.e. the input of the network; θ_1 and θ_2 are the parameters of the optimization function.
A local grid map feature extraction convolutional neural network and a map restoration convolutional neural network are designed. As shown in fig. 2, the feature extraction convolutional neural network consists of three weight layers: the first layer consists of a convolution layer (with the convolution kernel set to 3×3) and a ReLU activation function (strengthening the expressive power of the model), and its purpose is to perform preliminary feature extraction on the grid map; considering the computing power of the microcomputer carried on the robot, the number of convolution channels is set to 16. The second layer consists of a convolution layer, a BN layer (batch normalization layer) and a ReLU activation function; its purpose is to downsample the preliminarily extracted features and make them more prominent, and in order to reduce the preliminarily extracted feature map to one quarter, the stride of the convolution layer is set to 2, the convolution kernel is set to 3×3, and the numbers of input and output channels are set to 16. The last layer is a convolution layer with 16 input channels and 1 output channel, which outputs the obtained map feature m_feature.
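The following PyTorch sketch shows one possible reading of the three-weight-layer feature extraction network described above; the single input channel, the padding of 1, the kernel size of the last layer and the class name are assumptions not specified in the text.

```python
import torch
import torch.nn as nn

class FeatureExtractionCNN(nn.Module):
    """Three weight layers: conv+ReLU, conv(stride 2)+BN+ReLU, conv down to 1 channel."""

    def __init__(self):
        super().__init__()
        self.layer1 = nn.Sequential(                  # preliminary feature extraction
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.layer2 = nn.Sequential(                  # downsample the feature map to 1/4 area
            nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(inplace=True),
        )
        self.layer3 = nn.Conv2d(16, 1, kernel_size=3, padding=1)  # single-channel m_feature

    def forward(self, grid_map):
        # grid_map: (N, 1, H, W) occupancy grid -> m_feature: (N, 1, H/2, W/2)
        return self.layer3(self.layer2(self.layer1(grid_map)))

m_feature = FeatureExtractionCNN()(torch.rand(1, 1, 256, 256))
print(m_feature.shape)    # torch.Size([1, 1, 128, 128])
```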
As shown in fig. 3, the map restoration convolutional neural network consists of six weight layers of three types: convolution layer + activation function, convolution layer + BN layer + activation function, and convolution layer. The purpose of the map restoration convolutional neural network is to restore the map feature m_feature obtained by the feature extraction convolutional neural network into a grid map. A residual structure is applied in the network because its skip connection helps the network learn the difference between input and output, which preserves more of the detail in m_feature, so that the map restoration convolutional neural network can restore a map more consistent with the grid map input to the feature extraction convolutional neural network.
The map restoration convolutional neural network first up-samples the input m_feature and then feeds it into the weight layers. In the first weight layer, the convolution layer has 16 output channels and a 3×3 kernel and generates 16 feature maps; the second to fifth layers each consist of a convolution layer with 16 input and output channels and a 3×3 kernel, a BN layer and an activation function; the last layer is a convolution layer with 16 input channels and 1 output channel, yielding a gray-scale image. Finally, the recovered grid map is composed of the obtained gray-scale image and the up-sampled input.
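The following PyTorch sketch shows one possible reading of the six-weight-layer map restoration network with an up-sampling step and a residual skip connection; the up-sampling factor of 2 (matching the stride-2 down-sampling of the feature extraction network) and the exact placement of the skip connection are assumptions.

```python
import torch
import torch.nn as nn

class MapRestorationCNN(nn.Module):
    """Six weight layers plus a residual skip from the up-sampled input."""

    def __init__(self):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.head = nn.Sequential(                    # weight layer 1: conv + activation
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.body = nn.Sequential(*[                  # weight layers 2..5: conv + BN + activation
            nn.Sequential(
                nn.Conv2d(16, 16, kernel_size=3, padding=1),
                nn.BatchNorm2d(16),
                nn.ReLU(inplace=True),
            )
            for _ in range(4)
        ])
        self.tail = nn.Conv2d(16, 1, kernel_size=3, padding=1)   # weight layer 6: gray-scale image

    def forward(self, m_feature):
        up = self.upsample(m_feature)                 # back to the original map resolution
        gray = self.tail(self.body(self.head(up)))
        return gray + up                              # recovered map = gray image + up-sampled input

restored = MapRestorationCNN()(torch.rand(1, 1, 128, 128))
print(restored.shape)     # torch.Size([1, 1, 256, 256])
```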
A designed encoder-decoder combining the Huffman and RLE (Run-Length Encoding) algorithms is added to the feature extraction convolutional neural network and the map restoration convolutional neural network; the encoder-decoder algorithm is as follows:
First, all values V = {v_1, …, v_n} appearing in m_feature and the probability with which each value v_i appears in m_feature are calculated; since m_feature is in fact a reduced map, essentially a picture, these values are the pixel values of the m_feature image. The occurrence probabilities of the pixel values are sorted from high to low to obtain a set P = {p_1, …, p_n}. Then n nodes N = {n_1, …, n_n} are initialized in one-to-one correspondence with P, and the two nodes corresponding to the smallest probabilities are merged into a new node, so that N becomes {n_1, …, n_{n-2}, n_merged} while P is updated accordingly to {p_1, …, p_{n-2}, p_merged}; this operation is repeated until only one node remains in N and only one probability value, equal to 1, remains in P. From this node fusion process a Huffman coding tree is obtained, as shown in fig. 4, in which gray nodes represent the initial nodes and white nodes are the new nodes generated by node fusion. A coding table T of m_feature is obtained, in which each value v_i corresponds to a binary code code_i; m_feature is then written as a string of binary codes according to T, and the binary code is converted into a hexadecimal code h_i, which further compresses the encoded length. In order to further increase the compression ratio and reduce the communication pressure, the RLE algorithm is used to replace each run of consecutively repeated characters in h_i with the character itself and its number of repetitions. Thus, m_feature is finally compressed into the character string e_i.
And finally, executing a step three, wherein in the step three, the host robot decodes the compressed and encoded local grid map and fuses all the local grid maps to obtain a global map.
According to the embodiment of the invention, the host robot decodes the compressed and encoded local grid map using the pre-trained compressed communication network model. The decoder algorithm is as follows: define a string d_i as an empty string; traverse the character string e_i to obtain all of its characters and their repeat counts and append them to d_i, so that d_i is exactly the hexadecimal code h_i produced in the encoder; finally, convert d_i to binary and, according to the information in the coding table T, convert the corresponding binary code segments back into the values of m_feature, thereby recovering m_feature.
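A matching decoder sketch for the encoder sketch above: RLE expansion into d_i, hexadecimal-to-binary conversion, then table lookup back to pixel values. The n_values argument is an assumed piece of side information used to ignore the padding bits, which the text does not discuss.

```python
def decode_map_feature(table, e_i, n_values):
    """Invert the RLE, hexadecimal and Huffman steps of the encoder sketch above."""
    # 1) RLE expansion: "f3,a2" -> "fffaa"; the result d_i equals the hexadecimal code h_i.
    d_i = "".join(tok[0] * int(tok[1:]) for tok in e_i.split(",") if tok)
    # 2) hexadecimal -> binary string
    bits = "".join(f"{int(ch, 16):04b}" for ch in d_i)
    # 3) binary -> pixel values via the (prefix-free) coding table
    inverse = {code: v for v, code in table.items()}
    values, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:
            values.append(inverse[buf])
            buf = ""
            if len(values) == n_values:               # stop before the padding bits
                break
    return values

# Round trip with the encoder sketch above.
table, e_i, n = encode_map_feature([0, 0, 0, 255, 255, 127, 0, 0])
assert decode_map_feature(table, e_i, n) == [0, 0, 0, 255, 255, 127, 0, 0]
```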
The pre-training process mainly comprises the following steps: forward propagation, loss computation, back propagation and parameter updating. Forward propagation feeds the map into the model to obtain an output restored map. The loss is computed with the mean-square-error loss MSE_loss shown in formula (4), which measures the loss between the restored map and the originally input map:
MSE_loss = (1/N) Σ_{i=1}^{N} (M_i - M'_i)^2   (4)
wherein M_i represents the i-th input map, M'_i represents the corresponding restored map, and N is the number of training maps. Back propagation computes the gradients of the model parameters with respect to the loss. The parameter update uses the Adam optimizer (Adaptive Moment Estimation, an adaptive learning-rate optimization algorithm that combines first- and second-moment estimates of the gradient); by adaptively adjusting the learning rate of each parameter while taking the first- and second-moment estimates of the gradient into account, model parameters that minimize the loss function are found during training.
After the host robot decodes the compressed and encoded local grid map, all local grid maps need to be fused, and the splicing and fusing method is divided into the following two cases:
1) When the initial relative poses of the robots at the time of map building are known, the rigid transformation between the robots' maps, consisting of R (rotation) and t (translation), is given by formula (5):
R = [[cosθ, -sinθ], [sinθ, cosθ]],  t = [t_x, t_y]^T   (5)
wherein θ represents the rotation relationship between the maps and R is the rotation matrix between the two maps; t represents the translation vector between the two maps, and t_x and t_y respectively represent the translation of the two maps after the rotation R.
The local grid maps are fused pairwise using the rigid transformation R and t between them to obtain the global map. Two maps are fused as shown in formula (6):
map_fusion = map_1 + (map_2 × R + t)   (6)
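A minimal NumPy/OpenCV sketch of formulas (5) and (6): the rigid transform is assembled from θ, t_x, t_y, the second map is warped into the frame of the first, and the two grids are added. Representing the maps as 8-bit images and clipping the sum are assumptions about how the "+" in formula (6) is realized.

```python
import numpy as np
import cv2

def fuse_maps(map1, map2, theta, tx, ty):
    """Fuse two 8-bit occupancy grid images given their relative rigid transform."""
    c, s = np.cos(theta), np.sin(theta)
    # 2x3 affine [R | t] built from formula (5); usable directly by cv2.warpAffine.
    M = np.array([[c, -s, tx],
                  [s,  c, ty]], dtype=np.float32)
    h, w = map1.shape
    map2_in_map1 = cv2.warpAffine(map2, M, (w, h))    # map_2 x R + t, resampled into map1's frame
    fused = map1.astype(np.int32) + map2_in_map1.astype(np.int32)   # formula (6)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Example: fuse two random grids with a 10-degree rotation and a small translation.
m1 = np.random.randint(0, 256, (200, 200), dtype=np.uint8)
m2 = np.random.randint(0, 256, (200, 200), dtype=np.uint8)
fused = fuse_maps(m1, m2, np.deg2rad(10.0), tx=5.0, ty=-3.0)
```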
2) When the initial relative poses of the robots at the time of map building are unknown, the map fusion method is as follows:
First, ORB feature points are detected in each map; then the key points of the two maps are matched with a brute-force matcher, and the key points whose descriptor distances are closest in the two maps are found to form matching point pairs; the rigid transformation R and t between the two maps is calculated from these matching point pairs, and finally the multiple local grid maps are fused pairwise to obtain the global map (again using formula (6)).
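A minimal OpenCV sketch of this case: ORB key points are detected in both maps, matched with a brute-force matcher, and a rigid transform is fitted to the matched pairs. Using cv2.estimateAffinePartial2D with RANSAC is one common way to recover R and t from the point pairs and is an assumption here, not necessarily the solver used in the embodiment.

```python
import numpy as np
import cv2

def estimate_rigid_transform(map1, map2, max_matches=50):
    """Estimate R, t mapping points of map2 into map1 from ORB feature matches.

    map1, map2: 8-bit gray-scale grid map images.
    """
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(map1, None)
    kp2, des2 = orb.detectAndCompute(map2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)       # brute-force matcher
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:max_matches]
    src = np.float32([kp2[m.queryIdx].pt for m in matches])          # points in map2
    dst = np.float32([kp1[m.trainIdx].pt for m in matches])          # corresponding points in map1
    # Rotation + translation (+ uniform scale) fitted with RANSAC; an assumed solver choice.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M[:, :2], M[:, 2]                                         # R (2x2), t (2,)
```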
A specific example of the process according to the invention is given below:
first, a trained compressed communication network model is deployed into the processor of each robot. Then, the robots respectively use Gapping algorithm to respectively establish local grid maps for the environments where the robots are located. Then, the slave machine inputs the built local grid map into the deployed model, and the coding table obtained by extracting CNN and the encoder from the characteristics in the modelAnd code e i And will +.>And e i To the host. Then, the host receives +.>And e i Post-input to deployed model, +.>And e i And restoring the local grid map through a decoder and a map restoration CNN in the model. Finally, the host uses local fusionThe merging and splicing method fuses the received local map and the local grid map built by the host computer to obtain the global map.
The global map obtained by the invention can provide accurate navigation and positioning for the robot so as to finish complex tasks such as reconnaissance, patrol, rescue and the like.
Further experiments prove the technical effect of the invention.
Fig. 5 is an experimental comparison of applying the method of the present invention to collaborative mapping by two robots: the first row shows the mapping process over time without the method of the present invention, and the second row shows the mapping process over time with the method. Fig. 6 shows how the comparison indices of the maps from the two experiments in the same environment change with running time, where the ssim index represents the structural similarity between the global map built with the method and the global map built without it; ssim lies in the range [0,1], values closer to 1 mean the two maps are more similar, and 0 means the two maps are unrelated. psnr represents the quality of the map built with the method of the present invention with respect to the reference map (the global map built without the present invention); a larger value means higher map quality, and a value greater than 40 dB generally indicates excellent quality. Fig. 7 shows how the compression rate of the local maps changes with running time when the present invention is applied to collaborative mapping by two robots. Fig. 8 compares the map transmission times required in the two sets of experiments of fig. 5. Fig. 9 compares the bandwidth required for map transmission in the two sets of experiments of fig. 5 with the actual bandwidth of the UWB communication module. Fig. 10 shows the errors of positioning the robot with the global maps built in the two sets of experiments of fig. 5.
From figs. 5-7 it can be seen that the multi-robot collaborative mapping method under communication limitation provided by the invention achieves a compression rate of up to 99% on the local maps, while the quality of the recovered local maps is still guaranteed. From figs. 8-9 it can be seen that the method fits within the UWB hardware bandwidth and can realize real-time transmission of the maps. From fig. 10 it can be seen that the slight changes in the local maps introduced by the method hardly affect the global map obtained after fusion, which therefore provides accurate positioning for the robot.
An advantage of the invention is that the map features contain few distinct data values and consist mostly of consecutively repeated data, so the map features can be converted into a very short character string using the Huffman and RLE algorithms: the Huffman algorithm exploits the small number of distinct values in the map features, turning them into shorter binary codes, while the RLE algorithm exploits the large amount of consecutively repeated data, representing each run by the datum and its repetition count; the final map compression rate reaches 99%.
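A small usage example of the encoder sketch from the detailed description above, illustrating this point on a toy map feature with only three distinct values and long runs:

```python
# Toy map feature: 10,000 values, only three distinct symbols, long runs.
values = [0] * 6000 + [255] * 3000 + [127] * 1000
table, e_i, n = encode_map_feature(values)      # encoder sketch from the detailed description
print(len(e_i), "characters for", n, "values")  # on the order of tens of characters
```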
Another embodiment of the present invention provides a multi-robot compressed communication collaborative mapping system under communication limitation, the system comprising:
the local map building module is configured to build local grid maps for the environments of the master robot and the slave robots according to the pose of the master robot and the slave robots respectively;
the compression transmission module is configured to perform compression coding on the local grid map by the plurality of slave robots by adopting a pre-trained compression communication network model, and respectively transmit the compressed and coded local grid map to the master robot;
the global map building module is configured to decode the compressed and encoded local grid map by the host robot through a pre-trained compressed communication network model, and fuse all the local grid maps to obtain a global map;
the compressed communication network model comprises a feature extraction convolutional neural network, a Huffman encoder, a Huffman decoder and a map recovery convolutional neural network; the feature extraction convolutional neural network is used for extracting map features, and the map restoration convolutional neural network is used for restoring the map features into a grid map.
The functions of the multi-robot compressed communication collaborative mapping system under communication limitation of this embodiment can be illustrated by the multi-robot compressed communication collaborative mapping method under communication limitation described above, so the system embodiment is not described in further detail; reference may be made to the detailed description of the method embodiment.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments are contemplated within the scope of the invention as described herein. The disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is defined by the appended claims.

Claims (10)

1. The multi-robot compressed communication collaborative mapping method under the communication limitation is characterized by comprising the following steps:
a master robot and a plurality of slave robots each establish a local grid map of the environment in which they are located according to their own pose;
the plurality of slave robots perform compression coding on the local grid maps, and transmit the compressed and coded local grid maps to the master robot respectively;
and the host robot decodes the compressed and encoded local grid map, and fuses all the local grid maps to obtain a global map.
2. The method for collaborative mapping of multi-robot compressed communication under communication restriction according to claim 1, wherein the master robot and the plurality of slave robots build a local grid map according to their own pose using Gmapping algorithm.
3. The method for collaborative mapping of multi-robot compressed communication under communication constraints according to claim 1, wherein the plurality of slave robots perform compression encoding on their respective local raster maps using a pre-trained compressed communication network model, and the master robot decodes the compression-encoded local raster maps using the pre-trained compressed communication network model.
4. The method for collaborative mapping of multi-robot compressed communication under communication constraints according to claim 3, wherein the compressed communication network model includes a feature extraction convolutional neural network, a huffman encoder, a huffman decoder, and a map restoration convolutional neural network; the feature extraction convolutional neural network is used for extracting map features, and the map restoration convolutional neural network is used for restoring the map features into a grid map; the feature extraction convolutional neural network and the map restoration convolutional neural network are both composed of a plurality of weight layers.
5. The method for collaborative mapping of multi-robot compressed communication under communication constraints according to claim 4, wherein the process of compression encoding by the plurality of slave robots of their respective local grid maps using a pre-trained compressed communication network model includes: after obtaining the map features through the feature extraction convolutional neural network, calculating all pixel values V = {v_1, …, v_n} appearing in the map features and the probability with which each pixel value v_i occurs in the map features; sorting these occurrence probabilities from high to low to obtain a set P = {p_1, …, p_n}; initializing n nodes N = {n_1, …, n_n} in one-to-one correspondence with P, and merging the two nodes corresponding to the smallest probabilities into a new node, so that N becomes {n_1, …, n_{n-2}, n_merged} while P is updated accordingly to {p_1, …, p_{n-2}, p_merged}; repeating this operation until only one node remains in N and only one probability value, equal to 1, remains in P; obtaining, from the node fusion process, a coding table T corresponding to the map features, in which each pixel value v_i corresponds to a binary code code_i; writing the map features as a string of binary codes according to the coding table T; and converting the binary code into a hexadecimal code h_i, h_i being the encoded character string.
6. The method for collaborative mapping of multi-robot compressed communication under communication constraints according to claim 5, wherein the process of compression encoding by the plurality of slave robots of their respective local grid maps using a pre-trained compressed communication network model further comprises: after the encoded character string h_i is obtained, using the RLE algorithm to replace each run of consecutively repeated characters in the hexadecimal code h_i with the character itself and its number of repetitions, finally obtaining the replaced encoded character string e_i.
7. The method for collaborative mapping of multi-robot compressed communication under communication constraints according to claim 6, wherein the decoding of the compression-encoded local raster map by the host robot using a pre-trained compressed communication network model includes:
define a string d_i as an empty string; traverse the character string e_i to obtain all of its characters and their repeat counts and append them to d_i, so that d_i is exactly the hexadecimal code h_i from the encoder; convert d_i to binary and, according to the coding table T, convert the corresponding binary code segments back into the values of the map features, and recover the map through the map restoration convolutional neural network.
8. The method for collaborative mapping of multi-robot compressed communication under communication constraints according to claim 1, wherein the process of merging all local grid maps comprises:
sorting all the local grid maps, fusing them pairwise in sequence, and finally merging them into a global map; the method for fusing two maps is as follows:
if the initial relative poses of the two robots at the time the two maps to be fused were built are known, i.e. the rigid transformation R and t between the maps built by the robots is known, the two local maps are fused according to R and t; if the initial relative poses of the robots at the time of map building are unknown, the rigid transformation R and t between the two local grid maps is first computed, and the two local grid maps are then fused according to R and t; wherein the rigid transformation R and t is given by
R = [[cosθ, -sinθ], [sinθ, cosθ]],  t = [t_x, t_y]^T
where θ represents the rotation relationship between the maps and R is the rotation matrix between the two maps; t represents the translation vector between the two maps, and t_x and t_y respectively represent the translation of the two maps after the rotation R;
the pairwise fusion formula is: map_fusion = map_1 + (map_2 × R + t).
9. The method for collaborative mapping of multi-robot compressed communication under communication constraints according to claim 8, wherein, if the initial relative poses of the robots at the time of map building are unknown, the rigid transformation R and t between the local grid maps is obtained in the process of fusing all the local grid maps by the following calculation: detecting ORB feature points in each local grid map; matching the key points of the two local grid maps with a brute-force matcher and finding the key points whose descriptor distances are closest in the two local grid maps to form matching point pairs; and calculating the rigid transformation R, t between the two maps from the matching point pairs.
10. A multi-robot compressed communication collaborative mapping system under communication limitation, comprising:
the local map building module is configured to build local grid maps for the environments of the master robot and the slave robots according to the pose of the master robot and the slave robots respectively;
the compression transmission module is configured to perform compression coding on the local grid map by the plurality of slave robots by adopting a pre-trained compression communication network model, and respectively transmit the compressed and coded local grid map to the master robot;
the global map building module is configured to decode the compressed and encoded local grid map by the host robot through a pre-trained compressed communication network model, and fuse all the local grid maps to obtain a global map;
the compressed communication network model comprises a feature extraction convolutional neural network, a Huffman encoder, a Huffman decoder and a map recovery convolutional neural network; the feature extraction convolutional neural network is used for extracting map features, and the map restoration convolutional neural network is used for restoring the map features into a grid map.
CN202311741478.5A 2023-12-18 2023-12-18 Multi-robot compressed communication collaborative mapping method and system under communication limitation Pending CN117723048A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311741478.5A CN117723048A (en) 2023-12-18 2023-12-18 Multi-robot compressed communication collaborative mapping method and system under communication limitation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311741478.5A CN117723048A (en) 2023-12-18 2023-12-18 Multi-robot compressed communication collaborative mapping method and system under communication limitation

Publications (1)

Publication Number Publication Date
CN117723048A true CN117723048A (en) 2024-03-19

Family

ID=90210226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311741478.5A Pending CN117723048A (en) 2023-12-18 2023-12-18 Multi-robot compressed communication collaborative mapping method and system under communication limitation

Country Status (1)

Country Link
CN (1) CN117723048A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111830985A (en) * 2020-07-24 2020-10-27 中南大学 Multi-robot positioning method, system and centralized communication system
CN113311827A (en) * 2021-05-08 2021-08-27 东南大学 Robot indoor map capable of improving storage efficiency and generation method thereof
WO2021175434A1 (en) * 2020-03-05 2021-09-10 Cambridge Enterprise Limited System and method for predicting a map from an image
CN114626539A (en) * 2020-12-10 2022-06-14 中国科学院深圳先进技术研究院 Distributed SLAM system and learning method thereof
CN114842108A (en) * 2022-04-22 2022-08-02 东南大学 Probability grid map processing method and device and storage device
CN115499808A (en) * 2022-08-15 2022-12-20 上海澳悦智能科技有限公司 Two-dimensional map information sharing method among multiple mobile robots
CN116698014A (en) * 2023-07-03 2023-09-05 中国计量大学 Map fusion and splicing method based on multi-robot laser SLAM and visual SLAM
CN117029817A (en) * 2023-06-28 2023-11-10 江苏科技大学 Two-dimensional grid map fusion method and system

Similar Documents

Publication Publication Date Title
CN110225341B (en) Task-driven code stream structured image coding method
US10462476B1 (en) Devices for compression/decompression, system, chip, and electronic device
CN111652899B (en) Video target segmentation method for space-time component diagram
CN118233636A (en) Video compression using depth generative models
CN111445476B (en) Monocular depth estimation method based on multi-mode unsupervised image content decoupling
CN105430416B (en) A kind of Method of Fingerprint Image Compression based on adaptive sparse domain coding
KR20200018283A (en) Method for training a convolutional recurrent neural network and for semantic segmentation of inputted video using the trained convolutional recurrent neural network
CN115393396B (en) Unmanned aerial vehicle target tracking method based on mask pre-training
CN113139446A (en) End-to-end automatic driving behavior decision method, system and terminal equipment
CN113132727B (en) Scalable machine vision coding method and training method of motion-guided image generation network
CN110827305A (en) Semantic segmentation and visual SLAM tight coupling method oriented to dynamic environment
CN116229394A (en) Automatic driving image recognition method, device and recognition equipment
Kim et al. Multi-task learning with future states for vision-based autonomous driving
CN116600119B (en) Video encoding method, video decoding method, video encoding device, video decoding device, computer equipment and storage medium
CN117501696A (en) Parallel context modeling using information shared between partitions
CN117723048A (en) Multi-robot compressed communication collaborative mapping method and system under communication limitation
Khan et al. Latent space reinforcement learning for steering angle prediction
CN116630369A (en) Unmanned aerial vehicle target tracking method based on space-time memory network
WO2022100140A1 (en) Compression encoding method and apparatus, and decompression method and apparatus
CN115035173A (en) Monocular depth estimation method and system based on interframe correlation
CN115131414A (en) Unmanned aerial vehicle image alignment method based on deep learning, electronic equipment and storage medium
CN118202389A (en) Point cloud compression probability prediction method based on self-adaptive deep learning
CN113284042A (en) Multi-path parallel image content feature optimization style migration method and system
CN113191943B (en) Multi-path parallel image content characteristic separation style migration method and system
Srinivas et al. Faster depth estimation for situational awareness on urban streets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination