CN112927256A - Boundary fusion method and device for partitioned area and mobile robot - Google Patents

Boundary fusion method and device for partitioned area and mobile robot Download PDF

Info

Publication number
CN112927256A
Authority
CN
China
Prior art keywords
boundary
map data
confidence
fusion
end point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110280256.2A
Other languages
Chinese (zh)
Inventor
楼力政
朱建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ezviz Software Co Ltd
Original Assignee
Hangzhou Ezviz Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ezviz Software Co Ltd filed Critical Hangzhou Ezviz Software Co Ltd
Priority to CN202110280256.2A priority Critical patent/CN112927256A/en
Publication of CN112927256A publication Critical patent/CN112927256A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20128 Atlas-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The application discloses a boundary fusion method for divided regions. The method matches a first boundary, obtained by region division based on first map data, with a second boundary, obtained by region division based on second map data, where a dynamic-update difference exists between the first map data and the second map data. For a successfully matched first boundary and second boundary, the position information of the end points of the first boundary and of the corresponding end points of the second boundary is fused according to the first boundary confidence of the first boundary and the second boundary confidence of the second boundary, and the boundary determined by the fused end-point positions is taken as the fusion boundary. The method and device give the real-time dynamic region division results continuous consistency and stability.

Description

Boundary fusion method and device for partitioned area and mobile robot
Technical Field
The invention relates to the technical field of machine vision, and in particular to a boundary fusion method for segmented regions.
Background
Currently, map division in the field of mobile robots falls mainly into two modes: division based on the map graph, and division according to local features; the map used is generally two-dimensional. Graph-based division is generally applied to a two-dimensional grid map or a two-dimensional topological map; for a map constructed from three-dimensional point clouds, a two-dimensional projection is usually required first, taking a plane that is easy to divide as the reference. Feature-based division mostly associates single-frame data features with representative scenes through training, or learns the features at region-switching positions, so as to distinguish different scene regions.
The traditional graph segmentation method can be used for region division of a two-dimensional map, object segmentation of a two-dimensional picture or object segmentation in a three-dimensional scene. These graph partitioning methods generally divide the region or object therein based on a single sample of data. As an example, in the area division of a two-dimensional map, a global two-dimensional grid map is generally used, and the area division is performed on the map.
A mobile robot platform often faces an unknown environment, i.e. there is no global map; the collected information, including maps and pictures, is then dynamically updated in real time. In this situation, a conventional graph partitioning method produces results that differ greatly each time the dynamically updated data are partitioned and lack consistency across consecutive states, so the real-time partitioning result changes substantially as the state changes, which strongly affects the mobile robot's strategy selection.
Disclosure of Invention
The invention provides a boundary fusion method of a partitioned area, which aims to improve the consistency of the partitioned area based on dynamically real-time updated data.
The invention provides a boundary fusion method for segmented regions, which comprises,
matching a first boundary obtained by performing area segmentation based on first map data with a second boundary obtained by performing area segmentation based on second map data, wherein a difference of dynamic update exists between the first map data and the second map data;
for the first boundary and the second boundary that match successfully,
fusing the end points of the first boundary and the position information of the end points corresponding to the second boundary according to the first boundary confidence of the first boundary and the second boundary confidence of the second boundary to obtain the position information after the end points are fused,
and taking the boundary determined by the position information after the end points are fused as a fused boundary.
Preferably, the fusing, according to the first boundary confidence of the first boundary and the second boundary confidence of the second boundary, the position information of the end point of the first boundary and the end point corresponding to the second boundary to obtain the position information after end point fusion, includes:
respectively carrying out weighted fusion on the pixel coordinates of the end points of the first boundary and the corresponding end points of the second boundary by adopting the first boundary confidence coefficient and the second boundary confidence coefficient to obtain the pixel coordinates after end point fusion;
the boundary confidence coefficient is the average value of the confidence coefficients of two end points on the boundary; wherein, the endpoint confidence is: the ratio of the number of consecutive barrier pixels adjacent to the end point to the total number of pixels in the end point neighborhood within the end point neighborhood.
Preferably, the weighting the pixel coordinates of the end point of the first boundary and the end point corresponding to the second boundary respectively by using the first confidence degree and the second confidence degree to obtain the pixel coordinates after end point fusion, includes:
taking the ratio of the first boundary confidence coefficient to the sum of the first boundary confidence coefficient and the second boundary confidence coefficient as a first fusion weight,
taking the ratio of the second boundary confidence coefficient to the sum of the first boundary confidence coefficient and the second boundary confidence coefficient as a second fusion weight,
multiplying the first fusion weight by the pixel coordinate of the first endpoint on the first boundary to obtain a first result,
multiplying the second fusion weight by the pixel coordinate of the first end point corresponding to the first end point and located on the second boundary to obtain a second result,
adding the first result and the second result to obtain a pixel coordinate after the first endpoint is fused;
multiplying the first fusion weight by the pixel coordinate of the second endpoint on the first boundary to obtain a third result,
multiplying the second fusion weight by the pixel coordinate of the second endpoint located on the second boundary and corresponding to the second endpoint to obtain a fourth result,
and adding the third result and the fourth result to obtain the pixel coordinate after the second endpoint is fused.
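As an illustration of this weighted averaging, the following minimal Python sketch (function and variable names are ours, not the patent's) computes the first and second fusion weights from the two boundary confidences and applies them to the end-point pixel coordinates:

    import numpy as np

    def fuse_endpoints(p1_first, p2_first, p1_second, p2_second, conf_first, conf_second):
        # p*_first / p*_second: (x, y) pixel coordinates of corresponding end points;
        # conf_first / conf_second: boundary confidences of the first and second boundary.
        total = conf_first + conf_second
        w_first = conf_first / total        # first fusion weight
        w_second = conf_second / total      # second fusion weight
        p1_fused = w_first * np.asarray(p1_first, float) + w_second * np.asarray(p1_second, float)
        p2_fused = w_first * np.asarray(p2_first, float) + w_second * np.asarray(p2_second, float)
        return p1_fused, p2_fused

    # Example: the more confident boundary pulls the fused end points towards itself.
    print(fuse_endpoints((10, 20), (10, 40), (12, 20), (12, 42), 0.8, 0.4))

With confidences 0.8 and 0.4, the fused end points lie one third of the way from the first boundary towards the second.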
Preferably, the matching a first boundary obtained by performing area segmentation based on the first map data with a second boundary obtained by performing area segmentation based on the second map data includes:
for each first boundary,
if the distance between the centroid of the second boundary and the centroid of the first boundary and/or the distance between two endpoints of the second boundary and two corresponding endpoints of the first boundary is within a set distance threshold, the second boundary and the first boundary are successfully matched;
the method further comprises the step of enabling the user to select the target,
for a first boundary for which matching is unsuccessful, reducing a first boundary confidence for the first boundary,
for a second boundary which is not successfully matched, taking the second boundary as a fusion boundary, and keeping the confidence coefficient of the second boundary unchanged;
and for each fused boundary fused by the first boundary and the second boundary which are successfully matched, respectively carrying out weighted fusion on the first boundary confidence coefficient and the second boundary confidence coefficient for fusion according to the set weight to obtain the boundary confidence coefficient of the fused boundary.
Preferably, the reducing the first boundary confidence of the first boundary includes subtracting a set constant from the first boundary confidence of the first boundary,
the weighted fusion of the first boundary confidence coefficient and the second boundary confidence coefficient used for fusion according to the set weight respectively comprises the following steps:
weighting the first boundary confidence with a set first weight,
weighting the second boundary confidence by a set second weight,
adding the obtained weighting results;
the method further comprises the following steps:
and removing the fusion boundary of which the boundary confidence coefficient is smaller than the set confidence coefficient threshold value in the fusion boundary fused by the first boundary and the second boundary which are successfully matched.
Preferably, there is a dynamically updated difference between the first map data and the second map data, including,
when the difference between the first map data and the second map data is larger than a set difference threshold value, determining that a dynamically updated difference exists between the first map data and the second map data;
the method further comprises,
triggering area segmentation based on first map data when a difference between the first map data and the second map data is greater than a difference threshold;
the first map data is currently acquired map data, and the second map data is map data acquired last time.
Preferably, the method further comprises,
performing probability rasterization on the first map to obtain a probability grid map, wherein grid values represent the probability of obstacles existing in the grid,
the boundary confidence coefficient is the average value of the confidence coefficients of two end points on the boundary; wherein, the endpoint confidence is determined as follows:
counting the number of continuous barrier grids adjacent to the end point in the end point neighborhood range, wherein the barrier grids are grids with grid values larger than a set probability threshold value;
and taking the ratio of the counted grid number to the total number of grids in the range of the end point neighborhood as the end point confidence coefficient.
Preferably, before matching the first boundary obtained by area division based on the first map data with the second boundary obtained by area division based on the second map data, the method further comprises,
loading the first partition result edited by the user,
extracting the boundary edited by the user in the first partition result, marking, adding the boundary edited by the user into the user editing boundary set,
when the first map data is updated, triggering area segmentation based on the first map data to obtain a first boundary, adding the first boundary to a first boundary set, and calculating the first boundary confidence of each first boundary in the first boundary set;
locking the boundaries edited by the user in the first set of boundaries according to the label,
determining whether a second set of boundaries exists that includes a second boundary,
if yes, executing the step of matching each first boundary obtained by carrying out region division based on the first map data with a second boundary obtained by carrying out region division based on the second map data, and fusing the first boundaries except for the boundaries edited by the locked user in the first boundary set with the second boundaries in the second boundary set;
otherwise, merging the user editing boundary set with the first boundary set to serve as a second boundary set.
The present application further provides a device for merging boundaries of partitioned areas, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the steps of any one of the above methods for merging boundaries of partitioned areas.
The present application further provides a mobile robot, which includes a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the steps of any one of the above-mentioned boundary fusion methods for segmented regions.
According to the boundary fusion method for divided regions provided by the invention, the first boundary and the second boundary, obtained by dividing two sets of map data between which a dynamic-update difference exists, are fused according to the position information of their corresponding end points to obtain the fused boundary. This solves the problem that a mobile robot's division results on map data are inconsistent while it is in motion, gives the real-time dynamic region division result continuous consistency and stability, avoids changes in the division result caused by errors and noise in the real-time updated data, and makes the boundaries more robust.
Drawings
Fig. 1 is a schematic diagram of a map boundary divided by a conventional graph division method in a moving process of a mobile robot.
Fig. 2 is a schematic flowchart of a process of performing boundary fusion based on a graph segmentation result according to an embodiment of the present application.
FIG. 3 is a diagram illustrating a boundary fusion method for a partition.
FIG. 4 is a schematic diagram of a voronoi diagram-based partitioning algorithm.
Fig. 5a is a schematic diagram of a map boundary divided based on the boundary fusion method of the present application.
FIG. 5b is a diagram illustrating endpoint confidence calculation.
FIG. 6 is a schematic diagram of the boundaries required in the fusion process and the relationship between the confidence levels of the boundaries.
Fig. 7 is a schematic flow chart illustrating a process of fusing a partition result edited by a user and a partition result divided by a graph division algorithm according to the present application.
FIG. 8 is a diagram illustrating a user editing a partition result and a partition result divided by a graph partitioning algorithm.
Fig. 9 is a schematic view of a boundary fusion device according to an embodiment of the present application.
Fig. 10 is another schematic view of a boundary fusion device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical means and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings.
The applicant finds that, with the traditional graph segmentation method, a marked boundary may suddenly disappear while the mobile robot is moving. As an example, referring to fig. 1, fig. 1 is a schematic diagram of a map boundary divided by a conventional graph division method during the motion of a mobile robot, where the circle is the mobile robot. In the left figure, when the mobile robot is at the position shown, the divided region has a division boundary; in the right figure, the division boundary suddenly disappears as the mobile robot moves to the position shown. This is because the map data is updated in real time and changes with the movement of the mobile robot; in particular, when the map data is incomplete, the information itself is sparse, so a local change of the map greatly affects the estimated distribution of the entire map and produces a large difference in the resulting partitions.
The application provides a confidence-based segmentation and fusion method for the real-time dynamic data acquisition of a mobile robot. In this method, during the data acquisition process of the mobile robot, the graph of the two-dimensional map constructed in real time is segmented each time, its confidence is calculated, and the current segmentation result is fused with the previous segmentation result through the confidence.
Referring to fig. 2, fig. 2 is a schematic flowchart of performing boundary fusion based on a graph segmentation result according to an embodiment of the present disclosure. The method comprises the following steps.
step 201, matching each first boundary in the first boundary set with a second boundary in the second boundary set,
the first boundary set is a set of first boundaries obtained by performing region segmentation based on first map data;
the second boundary set is a set of second boundaries obtained by performing region segmentation based on second map data;
a dynamically updated difference exists between the first map data and the second map data;
step 202, for a first boundary and a second boundary which are successfully matched, according to a first boundary confidence of the first boundary and a second boundary confidence of the second boundary, fusing the end points of the first boundary and the position information of the end points corresponding to the second boundary to obtain position information after end point fusion,
step 203, using the boundary determined by the position information after the end points are fused as a fusion boundary; that is, the line segment determined by the fused position where the end point is located is the fused boundary.
The method and device exploit the fact that the map data is updated in real time: the segmentation results of map data with a real-time update difference are fused, so the segmentation results from consecutive states are combined and more robust and accurate boundary information is obtained. This counteracts the inevitable fluctuation of the data caused by errors and noise in the mobile robot's real-time updates, handles the corresponding changes of the constructed map, and enhances the robustness and consistency of the segmentation results. Fusion based on boundary confidence can generate segmentation results with higher robustness and temporal consistency in a real-time scene, better meeting the mobile robot's needs for strategy and planning.
For ease of understanding, the following description takes as an example the case where the first map data is the currently collected real-time data and the second map data is the previously collected data.
Referring to fig. 3, fig. 3 is a schematic diagram of a boundary fusion method for a segmentation region. For the currently acquired two-dimensional map data (first map data) updated in real time, the boundary fusion method of the divided regions includes,
step 301, judging whether the current map data is updated, if so, triggering and calling a graph segmentation algorithm to perform graph segmentation on the current two-dimensional map to obtain segmentation areas and boundaries among the segmentation areas.
Preferably, it is judged whether the difference between the currently acquired two-dimensional map data (first map) and the two-dimensional map data acquired last time (second map) is larger than a set difference threshold. If so, the currently acquired map data is considered updated, and a graph segmentation algorithm is triggered to segment the current two-dimensional map, yielding segmented areas and the first boundaries between them; otherwise, the currently acquired map data is considered not updated and the graph segmentation algorithm is not triggered. The difference may be a difference in the amount of data, or a difference in the data values themselves.
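A minimal sketch of this check, assuming both maps are stored as equally sized numpy arrays; the cell-count criterion and threshold value are illustrative only:

    import numpy as np

    def map_updated(first_map, second_map, diff_threshold=50):
        # The difference may be in the amount of data (map size changed) or in
        # the data values themselves (cells changed); both are checked here.
        if first_map.shape != second_map.shape:
            return True
        changed_cells = np.count_nonzero(first_map != second_map)
        return changed_cells > diff_threshold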
In this step, the graph partitioning algorithm may adopt a voronoi diagram-based partitioning algorithm that partitions the map into a plurality of regions by extracting a voronoi diagram of the binarized map, selecting key points on the voronoi diagram, and passing through the connecting lines of the key points and the adjacent obstacles.
Referring to fig. 4, fig. 4 is a schematic diagram of a voronoi diagram-based partitioning algorithm. The method comprises the steps of obtaining a graph, a graph and a graph, wherein the graph a represents an input map, the graph b represents an extracted voronoi graph, the graph c represents a key point selected on the voronoi graph, the key point is a point, with the minimum value, of the distance between a part of the voronoi graph and an obstacle, the distance is an Euclidean distance, and the graph d represents an area divided by a connecting line between the key point and the adjacent obstacle.
Step 302, performing probability rasterization based on the current two-dimensional map data to obtain a probability grid map.
Probability rasterization divides the map into a × a grid cells (minimal squares); the side length of a cell depends on the map resolution and is usually 1 cm to 5 cm. Each cell value lies in 0-1.0 and represents the probability that the cell contains an obstacle; the higher the value, the more likely the cell is an obstacle.
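A small data-structure sketch of such a grid (the class name and defaults are illustrative): one float per cell, 0.5 meaning unknown, together with the map resolution:

    import numpy as np

    class ProbabilityGrid:
        def __init__(self, rows, cols, resolution_m=0.05):
            self.resolution = resolution_m              # cell side length, e.g. 5 cm
            self.data = np.full((rows, cols), 0.5)      # 0.5 = unknown space

        def is_obstacle(self, row, col, prob_threshold=0.5):
            # Cells whose value reaches the probability threshold count as obstacles.
            return self.data[row, col] >= prob_threshold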
Step 303, determining a boundary confidence based on the probability grid map.
Taking an indoor map as an example: since areas are usually connected with each other through passages, doorways, and the like, and the two end points of a boundary line should naturally lie on the edges of such a passage, whether an end point is likely to be located on a passage or doorway can be judged from the distribution of obstacles in the neighborhood of the end point. The neighborhood range may be set according to design requirements and is preferably determined from the resolution of the probability grid map. In the left diagram of fig. 5a, the dotted circle represents the neighborhood range of an end point on the boundary; as an example, the radius of the dotted circle is 0.2 m, which is converted into a number of pixels according to the map resolution: when the map resolution is 0.05 m per pixel, the neighborhood range of the end point is 4 pixels.
In the probability grid, since the probability of an unknown region is 0.5, a grid having a grid value of 0.5 or more can be used as an obstacle.
The number of continuous barrier grids adjacent to the end point in the neighborhood of the end point is counted as a first parameter for considering the confidence of the end point, that is, the number of grids with grid values larger than a set probability threshold is counted. And carrying out normalization processing on the first parameter to obtain the end point confidence. Optionally, the endpoint confidence is obtained by dividing the first parameter by the total number of grids in the range of the endpoint neighborhood. Expressed mathematically as:
e=y1/y2
wherein e is an endpoint confidence; y1 is the number of consecutive barrier grids adjacent to the endpoint in the neighborhood of the endpoint, i.e., the first parameter; y2 is the number of grids in the neighborhood.
For example, in fig. 5b, cell 1 represents an end point of the boundary, the black cells represent obstacles, and the dashed box marks the obstacles counted within the neighborhood. The black cell 2 at the lower left is not counted because it is not adjacent to the end point and does not belong to a continuous run of obstacle cells adjacent to the end point. The first parameter for this end point is therefore 6, and if the neighborhood range is 6 × 6, the end point confidence is 6/(6 × 6).
Since the probability grid map can be built at a desired resolution, the number of pixels to process can be reduced; calculating the end point confidence on the probability grid map therefore helps to increase processing speed. It should be understood that the end point confidence may also be computed directly from the pixel information of the image map. In that case, the end point confidence is the ratio, within the end point neighborhood, of the number of consecutive obstacle pixels adjacent to the end point to the total number of pixels in the neighborhood.
A boundary has two end points; the average of their confidences is recorded as the confidence of the boundary.
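A minimal sketch of this computation on a probability grid; treating "continuous obstacle cells adjacent to the end point" as the connected obstacle components that touch the end-point cell is our own reading, and the neighborhood handling is simplified:

    import numpy as np
    from scipy import ndimage

    def endpoint_confidence(prob_grid, endpoint, radius_cells=4, prob_threshold=0.5):
        r, c = endpoint
        r0, r1 = max(r - radius_cells, 0), min(r + radius_cells + 1, prob_grid.shape[0])
        c0, c1 = max(c - radius_cells, 0), min(c + radius_cells + 1, prob_grid.shape[1])
        window = prob_grid[r0:r1, c0:c1] >= prob_threshold   # obstacle cells in the neighborhood

        # Keep only obstacle cells forming a continuous run adjacent to the end point:
        # label connected obstacle components, keep those touching the end-point cell
        # or its immediate neighbors.
        labels, _ = ndimage.label(window)
        er, ec = r - r0, c - c0
        touching = np.unique(labels[max(er - 1, 0):er + 2, max(ec - 1, 0):ec + 2])
        touching = touching[touching > 0]
        adjacent_obstacles = int(np.isin(labels, touching).sum())

        return adjacent_obstacles / window.size              # first parameter / total cells

    def boundary_confidence(prob_grid, endpoint_a, endpoint_b, **kw):
        # Boundary confidence = average of the two end-point confidences.
        return 0.5 * (endpoint_confidence(prob_grid, endpoint_a, **kw)
                      + endpoint_confidence(prob_grid, endpoint_b, **kw))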
Step 304, determining whether the second boundary set obtained by the previous segmentation and the boundary confidence of each second boundary in that set exist; if so, executing step 305; otherwise, taking the current first boundary set as the second boundary set and the current first boundary confidences as the second boundary confidences, then executing step 307.
and 305, fusing the boundary according to the boundary confidence coefficient, and updating the boundary confidence coefficient of the fused boundary.
Let the set of first boundaries obtained by segmenting the current map data be the first boundary set B, where b_i is any first boundary in the set; the corresponding set of boundary confidences is C, and c_i is the boundary confidence of b_i. Let the second boundary set be B_L, with boundary confidence set C_L. The final result is the fused boundary set B_new, the set of fused boundaries, with boundary confidence set C_new.
For each first boundary b_i in the first boundary set B, a corresponding second boundary is searched for in the second boundary set B_L. The search condition is that the distance between the centroids of the two boundaries and/or between their corresponding end points is within a set distance threshold. If a corresponding boundary is found, the correspondence between the two boundaries is recorded, denoted as first boundary b_n corresponding to second boundary b_n^L. Thus, the first boundary set B and the second boundary set B_L may each contain boundaries that were matched successfully and boundaries that were not.
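A sketch of this matching step; the dictionary representation of a boundary and the threshold value are assumptions made for illustration:

    import numpy as np

    def centroid(b):
        return 0.5 * (np.asarray(b["p1"], float) + np.asarray(b["p2"], float))

    def match_boundaries(first_set, second_set, dist_threshold=10.0):
        # Pair each first boundary with at most one second boundary whose centroid
        # and corresponding end points lie within the set distance threshold.
        matches, used = [], set()
        for b1 in first_set:
            for j, b2 in enumerate(second_set):
                if j in used:
                    continue
                d_centroid = np.linalg.norm(centroid(b1) - centroid(b2))
                d_p1 = np.linalg.norm(np.asarray(b1["p1"], float) - np.asarray(b2["p1"], float))
                d_p2 = np.linalg.norm(np.asarray(b1["p2"], float) - np.asarray(b2["p2"], float))
                if d_centroid < dist_threshold and max(d_p1, d_p2) < dist_threshold:
                    matches.append((b1, b2))
                    used.add(j)
                    break
        return matches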
Boundary fusion is then performed on the successfully matched boundaries, i.e. each successfully matched pair of a first boundary and a second boundary. A specific fusion mode is to determine fusion weights from the boundary confidences, calculate the end point positions of the fused boundary using these weights, and take the boundary determined by the calculated end point positions as the fusion boundary.
Let the two end points of the first boundary b_n be p_n,1 and p_n,2, corresponding respectively to the two end points p_n,1^L and p_n,2^L of the second boundary b_n^L. The two end points p_n,1^new and p_n,2^new of the fused boundary b_n^new are then:

p_n,1^new = λ_n · p_n,1 + λ_n^L · p_n,1^L
p_n,2^new = λ_n · p_n,2 + λ_n^L · p_n,2^L
λ_n = c_n / (c_n + c_n^L)
λ_n^L = c_n^L / (c_n + c_n^L)

where λ_n is the first fusion weight, λ_n^L is the second fusion weight, c_n is the first boundary confidence of the first boundary b_n, and c_n^L is the second boundary confidence of the second boundary b_n^L. In other words, fusion weights are obtained from the first boundary confidence and the second boundary confidence and are used to perform weighted fusion of the pixel coordinates of the end points of the first boundary with those of the corresponding end points of the second boundary.
Expanded to the pixel coordinates of the end points:
multiplying the first fusion weight by the pixel coordinate of the first endpoint on the first boundary to obtain a first result,
multiplying the second fusion weight by the pixel coordinate of the first end point corresponding to the first end point and located on the second boundary to obtain a second result,
adding the first result and the second result to obtain a pixel coordinate after the first endpoint is fused;
multiplying the first fusion weight by the pixel coordinate of the second endpoint on the first boundary to obtain a third result,
multiplying the second fusion weight by the pixel coordinate of the second endpoint located on the second boundary and corresponding to the second endpoint to obtain a fourth result,
and adding the third result and the fourth result to obtain the pixel coordinate after the second endpoint is fused.
This can be expressed by the following formulas:

x_n,1^new = λ_n · x_n,1 + λ_n^L · x_n,1^L
y_n,1^new = λ_n · y_n,1 + λ_n^L · y_n,1^L
x_n,2^new = λ_n · x_n,2 + λ_n^L · x_n,2^L
y_n,2^new = λ_n · y_n,2 + λ_n^L · y_n,2^L
where the end point p_n,1 has pixel coordinates (x_n,1, y_n,1), the end point p_n,2 has pixel coordinates (x_n,2, y_n,2), the end point p_n,1^L has pixel coordinates (x_n,1^L, y_n,1^L), the end point p_n,2^L has pixel coordinates (x_n,2^L, y_n,2^L), the end point p_n,1^new has pixel coordinates (x_n,1^new, y_n,1^new), and the end point p_n,2^new has pixel coordinates (x_n,2^new, y_n,2^new).
The boundary confidence c_n^new of the fused boundary b_n^new is determined as follows:
weighting the first boundary confidence with a set first weight,
weighting the second boundary confidence by a set second weight,
adding the obtained weighting results;
can be expressed by the following mathematical formula:
Figure BDA00029779677100000919
wherein λ is1、λ2The first weight and the second weight, which are respectively set according to empirical values, are the sum of 1, and the magnitude of the two weights can indicate whether the boundary confidence coefficient of the current segmentation is more credible or the boundary confidence coefficient of the last segmentation. When the confidence degrees are consistent, the first weight and the second weight are both 0.5.
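A one-line sketch of this confidence fusion (λ_1 = λ_2 = 0.5 when both segmentations are considered equally credible):

    def fuse_confidence(conf_first, conf_second, w_first=0.5, w_second=0.5):
        # Weighted sum of the two boundary confidences; w_first + w_second = 1.
        return w_first * conf_first + w_second * conf_second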
For a first boundary b_k in the first boundary set B that is not successfully matched, i.e. for which no corresponding second boundary can be found in the second boundary set, the first boundary confidence of b_k is lowered, specifically:

c_k' = c_k - c_const

where c_k' is the lowered first boundary confidence, c_k is the original first boundary confidence of the first boundary b_k, and c_const is a set constant.
For a second boundary b^L in the second boundary set B_L that is not successfully matched, the second boundary is added directly to the fused boundary set B_new, i.e. the second boundary is taken as a fused boundary and its confidence is kept unchanged. Expressed as formulas:

b^new = b^L
c^new = c^L
Step 306, for the fused boundaries in the fused boundary set that were obtained by fusing a successfully matched first boundary and second boundary, deleting those whose boundary confidence is smaller than the set confidence threshold, for example deleting boundaries whose confidence equals 0, so that they are no longer used as boundaries.
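Putting the pieces together, one pass of steps 304 to 306 could look like the sketch below, reusing the helpers sketched earlier (match_boundaries, fuse_endpoints, fuse_confidence). Keeping unmatched first boundaries in the fused set with a lowered confidence is our reading of the text, and the penalty constant and confidence threshold are illustrative:

    def fuse_boundary_sets(first_set, second_set, c_const=0.1, conf_threshold=0.2):
        # Boundaries are dicts with 'p1', 'p2' (end-point pixel coordinates) and 'conf'.
        matches = match_boundaries(first_set, second_set)
        matched_first = {id(b1) for b1, _ in matches}
        matched_second = {id(b2) for _, b2 in matches}
        fused = []

        # Successfully matched pairs: fuse end points and confidences, prune weak results.
        for b1, b2 in matches:
            p1, p2 = fuse_endpoints(b1["p1"], b1["p2"], b2["p1"], b2["p2"],
                                    b1["conf"], b2["conf"])
            conf = fuse_confidence(b1["conf"], b2["conf"])
            if conf >= conf_threshold:                 # step 306
                fused.append({"p1": tuple(p1), "p2": tuple(p2), "conf": conf})

        # Unmatched first boundaries: keep them, but lower their confidence.
        for b1 in first_set:
            if id(b1) not in matched_first:
                fused.append({**b1, "conf": max(b1["conf"] - c_const, 0.0)})

        # Unmatched second boundaries: carry over unchanged as fused boundaries.
        for b2 in second_set:
            if id(b2) not in matched_second:
                fused.append(dict(b2))

        return fused   # becomes the second boundary set for the next map update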
Step 307, judging whether the current map data acquisition is finished; if so, ending the process, otherwise returning to step 301.
Referring to fig. 6, fig. 6 is a schematic diagram of the boundaries involved in the fusion process and the relationships between their confidences. It shows the relationships between the first boundary set B, the second boundary set B_L, the fused boundary set B_new, the first boundary confidence set C, the second boundary confidence set C_L, the boundary confidence set C_new of the fused boundaries, a successfully matched first boundary b_n, the corresponding successfully matched second boundary b_n^L, an unmatched first boundary b_k, and an unmatched second boundary b^L.
In the boundary fusion process, the fusion depends on the successfully matched boundaries and their boundary confidences, not on how the map data was collected. The boundary fusion method of the application can therefore be used not only to fuse graph segmentation results from map data collected by the same type of sensor, but also to fuse graph segmentation results from map data collected by multiple sensors; for example, the region division boundary of a grid map constructed by radar can be fused with a boundary identified and divided from a door frame in vision. When used for boundary fusion across multiple sensors, only the boundary confidences need to be normalized.
According to the boundary fusion method of this embodiment, the first boundary obtained by dividing the first map and the second boundary obtained by dividing the second map are fused according to the boundary confidences, so the real-time dynamic region division result has continuous consistency and stability, and the mobile robot does not suffer sudden changes or instability in strategies such as judging which region it is in or planning a path within a region caused by a boundary that is sometimes present and sometimes absent. For example, in fig. 5a, after the fusion method is adopted, the boundary does not change even when the position of the mobile robot changes; compared with the boundary that suddenly disappears in fig. 1, the boundary information in fig. 5a is more stable. In addition, during the continuous movement of the mobile robot, the current segmentation boundaries are fused whenever the difference between the first map data and the second map data is larger than the set threshold, so the segmentation results of consecutive states are fused, and the region segmentation result is closer to the real layout and more reliable.
Furthermore, by applying the boundary fusion method, a partition result edited by the user and a partition result produced by the graph partitioning algorithm can be fused. With a conventional algorithm, for example, the partitioning result remains unchanged after user intervention; when the scene later changes, the algorithm cannot react and the user has to divide the map again. With the boundary fusion method, the scene can be further divided on the basis of the user's edits, so scene changes are handled better and the method is more intelligent.
Referring to fig. 7, fig. 7 is a schematic flow chart illustrating a process of fusing a partition result edited by a user and a partition result produced by the graph partitioning algorithm according to the present application. The fusion method comprises the following steps.
step 701, loading the first partition result edited by the user,
Step 702, extracting the boundaries edited by the user in the first partition result, marking them, and adding the user-edited boundaries to the user editing boundary set, denoted B_u, so as to distinguish the user-edited boundaries from the boundary areas obtained by the graph segmentation algorithm.
As an example, the flag of a user-edited boundary may be set to "locked" and saved as an attribute.
Step 703, judging whether the current map data (first map data) is updated, if so, executing step 704, otherwise, returning to step 703;
Step 704, triggering the graph segmentation algorithm to perform region segmentation based on the current map data, obtaining the current boundary set B_a (the first boundary set), and calculating the boundary confidence of each current boundary (first boundary) to obtain the boundary confidence set C_a of the current boundaries.
Step 705, according to the set attribute mark, locking the user-edited boundaries in the current boundary set B_a so that they are excluded from fusion.
Step 706, determining whether a second boundary set B_L and its boundary confidence set C_L exist.
If so, fusing, according to step 305 or step 201, the boundaries in the current boundary set B_a other than the locked user-edited boundaries with the boundaries in the second boundary set B_L, so that the user-edited boundaries are not fused.
Otherwise, merging the user editing boundary set B_u with the current boundary set B_a, i.e. adding the user-edited boundaries to the current boundary set B_a, and taking the result as the second boundary set B_L.
Step 707, determining whether the map data collection is finished; if so, ending the process, otherwise returning to step 703.
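A sketch of the locking logic of steps 702 to 706, reusing fuse_boundary_sets from above; the "locked" attribute and the set handling are our own illustration:

    def fuse_with_user_edits(current_set, previous_set, user_edited):
        # current_set: boundaries from the latest segmentation (B_a);
        # previous_set: second boundary set B_L from the previous update, or None;
        # user_edited: boundaries extracted from the user-edited partition result (B_u).
        if previous_set is None:
            # First update after loading the user edit: merge B_u into B_a and
            # use the result as the second boundary set B_L.
            return current_set + [dict(b, locked=True) for b in user_edited]

        locked = [b for b in current_set if b.get("locked")]
        unlocked = [b for b in current_set if not b.get("locked")]
        # Only unlocked boundaries take part in confidence-based fusion;
        # user-edited (locked) boundaries are passed through unchanged.
        return locked + fuse_boundary_sets(unlocked, previous_set)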
Referring to fig. 8, fig. 8 is a schematic diagram illustrating a user-edited partition result and a partition result produced by the graph partitioning algorithm. In the figure, a is the original map used by the user during editing, b is the user editing result, c is the map after the scene has changed, and d is the result of real-time dynamic segmentation based on the user editing result. Diagrams c and d show that after the map is updated, the user-edited part is left unchanged while the newly updated part of the map is divided. Since the updated map can be divided while keeping the user's editing result, the user does not need to re-divide the updated map. In addition, the labels of the edited areas, such as kitchen, living room and bedroom, remain usable in the new segmentation result and do not need to be marked again by the user. Therefore, by combining the user's partition editing result, the fusion method can update the scene segmentation on the basis of the user's edits, adapts better to scene changes, and spares the user the wasted effort of repeatedly re-editing an updated scene.
Referring to fig. 9, fig. 9 is a schematic view of a boundary fusion device according to an embodiment of the present application. The device comprises,
a matching module for matching each first boundary in the first set of boundaries with a second boundary in the second set of boundaries,
the boundary fusion module is used for fusing the end points of the first boundary and the position information of the corresponding end points of the second boundary according to the first boundary confidence coefficient of the first boundary and the second boundary confidence coefficient of the second boundary for the first boundary and the second boundary which are successfully matched to obtain the position information after the end points are fused, and using the boundary determined by the position information after the end points are fused as a fusion boundary;
preferably, the device further comprises a control unit,
the region segmentation module is used for triggering graph segmentation to obtain a first boundary when the first map data is updated;
a confidence calculation module for calculating a boundary confidence,
the confidence calculation module includes,
an endpoint confidence computation submodule for computing an endpoint confidence for the endpoints on the boundary,
and the boundary confidence coefficient calculation submodule is used for calculating the boundary confidence coefficient according to the endpoint confidence coefficient.
Referring to fig. 10, fig. 10 is another schematic view of a boundary fusion device according to an embodiment of the present disclosure. Comprising a memory storing a computer program and a processor configured to perform the steps of the method for boundary fusion of segmented regions.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The embodiment of the invention also provides a computer readable storage medium, wherein a computer program is stored in the storage medium, and when being executed by a processor, the computer program realizes the steps of the boundary fusion method of the divided areas.
For the device/network side device/storage medium embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for the relevant points, refer to the partial description of the method embodiment.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method for fusing boundaries of divided regions, the method comprising,
matching a first boundary obtained by performing area segmentation based on first map data with a second boundary obtained by performing area segmentation based on second map data, wherein a difference of dynamic update exists between the first map data and the second map data;
for the first boundary and the second boundary that match successfully,
fusing the end points of the first boundary and the position information of the end points corresponding to the second boundary according to the first boundary confidence of the first boundary and the second boundary confidence of the second boundary to obtain the position information after the end points are fused,
and taking the boundary determined by the position information after the end points are fused as a fused boundary.
2. The method according to claim 1, wherein the fusing the position information of the end point of the first boundary and the end point corresponding to the second boundary according to the first boundary confidence of the first boundary and the second boundary confidence of the second boundary to obtain the position information after end point fusion, includes:
respectively carrying out weighted fusion on the pixel coordinates of the end points of the first boundary and the corresponding end points of the second boundary by adopting the first boundary confidence coefficient and the second boundary confidence coefficient to obtain the pixel coordinates after end point fusion;
the boundary confidence coefficient is the average value of the confidence coefficients of two end points on the boundary; wherein, the endpoint confidence is: the ratio of the number of consecutive barrier pixels adjacent to the end point to the total number of pixels in the end point neighborhood within the end point neighborhood.
3. The method of claim 2, wherein the weighting the pixel coordinates of the end point of the first boundary and the corresponding end point of the second boundary respectively by using the first confidence degree and the second confidence degree to obtain the pixel coordinates after end point fusion comprises:
taking the ratio of the first boundary confidence coefficient to the sum of the first boundary confidence coefficient and the second boundary confidence coefficient as a first fusion weight,
taking the ratio of the second boundary confidence coefficient to the sum of the first boundary confidence coefficient and the second boundary confidence coefficient as a second fusion weight,
multiplying the first fusion weight by the pixel coordinate of the first endpoint on the first boundary to obtain a first result,
multiplying the second fusion weight by the pixel coordinate of the first end point corresponding to the first end point and located on the second boundary to obtain a second result,
adding the first result and the second result to obtain a pixel coordinate after the first endpoint is fused;
multiplying the first fusion weight by the pixel coordinate of the second endpoint on the first boundary to obtain a third result,
multiplying the second fusion weight by the pixel coordinate of the second endpoint located on the second boundary and corresponding to the second endpoint to obtain a fourth result,
and adding the third result and the fourth result to obtain the pixel coordinate after the second endpoint is fused.
4. The method of claim 1, wherein matching a first boundary obtained by area segmentation based on first map data with a second boundary obtained by area segmentation based on second map data comprises:
for each first boundary,
if the distance between the centroid of the second boundary and the centroid of the first boundary and/or the distance between two endpoints of the second boundary and two corresponding endpoints of the first boundary is within a set distance threshold, the second boundary and the first boundary are successfully matched;
the method further comprises,
for a first boundary for which matching is unsuccessful, reducing a first boundary confidence for the first boundary,
for a second boundary which is not successfully matched, taking the second boundary as a fusion boundary, and keeping the confidence coefficient of the second boundary unchanged;
and for each fused boundary fused by the first boundary and the second boundary which are successfully matched, respectively carrying out weighted fusion on the first boundary confidence coefficient and the second boundary confidence coefficient for fusion according to the set weight to obtain the boundary confidence coefficient of the fused boundary.
5. The method of claim 4, wherein reducing the first boundary confidence of the first boundary comprises subtracting a set constant from the first boundary confidence of the first boundary,
the weighted fusion of the first boundary confidence coefficient and the second boundary confidence coefficient used for fusion according to the set weight respectively comprises the following steps:
weighting the first boundary confidence with a set first weight,
weighting the second boundary confidence by a set second weight,
adding the obtained weighting results;
the method further comprises the following steps:
and removing the fusion boundary of which the boundary confidence coefficient is smaller than the set confidence coefficient threshold value in the fusion boundary fused by the first boundary and the second boundary which are successfully matched.
6. The method of claim 1, wherein there is a dynamically updated difference between the first map data and the second map data, comprising,
when the difference between the first map data and the second map data is larger than a set difference threshold value, determining that a dynamically updated difference exists between the first map data and the second map data;
the method further comprises,
triggering area segmentation based on first map data when a difference between the first map data and the second map data is greater than a difference threshold;
the first map data is currently acquired map data, and the second map data is map data acquired last time.
7. The method of claim 6, further comprising,
performing probability rasterization on the first map to obtain a probability grid map, wherein grid values represent the probability of obstacles existing in the grid,
the boundary confidence coefficient is the average value of the confidence coefficients of two end points on the boundary; wherein, the endpoint confidence is determined as follows:
counting the number of continuous barrier grids adjacent to the end point in the end point neighborhood range, wherein the barrier grids are grids with grid values larger than a set probability threshold value;
and taking the ratio of the counted grid number to the total number of grids in the range of the end point neighborhood as the end point confidence coefficient.
8. The method of claim 1, wherein prior to matching a first boundary obtained by area segmentation based on the first map data with a second boundary obtained by area segmentation based on the second map data, further comprising,
loading the first partition result edited by the user,
extracting the boundary edited by the user in the first partition result, marking, adding the boundary edited by the user into the user editing boundary set,
when the first map data is updated, triggering area segmentation based on the first map data to obtain a first boundary, adding the first boundary to a first boundary set, and calculating the first boundary confidence of each first boundary in the first boundary set;
locking the boundaries edited by the user in the first set of boundaries according to the label,
determining whether a second set of boundaries exists that includes a second boundary,
if yes, executing the step of matching each first boundary obtained by carrying out region division based on the first map data with a second boundary obtained by carrying out region division based on the second map data, and fusing the first boundaries except for the boundaries edited by the locked user in the first boundary set with the second boundaries in the second boundary set;
otherwise, merging the user editing boundary set with the first boundary set to serve as a second boundary set.
9. A boundary fusion apparatus for partitioned areas, comprising a memory storing a computer program and a processor configured to perform the steps of the boundary fusion method for partitioned areas according to any one of claims 1 to 8.
10. A mobile robot, characterized in that it comprises a memory storing a computer program and a processor configured to perform the steps of the method of boundary fusion of segmented regions according to any of claims 1 to 8.
CN202110280256.2A 2021-03-16 2021-03-16 Boundary fusion method and device for partitioned area and mobile robot Pending CN112927256A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110280256.2A CN112927256A (en) 2021-03-16 2021-03-16 Boundary fusion method and device for partitioned area and mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110280256.2A CN112927256A (en) 2021-03-16 2021-03-16 Boundary fusion method and device for partitioned area and mobile robot

Publications (1)

Publication Number Publication Date
CN112927256A true CN112927256A (en) 2021-06-08

Family

ID=76175329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110280256.2A Pending CN112927256A (en) 2021-03-16 2021-03-16 Boundary fusion method and device for partitioned area and mobile robot

Country Status (1)

Country Link
CN (1) CN112927256A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102564431A (en) * 2011-11-14 2012-07-11 南京理工大学 Multi-sensor-fusion-based unstructured environment understanding method
CN110268354A (en) * 2019-05-09 2019-09-20 珊口(深圳)智能科技有限公司 Update the method and mobile robot of map
CN111932644A (en) * 2019-05-13 2020-11-13 Aptiv技术有限公司 Method and system for fusing occupied maps
US20200363809A1 (en) * 2019-05-13 2020-11-19 Aptiv Technologies Limited Method and system for fusing occupancy maps
CN110796598A (en) * 2019-10-12 2020-02-14 劢微机器人科技(深圳)有限公司 Autonomous mobile robot, map splicing method and device thereof, and readable storage medium
CN111368760A (en) * 2020-03-09 2020-07-03 北京百度网讯科技有限公司 Obstacle detection method and device, electronic equipment and storage medium
CN111709517A (en) * 2020-06-12 2020-09-25 武汉中海庭数据技术有限公司 Redundancy fusion positioning enhancement method and device based on confidence prediction system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023155681A1 (en) * 2022-02-18 2023-08-24 追觅创新科技(苏州)有限公司 Method and device for processing region information, storage medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination