CN113744329A - Automatic region division and robot walking control method, system, equipment and medium - Google Patents


Info

Publication number
CN113744329A
Authority
CN
China
Prior art keywords
target
image
line
point
coordinate information
Prior art date
Legal status
Pending
Application number
CN202010478386.2A
Other languages
Chinese (zh)
Inventor
张希
Current Assignee
Ningbo Fotile Kitchen Ware Co Ltd
Original Assignee
Ningbo Fotile Kitchen Ware Co Ltd
Priority date
Filing date
Publication date
Application filed by Ningbo Fotile Kitchen Ware Co Ltd filed Critical Ningbo Fotile Kitchen Ware Co Ltd
Priority to CN202010478386.2A
Publication of CN113744329A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 5/70
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G06T 7/13 - Edge detection
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20048 - Transform domain processing
    • G06T 2207/20061 - Hough transform

Abstract

The invention discloses a method, a system, equipment and a medium for automatically dividing regions and controlling the walking of a robot, wherein the method for automatically dividing the regions comprises the following steps: acquiring image information of a two-dimensional house type graph of a house; acquiring a line image corresponding to the two-dimensional house type image; acquiring straight line information of a straight line where the obstacle is located in the line image; matching all target end points in the line image and end point coordinate information corresponding to the target end points according to the straight line information and the line image; calculating to obtain the target position of each room door in the house; generating a target image; and dividing different target areas in the house according to the target images. According to the invention, automatic division of different areas in the house environment is realized, the cleaning robot is controlled to clean the different areas in different cleaning modes, manual participation is not required in the whole process, the use experience of a user is improved, and higher use requirements of the user are met.

Description

Automatic region division and robot walking control method, system, equipment and medium
Technical Field
The invention relates to the technical field of robot control, in particular to a method, a system, equipment and a medium for automatic region division and robot walking control.
Background
At present, when a cleaning robot is used to clean a house, in order to select a proper cleaning mode according to a current house scene or complete cleaning of a fixed area, the scene needs to be divided.
The most basic scene division is to divide a house into its individual rooms. Most existing mainstream cleaning robots rely on the user to manually complete the division of rooms or areas, which is inefficient and degrades the user experience.
Disclosure of Invention
The invention aims to overcome the defects in the prior art that a user must manually divide rooms or areas before a cleaning robot can complete room cleaning, which is inefficient and degrades the user experience, and provides a method, a system, equipment and a medium for automatic region division and robot walking control.
The invention solves the technical problems through the following technical scheme:
the invention provides an automatic division method of areas in a house, which comprises the following steps:
acquiring image information corresponding to a two-dimensional house type graph of a house;
acquiring a line image corresponding to the two-dimensional house type graph based on the image information; wherein lines in the line image correspond to different obstructions in the house;
acquiring straight line information of a straight line where the obstacle is located in the line image; the straight line information comprises position coordinate information corresponding to each point on a straight line where the obstacle is located;
matching all target end points in the line image and end point coordinate information corresponding to the target end points according to the straight line information and the line image;
calculating the target position of each room door in the house according to the endpoint coordinate information;
processing the line image based on the target position of the room door to generate a target image;
and dividing different target areas in the house according to the target images.
Preferably, the step of obtaining the line image corresponding to the two-dimensional house type map according to the image information includes:
and carrying out binarization processing on the image information to obtain the line image corresponding to the two-dimensional house type graph.
Preferably, the step of acquiring the line information of the line where the obstacle is located in the line image includes:
performing Gaussian smoothing processing on the line image to obtain a first intermediate image;
performing edge extraction on the first intermediate image by adopting an edge extraction algorithm to obtain a second intermediate image;
and performing feature detection on the second intermediate image by adopting Hough transform to acquire the linear information of the obstacle in the second intermediate image.
Preferably, the step of obtaining the endpoint coordinate information corresponding to all the line segment endpoints in the line image according to the matching of the line information and the line image includes:
and traversing and comparing each point on the straight line where the obstacle is located with the line in the line image to obtain the target end point and the end point coordinate information corresponding to the target end point.
Preferably, the step of calculating the target position of each room door in the house according to the endpoint coordinate information comprises:
acquiring the endpoint coordinate information corresponding to two adjacent target endpoints on the same straight line;
calculating the distance between the two target end points according to the coordinate information of the two end points;
judging whether the distance is smaller than a set threshold value, and if so, determining that the position between the two target end points is the target position of the room door; and/or,
the step of processing the line image based on the target position of the room door, generating a target image, comprises:
connecting two target end points corresponding to the target positions of the room doors in the line image to generate the target image.
Preferably, the step of dividing different target areas in the house according to the target image comprises:
acquiring coordinate information of each pixel point in the target image to form a first coordinate information set;
coordinate information corresponding to each line in the target image is removed from the first coordinate information set to obtain a second coordinate information set;
randomly selecting any pixel point corresponding to the second coordinate information set as a current point, and dividing the current point into a current area;
traversing and calculating a connecting line between the current point and other points to be partitioned in the second coordinate information set, and judging whether the connecting line has an intersection point with a straight line where the obstacle is located in the target image; if not, determining that the current point and the points to be partitioned are both in the current area; if yes, judging whether the intersection point is located in the area where the obstacle is located, and if not, determining that the current point and the point to be partitioned are both located in the current area; if so, determining that the point to be partitioned is not in the current area;
acquiring all pixel points corresponding to the current area;
removing the coordinate information of all pixel points corresponding to the current area from the second coordinate information set to obtain a new second coordinate information set;
and randomly selecting one point in the new second coordinate information set as the current point, dividing the current point into a new current area, and performing the step of traversing and calculating the connecting lines between the current point and other points to be partitioned in the second coordinate information set until all the target areas in the house are obtained.
Preferably, the obstacle comprises a wall, and/or an obstacle other than the wall.
The invention also provides a walking control method of the cleaning robot, which is realized by adopting the automatic division method of the areas in the house, and comprises the following steps:
presetting cleaning modes corresponding to different target areas in the house;
and controlling the cleaning robot to walk and clean different target areas according to the cleaning mode of the target areas.
The invention also provides an automatic division system of the area in the house, which comprises the following components:
the image information acquisition module is used for acquiring image information corresponding to a two-dimensional house type graph of a house;
the line image acquisition module is used for acquiring a line image corresponding to the two-dimensional house type graph based on the image information; wherein lines in the line image correspond to different obstructions in the house;
the line information acquisition module is used for acquiring line information of a line where the obstacle is located in the line image; the straight line information comprises position coordinate information corresponding to each point on a straight line where the obstacle is located;
the end point coordinate acquisition module is used for obtaining all target end points in the line image and end point coordinate information corresponding to the target end points according to the straight line information and the line image;
the room door determining module is used for calculating the target position of each room door in the house according to the endpoint coordinate information;
the target image acquisition module is used for processing the line image based on the target position of the room door to generate a target image;
and the target area dividing module is used for dividing different target areas in the house according to the target image.
Preferably, the line image obtaining module is configured to perform binarization processing on the image information to obtain a line image corresponding to the two-dimensional house type map.
Preferably, the straight line information acquiring module includes:
the first intermediate image acquisition unit is used for carrying out Gaussian smoothing processing on the line image to acquire a first intermediate image;
a second intermediate image obtaining unit, configured to perform edge extraction on the first intermediate image by using an edge extraction algorithm to obtain a second intermediate image;
and the straight line information acquisition unit is used for carrying out feature detection on the second intermediate image by adopting Hough transform to acquire the straight line information of the obstacle in the second intermediate image.
Preferably, the endpoint coordinate obtaining module is configured to traverse and compare each point on the straight line where the obstacle is located with a line in the line image, and obtain the target endpoint and endpoint coordinate information corresponding to the target endpoint.
Preferably, the room door determining module includes:
the target endpoint acquisition unit is used for acquiring the endpoint coordinate information corresponding to two adjacent target endpoints on the same straight line;
the distance calculation unit is used for calculating the distance between the two target end points according to the coordinate information of the two end points;
the judging unit is used for judging whether the distance is smaller than a set threshold value, and if so, determining that the position between the two target end points is the target position of the room door;
and the target image generating unit is used for connecting two target end points corresponding to the target positions of the room doors in the line image to generate the target image.
Preferably, the target area dividing module includes:
the coordinate set acquisition unit is used for acquiring coordinate information of each pixel point in the target image to form a first coordinate information set;
the first information removing unit is used for removing the coordinate information corresponding to each line in the target image from the first coordinate information set so as to obtain a second coordinate information set;
the selecting unit is used for randomly selecting any one pixel point corresponding to the second coordinate information set as a current point and dividing the current point into a current area;
the processing unit is used for traversing and calculating a connecting line between the current point and other points to be partitioned in the second coordinate information set, and judging whether the connecting line has an intersection point with a straight line where the obstacle is located in the target image; if not, determining that the current point and the points to be partitioned are both in the current area; if yes, judging whether the intersection point is located in the area where the obstacle is located, and if not, determining that the current point and the point to be partitioned are both located in the current area; if so, determining that the point to be partitioned is not in the current area;
the first acquisition unit is used for acquiring all pixel points corresponding to the current area;
a second information removing unit, configured to remove the coordinate information of all pixel points corresponding to the current region from the second coordinate information set, so as to obtain a new second coordinate information set;
the selecting unit is further configured to randomly select a point in the new second coordinate information set as the current point, divide the current point into a new current area, and recall the processing unit until all the target areas in the house are obtained.
Preferably, the obstacle comprises a wall, and/or an obstacle other than the wall.
The invention also provides a walking control system of the cleaning robot, which is realized by adopting the automatic area dividing system in the house, and comprises:
the cleaning mode presetting module is used for presetting cleaning modes corresponding to different target areas in the house;
and the control module is used for controlling the cleaning robot to carry out walking cleaning on different target areas according to the cleaning mode of the target areas.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor realizes the automatic division method of the areas in the house when executing the computer program; or, the walking control method of the cleaning robot.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above-described automatic division method of areas in a house; or, the step of the walking control method of the cleaning robot.
The positive progress effects of the invention are as follows:
in the invention, a two-dimensional house type image of a house is subjected to binarization processing to obtain line images, wherein each line corresponds to a wall body or other obstacles in the house; acquiring linear information of a straight line where an obstacle is located in the line image, then acquiring a target end point and coordinate information of the target end point, determining the position of each room door based on the coordinate information of the target end point, connecting the end points of the positions where the room doors are located in the line image, and sealing the open area of the room to generate a target image; finally, according to the coordinate information of each pixel point in the image, different areas in the house are marked out; the cleaning modes corresponding to different areas are preset, the cleaning robot is controlled to clean the different areas according to the cleaning modes, automatic division of the different areas in the house environment is achieved, the cleaning robot is controlled to adopt the different cleaning modes to clean the different areas, manual participation is not needed in the whole process, user use experience is improved, and higher user use requirements are met.
Drawings
Fig. 1 is a flowchart of an automatic division method of areas in a house according to embodiment 1 of the present invention.
Fig. 2 is a schematic view of a line image corresponding to a two-dimensional house type map in embodiment 1 of the present invention.
Fig. 3 is a flowchart of an automatic division method of areas in a house according to embodiment 2 of the present invention.
Fig. 4 is a schematic view of line information corresponding to a line image in embodiment 2 of the present invention.
Fig. 5 is a schematic diagram of a target image formed based on a room door position in embodiment 2 of the present invention.
Fig. 6 is a flowchart of a method for controlling the travel of a cleaning robot according to embodiment 3 of the present invention.
Fig. 7 is a block diagram of an automatic area division system in a house according to embodiment 4 of the present invention.
Fig. 8 is a block diagram of an automatic area division system in a house according to embodiment 5 of the present invention.
Fig. 9 is a block diagram of a walking control system of a cleaning robot according to embodiment 6 of the present invention.
Fig. 10 is a schematic structural diagram of an electronic device for implementing an automatic area division method in a house according to embodiment 7 of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the scope of the invention.
Example 1
As shown in fig. 1, the automatic division method of the area in the house of the present embodiment includes:
S101, acquiring image information corresponding to a two-dimensional house type graph of a house;
For example, a sensing device such as a laser radar or a depth/binocular camera is used to collect the two-dimensional house type map of the house, or a pre-stored two-dimensional house type map of the house is obtained directly.
S102, obtaining a line image corresponding to the two-dimensional house type graph based on the image information;
as shown in fig. 2, the lines in the line image correspond to different obstacles in the house;
the obstacles include walls, other obstacles besides walls, and the like.
S103, acquiring straight line information of a straight line where the obstacle is located in the line image; the straight line information comprises position coordinate information corresponding to each point on a straight line where the obstacle is located;
S104, obtaining all target end points in the line image and end point coordinate information corresponding to the target end points according to the matching of the straight line information and the line image;
S105, calculating the target position of each room door in the house according to the endpoint coordinate information;
S106, processing the line image based on the target position of the room door to generate a target image;
S107, dividing different target areas in the house according to the target image, where each target area corresponds to a room or other type of independent environment (e.g., a hallway).
In the embodiment, a two-dimensional house type image of a house is subjected to binarization processing to obtain line images, wherein each line corresponds to a wall body or other obstacles in the house; acquiring linear information of a straight line where an obstacle is located in the line image, then acquiring a target end point and coordinate information of the target end point, determining the position of each room door based on the coordinate information of the target end point, and connecting the end points of the positions where the room doors are located in the line image to generate a target image; and finally, different areas in the house are divided according to the coordinate information of each pixel point in the image, so that the automatic division of the different areas in the house environment is realized, manual participation is not needed, the user use experience is improved, and the higher user use requirement is met.
Example 2
As shown in fig. 3, the automatic division method of the area in the house of the present embodiment is a further refinement of embodiment 1. Specifically:
Step S102 includes:
S1021, carrying out binarization processing on the image information to obtain the line image corresponding to the two-dimensional house type map, so as to extract the walls around the house and other obstacles besides the walls, which serve as a basis for subsequent area division.
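For illustration only, a minimal Python/OpenCV sketch of this binarization step is given below; the file name, the threshold value of 200, and the assumption that walls are drawn as dark strokes on a light background are illustrative choices, not part of the disclosure.

```python
import cv2

# Minimal sketch of step S1021 (assumed parameters): binarize the 2D house type
# map so that walls and other obstacles become foreground lines.
image = cv2.imread("floor_plan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
# Assume obstacles are dark strokes on a light background; invert so they become white (255).
_, line_image = cv2.threshold(image, 200, 255, cv2.THRESH_BINARY_INV)
cv2.imwrite("line_image.png", line_image)
```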
Step S103 includes:
S1031, performing Gaussian smoothing processing on the line image to obtain a first intermediate image, thereby effectively removing noise and improving the accuracy of the subsequent edge identification result;
S1032, performing edge extraction on the first intermediate image by adopting an edge extraction algorithm to obtain a second intermediate image;
the edge extraction algorithm includes, but is not limited to, the gradient operator, the Laplacian operator, the Sobel operator, and the Canny edge detector.
And S1033, performing feature detection on the second intermediate image by adopting Hough transform, and acquiring straight line information of the obstacle in the second intermediate image.
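For illustration, a compact Python/OpenCV sketch chaining steps S1031 to S1033 is given below; the kernel size, Canny thresholds, and Hough parameters are assumed values that would need tuning for an actual house type map.

```python
import cv2
import numpy as np

# Sketch of S1031-S1033 (assumed parameters): Gaussian smoothing, edge
# extraction, and probabilistic Hough line detection.
line_image = cv2.imread("line_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
first_intermediate = cv2.GaussianBlur(line_image, (5, 5), 0)     # S1031: remove noise
second_intermediate = cv2.Canny(first_intermediate, 50, 150)     # S1032: edge extraction
lines = cv2.HoughLinesP(second_intermediate, rho=1, theta=np.pi / 180,
                        threshold=50, minLineLength=20, maxLineGap=5)  # S1033: Hough transform
if lines is not None:
    # Each detected straight line is returned as its two end coordinates, from
    # which the position coordinates of every point on the line can be derived.
    for x1, y1, x2, y2 in lines[:, 0]:
        print((x1, y1), "->", (x2, y2))
```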
As shown in fig. 4, the straight lines in the image are detected through Hough transform to obtain a coordinate expression corresponding to each straight line in the image; that is, each point on the straight line has corresponding position coordinate information.
Step S104 includes:
S1041, traversing and comparing each point on the straight line where the obstacle is located with the lines in the line image, and acquiring the target end points and the end point coordinate information corresponding to the target end points.
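One possible reading of step S1041 is sketched below: each sampled point along a detected straight line is compared against the line image, and the positions where drawn segments start or stop are recorded as target end points. The sampling density and the foreground test are assumptions made for illustration.

```python
import numpy as np

def find_target_endpoints(line_image, x1, y1, x2, y2):
    """Sketch of S1041 (assumed interpretation): walk along the detected straight
    line and record the pixels where drawn segments begin or end."""
    length = int(np.hypot(x2 - x1, y2 - y1))
    xs = np.linspace(x1, x2, length + 1).round().astype(int)
    ys = np.linspace(y1, y2, length + 1).round().astype(int)
    endpoints = []
    previously_on_line = False
    for x, y in zip(xs, ys):
        on_line = line_image[y, x] > 0       # foreground pixel of the line image
        if on_line != previously_on_line:    # a drawn segment starts or stops here
            endpoints.append((int(x), int(y)))
        previously_on_line = on_line
    if previously_on_line:                   # segment runs to the end of the detected line
        endpoints.append((int(xs[-1]), int(ys[-1])))
    return endpoints
```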
Step S105 includes:
S1051, obtaining the endpoint coordinate information corresponding to two adjacent target endpoints on the same straight line;
S1052, calculating the distance between the two target end points according to the coordinate information of the two end points;
S1053, judging whether the distance is smaller than a set threshold value, and if so, determining that the position between the two target end points is the target position of a room door;
step S106 includes:
S1061, connecting two target end points corresponding to the target positions of the room doors in the line image to generate a target image.
Specifically, whether the distance between two target end points on the same straight line is within a threshold corresponding to the width of a room door is calculated; if so, the blank area between the two target end points is determined to be a room door (as shown in FIG. 2) and is modified into an obstacle-occupied area. That is, the position of the room door is calculated from the target end points, and the calculated room door is then supplemented to the current line image in the form of an obstacle to form the target image. As shown in fig. 5, a plurality of enclosed spaces are formed, where each enclosed space corresponds to a room or another type of independent environment (such as a corridor).
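A hedged Python sketch of steps S1051 to S1061 follows; the door-width threshold of 40 pixels and the structure of the endpoints_per_line input (the ordered target end points found on each detected straight line) are assumptions made for illustration.

```python
import cv2
import numpy as np

DOOR_WIDTH_THRESHOLD = 40  # assumed door-width threshold in pixels; tuned per floor plan

def close_door_gaps(line_image, endpoints_per_line):
    """Sketch of S105/S106 (assumed interpretation): adjacent target end points on
    the same straight line that are closer than the threshold are taken to be a
    room door and drawn back into the image as an obstacle segment."""
    target_image = line_image.copy()
    for endpoints in endpoints_per_line:         # ordered end points of one straight line
        for (xa, ya), (xb, yb) in zip(endpoints, endpoints[1:]):
            distance = float(np.hypot(xb - xa, yb - ya))
            if distance < DOOR_WIDTH_THRESHOLD:  # S1053: gap narrow enough to be a door
                cv2.line(target_image, (xa, ya), (xb, yb), color=255, thickness=2)  # S1061
    return target_image
```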
Step S107 includes:
acquiring coordinate information of each pixel point in a target image to form a first coordinate information set;
coordinate information corresponding to each line in the target image is removed from the first coordinate information set to obtain a second coordinate information set;
randomly selecting any pixel point corresponding to the second coordinate information set as a current point, and dividing the current point into a current area;
traversing and calculating connecting lines between the current point and other points to be partitioned in the second coordinate information set, and judging whether each connecting line has an intersection point with a straight line where an obstacle in the target image is located; if not, determining that the current point and the point to be partitioned are both in the current area; if yes, judging whether the intersection point is located in the area where the obstacle is located, and if not, determining that the current point and the point to be partitioned are both located in the current area; if so, determining that the point to be partitioned is not in the current area;
acquiring all pixel points corresponding to the current area;
removing the coordinate information of all pixel points corresponding to the current area from the second coordinate information set to obtain a new second coordinate information set;
and randomly selecting one point in the new second coordinate information set as the current point, dividing the current point into a new current area, and repeatedly executing the step of traversing and calculating the connecting lines between the current point and other points to be partitioned in the second coordinate information set until all the target areas in the house are obtained.
The following example illustrates the process: (1) the coordinate information corresponding to each point in the target image is loaded into a list to be partitioned, and the coordinate information of the walls and other obstacles is removed from the list to be partitioned;
(2) a point a (not a wall or other obstacle) is randomly selected from the list to be partitioned and assigned to a first partition; the other non-wall, non-obstacle points in the target image are then traversed, and whether each traversed point belongs to the first partition is judged;
(3) taking any traversed point b as an example, whether the connecting line between point a and point b has an intersection point with a straight line where a line in the target image is located is judged; if there is no intersection point, point b and point a are determined to be in the same partition; if there is an intersection point, whether the intersection point is located in a wall or obstacle area is further judged; if not, point b and point a are determined to be in the same partition; if so, point b is determined not to be in the same partition as point a;
(4) after all the points have been screened, all the points classified into the first partition are removed from the list to be partitioned;
(5) the next point is randomly selected from the new list to be partitioned, and the operation of step (2) is performed again to form the next partition; this is repeated until all the points in the list to be partitioned have been assigned, at which point all the target areas in the house have been divided.
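A simplified Python sketch of the partitioning procedure of step S107 and of example steps (1) to (5) is given below; it samples the connecting line and treats any crossing of an obstacle pixel as leaving the current area, so the additional check on whether the intersection point lies inside the obstacle area is folded into that single pixel test. This reading, and the assumption that obstacles are non-zero pixels in the target image, are illustrative simplifications.

```python
import numpy as np

def crosses_obstacle(target_image, point_a, point_b):
    """Assumed intersection test: sample the connecting line between two points and
    report whether any sampled pixel lies on an obstacle line."""
    (xa, ya), (xb, yb) = point_a, point_b
    length = int(np.hypot(xb - xa, yb - ya))
    xs = np.linspace(xa, xb, length + 1).round().astype(int)
    ys = np.linspace(ya, yb, length + 1).round().astype(int)
    return bool((target_image[ys, xs] > 0).any())

def partition_regions(target_image):
    """Sketch of S107: repeatedly pick an unassigned free pixel and gather every other
    free pixel whose connecting line does not cross an obstacle."""
    # Points to be partitioned: every pixel that is not part of a wall/obstacle line.
    # In practice the image would be downsampled to keep this pairwise check tractable.
    free = [(x, y) for y, x in zip(*np.nonzero(target_image == 0))]
    remaining = set(free)
    regions = []
    while remaining:
        current = next(iter(remaining))   # stands in for the random selection above
        region = {current}
        for candidate in list(remaining):
            if candidate != current and not crosses_obstacle(target_image, current, candidate):
                region.add(candidate)
        regions.append(region)
        remaining -= region               # remove the finished area and continue
    return regions
```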
In the embodiment, a two-dimensional house type image of a house is subjected to binarization processing to obtain line images, wherein each line corresponds to a wall body or other obstacles in the house; acquiring linear information of a straight line where an obstacle is located in the line image, then acquiring a target end point and coordinate information of the target end point, determining the position of each room door based on the coordinate information of the target end point, connecting the end points of the positions where the room doors are located in the line image, and sealing the open area of the room to generate a target image; and finally, different areas in the house are divided according to the coordinate information of each pixel point in the image, so that the automatic division of the different areas in the house environment is realized, manual participation is not needed, the user use experience is improved, and the higher user use requirement is met.
Example 3
The traveling control method of the cleaning robot of the present embodiment is implemented by the automatic division method of the area in the house of embodiment 1 or 2.
As shown in fig. 6, the method for controlling the traveling of the cleaning robot of the present embodiment includes:
S201, presetting cleaning modes corresponding to different target areas in the house, wherein the cleaning mode of each target area can be reset according to the actual situation.
And S202, controlling the cleaning robot to walk and clean different target areas according to the cleaning mode of the target areas.
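As a purely illustrative example of steps S201 and S202, a hypothetical mapping from target areas to cleaning modes might look like the sketch below; the area names, mode identifiers, and robot methods (set_mode, clean_area) are invented for the example and are not part of the disclosure.

```python
# Hypothetical per-area cleaning modes for S201 (names are invented).
cleaning_modes = {
    "kitchen": "wet_mop_intensive",
    "bedroom": "quiet_vacuum",
    "hallway": "quick_sweep",
}

def clean_house(robot, target_areas):
    """Sketch of S202: walk and clean each divided target area in its preset mode."""
    for area in target_areas:
        mode = cleaning_modes.get(area.name, "standard_vacuum")  # fall back to a default mode
        robot.set_mode(mode)    # assumed robot API
        robot.clean_area(area)  # assumed robot API
```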
In this embodiment, the cleaning modes corresponding to the different areas are preset, and the cleaning robot is controlled to clean the different areas according to the corresponding cleaning modes, so that the areas automatically divided in the house are each cleaned with the appropriate mode switching, which improves the user experience and meets higher user requirements.
Example 4
As shown in fig. 7, the automatic area dividing system in a house of the present embodiment includes an image information acquisition module 1, a line image acquisition module 2, a straight line information acquisition module 3, an end point coordinate acquisition module 4, a room door determination module 5, a target image acquisition module 6, and a target area dividing module 7.
The image information acquisition module 1 is used for acquiring image information corresponding to a two-dimensional house type graph of a house;
For example, a sensing device such as a laser radar or a depth/binocular camera is used to collect the two-dimensional house type map of the house, or a pre-stored two-dimensional house type map of the house is obtained directly.
The line image acquisition module 2 is used for acquiring a line image corresponding to the two-dimensional house type image based on the image information; as shown in fig. 2, the lines in the line image correspond to different obstacles in the house; the obstacles include walls, other obstacles besides walls, and the like.
The straight line information acquisition module 3 is used for acquiring straight line information of a straight line where the obstacle is located in the line image; the straight line information comprises position coordinate information corresponding to each point on a straight line where the obstacle is located;
the endpoint coordinate acquisition module 4 is used for obtaining all target endpoints in the line image and endpoint coordinate information corresponding to the target endpoints according to the matching of the line information and the line image;
the room door determining module 5 is used for calculating a target position of each room door in the house according to the endpoint coordinate information;
the target image acquisition module 6 is used for processing the line image based on the target position of the room door to generate a target image;
the target area dividing module 7 is used for dividing different target areas in the house according to the target image; where each target area corresponds to a room or other type of independent environment (e.g., a hallway).
In the embodiment, a two-dimensional house type image of a house is subjected to binarization processing to obtain line images, wherein each line corresponds to a wall body or other obstacles in the house; acquiring linear information of a straight line where an obstacle is located in the line image, then acquiring a target end point and coordinate information of the target end point, determining the position of each room door based on the coordinate information of the target end point, and connecting the end points of the positions where the room doors are located in the line image to generate a target image; and finally, different areas in the house are divided according to the coordinate information of each pixel point in the image, so that the automatic division of the different areas in the house environment is realized, manual participation is not needed, the user use experience is improved, and the higher user use requirement is met.
Example 5
As shown in fig. 8, the automatic division system of the area in the house of the present embodiment is a further improvement of embodiment 4, specifically:
the line image obtaining module 2 is configured to perform binarization processing on the image information to obtain a line image corresponding to the two-dimensional house type map, so as to extract walls around the house and other obstacles outside the walls, and use the extracted walls as a basis for subsequent area division.
The straight line information acquisition module 3 includes a first intermediate image acquisition unit 8, a second intermediate image acquisition unit 9, and a straight line information acquisition unit 10.
The first intermediate image obtaining unit 8 is configured to perform gaussian smoothing on the line image to obtain a first intermediate image, so as to effectively remove noise and improve accuracy of a subsequent edge identification result;
the second intermediate image obtaining unit 9 is configured to perform edge extraction on the first intermediate image by using an edge extraction algorithm to obtain a second intermediate image; the edge extraction algorithm includes, but is not limited to, gradient operator, laplacian operator, Sobel operator, Canny. The straight line information obtaining unit 10 is configured to perform feature detection on the second intermediate image by using hough transform, and obtain straight line information where the obstacle is located in the second intermediate image. As shown in fig. 4, straight lines in the image are detected through hough transform to obtain a coordinate expression corresponding to each straight line in the image, that is, each point on the straight line has corresponding position coordinate information.
The endpoint coordinate obtaining module 4 is configured to traverse and compare each point on the straight line where the obstacle is located with the lines in the line image, and obtain the target endpoints and the endpoint coordinate information corresponding to the target endpoints.
The room door determining module 5 includes a target end point acquiring unit 11, a distance calculating unit 12, a judging unit 13, and a target image generating unit 14.
The target endpoint acquiring unit 11 is configured to acquire endpoint coordinate information corresponding to two adjacent target endpoints on the same straight line;
the distance calculation unit 12 is configured to calculate a distance between two target endpoints according to the coordinate information of the two endpoints;
the judging unit 13 is configured to judge whether the distance is smaller than a set threshold, and if so, determine a target position between two target endpoints, which is a room door;
the target image generation unit 14 is configured to connect two target end points corresponding to the target positions of the room doors in the line image to generate a target image.
Specifically, whether the distance between two target end points on the same straight line is within a threshold corresponding to the width of a room door is calculated; if so, the blank area between the two target end points is determined to be a room door, and the blank area is modified into an obstacle-occupied area. That is, the position of the room door is calculated from the target end points, and the calculated room door is then supplemented to the current line image in the form of an obstacle to form the target image. As shown in fig. 5, a plurality of enclosed spaces are formed, where each enclosed space corresponds to a room or another type of independent environment (such as a corridor).
The target area dividing module 7 comprises a coordinate set acquisition unit 15, a first information rejection unit 16, a selection unit 17, a processing unit 18, a first acquisition unit 19 and a second information rejection unit 20.
The coordinate set obtaining unit 15 is configured to obtain coordinate information of each pixel point in the target image to form a first coordinate information set;
the first information removing unit 16 is configured to remove, from the first coordinate information set, coordinate information corresponding to each line in the target image, so as to obtain a second coordinate information set;
the selecting unit 17 is configured to randomly select any one pixel point corresponding to the second coordinate information set as a current point, and divide the current point into a current region;
the processing unit 18 is configured to traverse and calculate a connection line between the current point and another point to be partitioned in the second coordinate information set, determine whether the connection line has an intersection point with a straight line where an obstacle in the target image is located, and if not, determine that the current point and the point to be partitioned are both in the current region; if yes, judging whether the current point is located in the area where the obstacle is located, and if not, determining that the current point and the point to be partitioned are both located in the current area; if the current point and the point to be partitioned are located in the current area, determining that the current point and the point to be partitioned are not located in the current area;
the first obtaining unit 19 is configured to obtain all pixel points corresponding to the current area;
the second information removing unit 20 is configured to remove the coordinate information of all pixel points corresponding to the current region from the second coordinate information set to obtain a new second coordinate information set;
the selecting unit 17 is further configured to randomly select a point in the new second coordinate information set as a current point, divide the current point into a new current area, and recall the processing unit 18 until all target areas in the house are obtained.
The following example illustrates the process: (1) the coordinate information corresponding to each point in the target image is loaded into a list to be partitioned, and the coordinate information of the walls and other obstacles is removed from the list to be partitioned;
(2) a point a (not a wall or other obstacle) is randomly selected from the list to be partitioned and assigned to a first partition; the other non-wall, non-obstacle points in the target image are then traversed, and whether each traversed point belongs to the first partition is judged;
(3) taking any traversed point b as an example, whether the connecting line between point a and point b has an intersection point with a straight line where a line in the target image is located is judged; if there is no intersection point, point b and point a are determined to be in the same partition; if there is an intersection point, whether the intersection point is located in a wall or obstacle area is further judged; if not, point b and point a are determined to be in the same partition; if so, point b is determined not to be in the same partition as point a;
(4) after all the points have been screened, all the points classified into the first partition are removed from the list to be partitioned;
(5) the next point is randomly selected from the new list to be partitioned, and the operation of step (2) is performed again to form the next partition; this is repeated until all the points in the list to be partitioned have been assigned, at which point all the target areas in the house have been divided.
In the embodiment, a two-dimensional house type image of a house is subjected to binarization processing to obtain line images, wherein each line corresponds to a wall body or other obstacles in the house; acquiring linear information of a straight line where an obstacle is located in the line image, then acquiring a target end point and coordinate information of the target end point, determining the position of each room door based on the coordinate information of the target end point, connecting the end points of the positions where the room doors are located in the line image, and sealing the open area of the room to generate a target image; and finally, different areas in the house are divided according to the coordinate information of each pixel point in the image, so that the automatic division of the different areas in the house environment is realized, manual participation is not needed, the user use experience is improved, and the higher user use requirement is met.
Example 6
The travel control system of the cleaning robot of the present embodiment is implemented by the automatic division system of the area in the house of embodiment 4 or 5.
As shown in fig. 9, the travel control system of the cleaning robot of the present embodiment includes a cleaning mode presetting module 21 and a control module 22.
The cleaning mode presetting module 21 is used for presetting cleaning modes corresponding to different target areas in the house; wherein, the cleaning mode of each target area can be reset according to the actual situation.
The control module 22 is used for controlling the cleaning robot to perform walking cleaning on different target areas according to the cleaning mode of the target areas.
In this embodiment, the cleaning modes corresponding to the different areas are preset, and the cleaning robot is controlled to clean the different areas according to the corresponding cleaning modes, so that the automatically divided areas are each cleaned with the appropriate mode switching, which improves the user experience and meets higher user requirements.
Example 7
Fig. 10 is a schematic structural diagram of an electronic device according to embodiment 7 of the present invention. The electronic device comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the automatic division method of the area in the house in any one of the embodiments 1 or 2. The electronic device 30 shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 10, the electronic device 30 may be embodied in the form of a general purpose computing device, which may be, for example, a server device. The components of the electronic device 30 may include, but are not limited to: the at least one processor 31, the at least one memory 32, and a bus 33 connecting the various system components (including the memory 32 and the processor 31).
The bus 33 includes a data bus, an address bus, and a control bus.
The memory 32 may include volatile memory, such as a random access memory (RAM) 321 and/or a cache memory 322, and may further include a read-only memory (ROM) 323.
Memory 32 may also include a program/utility 325 having a set (at least one) of program modules 324, such program modules 324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The processor 31 executes various functional applications and data processing, such as an automatic division method of an area in a house in any one of embodiments 1 or 2 of the present invention, by executing a computer program stored in the memory 32.
The electronic device 30 may also communicate with one or more external devices 34 (e.g., a keyboard, a pointing device, etc.). Such communication may take place through input/output (I/O) interfaces 35. The electronic device 30 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 36. As shown in FIG. 10, the network adapter 36 communicates with the other modules of the electronic device 30 via the bus 33. It should be understood that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 30, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although in the above detailed description several units/modules or sub-units/modules of the electronic device are mentioned, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the units/modules described above may be embodied in one unit/module according to embodiments of the invention. Conversely, the features and functions of one unit/module described above may be further divided into embodiments by a plurality of units/modules.
Example 8
Embodiment 8 of the present invention provides an electronic device, which comprises a memory, a processor, and a computer program that is stored in the memory and is executable on the processor, wherein the processor executes the computer program to implement the walking control method of the cleaning robot in embodiment 3. For the specific structure of the electronic device, refer to the electronic device in embodiment 7; its working principle is substantially the same as that of the electronic device in embodiment 7, and details are not described here again.
Example 9
The present embodiment provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps in the automatic division method of an area in a house in any one of embodiments 1 or 2.
More specific examples of the readable storage medium may include, but are not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible implementation, the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps of implementing the method for automatic division of areas in a house according to any one of embodiments 1 or 2, when the program product is run on the terminal device.
Where program code for carrying out the invention is written in any combination of one or more programming languages, the program code may execute entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device or entirely on the remote device.
Example 10
The present embodiment provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the steps in the walking control method of the cleaning robot in embodiment 3.
More specific examples of the readable storage medium may include, but are not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible embodiment, the present invention can also be realized in the form of a program product including program code which, when the program product is run on a terminal device, causes the terminal device to execute the steps of the walking control method of the cleaning robot in embodiment 3.
Where program code for carrying out the invention is written in any combination of one or more programming languages, the program code may execute entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.

Claims (18)

1. An automatic division method of an area in a house, characterized in that the automatic division method comprises:
acquiring image information corresponding to a two-dimensional house type graph of a house;
acquiring a line image corresponding to the two-dimensional house type graph based on the image information; wherein lines in the line image correspond to different obstructions in the house;
acquiring straight line information of a straight line where the obstacle is located in the line image; the straight line information comprises position coordinate information corresponding to each point on a straight line where the obstacle is located;
matching all target end points in the line image and end point coordinate information corresponding to the target end points according to the straight line information and the line image;
calculating the target position of each room door in the house according to the endpoint coordinate information;
processing the line image based on the target position of the room door to generate a target image;
and dividing different target areas in the house according to the target images.
2. The method for automatically dividing areas in a house according to claim 1, wherein the step of obtaining line images corresponding to the two-dimensional house type map according to the image information comprises:
and carrying out binarization processing on the image information to obtain the line image corresponding to the two-dimensional house type graph.
3. The method for automatically dividing an area in a house according to claim 1, wherein the step of obtaining the line information of the line where the obstacle is located in the line image includes:
performing Gaussian smoothing processing on the line image to obtain a first intermediate image;
performing edge extraction on the first intermediate image by adopting an edge extraction algorithm to obtain a second intermediate image;
and performing feature detection on the second intermediate image by adopting Hough transform to acquire the linear information of the obstacle in the second intermediate image.
4. The method for automatically dividing areas in a house according to claim 1, wherein the step of obtaining the end point coordinate information corresponding to all the line segment end points in the line image according to the matching of the line information and the line image comprises:
and traversing and comparing each point on the straight line where the obstacle is located with the line in the line image to obtain the target end point and the end point coordinate information corresponding to the target end point.
5. The method for automatically dividing an area in a house according to claim 1 or 4, wherein said step of calculating a target position of each room door in said house based on said endpoint coordinate information comprises:
acquiring the endpoint coordinate information corresponding to two adjacent target endpoints on the same straight line;
calculating the distance between the two target end points according to the coordinate information of the two end points;
judging whether the distance is smaller than a set threshold value, and if so, determining that the position between the two target end points is the target position of the room door; and/or,
the step of processing the line image based on the target position of the room door, generating a target image, comprises:
connecting two target end points corresponding to the target positions of the room doors in the line image to generate the target image.
6. The method for automatically dividing an area in a house according to claim 1, wherein said step of dividing different target areas in said house according to said target image comprises:
acquiring coordinate information of each pixel point in the target image to form a first coordinate information set;
coordinate information corresponding to each line in the target image is removed from the first coordinate information set to obtain a second coordinate information set;
randomly selecting any pixel point corresponding to the second coordinate information set as a current point, and dividing the current point into a current area;
traversing and calculating a connecting line between the current point and other points to be partitioned in the second coordinate information set, and judging whether the connecting line has an intersection point with a straight line where the obstacle is located in the target image; if not, determining that the current point and the points to be partitioned are both in the current area; if yes, judging whether the intersection point is located in the area where the obstacle is located, and if not, determining that the current point and the point to be partitioned are both located in the current area; if so, determining that the point to be partitioned is not in the current area;
acquiring all pixel points corresponding to the current area;
removing the coordinate information of all pixel points corresponding to the current area from the second coordinate information set to obtain a new second coordinate information set;
and randomly selecting one point in the new second coordinate information set as the current point, dividing the current point into a new current area, and performing the step of traversing and calculating the connecting lines between the current point and other points to be partitioned in the second coordinate information set until all the target areas in the house are obtained.
7. The method for automatically dividing areas in a house of claim 1, wherein said obstacles comprise walls and/or obstacles other than said walls.
8. A walking control method of a cleaning robot, characterized in that the walking control method is implemented by the automatic division method of the area in a house according to any one of claims 1 to 7, and the walking control method comprises:
presetting cleaning modes corresponding to different target areas in the house;
and controlling the cleaning robot to walk and clean different target areas according to the cleaning mode of the target areas.
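Claim 8 amounts to a lookup from each divided target area to a preset cleaning mode. The sketch below shows one hypothetical dispatch; the area labels, mode parameters and the robot methods set_mode and traverse are invented placeholders, not an actual robot API.

```python
# Hypothetical mapping from target-area labels to preset cleaning modes.
CLEANING_MODES = {
    "kitchen":     {"suction": "high",   "mop": True,  "passes": 2},
    "bedroom":     {"suction": "low",    "mop": False, "passes": 1},
    "living_room": {"suction": "medium", "mop": True,  "passes": 1},
}
DEFAULT_MODE = {"suction": "medium", "mop": False, "passes": 1}

def clean_target_areas(robot, target_areas):
    """Walk and clean each target area according to its preset mode.
    target_areas is assumed to map an area label to its pixel set."""
    for area_label, area_pixels in target_areas.items():
        mode = CLEANING_MODES.get(area_label, DEFAULT_MODE)
        robot.set_mode(**mode)        # hypothetical robot call
        robot.traverse(area_pixels)   # hypothetical coverage-path call
```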
9. A system for automatically dividing areas in a house, the system comprising:
the image information acquisition module is used for acquiring image information corresponding to a two-dimensional house type graph of a house;
the line image acquisition module is used for acquiring a line image corresponding to the two-dimensional house type graph based on the image information; wherein lines in the line image correspond to different obstacles in the house;
the line information acquisition module is used for acquiring line information of a line where the obstacle is located in the line image; the straight line information comprises position coordinate information corresponding to each point on a straight line where the obstacle is located;
the end point coordinate acquisition module is used for obtaining all target end points in the line image and end point coordinate information corresponding to the target end points according to the straight line information and the line image;
the room door determining module is used for calculating the target position of each room door in the house according to the endpoint coordinate information;
the target image acquisition module is used for processing the line image based on the target position of the room door to generate a target image;
and the target area dividing module is used for dividing different target areas in the house according to the target image.
10. The system for automatically dividing areas in a house according to claim 9, wherein the line image obtaining module is configured to perform binarization processing on the image information to obtain a line image corresponding to the two-dimensional house type map.
11. The system for automatically dividing areas in a house of claim 9, wherein said straight line information acquisition module comprises:
the first intermediate image acquisition unit is used for carrying out Gaussian smoothing processing on the line image to acquire a first intermediate image;
a second intermediate image obtaining unit, configured to perform edge extraction on the first intermediate image by using an edge extraction algorithm to obtain a second intermediate image;
and the straight line information acquisition unit is used for carrying out feature detection on the second intermediate image by adopting Hough transform to acquire the straight line information of the obstacle in the second intermediate image.
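A sketch of how the line image acquisition and straight line information acquisition of claims 10 and 11 could be realized with OpenCV; the binarization threshold, the choice of Canny as the edge extraction algorithm and the Hough transform parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_line_image_and_lines(floorplan_gray):
    """floorplan_gray: 8-bit grayscale image of the two-dimensional house
    type map. Returns the binarized line image and detected line segments."""
    # Binarization: dark floor-plan lines become foreground pixels (claim 10).
    _, line_image = cv2.threshold(floorplan_gray, 200, 255, cv2.THRESH_BINARY_INV)
    # Gaussian smoothing -> first intermediate image.
    smoothed = cv2.GaussianBlur(line_image, (5, 5), 0)
    # Edge extraction (Canny used here) -> second intermediate image.
    edges = cv2.Canny(smoothed, 50, 150)
    # Hough transform to detect the straight lines where obstacles lie.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=30, maxLineGap=5)
    segments = [] if lines is None else [tuple(l[0]) for l in lines]
    return line_image, segments
```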
12. The system for automatically dividing an area in a house according to claim 9, wherein the end point coordinate obtaining module is configured to compare each point on a straight line where the obstacle is located with a line in the line image in a traversing manner, and obtain the target end point and the end point coordinate information corresponding to the target end point.
13. The system for automatically dividing areas in a house according to claim 9, wherein the room door determining module comprises:
the target endpoint acquisition unit is used for acquiring the endpoint coordinate information corresponding to two adjacent target endpoints on the same straight line;
the distance calculation unit is used for calculating the distance between the two target end points according to the coordinate information of the two end points;
the judging unit is used for judging whether the distance is smaller than a set threshold value, and if so, determining the position between the two target end points as the target position of the room door;
and the target image generating unit is used for connecting two target end points corresponding to the target positions of the room doors in the line image to generate the target image.
14. The system for automatically dividing areas in a house according to claim 9, wherein the target area dividing module comprises:
the coordinate set acquisition unit is used for acquiring coordinate information of each pixel point in the target image to form a first coordinate information set;
the first information removing unit is used for removing the coordinate information corresponding to each line in the target image from the first coordinate information set so as to obtain a second coordinate information set;
the selecting unit is used for randomly selecting any one pixel point corresponding to the second coordinate information set as a current point and dividing the current point into a current area;
the processing unit is used for calculating, in a traversing manner, a connecting line between the current point and each of the other points to be partitioned in the second coordinate information set, and judging whether the connecting line has an intersection point with a straight line where the obstacle is located in the target image; if not, determining that the current point and the point to be partitioned are both in the current area; if yes, judging whether the intersection point is located in the area where the obstacle is located, and if not, determining that the current point and the point to be partitioned are both in the current area; if so, determining that the point to be partitioned is not in the current area;
the first acquisition unit is used for acquiring all pixel points corresponding to the current area;
a second information removing unit, configured to remove the coordinate information of all pixel points corresponding to the current region from the second coordinate information set, so as to obtain a new second coordinate information set;
the selecting unit is further configured to randomly select one point in the new second coordinate information set as the current point, divide the current point into a new current area, and recall the processing unit until all the target areas in the house are obtained.
15. The system for automatically dividing areas in a house according to claim 9, wherein the obstacles comprise walls and/or obstacles other than the walls.
16. A walking control system of a cleaning robot, characterized in that the walking control system is implemented by using the system for automatically dividing areas in a house according to any one of claims 9 to 15, and the walking control system comprises:
the cleaning mode presetting module is used for presetting cleaning modes corresponding to different target areas in the house;
and the control module is used for controlling the cleaning robot to carry out walking cleaning on different target areas according to the cleaning mode of the target areas.
17. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for automatically dividing an area in a house according to any one of claims 1 to 7, or implements the walking control method of the cleaning robot according to claim 8.
18. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, carries out the steps of the method for automatically dividing an area in a house according to any one of claims 1 to 7, or the steps of the walking control method of the cleaning robot according to claim 8.
CN202010478386.2A 2020-05-29 2020-05-29 Automatic region division and robot walking control method, system, equipment and medium Pending CN113744329A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010478386.2A CN113744329A (en) 2020-05-29 2020-05-29 Automatic region division and robot walking control method, system, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010478386.2A CN113744329A (en) 2020-05-29 2020-05-29 Automatic region division and robot walking control method, system, equipment and medium

Publications (1)

Publication Number Publication Date
CN113744329A (en) 2021-12-03

Family

ID=78724891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010478386.2A Pending CN113744329A (en) 2020-05-29 2020-05-29 Automatic region division and robot walking control method, system, equipment and medium

Country Status (1)

Country Link
CN (1) CN113744329A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023115660A1 (en) * 2021-12-22 2023-06-29 广东栗子科技有限公司 Method and apparatus for automatically cleaning ground
CN115329420A (en) * 2022-07-18 2022-11-11 北京五八信息技术有限公司 Marking line generation method and device, terminal equipment and storage medium
CN115329420B (en) * 2022-07-18 2023-10-20 北京五八信息技术有限公司 Marking generation method and device, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
Bormann et al. Room segmentation: Survey, implementation, and analysis
WO2020140860A1 (en) Dynamic region division and region channel identification method, and cleaning robot
CN109682368B (en) Robot, map construction method, positioning method, electronic device and storage medium
US8326019B2 (en) Apparatus, method, and medium for dividing regions by using feature points and mobile robot using the same
US9744671B2 (en) Information technology asset type identification using a mobile vision-enabled robot
CN110874100A (en) System and method for autonomous navigation using visual sparse maps
CN111609852A (en) Semantic map construction method, sweeping robot and electronic equipment
CN113568415B (en) Mobile robot, edgewise moving method thereof and computer storage medium
CN106643721B (en) Construction method of environment topological map
CN111197985B (en) Area identification method, path planning method, device and storage medium
CN113744329A (en) Automatic region division and robot walking control method, system, equipment and medium
CN108154516B (en) Point cloud topological segmentation method and device for closed space
CN113219992A (en) Path planning method and cleaning robot
CN112180931A (en) Sweeping path planning method and device of sweeper and readable storage medium
CN113219993A (en) Path planning method and cleaning robot
CN111127500A (en) Space partitioning method and device and mobile robot
CN113080768A (en) Sweeper control method, sweeper control equipment and computer readable storage medium
EP3264212B1 (en) System and method for determining an energy-efficient path of an autonomous device
CN114089752A (en) Autonomous exploration method for robot, and computer-readable storage medium
CN112087573A (en) Drawing of an environment
CN114365974B (en) Indoor cleaning and partitioning method and device and floor sweeping robot
CN114431771B (en) Sweeping method of sweeping robot and related device
WO2022247544A1 (en) Map partitioning method and apparatus, and autonomous mobile device and storage medium
CN114489058A (en) Sweeping robot, path planning method and device thereof and storage medium
Marzorati et al. Particle-based sensor modeling for 3d-vision slam

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination