CN111857156B - Laser-based robot region division method, chip and robot - Google Patents


Info

Publication number
CN111857156B
CN111857156B (application CN202010764293.6A)
Authority
CN
China
Prior art keywords
robot
preset
boundary line
area
room
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010764293.6A
Other languages
Chinese (zh)
Other versions
CN111857156A (en)
Inventor
赖钦伟
徐依绵
王悦林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Amicro Semiconductor Co Ltd
Original Assignee
Zhuhai Amicro Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Amicro Semiconductor Co Ltd filed Critical Zhuhai Amicro Semiconductor Co Ltd
Priority to CN202010764293.6A priority Critical patent/CN111857156B/en
Publication of CN111857156A publication Critical patent/CN111857156A/en
Application granted granted Critical
Publication of CN111857156B publication Critical patent/CN111857156B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0234 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons
    • G05D1/0236 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using optical markers or beacons in combination with a laser
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a laser-based robot region division method, a chip and a robot. The region division method comprises: while the robot walks along the edge of an indoor work area in a preset edgewise direction, setting a reference division boundary line for dividing the indoor work area according to data scanned by the robot's laser sensor, so that the robot continues the edgewise walk in the preset edgewise direction along that boundary line. Compared with the prior art, the reference division boundary line for dividing the area is set from map boundary information obtained by laser scanning, so that a user can accurately designate a region for the robot to cover and clean, and the accuracy of room-region division is higher than that of pure vision techniques.

Description

Laser-based robot region division method, chip and robot
Technical Field
The present invention relates to robot cleaning area planning, and more particularly, to a laser-based robot area dividing method, a chip, and a robot.
Background
Existing sweeping robots use inertial navigation, lidar or a camera for map planning and navigation, and a user can watch the division of the cleaning area on a mobile device in real time while the robot sweeps. However, this division is not a room-by-room division: the cleaning area is split into several regions arbitrarily, based only on its coordinate information, and the division mainly serves coverage planning. In the prior art, a map built in real time by purely visual means cannot mark information about uncleaned areas in advance, so rooms cannot be divided effectively. Moreover, after rooms are divided by prior-art algorithms, one room may be split into many small regions; some of these cannot be merged and swept together, many are missed entirely, the sweeping efficiency is very low, the navigation path is very long, and the customer experience is extremely poor.
As a result, the user cannot accurately direct the sweeping robot to a specific room to be cleaned. Even if the robot reaches that room, there is no guarantee that it cleans only there after arriving; instead, it may shuttle unnecessarily between that room and other rooms while cleaning, and the sub-areas of each room are not cleaned well, so the user feels that the sweeping robot is insufficiently intelligent and user-friendly.
Disclosure of Invention
To solve the above technical problems, the technical scheme of the invention combines laser scanning data to make the region division more accurate: the robot divides the indoor environment into room sub-areas by setting filled boundary lines that meet size requirements, so that a user can accurately designate a region for the robot to cover and clean.
A laser-based robot region division method, comprising: while the robot walks along the edge of an indoor work area in a preset edgewise direction, setting a reference division boundary line for dividing the indoor work area according to data scanned by the robot's laser sensor, so that the robot continues the edgewise walk in the preset edgewise direction along that boundary line. Compared with the prior art, the reference division boundary line for dividing the area is set from map boundary information obtained by laser scanning, so that a user can accurately designate a region for the robot to cover and clean. The accuracy of room-region division is higher than that of pure vision techniques.
Further, while the robot walks along the edge in the preset edgewise direction, each time a reference division boundary line is set, the robot is controlled to continue the edgewise walk along that boundary line in the preset edgewise direction, and a new non-traversed area is partitioned off within the indoor work area. The reference division boundary line separates the traversed area from the non-traversed area in the indoor work area; the traversed area includes the edgewise starting position and the path already followed, and is marked into the laser map. This scheme helps control the robot to walk along reference division boundary lines inside an individual room so as to split off new room sub-areas in several directions, giving a higher coverage rate of region division.
Further, when the robot walks back to the edgewise starting position of the traversed area in the preset edgewise direction, or no new reference division boundary line is set while it walks along the edge of the traversed area in that direction, the robot's edgewise walk in the preset edgewise direction ends. This prevents the robot from walking on endlessly, so that reasonable reference division boundary lines are set and the region division is more reasonable.
Further, the specific method of setting the reference division boundary line for dividing the indoor work area according to the data scanned by the robot's laser sensor is: according to that data, set, within the indoor work area, candidate boundary lines that simultaneously satisfy a preset boundary width condition and a preset region size condition, and mark them in the laser map as reference division boundary lines. Starting from the size of an actual room area and the passability of the passages between rooms, this scheme selects candidate boundary lines with reasonable passage conditions, so that a candidate boundary line can also serve as a doorway, improving the accuracy of room-region division.
Further, before setting, in the indoor work area, a candidate boundary line that simultaneously satisfies the preset boundary width condition and the preset region size condition, the method further comprises: filling the non-traversed area of the laser map with a seed filling algorithm so as to fill out the contour boundary enclosing the non-traversed area, and marking the filled contour boundary as candidate boundary lines; the scan length of a candidate boundary line is the length of the corresponding line segment of the filled contour boundary in the seed-filled laser map. This scheme filters out isolated obstacles, which are easily mistaken for map boundaries, before room sub-areas are divided, improving the accuracy of dividing the indoor work area.
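The patent does not give an implementation of the seed filling step, but on an occupancy-grid laser map it can be sketched as a breadth-first flood fill that labels the non-traversed region and collects its contour cells (all names, cell labels and the grid representation below are illustrative assumptions, not from the patent):

```python
from collections import deque

FREE, OCCUPIED, FILLED = 0, 1, 2  # hypothetical occupancy-grid cell labels

def seed_fill(grid, seed):
    """Flood-fill the non-traversed region from `seed`; return the set of
    filled cells that touch an obstacle or the map edge, i.e. the contour
    boundary that encloses the non-traversed area."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = seed
    grid[r0][c0] = FILLED
    queue = deque([seed])
    boundary = set()
    while queue:
        r, c = queue.popleft()
        on_boundary = False
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == OCCUPIED:
                on_boundary = True          # neighbour is a wall or off-map
            elif grid[nr][nc] == FREE:
                grid[nr][nc] = FILLED
                queue.append((nr, nc))
        if on_boundary:
            boundary.add((r, c))
    return boundary
```

Because only cells reachable from the seed are filled, an isolated obstacle floating inside the room never becomes part of the enclosing contour, which matches the filtering purpose described above.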
Further, the step of determining that a candidate boundary line satisfies the preset boundary width condition comprises: judging whether the scan length of the candidate boundary line is greater than a first preset boundary length and less than a second preset boundary length; if so, the candidate boundary line satisfies the preset boundary width condition; otherwise it is not a reference division boundary line. This scheme restricts candidate boundary lines to the range of plausible door widths, avoiding room-region division across corridors or narrow-clearance passages of the indoor work area, so that the room-region division is more reasonable.
Further, the step of determining that a candidate boundary line satisfies the preset region size condition comprises: step 21, on the laser map built in real time by the robot, judging whether the absolute difference between the abscissa of the top-left-most corner and the abscissa of the bottom-right-most corner of the scanned and marked work area is within a preset room length range; if so, go to step 22, otherwise the candidate boundary line does not satisfy the preset region size condition; step 22, judging whether the absolute difference between the abscissa of the top-right-most corner and the abscissa of the bottom-left-most corner of the scanned and marked work area is within the preset room length range; if so, go to step 23, otherwise the candidate boundary line does not satisfy the preset region size condition; step 23, judging whether the absolute difference between the ordinate of the top-left-most corner and the ordinate of the bottom-right-most corner of the scanned and marked work area is within a preset room width range; if so, go to step 24, otherwise the candidate boundary line does not satisfy the preset region size condition; and step 24, judging whether the absolute difference between the ordinate of the top-right-most corner and the ordinate of the bottom-left-most corner of the scanned and marked work area is within the preset room width range; if so, the candidate boundary line satisfies the preset region size condition, otherwise it does not.
The candidate boundary line is thereby used to divide room sub-areas only in indoor work areas of normal passable size, preventing the robot from dividing room sub-areas underneath low furniture.
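Steps 21 through 24 amount to four bounding-corner comparisons. A minimal sketch, assuming the four extreme corners of the scanned work area are already known as (x, y) map coordinates (the dictionary keys, parameter names and threshold tuples are illustrative, not terms from the patent):

```python
def satisfies_region_size(corners, room_len_range, room_wid_range):
    """Steps 21-24: compare opposite bounding corners of the scanned,
    marked work area against the preset room length and width ranges.
    `corners` maps 'top_left' / 'top_right' / 'bottom_left' /
    'bottom_right' to (x, y) coordinates on the laser map."""
    lo_l, hi_l = room_len_range
    lo_w, hi_w = room_wid_range
    tl, tr = corners["top_left"], corners["top_right"]
    bl, br = corners["bottom_left"], corners["bottom_right"]
    checks = [
        (abs(tl[0] - br[0]), lo_l, hi_l),  # step 21: top-left vs bottom-right abscissa
        (abs(tr[0] - bl[0]), lo_l, hi_l),  # step 22: top-right vs bottom-left abscissa
        (abs(tl[1] - br[1]), lo_w, hi_w),  # step 23: top-left vs bottom-right ordinate
        (abs(tr[1] - bl[1]), lo_w, hi_w),  # step 24: top-right vs bottom-left ordinate
    ]
    return all(lo <= d <= hi for d, lo, hi in checks)
```

A work area with corners at (0, 0), (5, 0), (0, 4) and (5, 4) passes with a length range of (3, 10) and a width range of (2, 6), while a gap under a low table would fail the width checks.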
Further, the method further comprises: after each reference division boundary line is set, marking its position in the laser map as the entrance of the corresponding room sub-area, so that the indoor work area is divided into different room sub-areas and each room sub-area is marked in the laser map.
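One plausible bookkeeping structure for this marking step is a map from room-sub-area identifiers to their entrance line, so the robot can later schedule per-room cleaning tasks (the function, field names and `room_map` structure below are hypothetical illustrations, not part of the patent's disclosure):

```python
def register_room_entrance(room_map, line_id, endpoints):
    """Record a newly set reference division boundary line as the entrance
    of a new room sub-area; returns the new room sub-area's id."""
    room_id = len(room_map) + 1
    room_map[room_id] = {
        "entrance_line": line_id,        # which boundary line is the doorway
        "entrance_endpoints": endpoints, # (x, y) endpoints on the laser map
        "cleaned": False,                # per-room task status
    }
    return room_id
```

With such a structure, designating "clean room 2" reduces to looking up that room's entrance line and navigating to it before starting coverage.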
A chip storing computer program instructions which, when executed, implement the robot region division method. Compared with a visual division method, the accuracy of room-region division can be guaranteed.
A robot equipped with a laser sensor, with the above chip built in, which executes the robot region division method by invoking the laser sensor. This overcomes the drawback that a purely visual camera cannot collect contour information of non-traversed areas in advance, and improves the accuracy of room-region division.
Drawings
Fig. 1 is a flow chart of a laser-based robot zone division method disclosed by the invention.
Fig. 2 is a flowchart illustrating a step of setting candidate boundary lines satisfying a preset boundary width condition and a preset region size condition by a robot according to an embodiment of the present invention.
Fig. 3 is a schematic view showing a robot according to an embodiment of the present invention in which a reference division boundary line is set in an indoor work area by laser scanning.
Detailed Description
The technical solutions in the embodiments of the present invention are described in detail below with reference to the drawings. The accompanying drawings are provided for further illustration of the various embodiments; they constitute a part of this disclosure, illustrate the embodiments and, together with the description, serve to explain the principles of the embodiments.
The embodiment of the invention discloses a laser-based robot region division method applied to edgewise-walking scenarios, in which the robot walks along the edge of an indoor work area in a preset edgewise direction. As shown in fig. 1, the region division method comprises: Step S1, control the robot to walk along the edge of the indoor work area in the preset edgewise direction; in practice the robot travels along the detected wall, its heading kept parallel to the wall's direction of extension, which makes it convenient for the laser sensor to scan the contour of the indoor work area and set boundary lines for dividing sub-areas; the area the robot has already covered edgewise can be regarded as part of the traversed area; then go to step S2. Step S2, set a reference division boundary line for dividing the indoor work area according to the data scanned by the robot's laser sensor; the reference division boundary line, together with the path already followed and the scanned environment contour, encloses a traversed area, realizing the division of the indoor work area. The robot can then walk edgewise in the preset edgewise direction along the reference division boundary line inside the newly formed traversed area; a new reference division boundary line may be set during subsequent edgewise walking in that area, and the robot can still walk edgewise along the new boundary line in the preset edgewise direction within the traversed area.
In this embodiment, the robot scans as it goes: using the local features of room door openings in the laser-scanned map, it immediately and accurately identifies the entrance of a given room, and by setting a virtual wall (the reference division boundary line) it is controlled to travel past the door opening without entering that room, moving instead to a new wall and following it, so that the robot stays inside the original room while walking edgewise. When the edgewise walk is complete, a closed polygon is obtained, which is the separated, relatively complete room (unlike the prior art, where small regions cannot be merged); the robot then performs planned cleaning inside the separated room, achieving the goal of dynamic, accurate room-by-room cleaning.
Compared with the prior art, the reference division boundary line for dynamically dividing the area is set from map boundary information obtained by laser scanning, so that a user can accurately designate a region for the robot to cover and clean. The accuracy of room-region division is higher than that of pure vision techniques. The robot no longer shuttles unnecessarily while cleaning, all room sub-areas can be cleaned and covered well according to the set reference division boundary lines, and the user perceives the robot as more intelligent.
In this embodiment, the reference division boundary line separates the traversed area from the non-traversed area within the indoor work area, and the non-traversed area is updated into the traversed area as the robot walks along the edge in the preset edgewise direction. The traversed area and the non-traversed area lie on opposite sides of the reference division boundary line; the traversed area includes the robot's edgewise starting position and the path already followed; both areas are marked in the laser map. The non-traversed area is an area the robot has not walked through but that laser scanning can detect; it is seed-filled to obtain its contour boundary information and divided into room sub-areas so that the robot can schedule room-area cleaning tasks.
Preferably, while the robot walks along the edge in the preset edgewise direction, each time a reference division boundary line is set, the robot is controlled to continue the edgewise walk along that boundary line in the preset edgewise direction until it detects a new wall in the same room area; it then continues the edgewise walk along the new wall, kept parallel to it, and the wall likewise becomes part of the traversed area. The traversed area includes the edgewise starting position and the path already followed. When the robot finishes the edgewise walk in the preset edgewise direction, a closed polygonal area is determined, i.e. the robot has walked back to the edgewise starting position; the traversed area is marked on the laser map, and a new non-traversed area is partitioned off within the indoor work area. In this way the robot is controlled to split off new room areas in several directions while walking in a single edgewise direction, giving a higher coverage rate of region division. The robot's edgewise walk in the traversed area ends when it walks back to the edgewise starting position in the preset edgewise direction, or when no new reference division boundary line is set while it walks along the edge of the traversed area in that direction. This prevents the robot from walking on endlessly, so that reasonable reference division boundary lines are set and the region division is more reasonable.
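The termination test above (back at the start, or no new boundary line after a full pass) can be sketched as a small pure function over the robot's recorded edgewise path; the parameter names, thresholds and path representation are illustrative assumptions, not values from the patent:

```python
import math

def edgewise_finished(path, start, new_line_set, eps=0.05, min_steps=10):
    """Decide whether the edgewise walk should end: either the robot has
    walked back to its starting position (after actually moving away), or
    the last pass around the traversed area produced no new reference
    division boundary line. `path` is the list of (x, y) poses so far."""
    if not new_line_set and len(path) >= min_steps:
        return True   # a full pass set no new line: stop
    if len(path) < min_steps:
        return False  # still near the start of the walk
    x, y = path[-1]
    return math.hypot(x - start[0], y - start[1]) < eps  # closed the polygon
```

The `min_steps` guard plays the role of "after moving away": without it, the robot would terminate immediately because its first pose trivially lies within `eps` of the start.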
For ease of understanding, consider the robot of fig. 3 starting a counterclockwise edgewise walk around the semi-closed area #1, which has only one opening: a reference division boundary line is set only at position B, marking area #1 as the traversed area and area #2 as the non-traversed area, and the robot is then restricted to continue walking counterclockwise along the traversed area #1 until it returns to the edgewise starting position O. As in fig. 3, the robot (the circle close to the boundary of the indoor work area) traverses, counterclockwise from the edgewise starting position O, the boundaries P2P5, P5P7, P7B, BA, AP8, P8P6, P6P1 and P1P2 of the indoor work area in turn, then returns to O along boundary P2P5, forming the edgewise path indicated by the dashed arrows in fig. 3. While walking along the edge, the robot rotates and emits laser rays (the arrowed rays radiating outward around the circle near position B in fig. 3), so that it covers most corner contours of the traversed area #1 and scans most corner contours of the non-traversed area #2, marking a large amount of environment edge information into the laser map built in real time. On the basis of this real-time laser sensor data, a reference division boundary line is set in the indoor work area, dividing it into sub-area #1 and sub-area #2; each divided sub-area takes into account both a width consistent with a passable doorway and the size of the whole room area. The dashed line BA in fig. 3 is set as the reference division boundary line; together with boundaries AP8, P8P6, P6P1, P1P2, P2P5, P5P7 and P7B it encloses the closed sub-area #1, and the robot can only walk along the reference division boundary line (dashed line BA in fig. 3) in the preset edgewise direction (counterclockwise in fig. 3) within the original room area, because continuing edgewise into sub-area #2 would violate the configuration of the closed area constructed by the scan; the line BA is equivalent to the room entrance. It should be noted that the preset edgewise direction may also be clockwise, which may differ depending on the edgewise starting position.
Preferably, after the robot finishes the edgewise walk in the preset edgewise direction and returns to the edgewise starting position, or when no new reference division boundary line is set during the edgewise walk, the robot may execute a pre-configured instruction, for example starting the cleaning task in the traversed area #1 delimited by the reference division boundary line AB, since its laser sensor has already scanned and marked the room area information during the edgewise walk and a cleaning task instruction is generated the moment the walk ends.
On the basis of the above embodiment, while the robot walks along the edges of the indoor work area, the specific method of setting the reference division boundary line according to the real-time scan data of its laser sensor is: according to that data, set, within the indoor work area, a candidate boundary line that simultaneously satisfies a preset boundary width condition and a preset region size condition, and mark it in the laser map as the reference division boundary line, so that it preliminarily divides the indoor work area into a traversed area and a non-traversed area. The traversed area and the non-traversed area lie on opposite sides of the reference division boundary line and can both be covered by the robot's laser scan; the traversed area includes the robot's edgewise starting position and the path already followed. It should be noted that this embodiment of the invention does not restrict the order in which the preset boundary width condition and the preset region size condition are judged. Starting from the size of an actual room area and the passability of the passages between rooms, this embodiment selects candidate boundary lines with reasonable passage conditions, so that a candidate boundary line can also serve as the doorway of a complete room, improving the accuracy of room-region division.
As an embodiment, the flowchart of fig. 2, in which the robot sets candidate boundary lines satisfying the preset boundary width condition and the preset region size condition, combines the two conditions and comprises the following steps:
S21: build a laser map from the area information collected by the robot's laser sensor at different positions and angles (this embodiment does not use the raw real-time laser data directly); then select laser line segments in the laser map as candidate boundary lines for dividing the room area, which may be the candidate boundary lines scanned at the robot's current edgewise position, and go to step S22. The laser map here has been processed by a seed filling algorithm, and the boundary lines obtained by seed filling that can enclose the map contour are used as candidate boundary lines, preventing misjudgments caused by isolated obstacles in the laser map.
As shown in fig. 3, when the robot walks edgewise to a position close to B, i.e. to the position in fig. 3 where laser rays are emitted all around, several candidate boundary lines can be selected: positions B, A, P1, P3 and P4 serve as corner points of the environment contour scanned by the laser sensor and built into the laser map, and the connecting lines between scanned corner points can serve as candidate boundary lines; after the boundary lines are filled and processed by the seed filling algorithm, the dashed line AB in fig. 3 is taken as a candidate boundary line. A candidate boundary line may itself be a wall or may join directly onto walls, reserving spatial positions relative to the walls so that room areas can be divided subsequently.
Step S22: judge whether the candidate boundary line satisfies the preset boundary width condition; specifically, judge whether its scan length is greater than a first preset boundary length and less than a second preset boundary length. If so, the candidate boundary line satisfies the preset boundary width condition, and go to step S23; otherwise go to step S28 and determine that it does not satisfy the condition. The first preset boundary length is a threshold for the width of small passages in a room area, including narrow-gap passages between adjacent rooms or within the same room area and small distances between table or chair legs; candidate boundary lines with such small scan lengths cannot be used to divide a new room area. The second preset boundary length is a threshold for corridor width: the corridors and hallway areas scanned while the robot walks along the edge (along the wall) are relatively wide, so the second preset boundary length is set larger than both the first preset boundary length and the width of an ordinary door, and candidate boundary lines with such large scan lengths cannot be set as room entrances or used to divide new room areas. In this embodiment, candidate boundary lines are thus restricted to the range of passable door widths, avoiding room-region division across corridors and narrow-clearance passages of the indoor work area, so that the room-region division is more reasonable.
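The width test of step S22 is a single interval check. A minimal sketch (the 0.6 m and 1.2 m thresholds are illustrative placeholders for the first and second preset boundary lengths, which the patent does not specify numerically):

```python
def satisfies_boundary_width(scan_length, min_len=0.6, max_len=1.2):
    """Step S22: a candidate boundary line can be a doorway only if its
    scan length exceeds the first preset boundary length (filtering
    table/chair-leg gaps) and stays below the second preset boundary
    length (filtering corridors). Thresholds in metres, illustrative."""
    return min_len < scan_length < max_len
```

A 0.9 m segment would be accepted as a plausible doorway, while a 0.3 m chair-leg gap and a 2.5 m corridor opening would both be rejected.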
Step S23, determining whether the candidate boundary line meets the preset area size condition on the laser map constructed in real time by the robot, specifically: judging whether the absolute value of the difference between the abscissa of the leftmost upper corner of the scanned and marked working area and the abscissa of the rightmost lower corner of the scanned and marked working area is within the preset room length range; if so, entering step S24, otherwise entering step S28. Step S23 determines whether the length of the working area marked by the laser sensor in the laser map reaches the basic length allowed for forming a room (i.e. is within the preset room length range), so as to avoid setting a reference division boundary line under furniture such as a table, chair, bed or sofa to divide a room sub-area. As shown in fig. 3, when the robot walks along the edge to the position close to B (which can be regarded as the robot being at position B), the robot scans the leftmost upper corner P4 of the working area and marks its abscissa on the laser map, simultaneously scans the rightmost lower corner P2 of the working area and marks its abscissa on the laser map, and then calculates whether the absolute value of the difference between the abscissa of P4 and the abscissa of P2 is within the preset room length range.
Step S24, judging, on the laser map constructed in real time by the robot, whether the absolute value of the difference between the abscissa of the rightmost upper corner of the scanned and marked working area and the abscissa of the leftmost lower corner of the scanned and marked working area is within the preset room length range; if so, entering step S25, otherwise entering step S28. Relative to step S23, step S24 determines from the symmetrical direction whether the length of the working area marked by the laser sensor in the laser map reaches the basic length allowed for forming a room (i.e. is within the preset room length range), so as to more fully avoid setting a reference division boundary line under furniture such as a table, chair, bed or sofa. As shown in fig. 3, when the robot walks along the edge to the position close to B (which can be regarded as the robot being at position B), the robot scans the rightmost upper corner P3 of the working area and marks its abscissa on the laser map, simultaneously scans the leftmost lower corner P1 of the working area and marks its abscissa on the laser map, and then calculates whether the absolute value of the difference between the abscissa of P3 and the abscissa of P1 is within the preset room length range.
Step S25, judging, on the laser map constructed in real time by the robot, whether the absolute value of the difference between the ordinate of the leftmost upper corner of the scanned and marked working area and the ordinate of the rightmost lower corner of the scanned and marked working area is within a preset room width range; if so, entering step S26, otherwise entering step S28. Step S25 determines whether the width of the working area marked by the laser sensor in the laser map reaches the basic width allowed for forming a room (i.e. is within the preset room width range), so as to avoid setting a reference division boundary line under furniture such as a table, chair, bed or sofa. As shown in fig. 3, when the robot walks along the edge to the position close to B (which can be regarded as the robot being at position B), the robot scans the leftmost upper corner P4 of the working area and marks its ordinate on the laser map, simultaneously scans the rightmost lower corner P2 of the working area and marks its ordinate on the laser map, and then calculates whether the absolute value of the difference between the ordinate of P4 and the ordinate of P2 is within the preset room width range.
Step S26, judging, on the laser map constructed in real time by the robot, whether the absolute value of the difference between the ordinate of the rightmost upper corner of the scanned and marked working area and the ordinate of the leftmost lower corner of the scanned and marked working area is within the preset room width range; if so, entering step S27, otherwise entering step S28. Relative to step S25, step S26 determines from the symmetrical direction whether the width of the working area marked by the laser sensor in the laser map reaches the basic width allowed for forming a room (i.e. is within the preset room width range), so as to more fully avoid setting a reference division boundary line under furniture such as a table, chair, bed or sofa. As shown in fig. 3, when the robot walks along the edge to the position close to B (which can be regarded as the robot being at position B), the robot scans the rightmost upper corner P3 of the working area and marks its ordinate on the laser map, simultaneously scans the leftmost lower corner P1 of the working area and marks its ordinate on the laser map, and then calculates whether the absolute value of the difference between the ordinate of P3 and the ordinate of P1 is within the preset room width range.
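Steps S23 to S26 amount to four absolute-difference checks on the marked corner coordinates. A sketch under the assumption that the four corners are (x, y) tuples in map coordinates and the two ranges are inclusive (min, max) pairs; the function and parameter names are illustrative, not from the patent:

```python
def in_range(value, bounds):
    lo, hi = bounds
    return lo <= value <= hi

def meets_area_size(upper_left, lower_right, upper_right, lower_left,
                    room_length_range, room_width_range):
    """Steps S23-S26: both corner diagonals of the scanned working area
    must span a room-sized length (abscissa difference) and a room-sized
    width (ordinate difference); otherwise the candidate boundary line may
    lie under furniture such as a table, bed or sofa."""
    return (in_range(abs(upper_left[0] - lower_right[0]), room_length_range)      # S23
            and in_range(abs(upper_right[0] - lower_left[0]), room_length_range)  # S24
            and in_range(abs(upper_left[1] - lower_right[1]), room_width_range)   # S25
            and in_range(abs(upper_right[1] - lower_left[1]), room_width_range))  # S26
```

With an illustrative length range of (2, 10) and width range of (2, 8), a 4 m by 3 m area passes all four checks, while a 1 m by 0.8 m under-table region already fails at step S23.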
In the foregoing steps S21 to S26, setting a candidate boundary line in the indoor working area that simultaneously meets the preset boundary width condition and the preset area size condition proceeds as follows: first judge whether the candidate boundary line meets the preset boundary width condition, and then, on the basis that it does, continue to judge whether it meets the preset area size condition. This amounts to first judging the passability of the boundary line and then judging whether the area where the boundary line is located has the size of a formed room. The abscissas and ordinates of the leftmost upper corner, rightmost lower corner, rightmost upper corner and leftmost lower corner of the scanned and marked working area are all coordinate parameters marked in the laser map according to the data scanned in real time by the laser sensor of the robot; these selected coordinate parameters jointly represent the size information of a large room area and are used to judge whether the size of the large area they frame is within the normal parameter range. The judgment that the candidate boundary line satisfies the preset area size condition in steps S23 to S26 may be exchanged in order with the judgment that the candidate boundary line satisfies the preset boundary width condition in step S22.
It should be noted that, as shown in fig. 3, the working area scanned and marked in the preceding steps is only a part of the indoor working area. When the robot walks along the edge to the position close to B, the laser rays emitted by the laser sensor may cover the corner positions P9, P10 and P11 of the non-traversed area #2, but the non-traversed area #2 itself has not yet been traversed; this part of the area is left to be cleaned after the robot completes the cleaning task of the traversed area #1.
Step S27, determining that the candidate boundary line meets both the preset area size condition and the preset boundary width condition, and marking the candidate boundary line as the reference division boundary line, thereby achieving the division of the indoor working area.
Step S28, determining that the candidate boundary line is not the reference division boundary line. The condition for jumping from step S22 to step S28 is that the candidate boundary line does not meet the preset boundary width condition, while the condition for jumping from any other step to step S28 is that the candidate boundary line does not meet the preset area size condition.
The robot area dividing method described in the foregoing steps S21 to S28 makes the candidate boundary lines divide sub-areas of normally passable size in the indoor working area, preventing the robot from dividing off room sub-areas under low furniture.
Before a candidate boundary line that satisfies both the preset boundary width condition and the preset area size condition is set in the indoor working area, that is, before the robot area dividing method described in the foregoing steps S21 to S28 is executed, the following steps are further required: filling the non-traversed area of the constructed laser map by using a seed filling algorithm; specifically, when the gray value of a grid in the non-traversed area is 128, filling the grid with red (not shown in the figure) and marking it as a contour boundary grid of the non-traversed area, and stopping the filling process once the non-traversed area is fully filled. The grids of the contour boundary of the non-traversed area are thereby filled with red; the filled contour boundaries either enclose a closed polygonal area by themselves or join with the contour boundary of the traversed area to enclose one, and this closed polygonal area surrounds the non-traversed area. At this time, isolated obstacles with a gray value of 128 may also exist inside the area enclosed by the contour boundary of the non-traversed area and are easily misjudged as map boundary lines; therefore, the candidate boundary lines are selected only from the contour boundaries filled in the non-traversed area, so that such isolated obstacles are excluded before the room areas are divided. The scan length of the candidate boundary line in the foregoing embodiment is the length of a line segment of the filled contour boundary in the laser map processed by the seed filling algorithm.
In the process of filling the non-traversed area of the constructed laser map by using the seed filling algorithm, in the laser map disclosed by the embodiment of the invention, the gray value of a grid on a map boundary (including the boundary lines set later for dividing areas) is configured to be the specific gray value 128, the gray value of an idle grid in the laser map is configured to be greater than 128, and the gray value of a grid marking an obstacle in the laser map is configured to be less than 128. When the gray value of a grid is detected to be not equal to 128, the grid is not filled and is marked with 1; otherwise the grid is filled and marked with 0.
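The gray-value convention above (128 = boundary, greater than 128 = idle, less than 128 = obstacle) lends itself to a breadth-first seed fill. A minimal sketch that collects the contour boundary grids of one non-traversed region (the grid layout and function name are illustrative; the patent's fill additionally paints the rim grids red in the map):

```python
from collections import deque

BOUNDARY_GRAY = 128  # gray value of map-boundary / dividing-line grids

def fill_contour(grid, seed):
    """Seed-fill the non-traversed region containing `seed` and return the
    set of gray==128 grids met at its rim (the contour boundary).
    grid[y][x]: >128 idle/free, ==128 boundary, <128 obstacle."""
    height, width = len(grid), len(grid[0])
    seen = {seed}
    contour = set()
    queue = deque([seed])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in seen:
                seen.add((nx, ny))
                if grid[ny][nx] == BOUNDARY_GRAY:
                    contour.add((nx, ny))        # rim grid: filled, marked 0
                elif grid[ny][nx] > BOUNDARY_GRAY:
                    queue.append((nx, ny))       # idle grid: keep spreading
    return contour
```

On a 5x5 map whose 3x3 idle interior is ringed by gray-128 grids, the fill returns the 12 edge-adjacent ring grids; the 4 corner grids are not 4-connected to the interior, and an isolated 128-valued obstacle elsewhere in the map would likewise never be reached.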
The foregoing embodiment completes the setting of the reference division boundary line of the indoor working area: the robot completes the edge walking in the counterclockwise direction along the reference division boundary line BA, the boundary P2P5, the boundary P5P7, the boundary P7B, the boundary AP8, the boundary P8P6, the boundary P6P1 and the boundary P1P2 to form the closed sub-area #1, and at this time the reference division boundary line BA delimits the closed sub-area #1 as a traversed area. After each reference division boundary line is set, for example after the reference division boundary line BA is set according to the foregoing method steps, the position where the reference division boundary line BA lies is marked in the laser map as the entrance of the room sub-area #2 (corresponding to the non-traversed area #2 in fig. 3), so as to divide the indoor working area into different room sub-areas and, at the same time, mark the different room sub-areas in the laser map.
Preferably, after the robot recognizes the room entrance position at the reference division boundary line, this room entrance position divides the indoor working area into different room sub-areas, which are simultaneously marked in the laser map. Corresponding to the indoor working area of fig. 3, the robot recognizes that a door can be located at the reference division boundary line AB of the traversed area #1 and, taking this reference division boundary line AB as the boundary, determines the division of the indoor working area into the room sub-area #1 and the room sub-area #2. This determination, made on the basis of the foregoing steps S21 to S28, achieves the final division of the indoor working area into different room sub-areas.
After the robot completes the cleaning task of the room sub-area #1, it crosses the reference division boundary line AB into the room sub-area #2, starts to walk along the edge in another preset edgewise direction within the room sub-area #2, begins cleaning the room sub-area #2 after the edge walking is finished according to the foregoing method steps, and executes those steps again to divide a new room sub-area. The laser sensor of the robot at the position close to B does not scan into the non-traversed parts of the room sub-area #2 when dividing the new room sub-area, so the newly divided room sub-area is complete: no small area is split off inside a larger area, and the robot does not return to the already cleaned and traversed room sub-area #1. Specifically, the robot area dividing method is repeatedly executed to set reference division boundary lines and divide new sub-areas, and the robot is controlled to complete coverage cleaning in the corresponding sub-area, so that coverage cleaning of the indoor working area is completed sub-area by sub-area.
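The divide-clean-cross cycle described above can be summarized as a loop that ends when an edge walk sets no new reference division boundary line. A sketch against a hypothetical robot interface (`SimRobot` and every method on it are illustrative stand-ins, not an API from the patent):

```python
class SimRobot:
    """Scripted stand-in for the robot: each edge walk 'finds' the next
    boundary from a prepared list (None once no new boundary can be set)."""
    def __init__(self, boundaries):
        self._boundaries = iter(boundaries)
        self.cleaned_subareas = 0

    def edge_walk_and_set_boundary(self):
        return next(self._boundaries, None)   # stands in for steps S21-S28

    def clean_current_subarea(self):
        self.cleaned_subareas += 1            # coverage-clean the closed sub-area

    def cross(self, boundary):
        pass                                  # enter the room behind the boundary

def divide_and_clean(robot):
    """Divide, clean, cross, repeat; stop when no new reference
    division boundary line is set during the edge walk."""
    while True:
        boundary = robot.edge_walk_and_set_boundary()
        robot.clean_current_subarea()
        if boundary is None:
            return robot.cleaned_subareas
        robot.cross(boundary)
```

With one boundary AB scripted, the loop cleans sub-area #1, crosses AB, then cleans sub-area #2 and stops without re-entering #1.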
In this embodiment, different room sub-areas are divided by the reference division boundary lines determined through laser scanning, so that the robot can finish cleaning each room in turn according to the divided room sub-areas without moving back and forth at random between two adjacent room sub-areas while executing a cleaning task, thereby realizing intelligent cleaning.
A chip storing computer program instructions which, when executed, implement the robot region dividing method. Compared with a vision-based dividing method, the chip can ensure the accuracy of dividing the room area.
A robot equipped with a laser sensor, with the above chip built in; the chip executes the robot region dividing method by calling the laser sensor. This overcomes the defect that a purely visual camera cannot collect the contour information of a non-traversed area in advance, and improves the accuracy of dividing the room area.
The foregoing embodiments merely illustrate the technical concept and features of the present invention, and are intended to enable those skilled in the art to understand and implement the present invention, not to limit its scope of protection. All changes and modifications that come within the meaning and range of equivalency of the present invention shall be embraced within its scope.

Claims (8)

1. A laser-based robot region division method, comprising:
when the robot performs edge walking in an indoor working area in a preset edgewise direction, setting a reference division boundary line for dividing the indoor working area according to the data scanned by the laser sensor of the robot, so that the robot performs the edge walking along the reference division boundary line in the preset edgewise direction;
wherein the specific method of setting the reference division boundary line for dividing the indoor working area according to the data scanned by the laser sensor of the robot comprises:
according to the data scanned by the laser sensor of the robot, setting, in the indoor working area, a candidate boundary line which simultaneously meets a preset boundary width condition and a preset area size condition, and marking the candidate boundary line as the reference division boundary line in the laser map;
wherein the step of judging that the candidate boundary line meets the preset area size condition comprises:
step 21, judging, on the laser map constructed in real time by the robot, whether the absolute value of the difference between the abscissa of the leftmost upper corner of the scanned and marked working area and the abscissa of the rightmost lower corner of the scanned and marked working area is within a preset room length range; if yes, entering step 22, otherwise determining that the candidate boundary line does not meet the preset area size condition;
step 22, judging whether the absolute value of the difference between the abscissa of the rightmost upper corner of the scanned and marked working area and the abscissa of the leftmost lower corner of the scanned and marked working area is within the preset room length range; if yes, entering step 23, otherwise determining that the candidate boundary line does not meet the preset area size condition;
step 23, judging whether the absolute value of the difference between the ordinate of the leftmost upper corner of the scanned and marked working area and the ordinate of the rightmost lower corner of the scanned and marked working area is within a preset room width range; if yes, entering step 24, otherwise determining that the candidate boundary line does not meet the preset area size condition;
step 24, judging whether the absolute value of the difference between the ordinate of the rightmost upper corner of the scanned and marked working area and the ordinate of the leftmost lower corner of the scanned and marked working area is within the preset room width range; if yes, determining that the candidate boundary line meets the preset area size condition, otherwise determining that it does not.
2. The robot region division method according to claim 1, wherein each time the robot sets one reference division boundary line while performing edge walking in the preset edgewise direction, the robot is controlled to continue the edge walking along that reference division boundary line in the preset edgewise direction and to divide off a new non-traversed area within the indoor working area;
the reference division boundary line divides the indoor working area into a traversed area and a non-traversed area;
the traversed area includes the edgewise starting position and the traversed edgewise path, and is marked in the laser map.
3. The robot region division method according to claim 2, wherein the edge walking of the robot in the preset edgewise direction ends when the robot walks along the edge back to the edgewise starting position of the traversed area in the preset edgewise direction, or when no new reference division boundary line is set while the robot performs the edge walking in the traversed area in the preset edgewise direction.
4. The robot region division method according to claim 3, further comprising, before the candidate boundary line satisfying both the preset boundary width condition and the preset area size condition is set in the indoor working area:
filling the non-traversed area of the laser map by a seed filling algorithm so as to fill out the contour boundaries enclosing the non-traversed area, and marking the filled contour boundaries as the candidate boundary lines;
wherein the scanning length of a candidate boundary line is the length of the corresponding line segment of the filled contour boundary in the laser map processed by the seed filling algorithm.
5. The robot region division method according to claim 4, wherein the step of judging that the candidate boundary line meets the preset boundary width condition comprises:
judging whether the scanning length of the candidate boundary line is greater than a first preset boundary length and less than a second preset boundary length; if yes, determining that the candidate boundary line meets the preset boundary width condition, otherwise determining that the candidate boundary line is not the reference division boundary line.
6. The robot region division method according to claim 5, further comprising: after each reference division boundary line is set, marking the position of the reference division boundary line in the laser map as the entrance of the corresponding room sub-area, so as to divide the indoor working area into different room sub-areas and mark the different room sub-areas in the laser map.
7. A chip storing computer program instructions, wherein the computer program instructions, when executed, implement the robot region division method of any one of claims 1 to 6.
8. A robot equipped with a laser sensor, wherein the chip of claim 7 is built into the robot so as to execute the robot region division method of any one of claims 1 to 6 by calling the laser sensor.
CN202010764293.6A 2020-08-02 2020-08-02 Laser-based robot region division method, chip and robot Active CN111857156B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010764293.6A CN111857156B (en) 2020-08-02 2020-08-02 Laser-based robot region division method, chip and robot

Publications (2)

Publication Number Publication Date
CN111857156A CN111857156A (en) 2020-10-30
CN111857156B true CN111857156B (en) 2024-04-02

Family

ID=72954279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010764293.6A Active CN111857156B (en) 2020-08-02 2020-08-02 Laser-based robot region division method, chip and robot

Country Status (1)

Country Link
CN (1) CN111857156B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112631299B (en) * 2020-12-24 2023-09-26 南京苏美达智能技术有限公司 Control method for multiple mowers in multiple areas
CN115393234A (en) * 2021-05-25 2022-11-25 速感科技(北京)有限公司 Map region fusion method and device, autonomous mobile equipment and storage medium
CN114947664A (en) * 2022-06-22 2022-08-30 汇智机器人科技(深圳)有限公司 Control method and system applied to laser guidance of sweeping robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104615138A (en) * 2015-01-14 2015-05-13 上海物景智能科技有限公司 Dynamic indoor region coverage division method and device for mobile robot
CN108143364A (en) * 2017-12-28 2018-06-12 湖南格兰博智能科技有限责任公司 A kind of method for cleaning map area division from mobile clean robot
CN109920424A (en) * 2019-04-03 2019-06-21 北京石头世纪科技股份有限公司 Robot voice control method and device, robot and medium
CN110269550A (en) * 2019-06-13 2019-09-24 深圳市银星智能科技股份有限公司 A kind of location recognition method and mobile robot
CN110412619A (en) * 2019-08-12 2019-11-05 珠海市一微半导体有限公司 The area coverage method and laser master chip of laser robot
CN111328386A (en) * 2017-09-12 2020-06-23 罗博艾特有限责任公司 Exploration of unknown environments by autonomous mobile robots

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3907575B1 (en) * 2019-01-03 2023-09-06 Ecovacs Robotics Co., Ltd. Dynamic region division and region channel identification method, and cleaning robot

Similar Documents

Publication Publication Date Title
CN111897334B (en) Robot region division method based on boundary, chip and robot
CN111857156B (en) Laser-based robot region division method, chip and robot
CN111603099B (en) Cleaning planning method with region traversal priority and chip
CN111830970B (en) Regional cleaning planning method for robot walking along edge, chip and robot
CN109464074B (en) Area division method, subarea cleaning method and robot thereof
CN109363585B (en) Partition traversing method, sweeping method and sweeping robot thereof
CN110362079B (en) Traversal control method and chip of robot and cleaning robot
CN112799398B (en) Cleaning path planning method based on path finding cost, chip and cleaning robot
US11914391B2 (en) Cleaning partition planning method for robot walking along boundry, chip and robot
CN112137529B (en) Cleaning control method based on dense obstacles
CN110338715B (en) Method and chip for cleaning floor by intelligent robot and cleaning robot
CN109240312A (en) The cleaning control method and chip and clean robot of a kind of robot
CN109528090A (en) The area coverage method and chip and clean robot of a kind of robot
CN109298717A (en) The cleaning method and chip and Intelligent cleaning robot of intelligent robot
CN102138769A (en) Cleaning robot and cleaning method thereby
CN110412619B (en) Region traversing method of laser robot and laser main control chip
CN110543174A (en) Method for establishing passable area graph, method for processing passable area graph, device and movable equipment
CN112764418B (en) Cleaning entrance position determining method based on path searching cost, chip and robot
CN112180924B (en) Mobile control method for navigating to dense obstacle
CN112826373A (en) Cleaning method, device, equipment and storage medium of cleaning robot
CN114690753A (en) Hybrid strategy-based path planning method, autonomous traveling equipment and robot
CN109298716A (en) A kind of the planning cleaning method and chip of robot
JP7332806B2 (en) Work start point determination method and motion control method for movement restriction frame of robot
CN111401337A (en) Lane following exploration mapping method, storage medium and robot
CN112650252B (en) Method and chip for acquiring path searching cost for searching initial cleaning position

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 2706, No. 3000, Huandao East Road, Hengqin new area, Zhuhai, Guangdong

Applicant after: Zhuhai Yiwei Semiconductor Co.,Ltd.

Address before: Room 105-514, No.6 Baohua Road, Hengqin New District, Zhuhai City, Guangdong Province

Applicant before: AMICRO SEMICONDUCTOR Co.,Ltd.

GR01 Patent grant