CN115316887A - Robot control method, robot, and computer-readable storage medium


Info

Publication number
CN115316887A
Authority
CN
China
Prior art keywords
frame
path
robot
planning
local planning
Prior art date
Legal status
Granted
Application number
CN202211269360.2A
Other languages
Chinese (zh)
Other versions
CN115316887B (en)
Inventor
陈一彬
马子昂
刘征宇
Current Assignee
Hangzhou Huacheng Software Technology Co Ltd
Original Assignee
Hangzhou Huacheng Software Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Huacheng Software Technology Co Ltd
Priority to CN202211269360.2A
Publication of CN115316887A
Application granted
Publication of CN115316887B
Legal status: Active


Classifications

    • A - HUMAN NECESSITIES
    • A47 - FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L - DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 - Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24 - Floor-sweeping machines, motor-driven
    • A47L11/40 - Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011 - Regulation of the cleaning machine by electric means; control systems and remote control systems therefor
    • A47L11/4061 - Steering means; means for avoiding obstacles; details related to the place where the driver is accommodated
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0268 - Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274 - Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • A47L2201/00 - Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04 - Automatic control of the travelling movement; automatic obstacle detection
    • A47L2201/06 - Control of the cleaning action for autonomous devices; automatic detection of the surface condition before, during or after cleaning


Abstract

The application discloses a robot control method, a robot, and a computer-readable storage medium. The method comprises: acquiring a grid map converted from the point cloud of an environment map, and extracting wall surface information from the grid map; generating a planning edgewise starting point based on the wall surface information, and generating a plurality of local planning path frames in the grid map; controlling the robot to move edgewise from the planning edgewise starting point along the border of the current local planning path frame containing that starting point until all path points on the border have been traversed, and then performing a full-coverage traversal of the path points inside the current frame; and controlling the robot to move from the current local planning path frame to the next one until the border path points and internal path points of every local planning path frame have been traversed. The method lets the robot quickly carry out full-coverage cleaning within each local planning path frame area, improving the robot's cleaning efficiency.

Description

Robot control method, robot, and computer-readable storage medium
Technical Field
The application relates to the technical field of sweeping robots, in particular to a robot control method, a robot and a computer readable storage medium.
Background
With the continuing rise in living standards and the rapid development of household service robot technology, floor-sweeping robots are increasingly entering family life. Such a robot is generally provided with autonomously driven components such as a bottom driving wheel, a rolling brush, an edge brush, and a dust suction device, and can perform cleaning work on a floor surface.
Generally, robots can be classified by running state into a non-planning random-walk cleaning mode and a full-coverage planned cleaning mode. The random-walk mode is being phased out because of its low cleaning efficiency, and full-coverage cleaning has become the mainstream method for cleaning robots. However, research has found that full-coverage cleaning is better suited to relatively open areas, while dust tends to accumulate in relatively concealed dead spots such as wall corners and table corners. A special cleaning mode, "edgewise cleaning", performed before full-coverage cleaning, has therefore been proposed.
In general, edgewise cleaning keeps the sweeping robot at a specific distance from the wall surface using a wall-following sensor, and uses a collision sensor to steer the robot when it encounters obstacles, so that the robot always cleans along the wall surface. However, when the area to be cleaned is very large, the sweeping robot needs a great deal of time to complete the edgewise cleaning of the whole area, which delays the start of full-coverage cleaning; moreover, once the area enclosed by the edgewise path is too large, it also becomes difficult to plan a reasonable full-coverage path.
Disclosure of Invention
The application provides a robot control method, a robot and a computer readable storage medium, so that the robot can rapidly carry out full-coverage cleaning, and the cleaning efficiency of the robot is improved.
In order to solve the technical problem, the application adopts a technical scheme that: there is provided a robot control method including:
acquiring a grid map subjected to point cloud conversion by an environment map, and extracting wall surface information from the grid map; generating a planning edge starting point based on the wall surface information, and generating a plurality of local planning path frames in the grid map, wherein the planning edge starting point is positioned at the middle point of one edge of one local planning path frame, which is parallel to the wall surface of the wall surface information; controlling the robot to move from the planning border starting point along the border of the current local planning path frame where the planning border starting point is located, and performing full-coverage traversal on the internal path points of the current local planning path frame after traversing all path points on the border of the current local planning path frame; and controlling the robot to move from the current local planning path frame to the next local planning path frame so as to track the path of the next local planning path frame until all the frame path points and the internal path points of the local planning path frame are traversed.
After the wall surface information is extracted from the grid map and before a plurality of local planning path frames are generated in the grid map, the robot control method further comprises the following steps:
extracting the position of an obstacle in a grid map; and generating a plurality of local planning path frames in the grid map based on the positions of the obstacles, wherein no obstacle exists in the borders and the internal area of the local planning path frames.
Wherein the obstacle comprises a charging pile, and generating the planning edgewise starting point based on the wall surface information comprises:
generating the planning edgewise starting point based on the wall surface information and the position of the charging pile, wherein the planning edgewise starting point is at least one grid point selected from the grid points that lie a first preset distance from the wall surface and whose relative distance to the charging pile is smaller than or equal to a second preset distance.
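As an illustration only, the selection described above can be sketched in Python. The candidate points (already offset from the wall by the first preset distance), the charging-pile position, and the second preset distance `d2` are hypothetical stand-ins for values the patent leaves as design parameters:

```python
import math

def pick_edgewise_start(candidates, pile, d2=1.0):
    """From grid points already lying a fixed offset from the wall
    (`candidates`), keep those within `d2` metres of the charging
    pile and return the closest one (None if no point qualifies)."""
    near = [p for p in candidates if math.dist(p, pile) <= d2]
    return min(near, key=lambda p: math.dist(p, pile)) if near else None
```

Returning the nearest qualifying point matches the intent of starting the edgewise pass as close to the charging pile as possible.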
Wherein moving edgewise along the border of the current local planning path frame containing the planning edgewise starting point comprises:
acquiring the sensed distance to an obstacle in the direction of movement while controlling the robot to move edgewise along the border of the current local planning path frame; triggering an obstacle-encountering fixing action when the sensed obstacle distance is smaller than a third preset distance; monitoring the lateral distance between the robot and the wall surface while the robot executes the obstacle-encountering fixing action; and, when the lateral distance is smaller than a fourth preset distance, withdrawing from the obstacle-encountering fixing action and continuing the edgewise motion from the current position.
Wherein the obstacle-encountering fixing action comprises: controlling the robot to rotate in a first direction and recording a first rotation angle; when the first rotation angle reaches a first preset angle, controlling the robot to rotate in a second direction and recording a second rotation angle; and, when the second rotation angle reaches a second preset angle, controlling the robot to stop rotating, the first and second directions being opposite to each other.
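The rotate-one-way-then-the-other maneuver described above can be sketched as a small Python generator. The per-tick step and the two preset angles are illustrative values, not ones fixed by the patent:

```python
def obstacle_maneuver(step, a1=90.0, a2=180.0):
    """Yield per-tick turn commands (degrees) for the maneuver:
    rotate `a1` degrees in the first direction, then `a2` degrees
    in the opposite direction, then stop. `step` is the amount
    turned per tick; a1/a2 stand in for the two preset angles."""
    turned = 0.0
    while turned < a1:            # first direction (positive turns)
        yield +step
        turned += step
    turned = 0.0
    while turned < a2:            # opposite direction (negative turns)
        yield -step
        turned += step
```

A controller would consume one command per control tick, and the enclosing logic would abandon the generator early once the lateral wall distance drops below the fourth preset distance.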
Wherein continuing to perform the edgewise motion based on the current position comprises:
judging whether the current position of the robot is on the frame of the current local planning path frame or not; if yes, continuing to execute the edgewise movement from the current position; if not, acquiring a tracking path point with the minimum distance from the current position on the frame of the current local planning path frame, dispatching the robot from the current position to the tracking path point, and continuing to execute the edge movement from the tracking path point.
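A minimal sketch of picking the tracking path point with the minimum distance to the current position, assuming the frame's border is given as a list of 2-D path points:

```python
import math

def nearest_frame_point(pos, frame_points):
    """Return the border path point closest to `pos`; used to
    re-enter the edgewise path after the robot has left the frame
    border during an avoidance maneuver."""
    return min(frame_points, key=lambda p: math.dist(pos, p))
```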
Wherein dispatching the robot from the current position to the tracking path point comprises:
acquiring a dispatching path from the current position to the tracking path point; and dispatching the robot from the current position to the tracking path point according to the dispatching path, and marking the path point on the dispatching path as a traversed point.
Wherein controlling the robot to move from the current local planning path frame to the next local planning path frame, so as to perform path tracking on the next local planning path frame, comprises:
obtaining the relative distances between the current local planning path frame and the remaining local planning path frames; taking the local planning path frame with the smallest relative distance to the current frame as the next local planning path frame; acquiring the two borders of the next local planning path frame that are parallel to the wall surface of the wall surface information, and calculating the relative distance between each of those two borders and the wall surface; setting a new planning edgewise starting point at the midpoint of whichever of the two borders is closer to the wall surface; and controlling the robot to move edgewise along the border of the next local planning path frame from the new planning edgewise starting point.
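Under the simplifying assumption of axis-aligned rectangular frames, stored as (xmin, ymin, xmax, ymax), and a horizontal wall at `y = wall_y`, the next-frame selection and the new starting point can be sketched as:

```python
def next_frame_start(current, remaining, wall_y=0.0):
    """Pick the remaining frame whose centre is closest to the
    current frame's centre, and return (frame, new_start), where
    new_start is the midpoint of whichever of its two wall-parallel
    edges lies closer to the wall."""
    def centre(f):
        return ((f[0] + f[2]) / 2, (f[1] + f[3]) / 2)

    cx, cy = centre(current)
    nxt = min(remaining,
              key=lambda f: (centre(f)[0] - cx) ** 2 + (centre(f)[1] - cy) ** 2)
    xmin, ymin, xmax, ymax = nxt
    # Choose the wall-parallel edge (bottom or top) closer to the wall.
    y = ymin if abs(ymin - wall_y) <= abs(ymax - wall_y) else ymax
    return nxt, ((xmin + xmax) / 2, y)
```

The centre-to-centre distance is one reasonable reading of the "relative distance" between frames; the patent does not pin down the exact metric.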
In order to solve the technical problem, the application adopts a technical scheme that: the robot comprises a processor and a memory connected with the processor, wherein program data are stored in the memory, and the processor executes the program data stored in the memory to realize the following steps: acquiring a grid map subjected to point cloud conversion by an environment map, and extracting wall surface information from the grid map; generating a planning edge starting point based on the wall surface information, and generating a plurality of local planning path frames in the grid map, wherein the planning edge starting point is positioned at the middle point of one edge of one local planning path frame, which is parallel to the wall surface of the wall surface information; controlling the robot to perform edgewise motion along the frame of the current local planning path frame where the planning edgewise starting point is located from the planning edgewise starting point until all path points on the frame of the current local planning path frame are traversed, and then performing full-coverage traversal on the internal path points of the current local planning path frame; and controlling the robot to move from the current local planning path frame to the next local planning path frame so as to track the path of the next local planning path frame until all the border path points and the internal path points of the local planning path frames are traversed.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a computer readable storage medium having stored therein program instructions that are executed to implement: acquiring a grid map subjected to point cloud conversion by an environment map, and extracting wall surface information from the grid map; generating a planning edge starting point based on the wall surface information, and generating a plurality of local planning path frames in the grid map, wherein the planning edge starting point is positioned at the middle point of one edge of one local planning path frame, which is parallel to the wall surface of the wall surface information; controlling the robot to perform edgewise motion along the frame of the current local planning path frame where the planning edgewise starting point is located from the planning edgewise starting point until all path points on the frame of the current local planning path frame are traversed, and then performing full-coverage traversal on the internal path points of the current local planning path frame; and controlling the robot to move from the current local planning path frame to the next local planning path frame so as to track the path of the next local planning path frame until all the frame path points and the internal path points of the local planning path frame are traversed.
The beneficial effects of this application are: unlike the prior art, wall surface information is extracted from a grid map converted from an environment point cloud, a planning edgewise starting point is generated based on the wall surface information, and a plurality of local planning path frames are generated on the grid map. The robot is controlled to move edgewise from the starting point along the border of a local planning path frame until all path points on the border of the current frame are traversed, then performs full-coverage traversal cleaning of the path points inside the current frame, and is then controlled to move to the next local planning path frame to repeat these actions. When the area to be cleaned enclosed by the edgewise region is too large, setting up a plurality of local planning path frames lets the robot quickly perform full-coverage cleaning of the current frame's area as soon as its edgewise pass is finished. Compared with prior-art methods that can only begin full-coverage cleaning after the edgewise pass of the entire area is complete, the robot control method of this application facilitates reasonable full-coverage path planning, improves the robot's cleaning efficiency, and remains applicable in complex and in open environments.
Drawings
FIG. 1 is a schematic flow chart diagram of a first embodiment of a robot control method of the present application;
FIG. 2 is a schematic diagram of an embodiment of a grid map of the present application;
FIG. 3 is a schematic flow chart diagram of a second embodiment of a robot control method of the present application;
FIG. 4 is a schematic flow chart diagram illustrating one embodiment of step S103 in FIG. 1;
FIG. 5 is a flowchart of one embodiment of step S304 in FIG. 4;
FIG. 6 is a flowchart of one embodiment of step S403 in FIG. 5;
FIG. 7 is a flowchart of one embodiment of step S104 in FIG. 1;
FIG. 8 is a schematic structural diagram of an embodiment of the robot of the present application;
FIG. 9 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
With the rapid development of household service robot technology, more and more sweeping robots are entering family life. The running state of a robot while sweeping can be divided into a non-planning random-walk mode and a full-coverage planned sweeping mode; the random-walk mode is being phased out because of its low sweeping efficiency, while full-coverage sweeping is better suited to relatively open areas, so dust tends to accumulate in relatively concealed dead corners such as wall corners and table corners. Edgewise cleaning was therefore introduced as a specific cleaning mode performed before full-coverage cleaning. However, when the area to be cleaned is very large, the robot needs a great deal of time to complete the edgewise cleaning of the whole area, which delays the start of the full-coverage cleaning process; and once the area enclosed by the edgewise region is too large, reasonable full-coverage path planning also becomes difficult.
In order to improve the cleaning efficiency of the robot and rapidly develop the full-coverage cleaning process, the present application first proposes a robot control method, please refer to fig. 1, and fig. 1 is a schematic flow diagram of a first embodiment of the robot control method of the present application. As shown in fig. 1, the robot control method specifically includes steps S101 to S104:
step S101: and acquiring a grid map subjected to point cloud conversion by the environment map, and extracting wall surface information from the grid map.
Upon receiving an edgewise cleaning instruction, the robot enables its laser radar sensor, which may specifically be a laser distance sensor (LDS), and synchronously starts its Simultaneous Localization and Mapping (SLAM) module.
Among them, SLAM can be described as: the robot starts to move from an unknown position in an unknown environment, self-positioning is carried out according to position estimation and a map in the moving process, and meanwhile, an incremental map is built on the basis of self-positioning, so that autonomous positioning and navigation of the robot are realized. Positioning means that the robot has to know its position in the environment. Mapping means that the robot has to record the position of features in the environment. SLAM refers to the robot establishing an environment map while positioning.
In this embodiment, the robot may acquire point cloud information of its own environment map through the laser radar sensor and the SLAM module, and convert the point cloud information into a grid map.
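A minimal sketch of converting 2-D lidar point-cloud coordinates into an occupancy grid map. The resolution and map size here are hypothetical, and a real system would also mark the free space along each beam rather than only the endpoints:

```python
def pointcloud_to_grid(points, resolution=0.05, size=(100, 100)):
    """Rasterize 2-D lidar points (metres) into an occupancy grid.

    Each cell is `resolution` metres on a side; cells hit by at
    least one point are marked occupied (1), the rest left free (0).
    """
    grid = [[0] * size[1] for _ in range(size[0])]
    for x, y in points:
        i, j = int(x / resolution), int(y / resolution)
        if 0 <= i < size[0] and 0 <= j < size[1]:
            grid[i][j] = 1
    return grid
```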
Referring to fig. 2, fig. 2 is a schematic diagram of an embodiment of a grid map according to the present application.
After the robot acquires the grid map, wall information can be extracted from the grid map. As shown in fig. 2, in the grid map, a is wall information in the grid map, and B and C are obstacles in the grid map.
When extractable wall surface information exists in the constructed grid map, the included angle between the wall surface and the coordinate system of the grid map can be obtained so that subsequent edgewise path planning is effective, and the grid map is corrected by this angle so that the wall surface becomes parallel or perpendicular to the map's coordinate axes. If no extractable wall surface information exists, the robot's current heading is taken by default as the initial planning angle.
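Assuming the extracted wall is represented by two endpoints, the correction angle can be sketched as follows; folding the wall angle into the (-45°, 45°] range means the rotation makes the wall either parallel or perpendicular to the map axes, as described above:

```python
import math

def wall_correction_angle(p0, p1):
    """Angle (radians) by which to rotate the grid map so the wall
    through p0-p1 becomes axis-aligned (parallel or perpendicular
    to the map's coordinate axes)."""
    theta = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    # Fold into (-pi/4, pi/4]; rotating by the negated folded angle
    # aligns the wall with the nearest coordinate axis.
    while theta > math.pi / 4:
        theta -= math.pi / 2
    while theta <= -math.pi / 4:
        theta += math.pi / 2
    return -theta
```

A production system would fit the wall line from many occupied cells (e.g. with a Hough transform or least squares) rather than from two points.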
Step S102: and generating a planning edge starting point based on the wall surface information, and generating a plurality of local planning path frames in the grid map, wherein the planning edge starting point is positioned at the middle point of one edge of one local planning path frame, which is parallel to the wall surface of the wall surface information.
If the extractable wall surface information exists in the grid map, after the robot extracts and acquires the wall surface information, a planning edge starting point is generated based on the coordinates of the wall surface information, and the planning edge starting point can be acquired based on the position information of the wall surface information.
When the planning edgewise starting point is generated from the wall surface information, the starting point may be chosen as close as possible to the robot's charging pile. As shown in fig. 2, if obstacle B is a charging pile, the planning edgewise starting point may be selected from the grid points near obstacle B. Placing the starting point near the charging pile lets the robot return close to the pile after finishing edgewise cleaning, so that it can recharge quickly, which improves its cleaning efficiency.
After the robot acquires the edgewise starting point, a plurality of local planning path frames are generated on the grid map, where each local planning path frame may be a rectangular frame. In this embodiment, the local planning path frames are adjacent to one another and together form a mesh-shaped global planned edgewise path, and the side length of each frame is 4.5 m. In other embodiments, a corresponding number of local planning path frames may be generated based on the area of the grid map, which is not limited herein.
In other embodiments, the side length of the local planned path frame may be set according to actual needs, and the local planned path frame may also be in other shapes, which is not limited herein.
When the local planned path is generated, it needs to be ensured that the generated planned edge starting point coincides with the midpoint of one edge of one of the local planned path frames, and the edge where the planned edge starting point is located is parallel to the wall surface of the wall surface information. Therefore, when generating a plurality of local planning path frames, the local planning path frame where the planning border starting point is located can be generated first, and then a plurality of other respective adjacent local planning path frames are generated.
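A sketch of tiling adjacent square frames, under the assumption of an axis-aligned map in which the wall-parallel edges are horizontal. The 4.5 m side length matches the embodiment, while the row and column counts are illustrative; the first frame is placed so the planning edgewise starting point is the midpoint of its bottom (wall-parallel) edge, as required above:

```python
def generate_frames(start, side=4.5, rows=2, cols=2):
    """Tile `rows` x `cols` adjacent square frames of side `side`
    metres, returned as (xmin, ymin, xmax, ymax) tuples. `start`
    becomes the midpoint of the first frame's bottom edge."""
    sx, sy = start
    x0, y0 = sx - side / 2, sy  # bottom edge centred on the start point
    return [(x0 + c * side, y0 + r * side,
             x0 + (c + 1) * side, y0 + (r + 1) * side)
            for r in range(rows) for c in range(cols)]
```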
And if the extractable wall information does not exist in the grid map, taking the current position of the robot as a planning edge starting point.
Step S103: and controlling the robot to perform edgewise motion along the frame of the current local planning path frame where the planning edgewise starting point is located from the planning edgewise starting point until all path points on the frame of the current local planning path frame are traversed, and then performing full-coverage traversal on the internal path points of the current local planning path frame.
After the local planning path frames are established in the grid map, the robot is dispatched to the planning edgewise starting point and the sweeper switches to the planning edgewise state State_EW_plan. In this state, the robot can be controlled to move edgewise along the border of the current local planning path frame.
During this process, the robot's planning module can issue the local segment path of the current local planning path frame in real time so that the robot advances along the edge, and can record in real time every local path point the robot passes, until all path points on the border of the current frame have been traversed. At that point the area enclosed by the current local planning path frame is considered a cleanable area, and the robot then performs a full-coverage traversal of the path points inside the current frame, so that every area within the frame is fully cleaned.
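The patent does not mandate a particular interior traversal order; a common full-coverage pattern for the interior path points of a frame is the boustrophedon (back-and-forth) sweep, sketched here over an illustrative cell grid:

```python
def boustrophedon(cols, rows):
    """Order the interior grid cells of a frame for full-coverage
    traversal: sweep each row, reversing direction on alternate
    rows so consecutive cells stay adjacent."""
    path = []
    for r in range(rows):
        cells = [(c, r) for c in range(cols)]
        path.extend(cells if r % 2 == 0 else reversed(cells))
    return path
```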
Step S104: and controlling the robot to move from the current local planning path frame to the next local planning path frame so as to track the path of the next local planning path frame until all the frame path points and the internal path points of the local planning path frame are traversed.
After the robot finishes full-coverage cleaning of the entire area inside the current local planning path frame (the frame containing the planning edgewise starting point), it is controlled to move from the current frame to the next local planning path frame so as to perform path tracking on that frame, until all border path points and internal path points of every local planning path frame have been traversed, completing full-coverage cleaning of the whole area.
With this robot control method, even when wall surface information cannot be extracted from the grid map, as in a complex environment or an open scene, edgewise cleaning can still proceed according to the local planning path frames: the planned path is determined from the robot's current position and orientation, and rectangular local planning path frames with a preset fixed side length are generated, ensuring that the robot can carry out edgewise cleaning normally in any scene. Moreover, after finishing the edgewise pass and the full-coverage pass of one local planning path frame, the robot can be dispatched directly to the edgewise point of the next frame without performing a wall-finding operation, which improves its cleaning efficiency.
Unlike the prior art, this method extracts wall surface information from a grid map converted from an environment point cloud, generates a planning edgewise starting point based on that information, and generates a plurality of local planning path frames on the grid map. The robot is controlled to move edgewise from the starting point along the border of a local planning path frame until all path points on the border of the current frame are traversed, then performs full-coverage traversal cleaning of the path points inside the current frame, and then moves to the next local planning path frame to repeat these actions. When the area to be cleaned enclosed by the edgewise region is too large, setting up a plurality of local planning path frames lets the robot quickly perform full-coverage cleaning of each frame's area as soon as its edgewise pass is finished. Compared with prior-art methods that can only begin full-coverage cleaning after the edgewise pass of the entire area is complete, the robot control method of this application facilitates reasonable full-coverage path planning, improves the robot's cleaning efficiency, and remains applicable in complex and in open environments.
Optionally, referring to fig. 3, fig. 3 is a schematic flowchart of a robot control method according to a second embodiment of the present application. As shown in fig. 3, the robot control method specifically includes steps S201 to S206:
Step S201: acquiring a grid map obtained by point cloud conversion of the environment map, and extracting wall surface information from the grid map.
Step S201 is identical to step S101, and is not described again.
Step S202: extracting the obstacle positions in the grid map.

After the robot generates the grid map, the obstacle positions are extracted from it. Besides the wall surface described by the wall surface information, the grid map contains specific obstacles that the robot cannot pass, such as obstacle B and obstacle C in fig. 2. Therefore, the obstacle positions in the map are extracted before the local planning path frames are generated.
Step S203: generating a plurality of local planning path frames in the grid map based on the obstacle positions, wherein no obstacle exists on the borders or in the interior areas of the local planning path frames.

After the obstacle positions are obtained, generating the plurality of local planning path frames in the grid map requires planning the borders of the frames so that neither the borders nor the interior areas contain an obstacle. That is, the generated local planning path frames and their interior areas must not contain obstacle B or obstacle C as shown in fig. 2.

Generating the borders of the local planning path frames so that neither the borders nor the interior areas contain obstacles prevents the robot from colliding with obstacles or getting stuck while cleaning, which improves the robot's cleaning efficiency.
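As a sketch of this frame-generation step, the following assumes a NumPy occupancy grid (1 = obstacle) and a simple tiling with squares of a preset fixed side length; the function names and the tiling strategy are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def region_is_free(grid, top, left, side):
    """Check that a side x side square (border and interior) contains
    no obstacle cells. grid is a 2-D occupancy array where 1 = obstacle."""
    region = grid[top:top + side, left:left + side]
    if region.shape != (side, side):
        return False  # square would extend past the map boundary
    return not region.any()

def generate_frames(grid, side):
    """Tile the grid map with non-overlapping square frames of a fixed
    preset side length, keeping only the obstacle-free ones."""
    frames = []
    rows, cols = grid.shape
    for top in range(0, rows - side + 1, side):
        for left in range(0, cols - side + 1, side):
            if region_is_free(grid, top, left, side):
                frames.append((top, left))
    return frames
```

A tile containing any obstacle cell is simply discarded, which is the easiest way to guarantee that both the border and the interior of every generated frame are free.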
Step S204: generating a planning edgewise starting point based on the wall surface information, and generating a plurality of local planning path frames in the grid map, wherein the planning edgewise starting point is located at the midpoint of one edge of one of the local planning path frames, that edge being parallel to the wall surface described by the wall surface information.

As described above, the robot may generate the planning edgewise starting point in the grid map based on the wall surface information and generate a plurality of local planning path frames in the grid map, where the starting point lies at the midpoint of an edge of one of the frames that is parallel to the wall surface described by the wall surface information.

When the obstacles in the grid map include a charging pile, the planning edgewise starting point can be set near the charging pile, so that the robot returns to the vicinity of the charging pile for quick charging when the edgewise motion finishes, improving the robot's cleaning efficiency.
In this embodiment, the step of generating a planning edgewise starting point based on the wall surface information may be implemented as follows:

generating the planning edgewise starting point based on the wall surface information and the position of the charging pile, where the starting point is at least one of the grid points that lie a first preset distance from the wall surface and whose relative distance to the charging pile is less than or equal to a second preset distance.

In this embodiment, when the planning edgewise starting point is generated, it may be generated based on both the wall surface information and the position of the charging pile. The starting point may be a first preset distance away from the wall surface; the first preset distance may be the radius of the robot body, and the second preset distance may be set according to actual demand, which is not limited herein.
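A minimal sketch of this starting-point selection, assuming the wall is the horizontal line y = wall_y and grid points are 2-D coordinates; the function name and geometry are hypothetical:

```python
import math

def select_edge_start_points(grid_points, wall_y, charger, d_wall, d_charger):
    """Keep grid points that lie the first preset distance d_wall from the
    wall (assumed here to be the line y = wall_y) and within the second
    preset distance d_charger of the charging pile position."""
    picked = []
    for x, y in grid_points:
        at_wall_offset = math.isclose(abs(y - wall_y), d_wall)
        near_charger = math.hypot(x - charger[0], y - charger[1]) <= d_charger
        if at_wall_offset and near_charger:
            picked.append((x, y))
    return picked
```

Setting d_wall to the robot body radius, as the embodiment suggests, keeps the starting point just far enough from the wall for the robot to follow it without contact.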
Step S205: controlling the robot to move edgewise from the planning edgewise starting point along the border of the current local planning path frame where the starting point is located until all path points on that border are traversed, and then performing full-coverage traversal of the interior path points of the current local planning path frame.
Step S205 is identical to step S103, and is not described in detail.
Step S206: controlling the robot to move from the current local planning path frame to the next local planning path frame so as to perform path tracking on the next frame, until all border path points and interior path points of the local planning path frames are traversed.
Step S206 is identical to step S104, and is not described in detail.
Optionally, a method for controlling the robot to perform the edgewise motion on the frame of the locally planned path is shown in fig. 4, where fig. 4 is a schematic flowchart of a specific embodiment in step S103 in fig. 1. In this embodiment, the step of performing the edgewise motion along the border of the current local planned path frame where the planned edgewise start point is located in step S103 may be implemented by the method shown in fig. 4, and the embodiment specifically includes steps S301 to S304:
Step S301: acquiring the obstacle sensing distance in the movement direction while controlling the robot to move edgewise along the border of the current local planning path frame.

While the robot is controlled to move edgewise along the border of the current local planning path frame, it enables its RGB structured-light or line-laser sensor and its wall-following sensor, starts the sensing module, sets a third preset distance Forward_th, and calculates the obstacle sensing distance between itself and any obstacle in real time, recorded as d_Forward.
Step S302: triggering the obstacle-encountering fixing action when the obstacle sensing distance is smaller than the third preset distance.

When the robot, moving edgewise along the border of the current local planning path frame, suddenly encounters a newly appeared obstacle, the obstacle sensing distance d_Forward between the robot and the obstacle is calculated in real time. If d_Forward is smaller than the third preset distance Forward_th, an obstacle is considered to be in front of the robot; the robot then receives an obstacle-avoidance instruction and triggers the obstacle-encountering fixing action. The obstacle-encountering fixing action is an obstacle-avoidance action preset in the robot; it may be a circular motion around the obstacle or another motion, which is not limited herein.
Specifically, the obstacle encountering fixing action further comprises: controlling the robot to rotate along a first direction and recording a first rotation angle; when the first rotating angle reaches a first preset angle, controlling the robot to rotate along a second direction, and recording the second rotating angle; when the second rotation angle reaches a second preset angle, controlling the robot to stop rotating; the first direction and the second direction are two opposite directions.
In this embodiment, when the obstacle-encountering fixing action is triggered, the robot is controlled to start rotating anticlockwise while the first rotation angle is recorded; when the first rotation angle reaches the first preset angle of 90 degrees, the robot starts rotating clockwise while the second rotation angle is recorded; and when the second rotation angle reaches the second preset angle of 90 degrees, the robot stops rotating.
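The two-phase rotation described above can be sketched as a generator that a control loop consumes one command at a time; the step size, command tuples, and function name are assumptions for illustration:

```python
def obstacle_fixing_action(step_deg=30.0, first_limit=90.0, second_limit=90.0):
    """Yield rotation commands for the fixed obstacle-avoidance action:
    rotate anticlockwise while accumulating the first rotation angle,
    reverse direction once first_limit is reached, then stop once
    second_limit is reached. The caller can abandon the generator at any
    yield, e.g. when the lateral wall distance drops below the fourth
    preset distance."""
    angle = 0.0
    while angle < first_limit:
        yield ('ccw', step_deg)
        angle += step_deg
    angle = 0.0
    while angle < second_limit:
        yield ('cw', step_deg)
        angle += step_deg
    yield ('stop', 0.0)
```

Modeling the action as a generator makes the early-exit condition of step S304 natural: the control loop simply stops pulling commands when the lateral distance threshold is crossed.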
Step S303: monitoring the lateral distance between the robot and the wall surface while the robot executes the obstacle-encountering fixing action.

While the robot performs the obstacle-encountering fixing action, i.e., while it rotates, the lateral distance between the robot and the wall surface is monitored in real time using the wall-following sensor.
Step S304: exiting the obstacle-encountering fixing action when the lateral distance is smaller than the fourth preset distance, and continuing the edgewise motion based on the current position.

During the obstacle-encountering fixing action, if the lateral distance becomes smaller than the fourth preset distance, the robot immediately exits the action and continues the edgewise motion from its current position.
Unlike the prior art, this robot control method can avoid obstacles in a timely manner while moving edgewise along the border of the current local planning path frame, largely preventing the sweeping robot from being disturbed by temporary obstacles, people, or pets. It also monitors the lateral distance between the robot and the wall surface in real time and exits the fixed obstacle-avoidance action promptly, thereby improving the robot's cleaning efficiency.
Optionally, fig. 5 is a schematic flowchart of an embodiment in step S304 in fig. 4. In this embodiment, the step of continuing to execute the edgewise motion based on the current position in step S304 can be implemented by the method shown in fig. 5, and the embodiment specifically includes steps S401 to S404:
Step S401: judging whether the robot's current position is on the border of the current local planning path frame.

After the robot exits the obstacle-encountering fixing action, it is judged whether the robot's current position is on the border of the current local planning path frame, i.e., whether the robot is in the planned edgewise state State_EW_plan.
If yes, go to step S402; if not, the process goes to step S403;
step S402: the edgewise motion is continued from the current position.
If the robot is in the planned edgewise state State_EW_plan, it continues the edgewise motion from its current position.
Step S403: acquiring the tracking path point on the border of the current local planning path frame with the minimum distance from the current position, dispatching the robot from the current position to the tracking path point, and continuing the edgewise motion from the tracking path point.
After exiting the obstacle-encountering fixing action, if the robot is judged not to be in the planned edgewise state, the tracking path point M with the minimum distance from the current position is acquired on the border of the current local planning path frame, the robot's state is switched to the entity edgewise state State_EW_Real, and the initial angle init_Real at which the robot starts to follow the wall is recorded by the sensor. In the entity edgewise state State_EW_Real, the robot is controlled to move edgewise using its lateral distance sensor. The robot is then dispatched from its current position to the tracking path point M and continues the edgewise motion from M.
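Selecting the tracking path point M is a nearest-point search over the frame's border waypoints. The sketch below assumes the border is discretized counter-clockwise from the bottom-left corner; the enumeration order and function names are illustrative assumptions:

```python
import math

def rectangle_border_points(left, bottom, width, height, step=1.0):
    """Enumerate path points along a rectangular frame border, starting
    at the bottom-left corner and walking counter-clockwise."""
    pts = []
    x, y = left, bottom
    for dx, dy, n in ((1, 0, int(width / step)), (0, 1, int(height / step)),
                      (-1, 0, int(width / step)), (0, -1, int(height / step))):
        for _ in range(n):
            pts.append((x, y))
            x += dx * step
            y += dy * step
    return pts

def nearest_tracking_point(border_points, position):
    """Tracking path point M: the border waypoint with minimum
    Euclidean distance to the robot's current position."""
    return min(border_points,
               key=lambda p: math.hypot(p[0] - position[0], p[1] - position[1]))
```

For the small number of waypoints on one frame, a linear scan is sufficient; a spatial index would only matter for much denser discretizations.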
When the robot is in the entity edgewise state State_EW_real and is being dispatched from its current position to the tracking path point, if it encounters an obstacle on the way, it performs the obstacle-encountering fixing action described above. After exiting that action, the robot's current position is judged twice. First, it is judged whether the robot is outside the current local planning path frame; if so, the robot is dispatched to the tracking path point M by two-point dispatching. Second, it is judged whether the robot is on the border path of the current local planning path frame; if the distance between the robot's current position and the border of the current frame satisfies a fifth preset threshold, the robot's state is changed from the entity edgewise state State_EW_real to the planned edgewise state State_EW_plan, and the robot continues the edgewise motion from the current path tracking point M.
In addition, when the robot is in the entity edgewise state State_EW_real, it is controlled to follow the wall using the distance information from the wall-following sensor or the laser sensor. When the robot performs the obstacle-encountering fixing action in the entity edgewise state State_EW_Real, it calculates the difference between the current rotation angle and the initial angle init_Real and judges whether that difference is larger than 360 degrees. If it is, the robot is judged to have deviated from the wall surface and rotated a full abnormal circle, and the robot is then dispatched from its current position to the tracking path point M.
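The full-circle check requires accumulating signed heading changes rather than comparing raw angles, since headings wrap around. A sketch under that assumption (degree-valued headings, function names hypothetical):

```python
def accumulated_rotation(headings_deg):
    """Sum signed heading increments, unwrapping across the +/-180 degree
    seam, so a robot circling an obstacle accumulates past 360 degrees."""
    total = 0.0
    for prev, cur in zip(headings_deg, headings_deg[1:]):
        total += (cur - prev + 180.0) % 360.0 - 180.0
    return total

def deviated_full_circle(headings_deg):
    """Abnormal wall-following case: the robot has rotated more than one
    full turn away from the heading at which it started (init_Real)."""
    return abs(accumulated_rotation(headings_deg)) > 360.0
```

Each increment is folded into (-180, 180], so a heading jump from 180 to -90 correctly counts as +90 rather than -270.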
Optionally, referring to fig. 6, fig. 6 is a schematic flowchart of an embodiment in step S403 in fig. 5. The present embodiment may implement the step of scheduling the robot from the current position to the tracking waypoint in step S403 by using the method shown in fig. 6, and the embodiment specifically includes steps S501 to S502:
Step S501: acquiring the dispatching path from the robot's current position to the tracking path point.

When the robot changes from the entity edgewise state State_EW_real to the planned edgewise state State_EW_plan and continues the edgewise motion from the current path tracking point M, the traversed path points of the current local planning path frame need to be updated. Therefore, after the obstacle-encountering fixing action ends, the dispatching path from the robot's current position to the tracking path point M needs to be acquired.
Step S502: dispatching the robot from the current position to the tracking path point along the dispatching path, and marking the path points on the dispatching path as traversed points.

The robot is dispatched from its current position to the tracking path point M along the dispatching path, the current trajectory tracking point is updated to M, and all path points on the current local planning path frame from the planning edgewise starting point up to, but not including, the tracking path point M are marked as traversed points.
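The marking step can be sketched as a walk over border waypoint indices, wrapping around the frame; indexing scheme and function name are assumptions:

```python
def mark_traversed(num_points, start_idx, tracking_idx):
    """Mark border waypoints from the planning edgewise starting point
    up to, but not including, the tracking path point M as traversed.
    Indices wrap around the closed frame border."""
    traversed = set()
    i = start_idx
    while i != tracking_idx:
        traversed.add(i)
        i = (i + 1) % num_points
    return traversed
```

Because the border is a closed loop, the modulo wrap lets the starting point sit anywhere on the frame relative to M.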
Unlike the prior art, with this robot control method, when an abnormality occurs during the edgewise motion, the robot can automatically return to the current local planning path frame and continue the previous edgewise motion without restarting it. This improves the robot's cleaning efficiency, avoids repeated cleaning, and improves the robustness of the robot's edgewise motion along the border of the local planning path frame.
This robot control method can also control the end condition of the robot's edgewise motion more accurately, avoiding ending the edgewise motion too early or too late, thereby improving the robot's coverage rate and working efficiency.
Optionally, referring to fig. 7, fig. 7 is a schematic flowchart of an embodiment in step S104 in fig. 1. As shown in fig. 7, in this embodiment, the step of controlling the robot to move from the current locally-planned path frame to the next locally-planned path frame in step S104 to perform path tracking on the next locally-planned path frame may be implemented by the method shown in fig. 7, and the method in this embodiment specifically includes steps S601 to S605:
Step S601: acquiring the relative distance between the current local planning path frame and each of the remaining local planning path frames.

When the robot finishes the full-coverage cleaning of the current local planning path frame, it acquires the relative distance between the current frame and each remaining local planning path frame.

Step S602: taking the local planning path frame with the minimum relative distance to the current frame as the next local planning path frame.

The robot takes the local planning path frame with the minimum relative distance to the current frame as the next local planning path frame.

Step S603: acquiring the two borders of the next local planning path frame that are parallel to the wall surface described by the wall surface information, and calculating the relative distance between each border and the wall surface.

The robot obtains the two borders of the next local planning path frame that are parallel to the wall surface and calculates the relative distance between each border and the wall surface.

Step S604: setting a new planning edgewise starting point at the midpoint of whichever of the two borders has the smaller relative distance.

The robot selects the midpoint of the border with the smaller relative distance to the wall and sets it as the new planning edgewise starting point.
Step S605: controlling the robot to move edgewise along the border of the next local planning path frame from the new planning edgewise starting point.

The robot is controlled to move edgewise from the new planning edgewise starting point along the border of the next local planning path frame, and the above steps are repeated until no reachable local planning path frame remains in the current grid map, at which point the full-coverage cleaning of the current area is finished.
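Steps S601 to S604 can be sketched together: pick the closest remaining frame, then place the new starting point on the midpoint of its wall-facing edge. The geometry below assumes axis-aligned frames given by center and half side, with the wall as the horizontal line y = wall_y; all names are hypothetical:

```python
import math

def next_frame(current_center, remaining_centers):
    """Steps S601-S602: choose the remaining frame whose center is
    closest to the current frame's center."""
    return min(remaining_centers,
               key=lambda c: math.hypot(c[0] - current_center[0],
                                        c[1] - current_center[1]))

def new_edge_start(center, half_side, wall_y):
    """Steps S603-S604: of the frame's two borders parallel to the wall
    (assumed to be the line y = wall_y), return the midpoint of the
    closer one as the new planning edgewise starting point."""
    cx, cy = center
    top, bottom = cy + half_side, cy - half_side
    near_y = bottom if abs(bottom - wall_y) <= abs(top - wall_y) else top
    return (cx, near_y)
```

Starting each new frame on its wall-facing edge keeps the wall-following sensor usable from the first waypoint, which is why the closer of the two parallel borders is chosen.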
Unlike the prior art, with this robot control method, after one local planning path frame has been cleaned, the robot can be dispatched directly to the planning edgewise starting point of the next local planning path frame without performing an edge-finding operation, which improves the robot's cleaning efficiency.
Optionally, the present application further provides a robot, please refer to fig. 8, and fig. 8 is a schematic structural diagram of an embodiment of the robot in the present application, where the robot 200 includes a processor 201 and a memory 202 connected to the processor 201, where the memory 202 stores program data, and the processor 201 executes the program data stored in the memory 202 to execute steps in the embodiments of the methods described above.
The processor 201 may also be referred to as a CPU (Central Processing Unit). The processor 201 may be an integrated circuit chip having signal processing capabilities. The processor 201 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 202 is used for storing program data required for the processor 201 to operate.
The processor 201 is also used to execute the program data stored in the memory 202 to implement the robot control method described above.
Optionally, the present application further proposes a computer-readable storage medium. Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application.
The computer-readable storage medium 300 of the embodiment of the present application stores therein program instructions 310, and the program instructions 310 are executed to implement the robot control method described above.
The program instructions 310 may form a program file stored in the storage medium in the form of a software product, so that an electronic device (which may be a personal computer, a server, or a network device) or a processor (processor) executes all or part of the steps of the methods according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a mobile hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or terminal devices, such as a computer, a server, a mobile phone, and a tablet.
The computer-readable storage medium 300 of the embodiment may be, but is not limited to, a usb disk, an SD card, a PD optical drive, a removable hard disk, a high-capacity floppy drive, a flash memory, a multimedia memory card, a server, and the like.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The computer instructions are read by a processor of the electronic device from the computer-readable storage medium, and the processor executes the computer instructions to cause the electronic device to perform the steps in the above-mentioned method embodiments.
In addition, if the above functions are implemented as software functions and sold or used as a standalone product, they may be stored in a storage medium readable by a mobile terminal. That is, the present application also provides a storage device storing program data that can be executed to implement the methods of the above embodiments; the storage device may be, for example, a USB disk, an optical disk, or a server. In other words, the present application may be embodied as a software product that includes several instructions for causing an intelligent terminal to perform all or part of the steps of the methods described in the embodiments.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing mechanisms, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be viewed as implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device (e.g., a personal computer, server, network device, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions). For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
The above description is only an example of the present application, and is not intended to limit the scope of the present application, and all equivalent structures or equivalent processes performed by the present application and the contents of the attached drawings, which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A robot control method, characterized by comprising:
acquiring a grid map subjected to point cloud conversion by an environment map, and extracting wall surface information from the grid map;
generating a planning edge starting point based on the wall surface information, and generating a plurality of local planning path frames in the grid map, wherein the planning edge starting point is positioned at the middle point of one edge of one local planning path frame, which is parallel to the wall surface of the wall surface information;
controlling the robot to perform edgewise motion along the frame of the current local planning path frame where the planning edgewise starting point is located from the planning edgewise starting point until all path points on the frame of the current local planning path frame are traversed, and performing full-coverage traversal on the internal path points of the current local planning path frame;
and controlling the robot to move from the current local planning path frame to the next local planning path frame so as to track the path of the next local planning path frame until all the frame path points and the internal path points of the local planning path frame are traversed.
2. The robot control method according to claim 1,
after the wall surface information is extracted from the grid map and before the plurality of local planning path frames are generated in the grid map, the robot control method further includes:
extracting obstacle positions in the grid map;
and generating the local planning path boxes in the grid map based on the positions of the obstacles, wherein no obstacle exists in the borders and the inner area of the local planning path boxes.
3. The robot control method according to claim 2,
the barrier comprises a charging pile;
generating a planning edge starting point based on the wall surface information, including:
generating the planning edge starting point based on the wall information and the position of the charging pile; the planning edge starting point is at least one of a plurality of grid points that are a first preset distance away from the wall surface and whose relative distance to the charging pile is less than or equal to a second preset distance.
4. The robot control method according to claim 1,
the edgewise moving is performed along the frame of the current local planning path frame where the planning edgewise starting point is located, and the edgewise moving includes:
in the process of controlling the robot to move along the edge along the frame of the current local planning path frame, acquiring the sensing distance of the obstacle in the moving direction;
when the obstacle sensing distance is smaller than a third preset distance, triggering obstacle-encountering fixing action;
monitoring the lateral distance between the robot and the wall surface in the process that the robot executes the obstacle-encountering fixing action;
and when the lateral distance is smaller than a fourth preset distance, withdrawing the obstacle-encountering fixing action, and continuing to execute the edge movement based on the current position.
5. Robot control method according to claim 4,
the obstacle-encountering fixation is as follows: controlling the robot to rotate along a first direction, and recording a first rotation angle; when the first rotation angle reaches a first preset angle, controlling the robot to rotate along a second direction, and recording the second rotation angle; when the second rotation angle reaches a second preset angle, controlling the robot to stop rotating;
the first direction and the second direction are two opposite directions.
6. The robot control method according to claim 4,
the continuing to perform the edgewise motion based on the current position includes:
judging whether the current position of the robot is on the frame of the current local planning path frame;
if so, continuing to execute the edge movement from the current position;
if not, acquiring a tracking path point which is on the frame of the current local planning path frame and has the minimum distance with the current position, dispatching the robot from the current position to the tracking path point, and continuing to execute the edgewise motion from the tracking path point.
7. The robot control method according to claim 6,
the scheduling the robot from the current location to the tracking waypoint, comprising:
acquiring a dispatching path from the current position to the tracking path point;
and dispatching the robot from the current position to the tracking path point according to the dispatching path, and marking the path point on the dispatching path as a traversed point.
8. The robot control method according to claim 1,
the controlling the robot to move from the current locally planned path frame to a next locally planned path frame to perform path tracking on the next locally planned path frame includes:
acquiring the relative distance between the current local planning path frame and the rest local planning path frames;
taking the local planning path frame with the minimum relative distance with the current local planning path frame as the next local planning path frame of the current local planning path frame;
acquiring two frames of the next local planning path frame parallel to the wall surface of the wall surface information, and calculating the relative distance between the two frames and the wall surface;
setting a new planning edge starting point at the middle point of one frame with a small relative distance in the two frames;
and controlling the robot to perform edgewise motion along the frame of the next local planned path frame from the new planned edgewise starting point.
9. A robot, comprising a processor and a memory coupled to the processor, wherein the memory stores program data, and the processor executes the program data to perform operations of:
acquiring a grid map obtained by point cloud conversion of an environment map, and extracting wall surface information from the grid map;
generating a planned edgewise starting point based on the wall surface information, and generating a plurality of local planning path frames in the grid map, wherein the planned edgewise starting point is located at the midpoint of an edge, parallel to the wall surface of the wall surface information, of one of the local planning path frames;
controlling the robot to perform edgewise motion, starting from the planned edgewise starting point, along the border of the current local planning path frame where the planned edgewise starting point is located, until all path points on the border of the current local planning path frame are traversed, and performing full-coverage traversal of the internal path points of the current local planning path frame; and
controlling the robot to move from the current local planning path frame to a next local planning path frame to perform path tracking on the next local planning path frame, until the border path points and the internal path points of all the local planning path frames are traversed.
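The overall coverage order in the claim above — traverse a frame's border first, then fully cover its interior, then move on to the next frame — can be illustrated with a minimal simulation. Wall extraction, frame generation, and the motion controller are out of scope here; frames are hypothetical axis-aligned rectangles whose path points are grid cells, and all names are illustrative.

```python
def frame_cells(frame):
    """Split a frame's grid cells into border path points and
    internal path points."""
    x_min, y_min, x_max, y_max = frame
    border, interior = [], []
    for x in range(x_min, x_max + 1):
        for y in range(y_min, y_max + 1):
            on_border = x in (x_min, x_max) or y in (y_min, y_max)
            (border if on_border else interior).append((x, y))
    return border, interior

def coverage_order(frames):
    """Visit each frame in turn: border first (edgewise motion),
    then full-coverage traversal of the interior."""
    visited = []
    for frame in frames:  # frames assumed pre-ordered, e.g. by relative distance
        border, interior = frame_cells(frame)
        visited.extend(border)
        visited.extend(interior)
    return visited

# Two disjoint 3x3 frames: every path point is covered exactly once.
cells = coverage_order([(0, 0, 2, 2), (3, 0, 5, 2)])
assert len(cells) == len(set(cells)) == 18
```

Within a real interior traversal, the flat `visited.extend(interior)` would be replaced by a boustrophedon or similar sweep pattern; the frame-by-frame ordering is what the claim fixes.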
10. A computer-readable storage medium, having stored therein program instructions that, when executed, implement:
acquiring a grid map obtained by point cloud conversion of an environment map, and extracting wall surface information from the grid map;
generating a planned edgewise starting point based on the wall surface information, and generating a plurality of local planning path frames in the grid map, wherein the planned edgewise starting point is located at the midpoint of an edge, parallel to the wall surface of the wall surface information, of one of the local planning path frames;
controlling a robot to perform edgewise motion, starting from the planned edgewise starting point, along the border of the current local planning path frame where the planned edgewise starting point is located, until all path points on the border of the current local planning path frame are traversed, and performing full-coverage traversal of the internal path points of the current local planning path frame; and
controlling the robot to move from the current local planning path frame to a next local planning path frame to perform path tracking on the next local planning path frame, until the border path points and the internal path points of all the local planning path frames are traversed.
CN202211269360.2A 2022-10-17 2022-10-17 Robot control method, robot, and computer-readable storage medium Active CN115316887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211269360.2A CN115316887B (en) 2022-10-17 2022-10-17 Robot control method, robot, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN115316887A true CN115316887A (en) 2022-11-11
CN115316887B CN115316887B (en) 2023-02-28

Family

ID=83915209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211269360.2A Active CN115316887B (en) 2022-10-17 2022-10-17 Robot control method, robot, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN115316887B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102138769A (en) * 2010-01-28 2011-08-03 深圳先进技术研究院 Cleaning robot and cleaning method thereby
CN107518833A (en) * 2017-10-12 2017-12-29 南京中高知识产权股份有限公司 A kind of obstacle recognition method of sweeping robot
CN109540155A (en) * 2019-02-01 2019-03-29 西安全志科技有限公司 A kind of path planning and navigation method, computer installation and the computer readable storage medium of sweeping robot
US20190133402A1 (en) * 2016-08-30 2019-05-09 Beijing Xiaomi Mobile Software Co., Ltd. Robot and robot control method
CN110398964A (en) * 2019-07-16 2019-11-01 浙江大学 A kind of low energy loss robot complete coverage path planning method and system
CN111399516A (en) * 2020-03-31 2020-07-10 深圳市银星智能科技股份有限公司 Robot path planning method and device and robot
CN111728535A (en) * 2020-06-22 2020-10-02 上海高仙自动化科技发展有限公司 Method and device for generating cleaning path, electronic equipment and storage medium
CN111830970A (en) * 2020-06-12 2020-10-27 珠海市一微半导体有限公司 Regional cleaning planning method for robot walking along edge, chip and robot
US20200359871A1 (en) * 2019-05-15 2020-11-19 Shenzhen Silver Star Intelligent Technology Co., Ltd. Method for cleaning along edge and cleaning robot
US20210041871A1 (en) * 2018-03-09 2021-02-11 Amicro Semicoductor Co.,Ltd. Along-edge Walking Control Method for Autonomous Mobile Robot
CN112438658A (en) * 2019-08-29 2021-03-05 华南师范大学 Cleaning area dividing method for cleaning robot and cleaning robot
CN112462768A (en) * 2020-11-25 2021-03-09 深圳拓邦股份有限公司 Mobile robot navigation map creating method and device and mobile robot
CN113317733A (en) * 2021-06-04 2021-08-31 深圳飞鼠动力科技有限公司 Path planning method and cleaning robot
CN114431771A (en) * 2021-12-31 2022-05-06 浙江大华技术股份有限公司 Sweeping method of sweeping robot and related device
CN114442625A (en) * 2022-01-24 2022-05-06 中国地质大学(武汉) Environment map construction method and device based on multi-strategy joint control agent

Also Published As

Publication number Publication date
CN115316887B (en) 2023-02-28

Similar Documents

Publication Publication Date Title
US20220050468A1 (en) Robot-assisted processing of a surface using a robot
WO2020140860A1 (en) Dynamic region division and region channel identification method, and cleaning robot
CN110974091B (en) Cleaning robot, control method thereof, and storage medium
EP2540203B1 (en) Robot cleaner and control method thereof
US8195331B2 (en) Method, medium, and apparatus for performing path planning of mobile robot
JP2022528261A (en) Exploration methods, devices, mobile robots and storage media
WO2014135008A1 (en) Edge-to-center cleaning method used by robot cleaner
JP2007213236A (en) Method for planning route of autonomously traveling robot and autonomously traveling robot
JP7281707B2 (en) Mobile robot and control method
US9599987B2 (en) Autonomous mobile robot and method for operating the same
CN113741438A (en) Path planning method and device, storage medium, chip and robot
CN110543174A (en) Method for establishing passable area graph, method for processing passable area graph, device and movable equipment
CN112806912B (en) Robot cleaning control method and device and robot
US11571813B2 (en) Systems and methods for managing a semantic map in a mobile robot
CN111443696B (en) Laser sweeping robot path planning method, device and chip
CN112656307B (en) Cleaning method and cleaning robot
KR101333496B1 (en) Apparatus and Method for controlling a mobile robot on the basis of past map data
US11537141B2 (en) Robotic cleaning device with dynamic area coverage
CN114431771B (en) Sweeping method of sweeping robot and related device
JP2010026727A (en) Autonomous moving device
CN112137512B (en) Sweeping robot sweeping area detection method, device, equipment, system and medium
CN115316887B (en) Robot control method, robot, and computer-readable storage medium
CN115500737B (en) Ground medium detection method and device and cleaning equipment
CN115344034A (en) Intelligent cleaning robot path planning method and intelligent cleaning device
CN112137528B (en) Sweeping robot sweeping area detection method, device, equipment, system and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant