CN114967691A - Robot control method and device, readable storage medium and robot - Google Patents


Info

Publication number
CN114967691A
Authority
CN
China
Prior art keywords
robot
point cloud
outline
target object
frame
Prior art date
Legal status
Pending
Application number
CN202210585744.9A
Other languages
Chinese (zh)
Inventor
罗铭
蔡君义
李松
邵林
孙涛
Current Assignee
Midea Robozone Technology Co Ltd
Original Assignee
Midea Robozone Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Midea Robozone Technology Co Ltd
Priority to CN202210585744.9A
Publication of CN114967691A
Legal status: Pending

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a robot control method and device, a readable storage medium, and a robot. The control method comprises: acquiring a point cloud and an outline frame of a target object; selecting one of the point cloud and the outline frame, or a combination of the two, as path planning information; determining a driving path of the robot according to the path planning information; and controlling the robot to drive along the driving path. By acquiring both the point cloud and the outline frame of the target object, one of them or their combination can be selected as path planning information according to actual requirements, so that the driving path of the robot is planned quickly and efficiently, the robot is controlled to drive along the planned path, and the obstacle avoidance performance of the robot during driving is improved.

Description

Robot control method and device, readable storage medium and robot
Technical Field
The invention relates to the technical field of robot control, in particular to a robot control method and device, a readable storage medium and a robot.
Background
The obstacle avoidance capability is an important index of a sweeping robot's level of intelligence. Existing sweeping robots mainly rely on obstacle avoidance sensors such as infrared sensors, line lasers, and radar, and in the prior art their obstacle avoidance performance is unstable.
Disclosure of Invention
The present invention has been made to solve at least one of the problems occurring in the prior art or the related art.
To this end, a first aspect of the present invention is to propose a control method of a robot.
A second aspect of the present invention is to provide a control device for a robot.
A third aspect of the present invention is to provide a control device for a robot.
A fourth aspect of the invention is directed to a readable storage medium.
A fifth aspect of the present invention is to provide a robot.
In view of this, according to a first aspect of the present invention, there is provided a control method of a robot, including: acquiring a point cloud and a contour frame of a target object; selecting one item or a combination of the point cloud and the outline box as path planning information; determining a driving path of the robot according to the path planning information; and controlling the robot to run according to the running path.
In this technical solution, when a target object obstructs the robot's forward travel during operation, the robot is controlled to take avoiding action: the point cloud and the outline frame of the target object are acquired separately. When planning the driving path, the point cloud or the outline frame can be used alone as path planning information, or the two can be used together. The driving path of the robot is then planned according to the determined path planning information, and the robot is controlled to drive along the planned path.
It should be noted that the target object includes obstacles of different types, different shapes, and different poses. For example, low objects (carpet, doorsill), irregular objects (wires, clothing), obstacles of a certain classification (pet faeces, liquid stains).
The robot may be a sweeping robot provided with an image acquisition device such as a camera.
The outline frame of the target object is the smallest rectangle containing the complete outline of the target object. The outline frame can be obtained by visual recognition: an image containing the target object is captured by an image acquisition device such as a camera, and the outline frame is obtained by recognizing the image.
The point cloud of the target object comprises discrete point data of the specific contour of the target object, the point cloud of the target object is distributed in the contour frame, and both the contour frame and the point cloud can be used as path planning information.
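To make these two representations concrete, the following minimal sketch assumes the point cloud has already been projected onto a 2D plane; deriving the outline frame from the points themselves is an illustrative shortcut, whereas the patent obtains the frame by visual recognition of a camera image.

```python
from typing import List, Tuple

Point = Tuple[float, float]  # 2D point, e.g. projected onto the ground plane


def outline_box(points: List[Point]) -> Tuple[float, float, float, float]:
    """Smallest axis-aligned rectangle enclosing the given points.

    Returns (x_min, y_min, x_max, y_max). In the patent the frame comes from
    visual recognition of a camera image; here it is derived from the point
    cloud itself purely for illustration.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)


# Example: a thin, wire-like obstacle lying diagonally across its frame.
wire_cloud = [(0.1 * i, 0.1 * i) for i in range(20)]
print(outline_box(wire_cloud))  # approximately (0.0, 0.0, 1.9, 1.9)
```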
It will be appreciated that for the same target object, the point cloud of the target object is located inside the outline box.
When many point clouds exist inside the outline frame, the recognition results obtained by using the point cloud as path planning information and by using the outline frame as path planning information differ little. To reduce the amount of computation, the outline frame can be used as the path planning information.
When the point clouds are concentrated in a small region of the outline frame, that is, when there are few point clouds inside the outline frame, using the outline frame alone as path planning information yields low planning accuracy. In this case the point cloud can be selected as the path planning information, or the point cloud and the outline frame can be used together.
In the related art, the obstacle avoidance strategy of a sweeping robot plans the driving path based on a single type of data. For example, when the outline frame of an obstacle is used as the sole basis for obstacle avoidance planning, a long, thin obstacle such as an electric wire may in reality occupy only the diagonal of its outline frame, so avoiding the whole frame reduces the cleaning coverage of the sweeping robot.
According to the control method of the robot in the technical scheme, the point cloud and the outline frame of the target object are obtained, and one or a combination of the point cloud and the outline frame can be selected as path planning information according to actual requirements, so that the driving path of the robot is planned quickly and efficiently, the robot is controlled to drive according to the planned driving path, and the obstacle avoidance effect in the driving process of the robot is further improved.
The control method of the robot according to the present invention may further include the following additional features:
in the above technical solution, selecting one or a combination of a point cloud and a contour box as path planning information includes: determining the area ratio of the point cloud in the outline frame; and selecting one item or a combination of the point cloud and the outline frame as path planning information according to the area ratio.
In the technical scheme, data of a target object are subjected to operation processing, area data of a point cloud and area data of a contour frame are subjected to operation processing respectively, the area of the contour frame and the distribution contour area of the point cloud are calculated, and the area ratio of the distribution contour area of the point cloud in the area of the contour frame is further calculated. And according to the numerical value of the area ratio, combining one item of data in the point cloud and the outline box or the data of the point cloud and the outline box to be used as path planning information for planning a path.
In the process of determining the area ratio of the point cloud in the outline box, the area of the point cloud and the area of the outline box need to be acquired. The method comprises the steps of dividing an acquired image into a plurality of grids with the same area, setting the area of a single grid as a unit area, setting the area of a point cloud as an area corresponding to the number of the grids occupied by the point cloud, and setting the area of a contour frame as an area corresponding to the number of the grids contained in the contour frame. And calculating the area ratio of the point cloud in the outline box through the area of the point cloud and the area of the outline box.
Specifically, when the area ratio is large, the contour frame of the target object is determined to reflect the real contour of the target object, and the contour frame is used as the path planning information. And under the condition that the area occupation ratio is smaller, judging that the identified outline frame of the target object cannot reflect the real outline of the target object, and independently using the point cloud as path planning information or using the point cloud and the outline frame as the path planning information.
According to the control method of the robot in the technical scheme, whether the outline frame obtained through image recognition can reflect the real outline of the target object or not can be determined according to the area proportion of the coverage area of the point cloud in the area of the outline frame, and one or a combination of the point cloud and the outline frame is selected as path planning information according to the fact that the outline frame can reflect the real outline of the target object. The accuracy of path planning is further guaranteed. Under the condition that the robot is a sweeping robot, the obstacle avoidance effect can be ensured, and the sweeping effect of the sweeping robot can be further improved.
In the above technical solution, selecting one or a combination of the point cloud and the outline box as the path planning information according to the area ratio includes: taking the outline frame as path planning information under the condition that the area ratio is larger than or equal to a first preset ratio; taking the point cloud as path planning information under the condition that the area ratio is less than or equal to a second preset ratio; taking the point cloud and the outline frame as path planning information under the condition that the area occupation ratio is larger than a second preset occupation ratio and smaller than a first preset occupation ratio; wherein the first preset proportion is larger than the second preset proportion.
In the technical scheme, the area ratio of the coverage area of the point cloud of the target object in the area of the outline frame is calculated, a first preset ratio and a second preset ratio are obtained, and the ratio of the first preset ratio is larger than that of the second preset ratio. And comparing the calculated area ratio with a first preset ratio and a second preset ratio respectively, and determining path planning information in the point cloud and the outline frame according to a comparison result.
Specifically, the quantitative relation between the area ratio and the first preset ratio is judged, under the condition that the area ratio is detected to be larger than or equal to the first preset ratio, a large number of point clouds exist in the outline frame, the outline frame obtained through image recognition is judged to be close to the real outline of the target object, and the outline frame is used as path planning information at the moment.
And under the condition that the area ratio is smaller than the first preset ratio, judging the quantitative relation between the area ratio and the second preset ratio. And under the condition that the area ratio is greater than the second preset ratio, determining that a certain amount of point clouds exist in the outline frame, cutting the outline frame through the point clouds of the target object, wherein the cut outline frame is close to the real outline of the target object, and then taking the outline frame cut according to the point clouds as path planning information.
And under the condition that the area occupation ratio of the comparison result is less than or equal to a second preset occupation ratio, determining that less point clouds exist in the outline frame, judging that the point clouds obtained through image recognition are close to the real outline of the target object, and taking the point clouds as path planning information at the moment.
According to the control method of the robot in the technical scheme, the first preset proportion and the second preset proportion are set, the calculated area proportion is compared with the first preset proportion and the second preset proportion respectively, whether the outline frame obtained through image recognition can reflect the real outline of the target object or not is determined, and one or combination of the point cloud and the outline frame is selected as path planning information. The accuracy of path planning is further guaranteed. Under the condition that the robot is a sweeping robot, the obstacle avoidance effect can be ensured, and the sweeping effect of the sweeping robot can be further improved.
In this technical solution, the value range of the first preset ratio is 65% to 85%, and/or the value range of the second preset ratio is 15% to 35%.
In this technical solution, value ranges are set for the first preset ratio and the second preset ratio; specifically, the first preset ratio ranges from 65% to 85%, and/or the second preset ratio ranges from 15% to 35%.
By setting these value ranges, the control method of the robot constrains the relative magnitudes of the first preset ratio and the second preset ratio and ensures the accuracy of the step of selecting the robot's path planning information.
In the above technical solution, the path planning information includes a point cloud or a contour frame, and the determining of the driving path of the robot according to the path planning information includes: determining first boundary information of a point cloud or a contour box; and determining the driving path according to the first boundary information.
In the technical scheme, when the path planning information only comprises a single point cloud or a single outline frame, the driving path is planned according to the first boundary information of the corresponding point cloud or the first boundary information of the corresponding outline frame.
Under the condition that the path planning information comprises point clouds of a target object, first boundary information of the point clouds of the target object is determined, wherein the first boundary information comprises distribution contour edge information of the point clouds. Under the condition that the path planning information comprises the outline frame of the target object, first boundary information of the outline frame of the target object is determined, wherein the first boundary information comprises boundary information of the outline frame. And after the first boundary information is determined, planning a driving path of the robot according to the first boundary information.
Specifically, when a small number of point clouds are included in a contour frame for different target objects, the edge contour of the target object can be drawn from point cloud data distributed around the edge in the point cloud of the target object, and coordinate data of the edge contour drawn from the point cloud can be used as the first boundary information. When a large amount of point clouds are contained in the outline frame, the border of the outline frame of the target object can be regarded as the edge outline of the target object, and the coordinate data of the border of the outline frame can be used as the first border information.
According to the control method of the robot in the technical scheme, the driving path of the robot is planned through the point cloud of the target object or the first boundary information of the outline frame, and the accuracy of the step of planning the driving path of the robot is guaranteed.
In the above technical solution, the path planning information includes a point cloud and a contour frame, and the determining of the driving path of the robot according to the path planning information includes: determining second boundary information of the point cloud; determining a target area in the outline frame according to the second boundary information; determining third boundary information of the target area; and determining a driving path according to the third boundary information.
According to the technical scheme, under the condition that the path planning information only comprises the point cloud and the outline frame, the outline frame is cut according to the second boundary information of the point cloud, and the driving path is planned according to the third boundary information of the target area obtained through cutting.
Specifically, the robot is a sweeping robot, and under the condition that the area ratio of the point cloud in the outline frame is greater than the second preset ratio and smaller than the first preset ratio, if the driving path is planned only according to the point cloud, obstacle avoidance failure is easily caused, and if the driving path is planned only according to the outline frame, the sweeping effect of the sweeping robot is easily influenced. Therefore, the contour box is selected to be clipped according to the point cloud.
And under the condition that the path planning information comprises the point cloud and the outline frame of the target object, determining second boundary information of the point cloud of the target object, and cutting the outline frame according to the second boundary information.
It should be noted that the point cloud of the target object is distributed inside the outline frame, and the outline frame contains a certain number of point clouds. The point cloud data distributed along the edges of the point cloud can outline part of the edge contour of the target object; the coordinate data of this edge contour is set as the second boundary information. The outline frame is then trimmed according to the second boundary information, so that the border of the trimmed outline frame is closer to the edge contour of the target object, and the coordinate data of the trimmed border can be set as the third boundary information.
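One simple way to realise the trimming described above is sketched below, assuming the point cloud and outline frame are axis-aligned 2D data; the safety margin is an illustrative parameter, not a value from the patent.

```python
from typing import List, Tuple

Point = Tuple[float, float]
Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)


def trim_box_to_cloud(box: Box, cloud: List[Point], margin: float = 0.05) -> Box:
    """Clip the recognised outline frame to the extent of the point cloud.

    The clipped rectangle plays the role of the target area; its corners are
    one possible form of the third boundary information.
    """
    xs = [p[0] for p in cloud]
    ys = [p[1] for p in cloud]
    x_min = max(box[0], min(xs) - margin)
    y_min = max(box[1], min(ys) - margin)
    x_max = min(box[2], max(xs) + margin)
    y_max = min(box[3], max(ys) + margin)
    return x_min, y_min, x_max, y_max
```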
According to the control method of the robot in the technical scheme, the target area inside the outline frame is determined through the second boundary information of the point cloud of the target object, the driving path of the robot is planned according to the third boundary information of the target area, and the accuracy of the step of planning the driving path of the robot is guaranteed under the condition that the outline frame contains a certain number of point clouds.
In the above technical solution, acquiring a point cloud and a contour frame of a target object includes: acquiring a first image set, wherein each first image in the first image set comprises a target object; a first image set is identified, and a point cloud and a contour box are determined.
In the technical scheme, the robot is provided with an image acquisition device, and images including a target object can be acquired in the running process of the robot. In order to improve the accuracy of the acquired point cloud and outline frame of the target object, in the step of identifying the point cloud and outline frame of the target object, a plurality of images including the target object need to be acquired, the set of the images is a first image set, each image in the first image set includes the target object, and the first image set is subjected to image processing, so that the point cloud and outline frame of the target object can be accurately identified.
Specifically, during the running process of the robot, the image acquisition device starts to acquire images and simultaneously identifies the acquired images. And when the target object is detected to be included in the image, taking the image as the image in the first image set, and when the target object is not included in the detected image, filtering the image. After the first image set is acquired, each image in the first image set is identified, a contour frame and a point cloud of a target object in each image are identified, and then a plurality of identified contour frames and a plurality of identified point clouds are subjected to smoothing operation to determine the contour frame and the point cloud of the target object.
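A minimal sketch of this filtering and smoothing step follows; `detect` is a hypothetical stand-in for the visual-recognition routine, and averaging the frame coordinates is just one possible smoothing operation.

```python
from statistics import mean
from typing import Callable, List, Optional, Tuple

Box = Tuple[float, float, float, float]
# A detector returns the outline frame of the target object, or None if the
# target object is not present in the frame (hypothetical interface).
Detector = Callable[[object], Optional[Box]]


def build_first_image_set(frames: List[object], detect: Detector) -> List[object]:
    """Keep only the frames in which the target object is detected."""
    return [f for f in frames if detect(f) is not None]


def smooth_boxes(boxes: List[Box]) -> Box:
    """Average each frame coordinate over the image set (one simple way to
    realise the smoothing operation mentioned above)."""
    return tuple(mean(b[i] for b in boxes) for i in range(4))
```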
According to the control method of the robot, a first image set containing a plurality of images of the target object is acquired, and each image in the set is identified so that the point cloud and the outline frame of the target object are recognized. This ensures the accuracy of the recognized point cloud and outline frame, and hence of the robot's path planning.
In the above technical solution, after identifying the first image set and determining the point cloud and the outline frame of the target object, the method further includes: under the condition that the recognition effect of the point cloud and/or the outline box does not meet the preset condition, acquiring a second image set, wherein the acquisition time of the second image set is earlier than that of the first image set, and each second image in the second image set comprises a target object; a second image set is identified, and a point cloud and a contour box are determined.
In this technical solution, after the point cloud and the outline frame of the target object are identified from the first image set, if the recognition of the point cloud and/or the outline frame does not reach the preset condition, a second image set can be obtained and the outline frame and the point cloud of the target object identified again. The preset condition includes criteria such as the size of the outline frame matching the size of the target object, and the number and position information of the point cloud matching the actual coordinates of the target object.
It should be noted that the second image set is a historical image set stored alongside the acquisition of the first image set; it is retained temporarily for a period of time so that it can be called later. During acquisition of the first image set, the camera may be blocked or the robot may travel over bumps, degrading the image quality of the first set so that the recognized outline frame and point cloud do not reach the preset condition. For this situation, the temporarily stored second image set can be used so that the recognized outline frame and point cloud reach the preset condition.
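A minimal sketch of the temporarily stored second image set, assuming a fixed-length frame buffer; the buffer size is an illustrative choice, not a value from the patent.

```python
from collections import deque

# A short history of frames captured around the time the first image set is
# acquired, kept temporarily so they can be reused if the first set turns out
# to be of poor quality (camera briefly blocked, robot bumping, etc.).
history_frames = deque(maxlen=30)


def on_new_frame(frame) -> None:
    """Store every captured frame in the rolling history buffer."""
    history_frames.append(frame)


def fallback_image_set(recognition_ok: bool, first_set: list) -> list:
    """Use the buffered second image set only when the first set's
    recognition result does not meet the preset condition."""
    if recognition_ok:
        return first_set
    return list(history_frames)
```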
According to the control method of the robot, when the identification result of the first image set is not good, the second image set which is temporarily stored is called, secondary acquisition is not needed under the condition that the second image set can be used, and the identification efficiency of the outline frame and the point cloud of the target object is improved under the condition that the identification effect is ensured.
In the above technical solution, after identifying the first image set and determining the point cloud and the contour frame of the target object, the method further includes: under the condition that the point cloud and/or outline box recognition effect does not meet the preset condition, determining a first position of the robot according to the first image set; controlling the robot to travel to a first position; acquiring a third image set in the running process of the robot, wherein each third image in the third image set comprises a target object; and identifying a third image set, and determining a point cloud and a contour frame.
In this technical solution, when the outline frame and the point cloud identified from the acquired first image set do not reach the preset condition, the robot can be controlled to retreat to the first position and advance again, during which images are re-collected and the process of identifying the outline frame and the point cloud is repeated.
Specifically, a first position of the robot is recorded in advance, the first position being the position information when the first image set was acquired. And under the condition that the outline frame and the point cloud of the target object are judged to be recognized and not reach the preset conditions, controlling the robot to stop advancing, returning to the first position, and restarting to acquire images in the returning process to form a third image set. And after the third image set is acquired, identifying each image in the third image set to identify a contour frame and a point cloud of the target object in each image, and performing smoothing operation on the identified contour frames and the identified point clouds to determine the contour frame and the point cloud of the target object.
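The retreat-and-reacquire procedure could look roughly like the following sketch; `robot` is a hypothetical drive/camera interface (`move_towards`, `at`, `capture`) and `detect` a hypothetical recognition routine, both assumptions made for illustration only.

```python
def reacquire_from_first_position(robot, first_position, detect) -> list:
    """Back up to the recorded first position while collecting a third image
    set, keeping only the frames in which the target object is detected."""
    third_set = []
    while not robot.at(first_position):
        robot.move_towards(first_position)   # drive back toward the first position
        frame = robot.capture()              # capture an image during the return
        if detect(frame) is not None:        # keep frames that contain the target
            third_set.append(frame)
    return third_set
```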
According to the control method of the robot in this technical solution, when the identified outline frame and point cloud of the target object do not reach the preset condition, the robot is controlled to return to the first position while the third image set is collected. The images in the third image set are then processed to identify the outline frame and the point cloud of the target object, ensuring that the point cloud and/or outline frame of the target object reach the preset condition and thereby improving the accuracy of the robot's driving path.
In the above solution, determining the first position of the robot from the first set of images includes: determining relative position information of the robot and the target object according to the first image set; determining a first position based on the relative position information.
In the technical scheme, according to a first image set, a relative position relationship between the robot and a target object is identified, and relative position information is acquired, wherein the relative position information comprises distance information and angle information between the robot and the target object. And determining the first position of the robot according to the relative position information.
Specifically, according to the plurality of images in the first image set, the shooting angle of the robot when the camera shoots the target object when the first image set is collected can be judged, and the relative distance between the camera and the target object can also be judged. The first position of the robot can be determined based on the target object by using the parameters such as the shooting angle and the relative distance.
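A minimal sketch of recovering the first position from the relative distance and angle; the coordinate and angle conventions are assumptions, since the patent only states that distance and angle information are used.

```python
import math


def first_position(target_xy, distance, bearing_rad):
    """Recover the robot position at which the first image set was taken,
    given the target object's position, the camera-to-target distance, and
    the bearing from the robot to the target (in radians)."""
    tx, ty = target_xy
    x = tx - distance * math.cos(bearing_rad)
    y = ty - distance * math.sin(bearing_rad)
    return x, y


# Example: target 1.2 m away, viewed at a 30-degree bearing.
print(first_position((2.0, 1.0), 1.2, math.radians(30)))
```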
According to the control method of the robot in the technical scheme, under the condition that the outline frame and the point cloud of the identified target object do not meet the preset conditions, the first position is determined according to a plurality of images in the first image set, the robot is controlled to return to the first position, the images are collected again, the outline frame and the point cloud of the target object are identified again, and the identification effect of the point cloud and/or the outline frame of the target object is guaranteed.
In the technical scheme, a first running speed of the robot when the robot runs to the first position is less than a second running speed of the robot when the robot runs according to the running path.
In this technical solution, the robot is controlled to travel forward at the second travel speed. And under the condition that the outline frame and the point cloud of the identified target object do not reach the preset conditions, controlling the robot to return to the first position, and controlling the robot to run at the first running speed in the process of returning to the first position. Wherein the first travel speed is less than the second travel speed.
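The speed constraint can be expressed as simply as the following sketch; the numeric speeds are illustrative, since the patent only requires that the first driving speed be lower than the second.

```python
# Illustrative speeds in m/s; only the ordering first < second is required.
SECOND_TRAVEL_SPEED = 0.30   # normal forward travel along the planned path
FIRST_TRAVEL_SPEED = 0.15    # slower speed while returning to the first position


def travel_speed(returning_to_first_position: bool) -> float:
    """Select the driving speed depending on the current driving phase."""
    return FIRST_TRAVEL_SPEED if returning_to_first_position else SECOND_TRAVEL_SPEED


assert travel_speed(True) < travel_speed(False)
```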
The control method of the robot in the technical scheme limits the running speed of the robot returning to the first position. The robot driving process is more stable in the process of re-collecting the image, the image quality of the collected image is guaranteed, and the recognition effect of the point cloud and/or the outline frame is further guaranteed.
According to a second aspect of the present invention, there is provided a control device for a robot, comprising: the acquisition module is used for acquiring a point cloud and a contour frame of a target object; the selection module is used for selecting one item or a combination of the point cloud and the outline frame as path planning information; the determining module is used for determining the driving path of the robot according to the path planning information; and the control module is used for controlling the robot to run according to the running path.
In this technical solution, when a target object obstructs the robot's forward travel during operation, the robot is controlled to take avoiding action: the acquisition module acquires the point cloud and the outline frame of the target object separately. When planning the driving path, the selection module can use the point cloud or the outline frame alone as path planning information, or use the two together. According to the determined path planning information, the determining module plans the driving path of the robot, and the control module controls the robot to drive along the planned path.
It should be noted that the target object includes obstacles of different types, different shapes, and different poses. For example, low objects (carpet, doorsill), irregular objects (wires, clothing), obstacles of a certain classification (pet faeces, liquid stains).
The robot may be a sweeping robot provided with an image acquisition device such as a camera.
The outline box of the target object includes the smallest rectangle of the complete outline of the target object. The outline box can be obtained by means of visual recognition. Specifically, an image including a target object is captured by an image capturing device such as a camera, and a contour frame of the target object is acquired by recognizing the image.
The point cloud of the target object comprises discrete point data of the specific contour of the target object, the point cloud of the target object is distributed in the contour frame, and both the contour frame and the point cloud can be used as path planning information.
It can be understood that, for the same target object, the point cloud of the target object is located inside the outline box.
And under the condition that more point clouds exist in the outline frame, judging that the difference between the recognition effects of taking the point clouds as path planning information and taking the outline frame as the path planning information is smaller. To reduce the amount of computation, the outline box may be used as path planning information.
Under the condition that the point clouds are distributed in a certain area in the outline frame in a centralized manner, namely under the condition that the number of the point clouds in the outline frame is small, the accuracy of path planning is judged to be low only by taking the outline frame as path planning information, so that the point clouds can be selected as the path planning information, or the point clouds and the outline frame can be selected as the path planning information.
In the related art, obstacle avoidance strategies of the sweeping robots are based on single data to plan driving paths. For example, the outline frame of the obstacle is used as the obstacle avoidance planning basis, and in the process of identifying and avoiding the obstacle for the slender obstacle (electric wire), the real outline of the obstacle is only the diagonal line of the outline frame, so that the cleaning coverage rate of the sweeping robot is influenced.
According to the technical scheme, the control device of the robot acquires the point cloud and the outline frame of the target object through the acquisition module, the selection module can select one or a combination of the point cloud and the outline frame as path planning information according to actual requirements, the determination module can quickly and efficiently plan the running path of the robot, the control module controls the robot to run according to the planned running path, and the obstacle avoidance effect in the running process of the robot is further improved.
According to a third aspect of the present invention, there is provided a control device for a robot, comprising a processor and a memory, wherein the memory stores a program or instructions, and the program or instructions, when executed by the processor, implement the steps of the control method for a robot according to any one of the above-mentioned aspects. Therefore, the control device has all the advantages of the control method of the robot in any one of the above technical solutions, and details are not repeated herein.
According to a fourth aspect of the present invention, a readable storage medium is provided, on which a program or instructions are stored, which when executed by a processor implement the control method of the robot according to any one of the above-mentioned technical solutions. Therefore, the readable storage medium has all the advantages of the control method of the robot in any of the above technical solutions, and is not described herein again.
According to a fifth aspect of the present invention, there is provided a robot comprising: the control device of the robot as defined in the second aspect, the control device of the robot as defined in the third aspect, or the readable storage medium as defined in the fourth aspect, therefore, all the advantageous technical effects of the control device of the robot as defined in the second aspect, the control device of the robot as defined in the third aspect, or the readable storage medium as defined in the fourth aspect are achieved, and redundant description is not repeated herein.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 shows one of the flow diagrams of the control method of the robot in the first embodiment of the invention;
fig. 2 shows a second flow chart of a control method of the robot in the first embodiment of the present invention;
fig. 3 shows a third flowchart of a control method of the robot in the first embodiment of the present invention;
fig. 4 shows a fourth flowchart of a control method of the robot in the first embodiment of the invention;
fig. 5 shows a fifth flowchart of a control method of the robot in the first embodiment of the invention;
fig. 6 shows a sixth flowchart of a control method of the robot in the first embodiment of the invention;
fig. 7 shows a seventh flowchart of a control method of the robot in the first embodiment of the invention;
fig. 8 shows an eighth flowchart of the control method of the robot in the first embodiment of the present invention;
fig. 9 shows one of effect diagrams of a control method of the robot in the first embodiment of the present invention;
fig. 10 shows a second effect diagram of the control method of the robot in the first embodiment of the present invention;
fig. 11 shows a third effect diagram of the control method of the robot in the first embodiment of the present invention;
fig. 12 is a block diagram showing a configuration of a control apparatus of a robot in a second embodiment of the present invention;
fig. 13 is a block diagram showing a configuration of a control device of a robot according to a third embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
The following describes in detail a control method, an apparatus, a readable storage medium, and a robot provided in the embodiments of the present application with reference to fig. 1 to 13 through specific embodiments and application scenarios thereof.
The first embodiment is as follows:
as shown in fig. 1, a first embodiment of the present invention provides a control method of a robot, including:
Step 102, acquiring a point cloud and a contour frame of a target object;
Step 104, selecting one item or a combination of the point cloud and the outline box as path planning information;
Step 106, determining a driving path of the robot according to the path planning information;
Step 108, controlling the robot to run according to the running path.
In this embodiment, during the running process of the robot, a situation that the robot is hindered from moving forward by a target object may occur, and the robot is controlled to perform an avoiding operation in view of the situation, so as to respectively obtain the point cloud and the outline frame of the target object. In the process of planning the driving path, the point cloud or the outline box can be used as path planning information independently, and the point cloud and the outline box can also be used as path planning information. And planning the running path of the robot according to the determined path planning information, and controlling the robot to run according to the planned running path.
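The four steps can be strung together as in the following sketch; the four callables are hypothetical stand-ins for the routines described in this embodiment.

```python
def control_robot(acquire_cloud_and_box, select_planning_info, plan_path, drive):
    """End-to-end sketch of steps 102-108 of this embodiment."""
    cloud, box = acquire_cloud_and_box()               # step 102
    planning_info = select_planning_info(cloud, box)   # step 104
    path = plan_path(planning_info)                    # step 106
    drive(path)                                        # step 108
```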
It should be noted that the target object includes obstacles of different types, different shapes, and different poses. For example, low objects (carpet, doorsill), irregular objects (wires, clothing), obstacles of a certain classification (pet faeces, liquid stains).
The robot may be a sweeping robot provided with an image acquisition device such as a camera.
The outline box of the target object includes the smallest rectangle of the complete outline of the target object. The outline box can be obtained by means of visual recognition. Specifically, an image including a target object is captured by an image capturing device such as a camera, and a contour frame of the target object is acquired by recognizing the image.
The point cloud of the target object comprises discrete point data of the specific contour of the target object, the point cloud of the target object is distributed in the contour frame, and both the contour frame and the point cloud can be used as path planning information.
It will be appreciated that for the same target object, the point cloud of the target object is located inside the outline box.
And under the condition that more point clouds exist in the outline frame, judging that the difference between the recognition effects of taking the point clouds as path planning information and taking the outline frame as the path planning information is smaller. To reduce the amount of computation, the outline box may be used as path planning information.
Under the condition that the point clouds are distributed in a certain area in the outline frame in a centralized mode, namely under the condition that the number of the point clouds in the outline frame is small, the accuracy of path planning is low by only taking the outline frame as path planning information, and therefore the point clouds can be selected as the path planning information, or the point clouds and the outline frame can be selected as the path planning information.
In some embodiments, the target object is a toy building block on the ground. Because the building block has a regular shape, when the identified point cloud is scattered throughout the outline frame with many points close to the edges of the frame, the outline frame is used as path planning information to plan the driving path of the robot, and the robot is then controlled to drive along the planned path to avoid the target object.
In other embodiments, the target object is the power line of a household appliance; its outline frame is large, but the point cloud is concentrated in only part of the frame. In this case, if the driving path were planned using the outline frame alone as path planning information, it could differ greatly from the path that could actually be driven. Using the point cloud alone, or the point cloud together with the outline frame, as path planning information therefore improves the planning accuracy. Specifically, when the point cloud and the outline frame are used together, the outline frame is trimmed according to the point cloud of the power line, and the path is then planned according to the trimmed outline frame.
In the related art, the obstacle avoidance strategy of a sweeping robot plans the driving path based on a single type of data. For example, when the outline frame of an obstacle is used as the sole basis for obstacle avoidance planning, a long, thin obstacle such as an electric wire may in reality occupy only the diagonal of its outline frame, so avoiding the whole frame reduces the cleaning coverage of the sweeping robot.
According to the control method of the robot in the embodiment, the point cloud and the outline frame of the target object are obtained, and one or a combination of the point cloud and the outline frame can be selected as the path planning information according to actual requirements, so that the driving path of the robot is planned quickly and efficiently, the robot is controlled to drive according to the planned driving path, and the obstacle avoidance effect in the driving process of the robot is further improved.
As shown in fig. 2, in any of the above embodiments, selecting one or a combination of the point cloud and the outline box as the path planning information includes:
step 202, determining the area ratio of the point cloud in the outline frame;
and step 204, selecting one item or a combination of the point cloud and the outline box as path planning information according to the area ratio.
In this embodiment, the data of the target object is subjected to operation processing, the area data of the point cloud and the area data of the outline box are subjected to operation processing, the area of the outline box and the distribution outline area of the point cloud are calculated, and the area ratio of the distribution outline area of the point cloud in the area of the outline box is further calculated. And according to the numerical value of the area ratio, combining one item of data in the point cloud and the outline box or the data of the point cloud and the outline box to be used as path planning information for planning a path.
In the process of determining the area ratio of the point cloud in the outline box, the area of the point cloud and the area of the outline box need to be acquired. The method comprises the steps of dividing an acquired image into a plurality of grids with the same area, setting the area of a single grid as a unit area, setting the area of a point cloud as an area corresponding to the number of the grids occupied by the point cloud, and setting the area of a contour frame as an area corresponding to the number of the grids contained in the contour frame. And calculating the area ratio of the point cloud in the outline box through the area of the point cloud and the area of the outline box.
Specifically, when the area ratio is large, the contour frame of the target object is determined to reflect the real contour of the target object, and the contour frame is used as the path planning information. And under the condition that the area occupation ratio is smaller, judging that the identified outline frame of the target object cannot reflect the real outline of the target object, and independently using the point cloud as path planning information or using the point cloud and the outline frame as the path planning information.
In some embodiments, the target object is the power line of a household appliance. An image containing the target object is divided into a plurality of grids of equal area; the point cloud of the target object occupies 9 grids, so its area is 9 unit areas, and the outline frame contains 20 grids, so its area is 20 unit areas. The area ratio of the target object is calculated from the point cloud area and the outline frame area, and according to this area ratio the point cloud of the target object is used as path planning information for planning the path.
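A minimal sketch of the grid-based area-ratio computation, mirroring the 9-grid point cloud and 20-grid outline frame of this example; representing grid cells as sets of (row, column) indices is an illustrative choice.

```python
from typing import Set, Tuple

Cell = Tuple[int, int]  # grid cell index (row, column)


def area_ratio(cloud_cells: Set[Cell], box_cells: Set[Cell]) -> float:
    """Area ratio of the point cloud inside the outline frame, with each
    grid cell counted as one unit area."""
    return len(cloud_cells & box_cells) / len(box_cells)


# Mirrors the power-line example above: the point cloud occupies 9 cells of a
# 20-cell outline frame, giving an area ratio of 0.45.
box_cells = {(r, c) for r in range(4) for c in range(5)}    # 20 unit areas
cloud_cells = {(r, c) for r in range(3) for c in range(3)}  # 9 unit areas
print(area_ratio(cloud_cells, box_cells))  # 0.45
```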
The control method of the robot in this embodiment can determine whether the outline frame obtained by image recognition can reflect the real outline of the target object according to the area proportion of the coverage area of the point cloud in the area of the outline frame, and accordingly selects one or a combination of the point cloud and the outline frame as the path planning information. The accuracy of path planning is further guaranteed. Under the condition that the robot is a sweeping robot, the obstacle avoidance effect can be ensured, and the sweeping effect of the sweeping robot can be further improved.
In any of the above embodiments, selecting one or a combination of the point cloud and the outline box as the path planning information according to the area ratio includes:
taking the outline frame as path planning information under the condition that the area ratio is larger than or equal to a first preset ratio;
taking the point cloud as path planning information under the condition that the area ratio is less than or equal to a second preset ratio;
taking the point cloud and the outline frame as path planning information under the condition that the area occupation ratio is larger than a second preset occupation ratio and smaller than a first preset occupation ratio;
wherein the first preset proportion is larger than the second preset proportion.
In the embodiment, the area ratio of the coverage area of the point cloud of the target object in the area of the outline frame is calculated, a first preset ratio and a second preset ratio are obtained, and the ratio of the first preset ratio is greater than the ratio of the second preset ratio. And comparing the calculated area ratio with a first preset ratio and a second preset ratio respectively, and determining path planning information in the point cloud and the outline frame according to a comparison result.
Specifically, the quantitative relation between the area ratio and the first preset ratio is judged, under the condition that the area ratio is detected to be larger than or equal to the first preset ratio, a large number of point clouds exist in the outline frame, the outline frame obtained through image recognition is judged to be close to the real outline of the target object, and the outline frame is used as path planning information at the moment.
And under the condition that the area ratio is smaller than the first preset ratio, judging the quantitative relation between the area ratio and the second preset ratio. And under the condition that the area ratio is greater than the second preset ratio, determining that a certain amount of point clouds exist in the outline frame, cutting the outline frame through the point clouds of the target object, wherein the cut outline frame is close to the real outline of the target object, and then taking the outline frame cut according to the point clouds as path planning information.
And under the condition that the area occupation ratio of the comparison result is less than or equal to a second preset occupation ratio, determining that less point clouds exist in the outline frame, judging that the point clouds obtained through image recognition are close to the real outline of the target object, and taking the point clouds as path planning information at the moment.
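The three-way selection can be sketched as follows; the 70% and 30% thresholds are merely example values within the ranges given below, not fixed by the patent.

```python
FIRST_PRESET_RATIO = 0.70   # e.g. a value within the 65%-85% range
SECOND_PRESET_RATIO = 0.30  # e.g. a value within the 15%-35% range


def select_planning_info(area_ratio: float) -> str:
    """Choose the path planning information from the computed area ratio,
    following the three cases described above."""
    if area_ratio >= FIRST_PRESET_RATIO:
        return "outline_frame"                  # frame is close to the real contour
    if area_ratio <= SECOND_PRESET_RATIO:
        return "point_cloud"                    # sparse cloud; frame is too coarse
    return "point_cloud_and_outline_frame"      # trim the frame with the cloud


print(select_planning_info(0.25))  # point_cloud
print(select_planning_info(0.90))  # outline_frame
print(select_planning_info(0.50))  # point_cloud_and_outline_frame
```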
In some embodiments, the first preset ratio is set to 70% and the second preset ratio is set to 30%. The target object is the power line of a household appliance; the area of its outline frame is calculated to be 80 unit areas and the distribution area of its point cloud to be 20 unit areas, so the area ratio of the point cloud in the outline frame is 25%. Since the area ratio is less than or equal to the second preset ratio, it is determined that few point clouds exist in the outline frame, the point cloud obtained through image recognition is judged to be close to the real outline of the target object, and the point cloud of the target object is used as path planning information for planning the path.
In some other embodiments, the first preset ratio is set to 70% and the second preset ratio is set to 30%. The target object is a toy building block on the ground; the area of its outline frame is calculated to be 20 unit areas and the distribution area of its point cloud to be 18 unit areas, so the area ratio of the point cloud in the outline frame is 90%. Since the area ratio is greater than or equal to the first preset ratio, it is determined that a large number of point clouds exist in the outline frame, the outline frame obtained through image recognition is judged to be close to the real outline of the target object, and the outline frame of the target object is used as path planning information for planning the path.
In some other embodiments, the first preset ratio is set to 70% and the second preset ratio is set to 30%. The target object is a doorsill; the area of its outline frame is calculated to be 80 unit areas and the distribution area of its point cloud to be 40 unit areas, so the area ratio of the point cloud in the outline frame is 50%. Since the area ratio is greater than the second preset ratio and smaller than the first preset ratio, a certain number of point clouds exist in the outline frame; the outline frame is trimmed according to the point cloud of the target object so that the trimmed frame is close to the real outline of the target object, and the trimmed outline frame is used as path planning information.
In the control method of the robot in this embodiment, the first preset proportion and the second preset proportion are set, the calculated area proportion is compared with the first preset proportion and the second preset proportion, whether the contour frame obtained through image recognition can reflect the real contour of the target object is determined, and accordingly, one or a combination of the selected point cloud and the contour frame is selected as the path planning information. The accuracy of path planning is further guaranteed. Under the condition that the robot is a sweeping robot, the obstacle avoidance effect can be ensured, and the sweeping effect of the sweeping robot can be further improved.
In any of the above embodiments, the first preset ratio ranges from 65% to 85%; and/or the second preset ratio ranges from 15% to 35%.
In this embodiment, value ranges are set for the first preset ratio and the second preset ratio: specifically, the first preset ratio is set to range from 65% to 85%, and the second preset ratio from 15% to 35%.
In some embodiments, the first preset ratio is set to 70% and the second preset ratio is set to 30%.
In the control method of the robot in this embodiment, setting these value ranges constrains the relative magnitudes of the first preset ratio and the second preset ratio and ensures the accuracy of the step of selecting the robot's path planning information.
As shown in fig. 3, in any of the above embodiments, the determining the driving path of the robot according to the path planning information includes:
step 302, determining first boundary information of a point cloud or a contour frame;
and step 304, determining a driving path according to the first boundary information.
In this embodiment, in the case that the path planning information includes only the point cloud or only the outline box, the driving path is planned according to only the first boundary information of the corresponding point cloud or the first boundary information of the corresponding outline box.
Under the condition that the path planning information comprises point clouds of a target object, first boundary information of the point clouds of the target object is determined, wherein the first boundary information comprises distribution contour edge information of the point clouds. Under the condition that the path planning information comprises the outline frame of the target object, first boundary information of the outline frame of the target object is determined, wherein the first boundary information comprises boundary information of the outline frame. And after the first boundary information is determined, planning a driving path of the robot according to the first boundary information.
Specifically, for different target objects, when the outline frame contains only a small number of point clouds, the edge contour of the target object can be outlined by the point cloud data distributed at the edge of the point cloud of the target object, and the coordinate data of this edge contour can be used as the first boundary information. When the outline frame contains a large number of point clouds, the border of the outline frame of the target object can be regarded as the edge contour of the target object, and the coordinate data of the border of the outline frame can be used as the first boundary information.
As shown in fig. 9, when the area ratio of the point cloud 904 in the outline box 902 is greater than the first preset ratio, it is determined that the outline box 902 of the target object contains a large number of point clouds 904. The border of the outline box 902 can be regarded as the edge contour of the target object, and the coordinate data of the border of the outline box 902 can be set as the first boundary information. The path is planned according to the first boundary information determined by the outline box 902, and the planned travel path is located outside the border of the outline box 902. As can be seen, this part of the planned path lies outside the outline box 902; that is, when the area ratio of the point cloud 904 in the outline box 902 is relatively large, planning the travel path according to the first boundary information determined by the outline box 902 is more accurate.
As shown in fig. 10, when the area ratio of the point cloud 1004 in the outline frame 1002 is smaller than the second preset ratio, it is determined that only a small number of point clouds 1004 are contained in the outline frame 1002 of the target object. The edge contour of the target object can be outlined by the point cloud data distributed at the edge of the point cloud 1004, and the coordinate data of this edge contour can be set as the first boundary information. The path is planned according to the first boundary information determined by the point cloud 1004, and the planned travel path is located at the edge of the point cloud 1004. As can be seen, this part of the planned path lies inside the outline frame 1002; that is, when the area of the point cloud 1004 in the outline frame 1002 is relatively small, planning the travel path according to the first boundary information of the point cloud 1004 is more accurate.
According to the control method of the robot in the embodiment, the driving path of the robot is planned through the point cloud of the target object or the first boundary information of the outline frame, and the accuracy of the step of planning the driving path of the robot is guaranteed.
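For concreteness, a sketch of how the first boundary information might be formed is given below; the planar coordinate layout, the use of scipy's convex hull as a stand-in for the edge contour of the point cloud, and the function names are assumptions, not the embodiment's implementation.

```python
import numpy as np
from scipy.spatial import ConvexHull  # assumed available; approximates the edge contour

def first_boundary(planning_info: str, points: np.ndarray, frame: tuple) -> np.ndarray:
    """Return boundary coordinates (N x 2) used for planning the driving path."""
    x_min, y_min, x_max, y_max = frame
    if planning_info == "outline_frame":
        # Border of the outline frame taken as the edge contour of the target object.
        return np.array([[x_min, y_min], [x_max, y_min],
                         [x_max, y_max], [x_min, y_max]])
    # Point-cloud case: points lying on the hull stand in for the edge contour.
    hull = ConvexHull(points)
    return points[hull.vertices]

cloud = np.array([[1.0, 1.0], [4.0, 1.5], [5.0, 3.0], [2.0, 4.0]])
print(first_boundary("point_cloud", cloud, (0.0, 0.0, 10.0, 8.0)))
```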
As shown in fig. 4, in any of the above embodiments, the determining the driving path of the robot according to the path planning information includes:
step 402, determining second boundary information of the point cloud;
step 404, determining a target area in the outline frame according to the second boundary information;
step 406, determining third boundary information of the target area;
and step 408, determining a driving path according to the third boundary information.
In this embodiment, when the path planning information includes both the point cloud and the outline frame, the outline frame is cut according to the second boundary information of the point cloud, and the driving path is planned according to the third boundary information of the target area obtained by the cutting.
Specifically, taking a sweeping robot as an example, when the area ratio of the point cloud in the outline frame is greater than the second preset ratio and smaller than the first preset ratio, planning the driving path only according to the point cloud easily causes obstacle avoidance failure, while planning the driving path only according to the outline frame easily affects the sweeping effect of the sweeping robot. Therefore, the outline frame is cut according to the point cloud.
And under the condition that the path planning information comprises the point cloud and the outline frame of the target object, determining second boundary information of the point cloud of the target object, and cutting the outline frame according to the second boundary information.
It should be noted that the point cloud of the target object is distributed inside the outline frame. When the outline frame contains a certain number of point clouds, the point cloud data distributed at the edge of the point cloud of the target object can outline part of the edge outline of the target object. The coordinate data of this edge outline is set as the second boundary information, and the outline frame is then trimmed by the second boundary information; the border of the trimmed outline frame is closer to the edge outline of the target object, and the coordinate data of the border of the outline frame trimmed according to the point cloud can be set as the third boundary information.
As shown in fig. 11, in the case that the area ratio of the point cloud 1104 in the outline box 1102 is smaller than the first preset ratio and larger than the second preset ratio, it is determined that a certain number of point clouds 1104 are contained in the outline box 1102 of the target object, and the point cloud data distributed at the edge of the point cloud 1104 can outline part of the edge contour of the target object. The coordinate data of this edge contour is set as the second boundary information, and the outline box 1102 is cut by the second boundary information; the border of the cut outline box 1102 is closer to the edge contour of the target object, and the coordinate data of the border of the cut outline box 1102 can be set as the third boundary information. The path is planned according to the third boundary information jointly determined by the point cloud 1104 and the outline box 1102; part of the planned travel path is located at the edge of the point cloud 1104, and part of it is located outside the border of the outline box 1102. As can be seen, in the case that a certain number of point clouds 1104 are contained in the outline box 1102, planning the travel path according to the third boundary information jointly determined by the point cloud 1104 and the outline box 1102 is more accurate.
The control method of the robot in the embodiment determines the target area inside the outline frame according to the second boundary information of the point cloud of the target object, plans the driving path of the robot according to the third boundary information of the target area, and ensures the accuracy of the step of planning the driving path of the robot under the condition that the outline frame contains a certain number of point clouds.
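A minimal sketch of the cutting step is shown below, assuming planar axis-aligned geometry; taking the extent of the edge points as the second boundary information and the intersection rectangle as the target area is a simplification introduced only for illustration.

```python
import numpy as np

def cut_frame_with_cloud(points: np.ndarray, frame: tuple) -> tuple:
    """Shrink the outline frame to the region actually covered by the point cloud."""
    x_min, y_min, x_max, y_max = frame
    # Second boundary information: extent of the edge points of the point cloud.
    cx_min, cy_min = points.min(axis=0)
    cx_max, cy_max = points.max(axis=0)
    # Target area inside the frame; its border is the third boundary information.
    return (max(x_min, cx_min), max(y_min, cy_min),
            min(x_max, cx_max), min(y_max, cy_max))

cloud = np.array([[1.0, 1.0], [4.0, 1.5], [5.0, 3.0], [2.0, 4.0]])
print(cut_frame_with_cloud(cloud, (0.0, 0.0, 10.0, 8.0)))  # cut frame: (1.0, 1.0, 5.0, 4.0)
```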
As shown in fig. 5, in any of the above embodiments, acquiring the point cloud and the outline frame of the target object includes:
step 502, acquiring a first image set, wherein each first image in the first image set comprises a target object;
at step 504, a first image set is identified, and a point cloud and a contour box are determined.
In this embodiment, the robot is provided with an image capturing device capable of capturing an image including the target object while the robot is traveling. In order to improve the accuracy of the acquired point cloud and contour frame of the target object, in the step of identifying the point cloud and contour frame of the target object, a plurality of images including the target object need to be acquired, the set of the images is a first image set, each image in the first image set includes the target object, and the first image set is subjected to image processing, so that the point cloud and contour frame of the target object can be accurately identified.
Specifically, during the running process of the robot, the image acquisition device starts to acquire images and simultaneously identifies the acquired images. And when the target object is detected to be included in the image, taking the image as the image in the first image set, and when the target object is not detected to be included in the image, filtering out the image. After the first image set is acquired, each image in the first image set is identified, a contour frame and a point cloud of a target object in each image are identified, and then the identified contour frames and the identified point clouds are subjected to smoothing operation to determine the contour frame and the point cloud of the target object.
According to the control method of the robot in this embodiment, a first image set comprising a plurality of images that each include the target object is obtained, the plurality of images in the first image set are identified, and the point cloud and the contour frame of the target object are identified, which guarantees the accuracy of identifying the point cloud and the contour frame and further guarantees the accuracy of robot path planning.
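The smoothing operation mentioned above could, for instance, be an averaging of the per-frame results, as in the sketch below; the detector that produces the per-frame contour frames and point clouds is assumed to exist and is not shown.

```python
import numpy as np

def smooth_contour_frames(per_frame_frames):
    """per_frame_frames: one (x_min, y_min, x_max, y_max) tuple per first image."""
    frames = np.array(per_frame_frames, dtype=float)
    return tuple(frames.mean(axis=0))        # element-wise average over the image set

def smooth_point_cloud(per_frame_clouds):
    """Merge the per-frame clouds and drop isolated points far from the bulk."""
    merged = np.vstack(per_frame_clouds)
    centre = merged.mean(axis=0)
    dist = np.linalg.norm(merged - centre, axis=1)
    return merged[dist <= dist.mean() + 2.0 * dist.std()]

print(smooth_contour_frames([(0, 0, 10, 8), (0.2, 0.1, 10.3, 7.9)]))
```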
As shown in fig. 6, in any of the above embodiments, after identifying the first image set and determining the point cloud and the outline box of the target object, the method further includes:
step 602, under the condition that the identification effect of the point cloud and/or the outline box does not meet a preset condition, acquiring a second image set, wherein the acquisition time of the second image set is earlier than that of the first image set, and each second image in the second image set comprises a target object;
step 604, identify the second image set, determine the point cloud and outline box.
In this embodiment, after the point cloud and the outline frame of the target object are identified according to the first image set, if the identification effect of the point cloud and/or the outline frame does not meet the preset condition, the second image set may be obtained, and the outline frame and the point cloud of the target object may be identified again. The preset condition includes that the frame size of the outline frame meets a standard and that the number and the position information of the point clouds are accurate; for example, the size of the outline frame is consistent with the size of the target object, and the position information of the point cloud is consistent with the actual coordinate information of the target object.
It should be noted that the second image set is another history image set saved before the first image set is acquired, and it is temporarily stored for a period of time for subsequent calling. In the process of acquiring the first image set, conditions such as camera occlusion or bumping of the robot during driving may occur, so that the image quality in the first image set is poor and the identified outline frame and point cloud of the target object consequently fail to reach the preset conditions. For this situation, the temporarily stored second image set can be called, so that the re-identified outline frame and point cloud can reach the preset conditions.
According to the control method of the robot, when the identification result of the first image set is poor, the temporarily stored second image set is called; secondary acquisition is not needed when the second image set is usable, and the identification efficiency of the outline frame and the point cloud of the target object is improved while the identification effect is ensured.
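The fallback to the temporarily stored second image set could be organised as in the sketch below; the buffer size and the quality check passed in as a callable are assumptions made for illustration.

```python
from collections import deque

history_buffer = deque(maxlen=5)  # temporarily stored earlier image sets

def choose_image_set(first_image_set, meets_preset_condition):
    """Return the image set to identify; fall back to the newest buffered set."""
    if meets_preset_condition(first_image_set):
        return first_image_set
    if history_buffer:
        return history_buffer[-1]   # second image set, acquired earlier than the first
    return first_image_set          # nothing buffered: keep the original set
```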
As shown in fig. 7, in any of the above embodiments, after identifying the first image set and determining the point cloud and the outline box of the target object, the method further includes:
step 702, under the condition that the recognition effect of the point cloud and/or the outline box does not meet the preset condition, determining a first position of the robot according to the first image set;
step 704, controlling the robot to travel to a first position;
step 706, in the process of the robot driving, a third image set is obtained, and each third image in the third image set comprises a target object;
at step 708, a third image set is identified, and a point cloud and a contour box are determined.
In this embodiment, according to the acquired first image set, when the identified outline frame and point cloud of the target object do not meet the preset conditions, the robot may be controlled to retreat to the first position, the robot may be controlled to advance again, the image may be acquired again, and the process of identifying the outline frame and the point cloud may be performed again.
Specifically, a first position of the robot is recorded in advance, the first position being the position information when the first image set was acquired. And under the condition that the outline frame and the point cloud of the target object are judged to be recognized and not reach the preset conditions, controlling the robot to stop advancing, returning to the first position, and restarting to acquire images in the returning process to form a third image set. And after the third image set is acquired, identifying each image in the third image set to identify a contour frame and a point cloud of the target object in each image, and performing smoothing operation on the identified contour frames and the identified point clouds to determine the contour frame and the point cloud of the target object.
In the control method of the robot in this embodiment, under the condition that the point cloud and/or the contour frame of the identified target object do not meet the preset condition, the robot is controlled to return to the first position, the third image set is acquired at the same time, the images in the third image set are subjected to image identification processing, the contour frame and the point cloud of the target object are identified, the identification effect of the point cloud and/or the contour frame of the target object is ensured to meet the preset condition, and the accuracy of the robot driving path is further improved.
As shown in fig. 8, in any of the above embodiments, determining the first position of the robot from the first set of images comprises:
step 802, determining relative position information of the robot and the target object according to the first image set;
step 804, determining a first position according to the relative position information.
In this embodiment, a relative position relationship between the robot and the target object is identified according to the first image set, and relative position information is obtained, where the relative position information includes distance information and angle information between the robot and the target object. And determining the first position of the robot according to the relative position information.
Specifically, according to the plurality of images in the first image set, the shooting angle at which the robot shot the target object and the relative distance between the camera and the target object when the first image set was collected can be determined. Using parameters such as the shooting angle and the relative distance, the first position of the robot can be determined with the target object as a reference.
In the control method of the robot in this embodiment, when the identified contour frame and point cloud of the target object do not meet the preset condition, the first position is determined according to the multiple images in the first image set, the robot is controlled to return to the first position, the images are collected again, the contour frame and the point cloud of the target object are identified again, and the identification effect of the point cloud and/or the contour frame of the target object is ensured.
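As an illustration of recovering the first position from the relative position information, a planar-geometry sketch follows; the coordinate convention and parameter names are assumptions for the example.

```python
import math

def first_position(target_xy, distance, bearing_rad):
    """Position from which the target object was observed at the given range and bearing."""
    tx, ty = target_xy
    # Step back from the target object along the observation direction.
    return (tx - distance * math.cos(bearing_rad),
            ty - distance * math.sin(bearing_rad))

# Example: the target object at (2.0, 1.0) was seen 0.8 m away at a 30-degree bearing.
print(first_position((2.0, 1.0), 0.8, math.radians(30)))
```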
In any of the above embodiments, the first travel speed at which the robot travels to the first position is less than the second travel speed at which the robot travels along the travel path.
In this embodiment, the robot is controlled to travel forward at the second travel speed. And under the condition that the outline frame and the point cloud of the identified target object do not reach the preset conditions, controlling the robot to return to the first position, and controlling the robot to run at the first running speed in the process of returning to the first position. Wherein the first travel speed is less than the second travel speed.
In the control method of the robot in this embodiment, the travel speed at which the robot returns to the first position is limited. The robot therefore travels more stably while the images are re-acquired, the image quality of the acquired images is ensured, and the identification effect of the point cloud and/or the outline frame is further ensured.
Example two:
as shown in fig. 12, a second embodiment of the present invention provides a control device for a robot, the control device 1200 for a robot including:
an obtaining module 1202, configured to obtain a point cloud and a contour frame of a target object;
a selection module 1204, configured to select one or a combination of a point cloud and a contour box as path planning information;
a determining module 1206, configured to determine a driving path of the robot according to the path planning information;
and the control module 1208 is used for controlling the robot to run according to the running path.
In this embodiment, during the running process of the robot, a situation may occur in which the target object hinders the robot from advancing. In response to this situation, the robot is controlled to perform an avoidance operation, and the acquisition module 1202 acquires the point cloud and the contour frame of the target object. In the process of planning a driving path, the selection module 1204 may use the point cloud or the outline box alone as the path planning information, or use the point cloud and the outline box together as the path planning information. According to the determined path planning information, the determining module 1206 plans a running path of the robot, and the control module 1208 controls the robot to run according to the planned running path.
It should be noted that the target object includes obstacles of different types, shapes and poses, for example, low objects (a carpet, a doorsill), irregular objects (wires, clothing) and obstacles of particular categories (pet faeces, liquid stains).
The robot may be a sweeping robot, in which an image acquisition device such as a camera is arranged.
The outline box of the target object is the smallest rectangle containing the complete outline of the target object. The outline box can be obtained by means of visual recognition. Specifically, an image including the target object is captured by an image capturing device such as a camera, and the contour frame of the target object is acquired by recognizing the image.
The point cloud of the target object comprises discrete point data of the specific contour of the target object, the point cloud of the target object is distributed in the contour frame, and both the contour frame and the point cloud can be used as path planning information.
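To illustrate the relationship between the two data items, the sketch below builds the smallest axis-aligned rectangle around a set of planar points; treating the outline box as axis-aligned is a simplification introduced for the example.

```python
import numpy as np

def outline_box_of(points: np.ndarray) -> tuple:
    """Smallest axis-aligned rectangle containing all points of the contour."""
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    return (x_min, y_min, x_max, y_max)

cloud = np.array([[0.5, 0.2], [1.8, 0.9], [1.1, 1.6]])
print(outline_box_of(cloud))  # every point of the cloud lies inside this box
```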
It will be appreciated that for the same target object, the point cloud of the target object is located inside the outline box.
Under the condition that a large number of point clouds exist in the outline frame, it is judged that the difference between the recognition effect of using the point cloud as the path planning information and that of using the outline frame as the path planning information is small. To reduce the amount of computation, the outline box may be used as the path planning information.
Under the condition that the point clouds are distributed in a certain area in the outline frame in a centralized manner, namely under the condition that the number of the point clouds in the outline frame is small, the accuracy of path planning is judged to be low only by taking the outline frame as path planning information, so that the point clouds can be selected as the path planning information, or the point clouds and the outline frame can be selected as the path planning information.
In some embodiments, the target object is a building block toy on the ground. Due to the regular shape of the building block toy, when the point cloud of the building block toy is identified as being distributed throughout the outline frame and many point clouds are close to the edge of the outline frame, the obtained outline frame is used as the path planning information to plan the driving path of the robot, and the robot is then controlled to drive according to the planned driving path to avoid the target object.
In other embodiments, the target object is a power line of a household appliance; the outline frame of the power line is large, but the point cloud is concentrated in a part of the outline frame. In this case, if the driving path is planned using the outline box as the path planning information, the driving path may differ greatly from the path that can actually be driven. Therefore, using the point cloud alone as the path planning information, or using the point cloud and the outline frame together as the path planning information, can improve the accuracy of path planning. Specifically, when the point cloud and the outline frame are used together as the path planning information, the outline frame is cut according to the point cloud of the power line, and the path is then planned according to the cut outline frame.
In the related art, the obstacle avoidance strategy of a sweeping robot plans the driving path based on a single type of data. For example, when the outline frame of an obstacle is used as the basis for obstacle avoidance planning, in the process of identifying and avoiding a slender obstacle such as an electric wire, the real outline of the obstacle may occupy only the diagonal of the outline frame, which affects the cleaning coverage rate of the sweeping robot.
The control device of the robot in this embodiment obtains the point cloud and the contour frame of the target object through the obtaining module 1202, the selecting module 1204 can select one or a combination of the point cloud and the contour frame as path planning information according to actual requirements, the determining module 1206 determines a driving path of the robot quickly and efficiently, and the control module 1208 controls the robot to drive according to the planned driving path, so that the obstacle avoidance effect in the driving process of the robot is further improved.
In any of the above embodiments, the control device 1200 of the robot includes:
a determining module 1206 for determining an area ratio of the point cloud in the outline frame;
and a selecting module 1204, configured to select one or a combination of the point cloud and the outline box as path planning information according to the area ratio.
In this embodiment, the determining module 1206 performs calculation on the data of the target object: it calculates the area of the outline frame and the distribution outline area of the point cloud, and further calculates the area ratio of the distribution outline area of the point cloud in the area of the outline frame. According to the value of the area ratio, the selection module 1204 uses one item of the point cloud and the outline box, or a combination of the two, as the path planning information for planning the path.
In the process of determining the area ratio of the point cloud in the outline box, the area of the point cloud and the area of the outline box need to be acquired. The method comprises the steps of dividing an acquired image into a plurality of grids with the same area, setting the area of a single grid as a unit area, setting the area of a point cloud as an area corresponding to the number of the grids occupied by the point cloud, and setting the area of a contour frame as an area corresponding to the number of the grids contained in the contour frame. And calculating the area proportion of the point cloud in the outline box through the area of the point cloud and the area of the outline box.
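The grid-based measurement described above might be sketched as follows; the cell size and the way a point is mapped to a cell are assumptions made for the example.

```python
import numpy as np

def area_ratio_by_grid(points: np.ndarray, frame: tuple, cell: float = 1.0) -> float:
    """Point cloud area divided by outline frame area, both counted in grid cells."""
    x_min, y_min, x_max, y_max = frame
    cols = int(np.ceil((x_max - x_min) / cell))
    rows = int(np.ceil((y_max - y_min) / cell))
    occupied = {(int((x - x_min) // cell), int((y - y_min) // cell))
                for x, y in points}              # cells covered by the point cloud
    return len(occupied) / (rows * cols)         # occupied cells over frame cells

cloud = np.array([[0.5, 0.5], [1.5, 0.5], [2.5, 1.5]])
print(area_ratio_by_grid(cloud, (0.0, 0.0, 5.0, 4.0)))  # 3 of 20 cells -> 0.15
```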
Specifically, when the area ratio is large, the contour frame of the target object is determined to reflect the real contour of the target object, and the contour frame is used as the path planning information. And under the condition that the area occupation ratio is smaller, judging that the identified outline frame of the target object cannot reflect the real outline of the target object, and independently using the point cloud as path planning information or using the point cloud and the outline frame as the path planning information.
In some embodiments, the target object is a power line of a household appliance. An image including the target object is obtained and divided into a plurality of grids with the same area, wherein the point cloud of the target object occupies 9 grids in the image, so the area of the point cloud is calculated to be 9 unit areas, and the outline frame of the target object contains 20 grids, so the area of the outline frame is calculated to be 20 unit areas. The area ratio of the point cloud in the outline frame is calculated from the point cloud area and the outline frame area, and according to this area ratio, the point cloud of the target object is used as the path planning information for planning the path.
In the control device of the robot in this embodiment, the determining module 1206 determines, according to the area proportion of the coverage area of the point cloud in the area of the outline frame, whether the outline frame obtained by image recognition can reflect the real outline of the target object, and the selection module 1204 accordingly selects one of the point cloud and the outline frame, or a combination of the two, as the path planning information. The accuracy of path planning is further guaranteed. Under the condition that the robot is a sweeping robot, the obstacle avoidance effect can be ensured, and the sweeping effect of the sweeping robot can be further improved.
In any of the above embodiments, the control device 1200 of the robot includes:
a selecting module 1204, configured to use the outline frame as path planning information when the area ratio is greater than or equal to a first preset ratio;
a selecting module 1204, configured to take the point cloud as path planning information when the area ratio is less than or equal to a second preset ratio;
a selecting module 1204, configured to use the point cloud and the outline frame as path planning information when the area ratio is greater than a second preset ratio and smaller than a first preset ratio;
wherein the first preset proportion is larger than the second preset proportion.
In this embodiment, the area ratio of the coverage area of the point cloud of the target object in the area of the outline frame is calculated, and a first preset ratio and a second preset ratio are obtained, wherein the first preset ratio is larger than the second preset ratio. The calculated area ratio is compared with the first preset ratio and the second preset ratio respectively, and the selection module 1204 determines the path planning information from the point cloud and the outline box according to the comparison result.
Specifically, the quantitative relation between the area ratio and the first preset ratio is judged, under the condition that the area ratio is detected to be larger than or equal to the first preset ratio, a large number of point clouds exist in the outline frame, the outline frame obtained through image recognition is judged to be close to the real outline of the target object, and the outline frame is used as path planning information at the moment.
And under the condition that the area ratio is smaller than the first preset ratio, judging the quantitative relation between the area ratio and the second preset ratio. And under the condition that the area ratio is greater than the second preset ratio, determining that a certain amount of point clouds exist in the outline frame, cutting the outline frame through the point clouds of the target object, wherein the cut outline frame is close to the real outline of the target object, and then taking the outline frame cut according to the point clouds as path planning information.
When the comparison result shows that the area ratio is less than or equal to the second preset ratio, it is determined that relatively few point clouds exist in the outline frame, the point cloud obtained through image recognition is judged to be close to the real outline of the target object, and the point cloud is used as the path planning information.
In some embodiments, the first preset ratio is set to 70%, the second preset ratio is set to 30%, and the target object is a power line of a household appliance. The area of the outline frame of the target object is calculated to be 80 unit areas, the distribution contour area of the point cloud is calculated to be 20 unit areas, and the area ratio of the distribution contour area of the point cloud in the area of the outline frame is therefore 25%. Since the area ratio is less than the second preset ratio, it is determined that relatively few point clouds exist in the outline frame, the point cloud obtained through image recognition is judged to be close to the real outline of the target object, and the point cloud of the target object is used as the path planning information for planning the path.
In some other embodiments, the first preset ratio is set to 70%, the second preset ratio is set to 30%, and the target object is a toy building block on the ground. The area of the outline frame of the target object is calculated to be 20 unit areas, the distribution contour area of the point cloud is calculated to be 18 unit areas, and the area ratio of the distribution contour area of the point cloud in the area of the outline frame is therefore 90%. Since the area ratio is greater than the first preset ratio, it is determined that a large number of point clouds exist in the outline frame, the outline frame obtained through image recognition is judged to be close to the real outline of the target object, and the outline frame of the target object is used as the path planning information for planning the path.
In some other embodiments, the first preset ratio is set to 70%, the second preset ratio is set to 30%, and the target object is a threshold. The area of the outline frame of the target object is calculated to be 80 unit areas, the distribution contour area of the point cloud is calculated to be 40 unit areas, and the area ratio of the distribution contour area of the point cloud in the area of the outline frame is therefore 50%. Since the area ratio is greater than the second preset ratio and smaller than the first preset ratio, a certain number of point clouds exist in the outline frame; the outline frame is cut by the point cloud of the target object, the cut outline frame is close to the real outline of the target object, and the outline frame cut according to the point cloud is used as the path planning information.
The control device of the robot in this embodiment determines whether the contour frame obtained through image recognition can reflect the real contour of the target object by setting the first preset proportion and the second preset proportion and comparing the calculated area proportion with the first preset proportion and the second preset proportion, and the selection module 1204 selects one or a combination of the point cloud and the contour frame as the path planning information. The accuracy of path planning is further guaranteed. Under the condition that the robot is a sweeping robot, the obstacle avoidance effect can be ensured, and the sweeping effect of the sweeping robot can be further improved.
In any of the above embodiments, the value range of the first preset proportion is 65% to 85%; and/or the value range of the second preset proportion is 15% to 35%.
In this embodiment, a value range is set for the first preset ratio and for the second preset ratio; specifically, the value range of the first preset ratio is set to 65% to 85%, and the value range of the second preset ratio is set to 15% to 35%.
In some embodiments, the first preset ratio is set to 70% and the second preset ratio is set to 30%.
In this embodiment, the control device of the robot defines the magnitude relationship between the first preset proportion and the second preset proportion by setting the first preset proportion and the second preset proportion, and ensures the accuracy of the step of selecting the robot path planning information.
In any of the above embodiments, the control device of the robot includes:
a determining module 1206 for determining first boundary information of the point cloud or the outline box;
the determining module 1206 is configured to determine the driving path according to the first boundary information.
In this embodiment, in the case that the path planning information includes only the point cloud or only the outline box, the determining module 1206 plans the driving path according to only the first boundary information of the corresponding point cloud or the first boundary information of the corresponding outline box.
In the case that the path planning information includes a point cloud of the target object, the determining module 1206 determines first boundary information of the point cloud of the target object, where the first boundary information includes distribution contour edge information of the point cloud. In the case that the path planning information includes the outline frame of the target object, the determining module 1206 determines first boundary information of the outline frame of the target object, wherein the first boundary information includes boundary information of the outline frame. And after the first boundary information is determined, planning a driving path of the robot according to the first boundary information.
Specifically, for different target objects, when the outline frame contains only a small number of point clouds, the point cloud data distributed at the edge of the point cloud of the target object may outline the edge contour of the target object, and the determining module 1206 may set the coordinate data of the edge contour outlined by the point cloud as the first boundary information. When the outline frame contains a large number of point clouds, the border of the outline frame of the target object may be regarded as the edge contour of the target object, and the determining module 1206 may set the coordinate data of the border of the outline frame as the first boundary information.
As shown in fig. 9, when the area ratio of the point cloud in the outline frame is greater than the first preset ratio, it is determined that the outline frame of the target object contains a large number of point clouds. The border of the outline frame can be regarded as the edge outline of the target object, and the coordinate data of the border of the outline frame can be set as the first boundary information. The path is planned according to the first boundary information determined by the outline frame, and the planned driving path is located outside the border of the outline frame. Therefore, this part of the planned path is positioned outside the outline frame; that is, when the area ratio of the point cloud in the outline frame is relatively large, the driving path is planned more accurately according to the first boundary information determined by the outline frame.
As shown in fig. 10, when the area ratio of the point cloud in the outline frame is smaller than the second preset ratio, it is determined that only a small number of point clouds are contained in the outline frame of the target object. The edge outline of the target object can be outlined by the point cloud data distributed at the edge of the point cloud of the target object, and the coordinate data of this edge outline can be set as the first boundary information. The path is planned according to the first boundary information determined by the point cloud, and the planned driving path is located at the edge of the point cloud. Therefore, this part of the planned path is positioned inside the outline frame; that is, when the area of the point cloud in the outline frame is relatively small, the driving path is planned more accurately according to the first boundary information of the point cloud.
In the control device of the robot in this embodiment, the determining module 1206 plans the driving path of the robot according to the point cloud of the target object or the first boundary information of the outline frame, which ensures the accuracy of the step of planning the driving path of the robot.
In any of the above embodiments, the path planning information includes a point cloud and a contour frame, and determining the driving path of the robot according to the path planning information includes:
a determining module 1206 for determining second boundary information of the point cloud;
a determining module 1206, configured to determine a target area in the outline box according to the second boundary information;
a determining module 1206, configured to determine third boundary information of the target area;
the determining module 1206 is configured to determine the driving path according to the third boundary information.
In this embodiment, when the path planning information includes both the point cloud and the outline frame, the outline frame is clipped according to the second boundary information of the point cloud, and the determining module 1206 plans the driving path according to the third boundary information of the target area obtained by the clipping.
Specifically, taking a sweeping robot as an example, when the area ratio of the point cloud in the outline frame is greater than the second preset ratio and smaller than the first preset ratio, planning the driving path only according to the point cloud easily causes obstacle avoidance failure, while planning the driving path only according to the outline frame easily affects the sweeping effect of the sweeping robot. Therefore, the outline frame is clipped according to the point cloud.
And under the condition that the path planning information comprises the point cloud and the outline frame of the target object, determining second boundary information of the point cloud of the target object, and cutting the outline frame according to the second boundary information.
It should be noted that the point cloud of the target object is distributed inside the outline frame. When the outline frame contains a certain number of point clouds, the point cloud data distributed at the edge of the point cloud of the target object can outline part of the edge outline of the target object. The coordinate data of this edge outline is set as the second boundary information, and the outline frame is then trimmed by the second boundary information; the border of the trimmed outline frame is closer to the edge outline of the target object, and the coordinate data of the border of the outline frame trimmed according to the point cloud can be set as the third boundary information.
As shown in fig. 11, when the area ratio of the point cloud in the outline frame is smaller than the first preset ratio and larger than the second preset ratio, it is determined that the outline frame of the target object contains a certain number of point clouds, and the point cloud data distributed at the edge of the point cloud of the target object can outline part of the edge outline of the target object. The coordinate data of this edge outline is set as the second boundary information, and the outline frame is then cut by the second boundary information; the border of the cut outline frame is closer to the edge outline of the target object, and the coordinate data of the border of the cut outline frame can be set as the third boundary information. The path is planned according to the third boundary information jointly determined by the point cloud and the outline frame; one part of the planned driving path is located at the edge of the point cloud, and the other part is located outside the border of the outline frame. Therefore, when a certain number of point clouds are contained in the outline frame, the driving path is planned more accurately according to the third boundary information jointly determined by the point cloud and the outline frame.
The control device of the robot in this embodiment determines the target area inside the contour frame by the determining module 1206 according to the second boundary information of the point cloud of the target object, and then plans the driving path of the robot according to the third boundary information of the target area, thereby ensuring the accuracy of the step of planning the driving path of the robot under the condition that the contour frame contains a certain number of point clouds.
In any of the above embodiments, the control device 1200 of the robot includes:
an obtaining module 1202 for obtaining a first set of images, each first image in the first set of images comprising a target object;
a determining module 1206 is configured to identify the first image set and determine a point cloud and a contour box.
In this embodiment, the robot is provided with an image capturing device capable of capturing an image including the target object while the robot is traveling. In order to improve the accuracy of the acquired point cloud and outline frame of the target object, in the step of identifying the point cloud and outline frame of the target object, a plurality of images including the target object need to be acquired, the set of the images is a first image set, each image in the first image set includes the target object, and the first image set is subjected to image processing, so that the point cloud and outline frame of the target object can be accurately identified.
Specifically, during the running process of the robot, the image acquisition device starts to acquire images and simultaneously identifies the acquired images. And when the target object is detected to be included in the image, taking the image as the image in the first image set, and when the target object is not detected to be included in the image, filtering out the image. After the first image set is acquired, each image in the first image set is identified, a contour frame and a point cloud of a target object in each image are identified, and then a plurality of identified contour frames and a plurality of identified point clouds are subjected to smoothing operation to determine the contour frame and the point cloud of the target object.
The control device of the robot in this embodiment acquires, through the acquisition module 1202, a first image set comprising a plurality of images that each include the target object, and the determination module 1206 identifies the plurality of images in the first image set and identifies the point cloud and the contour frame of the target object, which ensures the accuracy of identifying the point cloud and the contour frame and further ensures the accuracy of robot path planning.
In any of the above embodiments, the control device 1200 of the robot includes:
the obtaining module 1202 is configured to obtain a second image set under the condition that the recognition effect of the point cloud and/or the contour frame does not meet a preset condition, where an acquisition time of the second image set is earlier than an acquisition time of the first image set, and each second image in the second image set includes a target object;
a determining module 1206 is configured to identify the second image set and determine a point cloud and a contour box.
In this embodiment, after the point cloud and the outline frame of the target object are identified according to the first image set, if the identification effect of the point cloud and/or the outline frame does not reach the preset condition, the obtaining module 1202 may obtain the second image set to re-identify the outline frame and the point cloud of the target object. The preset condition includes that the frame size of the outline frame meets a standard and that the number and the position information of the point clouds are accurate; for example, the size of the outline frame is consistent with the size of the target object, and the position information of the point cloud is consistent with the actual coordinate information of the target object.
It should be noted that the second image set is another history image set saved before the first image set is acquired, and it is temporarily stored for a period of time for subsequent calling. In the process of acquiring the first image set, conditions such as camera occlusion or bumping of the robot during driving may occur, so that the image quality in the first image set is poor and the identified outline frame and point cloud of the target object consequently fail to reach the preset conditions. For this situation, the temporarily stored second image set can be called, so that the re-identified outline frame and point cloud can reach the preset conditions.
When the recognition result of the first image set is not good, the obtaining module 1202 calls the temporarily stored second image set, so that secondary acquisition is not needed when the second image set can be used, and the recognition efficiency of the outline frame and the point cloud of the target object is improved under the condition that the recognition effect is ensured.
In any of the above embodiments, the control device 1200 of the robot includes:
a determining module 1206, configured to determine a first position of the robot according to the first image set when the recognition effect of the point cloud and/or the outline box does not meet a preset condition;
a control module 1208 for controlling the robot to travel to a first position;
an obtaining module 1202, configured to obtain a third image set in a process of driving the robot, where each third image in the third image set includes a target object;
a determining module 1206 is configured to identify the third image set, and determine a point cloud and a contour box.
In this embodiment, according to the acquired first image set, under the condition that the contour frame and the point cloud of the identified target object do not meet the preset conditions, the robot may be controlled to return to the first position, the robot is controlled to advance again, the image is acquired again, and the process of identifying the contour frame and the point cloud is performed again.
Specifically, a first position of the robot is recorded in advance, the first position being the position information when the first image set was acquired. And under the condition that the outline frame and the point cloud of the target object are judged to be recognized and not reach the preset conditions, controlling the robot to stop advancing, returning to the first position, and restarting to acquire images in the returning process to form a third image set. And after the third image set is acquired, identifying each image in the third image set to identify a contour frame and a point cloud of the target object in each image, and performing smoothing operation on the identified contour frames and the identified point clouds to determine the contour frame and the point cloud of the target object.
In this embodiment, the control device of the robot controls the robot to return to the first position and simultaneously acquire the third image set under the condition that the identified contour frame and point cloud of the target object do not meet the preset conditions, performs image identification processing on the images in the third image set, identifies the contour frame and point cloud of the target object, ensures that the identification effect of the point cloud and/or the contour frame of the target object meets the preset conditions, and further improves the accuracy of the travel path of the robot.
In any of the above embodiments, the control device 1200 of the robot includes:
a determining module 1206, configured to determine, according to the first image set, relative position information of the robot and the target object;
the determining module 1206 is configured to determine the first position according to the relative position information.
In this embodiment, the determining module 1206 identifies a relative position relationship between the robot and the target object according to the first image set, and obtains relative position information, where the relative position information includes distance information and angle information between the robot and the target object. And determining the first position of the robot according to the relative position information.
Specifically, according to the plurality of images in the first image set, the shooting angle at which the robot shot the target object and the relative distance between the camera and the target object when the first image set was collected can be determined. Using parameters such as the shooting angle and the relative distance, the first position of the robot can be determined with the target object as a reference.
In this embodiment, the control device of the robot determines the first position according to the multiple images in the first image set by the determining module 1206 under the condition that the identified contour frame and point cloud of the target object do not meet the preset condition, controls the robot to return to the first position, acquires the images again, and identifies the contour frame and the point cloud of the target object again, thereby ensuring the identification effect of the point cloud and/or the contour frame of the target object.
In any of the above embodiments, the first travel speed at which the robot travels to the first position is less than the second travel speed at which the robot travels along the travel path.
In this embodiment, the robot is controlled to travel forward at the second travel speed. And under the condition that the outline frame and the point cloud of the identified target object do not reach the preset conditions, controlling the robot to return to the first position, and controlling the robot to run at the first running speed in the process of returning to the first position. Wherein the first travel speed is less than the second travel speed.
The control device of the robot in the present embodiment limits the travel speed of the robot to return to the first position. The robot driving process is more stable in the process of re-collecting the image, the image quality of the collected image is guaranteed, and the recognition effect of the point cloud and/or the outline frame is further guaranteed.
Example three:
as shown in fig. 13, a third embodiment of the present invention provides a control apparatus of a robot. The control apparatus 1300 of the robot includes a processor 1302 and a memory 1304, and the memory 1304 stores a program or instructions which, when executed by the processor 1302, implement the steps of the control method of the robot according to any of the above embodiments. The control apparatus therefore has all the beneficial effects of the control method of the robot in any of the above technical solutions, which are not repeated here.
Example four:
a fourth embodiment of the present invention provides a readable storage medium having stored thereon a program which, when executed by a processor, implements the control method of the robot as in any of the embodiments described above, thereby having all the advantageous technical effects of the control method of the robot as in any of the embodiments described above.
The readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Example five:
in a fifth embodiment of the present invention, there is provided a robot comprising: the control device of the robot in any of the above embodiments, and/or the readable storage medium in any of the above embodiments, thus having all the beneficial technical effects of the control device of the robot in any of the above embodiments, and/or the readable storage medium in any of the above embodiments, will not be described herein again.
It is to be understood that, in the claims, specification and drawings of the present invention, the term "plurality" means two or more unless explicitly defined otherwise; the terms "upper", "lower" and the like indicate orientations or positional relationships based on those shown in the drawings, are used only to describe the present invention more conveniently and to simplify the description, and do not indicate or imply that the device or element referred to must have the specific orientation described, be constructed in a specific orientation and be operated, so the description should not be construed as limiting the present invention. The terms "connect", "mount", "secure" and the like are to be construed broadly; for example, "connect" may refer to a fixed connection between multiple objects, a removable connection between multiple objects, or an integral connection, and multiple objects may be directly connected to each other or indirectly connected through an intermediate. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific situation.
In the claims, specification and drawings of the present invention, the description of the terms "one embodiment", "some embodiments", "specific embodiments" and so on means that a particular feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In the claims, specification and drawings of the present invention, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (15)

1. A method for controlling a robot, comprising:
acquiring a point cloud and an outline box of a target object;
selecting one item or a combination of the point cloud and the outline box as path planning information;
determining a travel path of the robot according to the path planning information;
and controlling the robot to travel along the travel path.
2. The method of controlling a robot according to claim 1, wherein the selecting one or a combination of the point cloud and the outline box as path planning information includes:
determining the area ratio of the point cloud in the outline box;
and selecting one item or a combination of the point cloud and the outline box as path planning information according to the area ratio.
3. The method of controlling a robot according to claim 2, wherein the selecting one or a combination of the point cloud and the outline box as the path planning information according to the area ratio includes:
taking the outline box as the path planning information under the condition that the area ratio is greater than or equal to a first preset ratio;
taking the point cloud as the path planning information under the condition that the area ratio is less than or equal to a second preset ratio;
taking the point cloud and the outline box as the path planning information under the condition that the area ratio is greater than the second preset ratio and less than the first preset ratio;
wherein the first preset ratio is greater than the second preset ratio.
4. The method of controlling a robot according to claim 3, wherein
the value range of the first preset ratio is 65% to 85%; and/or
the value range of the second preset ratio is 15% to 35%.
5. The method according to any one of claims 1 to 4, wherein the path planning information includes the point cloud or the outline box, and the determining the travel path of the robot according to the path planning information includes:
determining first boundary information of the point cloud or the outline box;
and determining the travel path according to the first boundary information.
6. The method according to any one of claims 1 to 4, wherein the path planning information includes the point cloud and the outline box, and the determining the travel path of the robot according to the path planning information includes:
determining second boundary information of the point cloud;
determining a target area in the outline box according to the second boundary information;
determining third boundary information of the target area;
and determining the travel path according to the third boundary information.
7. The method according to any one of claims 1 to 4, wherein the acquiring a point cloud and an outline box of a target object includes:
acquiring a first image set, each first image in the first image set comprising the target object;
and identifying the first image set, and determining the point cloud and the outline box.
8. The method of controlling a robot according to claim 7, wherein after the identifying the first image set and determining the point cloud and the outline box, the method further comprises:
under the condition that the identification effect of the point cloud and/or the outline box does not meet a preset condition, acquiring a second image set, wherein the acquisition time of the second image set is earlier than that of the first image set, and each second image in the second image set comprises the target object;
and identifying the second image set, and determining the point cloud and the outline box.
9. The method of controlling a robot according to claim 7, wherein after the identifying the first image set and determining the point cloud and the outline box, the method further comprises:
under the condition that the identification effect of the point cloud and/or the outline box does not meet a preset condition, determining a first position of the robot according to the first image set;
controlling the robot to travel to the first position;
acquiring a third image set during the running process of the robot, wherein each third image in the third image set comprises the target object;
and identifying the third image set, and determining the point cloud and the outline box.
10. The method of controlling a robot of claim 9, wherein said determining a first position of the robot from the first set of images comprises:
determining relative position information of the robot and the target object according to the first image set;
and determining the first position according to the relative position information.
11. The method of controlling a robot according to claim 9,
a first travel speed at which the robot travels to the first position is less than a second travel speed at which the robot travels along the travel path.
12. A control device for a robot, comprising:
the acquisition module is used for acquiring a point cloud and an outline box of a target object;
the selection module is used for selecting one item or a combination of the point cloud and the outline box as path planning information;
the determining module is used for determining a travel path of the robot according to the path planning information;
and the control module is used for controlling the robot to travel along the travel path.
13. A control device for a robot, comprising:
a processor;
a memory having stored therein a program or instructions, the processor implementing the steps of the method of any one of claims 1 to 11 when executing the program or instructions in the memory.
14. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the method according to any one of claims 1 to 11.
15. A robot, comprising:
a control device of the robot according to claim 12 or 13; or
The readable storage medium of claim 14.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210585744.9A CN114967691A (en) 2022-05-27 2022-05-27 Robot control method and device, readable storage medium and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210585744.9A CN114967691A (en) 2022-05-27 2022-05-27 Robot control method and device, readable storage medium and robot

Publications (1)

Publication Number Publication Date
CN114967691A (en) 2022-08-30

Family

ID=82956668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210585744.9A Pending CN114967691A (en) 2022-05-27 2022-05-27 Robot control method and device, readable storage medium and robot

Country Status (1)

Country Link
CN (1) CN114967691A (en)

Similar Documents

Publication Publication Date Title
CN111897334B (en) Robot region division method based on boundary, chip and robot
EP3907575A1 (en) Dynamic region division and region channel identification method, and cleaning robot
CN109634285B (en) Mowing robot and control method thereof
CN108733061B (en) Path correction method for cleaning operation
CN112650235A (en) Robot obstacle avoidance control method and system and robot
CN110794831B (en) Method for controlling robot to work and robot
CN111208811A (en) Narrow-slit escaping method, device and equipment for sweeping robot and readable storage medium
CN110477813B (en) Laser type cleaning robot and control method thereof
CN111966090B (en) Robot boundary map construction method and device and robot
CN111857156A (en) Robot region dividing method based on laser, chip and robot
Menon et al. NBV-SC: Next best view planning based on shape completion for fruit mapping and reconstruction
CN112274063B (en) Robot cleaning method, control device, readable storage medium and robot
CN113475978B (en) Robot recognition control method and device, robot and storage medium
CN112748721A (en) Visual robot and cleaning control method, system and chip thereof
CN114967691A (en) Robot control method and device, readable storage medium and robot
CN111753388B (en) Spraying control method, spraying control device, electronic equipment and computer-readable storage medium
CN116711996A (en) Operation method, self-mobile device, and storage medium
WO2021042487A1 (en) Automatic working system, automatic travelling device and control method therefor, and computer readable storage medium
CN115336459B (en) Method, system, computer readable medium and mowing robot for processing hay
CN115202361A (en) Path planning method of mobile robot and mobile robot
WO2021139683A1 (en) Self-moving device
CN113741441A (en) Operation method and self-moving equipment
CN112783147A (en) Trajectory planning method and device, robot and storage medium
CN111230883A (en) Return method and device for crawling welding robot, robot and storage medium
CN116266060A (en) Cleaning control method, cleaning control system and cleaning robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination