CN116898356A - Robot, control method and device thereof, and readable storage medium - Google Patents

Robot, control method and device thereof, and readable storage medium

Info

Publication number
CN116898356A
Authority
CN
China
Prior art keywords
cleaning
target object
target
robot
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310890811.2A
Other languages
Chinese (zh)
Inventor
赵震
徐志远
车正平
唐剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Midea Group Co Ltd
Midea Group Shanghai Co Ltd
Original Assignee
Midea Group Co Ltd
Midea Group Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Midea Group Co Ltd, Midea Group Shanghai Co Ltd filed Critical Midea Group Co Ltd
Priority to CN202310890811.2A priority Critical patent/CN116898356A/en
Publication of CN116898356A publication Critical patent/CN116898356A/en
Pending legal-status Critical Current


Classifications

    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L 11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L 11/24 Floor-sweeping machines, motor-driven
    • A47L 11/28 Floor-scrubbing machines, motor-driven
    • A47L 11/40 Parts or details of machines not provided for in groups A47L 11/02 - A47L 11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L 11/4002 Installations of electric equipment
    • A47L 11/4011 Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A47L 2201/00 Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L 2201/04 Automatic control of the travelling movement; Automatic obstacle detection
    • A47L 2201/06 Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning

Landscapes

  • Manipulator (AREA)

Abstract

The application provides a robot, a control method and apparatus thereof, and a readable storage medium. The robot includes an executing mechanism, and the control method includes: acquiring the current spatial position of a target object; planning an execution track according to the current spatial position; and controlling the executing mechanism to reach a target spatial position along the execution track and perform cleaning on the target object.

Description

Robot, control method and device thereof, and readable storage medium
Technical Field
The present application relates to the field of robots, and in particular, to a robot, a control method and apparatus thereof, and a readable storage medium.
Background
With the development of intelligent cleaning technology, robots have entered more and more households, greatly reducing the labor people spend cleaning their homes.
In an open living scene, the most indispensable link in fully autonomous operation of the robot is the three-dimensional cleaning task. In the related art, the robot can only clean according to a preset process flow, so its accuracy and consistency when cleaning different objects are low.
Disclosure of Invention
The present application aims to solve one of the technical problems existing in the prior art or related technologies.
To this end, a first aspect of the present application proposes a control method of a robot.
A second aspect of the present application proposes a control device for a robot.
A third aspect of the present application proposes a control device for a robot.
A fourth aspect of the application proposes a readable storage medium.
A fifth aspect of the present application proposes a robot.
In view of this, according to the first aspect of the present application, there is provided a control method of a robot, the robot including an executing mechanism, the control method including: acquiring the current spatial position of a target object; planning an execution track according to the current spatial position; and controlling the executing mechanism to reach a target spatial position along the execution track and perform cleaning on the target object.
In this technical solution, the control method controls a robot that includes an executing mechanism, through which the robot performs cleaning operations on a target object. The robot further includes an image acquisition device, through which it acquires an environment image of the cleaning scene and recognizes the image, so that a suitable cleaning scheme is planned for the current spatial position and cleaning of the target object is completed. This improves the robot's autonomy, makes its actions more consistent, and increases their flexibility.
In this technical solution, the robot travels into the cleaning scene while the image acquisition device continuously collects image data; multiple frames may be collected, and environment images containing the target object can be recognized among them by image recognition. One or more environment images may be obtained. After an environment image is acquired, it is analyzed to determine the spatial coordinates of the target object in three-dimensional space, i.e., the current spatial position of the target object. When performing cleaning on the target object, the robot locates it by these spatial coordinates, which improves the accuracy of the robot's motion in the spatial dimension and lets it move the executing mechanism along the execution track to the target spatial position and clean the target object in a suitable manner.
In the technical solution of the present application, the current spatial position is the coordinates of the target object in the spatial coordinate system; it may be the set of coordinates of the target object's boundary.
In the technical solution of the present application, the target spatial position is a position from which the executing mechanism can conveniently clean the target object. To clean the target object, the executing mechanism is first moved to the target spatial position and then started.
In this technical solution, the robot plans the executing mechanism's execution track from the current spatial position of the target object, so that after moving to the target spatial position the executing mechanism can perform the appropriate cleaning. The planned track matches both the target object's current position and the cleaning strategy to be applied, improving planning accuracy. Because the target object's spatial coordinates are taken into account during planning, the robot can locate the target object precisely and apply cleaning suited to it, improving the consistency and accuracy of the cleaning process.
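The acquire-position, plan-track, execute flow above can be sketched in a few lines of Python. The `Pose` type, the linear interpolation, and every name below are illustrative assumptions, not part of the claimed method:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A point in the spatial coordinate system (hypothetical type)."""
    x: float
    y: float
    z: float

def plan_execution_track(start: Pose, goal: Pose, steps: int = 5) -> list[Pose]:
    """Plan a simple execution track by linearly interpolating waypoints
    from the executing mechanism's initial position to the target
    spatial position (the last waypoint, where cleaning begins)."""
    return [
        Pose(
            start.x + (goal.x - start.x) * t / steps,
            start.y + (goal.y - start.y) * t / steps,
            start.z + (goal.z - start.z) * t / steps,
        )
        for t in range(steps + 1)
    ]
```

A real planner would also account for obstacles and the actuator's kinematics; the straight line stands in for that here.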
In some embodiments, optionally, acquiring the current spatial position of the target object includes: acquiring an environment image corresponding to the target object, the environment image containing the target object; and determining the current spatial position of the target object based on the environment image.
In the technical solution of the present application, the environment image is an image of the cleaning scene where the target object is located, acquired by the image acquisition device.
Specifically, the environment image is divided into at least two sub-images corresponding to at least two cleaning areas in the cleaning scene; target image features, namely the image features of the target object, are extracted from a target sub-image among the at least two sub-images; and the current spatial position and the object category are determined from the target image features.
In this technical solution, the robot divides the environment image to obtain at least two sub-images and screens them to obtain a target sub-image containing the target image features that correspond to the target object. Based on these features, the current spatial position and the object category of the target object can be determined.
In this technical solution, the robot divides the environment image by means of semantic recognition to obtain at least two sub-images corresponding to different cleaning areas in the cleaning scene. Segmenting the environment image identifies the different cleaning areas in the image, making it easier to locate the current spatial position of the target object, improving the accuracy with which the robot identifies the target area, and avoiding pointless cleaning of other areas. Specifically, the robot separates the sub-images corresponding to different cleaning areas by a cleaning-area boundary recognition algorithm, an instance segmentation algorithm, or the like.
In this technical solution, after the at least two sub-images are obtained, the target sub-image is screened out by the image features in each sub-image, and the target image features are extracted from it. Screening the sub-images improves the accuracy of locating the target sub-image that carries the target image features, and in turn the accuracy of identifying the object category of the target object.
In this technical solution, after the target image features are recognized, the current spatial position of the target object is located based on them, and the object category of the target object corresponding to them is identified.
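The divide-and-screen step can be sketched minimally as below, assuming the environment image is a plain 2D grid and `is_target` stands in for a real feature detector; both are assumptions for illustration, not the patent's segmentation algorithm:

```python
def split_into_subimages(image, rows, cols):
    """Divide a 2D grid (list of row lists) into rows x cols sub-images,
    one per candidate cleaning area."""
    h, w = len(image), len(image[0])
    subs = []
    for r in range(rows):
        for c in range(cols):
            subs.append([row[c * w // cols:(c + 1) * w // cols]
                         for row in image[r * h // rows:(r + 1) * h // rows]])
    return subs

def find_target_subimage(subimages, is_target):
    """Screen the sub-images for the first one whose features match the
    target object; returns (index, sub-image) or (None, None)."""
    for idx, sub in enumerate(subimages):
        if is_target(sub):
            return idx, sub
    return None, None
```

A production system would use semantic or instance segmentation rather than a fixed grid; the grid keeps the screening logic visible.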
In this technical solution, segmenting and detecting the environment image improves both the accuracy of locating the current spatial position of the target object and the accuracy of identifying its object category, so that the robot can apply a cleaning strategy matched to the object category, improving the consistency of the cleaning process.
In some embodiments, optionally, planning the execution track according to the current spatial position includes: determining the target spatial position corresponding to the executing mechanism according to the current spatial position; and planning the execution track based on the initial spatial position of the executing mechanism and the target spatial position.
In the technical solution of the present application, the current spatial position is the position where the target object currently is, and the target spatial position is the position the executing mechanism must occupy to clean the target object. After the current spatial position of the target object is determined by image recognition, the starting position of the cleaning process, i.e., the target spatial position, is derived from it.
In this technical solution, the initial spatial position is the spatial position where the executing mechanism currently is, and the execution track is planned from the initial spatial position and the target spatial position: the initial spatial position is the start point of the execution track, and the target spatial position is its end point.
In this technical solution, the starting point of the cleaning process, i.e., the target spatial position, is determined from the current spatial position of the target object; the start and end points of the execution track then follow from the initial spatial position of the executing mechanism and the target spatial position, ensuring that the track planned between them is accurate.
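One way to read "determining the target spatial position from the current spatial position" is as a standoff point just short of the object, along the approach line from the executing mechanism. The sketch below, including the fixed standoff distance, is an illustrative assumption only:

```python
import math

def target_position_from_object(obj_x, obj_y, robot_x, robot_y, standoff=0.3):
    """Place the target spatial position a fixed standoff short of the
    object's current spatial position, along the line from the executing
    mechanism's initial position to the object (hypothetical rule)."""
    dx, dy = obj_x - robot_x, obj_y - robot_y
    dist = math.hypot(dx, dy)
    if dist <= standoff:
        # already within reach: stay where we are
        return robot_x, robot_y
    scale = (dist - standoff) / dist
    return robot_x + dx * scale, robot_y + dy * scale
```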
In some embodiments, optionally, planning the execution track according to the current spatial position includes: acquiring a first spatial coordinate, which is an edge coordinate of a first area; and controlling the executing mechanism to clean the target object according to the first spatial coordinate so that the target object is moved into the first area.
In this technical solution, when the target object includes dirt, the current spatial position of the dirt in the cleaning environment and the first spatial coordinate of the first area into which the dirt is to be placed are obtained, and the robot is controlled to move the dirt into the first area, thereby cleaning it up.
In the technical solution of the present application, the first area is used to accommodate the target object. When the target object is solid or liquid dirt, the dirt is moved into the first area to be processed together.
In the technical solution of the present application, the first area is an area of the cleaning environment where dirt is collected. The robot recognizes, by means of semantic recognition, that the target object includes dirt and that a corresponding cleaning process is required; it then identifies the first area in the cleaning environment and acquires the first spatial coordinate corresponding to it.
In this technical solution, after the current spatial position of the dirt and the first spatial coordinate of the first area are determined, the corresponding execution track is planned from them, so that after the executing mechanism cleans along the execution track the dirt is concentrated in the first area, ensuring the cleaning effect and efficiency.
In some embodiments, optionally, the number of target objects is at least two, and controlling the executing mechanism to clean the target objects according to the first spatial coordinate includes: acquiring at least two second spatial coordinates corresponding to the at least two target objects; and cleaning the target objects according to the at least two second spatial coordinates and the first spatial coordinate.
In this technical solution, when there are at least two target objects, the at least two second spatial coordinates and the first spatial coordinate of the first area into which they must be moved are determined. When the executing mechanism is controlled to clean the target objects, it passes through the at least two second spatial coordinates and gathers the target objects at those coordinates into the first area.
In the technical solution of the present application, the number of target objects is at least two, and the current spatial position includes at least two second spatial coordinates, each corresponding to one target object.
In this technical solution, when there are multiple target objects, the second spatial coordinates corresponding to them are acquired, and the executing mechanism is controlled to clean them according to the second spatial coordinates and the first spatial coordinate, so that the multiple target objects are collected together in the first area.
In some possible solutions, optionally, cleaning the target objects according to the at least two second spatial coordinates and the first spatial coordinate includes: determining at least two first cleaning tracks from the at least two second spatial coordinates and the first spatial coordinate; and controlling the executing mechanism to clean the at least two target objects along the at least two first cleaning tracks in sequence.
In this technical solution, when there are at least two target objects, at least two first cleaning tracks are planned from the corresponding second spatial coordinates and the first spatial coordinate, so that after cleaning along them the robot leaves the target objects in the first area.
In the technical solution of the present application, the first cleaning tracks correspond one-to-one to the second spatial coordinates: the start point of each first cleaning track matches a second spatial coordinate, and its end point matches the first spatial coordinate. The robot cleans the target objects along an execution track comprising the first cleaning tracks, ensuring that target objects at different positions are all moved into the first area.
In this technical solution, when there are multiple target objects, the robot plans a separate first cleaning track for each of them, so the execution track contains multiple first cleaning tracks and the robot cleans the target objects at different positions along different tracks. This avoids scattering the other target objects while moving one of them, improving the cleaning effect.
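The one-track-per-object scheme can be sketched as follows. Ordering the targets nearest-to-the-first-area first is an added heuristic of this sketch, not something the text specifies:

```python
import math

def plan_first_cleaning_tracks(second_coords, first_coord):
    """One first cleaning track per target object: each track starts at
    that object's second spatial coordinate and ends at the first
    (collection) area. Nearer targets are handled first (heuristic)."""
    ordered = sorted(
        second_coords,
        key=lambda c: math.hypot(c[0] - first_coord[0], c[1] - first_coord[1]),
    )
    return [(start, first_coord) for start in ordered]
```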
In some embodiments, optionally, cleaning the target objects according to the at least two second spatial coordinates and the first spatial coordinate includes: determining a second cleaning track from the at least two second spatial coordinates; determining a third cleaning track from a third spatial coordinate and the first spatial coordinate, the third spatial coordinate being the coordinate of the end point of the second cleaning track; and after the executing mechanism has passed through the at least two target objects in sequence along the second cleaning track, controlling it to clean the at least two target objects along the third cleaning track.
In this technical solution, when there are at least two target objects, a second cleaning track passing through their current spatial positions is planned, so that after cleaning along it the robot has gathered the target objects at different positions together. A third cleaning track is then planned from the third spatial coordinate at the end of the second cleaning track and the first spatial coordinate of the first area, so that after cleaning along it the robot moves the gathered target objects into the first area.
In this technical solution, the second cleaning track passes through the current spatial position of each target object, and the third cleaning track connects to it: the start of the third cleaning track is the end point of the second. The robot can therefore continue directly onto the third cleaning track after collecting all target objects along the second, moving the collected target objects into the first area.
In this technical solution, when there are multiple target objects, the robot plans the second cleaning track from the current spatial positions of the target objects, and then plans the third cleaning track from the third spatial coordinate at its end point and the first spatial coordinate. Moving along the second cleaning track collects all target objects, and moving along the third moves them all into the first area at once, which improves cleaning efficiency.
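A minimal sketch of this gather-then-deliver variant, using a greedy nearest-neighbour ordering for the second cleaning track (an assumption; the text only requires that the track pass through every target):

```python
import math

def plan_second_and_third_tracks(second_coords, first_coord):
    """Second cleaning track: a greedy nearest-neighbour pass through every
    target's second spatial coordinate. Third cleaning track: from the
    second track's end point (the third spatial coordinate) to the first
    area."""
    remaining = list(second_coords)
    track = [remaining.pop(0)]
    while remaining:
        last = track[-1]
        nxt = min(remaining,
                  key=lambda c: math.hypot(c[0] - last[0], c[1] - last[1]))
        remaining.remove(nxt)
        track.append(nxt)
    third = (track[-1], first_coord)  # start = second track's end point
    return track, third
```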
In some solutions, optionally, the target object includes dirt of at least two dirt types, and cleaning the target object includes: acquiring at least two fourth spatial coordinates that correspond one-to-one to the at least two dirt types; planning at least two fourth cleaning tracks based on the at least two fourth spatial coordinates; and controlling the executing mechanism to clean the dirt of the at least two dirt types along the at least two fourth cleaning tracks.
In the technical solution of the present application, the at least two fourth spatial coordinates correspond respectively to the dirt of the at least two dirt types.
In this technical solution, the at least two fourth cleaning tracks are planned from the at least two fourth spatial coordinates, each track cleaning dirt of one type. When the executing mechanism cleans the dirt, it follows the fourth cleaning tracks in sequence so that each dirt type is cleaned separately.
In the technical solution of the present application, when the target object contains dirt of different types, the current spatial position includes the fourth spatial coordinates of each type. Planning a fourth cleaning track per dirt type from these coordinates reduces the mixing of different dirt types while the robot moves them, improving the cleaning effect.
In this technical solution, when the target object includes dirt of different types, the fourth cleaning tracks planned by the robot clean each type independently, preventing liquid dirt from being scattered and improving the cleaning effect and efficiency.
In some embodiments, optionally, the dirt of at least two dirt types includes solid dirt and liquid dirt; the at least two fourth spatial coordinates include solid dirt coordinates and liquid dirt coordinates; and the at least two fourth cleaning tracks include a solid cleaning track and a liquid cleaning track. Planning the at least two fourth cleaning tracks based on the at least two fourth spatial coordinates includes: generating the solid cleaning track based on the solid dirt coordinates; and generating the liquid cleaning track based on the liquid dirt coordinates, the liquid cleaning track and the solid cleaning track being isolated from each other.
In the technical solution of the present application, the liquid dirt coordinates correspond to the liquid dirt and the solid dirt coordinates to the solid dirt. The liquid cleaning track cleans the liquid dirt, the solid cleaning track cleans the solid dirt, and the two are isolated from each other. This prevents the two types of dirt from being mixed together through the liquid while cleaning the solid dirt, and prevents the solid dirt from being scattered while cleaning the liquid dirt, improving the cleaning effect and efficiency.
In the technical solution of the present application, when the target object includes both solid dirt and liquid dirt, the first spatial coordinates include the solid dirt coordinates of the solid dirt and the liquid dirt coordinates of the liquid dirt. Planning the solid cleaning track from the solid dirt coordinates and the liquid dirt coordinates reduces the distance solid dirt is dragged through liquid dirt while the robot moves it, and the planned liquid cleaning track then cleans the liquid dirt.
In this technical solution, when the target object includes solid dirt and liquid dirt, the execution track planned by the robot cleans the two separately, preventing the liquid dirt from being scattered while solid dirt located in the liquid dirt is handled, and improving the robot's cleaning effect and efficiency on the target object.
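Grouping dirt coordinates by type into separate tracks, with solid dirt scheduled before liquid dirt so solids are not dragged through the spill, can be sketched as below; the ordering rule and the type labels are assumptions of this sketch:

```python
def plan_fourth_cleaning_tracks(soils):
    """Group (soil_type, coordinate) pairs into one fourth cleaning track
    per dirt type, keeping the tracks isolated from one another.
    Solid dirt is scheduled before liquid dirt (assumed heuristic)."""
    order = {"solid": 0, "liquid": 1}
    by_type = {}
    for soil_type, coord in soils:
        by_type.setdefault(soil_type, []).append(coord)
    return [(t, by_type[t])
            for t in sorted(by_type, key=lambda t: order.get(t, 99))]
```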
In some solutions, optionally, the target object includes liquid dirt, and cleaning the target object includes: determining a seventh spatial coordinate from the current spatial position, the seventh spatial coordinate lying in a second area, which is the area covered by the liquid dirt; and cleaning the liquid dirt according to the seventh spatial coordinate.
In the technical solution of the present application, the second area is the area enclosed by the seventh spatial coordinates, i.e., the area the execution track passes through while the robot cleans the liquid dirt. The current spatial position includes multiple coordinates within the coverage of the liquid dirt, all of which lie in the second area.
In this technical solution, when the target object is liquid dirt, the corresponding seventh spatial coordinates are determined from the current spatial position of the liquid dirt so that the second area they enclose covers all of it; the execution track planned from the seventh spatial coordinates then ensures the robot's cleaning effect on the liquid dirt.
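The second area enclosing the whole spill can be approximated by a padded bounding box of the liquid-dirt coordinates; the axis-aligned shape and the margin value are illustrative assumptions, not the patent's definition of the seventh spatial coordinate:

```python
def enclosing_region(coords, margin=0.05):
    """Return (min_x, min_y, max_x, max_y) of an axis-aligned region that
    encloses every liquid-dirt coordinate, padded by a small margin so
    the second area covers the whole spill."""
    xs = [c[0] for c in coords]
    ys = [c[1] for c in coords]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```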
In some embodiments, optionally, the target object includes an article to be moved, and the cleaning process is performed on the target object, including: acquiring a first gesture and a fifth space coordinate of an object to be moved, wherein the fifth space coordinate is a coordinate of a target area corresponding to the object to be moved; determining a second gesture of the actuating mechanism for grabbing the object to be moved according to the first gesture; controlling the actuating mechanism to grasp the object to be moved in a second posture; and moving the object to be moved into the target area based on the fifth spatial coordinates.
According to the technical scheme, when the target object comprises an object to be moved, the first posture and the fifth space coordinate of the object to be moved are acquired, and based on the first posture, the second posture in which the actuating mechanism grabs the object to be moved is determined, thereby ensuring the stability with which the actuating mechanism grabs the object in the second posture. The object to be moved is then moved, according to the fifth space coordinate, into the target area corresponding to that coordinate.
In the technical scheme of the application, the objects to be moved include, but are not limited to, articles such as plates, cups, bowls and chopsticks. When the target object includes an object to be moved, the robot is required to grasp the object and place it at a corresponding position.
According to the technical scheme, objects to be moved of different shapes and in different postures need to be grabbed by the executing mechanism in different postures. The robot can determine the first posture of the object to be moved based on its current space position, and then determine, according to the first posture, the second posture required for the executing mechanism to grab an object in that first posture, thereby ensuring the stability of the grasp performed in the second posture.
According to the technical scheme, the target area is the area into which the object to be moved needs to be moved, and the fifth space coordinate is the coordinate of the target area. After the executing mechanism grabs the object to be moved in the second posture, the object is moved into the target area according to the fifth space coordinate, completing the cleaning treatment of the object to be moved.
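As a minimal illustrative sketch (not part of the claimed embodiments; the homogeneous-transform representation and the fixed grasp offset are assumptions introduced here), the second posture of the actuating mechanism can be derived from the first posture of the object by composing pose transforms:

```python
import numpy as np

def grasp_pose_from_object_pose(object_pose: np.ndarray,
                                grasp_offset: np.ndarray) -> np.ndarray:
    """Compose the object's pose (first posture) with a per-category grasp
    offset to obtain the gripper pose (second posture). Both arguments are
    4x4 homogeneous transforms expressed in the robot base frame."""
    return object_pose @ grasp_offset

# Example: an upright cup at the origin, approached 0.10 m along its z-axis.
object_pose = np.eye(4)
grasp_offset = np.eye(4)
grasp_offset[2, 3] = 0.10
gripper_pose = grasp_pose_from_object_pose(object_pose, grasp_offset)
```

In practice the grasp offset would be looked up per object category (cup, plate, bowl), which is why the second posture differs for objects of different shapes and first postures.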
In some embodiments, optionally, before the target object is cleaned, the method further includes: controlling the executing mechanism to grasp a cleaning member, the cleaning member being matched with the dirt type corresponding to the target object. According to the technical scheme, when the executing mechanism cleans solid dirt and/or liquid dirt, the cleaning member grabbed by the executing mechanism cleans the dirt along the execution track, thereby ensuring the cleaning effect on the solid dirt and/or liquid dirt.
According to the technical scheme, a user can manually install the cleaning member on the executing mechanism, or the robot can be controlled to automatically grasp the corresponding cleaning member. The cleaning member matches the object category of the target object.
According to the technical scheme, the robot can drive the cleaning piece to move according to the execution track, so that solid dirt and/or liquid dirt can be effectively cleaned.
In some embodiments, optionally, after the cleaning treatment on the target object, the method further includes: acquiring a sixth space coordinate, wherein the sixth space coordinate is a space coordinate of the cleaning scene in which the target object is located; and cleaning the cleaning scene based on the sixth space coordinate.
In this technical solution, a process of integrally cleaning a cleaning scene after the robot completes cleaning of the target object is proposed.
According to the technical scheme, the sixth space coordinate is the coordinate corresponding to the cleaning scene in which the target object is located, and a cleaning track covering the whole cleaning scene can be planned based on the sixth space coordinate. By following the cleaning track planned from the sixth space coordinate, the executing mechanism can clean the cleaning scene completely, avoiding missed areas within the cleaning scene.
According to a second aspect of the present application, there is provided a control device for a robot including an actuator, the control device for a robot including: the acquisition module is used for acquiring the current space position of the target object; the planning module is used for planning an execution track according to the current space position; and the control module is used for controlling the execution mechanism to reach the target space position according to the execution track and cleaning the target object.
According to the technical scheme, the robot can plan the execution track of the executing mechanism according to the current space position of the target object, so that the executing mechanism can execute corresponding cleaning treatment on the target object after moving to the target space position. The planned execution track is matched with the current position of the target object and the cleaning strategy required to be executed by the target object, so that the accuracy of the robot in planning the execution track is improved. The space coordinates of the target object are considered when the execution track is planned, so that the robot can accurately position the target object and execute cleaning treatment conforming to the target object, and the consistency and accuracy of the robot in the cleaning treatment process are improved.
According to a third aspect of the present application, there is provided a control device for a robot, comprising: a memory in which a program or instructions are stored; and a processor that executes the program or instructions stored in the memory to implement the steps of the control method of the robot in any one of the first aspects. The device therefore has all the beneficial technical effects of the control method of the robot in any one of the first aspects, which will not be described in detail herein.
According to a fourth aspect of the present application, there is provided a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the steps of the method for controlling a robot as in any of the above first aspects. The readable storage medium therefore has all the beneficial technical effects of the control method of the robot in any of the above first aspects, which will not be described in detail herein.
According to a fifth aspect of the present application there is provided a robot comprising: the control device of the robot as defined in the above second or third aspect and/or the readable storage medium as defined in the above fourth aspect thus has all the advantageous technical effects of the control device of the robot as defined in the above second or third aspect and/or the readable storage medium as defined in the above fourth aspect, and will not be described in detail herein.
Additional aspects and advantages of the application will be set forth in part in the description which follows, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates one of the schematic flowcharts of a control method of a robot provided in some embodiments of the application;
FIG. 2 illustrates a schematic structural view of a robot provided in some embodiments of the present application;
FIG. 3 illustrates a second schematic flow chart of a control method of a robot provided in some embodiments of the application;
FIG. 4 illustrates a schematic third flowchart of a control method of a robot provided in some embodiments of the application;
FIG. 5 illustrates a fourth schematic flow chart of a control method of a robot provided in some embodiments of the application;
FIG. 6 illustrates one of the schematic diagrams of execution traces provided in some embodiments of the application;
FIG. 7 illustrates a second schematic of execution traces provided in some embodiments of the application;
FIG. 8 illustrates a fifth schematic flow chart of a control method of a robot provided in some embodiments of the application;
FIG. 9 illustrates a third schematic diagram of execution traces provided in some embodiments of the application;
FIG. 10 illustrates a fourth schematic diagram of an execution trace provided in some embodiments of the application;
FIG. 11 illustrates a fifth schematic diagram of an execution trace provided in some embodiments of the application;
FIG. 12 illustrates a sixth schematic of execution traces provided in some embodiments of the application;
FIG. 13 illustrates a schematic diagram of the working logic of a robot provided in some embodiments of the application;
fig. 14 is a block diagram showing a control apparatus of a robot according to some embodiments of the present application;
fig. 15 is a block diagram showing a control apparatus of a robot according to some embodiments of the present application;
fig. 16 illustrates a block diagram of a robot provided by some embodiments of the application.
Wherein reference numerals in fig. 2 and 11 are as follows:
200 robot, 210 actuator, 212 grabbing assembly, 214 traveling assembly, 220 image acquisition device, 1100 article to be cleaned.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the present embodiment and the features in the embodiment may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, however, the present application may be practiced in other ways than those described herein, and therefore the scope of the present application is not limited to the specific embodiments disclosed below.
Methods of controlling a robot, apparatuses, readable storage media, and robots according to some embodiments of the present application are described below with reference to fig. 1 to 16.
According to an embodiment of the present application, as shown in fig. 1, a control method of a robot is provided, the robot including an actuator.
The control method of the robot comprises the following steps:
102, acquiring the current space position of a target object;
104, planning an execution track according to the current space position;
and 106, controlling the executing mechanism to reach the target space position according to the executing track, and cleaning the target object.
The embodiment of the application provides a control method for controlling a robot. The robot comprises an executing mechanism, through which the robot can execute a cleaning operation on a target object, and further comprises an image acquisition device, through which the robot can acquire an environment image of the cleaning scene. By identifying the environment image, a corresponding cleaning scheme is planned for the current space position, and the cleaning treatment of the target object is completed. This improves the automation capability of the robot, makes the robot's actions more coherent, and improves the flexibility of those actions.
Fig. 2 illustrates a schematic structural diagram of a robot provided in some embodiments of the present application, and as illustrated in fig. 2, the robot 200 includes an actuator 210 and an image acquisition device 220, and the actuator 210 includes a gripping assembly 212 and a traveling assembly 214. The robot 200 can travel into a target area in a cleaning scene through the traveling assembly 214 and grasp the cleaning member to clean the target object through the grasping assembly 212 or directly grasp the target object to clean. It will be appreciated that the robot in the present application may be a home service robot, a cleaning robot, a mall robot, etc., without limitation. In some embodiments, the technical scheme provided by the application can be applied to the deployment of a product platform of an open life scene service robot.
In this embodiment, the robot travels into the cleaning scene and continuously collects image data there through the image acquisition device; multiple frames of image data may be collected, and the environment images containing the target object among them are identified by image recognition technology. One or more environment images may be obtained. After an environment image is acquired, it is analyzed to determine the space coordinates of the target object in three-dimensional space, i.e. the current spatial position of the target object. When the robot executes cleaning treatment on the target object, the target object is located based on its space coordinates, which improves the accuracy of the robot's actions in the spatial dimension, making it convenient for the robot to move the executing mechanism to the target spatial position according to the execution track and clean the target object in a suitable cleaning mode.
Illustratively, the user sends a "clean kitchen counter" cleaning instruction to the robot, which automatically travels to the kitchen and collects image data there; upon recognizing that the image data includes a counter image, the counter image is determined to be the environment image. The target object to be cleaned in the counter image is identified, along with its current spatial position. For example: the object type of the target object is identified as a cup to be cleaned, and the space coordinates of the cup are located; the executing mechanism determines, based on the object type, that the target object needs to be grabbed, transported and placed, so the executing mechanism is controlled to grab the cup based on those space coordinates and send it into the dishwasher. The area where the target object was located is then cleaned as a whole, achieving the effect of automatically completing the cleaning.
In the embodiment of the application, the current spatial position is the coordinate of the target object in the spatial coordinate system, and the current spatial position can be the coordinate set of the boundary of the target object.
In the embodiment of the application, the target space position is a position from which the executing mechanism can conveniently execute cleaning treatment on the target object. When the executing mechanism is used to clean the target object, it first needs to be moved to the target space position; after it has been moved there, the executing mechanism is started to execute the cleaning treatment on the target object.
In the embodiment of the application, the robot can plan the execution track of the executing mechanism according to the current space position of the target object, so that the executing mechanism can execute corresponding cleaning treatment on the target object after moving to the target space position. The planned execution track is matched with the current position of the target object and the cleaning strategy required to be executed by the target object, so that the accuracy of the robot in planning the execution track is improved. The space coordinates of the target object are considered when the execution track is planned, so that the robot can accurately position the target object and execute cleaning treatment conforming to the target object, and the consistency and accuracy of the robot in the cleaning treatment process are improved.
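The locate-plan-execute flow of steps 102 to 106 can be sketched with a minimal stub (all class and method names below are hypothetical illustrations, not the application's API):

```python
class StubRobot:
    """Minimal stand-in for the robot's perception, planning and actuation."""

    def locate(self, obj):
        # Step 102: acquire the current spatial position of the target object.
        return obj["position"]

    def plan_trajectory(self, current, target):
        # Step 104: plan an execution track; here simply a straight segment.
        return [current, target]

    def execute(self, trajectory):
        # Step 106: drive the actuator along the track to the target position.
        self.end_pose = trajectory[-1]
        return self.end_pose

robot = StubRobot()
cup = {"position": (1.0, 0.5, 0.8)}
target_pose = (1.0, 0.4, 0.8)  # assumed offset pose suited to cleaning the cup
reached = robot.execute(robot.plan_trajectory(robot.locate(cup), target_pose))
```

The stub makes explicit that the track's start is derived from perception and its end from the cleaning strategy, matching the accuracy argument above.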
As shown in fig. 3, in some embodiments, optionally, obtaining the current spatial position of the target object includes:
step 302, acquiring an environment image corresponding to a target object, wherein the environment image comprises the target object;
step 304, determining a current spatial position of the target object based on the environmental image.
In the embodiment of the application, the environment image is an image of a clean scene where the target object is located, and the image is acquired by an image acquisition device.
Specifically, the environment image is divided into at least two sub-images, and the at least two sub-images correspond to at least two cleaning areas in the cleaning scene; extracting target image characteristics in target sub-images in at least two sub-images, wherein the target image characteristics are image characteristics of a target object; the current spatial position is determined from the target image features and the object class is determined from the target image features.
In this embodiment, the robot divides the environment image to obtain at least two sub-images, and screens the at least two sub-images to obtain a target sub-image, where the target sub-image includes a target image feature corresponding to a target object, and based on the target image feature, the current spatial position of the target object and the object class of the target object can be determined.
In the embodiment of the application, when the robot divides the environment image, it divides the image based on semantic recognition to obtain at least two sub-images, which correspond to different cleaning areas in the cleaning scene. By segmenting the environment image, the different cleaning areas in the cleaning scene can be identified on the environment image, making it convenient to locate the current space position of the target object, improving the accuracy with which the robot identifies the target area, and avoiding invalid cleaning of other areas. Specifically, the robot divides out the sub-images corresponding to different cleaning areas by a boundary recognition algorithm for the cleaning areas, an instance segmentation algorithm, or the like.
Illustratively, the environment image is an image acquired by a robot in a kitchen, and the robot divides the environment image into a ground image, a table top image and a cabinet image by means of semantic recognition.
In the embodiment of the application, after at least two sub-images are obtained by dividing, the target sub-image in at least two sub-images is screened by the image characteristics in each sub-image, and the target image characteristics in the target sub-image are extracted. By screening the sub-images obtained by dividing, the accuracy of the target sub-images with the target image characteristics obtained by positioning can be improved, and the accuracy of identifying the object category of the target object can be further improved.
Illustratively, the plurality of sub-images includes a "floor image", a "table top image" and a "cabinet image". Because the control instruction input by the user to the robot is "clean the table surface", the robot determines the table top image as the target sub-image and extracts the target image features in it, the target image features being the image features of the table top, i.e. of the objects on the table top.
In the embodiment of the application, after the target image feature is identified, the current space position of the target object is positioned based on the target image feature, and the object category of the target object corresponding to the target image feature is identified.
Illustratively, the target image features are identified through a two-dimensional/three-dimensional network model, and the current spatial position of the target object is accurately positioned by combining color information and depth information in the target image features.
Illustratively, the object class of the target object can be obtained by identifying the target image features with a classification network model, for example: soiled trays, bowls, cups, etc. Other solid dirt includes solid food residues, such as bones and balled-up paper towels; liquid dirt includes fluid adhering to an area, such as dripped oil stains.
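A hypothetical mapping from the recognized object category to the matching cleaning process might look as follows (the category labels and action names are illustrative assumptions, not the application's taxonomy):

```python
# Hypothetical mapping from recognized object category to cleaning strategy.
CATEGORY_TO_ACTION = {
    "cup": "grasp_and_transport",
    "bowl": "grasp_and_transport",
    "solid_residue": "sweep_to_collection_area",
    "liquid_stain": "wipe_with_cleaning_member",
}

def cleaning_action(object_category: str) -> str:
    """Return the cleaning process matched to the identified category."""
    return CATEGORY_TO_ACTION.get(object_category, "report_unknown_object")
```

Such a lookup is why correct category identification directly determines which cleaning treatment the executing mechanism performs.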
According to the embodiment of the application, segmentation detection of the environment image can improve the accuracy of locating the current space position of the target object and of identifying its object category, so that the robot can execute a cleaning process matched with that object category, improving the coherence of the cleaning process on the target object.
In some embodiments, optionally, planning the execution track according to the current space position includes: determining a target space position corresponding to the executing mechanism according to the current space position; and planning the execution track based on the initial space position of the executing mechanism and the target space position.
In the embodiment of the application, the current space position is the current position of the target object, and the target space position is the position required to be located when the executing mechanism cleans the target object. After the current spatial position of the target object is determined by means of image recognition, the starting position of the cleaning process of the target object, i.e. the target spatial position, is determined by means of the current spatial position.
In the embodiment of the application, the initial spatial position is the spatial position where the executing mechanism is currently located, and the executing track can be planned and obtained through the initial spatial position and the target spatial position. The initial spatial position is the start point of the execution track, and the target spatial position is the end point of the execution track.
In the embodiment of the application, the starting point of the cleaning treatment of the target object by the executing mechanism, namely the target space position, can be determined through the current space position of the target object, and then the starting point and the end point of the execution track can be determined according to the initial space position of the executing mechanism and the target space position, so that the accuracy of the execution track planned according to the starting point and the end point is ensured to be higher.
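A straight-line execution track between the initial spatial position (track start) and the target spatial position (track end) can be sketched as linear interpolation (a deliberate simplification; a real planner would also account for obstacles and kinematic limits):

```python
def interpolate_track(start, end, steps=5):
    """Waypoints of a straight execution track from the actuator's initial
    spatial position (track start) to the target spatial position (end)."""
    return [
        tuple(s + (e - s) * i / (steps - 1) for s, e in zip(start, end))
        for i in range(steps)
    ]

track = interpolate_track((0.0, 0.0, 0.0), (1.0, 1.0, 0.0))
```

Fixing both endpoints first, as the embodiment describes, is what makes the interpolated waypoints well defined.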
As shown in fig. 4, in some embodiments, optionally, planning the execution trajectory according to the current spatial position includes:
Step 402, acquiring a first space coordinate, wherein the first space coordinate is an edge coordinate of a first area;
in the embodiment of the application, the first area is used for accommodating the target object. When the target object is solid dirt or liquid dirt, the dirt is moved into the first area so that it can be treated uniformly.
Step 404, according to the first spatial coordinates, controlling the actuator to clean the target object so that the target object can move into the first area.
In this embodiment, it is proposed that when the target object includes dirt, the robot is controlled to move the dirt into the first area to complete cleaning of the dirt, with the current spatial position of the dirt in the cleaning environment and the first spatial coordinates of the first area where the dirt needs to be placed.
In an embodiment of the present application, the first area is an area for storing dirt in a clean environment. And the robot recognizes that the target object comprises dirt in a semantic recognition mode, and determines that the dirt target object needs to be subjected to corresponding cleaning treatment. At this time, a first region in the clean environment is identified, and a first spatial coordinate corresponding to the first region is acquired.
Illustratively, when the target object is located on the desktop, the first region may also be located on the desktop. For example: if the target object is solid food residue on a dining table, the first area is a corner area of the tabletop, and the residue is swept into the corner area along the planned execution track, so that it can be cleaned away conveniently.
Illustratively, when the target object is located on the ground, the first region may also be located on the ground. For example: if the target object is paper scraps on the floor, the first area is a corner area of the floor, and the scraps are piled up in the corner area along the planned execution track, so that they are not scattered into other areas while the user moves around.
In the embodiment of the application, after the current space position of the dirty target object and the first space coordinate of the first area are determined, the corresponding execution track is planned according to the current space position and the first space coordinate, so that the dirty target object is concentrated in the first area after the execution mechanism executes cleaning according to the execution track, and the cleaning effect and the cleaning efficiency of the dirty object are ensured.
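One simple way to realize such a track (an illustrative 2-D sketch, not the disclosed planner; the approach distance is an assumption) is to start slightly behind the dirt relative to the first area, pass through the dirt, and end at the area, so that executing the track pushes the dirt into the area:

```python
def plan_sweep(dirt, area_center, approach=0.05):
    """Plan a 2-D sweep track: start `approach` metres behind the dirt
    (on the far side from the first area), pass through the dirt, and
    end at the centre of the first area."""
    dx, dy = area_center[0] - dirt[0], area_center[1] - dirt[1]
    norm = (dx * dx + dy * dy) ** 0.5
    behind = (dirt[0] - approach * dx / norm, dirt[1] - approach * dy / norm)
    return [behind, dirt, area_center]

track = plan_sweep(dirt=(1.0, 0.0), area_center=(0.0, 0.0))
```

The track's end point being the first spatial coordinate is what guarantees the dirt ends up concentrated in the first area.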
As shown in fig. 5, in some embodiments, optionally, the number of target objects is at least two; according to the first space coordinates, controlling the executing mechanism to clean the target object comprises the following steps:
step 502, obtaining at least two second space coordinates corresponding to at least two target objects;
in the embodiment of the application, if the number of the target objects is at least two, the current space position includes at least two second space coordinates, and each second space coordinate corresponds to one target object.
Step 504, cleaning the target object according to at least two second space coordinates and the first space coordinates.
In this embodiment, in the case that the number of target objects is at least two, the at least two second spatial coordinates and the first spatial coordinate of the first area into which the target objects need to be moved are determined. When the actuator is controlled to perform cleaning processing on the at least two target objects, the actuator must pass through the at least two second spatial coordinates and collect the target objects at those coordinates into the first area.
For example, in the case where the target object is solid dirt, when the number of solid dirt is at least two, it is necessary to collect at least two solid dirt into the first region.
In the embodiment of the application, under the condition that the number of the target objects is a plurality of, the second space coordinates corresponding to the plurality of target objects are obtained, and the execution mechanism is controlled to clean the plurality of target objects according to the second space coordinates and the first space coordinates, so that the plurality of target objects can be collected in the first area uniformly.
In some possible embodiments, optionally, the cleaning of the target object according to at least two second spatial coordinates and the first spatial coordinates includes: determining at least two first cleaning tracks according to the at least two second space coordinates and the first space coordinates; and controlling the executing mechanism to clean at least two target objects along at least two first cleaning tracks in sequence.
In this embodiment, it is proposed that, in the case that the number of target objects is at least two, the at least two first cleaning trajectories are planned by the corresponding at least two second spatial coordinates and the first spatial coordinates, so that the robot can perform cleaning according to the at least two first cleaning trajectories, and then, the at least two target objects can be located in the first area.
In the embodiment of the application, the first cleaning tracks are in one-to-one correspondence with the second space coordinates, the starting point of each first cleaning track is matched with the second space coordinates, and the ending point of each first cleaning track is matched with the first space coordinates. The robot executes cleaning processing on the target object according to the execution track comprising a plurality of first cleaning tracks, so that the target objects at different positions can be ensured to move into the first area.
As shown in fig. 6, for example, the target object includes three target objects 602 located in three different areas, and all three need to move into the same first area 604. According to the three second spatial coordinates corresponding to the three target objects 602 and the first spatial coordinate of the first region 604, three first cleaning tracks 606 can be planned; that is, the execution track includes the three first cleaning tracks 606, so that after the robot executes cleaning according to the execution track, all three target objects 602 have moved into the first region 604.
In the embodiment of the application, when the target object comprises a plurality of target objects, the robot can plan a plurality of corresponding first cleaning tracks for the target objects positioned at different positions respectively, so that the execution track comprises a plurality of first cleaning tracks, the robot can clean the target objects positioned at different positions according to different cleaning tracks respectively, the situation that the robot breaks up other target objects when moving one target object is avoided, and the cleaning effect is improved.
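The per-object scheme of fig. 6 can be sketched as one track per second spatial coordinate, each ending at the first spatial coordinate (the coordinates and the straight-segment simplification below are assumptions for illustration):

```python
def first_cleaning_tracks(second_coords, first_coord):
    """One first cleaning track per target object: each track starts at that
    object's second spatial coordinate and ends at the first area."""
    return [[coord, first_coord] for coord in second_coords]

tracks = first_cleaning_tracks([(1, 1), (2, 0), (3, 2)], (0, 0))
```

Keeping the tracks separate is what prevents the actuator from scattering one object while moving another.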
In some embodiments, optionally, the cleaning of the target object according to at least two second spatial coordinates and the first spatial coordinates includes: determining a second cleaning track according to at least two second space coordinates; determining a third cleaning track according to the third space coordinate and the first space coordinate, wherein the third space coordinate is the coordinate of a track end point of the second cleaning track; and after the control executing mechanism sequentially passes through at least two target objects along the second cleaning track, the control executing mechanism carries out cleaning treatment on the at least two target objects along the third cleaning track.
In this embodiment, it is proposed that, in the case that the number of target objects is at least two, through the corresponding at least two current spatial positions, a second cleaning track passing through the at least two current spatial positions is planned, so that after the robot performs cleaning according to the second cleaning track, the target objects located at different positions can be collected uniformly. And planning to obtain a third cleaning track based on a third space coordinate corresponding to the end point of the second cleaning track and a second space coordinate corresponding to the first area, so that the robot can move the uniformly collected target objects into the first area after executing cleaning treatment according to the third cleaning track.
In the embodiment of the application, the second cleaning track passes through the current space position corresponding to each target object, the third cleaning track is connected with the second cleaning track, and the starting point of the third cleaning track is the end point of the second cleaning track, so that the robot can continuously execute the corresponding third cleaning track after collecting all the target objects according to the second cleaning track, and the collected target objects can be moved into the first area.
As shown in fig. 7, for example, the target object includes three target objects 702 located in three different areas, and all three need to move into the same first area 704. According to the three current spatial positions corresponding to the three target objects 702, a second cleaning track 706 is planned, and a third cleaning track 708 is planned taking the end point of the second cleaning track 706 as its starting point. The robot collects the three target objects 702 located at different positions while running along the second cleaning track 706, continues along the third cleaning track 708, and moves the three collected target objects 702 into the first area 704.
In the embodiment of the application, when the target object comprises a plurality of target objects, the robot can plan the second cleaning track based on the current space positions of the target objects located at different positions, and then plan the third cleaning track based on the third space coordinate of the end point of the second cleaning track and the first spatial coordinate. The robot thus collects all the target objects while moving along the second cleaning track and moves all of them into the first area while moving along the third cleaning track, so that all target objects are moved into the first area in one pass, improving cleaning efficiency.
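The second/third cleaning tracks of fig. 7 can be sketched with a greedy nearest-neighbour ordering (a simplification chosen here for illustration; the application does not specify an ordering heuristic):

```python
def single_pass_tracks(start, second_coords, first_coord):
    """Second cleaning track: visit every target object once, greedily
    choosing the nearest remaining object. Third cleaning track: from the
    last object visited (the third spatial coordinate) to the first area."""
    remaining = list(second_coords)
    pos, second_track = start, []
    while remaining:
        nxt = min(remaining,
                  key=lambda p: (p[0] - pos[0]) ** 2 + (p[1] - pos[1]) ** 2)
        remaining.remove(nxt)
        second_track.append(nxt)
        pos = nxt
    third_track = [pos, first_coord]
    return second_track, third_track

second, third = single_pass_tracks((0, 0), [(2, 0), (1, 0), (3, 0)], (4, 0))
```

The third track's start being the second track's end point is exactly the chaining described above.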
As shown in fig. 8, in some embodiments, optionally, the target object includes at least two soil types of soil; performing a cleaning process on a target object, comprising:
step 802, obtaining at least two fourth space coordinates of at least two types of dirt, wherein the at least two fourth space coordinates correspond to the at least two types of dirt one by one;
step 804, planning to obtain at least two fourth cleaning tracks based on at least two fourth space coordinates;
step 806, controlling the executing mechanism to clean the dirt of at least two dirt types along at least two fourth cleaning tracks.
In the embodiment of the present application, at least two fourth space coordinates respectively correspond to at least two types of dirt, for example: the target object includes solid dirt and liquid dirt, and the number of the fourth space coordinates is two, corresponding to the solid dirt and the liquid dirt respectively.
In the embodiment of the application, at least two fourth cleaning tracks are planned from the at least two fourth spatial coordinates, one for each of the at least two dirt types. When the executing mechanism cleans the dirt of the at least two dirt types, the different dirt types can be cleaned by following the fourth cleaning tracks in sequence.
For example, in the case that the target object includes both solid dirt and liquid dirt, a fourth cleaning track corresponding to the solid dirt and a fourth cleaning track corresponding to the liquid dirt are planned so that the two fourth cleaning tracks do not cross or overlap. This reduces the chance of the executing mechanism touching the liquid dirt while cleaning the solid dirt, and of the liquid dirt coverage area being enlarged by solid dirt moving within the liquid dirt range.
In the embodiment of the application, when the target object contains dirt of different dirt types, the current spatial position includes a fourth spatial coordinate for the dirt of each dirt type. By planning a fourth cleaning track for each dirt type from its fourth spatial coordinates, the chance of dirt of different types becoming mixed together while the robot moves the dirt along its track is reduced, and the cleaning effect is improved.
In the embodiment of the application, when the target object comprises dirt of different dirt types, the fourth cleaning tracks planned by the robot can independently clean the dirt of each dirt type, preventing the liquid dirt from being scattered and improving the cleaning effect and cleaning efficiency of the robot on the target object.
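Grouping dirt coordinates by type before planning can be sketched as follows (a minimal illustration with hypothetical names; the patent's planner would additionally ensure the per-type tracks do not cross):

```python
def plan_tracks_by_type(dirt):
    """dirt: list of (soil_type, (x, y)) detections. Returns one waypoint
    list per soil type, so each type gets its own fourth cleaning track."""
    tracks = {}
    for soil_type, coord in dirt:
        tracks.setdefault(soil_type, []).append(coord)
    return tracks
```

For detections of two solid spots and one liquid spot, this yields two separate tracks, one per dirt type, to be executed in sequence.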
In some embodiments, optionally, the at least two soil types of soil include solid soil and liquid soil, and the at least two fourth spatial coordinates include: the solid dirt coordinates and the liquid dirt coordinates, and the at least two fourth cleaning tracks comprise a solid cleaning track and a liquid cleaning track; planning to obtain at least two fourth cleaning tracks based on at least two fourth space coordinates, including: generating a solid cleaning track based on the solid dirt coordinates; and generating a liquid cleaning track based on the liquid dirt coordinates, wherein the liquid cleaning track and the solid cleaning track are isolated from each other.
In the embodiment of the application, the liquid dirt coordinates correspond to the liquid dirt, and the solid dirt coordinates correspond to the solid dirt. The liquid cleaning track is used for cleaning the liquid dirt and the solid cleaning track for cleaning the solid dirt. Because the two tracks are isolated from each other, the two different types of dirt are prevented from being mixed together by passing through the liquid dirt while cleaning the solid dirt, the solid dirt is prevented from being scattered while cleaning the liquid dirt, and the cleaning effect and cleaning efficiency are improved.
In an embodiment of the application, when the target object comprises both solid dirt and liquid dirt, the fourth spatial coordinates comprise the solid dirt coordinates of the solid dirt and the liquid dirt coordinates of the liquid dirt. By planning the solid cleaning track from the solid dirt coordinates and the liquid dirt coordinates, the moving distance of the solid dirt within the liquid dirt can be reduced when the robot moves the solid dirt along the track, improving the cleaning effect; the planned liquid cleaning track then cleans the liquid dirt.
As shown in fig. 9, by way of example, if the target object includes solid dirt 902 and liquid dirt 904, the solid cleaning track 906 bypasses the area where the liquid dirt 904 is located and moves the solid dirt 902 into the first area, while the liquid cleaning track 908 passes only through the liquid dirt 904. Alternatively, a second solid cleaning track 910 may be planned to clean all of the solid dirt 902 directly.
In the embodiment of the application, when the target object comprises solid dirt and liquid dirt, the execution tracks planned by the robot can independently clean the solid dirt and the liquid dirt, preventing the liquid dirt from being scattered when solid dirt located within the liquid dirt is cleaned, and improving the cleaning effect and cleaning efficiency of the robot on the target object.
In some embodiments, optionally, the target object comprises liquid soil; cleaning the target object, including: determining a seventh space coordinate according to the current space position, wherein the seventh space coordinate is in a second area, and the second area is a liquid dirt coverage area; and cleaning the liquid dirt according to the seventh space coordinate.
In the embodiment of the application, the second area is the area enclosed by the seventh spatial coordinates, that is, the area through which the execution track passes while the robot cleans the liquid dirt. The current spatial position includes a plurality of coordinates within the liquid dirt coverage area, all of which are located in the second area.
Optionally, the second region is the region corresponding to a circumscribed rectangle of the region where the seventh spatial coordinates are located, and the second spatial coordinates include the boundary coordinates of that circumscribed rectangle.
Optionally, the second region is the region corresponding to a circumscribed circle of the region where the seventh spatial coordinates are located, and the second spatial coordinates include the boundary coordinates of that circumscribed circle.
Optionally, the second region has the same shape as the region where the seventh spatial coordinates are located but is larger, that is, the second spatial coordinates are obtained by extending the boundary of the seventh spatial coordinates outward by a preset distance.
As shown in fig. 10, the target object illustratively includes a liquid dirt 1002; the boundary of the second region lies outside the boundary of the liquid dirt 1002, and the planned cleaning trajectory 1004 for cleaning the liquid dirt 1002 completely covers the second region.
In the embodiment of the application, when the target object is liquid dirt, the corresponding seventh space coordinate is determined based on the current space position of the liquid dirt, so that the second area surrounded by the seventh space coordinate covers the whole liquid dirt, and the corresponding execution track is planned based on the seventh space coordinate, thereby ensuring the cleaning effect of the robot for cleaning the liquid dirt according to the execution track.
In some embodiments, optionally, the target object includes an article to be moved, and the cleaning process for the target object includes: acquiring a first posture and a fifth spatial coordinate of the object to be moved, wherein the fifth spatial coordinate is a coordinate of the target area corresponding to the object to be moved; determining, according to the first posture, a second posture in which the actuating mechanism grasps the object to be moved; controlling the actuating mechanism to grasp the object to be moved in the second posture; and moving the object to be moved into the target area based on the fifth spatial coordinate.
In this embodiment, when the target object includes an object to be cleaned, the first posture and the fifth spatial coordinate of the object are acquired, and the second posture in which the actuator grasps the object is determined based on the first posture, ensuring that the actuator grasps the object stably in the second posture. The object to be moved is then moved into the target area corresponding to the fifth spatial coordinate.
In embodiments of the present application, the items to be cleaned include, but are not limited to, dishes, cups, bowls, chopsticks, and the like. When the target object includes an object to be cleaned, the robot is required to grasp the object to be cleaned and place the object to be cleaned to a corresponding position.
In the embodiment of the application, objects to be cleaned with different shapes and in different postures need to be grasped by the executing mechanism in different postures. The robot can determine the first posture of the object to be cleaned based on its current spatial position, and from the first posture determine the second posture required for the executing mechanism to grasp the object, ensuring the stability of the grasp.
As shown in fig. 11, the target object is illustratively an object 1100 to be cleaned, and after the robot 200 grips the object 1100 to be cleaned in the cleaning scene by the gripping assembly 212, the robot 200 moves to the placement scene by the traveling assembly 214 and places the object 1100 to be cleaned in the target area in the placement scene.
In the embodiment of the application, the target area is the area into which the object to be cleaned needs to be moved, and the fifth spatial coordinate is the coordinate of the target area. After the executing mechanism grasps the object to be cleaned in the second posture, it moves the object into the target area according to the fifth spatial coordinate, completing the cleaning treatment of the object.
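Deriving a second (grasp) posture from a first (object) posture can be illustrated with a very simple rule. The sketch below is an assumption throughout (hypothetical names, a top-down approach, a fixed 10 cm hover height, and a gripper yaw turned 90° to the object's long axis so the jaws close across it); real grasp planners are far more involved:

```python
import math

def grasp_pose(object_pose):
    """object_pose: (x, y, z, yaw) — the first posture of the item.
    Returns a second posture for the gripper: hover above the object,
    with the gripper yaw perpendicular to the object's long axis."""
    x, y, z, yaw = object_pose
    approach_height = 0.10            # assumed hover offset in metres
    gripper_yaw = (yaw + math.pi / 2) % math.pi   # jaws across the long axis
    return {"position": (x, y, z + approach_height), "yaw": gripper_yaw}
```

For a tray lying flat at (1.0, 2.0, 0.8) with yaw 0, the gripper hovers at height 0.9 with yaw π/2.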
In some embodiments, optionally, before the target object is subjected to the cleaning process, the method further includes: the executing mechanism is controlled to grasp the cleaning piece, and the cleaning piece is matched with the dirt type corresponding to the target object. In this embodiment, it is proposed that when the execution mechanism cleans the solid dirt and/or the liquid dirt, the cleaning member grabbed by the execution mechanism cleans the solid dirt and/or the liquid dirt according to the execution track, so as to ensure the cleaning effect on the solid dirt and/or the liquid dirt.
In the embodiment of the application, a user can manually install the cleaning piece on the executing mechanism, and can also control the robot to automatically grasp the corresponding cleaning piece. Wherein the cleaning member matches the object category of the target object.
Illustratively, where the target object is a liquid stain, the cleaning member is a mopping assembly including, but not limited to, a rag tray, a roller brush, a water-absorbing sponge, and the like.
Illustratively, where the target object is a solid dirt, the cleaning elements are a dust extraction assembly and a sweeping assembly.
In the embodiment of the application, the robot can drive the cleaning piece to move according to the execution track, so that the solid dirt and/or the liquid dirt can be effectively cleaned.
In some embodiments, optionally, after the cleaning process is performed on the target object, the method includes: acquiring a sixth space coordinate, wherein the sixth space coordinate is a space coordinate of a cleaning scene, and a target object is positioned in the cleaning scene; and cleaning the cleaning scene based on the sixth spatial coordinates.
In this embodiment, a process of integrally cleaning a cleaning scene after the robot completes cleaning of the target object is proposed.
In the embodiment of the application, the sixth spatial coordinate is the coordinate corresponding to the cleaning scene where the target object is located. A cleaning track covering the whole cleaning scene can be planned based on the sixth spatial coordinate, and the executing mechanism, following that track, can clean the cleaning scene completely, avoiding missed areas in the cleaning scene.
As shown in fig. 12, for example, a sixth cleaning trajectory 1202 planned according to the sixth spatial coordinate matches the boundary of the cleaning scene, i.e., the sixth cleaning trajectory 1202 lies within the boundary range of the cleaning scene. As can be seen, a differently shaped cleaning scene yields a differently shaped sixth cleaning trajectory.
In any of the above embodiments, determining the current spatial location according to the target image feature includes: extracting color information and depth information in the target image characteristics; and determining the current space position according to the color information and the depth information.
In this embodiment, a process of locating the target object according to the color information in the target image feature and the depth information to obtain the current spatial position of the target object is proposed.
In an embodiment of the application, the color information comprises RGB information of the target image feature and the Depth information comprises Depth information of the target image feature. By combining the RGB information and the Depth information, the current spatial position of the target object can be obtained.
Specifically, color information and depth information in the target image feature are identified through the network identification model, the identified color information and depth information are converted into point cloud data, and the target object is positioned through the point cloud data, so that the corresponding current space position is obtained.
Illustratively, the target image feature is input into the network recognition model, RGB (red, green, blue) information and Depth (Depth) information can be obtained, and it is determined that the object corresponding to the target image feature is a tray. And obtaining point cloud data of the target image characteristics through the RGB information and the Depth information, and obtaining the current space position of the target object through the point cloud data.
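Converting an RGB-D pixel into a 3D point is standard pinhole back-projection. The helper below is a generic illustration, not the patent's implementation (the intrinsics fx, fy, cx, cy would come from the robot camera's calibration):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) with depth in metres to a 3D
    point in the camera frame. (cx, cy) is the principal point and
    (fx, fy) the focal lengths in pixels."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

A pixel at the principal point maps straight onto the optical axis; a pixel 100 px to the right at 2 m depth maps 0.4 m to the right for fx = 500. Applying this to every detected pixel yields the point cloud data from which the current spatial position is obtained.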
According to the embodiment of the application, the robot can extract the color information and the depth information in the target image features and, from them, accurately determine the current spatial position of the target object. The position of the target object can thus be determined before the robot starts cleaning, and the spatial position of the target object is fully referenced while cleaning it, improving the action consistency of the robot's executing mechanism.
In some embodiments, optionally, determining the current spatial location according to the color information and the depth information includes: inputting color information into a first network model to obtain plane coordinates of a target object, wherein the first network model is a two-dimensional network model; the current spatial position is determined based on the plane coordinates and the depth information.
In this embodiment, the color information is identified by the two-dimensional network model, and then the planar coordinates obtained by the identification are combined with the depth information, so that the current spatial position of the target object can be obtained.
Specifically, the RGB information is identified through the two-dimensional first network model, so that the plane coordinates of the target object can be obtained, and the current spatial position of the target object can be positioned by combining the extracted Depth information of the target object.
In the embodiment of the application, the color information is identified through the two-dimensional network model, and the identification result is combined with the depth information, so that the current space position of the target object can be accurately obtained, and the accuracy of determining the positioning information of the target object in space is improved.
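The "2D detection plus depth" route can be sketched as: take the bounding box the two-dimensional model returns on the colour image, then combine its centre with the depth values inside the box. This is an illustrative stand-in (hypothetical names; a median over the box is one simple way to reject a few bad depth pixels, not necessarily what the first network model does):

```python
from statistics import median

def locate(bbox, depth_map):
    """bbox: (u0, v0, u1, v1) from the 2D model on the colour image.
    depth_map: dict {(u, v): depth}. Returns the box centre plus the
    median depth inside the box as the target's localisation."""
    u0, v0, u1, v1 = bbox
    ds = [d for (u, v), d in depth_map.items() if u0 <= u <= u1 and v0 <= v <= v1]
    return ((u0 + u1) / 2, (v0 + v1) / 2, median(ds))
```

With three depth pixels inside the box, one of them an outlier, the median keeps the localisation robust.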
In some embodiments, optionally, determining the current spatial location according to the color information and the depth information includes: and inputting the color information and the depth information into a second network model to obtain the current space position, wherein the second network model is a three-dimensional network model.
In this embodiment, by inputting the color information and the depth information of the target image feature into the three-dimensional network model, the three-dimensional network model can obtain the point cloud data of the target object based on the color information and the depth information, and the current spatial position of the target object can be determined from the point cloud data.
Specifically, the color information and the depth information are transmitted to the three-dimensional second network model, the second network model can synthesize the color information and the depth information to obtain point cloud data of the target object, and the current spatial position of the target object can be obtained based on the point cloud data.
In the embodiment of the application, the three-dimensional second network model can convert the color information and the depth information into the corresponding point cloud data, and the accuracy of the acquired current spatial position of the target object can be improved by identifying the point cloud data.
In some embodiments, optionally, after extracting the target image feature in the target sub-image in the at least two sub-images, the method further includes: and inputting the target image characteristics into a third network model to obtain corresponding object categories.
In this embodiment, the object class of the target object is determined by identifying and classifying the target image features by a third network model, which is a classified network model.
The third network model may be a MobileNet network model, by which the target image features can be identified and classified. For example, the target image features are kitchen-counter-area features in the image, and the identified object categories of the target objects in those features include: grease (liquid dirt), bones and paper balls (solid dirt), and bowls, dishes and cups (objects to be cleaned).
In the embodiment of the application, the third network model is configured in the robot. After the robot identifies the target image features corresponding to the target object, the features can be identified and classified by the third network model, improving the accuracy of the identified object category of the target object and ensuring the consistency and stability of subsequent control of the robot.
In some embodiments, optionally, before extracting the target image features in the target sub-image in the at least two sub-images, the method further includes: determining a target cleaning region of the at least two cleaning regions based on the cleaning instruction in response to the cleaning instruction; and determining the sub-image corresponding to the target cleaning area as a target sub-image.
In this embodiment, it is proposed that the robot is capable of screening at least two cleaning areas based on a cleaning instruction issued by a user, obtaining a corresponding target cleaning area, and selecting a target sub-image of the divided plurality of sub-images based on the target cleaning area.
In the embodiment of the application, a cleaning area is an area that the robot can clean in an open environment. When the robot divides the environment image, it does so based on the cleaning areas included in the environment image, and each sub-image obtained by the division corresponds to one cleaning area.
Illustratively, the environmental image is an image of a living room area, the plurality of cleaning areas in the living room area including a tabletop area, a floor area, a seating area, and a sofa area.
Specifically, the user gives a cleaning instruction to the robot, and after dividing the environment image into a plurality of sub-images, the robot determines a target cleaning area to be cleaned according to the cleaning instruction, and takes the sub-image corresponding to the target cleaning area as a target sub-image.
Illustratively, the plurality of cleaning areas includes a tabletop area, a floor area, a seating area, and a sofa area, i.e., each cleaning area corresponds to one of the divided sub-images. If the cleaning instruction is "clean the tabletop area", the sub-image corresponding to the tabletop area is determined as the target sub-image.
In the embodiment of the application, the robot can determine the target cleaning area in the cleaning scene based on the cleaning instruction sent by the user, and screen the plurality of sub-images according to the target cleaning area to obtain the corresponding target sub-image, thereby improving the accuracy of screening the target sub-image by the robot.
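The instruction-to-sub-image selection can be sketched as a lookup over the named cleaning areas. This is a deliberately minimal, hypothetical illustration (substring matching on the instruction text; a real system would use proper intent parsing):

```python
def select_target_sub_image(instruction, sub_images):
    """sub_images: dict {area_name: sub_image}. Returns the (area, image)
    pair whose cleaning-area name appears in the cleaning instruction,
    or None if no area is mentioned."""
    for area, image in sub_images.items():
        if area in instruction:
            return area, image
    return None
```

For the instruction "clean the tabletop area" against tabletop and floor sub-images, the tabletop sub-image is selected as the target sub-image.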
In some embodiments, optionally, the object categories include at least one of: solid dirt, liquid dirt, and articles to be cleaned.
In this embodiment, it is proposed that the object types of the target object include at least one of solid dirt, liquid dirt and an object to be cleaned, and the target objects of different object types correspond to different cleaning modes, that is, the robot can plan different execution tracks based on different object types of the target object, so that accuracy of the robot to execute cleaning processing on the target objects of different object types is improved.
Illustratively, when the target object comprises trash such as food waste, paper dust, etc., then it is determined that the target object comprises solid soil.
Illustratively, the target object is greasy dirt, poured beverage, or the like, then it is determined that the target object includes liquid dirt.
Illustratively, the target object is a dish, a cup, an ashtray, or the like, then it is determined that the target object includes the item to be cleaned.
Fig. 13 illustrates a schematic diagram of the working logic of a robot provided in some embodiments of the present application, as shown in fig. 13, with models 1, 2, 3, 4, 5, and 6 configured into the robot. Wherein, the model 1, the model 2 and the model 3 are configured in the identification positioning module.
And under the condition that the robot collects scene data, inputting the scene data into the identification and positioning module, wherein the scene data is the environment image in any embodiment.
In the identification and positioning module, the environment image first passes through model 1, which delimits the areas to be cleaned, preventing invalid cleaning beyond the range during the cleaning process. The multiple delimited ranges correspond to the at least two target sub-images in any of the above embodiments, each target sub-image corresponding to one range. For example: the range of the desktop to be cleaned is delimited, and other invalid areas such as the floor are excluded. The model may delimit the operational area using a boundary recognition algorithm for the cleaning area, an instance segmentation algorithm, and the like.
In the identification and positioning module, the range information delimited by model 1 and the scene data are input into model 2, which identifies the areas where objects to be cleaned are located within the range and their corresponding categories, for example: a tray area on the desktop, a bowl area, etc. Model 2 may use a conventional 2D/3D object detection or segmentation network for localization: a 2D network recognizes RGB and then combines it with Depth to obtain the spatial position, while a 3D network identifies and locates the point cloud data converted from RGB and Depth. Model 2 may be the first network model or the second network model in any of the above embodiments.
In the identification and positioning module, the area to be cleaned is fed into model 3, which further distinguishes specific dirty solids, other dirty solids, and dirty liquid; conventional classification networks can be used, for example the MobileNet series. The specific dirty solids include household items to be stored and transported, such as dinner plates, dishes and water cups stained with stains; other dirty solids include solid food residues such as bones and crumpled paper towels; the dirty liquid includes liquid adhering to the area, for example dripped grease. Model 3 is the third network model in any of the above embodiments.
It should be noted that models 1, 2 and 3 may be simplified so that the processing of all three is performed by a single network model.
The object category and the current spatial position of the target object can be identified by the identification and positioning module. Based on the different object categories of the target object, its current spatial position and object category are input into different models for processing.
Illustratively, when the target object is identified as a dirty solid, the current spatial position and the object category of the target object are input into model 4 for processing. Model 4 can provide the grasping postures and corresponding positions for the dirty solids to be cleaned, so that the mechanical arm can execute the grasp and pick up the object. The model may use a planar grasping algorithm, a 6-DoF (six-degree-of-freedom robotic-arm pose) 3D grasping algorithm, and the like.
After model 4 processes the dirty solids, trajectory planning must be performed to operate, as shown in fig. 11. Since the dirty solids to be cleaned have corresponding specific categories, trajectory planning 1 includes planning of navigation, movement and placement in addition to picking. For example, for a dirty dish on the table to be cleaned, the mechanical arm picks up the dish and sends relevant instruction information to the chassis according to the specific category information; the chassis navigates to the cleaning place, and the dish is then placed there.
After model 4 has processed the dirty solid, trajectory planning must likewise be performed, as shown in figs. 6 and 7. Since the dirty solid is garbage to be cleaned away, it does not need to be preserved. As shown by the first cleaning trajectories 606 in fig. 6, a plurality of dirty solids are gathered together through the plurality of first cleaning trajectories 606. As shown by the second cleaning track 706 and the third cleaning track 708 in fig. 7, the second cleaning track 706 cleans a plurality of dirty solids and gathers them together, i.e., the dirt is collected while cleaning.
Illustratively, upon identifying the target object as a dirty liquid, the current spatial position and object category of the target object are input into model 5 for processing. Model 5 can provide the boundaries and corresponding positions where the liquid needs to be cleaned, so that the mechanical arm can perform a wiping operation.
After model 5 processes the dirty liquid, trajectory planning must be performed to operate, such as the cleaning trajectory 1004 shown in fig. 10. Because liquid dirt can only be cleaned by wiping, the mechanical arm switches its end effector to a wiping structure, or its clamping jaw grasps a rag or sponge, according to the current cleaning task. The end of the mechanical arm then wipes around the region with a reciprocating spiral track so that the stain is wiped up as completely as possible.
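The spiral wiping track can be illustrated as an outward Archimedean spiral of waypoints around the liquid's centre. The sketch below is an assumption (hypothetical names and parameters; the patent does not specify the spiral's parameterisation):

```python
import math

def spiral_track(center, r_max, step=0.05, pts_per_turn=24):
    """Outward Archimedean spiral of wipe waypoints around the dirty
    liquid's centre: the radius grows by `step` metres per full turn
    until it exceeds r_max."""
    cx, cy = center
    waypoints = []
    theta = 0.0
    while True:
        r = step * theta / (2 * math.pi)
        if r > r_max:
            break
        waypoints.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
        theta += 2 * math.pi / pts_per_turn
    return waypoints
```

Starting at the centre and spiralling out keeps every waypoint inside the wipe radius, so the end effector sweeps the whole stain without leaving the region.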
When both types of dirt (liquid and other solids) are present in the scene, the relevant information obtained from model 4 and model 5 is fed into trajectory planning. As shown in fig. 9, since the dirty liquid has adhesion and fluidity, it must be avoided when dirty solids are moved. Because the relevant dirt positions are known, two different avoidance modes can be adopted: as shown in fig. 9, each dirty solid can be cleaned while bypassing the dirty liquid, or the dirty liquid can be dealt with after the dirty solids have been cleaned.
During the cleaning process, the identification and positioning module continuously inspects the state of the cleaning area. After one round of trajectory planning has been executed, the state-checking mechanism is started to identify whether dirty areas remain in the scene; if so, the step of inputting the current spatial position and object category of the target object into model 4 and/or model 5 continues to be executed.
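The detect-clean-recheck loop above can be sketched as a bounded closed loop (hypothetical names; `detect` stands in for the identification and positioning module, `clean` for one round of trajectory execution, and the round cap is an assumed safety stop):

```python
def clean_until_clear(detect, clean, max_rounds=5):
    """Re-inspect after each cleaning round; repeat while dirty regions
    remain, up to max_rounds. Returns the number of rounds executed."""
    for round_no in range(1, max_rounds + 1):
        dirty = detect()
        if not dirty:
            return round_no - 1  # scene already clean
        clean(dirty)
    return max_rounds
```

With a toy environment holding three dirt spots and a cleaner that removes two per round, the loop terminates after two rounds once the re-inspection finds nothing left.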
Finally, trajectory planning is performed for overall cleaning, as shown by the cleaning trajectory in fig. 12. Because the boundary information of the cleaning area has already been obtained by the identification and positioning module, even an irregularly shaped cleaning area can be planned and cleaned, with the end of the mechanical arm switched to wiping mode. The trajectory for the irregularly shaped cleaning region in fig. 12 is not limited to the trajectory shown; each region may also be wiped in place according to the boundary information.
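A simple whole-scene coverage plan is the back-and-forth (boustrophedon) pattern, sketched here for a rectangular boundary (an illustrative assumption; an irregular scene would clip each lane against the boundary obtained from the identification and positioning module):

```python
def coverage_track(xmin, ymin, xmax, ymax, lane_width):
    """Back-and-forth wipe lanes covering the whole cleaning region;
    the direction alternates each lane so the end effector moves
    continuously from lane to lane."""
    track = []
    y = ymin
    left_to_right = True
    while y <= ymax + 1e-9:
        row = [(xmin, y), (xmax, y)] if left_to_right else [(xmax, y), (xmin, y)]
        track.extend(row)
        left_to_right = not left_to_right
        y += lane_width
    return track
```

A 2 × 1 region with 0.5 m lanes produces three alternating lanes, six waypoints in all.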
In the above embodiments, the technical solution provided by the present application can serve as a cleaning assistant for robot operation in an open environment, with the capability of interacting with the environment and autonomous intelligence. The technical solution provided by the application is an automatic cleaning assistant, an indispensable link in realizing the intelligence of open life scenes; it can greatly free people's hands and complete the cleaning of a specified scene in closed loop without manual intervention.
In an embodiment according to the present application, as shown in fig. 14, a control apparatus 1400 of a robot is provided, the robot includes an actuator, and the control apparatus 1400 of the robot includes:
an obtaining module 1402, configured to obtain a current spatial position of a target object;
A planning module 1404, configured to plan an execution track according to the current spatial position;
and a control module 1406 for controlling the actuator to reach the target space position according to the execution track and performing cleaning treatment on the target object.
In the embodiment of the application, the robot can plan the execution track of the executing mechanism according to the current spatial position of the target object, so that the executing mechanism can perform the corresponding cleaning treatment on the target object after moving to the target spatial position. The planned execution track matches both the current position of the target object and the cleaning strategy required for the target object, which improves the accuracy of the robot's track planning. Because the spatial coordinates of the target object are taken into account when planning the execution track, the robot can accurately locate the target object and perform a cleaning treatment suited to it, improving the consistency and accuracy of the robot throughout the cleaning process.
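As a rough illustration of the three modules above (obtain the current spatial position, plan the execution track, control the actuator along it), the pipeline could be sketched as follows. The class and method names and the straight-line planner are illustrative assumptions, not the patent's actual implementation.

```python
# Illustrative sketch of the obtain -> plan -> control pipeline.
# ControlDevice, acquire_position, plan_track, and control are all
# hypothetical names; the straight-line track stands in for whatever
# planner a concrete embodiment would use.
from dataclasses import dataclass


@dataclass
class ControlDevice:
    initial_position: tuple  # (x, y, z) of the actuator's starting point

    def acquire_position(self, observation: dict) -> tuple:
        # Obtaining module: read the target object's current spatial position.
        return observation["target_position"]

    def plan_track(self, current_position: tuple, steps: int = 5) -> list:
        # Planning module: straight segment from the actuator's initial
        # position to the target position, discretised into waypoints.
        x0, y0, z0 = self.initial_position
        x1, y1, z1 = current_position
        return [
            (x0 + (x1 - x0) * t / steps,
             y0 + (y1 - y0) * t / steps,
             z0 + (z1 - z0) * t / steps)
            for t in range(steps + 1)
        ]

    def control(self, observation: dict) -> list:
        # Control module: drive the actuator along the planned track.
        return self.plan_track(self.acquire_position(observation))


device = ControlDevice(initial_position=(0.0, 0.0, 0.5))
track = device.control({"target_position": (1.0, 1.0, 0.0)})
```

Running this yields a six-waypoint straight track from the actuator's initial position (0, 0, 0.5) down to the object at (1, 1, 0).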
In some embodiments, optionally, the acquiring module is configured to acquire an environment image corresponding to the target object, where the environment image includes the target object;
the robot control device 1400 includes:
and the determining module is used for determining the current space position of the target object based on the environment image.
According to the embodiment of the application, performing segmentation detection on the environment image can improve the accuracy of locating the current spatial position of the target object and of identifying its object type, so that the robot can execute a cleaning strategy matched with the object type on the target object, improving the consistency of the cleaning process on the target object.
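A minimal sketch of locating the target object in the environment image: segment the image by thresholding and take the segment's centroid as the object's pixel position. The thresholding approach is an assumption (the patent does not specify the segmentation method), and mapping pixel coordinates to spatial coordinates via camera calibration is omitted.

```python
# Hypothetical segmentation-based localisation: threshold the image into
# a binary mask and return the mask centroid as the object's position.

def locate_target(image, threshold=0.5):
    """Return the (row, col) centroid of pixels above threshold, or None."""
    hits = [(r, c)
            for r, row in enumerate(image)
            for c, v in enumerate(row)
            if v > threshold]
    if not hits:
        return None
    n = len(hits)
    return (sum(r for r, _ in hits) / n, sum(c for _, c in hits) / n)


# A 4x4 "image" with a bright 2x2 patch in the lower-right corner.
image = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.9, 0.9],
    [0.0, 0.0, 0.9, 0.9],
]
position = locate_target(image)  # centroid of the bright patch
```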
In some embodiments, optionally, the determining module is configured to determine, according to the current spatial position, a target spatial position corresponding to the actuator;
the planning module 1404 is configured to plan to obtain an execution track based on the initial spatial position and the target spatial position where the execution mechanism is located.
In the embodiment of the application, the current spatial position of the target object can determine the starting point of the executing mechanism's cleaning treatment on the target object, namely the target spatial position; the start and end points of the execution track can then be determined from the initial spatial position of the executing mechanism and the target spatial position, ensuring high accuracy of the execution track planned from these two points.
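The start/end-point logic above could look roughly like this: the target spatial position is derived from the object's current position (here by a small vertical standoff, an assumed convention), and the execution track is planned between the actuator's initial spatial position and that target.

```python
# Hypothetical sketch: derive the track end point from the object's
# current position, then plan a straight track from the actuator's
# initial position. The 5 cm standoff is an illustrative assumption.

def target_from_current(current, standoff=0.05):
    # Target spatial position: directly above the object by `standoff`.
    x, y, z = current
    return (x, y, z + standoff)


def plan(initial, target, steps=4):
    # Execution track from the initial spatial position (track start)
    # to the target spatial position (track end).
    return [tuple(a + (b - a) * t / steps for a, b in zip(initial, target))
            for t in range(steps + 1)]


target = target_from_current((0.4, 0.2, 0.0))
track = plan((0.0, 0.0, 0.3), target)
```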
In some embodiments, optionally, an acquiring module 1402 is configured to acquire first spatial coordinates, where the first spatial coordinates are edge coordinates of the first region;
A control module 1406 for controlling the actuator to clean the target object based on the first spatial coordinates to enable movement of the target object into the first area.
In the embodiment of the application, after the current space position of the dirty target object and the first space coordinate of the first area are determined, the corresponding execution track is planned according to the current space position and the first space coordinate, so that the dirty target object is concentrated in the first area after the execution mechanism executes cleaning according to the execution track, and the cleaning effect and the cleaning efficiency of the dirty object are ensured.
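A sketch of moving a target object into the first area given the first spatial coordinates (the area's edge coordinates): pick the nearest edge point and sweep the object toward it. The nearest-point heuristic is an assumption for illustration.

```python
# Hypothetical sweep planning toward the first area's edge.
import math


def nearest_edge_point(position, edge_coords):
    # Choose the edge coordinate closest to the object (assumed heuristic).
    return min(edge_coords, key=lambda p: math.dist(position, p))


def sweep_track(position, edge_coords):
    # Track: approach the object, then push it to the region edge.
    return [position, nearest_edge_point(position, edge_coords)]


edge = [(1.0, 0.0), (1.0, 1.0), (1.0, 2.0)]  # first spatial coordinates
track = sweep_track((0.0, 0.9), edge)        # object at (0.0, 0.9)
```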
In some embodiments, optionally, the number of target objects is at least two;
an obtaining module 1402, configured to obtain at least two second spatial coordinates corresponding to at least two target objects;
a control module 1406, configured to clean the target object according to the at least two second spatial coordinates and the first spatial coordinates.
In the embodiment of the application, when the number of target objects is more than one, the second space coordinates corresponding to the plurality of target objects are obtained, and the execution mechanism is controlled to clean the plurality of target objects according to the second space coordinates and the first space coordinates, so that the plurality of target objects can all be collected into the first area.
In some possible embodiments, optionally, the determining module is configured to determine at least two first cleaning tracks according to at least two second spatial coordinates and the first spatial coordinates;
a control module 1406 for controlling the actuator to sequentially clean the at least two target objects along the at least two first cleaning tracks.
In the embodiment of the application, when the target object comprises a plurality of target objects, the robot can plan a plurality of corresponding first cleaning tracks for the target objects positioned at different positions respectively, so that the execution track comprises a plurality of first cleaning tracks, the robot can clean the target objects positioned at different positions according to different cleaning tracks respectively, the situation that the robot breaks up other target objects when moving one target object is avoided, and the cleaning effect is improved.
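The per-object strategy above could be sketched as one first cleaning track per target, each running from that target's second spatial coordinate to the first area, so that moving one object never disturbs the others. The track representation is an illustrative assumption.

```python
# Hypothetical per-object planning: one first cleaning track per target.

def first_cleaning_tracks(second_coords, first_coord):
    # Each track: from a target's second spatial coordinate to the
    # first area's edge coordinate.
    return [[coord, first_coord] for coord in second_coords]


targets = [(0.2, 0.5), (0.8, 0.3)]   # second spatial coordinates
region_edge = (1.0, 1.0)             # a first spatial coordinate
tracks = first_cleaning_tracks(targets, region_edge)
```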
In some embodiments, optionally, the determining module is configured to determine a second cleaning track according to at least two second spatial coordinates;
the determining module is used for determining a third cleaning track according to the third space coordinate and the first space coordinate, wherein the third space coordinate is the coordinate of a track end point of the second cleaning track;
a control module 1406 for controlling the actuator to perform cleaning treatment on the at least two target objects along the third cleaning track after sequentially passing through the at least two target objects along the second cleaning track.
In the embodiment of the application, when the target object comprises a plurality of target objects, the robot can plan the second cleaning track based on the current spatial positions of the target objects at different positions, and then plan the third cleaning track based on the third spatial coordinate of the end point of the second cleaning track and the first spatial coordinate. The robot thus collects all the target objects while moving along the second cleaning track and moves the collected target objects into the first area while moving along the third cleaning track, so that all the target objects are moved into the first area at one time, improving the cleaning efficiency.
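A sketch of this single-pass strategy: the second cleaning track visits every target (here in nearest-neighbour order, an illustrative assumption), and the third cleaning track runs from the second track's end point (the third spatial coordinate) to the first area.

```python
# Hypothetical collect-then-deposit planning.
import math


def second_track(start, second_coords):
    # Visit every target in nearest-neighbour order, collecting as we go.
    remaining, track, pos = list(second_coords), [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        track.append(nxt)
        pos = nxt
    return track


def third_track(second, first_coord):
    # From the second track's end point (third spatial coordinate)
    # to the first area.
    return [second[-1], first_coord]


second = second_track((0.0, 0.0), [(0.5, 0.5), (0.2, 0.1), (0.9, 0.2)])
third = third_track(second, (1.0, 1.0))
```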
In some embodiments, optionally, the target object comprises at least two soil types of soil;
an obtaining module 1402, configured to obtain at least two fourth space coordinates of at least two types of dirt, where the at least two fourth space coordinates correspond to the at least two types of dirt one to one;
a planning module 1404, configured to plan to obtain at least two fourth cleaning tracks based on at least two fourth spatial coordinates;
the control module 1406 is configured to control the actuator to clean at least two types of dirt along at least two fourth cleaning tracks, respectively.
In the embodiment of the application, when the target object contains dirt of different dirt types, the current spatial position comprises the fourth spatial coordinates of the dirt of each dirt type. By planning a fourth cleaning track for each dirt type from its fourth spatial coordinates, the robot reduces the mixing of dirt of different types when moving them along their tracks, improving the cleaning effect.

In some embodiments, optionally, the at least two soil types of soil include solid soil and liquid soil, and the at least two fourth spatial coordinates include: the solid dirt coordinates and the liquid dirt coordinates, and the at least two fourth cleaning tracks comprise a solid cleaning track and a liquid cleaning track;
the control device 1400 of the robot further includes:
the generation module is used for generating a solid cleaning track based on the solid dirt coordinates; and generating a liquid cleaning track based on the liquid dirt coordinates, wherein the liquid cleaning track and the solid cleaning track are isolated from each other.
In the embodiment of the application, when the target object comprises both solid dirt and liquid dirt, the execution track planned by the robot cleans the solid dirt and the liquid dirt separately, preventing the liquid dirt from being spread when solid dirt located in the liquid dirt is moved, and improving the cleaning effect and efficiency of the robot on the target object.
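A minimal sketch of keeping the tracks isolated: group the fourth spatial coordinates by soil type and plan one track per type, so the solid cleaning track and the liquid cleaning track never share points. The grouping representation is an assumption.

```python
# Hypothetical per-type track grouping: dirt is a list of
# (soil_type, (x, y)) observations from the identification module.

def tracks_by_soil_type(dirt):
    tracks = {}
    for soil_type, coord in dirt:
        tracks.setdefault(soil_type, []).append(coord)
    return tracks


dirt = [("solid", (0.1, 0.1)), ("liquid", (0.8, 0.8)), ("solid", (0.3, 0.2))]
tracks = tracks_by_soil_type(dirt)
# tracks["solid"] and tracks["liquid"] are isolated from each other.
```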
In some embodiments, optionally, the target object comprises liquid soil; the determining module is used for determining a seventh space coordinate according to the current space position, the seventh space coordinate is in a second area, and the second area is a liquid dirt coverage area;
a control module 1406 for performing a cleaning process on the liquid soil according to the seventh spatial coordinate.
In the embodiment of the application, when the target object is liquid dirt, the corresponding seventh space coordinate is determined based on the current space position of the liquid dirt, so that the second area surrounded by the seventh space coordinate covers the whole liquid dirt, and the corresponding execution track is planned based on the seventh space coordinate, thereby ensuring the cleaning effect of the robot for cleaning the liquid dirt according to the execution track.
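One plausible way to derive seventh spatial coordinates whose second region covers the whole liquid spill is an axis-aligned bounding box with a margin around the observed liquid points; the box representation and margin value are assumptions.

```python
# Hypothetical coverage region for liquid dirt: a padded bounding box
# around the observed spill points.

def coverage_box(points, margin=0.05):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)


spill = [(0.4, 0.4), (0.5, 0.45), (0.45, 0.55)]
box = coverage_box(spill)  # (x_min, y_min, x_max, y_max) of the second region
```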
In some embodiments, optionally, the target object comprises an item to be moved;
the acquiring module 1402 is configured to acquire a first pose of the object to be moved and a fifth spatial coordinate, where the fifth spatial coordinate is a coordinate of a target area corresponding to the object to be moved;

the determining module is used for determining, according to the first pose, a second pose in which the actuating mechanism grasps the object to be moved;

a control module 1406 for controlling the actuator to grasp the item to be moved in the second pose; and moving the object to be moved into the target area based on the fifth spatial coordinates.

In the embodiment of the application, the target area is the area into which the object to be moved needs to be moved, and the fifth spatial coordinate is the coordinate of the target area. After the executing mechanism grasps the object to be moved in the second pose, it moves the object into the target area according to the fifth spatial coordinate, completing the cleaning treatment of the object to be moved.
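The grasp step could be sketched as deriving the second pose from the first pose by keeping the object's planar position and yaw and approaching from directly above; the (x, y, z, yaw) pose encoding and the 10 cm approach height are illustrative assumptions.

```python
# Hypothetical grasp-and-move sketch for the item to be moved.

def grasp_pose(first_pose, approach_height=0.10):
    # Second pose: hover above the object, gripper yaw aligned with it.
    x, y, z, yaw = first_pose
    return (x, y, z + approach_height, yaw)


def move_to_target(second_pose, fifth_coord):
    # After grasping in the second pose, carry the item to the target
    # area located at the fifth spatial coordinate.
    return [second_pose, fifth_coord]


pose = grasp_pose((0.3, 0.4, 0.0, 1.57))        # object lying at yaw 1.57 rad
plan = move_to_target(pose, (1.0, 1.0, 0.0, 0.0))
```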
In some embodiments, optionally, the control module 1406 is configured to control the actuator to grasp a cleaning member that matches a type of soil corresponding to the target object.
In the embodiment of the application, when the executing mechanism cleans the solid dirt and/or the liquid dirt, the cleaning piece grabbed by the executing mechanism cleans the solid dirt and/or the liquid dirt according to the executing track, so that the cleaning effect on the solid dirt and/or the liquid dirt is ensured.
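The tool-selection step above reduces to a lookup from the detected soil type to the cleaning member the actuator should grasp; the tool names and the mapping itself are illustrative assumptions.

```python
# Hypothetical soil-type -> cleaning-member mapping.
CLEANING_MEMBERS = {"solid": "brush", "liquid": "absorbent wipe"}


def select_member(soil_type):
    # The actuator grasps the member matched to the dirt type.
    return CLEANING_MEMBERS[soil_type]
```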
In some embodiments, optionally, the acquiring module 1402 is configured to acquire sixth spatial coordinates, where the sixth spatial coordinates are spatial coordinates of the cleaning scene, and the target object is located in the cleaning scene;
a control module 1406 for performing a cleaning process on the cleaning scene based on the sixth spatial coordinates.
In the embodiment of the application, the sixth spatial coordinates are the coordinates corresponding to the cleaning scene in which the target object is located. A cleaning track covering the whole cleaning scene can be planned based on the sixth spatial coordinates, and the executing mechanism can clean the scene completely by following this track, avoiding missed areas in the cleaning scene.
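A sketch of planning a track that covers the whole cleaning scene from its sixth spatial coordinates, here assumed to be a rectangular bound: a back-and-forth (boustrophedon) sweep leaves no lane uncleaned. The rectangle representation and lane width are assumptions.

```python
# Hypothetical full-scene coverage planning over a rectangular bound.

def coverage_track(x_min, y_min, x_max, y_max, lane=0.5):
    # Sweep left-to-right, step up one lane, sweep right-to-left, repeat.
    track, y, left_to_right = [], y_min, True
    while y <= y_max:
        row = [(x_min, y), (x_max, y)]
        track.extend(row if left_to_right else row[::-1])
        left_to_right = not left_to_right
        y += lane
    return track


track = coverage_track(0.0, 0.0, 2.0, 1.0)  # sixth spatial coordinates
```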
In one embodiment according to the present application, as shown in fig. 15, there is provided a control apparatus 1500 of a robot, including: a processor 1502 and a memory 1504, the memory 1504 having stored therein programs or instructions. The processor 1502 executes the programs or instructions stored in the memory 1504 to implement the steps of the control method of the robot in any of the above embodiments, and therefore achieves all the advantages of that control method; these will not be described in detail herein.
In an embodiment according to the present application, a computer program product is provided, which when executed by a processor, implements the steps of the control method of the robot in any of the above embodiments, so that all the beneficial technical effects of the control method of the robot in any of the above embodiments are provided, and will not be described in detail herein.
In an embodiment according to the present application, a readable storage medium is provided, on which a program or instructions are stored which, when executed by a processor, implement the steps of the control method of a robot as in any of the embodiments described above. Therefore, the control method of the robot in any of the above embodiments has all the beneficial technical effects, and will not be described in detail herein.
In one embodiment according to the present application, as shown in fig. 16, a robot 1600 is proposed, comprising the control device 1500 of the robot in any of the above embodiments and/or the readable storage medium 1602 in any of the above embodiments. The robot 1600 therefore has all the advantages of the control device 1500 and/or the readable storage medium 1602, which will not be described in detail herein.
It is to be understood that in the claims, specification and drawings of the present application, the term "plurality" means two or more, and unless otherwise explicitly defined, the orientation or positional relationship indicated by terms such as "upper" and "lower" is based on the orientation or positional relationship shown in the drawings, is used only for convenience of describing the present application, and does not indicate or imply that the apparatus or element in question must have the particular orientation described or be constructed and operated in that orientation; these descriptions should therefore not be construed as limiting the present application. The terms "connected," "mounted," "secured," and the like are to be construed broadly and may mean, for example, a fixed connection, a removable connection, or an integral connection between objects; the objects may be directly connected to each other or indirectly connected through an intermediate medium. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
In the claims, specification, and drawings of the present application, the descriptions of the terms "one embodiment," "some embodiments," "particular embodiments," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in the embodiment or example of the present application. In the claims, specification and drawings of the present application, the schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above is only a preferred embodiment of the present application, and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (16)

1. A control method of a robot, the robot including an actuator, the control method of the robot comprising:
acquiring the current space position of a target object;
Planning an execution track according to the current space position;
and controlling the executing mechanism to reach the target space position according to the executing track, and cleaning the target object.
2. The method of claim 1, wherein the obtaining the current spatial position of the target object includes:
acquiring an environment image corresponding to the target object, wherein the environment image comprises the target object;
the current spatial position of the target object is determined based on the environmental image.
3. The method according to claim 1, wherein planning an execution trajectory according to the current spatial position includes:
determining the target space position corresponding to the executing mechanism according to the current space position;
and planning to obtain the execution track based on the initial space position of the execution mechanism and the target space position.
4. A control method of a robot according to any one of claims 1 to 3, wherein the cleaning process of the target object includes:
acquiring a first space coordinate, wherein the first space coordinate is an edge coordinate of a first area;
and controlling the executing mechanism to clean the target object according to the first space coordinates so that the target object can move into the first area.
5. The method of controlling a robot according to claim 4, wherein the number of the target objects is at least two;
the controlling the executing mechanism to clean the target object according to the first space coordinate includes:
acquiring at least two second space coordinates corresponding to at least two target objects;
and cleaning the target object according to the at least two second space coordinates and the first space coordinates.
6. The method according to claim 5, wherein the cleaning the target object based on the at least two second spatial coordinates and the first spatial coordinates includes:
determining at least two first cleaning tracks according to the at least two second space coordinates and the first space coordinates;
and controlling the executing mechanism to clean at least two target objects along the at least two first cleaning tracks in sequence.
7. The method according to claim 5, wherein the cleaning the target object based on the at least two second spatial coordinates and the first spatial coordinates includes:
determining a second cleaning track according to the at least two second space coordinates;
determining a third cleaning track according to a third space coordinate and the first space coordinate, wherein the third space coordinate is the coordinate of a track end point of the second cleaning track;
and controlling the executing mechanism to clean at least two target objects along the third cleaning track after the executing mechanism sequentially passes through at least two target objects along the second cleaning track.
8. A control method of a robot according to any one of claims 1 to 3, characterized in that the target object comprises at least two soil types of soil;
the cleaning treatment of the target object comprises the following steps:
acquiring at least two fourth space coordinates of the at least two dirt types, wherein the at least two fourth space coordinates correspond to the at least two dirt types one by one;
planning to obtain at least two fourth cleaning tracks based on the at least two fourth space coordinates;
and controlling the executing mechanism to clean the dirt of the at least two dirt types along the at least two fourth cleaning tracks.
9. The method of claim 8, wherein the at least two types of contamination include solid contamination and liquid contamination, and the at least two fourth spatial coordinates include: the solid dirt coordinates and the liquid dirt coordinates, and the at least two fourth cleaning tracks comprise a solid cleaning track and a liquid cleaning track;
the planning to obtain at least two fourth cleaning tracks based on at least two fourth space coordinates includes:
generating the solid cleaning trajectory based on the solid dirt coordinates; and
and generating a liquid cleaning track based on the liquid dirt coordinates, wherein the liquid cleaning track and the solid cleaning track are isolated from each other.
10. A control method of a robot according to any one of claims 1 to 3, wherein the target object includes an article to be moved, and the cleaning process of the target object includes:
acquiring a first pose of the object to be moved and a fifth spatial coordinate, wherein the fifth spatial coordinate is a coordinate of a target area corresponding to the object to be moved; determining, according to the first pose, a second pose in which the actuating mechanism grasps the object to be moved;
controlling the actuating mechanism to grasp the object to be moved in the second pose; and
and moving the object to be moved into the target area based on the fifth space coordinate.
11. A control method of a robot according to any one of claims 1 to 3, characterized in that before the cleaning process is performed on the target object, further comprising:
and controlling the executing mechanism to grasp a cleaning piece, wherein the cleaning piece is matched with the dirt type corresponding to the target object.
12. A control method of a robot according to any one of claims 1 to 3, characterized in that, after the cleaning process is performed on the target object, the method further comprises:
acquiring a sixth space coordinate, wherein the sixth space coordinate is a space coordinate of a cleaning scene, and the target object is positioned in the cleaning scene;
and cleaning the cleaning scene based on the sixth space coordinate.
13. A control device of a robot, the robot including an actuator, the control device of the robot comprising:
the acquisition module is used for acquiring the current space position of the target object;
the planning module is used for planning an execution track according to the current space position;
and the control module is used for controlling the executing mechanism to reach the target space position according to the executing track and cleaning the target object.
14. A control device for a robot, comprising:
a memory having stored thereon programs or instructions;
a processor for implementing the steps of the control method of the robot according to any one of claims 1 to 12 when executing the program or instructions.
15. A readable storage medium having stored thereon a program or instructions, which when executed by a processor, implement the steps of the method of controlling a robot according to any of claims 1 to 12.
16. A robot, comprising:
the control device of a robot according to claim 13 or 14; and/or
The readable storage medium of claim 15.
CN202310890811.2A 2023-07-19 2023-07-19 Robot, control method and device thereof, and readable storage medium Pending CN116898356A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310890811.2A CN116898356A (en) 2023-07-19 2023-07-19 Robot, control method and device thereof, and readable storage medium

Publications (1)

Publication Number Publication Date
CN116898356A (en) 2023-10-20

Family

ID=88354547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310890811.2A Pending CN116898356A (en) 2023-07-19 2023-07-19 Robot, control method and device thereof, and readable storage medium

Country Status (1)

Country Link
CN (1) CN116898356A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination