CN114504273A - Robot control method and device

Info

Publication number
CN114504273A
CN114504273A (application CN202011282012.XA)
Authority
CN
China
Prior art keywords
robot
target object
cleaning
working area
image
Prior art date
Legal status
Pending
Application number
CN202011282012.XA
Other languages
Chinese (zh)
Inventor
付雷
于坤
张亮
刘达
顾陈洁
Current Assignee
Ecovacs Robotics Suzhou Co Ltd
Original Assignee
Ecovacs Robotics Suzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Ecovacs Robotics Suzhou Co Ltd filed Critical Ecovacs Robotics Suzhou Co Ltd
Priority to CN202011282012.XA
Publication of CN114504273A


Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011: Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A47L11/4061: Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The embodiments of this specification provide a robot control method and a robot control device. The robot control method includes: controlling a robot to perform a cleaning task on a working area according to a pre-established partition map; and, when it is determined that the robot and a target object in the partition map satisfy a preset distance condition, controlling the robot to clean the working area according to the distance relationship between the robot and the target object. When the robot is controlled to clean the working area based on the partition map pre-established for that area, the method can control the robot to clean selectively around the target object based on the preset distance relationship between the robot and the target object. This realizes a more considerate cleaning method: by having the robot keep clear of the target object, the object is protected from collisions and the user experience is improved.

Description

Robot control method and device
Technical Field
The embodiments of this specification relate to the field of computer technology, and in particular to a robot control method. One or more embodiments of this specification also relate to a robot control apparatus, a robot, and a computer-readable storage medium.
Background
Existing sweeping robots offer only a single cleaning mode: in automatic mode, the robot either cleans the whole working area (such as a home) or cleans fixed, user-selected partitions within it. For example, when someone is resting in a room with the door closed, the sweeping robot will still sweep outside the door, bumping into it and disturbing the person's rest, so the personalized cleaning needs of a household are hard to meet. In addition, furniture such as sofas, beds, and wardrobes is generally made of painted wood; if a sweeping robot scrapes and bumps it over a long period, a certain amount of damage results and the user experience is poor.
Therefore, there is an urgent need for a robot control method that can meet the personalized cleaning needs of a household and improve the user experience.
Disclosure of Invention
In view of this, the present specification provides a robot control method. One or more embodiments of the present disclosure are also directed to a robot control apparatus, a robot, and a computer-readable storage medium to solve the technical problems of the related art.
According to a first aspect of embodiments herein, there is provided a robot control method including:
controlling the robot to execute a cleaning task on a working area according to a pre-established partition map;
under the condition that the robot and the target object in the partition map are determined to meet the preset distance condition, controlling the robot to clean the working area through a first cleaning strategy;
wherein the first cleaning strategy comprises controlling the robot to perform a cleaning task on the working area according to a distance relationship between the robot and the target object.
Optionally, before controlling the robot to perform a cleaning task on a working area according to a pre-established partition map, the method further includes:
under the condition of receiving an initial starting instruction, starting the robot, and controlling the robot to clean the working area through a second cleaning strategy;
collecting the environmental information of the working area through a visual collection device of the robot;
performing area division on the working area based on the environmental information of the working area to construct a partition map of the working area;
wherein the second cleaning strategy comprises controlling the robot to perform a cleaning task on the work area through a cleaning route preset in the robot.
Optionally, after constructing the partition map of the working area, the method further includes:
and identifying a target object of each partition in the partition map, and labeling the target object.
Optionally, the identifying the target object of each partition in the partition map includes:
controlling the robot to acquire images of the objects in each subarea according to the subarea map;
determining an object tag of the object from the acquired object image;
and matching the object tag with a preset object tag, and determining an object corresponding to the successfully matched object tag as a target object.
Optionally, the determining an object label of the object according to the acquired object image includes:
inputting a collected object image into an image recognition model, wherein the image recognition model outputs an object label corresponding to the object image; or
sending the acquired object image to a terminal in communication connection with the robot, and receiving an object label set for the object by the user at the terminal according to the object image.
Optionally, the matching the object tag with a preset object tag, and determining an object corresponding to the successfully matched object tag as a target object includes:
matching the object tag with a first preset object tag, and determining the object corresponding to the successfully matched object tag as a first target object; and
matching the object tag with a second preset object tag, and determining the object corresponding to the successfully matched object tag as a second target object.
Optionally, when it is determined that the robot and the target object in the partition map satisfy a preset distance condition, controlling the robot to clean the working area through a first cleaning strategy includes:
under the condition that the first target object is acquired through the vision acquisition equipment of the robot, determining the state of the first target object through a panoramic image of a partition where the first target object is located, acquired through the vision acquisition equipment of the robot;
determining a first cleaning distance between the robot and the second target object through a distance sensor of the robot under the condition that the state of the first target object is determined to meet a preset cleaning condition;
and controlling the robot to execute a cleaning task on the working area according to the distance relation between the robot and the second target object under the condition that the first cleaning distance meets a first preset distance condition.
Optionally, after determining the state of the first target object, the method further includes:
determining, by a distance sensor of the robot, a second cleaning distance of the robot from the first target object, in a case where it is determined that the state of the first target object does not satisfy a preset cleaning condition;
and controlling the robot to execute a cleaning task on the working area according to the distance relation between the robot and the first target object under the condition that the second cleaning distance meets a second preset distance condition.
Optionally, the determining, by the panoramic image of the partition where the first target object is located acquired by the vision acquisition device of the robot, a state of the first target object includes:
acquiring a panoramic image of a partition where the first target object is located through visual acquisition equipment of the robot, and extracting a top image of a current working area of the partition where the first target object is located from the panoramic image;
determining an initial working area top image of a partition in which the first target object is located based on the partition map;
and comparing the top image of the current working area with the top image of the original working area to determine the state of the first target object.
Optionally, the determining, by the panoramic image of the partition where the first target object is located acquired by the vision acquisition device of the robot, a state of the first target object includes:
acquiring a panoramic image of a partition where the first target object is located through visual acquisition equipment of the robot;
inputting the panoramic image into the image recognition model, wherein the image recognition model outputs an image tag corresponding to the panoramic image;
determining the positions of the first target object and the bearing object in the panoramic image if the image tag matches a bearing object of the first target object; and
determining the state of the first target object based on the positional relationship between the first target object and the bearing object in the panoramic image.
According to a second aspect of embodiments herein, there is provided a robot control device including:
a control module configured to control the robot to perform a cleaning task on a work area according to a pre-established zone map;
a cleaning module configured to control the robot to clean the working area through a first cleaning strategy if it is determined that the robot and a target object in the partition map satisfy a preset distance condition;
wherein the first cleaning strategy comprises controlling the robot to perform a cleaning task on the working area according to a distance relationship between the robot and the target object.
Optionally, the apparatus further includes:
a starting module configured to start the robot and control the robot to clean the working area through a second cleaning strategy upon receiving an initial starting instruction;
an information acquisition module configured to acquire environmental information of the work area through a vision acquisition device of the robot;
a map building module configured to divide the working area into regions based on the environmental information of the working area, so as to construct a partition map of the working area;
wherein the second cleaning strategy comprises controlling the robot to perform a cleaning task on the work area through a cleaning route preset in the robot.
Optionally, the apparatus further includes:
and the identification module is configured to identify the target object of each partition in the partition map and label the target object.
Optionally, the identification module is further configured to:
controlling the robot to acquire images of the objects in each subarea according to the subarea map;
determining an object tag of the object from the acquired object image;
and matching the object tag with a preset object tag, and determining an object corresponding to the successfully matched object tag as a target object.
Optionally, the identification module is further configured to:
inputting a collected object image into an image recognition model, wherein the image recognition model outputs an object label corresponding to the object image; or
sending the acquired object image to a terminal in communication connection with the robot, and receiving an object label set for the object by the user at the terminal according to the object image.
Optionally, the identification module is further configured to:
matching the object tag with a first preset object tag, and determining the object corresponding to the successfully matched object tag as a first target object; and
matching the object tag with a second preset object tag, and determining the object corresponding to the successfully matched object tag as a second target object.
Optionally, the cleaning module is further configured to:
under the condition that the first target object is acquired through the vision acquisition equipment of the robot, determining the state of the first target object through a panoramic image of a partition where the first target object is located, acquired through the vision acquisition equipment of the robot;
determining a first cleaning distance between the robot and the second target object through a distance sensor of the robot under the condition that the state of the first target object is determined to meet a preset cleaning condition;
and controlling the robot to execute a cleaning task on the working area according to the distance relation between the robot and the second target object under the condition that the first cleaning distance meets a first preset distance condition.
Optionally, the apparatus further includes:
a distance determination module configured to determine, by a distance sensor of the robot, a second cleaning distance of the robot from the first target object, in a case where it is determined that the state of the first target object does not satisfy a preset cleaning condition; and
a task execution module configured to control the robot to perform a cleaning task on the working area according to the distance relationship between the robot and the first target object when the second cleaning distance satisfies the second preset distance condition.
Optionally, the cleaning module is further configured to:
acquiring a panoramic image of a partition where the first target object is located through visual acquisition equipment of the robot, and extracting a top image of a current working area of the partition where the first target object is located from the panoramic image;
determining an initial working area top image of a partition in which the first target object is located based on the partition map;
and comparing the top image of the current working area with the top image of the original working area to determine the state of the first target object.
Optionally, the cleaning module is further configured to:
acquiring a panoramic image of a partition where the first target object is located through visual acquisition equipment of the robot;
inputting the panoramic image into the image recognition model, wherein the image recognition model outputs an image tag corresponding to the panoramic image;
determining the positions of the first target object and the bearing object in the panoramic image if the image tag matches a bearing object of the first target object; and
determining the state of the first target object based on the positional relationship between the first target object and the bearing object in the panoramic image.
According to a third aspect of embodiments herein, there is provided a robot comprising:
the machine body is provided with a memory and a processor;
the memory is for storing computer executable instructions and the processor is for executing the computer executable instructions, which when executed by the processor implement the steps of the robot control method.
According to a fourth aspect of embodiments herein, there is provided a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the robot control method.
One embodiment of this specification realizes a robot control method and a robot control device. The robot control method includes: controlling the robot to perform a cleaning task on a working area according to a pre-established partition map; and, when it is determined that the robot and a target object in the partition map satisfy a preset distance condition, controlling the robot to clean the working area according to the distance relationship between the robot and the target object. When the robot cleans a working area based on the partition map pre-established for that area, the robot control method can control the robot to clean selectively around the target object based on the preset distance relationship between the robot and the target object, realizing a more considerate cleaning method: by keeping clear of the target object, the robot protects it from collisions and improves the user experience.
Drawings
Fig. 1 is an exemplary diagram of a specific application scenario of a robot control method provided in an embodiment of the present specification;
FIG. 2 is a flow chart of a method for controlling a robot provided in one embodiment of the present disclosure;
fig. 3 is a schematic cleaning diagram of a robot in a state that a door is closed in a robot control method according to an embodiment of the present disclosure;
Fig. 4 shows schematic diagrams of two cleaning modes of a robot in a state where the door is closed, in a robot control method according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a top image of a current working area of a partition in which a first target object is located in a robot control method according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an initial working area top image of a partition in which a first target object is located in a robot control method according to an embodiment of the present disclosure;
FIG. 7 is a flowchart illustrating a process of a robot control method according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a robot control device according to an embodiment of the present disclosure.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present description. This description may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein, as those skilled in the art will be able to make and use the present disclosure without departing from the spirit and scope of the present disclosure.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, etc. may be used in one or more embodiments herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present specification, a first may be termed a second and, similarly, a second may be termed a first. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
First, the noun terms to which one or more embodiments of the present specification relate are explained.
Furniture: household furnishings, including beds, cabinets, tables, chairs, tea tables, and the like.
House-type diagram (floor plan): the layout of a dwelling, that is, a diagram describing the function, relative position, and size of each independent space; in the embodiments below it is the floor plan of a home.
The present specification relates to a robot control method. One or more embodiments of the present specification relate to a robot control apparatus, a robot, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
Referring to fig. 1, fig. 1 is a diagram illustrating an example of a specific application scenario of a robot control method provided in an embodiment of the present specification.
Taking a sweeping robot as the robot and a home as the working area, the application scenario in fig. 1 includes a sweeping robot 102 and a home 104. Specifically, when the control end of the sweeping robot receives an initial starting instruction, the robot is started and controlled to walk along the walls to clean the home for the first time. Meanwhile, its vision acquisition device collects various environment information of the home (such as the area of each room, the furniture placed in it, the positions of the furniture, and the position of each room's door). A map of the home is then constructed from the collected environment information, and a partitioned house-type diagram is built according to the door information of each room. According to the house-type diagram, the target furniture in each partition is identified and labeled, such as the sofa in the living room, the table in the dining room, and the wardrobe and bed in a bedroom. After the target furniture has been identified and labeled, when the sweeping robot is again controlled to clean the home globally based on the house-type diagram and reaches the vicinity of a labeled piece of target furniture, it automatically keeps a preset distance from it while cleaning, where the preset distance is 5 cm, 8 cm, 10 cm, or the like. When the sweeping robot reaches the vicinity of a partition's door, it judges from the door image collected by the vision acquisition device whether the door is open or closed; if the door is open, the robot enters the corresponding room to clean, and if the door is closed, it cleans only in the area more than 10 cm, 20 cm, or the like away from the door, so as to avoid bumping into it.
The robot control method provided by the embodiments of this specification is applied to the scenario of a sweeping robot cleaning a home. By controlling the sweeping robot to keep clear of the furniture in the rooms, the furniture is protected from collisions; and when the sweeping robot reaches the vicinity of a room door, it automatically judges the door's open or closed state and either cleans the room or keeps clear of the door, which further prevents the robot from bumping the door and generating noise while cleaning outside it, realizing a more considerate cleaning method.
Referring to fig. 2, fig. 2 is a flowchart illustrating a robot control method according to an embodiment of the present disclosure, including the following steps.
Step 202: controlling the robot to execute a cleaning task on a working area according to a pre-established partition map.
In specific implementations, the robot control method provided in the embodiments of this specification can be applied to any scenario in which a robot cleans a working area, for example sweeping, disinfecting, or washing the floor of a working area; this specification does not limit the scenario. The type of robot then depends on the application scenario: in a sweeping scenario the robot may be a sweeping robot, in a disinfection scenario a disinfection robot, and in a floor-washing scenario a floor-washing robot.
The work area includes but is not limited to a cleanable area such as a room, a factory, a school, or an office building.
For convenience of understanding, the following embodiments are described in detail in the context of the robot control method applied to a scene in which a room is cleaned by a robot, where the robot may be understood as a sweeping robot, and an execution subject of the robot control method may be understood as a control component of the robot, such as a control main board.
Specifically, before controlling the robot to execute a cleaning task on a working area according to a pre-established partition map, the method further includes:
under the condition of receiving an initial starting instruction, starting the robot, and controlling the robot to clean the working area through a second cleaning strategy;
collecting the environmental information of the working area through a visual collection device of the robot;
performing area division on the working area based on the environment information of the working area to construct a partition map of the working area;
wherein the second cleaning strategy comprises controlling the robot to perform a cleaning task on the work area through a cleaning route preset in the robot.
The initial starting instruction may be issued by touching a start button provided on the robot, by touching a start control on the robot's display panel, by a start voice command recognizable to the robot, and so on. The preset cleaning route may be any preset route, such as a bow-shaped (boustrophedon) cleaning route or an S-shaped cleaning route.
Specifically, upon receiving the initial starting instruction, the robot is started and its vision acquisition device, which includes but is not limited to a camera, is turned on. The robot is then controlled to clean the working area for the first time along the cleaning route preset in it, while the vision acquisition device collects the environment information of the working area, including but not limited to the area of the working area, the objects placed in it, and the positions of those objects. Taking a home as the working area, the environment information may include the area of each room, the furniture placed in each room, the positions of the furniture, the position of each room's door, and so on.
Specifically, after the environment information of the working area has been collected, the working area is divided into several smaller areas according to it; for example, a home is divided into a living room area, a dining room area, a bedroom area, a kitchen area, a bathroom area, a balcony area, and so on according to the collected environment information.
A partition map of the working area is then constructed from the divided working area; that is, the working area is represented by the partition map. Where the working area is a home, the partition map can be understood as a house-type diagram, from which it can be determined which area is the living room, which is the dining room, which is a bedroom, and so on.
Following the example above: while the robot walks along the walls of the home to be cleaned, whenever it encounters a door it enters the room behind it, walks along that room's walls for one lap while its vision acquisition device collects the room's area, furniture, and other information, then exits through the door and continues cleaning along the walls. When it meets another door, it enters the corresponding room in the same way and collects that room's environment information, until it has returned along the walls to its starting point. Finally, the home is divided into areas based on all the environment information collected for it, so as to construct its partition map. Specifically, the area division can be realized through the doors of the rooms: each room with a door is taken as one area of the home, which yields the partition map of the home.
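As a purely illustrative aside, the partition map described above can be pictured with a minimal data model such as the following Python sketch; all class, field, and value names are assumptions made for illustration, not structures defined by this specification.

```python
# Minimal sketch of a partition map: one partition per door-enclosed room,
# holding the environment information collected on the first cleaning run.
# All names and values here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Partition:
    name: str                          # e.g. "bedroom"
    area_m2: float                     # room area from the mapping run
    door_position: tuple               # (x, y) of the room's door
    objects: dict = field(default_factory=dict)  # object label -> (x, y)

@dataclass
class PartitionMap:
    partitions: list

floor_plan = PartitionMap([
    Partition("living room", 25.0, (0.0, 2.0), {"sofa": (1.2, 3.4)}),
    Partition("bedroom", 14.0, (5.0, 2.0), {"bed": (6.0, 3.0),
                                            "wardrobe": (7.5, 0.5)}),
])
```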
In specific implementations, after the partition map of the working area is constructed, the target object in each partition of the map is identified and labeled. Then, the next time the robot is controlled to clean the working area globally and it detects a labeled target object, the robot keeps clear of that object based on the distance relationship between the robot and the target object in each partition. A specific implementation is as follows:
after constructing the partition map of the working area, the method further includes:
identifying the target object of each partition in the partition map, and labeling the target object.
Specifically, the identification of the target object in each partition can be realized as follows:
the identifying the target object of each partition in the partition map comprises:
controlling the robot to acquire images of the objects in each subarea according to the subarea map;
determining an object tag of the object from the acquired object image;
and matching the object tag with a preset object tag, and determining an object corresponding to the successfully matched object tag as a target object.
The object may be any object in a partition; for example, where the working area is a home, the objects may include the doors, furniture, and the like in each partition, and the object tags may be the tags of the doors and furniture, which make clear whether an object is a door or a piece of furniture, and if furniture, which kind, such as a sofa, a bed, or a wardrobe.
Specifically, after the working area has been divided based on its environment information to construct the partition map, the robot is controlled, according to the map, to capture images of all the objects in each partition; an object tag is determined for each object from its image; finally the object tags are matched against the preset object tags, and the objects whose tags match successfully are determined to be target objects. For example, the robot captures images of the objects in a certain partition of a home and determines from them the object tags door, bed, wardrobe, bedside cabinet, curtain, and ceiling lamp. If the preset object tags are door, bed, wardrobe, and bedside cabinet, then matching succeeds for door, bed, wardrobe, and bedside cabinet, and those objects are the target objects.
In the embodiments of this specification, images of the objects in each partition of the working area are captured, an object tag is determined for each object from its image, and the tags are then matched against the tags of the preset objects that must not be collided with (i.e., the preset object tags); the objects whose tags match successfully are determined to be target objects. Subsequently, when the robot is controlled to clean the working area, it can accurately keep clear of the target objects and thus avoid damaging them.
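The tag-matching step just described amounts to a set-membership test. Below is a minimal sketch using the labels of the example above; the preset tag list is a hypothetical illustration, not a list defined by this specification.

```python
# Labels recognized from the captured images are matched against a preset
# "must not collide" tag list; successful matches become target objects.
PRESET_TAGS = {"door", "bed", "wardrobe", "bedside cabinet"}

def select_target_objects(recognized_labels):
    return [label for label in recognized_labels if label in PRESET_TAGS]

recognized = ["door", "bed", "wardrobe", "bedside cabinet",
              "curtain", "ceiling lamp"]
print(select_target_objects(recognized))
# -> ['door', 'bed', 'wardrobe', 'bedside cabinet']
```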
In specific implementations, when the object tag is determined from the captured image of an object, the tag can be obtained quickly and accurately by means of a machine-learning model, or it can be set by the user through interaction, which improves the user's personalized experience. A specific implementation is as follows:
the determining an object label of the object from the acquired object image comprises:
inputting a collected object image into an image recognition model, wherein the image recognition model outputs an object label corresponding to the object image; or
sending the acquired object image to a terminal in communication connection with the robot, and receiving an object label set for the object by the user at the terminal according to the object image.
The image recognition model is trained in advance. For training, a number of existing images can be obtained as sample images, with the name of the real object in each image serving as its sample label; each sample image and its label form a sample pair, and the image recognition model is obtained by training on these sample pairs.
After the image recognition model is obtained through training, the acquired image of the object is input into the image recognition model, and then the object label of the object corresponding to the image of the object can be rapidly and accurately obtained.
In the other case, to realize user interaction and improve the user's personalized experience, the captured object image can be sent to a terminal in communication connection with the robot, such as a mobile phone or a tablet computer, and the object tag the user sets for the object at the terminal according to the image is then received; this embodies user interaction and improves the user experience.
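The two labeling paths just described can be sketched as one function with two branches. In this illustrative sketch, `classify` and `ask_user` are hypothetical stand-ins for the trained model and the terminal interaction; neither is an API defined by this specification.

```python
# Path 1: a pre-trained image recognition model returns the object label.
# Path 2: the image is sent to a connected terminal and the user's label
# is used instead. Both callables are illustrative assumptions.
def label_object(image, classify=None, ask_user=None):
    if classify is not None:
        return classify(image)       # model inference
    if ask_user is not None:
        return ask_user(image)       # label set by the user at the terminal
    raise ValueError("no labeling path configured")

# Toy usage with a dummy stand-in for the model:
print(label_object("object_image.jpg", classify=lambda img: "wardrobe"))
```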
In addition, the matching the object tag with a preset object tag, and determining an object corresponding to the successfully matched object tag as a target object, includes:
matching the object tag with a first preset object tag, and determining the object corresponding to the successfully matched object tag as a first target object; and
matching the object tag with a second preset object tag, and determining the object corresponding to the successfully matched object tag as a second target object.
In actual use, the target objects include a first target object and a second target object; in the scenario of cleaning a home with the robot, the first target object may be a room door and the second target object may be certain pieces of furniture.
Specifically, after the object tag of the object is obtained, the object tag is first matched with a first preset object tag, the object corresponding to the successfully matched object tag is determined as a first target object, meanwhile, the object tag is matched with a second preset object tag, and the object corresponding to the successfully matched object tag is determined as a second target object.
In the embodiments of this specification, the first target object and the second target object are obtained through matching the object tags against the preset object tags, so that when the robot subsequently cleans the working area, different avoidance strategies can be adopted for the first and second target objects based on the robot's distance relationship with each of them, ensuring the user's personalized experience.
Step 204: under the condition that the robot and the target object in the partition map are determined to meet the preset distance condition, controlling the robot to clean the working area through a first cleaning strategy.
Wherein the first cleaning strategy comprises controlling the robot to perform a cleaning task on the working area according to a distance relationship between the robot and the target object.
Specifically, after the first target object and the second target object are determined, when the robot is controlled to perform a cleaning task on the work area according to the partition map, the robot is controlled to clean the work area according to the distance relationship between the robot and the first target object and/or the second target object in each partition.
Specifically, when it is determined that the robot and the target object in the partition map satisfy a preset distance condition, controlling the robot to clean the working area through the first cleaning strategy includes:
under the condition that the first target object is acquired through the vision acquisition equipment of the robot, determining the state of the first target object through a panoramic image of a partition where the first target object is located, acquired through the vision acquisition equipment of the robot;
determining a first cleaning distance between the robot and the second target object through a distance sensor of the robot under the condition that the state of the first target object is determined to meet a preset cleaning condition;
and controlling the robot to execute a cleaning task on the working area according to the distance relation between the robot and the second target object under the condition that the first cleaning distance meets a first preset distance condition.
In practical applications, if the working area is a home and the first target object is a door, the preset cleaning condition can be understood as the door being open. The first preset distance condition can be understood as the first cleaning distance being less than or equal to a first preset distance threshold, which may be set according to the practical application and is not limited by this application; for example, it may be set to 5 centimeters, 8 centimeters, and so on.
Following the example above, take controlling the robot to clean a home, with a door as the first target object. Specifically, when the robot is controlled to clean a room according to the partition map and its vision acquisition device captures a door, the state of the door (open or closed) is determined from the panoramic image, captured by the vision acquisition device, of the partition where the door is located. If the state of the door satisfies the preset cleaning condition, i.e., the door is open, a first cleaning distance between the robot's current position and the second target object is acquired through a distance sensor of the robot (e.g., a lidar sensor). When the first cleaning distance satisfies the first preset distance condition (i.e., it is less than or equal to the first preset distance threshold), the robot is controlled to perform the cleaning task on the working area according to the distance relationship between the robot and the second target object. This distance relationship can be understood as the relatively fixed distance the robot keeps from the second target object while performing the cleaning task; for example, the robot is controlled to perform the cleaning task in the part of the working area 10 cm away from the second target object.
In the embodiments of this specification, when the robot is controlled to clean the working area, the state of the door is judged first; only when the door is open can the robot enter through it to perform the cleaning task. While cleaning inside, the robot also judges the cleaning distance between its current position and a recognized second target object, and when that distance is less than or equal to the first preset distance threshold, the robot cleans the partition according to the distance relationship between itself and the second target object. In this way the robot enters a room only when its door is recognized as open, which avoids bumping the door and disturbing a resting user, and it performs the cleaning task at a fixed distance from recognized target furniture, which protects the furniture from collision damage.
When the state of the door is closed, to keep the robot's cleaning work proceeding normally without leaving any region uncleaned, the robot can be controlled to clean in the region a certain distance away from the door (the first target object); it then neither bumps into the door nor misses the area outside it. A specific implementation is as follows:
after the determining the state of the first target object, the method further includes:
determining, by a distance sensor of the robot, a second cleaning distance of the robot from the first target object, in a case where it is determined that the state of the first target object does not satisfy a preset cleaning condition;
and controlling the robot to execute a cleaning task on the working area according to the distance relation between the robot and the first target object under the condition that the second cleaning distance meets a second preset distance condition.
In practical applications, if the working area is a home and the first target object is a door, the preset cleaning condition can be understood as the door being open, so a closed door does not satisfy it. The second preset distance condition can be understood as the second cleaning distance being less than or equal to a second preset distance threshold, which may be set according to the practical application and is not limited by this application; for example, it may be set to 10 centimeters, 12 centimeters, and so on, and the first preset distance threshold may be the same as or different from the second.
Following the example above, take controlling the robot to clean a home, with a bed as the second target object. Specifically, when the robot is controlled to clean a room according to the partition map and its vision acquisition device captures a door, the state of the door (open or closed) is determined from the panoramic image, captured by the vision acquisition device, of the partition where the door is located. If the state of the door does not satisfy the preset cleaning condition, i.e., the door is closed, a second cleaning distance between the robot's current position and the first target object is acquired through a distance sensor of the robot (e.g., a lidar sensor). When the second cleaning distance satisfies the second preset distance condition (i.e., it is less than or equal to the second preset distance threshold), the robot is controlled to perform the cleaning task on the working area according to the distance relationship between the robot and the first target object, i.e., the relatively fixed distance the robot keeps from the first target object while cleaning; for example, the robot is controlled to perform the cleaning task in the part of the working area 12 cm away from the first target object.
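Putting the two branches together, the control flow reads roughly as in the sketch below. Only the thresholds and stand-off distances follow the examples given above (5 cm and 10 cm for the open-door branch, 10 cm and 12 cm for the closed-door branch); the function and variable names are illustrative assumptions.

```python
# Combined sketch of the first cleaning strategy around a door (first
# target object) and furniture (second target object). Distances in cm.
FIRST_THRESHOLD = 5      # first preset distance condition (near furniture)
FURNITURE_STANDOFF = 10  # fixed distance kept from the second target object
SECOND_THRESHOLD = 10    # second preset distance condition (near closed door)
DOOR_STANDOFF = 12       # fixed distance kept from the first target object

def choose_action(door_open, dist_to_door, dist_to_furniture):
    if door_open:                          # preset cleaning condition is met
        if dist_to_furniture <= FIRST_THRESHOLD:
            return f"clean while keeping {FURNITURE_STANDOFF} cm from furniture"
        return "enter the room and clean normally"
    if dist_to_door <= SECOND_THRESHOLD:   # door closed and robot near it
        return f"clean only the area {DOOR_STANDOFF} cm away from the door"
    return "clean normally outside the room"

print(choose_action(door_open=False, dist_to_door=8, dist_to_furniture=50))
```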
Referring to fig. 3, fig. 3 is a schematic diagram illustrating cleaning of a robot in a state where a door is closed in a robot control method according to an embodiment of the present disclosure.
In fig. 3, the first target object 302 is a door in the closed state. If the robot 304 is controlled to perform a cleaning task on the working area, it cleans only the region 10 or 20 cm away from the door, which avoids the noise generated by bumping into the door.
Meanwhile, even while the robot is controlled to clean the working area in the region 10 cm away from the door, the cleaning distance between its current position and the second target object is still determined through the robot's distance sensor, which prevents the robot from bumping the target furniture in the working area.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating two cleaning modes of a robot in a state that a door is closed in a robot control method according to an embodiment of the present disclosure.
If the cleaning route preset in the robot is the bow-shaped cleaning route on the left of fig. 4, then when the robot is controlled to perform a cleaning task on the working area and the vision acquisition device determines that the door is closed, the robot is controlled to clean according to the distance relationship between itself and the door, for example performing the cleaning task in the part of the working area 10 cm away from the door. Each time the robot comes within 10 cm of the door, it turns back and cleans the working area in the reverse direction; it then sweeps toward the door again along the bow-shaped route, and on coming within 10 cm of the door once more it again reverses. The robot is controlled to cycle through the working area in this way, so its final cleaning route runs as transverse bows relative to the door, and the cleaning task is performed over the part of the working area 10 cm away from the door.
In the other case, when the robot is controlled to perform a cleaning task on the working area along the bow-shaped cleaning route, if the vision acquisition device determines that the door is closed and the distance between the robot and the door is less than a preset distance threshold, for example less than 10 cm, the robot's cleaning route is changed to a bow-shaped route parallel to the door, and the robot is controlled to perform the cleaning task according to the distance relationship between itself and the door. The robot's final bow-shaped route is then parallel to the door, and the cleaning task is performed along it in the part of the working area 10 cm away from the door. In short, the first case can be understood as bounce-back: the robot does not change its cleaning mode at the door; it turns back on meeting the door, sweeps toward the door again after cleaning the area away from it, turns back once more on meeting the door, and repeats the cycle. The second case can be understood as changing the cleaning mode at the door: the robot performs the cleaning task along the direction parallel to the door, within the region 10 cm away from it.
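The second mode, rotating the bow-shaped route so its strips run parallel to the closed door, can be sketched geometrically as below. The coordinate frame, strip width, and band depth are assumptions chosen purely for illustration.

```python
# Waypoints of a bow-shaped (boustrophedon) route whose strips run parallel
# to a closed door lying along the line y = door_y, keeping a fixed margin.
def bow_route_parallel_to_door(door_y, room_width, band_depth=200,
                               strip=30, margin=10):
    """Yield (x, y) waypoints in cm; successive strips alternate direction."""
    y = door_y + margin
    left_to_right = True
    while y <= door_y + band_depth:
        x_start, x_end = (0, room_width) if left_to_right else (room_width, 0)
        yield (x_start, y)
        yield (x_end, y)
        y += strip
        left_to_right = not left_to_right

print(list(bow_route_parallel_to_door(door_y=0, room_width=120))[:6])
# -> [(0, 10), (120, 10), (120, 40), (0, 40), (0, 70), (120, 70)]
```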
In addition, the robot's cleaning mode can also be set according to the area of the working area. For example, when the working area is large and a person is detected in a certain partition through the robot's vision acquisition device, the robot can be controlled to perform the cleaning task in that partition under the first cleaning strategy of this embodiment, according to the distance relationship between the robot and the target object, which reduces the disturbance of noise to the person; in the other partitions far from that one, the robot can still be controlled to perform the cleaning task under the second cleaning strategy, so that the working area is cleaned more thoroughly.
In another embodiment of the present specification, the determining the state of the first target object from the panoramic image of the partition where the first target object is located, captured by the vision capturing device of the robot, includes:
acquiring a panoramic image of a partition where the first target object is located through visual acquisition equipment of the robot, and extracting a top image of a current working area of the partition where the first target object is located from the panoramic image;
determining an initial working area top image of a partition in which the first target object is located based on the partition map;
and comparing the top image of the current working area with the top image of the original working area to determine the state of the first target object.
Taking a door as the first target object: specifically, when acquiring the state of the door, a panoramic image of the partition where the door is located is first captured by the robot's vision acquisition device, and the current working area top image of that partition is extracted from the panoramic image; see fig. 5, which shows a schematic diagram of the current working area top image of the partition where the first target object is located. And because the partition map was constructed from the collected environment information of the working area, the initial working area top image of the partition where the first target object is located can be determined from the partition map as it was initially built; see fig. 6, which shows a schematic diagram of that initial working area top image. The current working area top image is then compared with the initial working area top image to determine the state of the first target object.
As can be seen from fig. 6, the initial working area top image of the partition where the door is located represents the open state of the door. Comparing the top images of figs. 5 and 6 shows that the current working area top image in fig. 5 differs from the initial one in fig. 6; since the initial image in fig. 6 represents the door in the open state, the current image in fig. 5 represents the door in the closed state.
In the embodiments of this specification, the current working area top image of the partition where the first target object is located is compared with the initial working area top image obtained when the robot performed the cleaning task for the first time and built the partition map. Through this image comparison, whether the door is closed or open can be determined quickly, which greatly improves the robot's cleaning efficiency.
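As a toy illustration of the comparison step, the two top images can be diffed pixel by pixel, with a sufficiently large change taken to flip the recorded door state. A real system would register the images and use a more robust metric; the thresholds below are assumptions.

```python
# Compare the current top image with the initial one stored with the
# partition map; a significant difference flips the assumed door state.
def fraction_changed(current, initial, pixel_tol=16):
    """current/initial: equally sized 2D lists of grayscale pixel values."""
    total = changed = 0
    for row_c, row_i in zip(current, initial):
        for c, i in zip(row_c, row_i):
            total += 1
            changed += abs(c - i) > pixel_tol
    return changed / total

def door_state(current, initial, initial_state="open", change_frac=0.05):
    if fraction_changed(current, initial) > change_frac:
        return "closed" if initial_state == "open" else "open"
    return initial_state

# Toy 2x3 images: enough pixels differ, so the door now reads as closed.
print(door_state([[0, 200, 200], [0, 200, 200]],
                 [[0,   0,   0], [0,   0,   0]]))
```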
In addition, the state of the first target object can be acquired in a machine learning model manner to more accurately determine the state of the first target object, and the specific implementation manner is as follows:
the determining the state of the first target object through the panoramic image of the partition where the first target object is acquired by the vision acquisition device of the robot includes:
acquiring a panoramic image of a partition where the first target object is located through visual acquisition equipment of the robot;
inputting the panoramic image into the image recognition model, wherein the image recognition model outputs an image tag corresponding to the panoramic image;
determining the positions of the first target object and the bearing object in the panoramic image if the image tag matches a bearing object of the first target object; and
determining the state of the first target object based on the positional relationship between the first target object and the bearing object in the panoramic image.
For specific introduction of the image recognition model, reference may be made to the above embodiments, which are not described herein again.
In practical applications, after the panoramic image of the partition where the first target object is located has been captured by the robot's vision acquisition device, the panoramic image is input into the image recognition model to obtain the image tags of all the objects in it, such as a door and a door frame. The tags are then matched against the bearing object of the first target object, and the state of the first target object is determined based on the positional relationship between the first target object and the bearing object in the panoramic image.
In specific implementations, where the first target object is a door, its bearing object is the door frame. After the panoramic image is input into the image recognition model, the image tags of all the objects in it are obtained. If a tag matches the bearing object of the first target object, the bearing object is determined to be present in the panoramic image; the positions of the door and the door frame are then marked in the image and the included angle between them is calculated. If the angle is less than or equal to a preset angle threshold, the door is determined to be closed; if it is greater than the threshold, the door is determined to be open. The preset angle threshold can be set according to the practical application, for example to 10 degrees, 20 degrees, and so on.
In general, when the door is closed, the included angle between the door and the door frame is small, and when the door is opened, the included angle between the door and the door frame is large.
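As a hedged illustration of this angle test, the following sketch treats the door leaf and the door frame detected in the panoramic image as 2-D direction vectors; the vector representation, the helper name and the 10-degree default are assumptions of the sketch:

```python
import math

def door_angle_state(door_vec: tuple[float, float],
                     frame_vec: tuple[float, float],
                     angle_threshold_deg: float = 10.0) -> str:
    # Included angle between the door leaf and the door frame, computed
    # from their direction vectors in the image plane.
    dot = door_vec[0] * frame_vec[0] + door_vec[1] * frame_vec[1]
    norm = math.hypot(door_vec[0], door_vec[1]) * math.hypot(frame_vec[0], frame_vec[1])
    cos_angle = max(-1.0, min(1.0, dot / norm))
    angle_deg = math.degrees(math.acos(cos_angle))
    # A door flush with its frame subtends a small angle -> closed.
    return "closed" if angle_deg <= angle_threshold_deg else "open"
```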
In the embodiment of the specification, the robot control method constructs the partition map of the working area and labels the target objects in each partition based on the partition map. Therefore, when the robot cleans the working area based on the partition map, the robot can selectively clean or avoid the target objects based on the preset association relationship between the robot and the target objects. A more humanized cleaning method is thereby realized, collision protection of the target objects is achieved through the robot's avoidance of them, and the user experience is improved.
The following further describes the robot control method provided in this specification with reference to fig. 7, taking the application of the method to a robot cleaning a room as an example. Fig. 7 is a flowchart illustrating a processing procedure of a robot control method according to an embodiment of the present disclosure, which specifically includes the following steps.
Step 702: when the sweeping robot executes a cleaning task for the first time, the sweeping robot is controlled to walk along the walls of the rooms so as to sweep all the rooms, the environmental information of the rooms is collected through the vision acquisition device of the robot, and the partition map of the rooms is constructed.
Step 704: the target objects in each partition are labeled according to the constructed partition map, wherein the target objects include beds, sofas, wardrobes and doors.
Step 706: the sweeping robot is controlled to perform subsequent cleaning tasks on the rooms according to the partition map; when the sweeping robot sweeps the periphery of the door, whether the door is in the closed state is judged; if yes, step 708 is executed, and if not, step 712 is executed.
Step 708: the sweeping robot is controlled to sweep only the area more than 10 centimeters away from the door.
Step 710: after the area more than 10 centimeters away from the door has been cleaned, the sweeping robot is controlled to leave the area around the door and to continue cleaning the room normally.
Step 712: whether the sweeping robot is sweeping the periphery of a bed, a sofa or a wardrobe is judged; if so, step 714 is executed, and if not, step 718 is executed.
Step 714: the sweeping robot is controlled to sweep only the area more than 5 centimeters away from the bed, sofa or wardrobe.
Step 716: after the area more than 5 centimeters away has been cleaned, the sweeping robot is controlled to leave the area around the bed, sofa or wardrobe and to continue cleaning the room normally.
Step 718: the sweeping robot is controlled to perform a global cleaning of the room by the above method; the avoidance distances used in this decision loop are sketched below.
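Purely as a sketch of the avoidance distances used in steps 706 to 718, the following helper maps an object label to the minimum cleaning clearance; the function and label names are assumptions, while the 10-centimeter and 5-centimeter values come from the processing procedure above:

```python
DOOR_CLEARANCE_CM = 10      # closed door, steps 708-710
FURNITURE_CLEARANCE_CM = 5  # bed / sofa / wardrobe, steps 714-716

def clearance_for(obj_label: str, door_closed: bool) -> int:
    # Minimum distance, in centimeters, that the robot keeps from the
    # object while cleaning around it.
    if obj_label == "door":
        # Only a closed door is avoided; an open door is cleaned normally.
        return DOOR_CLEARANCE_CM if door_closed else 0
    if obj_label in ("bed", "sofa", "wardrobe"):
        return FURNITURE_CLEARANCE_CM
    return 0  # any other object: normal cleaning, no extra clearance
```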
The robot control method provided by the embodiment of the specification is applied to the scene of cleaning a room by a robot. When the sweeping robot is controlled to clean the room for the first time, the environmental information of the room is collected by the vision acquisition device of the sweeping robot while the room is cleaned, a partitioned house layout of the room is constructed, and target objects in the room such as sofas, wardrobes and beds are identified and labeled according to the layout. When the sweeping robot performs the next global cleaning of the room and is controlled to move near a door, it can automatically identify whether the door is closed or open; when the door is closed, the sweeping robot is controlled to automatically keep 10 centimeters away, so that the door is not collided with and the generation of noise is reduced. Meanwhile, the furniture is protected during global cleaning: when the sweeping robot approaches furniture, it cleans along the edge while automatically keeping 5 centimeters away, so that the furniture is not collided with or damaged, and the personalized cleaning requirements of the user are met.
Corresponding to the above method embodiment, the present specification further provides an embodiment of a robot control device. Fig. 8 shows a schematic structural diagram of a robot control device provided in an embodiment of the present specification. As shown in fig. 8, the apparatus includes:
a control module 802 configured to control the robot to perform a cleaning task on a work area according to a pre-established zone map;
a cleaning module 804 configured to control the robot to clean the working area through a first cleaning strategy if it is determined that the robot and a target object in the zone map satisfy a preset distance condition;
wherein the first cleaning strategy comprises controlling the robot to perform a cleaning task on the working area according to a distance relationship between the robot and the target object.
Optionally, the apparatus further includes:
the starting module is configured to start the robot and control the robot to clean the working area through a second cleaning strategy under the condition of receiving an initial starting instruction;
an information acquisition module configured to acquire environmental information of the work area through a vision acquisition device of the robot;
the map building module is configured to perform regional division on the working area based on the environmental information of the working area so as to build a regional map of the working area;
wherein the second cleaning strategy comprises controlling the robot to perform a cleaning task on the work area through a cleaning route preset in the robot.
Optionally, the apparatus further includes:
and the identification module is configured to identify the target object of each partition in the partition map and label the target object.
Optionally, the identification module is further configured to:
controlling the robot to acquire images of the objects in each subarea according to the subarea map;
determining an object tag of the object from the acquired object image;
and matching the object tag with a preset object tag, and determining an object corresponding to the successfully matched object tag as a target object.
Optionally, the identification module is further configured to:
inputting a collected object image into an image recognition model, wherein the image recognition model outputs an object label corresponding to the object image; or
sending the acquired object image to a terminal in communication connection with the robot, and receiving an object label set by a user at the terminal for the object according to the object image.
Optionally, the identification module is further configured to:
matching the object tag with a first preset object tag, and determining an object corresponding to the successfully matched object tag as a first target object; and
and matching the object tag with a second preset object tag, and determining an object corresponding to the successfully matched object tag as a second target object.
Optionally, the cleaning module 804 is further configured to:
under the condition that the first target object is acquired through the vision acquisition equipment of the robot, determining the state of the first target object through a panoramic image of a partition where the first target object is located, acquired through the vision acquisition equipment of the robot;
determining a first cleaning distance between the robot and the second target object through a distance sensor of the robot under the condition that the state of the first target object is determined to meet a preset cleaning condition;
and controlling the robot to execute a cleaning task on the working area according to the distance relation between the robot and the second target object under the condition that the first cleaning distance meets a first preset distance condition.
Optionally, the apparatus further includes:
a distance determination module configured to determine, by a distance sensor of the robot, a second cleaning distance of the robot from the first target object, in a case where it is determined that the state of the first target object does not satisfy a preset cleaning condition;
and the task execution module is configured to control the robot to execute a cleaning task on the working area according to a distance relation between the robot and the first target object when the second cleaning distance meets a second preset distance condition.
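To make the two distance conditions concrete, here is a hedged sketch of how the cleaning module and the distance determination module might choose which distance relationship governs the cleaning task; every name and both threshold defaults are assumptions, with the door-open case standing in for the preset cleaning condition:

```python
def select_cleaning_strategy(door_open: bool,
                             dist_to_second_cm: float,
                             dist_to_first_cm: float,
                             first_condition_cm: float = 5.0,
                             second_condition_cm: float = 10.0) -> str:
    # door_open models "the state of the first target object satisfies
    # the preset cleaning condition".
    if door_open:
        # First cleaning distance: from the robot to the second target
        # object (furniture), checked against the first preset condition.
        if dist_to_second_cm <= first_condition_cm:
            return "clean by distance relation to the second target object"
    else:
        # Second cleaning distance: from the robot to the first target
        # object (the door), checked against the second preset condition.
        if dist_to_first_cm <= second_condition_cm:
            return "clean by distance relation to the first target object"
    return "clean the working area normally"
```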
Optionally, the cleaning module 804 is further configured to:
acquiring a panoramic image of a partition where the first target object is located through visual acquisition equipment of the robot, and extracting a top image of a current working area of the partition where the first target object is located from the panoramic image;
determining an initial working area top image of a partition in which the first target object is located based on the partition map;
and comparing the top image of the current working area with the top image of the initial working area to determine the state of the first target object.
Optionally, the cleaning module 804 is further configured to:
acquiring a panoramic image of a partition where the first target object is located through visual acquisition equipment of the robot;
inputting the panoramic image into the image recognition model, wherein the image recognition model outputs an image tag corresponding to the panoramic image;
determining positions of the first target object and the bearing object in the panoramic image if the image tag matches a bearing object of the first target object;
determining the state of the first target object based on the position relation of the first target object and the bearing object in the panoramic image.
In the embodiment of the specification, when the robot is controlled to clean the working area based on the partition map pre-established for the working area, the robot control device enables the robot to selectively clean the target object based on the preset distance relationship between the robot and the target object. A more humanized cleaning method is thereby realized, collision protection of the target object is achieved through the robot's avoidance of it, and the user experience is improved.
The above is a schematic scheme of a robot control device of the present embodiment. It should be noted that the technical solution of the robot control device belongs to the same concept as the technical solution of the robot control method, and for details that are not described in detail in the technical solution of the robot control device, reference may be made to the description of the technical solution of the robot control method.
An embodiment of the present specification further provides a robot, including:
the machine body is provided with a memory and a processor;
the memory is for storing computer executable instructions and the processor is for executing the computer executable instructions, which when executed by the processor implement the steps of the robot control method.
The above is a schematic solution of a robot of the present embodiment. It should be noted that the technical solution of the robot belongs to the same concept as the technical solution of the robot control method, and for details that are not described in detail in the technical solution of the robot, reference may be made to the description of the technical solution of the robot control method.
An embodiment of the present specification also provides a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the robot control method.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the robot control method, and for details that are not described in detail in the technical solution of the storage medium, reference may be made to the description of the technical solution of the robot control method.
The foregoing description of specific embodiments has been presented for purposes of illustration and description. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form, and the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), electrical carrier wave signals, telecommunication signals, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunication signals, in accordance with legislation and patent practice.
It should be noted that, for the sake of simplicity, the foregoing method embodiments are described as a series of acts, but those skilled in the art should understand that the present embodiment is not limited by the described acts, because some steps may be performed in other sequences or simultaneously according to the present embodiment. Further, those skilled in the art should also appreciate that the embodiments described in this specification are preferred embodiments and that acts and modules referred to are not necessarily required for an embodiment of the specification.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are intended only to aid in the description of the specification. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the embodiments and the practical application, to thereby enable others skilled in the art to best understand and utilize the embodiments. The specification is limited only by the claims and their full scope and equivalents.

Claims (13)

1. A robot control method, comprising:
controlling the robot to execute a cleaning task on a working area according to a pre-established partition map;
under the condition that the robot and the target object in the zone map are determined to meet the preset distance condition, controlling the robot to clean the working area through a first cleaning strategy;
wherein the first cleaning strategy comprises controlling the robot to perform a cleaning task on the working area according to a distance relationship between the robot and the target object.
2. The robot control method according to claim 1, wherein before controlling the robot to perform a cleaning task on a work area according to a pre-established partition map, further comprising:
under the condition of receiving an initial starting instruction, starting the robot, and controlling the robot to clean the working area through a second cleaning strategy;
collecting the environmental information of the working area through a visual collection device of the robot;
performing area division on the working area based on the environment information of the working area to construct a partition map of the working area;
wherein the second cleaning strategy comprises controlling the robot to perform a cleaning task on the work area through a cleaning route preset in the robot.
3. The robot control method according to claim 2, further comprising, after the constructing the zone map of the work area:
and identifying a target object of each partition in the partition map, and labeling the target object.
4. The robot control method of claim 3, wherein the identifying the target object for each zone in the zone map comprises:
controlling the robot to acquire images of the objects in each subarea according to the subarea map;
determining an object tag of the object from the acquired object image;
and matching the object tag with a preset object tag, and determining an object corresponding to the successfully matched object tag as a target object.
5. The robot control method of claim 4, wherein determining the object label of the object from the captured object image comprises:
inputting a collected object image into an image recognition model, wherein the image recognition model outputs an object label corresponding to the object image; or
sending the acquired object image to a terminal in communication connection with the robot, and receiving an object label set by a user at the terminal for the object according to the object image.
6. The robot control method according to claim 4 or 5, wherein the matching the object tag with a preset object tag and determining an object corresponding to the successfully matched object tag as a target object comprises:
matching the object tag with a first preset object tag, and determining an object corresponding to the successfully matched object tag as a first target object; and
and matching the object tag with a second preset object tag, and determining an object corresponding to the successfully matched object tag as a second target object.
7. The robot control method according to claim 6, wherein controlling the robot to clean the work area by a first cleaning strategy in a case where it is determined that the robot and a target object in the zone map satisfy a preset distance condition comprises:
under the condition that the first target object is acquired through the vision acquisition equipment of the robot, determining the state of the first target object through a panoramic image of a partition where the first target object is located, acquired through the vision acquisition equipment of the robot;
determining a first cleaning distance between the robot and the second target object through a distance sensor of the robot under the condition that the state of the first target object is determined to meet a preset cleaning condition;
and controlling the robot to execute a cleaning task on the working area according to the distance relation between the robot and the second target object under the condition that the first cleaning distance meets a first preset distance condition.
8. The robot control method according to claim 7, further comprising, after the determining the state of the first target object:
determining, by a distance sensor of the robot, a second cleaning distance of the robot from the first target object, in a case where it is determined that the state of the first target object does not satisfy a preset cleaning condition;
and controlling the robot to execute a cleaning task on the working area according to the distance relation between the robot and the first target object under the condition that the second cleaning distance meets a second preset distance condition.
9. The robot control method according to claim 7, wherein the determining the state of the first target object from the panoramic image of the partition where the first target object is located captured by the vision capturing device of the robot includes:
acquiring a panoramic image of a partition where the first target object is located through visual acquisition equipment of the robot, and extracting a top image of a current working area of the partition where the first target object is located from the panoramic image;
determining an initial working area top image of a partition in which the first target object is located based on the partition map;
and comparing the top image of the current working area with the top image of the initial working area to determine the state of the first target object.
10. The robot control method according to claim 7, wherein the determining the state of the first target object from the panoramic image of the partition where the first target object is located captured by the vision capturing device of the robot includes:
acquiring a panoramic image of a partition where the first target object is located through visual acquisition equipment of the robot;
inputting the panoramic image into the image recognition model, wherein the image recognition model outputs an image tag corresponding to the panoramic image;
determining positions of the first target object and the bearing object in the panoramic image if the image tag matches a bearing object of the first target object;
determining the state of the first target object based on the position relation of the first target object and the bearing object in the panoramic image.
11. A robot control apparatus, comprising:
a control module configured to control the robot to perform a cleaning task on a work area according to a pre-established zone map;
a cleaning module configured to control the robot to clean the working area through a first cleaning strategy if it is determined that the robot and a target object in the zone map satisfy a preset distance condition;
wherein the first cleaning strategy comprises controlling the robot to perform a cleaning task on the working area according to a distance relationship between the robot and the target object.
12. A robot, comprising:
the machine body is provided with a memory and a processor;
the memory is adapted to store computer-executable instructions and the processor is adapted to execute the computer-executable instructions, which when executed by the processor, perform the steps of the robot control method of any of claims 1-10.
13. A computer-readable storage medium, characterized in that it stores computer instructions which, when executed by a processor, carry out the steps of the robot control method according to any one of claims 1-10.
CN202011282012.XA 2020-11-16 2020-11-16 Robot control method and device Pending CN114504273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011282012.XA CN114504273A (en) 2020-11-16 2020-11-16 Robot control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011282012.XA CN114504273A (en) 2020-11-16 2020-11-16 Robot control method and device

Publications (1)

Publication Number Publication Date
CN114504273A (en) 2022-05-17

Family

ID=81547308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011282012.XA Pending CN114504273A (en) 2020-11-16 2020-11-16 Robot control method and device

Country Status (1)

Country Link
CN (1) CN114504273A (en)

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020045630A (en) * 2000-12-09 2002-06-20 구자홍 Electric vacuum cleaner based on remote control
JP2005218578A (en) * 2004-02-04 2005-08-18 Funai Electric Co Ltd Self-propelled vacuum cleaner
JP2010099365A (en) * 2008-10-27 2010-05-06 Panasonic Corp Self-propelled cleaner
DE102010060347A1 (en) * 2010-11-04 2012-05-10 Vorwerk & Co. Interholding Gmbh Automatically movable device e.g. cleaning device for cleaning floor, has contactless sensor that measures distance to target object, for comparing measured distance with defined distance
CN104000541A (en) * 2014-06-16 2014-08-27 成都北斗群星智能科技有限公司 Floor sweeping robot with threshold detection function and threshold detection method thereof
CN105629972A (en) * 2014-11-07 2016-06-01 科沃斯机器人有限公司 Guide type virtual wall system
CN104401272A (en) * 2014-11-20 2015-03-11 联想(北京)有限公司 Control method and automobile
CN107636548A (en) * 2015-05-12 2018-01-26 三星电子株式会社 Robot and its control method
CN107615203A (en) * 2015-06-15 2018-01-19 夏普株式会社 The traveling method of self-propelled electronic equipment and the self-propelled electronic equipment
CN106913290A (en) * 2015-12-25 2017-07-04 北京奇虎科技有限公司 A kind of sweeper and the control method based on sweeper
CN107041718A (en) * 2016-02-05 2017-08-15 北京小米移动软件有限公司 Clean robot and its control method
US20180210448A1 (en) * 2017-01-25 2018-07-26 Lg Electronics Inc. Method of identifying functional region in 3-dimensional space, and robot implementing the method
US20180350391A1 (en) * 2017-05-03 2018-12-06 Soltare Inc. Audio processing for vehicle sensory systems
CN107224249A (en) * 2017-07-06 2017-10-03 北京小米移动软件有限公司 The clean operation of cleaning equipment performs method, device and readable storage medium storing program for executing
CN109765895A (en) * 2019-01-24 2019-05-17 平安科技(深圳)有限公司 Automatic driving vehicle control method, device, automatic driving vehicle and storage medium
CN109758046A (en) * 2019-01-31 2019-05-17 任飞 A kind of smart home robot
CN109875470A (en) * 2019-01-31 2019-06-14 科沃斯机器人股份有限公司 It gets rid of poverty method, equipment and storage medium
CN109683622A (en) * 2019-02-22 2019-04-26 深圳市杉川机器人有限公司 Robot cleaning method, device, robot and computer readable storage medium
WO2020186493A1 (en) * 2019-03-21 2020-09-24 珊口(深圳)智能科技有限公司 Method and system for navigating and dividing cleaning region, mobile robot, and cleaning robot
CN110403528A (en) * 2019-06-12 2019-11-05 深圳乐动机器人有限公司 A kind of method and system improving cleaning coverage rate based on clean robot
CN110448241A (en) * 2019-07-18 2019-11-15 广东宝乐机器人股份有限公司 The stranded detection of robot and method of getting rid of poverty
CN110454059A (en) * 2019-09-12 2019-11-15 遵化市阔旺木业有限公司 A kind of Face recognition device for door leaf installation
CN110742557A (en) * 2019-10-24 2020-02-04 深圳市银星智能科技股份有限公司 Camera control method and device and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115328172A (en) * 2022-10-13 2022-11-11 科大讯飞股份有限公司 Cleaning robot, control method, device, equipment and storage medium thereof
CN115328172B (en) * 2022-10-13 2023-02-17 科大讯飞股份有限公司 Cleaning robot, control method, device, equipment and storage medium thereof
CN116067365A (en) * 2023-04-04 2023-05-05 科大讯飞股份有限公司 Map partitioning method, device, equipment and readable storage medium

Similar Documents

Publication Publication Date Title
JP7395229B2 (en) Mobile cleaning robot artificial intelligence for situational awareness
CN111543902B (en) Floor cleaning method and device, intelligent cleaning equipment and storage medium
CN111657798A (en) Cleaning robot control method and device based on scene information and cleaning robot
Ekvall et al. Integrating active mobile robot object recognition and slam in natural environments
US8140188B2 (en) Robotic system and method for observing, learning, and supporting human activities
CN111839371B (en) Ground sweeping method and device, sweeper and computer storage medium
CN111643017B (en) Cleaning robot control method and device based on schedule information and cleaning robot
CN111973075B (en) Floor sweeping method and device based on house type graph, sweeper and computer medium
CN114504273A (en) Robot control method and device
WO2020248458A1 (en) Information processing method and apparatus, and storage medium
CN112401763A (en) Control method of sweeping robot, sweeping robot and computer readable storage medium
CN112784664A (en) Semantic map construction and operation method, autonomous mobile device and storage medium
CN108107886B (en) Driving control method and device of sweeping robot and sweeping robot
US20230324923A1 (en) Infinity smart serving robot
KR20210007474A (en) A ROBOT CLEANER Using artificial intelligence AND CONTROL METHOD THEREOF
CN112462780A (en) Sweeping control method and device, sweeping robot and computer readable storage medium
JP6713057B2 (en) Mobile body control device and mobile body control program
CN110928282A (en) Control method and device for cleaning robot
EP3789841B1 (en) Information processing device, information processing method, program, and autonomous robot control system
CN110742557B (en) Camera control method and device and electronic equipment
JP2014106597A (en) Autonomous moving body, object information acquisition device, and object information acquisition method
CN113367616B (en) Robot control method, robot control device, robot, and storage medium
CN114489058A (en) Sweeping robot, path planning method and device thereof and storage medium
de la Puente et al. RGB-D sensor setup for multiple tasks of home robots and experimental results
CN116091607B (en) Method, device, equipment and readable storage medium for assisting user in searching object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination