WO2021143543A1 - Robot and control method therefor - Google Patents

Robot and control method therefor

Info

Publication number: WO2021143543A1
Authority: WO (WIPO PCT)
Prior art keywords: user, robot, target, work area, area
Application number: PCT/CN2020/142239
Other languages: English (en), French (fr)
Inventors: 彭锐 (Peng Rui), 宋庆祥 (Song Qingxiang)
Original assignee: 科沃斯机器人股份有限公司 (Ecovacs Robotics Co., Ltd.)
Application filed by 科沃斯机器人股份有限公司
Priority to US17/793,356 (published as US20230057965A1)
Publication of WO2021143543A1 (published in Chinese)

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4002: Installations of electric equipment
    • A47L11/4011: Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A47L11/4061: Steering means; Means for avoiding obstacles; Details related to the place where the driver is accommodated
    • A47L9/00: Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/04: Automatic control of the travelling movement; Automatic obstacle detection
    • A47L2201/06: Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning

Definitions

  • This application relates to the technical field of smart devices, and in particular to a robot and a control method thereof.
  • the present application provides a robot and a control method thereof from multiple aspects, so as to improve the control flexibility of the robot.
  • One aspect of the present application provides a robot control method, including: obtaining posture data of a user in response to a posture-interaction wake-up instruction; determining, according to the posture data, a target work area indicated by the user, the target work area and the area to which the current position of the robot belongs being different areas; and moving the robot to the target work area to perform a set work task.
  • Another aspect of the present application provides a robot, including: a robot body, and a sensor assembly, a controller, and a motion assembly installed on the robot body; the sensor assembly is configured to obtain the user's posture data in response to a user's work control instruction; the controller is configured to determine, according to the user's posture data, the target work area indicated by the user, and to control the motion assembly to move to the target work area to perform a work task.
  • In the embodiments of the present application, the robot can respond to a posture-interaction wake-up instruction, obtain the user's posture data, determine the target work area according to that posture data, and, when the target work area and the area to which the robot's current position belongs are different areas, move to the target work area to execute the set work task. The robot thus performs mobile work based on the user's posture without being restricted by area division, which further improves the flexibility of controlling the robot.
  • FIG. 1 is a schematic structural diagram of a robot provided by an exemplary embodiment of this application;
  • FIG. 2 is a schematic diagram of the principle of three-dimensional depth measurement provided by an exemplary embodiment of this application;
  • FIG. 3 is a schematic flowchart of a robot control method provided by an exemplary embodiment of this application;
  • FIG. 4a is a schematic flowchart of a robot control method provided by another exemplary embodiment of this application;
  • FIG. 4b is a schematic diagram of acquiring posture data and detecting key points according to an exemplary embodiment of this application;
  • FIGS. 4c-4d are schematic diagrams of determining the target work direction according to the spatial coordinates corresponding to a gesture, provided by an exemplary embodiment of this application;
  • FIG. 5a is a schematic diagram of the working logic of a sweeping robot provided by an application scenario embodiment of this application;
  • FIGS. 5b-5d are schematic diagrams of a sweeping robot performing cleaning tasks according to the user's posture, provided by an application scenario embodiment of this application.
  • In the prior art, the way of specifying a robot's work area is rather limited.
  • For example, for a sweeping robot with a cleaning function, the user usually needs to specify the area to be cleaned on the robot's navigation map provided by a terminal device, and the sweeping robot performs the cleaning task according to the cleaning area specified by the user on the navigation map.
  • However, this approach is highly dependent on the terminal device.
  • In addition, in some typical scenarios the robot's navigation map is incomplete, so the sweeping robot cannot perform cleaning tasks in areas not included in the navigation map, and its flexibility is poor.
  • FIG. 1 is a schematic structural diagram of a robot provided by an exemplary embodiment of the application. As shown in FIG. 1, the robot includes a main body 10, a sensor assembly 20 installed on the main body 10, a controller 30 and a motion assembly 40.
  • a robot refers to an electronic device that can move autonomously and can realize intelligent control.
  • In some scenarios, the robot is implemented as a robot that can perform sweeping and cleaning tasks, such as a sweeping robot that cleans the floor, a scrubbing robot that cleans floors, walls, ceilings, glass, or motor vehicles, an air-purification robot that purifies the air, and so on.
  • In FIG. 1, taking a sweeping robot as an example, the structure of the robot provided in the embodiments of the present application is schematically illustrated; this does not mean that the robot provided in the present application can only be implemented as a sweeping robot.
  • the robot can be implemented as a warehouse logistics robot, such as a freight robot, a goods delivery robot, and so on.
  • the robot can be implemented as a robot waiter, such as a welcome robot in a hotel, a serving robot, a shopping guide robot in a mall or a store, etc., which are not shown in the figure.
  • the autonomous movement function of the above-mentioned robot may include the function of moving on the ground, and may also include the function of moving autonomously in the air. If the function of flying and moving in the air is included, the above-mentioned robot can be realized as an unmanned aerial vehicle, which will not be repeated.
  • the sensor assembly 20 is mainly used to respond to the user's operation control instructions to obtain the user's posture data.
  • the posture refers to the posture presented by the user, such as head posture, hand posture, leg posture, and so on.
  • the user can interact with the robot through posture, and the posture data of the user is data collected by the sensor assembly 20 on the posture of the user.
  • the sensor component 20 can be implemented by any one or more sensors capable of collecting posture data of the user, which is not limited in this embodiment.
  • the sensor component 20 may be implemented as a three-dimensional depth sensor for performing three-dimensional measurement on the user to obtain three-dimensional measurement data.
  • the three-dimensional measurement data includes: the image obtained by shooting the user and the distance between the user and the robot.
  • the image can be an RGB (red, green, and blue three-channel) image or a grayscale image, and the distance between the user and the robot is also called the depth of the measured object.
  • the following will exemplify the implementation manner in which the three-dimensional depth sensor obtains the RGB image of the measured object and perceives the depth of the measured object in combination with the optional implementation form of the three-dimensional depth sensor.
  • the three-dimensional depth sensor is implemented based on binocular cameras, and based on binocular depth recovery technology to obtain three-dimensional measurement data.
  • two monocular cameras can be fixed on a module, and the angle and distance of the two cameras can be fixed to form a stable binocular structure.
  • the binocular camera shoots the object under test, and the RGB image of the object under test can be obtained.
  • the distance between the measured object and the camera can be obtained based on the triangulation method and the principle of parallax.
  • triangulation can be used to calculate the distance from the measured object to the baseline of the binocular camera. This will be further described below in conjunction with FIG. 2.
  • As shown in FIG. 2, the distance between the binocular cameras is the baseline distance $B$, and the camera focal length is $f$. The binocular cameras capture the same feature point $P(x_c, y_c, z_c)$ of the space object at the same moment, where $(x_c, y_c, z_c)$ are the coordinates of the feature point $P$ in the camera coordinate system $xyz$. The corresponding image coordinates are $p_{left}(x_{left}, y_{left})$ and $p_{right}(x_{right}, y_{right})$, respectively; since the two image planes are coplanar, $y_{left} = y_{right}$, and the depth of $P$ follows by triangulation from the disparity $\Delta = x_{left} - x_{right}$.
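  • As a concrete illustration of the parallax relationship just described, the following minimal Python sketch (an illustrative example, not part of the patent text) recovers the camera-frame coordinates of a matched feature point from its left/right image coordinates, the baseline $B$, and the focal length $f$:

```python
import numpy as np

def triangulate(p_left, p_right, baseline, focal_len):
    """Recover camera-frame coordinates (xc, yc, zc) of a feature point
    from its pixel coordinates in a rectified stereo pair.

    p_left, p_right: (x, y) image coordinates in the left/right camera.
    baseline:        distance B between the two cameras.
    focal_len:       focal length f, in the same units as the image coordinates.
    """
    x_left, y_left = p_left
    x_right, _ = p_right
    disparity = x_left - x_right              # delta = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must lie in front of both cameras")
    z_c = baseline * focal_len / disparity    # depth: z = B * f / delta
    x_c = baseline * x_left / disparity
    y_c = baseline * y_left / disparity
    return np.array([x_c, y_c, z_c])

# Example: a point at x = 420 px in the left image and 380 px in the right,
# with a 60 mm baseline and a 700 px focal length, lies about 1050 mm away.
print(triangulate((420.0, 300.0), (380.0, 300.0), baseline=60.0, focal_len=700.0))
```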
  • Optionally, the binocular cameras may be infrared cameras, so that, with the illumination of an infrared lamp, the depth information of the measured object can be captured even in weak light or in the dark.
  • the three-dimensional depth sensor may be implemented based on a projector and a camera capable of projecting structured light.
  • the camera can take pictures of the measured object and obtain the RGB image of the measured object.
  • the projector can project structured light with a known pattern to the measured object, and the camera can acquire the pattern formed by the reflected structured light.
  • the projected structured-light pattern can be compared with the reflected pattern, and based on the comparison result and the fixed distance between the projector and the camera, the depth information of the measured object can be calculated by triangulation.
  • the structured light projected by the projector may be speckle structured light or coded structured light, which is not limited in this embodiment.
  • the three-dimensional depth sensor may be implemented based on a camera, and an electromagnetic wave sensor such as lidar or millimeter wave radar.
  • the camera can take pictures of the measured object and obtain the RGB image of the measured object.
  • the electromagnetic wave signal emitted by the lidar or millimeter wave radar returns after reaching the measured object.
  • the time for the electromagnetic wave signal to return after reaching the measured object is calculated, and the distance between the measured object and the sensor is calculated based on the time and the transmission speed of the electromagnetic wave.
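  • For the radar-based variant, the distance follows directly from the round-trip time of the electromagnetic wave. A minimal sketch (illustrative only; the propagation speed used is the speed of light):

```python
SPEED_OF_LIGHT = 299_792_458.0  # propagation speed of the electromagnetic wave, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the measured object from the time an electromagnetic pulse
    takes to travel to the object and back (halved for the one-way distance)."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A 20 ns round trip corresponds to roughly 3 m between sensor and object.
print(tof_distance(20e-9))
```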
  • The controller 30 is used to determine, according to the user's posture data collected by the sensor assembly 20, the target work area indicated by the user, and to control the motion assembly 40 to move to the target work area to perform the work task.
  • Optionally, the controller 30 may be implemented using an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a central control element, a microprocessor, a micro control unit (MCU), or other electronic elements, which is not limited in this embodiment.
  • the motion component 40 refers to a device installed on the robot for autonomous movement of the robot, for example, a mobile chassis of the robot, rollers, etc., which are not limited in this embodiment.
  • the robot provided in the embodiment of the present application may further include a memory installed on the main body 10.
  • the memory is used to store computer programs and can be configured to store various other data to support operations on the robot. Examples of such data include instructions for any application or method operating on the robot.
  • the memory can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
  • the robot may further include a display assembly installed on the body 10.
  • the display component may include a screen, and the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the robot may further include a power supply assembly mounted on the body 10, and the power supply assembly may provide power for various components on the robot.
  • the power supply component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the robot, and will not be repeated here.
  • an embodiment of the present application also provides a robot control method, which will be described in detail below with reference to the accompanying drawings.
  • FIG. 3 is a schematic flowchart of a robot control method provided by an exemplary embodiment of the application. As shown in FIG. 3, the method includes:
  • Step 301: The robot responds to the posture-interaction wake-up instruction to obtain posture data of the user.
  • Step 302: The robot determines the target work area indicated by the user according to the posture data; the target work area and the area to which the current position of the robot belongs are different areas.
  • Step 303: The robot moves to the target work area to perform the set work task.
  • the gesture interaction instruction refers to an instruction used to wake up the robot's gesture interaction function.
  • the posture interaction function refers to the function that the robot can capture the user's posture, recognize the interactive content corresponding to the user's posture, and execute the corresponding work task according to the recognized interactive content.
  • the gesture interaction instruction may be directly issued by the user, or may be issued by the user through the terminal device.
  • In some embodiments, if the gesture interaction instruction is issued directly by the user, it can be implemented as a voice instruction for waking up the robot's gesture interaction function, such as "please watch my gestures" or "follow my gesture commands".
  • the gesture interaction instruction may be implemented as a gesture instruction issued by the user for waking up the gesture interaction function of the robot, and the gesture instruction may be customized by the user, which is not limited in this embodiment.
  • In other embodiments, the user can initiate, through the terminal device, a control operation for waking up the robot's gesture interaction function.
  • the gesture interaction instruction can be implemented as a control instruction sent by the terminal device for waking up the robot's gesture interaction function.
  • terminal devices can be implemented as mobile phones, tablet computers, smart watches, smart bracelets, smart speakers and other devices.
  • the terminal device may include an electronic display screen through which the user can initiate an operation to control the robot.
  • the electronic display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the electronic display screen includes a touch panel, the electronic display screen can be implemented as a touch screen that can receive input signals from the user to detect the user's control operation of the robot.
  • the terminal device may include a physical button or a voice input device for providing a robot control operation to the user, which will not be repeated here.
  • the terminal device and the robot are pre-bound, and the two can establish a communication relationship with each other through wired or wireless communication. Based on this, the user's operation of sending gesture interactive wake-up instructions to the robot through the terminal device can be implemented based on the communication message between the terminal device and the robot.
  • the wireless communication methods between the terminal device and the robot include short-distance methods such as Bluetooth, ZigBee, infrared, and WiFi (Wireless Fidelity), long-distance methods such as LoRa, and wireless communication based on a mobile network.
  • when the connection is made over a mobile network, the network standard of the mobile network can be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMax, and the like, which is not limited in this embodiment.
  • the posture data of the user is data obtained by collecting postures presented by the user, such as the posture of the user's head, the posture of the hands, and the posture of the legs.
  • the user's posture data can be obtained through a sensor component installed on the robot.
  • the user's posture data can be acquired through a posture sensor worn by the user.
  • the user's posture data can be acquired through a gyroscope, an inertial sensor, etc., worn on the user's arm, which is not limited in this embodiment.
  • the user's posture data may be realized by multiple sensors installed in the space where the user is located.
  • the surveillance camera installed in the space can be reused to take multi-angle shooting of the user, and obtain the user's posture data based on the result of the multi-angle shooting, which will not be repeated here.
  • the user can direct the robot to go to a specific work area to perform work tasks by posing different postures.
  • a home user can use his arm to point to a certain room to direct the sweeping robot to go to the pointed room to perform cleaning tasks.
  • For another example, a hotel foreman can turn his or her head to face a certain area, so as to direct a waiter robot to go to the indicated area to perform service tasks.
  • The target work area is the area that, as recognized from the user's posture data, the user instructs the robot to go to.
  • the target work area and the area to which the current position of the robot belongs are different areas.
  • the target work area and the current position of the robot do not belong to the same room, or the target work area and the current position of the robot are artificially divided into two different areas.
  • the robot can move across the room or across the area to the target work area to perform the set work tasks.
  • In this embodiment, the robot can respond to the posture-interaction wake-up instruction, obtain the user's posture data, determine the target work area according to that data, and, when the target work area and the area to which the robot's current position belongs are different areas, move to the target work area to execute the set work task. The robot thus performs mobile work based on the user's posture without being restricted by area division, which further improves the flexibility of controlling the robot.
  • Fig. 4a is a schematic flowchart of a robot control method provided by another exemplary embodiment of this application. As shown in Fig. 4a, the method includes:
  • Step 401: The robot responds to the posture-interaction wake-up instruction, and performs three-dimensional measurement on the user through the sensor component installed on the robot to obtain three-dimensional measurement data.
  • Step 402: The robot obtains the spatial coordinates corresponding to the user's gesture according to the three-dimensional measurement data.
  • Step 403: The robot determines the target work direction indicated by the user according to the spatial coordinates corresponding to the user's gesture.
  • Step 404: The robot determines, from the candidate work areas, a work area adapted to the target work direction as the target work area; the target work area and the area to which the current position of the robot belongs are different areas.
  • Step 405: The robot moves to the target work area to perform the set work task.
  • the three-dimensional measurement data includes: an image obtained by shooting a user and the distance between the user and the robot.
  • the specific method of obtaining the three-dimensional measurement data please refer to the record in the foregoing embodiment, which will not be repeated here.
  • step 402 according to the three-dimensional measurement data, the spatial coordinates corresponding to the user's gesture can be obtained.
  • image recognition can be performed to obtain the key points of the user's posture.
  • the method of identifying key points of a gesture from an image can be implemented based on a deep learning algorithm.
  • the image recognition model can be trained based on convolutional neural networks (CNN) or graph convolutional networks (GCN). The following takes CNN as an example.
  • Optionally, multiple postures of the user can be photographed to obtain a large number of images. The posture key points on the images are then annotated to obtain training samples, and the training samples are input into the CNN model for iterative training.
  • the posture key points marked on the sample can be used as the learning target of the model, and the model parameters in the CNN model can be adjusted continuously until the loss function converges to a certain range.
  • the image can be input to the CNN model, and the key points of the posture on the image can be obtained according to the output of the CNN model.
  • the key points of the posture may include the feature points on the image corresponding to the user's eyes, nose, shoulders, elbows, wrists, hips, knees, ankles and other key parts, as shown in Figure 4b. While recognizing the key points of the posture, the user on the image can also be distinguished from left to right.
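  • The patent does not fix a particular network architecture. As one common way to read key points out of a CNN, the sketch below (an assumption for illustration, not the patent's prescribed model) decodes one heatmap per key point by taking the arg-max of each output channel:

```python
import numpy as np

KEYPOINT_NAMES = ["eye", "nose", "shoulder", "elbow", "wrist",
                  "hip", "knee", "ankle"]  # illustrative subset of posture key points

def decode_heatmaps(heatmaps: np.ndarray, min_score: float = 0.3):
    """heatmaps: array of shape (num_keypoints, H, W) produced by the CNN.
    Returns {name: (x, y)} for every key point whose peak exceeds min_score."""
    keypoints = {}
    for name, hm in zip(KEYPOINT_NAMES, heatmaps):
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        if hm[y, x] >= min_score:
            keypoints[name] = (int(x), int(y))
    return keypoints
```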
  • In some scenarios, considering the convenience of posture interaction with the robot, the user may use hand gestures to interact with the robot.
  • the following will take gestures as an example to illustrate the technical solutions provided in the embodiments of the present application.
  • gestures include specific movements and postures displayed by the user when using his arms. After the gesture key points are recognized, the target key points used to characterize the user's gesture can be determined from the gesture key points.
  • the arm has high flexibility. When the user uses the arm, it can drive joints such as fingers, wrists, elbows, and shoulders to move together. Based on this, in some embodiments, when determining the target key point that characterizes the user's gesture, at least the key point corresponding to the user's elbow and the key point corresponding to the wrist may be determined, as shown in FIG. 4c. In other embodiments, three key points corresponding to the user's shoulder, elbow, and wrist can be determined, as shown in FIG. 4d.
  • In still other embodiments, to recognize the gesture more accurately, in addition to the key points corresponding to the shoulder, elbow, and wrist, the key points corresponding to the user's fingers can be further acquired, which is not illustrated here.
  • the distance between the target key point and the robot can be determined from the distance between the user and the robot according to the coordinates of the target key point on the image.
  • the key points of the target can be identified from the image captured by the binocular camera, and based on the binocular depth recovery technology, the target key point corresponding to the image captured by the binocular camera can be obtained Depth information, as the distance between the target key point and the robot.
  • the spatial coordinates corresponding to the user's gesture can be determined according to the coordinates of the target key points and the distance between the target key points and the robot. It should be understood that the two-dimensional coordinates of a target key point in the camera coordinate system can be obtained from the captured image, and the third coordinate of the target key point in the camera coordinate system can be obtained from the distance between the target key point and the robot. Based on these, the three-dimensional coordinates of the target key point in the camera coordinate system can be obtained.
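  • Combining the pixel coordinates of a target key point with its measured depth gives its three-dimensional position. A minimal pinhole-camera sketch (the intrinsics fx, fy, cx, cy are assumed to come from camera calibration; the numeric values are illustrative):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel (u, v) with depth z (distance along the optical axis)
    into 3D camera-frame coordinates using the pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Elbow and wrist key points with their depths give two 3D points on the arm.
elbow_3d = backproject(310, 240, 1.6, fx=700, fy=700, cx=320, cy=240)
wrist_3d = backproject(360, 250, 1.4, fx=700, fy=700, cx=320, cy=240)
```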
  • the work direction indicated by the user may be determined according to the spatial coordinates corresponding to the user's gesture.
  • a straight line may be fitted according to the spatial coordinates corresponding to the user's gesture to obtain a spatial straight line.
  • the direction in which the spatial straight line extends to the end of the user's gesture is used as the operating direction indicated by the user, as shown in FIGS. 4c and 4d.
  • If the key points taken from the image include the key point corresponding to the shoulder, the direction extending toward the end of the user's gesture refers to the direction from the shoulder to the elbow, from the shoulder to the wrist, or from the shoulder to the fingers.
  • If the key points taken from the image include the key point corresponding to the elbow, the direction extending toward the end of the user's gesture refers to the direction from the elbow to the wrist or from the elbow to the fingers, which will not be repeated here.
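  • With two (or three) key points available in 3D, the indicated work direction can be represented by the unit vector of the straight line extending toward the end of the gesture, for example from the elbow to the wrist. A small sketch building on the points computed above:

```python
import numpy as np

def gesture_direction(elbow_3d: np.ndarray, wrist_3d: np.ndarray) -> np.ndarray:
    """Unit vector pointing from the elbow toward the wrist, i.e. toward the
    end of the user's gesture."""
    d = wrist_3d - elbow_3d
    norm = np.linalg.norm(d)
    if norm < 1e-6:
        raise ValueError("elbow and wrist coincide; gesture direction is undefined")
    return d / norm
```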
  • the target work area and the area to which the current position of the robot belongs are different areas, which may include: there is a physical obstacle or a virtual obstacle between the target work area and the area to which the current position belongs.
  • the following will be exemplified in combination with different application scenarios.
  • the robot is implemented as a household sweeping robot
  • the target work area can be implemented as a room in the home, and there is a wall or door between the room and the room where the robot is currently located.
  • the robot is currently located in the living room, and the target work area is the bedroom.
  • the robot is implemented as a waiter robot used in a restaurant.
  • a virtual wall can be delineated between different service areas in the restaurant to generate a navigation map for the robot based on the virtual wall.
  • the virtual wall does not exist in the actual space, but it can exist on the navigation map of the robot, and the robot can perform mobile operations within their respective prescribed operating ranges according to the navigation map generated by the virtual wall.
  • the target work area can be realized as another area with a virtual wall between the area where the robot is currently located. For example, the robot is currently located in the dining area A, the target work area is located in the dining area B, and a virtual wall is drawn between the dining area A and the dining area B.
  • the candidate work area refers to all areas where the robot can go and perform work tasks.
  • the candidate work area includes all the rooms in a house.
  • candidate work areas include all dining areas provided by the restaurant.
  • the candidate operation area may include all the stores provided by the shopping mall, which will not be repeated here.
  • the target work direction is the direction in which the user instructs the robot to go to work. Therefore, after the target work direction is obtained, the work area that matches the target work direction can be determined from the candidate work areas as the target work area.
  • the working direction indicated by the user is expressed by the direction in which the spatial straight line extends to the end of the user's gesture. Based on this, the position of the intersection of the spatial straight line and the plane where the candidate work area is located can be calculated, and the target work area indicated by the user can be determined according to the position of the intersection.
  • the plane where the candidate work area is located can be used as a spatial plane, and the process of calculating the position of the intersection is transformed into the process of calculating the intersection of the spatial line and the spatial plane.
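  • Treating the gesture as a ray that starts at the wrist, and assuming the key points have been transformed into a frame whose z axis is vertical so that the floor is the plane z = 0 (a simplifying assumption; any known plane works the same way), the intersection can be computed directly:

```python
import numpy as np

def pointing_point_on_floor(origin: np.ndarray, direction: np.ndarray):
    """Intersection of the gesture ray with the floor plane z = 0.
    origin:    a 3D point on the gesture line (e.g. the wrist).
    direction: unit vector of the gesture line, pointing away from the user.
    Returns the (x, y, 0) intersection, or None if the ray never meets the floor."""
    if abs(direction[2]) < 1e-6:        # arm held parallel to the floor
        return None
    t = -origin[2] / direction[2]
    if t <= 0:                          # gesture points upward, away from the floor
        return None
    return origin + t * direction
```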
  • Embodiment 1: If the intersection position is within the known work area of the robot, the work area where the intersection point is located is taken as the target work area.
  • the known work area refers to the area already included in the robot navigation map.
  • Embodiment 2: If the intersection position is not within the robot's known work area, and the angle between the spatial straight line and the plane where the candidate work areas are located is greater than the set angle threshold, the work area that lies in the work direction indicated by the user and is closest to the robot's current position can be determined from the known work areas and taken as the target work area.
  • the angle between the space straight line and the plane where the candidate work area is located is as shown in Fig. 4c.
  • the set angle threshold can be set according to actual needs.
  • For example, the maximum gesture angle that can still cover the entire candidate work area may be calculated from the extent of the candidate work area and used as the angle threshold. If the angle between the spatial straight line and the plane where the candidate work areas are located is greater than the set angle threshold, the pointing angle of the user's gesture is considered inappropriate, for example the user's arm is raised too high or is even parallel to the ground.
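  • The angle test itself is straightforward once the gesture direction is expressed in a frame whose z axis is perpendicular to the floor (the same assumption as in the earlier sketches); the angle between a line and a plane is the complement of the angle between the line and the plane normal:

```python
import numpy as np

def angle_to_floor_deg(direction: np.ndarray) -> float:
    """Angle, in degrees, between the gesture line and the floor plane."""
    d = direction / np.linalg.norm(direction)
    return float(np.degrees(np.arcsin(abs(d[2]))))
```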
  • the spatial straight line can be projected on the navigation map of the robot to obtain the projected straight line. Then, the operation area on the navigation map that intersects with the projection line and is closest to the current position of the robot is taken as the target operation area.
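  • One way to pick the closest known area along the projected direction is a simple ray test against the areas stored on the navigation map. The sketch below models each known area as an axis-aligned rectangle (a simplification; real maps may store polygons or occupancy grids), and the caller can exclude the area the robot is currently in:

```python
import numpy as np

def nearest_area_along_direction(robot_xy, direction_xy, areas):
    """areas: {name: (xmin, ymin, xmax, ymax)} known work areas on the map.
    Returns the name of the closest area crossed by the projected gesture ray,
    or None if the ray hits no known area."""
    d = np.asarray(direction_xy, dtype=float)
    d /= np.linalg.norm(d)
    best_name, best_t = None, np.inf
    for name, (xmin, ymin, xmax, ymax) in areas.items():
        t_near, t_far = 0.0, np.inf           # slab test: ray vs rectangle
        for o, di, lo, hi in [(robot_xy[0], d[0], xmin, xmax),
                              (robot_xy[1], d[1], ymin, ymax)]:
            if abs(di) < 1e-9:
                if not (lo <= o <= hi):
                    t_near, t_far = np.inf, -np.inf
                    break
            else:
                t1, t2 = (lo - o) / di, (hi - o) / di
                t_near = max(t_near, min(t1, t2))
                t_far = min(t_far, max(t1, t2))
        if t_near <= t_far and t_near < best_t:
            best_name, best_t = name, t_near
    return best_name
```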
  • Embodiment 3: If the intersection position is not within the known work area of the robot, and the angle between the spatial straight line and the plane is less than or equal to the angle threshold, the target work area is searched for in the work direction indicated by the user, according to the intersection position.
  • If the angle between the spatial straight line and the plane is less than or equal to the angle threshold, it can be considered that the user's pointing angle is reasonable but a work area is missing from the robot's navigation map.
  • the user's gesture indicates that the target cleaning area is the kitchen, but there is no kitchen area on the robot's navigation map.
  • the target work area can be found in the work direction indicated by the user according to the position of the intersection point to complete the work task indicated by the user.
  • the robot may move in the working direction indicated by the user until it encounters a target obstacle, which may be a wall. After encountering the target obstacle, the robot can move along the edge of the target obstacle in the direction close to the intersection position until the entrance is detected.
  • the entrance is usually a place where the obstruction of an obstacle disappears, such as a door opened on a wall. If the work area to which the entry belongs is not in the known work area, the work area to which the entry belongs can be used as the target work area.
  • In some cases, the entrance detected by the robot is located on the robot's navigation map, and the work area to which the entrance belongs is part of the robot's known work area.
  • In this case, the robot can be considered to have not yet found the entrance of the target work area.
  • The robot can then enter the known work area through the detected entrance and continue to move along the edges of obstacles within that area, in the direction approaching the intersection position, until a new entrance is detected.
  • If the work area to which the new entrance belongs is not within the known work area, it can be taken as the target work area. In this way, the robot is able to go to areas not included in the navigation map to perform tasks, which further frees the user's hands.
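  • Embodiment 3 can be summarized as a simple control loop. The helper methods used below (move_toward, follow_obstacle_edge_until_entrance, enter, area_of, is_known_area) are hypothetical placeholders standing in for the robot's own navigation primitives; this is an illustrative sketch only:

```python
def search_unknown_target_area(robot, pointing_point, direction, nav_map):
    """Search in the indicated direction for a work area that is not yet on the
    navigation map (Embodiment 3)."""
    robot.move_toward(direction)   # move until a target obstacle (e.g. a wall) is reached
    while True:
        # follow the obstacle edge toward the pointing point until an entrance is found
        entrance = robot.follow_obstacle_edge_until_entrance(toward=pointing_point)
        area = nav_map.area_of(entrance)
        if not nav_map.is_known_area(area):
            return area            # an area missing from the map: the target work area
        robot.enter(entrance)      # known area: enter it and keep searching from there
```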
  • the navigation map of the robot is usually generated based on the historical movement trajectory of the robot, and the navigation map contains the known working area of the robot. For example, for a sweeping robot, when the user puts it at home for the first time, the sweeping robot can move and clean in a room accessible at home, and synchronously draw a navigation map according to the movement trajectory.
  • the robot can search for the room and clean it according to the method provided in Embodiment 3 above.
  • Optionally, after the robot finds the target work area in the work direction indicated by the user, it can perform the work task in the target work area, and can further update the navigation map corresponding to the known work area according to the trajectory formed while executing the task. In this way, exploration of unknown work areas and real-time updating of the navigation map are realized, which helps improve the efficiency of subsequent work tasks.
  • In step 405, when the robot moves to the target work area to perform the set work task, if the target work area is located within a known work area, as in Embodiment 1 and Embodiment 2, the robot can plan a route to the target work area according to the navigation map corresponding to the known work area, and move to the target work area along the planned route.
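  • The patent does not prescribe a specific planning algorithm; as one possible choice, a breadth-first search over an occupancy-grid navigation map (0 = free cell, 1 = obstacle) already yields a shortest 4-connected route:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid from start to goal.
    grid: list of lists with 0 = free cell, 1 = obstacle.
    Returns a list of (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```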
  • In this embodiment, after the robot obtains the user's three-dimensional measurement data, it can derive the target work direction indicated by the user from that data, and can determine, from the candidate work areas, the work area that matches the target work direction as the target work area.
  • the robot can move to the target work area to perform the set work task.
  • the robot realizes mobile operations based on the user's posture, and is not restricted by area division, which further improves the flexibility of the user to control the robot.
  • the execution subject of each step of the method provided in the foregoing embodiment may be the same device, or different devices may also be the execution subject of the method.
  • For example, the execution subject of steps 401 to 403 may be device A; for another example, the execution subject of steps 401 and 402 may be device A, and the execution subject of step 403 may be device B; and so on.
  • the robot provided in the foregoing embodiments is implemented as a sweeping robot.
  • the sweeping robot can execute the robot control method provided in the foregoing embodiments.
  • the user can wake up the gesture interaction function of the sweeping robot through voice commands, and control the sweeping robot to go to different rooms for cleaning through gestures and gestures.
  • the user can say to the sweeping robot: Please follow my instructions.
  • After the gesture interaction function of the sweeping robot is awakened, the user can be photographed by the binocular camera installed on the robot to obtain an image of the user.
  • Then, deep learning technology is used to recognize the human body in the image and identify the human body key points in the imaging result.
  • Next, the target key points corresponding to the gesture are selected, that is, the key point corresponding to the elbow and the key point corresponding to the wrist.
  • the depth information of the key points of the target is obtained from the images collected by the binocular camera. Based on the coordinates of the target key points on the image and the calculated depth information, the three-dimensional coordinates of the target key points can be obtained.
  • Based on the target key points, the pointing direction of the gesture is calculated, together with the position the gesture points to on the ground, that is, the intersection of the ground and the spatial straight line formed by the elbow key point and the wrist key point.
  • For ease of description, this intersection is referred to as the pointing point.
  • Case 1: the pointing point is located within the current navigation map area.
  • In this case, the pointing point is generally close to the user, and its location is unambiguous.
  • the robot can directly move to the room area to which the pointing point belongs to clean.
  • the pointing point is located in room 2 on the navigation map, and the cleaning robot can go to room 2 to perform cleaning tasks.
  • Case 2: the pointing point is located outside the current navigation map area.
  • In this case, the pointing point is generally far from the user. This situation may arise for two reasons:
  • First, the angle of the user's gesture is unreasonable, for example the gesture angle is too large or the arm is held horizontally, so that the pointing point has no intersection with the current navigation map or exceeds the maximum range the robot can reach. In this case the area to be cleaned can be considered relatively far away, as shown in FIG. 5c, and the sweeping robot can search across rooms for the room closest to its current area in the indicated direction and clean it.
  • For example, the pointing point may lie outside the maximum range that the robot can reach.
  • If room 3 is located in the indicated direction and is closest to the area where the sweeping robot is currently located, room 3 can be taken as the target cleaning area, and the sweeping robot goes to room 3 to perform the cleaning task.
  • Second, the angle of the user's gesture is reasonable, but the pointing point does not exist on the current navigation map, or lies outside the current navigation map yet within the maximum range the robot can reach. In this case, the navigation map can be considered incomplete, with rooms missing.
  • In this case, the sweeping robot can move to the edge of the nearest obstacle in the indicated direction and then search along that edge for an accessible door or entrance. After finding a door or entrance, the robot enters the area it leads to; if that area is on the current navigation map, the robot continues, within that area, to move toward the edge of the nearest obstacle in the indicated direction and searches for the next door or entrance, until it finds a door or entrance whose area is not on the navigation map.
  • For example, the sweeping robot finds the door of room 1, enters room 1, and, finding that room 1 is on the navigation map, continues to search within room 1 for accessible doors or entrances in the indicated direction.
  • Suppose the sweeping robot then finds that room 3 is not on the navigation map.
  • In that case, room 3 can be taken as the target cleaning area and cleaned.
  • While cleaning, the map of room 3 can be recorded according to the robot's walking trajectory, and the current navigation map can be updated accordingly.
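  • The decision flow described in this scenario (and sketched in FIG. 5a) can be condensed as follows. The helpers room_containing, nearest_room_in_direction, and search_unknown_room are hypothetical stand-ins for the steps above, not functions defined by the patent:

```python
def choose_cleaning_target(pointing_point, direction, nav_map, robot, angle_ok):
    """Condensed working logic of the sweeping-robot scenario (illustrative only)."""
    room = nav_map.room_containing(pointing_point) if pointing_point is not None else None
    if room is not None:
        return room                  # case 1: pointing point lies on the navigation map
    if not angle_ok:
        # case 2a: gesture angle unreasonable -> nearest known room in that direction
        return nav_map.nearest_room_in_direction(robot.position, direction)
    # case 2b: angle reasonable but the room is missing from the map -> explore for it
    return robot.search_unknown_room(direction, pointing_point)
```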
  • Through the above solution, the user can conveniently interact with the sweeping robot through gestures.
  • The sweeping robot can accurately reach the cleaning area indicated by the user without being restricted by obstacles (such as room walls), meeting personalized cleaning needs in the home and further freeing the user's hands.
  • the robot provided in the foregoing embodiments is implemented as an air purification robot.
  • the air purification robot can execute the robot control method provided in the foregoing embodiments.
  • the user can wake up the posture interaction function of the air purification robot through voice commands, and control the air purification robot to go to different rooms to perform air purification tasks through gestures.
  • For example, the user can say to the air purification robot: "Please watch my gestures."
  • After the posture interaction function of the air purification robot is awakened, the user can be photographed by the binocular camera installed on the robot to obtain an image of the user. Then, deep learning technology and binocular depth recovery technology are used to obtain, from the collected images, the three-dimensional coordinates of the target key points that characterize the user's gesture.
  • Based on the target key points, the pointing direction of the gesture is calculated, together with the position the gesture points to on the ground, that is, the intersection of the ground and the spatial straight line formed by the elbow key point and the wrist key point.
  • For ease of description, this intersection is referred to as the pointing point.
  • If the pointing point is located within the current navigation map area, the robot can determine a walking route according to the navigation map and move directly to the room to which the pointing point belongs to perform the air purification task.
  • If the pointing point is outside the current navigation map area, it can be further judged whether the angle of the user's gesture is reasonable. If the angle is unreasonable, the air purification robot can search across rooms for the room that lies in the indicated direction and is closest to its current area, and perform the air purification task there. If the angle is reasonable but the pointing point does not exist on the current navigation map, or lies outside the current navigation map yet within the maximum range the robot can reach, the navigation map can be considered incomplete, with rooms missing.
  • In that case, the air purification robot can move to the edge of the nearest obstacle in the indicated direction and search along the edge for an accessible door or entrance. After finding a door or entrance, it enters the corresponding area; if that area is on the current navigation map, it continues, within that area, to move toward the edge of the nearest obstacle in the indicated direction and searches for the next door or entrance, until it finds one whose area is not on the navigation map. At that point, the air purification robot takes the area to which that door or entrance belongs as the target purification area and starts the air purification task. While performing the task, the air purification robot can also record a map of the target purification area according to its walking trajectory and update the current navigation map accordingly.
  • the robot provided in the foregoing embodiments is implemented as an unmanned aerial vehicle.
  • the unmanned aerial vehicle can execute the robot control method provided in the foregoing embodiments.
  • the park contains multiple buildings.
  • the multiple buildings belong to different areas.
  • the user can wake up the attitude interaction function of the unmanned aerial vehicle through voice commands, and use gestures to control the unmanned aerial vehicle to go to different buildings to perform shooting tasks.
  • the user can say to the unmanned aerial vehicle: Please see my gesture.
  • After the posture interaction function of the unmanned aerial vehicle is awakened, the user can be photographed through the binocular camera installed on it to obtain the user's image.
  • deep learning technology and binocular-based depth recovery technology are used to obtain the three-dimensional coordinates of the target key points used to characterize the user's gesture from the captured image.
  • Based on the target key points, the pointing direction of the gesture is calculated, together with the position the gesture points to on the ground, that is, the intersection of the ground and the spatial straight line formed by the elbow key point and the wrist key point.
  • For ease of description, this intersection is referred to as the pointing point.
  • If the pointing point is located within the current navigation map area, the unmanned aerial vehicle can determine a flight route according to the navigation map and fly directly to the area to which the pointing point belongs to perform the shooting task.
  • If the pointing point is outside the current navigation map area, it can be further judged whether the angle of the user's gesture is reasonable. If the angle is unreasonable, the unmanned aerial vehicle can search across areas for the building that lies in the indicated direction and is closest to its current area, and perform the shooting task there. If the angle is reasonable but the pointing point does not exist on the current navigation map, or lies outside the current navigation map yet within the maximum range the unmanned aerial vehicle can reach inside the park, the navigation map of the park can be considered incomplete, with buildings missing.
  • the unmanned aerial vehicle can fly along the indicated direction until a new area corresponding to the indicated direction but not on the navigation map is found. At this time, the unmanned aerial vehicle can use the new area as the target shooting area and start to perform the shooting task. In the process of performing shooting tasks, the UAV can also draw a map of the target shooting area according to the location distribution and area of the target shooting area, and update the current navigation map accordingly.
  • embodiments of the present application also provide a computer-readable storage medium storing a computer program, and when the computer program is executed, the steps of the robot control method in the foregoing method embodiments can be implemented.
  • the embodiments of the present invention can be provided as a method, a system, or a computer program product. Therefore, the present invention may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-permanent memory in a computer-readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM).
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, Magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission media can be used to store information that can be accessed by computing devices. According to the definition in this article, computer-readable media does not include transitory media, such as modulated data signals and carrier waves.

Abstract

The embodiments of the present application provide a robot and a control method thereof. In the robot control method, the robot can respond to a posture-interaction wake-up instruction, obtain the user's posture data, determine a target work area according to the posture data, and, when the target work area and the area to which the robot's current position belongs are different areas, move to the target work area to execute a set work task. The robot thus performs mobile work based on the user's posture without being restricted by area division, which further improves the flexibility of controlling the robot.

Description

Robot and control method therefor
Cross-reference
This application references Chinese Patent Application No. 2020100435390, entitled "Robot and control method therefor" and filed on January 15, 2020, which is incorporated into this application by reference in its entirety.
Technical Field
This application relates to the technical field of smart devices, and in particular to a robot and a control method thereof.
Background
With the development of science and technology, intelligent robots are gradually entering people's daily lives and bringing more convenience, and users' demand for interaction with robots is growing ever stronger.
In the prior art, robots cannot adequately understand the control intention conveyed by a user's posture. A solution to this problem is therefore needed.
Summary
The present application provides, from multiple aspects, a robot and a control method thereof, so as to improve the flexibility of controlling the robot.
One aspect of the present application provides a robot control method, including: obtaining posture data of a user in response to a posture-interaction wake-up instruction; determining, according to the posture data, a target work area indicated by the user, the target work area and the area to which the current position of the robot belongs being different areas; and moving the robot to the target work area to perform a set work task.
Another aspect of the present application provides a robot, including: a robot body, and a sensor assembly, a controller, and a motion assembly installed on the robot body; the sensor assembly is configured to obtain the user's posture data in response to a user's work control instruction; the controller is configured to determine, according to the user's posture data, the target work area indicated by the user, and to control the motion assembly to move to the target work area to perform a work task.
In the embodiments of the present application, the robot can respond to a posture-interaction wake-up instruction, obtain the user's posture data, determine the target work area according to that posture data, and, when the target work area and the area to which the robot's current position belongs are different areas, move to the target work area to execute the set work task. The robot thus performs mobile work based on the user's posture without being restricted by area division, which further improves the flexibility of controlling the robot.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
In the drawings:
FIG. 1 is a schematic structural diagram of a robot provided by an exemplary embodiment of this application;
FIG. 2 is a schematic diagram of the principle of three-dimensional depth measurement provided by an exemplary embodiment of this application;
FIG. 3 is a schematic flowchart of a robot control method provided by an exemplary embodiment of this application;
FIG. 4a is a schematic flowchart of a robot control method provided by another exemplary embodiment of this application;
FIG. 4b is a schematic diagram of acquiring posture data and detecting key points according to an exemplary embodiment of this application;
FIGS. 4c-4d are schematic diagrams of determining the target work direction according to the spatial coordinates corresponding to a gesture, provided by an exemplary embodiment of this application;
FIG. 5a is a schematic diagram of the working logic of a sweeping robot provided by an application scenario embodiment of this application;
FIGS. 5b-5d are schematic diagrams of a sweeping robot performing cleaning tasks according to the user's posture, provided by an application scenario embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the technical solutions of the present application are described clearly and completely below with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the prior art, the way of specifying a robot's work area is rather limited. For example, for a sweeping robot with a cleaning function, the user usually needs to specify the area to be cleaned on the robot's navigation map provided by a terminal device, and the sweeping robot performs the cleaning task according to the cleaning area specified by the user on the navigation map. However, this approach is highly dependent on the terminal device. In addition, in some typical scenarios the robot's navigation map is incomplete, so the sweeping robot cannot perform cleaning tasks in areas not included in the navigation map, and its flexibility is poor.
In view of the above technical problems, some exemplary embodiments of the present application provide a robot and a robot control method. The technical solutions provided by the embodiments of the present application are described in detail below with reference to the drawings.
It should be noted that the same reference numerals denote the same objects in the following drawings and embodiments; therefore, once an object is defined in one drawing, it does not need to be further discussed in subsequent drawings.
FIG. 1 is a schematic structural diagram of a robot provided by an exemplary embodiment of this application. As shown in FIG. 1, the robot includes a main body 10, and a sensor assembly 20, a controller 30, and a motion assembly 40 installed on the main body 10.
In this embodiment, a robot refers to an electronic device that can move autonomously and can be controlled intelligently. In some scenarios, the robot is implemented as a robot that can perform sweeping and cleaning tasks, for example a sweeping robot that cleans the floor, a scrubbing robot that cleans floors, walls, ceilings, glass, or motor vehicles, an air-purification robot that purifies the air, and so on. In FIG. 1, taking a sweeping robot as an example, the structure of the robot provided by the embodiments of the present application is schematically illustrated; this does not mean that the robot provided by the present application can only be implemented as a sweeping robot.
In other scenarios, the robot may be implemented as a warehouse logistics robot, such as a freight robot or a goods delivery robot. In yet other scenarios, the robot may be implemented as a robot waiter, such as a welcoming robot in a hotel, a serving robot, or a shopping guide robot in a mall or store, which are not illustrated here.
It is worth noting that the autonomous movement function of the above robot may include the ability to move on the ground, and may also include the ability to move autonomously in the air. If it includes the ability to fly and move in the air, the robot can be implemented as an unmanned aerial vehicle, which will not be elaborated here.
Of course, the robots listed above are only for illustration, and this embodiment includes but is not limited to them.
In the robot, the sensor assembly 20 is mainly used to obtain the user's posture data in response to a work control instruction from the user. A posture refers to the pose presented by the user, such as a head pose, a hand pose, or a leg pose. In the various application scenarios of the embodiments of the present application, the user can interact with the robot through postures, and the user's posture data is the data collected by the sensor assembly 20 on the user's posture.
The sensor assembly 20 can be implemented by any one or more sensors capable of collecting the user's posture data, which is not limited in this embodiment. In some optional embodiments, the sensor assembly 20 may be implemented as a three-dimensional depth sensor for performing three-dimensional measurement on the user to obtain three-dimensional measurement data. The three-dimensional measurement data includes an image obtained by photographing the user and the distance between the user and the robot. The image may be an RGB (red-green-blue three-channel) image or a grayscale image, and the distance between the user and the robot is also called the depth of the measured object.
The following describes, in combination with optional implementations of the three-dimensional depth sensor, how the three-dimensional depth sensor obtains the RGB image of the measured object and perceives the depth of the measured object.
In some embodiments, the three-dimensional depth sensor is implemented based on a binocular camera and obtains the three-dimensional measurement data based on binocular depth recovery technology. In this scheme, two monocular cameras can be fixed on one module, with the angle and distance between them fixed, to form a stable binocular structure.
In this scheme, the binocular camera photographs the measured object to obtain its RGB image. At the same time, the distance between the measured object and the cameras can be obtained based on triangulation and the parallax principle. When the two cameras point at the measured object simultaneously, an image of the object appears in each camera. Because there is a certain distance between the two cameras, the same point on the measured object is imaged at different positions in the two cameras. Based on this, two corresponding feature points can be extracted from the images captured by the two cameras, and the difference between their positions can be calculated. From this disparity, the distance between the two cameras, and the focal length of the cameras, the distance from the measured object to the baseline of the binocular camera can be calculated by triangulation. This is further explained below with reference to FIG. 2.
As shown in FIG. 2, the distance between the binocular cameras is the baseline distance $B$, the camera focal length is $f$, and the two cameras capture the same feature point $P(x_c, y_c, z_c)$ of the space object at the same moment, where $(x_c, y_c, z_c)$ are the coordinates of the feature point $P$ in the camera coordinate system $xyz$. After the feature point $P(x_c, y_c, z_c)$ is imaged by the binocular cameras, the corresponding image coordinates are $p_{left}(x_{left}, y_{left})$ and $p_{right}(x_{right}, y_{right})$, respectively.
The images captured by the binocular cameras lie in the same plane, so the two image points of the feature point $P$ have the same $y$ coordinate, that is, $y_{left} = y_{right}$. The triangle geometry then gives Formula 1:

$$x_{left} = f\,\frac{x_c}{z_c}, \qquad x_{right} = f\,\frac{x_c - B}{z_c}, \qquad y_{left} = y_{right} = f\,\frac{y_c}{z_c} \tag{1}$$

The disparity between the two image points of the feature point $P$ is $\Delta = x_{left} - x_{right}$. From this, the three-dimensional coordinates $(x_c, y_c, z_c)$ of the feature point $P$ in the camera coordinate system can be calculated as:

$$x_c = \frac{B\,x_{left}}{\Delta}, \qquad y_c = \frac{B\,y_{left}}{\Delta}, \qquad z_c = \frac{B\,f}{\Delta}$$

Optionally, the binocular cameras may each be infrared cameras, so that, with the illumination of an infrared lamp, the depth information of the measured object can be captured even in weak light or in the dark.
In other embodiments, the three-dimensional depth sensor may be implemented based on a projector capable of projecting structured light and a camera. The camera photographs the measured object to obtain its RGB image. The projector projects structured light with a known pattern onto the measured object, and the camera captures the pattern formed by the reflected structured light. Next, the projected structured-light pattern can be compared with the reflected pattern, and based on the comparison result and the fixed distance between the projector and the camera, the depth information of the measured object can be calculated by triangulation.
Optionally, the structured light projected by the projector may be speckle structured light or coded structured light, which is not limited in this embodiment.
In yet other embodiments, the three-dimensional depth sensor may be implemented based on a camera together with an electromagnetic-wave sensor such as a lidar or millimeter-wave radar. The camera photographs the measured object to obtain its RGB image. The electromagnetic wave signal emitted by the lidar or millimeter-wave radar returns after reaching the measured object; the round-trip time of the signal is measured, and the distance between the measured object and the sensor is calculated from this time and the propagation speed of the electromagnetic wave.
Of course, the implementations of the three-dimensional depth sensor listed above are only for illustration, and this embodiment includes but is not limited to them.
其中,控制器30用于根据传感器组件20采集到的用户的姿态数据,确定用户指示的目标作业位置作业区域,并控制运动组件40移动至目标作业区域,以执行作业任务。
可选地,控制器30可以使用各种应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、微中控元件、微处理器、微控制单元(MCU)或其他电子元件实现,本实施例不做限制。
其中,运动组件40指的是机器人上安装的可供机器人自主移动的器件,例如,机器人的移动底盘、滚轮等等,本实施例不做限制。
需要说明的是，除上述实施例记载的组件之外，本申请实施例提供的机器人还可包括安装于本体10上的存储器。存储器，用于存储计算机程序，并可被配置为存储其它各种数据以支持在机器人上的操作。这些数据的示例包括用于在机器人上操作的任何应用程序或方法的指令。
存储器可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
在一些实施例中,机器人还可包括安装于本体10上的显示组件。显示组件可包括屏幕,其屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。
在一些实施例中,机器人还可包括安装于本体10上的电源组件,该电源组件可为机器人上的各种组件提供电力。电源组件可以包括电源管理系统,一个或多个电源,及其他为机器人生成、管理和分配电力相关联的组件,不再赘述。
基于前述实施例提供的机器人,本申请实施例还提供一种机器人控制方法,以下将结合附图进行具体说明。
图3为本申请一示例性实施例提供的机器人控制方法的流程示意图,如图3所示,该方法包括:
步骤301、机器人响应姿态交互唤醒指令,获取用户的姿态数据。
步骤302、机器人根据该姿态数据,确定用户指示的目标作业区域;该目标作业区域与机器人的当前位置所属的区域为不同区域。
步骤303、机器人移动至目标作业区域,以执行设定的作业任务。
其中，姿态交互唤醒指令(也称姿态交互指令)指的是用于唤醒机器人的姿态交互功能的指令。姿态交互功能，指的是：机器人能够捕获用户的姿态、识别用户的姿态对应的交互内容，并根据识别到的交互内容，执行对应的作业任务的功能。在本实施例中，姿态交互唤醒指令，可以由用户直接发出，也可以由用户通过终端设备发出。
在一些实施例中,若姿态交互指令由用户直接发出,那么该姿态交互指令可实现为:用户发出的用于唤醒机器人的姿态交互功能的语音指令,例如“请看我手势”、“听我手势指挥”等语音指令。或者,该姿态交互指令可实现为:用户发出的用于唤醒机器人的姿态交互功能的手势指令,该手势指令可由用户自定义,本实施例不做限制。
在另一些实施例中,用户可通过终端设备对机器人发起用于唤醒机器人的姿态交互功能的控制操作。基于此,该姿态交互指令可实现为:终端设备发送的用于唤醒机器人的姿态交互功能的控制指令。其中,终端设备可实现为手机、平板电脑、智能手表、智能手环、智能音箱等设备。
通常,终端设备可包括一电子显示屏,用户可通过该显示屏发起控制机器人的操作。其中,电子显示屏可包括液晶显示器(LCD)和触摸面板(TP)。如果电子显示屏包括触摸面板,电子显示屏可以被实现为触摸屏,该触摸屏可接收来自用户的输入信号,以检测用户对机器人的控制操作。当然,在其他可选的实施例中,终端设备可包括用于向用户提供机器人控制操作的物理按键或者语音输入装置等,此处不赘述。
其中，终端设备与机器人预先绑定，二者可通过有线或者无线通信方式建立通信关系。基于此，用户通过终端设备向机器人发送姿态交互唤醒指令的操作，可基于终端设备与机器人之间的通信消息实现。
其中，终端设备与机器人之间的无线通信方式包括蓝牙、ZigBee、红外线、WiFi(Wireless-Fidelity，无线保真技术)等短距离通信方式，也包括LoRa等远距离无线通信方式，还可包括基于移动网络的无线通信方式。其中，当通过移动网络通信连接时，移动网络的网络制式可以为2G(GSM)、2.5G(GPRS)、3G(WCDMA、TD-SCDMA、CDMA2000、UMTS)、4G(LTE)、4G+(LTE+)、5G、WiMax等中的任意一种，本实施例不做限制。
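作为一种可能的实现思路，下述Python片段示意终端设备通过TCP套接字向机器人发送JSON格式的姿态交互唤醒指令。其中的消息字段、端口号均为假设值，并非本申请限定的通信协议或某个真实设备的接口：

```python
import json
import socket

def send_wake_command(robot_ip: str, robot_port: int = 8888) -> None:
    """向机器人发送用于唤醒姿态交互功能的控制指令(字段名、端口号均为示意)。"""
    command = {"type": "gesture_interaction", "action": "wake_up"}
    with socket.create_connection((robot_ip, robot_port), timeout=3.0) as conn:
        conn.sendall(json.dumps(command).encode("utf-8"))

# 用法示例：send_wake_command("192.168.1.20")
```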
其中,用户的姿态数据,是对用户呈现的姿势,例如用户头部的姿势、手部的姿势、腿部的姿势进行采集得到的数据。
在一些可选的实施方式中,用户的姿态数据可通过机器人上安装的传感器组件获取,具体可参考前述实施例的记载,此处不赘述。
在另一些可选的实施方式,用户的姿态数据可通过用户穿戴的姿态传感器获取。例如,可通过穿戴在用户手臂上的陀螺仪、惯性传感器等获取用户的姿态数据,本实施例不做限制。
在又一些可选的实施方式中，用户的姿态数据可通过用户所在空间内安装的多个传感器获取。例如，机器人应用在特定空间内时，可复用该空间内安装的监控摄像头，对用户进行多角度拍摄，并基于多角度拍摄的结果获取用户的姿态数据，此处不赘述。
在本申请实施例提供的应用场景中，用户可通过摆出不同的姿势来指挥机器人前往特定的作业区域执行作业任务。例如，家庭用户可采用手臂摆出指向某一房间的姿势，以指挥扫地机器人前往被指的房间执行清扫任务。又例如，酒店领班用户可扭转头部，摆出面向某一区域的姿势，以指挥服务员机器人前往被指的区域执行服务任务。
目标作业区域,即为根据用户的姿态数据识别到的用户指示机器人前往的区域。其中,目标作业区域与机器人的当前位置所属的区域为不同区域。例如,目标作业区域与机器人的当前位置不属于同一房间,或者目标作业区域与机器人的当前位置被人为划分成了两个不同的区域。在确定目标作业区域后,机器人可跨房间或者跨区域地移动至目标作业区域,执行设定的作业任务。
在本实施例中,机器人可响应姿态交互唤醒指令,获取用户的姿态数据,根据用户的姿态数据确定目标作业区域,并可在目标作业区域与机器人的当前位置所属的区域为不同区域的情况下,移动至目标作业区域执行设定的作业任务。进而,机器人实现了基于用户姿态进行移动作业,且不受区域划分的限制,进一步提升了机器人的控制灵活度。
图4a为本申请另一示例性实施例提供的机器人控制方法的流程示意图,如图4a所示,该方法包括:
步骤401、机器人响应姿态交互唤醒指令,通过安装于机器人上的传感器组件,对用户进行三维测量,得到三维测量数据。
步骤402、机器人根据该三维测量数据,获取用户的手势对应的空间坐标。
步骤403、机器人根据用户的手势对应的空间坐标,确定用户指示的目标作业方向。
步骤404、机器人从候选作业区域中,确定与目标作业方向适配的作业区域,作为目标作业区域,目标作业区域与机器人的当前位置所属的区域为不同区域。
步骤405、机器人移动至目标作业区域,以执行设定的作业任务。
在步骤401中,可选地,三维测量数据包括:拍摄用户得到的图像以及用户与所述机器人之间的距离。获取三维测量数据的具体方法可参考前述实施例的记载,此处不赘述。
在步骤402中,根据三维测量数据,可获取用户的手势对应的空间坐标。
可选地,针对三维测量数据中的图像,可进行图像识别,得到用户的姿态关键点。可选地,从图像上识别姿态关键点的方法,可基于深度学习算法实现。例如,可基于卷积神经网络(Convolutional Neural Networks,CNN),或者,图卷积神经网络(Graph Convolutional Network,GCN)训练图像识别模型。以下将以CNN为例进行示例性说明。
可选地,可对用户的多种姿态进行拍摄,得到大量图像。接着,对图像上的姿态关键点进行标注,得到训练样本,并将训练样本输入CNN模型进行迭代训练。在训练的过程中,可将样本上标注的姿态关键点作为模型的学习目标,不断地调整CNN模型中的模型参数,直至损失函数收敛至一定范围。
响应姿态交互唤醒指令,机器人拍摄到用户的图像后,可将该图像输入CNN模型,并根据CNN模型的输出,获取图像上的姿态关键点。
其中,姿态关键点可包括图像上与用户的眼睛、鼻子、肩膀、手肘、手腕、臀部、膝盖、脚踝等关键部位对应的特征点,如图4b所示。识别姿态关键点的同时,还可对图像上的用户进行左右区别。
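例如，结合上述左右区分的结果，可从识别到的姿态关键点中取出某一侧手臂的肩、肘、腕关键点，下述Python片段为该选取过程的一个示意（关键点名称与模型输出格式均为假设，并非特定姿态识别模型的真实接口）：

```python
KEYPOINT_SIDES = ("left", "right")

def select_gesture_keypoints(keypoints: dict, side: str = "right", min_score: float = 0.3):
    """从全身姿态关键点中选取某一侧手臂的肩、肘、腕关键点，供后续手势方向计算使用。

    keypoints：假设的模型输出格式，{关键点名称: (x像素坐标, y像素坐标, 置信度)}。
    """
    if side not in KEYPOINT_SIDES:
        raise ValueError("side 只能为 'left' 或 'right'")
    wanted = [f"{side}_shoulder", f"{side}_elbow", f"{side}_wrist"]
    selected = {}
    for name in wanted:
        x, y, score = keypoints[name]
        if score < min_score:       # 置信度过低时认为该侧手臂关键点不可用
            return None
        selected[name] = (x, y)
    return selected
```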
在一些场景下,考虑到与机器人进行姿态交互的便利性,用户可采用手势与机器人进行手势交互。以下将以手势为例,对本申请实施例提供的技术方案进行示例性说明。
其中,手势,包括用户在运用手臂时,所展现的具体动作与体位。识别到姿态关键点后,可从姿态关键点中,确定用于表征用户的手势的目标关键点。手臂具有较高的灵活性,用户运用手臂时,可带动手指、手腕、手肘、肩膀等关节共同活动。基于此,在一些实施例中,在确定表征用户的手势的目标关键点时,可至少确定与用户的手肘对应的关键点和手腕对应的关键点,如图4c所示。在另一些实施例中,可确定与用户的肩膀、手肘以及手腕对应的三个关键点,如图4d所示。在又一些实施例中,为更准确地识别手势,在获取到肩膀对应的关键点、手肘对应的关键点以及手腕对应的关键点的同时,还可进一步获取用户的手指对应的关键点,不再进行图示。针对三维测量数据中的用户与机器人之间的距离,可根据目标关键点在图像上的坐标,从用户与机器人之间的距离中,确定目标关键点与机器人之间的距离。例如,传感器组件实现为双目摄像头时,可从双目摄像头拍摄到的图像上识别目标关键点,并基于双目深度恢复技术,从双目摄像头拍摄到的图像上,获取目标关键点对应的深度信息,作为目标关键点与机器人之间的距离。
接下来,可根据目标关键点的坐标以及目标关键点与机器人之间的距离,确定用户的手势对应的空间坐标。应当理解,基于拍摄到的图像可获取到目标关键点在相机坐标系中的二维坐标,基于目标关键点与机器人之间的距离,可获取目标关键点在相机坐标系中的第三个维度的坐标。基于上述三个坐标,可得到目标关键点在相机坐标系中的三维坐标。
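作为一种可能的做法，下述Python片段按针孔相机模型进行反投影，将目标关键点的像素坐标与深度组合为相机坐标系下的三维坐标。此处将目标关键点与机器人之间的距离近似视为相机坐标系下的深度，相机内参均为假设输入，仅供示意：

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """针孔相机模型下的反投影：由像素坐标(u, v)和深度depth得到相机坐标系三维点。

    fx、fy为焦距(像素)，cx、cy为主点坐标(像素)，均来自相机标定结果(此处为假设输入)。
    """
    x_c = (u - cx) * depth / fx
    y_c = (v - cy) * depth / fy
    z_c = depth
    return np.array([x_c, y_c, z_c])
```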
接下来，对目标关键点在相机坐标系中的三维坐标进行坐标系转换，将相机坐标系转换为世界坐标系，即可得到用户的手势在世界坐标系下对应的空间坐标。
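该坐标系转换可用一个旋转矩阵R和平移向量t描述，下述片段为相应计算的最小示意（R、t需由外参标定或机器人的定位信息得到，此处为假设输入）：

```python
import numpy as np

def camera_to_world(point_cam, rotation, translation):
    """将相机坐标系下的三维点变换到世界坐标系：P_w = R · P_c + t。

    rotation为3x3旋转矩阵，translation为长度为3的平移向量，
    二者可由相机外参标定或机器人的定位信息得到(此处为假设输入)。
    """
    return np.asarray(rotation) @ np.asarray(point_cam) + np.asarray(translation)
```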
在步骤403中,可根据用户的手势对应的空间坐标,确定用户指示的作业方向。
可选地，在本步骤中，可根据用户的手势对应的空间坐标进行直线拟合，得到一条空间直线。接着，将该空间直线向用户的手势的末端延伸的方向，作为用户指示的作业方向，如图4c以及图4d所示。其中，若从图像上获取到的关键点包括肩膀对应的关键点，则向用户的手势的末端延伸的方向，指的是：肩膀向手肘处延伸的方向，或者肩膀向手腕处延伸的方向，或者肩膀向手指延伸的方向；若从图像上获取到的关键点包括手肘对应的关键点，则向用户的手势的末端延伸的方向，指的是：手肘向手腕处延伸的方向，或者手肘向手指延伸的方向，不再赘述。
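以目标关键点为手肘和手腕两点的情况为例，下述示意代码由两点的空间坐标得到手势指示方向；当关键点多于两点时，可改用最小二乘直线拟合，此处仅给出最简情形的假设性示例：

```python
import numpy as np

def gesture_direction(elbow_w, wrist_w):
    """由世界坐标系下的手肘、手腕两点计算手势指示方向(由手肘指向手腕并向末端延伸)。"""
    direction = np.asarray(wrist_w, dtype=float) - np.asarray(elbow_w, dtype=float)
    norm = np.linalg.norm(direction)
    if norm < 1e-6:
        raise ValueError("手肘与手腕坐标过于接近，无法确定指示方向")
    return direction / norm   # 返回单位方向向量
```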
在步骤404中,目标作业区域与机器人的当前位置所属的区域为不同区域,可包括:目标作业区域与该当前位置所属的区域之间存在实体障碍物,或者存在虚拟障碍物。以下将结合不同的应用场景进行示例性说明。
在一种典型的应用场景中,机器人实现为家庭用的扫地机器人,目标作业区域可实现为家庭中的某一房间,该房间与机器人当前所在的房间之间存在墙体或者门体。例如,机器人当前位于客厅,目标作业区域为卧室。
在另一种典型的应用场景中,机器人实现为餐厅使用的服务员机器人。餐厅较大时,可为不同机器人划定不同的服务区域,以保证有条不紊地向顾客提供服务。在这种场景下,可在餐厅内的不同服务区域之间划定虚拟墙,以根据虚拟墙生成可供机器人使用的导航地图。该虚拟墙在实际空间中是不存在的,但可存在于机器人的导航地图上,机器人可根据虚拟墙生成的导航地图在各自规定的作业范围内进行移动作业。在这种场景下,目标作业区域可实现为与机器人当前所在区域之间存在虚拟墙的另一区域。例如,机器人当前位于就餐区A,目标作业区域位于就餐区B,就餐区A和就餐区B之间划有虚拟墙。
其中,候选作业区域,指的是机器人可前往并且可执行作业任务的所有区域。例如,在家庭环境中,候选作业区域包括一套房子内所有的房间。例如,在餐厅环境内,候选作业区域包括餐厅提供的所有就餐区域。又例如,在商场环境内,候选作业区域可包括商场提供的所有店铺,不再赘述。
目标作业方向是用户指示机器人前往作业的方向。因此,获取到目标作业方向后,可从候选作业区域中,确定与目标作业方向适配的作业区域,作为目标作业区域。
其中,用户指示的作业方向,通过空间直线向用户的手势的末端延伸的方向来表达。基于此,可计算空间直线与候选作业区域所在的平面的交点位置,并根据该交点位置,确定用户指示的目标作业区域。
计算空间直线与候选作业区域所在的平面的交点位置时,可将候选作业区域所在平面作为一空间平面,将计算交点位置的过程,转化为计算空间直线与空间平面的交点的过程。
通常,机器人与候选作业区域位于同一平面,若根据机器人所在空间建立三维坐标系XYZ,则可将候选作业区域所在平面视为Z=0的平面。
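基于上述设定，空间直线与Z=0平面的交点可按直线的参数方程求解，下述Python片段为一个最小示例（假设世界坐标系的Z轴垂直于地面，函数与变量名均为示意）：

```python
import numpy as np

def intersect_with_ground(point_on_line, direction, eps=1e-6):
    """计算过point_on_line、方向为direction的空间直线与Z=0平面(地面)的交点。

    直线参数方程为 P(t) = point_on_line + t * direction，令 P_z(t) = 0 解出 t。
    当方向近似与地面平行(Z分量接近0)，或交点位于手势延伸方向的反向时，返回None。
    """
    point_on_line = np.asarray(point_on_line, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if abs(direction[2]) < eps:
        return None
    t = -point_on_line[2] / direction[2]
    if t < 0:
        return None
    return point_on_line + t * direction
```

若需与设定的角度阈值比较，空间直线与该平面的夹角可按arcsin(|direction[2]|)计算（direction为单位方向向量），这只是一种可能的计算方式。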
交点位置不同时,确定用户指示的目标作业区域的实施方式也不同。以下将进行示例性说明:
实施方式1:若该交点位置在机器人的已知作业区域内,则将交点位置所在的作业区域作为目标作业区域。其中,已知作业区域,指的是在机器人导航地图中已经包含的区域。
实施方式2:若该交点位置不在机器人的已知作业区域内,且该空间直线与候选作业区域所在平面的夹角大于设定的角度阈值,则可从已知作业区域内,确定位于用户指示的作业方向上、且与机器人的当前位置最近的作业区域,作为目标作业区域。
其中，空间直线与候选作业区域所在平面的夹角α如图4c所示。其中，该设定角度阈值可根据实际需求进行设置。可选地，可根据候选区域的面积，计算能够覆盖整个候选区域的最大手势角度，并将该最大手势角度作为角度阈值。若空间直线与候选作业区域所在平面的夹角大于设定的角度阈值，则认为用户的手势的指向角度不合适，例如用户的手臂抬得过高甚至手臂与地面平行。
其中,从已知作业区域内,确定位于用户指示的作业方向上、且与机器人的当前位置最近的作业区域时,可将该空间直线在机器人的导航地图上进行投影,得到投影直线。接着,将导航地图上,与投影直线相交,且与机器人的当前位置最近的作业区域,作为目标作业区域。
实施方式3:若该交点位置不在机器人的已知作业区域内,且该空间直线与平面的夹角小于或者等于角度阈值,则根据交点位置,在用户指示的作业方向上寻找所述目标作业区域。
该空间直线与平面的夹角小于或者等于角度阈值时,可认为用户的指向角度合理,但是机器人的导航地图上存在缺失的作业区域。例如,以扫地机器人为例,用户的手势指示目标清扫区域为厨房,但是机器人的导航地图上没有厨房这一区域。
在这种情况下,可根据交点位置,在用户指示的作业方向上寻找目标作业区域,以完成用户指示的作业任务。
可选地,机器人可向用户指示的作业方向移动,直至遇到目标障碍物,该目标障碍物可以是墙体。遇到目标障碍物后,机器人可沿着目标障碍物的边缘向靠近该交点位置的方向进行移动,直至探测到入口。该入口通常是障碍物的阻碍消失的地方,例如墙体上开设的门。若该入口所属的作业区域不在已知作业区域内,则可将该入口所属的作业区域,作为目标作业区域。
需要说明的是，在一些情况下，机器人探测到的入口位于机器人的导航地图上，入口所属的作业区域，为机器人的已知作业区域中的一部分。在这种情况下，机器人可认为仍未探测到目标作业区域的入口。此时，可从探测到的入口进入已知作业区域，并沿着已知作业区域内的障碍物的边缘继续向靠近交点位置的方向进行移动，直至探测到新的入口。若该新的入口所属的作业区域不在已知作业区域内，则可将该新的入口所属的作业区域，作为目标作业区域。基于此，机器人实现了前往导航地图上未包括的区域进行作业任务的功能，进一步解放了用户的双手。
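上述"朝指示方向移动、沿障碍物边缘移动、探测入口、判断入口所属区域是否已知"的搜索过程，可用如下伪代码风格的Python示意概括。其中robot与known_map的各接口均为假设的占位函数，仅用于说明流程，并非真实机器人系统的接口：

```python
def search_target_area(robot, target_direction, intersection_point, known_map):
    """沿用户指示的作业方向寻找导航地图未包含的目标作业区域(流程示意)。

    robot与known_map的各接口(move_until_obstacle、follow_edge_until_entrance等)
    均为假设的占位函数，仅用于表达上述搜索逻辑，并非某个真实机器人SDK的接口。
    """
    # 1. 向用户指示的作业方向移动，直至遇到目标障碍物(例如墙体)
    robot.move_until_obstacle(target_direction)
    while True:
        # 2. 沿障碍物边缘向靠近交点位置的方向移动，直至探测到入口(例如墙体上开设的门)
        entrance = robot.follow_edge_until_entrance(toward=intersection_point)
        # 3. 入口所属的作业区域不在已知作业区域内时，将其作为目标作业区域
        if not known_map.contains(entrance):
            return robot.area_behind(entrance)
        # 4. 否则进入该已知区域，继续向靠近交点位置的方向搜索新的入口
        robot.enter_through(entrance)
```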
值得说明的是,机器人的导航地图,通常是根据机器人历史运动轨迹生成的,该导航地图上包含机器人的已知作业区域。例如,针对扫地机器人而言,用户首次将其放在家中使用时,扫地机器人可在家里可进入的房间进行移动清扫,并根据移动轨迹同步绘制导航地图。
若首次清扫时,家中某个房间的门恰好处于关闭状态,导致机器人未前往该房间,那么生成的导航地图上将不包含该房间对应的地图区域。下一次使用机器人时,该房间的门由关闭变为打开,但机器人此时未获知清扫环境产生变化,因而仍旧不会及时去清扫该区域。若用户通过手势指示扫地机器人清扫该房间,那么机器人可按照上述实施方式3提供的方法,寻找该房间,并进行清扫。
值得说明的是,在实施方式3中,机器人在用户指示的作业方向上寻找到目标作业区域后,可在目标作业区域执行作业任务,并可进一步根据执行作业任务形成的轨迹,更新已知作业区域对应的导航地图。基于此,实现了未知作业区域的探索以及导航地图的实时更新,更有利于提升后续执行作业任务的效率。
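导航地图的更新可以理解为把执行作业任务时经过的位置写入栅格地图，下述片段给出一个基于二维栅格的最小示意，栅格分辨率、标记方式等均为假设的简化处理：

```python
import numpy as np

def update_grid_map(grid, trajectory, resolution=0.05, origin=(0.0, 0.0)):
    """根据机器人在目标作业区域内的行走轨迹更新二维栅格地图。

    grid：二维数组，0表示未探索，1表示已探索(已作业)；trajectory：世界坐标系下的(x, y)轨迹点序列；
    resolution：每个栅格的边长(米)；origin：栅格地图原点对应的世界坐标。均为假设的参数设置。
    """
    for x, y in trajectory:
        col = int((x - origin[0]) / resolution)
        row = int((y - origin[1]) / resolution)
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] = 1   # 将轨迹经过的栅格标记为已探索区域
    return grid
```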
在步骤405中,可选地,机器人移动至目标作业区域,以执行设定的作业任务时,若目标作业区域位于已知作业区域中,如实施方式1以及实施方式2,则机器人可根据已知作业区域对应的导航地图,规划去往目标作业区域的路线,并根据规划得到的去往目标作业区域的路线,移动至目标作业区域。
在本实施例中，机器人获取用户的三维测量数据后，可根据该三维测量数据确定用户指示的目标作业方向，并可从候选作业区域中，确定与目标作业方向适配的作业区域作为目标作业区域。在目标作业区域与机器人的当前位置所属的区域为不同区域的情况下，机器人可移动至目标作业区域执行设定的作业任务。进而，机器人实现了基于用户姿态进行移动作业，且不受区域划分的限制，进一步提升了用户对机器人进行控制的灵活度。
需要说明的是，上述实施例所提供方法的各步骤的执行主体均可以是同一设备，或者，该方法也可由不同设备作为执行主体。比如，步骤401至步骤402的执行主体可以为设备A；又比如，步骤401和402的执行主体可以为设备A，步骤403的执行主体可以为设备B；等等。
另外,在上述实施例及附图中的描述的一些流程中,包含了按照特定顺序出现的多个操作,但是应该清楚了解,这些操作可以不按照其在本文中出现的顺序来执行或并行执行,操作的序号如401、402等,仅仅是用于区分开各个不同的操作,序号本身不代表任何的执行顺序。
需要说明的是,本文中的“第一”、“第二”等描述,是用于区分不同的消息、设备、模块等,不代表先后顺序,也不限定“第一”和“第二”是不同的类型。
以下结合图5a-图5d,以一个具体的应用场景,对本申请实施例提供的机器人控制方法进行进一步说明。
在一种典型的应用场景中,前述各实施例提供的机器人实现为扫地机器人。该扫地机器人可执行前述各实施例提供的机器人控制方法。
在使用扫地机器人的过程中,用户可通过语音指令,唤醒扫地机器人的姿态交互功能,并通过手势姿态控制扫地机器人去往不同的房间进行清扫。例如,用户可对扫地机器人说:请听我指挥。扫地机器人的姿态交互功能被唤醒后,可通过其上安装的双目摄像头对用户进行拍摄,得到用户的图像。接着,采用深度学习技术,对图像上的人体成像结果进行识别,以识别到成像结果上的人体各关键点。接着,从识别到的各关键点中,选取手势对应的目标关键点,即:手肘对应的关键点、手腕对应的关键点。
接着,基于双目深度恢复技术,从双目摄像头采集到的图像上,获取目标关键点的深度信息。基于目标关键点在图像上的坐标以及计算得到的深度信息,可得到目标关键点的三维坐标。
接着,根据目标关键点的三维坐标,计算人体手势的指示方向,并计算人体手势在地面上指示的位置坐标,即:手肘关键点和手腕关键点形成的空间直线与地面的交点。为便于描述,将该交点描述为指示点。
接着,判断该指示点是否在扫地机器人当前的导航地图区域内。
一种情况如图5b所示,该指示点位于当前的导航地图区域内。这种情况下,指示点一般距离用户比较近,且指示点的位置明确。此时,机器人可直接移动到该指示点所属的房间区域清扫。在图5b的示意中,指示点位于导航地图上的房间2中,则扫地机器人可前往房间2执行清扫任务。
另一种情况下,该指示点位于当前的导航地图区域之外。通常,这种情况下,指示点一般距离用户比较远,该情况可能由两种原因导致:
1.用户手势指向的角度不合理，例如手势的角度过大，或者手臂直接与地面平行，进而导致指示方向与当前的导航地图无交点，或者指示点超出了机器人能够涉足的最大范围。此时，可认为该清扫区域比较远，如图5c所示。在这种情况下，扫地机器人可跨房间寻找该指示方向上与其当前所在区域最近的房间进行清扫。
例如,如图5c所示,指示点在机器人能够涉足的最大范围之外。房间3位于该指示方向上,且距离扫地机器人当前所在的区域最近,那么可将房间3作为目标清扫区域,扫地机器人可前往房间3执行清扫任务。
2.用户手势指向的角度合理,但是指示点在当前的导航地图中不存在,或者,指示点位于当前的导航地图之外,但是位于机器人可涉足的最大范围内。此时,可认为导航地图不完整,存在遗漏的房间。
在这种情况下，如图5d所示，扫地机器人可移动至该指示方向上最近的障碍物的边沿，然后开始沿边搜索可进入的门或者入口。在搜索到门或者入口后，可进入该门或者入口所属的区域，若该区域位于当前的导航地图上，那么可在该区域内，继续移动至该指示方向上最近的障碍物的边沿，直至搜索到下一个门或者入口，且该门或者入口所属的区域不在导航地图上。
例如,如图5d所示,扫地机器人搜索到房间1的门,并进入房间1后,发现房间1位于导航地图上,那么可继续在房间1中朝着该指示方向搜索可进入的门或者入口。扫地机器人搜索到房间3的入口后,发现房间3未在导航地图上,此时可将房间3作为目标清扫区域,并开始清扫房间3。在清扫房间3的过程中,可根据行走轨迹,记录房间3的地图,并以此更新当前的导航地图。
基于上述实施方式，在扫地机器人的应用场景中，用户可通过姿态，便捷地与扫地机器人进行交互，扫地机器人能够根据用户的指示准确地到达清扫区域，且不受障碍物(房间墙壁等)的限制，满足家庭中个性化的清扫需求，再次解放人类双手。
在另一种典型的应用场景中,前述各实施例提供的机器人实现为空气净化机器人。该空气净化机器人可执行前述各实施例提供的机器人控制方法。
在使用空气净化机器人的过程中,用户可通过语音指令,唤醒空气净化机器人的姿态交互功能,并通过手势姿态控制空气净化机器人去往不同的房间执行空气净化任务。例如,用户可对空气净化机器人说:请看我手势。空气净化机器人的姿态交互功能被唤醒后,可通过其上安装的双目摄像头对用户进行拍摄,得到用户的图像。接着,采用深度学习技术和双目深度恢复技术,从采集到的图像上获取用于表征用户的手势的目标关键点的三维坐标。
接着,根据目标关键点的三维坐标,计算人体手势的指示方向,并计算人体手势在地面上指示的位置坐标,即:手肘关键点和手腕关键点形成的空间直线与地面的交点。为便于描述,将该交点描述为指示点。
接着,判断该指示点是否在空气净化机器人当前的导航地图区域内。
若该指示点位于当前的导航地图区域内,则机器人可根据导航地图确定行走路线,并直接移动到该指示点所属的房间区域内执行空气净化任务。
若该指示点位于当前的导航地图区域之外，则可进一步判断用户手势指向的角度是否合理：若用户手势指向的角度不合理，则空气净化机器人可跨房间寻找该指示方向上与其当前所在区域最近的房间执行空气净化任务；若用户手势指向的角度合理，但指示点在当前的导航地图中不存在，或者指示点位于当前的导航地图之外、但位于机器人可涉足的最大范围内，则可认为导航地图不完整，存在遗漏的房间。
若存在遗漏房间,则空气净化机器人可移动至该指示方向上最近的障碍物的边沿,然后开始沿边搜索可进入的门或者入口。在搜索到门或者入口后,可进入该门或者入口所属的区域,若该区域位于当前的导航地图上,那么可在该区域内,继续移动至该指示方向上最近的障碍物的边沿,直至搜索到下一个门或者入口,且该门或者入口所属的区域不在导航地图上。此时,空气净化机器人可将该门或者入口所属的区域作为目标空气净化区域,并开始执行空气净化任务。在执行空气净化任务的过程中,空气净化机器人还可根据行走轨迹,记录目标空气净化区域的地图,并以此更新当前的导航地图。
在又一种典型的应用场景中,前述各实施例提供的机器人实现为无人飞行器。该无人飞行器可执行前述各实施例提供的机器人控制方法。
假设无人飞行器需在一较大的园区中执行航拍任务,该园区包含多栋建筑物,在无人飞行器的导航地图上,多栋建筑分别属于不同的区域。在控制无人飞行器的过程中,用户可通过语音指令,唤醒无人飞行器的姿态交互功能,并通过手势姿态控制无人飞行器去往不同的建筑物执行拍摄任务。
例如，用户可对无人飞行器说：请看我手势。无人飞行器的姿态交互功能被唤醒后，可通过其上安装的双目摄像头对用户进行拍摄，得到用户的图像。接着，采用深度学习技术和双目深度恢复技术，从拍摄得到的图像上，获取用于表征用户的手势的目标关键点的三维坐标。
接着，根据目标关键点的三维坐标，计算人体手势的指示方向，并计算人体手势在地面上指示的位置坐标，即：手肘关键点和手腕关键点形成的空间直线与地面的交点。为便于描述，将该交点描述为指示点。
接着,判断该指示点是否在无人飞行器当前的导航地图区域内。
若该指示点位于当前的导航地图区域内，则无人飞行器可根据导航地图确定飞行路线，并直接飞行到该指示点所属的区域内执行拍摄任务。
若该指示点位于当前的导航地图区域之外，则可进一步判断用户手势指向的角度是否合理：若用户手势指向的角度不合理，则无人飞行器可跨区域寻找该指示方向上与其当前所在区域最近的建筑物执行拍摄任务；若用户手势指向的角度合理，但指示点在当前的导航地图中不存在，或者指示点位于当前的导航地图之外、但位于园区内无人飞行器可涉足的最大范围内，则可认为园区的导航地图不完整，存在遗漏的建筑物。
若存在遗漏建筑物,则无人飞行器可沿着该指示方向飞行,直至搜索到与该指示方向对应,但不在导航地图上的新的区域。此时,无人飞行器可将该新的区域作为目标拍摄区域,并开始执行拍摄任务。在执行拍摄任务的过程中,无人飞行器还可根据目标拍摄区域的位置分布和区域面积,绘制目标拍摄区域的地图,并以此更新当前的导航地图。
需要说明的是，本申请实施例还提供一种存储有计算机程序的计算机可读存储介质，计算机程序被执行时能够实现上述方法实施例中机器人控制方法的各步骤。
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备（系统）、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的控制器以产生一个机器，使得通过计算机或其他可编程数据处理设备的控制器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括，但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带、磁带磁盘存储或其他磁性存储设备或任何其他非传输介质，可用于存储可以被计算设备访问的信息。按照本文中的界定，计算机可读介质不包括暂存电脑可读媒体(transitory media)，如调制的数据信号和载波。
还需要说明的是,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。
以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。

Claims (15)

  1. 一种机器人控制方法,其特征在于,包括:
    响应姿态交互唤醒指令,获取用户的姿态数据;
    根据所述姿态数据,确定所述用户指示的目标作业区域;所述目标作业区域与所述机器人的当前位置所属的区域为不同区域;
    所述机器人移动至所述目标作业区域,以执行设定的作业任务。
  2. 根据权利要求1所述的方法，其特征在于，所述姿态交互唤醒指令，包括以下至少一种：
    所述用户发出的用于唤醒所述机器人的姿态交互功能的语音指令;
    所述用户通过终端设备发送的用于唤醒所述机器人的姿态交互功能的控制指令;
    所述用户发出的用于唤醒所述机器人的姿态交互功能的手势指令。
  3. 根据权利要求1所述的方法,其特征在于,获取用户的姿态数据,包括:
    通过安装于所述机器人上的传感器组件,对所述用户进行三维测量,得到三维测量数据;
    根据所述三维测量数据,获取所述用户的手势对应的空间坐标,作为所述用户的姿态数据。
  4. 根据权利要求3所述的方法,其特征在于,所述三维测量数据包括:拍摄所述用户得到的图像以及所述用户与所述机器人之间的距离。
  5. 根据权利要求4所述的方法,其特征在于,根据所述三维测量数据,获取所述用户的手势对应的空间坐标,包括:
    对所述图像进行识别,得到所述用户的姿态关键点;
    从所述姿态关键点中,确定用于表征所述用户的手势的目标关键点;
    根据所述用户与所述机器人之间的距离,确定所述目标关键点与所述机器人之间的距离;
    根据所述目标关键点的坐标以及所述目标关键点与所述机器人之间的距离,确定所述用户的手势对应的空间坐标。
  6. 根据权利要求3所述的方法,其特征在于,根据所述姿态数据,确定所述用户指示的目标作业区域,包括:
    根据所述用户的手势对应的空间坐标,确定所述用户指示的目标作业方向;
    从候选作业区域中,确定与所述目标作业方向适配的作业区域,作为所述目标作业区域。
  7. 根据权利要求6所述的方法,其特征在于,根据所述用户的手势对应的空间坐标,确定所述用户指示的作业方向,包括:
    根据所述用户的手势对应的空间坐标进行直线拟合,得到空间直线;
    将所述空间直线向所述用户的手势的末端延伸的方向,作为所述用户指示的作业方向。
  8. 根据权利要求7所述的方法,其特征在于,从候选作业区域中,确定与所述目标作业方向适配的作业区域,作为所述目标作业区域,包括:
    计算所述空间直线与所述候选作业区域所在的平面的交点位置;
    根据所述交点位置,确定所述用户指示的目标作业区域。
  9. 根据权利要求8所述的方法，其特征在于，根据所述交点位置，确定所述用户指示的目标作业区域，包括以下任意一种：
    若所述交点位置在所述机器人的已知作业区域内,则将所述交点位置所在的作业区域作为所述目标作业区域;
    若所述交点位置不在所述机器人的已知作业区域内,且所述空间直线与所述平面的夹角大于设定的角度阈值,则从所述已知作业区域内,确定位于所述用户指示的作业方向上、且与所述机器人的当前位置最近的作业区域,作为所述目标作业区域;
    若所述交点位置不在所述机器人的已知作业区域内，且所述空间直线与所述平面的夹角小于或者等于所述角度阈值，则根据所述交点位置，在所述用户指示的作业方向上寻找所述目标作业区域。
  10. 根据权利要求9所述的方法,其特征在于,根据所述交点位置,在所述用户指示的作业方向上寻找所述目标作业区域,包括:
    向所述用户指示的作业方向移动,直至遇到目标障碍物;
    沿着所述目标障碍物的边缘向靠近所述交点位置的方向进行移动,直至探测到入口;
    将入口所属的作业区域,作为所述目标作业区域;所述入口所属的作业区域不在所述已知作业区域内。
  11. 根据权利要求9所述的方法,其特征在于,还包括:
    在所述用户指示的作业方向上寻找到所述目标作业区域后,在所述目标作业区域执行所述作业任务,并根据执行作业任务形成的轨迹更新所述已知作业区域对应的导航地图。
  12. 根据权利要求9所述的方法,其特征在于,移动至所述目标作业区域,以执行设定的作业任务,包括:
    若所述目标作业区域位于所述已知作业区域中,则根据所述已知作业区域对应的导航地图,规划去往所述目标作业区域的路线;
    根据去往所述目标作业区域的路线,移动至所述目标作业区域。
  13. 根据权利要求1-12任一项所述的方法,其特征在于,所述目标作业区域与所述机器人的当前位置所属的区域之间存在实体障碍物,或者虚拟障碍物。
  14. 一种机器人,其特征在于,包括:机器人本体,安装在所述机器人本体上的传感器组件、控制器以及运动组件;
    所述传感器组件,用于:响应用户的作业控制指令,获取用户的姿态数据;
    所述控制器，用于：根据所述用户的姿态数据，确定所述用户指示的目标作业区域，并控制所述运动组件移动至所述目标作业区域，以执行作业任务。
  15. 根据权利要求14所述的机器人,其特征在于,所述传感器组件包括:深度传感器。
PCT/CN2020/142239 2020-01-15 2020-12-31 机器人及其控制方法 WO2021143543A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/793,356 US20230057965A1 (en) 2020-01-15 2020-12-31 Robot and control method therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010043539.0A CN113116224B (zh) 2020-01-15 2020-01-15 机器人及其控制方法
CN202010043539.0 2020-01-15

Publications (1)

Publication Number Publication Date
WO2021143543A1 true WO2021143543A1 (zh) 2021-07-22

Family

ID=76772154

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/142239 WO2021143543A1 (zh) 2020-01-15 2020-12-31 机器人及其控制方法

Country Status (3)

Country Link
US (1) US20230057965A1 (zh)
CN (1) CN113116224B (zh)
WO (1) WO2021143543A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113854904A (zh) * 2021-09-29 2021-12-31 北京石头世纪科技股份有限公司 清洁设备的控制方法、装置、清洁设备和存储介质
CN114373148A (zh) * 2021-12-24 2022-04-19 达闼机器人有限公司 云端机器人的建图方法、系统、设备及存储介质

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220382282A1 (en) * 2021-05-25 2022-12-01 Ubtech North America Research And Development Center Corp Mobility aid robot navigating method and mobility aid robot using the same
CN116098536A (zh) * 2021-11-08 2023-05-12 青岛海尔科技有限公司 一种机器人控制方法及装置
CN116982883A (zh) * 2022-04-25 2023-11-03 追觅创新科技(苏州)有限公司 清洁操作的执行方法及装置、存储介质及电子装置
CN117315792B (zh) * 2023-11-28 2024-03-05 湘潭荣耀智能科技有限公司 一种基于卧姿人体测量的实时调控系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105407774A (zh) * 2013-07-29 2016-03-16 三星电子株式会社 自动清扫系统、清扫机器人和控制清扫机器人的方法
US20160154996A1 (en) * 2014-12-01 2016-06-02 Lg Electronics Inc. Robot cleaner and method for controlling a robot cleaner
CN109330494A (zh) * 2018-11-01 2019-02-15 珠海格力电器股份有限公司 基于动作识别的扫地机器人控制方法、系统、扫地机器人
CN110123199A (zh) * 2018-02-08 2019-08-16 东芝生活电器株式会社 自行式电动吸尘器
CN110575099A (zh) * 2018-06-07 2019-12-17 科沃斯机器人股份有限公司 定点清扫方法、扫地机器人及存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010106845A (ko) * 2000-05-23 2001-12-07 이정철 다기능 가정용 로봇 및 그의 제어방법
CN108903816A (zh) * 2018-06-21 2018-11-30 上海与德通讯技术有限公司 一种清扫方法、控制器及智能清扫设备
CN109199240B (zh) * 2018-07-24 2023-10-20 深圳市云洁科技有限公司 一种基于手势控制的扫地机器人控制方法及系统
CN109890573B (zh) * 2019-01-04 2022-05-03 上海阿科伯特机器人有限公司 移动机器人的控制方法、装置、移动机器人及存储介质
CN109920424A (zh) * 2019-04-03 2019-06-21 北京石头世纪科技股份有限公司 机器人语音控制方法、装置、机器人和介质


Also Published As

Publication number Publication date
CN113116224B (zh) 2022-07-05
CN113116224A (zh) 2021-07-16
US20230057965A1 (en) 2023-02-23

Similar Documents

Publication Publication Date Title
WO2021143543A1 (zh) 机器人及其控制方法
CN109643127B (zh) 构建地图、定位、导航、控制方法及系统、移动机器人
JP7356567B2 (ja) 移動ロボット及びその制御方法
WO2019232806A1 (zh) 导航方法、导航系统、移动控制系统及移动机器人
US20210224579A1 (en) Mobile Cleaning Robot Artificial Intelligence for Situational Awareness
KR102577785B1 (ko) 청소 로봇 및 그의 태스크 수행 방법
WO2019144541A1 (zh) 一种清洁机器人
WO2020140271A1 (zh) 移动机器人的控制方法、装置、移动机器人及存储介质
KR102068216B1 (ko) 이동형 원격현전 로봇과의 인터페이싱
WO2019232803A1 (zh) 移动控制方法、移动机器人及计算机存储介质
Tölgyessy et al. Foundations of visual linear human–robot interaction via pointing gesture navigation
WO2020223975A1 (zh) 在地图上定位设备的方法、服务端及移动机器人
WO2022052660A1 (zh) 仓储机器人定位与地图构建方法、机器人及存储介质
GB2527207A (en) Mobile human interface robot
Chatterjee et al. Vision based autonomous robot navigation: algorithms and implementations
WO2021146862A1 (zh) 移动设备的室内定位方法、移动设备及控制系统
CN109933061A (zh) 基于人工智能的机器人及控制方法
Monajjemi et al. UAV, do you see me? Establishing mutual attention between an uninstrumented human and an outdoor UAV in flight
CN113126632B (zh) 虚拟墙划定和作业方法、设备及存储介质
Tsuru et al. Online object searching by a humanoid robot in an unknown environment
KR20230134109A (ko) 청소 로봇 및 그의 태스크 수행 방법
KR20200052388A (ko) 인공지능 이동 로봇의 제어 방법
Zhang et al. An egocentric vision based assistive co-robot
Chen et al. Design and Implementation of AMR Robot Based on RGBD, VSLAM and SLAM
Langer et al. On-the-fly detection of novel objects in indoor environments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20913398

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20913398

Country of ref document: EP

Kind code of ref document: A1