WO2023207611A1 - Method and Device for Performing a Cleaning Operation, Storage Medium, and Electronic Device - Google Patents

Method and Device for Performing a Cleaning Operation, Storage Medium, and Electronic Device (Download PDF)

Info

Publication number
WO2023207611A1
WO2023207611A1 (application PCT/CN2023/088040)
Authority
WO
WIPO (PCT)
Prior art keywords
target object
gesture recognition
target
cleaning robot
biometric
Prior art date
Application number
PCT/CN2023/088040
Other languages
English (en)
French (fr)
Inventor
吴飞
郁顺昌
Original Assignee
追觅创新科技(苏州)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 追觅创新科技(苏州)有限公司
Publication of WO2023207611A1

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • A47L11/4011: Regulation of the cleaning machine by electric means; Control systems and remote control systems therefor
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/40: Parts or details of machines not provided for in groups A47L11/02 - A47L11/38, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers, levers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L2201/00: Robotic cleaning machines, i.e. with automatic control of the travelling movement or the cleaning operation
    • A47L2201/06: Control of the cleaning action for autonomous devices; Automatic detection of the surface condition before, during or after cleaning

Definitions

  • The present application relates to the field of robotics and, specifically, to a method and device for performing a cleaning operation, a storage medium, and an electronic device.
  • Controlling a sweeping robot directly through speech recognition has two drawbacks: the voice command is not verified, and the noise generated while the sweeping robot works is loud, so the speech recognition effect is poor and the control experience is bad. No effective solution to this problem has yet been proposed.
  • Embodiments of the present application provide a cleaning operation execution method and device, a storage medium, and an electronic device, to at least solve the problem in the related art that controlling a sweeping robot through voice recognition suffers from loud working noise and a poor recognition effect, giving a bad control experience.
  • A method for performing a cleaning operation is provided, including: obtaining a biometric recognition result of a target object and obtaining the biometric features of the target object, wherein the target object is located in the first scanning area of a mobile terminal or in the second scanning area of a cleaning robot; when the biometric features pass verification, obtaining the gesture recognition features of the target object; and obtaining the control instruction corresponding to the gesture recognition features and controlling the cleaning robot to perform the cleaning operation corresponding to the control instruction.
  • When the biometric features pass verification, before obtaining the gesture recognition features of the target object, the method further includes: determining whether the cleaning robot recognizes the wake-up gesture recognition feature of the target object, wherein the wake-up gesture recognition feature is used to instruct the cleaning robot to turn on a target recognition function, and the target recognition function is used to determine the control instruction corresponding to a gesture recognition feature; when the cleaning robot recognizes the wake-up gesture recognition feature of the target object, the target recognition function is turned on.
  • The method further includes: turning off the target recognition function if the gesture recognition features of the current object are not recognized within a preset time period after the current moment.
  • Obtaining the biometric recognition result of the target object and obtaining the biometric features of the target object includes at least one of the following: performing biometric recognition on the target object by the cleaning robot to obtain the biometric recognition result, and parsing the biometric features of the target object from the biometric recognition result; or receiving the biometric recognition result obtained by the mobile terminal performing biometric recognition on the target object, and parsing the biometric features of the target object from the biometric recognition result.
  • Before obtaining the control instruction corresponding to the gesture recognition features, the method further includes: receiving a setting operation of the target object; and, in response to the setting operation, setting the correspondence between different gesture recognition features and different control instructions.
  • The method further includes: determining whether the cleaning robot recognizes the termination gesture recognition feature of the target object; and, when the termination gesture recognition feature is recognized, terminating the cleaning operation currently being executed.
  • Obtaining the control instruction corresponding to the gesture recognition features includes: obtaining the target correspondence associated with the biometric features of the target object, wherein the target correspondence is the correspondence between the gesture recognition features of the target object and the control instructions, and the biometric features of different target objects correspond to different correspondences; and obtaining, from the target correspondence, the control instruction corresponding to the gesture recognition features.
  • A device for executing a cleaning operation is also provided, including: a first acquisition unit, configured to acquire the biometric recognition result of a target object and obtain the biometric features of the target object, wherein the target object is located in the first scanning area of the mobile terminal or in the second scanning area of the cleaning robot; a second acquisition unit, configured to acquire the gesture recognition features of the target object when the biometric features pass verification; and a first control unit, configured to obtain the control instruction corresponding to the gesture recognition features and control the cleaning robot to perform the cleaning operation corresponding to the control instruction.
  • The above execution device further includes: a first determination unit, configured to determine whether the cleaning robot recognizes the wake-up gesture recognition feature of the target object, wherein the wake-up gesture recognition feature is used to instruct the cleaning robot to turn on a target recognition function, and the target recognition function is used to determine the control instruction corresponding to a gesture recognition feature; when the cleaning robot recognizes the wake-up gesture recognition feature of the target object, the target recognition function is turned on.
  • The above first determination unit is also configured to turn off the target recognition function if the gesture recognition features of the current object are not recognized within a preset time period after the current moment.
  • The above first acquisition unit is also configured to perform biometric recognition on the target object through the cleaning robot, obtain the biometric recognition result, and parse the biometric features of the target object from the biometric recognition result; or to receive the biometric recognition result obtained by the mobile terminal performing biometric recognition on the target object, and parse the biometric features of the target object from the biometric recognition result.
  • The above second acquisition unit is further configured to receive a setting operation of the target object and, in response to the setting operation, set the correspondence between different gesture recognition features and different control instructions.
  • The above first control unit further includes: a second determination unit, configured to determine whether the cleaning robot recognizes the termination gesture recognition feature of the target object and, when the termination gesture recognition feature is recognized, to terminate the cleaning operation currently being executed.
  • The above second acquisition unit is also used to acquire the target correspondence associated with the biometric features of the target object, wherein the target correspondence is the correspondence between the gesture recognition features of the target object and the control instructions, and the biometric features of different target objects correspond to different correspondences; the control instruction corresponding to the gesture recognition features is obtained from the target correspondence.
  • A computer-readable storage medium is also provided, which stores a computer program, wherein the computer program is configured to perform the above cleaning operation execution method when run.
  • An electronic device is also provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the above cleaning operation execution method through the computer program.
  • Through the above steps, the biometric recognition result of the target object is obtained and the biometric features of the target object are obtained, that is, the identity of the target object is verified, wherein the target object is located in the first scanning area of the mobile terminal or in the second scanning area of the cleaning robot.
  • The target object is identified through the mobile terminal or the cleaning robot; when the biometric features pass verification, the gesture recognition features of the target object are obtained; the control instruction corresponding to the gesture recognition features is obtained, and the cleaning robot is controlled to perform the cleaning operation corresponding to the control instruction. This solves the problem in the related art that the sweeping robot is controlled directly through voice recognition, where the voice is not verified and the loud noise generated while the sweeping robot works makes the speech recognition effect poor.
  • The control experience in the related art is therefore bad. The present scheme first verifies the identity of the user who issues the gesture, improves the safety of using the sweeping robot, and also achieves the technical effect of preventing noise from affecting the efficiency of intelligently controlling the sweeping robot.
  • Figure 1 is a schematic diagram of the hardware environment of an optional cleaning operation execution method according to the embodiment of the present application.
  • Figure 2 is a flow chart of an optional cleaning operation execution method according to an embodiment of the present application.
  • Figure 3 is a schematic flowchart of another optional cleaning operation execution method according to an embodiment of the present application.
  • Figure 4 is a structural block diagram of an optional cleaning operation execution device according to an embodiment of the present application.
  • Figure 5 is a structural block diagram of an optional electronic device according to an embodiment of the present application.
  • a method for performing a cleaning operation is provided.
  • the above cleaning operation execution method can be applied to a hardware environment composed of a robot 102 and a server 104 as shown in FIG. 1 .
  • the robot 102 can be connected to a server 104 (for example, an Internet of Things platform or a cloud server) through a network to control the robot 102 .
  • the above-mentioned network may include but is not limited to at least one of the following: wired network, wireless network.
  • the above-mentioned wired network may include but is not limited to at least one of the following: wide area network, metropolitan area network, and local area network.
  • The above-mentioned wireless network may include at least one of the following: Wi-Fi (Wireless Fidelity), Bluetooth, and infrared.
  • the cleaning operation execution method in the embodiment of the present application can be executed by the robot 102 or the server 104 alone, or can be executed by the robot 102 and the server 104 jointly.
  • the method for the robot 102 to perform the cleaning operation in the embodiment of the present application may also be performed by a client installed on the robot 102 .
  • Figure 2 is a schematic flowchart of an optional cleaning operation execution method according to the embodiment of the present application. As shown in Figure 2, the process may include the following steps:
  • Step S202: Obtain the biometric recognition result of the target object and obtain the biometric features of the target object, wherein the target object is located in the first scanning area of the mobile terminal or in the second scanning area of the cleaning robot;
  • Step S204: If the biometric features pass verification, obtain the gesture recognition features of the target object;
  • Step S206: Obtain the control instruction corresponding to the gesture recognition features, and control the cleaning robot to perform the cleaning operation corresponding to the control instruction.
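The three steps above can be sketched in code. This is a minimal illustrative sketch, not the patent's implementation: the feature set, gesture names, and function names (`AUTHORIZED_FEATURES`, `execute_cleaning`, etc.) are all assumptions.

```python
# Sketch of steps S202-S206: a gesture is only mapped to a control
# instruction after the user's biometric feature passes verification.
AUTHORIZED_FEATURES = {"face:alice"}                 # enrolled biometric features
GESTURE_TO_INSTRUCTION = {"circle": "CLEAN_WHOLE_HOUSE",
                          "swipe_left": "CLEAN_KITCHEN"}

def execute_cleaning(biometric_feature, gesture):
    # Steps S202/S204: ignore gestures from unverified users.
    if biometric_feature not in AUTHORIZED_FEATURES:
        return None                                  # verification failed
    # Step S206: look up the control instruction for the gesture.
    return GESTURE_TO_INSTRUCTION.get(gesture)

print(execute_cleaning("face:alice", "circle"))      # CLEAN_WHOLE_HOUSE
print(execute_cleaning("face:eve", "circle"))        # None
```

The design point is that verification gates the gesture lookup entirely, so an unverified bystander's gesture never reaches the instruction table.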
  • Through the above steps, the biometric recognition result of the target object is obtained and the biometric features of the target object are obtained, that is, the identity of the target object is verified, wherein the target object is located in the first scanning area of the mobile terminal or in the second scanning area of the cleaning robot.
  • The target object is identified through the mobile terminal or the cleaning robot; when the biometric features pass verification, the gesture recognition features of the target object are obtained; the control instruction corresponding to the gesture recognition features is obtained, and the cleaning robot is controlled to perform the cleaning operation corresponding to the control instruction. This solves the problem in the related art that the sweeping robot is controlled directly through voice recognition, where the voice is not verified and the loud noise generated while the sweeping robot works makes the speech recognition effect poor and the control experience bad.
  • The identity of the target object can be verified through the mobile terminal or the sweeping robot, which improves the safety of using the sweeping robot; moreover, once verification passes, the cleaning robot is controlled through gesture recognition alone rather than voice, achieving the technical effect of preventing noise from affecting the efficiency of intelligently controlling the sweeping robot.
  • When the biometric features pass verification, before obtaining the gesture recognition features of the target object, the method further includes: determining whether the cleaning robot recognizes the wake-up gesture recognition feature of the target object, wherein the wake-up gesture recognition feature is used to instruct the cleaning robot to turn on a target recognition function, and the target recognition function is used to determine the control instruction corresponding to a gesture recognition feature; if the cleaning robot recognizes the wake-up gesture recognition feature of the target object, the target recognition function is turned on; if not, turning on the target recognition function is prohibited.
  • In other words, when the biometric features pass verification, it is first determined whether the cleaning robot recognizes the wake-up gesture recognition feature of the target object. After the wake-up gesture recognition feature is recognized, the cleaning robot is instructed to turn on the target recognition function, which identifies the control instructions corresponding to the target object's gesture recognition features and thereby realizes control of the cleaning robot. If the wake-up gesture recognition feature is not obtained, turning on the target recognition function is prohibited; adding recognition of a wake-up gesture prevents misoperation by users.
  • In a practical scenario, a wake-up gesture recognition feature can be added so that the cleaning robot's target recognition function is only enabled by a specific gesture; if the wake-up gesture recognition feature is not obtained, the event is judged to be a misrecognition and the target recognition function is not turned on.
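The wake-up gate described above can be sketched as a small state machine. The gesture names and class name here are illustrative assumptions, not taken from the patent.

```python
# Illustrative wake-up gate: ordinary gestures are ignored until the
# dedicated wake-up gesture has been recognized.
class GestureGate:
    WAKE_GESTURE = "raise_hand"          # assumed wake-up gesture name

    def __init__(self):
        self.recognition_on = False      # target recognition function off

    def handle(self, gesture):
        if gesture == self.WAKE_GESTURE:
            self.recognition_on = True   # turn on target recognition
            return "RECOGNITION_ON"
        if not self.recognition_on:
            return "IGNORED"             # misrecognition guard
        return "DISPATCH:" + gesture     # forward to instruction lookup

gate = GestureGate()
print(gate.handle("circle"))             # IGNORED (not yet woken)
print(gate.handle("raise_hand"))         # RECOGNITION_ON
print(gate.handle("circle"))             # DISPATCH:circle
```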
  • The method further includes: turning off the target recognition function if the gesture recognition features of the current object are not recognized within a preset time period after the current moment.
  • In other words, a preset time period is set for the cleaning robot: after being awakened, if it does not recognize the gesture recognition features of the target object within that time period, it actively turns off the target recognition function.
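One way to sketch this auto-off behaviour is with a monotonic-clock timer. The 5-second window and all names are assumptions for illustration; a fake clock is injected so the behaviour can be checked without real waiting.

```python
import time

# Sketch of the auto-off timer: if no gesture arrives within the preset
# window after wake-up, the target recognition function switches off.
class RecognitionTimer:
    def __init__(self, timeout_s=5.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock
        self.woken_at = None             # None means function is off

    def wake(self):
        self.woken_at = self.clock()

    def is_active(self):
        if self.woken_at is None:
            return False
        if self.clock() - self.woken_at > self.timeout_s:
            self.woken_at = None         # actively turn off the function
            return False
        return True

# Fake clock for demonstration: time is just a mutable number.
t = [0.0]
timer = RecognitionTimer(timeout_s=5.0, clock=lambda: t[0])
timer.wake()
t[0] = 3.0
print(timer.is_active())                 # True  (within the window)
t[0] = 6.0
print(timer.is_active())                 # False (window expired, turned off)
```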
  • For example, the host or hostess may accidentally let the robot acquire their biometric features and pass identity verification, and a child at home may imitate the adults' actions and issue the corresponding wake-up gesture. After recognizing the wake-up gesture recognition feature, the cleaning robot turns on the target recognition function and waits to receive gesture recognition features. The child, however, will not issue any further gestures, and the cleaning robot cannot remain in the receiving state indefinitely. Therefore, after being awakened, if the cleaning robot does not recognize the gesture recognition features of the target object within the preset time period, it actively turns off the target recognition function to avoid unnecessary system overhead.
  • Obtaining the biometric recognition result of the target object and obtaining the biometric features of the target object includes at least one of the following: performing biometric recognition on the target object by the cleaning robot to obtain the biometric recognition result, and parsing the biometric features of the target object from the biometric recognition result; or receiving the biometric recognition result obtained by the mobile terminal performing biometric recognition on the target object, and parsing the biometric features of the target object from the biometric recognition result.
  • In other words, users can either authenticate through the recognition unit of the cleaning robot or through the mobile terminal device bound to it, so that they can control the cleaning robot anytime and anywhere. For example, a user at work suddenly remembers that a visitor will arrive at home today and the home has not yet been cleaned; the user then controls the cleaning robot through the mobile phone and directs it to clean certain designated areas of the home.
  • The mobile terminal device may be a mobile smart device such as a mobile phone, tablet, or smart watch; this application does not limit this.
  • Before obtaining the control instruction corresponding to the gesture recognition features, the method further includes: receiving a setting operation of the target object; and, in response to the setting operation, setting the correspondence between different gesture recognition features and different control instructions.
  • In other words, before use the user can have the cleaning robot record gesture recognition features and bind control instructions to them. For example, the user performs a circle-drawing gesture; the cleaning robot receives the gesture recognition feature, the user immediately selects the whole-house cleaning command, and the cleaning robot binds the gesture to that command. Thereafter, whenever it recognizes the circle-drawing gesture, it executes the whole-house cleaning instruction.
  • The method further includes: determining whether the cleaning robot recognizes the termination gesture recognition feature of the target object; and, when the termination gesture recognition feature is recognized, terminating the cleaning operation currently being executed.
  • In other words, while the cleaning robot performs the cleaning operation corresponding to the recognized control instruction, it continues to monitor for other gesture recognition features the user may issue, such as the termination gesture recognition feature.
  • For example, when the user has directed the cleaning robot to clean a designated area of the home, children may be active in that area. To prevent the sweeping robot from tripping a child and causing a safety problem, the user issues the termination gesture while the robot is working; if the cleaning robot recognizes the termination gesture recognition feature during operation, it immediately stops the cleaning operation being performed.
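The interrupt-while-working behaviour can be sketched as a loop that checks each camera frame for the termination gesture before doing the next unit of cleaning work. The gesture name `palm_out` and the frame representation are illustrative assumptions.

```python
# Sketch: while a cleaning task runs, every frame is still checked for
# the termination gesture; recognizing it aborts the current operation.
STOP_GESTURE = "palm_out"                # assumed termination gesture

def run_cleaning(frames):
    cleaned = []
    for frame_gesture in frames:
        if frame_gesture == STOP_GESTURE:
            return cleaned, "TERMINATED" # stop the operation immediately
        cleaned.append(frame_gesture)    # stand-in for one cleaning step
    return cleaned, "COMPLETED"

print(run_cleaning([None, None, "palm_out", None]))
# ([None, None], 'TERMINATED')
```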
  • Obtaining the control instruction corresponding to the gesture recognition features includes: obtaining the target correspondence associated with the biometric features of the target object, wherein the target correspondence is the correspondence between the gesture recognition features of the target object and the control instructions, and the biometric features of different target objects correspond to different correspondences; and obtaining, from the target correspondence, the control instruction corresponding to the gesture recognition features.
  • In other words, when the cleaning robot obtains the control instruction corresponding to the gesture recognition features, it first obtains, based on the acquired biometric features of the target object, the correspondence between gesture recognition features and control instructions set by that target object, and then looks up the control instruction corresponding to the gesture recognition features in that correspondence.
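The per-user correspondence can be sketched as a nested mapping keyed first by biometric feature, then by gesture. The users, gestures, and instruction names below are illustrative assumptions.

```python
# Sketch of per-user correspondence tables: the verified biometric
# feature selects which gesture-to-instruction mapping is consulted, so
# the same gesture can mean different things for different users.
CORRESPONDENCES = {
    "face:alice": {"circle": "CLEAN_WHOLE_HOUSE"},
    "face:bob":   {"circle": "CLEAN_KITCHEN"},
}

def instruction_for(biometric_feature, gesture):
    table = CORRESPONDENCES.get(biometric_feature, {})
    return table.get(gesture)            # None if gesture is unbound

print(instruction_for("face:alice", "circle"))   # CLEAN_WHOLE_HOUSE
print(instruction_for("face:bob", "circle"))     # CLEAN_KITCHEN
```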
  • the process of the cleaning operation execution method in this optional embodiment may include the following steps:
  • Step S302: Enter the face recognition data into the server through the mobile phone app paired with the sweeper (equivalent to the above cleaning robot);
  • Step S304: Start;
  • Step S306: The sweeper performs face recognition authentication on the user through the preset AI camera; if verification passes, execute step S308; if verification fails, execute step S316;
  • Step S308: Capture the user's gesture recognition feature image through the AI camera;
  • Step S310: Use the gesture recognition algorithm model built into the sweeper to recognize the acquired gesture recognition feature image, compare the recognized gesture recognition features with those in the model library, and determine whether they belong to the gesture recognition feature set stored in the model library; if not, execute step S312; if yes, execute step S314;
  • Step S312: Prompt the user to input a correct gesture recognition feature;
  • Step S314: Obtain the control instruction corresponding to the gesture recognition feature, issue it to the motion control module, and control the operation of the sweeper;
  • Step S316: End.
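The S302-S316 flow above can be sketched end to end. This is a minimal sketch under assumed names (`enrolled_faces`, `model_library`, the return codes), not the patent's actual pipeline.

```python
# End-to-end sketch of Fig. 3: face authentication (S306), then
# comparison of the captured gesture against the model library (S310).
def pipeline(face, gesture, enrolled_faces, model_library):
    if face not in enrolled_faces:
        return "END"                       # S306 fails -> S316
    if gesture not in model_library:
        return "PROMPT_CORRECT_GESTURE"    # S310 fails -> S312
    return model_library[gesture]          # S314: issue the instruction

faces = {"face:owner"}
library = {"circle": "CLEAN_WHOLE_HOUSE"}
print(pipeline("face:guest", "circle", faces, library))  # END
print(pipeline("face:owner", "wave", faces, library))    # PROMPT_CORRECT_GESTURE
print(pipeline("face:owner", "circle", faces, library))  # CLEAN_WHOLE_HOUSE
```

Note how the two checks mirror the double verification described below: an unenrolled face never reaches the gesture comparison at all.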
  • In other words, the sweeper is equipped with a gesture recognition algorithm that acquires and identifies the gesture recognition features issued by the user and compares them with those in the model library. If the comparison succeeds, the corresponding control instruction is issued, and the sweeper executes the corresponding cleaning operation according to it. The sweeper prevents misidentification through the double verification of face recognition plus recognition of an additional wake-up gesture.
  • The method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware, but in many cases the former is the better implementation. The technical solution of the present application, in essence or in the part that contributes over the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM (Read-Only Memory), RAM (Random Access Memory), a magnetic disk, or an optical disk) and includes a number of instructions that cause a terminal device (which may be a mobile phone, computer, server, network device, etc.) to execute the methods described in the various embodiments of this application.
  • FIG. 4 is a structural block diagram of an optional cleaning operation execution device according to an embodiment of the present application. As shown in Figure 4, the device may include:
  • the first acquisition unit 42 is used to obtain the biometric recognition result of the target object and obtain the biometric characteristics of the target object, wherein the target object is located in the first scanning area of the mobile terminal or the second scanning area of the cleaning robot.
  • the second acquisition unit 44 is configured to acquire the gesture recognition characteristics of the target object when the biological characteristics pass verification;
  • the first control unit 46 is configured to obtain a control instruction corresponding to the gesture recognition feature, and control the cleaning robot to perform a cleaning operation corresponding to the control instruction.
  • The first acquisition unit 42 in this embodiment can be used to perform the above step S202; the second acquisition unit 44 can be used to perform the above step S204; and the first control unit 46 can be used to perform the above step S206.
  • Through the above units, the biometric recognition result of the target object is obtained and the biometric features of the target object are obtained, that is, the identity of the target object is verified, wherein the target object is located in the first scanning area of the mobile terminal or in the second scanning area of the cleaning robot.
  • The target object is identified through the mobile terminal or the cleaning robot; when the biometric features pass verification, the gesture recognition features of the target object are obtained; the control instruction corresponding to the gesture recognition features is obtained, and the cleaning robot is controlled to perform the cleaning operation corresponding to the control instruction. This solves the problem in the related art that the sweeping robot is controlled directly through voice recognition, where the voice is not verified and the loud noise generated while the sweeping robot works makes the speech recognition effect poor and the control experience bad.
  • The identity of the target object can be verified through the mobile terminal or the sweeping robot, which improves the safety of using the sweeping robot; moreover, once verification passes, the cleaning robot is controlled through gesture recognition alone rather than voice, achieving the technical effect of preventing noise from affecting the efficiency of intelligently controlling the sweeping robot.
  • In other words, when the biometric features pass verification, it is first determined whether the cleaning robot recognizes the wake-up gesture recognition feature of the target object. After the wake-up gesture recognition feature is recognized, the cleaning robot is instructed to turn on the target recognition function, which identifies the control instructions corresponding to the target object's gesture recognition features and thereby realizes control of the cleaning robot. If the wake-up gesture recognition feature is not obtained, turning on the target recognition function is prohibited; adding recognition of a wake-up gesture prevents misoperation by users.
  • In a practical scenario, a wake-up gesture recognition feature can be added so that the cleaning robot's target recognition function is only enabled by a specific gesture; if the wake-up gesture recognition feature is not recognized, the event is judged to be a misrecognition and the target recognition function is not turned on.
  • the above-mentioned first determination unit is further configured to turn off the target recognition function if no gesture recognition feature of the target object is recognized within a preset time period after the current moment.
  • the user may accidentally make the gesture corresponding to the wake-up gesture recognition feature, or may wake up the cleaning robot and then, delayed by other matters, fail to issue the subsequent gesture recognition features that control it.
  • a preset time period is therefore set for the cleaning robot: after being awakened, if it does not recognize a gesture recognition feature of the target object within the preset time period, it actively turns off the target recognition function.
  • the host or hostess may accidentally let the robot acquire biometric characteristics and pass identity verification.
  • a child at home may imitate the adults' movements and make the corresponding wake-up gesture recognition feature.
  • the target recognition function is then turned on and waits to receive gesture recognition features.
  • the child, however, will not issue further gesture recognition features to the cleaning robot, and the cleaning robot cannot remain indefinitely in a state of receiving them; therefore, after being awakened, if the cleaning robot does not recognize a gesture recognition feature of the target object within the preset time period, it actively turns off the target recognition function to avoid damage to the system.
  • the above-mentioned first acquisition unit is further configured to perform biometric recognition on the target object through the cleaning robot, obtain the biometric recognition result, and parse the biometric characteristics of the target object from the biometric recognition result; or to receive the biometric recognition result obtained by the mobile terminal performing biometric recognition on the target object, and parse the biometric characteristics of the target object from the biometric recognition result.
  • users can verify their identity either through the recognition unit of the cleaning robot or through the mobile terminal device bound to it, so that they can control the cleaning robot anytime and anywhere; for example, while at work a user suddenly remembers that a visitor will arrive at home today and the home has not yet been cleaned; the user then controls the cleaning robot through the mobile phone, directing it to clean certain designated areas of the home.
  • the above-mentioned second acquisition unit is further configured to receive a setting operation of the target object and, in response to the setting operation, set correspondences between different gesture recognition features and different control instructions.
  • before use, the user can have the cleaning robot record gesture recognition features and bind each one to a corresponding control instruction; for example, the user records a circle-drawing gesture recognition feature, the cleaning robot receives it, the user then taps the whole-house cleaning command, and the cleaning robot binds that gesture recognition feature to the whole-house cleaning control instruction; thereafter, whenever the circle-drawing gesture recognition feature is recognized, the whole-house cleaning instruction is executed.
  • the above-mentioned first control unit further includes: a second determination unit configured to determine whether the cleaning robot recognizes the termination gesture recognition feature of the target object and, when that feature is recognized, to terminate the cleaning operation that corresponds to the control instruction and is being performed.
  • while performing the cleaning operation corresponding to the recognized control instruction, the cleaning robot also continues to monitor other gesture recognition features the user may issue, such as the termination gesture recognition feature.
  • while the user has the cleaning robot cleaning a designated area of the home, children may be active in that area; to prevent the sweeping robot from tripping the children and creating a safety hazard, the user stops the robot by issuing the termination gesture recognition feature mid-task; if the cleaning robot recognizes the termination gesture recognition feature during operation, it immediately stops the cleaning operation being performed.
  • the above-mentioned second acquisition unit is further configured to acquire a target correspondence associated with the biometric characteristics of the target object, wherein the target correspondence is the correspondence between the target object's gesture recognition features and control instructions, and the biometric characteristics of different target objects are associated with different correspondences; the control instruction corresponding to the gesture recognition features is acquired from the target correspondence.
  • when acquiring the control instruction corresponding to the gesture recognition features, the cleaning robot first obtains, based on the acquired biometric characteristics of the target object, the correspondence between gesture recognition features and control instructions set by that target object, and then acquires the control instruction corresponding to the gesture recognition features according to that correspondence.
  • Embodiments of the present invention also provide a storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any of the above method embodiments when running.
  • the above-mentioned storage medium may be configured to store a computer program for performing the following steps:
  • Embodiments of the present invention also provide a computer-readable storage medium that stores a computer program, wherein the computer program is configured to execute the steps in any of the above method embodiments when running.
  • the computer-readable storage medium may include, but is not limited to: a USB flash drive, read-only memory (ROM), random access memory (RAM), a mobile hard disk, a magnetic disk, an optical disk, and other media that can store computer programs.
  • An embodiment of the present invention also provides an electronic device, including a memory and a processor.
  • a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
  • the above-mentioned electronic device may further include a transmission device and an input-output device, wherein the transmission device is connected to the above-mentioned processor, and the input-output device is connected to the above-mentioned processor.
  • the above-mentioned processor may be configured to perform the following steps through a computer program:
  • the modules or steps of the present invention can be implemented using general-purpose computing devices; they can be concentrated on a single computing device or distributed across a network composed of multiple computing devices; they may be implemented in program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, and in some cases the steps shown or described may be executed in a sequence different from that here; alternatively, they can each be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module; as such, the invention is not limited to any specific combination of hardware and software.
  • Figure 5 is a structural block diagram of an optional electronic device according to an embodiment of the present application; as shown in Figure 5, it includes a processor 502, a communication interface 504, a memory 506, and a communication bus 508, where the processor 502, the communication interface 504, and the memory 506 communicate with each other through the communication bus 508.
  • Memory 506 is used for storing a computer program.
  • the processor 502 is used to implement the following steps when executing the computer program stored on the memory 506:
  • the communication bus may be a PCI (Peripheral Component Interconnect, Peripheral Component Interconnect Standard) bus, or an EISA (Extended Industry Standard Architecture, Extended Industry Standard Architecture) bus, or the like.
  • the communication bus can be divided into address bus, data bus, control bus, etc. For ease of presentation, only one thick line is used in Figure 5, but it does not mean that there is only one bus or one type of bus.
  • the communication interface is used for communication between the above-mentioned electronic device and other equipment.
  • the above-mentioned memory may include RAM or non-volatile memory (non-volatile memory), for example, at least one disk memory.
  • the memory may also be at least one storage device located remotely from the aforementioned processor.
  • the above-mentioned memory 506 may include, but is not limited to, the first acquisition unit 42, the second acquisition unit 44 and the first control unit 46 in the control device of the above-mentioned device. In addition, it may also include but is not limited to other modular units in the control device of the above equipment, which will not be described again in this example.
  • the above-mentioned processor can be a general-purpose processor, which can include but is not limited to: CPU (Central Processing Unit, central processing unit), NP (Network Processor, network processor), etc.; it can also be a DSP (Digital Signal Processing, digital signal processor) ), ASIC (Application Specific Integrated Circuit, application specific integrated circuit), FPGA (Field-Programmable Gate Array, field programmable gate array) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
  • the device that implements the above method for executing a cleaning operation can be a terminal device, and the terminal device can be a smartphone (such as an Android or iOS phone), a tablet computer, a handheld computer, a mobile Internet device (MID), a PAD, or another terminal device.
  • FIG. 5 does not limit the structure of the above-mentioned electronic device.
  • the electronic device may also include more or fewer components (such as network interfaces, display devices, etc.) than shown in FIG. 5 , or have a different configuration than that shown in FIG. 5 .
  • the program can be stored in a computer-readable storage medium, and the storage medium can include: a flash disk, ROM, RAM, a magnetic disk, an optical disk, and the like.
  • if the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they can be stored in the above computer-readable storage medium.
  • based on this understanding, the technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the disclosed client can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features can be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces; the indirect coupling or communication connection of the units or modules may be electrical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution provided in this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and apparatus for executing a cleaning operation, a storage medium, and an electronic apparatus. The method includes: acquiring a biometric recognition result of a target object to obtain biometric characteristics of the target object, wherein the target object is located within a first scanning area of a mobile terminal or within a second scanning area of a cleaning robot (S202); acquiring gesture recognition features of the target object when the biometric characteristics pass verification (S204); and acquiring a control instruction corresponding to the gesture recognition features and controlling the cleaning robot to perform a cleaning operation corresponding to the control instruction (S206). This solves the problem in the related art that controlling a sweeping robot directly through voice recognition leaves the voice unverified and, because the sweeping robot is noisy while working, yields poor voice recognition and a poor experience for the user controlling it.

Description

Method and apparatus for executing a cleaning operation, storage medium, and electronic apparatus
This application claims priority to the following patent application: the Chinese patent application filed with the China National Intellectual Property Administration on April 25, 2022, with application number 202210441621.8 and invention title "Method and apparatus for executing a cleaning operation, storage medium, and electronic apparatus", the entire contents of which are incorporated into this application by reference.
Technical Field
This application relates to the field of robotics, and in particular to a method and apparatus for executing a cleaning operation, a storage medium, and an electronic apparatus.
Background
In recent years, with the improvement of living standards and the rapid development of technology, sweeping robots have brought convenience to people's daily lives, freeing their hands to a certain extent and reducing labor; however, controlling the way a sweeping robot sweeps remains a major challenge.
In the prior art, controlling the sweeping robot through voice recognition has been proposed: the user interacts with the sweeping robot by voice to send it sweeping commands and control it to clean designated areas, ensuring the robot's working efficiency and cleanliness. What this method fails to consider is that the sweeping robot generates considerable noise while running, so the recognition of the user's voice wake-up and voice interaction is poor and the user experience is unsatisfactory; furthermore, after a voice instruction passes, the sweeping robot performs the cleaning operation without any scheme for identifying the issuer of the voice instruction.
For the problem in the related art that controlling the sweeping robot directly through voice recognition leaves the voice unverified and, because of the loud noise while the sweeping robot works, yields poor voice recognition and a poor experience for the user controlling it, no effective solution has yet been proposed.
Summary
Embodiments of the present invention provide a method and apparatus for executing a cleaning operation, a storage medium, and an electronic apparatus, so as to at least solve the problem in the related art that controlling a sweeping robot through voice recognition suffers from loud noise, hence poor voice recognition and a poor experience for the user controlling the sweeping robot.
According to one aspect of the embodiments of the present invention, a method for executing a cleaning operation is provided, including: acquiring a biometric recognition result of a target object to obtain biometric characteristics of the target object, wherein the target object is located within a first scanning area of a mobile terminal or within a second scanning area of a cleaning robot; acquiring gesture recognition features of the target object when the biometric characteristics pass verification; and acquiring a control instruction corresponding to the gesture recognition features and controlling the cleaning robot to perform a cleaning operation corresponding to the control instruction.
In an exemplary embodiment, before acquiring the gesture recognition features of the target object when the biometric characteristics pass verification, the method further includes: determining whether the cleaning robot has recognized a wake-up gesture recognition feature of the target object, wherein the wake-up gesture recognition feature is used to instruct the cleaning robot to turn on a target recognition function, and the target recognition function is used to determine the control instruction corresponding to the gesture recognition features; and turning on the target recognition function when the cleaning robot has recognized the wake-up gesture recognition feature of the target object.
In an exemplary embodiment, after turning on the target recognition function, the method further includes: turning off the target recognition function when no gesture recognition feature of the target object is recognized within a preset time period after the current moment.
In an exemplary embodiment, acquiring the biometric recognition result of the target object to obtain the biometric characteristics of the target object includes at least one of the following: performing biometric recognition on the target object through the cleaning robot to obtain the biometric recognition result, and parsing the biometric characteristics of the target object from the biometric recognition result; receiving the biometric recognition result obtained by the mobile terminal performing biometric recognition on the target object, and parsing the biometric characteristics of the target object from the biometric recognition result.
In an exemplary embodiment, before acquiring the control instruction corresponding to the gesture recognition features, the method further includes: receiving a setting operation of the target object; and, in response to the setting operation, setting correspondences between different gesture recognition features and different control instructions.
In an exemplary embodiment, in the process of controlling the cleaning robot to perform the cleaning operation corresponding to the control instruction, the method includes: determining whether the cleaning robot recognizes a termination gesture recognition feature of the target object; and, when the termination gesture recognition feature is recognized, terminating the cleaning operation that corresponds to the control instruction and is being performed.
In an exemplary embodiment, acquiring the control instruction corresponding to the gesture recognition features includes: acquiring a target correspondence associated with the biometric characteristics of the target object, wherein the target correspondence is the correspondence between the target object's gesture recognition features and control instructions, and the biometric characteristics of different target objects are associated with different correspondences; and acquiring the control instruction corresponding to the gesture recognition features from the target correspondence.
According to another aspect of the embodiments of the present invention, an apparatus for executing a cleaning operation is further provided, including: a first acquisition unit, configured to acquire a biometric recognition result of a target object to obtain biometric characteristics of the target object, wherein the target object is located within a first scanning area of a mobile terminal or within a second scanning area of a cleaning robot; a second acquisition unit, configured to acquire gesture recognition features of the target object when the biometric characteristics pass verification; and a first control unit, configured to acquire a control instruction corresponding to the gesture recognition features and control the cleaning robot to perform a cleaning operation corresponding to the control instruction.
In an exemplary embodiment, the above apparatus further includes: a first determination unit, configured to determine whether the cleaning robot has recognized a wake-up gesture recognition feature of the target object, wherein the wake-up gesture recognition feature is used to instruct the cleaning robot to turn on a target recognition function, and the target recognition function is used to determine the control instruction corresponding to the gesture recognition features; and to turn on the target recognition function when the cleaning robot has recognized the wake-up gesture recognition feature of the target object.
In an exemplary embodiment, the above first determination unit is further configured to turn off the target recognition function when no gesture recognition feature of the target object is recognized within a preset time period after the current moment.
In an exemplary embodiment, the above first acquisition unit is further configured to perform biometric recognition on the target object through the cleaning robot to obtain the biometric recognition result and parse the biometric characteristics of the target object from it; or to receive the biometric recognition result obtained by the mobile terminal performing biometric recognition on the target object and parse the biometric characteristics of the target object from it.
In an exemplary embodiment, the above second acquisition unit is further configured to receive a setting operation of the target object and, in response to the setting operation, set correspondences between different gesture recognition features and different control instructions.
In an exemplary embodiment, the above first control unit further includes: a second determination unit, configured to determine whether the cleaning robot recognizes a termination gesture recognition feature of the target object and, when the termination gesture recognition feature is recognized, to terminate the cleaning operation that corresponds to the control instruction and is being performed.
In an exemplary embodiment, the above second acquisition unit is further configured to acquire a target correspondence associated with the biometric characteristics of the target object, wherein the target correspondence is the correspondence between the target object's gesture recognition features and control instructions, and the biometric characteristics of different target objects are associated with different correspondences; and to acquire the control instruction corresponding to the gesture recognition features from the target correspondence.
According to yet another aspect of the embodiments of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, wherein the computer program is configured to execute the above method for executing a cleaning operation when run.
According to yet another aspect of the embodiments of the present invention, an electronic apparatus is further provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor executes the above method for executing a cleaning operation through the computer program.
In the embodiments of the present invention, the biometric recognition result of the target object is acquired to obtain the biometric characteristics of the target object, that is, the identity of the target object is identified, wherein the target object is located within the first scanning area of the mobile terminal or within the second scanning area of the cleaning robot, that is, the target object is identified through the mobile terminal or the cleaning robot; when the biometric characteristics pass verification, the gesture recognition features of the target object are acquired; the control instruction corresponding to the gesture recognition features is acquired, and the cleaning robot is controlled to perform the cleaning operation corresponding to the control instruction. This solves the problem in the related art that controlling the sweeping robot directly through voice recognition leaves the voice unverified and, because the sweeping robot is noisy while working, yields poor voice recognition and a poor experience for the user controlling it; by first identifying the identity of the target object issuing the gesture, the safety of using the sweeping robot is improved, and the technical effect of preventing noise from degrading the intelligent control efficiency of the sweeping robot is also achieved.
Brief Description of the Drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with this application and, together with the specification, serve to explain the principles of this application.
To describe the technical solutions in the embodiments of this application or the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below; obviously, a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
Figure 1 is a schematic diagram of the hardware environment of an optional method for executing a cleaning operation according to an embodiment of this application;
Figure 2 is a flowchart of an optional method for executing a cleaning operation according to an embodiment of this application;
Figure 3 is a schematic flowchart of another optional method for executing a cleaning operation according to an embodiment of this application;
Figure 4 is a structural block diagram of an optional apparatus for executing a cleaning operation according to an embodiment of this application;
Figure 5 is a structural block diagram of an optional electronic apparatus according to an embodiment of this application.
Detailed Description
This application is described in detail below with reference to the drawings and in combination with the embodiments.
It should be noted that the embodiments in this application and the features in the embodiments can be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the specification, claims, and above drawings of this application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of this application described here can be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units not clearly listed or inherent to the process, method, product, or device.
According to one embodiment of this application, a method for executing a cleaning operation is provided. Optionally, in this embodiment, the above method can be applied to the hardware environment shown in Figure 1, which is composed of a robot 102 and a server 104. As shown in Figure 1, the robot 102 can be connected through a network to the server 104 (for example, an IoT platform or a cloud server) so as to control the robot 102.
The network may include, but is not limited to, at least one of the following: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network; the wireless network may include, but is not limited to, at least one of: WIFI (Wireless Fidelity), Bluetooth, infrared.
The method for executing a cleaning operation in the embodiments of this application may be performed by the robot 102 or the server 104 alone, or jointly by the robot 102 and the server 104. When the robot 102 performs the method, it may also be performed by a client installed on it.
Taking the robot 102 performing the method for executing a cleaning operation in this embodiment as an example, Figure 2 is a schematic flowchart of an optional method for executing a cleaning operation according to an embodiment of this application. As shown in Figure 2, the flow of the method may include the following steps:
Step S202: acquiring a biometric recognition result of a target object to obtain biometric characteristics of the target object, wherein the target object is located within a first scanning area of a mobile terminal or within a second scanning area of a cleaning robot;
Step S204: acquiring gesture recognition features of the target object when the biometric characteristics pass verification;
Step S206: acquiring a control instruction corresponding to the gesture recognition features, and controlling the cleaning robot to perform a cleaning operation corresponding to the control instruction.
Through the above steps, the biometric recognition result of the target object is acquired to obtain the biometric characteristics of the target object, that is, the identity of the target object is identified, wherein the target object is located within the first scanning area of the mobile terminal or within the second scanning area of the cleaning robot, that is, the target object is identified through the mobile terminal or the cleaning robot; when the biometric characteristics pass verification, the gesture recognition features of the target object are acquired; the control instruction corresponding to the gesture recognition features is acquired, and the cleaning robot is controlled to perform the cleaning operation corresponding to the control instruction. This solves the problem in the related art that controlling the sweeping robot directly through voice recognition leaves the voice unverified and, because the sweeping robot is noisy while working, yields poor voice recognition and a poor experience for the user controlling it. Since the identity of the target object can be identified through the mobile terminal or the sweeping robot, the safety of using the sweeping robot is improved; moreover, once verification passes, the cleaning robot need not be controlled through a noisy voice channel but only through gesture recognition, achieving the technical effect of preventing noise from degrading the intelligent control efficiency of the sweeping robot.
In an exemplary embodiment, before acquiring the gesture recognition features of the target object when the biometric characteristics pass verification, the method further includes: determining whether the cleaning robot has recognized a wake-up gesture recognition feature of the target object, wherein the wake-up gesture recognition feature is used to instruct the cleaning robot to turn on a target recognition function, and the target recognition function is used to determine the control instruction corresponding to the gesture recognition features; turning on the target recognition function when the cleaning robot has recognized the wake-up gesture recognition feature of the target object; and prohibiting turning on the target recognition function when the cleaning robot has not recognized the wake-up gesture recognition feature of the target object.
In other words, when the biometric characteristics pass verification, it is first determined whether the cleaning robot has recognized the wake-up gesture recognition feature of the target object; after the wake-up gesture recognition feature is recognized, the cleaning robot is instructed, according to it, to turn on the target recognition function, which is used to identify the control instruction corresponding to the target object's gesture recognition features and thereby control the cleaning robot; when the wake-up gesture recognition feature is not obtained, turning on the target recognition function is prohibited, and adding this recognition of the wake-up gesture recognition feature prevents misoperation by the user. For example, take the cleaning robot performing face recognition on users: a family has a host, a hostess, and their children, but the two adults' habitual movements differ, so the wake-up gesture recognition features they record are not the same; that is, the host and the hostess each have a set of self-designed gesture recognition features to control the cleaning robot. If the host or hostess accidentally lets the cleaning robot acquire biometric characteristics that pass verification while a child is playing nearby, some of the child's casual movements might falsely trigger the cleaning robot's control instructions. To avoid this situation, after the biometric characteristics pass verification, an additional wake-up gesture recognition step can be added so that only a specific gesture recognition feature further enables the cleaning robot's target recognition function; if the wake-up gesture recognition feature is not obtained, the input is judged to be a misrecognition and the target recognition function is not turned on.
In an exemplary embodiment, after turning on the target recognition function, the method further includes: turning off the target recognition function when no gesture recognition feature of the target object is recognized within a preset time period after the current moment.
It can be understood that, in actual operation, the user may accidentally make the gesture corresponding to the wake-up gesture recognition feature, or may wake up the cleaning robot and then, delayed by other matters, fail to issue the subsequent gesture recognition features that control it. In these cases, to avoid the cleaning robot remaining in the target-recognition-enabled state after being awakened, which would damage its functions and waste energy, a preset time period is set for the cleaning robot: after being awakened, if it does not recognize a gesture recognition feature of the target object within the preset time period, it actively turns off the target recognition function.
For example, the host or hostess may accidentally let the robot acquire biometric characteristics and thus pass identity verification, and a child at home may, imitating the adults' movements, make the corresponding wake-up gesture recognition feature; after recognizing it, the cleaning robot turns on the target recognition function and waits to receive gesture recognition features. The child, however, will not issue further gesture recognition features to the cleaning robot, and the cleaning robot cannot remain indefinitely in a state of receiving them; therefore, after being awakened, if the cleaning robot does not recognize a gesture recognition feature of the target object within the preset time period, it actively turns off the target recognition function to avoid damage to the system.
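The wake-up-then-timeout behavior described above amounts to a small state machine: the target recognition function is enabled by the wake-up gesture and actively disabled again when no command gesture arrives within the preset time period. The Python sketch below is illustrative only, not the patent's implementation; the names (`GestureGate`, `on_wake_gesture`, `on_command_gesture`) and the injectable clock are assumptions of the example.

```python
import time

class GestureGate:
    """Enables gesture-command recognition only after a wake-up gesture,
    and disables it again if no command gesture arrives in time."""

    def __init__(self, timeout_s=10.0, clock=time.monotonic):
        self.timeout_s = timeout_s   # the "preset time period"
        self.clock = clock           # injectable clock, for deterministic tests
        self.enabled = False         # state of the "target recognition function"
        self.enabled_at = None

    def on_wake_gesture(self):
        # The wake-up gesture turns the target recognition function on.
        self.enabled = True
        self.enabled_at = self.clock()

    def on_command_gesture(self, gesture):
        # Reject gestures while the function is off, or after the timeout.
        if not self.enabled:
            return None
        if self.clock() - self.enabled_at > self.timeout_s:
            self.enabled = False     # actively turn the function off
            return None
        self.enabled = False         # accept one command per wake-up
        return gesture

# Simulated clock so the example is deterministic.
now = [0.0]
gate = GestureGate(timeout_s=10.0, clock=lambda: now[0])

gate.on_wake_gesture()
now[0] += 3.0
accepted = gate.on_command_gesture("circle")   # within the window

gate.on_wake_gesture()
now[0] += 30.0
rejected = gate.on_command_gesture("circle")   # timed out, function closed
```

In a real robot the clock would simply be `time.monotonic` and the wake/command callbacks would be driven by the gesture recognizer.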
In an exemplary embodiment, acquiring the biometric recognition result of the target object to obtain the biometric characteristics of the target object includes at least one of the following: performing biometric recognition on the target object through the cleaning robot to obtain the biometric recognition result, and parsing the biometric characteristics of the target object from the biometric recognition result; receiving the biometric recognition result obtained by the mobile terminal performing biometric recognition on the target object, and parsing the biometric characteristics of the target object from the biometric recognition result.
In practical applications, the user can verify their identity either through the recognition unit of the cleaning robot or through the mobile terminal device bound to the cleaning robot, so that the user can control the cleaning robot anytime and anywhere. For example, while at work the user suddenly remembers that a visitor will arrive at home today and the home has not yet been cleaned; to avoid giving the guest a bad impression of a messy home, the user controls the cleaning robot through the mobile phone, directing it to clean certain designated areas of the home.
It should be noted that the mobile terminal device may be a mobile phone, a tablet, a smartwatch, or another mobile smart device; this application does not limit this.
In an exemplary embodiment, before acquiring the control instruction corresponding to the gesture recognition features, the method further includes: receiving a setting operation of the target object; and, in response to the setting operation, setting correspondences between different gesture recognition features and different control instructions.
To ensure that the user can accurately control the cleaning robot to clean designated areas through gesture recognition features, before use the user can have the cleaning robot record gesture recognition features and bind each one to a corresponding control instruction. For example, the user records a circle-drawing gesture recognition feature; after the cleaning robot receives it, the user taps the whole-house cleaning command, and the cleaning robot binds that gesture recognition feature to the whole-house cleaning control instruction; thereafter, whenever the circle-drawing gesture recognition feature is recognized, the whole-house cleaning instruction is executed.
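The set-up step just described, recording a gesture feature and binding it to a control instruction, is essentially maintaining a lookup table. A minimal sketch follows; the names (`GestureBindings`, `bind`, `instruction_for`) and the string tokens are invented for illustration and are not part of the patent.

```python
class GestureBindings:
    """Records gesture recognition features and binds each one to a
    control instruction, mirroring the set-up step described above."""

    def __init__(self):
        self._bindings = {}

    def bind(self, gesture_feature, instruction):
        # e.g. bind the circle-drawing gesture to whole-house cleaning
        self._bindings[gesture_feature] = instruction

    def instruction_for(self, gesture_feature):
        # Returns the bound instruction, or None for an unknown gesture
        # (the robot would then prompt for a correct gesture).
        return self._bindings.get(gesture_feature)

bindings = GestureBindings()
bindings.bind("draw_circle", "clean_whole_house")

cmd = bindings.instruction_for("draw_circle")
unknown = bindings.instruction_for("wave")
```

A dictionary keeps the lookup O(1) per recognized gesture, which matters when the recognizer runs on every camera frame.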
Based on the above steps, in the process of controlling the cleaning robot to perform the cleaning operation corresponding to the control instruction, the method includes: determining whether the cleaning robot recognizes a termination gesture recognition feature of the target object; and, when the termination gesture recognition feature is recognized, terminating the cleaning operation that corresponds to the control instruction and is being performed.
While performing the cleaning operation corresponding to the recognized control instruction, the cleaning robot also continues to monitor other gesture recognition features the user may issue, such as the termination gesture recognition feature. While the user has the cleaning robot cleaning a designated area of the home, children may be active in that area; to prevent the sweeping robot from tripping the children and creating a safety hazard, the user stops the robot by issuing the termination gesture recognition feature mid-task. If the cleaning robot recognizes the termination gesture recognition feature during operation, it immediately stops the cleaning operation being performed.
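The monitoring described above can be sketched as a cleaning loop that checks the incoming gesture stream at every step and aborts on a termination gesture. This is a simplified illustration; `run_cleaning`, the fixed step count, and the `"terminate"` token are assumptions of the example, not the patent's implementation.

```python
def run_cleaning(instruction, gesture_stream):
    """Performs the cleaning steps for `instruction` while watching the
    gesture stream for a termination gesture. `gesture_stream` yields the
    gesture observed at each step, or None when no gesture is seen.
    `instruction` is carried only to show what is being interrupted."""
    steps_done = 0
    total_steps = 5                    # stand-in for the real cleaning work
    for gesture in gesture_stream:
        if gesture == "terminate":
            return ("stopped", steps_done)   # stop immediately mid-task
        steps_done += 1
        if steps_done >= total_steps:
            break
    return ("finished", steps_done)

# The user issues the termination gesture after two completed steps.
status, done = run_cleaning(
    "clean_area", iter([None, None, "terminate", None, None]))
```

A real robot would poll the camera between motion-control commands rather than iterate a pre-built list, but the control flow is the same.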
In an exemplary embodiment, acquiring the control instruction corresponding to the gesture recognition features includes: acquiring a target correspondence associated with the biometric characteristics of the target object, wherein the target correspondence is the correspondence between the target object's gesture recognition features and control instructions, and the biometric characteristics of different target objects are associated with different correspondences; and acquiring the control instruction corresponding to the gesture recognition features from the target correspondence.
Different users in a household may have different gesture habits; therefore, when acquiring the control instruction corresponding to the gesture recognition features, the cleaning robot first obtains, according to the acquired biometric characteristics of the target object, the correspondence between gesture recognition features and control instructions set by that target object, and then acquires the control instruction corresponding to the gesture recognition features according to that correspondence.
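The per-user correspondence described above is a two-level lookup: the verified identity first selects that user's table, and the gesture then selects the instruction within it. A minimal sketch, with hypothetical user IDs, gesture names, and instruction tokens:

```python
# Each verified user (keyed here by an invented biometric/face ID) has their
# own gesture-to-instruction table; the same gesture may mean different
# things for different users.
correspondences = {
    "user_host":    {"draw_circle": "clean_whole_house",
                     "swipe_left":  "clean_kitchen"},
    "user_hostess": {"draw_circle": "clean_living_room"},
}

def instruction_for(user_id, gesture):
    # Select the user's own correspondence first, then the instruction.
    table = correspondences.get(user_id)
    if table is None:
        return None          # unknown user: identity check failed upstream
    return table.get(gesture)

a = instruction_for("user_host", "draw_circle")
b = instruction_for("user_hostess", "draw_circle")
```

Note how the same `draw_circle` gesture resolves to different instructions depending on whose biometric characteristics passed verification.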
To better understand the technical solutions of the embodiments of the present invention and the optional embodiments, the flow of the above method for executing a cleaning operation is explained below with an example, which is not intended to limit the technical solutions of the embodiments of the present invention.
As shown in Figure 3, the flow of the method for executing a cleaning operation in this optional embodiment may include the following steps:
Step S302: record face recognition data to the server through the mobile phone APP matched with the sweeper (corresponding to the above cleaning robot);
Step S304: start;
Step S306: the sweeper (corresponding to the above cleaning robot) performs face recognition identity verification on the user through a preset AI camera; if the verification passes, execute step S308; if not, execute step S316;
Step S308: capture a gesture recognition feature image of the user through the AI camera;
Step S310: recognize the acquired gesture recognition feature image through the gesture recognition algorithm model built into the sweeper (corresponding to the above cleaning robot), compare the recognized gesture recognition feature with the gesture recognition features in the model library, and judge whether it belongs to the set of gesture recognition features stored in the model library; if not, execute step S312; if so, execute step S314;
Step S312: prompt the user to input a correct gesture recognition feature;
Step S314: acquire the control instruction corresponding to the gesture recognition feature, and issue the control instruction to the motion control module to control the operation of the sweeper (corresponding to the above cleaning robot);
Step S316: end.
In this embodiment, a gesture recognition algorithm is provided for the sweeper; the gesture recognition feature issued by the user is acquired, recognized, and compared with the gesture recognition features in the model library; on a successful match, the corresponding control instruction is issued, and the sweeper performs the corresponding sweeping operation according to the control instruction. The sweeper prevents misrecognition through the double verification of face recognition plus the added wake-up gesture recognition feature. Through the above steps, the problems in the related art that controlling the sweeping robot through voice recognition suffers from loud noise, hence poor voice recognition and a poor user experience, are solved, achieving the technical effect of preventing noise from degrading the intelligent control efficiency of the sweeping robot.
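Steps S302-S316 above can be condensed into a single dispatch function. The sketch below is illustrative only: `handle_request`, the enrolled-face set, and the gesture library stand in for the AI camera, the server-side face data, and the model library described in the embodiment.

```python
def handle_request(face_id, gesture, enrolled_faces, gesture_library):
    """End-to-end sketch of steps S302-S316: verify identity by face,
    match the gesture against the model library, then dispatch the bound
    control instruction (or a prompt / rejection)."""
    # S306: face-recognition identity verification
    if face_id not in enrolled_faces:
        return ("rejected", None)                 # S316: end
    # S310: compare the gesture with the model library
    instruction = gesture_library.get(gesture)
    if instruction is None:
        return ("prompt_correct_gesture", None)   # S312: re-prompt the user
    # S314: issue the control instruction to the motion control module
    return ("dispatched", instruction)

enrolled = {"face_alice"}                          # recorded via the app (S302)
library = {"draw_circle": "clean_whole_house"}     # the model library

ok = handle_request("face_alice", "draw_circle", enrolled, library)
bad_face = handle_request("face_bob", "draw_circle", enrolled, library)
bad_gesture = handle_request("face_alice", "wave", enrolled, library)
```

In a real system the membership tests would be replaced by face-embedding and gesture-classifier comparisons, but the branch structure matches the flowchart.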
It should be noted that, for simplicity of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should know that this application is not limited by the described order of actions, because according to this application some steps can be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
Through the description of the above implementations, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence, or the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as ROM (Read-Only Memory)/RAM (Random Access Memory), a magnetic disk, or an optical disk) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the various embodiments of this application.
According to yet another aspect of the embodiments of this application, an apparatus for executing a cleaning operation used to implement the above method for executing a cleaning operation is also provided. Figure 4 is a structural block diagram of an optional apparatus for executing a cleaning operation according to an embodiment of this application. As shown in Figure 4, the apparatus may include:
a first acquisition unit 42, configured to acquire a biometric recognition result of a target object to obtain biometric characteristics of the target object, wherein the target object is located within a first scanning area of a mobile terminal or within a second scanning area of a cleaning robot;
a second acquisition unit 44, configured to acquire gesture recognition features of the target object when the biometric characteristics pass verification;
a first control unit 46, configured to acquire a control instruction corresponding to the gesture recognition features and control the cleaning robot to perform a cleaning operation corresponding to the control instruction.
It should be noted that the first acquisition unit 42 in this embodiment can be used to execute the above step S202, the second acquisition unit 44 in this embodiment can be used to execute the above step S204, and the first control unit 46 in this embodiment can be used to execute the above step S206.
Through the above apparatus, the biometric recognition result of the target object is acquired to obtain the biometric characteristics of the target object, that is, the identity of the target object is identified, wherein the target object is located within the first scanning area of the mobile terminal or within the second scanning area of the cleaning robot, that is, the target object is identified through the mobile terminal or the cleaning robot; when the biometric characteristics pass verification, the gesture recognition features of the target object are acquired; the control instruction corresponding to the gesture recognition features is acquired, and the cleaning robot is controlled to perform the cleaning operation corresponding to the control instruction. This solves the problem in the related art that controlling the sweeping robot directly through voice recognition leaves the voice unverified and, because the sweeping robot is noisy while working, yields poor voice recognition and a poor experience for the user controlling it. Since the identity of the target object can be identified through the mobile terminal or the sweeping robot, the safety of using the sweeping robot is improved; moreover, once verification passes, the cleaning robot need not be controlled through a noisy voice channel but only through gesture recognition, achieving the technical effect of preventing noise from degrading the intelligent control efficiency of the sweeping robot.
In other words, when the biometric characteristics pass verification, it is first determined whether the cleaning robot has recognized the wake-up gesture recognition feature of the target object; after the wake-up gesture recognition feature is recognized, the cleaning robot is instructed, according to it, to turn on the target recognition function, which is used to identify the control instruction corresponding to the target object's gesture recognition features and thereby control the cleaning robot; when the wake-up gesture recognition feature is not obtained, turning on the target recognition function is prohibited, and adding this recognition of the wake-up gesture recognition feature prevents misoperation by the user. For example, take the cleaning robot performing face recognition on users: a family has a host, a hostess, and their children, but the two adults' habitual movements differ, so the wake-up gesture recognition features they record are not the same; that is, the host and the hostess each have a set of self-designed gesture recognition features to control the cleaning robot. If the host or hostess accidentally lets the cleaning robot acquire biometric characteristics that pass verification while a child is playing nearby, some of the child's casual movements might falsely trigger the cleaning robot's control instructions. To avoid this situation, after the biometric characteristics pass verification, an additional wake-up gesture recognition step can be added so that only a specific gesture recognition feature further enables the cleaning robot's target recognition function; if the wake-up gesture recognition feature is not recognized, the input is judged to be a misrecognition and the target recognition function is not turned on.
In an exemplary embodiment, the above first determination unit is further configured to turn off the target recognition function when no gesture recognition feature of the target object is recognized within a preset time period after the current moment.
It can be understood that, in actual operation, the user may accidentally make the gesture corresponding to the wake-up gesture recognition feature, or may wake up the cleaning robot and then, delayed by other matters, fail to issue the subsequent gesture recognition features that control it. In these cases, to avoid the cleaning robot remaining in the target-recognition-enabled state after being awakened, which would damage its functions and waste energy, a preset time period is set for the cleaning robot: after being awakened, if it does not recognize a gesture recognition feature of the target object within the preset time period, it actively turns off the target recognition function.
For example, the host or hostess may accidentally let the robot acquire biometric characteristics and thus pass identity verification, and a child at home may, imitating the adults' movements, make the corresponding wake-up gesture recognition feature; after recognizing it, the cleaning robot turns on the target recognition function and waits to receive gesture recognition features. The child, however, will not issue further gesture recognition features to the cleaning robot, and the cleaning robot cannot remain indefinitely in a state of receiving them; therefore, after being awakened, if the cleaning robot does not recognize a gesture recognition feature of the target object within the preset time period, it actively turns off the target recognition function to avoid damage to the system.
In an exemplary embodiment, the above first acquisition unit is further configured to perform biometric recognition on the target object through the cleaning robot to obtain the biometric recognition result and parse the biometric characteristics of the target object from it; or to receive the biometric recognition result obtained by the mobile terminal performing biometric recognition on the target object and parse the biometric characteristics of the target object from it.
In practical applications, the user can verify their identity either through the recognition unit of the cleaning robot or through the mobile terminal device bound to the cleaning robot, so that the user can control the cleaning robot anytime and anywhere. For example, while at work the user suddenly remembers that a visitor will arrive at home today and the home has not yet been cleaned; to avoid giving the guest a bad impression of a messy home, the user controls the cleaning robot through the mobile phone, directing it to clean certain designated areas of the home.
In an exemplary embodiment, the above second acquisition unit is further configured to receive a setting operation of the target object and, in response to the setting operation, set correspondences between different gesture recognition features and different control instructions.
To ensure that the user can accurately control the cleaning robot to clean designated areas through gesture recognition features, before use the user can have the cleaning robot record gesture recognition features and bind each one to a corresponding control instruction. For example, the user records a circle-drawing gesture recognition feature; after the cleaning robot receives it, the user taps the whole-house cleaning command, and the cleaning robot binds that gesture recognition feature to the whole-house cleaning control instruction; thereafter, whenever the circle-drawing gesture recognition feature is recognized, the whole-house cleaning instruction is executed.
In an exemplary embodiment, the above first control unit further includes: a second determination unit, configured to determine whether the cleaning robot recognizes a termination gesture recognition feature of the target object and, when the termination gesture recognition feature is recognized, to terminate the cleaning operation that corresponds to the control instruction and is being performed.
While performing the cleaning operation corresponding to the recognized control instruction, the cleaning robot also continues to monitor other gesture recognition features the user may issue, such as the termination gesture recognition feature. While the user has the cleaning robot cleaning a designated area of the home, children may be active in that area; to prevent the sweeping robot from tripping the children and creating a safety hazard, the user stops the robot by issuing the termination gesture recognition feature mid-task. If the cleaning robot recognizes the termination gesture recognition feature during operation, it immediately stops the cleaning operation being performed.
In an exemplary embodiment, the above second acquisition unit is further configured to acquire a target correspondence associated with the biometric characteristics of the target object, wherein the target correspondence is the correspondence between the target object's gesture recognition features and control instructions, and the biometric characteristics of different target objects are associated with different correspondences; and to acquire the control instruction corresponding to the gesture recognition features from the target correspondence.
Different users in a household may have different gesture habits; therefore, when acquiring the control instruction corresponding to the gesture recognition features, the cleaning robot first obtains, according to the acquired biometric characteristics of the target object, the correspondence between gesture recognition features and control instructions set by that target object, and then acquires the control instruction corresponding to the gesture recognition features according to that correspondence.
An embodiment of the present invention also provides a storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any of the above method embodiments when run.
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for executing the following steps:
S1: acquiring a biometric recognition result of a target object to obtain biometric characteristics of the target object, wherein the target object is located within a first scanning area of a mobile terminal or within a second scanning area of a cleaning robot;
S2: acquiring gesture recognition features of the target object when the biometric characteristics pass verification;
S3: acquiring a control instruction corresponding to the gesture recognition features, and controlling the cleaning robot to perform a cleaning operation corresponding to the control instruction.
An embodiment of the present invention also provides a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps in any of the above method embodiments when run.
In an exemplary embodiment, the above computer-readable storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk, an optical disk, and other media that can store a computer program.
An embodiment of the present invention also provides an electronic apparatus, including a memory and a processor; a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any of the above method embodiments.
In an exemplary embodiment, the above electronic apparatus may further include a transmission device and an input-output device, wherein the transmission device is connected to the above processor, and the input-output device is connected to the above processor.
In an exemplary embodiment, the above processor may be configured to execute the following steps through a computer program:
S1: acquiring a biometric recognition result of a target object to obtain biometric characteristics of the target object, wherein the target object is located within a first scanning area of a mobile terminal or within a second scanning area of a cleaning robot;
S2: acquiring gesture recognition features of the target object when the biometric characteristics pass verification;
S3: acquiring a control instruction corresponding to the gesture recognition features, and controlling the cleaning robot to perform a cleaning operation corresponding to the control instruction.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented using general-purpose computing devices; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices; they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, and in some cases the steps shown or described can be executed in an order different from that here, or they can each be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. In this way, the present invention is not limited to any specific combination of hardware and software.
Figure 5 is a structural block diagram of an optional electronic apparatus according to an embodiment of this application. As shown in Figure 5, it includes a processor 502, a communication interface 504, a memory 506, and a communication bus 508, where the processor 502, the communication interface 504, and the memory 506 communicate with each other through the communication bus 508, wherein:
the memory 506 is used for storing a computer program;
the processor 502 is used for implementing the following steps when executing the computer program stored in the memory 506:
S1: acquiring a biometric recognition result of a target object to obtain biometric characteristics of the target object, wherein the target object is located within a first scanning area of a mobile terminal or within a second scanning area of a cleaning robot;
S2: acquiring gesture recognition features of the target object when the biometric characteristics pass verification;
S3: acquiring a control instruction corresponding to the gesture recognition features, and controlling the cleaning robot to perform a cleaning operation corresponding to the control instruction.
Optionally, in this embodiment, the communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one thick line is used in Figure 5, but this does not mean that there is only one bus or one type of bus. The communication interface is used for communication between the above electronic apparatus and other devices.
The above memory may include RAM and may also include non-volatile memory, for example, at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
As an example, the above memory 506 may include, but is not limited to, the first acquisition unit 42, the second acquisition unit 44, and the first control unit 46 in the above apparatus for executing a cleaning operation. In addition, it may also include, but is not limited to, other module units in the above apparatus, which are not repeated in this example.
The above processor may be a general-purpose processor, which may include, but is not limited to: a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments, which are not repeated here.
Those of ordinary skill in the art can understand that the structure shown in Figure 5 is only illustrative; the device implementing the above method for executing a cleaning operation may be a terminal device, and the terminal device may be a smartphone (such as an Android or iOS phone), a tablet computer, a handheld computer, a mobile Internet device (MID), a PAD, or another terminal device. Figure 5 does not limit the structure of the above electronic apparatus; for example, the electronic apparatus may also include more or fewer components (such as a network interface, a display device, etc.) than shown in Figure 5, or have a configuration different from that shown in Figure 5.
Those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing hardware related to the terminal device; the program can be stored in a computer-readable storage medium, and the storage medium can include: a flash disk, ROM, RAM, a magnetic disk, an optical disk, and the like.
The serial numbers of the above embodiments of this application are for description only and do not represent the superiority or inferiority of the embodiments.
If the integrated units in the above embodiments are implemented in the form of software functional units and sold or used as independent products, they can be stored in the above computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
In the above embodiments of this application, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed client can be implemented in other ways. The apparatus embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution provided in this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated units can be implemented in the form of hardware or in the form of software functional units.
The above are only preferred implementations of this application. It should be pointed out that, for those of ordinary skill in the art, several improvements and refinements can be made without departing from the principles of this application, and these improvements and refinements should also be regarded as within the protection scope of this application.

Claims (20)

  1.  A method for executing a cleaning operation, characterized by comprising:
    acquiring a biometric recognition result of a target object to obtain biometric characteristics of the target object, wherein the target object is located within a first scanning area of a mobile terminal or within a second scanning area of a cleaning robot;
    acquiring gesture recognition features of the target object when the biometric characteristics pass verification;
    acquiring a control instruction corresponding to the gesture recognition features, and controlling the cleaning robot to perform a cleaning operation corresponding to the control instruction.
  2.  The method according to claim 1, characterized in that, before acquiring the gesture recognition features of the target object when the biometric characteristics pass verification, the method further comprises:
    determining whether the cleaning robot has recognized a wake-up gesture recognition feature of the target object, wherein the wake-up gesture recognition feature is used to instruct the cleaning robot to turn on a target recognition function, and the target recognition function is used to determine the control instruction corresponding to the gesture recognition features;
    turning on the target recognition function when the cleaning robot has recognized the wake-up gesture recognition feature of the target object.
  3.  The method according to claim 2, characterized in that turning on the target recognition function is prohibited when the cleaning robot has not recognized the wake-up gesture recognition feature of the target object.
  4.  The method according to claim 2, characterized in that, after turning on the target recognition function, the method further comprises:
    turning off the target recognition function when no gesture recognition feature of the target object is recognized within a preset time period after the current moment.
  5.  The method according to claim 1, characterized in that acquiring the biometric recognition result of the target object to obtain the biometric characteristics of the target object comprises:
    performing biometric recognition on the target object through the cleaning robot to obtain the biometric recognition result, and parsing the biometric characteristics of the target object from the biometric recognition result.
  6.  The method according to claim 1, characterized in that acquiring the biometric recognition result of the target object to obtain the biometric characteristics of the target object comprises:
    receiving the biometric recognition result obtained by the mobile terminal performing biometric recognition on the target object, and parsing the biometric characteristics of the target object from the biometric recognition result.
  7.  The method according to claim 1, characterized in that, before acquiring the control instruction corresponding to the gesture recognition features, the method further comprises:
    receiving a setting operation of the target object;
    in response to the setting operation, setting correspondences between different gesture recognition features and different control instructions.
  8.  The method according to any one of claims 1 to 7, characterized in that, in the process of controlling the cleaning robot to perform the cleaning operation corresponding to the control instruction, the method comprises:
    determining whether the cleaning robot recognizes a termination gesture recognition feature of the target object;
    terminating, when the termination gesture recognition feature is recognized, the cleaning operation that corresponds to the control instruction and is being performed.
  9.  The method according to claim 1 or 7, characterized in that acquiring the control instruction corresponding to the gesture recognition features comprises:
    acquiring a target correspondence associated with the biometric characteristics of the target object, wherein the target correspondence is the correspondence between the gesture recognition features of the target object and control instructions, and the biometric characteristics of different target objects are associated with different correspondences;
    acquiring the control instruction corresponding to the gesture recognition features from the target correspondence.
  10.  The method according to claim 1, characterized in that acquiring the gesture recognition features of the target object comprises:
    capturing a gesture recognition feature image of the target object;
    recognizing the gesture recognition feature image using a built-in gesture recognition algorithm to obtain the gesture recognition features.
  11.  The method according to claim 1, characterized in that, when the gesture recognition features belong to pre-stored gesture recognition features, the control instruction corresponding to the recognized gesture recognition features is acquired.
  12.  The method according to claim 1, characterized in that, when the gesture recognition features do not belong to pre-stored gesture recognition features, the target object is prompted to input correct gesture recognition features.
  13.  A method for executing a cleaning operation, characterized by being applied to a cleaning robot, the method comprising:
    acquiring an identity verification result of face recognition performed on a target object;
    acquiring gesture recognition features of the target object when the identity verification of the target object passes;
    acquiring a control instruction corresponding to the gesture recognition features, and controlling the cleaning robot to perform a cleaning operation corresponding to the control instruction.
  14.  The method according to claim 13, characterized in that, before acquiring the gesture recognition features of the target object, the method further comprises:
    determining whether the cleaning robot has recognized a wake-up gesture recognition feature of the target object, wherein the wake-up gesture recognition feature is used to instruct the cleaning robot to turn on a target recognition function, and the target recognition function is used to determine the control instruction corresponding to the gesture recognition features;
    turning on the target recognition function when the cleaning robot has recognized the wake-up gesture recognition feature of the target object.
  15.  The method according to claim 13, characterized in that, after turning on the target recognition function, the method further comprises:
    turning off the target recognition function when no gesture recognition feature of the target object is recognized within a preset time period after the current moment.
  16.  The method according to claim 13, characterized in that, in the process of controlling the cleaning robot to perform the cleaning operation corresponding to the control instruction, the method comprises:
    determining whether the cleaning robot recognizes a termination gesture recognition feature of the target object;
    terminating, when the termination gesture recognition feature is recognized, the cleaning operation that corresponds to the control instruction and is being performed.
  17.  The method according to claim 13, characterized in that acquiring the control instruction corresponding to the gesture recognition features comprises:
    acquiring a target correspondence associated with the face recognition result of the target object, wherein the target correspondence is the correspondence between the gesture recognition features of the target object and control instructions, and the face recognition results of different target objects are associated with different correspondences;
    acquiring the control instruction corresponding to the gesture recognition features from the target correspondence.
  18.  An apparatus for executing a cleaning operation, characterized by comprising:
    a first acquisition unit, configured to acquire a biometric recognition result of a target object to obtain biometric characteristics of the target object, wherein the target object is located within a first scanning area of a mobile terminal or within a second scanning area of a cleaning robot;
    a second acquisition unit, configured to acquire gesture recognition features of the target object when the biometric characteristics pass verification;
    a first control unit, configured to acquire a control instruction corresponding to the gesture recognition features and control the cleaning robot to perform a cleaning operation corresponding to the control instruction.
  19.  A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program, when run, executes the method according to any one of claims 1 to 17.
  20.  An electronic apparatus, comprising a memory and a processor, characterized in that a computer program is stored in the memory, and the processor is configured to execute, through the computer program, the method according to any one of claims 1 to 17.
PCT/CN2023/088040 2022-04-25 2023-04-13 Method and apparatus for executing a cleaning operation, storage medium, and electronic apparatus WO2023207611A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210441621.8 2022-04-25
CN202210441621.8A CN116982883A (zh) Method and apparatus for executing a cleaning operation, storage medium, and electronic apparatus

Publications (1)

Publication Number Publication Date
WO2023207611A1 true WO2023207611A1 (zh) 2023-11-02

Family

ID=88517460

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/088040 WO2023207611A1 (zh) 2022-04-25 2023-04-13 Cleaning operation execution method and apparatus, storage medium, and electronic apparatus

Country Status (2)

Country Link
CN (1) CN116982883A (zh)
WO (1) WO2023207611A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104793499A (zh) * 2014-01-21 2015-07-22 上海科斗电子科技有限公司 Intelligent interaction system and software system thereof
CN107773169A (zh) * 2017-12-04 2018-03-09 珠海格力电器股份有限公司 Control method and control device for sweeping robot, and sweeping robot
CN108885456A (zh) * 2016-02-15 2018-11-23 罗伯特有限责任公司 Method for controlling an autonomous mobile robot
CN109199240A (zh) * 2018-07-24 2019-01-15 上海斐讯数据通信技术有限公司 Gesture-control-based sweeping robot control method and system
CN112545373A (zh) * 2019-09-26 2021-03-26 珠海市一微半导体有限公司 Control method for sweeping robot, sweeping robot, and medium
CN113116224A (zh) * 2020-01-15 2021-07-16 科沃斯机器人股份有限公司 Robot and control method thereof
CN113679298A (zh) * 2021-08-27 2021-11-23 美智纵横科技有限责任公司 Robot control method, control device, robot, and readable storage medium

Also Published As

Publication number Publication date
CN116982883A (zh) 2023-11-03

Similar Documents

Publication Publication Date Title
EP2942698B1 (en) Non-contact gesture control method, and electronic terminal device
CN110535732B (zh) Device control method and apparatus, electronic device, and storage medium
WO2018006875A1 (zh) Robot-based mode switching method and apparatus, and computer storage medium
CN108986821B (zh) Method and device for setting relationship between a room and a device
CN112545373B (zh) Control method for sweeping robot, sweeping robot, and medium
WO2018006374A1 (zh) Function recommendation method and system based on active wake-up, and robot
US10991372B2 (en) Method and apparatus for activating device in response to detecting change in user head feature, and computer readable storage medium
US11216543B2 (en) One-button power-on processing method and terminal thereof
EP3751395A1 (en) Information exchange method, device, storage medium, and electronic device
TW201248495A (en) Voice control system and method thereof
CN107766111B (zh) Application interface switching method and electronic terminal
WO2018045808A1 (zh) Robot and motion control method and apparatus thereof
CN110769319A (zh) Standby wake-up interaction method and apparatus
WO2018195976A1 (zh) Fingerprint-recognition-based power-on method and device
CN110399708A (zh) Dual identity authentication method and apparatus, and electronic device
WO2020029496A1 (zh) Information push method and apparatus
CN112908326A (zh) Learning and application method and apparatus for home voice control
CN106887228B (zh) Voice control method and apparatus for robot, and robot
WO2023207611A1 (zh) Cleaning operation execution method and apparatus, storage medium, and electronic apparatus
CN112363861A (zh) Voice interaction method and apparatus for subway ticket purchasing
CN112447177A (zh) Full-duplex voice dialogue method and system
TWI652619B (zh) Booting system applied to smart robot and booting method thereof
CN108055655A (zh) Method, apparatus, device, and storage medium for adding a friend via a voice device
WO2019242249A1 (zh) Interface display method and electronic device
CN114211486A (zh) Robot control method, robot, and storage medium

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23795044

Country of ref document: EP

Kind code of ref document: A1