WO2022179239A1 - Touch behavior recognition method, apparatus, and device - Google Patents

Touch behavior recognition method, apparatus, and device

Info

Publication number
WO2022179239A1
Authority
WO
WIPO (PCT)
Prior art keywords
touch
action
touch action
duration
threshold
Prior art date
Application number
PCT/CN2021/136066
Other languages
English (en)
French (fr)
Inventor
陈维 (Chen Wei)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2022179239A1



Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0412: Digitisers structurally integrated in a display
    • G06F 3/0416: Control or interface arrangements specially adapted for digitisers
    • G06F 3/0418: Control or interface arrangements for error correction or compensation, e.g. based on parallax, calibration or alignment
    • G06F 3/04186: Touch location disambiguation
    • G06F 3/044: Digitisers characterised by capacitive transducing means
    • G06F 3/0447: Position sensing using the local deformation of sensor cells
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing

Definitions

  • the embodiments of the present application relate to the field of artificial intelligence, and in particular, to a touch behavior recognition method, apparatus, and device.
  • Touch sensing in robots helps robots understand how objects interact in the real world, depending on their weight and stiffness: how a surface feels when touched, how it deforms when touched, and how it moves when pushed.
  • Only by equipping robots with advanced touch sensors, known as "tactile sensing" systems, can robots become aware of their surroundings, stay away from potentially damaging influences, and gather information for subsequent tasks such as in-hand manipulation.
  • Due to the lack of effective application of tactile sensing technology, most current robotic interaction systems have inaccurate and unstable movements, and the interaction process is "clumsy", which greatly limits their interaction and cognitive abilities.
  • Multi-channel touch sensors can recognize more complex touch behaviors such as sliding and zooming.
  • Multi-channel capacitive touch sensors are generally presented in the form of touch pads and are mostly used in touch screens.
  • touch sensors are generally installed locally to identify the touch behavior of the touched part. For example, a touch sensor is installed on the head of the robot. When a single contact with the head is sensed, the robot may give corresponding feedback (such as playing a certain voice or displaying a certain expression on the screen).
  • each touch action is isolated.
  • as a result, the correct touch action cannot be continuously reported, thereby reducing the accuracy of touch action recognition and, in turn, the accuracy of touch behavior recognition.
  • the present application provides a touch behavior recognition method, related devices and equipment, which are used to jointly determine the touch action type according to the touch duration of the touch action and the vibration amplitude of the touch action, thereby improving the accuracy of touch action type recognition.
  • determining the touch behavior according to the touch action type recognized with improved accuracy can, in turn, improve the accuracy of touch behavior recognition.
  • a first aspect of the present application provides a touch behavior recognition method, which is applied to a robot, and the robot includes a capacitive touch sensor, an inertial measurement unit (IMU) and a processor.
  • the capacitive touch sensor detects the touch state information of the touch action, where the touch state information indicates a touch state or a non-touch state;
  • the IMU detects the acceleration information of the touch action;
  • the processor determines the touch duration of the touch action according to the touch state information of the touch action, and determines the touch action type according to the touch duration of the touch action and the vibration amplitude of the touch action, where the vibration amplitude of the touch action is determined according to the acceleration information of the touch action, and the touch action type is used to determine the touch behavior.
  • in this way, the touch duration of the touch action is determined from the detected touch state information, the vibration amplitude of the touch action is determined from the acceleration information, and the two jointly determine the touch action type, which improves the accuracy of touch action type recognition. Based on this, determining the touch behavior according to the touch action type recognized with improved accuracy can improve the accuracy of touch behavior recognition.
  • the processor first determines, according to the touch duration of each touch action, a time interval threshold corresponding to each touch action, where the time interval threshold is positively correlated with the touch duration of the touch action; the processor then splits touch actions according to the time interval threshold.
  • the time interval threshold between different touch actions is positively correlated with the touch duration of the latest touch action or touch sub-action, thereby enabling adaptive update of the time interval threshold. Based on this, the dynamic time interval threshold setting can better determine the continuity of the touch action.
  • the processor can further determine the touch start time point and the touch termination time point of the touch action according to the touch state information of the touch action, and, according to the touch start time point and the touch termination time point, determine the touch duration of the touch action and the time interval between every two adjacent touch actions. Based on this, when the time interval is greater than or equal to a first time interval threshold, the processor determines that the touch action is a new touch action, where the first time interval threshold is the time interval threshold corresponding to the previous touch action of the two adjacent touch actions; if the time interval is less than the first time interval threshold, the processor determines that the touch action is a touch sub-action.
  • in this way, the processor can handle continuous transitions between touch actions, that is, identify new touch actions and new touch sub-actions as they appear, so as to accurately capture transitions between consecutive touch actions. As a result, different touch actions are no longer isolated from each other, the determined result better matches the real touch action, and the accuracy of touch action determination is improved.
  • the robot includes a plurality of touch parts
  • the capacitive touch sensor includes a plurality of capacitive touch sensors located at the plurality of touch parts. Based on this, since some touch behaviors involve multiple touch parts, the touch behavior needs to be determined according to the touch actions performed at multiple touch parts.
  • the processor can determine the touch behavior according to the touch action types of the multiple touch actions at the multiple touch parts. For example, a handshake consists of a continuous stroking or long-pressing touch action on the palm together with a continuous stroking or long-pressing touch action on the back of the hand. Based on this, touch actions at different touch parts can be combined into a touch behavior.
  • the touch action type corresponding to each part can be further determined, so as to realize the recognition of complex touch behaviors and further improve the accuracy of touch behavior recognition.
  • the processor can determine the touch behavior according to preset rules, based on the touch action types of the multiple touch actions.
  • this provides a method for determining a touch behavior through preset rules according to the touch action types of multiple touch actions, thereby improving the feasibility of touch behavior recognition, as sketched below.
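As an illustration of such preset rules (a sketch only; the rule table below is hypothetical, modeled on the handshake example above):

```python
# Preset rules mapping combinations of (touch part, touch action type) to a
# touch behavior. The handshake rule mirrors the example in the text.
RULES = {
    "handshake": {
        ("palm", "stroke"), ("palm", "long press"),
        ("back of hand", "stroke"), ("back of hand", "long press"),
    },
}

def match_behavior(observed: set[tuple[str, str]]) -> str:
    """observed: (touch part, touch action type) pairs collected from the
    capacitive touch sensors at multiple touch parts."""
    for behavior, allowed in RULES.items():
        required_parts = {part for part, _ in allowed}
        touched_parts = {part for part, _ in observed}
        # every required part is touched, and only with allowed action types
        if touched_parts == required_parts and observed <= allowed:
            return behavior
    return "unknown"

print(match_behavior({("palm", "long press"), ("back of hand", "stroke")}))  # handshake
```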
  • the processor can determine the touch behavior according to a classification model, based on the touch action type of each touch action, where the classification model is obtained by training with historical touch data as training data.
  • historical touch data is used as the training data, and the classification model is obtained through training. Since historical touch data serves as the model training data, the output of the classification model better matches real touch behavior. Based on this, determining the touch behavior from the touch action types of the multiple touch actions through the classification model can further improve the feasibility of touch behavior recognition.
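FIG. 8 (listed below) illustrates classification-model-based recognition. The patent does not fix a model family, so the following sketch uses a generic decision tree over per-part action-type features purely as an illustration; the training rows are invented stand-ins for the historical touch data mentioned above:

```python
from sklearn.tree import DecisionTreeClassifier

# Feature vector: one integer-coded touch action type per touch part
# (0 = none, 1 = tap, 2 = stroke, 3 = long press).
X_train = [
    [3, 2, 0],  # palm: long press, back of hand: stroke, head: none
    [2, 3, 0],  # palm: stroke, back of hand: long press, head: none
    [0, 0, 1],  # head: tap only
]
y_train = ["handshake", "handshake", "head pat"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(model.predict([[3, 3, 0]]))  # expected: ['handshake']
```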
  • the touch duration of the touch action is used to distinguish the type of the touch action according to a preset duration threshold.
  • the preset duration threshold includes a first duration threshold and a second duration threshold, and the first duration threshold is smaller than the second duration threshold. Based on this, in one case, a touch action in which the touch duration of the touch action is less than the first duration threshold is a tap. In another case, the touch duration of the touch action is greater than or equal to the first duration threshold, and the touch action whose duration is less than the second duration threshold is stroking. In yet another case, a touch action whose touch duration is greater than or equal to the second duration threshold is a long press.
  • the first duration threshold and the second duration threshold between different touch action types are determined based on the average interval duration of each touch sub-action and the average contact duration of each touch sub-action in each touch action, by conducting experiments and/or statistics on large amounts of data.
  • the preset duration thresholds include different duration thresholds, and the different duration thresholds divide the time axis into different touch duration ranges, so that touch actions in different duration ranges can be classified at a finer granularity.
  • the touch duration of an action can thus determine the touch action type of the touch action, realizing the division of different touch action types. This improves the flexibility of the solution while ensuring its feasibility.
  • the vibration amplitude of the touch action is used to distinguish the type of the touch action according to a preset vibration amplitude threshold.
  • for a touch action whose vibration amplitude is less than the vibration amplitude threshold, the touch action type is any one of a light tap, a light stroke, or a light press.
  • for a touch action whose vibration amplitude is greater than or equal to the vibration amplitude threshold, the touch action type is any one of a heavy tap, a heavy stroke, or a heavy press.
  • in this way, a light tap can be distinguished from a heavy tap, a light stroke from a heavy stroke, and a light press from a heavy press, thereby improving the accuracy of touch action type detection.
  • a second aspect of the present application provides a touch behavior recognition device, comprising:
  • a detection module configured to detect touch state information of a touch action, wherein the touch state information of the touch action indicates a touch state, or a non-touch state;
  • the detection module is also used to detect the acceleration information of the touch action
  • a processing module configured to determine the touch duration of the touch action according to the touch state information of the touch action
  • the processing module is further configured to determine the touch action type according to the touch duration of the touch action and the vibration amplitude of the touch action, wherein the vibration amplitude of the touch action is determined according to the acceleration information of the touch action, and the touch action type is used to determine the touch behavior.
  • the processing module is further configured to determine a time interval threshold corresponding to each touch action according to the touch duration of each touch action, wherein the time interval threshold is positively correlated with the touch duration of the touch action;
  • the processing module is further configured to split the touch action according to the time interval threshold.
  • the processing module is further configured to determine a touch start time point and a touch end time point of the touch action according to the touch state information of the touch action;
  • the processing module is further configured to determine the touch duration of the touch action and the time interval between every two adjacent touch actions according to the touch start time point and the touch end time point;
  • the processing module is specifically configured to: when the time interval is greater than or equal to a first time interval threshold, determine that the touch action is a new touch action, where the first time interval threshold is the time interval threshold corresponding to the previous touch action of the two adjacent touch actions; and
  • when the time interval is less than the first time interval threshold, determine that the touch action is a touch sub-action.
  • the processing module is further configured to determine the touch behavior according to the touch action types of the multiple touch actions that occur at the multiple touch locations.
  • the processing module is specifically configured to determine touch behaviors according to preset rules according to touch action types of multiple touch actions that occur at multiple touch locations.
  • the processing module is specifically configured to determine touch behaviors according to a classification model, based on the touch action types of multiple touch actions occurring at multiple touch locations, wherein the classification model is a model obtained by training with historical touch data as the training data.
  • the touch duration of the touch action is used to distinguish the type of the touch action according to a preset duration threshold.
  • the duration threshold includes a first duration threshold and a second duration threshold, and the first duration threshold is smaller than the second duration threshold;
  • a touch action whose touch duration is less than the first duration threshold is a tap
  • the touch duration of the touch action is greater than or equal to the first duration threshold, and the touch action whose duration is less than the second duration threshold is stroking;
  • a touch action whose touch duration is greater than or equal to the second duration threshold is a long press.
  • the vibration amplitude of the touch action is used to distinguish the type of the touch action according to a preset vibration amplitude threshold
  • a touch action whose vibration amplitude is less than the vibration amplitude threshold is any one of a light tap, a light stroke, or a light press;
  • a touch action whose vibration amplitude is greater than or equal to the vibration amplitude threshold is any one of a heavy tap, a heavy stroke, or a heavy press.
  • a robot including a capacitive touch sensor, an IMU and a processor.
  • the capacitive touch sensor and the IMU are coupled to the processor, and the processor is coupled to a memory and can be used to execute instructions in the memory, so as to implement the method in any possible implementation manner of the first aspect above.
  • the touch behavior recognition device further includes a memory.
  • the touch behavior recognition device further includes a communication interface, the processor is coupled to the communication interface, the communication interface is used for inputting and/or outputting information, and the information includes at least one of instructions and data.
  • the touch behavior recognition device is a chip or a chip system configured in the robot.
  • the communication interface may be an input/output interface, an interface circuit, an output circuit, an input circuit, a pin or a related circuit, and the like.
  • the processor may also be embodied as a processing circuit or a logic circuit.
  • a processor including: an input circuit, an output circuit, and a processing circuit.
  • the processing circuit is configured to receive a signal through the input circuit and transmit a signal through the output circuit, so that the processor executes the method in any one of the possible implementation manners of the first aspect.
  • the above-mentioned processor may be a chip
  • the input circuit may be an input pin
  • the output circuit may be an output pin
  • the processing circuit may be a transistor, a gate circuit, a flip-flop, and various logic circuits.
  • the input signal received by the input circuit may be received and input by, for example, but not limited to, a receiver
  • the signal output by the output circuit may be, for example, but not limited to, output to and transmitted by a transmitter
  • the circuit can be the same circuit that acts as an input circuit and an output circuit at different times.
  • the embodiments of the present application do not limit the specific implementation manners of the processor and various circuits.
  • a touch behavior recognition device including a communication interface and a processor.
  • the communication interface is coupled with the processor.
  • the communication interface is used to input and/or output information.
  • the information includes at least one of instructions and data.
  • the processor is configured to execute a computer program, so that the touch behavior recognition apparatus executes the method in any possible implementation manner of the first aspect.
  • there are one or more processors, and one or more memories.
  • a touch behavior recognition device including a processor and a memory.
  • the processor is configured to read instructions stored in the memory, and can receive signals through a receiver and transmit signals through a transmitter, so that the apparatus performs the method in any possible implementation manner of the first aspect.
  • there are one or more processors, and one or more memories.
  • the memory may be integrated with the processor, or the memory may be provided separately from the processor.
  • the memory can be a non-transitory memory, such as a read only memory (ROM), which can be integrated with the processor on the same chip or provided separately on different chips; the embodiment of the present application does not limit the type of the memory or the manner in which the memory and the processor are arranged.
  • sending a message may be a process of outputting a message from the processor
  • receiving a message may be a process of inputting a received message to the processor.
  • the information output by the processor can be output to the transmitter, and the input information received by the processor can come from the receiver.
  • the transmitter and the receiver may be collectively referred to as a transceiver.
  • the touch behavior recognition device in the fifth aspect and the sixth aspect may be a chip, and the processor may be implemented by hardware or software.
  • when implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like;
  • when implemented by software, the processor may be a general-purpose processor implemented by reading software code stored in a memory, where the memory can be integrated in the processor or located outside the processor and exist independently.
  • a computer program product comprising: a computer program (also referred to as code, or instructions) which, when executed, causes the computer to execute the method in any possible implementation manner of the first aspect.
  • a computer-readable storage medium stores a computer program (also referred to as code, or instructions) which, when run on a computer, causes the computer to execute the method in any possible implementation manner of the first aspect.
  • a non-volatile computer-readable storage medium stores a computer program (also referred to as code, or instructions) which, when run on a computer, causes the computer to execute the method in any possible implementation manner of the first aspect.
  • the present application provides a chip system; the chip system includes a processor and an interface, the interface is used to obtain a program or instructions, and the processor is used to call the program or instructions to implement, or to support a robot in implementing, the functions involved in the first aspect.
  • the chip system further includes a memory for storing necessary program instructions and data of the robot.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • FIG. 1 is a schematic diagram of an embodiment of a system architecture in an embodiment of the present application
  • FIG. 2 is a schematic diagram of an embodiment of a robot in an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an embodiment of a touch action and a touch sub-action in an embodiment of the present application
  • FIG. 4 is a flowchart of an embodiment of a method for detecting a long-term touch behavior in an embodiment of the present application
  • FIG. 5 is a flowchart of an embodiment of a method for detecting the end of a touch action in an embodiment of the present application
  • FIG. 6 is a flowchart of an embodiment of a touch behavior recognition method in an embodiment of the present application.
  • FIG. 7 is a flowchart of a method for determining a touch action type in combination with an IMU and a capacitive touch sensor in an embodiment of the present application
  • FIG. 8 is a schematic diagram of an embodiment of touch behavior recognition based on a classification model in an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a touch behavior recognition device according to an embodiment of the present application.
  • Touch is one of the main ways that humans coordinate interactions.
  • Sense of Touch can help humans evaluate the properties of objects, such as size, shape, texture, temperature, etc.
  • tactile sense can be used to detect slippage of objects, thereby developing human awareness of the body.
  • Touch transmits various sensory information such as pressure, vibration, pain, and temperature to the central nervous system, helping humans perceive the surrounding environment and avoid potential harm.
  • Research has shown that human touch is better at processing physical features and detailed shapes of objects than sight and hearing.
  • touch sensing in robots helps robots understand how objects interact in the real world, which depends on their weight and stiffness: how a surface feels when touched, how it deforms when touched, and how it moves when pushed.
  • Single-channel touch sensors can generally realize the recognition of single touch/click, double touch/click, and long-term touch (long press).
  • Multi-channel touch sensors can recognize more complex touch behaviors such as swiping and zooming.
  • although the touch sensor can recognize single-click, double-click, and long-press, each touch action is isolated.
  • as a result, the correct touch action cannot be continuously reported, thereby reducing the accuracy of touch action recognition and, in turn, the accuracy of touch behavior recognition.
  • an embodiment of the present application provides a touch behavior recognition method, and the touch behavior recognition method is applied to a robot.
  • FIG. 1 is a schematic diagram of an embodiment of the system architecture in the embodiment of the present application.
  • based on this system architecture, touch action recognition is performed, thereby improving the accuracy of touch behavior recognition.
  • the touch parts include but are not limited to: the top of the head, the left side of the head, the right side of the head, the chest, the left armpit, the right armpit, the stomach, the back of the left hand, the left palm, the back of the right hand, and the right palm.
  • the user's touch behavior on the robot can be identified according to the touch detection and vibration detection of each touch part.
  • FIG. 2 is a schematic diagram of an embodiment of the robot in the embodiment of the present application, as shown in FIG. 2, the robot 200 includes one or more capacitive touch sensors; optionally, the robot 200 may further include one or more inertial measurement units (IMUs).
  • the robot 200 can also include a processor.
  • the processor is arranged in the robot 200 .
  • the processor is used to control the operation of the entire robot 200; after receiving, responding to, or executing relevant control instructions, it outputs control information to control the operation of the various parts of the robot, and the robot outputs corresponding responses according to that control information.
  • the processor may be referred to as a central processing unit (CPU).
  • the processor may be an integrated circuit chip with logic and/or signal processing capability and computing capability.
  • the processor may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a system on chip (SoC) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the robot 200 can also include the shape, frame, and structure of the robot itself, as well as multiple internal and external components.
  • the robot 200 may further include a housing (not shown in FIG. 2); a drive assembly (not shown in FIG. 2) for driving the robot 200 to move; a motion assembly (not shown in FIG. 2) for moving the robot 200, such as wheels or legs; an audio component (not shown in FIG. 2) for receiving and emitting external sound/voice information; a display component (not shown in FIG. 2) for displaying image information, text information, or facial expression information; an actuator (not shown in FIG. 2) for making the robot perform other movements, such as a robotic arm; as well as various electronic components, circuit structures, parts, and the like.
  • the robot 200 provided in this embodiment has a total of 11 touch parts over its whole body.
  • the 11 touch parts of the robot 200 include the top of the head 201, the right side of the head 202, the left side of the head 203, the chest 204, the stomach 205, the right armpit 206, the left armpit 207, the right palm 208, the back of the right hand 209, the left palm 210, and the back of the left hand 211.
  • copper foil is attached to the inner shell at each of the aforementioned 11 touch parts of the robot 200, a corresponding capacitive touch sensor is installed on the copper foil of each touch part, and each capacitive touch sensor is used to detect touch actions at the corresponding touch part.
  • an IMU is also installed at the head touch parts of the robot 200 (including the top of the head 201, the right side of the head 202, and the left side of the head 203); it detects the vibration generated when touching, so as to assist the processor in distinguishing light taps from heavy taps. Therefore, when an IMU is installed at the head touch parts of the robot 200, the recognizable touch actions include but are not limited to a light tap, continuous light taps, a stroke, continuous strokes, a long press, a heavy head tap, continuous heavy head taps, and the like.
  • this embodiment is introduced with 11 touch parts, and in practical applications, the touch parts of the robot can be further divided according to requirements.
  • the IMU introduced in this embodiment is installed on the head of the robot. In practical applications, the IMU can be deployed on any touching part of the robot according to actual requirements. Therefore, the robot shown in FIG. 2 is only used to understand this solution, and should not be understood as a limitation of this solution.
  • the touch behavior recognition method provided by the embodiment of the present application can be applied to various scenarios, and please refer to Table 1 for a specific example.
  • FIG. 3 is a schematic diagram of an embodiment of a touch action and a touch sub-action in an embodiment of the present application. As shown in FIG. 3, the horizontal axis is the time axis.
  • the vertical lines above the time axis represent the touch start time points of touch actions or touch sub-actions, for example, touch start time point 301, touch start time point 303, and touch start time point 305.
  • the vertical lines below the time axis are used to represent touch termination time points of the touch action or touch sub-action, for example, touch termination time point 302 , touch termination time point 304 , and touch termination time point 306 .
  • in the time period between a touch start time point and the following touch termination time point, the user is in contact with a certain touch part of the robot; for example, the user and a touch part of the robot are in the contact state during the time period between touch start time point 301 and touch termination time point 302, and during the time period between touch start time point 303 and touch termination time point 304.
  • in the time period between a touch termination time point and the following touch start time point, the user is in a non-contact state with the touch part of the robot; for example, the user and a touch part of the robot are in the non-contact state during the time period between touch termination time point 302 and touch start time point 303, and during the time period between touch termination time point 304 and touch start time point 305.
  • the touch action B is a touch action between the user and a touch part of the robot during the time period between the start time point 305 and the touch end time point 306 , and the touch action B may include a touch state.
  • the touch action A includes a touch sub-action 1 and a touch sub-action 2.
  • the touch sub-action 1 is the touch between the user and a certain touch part of the robot during the time period between the touch start time point 301 and the touch end time point 302.
  • the touch sub-action 2 is a touch action between the user and a certain touch part of the robot in the time period between the touch start time point 303 and the touch end time point 304 .
  • the touch action A is the touch action between the user and a touch part of the robot in the time period between touch start time point 301 and touch termination time point 304, and touch action A may include both a touch state and a non-touch state.
  • the touch action A shown in FIG. 3 may correspond to a continuous-tapping touch action, and the touch action B may correspond to a stroking touch action or a long-pressing touch action. It can be understood that the example in FIG. 3 is only used to understand touch actions and touch sub-actions; in practical applications, each touch action may include multiple touch sub-actions, and the number of touch sub-actions should not be construed as limiting this solution.
  • the capacitive touch sensor can detect the touch state between the user and a touch part of the robot based on the change in capacitance, and report that state to the processor: in the touch state, the touch state flag "1" is reported; in the non-touch state, the touch state flag "0" is reported.
  • the capacitive touch sensor detects that there is a touch state between the user and a touch part of the robot, it reports the touch part and the touch state to the processor.
  • the status flag is "1" therefore, after the processor receives the touch state flag "1", it can determine that a touch behavior has occurred at a certain touch part of the robot.
  • FIG. 4 is a flowchart of an embodiment of a method for detecting a long-term touch behavior in an embodiment of the present application, as shown in FIG. 4 , and the specific steps are as follows.
  • Step S400 the processor starts long-term touch behavior detection.
  • Step S401 the processor determines whether the last touch action ends.
  • in step S401, the processor needs to determine whether the last touch action has ended; if yes, step S402 is executed, and if not, step S403 is executed.
  • the processor may determine whether the previous touch action ends by using a time interval threshold.
  • the time interval threshold may be adaptively adjusted or dynamically changed according to the duration and/or time interval of the last touch action and/or the previous one or more touch actions; each time interval threshold may be adaptively adjusted according to manually formulated rules, or automatically calculated from historical data and learning models using artificial intelligence technology.
  • the time interval between the current touch action and the previous touch action is greater than or equal to the time interval threshold, it can be determined that the previous touch action has ended, and the current touch action is a new touch action.
  • the time interval between the current touch action and the previous touch action is the duration between the touch termination time point of the previous touch action and the touch start time point of the current touch action.
  • the current touch action is a new touch action, that is, the previous touch action is touch action A, and the current touch action is touch action B.
  • the previous touch action may be touch action A including touch sub-action 1, and the current touch action is touch sub-action 2.
  • before touch sub-action 2 occurs (that is, before touch start time point 303), touch action A includes touch sub-action 1 but not touch sub-action 2; after touch sub-action 2 occurs (for example, after touch termination time point 304 or after touch start time point 303), touch action A includes both touch sub-action 1 and touch sub-action 2, and touch sub-action 2 is a new touch sub-action of touch action A.
  • the time interval threshold may be positively correlated with the touch duration of a touch action or touch sub-action that is the closest to the current touch action.
  • the time interval threshold between different touch actions can be determined by formula (1), where k denotes an adjustable parameter, T_touch denotes the touch duration of the touch action or touch sub-action closest in time to the current touch action, I_min denotes the preset shortest time interval, and I_max denotes the preset maximum time interval.
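The formula itself appears to have been an image in the original publication and is missing from this text. A reconstruction consistent with the parameter definitions above and with the 120 ms worked example below is:

```latex
% Reconstructed formula (1): the threshold grows with the most recent touch
% duration and is clamped to the preset maximum time interval.
I_{\mathrm{threshold}} = \min\!\left(k \cdot T_{\mathrm{touch}} + I_{\min},\ I_{\max}\right)
```

With k = 0.7, T_touch = 100 ms, and I_min = 50 ms, this gives min(120 ms, 2 s) = 120 ms, matching the example.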
  • in the example of FIG. 3, T_touch indicates the time period from touch start time point 303 to touch termination time point 304.
  • for example, assume the touch duration of touch sub-action 2 (T_touch in this example) is 100 milliseconds (ms), the duration between touch termination time point 304 and touch start time point 305 (that is, the time interval) is 350 ms, the preset shortest time interval I_min is 50 ms, the preset maximum time interval I_max is 2 seconds (s), and the adjustable parameter k is configured to be 0.7. The time interval threshold calculated according to formula (1) is then 120 ms.
  • since the time interval of 350 ms is greater than the time interval threshold of 120 ms, it can be determined that touch action A has ended and touch action B is a new touch action. A sketch of this decision follows.
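A minimal sketch of this split decision, assuming the reconstructed form of formula (1) above (function and variable names are illustrative, not from the patent):

```python
def interval_threshold(t_touch_ms: float, k: float = 0.7,
                       i_min_ms: float = 50, i_max_ms: float = 2000) -> float:
    """Adaptive time interval threshold: positively correlated with the touch
    duration of the most recent touch action or touch sub-action, clamped to
    the preset maximum time interval (reconstructed formula (1))."""
    return min(k * t_touch_ms + i_min_ms, i_max_ms)

def is_new_touch_action(interval_ms: float, last_touch_duration_ms: float) -> bool:
    """Step S401: a touch is a new touch action if the interval since the
    previous touch termination reaches the threshold; otherwise it is a
    touch sub-action of the previous touch action."""
    return interval_ms >= interval_threshold(last_touch_duration_ms)

print(interval_threshold(100))        # 120.0 ms, as in the worked example
print(is_new_touch_action(350, 100))  # True: touch action B is new
```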
  • Step S402 the processor determines the touch start time point of the current touch action.
  • in step S402, the processor has determined through step S401 that a new touch action has started, that is, the previous touch action has ended. Therefore, the current touch action is a new touch action that begins after the previous touch action is completed, and the processor can determine the touch start time point of the current touch action. Exemplarily, referring to FIG. 3 again, if the previous touch action is touch action A and the current touch action is touch action B, the processor will determine touch start time point 305.
  • Step S403 the processor determines the current touch action as a touch sub-action of the previous touch action.
  • in step S403, the processor has determined through step S401 that no new touch action has occurred, that is, the previous touch action has not ended. Therefore, the current touch action is determined to be a touch sub-action of the previous touch action.
  • the processor can also determine the touch start time point of the touch sub-action and/or the identification information of the touch sub-action, and the like. Exemplarily, referring to FIG. 3 again, if the previous touch action is touch action A including touch sub-action 1, and the current touch action is touch sub-action 2, then touch sub-action 2 will be determined to be a new touch sub-action included in touch action A, and the processor can determine touch start time point 303 of touch sub-action 2.
  • the number of touch sub-actions included in the "last touch action” is increased by one.
  • the processor can also update the number of intervals of the touch action (which can be recorded as "numIntervals") and the average interval duration of each touch sub-action in the touch action (which can be recorded as "meanTouchIntervals").
  • for example, assume the previous touch action includes a first touch sub-action and a second touch sub-action, and the time interval between the first touch sub-action and the second touch sub-action is 80 ms.
  • if the current touch action is determined to be a third touch sub-action of the previous touch action, and the time interval between the second touch sub-action and the third touch sub-action is 100 ms, the processor updates the average interval duration accordingly.
  • when the processor later determines the type of the touch action, it can also refer to the updated average interval duration, so as to account for the possibility of different touch actions and improve the accuracy of determining the touch action type.
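The bookkeeping described here can be sketched as an incremental mean update; the variable names follow the "numIntervals"/"meanTouchIntervals" labels above, while the update rule itself is an assumption, since the text only names the quantities:

```python
def update_interval_stats(num_intervals: int, mean_interval_ms: float,
                          new_interval_ms: float) -> tuple[int, float]:
    """Update the interval count and the running average interval duration
    of the touch sub-actions within one touch action."""
    num_intervals += 1
    mean_interval_ms += (new_interval_ms - mean_interval_ms) / num_intervals
    return num_intervals, mean_interval_ms

# Example from the text: an 80 ms interval followed by a 100 ms interval.
n, mean = update_interval_stats(0, 0.0, 80)    # -> (1, 80.0)
n, mean = update_interval_stats(n, mean, 100)  # -> (2, 90.0)
print(n, mean)
```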
  • Step S404 the processor starts the next long-term touch behavior detection.
  • after the processor completes step S402 or step S403, it will start the next long-term touch behavior detection, that is, repeatedly execute the process shown in FIG. 4.
  • the processor can determine whether a new touch action occurs in a manner similar to step S401, then complete the reporting of the touch start time of the new touch action in a manner similar to step S402, or complete, in a manner similar to step S403, the reporting of the result of determining the current touch action as a new touch sub-action of the previous touch action.
  • FIG. 5 is a flowchart of an embodiment of a method for detecting the end of a touch action in an embodiment of the present application, as shown in FIG. 5 , and the specific steps are as follows.
  • Step S500 the processor starts the touch action end detection.
  • Step S501 the processor determines whether a new touch sub-action occurs.
  • the processor can determine whether a new touch action occurs in a manner similar to step S401, that is, the judgment is made according to the time interval threshold, and the description is not repeated here.
  • if the determination in step S501 is yes, it means that no new touch action has occurred, so the current touch action can be determined to be a touch sub-action in a manner similar to step S403, that is, it is determined that a new touch sub-action has occurred, and step S502 is executed. Referring again to FIG. 3, when the time interval between touch termination time point 302 of touch sub-action 1 and touch start time point 303 of touch sub-action 2 is less than the time interval threshold, it can be determined in step S501 that the last touch action has not ended and that the current touch action is a new touch sub-action. In this case, the previous touch action may be touch action A including touch sub-action 1, and the current touch action is touch sub-action 2.
  • before touch sub-action 2 occurs (that is, before touch start time point 303), touch action A includes touch sub-action 1 but not touch sub-action 2; after touch sub-action 2 occurs (for example, after touch termination time point 304 or after touch start time point 303), touch action A includes both touch sub-action 1 and touch sub-action 2, and touch sub-action 2 is a new touch sub-action of touch action A.
  • here, the most recent touch of the last touch action is touch sub-action 1, and the current touch is touch sub-action 2.
  • if it is determined in step S501 that a new touch action has occurred, it is determined that no new touch sub-action has occurred, and step S503 is executed.
  • Step S502 the processor determines the result.
  • in step S502, the processor has determined through step S501 that a new touch sub-action has started. Therefore, the current touch action is a new touch sub-action within the previous touch action, and the processor records this result.
  • the processor may further determine the touch start time point of the touch sub-action and/or determine the identification information of the touch sub-action, and the like. For example, referring to FIG. 3 again, if the processor determines that the new touch sub-action is touch sub-action 2 , the processor can determine the touch start time point 303 of touch sub-action 2 .
  • the number of touch sub-actions included in the "last touch action” is increased by one.
  • the number of touch sub-actions included in touch action A is increased from one (touch sub-action 1) to two (touch sub-action 1 and touch sub-action 2).
  • when the processor records the result, it can also update the number of touch sub-actions in the touch action (which can be recorded as "numTouches") and the average touch duration of each touch sub-action in the touch action (which can be recorded as "meanTouchDuration").
  • Step S503 the processor determines the touch termination time point of the touch action.
  • in step S503, the processor has determined through step S501 that no new touch sub-action has occurred, that is, the previous touch action has ended and a new touch action has started. Therefore, the current touch action is a new touch action that begins after the previous touch action is completed. If the previous touch action includes multiple touch sub-actions, the processor may determine the touch termination time point of the last touch sub-action in the previous touch action as the touch termination time point of that touch action. Illustratively, referring to FIG. 3 again, the previous touch action (touch action A) has ended, and a new touch action (touch action B) occurs.
  • the processor may also determine the touch start time point 303 of the touch sub-action 2, or the touch duration from the touch start time point 303 to the touch end time point 304 in the touch sub-action 2, and the like.
  • the processor can receive the touch status flag "1" reported by the capacitive touch sensor, and after the capacitive touch sensor senses the touch status flag "1"
  • the processor can receive the touch state flag "0" reported by the capacitive touch sensor, and the time point when the touch state flag "0" is received is the touch termination time of the touch action.
  • for example, the processor receives the touch state flag "1" reported by the capacitive touch sensor at time point 305.
  • the processor can then determine that time point 305 is touch start time point 305 of touch action B; the processor receives the touch state flag "0" reported by the capacitive touch sensor at time point 306, and can determine that time point 306 is the touch termination time point of touch action B.
  • the processor may further determine the touch start time point 305 of the touch action B, or the touch duration from the touch start time point 305 to the touch end time point 306 in the touch action B, and the like.
  • Step S504 the processor starts the next touch action end detection.
  • after the processor completes step S502 or step S503, it will start the next touch action end detection, that is, repeatedly execute the process shown in FIG. 5.
  • the processor can determine whether a new touch sub-action occurs in a manner similar to step S501, and then complete the reporting of the result of determining the new touch sub-action in a manner similar to step S502, or through The reporting of the touch termination time for the new touch sub-action is completed in a manner similar to step S503.
  • the time interval threshold between different touch actions is positively correlated with the touch duration of the latest touch action or touch sub-action, so that the time interval threshold can be adaptively updated.
  • the dynamic time interval threshold setting can better determine the continuity of touch actions, and the touch action splitting algorithm proposed in the embodiments of the present application can handle the situation of continuous changes between touch actions. For example, traditional recognition algorithms can only recognize single clicks, double clicks, and long presses, and each action is isolated.
  • the touch action splitting algorithm proposed in the present application can continuously identify the previous touch action and the current touch action, and can accurately capture the transition between consecutive touch actions, so different touch actions are no longer isolated from each other, and the determined result better matches the real touch action.
  • FIG. 6 is a flowchart of an embodiment of the touch behavior recognition method in the embodiment of the present application.
  • the touch behavior recognition method in the embodiment of the present application can be applied to the robot shown in FIG. 2. Based on this, the specific steps of the touch behavior recognition method are as follows.
  • Step S601 the processor determines the touch duration of the touch action.
  • first, the processor may implement long-time touch action detection by executing steps S401 to S404 shown in FIG. 4 when the touch starts. Second, it may implement touch action end detection by executing steps S501 to S504 shown in FIG. 5 when the touch ends.
  • since the processor determines the touch start time point and the touch termination time point of the touch action in the long-time touch action detection method and the touch action end detection method, it can calculate the touch duration of the touch action from these two time points, so as to achieve the purpose of determining the touch duration of the touch action.
  • when a touch starts, the processor executes the touch action start algorithm (which can be denoted as "actionOnTouch") and, through it, calls the long-time touch action detection method shown in FIG. 4 (which can be denoted as "detectLongTouch") to achieve the purpose of long-time touch action detection.
  • when a touch ends, the processor executes the touch action end algorithm (which can be denoted as "actionOnQuit") and, through it, calls the touch action end detection method shown in FIG. 5 (which can be denoted as "detectTouchEvent") to achieve the purpose of detecting the end of the touch action.
  • Step S602 the processor determines the type of the touch action according to the touch duration of the touch action.
  • different touch action types may be divided according to the touch duration of the touch action.
  • for example, the touch duration corresponding to touch action type A is touch duration range A, the touch duration corresponding to touch action type B is touch duration range B, the touch duration corresponding to touch action type C is touch duration range C, and the touch duration corresponding to touch action type D is touch duration range D.
  • for example, the touch action type of a touch action with a touch duration less than 200 ms may be defined as a "tap", the touch action type of a touch action with a touch duration greater than or equal to 200 ms and less than 2 s may be defined as a "stroke", and the touch action type of a touch action with a touch duration greater than or equal to 2 s may be defined as a "long press".
  • using the two duration thresholds of 200 ms and 2 s to divide the three touch action types of tapping, stroking, and long pressing is only an example of a possible implementation, not a limitation; whether the boundaries of each divided time interval are open or closed likewise does not constitute a limitation.
  • in addition, the division of time intervals between different touch action types may be determined based on the average interval duration of each touch sub-action in the touch action, obtained in the foregoing embodiments, and the average contact duration of each touch sub-action in each touch action, by conducting experiments and/or statistics on a large amount of data.
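A sketch of this duration-based division, using the example thresholds of 200 ms and 2 s from above (the threshold values and interval openness are illustrative, as the text notes they are not limiting):

```python
FIRST_DURATION_THRESHOLD_MS = 200     # below this: tap
SECOND_DURATION_THRESHOLD_MS = 2000   # at or above this: long press

def classify_by_duration(touch_duration_ms: float) -> str:
    """Map a touch duration onto a touch action type (step S602)."""
    if touch_duration_ms < FIRST_DURATION_THRESHOLD_MS:
        return "tap"
    if touch_duration_ms < SECOND_DURATION_THRESHOLD_MS:
        return "stroke"
    return "long press"

print(classify_by_duration(150))   # tap
print(classify_by_duration(800))   # stroke
print(classify_by_duration(2500))  # long press
```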
  • in some cases, the touch duration ranges corresponding to different touch action types may be the same or may overlap, so the touch action type cannot be accurately determined based on the touch duration range alone; in such cases, other information may be combined to determine the touch action type.
  • the above-mentioned other information can be, for example, the motion state of the robot (such as acceleration), the sound/voice information obtained by the robot, the pictures or videos collected by the image acquisition device of the robot, or the information obtained by the robot from other electronic devices (such as sound/voice information, sensor information, picture or video information) and other information.
  • in addition, some touch actions cannot be accurately distinguished using only the touch state flag reported by the capacitive touch sensor. For example, it is difficult to distinguish a heavy tap from a light tap using only the touch state flag reported by the capacitive touch sensor.
  • in this case, the acceleration information detected by the IMU can be used: the processor determines the vibration amplitude of the touch action according to the acceleration information, and determines the touch action type according to the vibration amplitude of the touch action together with the touch duration of the touch action. For example, if the vibration amplitude of a tap on the touch part is smaller than the vibration amplitude threshold, it can be further determined that the touch action type is a light tap.
  • if the vibration amplitude of the tap at the touch part is greater than or equal to the vibration amplitude threshold, it can be further determined that the touch action type is a heavy tap. It should be understood that, in practical applications, the IMU can help distinguish further touch actions, such as a light stroke from a heavy stroke, or a light press from a heavy press, and the vibration amplitude thresholds corresponding to different touch actions may also differ. A touch action determined from the information detected by both the IMU and the capacitive touch sensor can be more accurate.
  • FIG. 7 exemplarily shows a flowchart of a method for determining a touch action type in combination with an IMU and a capacitive touch sensor provided by an embodiment of the present application.
  • the method for determining a touch action type specifically includes the following steps.
  • Step S700 the processor starts the identification of the touch action type.
  • Step S701 the processor determines the touch action type.
  • the processor determines the type of the touch action according to the touch duration of the touch action in a similar manner to the foregoing embodiment.
  • Step S702: the processor determines the magnitude relationship between the vibration amplitude and the vibration amplitude threshold.
  • In this embodiment, the IMU detects the acceleration information of the touch action and reports it to the processor included in the robot. The processor determines the vibration amplitude of the touch action from the acceleration information and determines the magnitude relationship between that vibration amplitude and the vibration amplitude threshold.
  • Step S703: determine whether the vibration amplitude is greater than or equal to the vibration amplitude threshold.
  • the processor determines whether the vibration amplitude of the touch action is greater than or equal to the vibration amplitude threshold, and if so, executes step S704. If not, step S705 is executed.
  • Step S704: the processor corrects the touch action type to the heavy variant.
  • In this embodiment, it is known from step S703 that the vibration amplitude of the touch action is greater than or equal to the vibration amplitude threshold. Based on this, if the touch action type determined in step S701 is pat, then, combined with this result, the touch action type can be further determined to be a heavy pat, and the type determined in step S701 is corrected from "pat" to "heavy pat".
  • Similarly, if the touch action type determined in step S701 is stroke, then, combined with the result that the vibration amplitude of the touch action is greater than or equal to the vibration amplitude threshold, the touch action type can be further determined to be a heavy stroke, and the type determined in step S701 is corrected from "stroke" to "heavy stroke". Other touch action types can be corrected in a manner similar to step S704; the corrections of all touch action types are not exhaustively listed here.
  • Step S705: the processor corrects the touch action type to the light variant.
  • In this embodiment, it is known from step S703 that the vibration amplitude of the touch action is less than the vibration amplitude threshold. Based on this, if the touch action type determined in step S701 is pat, then, combined with this result, the touch action type can be further determined to be a light pat, and the type determined in step S701 is corrected from "pat" to "light pat".
  • Similarly, if the touch action type determined in step S701 is long press, then, combined with the result that the vibration amplitude of the touch action is less than the vibration amplitude threshold, the touch action type can be further determined to be a light press, and the type determined in step S701 is corrected from "long press" to "light press". Other touch action types can be corrected in a manner similar to step S705; the corrections of all touch action types are not exhaustively listed here.
  • Step S706: the processor outputs the corrected touch action type.
  • In this embodiment, after step S704 or step S705 is completed, the corrected touch action type is output, so that the processor can determine the touch behavior from the corrected type, further improving the accuracy of touch behavior determination. For example, if step S704 corrected the touch action type determined in step S701 from "pat" to "heavy pat", then step S706 outputs the corrected type "heavy pat". Similarly, if step S705 corrected the type determined in step S701 from "long press" to "light press", then step S706 outputs the corrected type "light press". A minimal sketch of this correction pass is given below.
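  • Putting steps S701–S706 together, the correction pass might look as follows. The variant names and the single shared amplitude threshold are simplifications of this sketch; the embodiment notes that different touch actions may use different vibration amplitude thresholds.

```python
# Base type from touch duration (step S701) -> (light variant, heavy variant).
VARIANTS = {
    "pat": ("light_pat", "heavy_pat"),
    "stroke": ("light_stroke", "heavy_stroke"),
    "long_press": ("light_press", "heavy_press"),
}

def refine_with_imu(base_type: str, amplitude: float, threshold: float) -> str:
    """Steps S702-S706: correct a duration-based type using vibration amplitude."""
    light, heavy = VARIANTS[base_type]
    # Step S703: compare the vibration amplitude with the threshold.
    if amplitude >= threshold:
        return heavy   # Step S704: correct the type to its heavy variant.
    return light       # Step S705: correct the type to its light variant.

# Step S706 examples from the text: "pat" becomes "heavy pat" on a strong hit,
# and "long press" becomes "light press" on a gentle one.
assert refine_with_imu("pat", amplitude=3.0, threshold=1.5) == "heavy_pat"
assert refine_with_imu("long_press", amplitude=0.4, threshold=1.5) == "light_press"
```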
  • Therefore, in the scenarios introduced in step S602, i.e., even without a pressure sensor, the acceleration information detected by the IMU and the touch state identifier reported by the capacitive touch sensor can together distinguish heavy pats from light pats, light strokes from heavy strokes, and light presses from heavy presses, improving the accuracy of touch action type detection.
  • Step S603: the processor determines the touch behavior according to the touch action type.
  • In this embodiment, the touch action type of each touch part can be determined through step S602. Since some touch behaviors involve only one touch part, the touch behavior can be determined from the touch action performed at that single part; for example, "stroking the head" is simply a stroking action performed on the head, and "patting the belly" a patting action performed on the belly.
  • However, some touch behaviors involve multiple touch parts, so the touch behavior must be determined from the touch actions performed at those parts; for example, a handshake consists of continued stroking or a long press on the palm together with continued stroking or a long press on the back of the hand. Touch actions performed at different touch parts can thus be combined into touch behaviors.
  • Since the robot shown in FIG. 2 and introduced in this embodiment includes 11 touch parts, and the touch parts are highly distinguishable from one another, the rules for fusion perception can be formulated by hand-written code. However, when the number of touch parts is very large (for example, with electronic skin), hand-coding the fusion perception rules would be labor-intensive and would likely generalize poorly.
  • In that case, artificial intelligence techniques can be used, driven by large amounts of data, to train a classification model that can recognize different complex touch behaviors.
  • the above classification model may be a learning model such as a machine learning model, a deep learning model or a reinforcement learning model.
  • Touch behavior recognition based on preset rules and touch behavior recognition based on a classification model are introduced in turn below.
  • A preset rule is a rule formulated by humans through coding. For example, "if (touch action 1 in touch part 3) and (touch action 4 in touch part 6) then touch behavior 1" means that performing touch action 1 on touch part 3 and touch action 4 on touch part 6 constitutes touch behavior 1. Likewise, "if (touch action 1 in touch part 4) and (touch action 1 in touch part 3) then touch behavior 3" means that performing touch action 1 on touch part 4 and touch action 1 on touch part 3 constitutes touch behavior 3.
  • It should be understood that the foregoing is only one human-formulated rule; a preset rule is not limited to combining two single-part touch actions and may be composed of touch actions on one or more parts. For example, "(stroke in left palm) and (stroke in back of left hand) and (stroke in right palm) and (stroke in back of right hand) then hold both hands" means that stroking the left palm, the back of the left hand, the right palm, and the back of the right hand together constitutes the behavior "hold both hands". The touch actions and the number of touch parts should therefore not be construed as limiting this application.
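  • A minimal sketch of such hand-coded fusion rules follows. Representing each rule as a set of required (touch part, touch action type) pairs is a design choice of this sketch, not something the embodiment prescribes.

```python
from typing import Optional

# Each rule maps the set of (touch_part, touch_action_type) conditions that
# must all hold to the touch behavior it denotes; the entries mirror the
# "hold both hands" and "touch behavior 1" examples above.
RULES = {
    frozenset({("left_palm", "stroke"), ("left_hand_back", "stroke"),
               ("right_palm", "stroke"), ("right_hand_back", "stroke")}): "hold_both_hands",
    frozenset({("part_3", "action_1"), ("part_6", "action_4")}): "behavior_1",
}

def match_behavior(observations: dict) -> Optional[str]:
    """observations: {touch_part: touch_action_type} for one time window."""
    observed = set(observations.items())
    for required, behavior in RULES.items():
        if required <= observed:   # every condition of the rule is met
            return behavior
    return None

obs = {"left_palm": "stroke", "left_hand_back": "stroke",
       "right_palm": "stroke", "right_hand_back": "stroke"}
assert match_behavior(obs) == "hold_both_hands"
```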
  • For model-based recognition, the touch actions observed at multiple touch parts are used as the input of the touch behavior recognition model to be trained, the preset touch behaviors are used as the output labels, and training is performed by common model training methods.
  • Illustratively, suppose the robot is equipped with capacitive touch sensors at N touch parts. FIG. 8 is a schematic diagram of an embodiment of touch behavior recognition based on a classification model in an embodiment of the present application.
  • As shown in FIG. 8, the touch actions detected at the N touch parts are used as the input of the touch behavior recognition model obtained after training (a classification model for classifying touch behaviors), and the model outputs the touch behavior category based on that input.
  • Specifically, as shown in FIG. 8, information such as a pat at touch part 1 and at touch part N, no touch action at touch part 2 and at touch part N-1 (indicated by "none" in FIG. 8), and a stroke at touch part 3 is used as the input of the touch behavior recognition model, which determines the category of the touch behavior from this input and finally outputs the classification result, touch behavior 4. A minimal sketch of such a classifier is given below.
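  • Below is a minimal sketch of the data path in FIG. 8, using scikit-learn as a stand-in learner. The feature encoding (one categorical action code per touch part, "none" where a part is untouched) and the classifier choice are assumptions of this sketch; the embodiment only requires a common model training method over historical touch data.

```python
from sklearn.ensemble import RandomForestClassifier

ACTIONS = {"none": 0, "pat": 1, "stroke": 2, "long_press": 3}
N_PARTS = 5  # illustrative; FIG. 8 uses N touch parts

def encode(per_part_actions):
    """One action label per touch part -> fixed-length integer feature vector."""
    assert len(per_part_actions) == N_PARTS
    return [ACTIONS[a] for a in per_part_actions]

# Toy historical touch data: per-part actions labelled with a behavior.
X = [encode(["pat", "none", "stroke", "none", "pat"]),
     encode(["none", "none", "stroke", "none", "none"]),
     encode(["pat", "none", "stroke", "none", "pat"]),
     encode(["long_press", "long_press", "none", "none", "none"])]
y = ["behavior_4", "behavior_2", "behavior_4", "behavior_7"]

model = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Inference mirrors FIG. 8: pats at parts 1 and N, a stroke at part 3.
print(model.predict([encode(["pat", "none", "stroke", "none", "pat"])]))
```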
  • On the basis of the touch action types detected at multiple touch parts, the touch behavior recognition method in the embodiments of the present application can recognize complex touch behaviors involving multiple touch parts, such as handshakes and hugs, further improving the accuracy and reliability of touch behavior recognition.
  • In some possible implementations, the touch sensors in the above embodiments of the present application may also be sensors of types other than capacitive, for example touch sensors based on surface acoustic waves or infrared light.
  • In some possible implementations, the IMU in the above embodiments of the present application may also be another type of sensor capable of detecting the motion information of the robot; for example, a contact-type micro-electro-mechanical system (MEMS) microphone can likewise realize vibration detection.
  • In some possible implementations, the robot provided by the above embodiments of the present application can react according to the user's touch action type or touch behavior as determined by the robot's processor, realizing anthropomorphic interaction. For example, if the processor determines that the user has performed the "hold both hands" touch behavior on the robot, the display on the robot's head may show a smiling expression and the robot's speaker may play the voice "Do you want to play a game with me?". As another example, if the processor determines that the user has performed a "continuous heavy pat" touch action on the robot, the display on the robot's head may show a crying expression and the speaker may play the voice "Ah, it hurts". Based on the touch behavior recognition method provided by the embodiments of the present application, the robot can accurately recognize and distinguish the user's continuous actions, improving the accuracy of touch behavior recognition, and can thus respond more accurately, interact with users more realistically, and improve the user experience. A minimal reaction-lookup sketch is given below.
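  • The reaction step can be as simple as a lookup from the recognized behavior to display and speech outputs, as in the sketch below; the behavior keys follow the examples above, while the output strings and the print stand-ins for the display and speaker drivers are illustrative.

```python
# Recognized touch behavior/action -> (facial expression, spoken line).
REACTIONS = {
    "hold_both_hands": ("smile", "Do you want to play a game with me?"),
    "continuous_heavy_pat": ("cry", "Ah, it hurts!"),
}

def react(behavior: str) -> None:
    expression, line = REACTIONS.get(behavior, ("neutral", ""))
    print(f"display: {expression}")   # stand-in for the head display driver
    if line:
        print(f"speaker: {line}")     # stand-in for the speaker driver

react("hold_both_hands")
```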
  • In some possible implementations, the methods provided by the above embodiments of the present application can be applied not only to robots but to any interactive electronic device, such as mobile phones, foldable electronic devices, tablet computers, desktop computers, laptop computers, handheld computers, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, cellular phones, personal digital assistants (PDAs), augmented reality (AR) devices, virtual reality (VR) devices, artificial intelligence (AI) devices, wearable devices, brain-computer interface devices, in-vehicle devices, smart home devices, smart medical devices, smart fitness devices, or smart city devices.
  • Embodiments of the present application provide a robot, where the robot includes one or more touch sensors and a processor.
  • one or more touch sensors are used to detect changes in capacitance when a touch behavior is performed.
  • the processor is configured to execute the methods shown in FIG. 4 , FIG. 5 , FIG. 6 and FIG. 7 .
  • the above-mentioned robot further includes an IMU, and the above-mentioned IMU is used to detect acceleration information;
  • the above-mentioned processor is further configured to determine the vibration amplitude of the touch action according to the acceleration information.
  • To implement the above functions, the touch behavior recognition apparatus includes corresponding hardware structures and/or software modules for executing each function.
  • the present application can be implemented in hardware or in the form of a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the embodiments of the present application may divide the touch behavior recognition device into functional modules based on the foregoing method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. It should be noted that, the division of modules in the embodiments of the present application is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • FIG. 9 is a schematic structural diagram of the touch behavior recognition device in an embodiment of the present application. As shown in the figure, the touch behavior recognition device 900 includes:
  • a detection module 901 configured to detect touch state information of a touch action, wherein the touch state information of the touch action indicates a touch state, or a non-touch state;
  • the detection module 901 is further configured to detect the acceleration information of the touch action
  • a processing module 902 configured to determine the touch duration of the touch action according to the touch state information of the touch action
  • the processing module 902 is further configured to determine the touch action type according to the touch duration of the touch action and the vibration amplitude of the touch action, wherein the vibration amplitude of the touch action is determined according to the acceleration information of the touch action, and the touch action type is used to determine the touch behavior .
  • the processing module 902 is further configured to determine, according to the touch duration of each touch action, a time interval threshold corresponding to that touch action, wherein the time interval threshold is proportional to the touch duration of the touch action;
  • the processing module 902 is further configured to split the touch action according to the time interval threshold.
  • the processing module 902 is further configured to determine, according to the touch state information of the touch action, the touch start time point and the touch end time point of the touch action;
  • the processing module 902 is further configured to determine the touch duration of the touch action and the time interval between every two adjacent touch actions according to the touch start time point and the touch end time point;
  • the processing module 902 is specifically used for:
  • when the time interval is greater than or equal to a first time interval threshold, determining that the touch action is a new touch action, where the first time interval threshold is the time interval threshold corresponding to the earlier of the two adjacent touch actions;
  • when the time interval is less than the first time interval threshold, determining that the touch action is a touch sub-action. A minimal sketch of this splitting logic is given below.
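  • The description gives the adaptive threshold as equation (1), I = min(k·T_touch + I_min, I_max), where T_touch is the touch duration of the most recent touch action or sub-action. The sketch below applies it to the new-action/sub-action decision; the parameter values repeat the worked example in the description (k = 0.7, I_min = 50 ms, I_max = 2 s).

```python
def interval_threshold(t_touch_ms: float, k: float = 0.7,
                       i_min_ms: float = 50.0, i_max_ms: float = 2000.0) -> float:
    """Equation (1): I = min(k * T_touch + I_min, I_max)."""
    return min(k * t_touch_ms + i_min_ms, i_max_ms)

def split(gap_ms: float, prev_touch_ms: float) -> str:
    """Gap since the previous (sub-)action ended -> new action or sub-action."""
    if gap_ms >= interval_threshold(prev_touch_ms):
        return "new_touch_action"
    return "touch_sub_action"

# Worked example from the description: a 100 ms sub-action yields a 120 ms
# threshold, so a 350 ms gap starts a new touch action.
assert interval_threshold(100) == 120.0
assert split(gap_ms=350, prev_touch_ms=100) == "new_touch_action"
assert split(gap_ms=80, prev_touch_ms=100) == "touch_sub_action"
```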
  • the processing module 902 is further configured to determine the touch behavior according to the touch action types of multiple touch actions occurring at multiple touch parts.
  • the processing module 902 is specifically configured to determine the touch behavior, according to the touch action types of the multiple touch actions at the multiple touch parts, by preset rules.
  • the processing module 902 is specifically configured to determine the touch behavior, according to the touch action types of the multiple touch actions at the multiple touch parts, by a classification model, wherein the classification model is obtained by training with historical touch data as training data.
  • the touch duration of the touch action is used to distinguish touch action types according to preset duration thresholds.
  • the duration thresholds include a first duration threshold and a second duration threshold, the first duration threshold being less than the second duration threshold;
  • a touch action whose touch duration is less than the first duration threshold is a pat;
  • a touch action whose touch duration is greater than or equal to the first duration threshold and less than the second duration threshold is a stroke;
  • a touch action whose touch duration is greater than or equal to the second duration threshold is a long press.
  • the vibration amplitude of the touch action is used to distinguish touch action types according to a preset vibration amplitude threshold;
  • a touch action whose vibration amplitude is less than the vibration amplitude threshold is any one of a light pat, a light stroke, or a light press;
  • a touch action whose vibration amplitude is greater than or equal to the vibration amplitude threshold is any one of a heavy pat, a heavy stroke, or a heavy press. The two threshold families combine as in the sketch below.
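  • Combining the duration thresholds and the vibration amplitude threshold, the processing module's classification reduces to one function, as in the sketch below; the default values and names are illustrative, not prescribed by the embodiment.

```python
def classify(touch_ms: float, amplitude: float,
             t1_ms: float = 200.0, t2_ms: float = 2000.0,
             amp_threshold: float = 1.5) -> str:
    """Duration thresholds pick pat/stroke/long press; amplitude picks light/heavy."""
    if touch_ms < t1_ms:
        base = "pat"
    elif touch_ms < t2_ms:
        base = "stroke"
    else:
        base = "press"   # i.e., a long press
    weight = "heavy" if amplitude >= amp_threshold else "light"
    return f"{weight}_{base}"

assert classify(120, 2.4) == "heavy_pat"
assert classify(2600, 0.3) == "light_press"
```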
  • An embodiment of the present application provides a robot, including at least one capacitive touch sensor, an IMU, at least one processor, a memory, and an input-output (I/O) interface;
  • the at least one capacitive touch sensor, the IMU and the at least one processor are coupled to the memory and the input-output interface;
  • the at least one capacitive touch sensor, the IMU and the at least one processor are used to execute a computer program stored in the memory to execute the method executed by the at least one capacitive touch sensor, the IMU and the at least one processor in any of the above method embodiments.
  • The present application further provides a touch behavior recognition device, including at least one processor, where the at least one processor is configured to execute a computer program stored in a memory, so that the touch behavior recognition device executes the method executed by the capacitive touch sensor, the IMU, and the processor in any of the foregoing method embodiments.
  • the touch behavior recognition device may be one or more chips.
  • For example, the touch behavior recognition device may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processing circuit (DSP), a microcontroller unit (MCU), a programmable logic device (PLD), or another integrated chip.
  • the embodiment of the present application also provides a touch behavior recognition device, which includes a processor and a communication interface.
  • the communication interface is coupled with the processor.
  • the communication interface is used to input and/or output information.
  • the information includes at least one of instructions and data.
  • the processor is configured to execute a computer program, so that the touch behavior recognition apparatus executes the method executed by the capacitive touch sensor, the IMU and the processor in any of the above method embodiments.
  • the embodiments of the present application also provide a touch behavior recognition device, which includes a processor and a memory.
  • the memory is used to store a computer program
  • the processor is used to call and run the computer program from the memory, so that the touch behavior recognition apparatus executes the capacitive touch sensor, the IMU and the processor in any of the foregoing method embodiments method performed.
  • each step of the above-mentioned method can be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • the steps of the methods disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware. To avoid repetition, detailed description is omitted here.
  • the processor in this embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
  • each step of the above method embodiments may be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • The aforementioned processors may be general-purpose processors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components.
  • the methods, steps, and logic block diagrams disclosed in the embodiments of this application can be implemented or executed.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the memory in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically programmable Erase programmable read-only memory (electrically EPROM, EEPROM) or flash memory.
  • Volatile memory may be random access memory (RAM), which acts as an external cache.
  • By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
  • According to the methods provided in the embodiments of the present application, the present application also provides a computer program product, the computer program product including computer program code which, when run on a computer, causes the computer to execute the methods performed by the units in the embodiments shown in FIG. 2, FIG. 5, and FIG. 9.
  • According to the methods provided in the embodiments of the present application, the present application also provides a computer-readable storage medium storing program code which, when run on a computer, causes the computer to execute the methods performed by the units in the embodiments shown in FIG. 2, FIG. 5, and FIG. 9.
  • The modules in the above device embodiments correspond one-to-one to the units in the method embodiments, and the corresponding module or unit performs the corresponding step; for example, a communication unit (transceiver) performs the receiving or sending steps in the method embodiments, and steps other than sending and receiving may be performed by a processing unit (processor). For the functions of specific units, reference may be made to the corresponding method embodiments.
  • the number of processors may be one or more.
  • a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a computing device and the computing device may be components.
  • One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between 2 or more computers.
  • these components can execute from various computer readable media having various data structures stored thereon.
  • Components may communicate by way of local and/or remote processes, for example based on a signal having one or more data packets (e.g., data from two components interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet that interacts with other systems by way of the signal).
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • The technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Manipulator (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application discloses a touch behavior recognition method, a related apparatus, and a robot. The method is applied to a robot that includes a capacitive touch sensor, an inertial measurement unit (IMU), and a processor, and serves to improve the accuracy of touch behavior recognition. In the method of the embodiment, the capacitive touch sensor detects touch state information of a touch action, the touch state information indicating a touch state or a non-touch state; the IMU detects acceleration information of the touch action; the processor determines the touch duration of the touch action from the touch state information, and determines the touch action type from the touch duration of the touch action and the vibration amplitude of the touch action, the vibration amplitude being determined from the acceleration information of the touch action, and the touch action type being used to determine the touch behavior.

Description

一种触摸行为识别方法、装置以及设备
本申请要求于2021年02月26日提交中国专利局、申请号为202110217491.5、发明名称为“一种触摸行为识别方法、装置以及设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及人工智能领域,尤其涉及一种触摸行为识别方法、装置以及设备。
背景技术
机器人的触摸传感(Touch Sensing)能够帮助机器人理解现实世界中物体的交互行为,这些行为取决于其重量和刚度,取决于触摸时表面的感觉、接触时的变形情况以及被推动时的移动方式。只有给机器人也配备先进的触摸传感器,即“触觉传感(Tactile Sensing)”系统,才能使其意识到周围的环境,远离潜在的破坏性影响,并为后续的手部操作等任务提供信息。然而,目前大多数机器人交互式技术系统由于缺乏对触觉传感技术的有效应用,其动作不准确、不稳定,交互过程“笨拙”,极大地限制了他们的交互和认知能力。
目前,基于电容触摸传感器的人机交互已有较快发展。多通道的触摸传感器则能够识别滑动和缩放等更复杂的触摸行为,多通道电容触摸传感器一般以触摸板的形式呈现,多用于触摸屏。现有机器人一般在局部安装触摸传感器,识别该触摸部位的触摸行为。例如,在机器人头部安装一个触摸传感器,当感应到头部单次接触时,机器人可能会给出对应反馈(如播放某个语音或屏幕上显示某个表情)。
然而,每个触摸动作之间是隔离的,当出现不间断不同类型触摸动作的时候,如连续单击后进行双击再长按,此时无法连续上报正确的触摸动作,由此降低触摸动作识别的准确度,从而降低触摸行为识别的准确度。
发明内容
本申请提供了一种触摸行为识别方法,相关装置以及设备,用于根据触摸动作的触摸时长以及触摸动作的振动幅度共同确定触摸动作类型,由此提升触摸动作类型识别的准确度,根据准确度提升后的触摸动作类型确定触摸行为,能够提升触摸行为识别的准确度。
本申请的第一方面提供了一种触摸行为识别方法,该方法应用于机器人,且机器人包括电容触摸传感器,惯性测量单元(inertial measurement unit,IMU)以及处理器。基于此,电容触摸传感器检测触摸动作的触摸状态信息,该触摸动作的触摸状态信息指示触摸状态,或,非触摸状态,然后IMU检测触摸动作的加速度信息,其次,处理器根据触摸动作的触摸状态信息确定触摸动作的触摸时长,并根据触摸动作的触摸时长以及触摸动作的振动幅度,确定触摸动作类型,该触摸动作的振动幅度是根据触摸动作的加速度信息确定的,且触摸动作类型用于确定触摸行为。
在该实施方式中,通过检测触摸动作的触摸状态信息确定触摸动作的触摸时长,并通过触摸动作的加速度信息确定触摸动作的振动幅度,再根据触摸动作的触摸时长以及触摸 动作的振动幅度共同确定触摸动作类型,由此提升触摸动作类型识别的准确度,基于此,再根据准确度提升后的触摸动作类型确定触摸行为,能够提升触摸行为识别的准确度。
在本申请的一种可选实施方式中,处理器先根据每个触摸动作的触摸时长,确定对应于每个触摸动作的时间间隔阈值,该时间间隔阈值正比于触摸动作的触摸时长,然后处理器根据时间间隔阈值,拆分触摸动作。
在该实施方式中,不同的触摸动作之间的时间间隔阈值与最近一个触摸动作或触摸子动作的触摸时长正相关,由此能够实现时间间隔阈值的自适应更新。基于此,动态时间间隔阈值设定能够更好地判断触摸动作的连续性。
在本申请的一种可选实施方式中,在本申请提供的触摸行为识别方法中,处理器还能够根据触摸动作的触摸状态信息,确定触摸动作的触摸起始时间点和触摸终止时间点,并根据触摸起始时间点和触摸终止时间点,确定触摸动作的触摸时长,以及,每两个相邻的触摸动作之间的时间间隔。基于此,在时间间隔大于或等于第一时间间隔阈值的情况下,处理器确定触摸动作为新的触摸动作,该第一时间间隔阈值为两个相邻的触摸动作中在前的触摸动作对应的时间间隔阈值,反之,在时间间隔小于第一时间间隔阈值的情况下,处理器确定触摸动作为触摸子动作。
在该实施方式中,处理器能够处理触摸动作之间连续变化的情况,即识别新的触摸动作以及在触摸动作中出现的新的触摸子动作,以准确捕捉连续的触摸动作之间的转变,因此不同的触摸动作之间不再是彼此隔离的,使得确定的结果更符合真实的触摸动作,从而提升确定触摸动作的准确度。
在本申请的一种可选实施方式中,机器人包括多个触摸部位,电容触摸传感器包括位于多个触摸部位的多个电容触摸传感器。基于此,由于有一些触摸行为则涉及多个触摸部位,因此需要根据在多个触摸部位进行的触摸动作即可确定触摸行为,在本申请提供的触摸行为识别方法中,处理器能够根据发生在多个触摸部位的多个触摸动作的触摸动作类型,确定触摸行为。例如,握手为在手心继续抚摸或者长按的触摸动作,以及在在手背继续抚摸或者长按的触摸动作。基于此,在不同的触摸部位进行触摸动作,能够组合成为触摸行为。
在该实施方式中,在机器人所包括的多个触摸部位所发送的触摸动作被确定的基础上,能够进一步地确定每个部位对应的触摸动作类型,实现复杂触摸行为的识别,进一步地提升触摸行为识别的准确度。
在本申请的一种可选实施方式中,在机器人包括多个触摸部位,电容触摸传感器包括位于多个触摸部位的多个电容触摸传感器的基础上,处理器能够根据发生在多个触摸部位的多个触摸动作的触摸动作类型,根据预设规则确定触摸行为。
在该实施方式中,提供了根据多个触摸动作的触摸动作类型,通过预设规则确定触摸行为的方法,由此提升触摸行为识别的可行性。
在本申请的一种可选实施方式中,在机器人包括多个触摸部位,电容触摸传感器包括位于多个触摸部位的多个电容触摸传感器的基础上,处理器根据发生在多个触摸部位的多个触摸动作的触摸动作类型,根据分类模型确定触摸行为,该分类模型是以历史触摸数据 为训练数据,通过训练得到的模型。
在该实施方式中,以历史触摸数据为训练数据,通过训练得到分类模型,由于根据历史触摸数据作为模型训练数据,由此所得到的分类模型输出的结果更符合真实的触摸行为,基于此,根据多个触摸动作的触摸动作类型,通过分类模型确定触摸行为,能够进一步地提升触摸行为识别的可行性。
在本申请的一种可选实施方式中,触摸动作的触摸时长用于根据预设的时长阈值区分触摸动作类型。
在该实施方式中,具体通过根据预设的时长阈值区分触摸动作类型,由此提升区分触摸动作类型的可行性,从而提升本方案的可行性。
在本申请的一种可选实施方式中,上述预设的时长阈值包括第一时长阈值和第二时长阈值,且第一时长阈值小于第二时长阈值。基于此,在一种情况下,触摸动作的触摸时长小于第一时长阈值的触摸动作为拍。在另一种情况下,触摸动作的触摸时长大于或等于第一时长阈值,且小于第二时长阈值的触摸动作为抚摸。在又一种情况下,所处触摸动作的触摸时长大于或等于第二时长阈值的触摸动作为长按。在实际应用中,不同的触摸动作类型之间的第一时长阈值和第二时长阈值是根据每个触摸子动作的平均间隔时长以及每个触摸动作中每个触摸子动作的的平均接触时长、通过进行实验和/或基于大量数据的统计所确定的。
在该实施方式中,预设的时长阈值中包括不同的时长阈值,且不同的时长阈值能够划分不同的触摸时长范围,从而能够对不同时长范围内的触摸动作进行细化分类,由此通过触摸动作的触摸时长即能够确定该触摸动作的触摸动作类型,以实现不同触摸动作类型的划分,在保证本方案可行性的基础上,提升了本方案的灵活性。
在本申请的一种可选实施方式中,触摸动作的振动幅度用于根据预设的振动幅度阈值区分触摸动作类型。在一种情况下,当触摸动作的振动幅度小于振动幅度阈值的触摸动作,此时触摸动作类型为轻拍、轻抚或轻按中任一项。在另一种情况下,当触摸动作的振动幅度大于或等于振动幅度阈值的触摸动作,此时触摸动作类型为重拍、重抚或重按中任一项。
在该实施方式中,在无压力传感器的情况下,能够实现重拍和轻拍,或轻抚以及重抚,或轻按以及重按的目的,由此提升对触摸动作类型检测的准确度。
本申请的第二方面提供了一种触摸行为识别装置,包括:
检测模块,用于检测触摸动作的触摸状态信息,其中,触摸动作的触摸状态信息指示触摸状态,或,非触摸状态;
检测模块,还用于检测触摸动作的加速度信息;
处理模块,用于根据触摸动作的触摸状态信息确定触摸动作的触摸时长;
处理模块,还用于根据触摸动作的触摸时长以及触摸动作的振动幅度,确定触摸动作类型,其中,触摸动作的振动幅度是根据触摸动作的加速度信息确定的,触摸动作类型用于确定触摸行为。
在本申请的一种可选实施方式中,处理模块,还用于根据每个触摸动作的触摸时长,确定对应于每个触摸动作的时间间隔阈值,其中,时间间隔阈值正比于触摸动作的触摸时 长;
处理模块,还用于根据时间间隔阈值,拆分触摸动作。
在本申请的一种可选实施方式中,处理模块,还用于根据触摸动作的触摸状态信息,确定触摸动作的触摸起始时间点和触摸终止时间点;
处理模块,还用于根据触摸起始时间点和触摸终止时间点,确定触摸动作的触摸时长,以及,每两个相邻的触摸动作之间的时间间隔;
处理模块,具体用于:
在时间间隔大于或等于第一时间间隔阈值的情况下,处理器确定触摸动作为新的触摸动作,其中,第一时间间隔阈值为两个相邻的触摸动作中在前的触摸动作对应的时间间隔阈值;
在时间间隔小于第一时间间隔阈值的情况下,处理器确定触摸动作为触摸子动作。
在本申请的一种可选实施方式中,处理模块,还用于根据发生在多个触摸部位的多个触摸动作的触摸动作类型,确定触摸行为。
在本申请的一种可选实施方式中,处理模块,具体用于根据发生在多个触摸部位的多个触摸动作的触摸动作类型,根据预设规则确定触摸行为。
在本申请的一种可选实施方式中,处理模块,具体用于根据发生在多个触摸部位的多个触摸动作的触摸动作类型,根据分类模型确定触摸行为,其中,分类模型是以历史触摸数据为训练数据,通过训练得到的模型。
在本申请的一种可选实施方式中,触摸动作的触摸时长用于根据预设的时长阈值区分触摸动作类型。
在本申请的一种可选实施方式中,时长阈值包括第一时长阈值和第二时长阈值,第一时长阈值小于第二时长阈值;
触摸动作的触摸时长小于第一时长阈值的触摸动作为拍;
触摸动作的触摸时长大于或等于第一时长阈值,且小于第二时长阈值的触摸动作为抚摸;
所处触摸动作的触摸时长大于或等于第二时长阈值的触摸动作为长按。
在本申请的一种可选实施方式中,触摸动作的振动幅度用于根据预设的振动幅度阈值区分触摸动作类型;
触摸动作的振动幅度小于振动幅度阈值的触摸动作,为轻拍、轻抚或轻按中任一项;
触摸动作的振动幅度大于或等于振动幅度阈值的触摸动作,为重拍、重抚或重按中任一项。
第三方面,提供了一种机器人,包括电容触摸传感器,IMU以及处理器。该电容触摸传感器,IMU以及处理器与存储器耦合,可用于执行存储器中的指令,以实现上述第一方面中任一种可能实现方式中的方法。可选地,该触摸行为识别装置还包括存储器。可选地,该触摸行为识别装置还包括通信接口,处理器与通信接口耦合,所述通信接口用于输入和/或输出信息,所述信息包括指令和数据中的至少一项。
在另一种实现方式中,该触摸行为识别装置为配置于机器人中的芯片或芯片系统。当 该触摸行为识别装置为配置于机器人中的芯片或芯片系统时,所述通信接口可以是输入/输出接口、接口电路、输出电路、输入电路、管脚或相关电路等。所述处理器也可以体现为处理电路或逻辑电路。
第四方面,提供了一种处理器,包括:输入电路、输出电路和处理电路。所述处理电路用于通过所述输入电路接收信号,并通过所述输出电路发射信号,使得所述处理器执行上述第一方面中任一种可能实现方式中的方法。
在具体实现过程中,上述处理器可以为芯片,输入电路可以为输入管脚,输出电路可以为输出管脚,处理电路可以为晶体管、门电路、触发器和各种逻辑电路等。输入电路所接收的输入的信号可以是由例如但不限于接收器接收并输入的,输出电路所输出的信号可以是例如但不限于输出给发射器并由发射器发射的,且输入电路和输出电路可以是同一电路,该电路在不同的时刻分别用作输入电路和输出电路。本申请实施例对处理器及各种电路的具体实现方式不做限定。
第五方面,提供了一种触摸行为识别装置,包括通信接口和处理器。所述通信接口与所述处理器耦合。所述通信接口用于输入和/或输出信息。所述信息包括指令和数据中的至少一项。所述处理器用于执行计算机程序,以使得所述触摸行为识别装置执行第一方面中任一种可能实现方式中的方法。
可选地,所述处理器为一个或多个,所述存储器为一个或多个。
第六方面,提供了一种触摸行为识别装置,包括处理器和存储器。该处理器用于读取存储器中存储的指令,并可通过接收器接收信号,通过发射器发射信号,以使得所述装置执行第一方面中任一种可能实现方式中的方法。
可选地,所述处理器为一个或多个,所述存储器为一个或多个。
可选地,所述存储器可以与所述处理器集成在一起,或者所述存储器与处理器分离设置。
在具体实现过程中,存储器可以为非瞬时性(non-transitory)存储器,例如只读存储器(read only memory,ROM),其可以与处理器集成在同一块芯片上,也可以分别设置在不同的芯片上,本申请实施例对存储器的类型以及存储器与处理器的设置方式不做限定。
应理解,相关的信息交互过程,例如发送消息可以为从处理器输出消息的过程,接收消息可以为向处理器输入接收到的消息的过程。具体地,处理输出的信息可以输出给发射器,处理器接收的输入信息可以来自接收器。其中,发射器和接收器可以统称为收发器。
上述第五方面以及第六方面中的触摸行为识别装置可以是芯片,该处理器可以通过硬件来实现也可以通过软件来实现,当通过硬件实现时,该处理器可以是逻辑电路、集成电路等;当通过软件来实现时,该处理器可以是一个通用处理器,通过读取存储器中存储的软件代码来实现,该存储器可以集成在处理器中,可以位于该处理器之外,独立存在。
第七方面,提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序(也可以称为代码,或指令),当所述计算机程序被运行时,使得计算机执行上述第一方面中任一种可能实现方式中的方法。
第八方面,提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机 程序(也可以称为代码,或指令)当其在计算机上运行时,使得计算机执行上述第一方面中任一种可能实现方式中的方法。
第九方面,提供了一种非易失性计算机可读存储介质,所述非易失性计算机可读存储介质存储有计算机程序(也可以称为代码,或指令)当其在计算机上运行时,使得计算机执行上述第一方面中任一种可能实现方式中的方法。
第十方面,本申请提供了一种芯片系统,该芯片系统包括处理器和接口,所述接口用于获取程序或指令,所述处理器用于调用所述程序或指令以实现或者支持机器人实现第一方面所涉及的功能。
在一种可能的设计中,所述芯片系统还包括存储器,所述存储器,用于保存机器人必要的程序指令和数据。该芯片系统,可以由芯片构成,也可以包括芯片和其他分立器件。
需要说明的是,本申请第二方面至第十方面的实施方式所带来的有益效果可以参照第一方面的实施方式进行理解,因此没有重复赘述。
附图说明
图1为本申请实施例中系统架构的一个实施例示意图;
图2为本申请实施例中机器人的一个实施例示意图;
图3为本申请实施例中触摸动作与触摸子动作的一个实施例示意图;
图4为本申请实施例中长时间触摸行为检测的方法的一个实施例流程图;
图5为本申请实施例中触摸动作结束检测方法的一个实施例流程图;
图6为本申请实施例中触摸行为识别方法的一个实施例流程图;
图7为本申请实施例中结合IMU以及电容触摸传感器确定触摸动作类型的方法流程图;
图8为本申请实施例中基于分类模型进行触摸行为识别的一个实施例示意图;
图9为本申请实施例中触摸行为识别装置一个结构示意图。
具体实施方式
为了使本申请的上述目的、技术方案和优点更易于理解,下文提供了详细的描述。所述详细的描述通过使用方框图、流程图和/或示例提出了设备和/或过程的各种实施例。由于这些方框图、流程图和/或示例包含一个或多个功能和/或操作,所以本领域内人员将理解可以通过许多硬件、软件、固件或它们的任意组合单独和/或共同实施这些方框图、流程图或示例内的每个功能和/或操作。本申请的说明书和权利要求书及附图中的术语“第一”、“第二”、“第三”、“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
触摸(Touch)是人类在进行协调交互时的主要方式之一。通过触摸感知到的触觉(Sense  of Touch)可以帮助人类评估物体的属性,如大小、形状、质地、温度等。此外,还可以利用触觉来检测物体的滑脱,进而发展人类对身体的认识。触觉将压力、振动、疼痛、温度等多种感觉信息传递给中枢神经系统,帮助人类感知周围环境,避免潜在的伤害。研究表明,与视觉和听觉相比,人类的触觉在处理物体的物质特征和细节形状方面更胜一筹。与人类一样,机器人的触摸传感能够帮助机器人理解现实世界中物体的交互行为,这些行为取决于其重量和刚度,取决于触摸时表面的感觉、接触时的变形情况以及被推动时的移动方式。只有给机器人也配备先进的触摸传感器,即“触觉传感”系统,才能使其意识到周围的环境,远离潜在的破坏性影响,并为后续的手部操作等任务提供信息。然而,目前大多数机器人交互式技术系统由于缺乏对触觉传感技术的有效应用,其动作不准确、不稳定,交互过程“笨拙”,极大地限制了他们的交互和认知能力。
因此,基于电容触摸传感器的人机交互已有较快发展,单通道的触摸传感器一般能够实现单次接触/点击、两次接触/点击和长时间接触(长按)的识别,多通道的触摸传感器则能够识别滑动和缩放等更复杂的触摸行为。多通道电容触摸传感器一般以触摸板的形式呈现,多用于触摸屏。现有机器人一般在局部安装触摸传感器,识别该触摸部位的触摸行为。例如,在机器人头部安装一个触摸传感器,当感应到头部单次接触时,机器人可能会给出对应反馈(如播放某个语音或屏幕上显示某个表情)。然而,触摸传感器虽能够对单击、双击和长按检测进行识别,但每个触摸动作之间是隔离的,当出现不间断不同类型触摸动作的时候,如连续单击后进行双击再长按,此时无法连续上报正确的触摸动作,由此降低触摸动作识别的准确度,从而降低触摸行为识别的准确度。
为了解决上述问题,本申请实施例提供了一种触摸行为识别方法,该触摸行为识别方法应用于机器人。为了便于理解,请参阅图1,图1为本申请实施例中系统架构的一个实施例示意图,结合各个触摸部位的触摸检测以及振动检测,进行触摸动作的识别,从而提升对触摸行为识别的准确度。本实施例中,触摸部位包括但不限于:头顶、头左侧、头右侧、胸口、左腋下、右腋下、肚子、左手背、左手心、右手背以及右手心。如图1所示,本申请实施例提供的系统架构中,可根据各个触摸部位的触摸检测及振动检测,识别出用户对机器人的触摸行为。
由于本申请实施例所提供的触摸行为识别方法应用于机器人,下面对本申请实施例所使用的机器人进行介绍,请参阅图2,图2为本申请实施例中机器人的一个实施例示意图,如图2所示,机器人200包括一至多个电容触摸传感器;可选地,机器人200还可以包括一至多个惯性测量单元(inertial measurement unit,IMU)。
具体地,机器人200还能够包含处理器。该处理器设置于机器人200内。该处理器用于控制整个机器人200的运作,接收、响应或执行相关控制指令后输出控制信息来控制机器人各个部位的操作,机器人根据相关控制信息输出相应的反应。该处理器可以称为中央处理器(central processing unit,CPU)。该处理器可能是一种集成电路芯片,具有逻辑和/或信号的处理能力、计算能力。该处理器还可以是通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)、片上系统(SoC)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。通用处理器可以是微处理器或者该处理器也 可以是任何常规的处理器等。
进一步地,机器人200还能够包含机器人本身的形体、框架、结构,以及内外多个部件等。例如,机器人200还可以包括壳体(图2未示出),用于驱动机器人200运动的驱动组件(图2未示出),用于使得机器人200移动的运动组件(图2未示出),例如轮子或腿部等,用于接收外界声音/语音信息和发出声音的音频组件(图2未示出),用于显示图像信息、文字信息或者表情信息的显示组件(图2未示出),用于使得机器人进行其他运动的执行机构(图2未示出),例如机械臂等,还有各种电子元件、电路结构、零件结构等。
本实施例中所提供的机器人200的全身共11个触摸部位,机器人200的11个触摸部位包括头顶201,头右侧202,头左侧203,胸口204,肚子205,右腋下206,左腋下207,右手心208,右手背209,左手心210以及左手背211。并且在机器人200的前述11个触摸部位的外壳内层贴有铜皮,每个触摸部位的铜皮处安装对应的电容触摸传感器,每个电容触摸传感器用于检测对应触摸部位的触摸动作。其次,机器人200的头触摸部位置(包括头顶201,头右侧202以及头左侧203)还安装有IMU,用于检测触摸时产生的振动,从而辅助处理器进行轻拍和重拍的区分。因此,当机器人200的头触摸部位置安装有IMU时,可以识别的触摸动作包括但不限于轻拍、连续轻拍、抚摸、连续抚摸、长按、头部重拍、头部连续重拍等。
可以理解的是,本实施例以11个触摸部位进行介绍,在实际应用中,还可以根据需求对机器人的触摸部位进一步地的划分。其次,本实施例所介绍的IMU安装于机器人的头部,在实际应用中,可以根据实际需求将IMU部署于机器人的任何触摸部位。因此图2所示的机器人仅用于理解本方案,而不应理解为本方案的限定。
具体地,本申请实施例提供的触摸行为识别方法能够应用于各个场景,具体示例请参阅表1。
表1
Figure PCTCN2021136066-appb-000001
通过前述介绍可知,由于每个触摸部位的触摸动作检测都是依靠对应的电容触摸传感器进行的,因此每个触摸部位的触摸动作检测都是相互独立的,但运用的算法原理一致,如何检测出每个触摸部位上的触摸动作尤为关键。
因此,为了便于理解,先对本实施例中所提供的触摸动作拆分算法进行介绍,触摸动作拆分算法用于检测用户对机器人某个触摸部位进行的触摸动作(Touch Event)与触摸子 动作(Sub-touch Event)。下面对触摸动作以及触摸子动作进行介绍。请参阅图3,图3为本申请实施例中触摸动作与触摸子动作的一个实施例示意图,如图3所示,图3中示出的横轴为时间轴,时间轴上方的竖线用于表示触摸动作或触摸子动作的触摸起始时间点,例如,触摸起始时间点301,触摸起始时间点303以及触摸起始时间点305。时间轴下方的竖线用于表示触摸动作或触摸子动作的触摸终止时间点,例如,触摸终止时间点302,触摸终止时间点304以及触摸终止时间点306。
基于此,相邻的在前触摸起始时间点与在后触摸终止时间点之间的时间段内,用户与机器人某个触摸部位之间处于接触状态,例如,用户与机器人某个触摸部位在触摸起始时间点301至触摸终止时间点302之间的时间段内处于接触状态,以及,用户与机器人某个触摸部位在触摸起始时间点303至触摸终止时间点304之间的时间段内处于接触状态,以及,用户与机器人某个触摸部位在触摸起始时间点305至触摸终止时间点306之间的时间段内处于接触状态。其次,相邻的在前触摸终止时间点和在后触摸起始时间点之间的时间段内,用户与机器人某个触摸部位之间处于非接触状态,例如,用户与机器人某个触摸部位在触摸终止时间点302至触摸起始时间点303之间的时间段内处于非接触状态,以及,用户与机器人某个触摸部位在触摸终止时间点304至触摸起始时间点305之间的时间段内处于非接触状态。
示例性地,若用户对机器人进行了触摸动作A以及触摸动作B。其中,触摸动作B为在起始时间点305至触摸终止时间点306之间的时间段内,用户与机器人某个触摸部位之间的触摸动作,触摸动作B可以包括触摸状态。而触摸动作A包括触摸子动作1以及触摸子动作2,触摸子动作1为触摸起始时间点301至触摸终止时间点302之间的时间段内,用户与机器人某个触摸部位之间的触摸动作,而触摸子动作2为触摸起始时间点303至触摸终止时间点304之间的时间段内,用户与机器人某个触摸部位之间的触摸动作。虽然触摸子动作1以及触摸子动作2之间存在非触摸状态的时间间隔(即触摸终止时间点302至触摸起始时间点303之间的时间段),但该时间间隔是小于触摸动作A以及触摸动作B之间的时间间隔(即触摸终止时间点304至触摸起始时间点305之间的时间段)的。因此,触摸动作A为触摸起始时间点301至触摸终止时间点304之间的时间段内,用户与机器人某个触摸部位之间的触摸动作,且触摸动作A可以包括触摸状态与非触摸状态。
可选地,图3所示的触摸动作A可对应于连续轻拍的触摸行为,触摸动作B可对应于抚摸的触摸行为或者长按的触摸动作。可以理解的是,图3的示例仅用于理解触摸动作以及触摸子动作,在实际应用中,每个触摸动作可以包括多个触摸子动作,且触摸子动作的数量不应理解为对本方案的限定。
由于电容触摸传感器能够基于电容的变化,检测到用户与机器人的某个触摸部位之间处于触摸状态,并向处理器上报用户与机器人的某个触摸部位之间处于触摸状态,例如,处于触摸状态时上报触摸状态标识“1”,处于非触摸状态上报触摸状态标识“0”,当电容触摸传感器检测到用户与机器人的某个触摸部位之间处于触摸状态,则向处理器上报触摸部位以及触摸状态标识“1”,因此,处理器接收到触摸状态标识“1”后,能够确定在机器人某个触摸部位发生了触摸行为。反之,当电容触摸传感器检测到用户与机器人的某个触 摸部位之间处于非触摸状态,则向处理器上报触摸部位以及触摸状态标识“0”因此,处理器接收到触摸状态标识“0”后,能够确定在机器人某个触摸部位停止触摸行为。基于此,下面将介绍本实施例所提供的长时间触摸动作检测(可记作“detectLongTouch”)方法,用于确定用户与机器人的某个触摸部位之间触摸是否为新的触摸动作的开始。请参阅图4,图4为本申请实施例中长时间触摸行为检测的方法的一个实施例流程图,如图4所示,具体步骤如下。
步骤S400,处理器启动长时间触摸行为检测。
步骤S401,处理器判断上一触摸动作是否结束。
本实施例中,处理器需要判断上一触摸动作是否结束。若是,则执行步骤S402。若否,则执行步骤S403。
具体地,在电容触摸传感器感应到某个触摸部位的触摸动作后,处理器可以通过时间间隔阈值判断上一触摸动作是否结束。可选地,该时间间隔阈值可以是根据上一触摸动作和/或在前一个或多个触摸动作的时长和/或时间间隔自适应调整的,或者说动态变化的,每个时间间隔阈值可以是根据人工制定的规则自适应调整的,也可以是利用人工智能技术,基于历史数据、通过学习模型自动计算得到的。
若当前触摸动作与上一触摸动作之间的时间间隔大于或等于时间间隔阈值,则可以确定上一触摸动作已结束,当前触摸动作为新的触摸动作。具体地,上述当前触摸动作与上一触摸动作之间的时间间隔,为当前触摸动作的触摸起始时间点与上一触摸动作的触摸终止时间点。例如,请再次参阅图3,当触摸动作A的触摸终止时间点304与触摸动作B的触摸起始时间点305之间的时间间隔大于或等于时间间隔阈值,此时通过步骤S401能够判断上一触摸动作已经结束,且当前触摸动作为新的触摸动作,即上一触摸动作即为触摸动作A,当前触摸动作为触摸动作B。
同理可知，若当前触摸动作与上一触摸动作之间的时间间隔小于时间间隔阈值，则可以确定上一触摸动作并未结束，当前触摸动作为上一触摸动作的一个新的触摸子动作。例如，请再次参阅图3，当触摸子动作1的触摸终止时间点302与触摸子动作2的触摸起始时间点303之间的时间间隔小于时间间隔阈值，此时通过步骤S401能够判断上一触摸动作并未结束，且当前触摸动作不是新的触摸动作，那么上一触摸动作可以为包括触摸子动作1的触摸动作A，当前触摸动作为触摸子动作2。在触摸子动作2发生之前（即触摸起始时间点303之前），触摸动作A包括触摸子动作1而不包括触摸子动作2，而在触摸子动作2发生之后（例如，触摸结束时间点304之后或触摸起始时间点303之后），触摸动作A既包括触摸子动作1、也包括触摸子动作2，触摸子动作2是触摸动作A的一个新的触摸子动作。
在一种可能的实现方式中,时间间隔阈值可以与当前触摸动作最近的一个触摸动作或触摸子动作的触摸时长正相关。具体地,可以通过公式(1)确定不同触摸动作之间的时间间隔阈值:
I=min(k·T touch+I min,I max);  (1)
其中,k指示可调参数,T touch指示当前触摸动作最近的一个触摸动作或触摸子动作的 触摸时长,I min指示预设最短时间间隔,I max指示预设最长时间间隔。
例如,请再次参阅图3,若上一触摸动作为触摸动作A,当前触摸动作为触摸动作B,那么T touch指示触摸起始时间点303至触摸终止时间点304之间的时间段。假设触摸子动作2的触摸时长(此示例中即T touch)为100毫秒(ms)、触摸终止时间点304和触摸起始时间点305之间的时长即时间间隔为350ms,且预设最短时间间隔I min为50ms、预设最长时间间隔I max为2秒(s)、可调参数k被配置为0.7,则此时根据公式(1)计算可得时间间隔阈值为120ms。时间间隔350ms大于时间间隔阈值120ms,因此可以确定触摸动作A已结束,触摸动作B为新的触摸动作。
步骤S402,处理器确定当前触摸动作的触摸起始时间点。
本实施例中,在执行步骤S402之前,处理器首先通过步骤S401进行判断,确定开始新的触摸动作,即上一触摸动作已经结束。因此,当前触摸动作为上一触摸动作完成后的一个新的触摸动作,此时处理器能够确定当前触摸动作的开始时间。示例性地,请再次参阅图3,若上一触摸动作为触摸动作A,当前触摸动作为触摸动作B,那么处理器将确定触摸起始时间点305。
步骤S403,处理器将当前触摸动作确定为上一触摸动作的触摸子动作。
本实施例中,在执行步骤S403之前,处理器首先通过步骤S401进行判断,确定未出现新的触摸动作,即上一触摸动作尚未结束。因此,当前触摸动作将会被确定为上一触摸动作的触摸子动作。可选地,处理器还能够确定触摸子动作的触摸起始时间点和/或确定为触摸子动作的标识信息等。示例性地,请再次参阅图3,若上一触摸动作为包括触摸子动作1的触摸动作A,当前触摸动作为触摸子动作2,则触摸子动作2会被确定为触摸子动作A包括的一个新的触摸子动作,那么处理器可以确定触摸子动作2的触摸起始时间点303。
此种情况下，“上一触摸动作”包括的触摸子动作数量增加一个。例如，请再次参阅图3，在触摸子动作2发生前、后，触摸动作A包括的触摸子动作的数量由一个（触摸子动作1）增加为两个（触摸子动作1和触摸子动作2）。基于此，处理器确定触摸子动作后，还能够更新触摸动作的间隔数（可记作“numIntervals”），以及触摸动作中每个触摸子动作的平均间隔时长（可记作“meanTouchIntervals”）。例如，假设上一触摸动作之前已包括第一触摸子动作以及第二触摸子动作，第一触摸子动作和第二触摸子动作之间的时间间隔时长为80ms。步骤S403将当前触摸动作确定为上一触摸动作的第三触摸子动作，且第二触摸子动作和第三触摸子动作之间的时间间隔时长为100ms，则处理器将上一触摸动作的间隔数从1个更新为2个，并且将上一触摸动作的平均间隔时长从80ms更新为(80+100)/2=90ms。
基于此,在触摸动作变化的时候,例如,连续轻拍后更改为连续抚摸,此时每个触摸子动作的平均间隔时长会逐渐变长。因此处理器在确定触摸动作类型时,还能够参考更新后得到的平均间隔时长进行确定,以满足不同触摸动作的可能性,提升确定触摸动作类型的准确度。
步骤S404,处理器启动下一个长时间触摸行为检测。
本实施例中,处理器完成步骤S403或步骤S402后,将启动下一个长时间触摸行为检 测,即重复执行图4所示流程。当下一次触摸行为发生时,处理器能够通过与步骤S401类似的方式判断是否发生新的触摸动作,进而通过与步骤S402类似的方式完成对新触摸动作的触摸起始时间的上报,或者通过与步骤S403类似的方式完成对将当前触摸动作确定为上一触摸动作所包括的新的触摸子动作的结果的上报。
以上对长时间触摸动作检测方法进行了介绍,下面将介绍本申请实施例所提供的触摸动作结束检测(可记作“detectTouchEvent”)方法,用于确定用户与机器人的某个触摸部位之间的触摸动作是否结束。请参阅图5,图5为本申请实施例中触摸动作结束检测方法的一个实施例流程图,如图5所示,具体步骤如下。
步骤S500、处理器启动触摸动作结束检测。
步骤S501、处理器判断是否出现新的触摸子动作。
本实施例中,在发生触摸行为时,电容触摸传感器在电容触摸传感器感应到某个触摸部位的触摸动作后,处理器能够通过与步骤S401类似的方式判断是否发生新的触摸动作,即根据时间间隔阈值进行判断,此处不再重复描述。
若步骤S501判断为是,说明未出现新的触摸动作,由此能够通过与步骤S403类似的方式,将当前触摸动作确定为触摸子动作,即确定出现新的触摸子动作,并执行步骤S502。示例性地,请再次参阅图3,当触摸子动作1的触摸终止时间点302与触摸子动作2的触摸起始时间点303之间的时间间隔小于时间间隔阈值,此时通过步骤S501能够判断上一触摸动作并未结束,且当前触摸动作为新的触摸子动作。那么上一触摸动作可以为包括触摸子动作1的触摸动作A,当前触摸动作为触摸子动作2。在触摸子动作2发生之前(即触摸起始时间点303之前),触摸动作A包括触摸子动作1而不包括触摸子动作2,而在触摸子动作2发生之后(例如,触摸结束时间点304之后或触摸起始时间点303之后),触摸动作A既包括触摸子动作1、也包括触摸子动作2,触摸子动作2是触摸动作A的一个新的触摸子动作。
若上一触摸动作(上一触摸子动作)为触摸子动作1,那么当前触摸动作(新的触摸子动作)为触摸子动作2。
其次,若步骤S501判断为否,说明已出现新的触摸动作,由此确定未出现新的触摸子动作,并执行步骤S503。
步骤S502、处理器确定结果。
本实施例中,在执行步骤S502之前,处理器首先通过步骤S501进行判断,确定开始新的触摸子动作。因此,当前触摸动作为上一触摸动作中的一个新的触摸子动作,处理器确定这一结果。可选地,处理器还可以确定触摸子动作的触摸起始时间点和/或确定为触摸子动作的标识信息等。例如,请再次参阅图3,若处理器确定出现新的触摸子动作为触摸子动作2,那么处理器能够确定触摸子动作2的触摸起始时间点303。
此种情况下,与前述实施类似,“上一触摸动作”包括的触摸子动作数量增加一个。例如,请再次参阅图3,在触摸子动作2发生前、后,触摸动作A包括的触摸子动作的数量由一个(触摸子动作1)增加为两个(触摸子动作1和触摸子动作2)。基于此,处理器确定结果后,还能够更新触摸动作的中触摸子动作的数量(可记作“numTouches”),以及触 摸动作中每个触摸子动作的平均触摸时长(可记作“meanTouchDuration”)。例如,假设上一触摸动作之前已包括第一触摸子动作以及第二触摸子动作,第一触摸子动作的触摸时长为200毫秒,第二触摸子动作的触摸时长为240毫秒。步骤S502将当前触摸动作确定上一触摸动作的第三触摸子动作,且第三触摸子动作的触摸时长为280毫秒,则处理器将上一触摸动作的触摸子动作的数量从2个更新为3个,并且将上一触摸动作中每个触摸子动作的平均触摸时长从(200+240)/2=220毫秒更新为(200+240+280)/3=240毫秒。
步骤S503、处理器确定触摸动作的触摸终止时间点。
本实施例中,在执行步骤S503之前,处理器首先通过步骤S501进行判断,确定未出现新的触摸子动作,即上一触摸动作已经结束,并开始新的触摸动作。因此,当前触摸动作为上一触摸动作完成后的一个新的触摸动作。若上一触摸动作包括多个触摸子动作,那么处理器可以将上一触摸动作中最后一次出现的触摸子动作的触摸终止时间确定为触摸动作的触摸终止时间。示例性地,请再次参阅图3,上一触摸动作(触摸动作A)已经结束,并且出现新的触摸动作(触摸动作B)。由于触摸动作A中最后一次出现的触摸子动作为触摸子动作2,且触摸子动作2的触摸终止时间点为304,因此可以确定触摸动作A的触摸终止时间点为304。可选地,处理器还可以确定触摸子动作2的触摸起始时间点303,或者触摸子动作2中触摸起始时间点303至触摸终止时间点304的触摸时长等。
若触摸动作中未包括触摸子动作,在电容触摸传感器感应到某个触摸部位的触摸动作后,处理器能够接收到电容触摸传感器上报的触摸状态标识“1”后,而在电容触摸传感器感应到对某个触摸部位停止触摸动作,处理器能够接收到电容触摸传感器上报的触摸状态标识“0”,接收到触摸状态标识“0”的时间点即为触摸动作的触摸终止时间。示例性地,请再次参阅图3,对于新的触摸动作(触摸动作B)而言,由于触摸动作B不包括触摸子动作,处理器在时间点305接收到电容触摸传感器上报的触摸状态标识“1”,此时处理器能够确定时间点305为触摸动作B触摸起始时间点305,而处理器在时间点306接收到电容触摸传感器上报的触摸状态标识“0”,此时处理器可以确定时间点306为触摸动作B的触摸终止时间点。可选地,处理器还可以确定触摸动作B的触摸起始时间点305,或者触摸动作B中触摸起始时间点305至触摸终止时间点306的触摸时长等。
步骤S504,处理器启动下一个触摸动作结束检测。
本实施例中,处理器完成步骤S503或步骤S502后,将启动下一个触摸动作结束检测,即重复执行图5所示流程。当下一次触摸动作发生时,处理器能够通过与步骤S501类似的方式判断是否发生新的触摸子动作,进而通过与步骤S502类似的方式完成对将新的触摸子动作确定的结果的上报,或者通过与步骤S503类似的方式完成对新的触摸子动作的触摸终止时间上报。
图3与图5所示实施例的一些可能的实现方式中,不同的触摸动作之间的时间间隔阈值与最近一个触摸动作或触摸子动作的触摸时长正相关,由此能够实现时间间隔阈值的自适应更新。基于此,动态时间间隔阈值设定能够更好地判断触摸动作的连续性,并且本申请实施例所提出的触摸动作拆分算法能够处理触摸动作之间连续变化的情况。例如,传统的识别算法只能识别单击、双击和长按,并且每个动作之间是隔离的。本发明提出的触摸 动作拆分算法则可以连续识别上一触摸动作与当前触摸动作,并且能够准确捕捉连续的触摸动作之间的转变,因此不同的触摸动作之间不再是彼此隔离的,使得确定的结果更符合真实的触摸动作。
基于此,下面将基于前述实施例所介绍的触摸动作检测的方法,对本申请实施例中触摸行为识别方法进行介绍。请参阅图6,图6为本申请实施例中触摸行为识别方法的一个实施例流程图,如图所示,本申请实施例中触摸行为识别方法可以应用于图2所示出的机器人,基于此,触摸行为识别方法的具体步骤如下。
步骤S601,处理器确定触摸动作的触摸时长。
本实施例中,处理器可通过在触摸开始时,执行图4所示出的步骤S401至步骤S404,实现长时间触摸动作的检测。其次,可通过在触摸结束时,执行图5所示出的步骤S501至步骤S504,实现触摸动作结束的检测。
基于此,由于处理器在长时间触摸动作检测方法和触摸动作结束检测方法中,能够确定触摸动作的触摸起始时间以及触摸终止时间,因此处理器能够基于触摸动作的触摸起始时间以及触摸终止时间计算得到触摸动作的触摸时长,以实现确定触摸动作的触摸时长的目的。
具体地,在用户对机器人某个部位开始进行触摸动作时,处理器会执行触摸动作开始算法(可记作“actionOnTouch”),并且通过触摸动作开始算法调用图4示出的长时间触摸动作检测(可记作“detectLongTouch”)方法,实现长时间触摸动作检测的目的。其次,在用户结束对机器人某个部位的触摸动作时,处理器会执行触摸动作结束算法(可记作“actionOnQuit”),并且通过触摸动作结束算法调用图5示出的触摸动作结束检测(可记作“detectTouchEvent”)方法,实现触摸动作结束检测的目的。
步骤S602、处理器根据触摸动作的触摸时长确定触摸动作类型。
本实施例中,可以根据触摸动作的触摸时长对不同的触摸动作类型进行划分。例如,触摸动作类型A对应的触摸时长为触摸时长范围A,触摸动作类型B对应的触摸时长为触摸时长范围B,触摸动作类型C对应的触摸时长为触摸时长范围C,以及触摸动作类型D对应的触摸时长为触摸时长范围D,当步骤S601中获取触摸动作的触摸时长处于触摸时长范围C,那么可以确定该触摸动作的类型为触摸动作类型C。
在一种可能的实现方式中,可以将触摸时长小于200ms的触摸动作的触摸动作类型定义为“拍”,将触摸时长大于或等于200ms,且小于2s的触摸动作触摸动作类型定义为“抚摸”,将触摸时长大于或等于2s的触摸动作触摸动作类型定义类型为“长按”。应理解,本实施例以200ms、2s两个时长阈值作为拍、抚摸以及长按三种触摸动作类型的划分,仅作为一种可能的实现方式的举例,而非限定;本实施例对于时间阈值划分的时长区间为开区间还是闭区间,也不构成限定。在实际应用中,不同的触摸动作类型之间的时间间隔划分可以是根据前述实施例中所获取的触摸动作中每个触摸子动作的平均间隔时长以及每个触摸动作中每个触摸子动作的的平均接触时长、通过进行实验和/或基于大量数据的统计所确定的。
可选地,不同的触摸动作类型对应的触摸时长范围可能相同或者存在交叉,仅根据触 摸时长范围不能够准确确定触摸动作类型,此时可以结合触摸时长之外的其他信息进一步区分触摸动作类型。上述其他信息,可以是例如机器人运动状态(如加速度)、机器人获取的声音/语音信息、机器人的图像采集装置采集的图片或视频或者机器人从其他电子设备获取的信息(如声音/语音信息、传感器信息、图片或视频信息)等信息。
在一些场景中,有些触摸动作仅仅通过电容触摸传感器所上报的触摸状态标识无法进行准确区分,例如,很难仅仅通过电容触摸传感器所上报的触摸状态标识区分重拍和轻拍。此时,可以通过IMU所检测到的加速度信息,然后处理器根据加速度信息确定触摸动作的振动幅度,并通过触摸动作的振动幅度以及触摸动作的触摸时长确定触摸动作。例如,在触摸部位进行拍的振动幅度小于振动幅度阈值,则能够进一步地确定触摸动作类型为轻拍。在触摸部位进行拍的振动幅度大于或等于振动幅度阈值,则能够进一步地确定触摸动作类型为重拍。应理解,在实际应用中,通过IMU还能进一步地分辨更多个触摸动作,例如,轻抚以及重抚,或者轻按以及重按等,并且不同的触摸动作对应的振动幅度阈值也不同。基于IMU以及电容触摸传感器所检测到信息确定的触摸动作能够更为准确。
图7示例性地展示了本申请实施例提供的一种结合IMU以及电容触摸传感器确定触摸动作类型的方法流程图。如图7所示,确定触摸动作类型的方法具体包括如下步骤。
步骤S700、处理器启动触摸动作类型识别。
步骤S701、处理器确定触摸动作类型。
本实施例中,处理器通过与前述实施例类似方式,通过触摸动作的触摸时长确定触摸动作类型。
步骤S702、处理器判断振动幅度与振动幅度阈值的大小关系。
本实施例中,IMU检测触摸动作的加速度信息,并且将触摸动作的加速度信息向机器人包括的处理器上报,处理器根据触摸动作的加速度信息确定触摸动作的振动幅度,并且判断振动幅度与振动幅度阈值的大小关系。
步骤S703、振动幅度是否大于或等于振动幅度阈值。
本实施例中,处理器判断触摸动作的振动幅度是否大于或等于振动幅度阈值,若是,则执行步骤S704。若否,则执行步骤S705。
S704、处理器将触摸动作类型修正为重。
本实施例中,通过步骤S703可知,触摸动作的振动幅度大于或等于振动幅度阈值。基于此,若在步骤S701中确定触摸动作类型为拍,那么结合触摸动作的振动幅度大于振动幅度阈值这一结果,能够进一步地确定触摸动作类型为重拍,并且将步骤S701中所确定的触摸动作类型从“拍”修正为“重拍”。同理可知,若在步骤S701中确定触摸动作类型为抚摸,那么结合触摸动作的振动幅度大于振动幅度阈值这一结果,能够进一步地确定触摸动作类型为重抚,并且将步骤S701中所确定的触摸动作类型从“抚摸”修正为“重抚”。其他的触摸动作类型也能够基于步骤S704的类似方式进行修正,在此不对所有触摸动作类型的修正进行穷举。
S705、处理器将触摸动作类型修正为轻。
本实施例中,通过步骤S703可知,触摸动作的振动幅度小于振动幅度阈值。基于此, 若在步骤S701中确定触摸动作类型为拍,那么结合触摸动作的振动幅度小于振动幅度阈值这一结果,能够进一步地确定触摸动作类型为轻拍,并且将步骤S701中所确定的触摸动作类型从“拍”修正为“轻拍”。同理可知,若在步骤S701中确定触摸动作类型为长按,那么结合触摸动作的振动幅度小于振动幅度阈值这一结果,能够进一步地确定触摸动作类型为轻按,并且将步骤S701中所确定的触摸动作类型从“长按”修正为“轻按”。其他的触摸动作类型也能够基于步骤S705的类似方式进行修正,在此不对所有触摸动作类型的修正进行穷举。
S706、处理器输出修正后的触摸动作类型。
本实施例中,完成步骤S704或步骤S705后,将输出修正后的触摸动作类型,使得处理器能够根据修正后的触摸动作类型确定触摸行为,以进一步地提升触摸行为确定的准确度。例如,若通过步骤S704将步骤S701中所确定的触摸动作类型从“拍”修正为“重拍”,那么在步骤S706中将输出修正后的触摸动作类型“重拍”。同理可知,若通过步骤S705将步骤S701中所确定的触摸动作类型从“长按”修正为“轻按”,那么在步骤S706中将输出修正后的触摸动作类型“轻按”。
因此,在步骤S602所介绍的一些场景中,即在无压力传感器的情况下,能够通过IMU所检测到的加速度信息与电容触摸传感器所上报的触摸状态标识,实现重拍和轻拍,或轻抚以及重抚,或轻按以及重按的目的,并且提升对触摸动作类型检测的准确度。
步骤S603、处理器根据触摸动作类型确定触摸行为。
本实施例中,通过步骤S602能够确定每个触摸部位的触摸动作类型。由于有些触摸行为只涉及一个触摸部位,因此根据在一个触摸部位进行的触摸动作即可确定触摸行为,例如,抚摸头部即为在头部进行抚摸的触摸动作,拍肚子即为在肚子进行拍的触摸动作。
然而,有一些触摸行为则涉及多个触摸部位,因此需要根据在多个触摸部位进行的触摸动作即可确定触摸行为,例如,握手为在手心继续抚摸或者长按的触摸动作,以及在在手背继续抚摸或者长按的触摸动作。基于此,在不同的触摸部位进行触摸动作,能够组合成为触摸行为。
由于本实施例中所介绍的图2所示机器人仅包括11个触摸部位，且每个触摸部位之间区分度很高，因此可以编码制定融合感知的规则。然而，当机器人触摸部位数量非常多时（例如电子皮肤），编码制定融合感知的规则将耗费大量的人力且泛化能力可能较差。则此时可以运用人工智能技术，以大量数据为驱动，训练出可以识别出不同复杂触摸行为的分类模型。可选地，上述分类模型可以是例如机器学习模型、深度学习模型或强化学习模型等学习模型。下面分别对基于预设规则进行触摸行为识别以及基于分类模型进行触摸行为识别方法进行介绍。
一、基于预设规则进行触摸行为识别
本实施例中,预设规则是人为通过编码制定的规则。例如,if(触摸动作1 in触摸部位3)and(触摸动作4 in触摸部位6)then触摸行为1,表示若在触摸部位3进行触摸动作1,以及在触摸部位6进行触摸动作4即为触摸行为1。或者,if(触摸动作1 in触摸部位4)and(触摸动作1 in触摸部位3)then触摸行为3,表示若在触摸部位4进行 触摸动作1,以及在触摸部位3进行触摸动作1即为触摸行为1。
可以理解的是,前述示例仅为人为制定的一种规则,在实际应用中,预设的规则不限于两个单部位触摸行为,可以是在一至多个部位进行触摸动作组成触摸行为。例如,(抚摸in左手心)and(抚摸in左手背)and(抚摸in右手心)and(抚摸in右手背)then握住双手。表示在左手心进行抚摸,在左手背进行抚摸,在右手心进行抚摸,以及在右手背进行抚摸即为握住双手。因此触摸动作以及触摸部位的数量不应理解为本申请的限定。
二、基于分类模型进行触摸行为识别
本实施例中,以多个触摸部位的触摸行为作为待训练触摸行为识别模型的输入,以预设触摸行为作为输出,通过常见的模型训练方法进行训练。示例性地,若机器人在N个触摸部位均装有电容触摸传感器,请参阅图8,图8为本申请实施例中基于分类模型进行触摸行为识别的一个实施例示意图。如图8所示,将在N个触摸部位所检测的触摸动作作为训练完成后得到的触摸行为识别模型(用于对触摸行为进行分类的分类模型)的输入,触摸行为识别模型可以根据上述输入,输出触摸行为类别。具体地,如图8所示,将在触摸部位1以及触摸部位N进行拍、在触摸部位2以及触摸部位N-1未进行触摸动作(图8中用“无”指示未进行触摸动作)、在触摸部位3进行抚摸等信息作为触摸行为识别模型的输入,触摸行为识别模型能够根据以上输入,确定触摸行为的类别,最终输出分类结果触摸行为4。
可以理解的是,前述示例均用于理解本方案,多个触摸部位所检测的触摸动作所对应的触摸行为需要根据实际情况灵活确定。
本申请实施例中触摸行为识别方法能够在多个触摸部位上的触摸动作类型被检测出来的基础上,基于人工规则或者分类模型,实现多个触摸部位的复杂触摸行为的识别,例如,握手和拥抱等,进一步地提升触摸行为识别的准确度以及可靠性。
在一些可能的实现方式中,本申请上述实施例中的触摸传感器还可以是除电容传感器之外的其他类型的传感器,例如,基于表面声波和红外的触摸传感器等。
在一些可能的实现方式中,本申请上述实施例中的IMU还可以是其他类型的可以检测机器人运动信息的传感器,例如,利用接触式微机电系统(micro electro mechanical system,MEMS)麦克风同样可以实现振动检测。
在一些可能的实现方式中,本申请上述实施例提供的机器人可以根据机器人处理器确定的用户触摸动作类型或触摸行为作出反应,实现拟人化互动。例如,机器人处理器确定用户对机器人进行了“握住双手”的触摸行为,则机器人头部的显示屏可显示笑脸表情、机器人扬声器播放“你想跟我玩游戏吗?”的语音。又例如,机器人处理器确定用户对机器人进行了“连续重击”的触摸动作,则机器人头部的显示屏可显示哭泣表情、机器人扬声器播放“啊,好疼”的语音。基于本申请实施例提供的触摸行为识别方法,机器人能够准确识别、区分出用户的连续动作,提高触摸行为识别的准确性,进而作出更准确的反应、与用户进行更拟真的互动,提升用户体验。
在一些可能的实现方式中,本申请上述实施例提供的方法除了可以应用于机器人外,还可以应用于任意交互式电子设备中,例如手机、可折叠电子设备、平板电脑、桌面型计 算机、膝上型计算机、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、人工智能(artificial intelligence,AI)设备、可穿戴式设备、脑机接口设备、车载设备、智能家居设备、智能医疗设备、智能健身设备或智慧城市设备等。
本申请实施例提供一种机器人,该机器人包括一个至多个触摸传感器以及处理器。
其中,一个至多个触摸传感器用于检测进行触摸行为时电容的变化。
处理器用于执行图4,图5,图6以及图7所示出的方法。
可选地,上述机器人还包括IMU,上述IMU用于检测加速度信息;
上述处理器还用于根据加速度信息确定触摸动作的振动幅度。
上述主要从方法的角度对本申请实施例提供的方案进行了介绍。可以理解的是,触摸行为识别装置为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的模块及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以基于上述方法示例对触摸行为识别装置进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
由此,下面对本申请中的触摸行为识别装置进行详细描述,且触摸行为识别装置被设置于机器人,请参阅图9,图9为本申请实施例中触摸行为识别装置一个结构示意图,如图所示,触摸行为识别装置900包括:
检测模块901,用于检测触摸动作的触摸状态信息,其中,触摸动作的触摸状态信息指示触摸状态,或,非触摸状态;
检测模块901,还用于检测触摸动作的加速度信息;
处理模块902,用于根据触摸动作的触摸状态信息确定触摸动作的触摸时长;
处理模块902,还用于根据触摸动作的触摸时长以及触摸动作的振动幅度,确定触摸动作类型,其中,触摸动作的振动幅度是根据触摸动作的加速度信息确定的,触摸动作类型用于确定触摸行为。
在一种可选的实现方式中,在上述图9所对应的实施例基础上,本申请实施例提供的触摸行为识别装置900的另一实施例中,处理模块902,还用于根据每个触摸动作的触摸时长,确定对应于每个触摸动作的时间间隔阈值,其中,时间间隔阈值正比于触摸动作的触摸时长;
处理模块902,还用于根据时间间隔阈值,拆分触摸动作。
在一种可选的实现方式中,在上述图9所对应的实施例基础上,本申请实施例提供的触摸行为识别装置900的另一实施例中,处理模块902,还用于根据触摸动作的触摸状态信息,确定触摸动作的触摸起始时间点和触摸终止时间点;
处理模块902,还用于根据触摸起始时间点和触摸终止时间点,确定触摸动作的触摸时长,以及,每两个相邻的触摸动作之间的时间间隔;
处理模块902,具体用于:
在时间间隔大于或等于第一时间间隔阈值的情况下,处理器确定触摸动作为新的触摸动作,其中,第一时间间隔阈值为两个相邻的触摸动作中在前的触摸动作对应的时间间隔阈值;
在时间间隔小于第一时间间隔阈值的情况下,处理器确定触摸动作为触摸子动作。
在一种可选的实现方式中,在上述图9所对应的实施例基础上,本申请实施例提供的触摸行为识别装置900的另一实施例中,处理模块902,还用于根据发生在多个触摸部位的多个触摸动作的触摸动作类型,确定触摸行为。
在一种可选的实现方式中,在上述图9所对应的实施例基础上,本申请实施例提供的触摸行为识别装置900的另一实施例中,处理模块902,具体用于根据发生在多个触摸部位的多个触摸动作的触摸动作类型,根据预设规则确定触摸行为。
在一种可选的实现方式中,在上述图9所对应的实施例基础上,本申请实施例提供的触摸行为识别装置900的另一实施例中,处理模块902,具体用于根据发生在多个触摸部位的多个触摸动作的触摸动作类型,根据分类模型确定触摸行为,其中,分类模型是以历史触摸数据为训练数据,通过训练得到的模型。
在一种可选的实现方式中,在上述图9所对应的实施例基础上,本申请实施例提供的触摸行为识别装置900的另一实施例中,触摸动作的触摸时长用于根据预设的时长阈值区分触摸动作类型。
在一种可选的实现方式中,在上述图9所对应的实施例基础上,本申请实施例提供的触摸行为识别装置900的另一实施例中,时长阈值包括第一时长阈值和第二时长阈值,第一时长阈值小于第二时长阈值;
触摸动作的触摸时长小于第一时长阈值的触摸动作为拍;
触摸动作的触摸时长大于或等于第一时长阈值,且小于第二时长阈值的触摸动作为抚摸;
所处触摸动作的触摸时长大于或等于第二时长阈值的触摸动作为长按。
在一种可选的实现方式中,在上述图9所对应的实施例基础上,本申请实施例提供的触摸行为识别装置900的另一实施例中,触摸动作的振动幅度用于根据预设的振动幅度阈值区分触摸动作类型;
触摸动作的振动幅度小于振动幅度阈值的触摸动作,为轻拍、轻抚或轻按中任一项;
触摸动作的振动幅度大于或等于振动幅度阈值的触摸动作,为重拍、重抚或重按中任一项。
本申请实施例提供一种机器人,包括至少一个电容触摸传感器,IMU、至少一个处理器、 存储器、输入输出(I/O)接口;
所述至少一个电容触摸传感器,IMU以及至少一个处理器与所述存储器、所述输入输出接口耦合;
所述至少一个电容触摸传感器,IMU以及至少一个处理器用于执行存储器中存储的计算机程序,以执行上述任一方法实施例中至少一个电容触摸传感器,IMU以及至少一个处理器所执行的方法。
本申请还提供了一种触摸行为识别装置,包括至少一个处理器,所述至少一个处理器用于执行存储器中存储的计算机程序,以使得所述触摸行为识别装置执行上述任一方法实施例中电容触摸传感器,IMU以及处理器所执行的方法。
应理解,上述触摸行为识别装置可以是一个或多个芯片。例如,该触摸行为识别装置可以是现场可编程门阵列(field programmable gate array,FPGA),可以是专用集成芯片(application specific integrated circuit,ASIC),还可以是系统芯片(system on chip,SoC),还可以是中央处理器(central processor unit,CPU),还可以是网络处理器(network processor,NP),还可以是数字信号处理电路(digital signal processor,DSP),还可以是微控制器(micro controller unit,MCU),还可以是可编程控制器(programmable logic device,PLD)或其他集成芯片。
本申请实施例还提供了一种触摸行为识别装置,包括处理器和通信接口。所述通信接口与所述处理器耦合。所述通信接口用于输入和/或输出信息。所述信息包括指令和数据中的至少一项。所述处理器用于执行计算机程序,以使得所述触摸行为识别装置执行上述任一方法实施例中电容触摸传感器,IMU以及处理器所执行的方法。
本申请实施例还提供了一种触摸行为识别装置,包括处理器和存储器。所述存储器用于存储计算机程序,所述处理器用于从所述存储器调用并运行所述计算机程序,以使得所述触摸行为识别装置执行上述任一方法实施例中电容触摸传感器,IMU以及处理器所执行的方法。
在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
应注意,本申请实施例中的处理器可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理 器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
可以理解,本申请实施例中的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。应注意,本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
根据本申请实施例提供的方法,本申请还提供一种计算机程序产品,该计算机程序产品包括:计算机程序代码,当该计算机程序代码在计算机上运行时,使得该计算机执行图2,图5以及图9所示实施例中的各个单元执行的方法。
根据本申请实施例提供的方法,本申请还提供一种计算机可读存储介质,该计算机可读存储介质存储有程序代码,当该程序代码在计算机上运行时,使得该计算机执行图2,图5以及图9所示实施例中的各个单元执行的方法。
上述各个装置实施例中模块和方法实施例中各个单元完全对应,由相应的模块或单元执行相应的步骤,例如通信单元(收发器)执行方法实施例中接收或发送的步骤,除发送、接收外的其它步骤可以由处理单元(处理器)执行。具体单元的功能可以参考相应的方法实施例。其中,处理器可以为一个或多个。
在本说明书中使用的术语“部件”、“模块”、“系统”等用于表示计算机相关的实体、硬件、固件、硬件和软件的组合、软件、或执行中的软件。例如,部件可以是但不限于,在处理器上运行的进程、处理器、对象、可执行文件、执行线程、程序和/或计算机。通过图示,在计算设备上运行的应用和计算设备都可以是部件。一个或多个部件可驻留在进程和/或执行线程中,部件可位于一个计算机上和/或分布在2个或更多个计算机之间。此外,这些部件可从在上面存储有各种数据结构的各种计算机可读介质执行。部件可例如根据具有一个或多个数据分组(例如来自与本地系统、分布式系统和/或网络间的另一部件交互的二个部件的数据,例如通过信号与其它系统交互的互联网)的信号通过本地和/或远程进程来通信。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可 以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (20)

  1. 一种触摸行为识别方法,所述方法应用于机器人,所述机器人包括电容触摸传感器,惯性测量单元IMU以及处理器,其特征在于,所述方法包括:
    所述电容触摸传感器检测触摸动作的触摸状态信息,其中,所述触摸动作的触摸状态信息指示触摸状态,或,非触摸状态;
    所述IMU检测所述触摸动作的加速度信息;
    所述处理器根据所述触摸动作的触摸状态信息确定触摸动作的触摸时长;
    所述处理器根据所述触摸动作的触摸时长以及所述触摸动作的振动幅度,确定触摸动作类型,其中,所述触摸动作的振动幅度是根据所述触摸动作的加速度信息确定的,所述触摸动作类型用于确定触摸行为。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述处理器根据每个所述触摸动作的触摸时长,确定对应于每个所述触摸动作的时间间隔阈值,其中,所述时间间隔阈值正比于所述触摸动作的触摸时长;
    所述处理器根据所述时间间隔阈值,拆分所述触摸动作。
  3. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    所述处理器根据所述触摸动作的触摸状态信息,确定所述触摸动作的触摸起始时间点和触摸终止时间点;
    所述处理器根据所述触摸起始时间点和所述触摸终止时间点,确定所述触摸动作的触摸时长,以及,每两个相邻的所述触摸动作之间的时间间隔;
    所述处理器根据所述时间间隔阈值,拆分所述触摸动作,具体包括:
    在所述时间间隔大于或等于第一时间间隔阈值的情况下,所述处理器确定所述触摸动作为新的触摸动作,其中,所述第一时间间隔阈值为两个相邻的所述触摸动作中在前的所述触摸动作对应的所述时间间隔阈值;
    在所述时间间隔小于所述第一时间间隔阈值的情况下,所述处理器确定所述触摸动作为触摸子动作。
  4. 根据权利要求1至3中任一项所述的方法,其特征在于,所述机器人包括多个触摸部位,所述电容触摸传感器包括位于所述多个触摸部位的多个电容触摸传感器,所述方法还包括:
    所述处理器根据发生在所述多个触摸部位的多个触摸动作的所述触摸动作类型,确定所述触摸行为。
  5. 根据权利要求4所述的方法,其特征在于,所述处理器根据发生在所述多个触摸部位的多个触摸动作的所述触摸动作类型,确定所述触摸行为,具体包括:
    所述处理器根据发生在所述多个触摸部位的多个触摸动作的所述触摸动作类型,根据预设规则确定所述触摸行为。
  6. 根据权利要求4所述的方法,其特征在于,所述处理器根据发生在所述多个触摸部位的多个触摸动作的所述触摸动作类型,确定所述触摸行为,具体包括:
    所述处理器根据发生在所述多个触摸部位的多个触摸动作的所述触摸动作类型,根据 分类模型确定所述触摸行为,其中,所述分类模型是以历史触摸数据为训练数据,通过训练得到的模型。
  7. The method according to any one of claims 1 to 6, wherein the touch duration of the touch action is used to distinguish the touch action type according to preset duration thresholds.
  8. The method according to claim 7, wherein the duration thresholds include a first duration threshold and a second duration threshold, and the first duration threshold is less than the second duration threshold;
    a touch action whose touch duration is less than the first duration threshold is a pat;
    a touch action whose touch duration is greater than or equal to the first duration threshold and less than the second duration threshold is a stroke;
    a touch action whose touch duration is greater than or equal to the second duration threshold is a long press.
  9. The method according to claim 7 or 8, wherein the vibration amplitude of the touch action is used to distinguish the touch action type according to a preset vibration amplitude threshold;
    a touch action whose vibration amplitude is less than the vibration amplitude threshold is any one of a light pat, a light stroke, or a light press;
    a touch action whose vibration amplitude is greater than or equal to the vibration amplitude threshold is any one of a heavy pat, a heavy stroke, or a heavy press.
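The two-threshold classification of claims 7 to 9 might be sketched as follows; the concrete threshold values are placeholders, as the claims leave them to be preset.

    T1, T2 = 0.2, 1.0  # assumed duration thresholds in seconds, T1 < T2
    A_TH = 2.0         # assumed vibration amplitude threshold

    def classify_action(duration_s, amplitude):
        if duration_s < T1:
            kind = "pat"
        elif duration_s < T2:  # T1 <= duration < T2
            kind = "stroke"
        else:                  # duration >= T2
            kind = "long press"
        force = "light" if amplitude < A_TH else "heavy"
        return force + " " + kind  # e.g. "light stroke"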
  10. A touch behavior recognition apparatus, wherein the touch behavior recognition apparatus includes:
    a detection module, configured to detect touch state information of a touch action, where the touch state information of the touch action indicates a touch state or a non-touch state;
    the detection module is further configured to detect acceleration information of the touch action;
    a processing module, configured to determine a touch duration of the touch action according to the touch state information of the touch action;
    the processing module is further configured to determine a touch action type according to the touch duration of the touch action and a vibration amplitude of the touch action, where the vibration amplitude of the touch action is determined according to the acceleration information of the touch action, and the touch action type is used to determine a touch behavior.
  11. The apparatus according to claim 10, wherein the processing module is further configured to determine, according to the touch duration of each touch action, a time interval threshold corresponding to each touch action, where the time interval threshold is proportional to the touch duration of that touch action;
    the processing module is further configured to split the touch actions according to the time interval threshold.
  12. The apparatus according to claim 11, wherein the processing module is further configured to determine a touch start time point and a touch end time point of the touch action according to the touch state information of the touch action;
    the processing module is further configured to determine, according to the touch start time point and the touch end time point, the touch duration of the touch action and the time interval between every two adjacent touch actions;
    the processing module is specifically configured to:
    when the time interval is greater than or equal to a first time interval threshold, determine that the touch action is a new touch action, where the first time interval threshold is the time interval threshold corresponding to the earlier of the two adjacent touch actions;
    when the time interval is less than the first time interval threshold, determine that the touch action is a touch sub-action.
  13. The apparatus according to any one of claims 10 to 12, wherein the processing module is further configured to determine the touch behavior according to the touch action types of multiple touch actions occurring at multiple touch parts.
  14. The apparatus according to claim 13, wherein the processing module is specifically configured to determine the touch behavior according to a preset rule, based on the touch action types of the multiple touch actions occurring at the multiple touch parts.
  15. The apparatus according to claim 13, wherein the processing module is specifically configured to determine the touch behavior according to a classification model, based on the touch action types of the multiple touch actions occurring at the multiple touch parts, where the classification model is a model obtained through training with historical touch data as the training data.
  16. The apparatus according to any one of claims 10 to 15, wherein the touch duration of the touch action is used to distinguish the touch action type according to preset duration thresholds.
  17. The apparatus according to claim 16, wherein the duration thresholds include a first duration threshold and a second duration threshold, and the first duration threshold is less than the second duration threshold;
    a touch action whose touch duration is less than the first duration threshold is a pat;
    a touch action whose touch duration is greater than or equal to the first duration threshold and less than the second duration threshold is a stroke;
    a touch action whose touch duration is greater than or equal to the second duration threshold is a long press.
  18. The apparatus according to claim 16 or 17, wherein the vibration amplitude of the touch action is used to distinguish the touch action type according to a preset vibration amplitude threshold;
    a touch action whose vibration amplitude is less than the vibration amplitude threshold is any one of a light pat, a light stroke, or a light press;
    a touch action whose vibration amplitude is greater than or equal to the vibration amplitude threshold is any one of a heavy pat, a heavy stroke, or a heavy press.
  19. A robot, including:
    a capacitive touch sensor, an inertial measurement unit (IMU), a processor, a memory, and an input/output (I/O) interface;
    the capacitive touch sensor, the IMU, and the processor are coupled to the memory and the input/output interface;
    the capacitive touch sensor, the IMU, and the processor run the computer instructions in the memory to perform the method according to any one of claims 1 to 9.
  20. A computer-readable storage medium, including instructions, where when the instructions are run on a computer, the computer is enabled to perform the method according to any one of claims 1 to 9.
PCT/CN2021/136066 2021-02-26 2021-12-07 Touch behavior recognition method, apparatus, and device WO2022179239A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110217491.5A CN115047993A (zh) 2021-02-26 2021-02-26 Touch behavior recognition method, apparatus, and device
CN202110217491.5 2021-02-26

Publications (1)

Publication Number Publication Date
WO2022179239A1 (zh)

Family

ID=83047780

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/136066 WO2022179239A1 (zh) 2021-02-26 2021-12-07 Touch behavior recognition method, apparatus, and device

Country Status (2)

Country Link
CN (1) CN115047993A (zh)
WO (1) WO2022179239A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090055019A1 (en) * 2007-05-08 2009-02-26 Massachusetts Institute Of Technology Interactive systems employing robotic companions
CN105677077A (zh) * 2015-12-28 2016-06-15 Xiaomi Technology Co., Ltd. Click event determination method and device
CN107885448A (zh) * 2017-10-30 2018-04-06 Nubia Technology Co., Ltd. Control method for application touch operations, mobile terminal, and readable storage medium
CN108237536A (zh) * 2018-03-16 2018-07-03 Chongqing Luban Robot Technology Research Institute Co., Ltd. Robot control system
CN211576216U (zh) * 2020-04-13 2020-09-25 Tianjin Tami Intelligent Technology Co., Ltd. Touch detection device and robot
CN112000273A (zh) * 2020-08-26 2020-11-27 Shenzhen Qianhai WeBank Co., Ltd. Input method, apparatus, device, and computer-readable storage medium


Also Published As

Publication number Publication date
CN115047993A (zh) 2022-09-13

Similar Documents

Publication Publication Date Title
US10905350B2 (en) Camera-guided interpretation of neuromuscular signals
CN114341779B Systems, methods, and interfaces for performing inputs based on neuromuscular control
US11567573B2 (en) Neuromuscular text entry, writing and drawing in augmented reality systems
TWI476633B System and method for transmitting haptic information
JP6144743B2 Wearable device
Devi et al. Low cost tangible glove for translating sign gestures to speech and text in Hindi language
CN108829239A Terminal control method and apparatus, and terminal
JP7259447B2 Speaker detection system, speaker detection method, and program
KR20210079162A Sign language translation service system for the hearing-impaired
WO2022179239A1 Touch behavior recognition method, apparatus, and device
US11262850B2 (en) No-handed smartwatch interaction techniques
WO2022194029A1 Robot feedback method and robot
EP4276591A1 (en) Interaction method, electronic device, and interaction system
CN117063142A Systems and methods for adaptive input thresholding
CN106873779B Gesture recognition apparatus and gesture recognition method
US20230305633A1 (en) Gesture and voice controlled interface device
JP2018027282A Smart board system linked to biometric information and method therefor
US20220261085A1 (en) Measurement based on point selection
JP7390891B2 Client device, server, program, and information processing method
Nakamura Embedded Facial Surface Sensing and Stimulation: Toward Facial Surface Interaction in Virtual Environment
KR20180044535A Holographic smart home system and control method
TW201711010A Sign language translation system that recognizes gestures using electromyographic and inertial sensing devices
Kumari et al. Gesture Recognizing Smart System
KR20200127312A Apparatus and method for clothes shopping using holographic images
WO2023196671A1 (en) Techniques for neuromuscular-signal-based detection of in-air hand gestures for text production and modification, and systems, wearable devices, and methods for using these techniques

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21927663

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21927663

Country of ref document: EP

Kind code of ref document: A1