CN112698747A - Robot touch interaction method and robot - Google Patents

Robot touch interaction method and robot

Info

Publication number
CN112698747A
Authority
CN
China
Prior art keywords
touch
robot
emotion
interactive
preset
Prior art date
Legal status
Pending
Application number
CN202011601350.5A
Other languages
Chinese (zh)
Inventor
李泽华
张涛
申鑫瑞
Current Assignee
Shenzhen Pudu Technology Co Ltd
Original Assignee
Shenzhen Pudu Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Pudu Technology Co Ltd
Priority to CN202011601350.5A
Publication of CN112698747A
Priority to PCT/CN2021/132315
Legal status: Pending


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 - Manipulators not otherwise provided for
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot touch interaction method and a robot. The robot touch interaction method comprises the following steps: after the robot is started, detecting in real time whether a first touch operation on the robot exists, wherein the first touch operation is generated after at least one preset position of a touch sensor on the robot is touched; when the first touch operation is detected, recording the preset position corresponding to the first touch operation as a touch position; determining an emotion attribute according to the touch position, and acquiring first interactive operation information according to the emotion attribute; and controlling the robot to execute a first interactive operation according to the first interactive operation information. According to the invention, good emotional interaction can be carried out between the robot and the interactive person, so that the robot is more vivid, interesting and humanized, the interactive person can feel that the robot is lifelike and approachable, and the user experience is improved.

Description

Robot touch interaction method and robot
Technical Field
The invention relates to the field of robots, in particular to a robot touch interaction method and a robot.
Background
With the development of science and technology, robots are widely applied in various industries. At present, however, robot development is often focused only on application functions; that is, a robot only needs to execute a task (for example, a meal delivery robot only needs to complete the meal delivery task) and does not have other interactive behaviors with users. As a result, current robots often feel stiff and mechanical rather than humanized, which reduces the user experience.
Disclosure of Invention
Therefore, it is necessary to provide a robot touch interaction method and a robot to solve the problem that existing robots feel stiff and mechanical, which reduces the user experience.
A robot touch interaction method, comprising:
after the robot is started, detecting whether a first touch operation on the robot exists in real time, wherein the first touch operation is generated after at least one preset position of a touch sensor on the robot is touched;
when the first touch operation is detected, recording the preset position corresponding to the first touch operation as a touch position;
determining an emotional attribute according to the touch position, and acquiring first interaction operation information according to the emotional attribute;
and controlling the robot to execute a first interactive operation according to the first interactive operation information.
The robot comprises a control module and a touch sensor connected with the control module, wherein the control module is used for executing the robot touch interaction method.
According to the robot touch interaction method and the robot, after the robot is started, whether a first touch operation on the robot exists is detected in real time, the first touch operation being generated after at least one preset position of a touch sensor on the robot is touched; when the first touch operation is detected, the preset position corresponding to the first touch operation is recorded as a touch position; an emotion attribute is determined according to the touch position, and first interactive operation information is acquired according to the emotion attribute; and the robot is controlled to execute a first interactive operation according to the first interactive operation information. According to the invention, the first interactive operation information of the robot can be determined according to the touch position sensed by the robot through the touch sensor, and the first interactive operation can be executed, so that the robot and the interactive person interact emotionally, the robot is more vivid, interesting and humanized, the interactive person can feel that the robot is lifelike and approachable, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic flow chart illustrating a method for robot touch interaction according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a robot touch interaction method according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The robot touch interaction method provided by this embodiment can be applied to a control module of a robot, where the control module communicates with a terminal device. The terminal devices include, but are not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. The control module is mounted within the robot housing and may be integrated from one or more sub-modules.
In one embodiment, as shown in fig. 1, a robot touch interaction method is provided, which includes the following steps:
s10, detecting whether a first touch operation on the robot exists in real time after the robot is started, wherein the first touch operation is generated after at least one preset position of a touch sensor on the robot is touched; the robot starting includes starting the robot from a state that the robot is not powered on to a state that the robot is powered on (the robot is set to be automatically started after being powered on), and starting modes such as starting the robot through a preset instruction after being powered on (the robot is set to be not automatically started after being powered on, but needs to be started after receiving the preset instruction, and the preset instruction can be generated after a preset key on the robot is pressed, or automatically generated at regular time, or generated after the robot is automatically restarted, and the like).
Understandably, the preset positions can be set at any positions on the robot body according to requirements; for example, a preset position can be the head, an ear, the face, an arm or another part of the robot, and a plurality of preset positions can be set. At least one touch sensor is deployed at each preset position of the robot according to requirements. Then, after a person touches any one or more preset positions, the touch sensor sends a touch signal to the control module of the robot, and it is determined that a first touch operation exists.
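As an illustration of steps S10 and S20, the following minimal Python sketch shows how a control module might poll touch sensors deployed at preset positions and record the touched positions; all identifiers (PRESET_POSITIONS, TouchSensor, detect_first_touch) are hypothetical names chosen for this example and are not taken from the patent.

```python
# Hypothetical sketch of steps S10/S20: poll touch sensors at preset positions
# and record the touched preset positions as the touch position.
PRESET_POSITIONS = ["head", "left_ear", "right_ear", "face", "left_arm", "right_arm"]

class TouchSensor:
    def __init__(self, position):
        self.position = position
        self.touched = False      # a real sensor would sense a human-body current

    def is_touched(self):
        return self.touched

def detect_first_touch(sensors):
    """Return the list of preset positions currently touched (the touch position),
    or an empty list if no first touch operation exists."""
    return [s.position for s in sensors if s.is_touched()]

sensors = [TouchSensor(p) for p in PRESET_POSITIONS]
touch_position = detect_first_touch(sensors)
```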
Further, in step S10, after the robot is turned on, the method further includes:
Detecting whether a plurality of preset roles corresponding to the robot are marked or not. In this embodiment, after the robot is started, the currently set interactive role of the robot is obtained. The interactive role refers to one of the different preset roles set for the robot; in the invention, one robot can be set to different interactive roles under different conditions. For example, during meal delivery the interactive role can be a meal delivery attendant, a diner and the like, and during guest greeting the interactive role can be a greeter, a visitor and the like. When a preset role is marked, the marked preset role is the interactive role currently set for the robot. The marks may be implemented with different numbers, letters, pictures, brightness levels or colors.
When all the preset roles are unmarked, prompting an interactive person to select a preset role, marking the preset role selected by the interactive person, and recording it as the interactive role. That is, when no marked preset role exists, no interactive role has yet been selected for the robot; the interactive person therefore needs to be prompted to select one preset role as the interactive role. After the interactive person selects a preset role, the selected preset role is marked and recorded as the interactive role.
When a marked preset role exists, recording the marked preset role as the interactive role. That is, when a preset role has already been selected and marked for the robot, the robot has currently been set with an interactive role. Understandably, after the marked preset role is recorded as the interactive role, the interactive person can still actively modify the interactive role, that is, unmark the currently marked preset role and mark another preset role instead.
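Purely as an illustration of the role-marking step (the dictionary-based mark below is an assumed mechanism; the patent also allows marking by numbers, letters, pictures, brightness or colors), the selection of the interactive role could be sketched as:

```python
# Hypothetical sketch: obtain the currently set interactive role, prompting the
# interactive person to choose one when no preset role is marked yet.
def get_interactive_role(preset_roles, prompt_person):
    """preset_roles: dict mapping role name -> marked flag (True/False).
    prompt_person: callable that asks the interactive person to pick a role."""
    marked = [role for role, is_marked in preset_roles.items() if is_marked]
    if marked:
        return marked[0]                      # a marked preset role already exists
    chosen = prompt_person(list(preset_roles))
    preset_roles[chosen] = True               # mark the chosen preset role
    return chosen

roles = {"meal_delivery_attendant": False, "diner": False, "greeter": False, "visitor": False}
```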
And S20, when the first touch operation is detected, recording the preset position corresponding to the first touch operation as a touch position. Specifically, when a first touch operation on the robot is detected, recording the preset position corresponding to the first touch operation as a touch position; understandably, the touch position corresponding to the first touch operation may include one preset position or a plurality of preset positions.
S30, determining an emotion attribute according to the touch position, and acquiring first interactive operation information according to the emotion attribute. The first interactive operation information includes, but is not limited to, an interactive voice, an interactive expression, an interactive behavior and the like. Understandably, the first interactive operation information is the information corresponding to the interactive operation performed by the robot with the interactive person after the touch position is touched. The interactive voice refers to a voice that can be played through a preset voice player installed on the robot; for example, when slightly happy, the interactive voice can be a light laugh followed by 'I am very happy'. The interactive expression can be an expression picture displayed by a preset expression display module (the preset expression display module can be a display screen set at the face position of the robot); for example, a smiling face can be displayed when happy, a crying face can be displayed when sad, and the like. The interactive behavior can be a limb action of the robot matched with the interactive voice and the interactive expression; the action can be set according to the limb structure of the robot and the temperament of the interactive role, for example, when angry, the robot can turn its head from facing the interactive person to facing away from the interactive person.
Wherein each preset position is associated with an emotion sub-attribute. The emotion sub-attribute refers to the type of emotion to be expressed when that preset position of the robot is touched, such as happiness, anger, shyness and the like. The emotion sub-attributes corresponding to the same preset position may be the same or different for different preset roles. For example, for some preset roles the emotion sub-attribute of touching an ear (a preset position) may be set to shyness, while for other preset roles the emotion sub-attribute of touching an ear may be set to happiness. The emotion attribute corresponding to the touch position of the robot is determined according to the emotion sub-attributes of all preset positions contained in the touch position.
Preferably, in step S30, the acquiring of the first interactive operation information according to the emotion attribute further includes: acquiring the first interactive operation information according to the emotion attribute and the interactive role. That is, since the same emotion attribute may correspond to different interactive operations for different interactive roles, when determining the first interactive operation information, the interactive role currently corresponding to the robot needs to be considered in addition to the emotion attribute.
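A hedged sketch of the associations just described: each (interactive role, preset position) pair is associated with an emotion sub-attribute, and each (interactive role, emotion attribute) pair is associated with first interactive operation information. The table contents below are illustrative assumptions, not values from the patent.

```python
# Hypothetical association tables for emotion sub-attributes and interactive operations.
EMOTION_SUB_ATTRIBUTE = {
    ("greeter", "left_ear"): "shy",
    ("greeter", "right_ear"): "shy",
    ("greeter", "head"): "happy",
    ("diner", "left_ear"): "happy",          # same position, different role, different emotion
}

FIRST_INTERACTIVE_OPERATION = {
    ("greeter", "shy"): {"expression": "blush.png", "voice": "giggle.wav", "behavior": "turn_head_away"},
    ("greeter", "happy"): {"expression": "smile.png", "voice": "i_am_happy.wav", "behavior": "nod"},
}

def first_interactive_operation(role, emotion_attribute):
    """Look up the first interactive operation information for a role and emotion attribute."""
    return FIRST_INTERACTIVE_OPERATION.get((role, emotion_attribute))
```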
Further, in step S30, the determining an emotional attribute according to the touch position includes:
determining whether the touch position contains only one of the preset positions;
when the touch position only comprises one preset position, setting the emotion sub-attribute corresponding to the preset position as the emotion attribute corresponding to the touch position; that is, in this embodiment, if the touch position is only one preset position, since one preset position is associated with one emotion sub-attribute, the emotion attribute corresponding to the touch position is the emotion sub-attribute corresponding to the preset position.
And when the touch position comprises at least two preset positions, acquiring at least two emotion sub-attributes respectively corresponding to each preset position in the touch position, and determining the emotion attribute corresponding to the touch position according to the at least two emotion sub-attributes. That is, if the touch position includes at least two preset positions, since one preset position is associated with one emotion sub-attribute, the emotion attribute corresponding to the touch position needs to be further determined according to the at least two emotion sub-attributes corresponding to the at least two preset positions.
Specifically, the determining the emotion attribute corresponding to the touch position according to at least two emotion sub-attributes includes:
Classifying the different emotion sub-attributes corresponding to the touch position to obtain different emotion classification groups, and recording the same-attribute number of identical emotion sub-attributes in each emotion classification group. That is, when a plurality of preset positions are touched simultaneously, the touch position includes a plurality of preset positions, each preset position corresponds to one emotion sub-attribute, and the emotion sub-attributes corresponding to several of these preset positions may be the same; for example, the emotion sub-attributes corresponding to the left ear and the right ear may be the same. Understandably, in one embodiment, if only one emotion classification group exists after the different emotion sub-attributes corresponding to the touch position are classified (for example, the left ear and the right ear of the robot are touched at the same time), the emotion attribute corresponding to the touch position may be directly recorded as any emotion sub-attribute in that emotion classification group (the emotion sub-attributes within each emotion classification group are the same). When two or more emotion classification groups exist, the same-attribute number of identical emotion sub-attributes in each emotion classification group is recorded (since the emotion sub-attributes within each emotion classification group are the same, the same-attribute number is the number of emotion sub-attributes in that group), and the next step is carried out.
Acquiring the emotion severity coefficients respectively associated with the emotion sub-attributes corresponding to each emotion classification group. In this embodiment, each emotion sub-attribute is associated with an emotion severity coefficient; for example, the emotion severity coefficient of slight happiness may be 0.6, the emotion severity coefficient of shyness 0.7, the emotion severity coefficient of anger 0.8, and so on.
Recording the product of the same-attribute number corresponding to each emotion classification group and its emotion severity coefficient as the emotion value of that group, and recording the emotion sub-attribute corresponding to the emotion classification group with the highest emotion value as the emotion attribute corresponding to the touch position. That is, in this embodiment, when a plurality of preset positions are touched at the same time, the emotion value is first determined according to the same-attribute number and the emotion severity coefficient, and the emotion attribute corresponding to the touch position is then determined according to the emotion value. This improves the accuracy and stability of the robot's emotional expression and avoids the confusion caused when multiple touched positions correspond to different emotion sub-attributes. Understandably, when at least two of the resulting emotion values are tied for the highest value, one of them can be selected at random and its corresponding emotion sub-attribute recorded as the emotion attribute corresponding to the touch position. Alternatively, a priority expression queue of emotion sub-attributes can be set (each interactive role is associated with a priority expression queue, and the emotion sub-attributes in the queue are ordered according to requirements such as the character of the interactive role), and the emotion sub-attribute ranked highest in the queue among the tied emotion sub-attributes is recorded as the emotion attribute corresponding to the touch position.
In another embodiment, after the different emotion sub-attributes corresponding to the touch position are classified into different emotion classification groups, regardless of whether several emotion classification groups exist, the same-attribute number of identical emotion sub-attributes in each emotion classification group is recorded according to the above steps, the emotion severity coefficients associated with the emotion sub-attributes of each emotion classification group are acquired, the product of the same-attribute number and the emotion severity coefficient of each group is recorded as its emotion value, and the emotion sub-attribute corresponding to the emotion classification group with the highest emotion value is recorded as the emotion attribute corresponding to the touch position. In this way, different first interactive operation information can be determined according to the emotion attributes corresponding to different emotion values. That is, when the emotion sub-attributes corresponding to two different touch positions of the same interactive role are the same but the emotion values differ, the two pieces of first interactive operation information corresponding to the two touch positions correspond to the same emotion sub-attribute, but the expression intensity of that emotion sub-attribute reflects the size of the emotion value: the greater the emotion value, the stronger the emotional expression. For example, if the emotion sub-attribute is happiness, an emotion value of 0.6 corresponds to a slightly happy expression, while an emotion value of 1.2 corresponds to a strongly happy expression.
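The selection of the emotion attribute from several touched positions can be condensed into the following sketch; the coefficient values mirror the illustrative 0.6/0.7/0.8 figures above, while the function and parameter names are assumptions made only for this example.

```python
from collections import Counter

# Hypothetical emotion severity coefficients (illustrative values from the description).
SEVERITY = {"slightly_happy": 0.6, "shy": 0.7, "angry": 0.8}

def emotion_attribute(touch_position, sub_attribute_of, priority_queue):
    """touch_position: list of touched preset positions.
    sub_attribute_of: dict mapping a preset position to its emotion sub-attribute.
    priority_queue: emotion sub-attributes ordered by the interactive role's character,
    used only to break ties between equal emotion values."""
    subs = [sub_attribute_of[p] for p in touch_position]
    groups = Counter(subs)                 # emotion classification groups with same-attribute counts
    if len(groups) == 1:                   # only one group: use its emotion sub-attribute directly
        return subs[0]
    # emotion value = same-attribute number * emotion severity coefficient
    values = {sub: count * SEVERITY[sub] for sub, count in groups.items()}
    best = max(values.values())
    tied = [sub for sub, value in values.items() if value == best]
    if len(tied) == 1:
        return tied[0]
    return min(tied, key=priority_queue.index)   # highest-priority sub-attribute wins the tie

# Example: left ear and right ear (both "shy") plus the head ("slightly_happy") touched together.
print(emotion_attribute(["left_ear", "right_ear", "head"],
                        {"left_ear": "shy", "right_ear": "shy", "head": "slightly_happy"},
                        ["angry", "shy", "slightly_happy"]))   # -> "shy" (2 * 0.7 > 1 * 0.6)
```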
S40, controlling the robot to execute a first interactive operation according to the first interactive operation information. The first interactive operation comprises: displaying the interactive expression through the preset expression display module, playing the interactive voice through the preset voice player, and making the robot execute the interactive behavior; the preset expression display module and the preset voice player are both installed on the robot. As described above, the interactive voice refers to a voice that can be played through the preset voice player (for example, when slightly happy, a light laugh followed by 'I am very happy'); the interactive expression can be an expression picture displayed by the preset expression display module, such as a smiling face when happy or a crying face when sad; and the interactive behavior can be a limb action of the robot matched with the interactive voice and the interactive expression, set according to the limb structure of the robot and the temperament of the interactive role, for example turning the head from facing the interactive person to facing away from the interactive person when angry. In some embodiments, if the limbs of the robot need to perform other functions and it is inconvenient for them to act, the robot can also be set to forbid execution of the interactive behavior; in this case the interactive behavior is not executed when the touch position of the robot is touched. Alternatively, if no limb action needs to be executed when certain touch positions are touched, the interactive behavior may be set directly to a null value, and the interactive behavior is likewise not executed when those touch positions are touched.
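As a minimal sketch (the three driver callables below stand in for the preset expression display module, the preset voice player and the limb controller; their names are assumptions), executing the first interactive operation while honoring a disabled or null interactive behavior might look like:

```python
def execute_interactive_operation(operation, display_expression, play_voice, perform_behavior,
                                  behavior_allowed=True):
    """operation: dict with keys 'expression', 'voice' and 'behavior'
    (the interactive expression, interactive voice and interactive behavior)."""
    display_expression(operation["expression"])   # shown on the preset expression display module
    play_voice(operation["voice"])                # played through the preset voice player
    # Skip the limb action when it is forbidden or set to a null value.
    if behavior_allowed and operation.get("behavior") is not None:
        perform_behavior(operation["behavior"])
```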
According to the invention, the first interactive operation information of the robot can be determined according to the touch position sensed by the robot through the touch sensor (and the currently set interactive role of the robot), and the first interactive operation (interactive expression, interactive voice, interactive behavior and the like) is executed, so that the robot and the interactive person can interact emotionally, the robot is more vivid, interesting and humanized, the interactive person can feel that the robot is lifelike and approachable, and the user experience is improved.
In an embodiment, as shown in fig. 2, after step S40, that is, after controlling the robot to execute the first interactive operation according to the first interactive operation information, the method further includes:
s50, if the first touch operation is not terminated, recording the touch duration in real time; the touch duration refers to a time interval between a time point recorded in real time and a time point at which the existence of the first touch operation on the robot is detected;
s60, when the touch duration exceeds a preset duration threshold, acquiring second interaction operation information corresponding to the first interaction operation information; the emotion expression level of the second interactive operation information is higher than that of the first interactive operation information; the emotion expression level represents the emotion expression intensity corresponding to each emotion attribute; specifically, the second interactive operation information includes, but is not limited to, an upgraded interactive voice, an upgraded interactive expression, an upgraded interactive behavior, and the like; the preset time threshold may be set according to a requirement, such as 3 seconds, 4 seconds, 5 seconds, and the like. The upgraded interactive voice is also voice which can be played through a preset voice player installed on the robot, but is an upgraded version of emotion corresponding to the interactive voice in step S30; for example, the interactive voice is a little happy, and the interactive voice can be a voice that firstly plays a micro-laughter and then plays 'i am happy'; the upgrade interactive voice may be happy, and may be that first a haha-sized sound is played, and then "i am really happy too soon! Similarly, the upgraded interactive expression also refers to an expression picture displayed by the preset expression display module, but is an upgraded version of the emotion corresponding to the interactive expression in step S30; the upgrade interactive behavior may be the robot limb movement matching the upgrade interactive voice and the upgrade interactive emotion, but it is an upgraded version of the emotion corresponding to the interactive behavior in step S30.
S70, controlling the robot to execute a second interactive operation according to the second interactive operation information; the second interactive operation comprises: displaying the upgrade interactive expression through a preset expression display module, playing the upgrade interactive voice through a preset voice player, and enabling the robot to execute the upgrade interactive behavior. In this embodiment, if the first touch operation of the interactive person on the robot is not terminated within the preset time threshold, the control module upgrades the emotion of the robot, and at this time, the second interactive operation is executed to interact with the interactive person with stronger emotion than the first interactive operation, so that the interaction of the robot is more vivid and humanized.
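A sketch, under assumed names and an assumed 3-second threshold, of how the touch duration could be tracked in real time and the interaction upgraded once the preset duration threshold is exceeded:

```python
import time

DURATION_THRESHOLD = 3.0   # hypothetical preset duration threshold, in seconds

def escalate_if_held(first_touch_start, touch_terminated, get_second_operation, execute):
    """Record the touch duration in real time; when it exceeds the threshold and the
    first touch operation has not terminated, execute the upgraded second interactive operation."""
    while not touch_terminated():
        touch_duration = time.time() - first_touch_start
        if touch_duration > DURATION_THRESHOLD:
            execute(get_second_operation())   # upgraded expression, voice and behavior
            return True
        time.sleep(0.05)                      # poll at a short interval
    return False
```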
In an embodiment, before the step S50, that is, before the step of recording the touch duration in real time if the first touch operation is not terminated, the method includes:
Detecting whether the touch signal of the interactive person received by the touch sensor at the touch position is interrupted. The touch signal refers to the signal transmitted to the control module when the touch sensor is touched by an interactive person. It is understood that, in the present invention, the touch sensor may determine whether it is touched by an interactive person by sensing a human-body current or the like; if the touch sensor does not sense a human-body current, another object may have inadvertently contacted the robot, in which case the touch sensor is considered not to be touched by an interactive person and does not transmit a touch signal to the control module. Understandably, the touch signal is confirmed to be interrupted when the touch sensor no longer continuously receives the touch signal of the interactive person; that is, once the interactive person separates from the touch sensor and stops contacting it, the touch signal is considered interrupted.
And when the touch signal of the interactive person received by the touch sensor at the touch position is not interrupted, confirming that the first touch operation is not terminated. That is, when the interactive person remains in contact with the touch sensor, the touch signal is considered continuous and not interrupted, and the first touch operation is considered not to be terminated; step S50 can then be entered to continue recording the touch duration in real time.
In an embodiment, after detecting whether the touch signal of the interactive person received by the touch sensor at the touch position is interrupted, the method further includes:
Recording the interruption duration in real time when the touch signal of the interactive person received by the touch sensor at the touch position is interrupted; the interruption duration refers to the time interval between the current time point recorded in real time and the time point at which the touch signal was interrupted. That is, once the interactive person separates from the touch sensor, the touch signal is considered interrupted, and recording of the interruption duration starts.
When the interruption time length does not reach a preset interruption time threshold, detecting whether a second touch operation consistent with the emotion attribute of the touch position exists in the interruption time length in real time; the preset interrupt time threshold may be set according to requirements.
And when detecting that a second touch operation consistent with the emotion attribute of the touch position exists within the interruption duration, confirming that the first touch operation is not terminated. After the touch signal is interrupted, if the interruption duration has not reached the preset interruption time threshold, a second touch operation consistent with the emotion attribute of the touch position corresponding to the first touch operation is monitored in real time. For example, if touching the left ear (whose emotion sub-attribute for the current interactive role is shyness) changes to touching the right ear (whose emotion sub-attribute for the current interactive role is also shyness), it is determined that the first touch operation is not terminated, and step S50 is still performed to record the touch duration in real time.
Further, after detecting in real time whether there is a second touch operation consistent with the emotion attribute of the touch position, the method further includes:
and confirming that the first touch operation is terminated when detecting that a third touch operation which is inconsistent with the emotional attribute of the touch position exists in the interruption duration. That is, after the touch signal is interrupted, if the interruption duration does not reach the preset interruption time threshold, the second touch operation, which is inconsistent with the emotion attribute of the touch position corresponding to the first touch operation, is monitored in real time; for example, the left ear is touched (the first touch operation, the emotion sub-attribute corresponding to the current interactive character and the left ear being touched is sensitive) to be changed into the brain bag top (the second touch operation, the emotion sub-attribute corresponding to the current interactive character and the brain bag top being touched is open), at this time, it is determined that the first touch operation is terminated, and then, the first interactive operation corresponding to the first touch operation is stopped to be executed synchronously, and then, referring to step S10, the control module continues to monitor whether there is another touch operation on the robot in real time.
In an embodiment, after recording the interruption duration in real time, the method further includes:
and confirming that the first touch operation is terminated when any touch operation is not detected in the interruption time length and the interruption time length reaches a preset interruption time threshold value. That is, after the touch signal is interrupted, if the interruption duration reaches the preset interruption time threshold, any touch operation is not monitored; that is, the interactive person does not continue to touch the robot, and at this time, it is determined that the first touch operation is terminated, and then the first interactive operation corresponding to the first touch operation is stopped from being executed synchronously, and further, referring to step S10, the control module continues to monitor whether there are other touch operations on the robot in real time.
Further, in step S20, after the first touch operation on the robot is detected and the preset position corresponding to the first touch operation is recorded as the touch position, if a third touch operation on the robot is detected before the robot has been controlled to execute the first interactive operation according to the first interactive operation information, the preset position corresponding to the third touch operation is recorded as a new touch position, the new touch position is substituted for the original touch position, and step S30 and the subsequent steps are performed.
Correspondingly, after the first touch operation on the robot is detected and the preset position corresponding to the first touch operation is recorded as the touch position, if a third touch operation on the robot is detected while the robot has already been controlled to execute the first interactive operation according to the first interactive operation information but that operation has not yet finished, the new touch position corresponding to the third touch operation is first determined, and a new emotion attribute is determined according to the new touch position (referring to the method for determining the emotion attribute according to the touch position in the above embodiments). When the new emotion attribute is consistent with the emotion attribute corresponding to the original first touch operation, the first interactive operation information is upgraded to the second interactive operation information.
When the new emotion attribute is inconsistent with the emotion attribute corresponding to the original first touch operation, the first interactive operation is executed to completion first, so that a sudden interruption of the action, which would make the robot's interaction abrupt, is avoided. When the first interactive operation is completed, it is determined whether the third touch operation has terminated (referring to the determination of whether the first touch operation has terminated in the above embodiments). If the third touch operation has terminated, then, referring to step S10, the control module continues to monitor in real time whether there are other touch operations on the robot (and does not respond to the third touch operation). If the third touch operation has not terminated, the preset position corresponding to the third touch operation is recorded as a new touch position, the new touch position is substituted for the original touch position, and step S30 and the subsequent steps are correspondingly performed.
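A rough sketch of the handling of a third touch operation that arrives while the first interactive operation is still running; every name here is a placeholder chosen for this example, not an identifier from the patent.

```python
def handle_third_touch(new_emotion, current_emotion, upgrade_to_second_operation,
                       wait_for_first_operation, third_touch_terminated,
                       resume_monitoring, restart_from_new_touch_position):
    """Hypothetical handling of a third touch operation detected while the first
    interactive operation is still being executed."""
    if new_emotion == current_emotion:
        upgrade_to_second_operation()        # same emotion attribute: raise the expression level
        return
    wait_for_first_operation()               # different emotion: let the first operation finish first
    if third_touch_terminated():
        resume_monitoring()                  # back to step S10: watch for other touch operations
    else:
        restart_from_new_touch_position()    # substitute the new touch position and redo step S30 onward
```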
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In an embodiment, a robot is further provided, comprising a control module and a touch sensor connected to the control module, wherein the control module is configured to execute the robot touch interaction method described above.
The functions executed by the control module correspond one-to-one to the steps of the robot touch interaction method in the above embodiments. For the specific definition of the control module, reference may be made to the above definition of the robot touch interaction method, which is not repeated here. The sub-modules in the control module can be implemented in whole or in part by software, hardware, or a combination thereof. The sub-modules can be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the sub-modules.
It will be understood by those of ordinary skill in the art that all or part of the processes of the methods of the above embodiments may be implemented by hardware related to computer readable instructions, which may be stored in a non-volatile readable storage medium or a volatile readable storage medium, and when executed, the computer readable instructions may include processes of the above embodiments of the methods. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the system is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A robot touch interaction method is characterized by comprising the following steps:
after the robot is started, detecting whether a first touch operation on the robot exists in real time, wherein the first touch operation is generated after at least one preset position of a touch sensor on the robot is touched;
when the first touch operation is detected, recording the preset position corresponding to the first touch operation as a touch position;
determining an emotional attribute according to the touch position, and acquiring first interaction operation information according to the emotional attribute;
and controlling the robot to execute a first interactive operation according to the first interactive operation information.
2. The robot touch interaction method of claim 1, wherein after controlling the robot to execute the first interactive operation according to the first interactive operation information, the method further comprises:
if the first touch operation is not terminated, recording the touch duration in real time; the touch duration refers to a time interval between a time point recorded in real time and a time point at which the existence of the first touch operation on the robot is detected;
when the touch duration exceeds a preset duration threshold, acquiring second interaction operation information corresponding to the first interaction operation information; the emotion expression level of the second interactive operation information is higher than that of the first interactive operation information;
and controlling the robot to execute second interactive operation according to the second interactive operation information.
3. The robot touch interaction method of claim 2, wherein before recording the touch duration in real time if the first touch operation is not terminated, the method comprises:
detecting whether a touch signal of an interactive person received by a touch sensor at the touch position is interrupted;
and when the touch signal of the interactive person received by the touch sensor at the touch position is not interrupted, confirming that the first touch operation is not terminated.
4. The robot touch interaction method of claim 3, wherein after detecting whether the touch signal of the interactive person received by the touch sensor at the touch position is interrupted, the method further comprises:
recording the interruption duration in real time when the touch signal of the interactive personnel received by the touch sensor at the touch position is interrupted; the interruption duration refers to the time interval between the current time point recorded in real time and the time point of interruption of the touch signal;
when the interruption time length does not reach a preset interruption time threshold, detecting whether a second touch operation consistent with the emotion attribute of the touch position exists in the interruption time length in real time;
and when detecting that a second touch operation consistent with the emotional attribute of the touch position exists in the interruption duration, confirming that the first touch operation is not terminated.
5. The robot touch interaction method of claim 4, wherein after detecting in real time whether there is a second touch operation consistent with the emotion attribute of the touch position, the method further comprises:
and confirming that the first touch operation is terminated when detecting that a third touch operation which is inconsistent with the emotional attribute of the touch position exists in the interruption duration.
6. The robot touch interaction method of claim 5, further comprising, after recording the duration of the interruption in real time:
and confirming that the first touch operation is terminated when any touch operation is not detected in the interruption time length and the interruption time length reaches a preset interruption time threshold value.
7. The robot touch interaction method of claim 1, wherein the determining an emotion attribute according to the touch position comprises:
determining whether the touch position contains only one of the preset positions;
when the touch position only comprises one preset position, setting the emotion sub-attribute corresponding to the preset position as the emotion attribute corresponding to the touch position;
and when the touch position comprises at least two preset positions, acquiring at least two emotion sub-attributes respectively corresponding to each preset position in the touch position, and determining the emotion attribute corresponding to the touch position according to the at least two emotion sub-attributes.
8. The robot touch interaction method of claim 7, wherein the determining the emotion attribute corresponding to the touch position according to the at least two emotion sub-attributes comprises:
classifying the different emotion sub-attributes corresponding to the touch position to obtain different emotion classification groups, and recording the same attribute number of the same emotion sub-attributes in each emotion classification group;
acquiring emotion severity coefficients respectively associated with the emotion sub-attributes corresponding to each emotion classification group;
and recording the product of the same attribute quantity corresponding to the same emotion classification group and the emotion severity coefficient as an emotion value, and recording the emotion sub-attribute corresponding to the emotion classification group with the highest emotion value as the emotion attribute corresponding to the touch position.
9. The robot touch interaction method of claim 1, further comprising, after the robot is turned on:
detecting whether a plurality of preset roles corresponding to the robot are marked or not;
when all the preset roles are not marked, prompting an interactive person to select the preset roles, marking the preset roles selected by the interactive person and recording the preset roles as the interactive roles;
recording the marked preset role as an interactive role when the marked preset role exists;
the acquiring of the first interaction operation information according to the emotion attribute further includes:
and acquiring first interaction operation information according to the emotion attributes and the interaction roles.
10. A robot comprising a control module and a touch sensor connected to the control module, the control module being configured to perform the robot touch interaction method of any of claims 1 to 9.
CN202011601350.5A 2020-12-29 2020-12-29 Robot touch interaction method and robot Pending CN112698747A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011601350.5A CN112698747A (en) 2020-12-29 2020-12-29 Robot touch interaction method and robot
PCT/CN2021/132315 WO2022142866A1 (en) 2020-12-29 2021-11-23 Touch-based interaction method for robot, and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011601350.5A CN112698747A (en) 2020-12-29 2020-12-29 Robot touch interaction method and robot

Publications (1)

Publication Number Publication Date
CN112698747A true CN112698747A (en) 2021-04-23

Family

ID=75512118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011601350.5A Pending CN112698747A (en) 2020-12-29 2020-12-29 Robot touch interaction method and robot

Country Status (2)

Country Link
CN (1) CN112698747A (en)
WO (1) WO2022142866A1 (en)



Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604204B (en) * 2009-07-09 2011-01-05 北京科技大学 Distributed cognitive technology for intelligent emotional robot
CN105345818B (en) * 2015-11-04 2018-02-09 深圳好未来智能科技有限公司 Band is in a bad mood and the 3D video interactives robot of expression module
KR20190116190A (en) * 2019-09-23 2019-10-14 엘지전자 주식회사 Robot
CN112698747A (en) * 2020-12-29 2021-04-23 深圳市普渡科技有限公司 Robot touch interaction method and robot

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130094058A (en) * 2012-02-15 2013-08-23 주식회사 케이티 Communication system, apparatus and computer-readable storage medium
KR20180115551A (en) * 2017-04-13 2018-10-23 젤릭스 주식회사 A Robot capable of emotional expression and operation method thereof
CN107239822A (en) * 2017-06-09 2017-10-10 上海思依暄机器人科技股份有限公司 Information interaction method, system and robot
CN107463291A (en) * 2017-07-28 2017-12-12 上海木爷机器人技术有限公司 The robot with personification performance based on touch
CN107976919A (en) * 2017-07-28 2018-05-01 北京物灵智能科技有限公司 A kind of Study of Intelligent Robot Control method, system and electronic equipment
CN110362661A (en) * 2018-04-08 2019-10-22 微软技术许可有限责任公司 The chat robots technology for seeing position with three
CN110471534A (en) * 2019-08-23 2019-11-19 靖江市人民医院 Information processing method and tele-medicine management system based on Emotion identification

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022142866A1 (en) * 2020-12-29 2022-07-07 深圳市普渡科技有限公司 Touch-based interaction method for robot, and robot
CN117215302A (en) * 2023-07-24 2023-12-12 北京小米机器人技术有限公司 Intelligent device control method and device, intelligent device and storage medium
CN117215302B (en) * 2023-07-24 2024-06-21 北京小米机器人技术有限公司 Intelligent device control method and device, intelligent device and storage medium

Also Published As

Publication number Publication date
WO2022142866A1 (en) 2022-07-07

Similar Documents

Publication Publication Date Title
US12045923B2 (en) Emoji recording and sending
US11618170B2 (en) Control of social robot based on prior character portrayal
KR102661019B1 (en) Electronic device providing image including 3d avatar in which motion of face is reflected by using 3d avatar corresponding to face and method for operating thefeof
US9304621B1 (en) Communication via pressure input
US9807559B2 (en) Leveraging user signals for improved interactions with digital personal assistant
US10860345B2 (en) System for user sentiment tracking
CN109521927B (en) Robot interaction method and equipment
KR20160010449A (en) Method, user terminal and server for information exchange communications
CN111240482B (en) Special effect display method and device
CN105389001A (en) Method for providing visual reality service and apparatus for the same
KR20170029398A (en) Method and electronic apparatus for providing application
CN112698747A (en) Robot touch interaction method and robot
KR20200101014A (en) Electronic device supporting recommendation and download of avatar
US20200019242A1 (en) Digital personal expression via wearable device
CN108563327B (en) Augmented reality method, device, storage medium and electronic equipment
KR20200092207A (en) Electronic device and method for providing graphic object corresponding to emotion information thereof
CN110822641A (en) Air conditioner, control method and device thereof and readable storage medium
KR20200067593A (en) Electronic device for generating augmented reality emoji and method thereof
CN114712862A (en) Virtual pet interaction method, electronic device and computer-readable storage medium
KR20180082777A (en) Communion robot system for senior citizen
TW202138971A (en) Interaction method and apparatus, interaction system, electronic device, and storage medium
CN111383346B (en) Interactive method and system based on intelligent voice, intelligent terminal and storage medium
US11550528B2 (en) Electronic device and method for controlling operation of accessory-mountable robot
CN111506184A (en) Avatar presenting method and electronic equipment
JP7414735B2 (en) Method for controlling multiple robot effectors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination